Software Effort Estimation Using the Scott-Knott Test

Abstract
Software cost estimation (SCE) for large-scale and complex software systems is one of the most vital management activities, since its accuracy largely determines the success or failure of the whole development process. We propose a statistical framework based on a multiple comparisons algorithm that ranks several cost estimation models, identifies those with significant differences in accuracy, and clusters them into non-overlapping groups. In existing work the Scott-Knott test was used to rank and cluster software estimation models; this means-grouping procedure is an effective way to perform multiple comparisons without ambiguity. This study proposes a modification to the partitioning and means-grouping steps of that procedure in order to obtain unambiguous results among treatments, organized into more homogeneous groups. In the proposed methodology, treatments that did not enter the initial group are pooled for a new analysis, which yields a better distribution of groups. The proposed methodology proves effective at identifying an elite group of models for recommendation.
Index Terms: software cost estimation; software metrics; software effort estimation; statistical methods.
Introduction
The effort required to complete a software project is predicted by comparing prediction models over past historical data sets. The framework presented here is based on a multiple comparisons algorithm that ranks several cost estimation models. Software engineering cost models and estimation techniques are used for budgeting, trade-off and risk analysis, and project planning and control, and they support software improvement investment analysis. Broadening the search for relevant studies encourages further research on the estimation methods commonly used by the software industry and raises awareness of how the properties of a data set affect the results when estimation methods are evaluated. Accuracy is measured by the Magnitude of Relative Error (MRE) and the Magnitude of Relative Error to the Estimate (MER); both are defined below, followed by a sketch of the Scott-Knott partitioning step. Accurate cost estimation requires knowledge of the size specifications, source code, and manuals; of the rate at which requirements are likely to change during development; and of the probable number of bugs likely to be encountered. The capability of the development team, the salary overhead incurred if the team grows, and the tools in use must also be taken into account.
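
For project i with actual effort y_i and estimated effort \hat{y}_i, these accuracy measures are conventionally defined as follows (the mean of MRE over n projects, MMRE, is the summary statistic most often reported):

    \mathrm{MRE}_i = \frac{\lvert y_i - \hat{y}_i \rvert}{y_i}, \qquad
    \mathrm{MER}_i = \frac{\lvert y_i - \hat{y}_i \rvert}{\hat{y}_i}, \qquad
    \mathrm{MMRE} = \frac{1}{n} \sum_{i=1}^{n} \mathrm{MRE}_i

MRE normalizes the absolute residual by the actual effort, whereas MER normalizes it by the estimate itself, so the two measures can rank the same models differently.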
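
To make the clustering procedure concrete, the following is a minimal Python sketch of the Scott-Knott partitioning applied to per-model mean error scores (for example, each model's mean MRE). It is an illustration under simplifying assumptions, not the exact implementation evaluated in this study: the variance is the maximum-likelihood estimate computed from the treatment means alone, within-treatment replication is ignored, and the function name scott_knott is hypothetical.

import numpy as np
from scipy import stats

def scott_knott(means, alpha=0.05):
    """Recursively split sorted treatment means into homogeneous,
    non-overlapping groups (sketch of the Scott-Knott procedure)."""
    y_sorted = np.sort(np.asarray(means, dtype=float))
    groups = []

    def partition(y):
        k = len(y)
        if k < 2:
            groups.append(y.tolist())
            return
        grand = y.mean()
        # Choose the split of the ordered means that maximizes the
        # between-group sum of squares B0.
        best_b0, best_i = -1.0, 1
        for i in range(1, k):
            b0 = (i * (y[:i].mean() - grand) ** 2
                  + (k - i) * (y[i:].mean() - grand) ** 2)
            if b0 > best_b0:
                best_b0, best_i = b0, i
        # Maximum-likelihood variance estimate over the k means
        # (simplification: no within-treatment replication is used).
        s2 = ((y - grand) ** 2).sum() / k
        if s2 == 0.0:
            groups.append(y.tolist())
            return
        # Likelihood-ratio statistic; under the null hypothesis it is
        # approximately chi-square with k / (pi - 2) degrees of freedom.
        lam = (np.pi / (2.0 * (np.pi - 2.0))) * best_b0 / s2
        nu0 = k / (np.pi - 2.0)
        if lam > stats.chi2.ppf(1.0 - alpha, nu0):
            partition(y[:best_i])   # split is significant: analyze
            partition(y[best_i:])   # each half independently
        else:
            groups.append(y.tolist())

    partition(y_sorted)
    return groups

The recursive calls on each half correspond to the re-analysis step described in the abstract: treatments that fall outside the first homogeneous group are analyzed again, so the output is an ordered list of non-overlapping clusters of models.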
