Capers Jones’
Software Risk Master (SRM)

Estimate all software development
project metrics BEFORE project begins

Get Started Now!

Twelve Examples of Software Risk Master (SRM) Sizing and Estimation



SRM is a powerful tool that analyzes many variable factors in order to produce estimates. It is easier to see how SRM works if the major factors are shown individually rather than being combined with all of the others.

These 12 examples all use function point metrics from the International Function Point Users Group (IFPUG). This is the 30th anniversary of the IFPUG organization, and function points have now become the most widely used software metric.

However, several very useful metrics are older than IFPUG. Function points were first invented by Allan Albrecht and colleagues at IBM White Plains circa 1973. Other IBM metrics used in SRM include "defect potentials", "defect removal efficiency (DRE)", and "language levels". All of these metrics were invented at IBM in the 1970s. Today in 2017 all are widely used by technology companies and leading software groups.

These metrics are all effective and accurate. Unfortunately, many other metrics such as lines of code, cost per defect, and story points are ineffective and inaccurate, and actually distort reality, making real progress invisible. Lines of code (LOC) penalize high-level languages, while cost per defect penalizes quality. Story points are not standardized and vary by hundreds of percent.

Each of the first 11 examples in this report covers a specific topic, and each is a single one-page Excel spreadsheet. The 12th example shows changes in many factors simultaneously, as occurs in real life. The topics included are the following:

Example 1: Team Experience (experts, average, novice)
Expert developers tend to produce smaller and more concise programs. They also code faster. Even more important, experts have fewer bugs in their code and they find and remove almost all of them before release. Novices tend to write bulky code slowly, and their code tends to have distressing numbers of bugs. Click here for Example 1

Example 2: Software Quality Control
Software quality control is a weak link in software engineering. Due to poor metrics choices and poor measurement practices, very few people have reliable data on effective quality control techniques. Example 2 shows accurate quantified data on the effectiveness of defect prevention, pre-test defect removal such as inspections and static analysis, and the effectiveness of common forms of testing. The goal of effective software quality control is a defect potential below 3.00 per function point combined with defect removal efficiency (DRE) above 99%. The current U.S. average is a defect potential of about 4.25 bugs per function point and only 92.50% DRE. Click here for Example 2
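The arithmetic behind these two metrics is simple enough to sketch. The function below is our own illustration, not part of SRM: it multiplies size by defect potential to get total expected defects, then applies DRE to estimate the latent bugs delivered to users.

```python
def delivered_defects(size_fp, defect_potential_per_fp, dre):
    """Estimate latent defects remaining at release.

    size_fp: application size in IFPUG function points
    defect_potential_per_fp: total expected defects per function point
    dre: defect removal efficiency as a fraction (0.0 to 1.0)
    """
    total_defects = size_fp * defect_potential_per_fp
    return total_defects * (1.0 - dre)

# U.S. average cited above: 4.25 defects per FP and 92.50% DRE
average = delivered_defects(1000, 4.25, 0.9250)   # about 319 latent bugs
# Stated goal: below 3.00 per FP with DRE above 99%
goal = delivered_defects(1000, 3.00, 0.99)        # about 30 latent bugs
```

For a 1,000 function point application, the gap between the U.S. average and the stated goal is roughly a factor of ten in delivered defects.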

Example 3: National Work Hours
Because software engineering is a labor-intensive occupation, the number of hours worked per month is very important. Even more important is the number of hours worked as unpaid overtime. SRM has quantitative data on the average number of work hours for 52 countries. However, users of SRM can provide their own local work hour patterns: the default values are provided for informational purposes and can be changed to match local data. The range of work hours per month runs from a low of about 115 to a high of about 200. Click here for Example 3

Example 4: Industry Norms for Work Hours
There are over 200 U.S. industries that produce software applications. As with countries, work hour patterns vary widely by industry and also by company. The hardest-working industry sector is start-up technology companies; the sector with the lowest work hours is state government. Here too, users of SRM can provide their own local data on work hour patterns. Click here for Example 4

Example 5: Software Methodologies
The author’s most recent book, A Quantitative Comparison of 60 Software Development Methodologies, was published in July 2017 by CRC Press. The example here illustrates the differences in schedules, costs, and quality for three of these methodologies: Agile, Waterfall, and the combination of Team Software Process (TSP) and Personal Software Process (PSP), both developed by Watts Humphrey of the Software Engineering Institute (SEI). TSP and PSP are normally used together. Click here for Example 5

Example 6: Programming Languages
For reasons that are mainly sociological, the software industry has over 3,000 programming languages in 2017. SRM itself supports sizing and estimating for a total of 89 languages, including combinations such as Java and HTML or Ruby and SQL. This example shows three languages: COBOL, a low-level programming language; Java, a mid-level programming language; and Objective-C, a high-level language used by Apple for all of its software. The concept of “language levels” was first quantified by IBM circa 1973. The original definition was the number of assembly language statements needed to produce one statement in a target language. For example, Java is a “level 6” language because it takes about 6 assembly statements to create the functionality of 1 Java statement. After IBM developed function point metrics, the definition of language levels was expanded to source code statements per function point. Click here for Example 6
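A commonly cited rule of thumb connects the two definitions of language level: basic assembly (level 1) requires roughly 320 source statements per function point, and higher-level languages need proportionally fewer. The sketch below uses that rule as an assumption; SRM's own language tables may use different constants.

```python
# Assumed constant: roughly 320 assembly statements per function point
# for a level-1 language (a commonly cited rule of thumb, not SRM data).
ASM_STATEMENTS_PER_FP = 320

def statements_per_fp(language_level):
    """Source statements per function point for a language of the
    given level (level = assembly statements per target statement)."""
    return ASM_STATEMENTS_PER_FP / language_level

def estimated_loc(size_fp, language_level):
    """Rough source code size for an application of size_fp
    function points written in a language of the given level."""
    return size_fp * statements_per_fp(language_level)

# Java is cited above as a level-6 language:
java_statements_per_fp = statements_per_fp(6)   # about 53 statements per FP
java_loc = estimated_loc(1000, 6)               # about 53,333 statements
```

This also illustrates why lines of code penalize high-level languages: the same 1,000 function points of delivered functionality shrink to far fewer statements as language level rises, making the more productive team look worse on a LOC basis.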



Example 7: Sizing with Software Risk Master (SRM)
SRM has a unique automatic sizing feature that can size any application in about 2 minutes or less. SRM sizing is based on pattern matching. This example shows the results in terms of effort, staffing, schedules, costs, and quality for four size plateaus: 100, 1,000, 10,000, and 100,000 function points. Software risks go up with application size as do schedules and paperwork volumes. Quality declines with application size unless very effective quality control steps are used that include static analysis and inspections. Click here for Example 7
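The way schedules stretch across these size plateaus can be approximated with one of Capers Jones' published rules of thumb: raising application size in function points to roughly the 0.4 power yields the development schedule in calendar months. The exponent here is that published approximation, not SRM's internal model, which is more detailed.

```python
def approx_schedule_months(size_fp, exponent=0.4):
    """Rule-of-thumb schedule: size in function points raised to
    roughly the 0.4 power approximates calendar months. The exponent
    is an assumed rule of thumb, not SRM's internal algorithm."""
    return size_fp ** exponent

# The four size plateaus used in Example 7:
for size in (100, 1_000, 10_000, 100_000):
    print(f"{size:>7} FP -> {approx_schedule_months(size):6.1f} months")
```

Note the nonlinearity: a thousandfold increase in size, from 100 to 100,000 function points, stretches the approximate schedule from about 6 months to about 100, which is why risks and paperwork volumes climb so steeply with application size.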

Example 8: Software Paperwork
Readers may be surprised to find that for military and defense software applications the costs of producing paper documents are over 55% of the total cost of software production. This is due mainly to Department of Defense (DoD) standards, which generate enormous quantities of paper documents, many of which have little value. This example shows software document sizes and costs for defense software using waterfall, commercial software using iterative development, and an internal application using Agile. Agile has greatly reduced software paperwork volumes and costs; for Agile projects paperwork is only about 11% of total cost. The largest volume of paper is usually in the form of bug reports. Click here for Example 8

Example 9: Software Team Composition and Occupation Groups
A study funded by AT&T and carried out by the author’s team was charged with finding the numbers and kinds of occupation groups that worked on software in major organizations such as IBM, AT&T, the Navy, and about a dozen others. We found a total of 126 occupations. Software Risk Master (SRM) shows the number of employees that work in 20 different occupations. Small applications of 100 function points may only have two or three occupations, but large systems above 10,000 function points have at least 20 occupation groups. Because the various occupations work on different materials at different rates of speed, it is useful to have this data for estimates of software schedules and costs. Click here for Example 9

Example 10: Three years of Maintenance, Enhancement, and Support
Software Risk Master does not stop with development estimates. It also predicts three years of software maintenance (bug repairs), customer support, and enhancements. For large applications above 10,000 function points these maintenance tasks often cost more than original development. Maintenance costs are driven by numbers of customers and numbers of latent bugs released in the software. As readers might expect, software with thousands of users and thousands of bugs will be very expensive to maintain and customer support costs will be high as well. Click here for Example 10

Example 11: Problem, Code, and Data Complexity
Complexity is one of the most subtle and subjective factors in all of software estimating. SRM uses three different forms of complexity (the SRM inputs provide examples of each):
  • 1) problem complexity;
  • 2) code complexity;
  • 3) data complexity.

Problem and data complexity are subjective because the same problem is not equally complex to all people. An expert in cyber-security might regard building a new firewall application as a low-complexity problem, but to a novice in cyber-security the same firewall would be very high in complexity because they have no prior experience. Complexity is thus close to being a reciprocal of experience: experts will rate problem and data complexity lower than novices will.

Code complexity, by contrast, has an exact quantification. The widely used "cyclomatic complexity" metric calculates code complexity from graph theory, using a control flow graph of the modules in an application. The formula is "graph edges minus nodes plus 2." Code with no branches has a cyclomatic complexity of 1; as branches increase, cyclomatic complexity increases. Modules with cyclomatic complexity above 10 tend to be buggy and difficult to test.

For problem complexity (the algorithms and research needed) and data complexity (files and data relationships), only subjective metrics exist. It is best to experiment with the SRM complexity settings by running SRM against completed projects where complexity is understood and team members are available to explain the complexity values. Click here for Example 11

Example 12: Simultaneous Changes in Experience, Methods, Languages, and Quality Control
Thus far the examples have been limited to a single key factor. In real life all of these factors can change simultaneously. Table 12 shows the wide range between best-case and worst-case results when the key factors all change at once: CMMI levels, programming languages, team experience, methodologies, tools, reuse, and work hours.
Click here for Example 12

The overall purpose of these 12 examples is to illustrate specific topics individually even though Software Risk Master (SRM) combines all of them when performing estimates.

  • We support up to 60 development methodologies
  • Our quality predictions are more complete than any other estimation tool
  • We size earlier than any other method in the world
  • We support 89 programming languages and add new ones every year