I got my undergraduate degree in computer science from the University of Louisiana at Lafayette (UL Lafayette). All through my childhood I assumed I would go to Louisiana State University (LSU), following in the footsteps of my brother and sister. What turned the tide for me was that back in 1977, UL Lafayette had a $6.5M interactive Honeywell Multics system, whereas students at LSU programmed with punch cards. The focus of my studies at UL Lafayette was the use of various programming languages and top-down structured programming. A programmer's job was to write code that was easy for other programmers to understand, used as little memory as possible, and executed with a minimum number of CPU cycles.
My high-school decision to attend UL Lafayette had far-reaching consequences, as I went to work for Honeywell after college graduation. This employment choice relocated me from Louisiana to Phoenix, where I got my master's degree in computer science from Arizona State University (ASU). The focus of my studies at ASU was software engineering, with topics including tracing program code back to original requirements, designing around data objects instead of merely decomposing functionality, coding using a variety of styles, and testing in systematic ways that ensured all of the lines of code were covered. In short, if my UL Lafayette experience was about the HOW, the ASU experience was about the process and measurement of the HOW.
So it is with this background that I enthusiastically approached my reading of Codermetrics: Analytics for Improving Software Teams by Jonathan Alexander. I decided to write this up by drawing little diagrams in the tradition of Jessica Hagy, who publishes her Indexed blog with images that fit on index cards.
Codermetrics has nine chapters:
Concepts

- Introduction
  "Software products are typically not produced by an individual but by a team, and even in the case where one coder works alone, that coder must fill the various roles of a larger team." [page 5]
- Measuring What Coders Do
  - "The first purpose of metrics is simply to help you track and understand what has happened." [page 11]
  - "The second purpose of metrics is to help people communicate about what has happened." [page 12]
  - "The third purpose of metrics is to help people focus on what they need to do to improve." [page 12]
- The Right Data
  "...Software teams are more likely to succeed if they have:
  - Centralization of higher complexity tasks among a few coders.
  - Some coders working across many product areas.
  - Coders who feel challenged and want to prove themselves." [page 65]

Metrics

- Skill Metrics
  "When you look at [skill] metrics for a set of coders taken over periods of time, you begin to see patterns about the individuals and the team makeup. Identifying those patterns can help you understand how the team is functioning, where the team is strong, and where it might be weak." [page 69]
- Response Metrics
  "The Response Metrics show you how well and in what ways each project succeeded or failed. When examined side-by-side with a team's Skill Metrics, you will be able to analyze which skills or combination of skills correlate with positive or negative results." [page 101]
- Value Metrics
  "Using the Skill Metrics and the Response Metrics, Value Metrics help identify the specific type of value that each coder brings to the team and highlight how the skills add up to specific strengths, and how you can measure coder contributions in terms of team achievements." [page 133]

Processes

- Metrics in Use
  "Likely candidates for a focus group are team managers and team leaders. You might also want to include one or more coders from the team. The best participants will be those who are experienced and respected by others on the team." [page 160]
- Building Software Teams
  "Codermetrics give you the ability to examine teams — both current and past — in new and different ways. By capturing a set of metrics, you can create metrics-based 'profiles' of teams. If you gather profiles of multiple teams and determine the relative level of success of each team, then you can begin to compare profiles to identify the key attributes of success." [page 204]
- Conclusion
  "Metrics, however, are not a solution in and of themselves. ...Improvement for each coder is not going to happen because you start putting numbers in spreadsheets. But if you've decided you want to get better and that you are willing to do the hard work to improve, metrics are an extremely useful tool to help you choose the right path to get there." [page 230]
As I read Chapters 4 through 6, I thought to myself, "The problem with this book is that there are just too many metrics." Take any two data items, put them together, and voilà, you've got a metric. It's like all of those statistics in baseball: "He's the best right-handed hitter in the league when facing a left-handed pitcher in a non-home game in the third inning when the team is down by one run and there are two outs." That's nice to know, but a lot to process. There are 35 metrics computed from 35 data items in Codermetrics.
When I was a Departmental Quality Consultant at GTE working on telephone switches (special-purpose computers that process phone calls), we collected software review metrics. We tracked items like the number of defects found, the size of the item being reviewed, and the time spent before and during the review. Even with metrics built from all of the combinations of these data items, it was hard to ascertain what was really going on. If a large number of defects was found (defects per size), does that mean they have all been found, or are there so many that even more remain to be found? If only a few defects were found (defects per time), is the item under review really that good, or were the reviewers simply not thorough (time per size)?
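To make that ambiguity concrete, here is a minimal sketch of how such ratio metrics might be derived from raw review data. This is not GTE's actual tooling; the record fields and the sample values are my own assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class ReviewRecord:
    """Raw data items captured for one software review (hypothetical fields)."""
    defects_found: int
    size_loc: int          # size of the item under review, in lines of code
    prep_hours: float      # time spent before the review
    meeting_hours: float   # time spent during the review

def review_ratios(r: ReviewRecord) -> dict:
    """Combine the raw data items into the ratio metrics discussed above."""
    total_hours = r.prep_hours + r.meeting_hours
    return {
        "defects_per_kloc": 1000.0 * r.defects_found / r.size_loc,
        "defects_per_hour": r.defects_found / total_hours,
        "hours_per_kloc": 1000.0 * total_hours / r.size_loc,
    }

# Example: a 2,000-line module reviewed for 5 hours in total, 12 defects found.
print(review_ratios(ReviewRecord(defects_found=12, size_loc=2000,
                                 prep_hours=3.0, meeting_hours=2.0)))
```

The arithmetic is the easy part; the hard part, as noted above, is deciding whether a high defects-per-KLOC value means the review was thorough or the code was poor.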
When I got to Chapter 7, I saw that Alexander suggests using 4 or 5 metrics for the projects and 4 or 5 for the team members. This seems more reasonable. When I was in college, software development followed a waterfall model broken into five (often months-long) phases:
- ANALYSIS
- DESIGN
- CODE
- TEST
- RELEASE
where each phase was completed in its entirety before proceeding to the next; for example, all of the design work was done before any coding began. The thinking was to avoid throwing away code that was no longer appropriate once the rest of the design had been completed. The strategy was "Let's design everything before coding anything." This approach proved unworkable because no one could ever account for every minute detail during a phase, and phases had to be revisited. Agile methods are used today instead, where each of the phases is quickly exercised for a subset of the functionality to be delivered in iterations called sprints. So it seems only natural to collect metrics during a sprint and review them at the end of the (one- or two-week) sprint. As patterns in the metrics emerge, corrective action can be taken in subsequent sprints.
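As a rough illustration of what that end-of-sprint review could look like, here is a small sketch that accumulates a few metrics per sprint and flags any that have worsened two sprints in a row. The metric names, sample values, and the trend rule are my own assumptions, not prescriptions from the book.

```python
from collections import defaultdict

# sprint_history maps a metric name to its value in each completed sprint,
# in chronological order (hypothetical sample data for three sprints).
sprint_history = defaultdict(list)
samples = [
    {"tasks_completed": 14, "customer_issues": 3, "build_breaks": 1},
    {"tasks_completed": 12, "customer_issues": 5, "build_breaks": 2},
    {"tasks_completed": 10, "customer_issues": 6, "build_breaks": 4},
]
for sprint in samples:
    for name, value in sprint.items():
        sprint_history[name].append(value)

# "Lower is better" for these metrics; flag any that rose in each of the
# last two sprints so the team can take corrective action next sprint.
watch_list = {"customer_issues", "build_breaks"}
for name in watch_list:
    values = sprint_history[name]
    if len(values) >= 3 and values[-1] > values[-2] > values[-3]:
        print(f"Trend to address next sprint: {name} rose from "
              f"{values[-3]} to {values[-1]} over the last three sprints")
```

A real implementation would pull these values from an issue tracker or build system, but the pattern-spotting step at the end of a sprint can be this simple.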
Autodesk has a mix of development project types. Some products are mature, like AutoCAD, where it is important to design, review, and test every change, no matter how small, to ensure that no regressions are introduced into the product. In contrast, we have projects like Autodesk Labs technology previews, where what is being demonstrated is a proof of concept, so more latitude is possible. Despite these differences, one thing remains common: programmers don't like mistakes. They pride themselves on well-designed, clean, efficient, and reusable code. Any Autodesk customer who has encountered a problem using our software may find that hard to believe, but I assure you that the existence of defects in our products demonstrates just how challenging software development is. Given the variety of projects at Autodesk, the Codermetrics that seem most applicable include:
| Metric Type | Specific Metric |
|---|---|
| Skill | Points = Sum (Complexity for all completed tasks) |
| Skill | Range = Number of areas that a coder works on |
| Response | Wins = Number of active users added to Subscription |
| Response | Penalties = Sum (Urgency for each customer issue) |
| Value | Efficiency = 1.0 - Individual (Turnovers + Errors) / Team (Turnovers + Errors) |
| Value | Teamwork = Assists + Saves + Range - Turnovers |
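As a minimal sketch of how a few of these could be computed, assuming the per-coder and per-team tallies are already available: the formulas follow the table above, but the function names, parameters, and sample values are mine, not the book's.

```python
def points(task_complexities):
    """Points = Sum (Complexity for all completed tasks)."""
    return sum(task_complexities)

def efficiency(individual_turnovers, individual_errors,
               team_turnovers, team_errors):
    """Efficiency = 1.0 - Individual (Turnovers + Errors) / Team (Turnovers + Errors)."""
    return 1.0 - (individual_turnovers + individual_errors) / (team_turnovers + team_errors)

def teamwork(assists, saves, range_, turnovers):
    """Teamwork = Assists + Saves + Range - Turnovers."""
    return assists + saves + range_ - turnovers

# Example: a coder who completed tasks of complexity 2, 3, and 5, accounted
# for 3 of the team's 20 turnovers+errors, and made 4 assists and 2 saves
# while working across 3 product areas with 2 turnovers.
print(points([2, 3, 5]))          # -> 10
print(efficiency(2, 1, 12, 8))    # -> 0.85
print(teamwork(4, 2, 3, 2))       # -> 7
```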
Measurement is alive in the lab.