Article summary
Software product development is difficult, expensive, and risky. How should a company decide between an in-house team and outsourcing? If outsourcing, how should it choose a vendor? I’ve developed a very simple model to help answer this question. My model produces a “suitability” factor that measures how suitable a team is for a particular project. I see suitability as a rough inverse of risk: the more suitable a team, the less risk in the project. Teams with low suitability increase project risk.
While I’ve used simple math to create a model that produces a suitability metric, I don’t expect it to produce three significant digits of accuracy. Instead, I think it’s more useful for shedding light on the dimensions of the problem and the relative importance of those dimensions. I believe it also has something to say about hiring, but that’s another story.
Dimensions
I think the suitability of a team for a project can be measured in these four dimensions:
- Domain expertise
- Technology mastery
- Process and practices
- Skills of individuals
For software product development, these dimensions include:
Domain
Knowledge of the domain. Examples from some Atomic customers include: color science for a color measurement company, bond pricing and markets for a hedge fund, file formats for automotive controllers for an automotive diagnostics company, medical procedure coding and insurance for health care companies. Each of these domains is rich and complex, and people sometimes dedicate years to mastering them.
Technology
Mastery of technology: languages, frameworks, operating systems, web versus desktop versus embedded versus enterprise. Each of these technology domains has its own special considerations and details to master. Developers, and even companies, often specialize in and dedicate themselves to just one of these. There’s definitely an efficiency to specializing and using the same technology on every project the team tackles.
Process & Practices
A consistent, repeatable process, effective practices, and the corresponding underlying beliefs encapsulate previous lessons learned and codify a means for tackling new projects. Back in the early days of agile, when the perceived common enemy was waterfall, someone smart observed that any process is better than no process, and the low-hanging fruit was the many teams and companies who had no process whatsoever. For software product development, process needs to span a broad range of activities, including business modeling, product design, project management, development, testing, and deployment. A team that has mastered critical practices such as user research, persona development, wireframing, release planning, weekly iterations, test-driven development, and exploratory testing can apply them across any domain and technology it works in.
Skills
Individual skills. Process can’t eliminate the criticality of individual skills. If it could, heavyweight software engineering methodologies that strove to eliminate the human element of software craftsmanship would have been more effective. The individual skills that matter are not tied to technology—think “programming” versus “C++”, “encapsulation” versus “C# class”, “testing” versus “WinRunner”. Skillful, smart people know how to learn new domains quickly and partner effectively with domain experts.
Weights
The critical question for selecting a software product development team is how relatively important each of the four dimensions is. For the purpose of the suitability model, the weight of each dimension is denoted:
- Wd = weight of domain
- Wt = weight of technology
- Wp = weight of process/practices
- Ws = weight of skills
If we believe that a project is so complex that the developers must all be masters of the domain, we assign a high value to Wd. This is the case in university labs where highly educated, incredibly specialized researchers express their domain knowledge in software. In all cases, the sum of the weights has to be one:
Wd + Wt + Wp + Ws = 1
We could use any consistent scale for ranking a team in each of the dimensions. For example, we could assign a team a score of 1-4 in each dimension, or we could use a 100-point scale. As difficult as these things are to measure, we should probably be realistic and use a fairly crude scale with relatively few choices.
Suitability
Overall team suitability is measured on the same scale as the individual dimensions, by applying the vector of weights:
Suitability = Wd * D + Wt * T + Wp * P + Ws * S
where D, T, P, and S represent the rating of a team in each of our dimensions.
Simple Model
If we devise a really simple rating scale and measure each of the four dimensions on a three-point scale:
- mastery: 2
- average: 1
- none: 0
then suitability will be somewhere between 0 (“not suited”) and 2 (“perfectly suited”).
What about weights? My belief is that project success or failure is 80% determined by P (process) and S (skills). Let’s assume they’re each 40%. That leaves only 20% of the suitability to be determined by domain expertise and technology mastery. Let’s assume that D (domain) and T (technology) are equal at 10% each.
Wd = 0.1
Wt = 0.1
Wp = 0.4
Ws = 0.4
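The whole model is just a weighted sum, which takes only a few lines to express. Here is a minimal Python sketch using the weights above (the dictionary keys are my own naming, not anything from a real library):

```python
# Weights from the article: process and skills dominate; weights must sum to 1.
WEIGHTS = {"domain": 0.1, "technology": 0.1, "process": 0.4, "skills": 0.4}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9

def suitability(ratings):
    """Weighted sum of a team's ratings (each 0, 1, or 2).

    The result lands on the same 0-2 scale as the individual dimensions.
    """
    return sum(WEIGHTS[dim] * score for dim, score in ratings.items())
```

A one-line usage example: `suitability({"domain": 2, "technology": 2, "process": 0, "skills": 1})` scores the status-quo internal team described below.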
Interesting cases
Applying the simple model to some common situations yields interesting results. I’ve included an exemplar of each team to illustrate the idea.
Team A. Team with no process, few principles or practices, heavy domain and technology expertise, and average skilled people. Exemplar: status-quo, internal team, average company.
D = 2
T = 2
P = 0
S = 1
Suitability = 0.1 * 2 + 0.1 * 2 + 0.4 * 0 + 0.4 * 1 = 0.8
Team B. External team with no prior domain expertise. Significant technology experience, effective process and skilled people. Exemplar: Pivotal Labs or Atomic Object working on a Rails app in a novel domain.
D = 0
T = 2
P = 2
S = 2
Suitability = 0.1 * 0 + 0.1 * 2 + 0.4 * 2 + 0.4 * 2 = 1.8
Team C. External team with NO prior domain or technology experience, effective process and skilled people. Exemplar: Atomic Object working for a new client in a new technology.
D = 0
T = 0
P = 2
S = 2
Suitability = 0.1 * 0 + 0.1 * 0 + 0.4 * 2 + 0.4 * 2 = 1.6
Team D. Dream team: domain experts, technology gurus, effective process and skilled people.
D = 2
T = 2
P = 2
S = 2
Suitability = 0.1 * 2 + 0.1 * 2 + 0.4 * 2 + 0.4 * 2 = 2.0
Team E. Nightmare team: no domain experience, no technology expertise, no process, unskilled people.
D = 0
T = 0
P = 0
S = 0
Suitability = 0.1 * 0 + 0.1 * 0 + 0.4 * 0 + 0.4 * 0 = 0.0
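The five calculations above are easy to reproduce in one loop. A quick Python check (team letters and ratings taken straight from the cases above):

```python
# Weights and 0-2 ratings from the article.
WEIGHTS = {"D": 0.1, "T": 0.1, "P": 0.4, "S": 0.4}

TEAMS = {
    "A": {"D": 2, "T": 2, "P": 0, "S": 1},  # status-quo internal team
    "B": {"D": 0, "T": 2, "P": 2, "S": 2},  # external team, new domain
    "C": {"D": 0, "T": 0, "P": 2, "S": 2},  # external team, new domain and technology
    "D": {"D": 2, "T": 2, "P": 2, "S": 2},  # dream team
    "E": {"D": 0, "T": 0, "P": 0, "S": 0},  # nightmare team
}

scores = {name: sum(WEIGHTS[k] * r[k] for k in WEIGHTS) for name, r in TEAMS.items()}
for name, score in scores.items():
    print(f"Team {name}: {score:.1f}")
```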
Observations
The suitability of the teams in the common situations I considered above is summarized in the table below.
| Team Characteristics | Suitability |
|---|---|
| Dream team. | 2.0 |
| Strong technology, process and skills. New to domain. | 1.8 |
| No domain or technology experience. Strong process and skills. | 1.6 |
| Typical internal. Domain and technology experts. No process, average skills. | 0.8 |
| Nightmare team. | 0.0 |
Assuming you don’t own, or can’t find, a dream team, then your next best bet is an external vendor that specializes in the technology you’re using (suitability 1.8). The many happy Ruby on Rails customers of Pivotal Labs or Atomic Object exemplify this choice. If you can’t afford, or can’t find, a vendor that specializes in your technology, then your next best bet is a vendor that has a strong process and skilled people.
Perhaps the most interesting result from my simple model is that the suitability of an external team with no relevant domain or technology experience is twice that of a status-quo internal team (who are presumably masters of the domain and technology).
In a nutshell, people, process, and practices trump domain and technology expertise.
Team F. A combination of two teams: one with all the domain knowledge, and the other simply coding to flowed-down requirements, following flowed-down processes. Exemplar: the IBM “master programmer” model, masquerading nowadays as the “offshoring” model.
In this model, the primary company believes that it can create a successful program by having a few experts write requirements to be implemented by inexperienced (thus inexpensive) coders. This belief is promulgated in the CMMi/ISO-900x notion that if we just write good requirements and have good processes, then any coder can successfully implement the program.
In other words, domain knowledge, technology, process, and skill are independent variables; thus they can be arbitrarily apportioned to different subsets of the team without impacting the ability of the team as a whole.
In this scenario, the team is really two teams, one team made up of domain experts that write requirements but no code and the other made up of coders that simply follow the processes to implement the requirements.
Master programmer team:
D = 2
T = 1
P = 0
S = 0
Since the master programmers are not doing the actual programming, their process and skill levels are immaterial. Their technology knowledge is poorly utilized since they are writing requirements, not actually using the software technology.
SuitabilityM = 0.1 * 2 + 0.1 * 1 + 0.4 * 0 + 0.4 * 0 = 0.3
Coder team:
D = 0
T = 1
P = 2
S = 0
Since the coders are inexperienced, they have little or no skill and no domain knowledge. We’ll assume some familiarity with the technology.
SuitabilityC = 0.1 * 0 + 0.1 * 1 + 0.4 * 2 + 0.4 * 0 = 0.9
Now the question becomes how to combine the scores of the two teams. In the pessimistic case, the combined score would be the product of the two teams’ scores. This would happen if communication between the “master programmers” and the “coders” is very difficult or poorly managed.
Suitability = SuitabilityM * SuitabilityC = 0.3 * 0.9 = 0.27
Ouch. OK, even with perfect communication, I very much doubt that the combined team can ever exceed the average of the two suitability factors.
Suitability = (SuitabilityM + SuitabilityC) / 2 = (0.3 + 0.9) / 2 = 0.6
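Both combination rules are easy to check. A short Python sketch of the arithmetic (the product rule is a raw multiplication of the two 0-2 scores, as above; it is not normalized in any way):

```python
# Suitability of each sub-team, using the article's weights and ratings.
suit_master = 0.1 * 2 + 0.1 * 1 + 0.4 * 0 + 0.4 * 0   # master-programmer team: 0.3
suit_coder  = 0.1 * 0 + 0.1 * 1 + 0.4 * 2 + 0.4 * 0   # coder team: 0.9

pessimistic = suit_master * suit_coder        # communication breaks down
optimistic  = (suit_master + suit_coder) / 2  # best case: simple average

print(round(pessimistic, 2), round(optimistic, 2))
```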
Hmmm, maybe that is why the master programmer model is a failure. It also doesn’t bode well for a naive offshoring strategy.