Wednesday, November 19, 2008

Can Software ever get IT right?

Matt Heusser wrote this beautiful piece about software development practices – quoting another famous blogger, Joel Spolsky.

"... which has a programming method in which programmers code stories based on notes written by designers that are based on requirements documents created by analysts that are assessments of what the customer actually wants. It's practically designed to get everything wrong, to insure that, no matter how ignorant the analysts and architects are on an issue, they'll find someone who knows even less to write the actual code ..."

It is interesting that with so many "loose" ends and human elements (thinking, questioning, modeling and analyzing), many still fancy the chances of "zero defect software", compare software to manufacturing, and glorify processes to fix the problems in software that is INHERENTLY designed to GO WRONG. If you look at the chain of analyst -> designer -> developer -> tester -> customer, each one works with less, or a totally different, set of information than all the others.

Notice what Matt has to say – to make sure that they get the final software wrong, they will find someone who probably knows the least to write the actual code!

How can this (software) EVER GO RIGHT? Can it?

Read the entire blog post here

Shrini

Perils of Quantification – what harm can metrics do to you?

This is a hurriedly written post (just to make sure that I do not lose the thought – a "fieldstone" in Jerry Weinberg's terminology) – I plan to use this as a placeholder for expanding ideas on this topic… Please bear with me for a while with this "being cooked" idea.

I stumbled on something that Michael Bolton said about metrics – in response to a Google Groups discussion thread. Michael mentions: "What you want to beware of, in particular, is turning rich information (stories about bugs, problems, risks, value) into impoverished data (numbers)."

I think that is a great way (rather, an interesting way) to think about "software metrics". To me, software metrics are a great way to "squeeze", "heavily simplify" and "horribly trivialize" rich information about bugs, test ideas, problems, risks and value in software. While they provide a simplistic view of rich and often qualitative/subjective data/information, there is a huge danger of oversimplification and information loss.

Many people argue with me that "quantification" – associating something that we try to understand with numbers – is essential for science and engineering. Some even quote "you cannot improve anything that you cannot measure". I feel that the "urge" to measure, the notion of being quantitative/objective, is simply overemphasized. Let us consider the perils (ill effects) of quantification. Some entities lend themselves to quantification – say, counting: counting people, counting vehicles on the road, counting fruits on a tree, the marks a student scores in an exam. Many entities, especially those related to humans, are difficult to quantify – they tend to lose lots of information when quantification is attempted. This is very true of software.

Consider the following quantified information – what do you think? What do you lose when you quantify…

  1. One tsunami
  2. 1 billion Indians
  3. 1.3 billion people in the world below the poverty line of $1/day
  4. 8 million people affected by AIDS
  5. Software quality of Six Sigma
  6. In 2003 there were 6,328,000 car accidents in the US.

    Finally

    6,300 bugs in Windows 2000…

Notice that each of these numbers carries rich information about loss of life, health of people, quality of life and so on. By squeezing rich information into a number, we lose the information. Numbers can be manipulated and argued any way you want; they hide information; you can be cheated by numbers. Numbers are single-dimensional, whereas the information they represent is often multidimensional.

"As proven by modern accounting scandals, you can make the numbers say whatever you want" – Mike Kelly

To be continued …

Shrini

Monday, November 17, 2008

A conversation on Automation ROI Part 1 …

When automation is required, either by contract or due to technical constraints, ROI computation may not be helpful. Intangible factors may constitute the bulk of the return, and thus arithmetic computations won't indicate the real value of automation. Fortunately in these situations we often aren't faced with questions about the value of automation because it must be employed regardless.

-Doug Hoffman


Here goes a conversation with a colleague of mine who wanted me to help him with some ROI calculation for an automation project.

Colleague: Do you have a formula or framework for calculating ROI from automation?

Me: I might … first let me understand what you are looking for.


Colleague: It is simple, man… Here is a client who is looking to invest in automation, and she is interested in knowing the ROI so that she can take it to her boss with a business case.

Me: That is good. What are the elements of ROI you are interested in knowing now?


Colleague: What do you mean?

Me: To me, ROI has three elements – a notion of investment (effort, money, time, etc. – all of these can be interdependent in some way), a notion of "return" (call them benefits – some tangible, meaning quantified in terms of numbers, some intangible – soft benefits, qualitative measures), and finally a timeline, usually in terms of direct measures like calendar months or work/effort months, or indirect measures like the number of releases, number of test cycles, number of platforms covered, etc. Which one is of interest to you…?


Colleague: All three … of course!!!

Me: Then you have some hard work to do: gathering information, data and expectations – some historical and some current.


Colleague: Well... I thought it was easy to find the ROI… I was told that there are many freely available ROI calculators, especially ones catering to automation… are you aware of them?

Me: Yes, I have seen a few of them… not so impressed… I have two problems with most (or all) of these calculators: a) they use a highly simplified model of testing that is totally out of context – meaning you can apply it to any project, any technology, any tool, and you will have some numbers coming out… that is too good to believe; b) they equate automation to human testing, literally 1:1… In my opinion, automation is a different kind of testing – remove the human being (to the extent possible) and introduce the machine (automation script), then think (dream, pray and wish) that the program does EXACTLY what a human would do.


Colleague: This is too confusing…. Let me try to explain my problem in a different way. The customer is investing x dollars in automation; she wants to know when she will be able to recover the investment and when she will start reaping benefits (possibly without investing anything incrementally). How can we help her?

Me: OK… that is fair… Here we come again to the same structure – x dollars (investment), when she will recover the investment (timelines), and when/what benefits she can expect, possibly without incremental investment (returns). Let us attack them one by one… How does your client want to recover the investment?


Colleague: That is a silly question… she wants to save manual testing effort by automating everything, or whatever is technically feasible. So, cycle time reduction is what she is looking at.

Me: So, the questions are – How much cycle time reduction will there be (assuming that it is possible and worth pursuing)? By when will that reduction be realized? What are the incremental benefits till that point in time? Right? Anything else I am missing?


Colleague: Good … I think now you have understood my problem … what next?

Me: Are all cycles of the same size? What happens to the application under test across these cycles (meaning, does it undergo change or not)? What is the current (manual) test cycle time? What all happens in a cycle? What things are under the tester's control and what are not? When do you repeat a cycle (assuming that you repeat cycles)?


Colleague: Oh!! My God … my head is spinning … I will have to get all the information and data... Are you sure these are required for ROI calculation? Anything else?

Me: Yes, at least in my opinion, to give a reasonable picture of ROI where R = cycle time reduction – I would need these. There are some more things that I would require to complete the equation … but let us get started with this ….

BTW, what makes your client believe that a machine can replicate what humans do? There are things machines are good at and there are things humans are good at. No matter what you do… I think, in the context of test automation, a machine cannot do what a sapient human tester can do (unless humans, by design, behave as though they are brain-dead and emotionless).

Colleague: No…. No… not again… Do not try to confuse me… I will get you the details that you asked for… then let us fit an ROI formula. Please put the tester in you to sleep till then…

Me: (smiled) OK … Please bear in mind that "you cannot compare even one cycle of automated execution to same cycle of manual execution".
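(For readers following along: the "recover the investment through cycle time reduction" model my colleague has in mind usually boils down to a naive break-even calculation like the Python sketch below. All numbers and names are hypothetical, and the sketch deliberately encodes the 1:1 "machine replaces human" assumption I keep objecting to – it illustrates the model, it does not endorse it.)

    # The naive break-even model behind most "automation ROI calculators".
    # It assumes an automated cycle is a 1:1 replacement for a manual one.
    build_cost = 50_000.0        # one-time investment to build the automation ($)
    manual_cycle_cost = 4_000.0  # cost of one manual regression cycle ($)
    auto_cycle_cost = 500.0      # cost of running and maintaining one automated cycle ($)

    saving_per_cycle = manual_cycle_cost - auto_cycle_cost
    cycles_to_break_even = build_cost / saving_per_cycle
    print(round(cycles_to_break_even, 1))  # ~14.3 cycles before any "return" appears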

While my colleague is out to get the data that I asked for… what do you think? What have been your experiences of calculating ROI for automation? How did you deal with the "improbable" yet simplistic model of treating automated execution as equivalent to what a human tester does – and hence talking about cycle time reduction, etc.? What other returns (benefits) of automation have been successful with your clients? How did you quantify them?

I work in the IT services industry. Day in and day out, I hear people asking me such things. While I attempt to explain to them the hazards of the simplistic model of testing and automation they use in ROI, the need for a business case to push automation (which requires numbers and quantified measures) makes me look for innovative ways to articulate what I want to say, but in a way that "business" people can agree on…

To be continued ….

Shrini


Extras:

Here are 3 useful and well-written papers on "Automation ROI":

  1. ROI of Test Automation by Mike Kelly (2004)
  2. Cost Benefit Analysis by Doug Hoffman (1999)
  3. Bang for the Buck Test Automation by Elisabeth Hendrickson (2001)


Closing thought:

"… every time I hear "let's take a look at the ROI," or "it will increase your ROI," or "all we need to do is use the ROI calculator" some little part of me shrivels up and dies. It drives me insane. I refuse to believe that for products as complex and involved as automation and performance testing services (where you need to understand infrastructure, application architecture, business and use cases, deployment models, culture, risk tolerance, and the other aspects of the design, development, and testing taking place for a project) that you can so easily capture the ROI. If it were that easy you wouldn't be talking to me about it."

- Mike Kelly.

Thursday, November 13, 2008

2 Notorious “E”’s of Testing

Efficiency is doing things right; effectiveness is doing the right things – Peter Drucker

Let me take a dig at these two notorious and much-abused terms – testing effectiveness and efficiency. Raj Kamal has a post that discusses this aspect here.

To me, effectiveness has a notion of "degree of serving the purpose". For example, we can say "this measure" taken to curb inflation has been effective (meaning it appears to have served its purpose), or this medicine is effective in slowing down the disease. So, when talking about effectiveness with respect to testing, we should map the results to the mission of testing and say whether the techniques and approaches that you have deployed served their purpose or not. Remember, as testers we serve our stakeholders. Different stakeholders have different expectations from testing. Testers form their mission to suit those expectations.

So, in order to be effective in testing, we need to understand the possible stakeholders, their expectations, and which ones to focus on. That would lead to the testing mission. Any testing that happens should serve the mission. Along the way, testers employ different approaches, techniques, tools and methods. A few of these would be "effective" in serving the mission – and hence serve the stakeholders with the information they are interested in knowing – and a few may not. Therefore, if you are thinking about articulating effectiveness in testing, think about stakeholders first, then their expectations, then the testing missions, then approaches, tools and techniques, and finally link all of them to the results that you produce. I am not sure a simplistic metric – an equation that counts "reified" entities like bugs and does some math (like taking the cube root of the sum of all bugs and so on) – can capture any of this. Bugs are not real things; they are the emotions and opinions of some frustrated "someone" who matters in your project context. Can you quantify frustration?

Also remember, since there could be multiple stakeholders (and hence multiple testing missions), your testing (approach, tools and techniques) cannot be effective for all missions. Accept this fact and you don't have to feel guilty about it. This becomes very visible when there are contradicting expectations and hence contradicting missions. Learn to negotiate with the stakeholders, try to iron out conflicts, and state in your test strategy which missions you are focusing on and why.

Now, let me come to the term "efficiency". You might have heard people saying "this vehicle is fuel efficient", "this equipment is energy efficient", "this worker is efficient". To me, the term efficiency is related to the notion of the degree of conversion of deployed input (human and machine capital) into desired outcomes. Let us take the example of an internal combustion engine to put the definition of efficiency into perspective: the ratio of useful work to energy expended. As with effectiveness, identifying the best way to convert useful energy for testing into useful results that our stakeholders value is never a simple task. There is no one right way to do things either. To serve multiple stakeholders and testing missions, we as testers need to employ a diverse set of techniques, tools and methods. Hence there can be multiple ways to define "efficiency" with respect to software testing.
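For illustration only, here is one possible "testing efficiency" ratio, sketched in Python with entirely hypothetical numbers – it is just one of the many definitions the paragraph above warns about, and like any single ratio it hides most of what matters:

    # One of many possible "testing efficiency" ratios: useful output per unit input.
    useful_findings = 37   # e.g., problems found that some stakeholder cared about
    tester_hours = 120.0   # effort (the "energy") expended

    efficiency = useful_findings / tester_hours
    print(round(efficiency, 2))  # 0.31 "useful findings per hour" -- easy to game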

While Peter Drucker provides a simple framework for thinking about these terms, I would say it is too simplistic and rudimentary a model to apply to software testing. We have neither one "right" way of doing things nor a specific set of "right things to do". There are many right ways to do things and there are many right things to do. Who defines the notion of "right"? Our stakeholders. Therefore, it is very important to align our work as testers with what stakeholders expect. The first step towards this is to identify our stakeholders. Have you done that for your project?

I would like to highlight another thing here. The notions of efficiency and effectiveness, as applied to software testing, are multidimensional and cannot be reduced to a simple set of numbers. Avoid the temptation to simplify these parameters into simple metrics defined in terms of entities like bug counts, test case counts, etc. Think broadly and deeply, and consider multiple stakeholders and testing missions.

In short, effectiveness deals with "fitness of approach/tools/techniques for serving the mission" and efficiency deals with "conversion rate of deployed capital (humans and machines) into intended output". In other words, effectiveness is about "how powerful your way of doing things is" and efficiency is about "how well you do things". Both of these parameters are important indicators of testing work and are multidimensional in nature.

Shrini