Saturday, December 29, 2007

Exploratory Testing challenged - Part I


Here is a conversation I had recently with one of my managers. We were discussing the merits and demerits of “Exploratory Testing” (ET) as a testing approach. This manager is a die-hard fan of “scripted testing” and apparently swears by the “Quality/Factory” school of testing.

Me: We should propose ET for this
Manager: Why? What are the benefits?

Me: ET will extend test coverage beyond traditional scripted testing; you will be able to discover bugs that are not “catchable” by scripted tests.
Manager: Why would our test scripts fail to find those bugs? I trust that our specifications are exhaustive and that our scripts are thoroughly reviewed and signed off by the business experts. The test scripts should be able to find nearly all the bugs that I consider important enough to fix.

Me: Our scripts are based on specifications, which are one narrow, fallible source of information about “intended software behavior”. Since specifications are written in English, there is room for interpretation and misinterpretation. Since our specifications are fallible, so are our scripts. There are human limits to understanding and interpreting specifications objectively and to designing test cases that cover the entire test space. So there is a good possibility that scripts will not find all the bugs that could potentially be discovered.

Manager: What are other benefits of ET?
Me: ET improves test coverage and provides enhanced bug-finding capability over scripted testing – especially by countering the “pesticide paradox” associated with scripts.

Manager: How? What is this pesticide paradox?
Me: Just as pests in the soil acquire immunity to a specific pesticide after repeated applications and stop dying or showing up, software bugs become “immune” to repeated application of the same test cases. Over time, developers become aware of which test cases are executed and test *specifically* to make sure each new build of the application is good just enough to pass those tests. As a result, there is a “false” sense of stability and quality in the application.

Manager: So … test cases wear out … why is that?
Me: Test cases wear out because they have no built-in mechanism to adapt to a changing product environment. Test scripts cannot think, infer, improvise or get frustrated the way intelligent human testers do. Hence test scripts cannot keep finding new bugs the way a human tester can.
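
A minimal sketch of the pesticide paradox in code, assuming a hypothetical `parse_amount` function under test: the fixed script checks the same inputs every cycle and, once it passes, will never reveal anything new, while the exploratory-style probe varies its inputs each session and can stumble onto a boundary bug the script was never written to catch.

```python
import random

def parse_amount(text):
    """Hypothetical function under test: parses a currency amount."""
    # Hidden bug: silently truncates anything beyond two decimal places.
    return round(float(text), 2)

def scripted_test():
    # The fixed script: identical inputs every cycle. Once the build
    # passes, it passes forever -- the "pesticide" has stopped working.
    assert parse_amount("10.50") == 10.50
    assert parse_amount("0.99") == 0.99

def exploratory_probe(trials=100):
    # Varied inputs each session: new territory, new chances to find bugs.
    for _ in range(trials):
        whole = random.randint(0, 1000)
        cents = random.randint(0, 9999)  # deliberately beyond 2 decimals
        text = f"{whole}.{cents:04d}"
        result = parse_amount(text)
        # A human tester would question the silent truncation; the fixed
        # script never asked this question at all.
        if f"{result:.4f}" != f"{float(text):.4f}":
            print(f"Suspicious: parse_amount({text!r}) -> {result}")

scripted_test()
exploratory_probe()
```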

Manager: What else …?
Me: Quoting James Bach – “The scripted approach to testing attempts to mechanize the test process by taking test ideas out of a test designer's head and putting them on paper. There's a lot of value in that way of testing. But exploratory testers take the view that writing down test scripts and following them tends to disrupt the intellectual processes that make testers able to find important problems quickly.”

Manager: What are other attributes of ET?
Me: Cem Kaner lists these attributes of exploratory testing:
- Interactivity
- Concurrence of cognition and execution
- Creativity
- Drive towards fast results
- De-emphasis of archived testing materials

Manager: I have heard another related term, “ad hoc testing”. Is this similar to ET?
Me: Yes and no … yes, in that ad hoc testing is a well-known predecessor of ET. Cem Kaner coined the term “exploratory testing” around the early 80s, in “Testing Computer Software”, to distinguish ET from ad hoc testing, since there was a lot of confusion around the kind of “impromptu” testing that does not rely on predefined scripts. Ad hoc testing normally refers to a process of improvised, impromptu bug searching; by definition, anyone can do ad hoc testing. “Exploratory testing”, in contrast, refers to a sophisticated, thoughtful approach to ad hoc testing.

Manager: What is specific about ET vis-à-vis scripted testing?
Me: ET is a more investigative approach, whereas scripted testing is more “validation” or “conformance” oriented. In scripted testing, the tests, their sequence, the data, the variations and so on are pre-defined, whereas in ET, test design, test execution and learning all happen more or less at the same time.

Manager: I heard that ET requires “experience” and “domain knowledge”. Can an average tester do good ET?
Me: I am not sure how you define “average tester”, “experience” and “domain knowledge”. I believe ET requires skills such as “questioning”, “modeling” and “critical thinking”, among others. Domain knowledge certainly helps in ET, but I do not consider it mandatory.

Manager: Fair enough … What types of bugs can ET discover?
Me: It depends upon what kind of bugs you want to discover. ET can be performed in controlled, small, time-boxed sessions with specific charters to explore a particular feature of the application. ET can be configured to cater to specific investigative missions. You could even use a few ET sessions to develop software product documentation or to analyse and isolate performance test results.
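
A minimal sketch of how such a time-boxed, chartered session might be recorded, loosely in the spirit of session-based test management; the field names, the durations and the example charter are illustrative assumptions, not a standard format.

```python
from dataclasses import dataclass, field

@dataclass
class ETSession:
    """One time-boxed exploratory testing session (illustrative only)."""
    charter: str               # the mission: what to explore, and why
    duration_minutes: int      # the time box, e.g. 60-120 minutes
    tester: str
    notes: list = field(default_factory=list)  # observations as they occur
    bugs: list = field(default_factory=list)   # problems worth reporting

    def log(self, observation: str, is_bug: bool = False) -> None:
        self.notes.append(observation)
        if is_bug:
            self.bugs.append(observation)

# Example: one session with a specific investigative mission.
session = ETSession(
    charter="Explore the invoice import feature with malformed CSV files",
    duration_minutes=90,
    tester="Shrini",
)
session.log("Importer accepts files with mismatched column counts", is_bug=True)
session.log("Error messages expose internal table names")
print(f"{session.charter}: {len(session.bugs)} bug(s) logged in "
      f"{session.duration_minutes} minutes")
```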

Manager: I notice that all along you have argued like a testing “purist”. I am more of a business owner; I need to relate every dollar I spend to the return or benefit it gives.

Me: No … I would not call myself a purist, at least not here. I bring plenty of business considerations into my testing recommendations. ET provides a way of optimizing the testing effort through time-boxed sessions with charters. Depending upon the nature of the information the stakeholders are looking for, ET sessions can be planned quite precisely.

Manager: Yes …

Me: Let us say you have 5000 scripts for an application and they pass all the time. Would you be worried?
Manager: Ummmm … It depends upon the context, but mostly I would not worry about it. I would interpret that as a sign of enhanced maturity in those specific application areas. It is quite possible that there are no bugs I should worry about – the passing scripts are a “confirmation” of that fact.

Me: What if this trend continues and the next 5 cycles of testing also do not produce any bugs? Would you be worried then?
Manager: No … not at all. In fact, I would cut the set of scripts being executed to, say, half – 2500 – as the application has become stable. That is an indication for me to “cut down” the testing effort; I could possibly look at automation as well.

Me: Here is a twist: what if ALL your scripts are passing but you are still seeing bugs (detected either by other means or by customers)? Would you not doubt your test cases?
Manager: It depends upon the kinds of bugs I see … If I were to doubt something or someone at all, I would doubt the test results, the testers' integrity and project management in general. The test scripts are less likely to be at “fault”. That would be a process issue – we would need to tighten the process.

Me: OK … What corrective action would you take then? What steps would you take?
Manager: I would immediately order a thorough root cause analysis of the defects to identify what is causing them in the first place, and tighten the development, configuration and deployment processes. I would strictly enforce the improved processes in testing and mandate that the testers execute the scripts correctly and meticulously and report the relevant results accurately.

Me: What if you still find bugs outside your scripts?
Manager: That would be a “hypothetical question” – not likely to happen. In any case, my focus would be to improve the testing process and strengthen the test scripts. And if you are still finding bugs, they would probably be of the “obscure” type – I might not have to bother about them …

Manager: Good … I am still not convinced that ET gives bang for the buck. As someone who is interested in the predictability and repeatability of testing, I want a test process that can scale.
Me: Ummmmm … OK … what is a testing process? Is it something that “actually happens” or something that is “intended”? Are repeatability and predictability all you care about?
Manager: You are too much … there is a limit to asking questions … I don't think this discussion is leading anywhere good … Let us talk about it some other time. [walks out of the room]

I am continuing my discussion with this manager and will post the views and the continued discussion in Part 2 …

A very happy new year to all …

Shrini

12 comments:

Anonymous said...

Shrini,

I'm sending this post to some of my clients that are advocates of scripted testing.

This post is a great source on the benefits of ET, explained in very understandable language.

Great work.

Can't wait to read the second part of it.

Shrini Kulkarni said...

Hi Jose,

Thanks for your kind words. I am happy that you liked the post.

Part 2 deals mainly with resourcing challenges and the "teaching" part of ET, as that seems to be the main problem for the manager in implementing ET. He feels that ET requires experience and deep domain knowledge. In our company, the majority of our test force is in the initial years of their software and testing experience. Hence he feels that training these folks would be a real problem.

I am composing that post …

Shrini

Cem Kaner said...

Shrini:

I rarely try to persuade a test manager to switch to exploratory testing. Instead, I recommend to them that they include some exploratory testing in their regular testing process. Over time, a thoughtful manager will find a context-appropriate balance between scripted and exploratory tests.

I used to recommend a standard formula:

In every build, even the last build:

- spend 25% of your time trying new things. Some of these will find bugs. Some will give you good ideas for reusable tests.

- spend 25% of your time implementing new tests as reusable tests.

- spend 50% of your time running old tests (including bug regressions).

The problem with this formulation is that it focuses too much on test execution and too little on learning. Learning is a very big part of exploration, and that doesn't come just from running tests and seeing results. It comes from finding information about the market, the platform, the customer expectations, the risks for similar products, etc. This learning guides initial test design (the 25% for new things) and the design of the reusable tests. It also provides important guidance for writing bug reports, helping you decide, and explain, whether a bug is serious and what customer reactions might be.

The advantage of the simple 25/25/50 formulation is that it is very easy for many managers to understand and try.

The disadvantage -- that it hides the non-test-execution research that competent explorers must do -- is a serious problem. If you are the consultant, then as your client gains experience with exploratory work, you can raise higher level cognitive issues and the value of preparatory research gradually, as examples present themselves in the situation of your client.

-- Cem

Anonymous said...

Great Post, Shrini.
I like this conversation. I often get into a similar one – where we have a traditional scripted approach (both manual and automated), but the ambiguity always comes in terms of the deliverable and the quality measurement. I think this was fairly depicted in your post, especially as something to show the project sponsors and program management in terms of a tangible time box. Some ask me if this is equivalent to a bug bash (where a bunch of people, including engineering and customers, get together in a time box and log all the bugs they find). I say yes and no; yes because it involves real customers trying out their real business scenarios and engineering testing from their technical and testing-skills perspective.

Looking forward to the 2nd part of the conversation.

-Ram
http://ramsblog.wordpress.com

Anonymous said...

Shrini;
Great post. I don't have much experience in testing. The organization in which I work follows scripted testing, and there is no QA manager or experienced tester here. How do we deal with such situations? There is always pressure on the testing team (called QA in our organization) about time and about spending more time on testing. I have a number of questions running through my mind but am not able to express them properly.

Push said...

Hi Shrini,
For the last few days, I have been struggling with the concept of the "Pesticide Paradox". Though most people talk about it, nobody gives any statistics to corroborate the claim. Could you point me to any such document? What I expect is something like:
1. Number of bugs found per month with traditional automated testing.
2. Number of bugs found after using ET.

Cem Kaner said...

The first meeting of LAWST involved an extensive discussion of this. One presentation of that discussion was in a paper of mine at http://www.kaner.com/lawst1.htm

In most testing projects, we have some repetitive testing (smoke tests, scripted or automated regression tests, etc.), some research done for the purpose of creating new to-be-repeated tests, and some exploratory testing. LAWST's discussion focused on automated GUI-level regression tests. The practitioners at the meeting had extensive experience (typically 10 or more years) and reported that their automated suites found 5-20% of the bugs they found in their projects. There have been plenty of experience reports at practitioner conferences with similar descriptions of very heavy investment in scripted (including automated) testing with most of the bugs coming from exploration.

I am not aware of controlled experiments in this area. Most of the data I have seen as a consultant is confidential and cannot be cited in the kind of detail necessary for proper publication.

Shrini Kulkarni said...

Thanks, Cem, for dropping in and sharing your valuable views.

I like your 25-25-50 approach. That is really a good way to work with managers.

One thing we should be doing is trying not to sell ET as an approach – instead, incrementally showing its effectiveness by "adding" ET to the existing process.

Shrini

Anonymous said...

Hi Shrini,
It's great work. I like the way you presented the information.

SrinivasRadaram said...

Shrini,

As far as I know, we do "ET" knowingly or unknowingly at some point in our testing cycle. Let me try to explain what I have in mind.

While executing test cases, a tester will think out of the box and go beyond the script, trying to break the system. That, I think, is exploring the system, which is one of the best ways to learn a product.

Let me know if my message makes any sense.

-Srinivas.

Shrini Kulkarni said...

@Srinivas ..

>>>As far as I know we do "ET" knowingly or unknowingly at some part of our testing cycle.

You are right … this is how all of us in the ET world see ET: as part of every tester's work. But the emphasis is on "doing it better" – becoming conscious about it.

James Bach and Cem Kaner describe good testing as a continuum (spectrum), with pure scripted (test-case-driven) testing on one end and pure exploratory testing on the other. Depending upon the nature of the application and the testing mission, a sapient tester mixes both approaches to get the maximum out of testing …

Shrini

Unknown said...

Hi there. I found this to be a very interesting article. Your manager sounds like he wants to learn more, but then doesn't get the answer he/she is looking for and makes the passive-aggressive exit, stage left. :)

Might I suggest that exploratory testing is something that isn't functionally driven, but process driven. Test scenarios and/or scripts are usually modelled on functionality and based on SRS (System Requirements Specs) and FRS (Functional Requirements Specs), whereas process is based on BRS (Business Requirements Specs) or domain expertise (if such people are available in your test team).

Some of the best defects I've encountered were submitted by end users, who constantly prove to me that processing, for them, follows the "path of least resistance" – the path that gives them the fewest errors on the way to a successful transaction. Looking forward to reading the second part …

Chris