Monday, July 24, 2006

Testing - A success Story?

Found this article recently: "Why software testing doesn't become a success story" - interesting!

Have a read below:

Software testing essentially reveals the mistakes made by a human mind when building a piece of code. However, in some cases software testing can become a never-ending story: the testing team completes the first testing cycle, a number of defects are found, the development team fixes the defects, testing is carried out again, some more defects crop up, and so on.

If project development goes this way, the project manager's tension builds up and the estimates go haywire. The release date gets extended by days, weeks and sometimes by a month or two. However, this kind of situation can be avoided if a few things are taken into consideration:

* The functionality of the application to be developed should be clear and well documented, with the support of a good change management process
* The development phase of the project should be completed
* Test cases must cover the entire functionality of the application and must be executed in a controlled environment
* There should be a robust process for determining the severity and priority of defects
* The testing phase should be analysed: the number of defects found against the number of test cases executed and, if the application is in a second stage of testing, the number of defects that reoccurred and the time taken to fix defects according to their severity
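The analysis described in the last point can be sketched as a couple of simple ratios. This is my own illustrative sketch, not from the article; the function names and example figures are hypothetical.

```python
# A minimal sketch of test-phase analysis: defects found against test
# cases executed, plus how many defects reoccur between testing cycles.

def defect_find_rate(defects_found, test_cases_executed):
    """Defects found per test case executed in one testing cycle."""
    if test_cases_executed == 0:
        raise ValueError("no test cases executed")
    return defects_found / test_cases_executed

def reoccurrence_rate(cycle1_defect_ids, cycle2_defect_ids):
    """Fraction of cycle-1 defects that reappear in cycle 2."""
    reopened = set(cycle1_defect_ids) & set(cycle2_defect_ids)
    return len(reopened) / len(cycle1_defect_ids)

# Example: 18 defects over 120 test cases in cycle 1
print(round(defect_find_rate(18, 120), 3))  # 0.15
# Two of the four cycle-1 defects show up again in cycle 2
print(reoccurrence_rate({"D1", "D2", "D3", "D4"}, {"D2", "D4", "D9"}))  # 0.5
```

Tracked cycle over cycle, a falling find rate and a falling reoccurrence rate are the signals that the never-ending story is actually converging.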

So, what do you think: how can software testing become a success story in a large number of applications?

Monday, July 10, 2006

What kind of fish is a tester?

Imagine if everybody were like you…

Would life be the better or the worse for that?

Would testing be better or worse?

This is an article from the July edition of the EuroSTAR Newsletter - STARTester, from Anne Mette Jonassen Hass. You can view the complete newsletter by clicking here, and don't forget to subscribe to receive future issues.

I must admit that I think if everybody were like me, (testing) life would perhaps be easier, but also dull, predictable and lacking important aspects. Finally, after more than 50 years of life, I have realized that other people – and hence testers – are different from me! Other testers see the world differently and have different values. What a relief!

I’m not fast. The fact that all testers are not alike has been known since the ancient Greek philosopher Galenus defined four temperaments (some people like systems and order):

• Phlegmatic
• Sanguine
• Choleric
• Melancholic

Galenus also said: “We all have our share of each – in different mixtures.” Since then, others have studied personalities, including Freud, Jung, and Myers and Briggs. Based on Jung’s work, Myers-Briggs defines sixteen personality types composed from four dimensions. The dimensions are:

• How do you get energy:
Extraversion (E) / Introversion (I)

• How do you get information and knowledge:
Sensing (S) / Intuition (N)

• How do you decide:
Thinking (T) / Feeling (F)

• How do you act:
Judging (J) / Perceptive (P)

The Greek view is quite simple, the Myers-Briggs view rather complex, and they are both concerned with the individual person as just that: an individual. In addition to this, Dr. M. Belbin has defined nine team roles, where a team role is “a tendency to behave, contribute and interrelate with others in a particular way."

If you go around thinking that all people are basically like you, you are terribly mistaken. And that mistake can lead to misunderstandings and tensions in test teams, and may even cause test teams to break down. When working in test teams, awareness and understanding of people’s differences are essential.

I once worked on a team with many frictions and a fair amount of mistrust. One of the team members had heard of the Belbin roles and we all had a test. This was a true revelation to us all. The two team members with the most friction between them were very different types. They had both been completely at a loss as to why the other acted as he did. Having understood that neither had meant any harm, but that it was simply a question of being very different personalities, they worked much better together in the team.

The nine Belbin roles are:

Action-oriented roles

• Shaper
• Implementer
• Completer/Finisher

People-oriented roles

• Co-ordinator
• Team-worker
• Resource Investigator

Cerebral roles

• Plant
• Monitor/Evaluator
• Specialist

Each of the roles has some valuable contributions to the progress of the team in which it acts. They also have some weaknesses that may have an adverse effect on the team.

Two examples:

A Shaper is challenging, dynamic, and thrives on pressure.
He or she has the drive and courage to overcome obstacles.
The weaknesses are that a Shaper is prone to provocation, and may offend people's feelings.

A Team-worker is co-operative, mild, perceptive and diplomatic.
He or she listens, builds, and averts friction.
The weakness is that a Team-worker can be indecisive in crunch situations.

Everybody is a mixture of several team roles, usually with one or two being dominant. An analysis of one’s Belbin team role will give a team role profile showing the weight of each role in one’s personality.

Every person on a team should know his or her own type and those of the others. This is done by filling in fairly simple questionnaires – not by going into deep psychological searches of people’s minds. The aim is to provide a basic understanding of one’s own and the other team members’ ways of interacting and primary values. It is not about finding out why people are the way they are, nor about trying to change anything.

It is the test manager’s responsibility to get the test team to work for a specific testing task. And it is the higher management’s responsibility
to choose a test manager with the right traits, skills, and capabilities to be a test manager.

There are two aspects to a team: the people and the roles assigned to the people.
Each individual person in a team has his or her personal team role profile and a
number of skills and capabilities. Each role has certain requirements toward the
person or the people who are going to fill it.

On top of that the people in the team need to be able to work together and not have too many personality conflicts. It can be quite a puzzle to form a synthesis of all this. But the idea is to choose people to match the requirements of the roles, and for them to fit together as a team.

The ideal situation is of course when the test manager or test leader can analyze the roles he or she has to find people for at the beginning of a test project, and then hire exactly the right people. Advertisements can then be tailored to the needs.
The applicants can be tested, both for their skills and capabilities and for personal traits.
The team can then be formed by the most suitable people – and ahead we go.

Unfortunately life is rarely that easy. In most cases the test manager either has an already defined group of people of which to form a team. Or he or she has a limited and specific group of people to choose from. It could also be that the manager has to find one or more new people to fill vacancies on an existing team. In all cases the knowledge of people’s team role profiles is a great advantage.

Forming teams and getting them to work is not an easy task. There is no absolute solution. But a well-formed team is a strong team, and a team tailored for the task is the strongest team you can get.

There will be more examples of types of fish – sorry, testers – at EuroSTAR, and examples of which Belbin roles fit best with different test roles in test teams with different targets, such as component testing and acceptance testing.
While waiting for this, you can try to find out how many fish are hidden in this picture:

Mrs. Anne Mette Jonassen Hass, M.Sc.C.E., has worked in IT since 1980, and since 1995 for DELTA, IT-Processes, mainly in software testing and software process improvement. Mrs. Hass holds the ISEB Foundation and Practitioner Certificates in Software Testing and is an accredited and experienced teacher for both. She is a frequent speaker and has solid experience in teaching at many levels. She has written two books, developed the team game ”Process Contest”, and created the poster “Software Testing at a Glance – or two”.

Performance Measurement Framework for Outsourced Testing Projects

This is an article from the July edition of the EuroSTAR Newsletter - STARTester, from Kalyana Rao Konda, AppLabs Technologies, India. You can view the complete newsletter by clicking here, and don't forget to subscribe for future issues.

Industry estimates peg the current global market size of outsourced testing services at around $13 billion. This is a strong indication that outsourcing of testing processes (partially or fully) is here to stay and flourish. Among the many varieties of outsourcing, off-shoring is gaining momentum, in which testing activities are typically outsourced to low-wage countries such as India, Russia and China.

This new paradigm of getting testing done at remote locations poses significant challenges to both client and vendor. Some of the key aspects that demand attention in managing testing engagements are differences in test maturity levels, separating test teams from development teams, sharing test environments, managing test tool licenses, changes in roles and responsibilities on the client side, and defining SLAs to protect business interests. Managing and monitoring test outsourcing is indeed a crucial step in supporting the engagement and making it successful.

Key factors to be considered in managing the outsourcing relationship are the business drivers, the different outsourcing test scenarios, and the potential expectations of the client. Lack of a performance measurement framework can often lead to situations such as:

> Excessive communication
> Micro management by client
> Supplier spends too much time in reporting
> Every stakeholder feeling out of control

There is a strong need for a performance measurement framework that can prevent the above potential mishaps. A Performance Measurement Framework (PMF) is an essential part of any test-outsourcing project. It defines the boundaries of the project in terms of the services that the service provider will offer to their clients, the volume of work that will be accepted and delivered, and acceptance criteria for responsiveness and the quality of deliverables. A well-defined PMF correctly sets expectations for both sides of the relationship and provides targets for accurately measuring performance against those objectives. At the heart of an effective PMF is its performance metrics. During the course of the test outsourcing engagement, these metrics will be used to measure the service provider's performance and determine whether the service provider is meeting its commitments.

The ‘5P’ performance measurement framework is introduced to establish accountability on both sides (client and vendor), to jointly manage the engagement, and to achieve a win-win situation. The 5 Ps are product, project, process, people and price. The framework is easy to apply, proven and practical in nature, and was developed based on knowledge and experience. It provides a collection of metrics to choose from across these five dimensions of the testing engagement. Metrics can be provided to cater to a wide variety of testing engagements, namely test automation, performance testing, certification testing, functional system testing, white-box testing, security testing and so on.

Sample metrics for each category are given below to suggest directions to think in.

• Project: Test effort vs. development effort, productivity.
• Process: Cycle time improvement, defect leakage index.
• Product: Time to find a defect, test coverage.
• People: Attrition, average experience.
• Price: $ amount saved, Price variance.
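A few of the sample metrics above can be made concrete with simple formulas. The definitions below are common interpretations of these metric names, not formulas prescribed by the article, and all figures are made up for illustration.

```python
# Illustrative calculations for three of the 5P sample metrics.

def effort_ratio(test_effort_hours, dev_effort_hours):
    """Project: test effort vs. development effort."""
    return test_effort_hours / dev_effort_hours

def defect_leakage_index(defects_post_release, defects_in_test):
    """Process: fraction of all defects that leaked past testing."""
    total = defects_post_release + defects_in_test
    return defects_post_release / total

def price_variance(budgeted_cost, actual_cost):
    """Price: positive means the engagement came in under budget."""
    return budgeted_cost - actual_cost

print(effort_ratio(400, 1600))          # 0.25
print(defect_leakage_index(5, 95))      # 0.05
print(price_variance(100_000, 92_500))  # 7500
```

No single number here means much in isolation; as the article argues, the value comes from reading the pattern across all five dimensions.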

The vendor and client have to understand the business drivers of the testing engagement. Key result areas have to be identified based on those business drivers. Appropriate test metrics are then selected based on the nature of the project, the test types, the test phases and so on. Metric selection is based on the principle that every metric in isolation gives information to track a business driver. The idea of multiple measurements is to put together a pattern of information that collectively gives a complete and accurate picture of the system. Install a metrics system that allows you to collect the information needed to measure, analyse, and steer projects in the right direction.

Benefits of the model:

Implementing a proper performance measurement framework for outsourced test activities has numerous benefits. A few of them are listed below.

• Helps companies manage their test service providers in an optimal manner for win-win relationships.
• Proper visibility on the return on investment by the outsourced service provider.
• Consideration of all the quality measures into account while analyzing the performance.
• Introduction of a standard evaluation process across the company.
• Identification of the potential risk areas that affect the productivity of the test team.
• A higher level of abstraction, with carefully chosen test metrics and presentation format, enables management to spot critical issues quickly.
• Past history of the results from the framework can help the success probability of future projects.


Kalyana Rao Konda is a senior technical services manager at AppLabs Technologies India Pvt Ltd, a company that provides development and testing services. He has been interested in testing from the beginning of his career. He has immense experience in managing product testing groups and providing test services to clients world-wide. He has a proven track record of managing large-scale test automation projects across various technologies and test tools for a wide variety of organizations. He has published papers and spoken at international testing conferences and on leading web-sites. He holds PMP and CSQA certifications, and a degree in Electronics and Communications Engineering.

Collaborative Practices for the Dream Team

Fran O'Hara, Insight Test Services, Ireland

This is an article from the August edition of the EuroSTAR newsletter.
Click here to view the entire Newsletter and to subscribe for future editions.

What are the best team-based practices to help testers and developers collaborate to deliver better software more quickly and less expensively? This article will highlight and provide insight into two high value practices that are both practical and proven in industry.

(Note these and other team based practices such as collaborative planning, project reviews, agile practices, etc. will be expanded upon in Fran O’Hara’s tutorial of the same title at EuroSTAR 2006)

1. Reviews.

Reviews are a key team-based practice that helps develop better collaboration between developers and testers…. if they are well executed. Appropriate use of an efficient and effective review process (one that finds a high percentage of important problems quickly and which also promotes learning) is the best way to gain cultural acceptance and facilitate collaboration. Testers need knowledge to test – reviews are a practical way to gain much of that requirements/system knowledge. Testers are also excellent at finding documentation faults so their involvement adds considerable value. Key documents that benefit significantly from collaborative review involving developers, business analysts, users and testers include User Requirements and Functional Specifications as well as Test Strategies and Plans.

Typical pitfalls with reviews include:

1. Reviews aren’t planned into the project schedule so they have to be done for free in zero time! Without enough time to prepare or indeed without having the right review team, reviews will not find a sufficient percentage of important problems.

2. Review meetings drag on and aren’t well managed. Trivial issues like spelling mistakes are raised, discussions about solutions occur and conflicts arise about the severity of problems or which solution is best.

3. A ‘review everything’ mandate has come from management. When too much has to be reviewed together, the natural tendency is to check the documents quickly just to get through them. This results in finding the more obvious and trivial problems but many of the more subtle and important problems are missed.

4. A ‘one size fits all’ process is being used. Sometimes this is too formal and rigorous for what is really required or indeed for the existing level of maturity of the organisation. This can then result in going through the motions with reviews – this lack of buy-in is often fatal for the process.

5. The review leader role is not emphasized (e.g. no training provided) – leading to poorly planned preparation and poor management of the review meeting – see point 2 above. The review leader role is there to ensure efficient and effective reviews by maintaining the team’s focus on finding major defects.

A sample practical peer review process which avoids the above pitfalls will be presented and practiced at the upcoming EuroSTAR conference tutorial.

2. Risk based testing practices.
Risk-based testing provides a common language between all stakeholders including test, development, management and customers/users. Workshops where key stakeholders collaborate to identify and analyse risks and then develop a full lifecycle risk-based test strategy are powerful collaborative activities. They unite development and test in a collaborative approach to testing and addressing risk (including the go/no-go decisions on release). The knowledge transfer and shared vision resulting from such collaborations go a long way to helping ensure a successful project.

Risk-based testing typically involves:

1. Identifying and analyzing/prioritising product risks that can be addressed by testing. This is best done in collaboration with customers/users that can provide business risks and developers who can provide system/technical risks. Examples of business risks include critical functions/features that the users need to do their job. System/technical risks could include core system functions, performance, security or other issues that are critical from a system operational viewpoint. Workshops are an effective approach to use here.

2. Developing a testing strategy that can mitigate these prioritised risks. This may involve assigning critical features to be tested in particular stages or iterations of testing (ranging from static testing such as peer reviews of designs/code to dynamic testing such as functional system testing). Again, focused workshops facilitate this collaboration and agreement on the testing approach throughout the full lifecycle for best results.

3. Designing tests within each test stage that extensively check the allocated high risk elements with less testing of lower risk elements. The result is a prioritised set of test cases agreed by project stakeholders to address the most important product risks.

4. Executing the tests in order of priority.

5. Reporting progress on the basis of risks addressed and residual risks remaining.
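Steps 3 to 5 above can be sketched in a few lines of code. This is my own toy model under assumed data structures (the `TestCase` class, the risk scores and test names are all hypothetical), not a prescribed implementation.

```python
# Toy model of risk-based execution: run tests highest-risk first and
# report progress as risk addressed vs. residual risk remaining.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    risk: int          # product-risk score agreed with stakeholders
    passed: bool = False

def run_in_risk_order(test_cases, execute):
    """Execute tests in descending risk order; return residual risk."""
    ordered = sorted(test_cases, key=lambda tc: tc.risk, reverse=True)
    for tc in ordered:
        tc.passed = execute(tc)
    residual = sum(tc.risk for tc in ordered if not tc.passed)
    total = sum(tc.risk for tc in ordered)
    return ordered, residual, total

cases = [TestCase("login", 8), TestCase("report-export", 3), TestCase("payment", 9)]
# Stub executor: everything passes except the low-risk export test
ordered, residual, total = run_in_risk_order(
    cases, execute=lambda tc: tc.name != "report-export")
print([tc.name for tc in ordered])  # ['payment', 'login', 'report-export']
print(residual, "of", total)        # 3 of 20
```

Reporting "3 of 20 risk points outstanding" is exactly the kind of progress statement step 5 describes: stakeholders hear about risk, not test-case counts.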

Users have their concerns addressed and are provided with information in a language they understand i.e. risk. Developers provide input on technical risk identification as well as executing their part of an integrated test strategy. Testers gain valuable knowledge to help ensure they add significant value to the project.

In summary, at Insight Test Services our experience in multiple environments and domains has been that the above two team-based collaborative practices are both practical and highly beneficial in terms of project success. Properly planned and managed reviews used throughout the complete lifecycle make a significant contribution to the quality of the final product. The deployment of our risk-based methodology Test Control™ on projects has helped to ensure the positive involvement and collaboration of all stakeholders on a wide range of projects of different sizes and complexity.


Fran O’Hara is European director of RPI Alliance Ltd, an alliance of consulting companies collaborating to deliver advanced process improvement technologies. In 1996, he formed Insight Consulting, providing process improvement services. In 2002, he co-founded Insight Test Services, providing managed test/validation services. He is a co-founder and ex-chairman of the Irish software testing SIG, SoftTest Ireland, a fellow of the Irish Computer Society and a regular presenter at international conferences and seminars.

Thursday, July 06, 2006

Learning testing through analogies

Hi Reader,

Learning something new is always a challenge, and learning testing is a challenge of challenges. When I started learning testing, I found it hard not only because testing itself was challenging, but because every individual had their own opinion of testing, its methodologies, its branches, its definitions, its vocabulary, its gurus.

"How do I learn testing? From whom do I learn testing? And how do I know who is right?" ... these were the questions running through my mind.

It was indeed a tough phase of life for me. I happened to be misguided by all the people whom I asked "What is ..... in testing?".

One such basic question I asked was "What is regression testing?", and the answers that confused me were "It is execution of all test cases", "It is selective re-testing", "It is the cycle of testing where all cases should pass", "It is a product qualification test", "It is a combination of Sanity, Comprehensive and Extended sets of test cases"... (and the list goes on).

Well, I am lucky to have worked with many organizations in a short period of time. I term it "lucky" because I got a chance to get even more confused about what testing is all about. In some places they referred to me as Quality Assurance personnel and in other places I was referred to as a Test Engineer. Of course, I did ask myself "Who am I?", because I did testing irrespective of whether the organization called me a QA guy or a Test Engineer.

Fed up with the confusion, inspired by James Bach, and developing a passion for learning testing, I started to think on my own... to find the answers.

3 years passed ....

Now let me take you to present tense ...

Recently a tester in India contacted me to know the difference between Load and Stress testing and this is how I explained ...

__ Learning testing through analogies __

OK, so you want to know the difference between Load and Stress testing and how each can be done. Well, let us start thinking then...

Assume you are asked to test a chair that can take a load of 50 kilos. For a chair, Load and Stress tests are the most important ones, as it will be subjected to one or the other in use.

Now let us take up a dictionary and find out what "load" means.
WordWeb says load is "weight to be borne or conveyed".

Ok the next step as a tester is to think of use cases, test cases and test content.

Let us keep it brief by starting off with the collection of test content, i.e. 10 kilos * 5 bars, 1 kilo * 10 bars, 100 grams * 10 bars, 10 grams * 100 bars...

Now start testing with a minimum load of, say, 10 kilos and gradually increase the load in equal steps up to 40 kilos. Once you have reached 40, start using the 1-kilo bars up to 45, then the 100-gram bars up to 49, and as you near the required load-bearing capacity, add weight in still smaller steps.

Once the chair is loaded with 50 kilos, leave it for some time and check for any deformation of the chair legs.

If the chair takes 50 kilos comfortably, then try adding more weight, again in very small steps, to see where it breaks. This does not mean you are out to break the system; it gives you data showing that a system expected to take a 50-kilo load has been designed to take more, and hence the cost has increased.
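The stepped loading described above translates naturally into a small simulation. This is my own sketch; the exact step boundaries are an illustrative choice, not the only valid schedule.

```python
# Toy simulation of incremental load testing: coarse steps far from
# the rated capacity, finer steps as the limit approaches.
RATED_KG = 50.0

def next_step(current_kg):
    """Pick a weight increment based on distance from the rated load."""
    remaining = RATED_KG - current_kg
    if remaining > 10:
        return 10.0    # 10-kilo bars while far from the limit
    if remaining > 1:
        return 1.0     # 1-kilo bars closer in
    return 0.1         # 100-gram bars near the rated capacity

def load_schedule(start_kg=10.0):
    """Return the sequence of total loads applied, up to the rated load."""
    load = start_kg
    steps = [load]
    while load < RATED_KG - 1e-9:
        load = round(load + next_step(load), 1)
        steps.append(load)
    return steps

schedule = load_schedule()
print(schedule[:5])  # [10.0, 20.0, 30.0, 40.0, 41.0]
print(schedule[-1])  # 50.0
```

The same shape of schedule applies to a web application: ramp virtual users up coarsely at first, then in fine increments as you approach the rated capacity, observing the system at each plateau.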

OK, you are done with load testing, so now let us shift our focus to stress testing.
WordWeb says stress is "force that produces strain on a physical body".

A very common question or confusion people have is "Is adding more load a stress test?". The answer from my research on this topic is "Not necessarily".

Well, I could use the same 50-kilo load to stress the system. How?

There are different ways you can apply a load:

a) Axial load - The whole load is concentrated on the axis of the chair. Applying a load to the chair in such a manner is a stress test. Use case - a person standing on the chair with his entire weight on his toes. People usually do this when they want to stretch for something that is still not reachable even after using a chair to reach it (a book or something).

b) Truss - This kind of load becomes a stress when the mass is not equally distributed over the chair. One leg of the chair could have to take more weight than the others, and hence this too becomes a stress test. Use case - two children made to sit on the same chair. (At least, it happens in India.)

c) .... (the only limitation is your imagination)

"Wow, I got a clear picture of Load and Stress testing, and using this example I can now think about how to load and stress test a web application." was the reply I got from the person who asked me the question about load and stress testing.

Now, that is how I learnt some of the testing concepts; perhaps that is how I have been learning/teaching them, and I will continue to learn in the same way unless I see a better way.

__ Learning testing through analogies __

"To know what Stress Testing is, say... *Stress Testing Stress Testing ...* continuously without a gap..."


Pradeep Soundararajan
Tester Tested !