Thursday, June 22, 2006

Hurrah for Automation, But Neglect Manual Testing at High Cost

This is an article from John Scarborough which appears in the 3rd Edition of the EuroSTAR Newsletter entitled 'Hurrah for Automation, but neglect Manual Testing at High Cost'


Hurrah for Automation, but neglect Manual Testing at High Cost

A few months ago I talked with the VP of Engineering for a $50+M software producer whose flagship product, Buckflow (not its real name), was in serious trouble. His team had followed the same Quality Assurance (QA) routine for a few years without running into serious problems, but customer deployment of their last upgrade, which contained patches for 20 customer-reported problems, had resulted in an uproar. At a few sites, Buckflow would not initialize. At another site, a key workflow management tool had been disabled. More patches were scheduled, and a major dot release was scheduled to come out in the summer.

Their board of directors had been alarmed by the report of deployment failures, and was adamant in insisting that this should never happen again. Because of the number of customizations, and because of the frequency of upgrades, the VP believed that the only solution was full automation. He understood that this could only be a long-term goal, but he wanted my company’s help in making it happen, along with whatever bridge solutions were required between now and then.

Their routine for QA and testing, which until then had worked satisfactorily, lacked sophistication. Their development team provided unit testing, and business analysts provided acceptance testing. Their small QA team of four full-time test engineers spent all their time developing and executing test cases for features or fixes that were to be rolled into the next scheduled monthly patch. A week before its scheduled release, everyone in the product development group installed the release candidate patch on their machines and for a couple of days ran certain scenarios selected from a distributed list. The test team barely had time to run their new test cases. They had no modular tests, and they had stopped running regression tests at least two years earlier.

Lack of sophistication in test strategy, the obvious problem at Buckflow, is not unusual. I pointed out that bugs found in the design stage are far less expensive to fix than bugs found during product integration testing. Also – especially applicable to Buckflow – every bug fix is a weak link because the product’s original design did not address it, and therefore must be tested in every release. The VP nodded with evident regret, and said that they had thought that disciplined development combined with unit testing would be sufficient.

It’s also not unusual to find companies who continue to have naive faith in automation, in spite of evidence against such disturbingly resilient illusions as:

* automation eliminates human errors that result from fatigue, boredom, and disinterest;
* automation can be re-used indefinitely (write once, run many);
* automation provides more coverage than manual testing;
* automation eliminates the need for costly manual testing.

Every one of the above statements makes sense if properly qualified. Automation may eliminate some or all of the errors that result from manual testers growing weary, but it may also introduce other errors that are equally due to fatigue, boredom and disinterest, arising here in the people who develop automation.

Automation can be re-used indefinitely, provided that the application or system under test does not change, and that nothing else changes that might affect execution, such as common libraries or runtime environments (e.g. Java). In practice, of course, all of these change constantly. Whatever return on investment may have been realized from automation is quickly wiped out by maintenance costs, at which point the only advantages of automation are strategic, not economic.
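The economics can be put on a back-of-the-envelope footing. The sketch below (all figures are invented for illustration) computes how many releases it takes for automation to pay for itself, and shows how per-release maintenance can push the break-even point out to infinity:

```python
# Break-even sketch for automation ROI (all figures invented for
# illustration). Automation pays off only while its per-release cost
# (maintenance + execution) stays below the manual cost it displaces.
def breakeven_releases(build_cost, maint_per_release, manual_per_release,
                       auto_run_per_release):
    """Releases needed before the cumulative saving recovers the initial
    build cost, or None if each release saves nothing at all."""
    saving = manual_per_release - (maint_per_release + auto_run_per_release)
    if saving <= 0:
        return None  # maintenance has wiped out the advantage
    # smallest n with n * saving >= build_cost (ceiling division)
    return -(-build_cost // saving)

print(breakeven_releases(40_000, 2_000, 6_000, 500))  # 12 releases
print(breakeven_releases(40_000, 6_500, 6_000, 500))  # None: never pays back
```

With modest maintenance, the suite pays back in a year of monthly patches; let maintenance creep above the manual cost it replaces and it never pays back at all, which is exactly the trap described above.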

If “coverage” means “test coverage” (rather than “code coverage”), then yes, automation can even provide 100% coverage: one need only automate all available test cases. A more significant data point however is the degree of code function or code path coverage provided by available test cases. While achieving 80% code path coverage may be better than 70%, a more significant consideration is what has not been covered, and why.
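The gap between the two notions of coverage is easy to demonstrate. In this toy example (function, names, and figures are all invented), the automated suite runs 100% of its available test cases, yet one of the two code paths is never executed:

```python
# Toy illustration: 100% test-case coverage != 100% code-path coverage.
executed = set()  # records which branches the tests actually exercised

def classify_discount(order_total):
    """Invented example function with exactly two code paths."""
    if order_total >= 100:
        executed.add("discount")
        return 0.10
    else:
        executed.add("no-discount")
        return 0.0

# The "complete" automated suite: every available test case is automated,
# so test-case coverage is 100% -- yet both cases take the same branch.
test_cases = [(100, 0.10), (250, 0.10)]
for total, expected in test_cases:
    assert classify_discount(total) == expected

print(sorted(executed))  # only ['discount']; the else-path never ran
```

The interesting question is precisely the one the paragraph above poses: the `no-discount` path is the uncovered remainder, and no amount of automating the existing cases will reach it.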

To avoid manual testing at all costs would be the costliest option, because only in manual testing can all of a test engineer’s understanding, probity, logic, and cleverness be put to work. Security testing of Buckflow at the application level, for example, depends on how the application was developed, where it stores its cookies, what scripts it runs during initialization and various transactions, how stateful connections are established in the inherently stateless HTTP protocol, etc.
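The last point, establishing stateful connections over stateless HTTP, is typically done with a server-issued session cookie. A minimal sketch (illustrative only; names and the in-memory store are invented, and a real application would need expiry, renewal, and transport security) shows the mechanism a security tester has to reason about:

```python
import secrets

# Minimal sketch of a "stateful" session layered on stateless HTTP:
# the server issues a random session ID in a Set-Cookie header, and the
# client echoes it back on each request so the server can look up state.
sessions = {}  # server-side state, keyed by session ID (invented store)

def handle_login(user):
    """Issue a fresh session ID and remember which user owns it."""
    sid = secrets.token_hex(16)  # unguessable token, not a counter
    sessions[sid] = {"user": user}
    return {"Set-Cookie": f"session={sid}; HttpOnly; Secure"}

def handle_request(cookie_header):
    """Recover per-user state from the cookie the client sends back."""
    if cookie_header.startswith("session="):
        sid = cookie_header[len("session="):]
        session = sessions.get(sid)
        if session:
            return f"hello, {session['user']}"
    return "401 Unauthorized"
```

Each design decision here, token randomness, the HttpOnly and Secure flags, where the server stores state, is exactly the kind of application-specific detail that a tester probes manually, because it varies with how the application was built.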

While there are commercial test tools that can verify an application’s defense against cookie-cutter varieties of denial of service, or even 50% of the threat model for most applications, interoperation with new applications and with new versions of underlying technologies requires at least a significant investment in manual testing.

More obvious needs for manual testing include penetration testing, usability testing, and localization testing. But Buckflow had a particularly acute need for testing massively configurable applications in diverse environments. While there was room to talk about keyword-driven automation, it was clear that only manual testing would be able to identify configuration issues. In the end, we agreed that the best approach would be a combination of carefully orchestrated automated tests with rigorous manual testing.


As VP of System Engineering for Aztecsoft, John Scarborough manages and orchestrates pre-sales processes across Sales, Proposal Engineering, and Delivery, from project-based needs analysis to solution design to estimation to retrospective analysis of completed projects. He is also responsible for providing access across Aztecsoft to project-based operational knowledge. Scarborough previously served as Aztecsoft's Principal System Engineer and Quality Architect. Areas covered by his published papers include interoperability testing for web services, model-based estimation, and capability assessment in Agile environments.
Prior to his joining us in 2001, Scarborough was at Microsoft for 11 years, where he built and managed large teams in the test division of its Windows Operating Systems group, including system integration testing, application compatibility, user-context automation, and system validation.

A Software Testing Body of Knowledge?

This is an article from Stuart Reid which appeared in the 3rd Edition of the EuroSTAR Newsletter, entitled 'A Software Testing Body of Knowledge?'

A Software Testing Body of Knowledge?

So, what is a Body of Knowledge or BOK?
A BOK describes the generally accepted knowledge for a particular discipline; it is a formal inventory of the intellectual content of the field. A BOK is thus one way of defining a profession. For a BOK to be accepted there should be widespread consensus within the community that the knowledge and practices within the BOK are both valuable and useful, and applicable to most projects most of the time. The BOK provides the basis for the regulation of the profession; it also defines its boundaries.

Example BOKs in the IT area cover disciplines such as Project Management (APM and PMI variants) and Configuration Management (CMBOK). There is also the IEEE Software Engineering BOK (SWEBOK), which includes a chapter on software testing. The SWEBOK is being advanced to ISO status, but has been dogged by disagreements and, so far, has not been widely accepted by the community.

Who uses a BOK?

Unsurprisingly, a BOK has various stakeholders. New entrants to a field can use it to identify what they need to know, while practitioners can use it as an essential source of information on topics that they only need to reference infrequently. Certification (and licensing) bodies and academics may use it in the form of a syllabus as the basis for qualifications, which, in turn, will mean that training providers and students are also users.

Does a Software Testing BOK already exist?

Although the authors may disagree, it seems clear that the discipline already includes a number of ‘pseudo’ BOKs. By this I mean that there are several well-used software testing resources, but not one that covers the complete discipline, and not one on which there is general consensus. Examples of these ‘pseudo’ BOKs are:

* qualification syllabi created by certification bodies such as ISEB/ISTQB;
* approaches to testing such as TMap®;
* test process improvement models such as TPI® and TMMi™;
* well-regarded text books such as Glenford Myers’ original edition of The Art of Software Testing;
* standards on software testing, such as IEEE 829 and BS 7925; and
* the software testing chapter of the SWEBOK.

Although each provides some level of coverage of the field of software testing, not one of these ‘pseudo’ BOKs on its own satisfies the criteria for becoming the single BOK for the industry. This is because none of them provides broad enough coverage of the discipline of software testing. Neither does any of them appear to command the respect and trust of a large enough proportion of the software testing community to be considered as representing a true consensus.

Is the discipline of software testing ready for a BOK?
Implicitly, many contributors to the ‘pseudo’ BOKs appear to believe so; however, there is also a strongly-held opposing point of view. Let’s consider the opponents’ view first. Some consider that a BOK acts as a barrier in a number of ways. They feel that BOKs are, by nature, inert and rarely evolve, restricting new thinking and debate on currently accepted ‘truths’. They also point to the continuous stream of project failures and the apparent lack of ‘engineering’ in software testing, where scientific theories are not backed up by solid empirical data. Both points are presented as evidence of the field’s immaturity.

Another argument presented against a software testing BOK is that the discipline is too diffuse and varies too much from domain to domain. Detractors question whether there are enough generally good practices in software testing that apply to most projects, and suggest that many good practices are only applicable to specific application domains. For instance, they say that the practices generally useful for testing safety-critical systems may not be appropriate for the testing of low-integrity commercial applications.

The supporters of a software testing BOK point to the benefit of certification in providing a means of regulating the industry and defining training for new entrants. They argue that certification also lends software testing credibility with both customers and developers, while the availability of a single consensus BOK would encourage academics (even those with little interest in, or knowledge of testing) to adopt it. Another suggested advantage of a BOK is that it provides guidance to practitioners on how to improve their current practices. Many of those who feel that software testing should be considered a legitimate engineering discipline see a BOK as a necessary stepping stone to a profession of software testing.

Should a software testing BOK be created?

If the industry decides that a BOK is needed for software testing then it is most important (and probably very difficult) to ensure that consensus is reached. Any initiative must be an inclusive, multi-national effort and care must be taken to ensure that the stakeholders in the previously-mentioned ‘pseudo’ BOKs are invited to join the development process. Ownership of a new BOK could be difficult to manage, and although it is often argued that anything provided for free may be considered worthless by the recipient, I believe that any newly-created software testing BOK should be made freely available to the whole community.

Developers of a BOK must ensure that it does not include practices that are new and unproven with no evidence of their efficacy. A BOK should embody achievable good practice and not simply be a reiteration of academic texts, which may have little connection with the real world. The speed of evolution of the software testing discipline means that its BOK must carry with it the requirement for its continual review and revision. Although a difficult task, I believe that simply by attempting to build a BOK the software testing industry will continue to expand its knowledge of the discipline and so add value to the testing community.

EuroSTAR 2006 Workshop

The topic of a software testing BOK will be covered by an advanced workshop at the EuroSTAR conference in December. The aim is to open up debate on whether the industry should support its creation (with all the attendant questions) or wait until we have more obviously reached maturity. If you feel you would like to contribute to the discussion, then please make a note in your diary to attend.


Stuart Reid has spent the last 17 years involved in software testing, having previously worked on high-integrity systems. He is Chair of the BCS SIGiST and its Standards Working Party and was Chair of the ISEB Software Testing Board and founder of the ISTQB.

Tuesday, June 20, 2006

Thinking like a Tester

Found this on the useful software testing advice site -

I liked it, hope you do:

Welcome everyone who:

* Wants: to do right things right, first time.
* Does not want: to do things without value.
* Knows that: testing never ends, it just stops; and quality does not happen by accident, it has to be planned.
* Believes that: there's always one more bug, and testing is the art of thinking.

Thursday, June 15, 2006

Looking for Testers in France

I was curious whether there are any testers located in Paris, France. Email me a resume at if you are interested in an 18 to 24 month contract.


Thursday, June 08, 2006

Software Testing

The VERIFY 2006 International Software Testing Conference is now just around the corner, and the Early Bird discount ends on June 30th.

VERIFY's program consists solely of front-line experts in software development, security, and testing. Presentations and tutorials are delivered by industry leaders who face real-world challenges daily, work on real-world projects, and meet difficult implementation timelines.

A sample of the background and skills of VERIFY speakers and presenters:

> Co-Inventor of Adaptive Automated Testing Technique
> Industry Leaders in Test Driven Development (TDD)
> Security experts for .NET based Applications
> Successful Automated Testing on mission critical applications
> Inventor of Automated Test Lifecycle Methodology (ATLM)
> Early Pioneers of Automated Software Testing
> Co-Author of Java Testing Patterns
> Software Security Testing for COTS Products (at Symantec Corporation)
> Authors of books on Software Architecture and Continuous Software Integration
> Author of Best Practices for Formal Software Testing Process
> Experts on Opensource Development
> Author of Rational Guide to IT Project Management
> Software Security Testing for Embedded Systems (lottery systems, cell phones, casino gaming, and smart cards)
> Inventor of Secure Software Development Lifecycle (SSDL)
> Author of Tester’s Guide to .NET Programming
> Software Engineering & Testing for Service Oriented Architecture (SOA)
> Security Experts on Application Penetration Tests & Application Security Standards
> Industry Leaders on Introducing Successful Automated Testing Programs
> Co-Author of Testing Extreme Programming

Register now and save money.

Jeff Rashka
VERIFY 2006
Serving Software Professionals

Friday, June 02, 2006

Recommendations for Testing Books


It would be great to establish a list of recommended books for test professionals. Just click on the comments link below and let us know what you enjoyed reading.

Randy Rice recommended Sudoku for testers on his blog - so really it can be anything you found beneficial.