Tuesday, August 21, 2007

Next EuroSTAR Webinar - Wednesday, 5th September!

An Introduction to Testing on Agile Teams – The Practices & Beyond: presented by Antony Marcano, testingReflections.com, UK.

Date: Wednesday, 5th September, 2007
Time: 10:00 am London-Dublin / 11:00am CET
Duration: 30 minutes

Abstract: An increasing number of organisations are considering, or are in the process of, adopting Agile software development practices. Often, how testers integrate into this process is an afterthought. Worse still, organisations assume that it changes nothing about how testers function and operate. This couldn’t be further from the truth. In fact, a capable Agile team can change the very raison d'être of a tester in all the ways that testers have often hoped for. No longer does the tester *need* to be the gatekeeper of quality; the whole development team cares about quality like never before. No longer are testers at the end of the process; testers are involved from the outset of the project!

During this webinar, Antony discusses:

Key Points
• What is it that makes a team ‘Agile’? – Practices such as Test Driven Development are a reflection of underlying values and goals. It’s the adoption of these values and goals that allows a team to gain the greatest benefit from adopting an Agile approach to software development.

• What are the common ‘Gotchas’ for testers on Agile teams? – For example, extraordinarily short iterations producing software with end-to-end features can catch out many testing teams. This is especially true if the test team is used to working as a separate team, segregated from the developers, and/or relies on large amounts of manually executed scripted tests.

• What role do testers play and how can you deliver the most value? – Your primary role is no longer just to inform the project of how the software doesn’t work, but to be a welcomed guide who helps, before the first line of code is written, to make sure that the software does work.
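The Test Driven Development practice mentioned above follows a simple rhythm: write a failing test first, then write just enough production code to make it pass. A minimal sketch of that rhythm in Python — all names here (`Basket`, `add_item`, `total`) are invented for illustration, not taken from the webinar:

```python
import unittest


# Hypothetical production code: in TDD this class would be written
# only AFTER the test below had been seen to fail. Names are invented
# for this sketch.
class Basket:
    def __init__(self):
        self._items = []

    def add_item(self, name, price):
        self._items.append((name, price))

    def total(self):
        # Sum the prices of all items in the basket.
        return sum(price for _, price in self._items)


class BasketTest(unittest.TestCase):
    # Written first; its failure drives the design of Basket.
    def test_total_sums_item_prices(self):
        basket = Basket()
        basket.add_item("tea", 2.50)
        basket.add_item("milk", 1.20)
        self.assertAlmostEqual(basket.total(), 3.70)


if __name__ == "__main__":
    unittest.main()
```

The point of the practice is not the test itself but the values behind it: the whole team, not a downstream test group, takes responsibility for whether the code works.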

Saturday, August 04, 2007

Checklist for Test Preparation

Listed below are questions/suggestions for systematically planning and preparing software testing.
  • Have you planned for an overall testing schedule and the personnel required, and associated training requirements?

  • Have the test team members been given assignments?

  • Have you established test plans and test procedures for
      • module testing,
      • integration testing,
      • system testing, and
      • acceptance testing?

  • Have you designed at least one black-box test case for each system function?

  • Have you designed test cases for verifying quality objectives/factors (e.g. reliability, maintainability)?

  • Have you designed test cases for verifying resource objectives?

  • Have you defined test cases for performance tests, boundary tests, and usability tests?

  • Have you designed test cases for stress tests (intentional attempts to break the system)?

  • Have you designed test cases with special input values (e.g. empty files)?

  • Have you designed test cases with default input values?

  • Have you described how traceability of testing to requirements is to be demonstrated (e.g. references to the specified functions and requirements)?

  • Do all test cases agree with the specification of the function or requirement to be tested?

  • Have you sufficiently considered error cases? Have you designed test cases for invalid and unexpected input conditions as well as valid conditions?

  • Have you defined test cases for white-box testing (structural tests)?

  • Have you stated the level of coverage to be achieved by structural tests?

  • Have you unambiguously provided test input data and expected test results or expected messages for each test case?

  • Have you documented the purpose of and the capability demonstrated by each test case?

  • Is it possible to meet and to measure all of the defined test objectives (e.g. test coverage)?

  • Have you defined the test environment and tools needed for executing the software test?

  • Have you described the hardware configuration and resources needed to implement the designed test cases?

  • Have you described the software configuration needed to implement the designed test cases?

  • Have you described the way in which tests are to be recorded?

  • Have you defined criteria for evaluating the test results?

  • Have you determined the criteria on which the completion of the test will be judged?

  • Have you considered requirements for regression testing?
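Several of the questions above — boundary tests, special input values such as empty files, and invalid or unexpected input conditions — can be captured directly as automated test cases. A minimal sketch in Python, where the function under test (`parse_age`, accepting an age string in the range 0–130) is invented purely for illustration:

```python
import unittest


# Hypothetical function under test, invented for this sketch:
# parses an age string and returns an int in the range 0-130.
def parse_age(text):
    if not text or not text.strip():
        raise ValueError("empty input")      # special input value
    value = int(text)                        # raises ValueError on non-numeric input
    if not 0 <= value <= 130:
        raise ValueError("out of range")     # boundary check
    return value


class ParseAgeTest(unittest.TestCase):
    def test_valid_value(self):
        self.assertEqual(parse_age("42"), 42)

    def test_boundaries(self):
        # Boundary tests: both edges of the valid range.
        self.assertEqual(parse_age("0"), 0)
        self.assertEqual(parse_age("130"), 130)

    def test_just_outside_boundaries(self):
        with self.assertRaises(ValueError):
            parse_age("-1")
        with self.assertRaises(ValueError):
            parse_age("131")

    def test_special_and_invalid_inputs(self):
        # Invalid and unexpected input conditions, as the checklist asks.
        with self.assertRaises(ValueError):
            parse_age("")      # empty input (cf. "empty files")
        with self.assertRaises(ValueError):
            parse_age("abc")   # non-numeric input


if __name__ == "__main__":
    unittest.main()
```

Each test name records the purpose of the case, which also answers the checklist's question about documenting what capability each test case demonstrates.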