Thursday, November 09, 2006

QualTech Education In-House Training Course Offerings

QualTech Education offers In-House Training delivered by some of Europe's leading test experts on a wide variety of topics. Our courses are flexible and we can tailor a training solution to meet your specific requirements.
Call us today on +353 91 514472 or
email Tracy at tracy@qualtecheducation.com to discuss your requirements.

Take a look at our selection of course offerings

Testing SOA Applications And Services

This is an article from the November edition of the EuroSTAR Newsletter - STARTester, written by Colin Robb, Mercury, UK. You can view the complete newsletter by clicking here and don't forget to subscribe to receive future issues.

Few other innovations in IT offer the transformative potential of Service-Oriented Architectures (SOA), and Gartner estimates that by 2008, 80 percent of IT initiatives will be service-oriented.

The basis of SOA is not new - composite business applications made up of separate, distributed services which can be shared and reused, from internal or external sources - we've seen it before in guises such as CORBA. However, it is the adoption of global standards which is driving the current popularity of the SOA approach.
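Because an SOA service is consumed through its published interface rather than through a user interface, testing it typically means exercising that contract directly. Below is a minimal sketch of the idea in Python, assuming a hypothetical HTTP order-lookup service; the URL and field names are illustrative, not from the article.

    import json
    import urllib.request

    SERVICE_URL = "http://example.com/services/orders/42"  # hypothetical endpoint

    def test_order_service():
        # Exercise the service through its published interface, the way a
        # composite application would consume it, rather than through any GUI.
        with urllib.request.urlopen(SERVICE_URL) as response:
            assert response.status == 200
            order = json.load(response)
        # Contract checks: any consumer composed from this service relies on these.
        assert order["orderId"] == 42
        assert "status" in order

    test_order_service()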


Read more...

Iterative Automation

This is an article from the November edition of the EuroSTAR Newsletter - STARTester, written by Vijay Acharya, RelQ, UK. You can view the complete newsletter by clicking here and don't forget to subscribe to receive future issues.

In this article, I would like to share my experiences of test automation when confronted with the requirement of automating multiple products within a suite and delivering the automated scripts within a short span of time.

A Peek into the Product


The product is an Industrial Engineering workflow management system, predominantly used by plant design engineers. The product has a series of interfaces to process information from disparate plant design systems, and it sends the output in the form of 2D or 3D graphical images over an intranet portal.
The suite of four products is tightly integrated to manage the chain of activities, from creating individual design elements to integrating them into whole plant design diagrams.

Read more...

Off-shoring - A False Economy In A Testing Market

The following is an extract from the November issue of STAR Tester Newsletter, written by Pete Stock, SDLC, UK. You can view the complete article and newsletter by clicking here and don't forget to subscribe to receive future issues.

Testing your company's IT systems and applications. A necessary job, but an easy one to tick off the list, right? You just offshore your requirements to someone sitting in a room far, far away, they test your software out of sight and you sit back, smug in the knowledge that you're realising great savings in time and money, unlike many of your competitors who are still utilising UK based software testers.

Read more...

Wednesday, November 01, 2006

Hoover Dam and IT

This is a blog entry I wrote at http://www.techtribe.com; you need to be a member of that site to view the blog.
Anyway, I am recreating the same entry here:


I have been amazed by the Hoover Dam since my childhood. Its size made me feel tiny even in pictures, and the stories of its construction were etched into my memory forever. So when I got a chance to visit the Hoover Dam in April 2006, I was very excited.
Excited is a very mild word to describe my feelings. It was something I had dreamt of since childhood, and I was very happy to see that dream come true.
As our car approached the dam, my heart started racing.
The sheer size of the construction amazed me. I walked around and saw the dam from different angles. Oh boy, I didn't get tired even after spending four hours at the dam site.
I purchased books about the construction of the Hoover Dam and started back.

I started reading the book once I reached my hotel. As I kept reading, I was amazed by the details of the construction, the amount of materials used and so on, but one detail caught my attention most:

“… was accepted by the Bureau of Reclamation on March 1, 1936, more than two years in advance of the scheduled date of completion”

It was not only the Hoover Dam: the Empire State Building in New York was also completed five months ahead of its scheduled time.

For me, working in IT projects, this piece of information made me think: what stops us from delivering projects ahead of schedule and at the required quality?

I chose the words “required quality” because we may have to aim for more stringent quality depending on the type of project we do.

What ails the IT world? Why don't projects end on time, and why are they plagued with bugs? Though many articles have been written about this, I thought we should learn from our construction/civil engineering peers how they execute these monumental projects on time and with the quality that is needed.

I went through the resources, and I observe these may be the reasons:

1. Vision
a. The vision with which these projects were created is to be admired: clarity of thought while conceiving the project.
2. Design
a. All design aspects are validated by qualified professionals. All design alternatives are considered and the best is selected based on the impact on the environment, wildlife and so on. There are no two ways about the design once it is baselined.
b. The design is done in such a way that parallel activities are always possible.
3. Attention to detail
a. Attention to detail before the start of the project, during the project, while completing the project, and during each and every activity carried out.
4. Coordinated execution
a. There is no blame game between the teams, and everyone is clear on what they are doing.
5. Metrics on progress
a. Collect only metrics that give information, not confusion.
b. Filter the metrics down to what the executive needs.
6. A leadership that knew what it was doing and what its people were doing.
a. The leadership then didn't use complicated ERP/CRM/SCM systems and accounting software to manage their resources (men, material and money).
7. Innovation, integrated with the way design and execution were done.
a. It is inspiring to read about the innovations used while constructing the Hoover Dam, like “large 10-ton trucks were modified to support platforms holding 30 drills which were used to prepare the rock face for dynamiting tunnels through the canyon walls”.

These seven factors have a huge impact on the delivery of any project. I will not say that IT projects lack them completely, but the maturity with which the construction industry applied them even 50 years ago is amazing, and that maturity is lacking in the IT industry.
Have I missed any factors?
We should start looking to the construction industry for more ideas on how they manage their projects.

So the question is: will we see a day when we start delivering projects well before schedule and with the required quality? Will IT mature?


What are your comments and views?

Sunday, October 15, 2006

A testing puzzle that asks you to identify the problem

Hi Reader,

Last week I took a problem I was facing at the office and framed it as a puzzle that can help testers think, and encourage them to refine their approach to solving such problems in testing.

A couple of Indian testers took part in it, and now I would like to share it with you too, for a reason I have listed below.

To find out what the puzzle is:
Testing Puzzle at office!

Now, you might wonder how important this is to you, but you might find it interesting to know that James Bach cracked the puzzle I put up.

Spend time solving the puzzle, and if you think you have cracked it, look at the answer in the same blog.
(Have a look at the answer only after you make an attempt at cracking the puzzle.)
Answer:
Answering the Telephone puzzle

Good luck and kudos to those who crack the puzzle!

Thanks and Regards,

Pradeep Soundararajan

Thursday, October 05, 2006

The 2006 Testing Excellence Award

The "European Testing Excellence Award" is presented annually to an individual who has made a significant contribution to the field of software testing within Europe.

EuroSTAR believes there is a need for an award which recognises leadership and contribution in the field of software testing. An award will provide focus to the contribution testing processes and principles make to the success of the products and services in the Information Technology Industry.

A testing excellence award will serve as a vehicle to build a knowledge base of approaches, solutions and the benefits of implementing testing solutions.

We encourage you to think about your colleagues, teachers and mentors within the software testing community, and take the time to nominate someone who has inspired or motivated you, or through their work has made your work easier! Let's reward the people who give their time, energy and full commitment to the world of software testing. The submission details are all explained on our web site, so have a read and then tell us all about your testing hero.

The "European Testing Excellence Award" is presented annually to an individual who has made a significant contribution to the field of software testing within Europe.

EuroSTAR believes there is a need for an award which recognises leadership and contribution in the field of software testing. An award will provide focus to the contribution testing processes and principles make to the success of the products and services in the Information Technology Industry.

A testing excellence award will serve as a vehicle to build a knowledge base of approaches, solutions and the benefits of implementing testing solutions.

We encourage you to think about your colleagues, teachers and mentors within the software testing community, and take the time to nominate someone who has inspired or motivated you or through their work has made your work easier! Lets reward the people that give their time, energy and full commitment to the world of software testing. The submission details are all explained on our web site, so have a read and then tell us all about your testing hero.


Submit your nomination before October 30th. Click Here for full details.

Becoming an Influential Test Team Leader

The following is an extract from the October issue of STAR Tester Newsletter, written by Randall Rice, Rice Consulting Services, USA. You can view the complete article and newsletter by clicking here and don't forget to subscribe to receive future issues.

Seven Ways to Add Value to Your Team

The essence of leadership is influence. Good leaders are able to influence people to achieve a common goal. As a leader, your sphere of influence extends from yourself to your team and then to others outside of your team.

One of the most powerful ways to influence both your team and the rest of your organisation is to lead your team in doing things that will multiply their value to the organisation. In this article I will suggest some low-cost and achievable things that just about any team can do to greatly multiply value and effectiveness.


Read more...

Six Testing Hats

The following is an extract from the October issue of STAR Tester Newsletter, written by Julian Harty, Google, UK. You can view the complete article and newsletter by clicking here and don't forget to subscribe to receive future issues.

What are some of the hard problems related to software testing?
In my experience these problems range from things like unclear goals, insufficient support or respect from others, and using our limited time effectively, to the more fundamental testing techniques covered in numerous testing books. Like you, I expect there are better ways to test, and I'm always on the lookout for ideas from other fields that we may be able to adapt and adopt to enhance our testing.


Read More...

Achieving Software Quality Through Teamwork

The following is an extract from the October issue of STAR Tester Newsletter, written by Isabel Evans, Testing Solutions Group, UK. You can view the complete article and newsletter by clicking here and don't forget to subscribe to receive future issues.

"I'm going to buy a magic wand, and then when the Development Manager says to me 'We've finished the build, now can you do the quality stuff' I can just wave the wand and make it happen…"
Thus said a Test Manager, complaining about the way testing was regarded in his projects.

Read More...

Monday, September 11, 2006

Your Chance to win a FREE place at EuroSTAR 2006!

This Competition is now closed
As the date for EuroSTAR 2006 quickly approaches we want to give YOU the chance to join us this year in Manchester.

Simply click here and fill in your details to be in with a chance to win a FREE conference place!
Don't miss this opportunity to attend Europe's premier software testing event!

Thursday, September 07, 2006

Developing Testers: What Can We Learn From Athletes?

This is an article from the September edition of the EuroSTAR Newsletter - STARTester, written by Paul Gerrard, System Evolutif, UK. You can view the complete newsletter by clicking here and don't forget to subscribe to receive future issues.

This article presents the first section of the paper that Paul is writing to accompany his keynote talk at EuroSTAR 2006.

1.1 Motivation for this Talk
This article is based on my experience of doing two things: coaching rowers and coaching testers - two things close to my heart. There are some universal rules about coaching, and I wanted to explore some of the commonalities between coaching athletes (rowers are athletes) and coaching testers.

A couple of years ago, I (rather foolishly) volunteered to coach the ‘development women’ squad at Maidenhead Rowing club. The Devwomen squad, as they were called, had learnt to row in 2004 and were keen to carry on and compete in some events the following year. I offered to create a training plan, and coach four sessions a week for the next 11 months. The plan was to take people with a few weeks experience and develop them into competitive rowers in a year.
This sounds quite ambitious, but the beautiful thing about the sport of rowing is that you can compete at almost any level. The levels of enthusiasm and commitment were high enough, and I was confident we could make good progress. Whether they competed and won was another matter.

I briefed the squad on my proposed training plan for the year with a PowerPoint talk. It’s a long story, but between September 2004 and July 2005 the squad were very successful. They embraced the training and stuck to it, and were enthusiastic and committed throughout. Every person in the group had at least one ‘win’ by the end of the summer – some had three or four pots and medals to display on the shelf. (Half of the devwomen subsequently moved up to row in the ‘Elite’ squad last year.)

Now, it struck me some time later, that the training plan I worked out at the rowing club had a structure, focus and detail more sophisticated than the personal development plans most testers agree with their employer. (In fact, I subsequently discovered that probably less than 10% of testers have any development plan at all). I was curious to see if a development plan for athletes could be used as the starting point for a tester’s development plan.


1.2 From Athletic Training Plan to Tester Development Plan
I took the devwomen training plan and, using the same headings and appropriate substitutions for the content of my PowerPoint presentation, sketched out what such a plan might look like. It started as just an exercise, but much of what I had learnt from working with the devwomen had a direct correspondence to working with testers. There were of course some ‘rough edges’, but far fewer than I would have anticipated. So it seemed to me that there was value in pursuing it further and developing a talk around this curious exercise.

I took my original training plan and slides and re-ran the thought process for each aspect of the plan. I asked myself, ‘if I were coaching testers and I had that kind of framework, what would I put into a development plan for testers?’
In the paper, I walk through a development plan for athletes and then use the same framework to explore what might be done for testers. I think there is quite a lot of commonality in the resulting proposal, and the thinking that goes into such a plan is at the heart of the message I want to provide. The remainder of the paper sets out a proposed structure for a tester development plan.

1.3 Coaching and Mentoring is Critical

Now, one of the first of several surprises (to me, anyway) was that you cannot separate development from coaching. Coach and mentor are terms often used in the context of people and organisational development, but they are often used just as labels for one’s team leader or manager. Coaching and mentoring are critically important activities that reflect two support roles for every individual that wants to develop their skills and capability.
In my dictionary, a coach is ‘an instructor or trainer (in sport); a private tutor’. The implication is that the coach imparts knowledge, guidance and advice to an individual. In this respect, the coach is pro-active – leading people towards improved performance and capability.

In the same dictionary, a mentor is defined as ‘an experienced and trusted advisor’. The implication seems to be that, whereas the coach takes the initiative, a mentor might wait for the individual under instruction to ask for advice. Whereas a coach would direct the individual, a mentor waits until asked for support. Needless to say, trust and effective communication between coach/mentor and the individual are critical to success.

1.4 The Mentality of IT People is a Barrier to Change
Coach and mentor are terms that are over used in the IT industry, not just testing. The IT industry sees itself as distinct from the rest of business – as if the interpersonal skills so important to most disciplines no longer apply. We are all familiar with the stereotypical deep-techy programmer who has difficulty with the other members of his team, let alone non-technical folk or end-users. Usually male, these ‘types’ excel when it comes to solving difficult problems with technology, and find it easier to communicate with operating systems than people.

I’m exaggerating perhaps, but the perception of most business people is that most folk in IT simply do not appreciate the needs, thinking or motivation of business users. The gap between Business and IT starts at the top and runs through to lowest-level practitioners. The concepts of coaching and mentoring, as softer disciplines, are still met with suspicion by many people in IT even though business folk appreciated their importance decades ago. Can IT-folk even spell interpersonal?

Coupled with this ‘mistrust’ of soft skills, we tend to assume that we can attend a technical training course, learn a new skill and become instant experts. This is preposterous; but the push and pull of certification schemes, for example (emerging in all aspects of IT nowadays), tempt you into believing that certification is the same as capability. Don’t get me wrong, certification schemes have some value, but they are no substitute for evidence of achievement, experience and interpersonal skills.

One of the problems we have in IT (and not just testing) is that we seem to think that everything has to be invented from scratch. We are continually reinventing wheels in our industry, and this mentality dominates many people’s thinking. Unlike most other industries, we keep rebuilding things we probably already have. We are ever so keen to adopt the latest process improvement or capability model, regardless of its relevance or usefulness. No matter – it’s techy, looks simple and it’s new.

But when it comes to adopting approaches that support leadership, motivation, communications, learning methodologies and interpersonal skills in general we shy away. They are soft, alien, non-techy, and worst of all, invented by non-IT Folk.

So, IT tends to be very inward looking and introspective and this is partly because the industry attracts people who like the technology more than the business of exploiting and working with technology. Quite a difference, don’t you think?
Although system and acceptance testers are less obsessed with technology than most, we have to recognise the influence – some would say hold – that technology has on many IT folk.

1.5 The Importance of Leadership

The development process (as an athlete or tester) is mainly about human interaction. Yes, of course, there is a lot of hard work required to be done. Slogging over technical exercises, cranking out test plans and grinding out test results is indispensable. But the real value of preparatory work comes when feedback is obtained and the work is discussed with peers, a customer, the coach or mentor.

The reason a coach exists is to set the vision, to explain how to do things, to hint at faults in technique, to suggest improvements, to cajole, to motivate – all to achieve a change in someone else’s behaviour. It’s not about, “this is how you test boundary values, I have explained it, you have tried it once and now you know it”. Coaching is not like that and learning is not like that. Whether you are learning a new technique in a sport or an approach, technique, mentality or attitude in a discipline like software testing, there is little difference in the thought process of the individual. The coach is trying to change someone else’s behaviour and that is no trivial thing.

Not many people wake up in the morning and say ‘at the end of this day I am going to change the way I do XXXXX’. Usually the drive for change is coming from someone else. The change will not be initiated in the individual. Everyone with a personality, ego and confidence in their own ability is innately resistant to change.
Change threatens one’s ego and confidence in one’s ability. So with few exceptions, people resist (consciously or unconsciously) external demands for changes in their behaviour.

Motivating and encouraging people to change are hugely difficult things to do, from the point of view of the individual as well as the coach. Although most team leaders and managers may be good technically, many have poor leadership skills. Needless to say, the development of leadership skills in managers helps practitioners to sustain training and development efforts and improve their capability.

Investing in The Dream Team: How to Keep The Dream Team Together

This is an article from the September edition of the EuroSTAR Newsletter - STARTester, written by Filip Gydé, CTG, Belgium. You can view the complete newsletter by clicking here and don't forget to subscribe to receive future issues.


CTG is quite proud of the low staff turnover in the company. Thanks to the Competency Development system, among other things, the staff turnover at CTG was only 15.5% in 2005 and only 13.82% in 2004, percentages far below the market average.
CTG is an ICT service company. This means that we implement IT projects for customers, usually at their locations.
This also means that staff in the field often have more intensive contacts with the customer than with their own company. This is a real challenge for a company that is proud of its extraordinary high loyalty levels, both from customers and from staff.

How do you make sure that once you have the right people on board, you can also keep them on board?
How do you turn what is usually a big problem in the service world into a real differentiator in the market?


The answer consists of different ingredients and a recipe that combines these ingredients in the right proportions: a very specific recruitment, a clear strategy, focus on continuous development, a corporate culture based on values, etc.

The real secret consists in making all these matters, which are traditionally labelled as "soft", very tangible and "hard": measuring them very concretely and following up the results like a financial ratio. "Put your money where your mouth is" is still a very good test of whether someone actually means what he says.

In this article I will zoom in on one of the ingredients in the recipe of our retention policy: Competency Development (CD). We have developed the Competency Development system and anchored it in an actual job within the organisation. We can also demonstrate that this is one of the reasons for a low staff turnover: 13.82% in 2004 and barely 15.5% in 2005, percentages far below the market average.

The Competency Developer is continuously looking for the best match: the right co-worker in the right place, with maximum attention to the career path indicated by the consultant and in line with the customer's expectations and the strategy of CTG itself. The reason is simple: we are convinced that the major reason for someone to change companies is mainly related to the job content, which may no longer be in the co-worker's field of interest, or to the feeling that there are few opportunities to further his/her career. Specifically in these domains, the Competency Development concept provides great added value.

Role of the Competency Developer at CTG

The Competency Developer, called "CD" in short, assists co-workers in developing their career path. He is responsible for our consultants' competency development and for knowledge management in line with the strategy and business plan of our company. When we are looking for a certain profile for one of our projects with a customer, the CD verifies whether the right match can be found. In some cases we immediately come across an adequate co-worker. Sometimes a certain co-worker almost complies with the requested profile description, but he may qualify even better for the job after an extra training or far-reaching coaching.

One CD is responsible for about 50 consultants. Right from the start the CD builds a relationship of trust with the new consultant. For junior profiles, whose career direction is not yet fully defined, it mainly comes down to "steering". Senior consultants usually have already developed a vision of their own, so the CD's task is rather to hold up a mirror for them and give them regular feedback. The CD encourages everyone to develop both technical and interpersonal skills. The idea is to get all our co-workers to really think along with our company and our customers.

The role of the CD starts with the recruitment and settling-in of new co-workers
The HR department takes care of the first screening of an applicant. During the first interview the recruiter does not speak so much about the applicant's technical skills, but he tries to find out whether the applicant's personality would fit into the company. Which values are important for the applicant and do they correspond to our values? From experience we have learned that this fit is the most important aspect: the values of our company describe our identity and the materialisation of these values shows where we are different from other companies. If you do not match our identity, it won't work in the long run.

Following positive advice after this first screening, the Competency Developer enters the picture. In a second interview he will go deeper into the person's job-related skills, double-check the personality and probe the expectations in the short, medium and even long term. The CD has to be able to commit our company in terms of these ambitions. It makes no sense to start off with someone if the ambitions are not in line with our organisation's strategy.

Because success usually lies in a good start, the CD plays a key role in introducing the new co-worker in our company. The expectations of both parties – the co-worker and CTG – are continuously aligned.

When a first project has been found for the new co-worker, the CD tells him what the current options are and how this fits into the career path he wants to follow. If he does not know which direction he wants to take, the CD will provide "stepping stones" or get him in touch with others who can help him make his choices.

An evaluation takes place after one month: the customer or our own project manager gives feedback to the CD about the technical and interpersonal skills, whether or not in the presence of the consultant himself. If required, action items will be proposed.

Besides lots of informal contacts there are also formal moments: feedback interviews and career interviews

A formal feedback interview is organised twice a year, linked to an evaluation with the customer. Once a year the CD holds a career interview: the set objectives and the relevant competencies are assessed, and action items for the next period are defined. Prior to this interview an Appraisal Review document is sent to the consultants, in which they have to give themselves a score for all listed competencies relevant to their situation, with the aim of detecting and discussing possible focal points with the CD.
Junior profiles sometimes feel uneasy about this, but more experienced consultants see it as a real support for their personal competency development. In addition, we also work with 360 degree feedback, an evaluation by the customer and an observation to score competencies and corresponding behavioural indicators.

Continuous development means that the CD plays an active role in the training planning.

The competency system is developed on the basis of 10 "levels". A junior consultant starts in level 1 and can grow towards his field of interest via an evolution in technical and interpersonal skills.

For each level a "must-have" list is available of courses to be attended and skills to be acquired before you can be classified in a certain level. For example, influence skills are very important for the profile of a Project Manager. For the choice of learning activities the consultant's preferred learning style is taken into account, through their own assessment or through experience with results of other learning activities.

Everyone can submit an online application for his or her training schedule, consult the growth in level and the training catalogue, as well as register for learning activities, always in consultation with the CD.

No false promises, but a very concrete investment ... which pays off!

As you can see, Competency Development is a clearly structured system. With a proportion of 1 CD to 50 consultants, this means an investment of 10 FTEs for 500 co-workers. Such an investment is not made purely out of conviction; it has to work out financially.

And it does, according to the figures. If a co-worker leaves the company prematurely, you at least have to find a replacement and train him/her; you may also have problems with the current project, and you lose the know-how that was gathered ... to name just the three largest cost items. All together, when one co-worker leaves, it is likely to cost the equivalent of six man-months. So, if a Competency Developer makes sure that two fewer people than the market average leave per year, the investment pays off. I can assure you that the ROI is much higher than that.
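To make that break-even reasoning concrete, here is a small back-of-the-envelope sketch in Python; the figures (one CD per 50 consultants, six man-months per departure, two prevented departures) come from the article, while treating one full-time CD as roughly 12 man-months per year is my own assumption.

    # Back-of-the-envelope retention ROI; figures from the article except where noted.
    cd_cost_man_months = 12      # assumption: one full-time CD ~ 12 man-months/year
    cost_per_departure = 6       # article: one departure costs ~ 6 man-months
    departures_prevented = 2     # article: two fewer leavers than the market average

    savings = departures_prevented * cost_per_departure
    print("CD investment: %d man-months/year" % cd_cost_man_months)
    print("Savings from prevented departures: %d man-months/year" % savings)
    print("Pays off:", savings >= cd_cost_man_months)  # True: 12 >= 12, break-even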

The Competency Developer is an important ingredient in the recipe for loyalty. Important, but not the only one. During "Investing in the dream team" in Manchester I will disclose a few more ingredients of our recipe.

Wednesday, September 06, 2006

The Captain of Your Special Teams……The Performance Test Lead!

This is an article from the September edition of the EuroSTAR Newsletter - STARTester, written by Scott Barber, Perfectplus Inc, USA. You can view the complete newsletter by clicking here and don't forget to subscribe to receive future issues.

You are familiar with the “Software Development as a Sports Team” analogy, right?
The project manager equates to the coach, lead developer to offensive team captain, test lead to the defensive team captain – where the entire team views the development process as collaborative and each member of the team is driven to produce his or her best work in order to achieve the team's common goal of delivering a “winning” application.

Typically, this is as far as the model goes, but it doesn't account for some important members of the team – the specialists.
There are a variety of specialists that may be a part of your team: security experts, network engineers, configuration managers and performance testers, to name a few.

If we look to American Football, we find a structure to enhance our model to accommodate these team members.

In American Football, there is a third group known as the special teams. The special teams consist of the kicking teams, kick return teams and other groups dedicated to special plays. Historically, coaches would populate these teams with non-starting players to keep the starters from getting excessively tired or injured during the game and so that the starters could remain focused on their primary positions during practice.

Recently, however, coaches have started fielding their best players, sometimes known as “game breakers,” on the special teams to improve their chances of winning games. These players have become more than just specialists; they have become expert generalists who can contribute to the game in a variety of roles and positions.

The captain of the special teams is often a senior player with both exceptional leadership skills and the ability to play a variety of positions on the field. These are the players coaches put in the game in critical situations when they feel the team needs a big play or a shift in momentum. They are the players that make the crowd cheer and inspire the rest of the team to redouble their efforts simply by taking the field. Much like the recent shift in football where coaches look to top players to populate the special teams, project managers have started looking for experienced, senior individuals who are expert specialists and established generalists for their special roles.

On a software development team, this unique individual equates to the performance test lead... minus the fanfare. On the most effective development teams I've ever been a part of, the performance test lead is someone with leadership abilities, strong generalist skills, and a unique and critical specialty.

So what makes the performance tester so unique? On top of their specialization as a performance tester, these individuals tend to be competent and have experience in a wide variety of roles enabling them to effectively contribute to virtually any aspect of the team. Let's take a brief look at all the different roles a performance tester assumes at various points during a project.

Business Analyst – Before performance testers can begin conducting effective tests, they must understand how users are going to interact with the system under test, what tasks they are going to be trying to accomplish, what their state of mind is likely to be while interacting with the system, and what their performance expectations are. Additionally, to establish relevant performance goals or requirements, the performance tester must also determine what the user's tolerances are and how competing applications are performing. Most performance testing literature implies that this information is simply available from the existing business analysts, but experience says that it is rarely available, and when it is available it is poorly formed or simply wrong, because very few business analysts have any training in this area.

Systems Analyst – Performance testing is not a black box activity. An effective performance testing strategy has to take into account not only the system as a whole but also the logical, physical, network and software architectures of the system, both in test and in production. While this information is generally available, it rarely exists in a consolidated form, and as it turns out, the performance tester often ends up being the single person on the team who understands the system from the greatest number of perspectives and has the best grasp of how all of these perspectives interact with one another.

Usability Analyst – When the application finally goes into production, there is really only one aspect of performance that matters: customer satisfaction. And the only way to determine customer satisfaction is to get the customer to use the system. The challenge in determining a customer's satisfaction with performance is that customers often know neither how to quantify performance nor how to distinguish between poor performance and an inefficient interface. Worse, very few organizations have dedicated usability teams, leaving the performance testers on their own to design and conduct these studies.

Test Strategist, Test Designer, Test Developer, Test Manager, Functional Tester, etc. – Typically, the team is just that: a team of people with individual roles and expertise who work together to effectively test the system. Most often, the performance test team is a team of one, so the performance tester has no choice but to be competent at all of the various test team roles. Since there is so little training available that is specific to performance testing, most practising performance testers were initially trained in functional, systems or even unit testing and have since adapted those skills and techniques to performance testing. Frequently, performance testers were either systems or functional testers prior to becoming performance testers, or have served in those roles after becoming a performance tester.

Programmer – Developing performance tests is far from point-and-click or record-and-playback. In order to accurately simulate actual users, it is almost always necessary for performance testers to write elements of at least somewhat complex code. It is frequently necessary for performance testers to be able to read, understand and interpret the developers' code, and, not infrequently, they find themselves developing their own “test harness” simply to enable the possibility of load generation.
Performance testers often write their own utilities to help them parse through the huge volumes of data they collect, to generate test data, to reset their test environments, or to collect performance-related metrics on remote machines. A sketch of one such throwaway utility follows below. Performance testers may not always be senior programmers, but they certainly aren't afraid of code.
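As a flavour of what such a utility might look like, here is a minimal Python sketch that parses a hypothetical log with one response time per line and reports the average and nearest-rank percentiles; the file name and format are my assumptions, not from the article.

    import math

    def percentile(sorted_times, pct):
        # Nearest-rank percentile over an already-sorted list of samples.
        rank = math.ceil(pct / 100.0 * len(sorted_times))
        return sorted_times[max(rank - 1, 0)]

    # Hypothetical log format: one response time in seconds per line.
    with open("response_times.log") as f:
        times = sorted(float(line) for line in f if line.strip())

    print("samples:  %d" % len(times))
    print("average:  %.3f s" % (sum(times) / len(times)))
    print("90th pct: %.3f s" % percentile(times, 90))
    print("95th pct: %.3f s" % percentile(times, 95))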

There are other roles performance testers play, and reasons why the lead performance tester frequently turns out to be that game breaker who equates to the captain of your “software development special teams”, but I've come to the end of my allotted space. I guess you'll just have to attend my keynote at EuroSTAR to hear the rest of the story. I hope to see you there!

Thursday, August 17, 2006

Is there anything left for us to write in Testing?

Hi Reader,

I have often stumbled upon Cem Kaner's Articles page, and only very recently did I realize that most of the articles have already been written and published.

For every problem you encounter in testing, people like James Bach, Cem Kaner, Jack Falk, Hung Nguyen, Bret Pettichord... (I could have missed your name too; apologies for that) have written and published articles, but "what are we writing nowadays?". For everything else, there is Jerry Weinberg.

Being naive, I did get disappointed, but something struck me recently that now gives me the confidence to write, or continue writing, articles about testing.


_ Is there anything left for us to write in Testing? _

Being in India luckily gave me an opportunity to become optimistic about the scope left for writing articles on testing.

You might be interested to know what pulled the trigger in me to become optimistic about writing, and here it is for you.

I graduated as an engineer from one of the engineering schools in India, and I recollect that for every subject we had to study, we usually referred to two books: one by a foreign author and the other by an Indian. It is not that we wanted to show patriotism; it is just that for things we could not grasp from the foreign author's book, there was a simpler version written by an Indian author who understood the audience of students and put things in a layman's fashion. Fortunately or unfortunately, they too are a part of the testing community, and there should be articles tailored to their understanding levels.

Recollecting this concept gave rise to my optimism about continuing to write. Before you conclude anything, it is my responsibility to let you know that this situation, prevailing in India and perhaps in other countries, is not because foreign authors write in a way that is too complicated for students here to understand; it is just that many factors influence a student to refer to a particular book. One such factor is psychology: if a person has come across a book authored by some foreigner and, due to his own naivety, was unable to grasp it, then he or she brands all foreign authors' books as something written in Greek or Latin, despite the books being very simple to understand.

I also need to mention that command of the English language is very important for appreciating the simplicity of a written book or document.

Did I mislead you into thinking that all Indian authors are ones who abstract work from those who originally published it?

Not all, but some, yes. There are many genuine writers, and I must appreciate their work in this context. Those who write abstract versions of an original book mostly write for commerce or fame, at least when they do not give due credit to the original authors. (As a tester, if I wrote such an abstracted version of an original, I would mention the limitations of my work in terms of its effectiveness in conveying the topic in detail, and its usefulness.)

It is time for me to make you think about "What can we write?", apart from the one idea I have described in detail above -
  1. Extensions of research work of published articles.
  2. New experiments and their results, matching or defying published articles.
  3. Testing itself is a game of perspectives; hence, each person's perspective.
  4. Applying the existing research to any non software field/domain.
  5. The problems that did not exist during the days the experts wrote articles and proposed solutions.
  6. Mistakes you have committed and the learning you have had from it.
  7. A new skill that a tester needs, which was not discussed earlier by any of the experts.
  8. Case studies of a project that you have been in, which you are authorized to write and publish.
  9. The kind of change you did to testing to suit new/different business needs.
  10. Things that have baffled you as a tester.
  11. Re-writing an article in native language, giving due credits to the original author.
  12. Lots more... The only limitation is your imagination. (Sorry, I do not know who said this.)

_ End of _ Is there anything left for us to write in Testing? _

"What can be written, itself, has turned out to be a writing"

Thanks and Regards,

Pradeep Soundararajan

Tester Tested !

Note: In this post, I am representing those upcoming testers who are experimenting, trying to come out of naivety. Seniors, excuse me if it did not make sense to you.

Wednesday, August 02, 2006

Testing for Accessibility

This is an article from the August edition of the EuroSTAR Newsletter - STARTester, from Ruth Loebl. You can view the complete newsletter by clicking here and don't forget to subscribe to receive future issues.

Testing software for accessibility involves little more than imagination and common sense, but you have to pick the right standards, and then get to know some users.

Disability
• Lots of disabled people use computers, even people whom you might at first assume could not possibly use one.
• Many disabled people are not "disabled". My mother simply can't see as well as she used to, and has a bit of arthritis in her hands.
• Research commissioned by Microsoft indicated that in the United States, 60% (101.4 million) of adults from 18 to 64 years old "are likely or very likely to benefit from the use of accessible technology due to difficulties and impairments that may impact computer use".

Even excluding people who are 65 or over, that's more than half the population – this isn't a niche market. So there ought to be a demand for accessible interfaces, although it's sometimes hard to detect. I'm encouraged by the improving legal situation – check out the Code of Practice on the Disability Equality Duty in the Disability Discrimination Act 2005, for example in para 3.46.

So how do all these disabled people use computers? For most, the answer is: in the same way that non-disabled people use computers, with a standard keyboard, mouse and screen.
As an example, one of the most important features of Windows is the ability to change the colour scheme and system fonts. While some of us just like a bit of variety in the colours we look at on the screen all day, for quite a few people choosing the right font and colour scheme is what enables them to read the screen at all. A few pre-set font and colour schemes are offered through the Accessibility Wizard (Programs, Accessories, Accessibility). More can be achieved through the Control Panel (Display, Appearance tab, Advanced button).

Accessibility testing should highlight when systems interfere with or disable these features that are provided through the operating system. It would be most annoying if your choice of colour scheme were ignored by a system that you have to use. All too often, oh dear, it is.
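As a simple illustration of what such a check might look like, the Python sketch below reads the user's chosen system colours through the Win32 GetSysColor API (via ctypes, so it assumes a Windows test machine); an accessibility test could then verify that the application under test actually renders with these colours rather than hard-coded ones. The comparison step itself is left hypothetical.

    import ctypes

    # Win32 system colour indices (documented for GetSysColor).
    COLOR_WINDOW = 5        # window background
    COLOR_WINDOWTEXT = 8    # text in windows

    def system_colour(index):
        # GetSysColor returns a COLORREF packed as 0x00BBGGRR.
        colorref = ctypes.windll.user32.GetSysColor(index)
        return (colorref & 0xFF, (colorref >> 8) & 0xFF, (colorref >> 16) & 0xFF)

    expected_bg = system_colour(COLOR_WINDOW)
    expected_fg = system_colour(COLOR_WINDOWTEXT)
    print("User's scheme - background:", expected_bg, "text:", expected_fg)
    # A test would now sample the application's actual colours (e.g. from a
    # screenshot) and fail if they ignore the scheme the user has chosen.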

Access technology
Access technology is used where the effect of an impairment is such that an intermediary tool is needed to enable someone to use a computer.

• Partially sighted people who can't get by with an alternative font or colour scheme often use screen magnification software, which enlarges some or all of the screen contents, and provides other powerful features such as image smoothing and colour manipulation.
• People who have a reading impairment, poor literacy or dyslexia can use 'text-to-speech' software, where text highlighted with the mouse is spoken out loud. Sometimes, voice input is useful too.
• People who have a problem with their hands or arms will need adjustments or alternatives to the standard keyboard and mouse. This might simply be different hardware (a one-handed keyboard, a trackball, joystick or mouse pad) or a full speech recognition system.
• People who are blind use a standard keyboard but cannot operate a mouse. Speech output software conveys the contents of the screen, sometimes complemented by electronic braille output on hardware called a braille display. A full keyboard interface without reliance on the mouse is essential for effective access.
These access technologies are very powerful, but are often helpless when faced with really poorly designed software.

Software testing
As testers, you may want to know more about disability and access technology, and to meet some disabled people – I would certainly encourage this. For effective accessibility testing, it is important to involve as many different types of real users as possible, with different abilities, background, experience and so on. We often criticise software and web designers who don’t include people with disabilities in their testing processes.
Real user testing does demand a functional interface, but by that late stage it may be impossible to change some aspects of the underlying design without compromising the viability of the whole software development project.

So as well as getting to know your users, we advocate the use of 'inclusive design' standards and guidelines at the earliest stages of interface design. The testable statements focus on the software itself, in isolation, independent of any particular user. They are intended to minimise barriers to both accessibility and usability, and to address many of the requirements of disabled users.

Standards and guidelines
Which standards and guidelines are most applicable for testing software? Three suggestions are below, and for our latest thinking on software and web accessibility, visit the RNIB technology site and follow the link to Software Accessibility. This information is due to be refreshed in early August.

ISO 9241-171 (formerly ISO/TS 16071)
The full title is "Ergonomics of human-system interaction – Guidance on software accessibility". It is in final draft, but as an internationally recognised standard written by professional standards-makers, we hope it will become a reference point alongside the guidelines and checklists that exist.
Within RNIB, we have adopted ISO 9241-171 as the basis for our software acceptance procedures, but it is actually too wide-ranging to implement in its entirety! For each software development project so far, we have had to extract a more workable subset of the full range of standards, tailored to the particular development and delivery platforms for that project.

IBM software accessibility checklist
The IBM Checklists are available for various technologies, including software in general as well as Web, Java and hardware. Each key point is explained clearly, and some information about implementation and testing is also given. They are much more usable and user-friendly than ISO, and free, but less comprehensive.

Section 508
Section 508 is US legislation to ensure that “electronic and information technologies” which Federal agencies develop, procure, maintain, or use, conform to standards designed to provide comparable access for people with and without disabilities. If you want to sell to Federal government in the US, these are the standards to apply. They don't include enough to make your system fully accessible, though.

Some messages to end with
• Accessibility is ultimately subjective, like usability. Effective testing depends on having a wide variety of users to test products in the later stages of the design process.
• Inclusive design is more objective, and applies to the software itself, independent of users. It can be tested from the earliest stages of a software design project.
• Inclusive design does not stifle creativity: good design for people with disabilities results in good design for all.

In my presentation at EuroSTAR 2006, you'll have the chance to see access technology in action, and some examples of accessibility standards and testing in the real world. See you there!


Biography
"Ruth Loebl has been with the Royal National Institute of the Blind (RNIB) for 13 years, working in the area of sight loss and technology."

Emotional Testing - Testing From the Heart of the Business

This is an article from the August edition of the EuroSTAR Newsletter - STARTester, from Ian Londesbrough. You can view the complete newsletter by clicking here and don't forget to subscribe to receive future issues.


In the last few years, testing and quality assurance have started to shake off the preconceptions of geekiness and started to carry more gravitas within organisations. Seemingly always the poor relation to development in the IS profession, testing is finally making it onto the agenda for board rooms, businesses and IS communities. But to claim that testing is "sexy" or "cutting edge" would be to overstate its appeal - if this is the case, how can the technology industry demonstrate its importance to the business decision makers?

Testing, like many other practices in the IT industry, has long been the preserve of logical, structured, “left brain” thinkers - the tech heads. In truth, many of the people who drive business are the creative forces, people who are more “right brain” in their thought processes. Therefore if testing is going to take up its rightful place in driving the development of projects that actually deliver the expected results, then something has to change – it has to be made more appealing to the creative forces within business.



Consider for a moment the difference between left brain and right brain thinking and the types of thought processes they engage in:

Left Brain
Logical
Sequential
Rational
Analytical
Objective
Looks at parts


Right Brain
Random
Intuitive
Holistic
Synthesizing
Subjective
Looks at wholes


There is no doubt that testing professionals need all the “left brain” attributes, but in addition to those, they need to move beyond the robotic, structured motions of traditional testing - both the left and the right side of the brain need to be utilised. This would enable testing professionals to start thinking about business from the more strategic and creative perspectives. It would also help them to gain a firmer foothold in the boardroom, allowing them to communicate the importance and criticality of testing to the right audience.

As we know, testing is required for a very diverse range of products and systems. For example, to test a console game which is designed to stimulate emotion requires both emotional and subjective decisions, as well as random and intuitive assessment of risk. These elements require a mental attitude different to that of the traditional testing professional.

The testing industry not only needs to adopt a “whole brain” approach when testing, it needs to utilise the right side in order to engage with the board and demonstrate the added value and business benefits of testing. Jargon such as “stakeholder buy-in”, “board level sponsorship”, “grass roots support”, is forever being bandied about, but how can the testing and technology industries really engage directors, business users, and staff and make a connection which delivers the value and benefit they expect?

To engage with any of their audiences, senior decision makers in particular, testing professionals need to have a firm understanding of the business requirements. The job of a professional tester is to fulfil the requirements of all stakeholders: from the client and their board through to partners and the tester’s own employer. By fulfilling the requirements, testing is not only seen in a favourable light, but it becomes a “must have” for any project.

In order to engage with their audiences effectively, professional testers need to be connected, from an emotional standpoint, to the business and understand on a cerebral level what the business is trying to achieve. When responding to situations the right side of the brain is the reactive, emotional side – think adrenaline rushes, increasing heart rates – the left side of the brain has a more considered, logical approach. Therefore testers can harness this cognitive process to ensure they immerse themselves in the business, the requirements and the risks and dangers it faces and then use their logical, analytical abilities to come up with testing and quality assurance strategies to meet the business’s requirements.

Traditionally, testers are not always vocal about their work and the positive benefits they are bringing to the business. Using this more emotional approach, throughout delivery of the project, testers need to constantly re-engage and maintain a positive relationship with their customers by communicating and demonstrating how testing has removed the risk from the project and produced success and profit.

The connection between an emotive approach and business and technology issues in a testing environment becomes clear when assessing the success that testing professionals achieve in delivering positive outcomes for the business. Testers who are trapped in the old school thought process of testing for testing’s sake and approaching it from a box-ticking, operational perspective, fail to engage effectively with the business and thus achieve a lesser degree of success. For example they may not understand the importance of capturing information that demonstrates the value testing is bringing to the business.

Using an emotive approach to testing will also enable professional testers to bring the discipline to life – using real-life examples of projects that have failed due to shortfalls in testing and quality assurance procedures will be much more effective than the traditional "death by PowerPoint" approach.
If testers can engage the business and technology industry on an emotional level about the added value and business benefits of testing, then testing will be able to move forward. Once the testing industry embraces the "whole brain" approach, it can assume a leadership role, guiding clients through projects, safely eliminating the risk, and enjoying much more success than could possibly be had with a traditional, logical and structured approach.

Professional testers who are convinced of how critical their role is to the success or failure of IT projects need to engage the business at the ideas stage, drill down effectively to determine the requirements of the project, and really get to the nub of what organisations and IS communities want. In doing this, disaster can be averted and businesses will start to realise the true benefits of successful projects.


Biography
Graduating from Warwick University in 1988, Ian started his career in IT as a Junior Programmer with Barclays Bank. Ian then moved to ICI, which became Zeneca and then Astra Zeneca. Ian has also worked for PA Consulting and prior to joining IS Integration he was the Testing & Release Manager for RWE Shared Services IS (serving npower and Thames Water).

Monday, July 10, 2006

What kind of fish is a tester?

Imagine if everybody were like you…

Would life be the better or the worse for that?

Would testing be better or worse?

This is an article from the July edition of the EuroSTAR Newsletter - STARTester, from Anne Mette Jonassen Hass. You can view the complete newsletter by clicking here and don't forget to subscribe to receive future issues.

I must admit that I think if everybody were like me, (testing) life would perhaps be easier, but also dull, predictable and lacking important aspects. Finally, after more than 50 years of life, I have realized that other people – and hence testers – are different from me! Other testers see the world differently and have different values. What a relief!

I am not the first to realize this. The fact that all testers are not alike has been known since the ancient Greek philosopher Galenus defined four temperaments (some people like systems and order):

• Phlegmatic
• Sanguine
• Choleric
• Melancholic

Galenus also said: “We all have our share of each – in different mixtures.” Since then others have studied personalities, including Freud, Jung, and Myers-Briggs. Based on Jung’s work, Myers-Briggs defines sixteen personality types composed from four dimensions. The dimensions are:

• How do you get energy:
Extraversion (E) / Introversion (I)

• How do you get information and knowledge:
Sensing (S) / Intuition (N)

• How do you decide:
Thinking (T) / Feeling (F)

• How do you act:
Judging (J) / Perceptive (P)

The Greek view is quite simple and the Myers-Briggs view rather complex, but both are concerned with the individual person as just that: an individual. In addition to this, Dr. M. Belbin has defined nine team roles, a team role being “A tendency to behave, contribute and interrelate with others in a particular way."

If you go around thinking that all people are basically like you, you are terribly mistaken. That mistake can lead to misunderstandings and tensions in test teams, and may even cause test teams to break down. When working in test teams, awareness and understanding of people’s differences are essential.

I once worked on a team with many frictions and a fair amount of mistrust. One of the team members had heard of the Belbin roles and we all had a test. This was a true revelation to us all. The two team members with the most friction between them were very different types. They had both been completely at a loss as to why the other acted as he did. Having understood that neither had meant any harm, but that it was simply a question of being very different personalities, they worked much better together in the team.

The nine Belbin roles are:

Action-oriented roles

• Shaper
• Implementer
• Completer/Finisher

People-oriented roles

• Co-ordinator
• Team-worker
• Resource Investigator

Cerebral roles

• Plant
• Monitor/Evaluator
• Specialist

Each of the roles makes some valuable contributions to the progress of the team in which it acts. Each also has some weaknesses that may have an adverse effect on the team.

Two examples:

A Shaper is challenging, dynamic, and thrives on pressure.
He or she has the drive and courage to overcome obstacles.
The weaknesses are that a Shaper is prone to provocation, and may offend people's feelings.

A Team-worker is co-operative, mild, perceptive and diplomatic.
He or she listens, builds, and averts friction.
The weakness is that a Team-worker can be indecisive in crunch situations.

Everybody is a mixture of several team roles, usually with one or two being dominant. An analysis of one’s Belbin team role will give a team role profile showing the weight of each role in one’s personality.

Every person on a team should know his or her own type and those of the others. This is done by filling in fairly simple questionnaires – not by going into deep psychological searches of people’s minds. The aim is to provide a basic understanding of one’s own and the other team members’ ways of interacting and primary values. It is not about finding out why people are the way they are, nor about trying to change anyone.

It is the test manager’s responsibility to get the test team to work for a specific testing task. And it is higher management’s responsibility to choose a test manager with the right traits, skills, and capabilities for the job.

There are two aspects to a team: the people and the roles assigned to the people. Each individual person in a team has his or her personal team role profile and a number of skills and capabilities. Each role has certain requirements toward the person or the people who are going to fill it.

On top of that the people in the team need to be able to work together and not have too many personality conflicts. It can be quite a puzzle to form a synthesis of all this. But the idea is to choose people to match the requirements of the roles, and for them to fit together as a team.

The ideal situation is of course when the test manager or test leader can analyze the roles he or she has to fill at the beginning of a test project, and then hire exactly the right people. Advertisements can then be tailored to the needs, and applicants can be tested both for their skills and capabilities and for personal traits. The team can then be formed from the most suitable people – and ahead we go.

Unfortunately, life is rarely that easy. In most cases the test manager either has an already defined group of people from which to form a team, or has a limited and specific group of people to choose from. It could also be that the manager has to find one or more new people to fill vacancies on an existing team. In all cases, knowledge of people’s team role profiles is a great advantage.

Forming teams and getting them to work is not an easy task. There is no absolute solution. But a well-formed team is a strong team, and a team tailored for the task is the strongest team you can get.

There will be more examples of types of fish – sorry, testers – at EuroSTAR, along with examples of which Belbin roles fit best with different test roles in test teams with different targets, such as component testing and acceptance testing.
While waiting for this, you can try to find out how many fish are hidden in this picture.

Biography
Mrs. Anne Mette Jonassen Hass, M.Sc.C.E., has worked in IT since 1980, and since 1995 at DELTA, IT-Processes, mainly in software test and software process improvement. Mrs. Hass holds the ISEB Foundation and Practitioner Certificates in Software Testing and is an accredited and experienced teacher for both. Mrs. Hass is a frequent speaker and has solid experience in teaching at many levels. Mrs. Hass has written two books, developed the team game ”Process Contest”, and created the poster “Software Testing at a Glance – or two”.

Performance Measurement Framework for Outsourced Testing Projects

This is an article from the July edition of the EuroSTAR Newsletter - STARTester, from Kalyana Rao Konda, AppLabs Technologies, India. You can view the complete newsletter by clicking here and don't forget to subscribe to receive future issues.

Industry estimates peg the current global market size of outsourced testing services at around $13 billion. This is a strong indication that outsourcing of testing processes (partially or fully) is here to stay and flourish. Of the many varieties of outsourcing, off-shoring, in which testing activities are typically outsourced to low-wage countries such as India, Russia and China, is gaining momentum.

This new paradigm of getting testing done at remote locations poses significant challenges to both client and vendor. Some of the key aspects demanding attention in managing testing engagements are differences in test maturity levels, separating test teams from development teams, sharing test environments, managing test tool licenses, changes in roles and responsibilities on the client side, and defining SLAs to protect business interests. Managing and monitoring test outsourcing is a crucial step in supporting the engagement and making it successful.

Key factors to be considered in managing the outsourcing relationship are the business drivers, the different outsourcing test scenarios, and the client’s expectations. The lack of a performance measurement framework can often lead to situations such as:

> Excessive communication
> Micro management by client
> Supplier spends too much time in reporting
> Every stakeholder feeling out of control

There is a strong need for a performance measurement framework that can prevent the above potential mishaps. A Performance Measurement Framework (PMF) is an essential part of any test-outsourcing project. It defines the boundaries of the project in terms of the services that the service provider will offer to their clients, the volume of work that will be accepted and delivered, and acceptance criteria for responsiveness and the quality of deliverables. A well-defined PMF correctly sets expectations for both sides of the relationship and provides targets for accurately measuring performance against those objectives. At the heart of an effective PMF are its performance metrics. During the course of the test outsourcing engagement, these metrics will be used to measure the service provider's performance and determine whether the service provider is meeting its commitments.

The ‘5P’ performance measurement framework is introduced to establish accountability on both sides (client and vendor), to jointly manage the engagement, and to achieve a win-win situation. The 5Ps are product, project, process, people and price. The framework is easy to apply, practical and proven, and was developed from knowledge and experience. It provides a collection of metrics to choose from across these five dimensions of the testing engagement, and can cater to a wide variety of testing engagements: test automation, performance testing, certification testing, functional system testing, white box testing, security testing and so on.

Sample metrics against each category are listed below to give you a sense of the directions to think in; a small calculation sketch follows the list.

• Project: test effort vs development effort, productivity.
• Process: cycle time improvement, defect leakage index.
• Product: time to find a defect, test coverage.
• People: attrition, average experience.
• Price: dollar amount saved, price variance.
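
As a concrete illustration, here is a minimal sketch of how a few of the metrics above could be computed. The function names, formulas and figures are all invented for demonstration; a real engagement would agree the exact definitions in the SLA.

def defect_leakage_index(defects_found_after_release, total_defects):
    # Process metric: the share of defects that escaped the test phase.
    return defects_found_after_release / total_defects

def price_variance(budgeted_cost, actual_cost):
    # Price metric: positive values mean the engagement came in under budget.
    return (budgeted_cost - actual_cost) / budgeted_cost

def test_vs_dev_effort(test_hours, dev_hours):
    # Project metric: test effort as a ratio of development effort.
    return test_hours / dev_hours

# Hypothetical figures for one monthly reporting cycle.
print("Defect leakage index: {:.1%}".format(defect_leakage_index(4, 80)))
print("Price variance:       {:.1%}".format(price_variance(100000, 92500)))
print("Test/dev effort:      {:.2f}".format(test_vs_dev_effort(400, 1600)))

Tracked month on month, even these three numbers give both sides an early, shared signal of whether the engagement is drifting.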

Vendor and client have to understand the business drivers of the testing engagement. Identification of the key result areas has to be done based on the business drivers, and appropriate test metrics are then selected based on the nature of the project, the test types, the test phases and so on. Metrics selection is based on the principle that each metric in isolation gives information to track a business driver; the idea of multiple measurements is to put together a pattern of information that collectively gives a complete and accurate picture of the system. Put a metrics system in place that allows you to collect the information needed to measure, analyse and steer projects in the right direction.

Benefits of the model:

Implementation of a proper performance measurement framework for outsourced test activities has numerous benefits. A few of them are listed below.

• Helps companies manage their test service providers in an optimal manner for win-win relationships.
• Proper visibility into the return on investment delivered by the outsourced service provider.
• Consideration of all the quality measures while analyzing performance.
• Introduction of a standard evaluation process across the company.
• Identification of the potential risk areas that affect the productivity of the test team.
• A higher level of abstraction, with carefully chosen test metrics and presentation format, enables management to spot critical issues quickly.
• The history of results from the framework can help improve the success probability of future projects.

Biography

Kalyana Rao Konda is a senior technical services manager at AppLabs Technologies India Pvt Ltd, a company that provides development and testing services. He has been interested in testing from the beginning of his career. He has immense experience in managing product testing groups and in providing test services to clients worldwide. He has a proven track record of managing large-scale test automation projects across various technologies and test tools for a wide variety of organizations. He has published papers and spoken at international testing conferences and on leading websites. He holds PMP and CSQA certifications, and a B.Tech in Electronics and Communications Engineering.

Collaborative Practices for the Dream Team

Fran O'Hara, Insight Test Services, Ireland

This is an article from the August edition of the EuroSTAR newsletter.
Click here to view the entire Newsletter and to subscribe for future editions.

What are the best team-based practices to help testers and developers collaborate to deliver better software more quickly and less expensively? This article will highlight and provide insight into two high value practices that are both practical and proven in industry.

(Note these and other team based practices such as collaborative planning, project reviews, agile practices, etc. will be expanded upon in Fran O’Hara’s tutorial of the same title at EuroSTAR 2006)

1. Reviews.

Reviews are a key team-based practice that helps develop better collaboration between developers and testers... if they are well executed. Appropriate use of an efficient and effective review process (one that finds a high percentage of important problems quickly and which also promotes learning) is the best way to gain cultural acceptance and facilitate collaboration. Testers need knowledge to test – reviews are a practical way to gain much of that requirements/system knowledge. Testers are also excellent at finding documentation faults, so their involvement adds considerable value. Key documents that benefit significantly from collaborative review involving developers, business analysts, users and testers include User Requirements and Functional Specifications as well as Test Strategies and Plans.

Typical pitfalls with reviews include:

1. Reviews aren’t planned into the project schedule so they have to be done for free in zero time! Without enough time to prepare or indeed without having the right review team, reviews will not find a sufficient percentage of important problems.

2. Review meetings drag on and aren’t well managed. Trivial issues like spelling mistakes are raised, discussions about solutions occur and conflicts arise about the severity of problems or which solution is best.

3. A ‘review everything’ mandate has come from management. When too much has to be reviewed together, the natural tendency is to check the documents quickly just to get through them. This results in finding the more obvious and trivial problems but many of the more subtle and important problems are missed.

4. A ‘one size fits all’ process is being used. Sometimes this is too formal and rigorous for what is really required or indeed for the existing level of maturity of the organisation. This can then result in going through the motions with reviews – this lack of buy-in is often fatal for the process.

5. The review leader role is not emphasized (e.g. no training provided) – leading to poorly planned preparation and poor management of the review meeting – see point 2 above. The review leader role is there to ensure efficient and effective reviews by maintaining the team’s focus on finding major defects.

A sample practical peer review process which avoids the above pitfalls will be presented and practiced at the upcoming EuroSTAR conference tutorial.

2. Risk-based testing practices.
Risk-based testing provides a common language between all stakeholders including test, development, management and customers/users. Workshops where key stakeholders collaborate to identify and analyse risks and then develop a full lifecycle risk-based test strategy are powerful collaborative activities. They unite development and test in a collaborative approach to testing and addressing risk (including the go/no-go decisions on release). The knowledge transfer and shared vision resulting from such collaborations go a long way to helping ensure a successful project.

Risk-based testing typically involves:

1. Identifying and analyzing/prioritising product risks that can be addressed by testing. This is best done in collaboration with customers/users, who can provide business risks, and developers, who can provide system/technical risks. Examples of business risks include critical functions/features that the users need to do their job. System/technical risks could include core system functions, performance, security or other issues that are critical from a system operational viewpoint. Workshops are an effective approach to use here.

2. Developing a testing strategy that can mitigate these prioritised risks. This may involve assigning critical features to be tested in particular stages or iterations of testing (ranging from static testing such as peer reviews of designs/code to dynamic testing such as functional system testing). Again, focused workshops facilitate this collaboration and agreement on the testing approach throughout the full lifecycle for best results.

3. Designing tests within each test stage that extensively check the allocated high-risk elements, with less testing of lower-risk elements. The result is a prioritised set of test cases agreed by project stakeholders to address the most important product risks (a minimal prioritisation sketch follows this list).

4. Executing the tests in order of priority.

5. Reporting progress on the basis of risks addressed and residual risks remaining.
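
To make the prioritisation idea in steps 1 to 4 concrete, here is a minimal sketch. It is not Insight's Test Control™ methodology; the risk items, the 1-5 scoring scale and the depth thresholds are all invented for illustration, with exposure computed as likelihood times impact.

# Hypothetical product risks scored in a stakeholder workshop.
risks = [
    # (risk item, likelihood 1-5, impact 1-5)
    ("Payment calculation errors", 3, 5),
    ("Slow response under peak load", 4, 4),
    ("Report layout glitches", 4, 2),
    ("Rarely used admin screen fails", 2, 2),
]

def exposure(likelihood, impact):
    # A simple risk exposure score: higher means test first and deepest.
    return likelihood * impact

for name, likelihood, impact in sorted(
        risks, key=lambda r: exposure(r[1], r[2]), reverse=True):
    score = exposure(likelihood, impact)
    depth = "extensive" if score >= 15 else "standard" if score >= 8 else "light"
    print("%-32s exposure=%2d -> %s testing" % (name, score, depth))

Reporting progress (step 5) then falls out naturally: at any point you can state which exposure levels have been covered and what residual exposure remains.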

Users have their concerns addressed and are provided with information in a language they understand i.e. risk. Developers provide input on technical risk identification as well as executing their part of an integrated test strategy. Testers gain valuable knowledge to help ensure they add significant value to the project.

In summary, at Insight Test Services (www.insight-test.com) our experience in multiple environments and domains has been that the above two team-based collaborative practices are both practical and highly beneficial in terms of project success. Properly planned and managed reviews used throughout the complete lifecycle make a significant contribution to the quality of the final product. The deployment of our risk-based methodology Test Control™ on projects has helped to ensure the positive involvement and collaboration of all stakeholders on a wide range of projects of different size and complexity.

Biography

Fran O’Hara is European director of RPI Alliance Ltd (www.rpialliance.com), an alliance of consulting companies collaborating to deliver advanced process improvement technologies. In 1996, he formed Insight Consulting (www.insight.ie), providing process improvement services. In 2002, he co-founded Insight Test Services, providing managed test/validation services. He is a co-founder and ex-chairman of the Irish Software Testing SIG, SoftTest Ireland (www.softtest.ie), a fellow of the Irish Computer Society and a regular presenter at international conferences and seminars.

Thursday, July 06, 2006

Learning testing through analogies

Hi Reader,

Learning something new is always a challenge, and learning testing is a challenge of challenges. When I started learning testing, I found it hard not only because the subject itself was a challenge, but because every individual had their own opinion of testing, its methodologies, its branches, its definitions, its vocabulary, its gurus.

"How do I learn testing or from whom do I learn testing and how do I know who is right?" .. were the questions running in my mind.

It was indeed a tough phase of life for me. I happened to be misguided by all the people whom I asked "What is ..... in testing?".

One such basic question I asked was "What is regression testing?", and the answers that confused me were "It is execution of all test cases", "It is a selective re-testing", "It is the cycle of testing where all cases should pass", "It is a product qualification test", "It is a combination of Sanity, Comprehensive and Extended set of test cases"... (and the list goes on)

Well, I am lucky to have worked with many organizations in a short period of time. I term it "lucky" because I got a chance to get even more confused about what testing is all about. In some places they referred to me as Quality Assurance personnel and in other places I was referred to as a Test Engineer. Of course, I did ask myself "Who am I?", because I did testing irrespective of whether an organization called me a QA guy or a Test Engineer.

Fed up with the confusion, inspired by James Bach, and developing a passion for learning testing, I started to think on my own... to find the answers.

3 years passed ....

Now let me take you to present tense ...

Recently a tester in India contacted me wanting to know the difference between load and stress testing, and this is how I explained it ...

_ Learning testing through analogies __

OK, so you want to know the difference between load and stress testing and how each can be done. Well, let us start thinking then...

Assume you are asked to test a chair that can take a load of 50 kilos. For a chair, load and stress tests are the most important ones, as it will be subjected to either of these in use.

Now let us take up a dictionary and find out what "load" means.
Word Web says load is "weight to be borne or conveyed".

OK, the next step as a tester is to think of use cases, test cases and test content.

Let us make it brief by starting off with the collection of test content, i.e. 10 kilos * 5 bars, 1 kilo * 10 bars, 100 grams * 10 bars, 10 grams * 100 bars...

Now start testing with a minimum load of, say, 10 kilos and gradually increase the load in equal steps up to 40 kilos. Once you have reached 40, switch to the 1 kilo bars up to 45, then use the 100 gram bars up to 49, and as you near the required load-bearing capacity add the weight in ever smaller steps.

Once the chair is loaded with 50 kilos, leave it for some time and check for deformations, if any, on the chair legs.

If the chair takes 50 kilos comfortably, then try adding more weight, again in very small steps, to see where it breaks. This is not about breaking the system for its own sake, but about providing data showing that a system expected to take a 50 kilo load has been designed to take more, and hence has cost more than it needed to.
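
As a side note, the stepped procedure above is easy to express in code. Here is a minimal simulation sketch: the chair_deforms() check and its 54 kilo breaking point are invented stand-ins for a physical inspection.

def chair_deforms(load_kg):
    # Invented behaviour: this stand-in chair holds up to 54 kg.
    return load_kg > 54

def load_test(rated_capacity_kg=50):
    load = 0.0
    # Coarse steps far from the rated capacity, finer steps as we approach it.
    for step, limit in [(10, 40), (1, 49), (0.1, rated_capacity_kg)]:
        while load < limit:
            load = round(load + step, 1)
            if chair_deforms(load):
                return load
    # The rated load held; keep adding small steps to find the breaking point.
    while not chair_deforms(load):
        load = round(load + 0.1, 1)
    return load

print("Deformation first observed at", load_test(), "kg")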

OK, you are done with load testing, so now let us shift our focus to stress testing.
Word Web says stress is "force that produces strain on a physical body".

A very common question or confusion people have is "Is adding more load a stress test?" The answer, from my research on this topic, is "Not necessarily."

Well, I could use the same 50 kilo load to stress the system. How?

There are different ways you can apply a load -

a) Axial load - the whole load is concentrated on the axis of the chair. Applying a load to the chair in this manner is a stress test. Use case - a person standing on the chair with their entire weight on their toes. People usually do this when they want to stretch for something that is still not reachable even after using a chair to get closer (a book or something).

b) Truss - this kind of load becomes a stress when the mass is not equally distributed over the chair. One leg of the chair could have to take more weight than the others, and hence this too becomes a stress test. Use case - two children made to sit on the same chair. (At least, it happens in India.)

c) .... (the only limitation is your imagination)

"Wow, I got a clear picture of Load and Stress testing and using this example, now I can think, how to load and stress test web application." was the reply I got from the person who asked me the question of load and stress testing.

Now, that is how I learnt some of the testing concepts; perhaps that is how I have been learning and teaching them, and I will continue to learn in the same way unless I see a better way.

_ Learning testing through analogies __

"To know what Stress Testing is say.. *Stress Testing Stress Testing ...* continously without gap..."

Regards,

Pradeep Soundararajan
Tester Tested !
pradeep.srajan@gmail.com

Thursday, June 22, 2006

Hurrah for Automation, But Neglect Manual Testing at High Cost

This is an article from John Scarborough which appears in the 3rd Edition of the EuroSTAR Newsletter entitled 'Hurrah for Automation, but neglect Manual Testing at High Cost'

The article was published in the EuroSTAR Newsletter. Click here to view the entire newsletter and to subscribe to future issues

Hurrah for Automation, but neglect Manual Testing at High Cost

A few months ago I talked with the VP of Engineering for a $50+M software producer whose flagship product, Buckflow (not its real name), was in serious trouble. His team had followed the same Quality Assurance (QA) routine for a few years without running into serious problems, but customer deployment of their last upgrade, which contained patches for 20 customer-reported problems, had resulted in an uproar. At a few sites, Buckflow would not initialize. At another site, a key workflow management tool had been disabled. More patches were scheduled, and a major dot release was scheduled to come out in the summer.

Their board of directors had been alarmed by the report of deployment failures, and was adamant in insisting that this should never happen again. Because of the number of customizations, and because of the frequency of upgrades, the VP believed that the only solution was full automation. He understood that this could only be a long-term goal, but he wanted my company’s help in making it happen, along with whatever bridge solutions were required between now and then.

Their routine for QA and testing, which until then had worked satisfactorily, lacked sophistication. Their development team provided unit testing, and business analysts provided acceptance testing. Their small QA team of four full-time test engineers spent all their time developing and executing test cases for features or fixes that were to be rolled into the next scheduled monthly patch. A week before its scheduled release, everyone in the product development group installed the release candidate patch on their machines and for a couple of days ran certain scenarios selected from a distributed list. The test team barely had time to run their new test cases. They did not have modular tests, and they had stopped running regression tests at least two years earlier.

Lack of sophistication in test strategy, the obvious problem at Buckflow, is not unusual. I pointed out that bugs found in the design stage are far less expensive to fix than bugs found during product integration testing. Also – especially applicable to Buckflow – every bug fix is a weak link because the product’s original design did not address it, and therefore must be tested in every release. The VP nodded with evident regret, and said that they had thought that disciplined development combined with unit testing would be sufficient.

It’s also not unusual to find companies who continue to have naive faith in automation, in spite of evidence against such disturbingly resilient illusions as:

* automation eliminates human errors that result from fatigue, boredom, and disinterest;
* automation can be re-used indefinitely (write once, run many);
* automation provides more coverage than manual testing;
* automation eliminates the need for costly manual testing.

Every one of the above statements makes sense if properly qualified. Automation may eliminate some or all of the errors that result from manual testers growing weary, but it may also introduce other errors that are equally due to fatigue, boredom and disinterest, arising here in the people who develop automation.

Automation can be re-used indefinitely, provided that the application or system under test does not change, and that nothing else changes that might affect execution, such as common libraries or runtime environments (e.g. Java). Once such changes do occur, whatever return on investment may have been realized from automation will be quickly wiped out by maintenance costs, at which point the only advantages of automation are strategic, not economic.

If “coverage” means “test coverage” (rather than “code coverage”), then yes, automation can even provide 100% coverage: one need only automate all available test cases. A more significant data point however is the degree of code function or code path coverage provided by available test cases. While achieving 80% code path coverage may be better than 70%, a more significant consideration is what has not been covered, and why.
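
A toy example, with an invented shipping function, makes the distinction concrete: automating every available test case yields 100% test coverage, yet half of the code paths are never exercised.

def shipping(order_total):
    if order_total > 100:
        return "free"      # branch A
    return "standard"      # branch B - never reached by the cases below

# The complete (but inadequate) set of available test cases.
test_cases = [(150, "free"), (200, "free")]

automated = len(test_cases)
passed = sum(shipping(total) == expected for total, expected in test_cases)
print("test coverage: %d/%d available cases automated (100%%)" % (automated, len(test_cases)))
print("all passed:", passed == len(test_cases))
# Both cases take branch A, so branch coverage is only 1/2 - and the real
# question is why orders of 100 or less were never covered at all.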

To avoid manual testing at all costs would be the costliest option, because only in manual testing can all of a test engineer’s understanding, probity, logic, and cleverness be put to work. Security testing of Buckflow at the application level, for example, depends on how the application was developed, where it stores its cookies, what scripts it runs during initialization and various transactions, how stateful connections are established in the inherently stateless HTTP protocol, etc.

While there are commercial test tools that can verify an application’s defense against cookie-cutter varieties of denial of service, or even 50% of the threat model for most applications, interoperation with new applications and with new versions of underlying technologies requires at least a significant investment in manual testing.

More obvious needs for manual testing include penetration testing, usability testing, and localization testing. But Buckflow had a particularly acute need for testing massively configurable applications in diverse environments. While there was room to talk about keyword-driven automation, it was clear that only manual testing would be able to identify configuration issues. In the end, we agreed that the best approach would be a combination of carefully orchestrated automated tests with rigorous manual testing.

Biography

As VP of System Engineering for Aztecsoft, John Scarborough manages and orchestrates pre-sales processes across Sales, Proposal Engineering, and Delivery, from project-based needs analysis to solution design to estimation to retrospective analysis of completed projects. He is also responsible for providing access across Aztecsoft to project-based operational knowledge. Scarborough previously served as Aztecsoft's Principal System Engineer and Quality Architect. Areas covered by his published papers include interoperability testing for web services, model-based estimation, and capability assessment in Agile environments.
Prior to joining Aztecsoft in 2001, Scarborough was at Microsoft for 11 years, where he built and managed large teams in the test division of its Windows Operating Systems group, including system integration testing, application compatibility, user-context automation, and system validation.

A Software Testing Body of Knowledge?

This is an article from Stuart Reid which appeared in the 3rd Edition of the EuroSTAR Newsletter entitled - A Software Testing Body of Knowledge?

A Software Testing Body of Knowledge?

So, what is a Body of Knowledge or BOK?
A BOK describes the generally accepted knowledge for a particular discipline; it is a formal inventory of the intellectual content of the field. A BOK is thus one way of defining a profession. For a BOK to be accepted there should be widespread consensus within the community that the knowledge and practices within the BOK are both valuable and useful, and applicable to most projects most of the time. The BOK provides the basis for the regulation of the profession; it also defines its boundaries.


Example BOKs in the IT area cover disciplines such as Project Management (APM and PMI variants) and Configuration Management (CMBOK). There is also the IEEE Software Engineering BOK (SWEBOK), which includes a chapter on software testing. The SWEBOK is being advanced to ISO status, but has been dogged by disagreements and, so far, has not been widely accepted by the community.

Who uses a BOK?

Unsurprisingly, a BOK has various stakeholders. New entrants to a field can use it to identify what they need to know, while practitioners can use it as an essential source of information on topics that they only need to reference infrequently. Certification (and licensing) bodies and academics may use it in the form of a syllabus as the basis for qualifications, which, in turn, will mean that training providers and students are also users.

Does a Software Testing BOK already exist?

Although the authors may disagree, it seems clear that the discipline already includes a number of ‘pseudo’ BOKs. By this I mean that there are several well-used software testing resources, but not one that covers the complete discipline, and also not one around which there is general consensus. Examples of these ‘pseudo’ BOKs are:

* qualification syllabi created by certification bodies such as ISEB/ISTQB;
* approaches to testing such as TMap®;
* test process improvement models such as TPI® and TMMi™;
* well-regarded text books such as Glenford Myers’ original edition of The Art of Software Testing;
* standards on software testing, such as IEEE 829 and BS 7925; and
* the software testing chapter of the SWEBOK.

Although they provide various levels of coverage of the field of software testing, not one of these ‘pseudo’ BOKs on its own satisfies the criteria for becoming the single BOK for the industry. This is because none of them provides broad enough coverage of the discipline of software testing, and none of them appears to command the respect and trust of a large enough proportion of the software testing community to be considered as representing a true consensus.

Is the discipline of software testing ready for a BOK?

Implicitly, many contributors to the ‘pseudo’ BOKs appear to believe so; however, there is also a strongly-held opposing point of view. Let’s consider the opponents’ view first. Some consider that a BOK acts as a barrier in a number of ways. They feel that BOKs are, by nature, inert and rarely evolve, restricting new thinking and debate on currently accepted ‘truths’. They also point to the continuous stream of project failures and the apparent lack of ‘engineering’ in software testing, where scientific theories are not backed up by solid empirical data. Both points are presented as evidence of the field’s immaturity.

Another argument presented against a software testing BOK is that the discipline is too diffuse and changes from domain to domain. Detractors question whether there are enough generally good practices in software testing that apply to most projects, and suggest that many good practices are only applicable to specific application domains. For instance, they say that the generally useful practices applied to testing safety-critical systems may not be appropriate for the testing of low-integrity commercial applications.

The supporters of a software testing BOK point to the benefit of certification in providing a means of regulating the industry and defining training for new entrants. They argue that certification also lends software testing credibility with both customers and developers, while the availability of a single consensus BOK would encourage academics (even those with little interest in, or knowledge of testing) to adopt it. Another suggested advantage of a BOK is that it provides guidance to practitioners on how to improve their current practices. Many of those who feel that software testing should be considered a legitimate engineering discipline see a BOK as a necessary stepping stone to a profession of software testing.

Should a software testing BOK be created?

If the industry decides that a BOK is needed for software testing then it is most important (and probably very difficult) to ensure that consensus is reached. Any initiative must be an inclusive, multi-national effort and care must be taken to ensure that the stakeholders in the previously-mentioned ‘pseudo’ BOKs are invited to join the development process. Ownership of a new BOK could be difficult to manage, and although it is often argued that anything provided for free may be considered worthless by the recipient, I believe that any newly-created software testing BOK should be made freely available to the whole community.

Developers of a BOK must ensure that it does not include practices that are new and unproven with no evidence of their efficacy. A BOK should embody achievable good practice and not simply be a reiteration of academic texts, which may have little connection with the real world. The speed of evolution of the software testing discipline means that its BOK must carry with it the requirement for its continual review and revision. Although a difficult task, I believe that simply by attempting to build a BOK the software testing industry will continue to expand its knowledge of the discipline and so add value to the testing community.

EuroSTAR 2006 Workshop

The topic of a software testing BOK will be covered by an advanced workshop at the EuroSTAR conference in December. The aim is to open up debate on whether the industry should support its creation (with all the attendant questions) or wait until we have more obviously reached maturity. If you feel you would like to contribute to the discussion, then please make a note in your diary to attend.

Biography

Stuart Reid has spent the last 17 years involved in software testing, having previously worked on high-integrity systems. He is Chair of the BCS SIGiST and its Standards Working Party and was Chair of the ISEB Software Testing Board and founder of the ISTQB.