Monday, September 11, 2006

Your Chance to win a FREE place at EuroSTAR 2006!

This Competition is now closed
As the date for EuroSTAR 2006 quickly approaches we want to give YOU the chance to join us this year in Manchester.

Simply click here and fill in your details to be in with a chance to win a FREE conference place!
Don't miss this opportunity to attend Europe's premier software testing event!

Thursday, September 07, 2006

Developing Testers: What Can We Learn From Athletes?

This is an article from the September edition of the EuroSTAR Newsletter - STARTester, written by Paul Gerrard, System Evolutif, UK. You can view the complete newsletter by clicking here and don't forget to subscribe to receive future issues.

This article presents the first section of the paper that Paul is writing to accompany his keynote talk at EuroSTAR 2006.

1.1 Motivation for this Talk
This article is based on my experience of doing two things close to my heart: coaching rowers and coaching testers. There are some universal rules about coaching, and I wanted to explore the commonalities between coaching athletes (rowers are athletes) and coaching testers.

A couple of years ago, I (rather foolishly) volunteered to coach the ‘development women’ squad at Maidenhead Rowing Club. The Devwomen squad, as they were called, had learnt to row in 2004 and were keen to carry on and compete in some events the following year. I offered to create a training plan and coach four sessions a week for the next 11 months. The plan was to take people with a few weeks’ experience and develop them into competitive rowers in a year.
This sounds quite ambitious, but the beautiful thing about the sport of rowing is that you can compete at almost any level. The levels of enthusiasm and commitment were high enough, and I was confident we could make good progress. Whether they competed and won was another matter.

I briefed the squad on my proposed training plan for the year with a PowerPoint talk. It’s a long story, but between September 2004 and July 2005 the squad were very successful. They embraced the training and stuck to it, and were enthusiastic and committed throughout. Every person in the group had at least one ‘win’ by the end of the summer – some had three or four pots and medals to display on the shelf. (Half of the devwomen subsequently moved up to row in the ‘Elite’ squad last year.)

Now, it struck me some time later that the training plan I worked out at the rowing club had a structure, focus and detail more sophisticated than the personal development plans most testers agree with their employer. (In fact, I subsequently discovered that probably fewer than 10% of testers have any development plan at all.) I was curious to see if a development plan for athletes could be used as the starting point for a tester’s development plan.


1.2 From Athletic Training Plan to Tester Development Plan
I took the devwomen training plan and, using the same headings and making appropriate substitutions for the content of my PowerPoint presentation, sketched out what such a plan might look like. It started as just an exercise, but much of what I had learnt from working with the devwomen had a direct correspondence to working with testers. There were of course some ‘rough edges’, but far fewer than I would have anticipated. So it seemed to me that there was value in pursuing it further and developing a talk around this curious exercise.

I took my original training plan and slides and re-ran the thought process for each aspect of the plan. I asked myself, ‘If I were coaching testers and I had that kind of framework, what would I put into a development plan for testers?’
In the paper, I walk through a development plan for athletes and then use the same framework to explore what might be done for testers. I think there is quite a lot of commonality in the resulting proposal, and the thinking that goes into such a plan is at the heart of the message I want to provide. The remainder of the paper sets out a proposed structure for a tester development plan.

1.3 Coaching and Mentoring is Critical

Now, one of the first of several surprises (to me, anyway) was that you cannot separate development from coaching. Coach and mentor are terms often used in the context of people and organisational development, but frequently they are just labels for one’s team leader or manager. Coaching and mentoring are critically important activities that reflect two support roles for every individual who wants to develop their skills and capability.
In my dictionary, a coach is ‘an instructor or trainer (in sport); a private tutor’. The implication is that the coach imparts knowledge, guidance and advice to an individual. In this respect, the coach is pro-active – leading people towards improved performance and capability.

In the same dictionary, a mentor is defined as ‘an experienced and trusted advisor’. The implication seems to be that, whereas a coach takes the initiative and directs the individual, a mentor waits until asked for advice and support. Needless to say, trust and effective communication between coach/mentor and the individual are critical to success.

1.4 The Mentality of IT People is a Barrier to Change
Coach and mentor are terms that are overused in the IT industry, not just in testing. The IT industry sees itself as distinct from the rest of business – as if the interpersonal skills so important to most disciplines no longer apply. We are all familiar with the stereotypical deep-techy programmer who has difficulty with the other members of his team, let alone non-technical folk or end-users. Usually male, these ‘types’ excel when it comes to solving difficult problems with technology, and find it easier to communicate with operating systems than people.

I’m exaggerating perhaps, but the perception of most business people is that most folk in IT simply do not appreciate the needs, thinking or motivation of business users. The gap between business and IT starts at the top and runs through to the lowest-level practitioners. The concepts of coaching and mentoring, as softer disciplines, are still met with suspicion by many people in IT, even though business folk appreciated their importance decades ago. Can IT folk even spell interpersonal?

Coupled with this ‘mistrust’ of soft skills, we tend to assume that we can attend a technical training course, learn a new skill and become instant experts. This is preposterous; but the push and pull of certification schemes, for example (emerging in all aspects of IT nowadays), tempt us into believing that certification is the same as capability. Don’t get me wrong, certification schemes have some value, but they are no substitute for evidence of achievement, experience and interpersonal skills.

One of the problems we have in IT (and not just in testing) is that we seem to think that everything has to be invented from scratch. This mentality dominates many people’s thinking: unlike most other industries, we are continually reinventing wheels – rebuilding things we probably already have. We are ever so keen to adopt the latest process improvement or capability model, regardless of its relevance or usefulness. No matter – it’s techy, looks simple and it’s new.

But when it comes to adopting approaches that support leadership, motivation, communications, learning methodologies and interpersonal skills in general we shy away. They are soft, alien, non-techy, and worst of all, invented by non-IT Folk.

So, IT tends to be very inward-looking, partly because the industry attracts people who like the technology more than the business of exploiting and working with technology. Quite a difference, don’t you think?
Although system and acceptance testers are less obsessed with technology than most, we have to recognise the influence – some would say hold – that technology has on many IT folk.

1.5 The Importance of Leadership

The development process (as an athlete or tester) is mainly about human interaction. Yes, of course, there is a lot of hard work to be done. Slogging over technical exercises, cranking out test plans and grinding out test results is indispensable. But the real value of preparatory work comes when feedback is obtained and the work is discussed with peers, a customer, the coach or mentor.

The reason a coach exists is to set the vision, to explain how to do things, to hint at faults in technique, to suggest improvements, to cajole, to motivate – all to achieve a change in someone else’s behaviour. It’s not about, “this is how you test boundary values, I have explained it, you have tried it once and now you know it”. Coaching is not like that and learning is not like that. Whether you are learning a new technique in a sport or an approach, technique, mentality or attitude in a discipline like software testing, there is little difference in the thought process of the individual. The coach is trying to change someone else’s behaviour and that is no trivial thing.
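To make the boundary-value example concrete, here is a minimal sketch in Python of the technique a coach might demonstrate once and then reinforce over many sessions; the validate_age function and its limits are hypothetical, not from the talk.

```python
# Hypothetical function under test: accepts ages 18 to 65 inclusive.
def validate_age(age: int) -> bool:
    return 18 <= age <= 65

# Boundary value analysis: exercise each boundary, plus the values
# immediately below and above it, where off-by-one defects hide.
boundary_cases = {
    17: False,  # just below the lower boundary
    18: True,   # the lower boundary itself
    19: True,   # just above the lower boundary
    64: True,   # just below the upper boundary
    65: True,   # the upper boundary itself
    66: False,  # just above the upper boundary
}

for age, expected in boundary_cases.items():
    actual = validate_age(age)
    assert actual == expected, f"age={age}: expected {expected}, got {actual}"
print("All boundary cases pass")
```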

Not many people wake up in the morning and say, ‘At the end of this day I am going to change the way I do XXXXX’. Usually the drive for change comes from someone else; it is rarely initiated by the individual. Everyone with a personality, ego and confidence in their own ability is innately resistant to change.
Change threatens one’s ego and confidence in one’s ability, so, with few exceptions, people resist (consciously or unconsciously) external demands for changes in their behaviour.

Motivating and encouraging people to change are hugely difficult things to do, from the point of view of the individual as well as the coach. Although most team leaders and managers may be good technically, many have poor leadership skills. Needless to say, developing leadership skills in managers helps practitioners to sustain training and development efforts and improve their capability.

Investing in The Dream Team: How to Keep The Dream Team Together

This is an article from the September edition of the EuroSTAR Newsletter - STARTester, written by Filip Gydé, CTG, Belgium. You can view the complete newsletter by clicking here and don't forget to subscribe to receive future issues.


CTG is quite proud of the low staff turnover in the company. Thanks to the Competency Development system, among other things, staff turnover at CTG was only 15.5% in 2005 and only 13.82% in 2004, percentages far below the market average.
CTG is an ICT services company. This means that we implement IT projects for customers, usually at their locations.
This also means that staff in the field often have more intensive contact with the customer than with their own company. This is a real challenge for a company that is proud of its extraordinarily high loyalty levels, from both customers and staff.

How do you make sure that once you have the right people on board, you can also keep them on board?
How do you turn what is usually a big problem in the service world into a real differentiator in the market?


The answer consists of different ingredients and a recipe that combines these ingredients in the right proportions: a very specific recruitment approach, a clear strategy, a focus on continuous development, a corporate culture based on values, etc.

The real secret consists in making all these matters, which are traditionally labelled as "soft", very tangible and "hard": measuring them very concretely and following up the results like a financial ratio. "Put your money where your mouth is" is still a very good test of whether someone actually means what he says.

In this article I will zoom in on one of the ingredients in the recipe of our retention policy: Competency Development (CD). We have developed the CD system and anchored it in an actual job within the organisation. We can also demonstrate that this is one of the reasons for our low staff turnover: 13.82% in 2004 and 15.5% in 2005, percentages far below the market average.

The Competency Developer is continuously looking for the best match: the right co-worker in the right place, with maximum attention to the career path indicated by the consultant, in line with the customer's expectations and the strategy of CTG itself. The reason is simple: we are convinced that the major reason for someone to change companies is related either to the job content, which may no longer be in the co-worker's field of interest, or to the feeling that there are few opportunities to further his/her career. Specifically in these domains, the Competency Development concept provides great added value.

Role of the Competency Developer at CTG

The Competency Developer, called "CD" for short, assists co-workers in developing their career path. He is responsible for our consultants' competency development and for knowledge management, in line with the strategy and business plan of our company. When we are looking for a certain profile for one of our projects with a customer, the CD verifies whether the right match can be found. In some cases we immediately come across a suitable co-worker. Sometimes a co-worker almost matches the requested profile description but may qualify even better for the job after extra training or intensive coaching.

One CD is responsible for about 50 consultants. Right from the start the CD builds a relationship of trust with the new consultant. For junior profiles, whose career direction is not yet fully defined, it mainly comes down to "steering". Senior consultants usually have already developed a vision of their own, so the CD's task is rather to hold up a mirror for them and give them regular feedback. The CD encourages everyone to develop both technical and interpersonal skills. The idea is to get all our co-workers to really think along with our company and our customers.

The role of the CD starts with the recruitment and settling-in of new co-workers
The HR department takes care of the first screening of an applicant. During the first interview the recruiter does not talk so much about the applicant's technical skills; rather, he tries to find out whether the applicant's personality would fit into the company. Which values are important to the applicant, and do they correspond to our values? From experience we have learned that this fit is the most important aspect: the values of our company describe our identity, and the materialisation of these values shows where we differ from other companies. If you do not match our identity, it won't work in the long run.

Following positive advice from this first screening, the Competency Developer enters the picture. In a second interview he goes deeper into the person's job-related skills, double-checks the personality fit, and probes expectations in the short, medium and even long term. The CD has to be able to commit our company to these ambitions: it makes no sense to start off with someone whose ambitions are not in line with our organisation's strategy.

Because success usually lies in a good start, the CD plays a key role in introducing the new co-worker to our company. The expectations of both parties – the co-worker and CTG – are continuously aligned.

When a first project has been found for the new co-worker, the CD explains the current options and how they fit into the career path he wants to follow. If he does not yet know which direction he wants to take, the CD will provide "stepping stones" or put him in touch with others who can help him make his choices.

An evaluation takes place after one month: the customer or our own project manager gives the CD feedback on the consultant's technical and interpersonal skills, whether or not in the presence of the consultant himself. If required, action items are proposed.

Besides lots of informal contacts there are also formal moments: feedback interviews and career interviews

A formal feedback interview is organised twice a year, linked to an evaluation with the customer. Once a year the CD holds a career interview: the set objectives and the relevant competencies are assessed, and action items for the next period are defined. Prior to this interview an Appraisal Review document is sent to the consultants, in which they score themselves on all listed competencies relevant to their situation, with the aim of detecting and discussing possible focal points with the CD.
Junior profiles sometimes feel uneasy about this, but more experienced consultants see it as real support for their personal competency development. In addition, we also work with 360-degree feedback: an evaluation by the customer and an observation to score competencies and the corresponding behavioural indicators.

Continuous development means that the CD plays an active role in training planning.

The competency system is built around 10 "levels". A junior consultant starts at level 1 and can grow towards his field of interest through an evolution in technical and interpersonal skills.

For each level there is a "must-have" list of courses to be attended and skills to be acquired before you can be classified at that level. For example, influencing skills are very important for the profile of a Project Manager. When choosing learning activities, the consultant's preferred learning style is taken into account, determined through their own assessment or through experience with the results of previous learning activities.
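To illustrate, the classification rule just described could be represented roughly as follows. This is a hedged sketch only; the level contents, course names and skills are invented for illustration, not CTG's actual lists.

```python
# Sketch of the level classification rule: a consultant is classified at a
# level only when every "must-have" course and skill for it is completed.
# Level contents here are hypothetical.
must_have = {
    1: {"courses": {"Testing Foundations"}, "skills": {"test execution"}},
    2: {"courses": {"Test Design Techniques"}, "skills": {"test design"}},
    # ... levels 3 to 10 would follow the same pattern
}

def qualifies_for(level: int, courses_done: set, skills_acquired: set) -> bool:
    required = must_have[level]
    return (required["courses"] <= courses_done      # all courses attended
            and required["skills"] <= skills_acquired)  # all skills acquired

print(qualifies_for(1, {"Testing Foundations"}, {"test execution"}))  # True
print(qualifies_for(2, {"Testing Foundations"}, {"test design"}))     # False
```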

Everyone can submit an online application for his or her training schedule, consult their growth in level and the training catalogue, and register for learning activities – always in consultation with the CD.

No false promises, but a very concrete investment ... which pays off!

Competency Development is, as you can see, a clearly structured system. With a ratio of 1 CD to 50 consultants, it means an investment of 10 FTEs for 500 co-workers. Such an investment is not made just out of conviction; it has to work out financially.

And it does, according to the figures. If a co-worker leaves the company prematurely, you at least have to find and train a replacement; you may also have problems with the current project; and you lose the know-how that was gathered – to name just the three largest cost items. Altogether, when one co-worker leaves, it is likely to cost the equivalent of 6 man-months. So, if a Competency Developer makes sure that two fewer people than the market average leave per year, the investment pays off. I can assure you that the ROI is much higher than that.
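The break-even arithmetic behind that claim is easy to check. Below is a back-of-the-envelope sketch using the figures from the article; treating one CD as roughly 12 man-months of cost per year is my assumption, derived from the 1-in-50 full-time ratio above.

```python
# Break-even check for one Competency Developer, using the article's figures.
cd_cost_per_year = 12.0    # man-months: one full-time CD for a year (assumption)
cost_per_departure = 6.0   # man-months lost per premature departure (from the article)

break_even = cd_cost_per_year / cost_per_departure
print(f"Departures a CD must prevent per year to break even: {break_even:.0f}")
# -> 2, matching the claim that two fewer leavers per year pays for the CD
```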

The Competency Developer is an important ingredient in the recipe for loyalty. Important, but not the only one. During "Investing in the dream team" in Manchester I will disclose a few more ingredients of our recipe.

Wednesday, September 06, 2006

The Captain of Your Special Teams… The Performance Test Lead!

This is an article from the September edition of the EuroSTAR Newsletter - STARTester, written by Scott Barber, Perfectplus Inc, USA. You can view the complete newsletter by clicking here and don't forget to subscribe to receive future issues.

You are familiar with the “Software Development as a Sports Team” analogy, right?
The project manager equates to the coach, lead developer to offensive team captain, test lead to the defensive team captain – where the entire team views the development process as collaborative and each member of the team is driven to produce his or her best work in order to achieve the team's common goal of delivering a “winning” application.

Typically, this is as far as the model goes, but it doesn't account for some important members of the team – the specialists.
There are a variety of specialists that may be a part of your team: security experts, network engineers, configuration managers and performance testers, to name a few.

If we look to American Football, we find a structure to enhance our model to accommodate these team members.

In American Football, there is a third group known as the special teams. The special teams consist of the kicking teams, kick return teams and other groups dedicated to special plays. Historically, coaches would populate these teams with non-starting players to keep the starters from getting excessively tired or injured during the game and so that the starters could remain focused on their primary positions during practice.

Recently, however, coaches have started fielding their best players, sometimes known as “game breakers,” on the special teams to improve their chances of winning games. These players have become more than just specialists; they have become expert generalists who can contribute to the game in a variety of roles and positions.

The captain of the special teams is often a senior player with both exceptional leadership skills and the ability to play a variety of positions on the field. These are the players coaches put in the game in critical situations when they feel the team needs a big play or a shift in momentum. They are the players that make the crowd cheer and inspire the rest of the team to redouble their efforts simply by taking the field. Much like the recent shift in football where coaches look to top players to populate the special teams, project managers have started looking for experienced, senior individuals who are expert specialists and established generalists for their special roles.

On a software development team, this unique individual equates to the performance test lead... minus the fanfare. On the most effective development teams I've ever been a part of, the performance test lead is someone with leadership abilities, strong generalist skills, and a unique and critical specialty.

So what makes the performance tester so unique? On top of their specialization as performance testers, these individuals tend to be competent and experienced in a wide variety of roles, enabling them to contribute effectively to virtually any aspect of the team. Let's take a brief look at the different roles a performance tester assumes at various points during a project.

Business Analyst – Before performance testers can begin conducting effective tests, they must understand how users are going to interact with the system under test, what tasks they will be trying to accomplish, what their state of mind is likely to be while interacting with the system, and what their performance expectations are. Additionally, to establish relevant performance goals or requirements, the performance tester must also determine what the users' tolerances are and how competing applications are performing. Most performance testing literature implies that this information is simply available from the existing business analysts, but experience says it is rarely available, and when it is, it is poorly formed or simply wrong, because very few business analysts have any training in this area.

Systems Analyst – Performance testing is not a black box activity. An effective performance testing strategy has to take into account not only the system as a whole but also the logical, physical, network and software architectures of the system, both in test and in production. While this information is generally available, it rarely exists in a consolidated form, and as it turns out, the performance tester often ends up being the single person on the team who understands the system from the greatest number of perspectives and has the best grasp of how all of these perspectives interact with one another.

Usability Analyst – When the application finally goes into production, there is really only one aspect of performance that matters: customer satisfaction. And the only way to determine customer satisfaction is to get the customer to use the system. The challenge in determining a customer's satisfaction with performance is that customers often know neither how to quantify performance nor how to distinguish between poor performance and an inefficient interface. Worse, very few organizations have dedicated usability teams, leaving the performance testers on their own to design and conduct these studies.

Test Strategist, Test Designer, Test Developer, Test Manager, Functional Tester, etc. – Typically, a test team is just that: a team of people with individual roles and expertise who work together to test the system effectively. Most often, though, the performance test team is a team of one, so the performance tester has no choice but to be competent in all of the various test team roles. Since there is so little training available that is specific to performance testing, most practicing performance testers were initially trained in functional, systems or even unit testing and have since adapted those skills and techniques to performance testing. Frequently, performance testers were either systems or functional testers prior to becoming performance testers, or have served in those roles after becoming a performance tester.

Programmer – Developing performance tests is far from point-and-click or record-and-playback. In order to accurately simulate actual users, it is almost always necessary for performance testers to write at least somewhat complex code. It is frequently necessary for them to read, understand and interpret the developers' code, and, not infrequently, they find themselves developing their own “test harness” simply to make load generation possible.
Performance testers often write their own utilities to help them parse the huge volumes of data they collect, generate test data, reset their test environments, or collect performance-related metrics on remote machines; a sketch of one such utility appears below. Performance testers may not always be senior programmers, but they certainly aren't afraid of code.
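As an illustration of the sort of small utility described here, the Python sketch below parses a response-time log and reports summary statistics. The log format, with a response time in milliseconds at the end of each line, and the file name are hypothetical, not from the article.

```python
# Illustrative utility: summarise response times from a simple log where each
# line ends with a response time in milliseconds, e.g. "GET /login 231".
import statistics

def summarise(path: str) -> None:
    with open(path) as f:
        times = sorted(float(line.split()[-1]) for line in f if line.strip())
    p90 = times[int(0.9 * (len(times) - 1))]  # simple 90th-percentile estimate
    print(f"samples={len(times)} "
          f"mean={statistics.mean(times):.1f}ms "
          f"median={statistics.median(times):.1f}ms "
          f"p90={p90:.1f}ms max={max(times):.1f}ms")

if __name__ == "__main__":
    summarise("response_times.log")  # hypothetical file name
```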

There are other roles performance testers play, and other reasons why the lead performance tester frequently turns out to be that game breaker who equates to the captain of your “software development special teams”, but I've come to the end of my allotted space. I guess you'll just have to attend my keynote at EuroSTAR to hear the rest of the story. I hope to see you there!