
Creating a Test Strategy

At EuroStar 2017 I did an experiential workshop with Pekka Marjamaki and Carsten Feilberg called “The Magic of Sherlock Holmes – Test Strategy in a blink of an eye”. The goal of this full-day workshop was to teach participants to create a Test Strategy rapidly so they can start testing as soon as possible. This blog post summarizes what we taught and shares the example I made for the participants.

What is a Test Strategy?

In the workshop we defined Test Strategy as a solution to a complex problem: how do we meet the information needs of the testers and stakeholders in the most efficient way possible? In Rapid Software Testing we define Test Strategy as “the set of ideas that guide your test design or choice of tests to be performed”. We also talk about logistics and the test plan. Logistics is the set of ideas that guide your application of resources to fulfilling the test strategy, and the test plan is the set of ideas that guide your test project. A Test Plan is the sum of logistics and strategy.

Rikard Edgren did an excellent workshop on Test Strategy at EuroStar 2014. In his workshop he says: Test strategy contains the ideas that guide your testing effort; it deals with what to test, and how to do it. (Some people mean test plan or test process, which is unfortunate…). It is in the combination of WHAT and HOW that you find the real strategy. If you separate the WHAT and the HOW, it becomes general and quite useless.

What influences the Test Strategy?

Your Test Strategy is influenced by many factors.

  • Context: your testing is influenced by the details of the specific situation, such as the information available, the tester(s) doing the testing, what has been tested before, what tools and environments are available, how much time you have, etc.
  • Missions: what do your stakeholders need to know about the product? 
  • Risks: testing is mostly motivated by problems that might happen (risks). We want to find the important problems in the product. 
  • Product: products have many dimensions. By modelling the product we find important and unique aspects of the product.
  • Quality Criteria: various criteria or requirements that define what the product should be for the stakeholders.
  • Testing: your testing constantly changes the strategy. Each experiment teaches you more about the product and the risks involved.

How to create a Test Strategy?

To create a Test Strategy, you have to examine the factors mentioned above. This can be done in several activities (which do not need to be done in this specific order). Most likely you will do this in an iterative way, building your Test Strategy as you go.

  1. Missions for your testing
  2. Product analysis
  3. Oracles & information sources
  4. Quality characteristics
  5. Context: project environment
  6. Test strategies

I like to use the Heuristic Test Strategy Model (HTSM). It reminds me what to think about when I am creating my Test Strategy and tests.

The HTSM is a model which consists of several sets of heuristics (more about heuristics here and here). The full model can be found here. A small illustrative sketch of how these guideword sets might be used follows the list below.

  • Project Environment helps to understand our context and missions: MIDTESTD (mission, information, developer relations, test team, equipment & tools, schedule, test items, deliverables).
  • Product Elements helps to identify dimensions and factors of the product that could be examined in a test: SFDIPOT (structure, function, data, interfaces, platform, operations, time).
  • Quality Criteria helps to identify value and threats to various criteria of the product: CRISP DUCCS (capability, reliability, installability, security, performance, development, usability, charisma, compatibility, scalability). In this case you could also use the software quality characteristics by the Test Eye.
  • Risk analysis reveals potential problems. Risks motivate your testing, but testing is itself also risk analysis: after analysing potential risks, your testing informs you about actual problems and teaches you new aspects of the product, which helps you identify new risks. Each test has its own strategy: FDSFSCURA (function testing, domain testing, stress testing, flow testing, scenario testing, claims testing, user testing, risk testing, automatic checking). Coming up with good Test Ideas is an important skill. Erik Brickarp has an excellent blog post called How to come up with test ideas.
  • Identifying Oracles and Information sources helps us learn about the product and identify potential problems. To design a good test strategy, we need to know what is important.
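
As a purely illustrative sketch (not part of the original workshop material), the guideword sets above could be captured as a simple checklist to walk through while drafting a strategy. The guidewords come from the HTSM; the structure, names and prompt wording below are assumptions made for this example.

# Hypothetical sketch: walking the HTSM guideword sets while drafting a strategy.
# The guidewords come from the Heuristic Test Strategy Model; the dictionary
# structure and the prompt wording are illustrative assumptions, not a prescribed format.

HTSM = {
    "Project Environment (MIDTESTD)": [
        "Mission", "Information", "Developer relations", "Test team",
        "Equipment & tools", "Schedule", "Test items", "Deliverables",
    ],
    "Product Elements (SFDIPOT)": [
        "Structure", "Function", "Data", "Interfaces",
        "Platform", "Operations", "Time",
    ],
    "Quality Criteria (CRISP DUCCS)": [
        "Capability", "Reliability", "Installability", "Security", "Performance",
        "Development", "Usability", "Charisma", "Compatibility", "Scalability",
    ],
    "Test Techniques (FDSFSCURA)": [
        "Function testing", "Domain testing", "Stress testing", "Flow testing",
        "Scenario testing", "Claims testing", "User testing", "Risk testing",
        "Automatic checking",
    ],
}

def strategy_prompts(model):
    """Yield one question per guideword so no part of the model is skipped."""
    for category, guidewords in model.items():
        for word in guidewords:
            yield f"{category} -> {word}: what does this suggest I should test or ask?"

for prompt in strategy_prompts(HTSM):
    print(prompt)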

Examples of Test Strategy

Before giving you my example, I would like to link to two great examples of how to create a thorough Test Strategy by Rikard Edgren.

The exercise in the workshop was defined as follows:

Product: tinyurl.com/SherlockES

Approach used to create my Test Strategy below:

  1. Look at the mission and the things we know (from the class) [1 min]
  2. Explore the Wix platform website [1 min]
  3. Explore the Wix Casies website and create an SFDIPOT mind map while learning about the product [5-10 min]
  4. Risk analysis [5-10 min]
  5. Think of test ideas / approaches to deal with risks [5-10 min]
  6. Wrap-up. Create testing story about what I know already. List next steps [5 min]
  7. Tidy document and add some comments to make it readable for students

Total time used: 50-60 min

1. What do we know (and what important questions do I still have):

  • No developers available –> No access to code
  • Target customers? Who are they?

(Considering the short time period, I chose not to do a thorough context analysis using MIDTESTD; if I had more time, I would have done so.)

Mission:

Casies is a web shop built with the Wix platform where customers can buy a case for their mobile phone. Your mission is to find problems we want to fix before release. The owner of the website needs information to decide if this web shop can be released.

Most important quality criteria:

  • Usability & charisma
  • Reliability and security of the purchase process
  • Functionality
    • Find, sort & filter
    • Purchase, cart, payment
    • Bestsellers
    • Contact
  • Performance

2. Look at Wix platform site

(url: https://www.wix.com/features/main)

Product exploration: look at website about Wix platform

Claims about the product:

  • Easy Drag and Drop
  • Free & Reliable Hosting
  • App market –> what else is there?
  • Mobile Friendly
  • Loads of templates
  • SEO

3. Explore Wix Casies

Start using the product

Product exploration: play with the Casies website using SFDIPOT

Download Xmind mind map (created in Xmind Zen beta4)

4. Risks

The risks mentioned here are probably too vague in some cases. Since risk analysis is a continuous process, I will update the risks later, making them more concrete and actionable. I will also add more risks while testing.

  • Web shop not available
  • Web shop not easy to use
  • Target customers do not like the web shop
  • Web shop is not secure: customer data accessible by 3rd party
  • Customer cannot add items to cart
  • Customer cannot buy items in cart
  • Customer cannot find the items wanted
  • Web shop is not easy to find

5. Risks – Testing

Test ideas

Used document: “Software Quality Characteristics” by the Test Eye (a rough, illustrative sketch of how the risks above might map to test ideas follows below).

(More info on session based testing: here)
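
Purely as an illustration (this is not the actual workshop output), the risks from section 4 could be paired with candidate techniques from FDSFSCURA as in the sketch below; the pairings are assumptions made for this example.

# Illustrative sketch only: the pairings below are assumptions for this example,
# not the strategy produced in the workshop. Technique names come from FDSFSCURA.

risk_to_test_ideas = {
    "Web shop not available": ["stress testing", "automatic checking (availability)"],
    "Web shop not easy to use": ["user testing", "scenario testing"],
    "Target customers do not like the web shop": ["user testing", "claims testing"],
    "Customer data accessible by 3rd party": ["risk testing (security)", "claims testing"],
    "Customer cannot add items to cart": ["function testing", "flow testing"],
    "Customer cannot buy items in cart": ["flow testing", "scenario testing"],
    "Customer cannot find the items wanted": ["domain testing (sort & filter)", "function testing"],
    "Web shop is not easy to find": ["claims testing (SEO)", "risk testing"],
}

for risk, ideas in risk_to_test_ideas.items():
    print(f"{risk}: {', '.join(ideas)}")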

6. Testing Story & Next Steps

Looking at the website, I found that the web shop doesn’t look complete to me: there is no possibility to check out and pay. Is this okay? To be able to do thorough testing and fulfil the mission “Your mission is to find problems we want to fix before release. The owner of the website needs information to decide if this web shop can be released”, the website needs to be completed and payment functionality needs to be added. I am also interested in the maintenance model: how can I add cases? That would be very handy for creating more test data and playing with parameters to see how they come out in the shop. Does this need to be part of my testing?

The results of my short initial exploration are captured in the SFDIPOT mind map I made while playing & interacting with the product. After that I made an initial risk analysis. I haven’t gone deep on anything yet.

The next step will be to discuss my initial test strategy with the product owner. If the payment module isn’t available soon, I will start with testing the first three charters, although I will not be able to fully do the purchase process charter, so I will have to split this charter and focus on the cart part only.

  • Purchase process: cart & payment, including investigation of fields (using the test heuristic cheat sheet); a sketch of a charter for this session follows the list below
  • Finding cases – sorting & filtering cases
  • GUI tour: check all links and info
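
For readers unfamiliar with session-based test management, the first charter might be written down roughly as in the sketch below. This is an invented example, not one of the actual charters from the workshop; the field names and values are assumptions for illustration.

# Invented example of a session charter for the first item in the list above.
# The field names follow common session-based test management practice; the
# concrete values are assumptions for illustration, not the author's charters.

from dataclasses import dataclass

@dataclass
class Charter:
    title: str
    mission: str
    areas: list[str]
    risks_addressed: list[str]
    timebox_minutes: int = 90

purchase_charter = Charter(
    title="Purchase process: cart",
    mission="Explore adding, changing and removing items in the cart; "
            "investigate input fields using the test heuristic cheat sheet.",
    areas=["cart", "item quantity", "field validation"],
    risks_addressed=[
        "Customer cannot add items to cart",
        "Customer cannot buy items in cart",
    ],
)

print(purchase_charter)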

Used heuristics

Below is an excerpt from the “Software Quality Characteristics” document by the Test Eye, used as heuristics for creating this Test Strategy.

Usability

  • Affordance: product invites to discover possibilities of the product.
  • Intuitiveness: it is easy to understand and explain what the product can do.
  • Minimalism: there is nothing redundant about the product’s content or appearance.
  • Learnability: it is fast and easy to learn how to use the product.
  • Memorability: once you have learnt how to do something you don’t forget it.
  • Discoverability: the product’s information and capabilities can be discovered by exploration of the user interface.
  • Operability: an experienced user can perform common actions very fast.
  • Interactivity: the product has easy-to-understand states and possibilities of interacting with the application (via GUI or API).
  • Control: the user should feel in control over the proceedings of the software.
  • Clarity: is everything stated explicitly and in detail, with a language that can be understood, leaving no room for doubt?
  • Errors: there are informative error messages, difficult to make mistakes and easy to repair after making them.
  • Consistency: behavior is the same throughout the product, and there is one look & feel.
  • Tailorability: default settings and behavior can be specified for flexibility.
  • Accessibility: the product is possible to use for as many people as possible, and meets applicable accessibility standards.
  • Documentation: there is a Help that helps, and matches the functionality.

Charisma

  • Uniqueness: the product is distinguishable and has something no one else has.
  • Satisfaction: how do you feel after using the product?
  • Professionalism: does the product have the appropriate flair of professionalism and feel fit for purpose?
  • Attractiveness: are all types of aspects of the product appealing to eyes and other senses?
  • Curiosity: will users get interested and try out what they can do with the product?
  • Entrancement: do users get hooked, have fun, in a flow, and fully engaged when using the product?
  • Hype: should the product use the latest and greatest technologies/ideas?
  • Expectancy: the product exceeds expectations and meets the needs you didn’t know you had.
  • Attitude: do the product and its information have the right attitude and speak to you with the right language and style?
  • Directness: are (first) impressions impressive?
  • Story: are there compelling stories about the product’s inception, construction or usage?

Reliability

  • Stability: the product shouldn’t cause crashes, unhandled exceptions or script errors.
  • Robustness: the product handles foreseen and unforeseen errors gracefully.
  • Stress handling: how does the system cope when exceeding various limits?
  • Recoverability: it is possible to recover and continue using the product after a fatal error.
  • Data Integrity: all types of data remain intact throughout the product.
  • Safety: the product will not be part of damaging people or possessions.
  • Disaster Recovery: what if something really, really bad happens?
  • Trustworthiness: is the product’s behavior consistent, predictable, and trustworthy?

Security

  • Authentication: the product’s identifications of the users.
  • Authorization: the product’s handling of what an authenticated user can see and do.
  • Privacy: ability to not disclose data that is protected to unauthorized users.
  • Security holes: product should not invite to social engineering vulnerabilities.
  • Secrecy: the product should under no circumstances disclose information about the underlying systems.
  • Invulnerability: ability to withstand penetration attempts.
  • Virus-free: product will not transport virus, or appear as one.
  • Piracy Resistance: no possibility to illegally copy and distribute the software or code.
  • Compliance: security standards the product adheres to.

Performance

  • Capacity: the many limits of the product, for different circumstances (e.g. slow network.)
  • Resource Utilization: appropriate usage of memory, storage and other resources.
  • Responsiveness: the speed of which an action is (perceived as) performed.
  • Availability: the system is available for use when it should be.
  • Throughput: the product’s ability to process many, many things.
  • Endurance: can the product handle load for a long time?
  • Feedback: is the feedback from the system on user actions appropriate?
  • Scalability: how well does the product scale up, out or down?

Final thoughts

As my example shows, you can create a Test Strategy in an hour. Of course this Test Strategy is not complete. But after the first tests (3 sessions) we will learn and discover more about the product, so we can identify new risks, which inform new missions and will help us come up with new Test Ideas. Our Test Strategy will grow over time!

Extra info:

  • The slides are here.
  • The pictures and flipcharts from the workshop are here.


Does certification have value or not?

I read a blogpost in Dutch named “Does certification have value or not?” by Jan Jaap Cannegieter. I wanted to reply, but there was no option to reply, so I decided to turn my comments into a blogpost. Since the original blogpost is in Dutch I have translated it here.

“The proponents claim that with certification you prove to have a foundation in testing, you possess certain knowledge and it supports education.” (text in blue is from the blogpost, translated by me).

Three things are said here:

  1. prove to have a foundation
    Foundation? What foundation? You learn a few terms/definitions and an over-simplified “standard” process? And how important is this anyway? Also, the argument of a common language is nicely debunked by Michael Bolton here: “Common languages ain’t so common”.
  2. possess certain knowledge
    When passing an exam, you show that you are able to remember certain things. It doesn’t prove you can apply that knowledge. And is that knowledge really important in our craft? I think knowledge is overappreciated and skills are undervalued. I’d rather have someone who has the skills to play football well than somebody who only knows the rules. From a foundation training, wouldn’t you at least expect to learn the basic testing skills? In no ISTQB training do students use a computer. Imagine giving someone a driver’s license without them having ever sat in a car…
  3. supports education
    Really? Can you tell me how? I think the opposite is true! As an experienced teacher (I also did my share of certification training in the past), my experience is that there is too much focus on passing the exam rather than learning useful skills. Unfortunately, preparing the students for the exam takes a lot of time and focus away from the stuff that really matters. Time I would rather use differently.

Learning & tacit knowledge

So how do people learn skills? There are many resources I could point to. Try these:

In his wonderful book “The psychology of software testing” John Stevenson talks about learning on page 49:

The “sit back and listen” approach can be effective in acquiring information but appears to be very poor in the development of thinking skills or acquiring the necessary knowledge to apply what has been explained. The majority of trainers have come to realise the importance of hands on training “Learn by doing” or “experiential learning”.

John points to resources like: Learningfromexperience.com and the book “Experiential learning: experience as the source of learning and development” by David Kolb. Also Jerry Weinberg has written books on experiential learning.

The resources on learning skills mentioned earlier will tell you that experienced people know what is relevant and how things are related. Practice, experimentation and reflection are also important parts of learning. Learning a skill depends heavily on tacit knowledge. On page 50 of his book John Stevenson writes:

Päivi Tynjälä makes an interesting comment in the International Journal of Educational Research: “The key to professional development is making explicit that which has earlier been tacit and implicit, and thus opening it to critical reflection and transformation” – This means that what we learn may not be something we can explain easily (tacit) and that as we learn we try to find ways to make it explicit. This is the key to understanding and knowledge: when we take something which is implicit and make it explicit, we are able to reflect on what is learned and explain our understanding.

And since testing is collecting information or learning about a product, the importance of tacit knowledge also applies to testing: John writes in his book on page 197:

“However testing is about testing the information we do not know or cannot explain (the hidden stuff). To do this we have to use tacit knowledge (skills, experience, thinking) and we need to experience it to be able to work it out. This is what is meant by tacit knowledge.”

Back to the blogpost:

“The opponents say certification only shows that you’ve learned a particular book well, it says nothing about the tester’s ability and can be counterproductive because the tester is trained to become a standard tester.”

  • Learned a particular book
    Agree, see arguments 1 and 2 above.
  • It says nothing about the tester’s ability
    Agree, see my argumentation on skills in point 2 above: “knowledge is overappreciated and skills are undervalued”. To learn, we need practice and reflection. Tacit knowledge is also an important part of learning.
  • Trained to become a standard tester
    Agree. No testing that I know of is standard. Testing is driven by context. And testers with excellent skills have the ability to work in any context without using standards or templates. Have a look at the TED Talk by Dr. Derek Cabrera, “How Thinking Works”. He explains that critical thinking is an extremely important skill. Schools (and training providers) nowadays are over-engineering the content curriculum: students do not learn to think, they learn to memorize stuff. Students are taught to follow instructions, like painting by numbers or filling in templates. To fix this, we need to learn how to think better! Learning to paint by numbers is exactly what knowledge-based certification does to testers! Read more about learning, thinking and how to become an excellent tester in one of my earlier blogposts: “a road to awesomeness”.

Comparison with a driving license

Does a driving license show anything? Well, at least you have studied the traffic rules well and know them. And, while driving, it is quite useful if we all use the same rules. If you doubt that, you should drive a couple of rounds in Mumbai.

In testing we should NEVER use the same rules as a starting point. “The value depends on the context!”. Driving in Mumbai, or anywhere, by strictly adhering to the rules will result in accidents and will get you killed. You need the skills to drive a car and to be able to anticipate, observe and respond to unexpected behaviour of others. This is what will keep you out of trouble while driving.

As I explained earlier on the TestNet website, this comparison is wrong in many ways. For a driver’s license, you must do a practical exam. And to pass the practical exam, most people will take lessons! You will be driving for at least 20 hours before your exam. And the exam is not a laboratory: you go on the (real) road in a real car. A multiple-choice exam does not even remotely resemble a real situation. That is also how pointless ISTQB or TMap Next certificates are. Nowhere in the training or the exam does the student use software, nor does the student have to test anything!

This is the heart of the problem! People do not learn how to test, but they learn to memorize outdated theory about testing. Unfortunately in many companies new and inexperienced testers are left unattended in complex environments without the right supervision and support!

So what would you prefer in your project: someone who can drive a car (someone who has the basic skills to test software), or someone who knows the rules (someone who knows all the process steps and definitions by heart)? In addition, ISTQB states here that the training is intended for people with 6 months of experience. So how are new testers going to learn during those first 6 months?

The foundation for a tester?

The argument that the ISTQB foundation training provides a basis for a beginner to start is nonsense! It teaches the students a number of terms and a practically unusable standard process. In addition, there is a lot of theory about test techniques and approaches, but the practical implementation is lacking. There are many better alternatives as described in the resources earlier in this blogpost: learning by doing! Of course with the right guidance, support and supervision. Teach beginners the skills to do their work, as we learn the skills to drive a car in driving lessons. In a safe environment with an experienced driver next to us. Until we are skilled enough to do it without supervision. Sure, theory and explicit knowledge are important, but skills are much more important! And we need tacit knowledge to apply the explicit knowledge in our work.

So please stop stating that foundation training like TMap Next and ISTQB is a good start for people to learn about testing. It isn’t. Learning to drive a car starts with practicing actually driving the car.

Jan Jaap states that he thinks a tester should be certified: “And what about testers? I think that they should also be certified. From someone who calls himself a professional tester we may expect some basic knowledge and knowledge about certain methods?”
I think we may expect professional testers to have expertise in different methods. They should be able to do their job, which demands skills and knowledge. We may expect a bit more from professional testers than only some basic knowledge and knowledge about methods.

“Many of the well-known certification programs originated when IT projects looked very different and, in my view, these programs did not grow with the developments. So they train for the old world.”

Absolutely true.

“Another point where the opponents have a point is the value purchasing departments or intermediaries attach to certificates. In many of the purchasing departments and intermediaries, the attitude seems to be that if someone has a certificate, he is also a good tester. And to say that, more is needed.”

It is indeed very sad that this is the main reason why certificates are popular. Many people get certified because of the demand from organisations that do not recognise the true value of these certificates. Organisations are often not able (or do not want to spend the time needed) to recognise real professional testers, so they rely on certificates. On how to solve this problem, I did a webinar “Tips, Tricks & Lessons Learned for Hiring Professional Testers” and wrote an article about it for Testing Circus.

Learning goals & value

On the ISTQB website I found the Foundation Level learning goals. Let’s have a look at them. Quotes from the website are in purple.

Foundation Level professionals should be able to:

  • Use a common language for efficient and effective communication with other testers and project stakeholders.
    Okay, we can check with an exam whether the student knows how ISTQB defines stuff. However, understanding what it means or how to deal with it in daily practice is very different. Also, again, a common language is a myth.
  • Understand established testing concepts, the fundamental test process, test approaches, and principles to support test objectives.
    Concepts and test process: okay, you can check if a student remembers these. However, the content is old and outdated and in many places incorrect! I think understanding approaches cannot be checked in a (multiple-choice) exam. Maybe some definitions, but how to apply them? No way.
  • Design and prioritize tests by using established techniques; analyze both functional and non-functional specifications (such as performance and usability) at all test levels for systems with a low to medium level of complexity.
    Design and prioritize tests? Interesting. Where is this trained? Or tested in the exam? Analyse specifications? That is not even part of the training. Applying some techniques is, but there is a lot more to designing and prioritizing tests and analysing specifications.
  • Execute tests according to agreed test plans, and analyze and report on the results of tests.
    Neither execution of tests nor analysis or reporting of test results is part of the exam. In class only the theory about test reporting is discussed; it is never practiced.
  • Write clear and understandable incident reports.
    How do you check this with a multiple-choice exam? And how do you train this skill without actually testing software in class? There are no exercises in class that actually ask you to write such reports.
  • Effectively participate in reviews of small to medium-sized projects.
    The theory about reviews is part of the class. To effectively participate in reviews, you need to do it and learn from experience.
  • Be familiar with different types of testing tools and their uses; assist in the selection and implementation process.
    Some tools and their goals and uses are mentioned in class. So I will agree with the first part. But to assist in selection and implementation, again you need skills.

So looking at the learning goals above, I doubt whether the current classes teach this. The exam certainly doesn’t prove that a foundation level professional is able to do these things. A lot of promises that are just wrong! Certification trainings like ISTQB-F and TMap, as they are now, are simply not worth the money! The training and exam typically take 3 days and cost around 1,700 euros in the Netherlands. I think that is a crazy investment for what you get in return… There are better ways to invest that money, time and effort!

I think that a more valuable 3-day foundation training is doable, but surely not the way it is done now by TMap or ISTQB. I wrote a blog post about it years ago: “What they teach us in TMap Class and why it is wrong!”.



Test improvement in an agile/CDT environment

This post and the article have been updated on April 4 2017.

One day during a team meeting at Joep’s previous job at a bank, the Team Manager of Testing listed a number of topics his testers could work on in the coming months. One of those topics was “testing maturity”. This topic was on the list not because this manager was such a fan of maturity models, but because the other team managers (Business Analysis and Development) had produced one for their own teams and higher management would like to have one for testing as well. And although Joep saw little value in a classic five-tiered maturity model either, he was intrigued by the question: so what can you do with respect to maturity models that is of value?

Joep asked Huib to help him think of a way to create a valuable, context-driven way to work on maturity. Since Huib had been working for the same bank, they met and discussed the possibilities. Soon they found out that the criteria should be variable since maturity depends on context. They started experimenting with stack ranking and quite soon they had the first version of their “maturity model”.

Maturity or improvement?

After discussing the first version of the article with James and Michael, we felt the need to update our article. Their comments helped us realize that we needed to explore maturity and maturity models a bit more. After doing this, we decided to rename our model into a test improvement model.

Maturity mission: better testing

What is the mission of a maturity assessment? We think the assessment should be a pathway to better testing. As part of solving problems, we think the mission should be: “An investigation of strengths and weaknesses. A starting point for a discussion about potential (testing) problems and how to solve them.” Or as James Bach says: a maturity model is a plan for achieving maturity. And this is exactly what we created. Our maturity model isn’t anything like the staged, fixed models available in the market. Maybe we shouldn’t call our method a maturity model, since basically it isn’t one. It is a tool designed to help teams assess and improve their testing. It is a method supported by a card game that helps teams retrospect and identify strengths and weaknesses in their way of working, the stuff they create, the team, their skills and their context.

Result

After a first try-out at the bank where Joep worked, we let it rest for a while. After a couple of months we wrote this article. It is the first version and it needs to be refined and polished. The heuristics lists are probably too long and need to be reduced. We think of this model as a card game that can be played with teams.

Currently we are also working on an agile version of this model, a card game for agile teams to assess their “maturity” to help them to find possible areas for improvements. More about that later.

We are curious about your thoughts. What do you think? Maybe you want to try the game? Feel free to try it out. We hope you will share your experiences with us.

Article (pdf) – Card game (pdf)

The slides of the meetup about our model are here.

Help Linnea

“There is a saying that it takes a whole village to raise a child. Now we need a whole village to save our Linnea”

Linnea, Kristoffer Nordström’s daughter, is five and a half years old and comes from Karlskrona in Sweden. Her world revolved up until recently around My Little Ponies, riding her bicycle and popcorn… lots of popcorn. She has one best friend: her beloved big brother Kristian.
That was her world – until a few months ago when she suddenly and shockingly became afflicted, and got emergency surgery for a brain tumor.
After the operation, we hoped that the bad news would end. But now the family lives in the hospital and has been told that the tumor is an aggressive variety called DIPG (Diffuse Intrinsic Pontine Glioma). The short story is that there is a heart-breakingly minimal chance of survival using established treatments.

There is a possible treatment that we are now aiming for: one in which the tumor is treated through catheters implanted directly into the tumor. Studies and reports show that such a direct treatment gives Linnea the best chance of one day becoming healthy. The cost of the treatment and the journeys is very high, higher than the average person can pay for: £65,000 for the first operation and then £6,500 for treatments thereafter. In the current situation, it is unclear how many of these Linnea will need.

Please help Kristoffer and his family!

Update July 3 2017

Great news!! The treatment seems to work, Kristoffer writes on his Facebook: http://www.facebook.com/kristoffer.nordstrom.792
We know a lot of people are waiting for this so me and Giedre want to give you the fantastic news.
Linnea had her third Intra Arterial treatment today and all went well without complications, this was the first time that we added the immunotherapy to the treatment, Linneas immune system is now being taught how to recognise and attack the tumor cells itself (Autologous Dendritic Cell Immunotherapy)!
The amazing news of today is that the treatment continues to do its job and we now see further shrinkage in the tumor from the last treatment three weeks ago.
The doctors see a distinct reduction in size since last time!
We now know that this is a treatment that is working for Linnea and for the other children here in Mexico, there are now over 30 families from all over the world here.
The treatment is very effective, but also very expensive, with the combined Immunotherapy and Chemotherapy the cost is 30.000USD (250.000SEK) every third week, later once Linneas immunesystem is trained the treatment will go back to chemotherapy only.
We realise we need to do this for an unforseen time going forward and looking at the costs of each treatment and our budget we need to ask all of you who have helped us get here to help us even further in saving our daughter.
Any donations, big or small, are more than welcome, you can help us at our fundraising site.
Thank you so much everyone who has helped us get here.
Swish donations from Sweden are also very welcome, the number is: +46 723 58 09 53

A Road to Awesomeness

At TestBash Manchester I did a presentation titled “A road to awesomeness”. In this talk I tried to explain how testers can become awesome. I talked about learning and testing skills. To prepare this talk I created a mind map to list skills and ways to learn. The mind map is here and will be continuously updated in the future. Let me know if you have things to add.

The 4-hour tester

In another talk at TestBash Manchester, Helena Jeret-Mäe and Joep Schuurkes talked about the 4-hour Tester Experiment. They took on the challenge of teaching new testers useful skills in only 4 hours. They focus on 5 skills:

  • Interpretation
  • Modeling
  • Test design
  • Note taking
  • Bug reporting

Their experiment is an interesting attempt to teach software testers essential skills in only 4 hours. Of course it is impossible to teach anyone to test in 4 hours. But the 4-hour tester offers simple exercises that take at most one hour to complete. It is an interesting approach to help inexperienced testers get started.

Learning

To become awesome, you have to learn a lot. But how do we learn? I like the 10/20/70 model: 70 percent of learning comes from job-related experiences, 20 percent from interactions with others, and 10 percent from formal education. So training is only a small part of a learning journey. Interaction with others, like communities and networking, coaching and mentoring, is a part of it as well. But the biggest part of learning comes from work-related experience. This doesn’t mean just working. Many years of working doesn’t necessarily make you learn effectively if you do not get feedback and reflect on what you do in the right environment. Doing the same work for 10 years doesn’t give you 10 years of useful experience. If you never retrospect, reflect or get any feedback, I’d say you have 10 times 1 year of experience. To make learning effective in your job you need:

  • Concrete, challenging and achievable tasks
  • Realistic application of skills, processing and reflection
  • Personal interpretation, exchange with others and constructive feedback
  • A safe environment to experiment and make mistakes

But there is more to learning. In her TED Talk “Learning how to learn”, Barbara Oakley talks about two fundamental learning modes of our brain: the focused mode and the more relaxed diffuse mode. When you are learning, you want to go back and forth between these two modes. When you are stuck, you defocus, going into the diffuse mode and generating new ideas. If you want to learn more about this, I recommend the Coursera MOOC “Learning How to Learn: Powerful mental tools to help you master tough subjects”. Two of the things I took from that course are that active listening (e.g. by asking questions) is far more effective, and that learning by doing is a great way to learn fast.

Learning is the most important skill for a tester. And learning is closely related to thinking skills. I’ll come to that later.

What makes an awesome tester?

An awesome tester has many skills. In my blogpost “Heuristics for recognizing professional testers” I described 18 heuristics for recognizing professional testers. This was the starting point for my talk at TestBash Manchester. The first step in thinking about what it takes to become awesome was creating a mind map with a couple of things to focus on: what makes an awesome tester: who (characteristics), what (skills) and how (ways to learn and how to become an expert).

There are many skills that make an awesome tester. I think the most important skill is the ability to learn! Remember the definition of testing we use in Rapid Software Testing: “Testing is evaluating a product by learning about it through exploration and experimentation”. Besides learning I identified 5 categories of skills testers need:

  • Thinking skills
  • Testing skills
  • Technical skills
  • Domain skills
  • Soft skills

And of course there are many, many skills that make up these categories. Have a look at the mind map to learn more about them. It is very important to recognise which skills are involved in being a great tester. If you want to learn, you need to know what to focus on. Not knowing which skill to train will result in unfocused and ineffective learning. I see many testers apply test techniques without knowing what skills are used and which methods underlie the technique. This makes them apply techniques as recipes or tricks. Being able to apply approaches, tactics and techniques effectively in any situation requires the right skills.

Thinking skills

Learning and thinking are closely related. While researching thinking and thinking skills, I stumbled upon a great TED talk by Dr. Derek Cabrera called “How Thinking Works”. He talks about the schooling system and about 4 universal thinking skills. Schools nowadays are over-engineering the content curriculum: students do not learn to think, they learn to memorize stuff. Kids are taught to follow instructions, like painting by numbers. To fix this, we need to learn how to think better, and for that he suggests four thinking skills: DSRP, a simple set of rules for becoming better at systems thinking.

  • Making Distinctions – Any idea or thing can be distinguished from other ideas or things
  • Organizing Systems – Any idea or thing can be split into parts or lumped into a whole
  • Recognizing Relationships – Any idea or thing can relate to other ideas or things
  • Taking Perspectives – Any idea or thing can be the point or the view of a perspective

In his book “Systems thinking made simple”, Dr. Cabrera talks about metacognition. “Systems thinking is a particular type of metacognition that focuses on and attempts to reconcile the mismatch between one’s mental models and how the real world works”, he continues, describing acts of metacognition: “Awareness that everything you experience is a mental model that approximates (either poorly or well) the real world”, “The ability to distinguish among cognition (thinking), emotion (feelings), and conation (motivations) and the awareness of how these different human capacities influence our mental models and behavior”. Recognizing that models are a big part of our thinking makes you a better tester. The insight that your biases and emotions influence your thinking is also something we have to take into account in our testing.

A second TED talk I recommend is “How to think, not what to think” by Jesse Richardson. He says: “The way we approach education needs to adapt! What’s different in teaching children how to think, we are involving them in the process of their own learning. Instead of just telling them to memorize the right answer, we ask them to engage their own minds, their own awareness by questioning things, attaining understanding not just knowledge. And that involvement, that engagement, is so important, because it keeps the spark of curiosity alive.”

He mentions the famous TED Talk by Ken Robinson: “Do schools kill creativity?“. One of my all time favorites.

Activities in testing

To fully understand what skills we use in testing, I made a list of testing activities. In Rapid Software Testing we teach the 9 Elements of testing and I took these as a starting point.

  1. Model the test space and risks
  2. Determine coverage
  3. Determine oracles
  4. Determine test procedures
  5. Configure the test system
  6. Operate the test system
  7. Observe the test system
  8. Evaluate the test results
  9. Report test results

The next step was to zoom in on the different elements in the list. Zooming in on “Model the test space and risks” we can distinguish the following (a small sketch of this factoring follows the list):

  • Context Analysis
  • Creating a product Coverage Outline
  • Test Plan
  • Test scope
  • Risk & Value Analysis
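
Purely as an illustration of this factoring exercise (the real mind map is far larger), the structure might be captured like this, with only the first element expanded as in the list above:

# Illustrative sketch of the factoring described above: the nine RST elements
# of testing, with only the first element expanded the way the text does.
# The real mind map contains far more detail; this only shows the shape.

testing_activities = {
    "Model the test space and risks": [
        "Context analysis",
        "Creating a product coverage outline",
        "Test plan",
        "Test scope",
        "Risk & value analysis",
    ],
    "Determine coverage": [],
    "Determine oracles": [],
    "Determine test procedures": [],
    "Configure the test system": [],
    "Operate the test system": [],
    "Observe the test system": [],
    "Evaluate the test results": [],
    "Report test results": [],
}

for element, activities in testing_activities.items():
    print(element)
    for activity in activities:
        print("  -", activity)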

I did this with all 9 elements and came up with a pretty long list of activities. Maybe you can think of more activities testers do; let me know if you do.

This (making distinctions, or factoring) exercise resulted in a huge list of things we do in testing and products we make. The next step was to list the skills that we use doing these activities. Obviously the first things I thought of were learning and thinking. Researching this, I stumbled upon a lot of interesting stuff, part of which I have described earlier. But there was more, way more, as you can see in my mind map. And this mind map expands every time I visit it. Factoring the activities and skills helps me see what is really going on while we are testing.

So how do you become awesome?

So now we have a map of activities and skills. But how do we learn them? And where do we start?

The most important part of becoming awesome is knowing what to learn and how. The first step is to know what to learn and focus on that. Stop trying to learn 10 things at the same time; it doesn’t work! A way to get to know what you want to learn is writing a Personal Development Plan answering 5 simple questions: Who am I? What are my skills? What do I want? What do I need? How do I get there? This can give you great insight into your skills, ambitions and learning needs. The second step is to find mentors who can help you and give you feedback. After that it is “just” a matter of doing it and practicing!

I learned to be awesome by doing many things:

  • Observing others and myself: by becoming a manager and a coach I found out that observing others gives you great insight into how testing is done and what the common problems are for people who are learning how to test. By observing myself, which is quite difficult in the beginning, I learned what I was doing, and that turned out to be a big eye-opener and learning enabler. Using a journal and writing down stuff every day on my way home gave me great insight into what my problems were. Knowing your problems is the first step to solving them.
  • Explaining, presenting, teaching & coaching: by explaining stuff to others, you learn to structure your thoughts. To be able to present or teach you really have to know your stuff. And still, every time I teach, I learn new things about what I teach: new examples, new problems, new ways of doing things. Answering difficult questions is also very helpful for learning deeply about topics. That is why I think we should encourage debate and questioning at conferences way more.
  • Pairing: pairing is a fun and nice way to learn from others. Seeing how others do things is helpful and has many learning opportunities. Also the feedback you get from your pairing-partner is valuable.
  • Writing my blog: same thing as explaining and teaching. Writing this blog post made me think about what to write and how to explain it. I needed to structure my thoughts. Great exercise.
  • Keeping a journal: see observing others and myself.
  • Always having a notebook with me: to practice note taking; it also helps me not to forget interesting stuff that others tell me. It is great to read back notes from conferences and courses from way back. It refreshes my memory and I often see new insights I didn’t see before.
  • Discussing & debating testing: see explaining.
  • Trying new stuff: experimenting is fun and also a way to learn about new techniques, tools, approaches, etc.
  • My coaches and mentors: I have had and still have many mentors. All these people help me to become better every day. It is great to have many mentors, each with their own expertise and specialties. I have a big international network of people I know and who I work with. So many sources of knowledge and skill help me quickly find an answer to almost every difficult testing problem I have.

My advice is to try the same exercise I did when creating the mind map. Create your own model of test activities and skills. Observe what you do and how you do it, and make your own mind map (or model it in any other way you like). Then come back and compare yours to my mind map.

So why is this important?

I see many similarities between the problems in the current schooling system and in testing: over-engineering the content curriculum, students who do not learn to think but learn to memorize stuff and to follow instructions (like painting by numbers). In most testing classes testers learn tricks, procedures and the use of template-like approaches instead of the skills required to do an excellent job. In Rapid Software Testing (RST) we teach thinking skills. We teach testers how to think effectively and solve problems instead of using tricks, templates and standards to do their work. By learning to think better, you learn better. And isn’t testing all about learning? Also, if you are able to think, you are able to do better work and apply your skills in the right way: adapt to a changing context and find your own solutions to problems that occur. We also teach testers to dynamically manage their focus: focus/defocus is similar to the fundamental learning modes Barbara Oakley talks about in her TED Talk.

The DSRP model (distinctions, systems, relationships, perspectives) is very useful in testing. In RST we talk about models and teach our students how to model (distinctions, systems and relationships) and how to think from different perspectives, using a diverse strategy and heuristics. It is great to see that the stuff we teach is actually backed up by scientists who study metacognition (thinking about thinking) and epistemology (the study of knowledge). Making a product coverage outline, for instance, helps you see distinctions and relations and supports systems thinking.

Good luck on your journey to awesomeness!

References

The slides I used in Manchester are here: slideshare.
The updated slides are here: pdf.

All talks at TestBash are recorded. My talk is here.

Context Driven Clichés

Today I read a blogpost by Olaf Agterbosch on the ViQiT site with the intriguing title “Context Driven Clichés“. Since it is written in Dutch, I will first translate his post.

Do you recognize the following? In recent years I have often heard the words “Context Driven” when it comes to developments in ICT. Apparently, those using them consider the context in their activities, in other words the overall environment in which their activities take place. As if nobody kept that in mind before, these people talk about the importance of the environment and the way their products, services and processes have to connect to the context.

I sometimes wonder why people choose to kick in an open door and yell how well it’s working. But what’s even worse: people parroting it. Everyone jumps on the bandwagon and before you know it the latest hype is born. A hype that is fully wrung out by, for example, consultants, who will not fail to emphasize the importance of this development. You really cannot live without it!

Old wine in new bottles. Old wine in Agile bags. Old wine in faded bags…

Apparently there is marketing potential.

Context Driven testing is exemplary of an overhyped branch on the Agile tree. The nice thing is that there is nothing wrong with it: you are aware of the environment in which you perform your job. What I regret is that many people get sucked into this development, running along with the latest hype.

Does it pay? Possibly. A prediction of the trends for the coming years; we will be occupied with:

Context Driven Requirements engineering;
Context Driven Application Management;
Context Driven Directed Lining;
Context Driven Project Management;
Context Driven CRM;
Context Driven Innovation;
Context Driven Migration;
Context Driven Whatever etc.

What did you say? We have been doing that for a long time already, haven’t we? We obviously haven’t been paying attention.

Now let us just go back to work. And yes, work like the good ones among us have always done it: in a Context-Driven way. Or Risk-Based. Or Business Case Driven. But please stop those cheap semantic tricks. Try to be less weighty when performing the same trick again. The trick won’t get better doing this. Try to simply create better real added value. Perhaps less familiar to you and others, but effective!

Not sure what his problem with context-driven is…

Fortunately, I hear about people becoming more and more context-driven. To me, being context-driven is not just keeping the context or environment in mind; it is way more than that… As I wrote in a post on the DEWT blog: “Context-driven testing made my testing more personal. Not doing stuff everybody does, but it encouraged me to develop my own style. It is a mind set, a paradigm and a culture. It is not only about what I do, it is more about who I am!”

A hype? Maybe, because it gets more and more attention. Although I think it isn’t. Far from it! A hype, according to The Free Dictionary, is:

  1. Excessive publicity and the ensuing commotion
  2. Exaggerated or extravagant claims made especially in advertising or promotional material
  3. An advertising or promotional ploy
  4. Something deliberately misleading; a deception

I can’t see why context-driven testing would be a hype. Exaggerated? Misleading? A promotional ploy? Excessive publicity?

Old wine in new bags? I don’t think so. The saying means “things are presented differently, but not fundamentally changed”. I think context-driven testing is fundamentally different. Of course testers have been taking context into consideration for years. But how well do they do that? I still hear things like “that’s how we do things here” or “you have to play by the standard/procedure” quite often. I almost never hear testers speak about serious context analysis and adapting their approaches accordingly. But there is a lot more to context-driven testing than taking context into consideration. To name a few aspects:

  • developing and using skills to effectively support the software development project
  • teaching testers to be less dependent on documentation
  • modeling various aspects of the software product by mapping them
  • diversifying tactics, approaches and techniques
  • thinking skills like logical reasoning, using heuristics and critical thinking
  • dealing with complexity and ambiguity, coping with half answers and changes

In the last paragraph Olaf states that the good testers among us have always been using a context-driven approach. Really? How does he define good testers? And if they use a context-driven approach, why is he complaining? Unfortunately, I see too many testers not using context-driven approaches and creating a lot of waste!

Then he continues: “try to be less weighty when performing the same trick again. The trick won’t get better doing this. Try to simply create better real added value”. If you look at testing as performing a trick, I can see why Olaf sees context-driven testing as a cheap semantic trick! The opposite is true: adding real value is exactly what context-driven testing is trying to do. Context-driven testers do this by focusing on their skills, using heuristics, considering cost versus value in everything they do, continuously learning through deliberate practice, etc. When I look at the most commonly used test approaches in the Netherlands (TMap Next and ISTQB), I wonder how they add value by focusing on standards, document-heavy procedures, the use of many templates, best practices and using the same approach every time.

That leaves me with one question… What exactly is Olaf’s problem with context-driven testing?

Too controversial?

On May 11, 2016, TestNet (*) held its spring conference with “Strengthen your foundation: new skills for testers” as the central theme. The call for papers that was sent out made me frown. It said:

“In the final keynote of the TestNet autumn event, speaker Rini van Solingen referred to the end of software testing as we know it. ‘What one can learn in merely four weeks, does not deserve to be called a profession’, he stated. But is that true? Most of our skills, we learn on the job. There are many tools, techniques, skills, hints and methods not typical for the testing profession but essential for enabling us to do a good job nonetheless. Furthermore the testing profession is constantly evolving as a result of ICT and business trends. Not only functional testing, but also performance, security or other test varieties. This presses us to expand our knowledge, not just the testing skills, but also of the contexts in which we do our jobs. The TestNet Spring Event 2016 is about all topics that are not addressed in our basic testing course, but enable us to do a better job: knowledge, skills, experience.”

I think that there are a lot of skills that are not addressed in our “basic testing course” where they should have been addressed. I am talking about basic testing skills! So I wrote an abstract for a keynote for the conference:

The theme for the spring event is “Strengthen your foundation: new skills for testers”. My story takes a step back: to the foundation! Because I think that the foundation of most testers is not as good as they think. The title would then be: “New skills for testers: back to basics!”

Professional testers are able to tell a successful story about their work. They can cite activities and come up with a thorough overview of the skills they use. They are able to explain what they do and why. They can report progress, risk and coverage at any time. They will gladly explain what oracles and heuristics they use, know everything about the product they are testing and deliberately try to learn continuously.

It surprises me that testers regularly can’t give a proper definition of testing, let alone describe what testing is. A large majority of people who call themselves professional testers cannot explain what they do when they are testing. How can anyone take a tester seriously if he or she cannot explain what he or she is doing all day? Try it: go to one of your testing colleagues and ask what he or she is doing and why it contributes to the mission of the project. Nine out of ten testers I’ve asked this simple question start to stutter.

What exactly do you do when you use a “data combination test” or a “decision table”? What skills do you use? “Common sense” in this context does not answer the question, because it is not a skill, is it? I think of modeling, critical thinking, learning, combining, observing, reasoning and drawing conclusions, just to name a few. Looking in detail at what skills you are actually using helps you recognize which skills you could or should train. A solid foundation is essential to build on in the future!

How can you learn the right skills if you do not know what skills you are using in the first place? In this presentation I will take the audience back to the core of our business: skills! By recognizing our skills and training them, we are able to think and talk about our profession with confidence. The ultimate goal is to tell a good story about why we test and the value it adds.

We need a solid foundation to build on!

My keynote wasn’t selected. So I sent it in as a normal session, since I really am bothered by the lack of insight in our community. But it didn’t make it onto the conference program as a normal session either. Why? Because it was too controversial, they told me. After I applied for the keynote, the chairman called me to tell me that they weren’t going to ask me to do a keynote because they didn’t want a “negative” sound on stage. I can imagine that you do not want to start the day with a keynote that destroys your theme by saying that we need to strengthen our foundation first before moving on.

But why is this story too controversial for the conference at all? I guess it is (at least in the eyes of the program committee) because we don’t like to admit that we lack skills, that we don’t really know how to explain testing. I wrote about that before here. It bothers me that we think our foundation is good enough, while it really isn’t! We need to up our game, and being nice and ignoring this problem isn’t going to help us. A soft and nice approach doesn’t wake people up. That is why I wanted to shake things up a bit: to wake people up and give them some serious feedback… I wrote about serious feedback before here. But the Dutch Testing Community (represented by TestNet) finds my ideas too controversial…

(*) TestNet is a network of, by and for testers. TestNet offers its members the opportunity to maintain contacts with other testers outside the immediate work environment and share knowledge and experiences from the field.

Must read: A Context-Driven Approach to Automation in Testing

Test automation is a hot item in our industry. Many people talk about it and much has been written on the topic. Sadly, there are still a lot of misconceptions about test automation. Some people even say that context-driven testing is anti test automation; I think that is not true. Context-driven testers use different names for it and they are more careful when they speak about automation and tooling to aid their testing. Context-driven testers have also been fighting the myth that testing can be automated for years. In 2009 Michael Bolton wrote his famous blog post “Testing vs. checking“, later followed up by “Testing and checking refined“ and “Exploratory testing 3.0“. These tremendously important blog posts teach us how context-driven testers define testing and that testing is a sapient process: a process that relies on skilled humans. Recently Michael Bolton and James Bach published a white paper to share their view on automation in testing, a vision of test automation that puts the tester at the center of testing. It is a must read for everyone involved in software development.

The following text is taken from the “A Context-Driven Approach to Automation in Testing” white paper, written by James Bach and Michael Bolton.

We can summarize the dominant view of test automation as “automate testing by automating the user.” We are not claiming that people literally say this, merely that they try to do it. We see at least three big problems here that trivialize testing:

  1. The word “automation” is misleading. We cannot automate users. We automate some actions they perform, but users do so much more than that.
  2. Output checking can be automated, but testers do so much more than that.
  3. Automated output checking is interesting, but tools do so much more than that.

Automation comes with a tasty and digestible story: replace messy, complex humanity with reliable, fast, efficient robots! Consider the robot picture. It perfectly summarizes the impressive vision: “Automate the Boring Stuff.” Okay. What does the picture show us?

It shows us a machine that is intended to function as a human. The robot is constructed as a humanoid. It is using a tool normally operated by humans, in exactly the way that humans would operate it, rather than through an interface more suited to robots. There is no depiction of the process of programming the robot or controlling it, or correcting it when it errs. There are no broken down robots in the background. The human role in this scene is not depicted. No human appears even in the background. The message is: robots replace humans in uninteresting tasks without changing the nature of the process, and without any trace of human presence, guidance, or purpose. Is that what automation is? Is that how it works? No!

The problem is, in our travels all over the industry, we see clients thinking about real testing, real automation, and real people in just this cartoonish way. The trouble that comes from that is serious…

Read more in the fabulous white paper “A Context-Driven Approach to Automation in Testing” by James Bach and Michael Bolton.
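To illustrate what “automated output checking” looks like in practice, here is a minimal, hypothetical sketch in Python (my own example, not taken from the white paper): a check applies a decision rule to an observation and reports a verdict. Everything around it, deciding what to check, choosing the oracle, and noticing what the check cannot see, is still human testing.

```python
# A minimal, hypothetical example of an automated output check (not from the white paper).
# The machine applies a decision rule to an observed output and reports a verdict.

def check_discounted_price(observed: float, expected: float, tolerance: float = 0.01) -> bool:
    """Return True if the observed price matches the expected price within tolerance."""
    return abs(observed - expected) <= tolerance

# Pretend these values came from the product under test and from an oracle
# that a human chose (and might be wrong about).
observed_price = 45.00
expected_price = 45.00

print("CHECK PASSED" if check_discounted_price(observed_price, expected_price) else "CHECK FAILED")

# What the check cannot do: notice that the page took 30 seconds to load,
# that the layout is broken, or that the requirement itself makes no sense.
# Recognizing and investigating such problems is testing, not checking.
```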

State of Testing survey 2016


This survey seeks to identify the existing characteristics, practices and challenges facing the testing community, in the hope of shedding light on them and provoking a fruitful discussion towards improvement. The State of Testing survey is a collaboration between the folks at QA Intelligence and TeaTime with Testers. Last year’s edition was very successful with almost 900 participants, and with your help they hope this year’s survey will be even bigger by reaching as many testers as possible around the world!

You can find the 2016 survey here.

If you want to know what this survey is all about, have a look at the results of previous years:

 

Why testers are not taken seriously…

Some time ago I was invited to talk to a group of testers at a big consultancy firm in the Netherlands. They wanted to learn more about context-driven testing. I do these kinds of talks on a regular basis, and during these events I always ask the audience what they think testing is. It surprises me each time that they cannot come up with a decent definition of testing. But it gets worse when I ask them to describe testing. The stuff most people come up with is embarrassingly bad! And it is not only them: a big majority of the people who call themselves professional testers are not able to explain what testing is and how it works…

How can anybody take a tester seriously who cannot explain what he or she is doing all day? Imagine a doctor who tells you he has to operate on your knee.

Doctor: “I see there is something wrong there.”
Patient: “Really? What is wrong, doctor?”
Doctor: “Your knee needs surgery!”
Patient: “Damn, that is bad news. What are you going to do, doctor?”
Doctor: “I am going to operate on your knee! You know, cut you with a scalpel and make it better on the inside!”
Patient: “Okay… but what are you going to do exactly?”
Doctor: “Euh… well… you see… I am going to fix the thingy and the whatchamacallit by doing thingumabob to the thingamajig. And if possible I will attach the doomaflodgit to the doohickey, I think. Get it?”
Patient: “Thank you, but no thanks, doctor. I think I’ll pass.”

But it is much worse… Many professional testers have trouble explaining what they are testing and why. Try it! Walk up to one of your tester colleagues and ask what he or she is doing and why. Nine out of ten testers I have asked this simple question begin to stutter.

How can testers be taken seriously, and how can they learn a profession, when they cannot explain what they do all day?

(Image: Albert Einstein quote, “If you can’t explain it simply, you don’t understand it well enough.”)

Only a few testers I know can come up with a decent story about their testing. They can name activities and come up with a sound list of real skills they use. They are able to explain what they do and why. At any given time they are able to report progress, risks and coverage. They will be happy to explain which oracles and heuristics they are using, they know what the product is all about, and they practice deliberate continuous learning. In the Rapid Testing class (in NL) we train testers to think and talk about testing with confidence.

How about you? Can you explain your testing?
