Why Every Tester Should Learn Pairwise Testing


The promise of pairwise testing is that it can help you reduce the number of test cases you need to execute without significantly increasing the risk that you’ll miss a critical defect. While this may be true in some situations, it’s not true in all situations (see “Pairwise Testing: A Best Practice That Isn’t” by James Bach and Patrick Schroeder).

I believe that learning about pairwise testing actually offers a much more important benefit to testers and developers that can pay dividends throughout your entire career in software.

There are plenty of explanations about pairwise testing on the web, so I won’t repeat them here.  If you’re not familiar with the technique, I suggest you start here, and then come back to this article.

My First Experience with Pairwise Testing

I had the good fortune to learn about pairwise testing very early in my test career while working on the Windows Shell team. At first, there was lots of FUD around how effective the technique would be, how we might miss important bugs, etc. In the end, however, it proved to be an incredibly effective technique for finding lots of bugs across the huge surface area of the Shell in relatively little time. We built a custom tool that let a tester specify variables and possible values for those variables, prioritize them, and even link variables together.  For example, a certain feature would only be available if the system had a 32-bit processor, but not if it had a 64-bit processor. We used the tool to generate a pseudo-random selection of test cases to execute for each regression test pass. When a generated case led us to a defect, we could mark it to be preserved for future test passes. Commercially available tools today offer similar features (and probably work a lot better than ours did!).

Believe it or not, accelerated bug-finding was not actually the biggest benefit of pairwise testing.  More importantly, we saw a shift in how testers thought about breaking down complex systems into components, and this led to more effective testing overall. This benefit carried forward into all types of test planning, even when pairwise testing wasn’t used.

4 Steps to Defining a Pairwise Test Plan

Before I go into more details about the derivative benefits of learning pairwise testing, let’s review the steps for creating a pairwise test plan:

  1. Identify all the potential variables in the system under test.
  2. List all the possible values or states for those variables individually.
  3. Determine which combinations might be invalid and indicate this in the test plan so invalid pairs aren’t generated.
  4. Add weights and priorities to the states based on your best understanding of how the system will be used, business priorities, etc.

Step 1 is about identifying all the “moving parts” or “things that can vary” in the system.  These can be explicit variables, such as parameters passed into an API or “implicit variables” such as the position of a particular node in a linked list.

Step 2 is about identifying all the possible ways each of those variables could change.

Step 3 is about winnowing down your matrix to only the possible states.  This step shouldn’t weed out possible error states, because those are extremely important to test. Rather, this is about weeding out the impossible states.

Finally, Step 4 is about prioritizing your testing to help guide defect-finding to the most likely or most impactful areas first.

With these steps completed, your pairwise testing tool will generate a reduced set of test combinations to be executed. But the value doesn’t end here.
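To make the reduction concrete, here is a minimal sketch of the all-pairs idea in Python. The parameter names and values are invented for illustration, and the greedy selection is deliberately naive; real pairwise tools are far smarter about minimizing the suite.

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedily pick rows from the full cartesian product until every
    pair of values (across every pair of parameters) is covered.
    Not optimal, but it demonstrates the all-pairs reduction."""
    names = list(params)
    # Every (param, value) x (param, value) pair we must cover.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in params[a]
        for vb in params[b]
    }
    suite = []
    for row in product(*params.values()):
        case = dict(zip(names, row))
        hits = {
            ((a, case[a]), (b, case[b]))
            for a, b in combinations(names, 2)
        } & uncovered
        if hits:                      # keep rows that cover something new
            suite.append(case)
            uncovered -= hits
        if not uncovered:
            break
    return suite

# Invented example parameters, purely for illustration.
params = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "os": ["Windows", "macOS"],
    "locale": ["en-US", "de-DE"],
}
suite = pairwise_suite(params)
print(len(suite), "cases instead of",
      len(params["browser"]) * len(params["os"]) * len(params["locale"]))
```

Even this naive version covers all value pairs with fewer cases than the full 12-row product; production tools add weighting, constraints, and much tighter minimization on top of the same core idea.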

The Derivative Value of Pairwise Testing

At this point, you’ve actually done something far more valuable than generate a set of test cases. You’ve thought deeply about the system under test, you’ve decomposed it into sub-parts, and you’ve identified all the interesting ways those parts can change. While it sounds obvious, learning to do this effectively can take lots of practice, and sadly, this skill is missing in many people who consider themselves to be “test experts” today.

Consider your last interview for a testing position. You were probably asked how you would test a particular feature or system, or to generate test cases for a function or class. As you came up with every possible test you could think of, your interviewer was likely most interested in how you thought about the problem and how you identified and prioritized risks.

Any complex system is just a composition of simpler systems. By breaking a problem down into smaller parts, you can focus on testing those parts independently, and then test them in various combinations. Building a pairwise test matrix doesn’t help you discover the sub-systems directly, but it forces you to think about what can vary in the system you’re testing and how it can vary. By thinking top-down from this “meta level” rather than about each individual case, I believe you’re more likely to find the full set of interesting tests to execute than if you just start throwing out individual test cases. Additionally, reviewing a set of smaller systems is a much easier problem than reviewing an entire complex system all at once to look for missing cases.

An Example

Let’s walk through an example. Suppose you’re asked to test a function that searches a two-dimensional sorted array to determine whether a given number is in the array. If the number is found, the function returns true; otherwise, it returns false.
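For concreteness, the function under test might look something like this. This is just one plausible sketch, assuming “sorted” means each row and each column is in ascending order (a common interview framing), not a specific implementation from the article:

```python
def contains(matrix, target):
    """Return True if target appears in a matrix whose rows and
    columns are both sorted ascending ("staircase" search)."""
    if not matrix or not matrix[0]:
        return False
    row, col = 0, len(matrix[0]) - 1      # start at the top-right corner
    while row < len(matrix) and col >= 0:
        value = matrix[row][col]
        if value == target:
            return True
        if value > target:
            col -= 1    # everything below in this column is even larger
        else:
            row += 1    # everything to the left in this row is smaller
    return False
```

Whatever the actual implementation, the testing questions that follow apply the same way.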

Your first instinct might be to test a case where the number is in the array and one where the number is not in the array.  But those are specific cases. The variable here is “number_is_in_array” and it can have a value of true or false. To generalize, we could change the variable to “number_of_times_value_appears_in_array”.  Now it can have the values 0, 1, 2, 3, etc. but we’d probably weight the values 0 and 1 most heavily if those are indeed the most likely cases.

What else could change? Maybe the number exists at the beginning of the first row, somewhere in the middle, or at the end of the last row.  There are two variables here:  “position_in_row” and “position_in_column”.  Each of these could have values of index 0, 1, somewhere in the middle, the last position, or just before the last position.  Of course, these variables only apply if “number_of_times_value_appears_in_array” is 1 or greater.

If we’re talking about the position within a row and column, then we probably also need to have variables for “number_of_rows” and “number_of_columns”.

What else could change? The array is supposed to be sorted, but maybe it isn’t… “array_is_sorted” could be another variable. Most of the time, it would be sorted, so we’d probably weight that so that 90% of the generated cases use a sorted array, but 10% of the time the array would be filled randomly.

Up to this point, we haven’t generated a single test case. Rather, we’ve identified several smaller systems within the larger system, and we’ve identified how they can vary individually. The pairwise testing tool can do the hard part of generating the actual test cases for us.
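The variables from this walkthrough can be written down directly as data. Below is a sketch (value lists abbreviated and invented for illustration) showing how the constraint from Step 3, that the position variables only apply when the value actually appears, becomes a validity predicate that filters impossible combinations before a pairwise tool ever sees them:

```python
from itertools import product

# Hypothetical model of the variables identified above; the value
# lists are abbreviated and invented for illustration.
variables = {
    "number_of_times_value_appears_in_array": [0, 1, 2],
    "position_in_row": ["first", "middle", "last", None],
    "position_in_column": ["first", "middle", "last", None],
    "number_of_rows": [1, 2, 10],
    "number_of_columns": [1, 2, 10],
    "array_is_sorted": [True, False],
}

def is_valid(case):
    """Step 3: weed out the impossible states. Positions only make
    sense when the value actually appears in the array."""
    if case["number_of_times_value_appears_in_array"] == 0:
        return (case["position_in_row"] is None
                and case["position_in_column"] is None)
    return (case["position_in_row"] is not None
            and case["position_in_column"] is not None)

names = list(variables)
valid = []
for combo in product(*variables.values()):
    case = dict(zip(names, combo))
    if is_valid(case):
        valid.append(case)

total = len(list(product(*variables.values())))
print(f"{len(valid)} valid states remain out of {total} raw combinations")
```

Feeding only the valid states into a pairwise generator keeps impossible pairs out of the generated suite, which is exactly what Step 3 asks for.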


Pairwise testing can be a powerful technique for building a minimal set of test cases, but the most valuable aspect of pairwise testing is that it forces you to practice decomposing complex systems into smaller systems that are easier to think through.

Why You Shouldn’t Write Test Plans

Four years ago in a previous role, I blogged about why I don’t like the term “Test Plan”. I made the point that a Test Plan was something a test team wrote so they could measure quality and report back to the team, whereas a “Quality Plan” describes how the whole team will work together to achieve the necessary level of quality before shipping. You cannot “test in” quality; quality starts with the specs, then the code, and then the test and stabilization processes. As the old tester adage goes, “Testers don’t break code. It’s already broken when we get it.” Quality Plans recognize this and focus on helping the team, not just the testers, drive quality.

In addition to those points (which I still believe are valid), I’d also like to point out that “testing” is an activity, but “quality” is a result. Which would you rather have your team focused on? Using the term “Quality Plan” helps put a team in the mindset of working to reach a destination as opposed to just “doing testing”. It is a subtle difference in verbiage, but it can be a massive (positive) pivot in thinking.

Testers who focus on activities are less likely to question whether those activities are appropriate for the situation at hand, and they are more likely to just keep doing what they’ve always done. Testers who focus on achieving specific goals are more likely to identify and change activities that don’t help them reach those goals. Now, I don’t have scientific data to back this up, but you can find countless studies about the success of people who write down goals vs. those who don’t, and they all reach similar conclusions.

Quality goals can take many forms, and obviously you will have to experiment to find the form that best fits your needs. How can you tell if you have well-written quality goals? The SMART goal format offers some hints. Your quality goals should be specific, measurable, achievable, relevant, and time-bound. I will also add that it should be clear who is accountable for reaching those goals. They should clearly describe what you need to see in order to know you are ready to release. Some examples include which tests must pass, a certain volume of positive beta user feedback, target values for performance and stress tests, etc.

Including activities in your Quality Plan is fine too, but they should always be listed as supporting specific goals. After all, this is a plan of action as well as a plan of destination. You wouldn’t make a travel plan that only describes how you will get in the car, drive to the airport, fly, get off the plane, and hail a cab, would you? Why would you do the same with a plan for reaching quality?

Will you make the switch from Test Plans to Quality Plans?

Stop Using Passive Voice

As I mentioned in a previous post, testers must learn to communicate efficiently and effectively. Today, I want to focus on a form of communication I see all too often and why you should try to avoid it at all costs.

The term “passive voice” refers to a form of grammar in English where the subject of a sentence is being acted upon rather than doing the acting. For example, “The decision was made to move forward with the current plan” or “The bug was found during the test pass.”  In the first example, “the decision” is the subject, but it’s not clear who made the decision; it was just “made”.  In the second example, “the bug” is the subject, and again… who found it?

Alternatively, an “active voice” sentence is one where the subject is doing the acting: “Sally found the bug during the test pass.”  In the latter, Sally is the subject, and in this sentence there is no question about who found the bug.

Hopefully, you can now see why active voice is the better choice. For one, active voice provides more clarity, whereas passive voice leaves the reader/listener with questions.

Another reason to prefer active voice is that it more directly attributes the result with the “doer”. Passive voice can come across as not taking accountability for a bad result or not properly attributing success to the specific behaviors that led to it. Hopefully you work in a culture that applauds and encourages taking accountability rather than shifting blame to some unseen actor. If you don’t, run.

Luckily, many word processors with grammar checking can help point out usage of passive voice for you. Before you try that, let’s try a quick example. Which sentence do you think is more helpful for knowing what went wrong and how to fix it?  

  1. The tests weren’t included in the test pass because a miscommunication occurred.
  2. Oscar did not execute the tests in the test pass because he misread Rolph’s instructions.

I tried to make it obvious there: the first sentence hides its actors twice (once with passive voice, once behind a vague “miscommunication”), while the second uses two active, attributed phrases. After hearing the first sentence, you might be left wondering who didn’t run the tests, what the miscommunication was, etc. The second sentence makes these points clear. Of course, you still need to understand whether Oscar was careless in his reading of the instructions or if Rolph’s instructions were too confusing, but at least you’re one step closer to getting to the bottom of the situation.

As testers, part of our job is to root-cause problems in products and processes. Active voice provides a much more direct way to present a root cause, and that should in turn provide a more direct route to a solution.


Pay attention to communication you receive today and try to count the number of times you hear or see passive voice being used. Pick one or two examples and re-form them in active voice.

If you have any questions on how to rephrase passive to active voice, let me know and I’ll try to help.

3 Ways To Increase Your Value As A Tester

A couple of tweets by Justin Hunter (@hexawise) recently got me thinking about what makes a tester really valuable for the long term.  He tweeted:

3 things more testers should know more about: test design approaches, Context Driven Testing, Bach & Bolton-style Exploratory Testing (tweet)

As a tester, if you want to maximize your ST earning potential, learn Selenium. To maximize your LT value, study Context Driven Testing. (tweet)

(In the last tweet, ST and LT refer to “short term” and “long term” respectively.)

While I agree these are all valuable skills for testers to have today, I believe these hint at deeper skills that are actually immensely more valuable over the long term.  

Before we begin, consider what “valuable” means to you. Is it money? Is it getting the opportunity to work on cutting edge products or products that help change the world? Is it the flexibility to work remotely or to be able to travel around the world? Is it getting to work with brilliant people? Depending on your answer, your path to becoming “highly valuable” will vary greatly.

Since the tweets above mentioned earning potential, I’m going to focus on that aspect for now, though I personally rank some of the aspects above higher.


Thinking about the law of supply and demand, one way to look at “value” as a tester is having the ability to stay in high demand but short supply throughout your career. Given the rapidly changing pace of software development technologies and practices, you’ll need to rapidly evolve your testing skills along with them. As a quick example, my first job as a tester required writing test automation in C++ and COM. After that, I worked with C#, SQL, JavaScript, and most recently Objective-C. At any of those points, I could easily have chosen to stay put (yes, people still use COM today!), but because I knew other programming languages and had demonstrated I could adapt to change, many other opportunities were open to me.

Here’s another way to think about it: your industry is changing every day. Your team is hiring new people with new skills and different experience. Why? It wants to be more efficient and more effective. Staying static in a dynamic world is extremely risky if you want to keep your role for very long.

Learn How To Automate Tests Effectively

There is a heated debate that has been raging for years about whether testers should learn to automate or not.  My personal opinion is that nobody should do anything they don’t want to do. Having said that, testers who can automate tests effectively in addition to doing a great job at all the other stuff (test planning, manual testing, attention to detail, etc.) are more valuable in the long term. I claim this in the same sense that any skilled craftsperson who is an expert with two tools is more valuable than one who is an expert in only one tool.  Manual testing is essential to validating many types of products, and I would never suggest removing all manual testing in favor of automation for a product with a user interface.  However, all great craftspeople know that it’s best to use the right tool for the job, and manual testing isn’t always the right tool.  Much of what testers do on a daily basis is repetitive and predictable and could be automated.  Testers who know how to write automation that is fast, reliable, and maintainable will have an advantage in the long term over those who don’t. 

Selenium is a great platform for automating web sites, and Justin is correct that knowing Selenium will probably help you earn a well-paying job in the short term.  While the web seems to be pretty popular these days, websites are just one type of product in a sea of many. Knowing the “elements” of test automation is what’s most valuable. This comes with years of experience and learning by trial and error for many. I’ll cover these elements in depth in a future post. 

Communicate Effectively

Written and verbal communication are critical elements of being effective as a software tester, and it is a fallacy to think you could be highly valuable if you don’t “work well with others”. Think about all the parts of our jobs that require effective communication: writing test plans, filing bugs, clarifying requirements, reporting test results, sending e-mail, documenting processes, giving presentations… Yet examples of ineffective communication are all too easy to find every day in each of these scenarios. I’m not perfect by any means; communicating effectively is a life-long pursuit. This is equally important if you work remotely or in a team room with other engineers.

When you communicate effectively, others will “hear” you the first time. You won’t waste precious time going back and forth asking and answering questions. You’ll be less randomized, and so will your team. You’ll be seen as someone who can be trusted, because your communications will inspire confidence rather than doubt. All of these will contribute to higher earning potential over time.

One of the best tips I can offer for testers is to say as much as you can in as few words as possible. This holds for your defect reports, presentations, emails, test plans, etc. Don’t assume you have to fill up a certain amount of space with “fluff”. Just say what needs to be said and be done. Look for words like “utilize” as smells that you’re off track.

For defect reports, try to anticipate what questions others would have after reading your report. Did you report enough detail in the “steps to reproduce” section, or did you make assumptions that the reader might not also make? Including screenshots or videos can be extremely valuable too.

Lastly, smile. A tester’s role is highly critical, and many testers come across as unnecessarily grumpy. Of course, many developers also come across as grumpy, but who wouldn’t when there’s an army of people paid to point out how bad your code is? If you think about your role from the developer’s point of view, you can see how a friendly smile (no, not a Dr. Evil smile!!) can go a long way in convincing a developer to pay close attention!


There are many ways to maximize your long term earning potential as a software tester. We discussed three such ways above: adapting to change, learning how to write effective test automation, and communicating effectively. Each of these deserves much more attention than a single blog post and can take years to master, but hopefully by putting them on your radar you’ll stop to think how well you do them now and how you might want to change.


On Context-Driven Testing

I may be the last person to learn about context-driven testing, but since I have recently, I felt compelled to share a few thoughts.

The concept of context-driven testing is described thoroughly by its creators, but at least part of it could be summed up quickly as “use the right tool for the job at hand.” 

For many, CDT may be one of those frustratingly obvious learnings that’s been sitting in front of you all along, hidden behind tradition and “best practices”.  We write test plans using the same template we’ve always used, hold the same meetings we’re used to holding, and assume that all new tests will simply plug into existing harnesses or test case managers. We get so used to “the way things are done around here” that we often forget to step back and just solve the problem at hand.

As I reflect back on the testers I’ve worked with over the years, I realized that while a significant number of them were brilliant when it came to testing software, many had not really tested their own processes.  They thrived in the momentum of their environment, but over time as new challenges came up, they had a tough time creating different kinds of momentum that were more applicable to the problem at hand (and I could draw parallels to “kids today”, but I’ll hold off for now).

My call to action for testers today is to pick at least one regular process you follow and seriously think about whether it solves a problem you actually have.  If you don’t know what problem it solves, ask a peer or your manager.  There’s a good chance that much of what you do is actually necessary – and in those cases you should definitely understand why!  But there’s also a good chance that at least some of what you do is done simply because that’s the way it was done before you got there.  You read a wiki, followed a template, checked the boxes, and off you went.  Be the person to spot one of these, and for bonus points, come up with a better way to do it – or cut it out altogether!

Testing Is A Risky Business

Some of the most important responsibilities we have as testers are identifying, reporting, and mitigating risks.  I’ll go as far as to say that if you aren’t regularly doing these things now, you aren’t reaching your full potential as a tester.

Let’s take a look at each of these individually.

Identifying Risks

Risks are anything that could cause the project to go off schedule or result in lower than acceptable quality when finished.  Let’s consider a few examples and explore why they’re risky:

  • A seemingly endless stream of bugs found in a feature – Might indicate the feature’s code is too complex, its requirements were unclear, or it isn’t being tested in a methodical way.  Risky because it makes the release date much harder to predict.
  • Features being implemented before design decisions are finalized – Risky because work might have to be thrown away and re-done late in the development cycle, causing late code churn and lots of extra testing needed with little time to stabilize.
  • Dependency teams missing deadlines – When teams you depend on can’t deliver high quality components to you in a predictable way, it adds uncertainty to your schedule. 
  • Features that used to work break frequently as code churns – Risky because deeper testing might get delayed while bugs are being fixed.  Also a sign that proper regression testing is not being done before code is committed.  Usually a smell of deeper problems in test coverage, team discipline, or code complexity.
  • A test plan review ends up with more questions about feature design than answers – Risky because it indicates the team doesn’t have a clear picture of what they’re trying to build, leading to confusion, inefficiency, and (usually) rework.

Focus on the biggest risks first

Any given project will have many risks; it’s just the nature of developing software.  The most effective testers will focus on the biggest risks first.

As testers, we’re accustomed to finding bugs, and each individual bug could be thought of as a risk.  But chances are, you’ll find more bugs than you care to track and report about on an individual basis.  Rather, it’s more helpful to step back and “look at the forest” instead of the trees (bugs).  

Sometimes the biggest risks will be obvious, but sometimes you’ll need to work with others on the team to determine which risk truly represents the biggest threat to the project’s success.

Reporting Risks

Once you’ve identified risks, one of the most effective ways to squash them is to keep them highly visible to project stakeholders.  A risk report could take many forms, but usually it includes:

  • a brief summary of the problem
  • a brief description of the planned mitigation
  • next steps
  • a date/time for when the risk should be mitigated
  • some indicator of whether the mitigation is on track or not (e.g. red/yellow/green)
You don’t need to write up an official memo or create a PowerPoint presentation for each risk.  Often, a simple email will do, or a few quick sentences in a team meeting.  The point is to raise visibility on the risks as soon as possible to give the team more time to mitigate them.

You might be thinking to yourself, “Why would testers need to report this?  Isn’t this the project manager’s job?”  If your team is lucky enough to have a skilled project manager working with you, then yes, it’s certainly within the realm of what they might report.  That said, someone needs to inform the PM about the risks in the first place.  As a tester, you are likely to have a unique view of the project that others don’t have, and therefore it is your responsibility to call out risks you see from the tester’s perspective.

Aside: Regardless of whether you have a PM or not, I’ve never been a fan of drawing hard lines around areas of responsibility.  As a team member, the team succeeds or fails together.  If you see a risk that nobody else is reporting, call it out!  Consider the 2013 security breach at Target.  Had someone raised the risk that their anti-malware software was detecting problems earlier, they might have been able to prevent any customer data from ever leaving their servers!

Mitigating Risks

This part should almost be obvious. Once we’ve identified risks, our job is to help make them go away.  I say “help” here because we can’t do it alone; we usually need others to work with us.  For example, once we’ve identified a risk that a particular feature has a seemingly endless stream of bugs, we might ask the developer team to revisit the code to see if it can be simplified or ask a designer or stakeholder to simplify the requirements.  For our part, we need to continue testing thoroughly and expand our thinking to find ways to test the product more efficiently or more effectively.  Once a fix has been made, we need to instill confidence among stakeholders that the risk is indeed mitigated.

To prevent risks from reappearing later, it helps to identify their root causes and then drive whatever changes are necessary to fix those problems.  The 5 Whys technique can be useful here.  Great testers know how to break problems down into smaller problems and debug them piece by piece.  This applies to debugging processes as well as software.

Other mitigation techniques for testers might include reviewing test plans with key members of the team to identify gaps in test coverage, building new test automation to catch bugs earlier in the process, or enlisting the help of other testers to bring “fresh eyes” to an area that was previously thought to be well tested.


Risks exist in every software project.  Successful software testers recognize this and regularly identify and focus on the biggest risks first, keep the team informed about risks, and help mitigate risks constantly.  Are you?

Increasing Your Influence On Product Design

A common complaint I have heard from testers is that they don’t feel like they have enough influence on product design.  

The pattern goes like this: A designer, program manager, or analyst comes up with a plan for how a particular feature should work.  The developer team builds it and then hands it off to the test team.  After using it for a while, the tester identifies some improvements or changes they feel would make the feature better for customers.  

From here, any number of things can happen.  In an ideal world, the tester has a perfect understanding of what the customer (or the market) really wants, the change is made, and the product sells like hotcakes!

But what if this doesn’t happen for you?  What if your suggestion is considered but ultimately not accepted, or worse – ignored altogether?  How can you increase your influence on the product?  Here are some suggestions:

1. Suggest Solutions, Not Just Problems

This point is probably worth its own post as it applies in so many aspects of testing.  When you suggest a change of plans, helping the team get to the ideal solution quickly is much more valuable than just calling out a problem.  Chances are, your team is overbooked, under-staffed, and up against tight deadlines, with several other features already in development.  The last thing on their minds is going back to a feature they’ve already coded up and revisiting the initial design (this is often true even for teams who think they’re operating in an Agile fashion).

If, however, you suggest a path to success, that’s one less step for the team to take in order to get to your solution.  Research suggests that humans are better at iterating than innovating in general, and when you throw in the other pressures of shipping products, my observations are that it’s more common for teams to look for a small iterative solution rather than throw out the whole feature and start from scratch.  By providing that first step, you’re helping the team get started in the redesign process and removing a big barrier to change.

The anti-pattern here is to assume “It’s not my problem. I just point out the problems, and someone else has to fix them.”  This is a very dangerous and counter-productive mindset for testers, and I highly encourage you to check yourself if you even come close to this line of thought.  Falling prey to this way of thinking is essentially giving up.

If you don’t have an immediate suggestion for how the feature can be made better, you should at least state that and then offer to be part of a virtual team to go figure out a solution.

2. Provide Supporting Evidence

Perhaps you’re certain your suggestion is perfect, but that really doesn’t matter unless you are able to convince others in time to make the change.  Your mission at this point is to collect data or other evidence that supports your feedback.

Feedback from usability studies or beta users can be a very powerful tool.  Also consider asking the team to add logging or instrumentation to the feature to understand how users interact with it, what they do and undo, how quickly they accomplish the task at hand, etc.  If you can get them to build two versions of the feature and test one vs. the other with beta users, you can determine more scientifically which approach is more effective.

Another possibility is to seek out others with similar features to see if they have solved the same problem.  For example, if your organization produces both a native app and a website version of your product, has one team already gathered the supporting evidence you need?

3. Pick Your Battles Carefully

While you could “go to battle” for every single product change you feel is right, ultimately you have limited “funds” available, so you’ll need to choose carefully which issues to pursue.  

For the purposes of this discussion, your “funds” are all the resources you could diminish while trying to convince others that your idea or change is worth taking.  One such resource is your own credibility, because if you routinely pursue issues that the team deems aren’t worth fixing, the team may start to expect your future ideas to be of similar merit.  Once this happens, it can be very difficult to turn around this bias.

Another resource to consider is the time/energy you could put into driving the conversation.  Is the suggestion really worth all that effort?  Will it result in an incremental improvement or something monumental?  

Each of these has an associated opportunity cost – the cost of not doing some other task (such as fixing a bug or improving a feature that would potentially help more users).

4. Know When To Quit

Once you’ve clearly made your points and supported them with evidence, it’s time to let the decision maker do their thing.  Either you were right and he or she was wrong, or you’ll learn something when the feature hits the market and succeeds as designed (or the team discovers bigger opportunities or problems to chase).

If, however, you continue to badger the decision maker or get stuck in a depressive shame cycle because your idea was ultimately rejected, then frankly, you’re wasting your time and energy.  Worse, you could be eroding your influence on the team with this behavior.  Instead of being perceived as a smart and insightful representative for the customer, you risk being seen as whiny or annoying.  The team will start to ignore your feedback for fear of encountering another time-wasting battle.  The next time you have a great idea, you’re even less likely to get the attention you need to influence the team.

In practice, this is much trickier to do than to write about.  It may take months or years of experience with a particular set of people to fine tune your timing.

5. Follow The Decision To The User

If your idea is ultimately rejected, you should pay close attention to what happens once users get their hands on the feature.  If it’s successful without your suggestion (where success is defined by happy customers and reaching business goals), then take a moment to consider why you cared SO much and whether the data you provided to support your suggestion was flawed in any way.

If the feature is unsuccessful in market, then reflect on whether your suggestion would have addressed the specific concerns being raised by customers.  If so, then perhaps it’s time to bring it up again with the new data (from unhappy customers in market).  If the team sees they had the perfect solution sitting in front of them all along, then your credibility will go up a bit, and hopefully the next conversation about change will be a little easier.

One team I worked on developed a clever automation harness that exercised the product in random ways to find crashes.  Most of the bugs they filed, however, were rejected by the development team as “theoretical” – after all, who would ever do that?  The team was unsuccessful in getting these bugs fixed before the product shipped.  But they stuck with it and tracked incoming crashes to determine how many had already been found by their stress test before shipping.  It turned out a very large percentage had been!  They took this evidence back to the development team, who then made it mandatory to fix all bugs found by the tool before each release, even if the steps to reproduce those crashes seemed crazy.  As a result, the overall incoming crash rate dropped significantly in the next release, support costs dropped, and the team was less randomized with shipping patches and could focus more on building new features.
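The bookkeeping that team did – matching incoming field crashes against crashes found pre-ship by the stress run – can be as simple as comparing coarse crash signatures.  A rough sketch of the idea, assuming a signature is just the top few stack frames (the signature format, function names, and stacks here are all invented):

```python
# Illustrative sketch: what fraction of distinct field crash signatures
# were already found by a pre-ship stress run?  The signature scheme
# (top N stack frames joined into a string) is an assumption.

def signature(stack_frames, depth=3):
    """Reduce a crash stack to a coarse signature: its top N frames."""
    return "|".join(stack_frames[:depth])

def pre_found_fraction(stress_crashes, field_crashes):
    """Fraction of distinct field crash signatures that also appeared
    in the stress-test run before shipping."""
    stress_sigs = {signature(c) for c in stress_crashes}
    field_sigs = {signature(c) for c in field_crashes}
    if not field_sigs:
        return 0.0
    return len(field_sigs & stress_sigs) / len(field_sigs)

# Made-up example stacks (innermost frame first):
stress = [["free", "close_doc", "main"], ["memcpy", "paste", "main"]]
field = [["free", "close_doc", "main"], ["strlen", "render", "main"]]
print(pre_found_fraction(stress, field))  # 1 of 2 field signatures matched
```

Real crash-reporting systems use far more robust signature matching, but even a crude overlap number like this is exactly the sort of evidence that changed the development team’s mind in the story above.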

6. Don’t Be A Jerk

The old adage, “you catch more flies with honey than vinegar,” holds in this context too.  Always at least start your conversations with a polite tone and try to find common ground with other decision makers.  If you haven’t read it yet, I highly recommend the book Crucial Conversations: Tools for Talking When Stakes Are High as a great way to learn how to influence others through civil discourse.  Dale Carnegie’s books (example) are another fantastic resource.

Wrapping Up

Building credibility as a tester is an important part of being successful in your role.  For many testers, having influence on product and feature design is also critical to job satisfaction.  We covered several ways to improve your influence on the team, but there are certainly more.  If you have others not listed here, please share!