Building a DIY Eurorack Case

9U 104hp DIY Eurorack case in cherry

I wanted a nice wood Eurorack case for my modular synthesizer, and after lots of research, I decided to build my own DIY Eurorack case. The final product is a 9U / 104hp cabinet I’m very happy with!

The sides, top, and front panels are cut from a piece of 3/4″ cherry (12 inches deep). I finished it with a few coats of an easy wipe-on polyurethane. A very talented and generous neighbor helped me cut all the wood parts in a few hours, and I then spent a week sanding, gluing, and finishing the final product.

Parts

I ordered the rails and end brackets from Pulp Logic. The parts arrived quickly and were easy to put together. I went with sliding nuts instead of threaded strips for more flexibility in mounting modules tightly together. Don’t forget to order screws and washers to prevent rack rash!

For power, I went with a DIY build of the Synthrotek Case Power Blue system. This provides +12V at 3A, -12V at 2.5A, and +5V at 1.5A – way more than I should need. I added two noise-filtering power distribution boards (affiliate link) to provide power to the top two rows. Be sure to plan out your rack at ModularGrid and pay attention to power requirements! The DIY build was a bit tedious (about an hour per board), but not difficult. You also save quite a bit of money by building it yourself, so if you’re handy with a soldering iron, this is the way to go. The TipTop Studio Bus system also looked pretty nice, but at the time I was doing the build, it was sold out everywhere.

I used a 3/4″ x 12″ cherry board. In reality, it gave me about 11 3/8″ to work with. I really wanted to use walnut, but I couldn’t find wide enough walnut planks and I wanted to avoid edge-gluing the boards.

Advice For Building Your Own DIY Eurorack Case

My primary advice to anyone building their own case is: don’t rush it. Take your time to plan out your system, shop around for parts, and make sure you measure carefully.

Beyond that, make a cardboard mockup to make sure you like the profile. When I did this, I decided I didn’t like how thick the top of the case was. I shaved off 5/8 of an inch from the design, and I think that made a big difference. I still had plenty of depth for modules (70mm on the top row), so this wasn’t a problem.

One woodworking tip – finish the pieces before gluing. If glue leaks out when you clamp, you can wipe it off with a wet rag, but it will show up in the finish. If you finish first, however, the glue can be wiped off without any residue remaining. The WATCO poly was pretty easy to use, didn’t stink too much, and dried quickly.

Update – After using the case for a couple of months, one thing I wish I’d done a little differently is to have less of an angle between the bottom two rows. All the modules fit fine, but sometimes patch cable ends can bump into each other if a jack on the bottom of the middle row aligns horizontally with another jack on the top of the bottom row. I’m also glad I put 3/4″ rubber feet on the bottom of the case, because I’ve found it’s nice to route the power, MIDI, and pedal cables from my keyboard under the case to the back of the desk.

Final Words

A huge thanks to Matthew Goike for advice and inspiration for this build and to Synthrotek for making awesome DIY kits!

Why Every Tester Should Learn Pairwise Testing

The promise of pairwise testing is that it can help you reduce the number of test cases you need to execute without significantly increasing the risk that you’ll miss a critical defect. While this may be true in some situations, it’s not true in all situations (see “Pairwise Testing: A Best Practice That Isn’t” by James Bach and Patrick Schroeder).

I believe that learning about pairwise testing actually offers a much more important benefit to testers and developers that can pay dividends throughout your entire career in software.

There are plenty of explanations about pairwise testing on the web, so I won’t repeat them here.  If you’re not familiar with the technique, I suggest you start here, and then come back to this article.

My First Experience with Pairwise Testing

I had the good fortune to learn about pairwise testing very early in my test career while working on the Windows Shell team. At first, there was lots of FUD around how effective the technique would be, how we might miss important bugs, etc. In the end, however, it proved to be an incredibly effective technique for finding lots of bugs across the huge surface area of the Shell in relatively little time. We built a custom tool that let a tester specify variables and possible values for those variables, prioritize them, and even link variables together. For example, a certain feature would only be available if the system had a 32-bit processor, but not if it had a 64-bit processor. We used the tool to generate a pseudo-random selection of test cases to execute for each regression test pass. When a generated case led us to a defect, we could mark it to be preserved for future test passes. Commercially available tools today offer similar features (and probably work a lot better than ours did!).

Believe it or not, accelerated bug-finding was not actually the biggest benefit of pairwise testing. More importantly, we saw a shift in how testers thought about breaking down complex systems into components, and this led to more effective testing overall. This benefit carried forward into all types of test planning, even when pairwise testing wasn’t used.

4 Steps to Defining a Pairwise Test Plan

Before I go into more details about the derivative benefits of learning pairwise testing, let’s review the steps for creating a pairwise test plan:

  1. Identify all the potential variables in the system under test.
  2. List all the possible values or states for those variables individually.
  3. Determine which combinations might be invalid and indicate this in the test plan so invalid pairs aren’t generated.
  4. Add weights and priorities to the states based on your best understanding of how the system will be used, business priorities, etc.

Step 1 is about identifying all the “moving parts” or “things that can vary” in the system. These can be explicit variables, such as parameters passed into an API, or implicit variables, such as the position of a particular node in a linked list.

Step 2 is about identifying all the possible ways each of those variables could change.

Step 3 is about winnowing down your matrix to only the possible states.  This step shouldn’t weed out possible error states, because those are extremely important to test. Rather, this is about weeding out the impossible states.

Finally, Step 4 is about prioritizing your testing to help guide defect-finding to the most likely or most impactful areas first.
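
To make the mechanics concrete, here’s a minimal, self-contained Python sketch of the kind of reduction a pairwise tool performs. The variables, values, and the “Safari only runs on macOS” constraint are hypothetical, the greedy loop is a toy rather than the algorithm any real tool uses, and step 4 (weights) is left out for brevity.

```python
from itertools import combinations, product

# Steps 1 and 2: hypothetical variables and the values each can take.
variables = {
    "os":      ["Windows", "macOS", "Linux"],
    "browser": ["Chrome", "Firefox", "Safari"],
    "account": ["guest", "standard", "admin"],
}

# Step 3: rule out impossible combinations (toy constraint for illustration).
def is_valid(case):
    if case.get("browser") == "Safari" and case.get("os") not in (None, "macOS"):
        return False
    return True

def pairwise_cases(variables, is_valid):
    names = list(variables)
    # Every valid value pair, across every pair of variables, that must be covered.
    uncovered = set()
    for a, b in combinations(names, 2):
        for va, vb in product(variables[a], variables[b]):
            if is_valid({a: va, b: vb}):
                uncovered.add(((a, va), (b, vb)))

    cases = []
    while uncovered:
        case = dict(next(iter(uncovered)))   # seed a test case with one uncovered pair
        for name in names:                   # greedily fill in the remaining variables
            if name in case:
                continue
            # Pick the valid value that covers the most still-uncovered pairs
            # (assumes at least one valid value always remains in this toy model).
            case[name] = max(
                (v for v in variables[name] if is_valid({**case, name: v})),
                key=lambda v: sum(
                    ((name, v), kv) in uncovered or (kv, (name, v)) in uncovered
                    for kv in case.items()
                ),
            )
        cases.append(case)
        for a, b in combinations(case.items(), 2):   # mark newly covered pairs
            uncovered.discard((a, b))
            uncovered.discard((b, a))
    return cases

for case in pairwise_cases(variables, is_valid):
    print(case)
```

Even this toy version covers every valid value pair with far fewer cases than the full set of valid combinations, which is the whole point of the technique.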

With these steps completed, your pairwise testing tool will generate a reduced set of test combinations to be executed. But the value doesn’t end here.

The Derivative Value of Pairwise Testing

At this point, you’ve actually done something far more valuable than generate a set of test cases. You’ve thought deeply about the system under test, you’ve decomposed it into sub-parts, and you’ve identified all the interesting ways those parts can change. While it sounds obvious, learning to do this effectively can take lots of practice, and sadly, this skill is missing in many people who consider themselves to be “test experts” today.

Consider your last interview for a testing position. You were probably asked how you would test a particular feature or system, or to generate test cases for a function or class. As you came up with every possible test you could think of, your interviewer was likely most interested in how you thought about the problem and how you identified and prioritized risks.

Any complex system is just a composition of simpler systems. By breaking a problem down into smaller parts, you can focus on testing those parts independently, and then test them in various combinations. Building a pairwise test matrix doesn’t help you discover the sub-systems directly, but it forces you to think about what can vary in the system you’re testing and how it can vary. By thinking top-down from this “meta level” rather than about each individual case, I believe you’re more likely to find the full set of interesting tests to execute than if you just start throwing out individual test cases. Additionally, reviewing a set of smaller systems is a much easier problem than reviewing an entire complex system all at once to look for missing cases.

An Example

Let’s walk through an example. Suppose you’re asked to test a function that searches a two-dimensional sorted array to determine whether a given number is in the array. If the number is found, it returns true; otherwise it returns false.
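
For concreteness, the contract under test might look something like this (the signature and the deliberately naive body are my own hypothetical stand-in, since the problem statement doesn’t include an implementation):

```python
def contains(matrix: list[list[int]], target: int) -> bool:
    """Return True if target appears anywhere in the sorted 2-D array."""
    # Naive stand-in; a real implementation would exploit the sort order.
    return any(target in row for row in matrix)
```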

Your first instinct might be to test a case where the number is in the array and one where the number is not in the array.  But those are specific cases. The variable here is “number_is_in_array” and it can have a value of true or false. To generalize, we could change the variable to “number_of_times_value_appears_in_array”.  Now it can have the values 0, 1, 2, 3, etc. but we’d probably weight the values 0 and 1 most heavily if those are indeed the most likely cases.

What else could change? Maybe the number exists at the beginning of the first row, somewhere in the middle, or at the end of the last row. There are two variables here: “position_in_row” and “position_in_column”. Each of these could have values of index 0, index 1, somewhere in the middle, last position, or just before last position. Of course, these variables only apply if “number_of_times_value_appears_in_array” is 1 or greater.

If we’re talking about the position within a row and column, then we probably also need to have variables for “number_of_rows” and “number_of_columns”.

What else could change? The array is supposed to be sorted, but maybe it isn’t… “array_is_sorted” could be another variable. Most of the time, it would be sorted, so we’d probably weight that so that 90% of the generated cases use a sorted array, but 10% of the time the array would be filled randomly.

Up to this point, we haven’t generated a single test case. Rather, we’ve identified several smaller systems within the larger system, and we’ve identified how they can vary individually. The pairwise testing tool can do the hard part of generating the actual test cases for us.
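
Expressed as input for a pairwise tool (or the toy generator sketched earlier), the model we just built might look roughly like this. The variable names, value buckets, and weights are illustrative assumptions, and a real tool would also record the constraint that the position variables only apply when the value appears at least once:

```python
# Steps 1 and 2: variables and the interesting values for each (illustrative).
search_variables = {
    "number_of_times_value_appears": [0, 1, 2, 3],
    "position_in_row":    ["first", "second", "middle", "next_to_last", "last"],
    "position_in_column": ["first", "second", "middle", "next_to_last", "last"],
    "number_of_rows":     [1, 2, "many"],
    "number_of_columns":  [1, 2, "many"],
    "array_is_sorted":    [True, False],
}

# Step 4: weights biasing generation toward the most likely states,
# e.g. a sorted array 90% of the time, as discussed above.
search_weights = {
    "array_is_sorted": {True: 0.9, False: 0.1},
    "number_of_times_value_appears": {0: 0.4, 1: 0.4, 2: 0.1, 3: 0.1},
}
```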

Conclusion

Pairwise testing can be a powerful technique for building a minimal set of test cases, but the most valuable aspect of pairwise testing is that it forces you to practice decomposing complex systems into smaller systems that are easier to think through.

Theory Of Constraints and Overcoming Optimism

In the book How Children Succeed: Grit, Curiosity, and the Hidden Power of Character by Paul Tough, the author describes a problem solving mechanism created by NYU psychologist Gabriele Oettingen:

… which goes by the rather clunky name of Mental Contrasting with Implementation Intentions, or MCII, was developed by NYU psychologist Gabriele Oettingen and her colleagues. Oettingen discovered in her research that people tend to use three strategies when they are setting goals and that two of those strategies don’t work very well.

Tough goes on to explain the three strategies: optimism, pessimism, and mental contrasting.  As you might guess, the latter has been found to be more effective than the first two.

The problem with pessimism, or dwelling, is obvious. You spend all your energy focusing on all the reasons you can’t reach your goal and eventually talk yourself out of even trying.

The problem with optimism is slightly less obvious. Sure, it’s helpful to be able to see the opportunities before you and imagine success, but optimism alone is not sufficient. You actually need to do something to reach your goals! This usually involves overcoming obstacles along the way.

This is where mental contrasting comes in.  This is a hybrid approach to envisioning the future. After envisioning your end state, or goals, you spend a healthy amount of energy exploring the obstacles that will get in your way.  Enough, but not too much.  For some, this might be difficult, but it is important not to get mired in all the possible ways you could go wrong.  Just survey the landscape and move on.

The next step, implementation intentions, rounds out the approach by forcing you to think about the actual steps you will take in order to reach your goal. Multiple studies have found that simply having a goal is not a sufficient motivator for reaching the goal.  Instead, it is far more effective to take an initial baby step, celebrate your success, and repeat.  As you build confidence and experience your baby steps might become bigger steps.  Eventually, you’ll be running directly at your goals.

This falls very much in line with Goldratt’s Thinking Processes (TP) for problem solving. TP uses the mechanisms of the Future Reality Tree (FRT) to test your vision of the future and the Prerequisite and Transition Trees (PRT and TT) to identify and schedule the actual steps toward reaching your goal. (Of course, you have to know your goal and identify the key problems keeping you from being in that state already, but that’s for another post.) Dettmer simplifies the process by collapsing the PRT and TT into a single step in his excellent book, The Logical Thinking Process: A Systems Approach to Complex Problem Solving.

To make this real, let’s walk through an example where your goal is to visit Hawaii.

The pessimistic view is to dream up all the obstacles to getting to Hawaii: lack of funds, not enough time off, no idea where to stay, fear of flying, nobody to watch your cat, you look terrible in flowery shirts, the very idea of poi is disgusting… You get the idea. You’d get stuck on these obstacles and likely just forget the whole trip. In the worst case, your bitterness begins to boil over and affect other aspects of your life.

The optimistic view, while more pleasant, is no more helpful: you spend your days dreaming of sitting on the beach, snorkeling with colorful fish, listening to live music by the pool while you sip fruity drinks… Maybe you even decorate your office or computer desktop with pictures of beaches and sunsets. The brutal irony, though, is that up to this stage, all you’ve done is fantasize and likely distract yourself from other more important things.

In the mental contrasting/implementation intentions approach, you would quickly survey the obstacles and then use your creative mind to begin disarming them one by one. Pick any one of them and identify the very first baby step you could take in order to clear the obstacle. Write it down, do it, and decide on the next step. Don’t have a vacation fund saved up? Find an empty jar and stuff some of your loose cash or change into it. Got $100 in the jar? Go to the bank and start a dedicated savings account. Can’t always make it to the bank to deposit your change? Set up an auto-transfer or direct deposit… And so on.

The TP approach is more cautious in that it requires more up-front work, but it can also lead to less wasted effort and fewer missteps in the long run. By envisioning the whole plan up front and brainstorming about the likely obstacles along the way, you can usually take a more direct path to success. Of course, Murphy will always surprise you when you least expect it, but when that happens you’ll have a plan you can start from and simply adjust rather than starting the whole process from scratch.

The bottom line is obvious… If you want to reach your goals, you have to take them one step at a time.  Doing nothing is the only guaranteed way to fail.  We’ve looked at two methods for making incremental progress, and I encourage you to try both and see which one works best for you.

A Journey Through the Theory of Constraints

A co-worker recently turned me on to the book The Goal: A Process of Ongoing Improvement by Eli Goldratt and Jeff Cox. It turns out I’d been exposed to the ideas in the book at least a year ago by my former manager, but I didn’t realize it at the time. Since reading that book, I’ve also devoured two of its several sequels. In them, I have learned about what I think might be a very powerful system for problem solving that is relevant to software testing. This post is the beginning of a series describing my exploration of these ideas.

First, some background. The Goal was very popular when it first came out because it challenged the state of the art in manufacturing processes and cost-based accounting. Today, it’s regarded as part of the “origin story” for Agile. The book is written as a novel that slyly introduces the key concepts of the Theory of Constraints (TOC) through the story of Alex Rogo, a manufacturing plant manager who must quickly turn his operations from losing money to making money, or else. Unable to make significant new investments or develop new products on the tight deadline, Alex is forced to do some deep and creative thinking about how to save thousands of jobs.

Through a chance encounter with a former colleague, Jonah, Alex learns to use the 5 Focusing Steps, a series of processes to help identify the biggest constraint in a system and figure out what to do about it. These concepts were somewhat revolutionary at the time and got a lot of attention in manufacturing circles and MBA programs. The steps are based on the concept that a chain is only as strong as its weakest link, and therefore strengthening any link other than the weakest one is wasteful; it will not improve the strength (or throughput) of the overall system. The Goal walks us through Alex’s learning of the 5 steps and applying them to successive weak links in his factory. You can probably guess that things end up well enough for Alex, or else the 5 Focusing Steps wouldn’t have been regarded as successful. But at the end of the book, Alex is left wondering how Jonah, a college physics professor with no background in manufacturing, could have possibly known so much about Alex’s situation.

The answer comes in Goldratt’s next book, It’s Not Luck. In this book, Goldratt formally introduces Jonah’s method as the “Thinking Processes” (TP). If, like most of the colleagues I have surveyed, you stop after The Goal, I believe you will miss the most valuable part of Goldratt’s work. TP is an exhaustive (and exhausting) step-by-step process for identifying the biggest constraint in a system, determining what to do about it, and determining how to go about making the change.

Goldratt presents TP through several examples as he continues Alex Rogo’s story; Alex is now a division VP responsible for three very different companies. Readers are treated to multiple examples of applying TP to a variety of problems in both personal life and business. Goldratt’s next book, Critical Chain, continues the exploration, this time applying TP to project management.

So why are TOC and TP applicable to software testing?

First of all, testing is a process by which raw materials (code, features, test plans) are turned into finished results (shipped features). Inherent in any process is a certain amount of waste, and TOC and TP can help discover and eliminate the waste. Bottlenecks and constraints are another common characteristic of processes, and testing is no different.  TOC/TP can help you figure out which bottleneck is the biggest, so you can focus all your efforts on that until it is no longer the biggest problem.  For example, are you flooded with more features to test than you can handle? Are you sitting around waiting for code from partner teams to arrive?

Second, TOC/TP force you to think, and think deeply, about the processes around you. As software testers, we are quite accustomed to thinking deeply about the features we test, but honestly, how often do we put the same rigor around our own processes? We have good intuitions, of course, and I’m sure we’re all detecting little smells of problems here and there and tweaking them along the way (perhaps in our sprint retrospectives), but to what effect? How do we know for certain that our efforts are being applied correctly and on the biggest problems? When we are pressed for time, it’s easy to jump on the first optimization opportunity that comes along. TOC/TP shows us the danger of doing this. As we saw earlier, we might just be wastefully optimizing a link in the chain that is NOT the weakest. Sure, that optimization might come in handy down the road, but if we don’t focus correctly, there may not be a “down the road” at all.

Finally, software testing is a constantly changing field, and TOC/TP not only embraces change, but it establishes change as a core principle. In order to survive and remain competitive, we must change and improve constantly or get left in the dust of our competitors.  Some in our profession believe that standards are the future of software testing (e.g. ISO 29119), but that line of thinking is dangerous because it prevents us from thinking for ourselves and continuously improving.  Rather than submit to a static set of practices, it is crucial that we as testers learn to think thoroughly through our test problems and processes in order to apply our focus in the most beneficial way possible for our customers and our organizations.

Don’t take my word for it. Test my theory. Follow along as I learn and experiment with TOC/TP as a tester. Succeed or fail, we’ll learn something together.  My next few posts will describe the various tools introduced by TP with examples from a theoretical software testing operation.

Words to Live By

I was recently contacted by a 12-year-old who wanted to interview me about ThereminWorld.com and how running it as a hobby has impacted me over the years. One of the questions was, “If this was your LAST interview, what advice would you pass along?” Here’s the advice I offered, along with a few additions I’ve considered since then.

I won’t lie and say I am successful in living by these guidelines every day, but I find it helpful to think about them regularly and check in to see how I’m doing.

Hopefully some of my 7 blog readers will find these useful or inspiring 🙂

Love everyone. 

Never be afraid to try something new. 

Failure is not the end of the road, but rather a gift of knowledge of what doesn’t work and an opportunity to try again. 

Save for the future. 

Surprise someone else with kindness every day. 

Learn about eating healthy food and take care of yourself. 

Don’t panic.  

Never stop learning.

Always speak the truth, but never out of spite.

Ride the bus.

Why You Shouldn’t Write Test Plans

Four years ago in a previous role, I blogged about why I don’t like the term “Test Plan”. I made the point that a Test Plan was something a test team wrote so they could measure quality and report back to the team, whereas a “Quality Plan” describes how the whole team will work together to achieve the necessary level of quality before shipping. You cannot “test in” quality; quality starts with the specs, then the code, and then the test and stabilization processes. As the old tester adage goes, “Testers don’t break code. It’s already broken when we get it.” Quality Plans recognize this and focus on helping the team, not just the testers, drive quality.

In addition to those points (which I still believe are valid), I’d also like to point out that “testing” is an activity, but “quality” is a result. Which would you rather have your team focused on? Using the term “Quality Plan” helps put a team in the mindset of working to reach a destination as opposed to just “doing testing.” It’s a subtle difference in verbiage, but it can be a massive (positive) pivot in thinking.

Testers who focus on activities are less likely to question whether those activities are appropriate for the situation at hand, and they are more likely to just keep doing what they’ve always done. Testers who focus on achieving specific goals are more likely to identify and change activities that don’t help them reach those goals. Now, I don’t have scientific data to back this up, but you can find countless studies about the success of people who write down goals versus those who don’t, and they all reach similar conclusions.

Quality goals can take many forms, and obviously you will have to experiment to find the form that best fits your needs. How can you tell if you have well-written quality goals? The SMART goal format offers some hints: your quality goals should be specific, measurable, achievable, relevant, and time-bound. I will also add that it should be clear who is accountable for reaching each goal. Your goals should clearly describe what you need to see in order to know you are ready to release. Some examples include which tests must pass, a certain volume of positive beta user feedback, target values for performance and stress tests, etc.

Including activities in your Quality Plan is fine too, but they should always be listed as supporting specific goals. After all, this is a plan of action as well as a plan of destination. You wouldn’t make a travel plan that only describes how you will get in the car, drive to the airport, fly, get off the plane, and hail a cab, would you? Why would you do the same with a plan for reaching quality?

Will you make the switch from Test Plans to Quality Plans?

Stop Using Passive Voice

As I mentioned in a previous post, testers must learn to communicate efficiently and effectively. Today, I want to focus on a form of communication I see all too often and why you should try to avoid it at all costs.

The term “passive voice” refers to a form of grammar in English where the subject of a sentence is being acted upon rather than doing the acting. For example, “The decision was made to move forward with the current plan” or “The bug was found during the test pass.”  In the first example, “the decision” is the subject, but it’s not clear who made the decision; it was just “made”.  In the second example, “the bug” is the subject, and again… who found it?

Alternatively, an “active voice” sentence is one where the subject is doing the acting: “Sally found the bug during the test pass.”  In the latter, Sally is the subject, and in this sentence there is no question about who found the bug.

Hopefully, you can now see why active voice is the better choice. For one, active voice provides more clarity, whereas passive voice leaves the reader/listener with questions.

Another reason to prefer active voice is that it more directly attributes the result to the “doer”. Passive voice can come across as not taking accountability for a bad result or not properly attributing success to the specific behaviors that led to it. Hopefully you work in a culture that applauds and encourages taking accountability rather than shifting blame to some unseen actor. If you don’t, run.

Luckily, many word processors with grammar checking can help point out usage of passive voice for you. Before you try that, let’s try a quick example. Which sentence do you think is more helpful for knowing what went wrong and how to fix it?  

  1. The tests weren’t included in the test pass because a miscommunication occurred.
  2. Oscar did not execute the tests in the test pass because he misread Rolph’s instructions.

I tried to make it obvious there by including not one, but two passive voice phrases in the first sentence and two active voice phrases in the second. After hearing the first sentence, you might be left wondering who didn’t run the tests, what the miscommunication was, etc. The second sentence makes these points clear, however. Of course, you still need to understand whether Oscar was careless in his reading of the instructions or if Rolph’s instructions were too confusing, but at least you’re one step closer to getting to the bottom of the situation.

As testers, part of our job is to root-cause problems in products and processes. Active voice provides a much more direct way to present a root cause, and that should in turn provide a more direct route to a solution.

Homework

Pay attention to communication you receive today and try to count the number of times you hear or see passive voice being used. Pick one or two examples and re-form them in active voice.

If you have any questions on how to rephrase passive to active voice, let me know and I’ll try to help.

3 Ways To Increase Your Value As A Tester

A couple of tweets by Justin Hunter (@hexawise) recently got me thinking about what makes a tester really valuable for the long term.  He tweeted:

3 things more testers should know more about: test design approaches, Context Driven Testing, Bach & Bolton-style Exploratory Testing (tweet)

As a tester, if you want to maximize your ST earning potential, learn Selenium. To maximize your LT value, study Context Driven Testing. (tweet)

(In the last tweet, ST and LT refer to “short term” and “long term” respectively.)

While I agree these are all valuable skills for testers to have today, I believe they hint at deeper skills that are actually immensely more valuable over the long term.

Before we begin, consider what “valuable” means to you. Is it money? Is it getting the opportunity to work on cutting edge products or products that help change the world? Is it the flexibility to work remotely or to be able to travel around the world? Is it getting to work with brilliant people? Depending on your answer, your path to becoming “highly valuable” will vary greatly.

Since the tweets above mentioned earning potential, I’m going to focus on that aspect for now, though I personally rank some of the aspects above higher.

Adapt

Thinking about the law of supply and demand, one way to look at “value” as a tester is having the ability to stay in high demand but short supply throughout your career. Given the rapidly changing pace of software development technologies and practices, you’ll need to rapidly evolve your testing skills along with them. As a quick example, my first job as a tester required writing test automation in C++ and COM. After that, I worked with C#, SQL, JavaScript, and most recently Objective-C. At any of those points, I could easily have chosen to stay put (yes, people still use COM today!), but because I knew other programming languages and had demonstrated I could adapt to change, many other opportunities were open to me.

Here’s another way to think about it: your industry is changing every day. Your team is hiring new people with new skills and different experience. Why? It wants to be more efficient and more effective. Staying static in a dynamic world is extremely risky if you want to keep your role for very long.

Learn How To Automate Tests Effectively

There is a heated debate that has been raging for years about whether testers should learn to automate or not.  My personal opinion is that nobody should do anything they don’t want to do. Having said that, testers who can automate tests effectively in addition to doing a great job at all the other stuff (test planning, manual testing, attention to detail, etc.) are more valuable in the long term. I claim this in the same sense that any skilled craftsperson who is an expert with two tools is more valuable than one who is an expert in only one tool.  Manual testing is essential to validating many types of products, and I would never suggest removing all manual testing in favor of automation for a product with a user interface.  However, all great craftspeople know that it’s best to use the right tool for the job, and manual testing isn’t always the right tool.  Much of what testers do on a daily basis is repetitive and predictable and could be automated.  Testers who know how to write automation that is fast, reliable, and maintainable will have an advantage in the long term over those who don’t. 

Selenium is a great platform for automating web sites, and Justin is correct that knowing Selenium will probably help you land a well-paying job in the short term. While the web seems to be pretty popular these days, websites are just one type of product in a sea of many. Knowing the “elements” of test automation is what’s most valuable, and for many that comes only with years of experience and trial and error. I’ll cover these elements in depth in a future post.
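
As a rough illustration of what “fast, reliable, and maintainable” looks like at the level of a single test, here’s a minimal check written with Selenium’s Python bindings. The URL, field names, and element IDs are made up for the example; the point is that the test is small, independent, asserts one observable outcome, and cleans up after itself even when it fails.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_shows_greeting():
    driver = webdriver.Chrome()          # assumes a local Chrome/chromedriver setup
    try:
        driver.get("https://example.com/login")                   # hypothetical page
        driver.find_element(By.NAME, "user").send_keys("test-user")
        driver.find_element(By.NAME, "password").send_keys("not-a-real-secret")
        driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
        greeting = driver.find_element(By.ID, "greeting").text    # hypothetical element
        assert "test-user" in greeting
    finally:
        driver.quit()                    # always release the browser, pass or fail
```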

Communicate Effectively

Written and verbal communication are critical to being effective as a software tester, and it is a fallacy to think you could be highly valuable if you don’t “work well with others.” Think about all the parts of our jobs that require effective communication: writing test plans, filing bugs, clarifying requirements, reporting test results, sending e-mail, documenting processes, giving presentations… Yet examples of ineffective communication are all too easy to find every day in each of these scenarios. I’m not perfect by any means; communicating effectively is a life-long pursuit. This is equally important whether you work remotely or in a team room with other engineers.
 
When you communicate effectively, others will “hear” you the first time. You won’t waste precious time going back and forth asking and answering questions. You’ll be less randomized, and so will your team. You’ll be seen as someone who can be trusted, because your communications will inspire confidence rather than doubt. All of these will contribute to higher earning potential over time.
 
One of the best tips I can offer for testers is to say as much as you can in as few words as possible. This holds for your defect reports, presentations, emails, test plans, etc. Don’t assume you have to fill up a certain amount of space with “fluff”. Just say what needs to be said and be done. Look for words like “utilize” as smells that you’re off track. 
 
For defect reports, try to anticipate what questions others would have after reading your report. Did you report enough detail in the “steps to reproduce” section, or did you make assumptions that the reader might not also make? Including screenshots or videos can be extremely valuable too.

Lastly, smile. A tester’s role is highly critical, and many testers come across as unnecessarily grumpy. Of course, many developers also come across as grumpy, but who wouldn’t when there’s an army of people paid to point out how bad your code is? If you think about your role from the developer’s point of view, you can see how a friendly smile (no, not a Dr. Evil smile!!) can go a long way in convincing a developer to pay close attention!

Summary

There are many ways to maximize your long-term earning potential as a software tester. We discussed three such ways above: adapting to change, learning how to write effective test automation, and communicating effectively. Each of these deserves much more attention than a single blog post and can take years to master, but hopefully by putting them on your radar you’ll stop to think about how well you do them now and how you might want to change.

On Context-Driven Testing

I may be the last person to learn about context-driven testing, but since I have recently, I felt compelled to share a few thoughts.

The concept of context-driven testing is described thoroughly by its creators, but at least part of it could be summed up quickly as “use the right tool for the job at hand.” 

For many, CDT may be one of those frustratingly obvious learnings that’s been sitting in front of you all along, hidden behind tradition and “best practices”.  We write test plans using the same template we’ve always used, hold the same meetings we’re used to holding, and assume that all new tests will simply plug into existing harnesses or test case managers. We get so used to “the way things are done around here” that we often forget to step back and just solve the problem at hand.

As I reflect back on the testers I’ve worked with over the years, I realize that while a significant number of them were brilliant when it came to testing software, many had never really tested their own processes. They thrived in the momentum of their environment, but over time, as new challenges came up, they had a tough time creating different kinds of momentum more applicable to the problem at hand (I could draw parallels to “kids today,” but I’ll hold off for now).

My call to action for testers today is to pick at least one regular process you follow and seriously think about whether it solves a problem you actually have. If you don’t know what problem it solves, ask a peer or your manager. There’s a good chance that much of what you do is actually necessary – and in those cases, you should definitely understand why! But there’s also a good chance that at least some of what you do is done simply because that’s the way it was done before you got there. You read a wiki, followed a template, check, check, check, and off you went. Be the person to spot one of these, and for bonus points, come up with a better way to do it – or cut it out altogether!

Testing Is A Risky Business

Some of the most important responsibilities we have as testers are identifying, reporting, and mitigating risks. I’ll go as far as to say that if you aren’t regularly doing these things now, you aren’t reaching your full potential as a tester.

Let’s take a look at each of these individually.

Identifying Risks

Risks are anything that could cause the project to go off schedule or result in lower than acceptable quality when finished.  Let’s consider a few examples and explore why they’re risky:

  • A seemingly endless stream of bugs found in a feature – Might indicate the feature’s code is too complex, its requirements were unclear, or it isn’t being tested in a methodical way. Risky because it makes it harder to predict when the product will be ready to release.
  • Features being implemented before design decisions are finalized – Risky because work might have to be thrown away and re-done late in the development cycle, causing late code churn and lots of extra testing needed with little time to stabilize.
  • Dependency teams missing deadlines – When teams you depend on can’t deliver high quality components to you in a predictable way, it adds uncertainty to your schedule. 
  • Features that used to work break frequently as code churns – Risky because deeper testing might get delayed while bugs are being fixed.  Also a sign that proper regression testing is not being done before code is committed.  Usually a smell of deeper problems in test coverage, team discipline, or code complexity.
  • A test plan review ends up with more questions about feature design than answers – Risky because it indicates the team doesn’t have a clear picture of what they’re trying to build, leading to confusion, inefficiency, and (usually) rework.

Focus on the biggest risks first

Any given project will have many risks; it’s just the nature of developing software.  The most effective testers will focus on the biggest risks first.

As testers, we’re accustomed to finding bugs, and each individual bug could be thought of as a risk. But chances are, you’ll find more bugs than you care to track and report on individually. Rather, it’s more helpful to step back and “look at the forest” instead of the trees (bugs).

Sometimes the biggest risks will be obvious, but sometimes you’ll need to work with others on the team to determine which risk truly represents the biggest threat to the project’s success.

Reporting Risks

Once you’ve identified risks, one of the most effective ways to squash them is to keep them highly visible to project stakeholders.  A risk report could take many forms, but usually it includes:

  • a brief summary of the problem
  • a brief description of the planned mitigation
  • next steps
  • a date/time for when the risk should be mitigated
  • some indicator of whether the mitigation is on track or not (e.g. red/yellow/green)

You don’t need to write up an official memo or create a PowerPoint presentation for each risk. Often, a simple email will do, or a few quick sentences in a team meeting. The point is to raise visibility of risks as soon as possible to give the team more time to mitigate them.
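
If your team does track risks somewhere more structured than email – a wiki table, a dashboard, a spreadsheet – the elements above map naturally onto a simple record. The sketch below is just one hypothetical way to capture them, not a prescribed format:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Risk:
    summary: str          # brief summary of the problem
    mitigation: str       # brief description of the planned mitigation
    next_steps: str
    mitigate_by: date     # when the risk should be mitigated
    status: str           # e.g. "red", "yellow", or "green"

# Purely illustrative example entry (names and dates are made up).
example = Risk(
    summary="Partner team's parser component is running two weeks late",
    mitigation="Test against last month's drop; add contract tests for the new API",
    next_steps="Confirm revised delivery date in Thursday's sync",
    mitigate_by=date(2015, 6, 18),
    status="yellow",
)
```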
 
You might be thinking to yourself, “Why would testers need to report this? Isn’t this the project manager’s job?” If your team is lucky enough to have a skilled project manager working with you, then yes, it’s certainly within the realm of what they might report. That said, someone needs to inform the PM about the risks in the first place. As a tester, you are likely to have a unique view of the project that others don’t have, and therefore it is your responsibility to call out risks you see from the tester’s perspective.
 
Aside: Regardless of whether you have a PM or not, I’ve never been a fan of drawing hard lines around areas of responsibility. As a team member, the team succeeds or fails together. If you see a risk that nobody else is reporting, call it out! Consider the 2013 security breach at Target. Had someone raised the risk that their anti-malware software was detecting problems earlier, they might have been able to prevent any customer data from ever leaving their servers!

Mitigating Risks

This part should almost be obvious. Once we’ve identified risks, our job is to help make them go away.  I say “help” here because we can’t do it alone; we usually need others to work with us.  For example, once we’ve identified a risk that a particular feature has a seemingly endless stream of bugs, we might ask the developer team to revisit the code to see if it can be simplified or ask a designer or stakeholder to simplify the requirements.  For our part, we need to continue testing thoroughly and expand our thinking to find ways to test the product more efficiently or more effectively.  Once a fix has been made, we need to instill confidence among stakeholders that the risk is indeed mitigated.

To prevent risks from reappearing later, it helps to identify their root causes and then drive whatever changes are necessary to fix those problems. The 5 Whys technique can be useful here. Great testers know how to break problems down into smaller problems and debug them piece by piece. This applies to debugging processes as well as software.

Other mitigation techniques for testers might include reviewing test plans with key members of the team to identify gaps in test coverage, building new test automation to catch bugs earlier in the process, or enlisting the help of other testers to bring “fresh eyes” to an area that was previously thought to be well tested.

Conclusion

Risks exist in every software project.  Successful software testers recognize this and regularly identify and focus on the biggest risks first, keep the team informed about risks, and help mitigate risks constantly.  Are you?