Quality Assurance vs. Software Testing

For the vast majority of my time in the Context-Driven community, I have loosely accepted many “truths” as presented. I have pushed back on some, argued with a few, and flat out rejected others. One of those accepted truths is the idea that Quality Assurance is an inferior title to something more appropriate, such as Software Testing, Light Shiner, Information Gatherer, or Bug Master. Recently I have found that my agreement with this idea is looser than I thought. This post is an attempt to come to a better understanding for myself, and hopefully for others in the community.

Not long after my whole team took the RST course with Paul Holland, they decided that the name of our team should be more representative of what we actually do. It was a unanimous decision to change the name from “Quality Assurance” to “R&D Testers”. This reflected the fact that we were, first and foremost, members of the Research and Development team, and that the role we filled was testing, as opposed to “Assuring Quality”.

Great! I left our meeting that day thinking the team had really listened to some of what was taught. I thought the process of changing the team’s name would be rather simple: change a couple of e-mail aliases, have a couple of quick conversations with the R&D team leadership, and we’d be done.

So I went to start those quick conversations, and it turned out they weren’t as quick as I thought. Before I go on, I want it to be clear that the individuals I was talking to are engaged development leaders who really care about what we do, participate in community discussions on agile and kanban topics, and have my respect in many ways. This isn’t a “bash the ignorant” type of blog post. With that framing, I brought this idea to the R&D team leadership and was met with some resistance. In my side of the conversation, I parroted the arguments I have heard from others in the community: “testers don’t do anything to assure quality; we can’t be certain (or sure) of the quality of a product.”

This was not received as well as I thought it would be. I was under the impression that this was a self-evident truth, that others in the industry were simply too ignorant of what testing actually is to understand it, and that all of the “QA” garbage that flies around is a relic of manufacturing processes applied to software. Here I was talking to people with whom I share many beliefs about software development, and they disagreed with me. The main thrust of the argument was disagreement with the notion that testers do nothing to assure the quality of a product. In this person’s opinion, on every project and team they had been on, testers were very influential in increasing product quality, and therefore the name QA wasn’t altogether misleading.

“But we don’t ‘ASSURE’ anything, impact perhaps, but not assure,” was my dutiful context-driven retort.

“Assurance doesn’t mean that the product is perfect, but QA people definitely bring a great value in improving quality,” was the response I got.

I was able to walk away from that conversation with a kind of do-whatever-you-want agreement from our team leadership, but I wasn’t satisfied. I went back to my desk to look up the definition of the word ‘assurance’ to prove my point: we don’t assure anything as testers. In looking up this definition, my agreement with CDT started to get a little looser.

The definitions of ‘assurance’ all pointed back to the root word ‘assure’. Merriam-Webster offered four definitions of ‘assure’. I pulled each one and started detailing why it didn’t apply to what testers do (the outcome of that process can be seen here). Eventually I came to a definition of ‘assure’ that stopped me, though: “to give confidence to”. For example, “The child was scared to go to the dentist, but her mother’s assuring words gave her the confidence to climb into the chair.”

This reminded me of a conversation I had with James Bach a few years ago. What first pulled me into the CDT community was that they were the only people who seemed to agree with me on how testing is valuable. As James and I were talking, he made the following comment: “I test because my clients are worried that something they don’t know about the product will hurt them.”

To me, that statement seems to agree that testing is done to build confidence in a product. At the end of testing, all wrapped up in appropriate safety language and carefully crafted words, is a report about the level of confidence in a product, or at the very least information that is meant to affect some stakeholder’s confidence in it.

I agree that the rest of the definitions of ‘assurance’ are misleading, even a bit scary. But the idea of Quality Assurance as a process of building confidence in a product, or gathering information for others to build that confidence, is one I think I could get behind.

This isn’t to say that I dislike the term ‘testing’ or anything else that does a decent job of describing what a team does. What I am trying to do here is gain a better understanding of why the community is so opposed to the term “Quality Assurance”. Please let me know in the comments if you agree with how this is presented, or where I am way off.

My next post will be about the cultural impacts in an organization of changing the name of a team from QA to Test. That is what this post was supposed to be, but I thought this was a better point to start the conversation.

January 9 2013 Update

So after letting this post simmer for a few months, I have decided that taking up the fight internally to officially change the name of the team isn’t worth it. We refer to ourselves as testers, and the rest of the development team understands that we are testers, but for support, sales, marketing, etc., I didn’t find any payoff in changing the team name. Heck, I don’t even have the energy/time at this point to write another full post about why I feel that way. That is why I am updating this post rather than writing a new one. I wanted to cover another topic in my next post, but didn’t want to leave this topic unsettled.

It’s good to be the tester! (HTC DROID DNA Review)

Sometimes, it can be good to be the tester.  And by good I mean really good.  By virtue of my love for testing, and HTC smartphones, I got the opportunity to get my hands on a pre-release version of the DROID DNA, the new flagship ultra-awesome 5-inch Android smartphone from HTC.  Woo!

The Changing Face of Test Management

Another week, another podcast.  I have been very lucky to have had the opportunity many times to join Matt Heusser, Michael Larsen, and others on the weekly This Week in Software Testing podcast sponsored by Software Test Professionals.  This week was a good one.

If you remember back to my post on writing conference reports, in my report from the KWSQA conference I mentioned that as our team made progress toward more agile (small ‘a’) methodologies, the testers and developers needed to move closer and closer together.  As the testing and development teams have merged, we have gone from two distinct teams to one.  This is great and has had a significant impact on the quality of the software we are producing (as I mentioned in my presentation at CAST 2012 last month); however, it produces an interesting position for me (the QA Manager) and the Dev Manager, as we now have one team with two managers.

Others in the industry are having similar problems, and this week’s podcast is a bit of our conversation on this topic.  Go ahead, give it a listen.

Part 1 - http://www.softwaretestpro.com/Item/5690/TWiST-117-The-Changing-Face-of-QA-Management-Part-I/podcast

Part 2 - http://www.softwaretestpro.com/Item/5700/TWiST-118-The-Changing-Face-of-QA-Management-Part-II/podcast

I taught myself a new word…I’m an autodidact!

For those of you that missed it, Test Coach Camp was a blast.  2 days of non-stop discussion with the best and brightest minds in the space of test coaching, and I got to go!

There were tons of great discussions, exercises, and lessons learned at TCC, but one of my favorites was a discussion I was able to facilitate on the topic of autodidacticism.  We approached the topic from the angle that the best way to teach testing is to empower people to teach themselves about testing, but how do you get people to do that?

Luckily, Michael Larsen pulled out his handy dandy portable recording studio and was able to catch the whole conversation and post it out to the interwebs.  Thanks to Software Test Professionals for hosting the recording.  The link is below:

Part 1 – http://www.softwaretestpro.com/Item/5613/TWiST-107-%E2%80%93-Autodidacts-Unite-Part-I/podcast

Part 2 – http://www.softwaretestpro.com/Item/5618/TWiST-108-Autodidacts-Unite-Part-II/podcast

How I Write a Conference Report

A while ago I was able to attend the KWSQA Targeting Quality conference in Waterloo, Ontario.  After a great time learning, connecting with old friends, and meeting new ones, I eventually had to go back to the office.  When I got there I was expected to produce an experience report to justify the trip, as I am sure many of you have had to do in the past.

For the purposes of this post, I would like to share a couple tricks I employed to produce what I would consider a decent experience report.

Focus on Value

In my case, the company covered the bill for the trip and the conference.  Though it wasn’t a huge investment, I wanted to make sure it was a worthy one.  I learned lots of things at the conference, and it is important to include those tidbits of knowledge in the report to show what was learned that could be leveraged for the company.

Focus on Solutions

I have seen quite a few reports from others (I have even been guilty of it in the past) that just rewrite the class descriptions in report form and call it good (i.e. I learned x in class a and y in class b).  This covers my first point a bit, but just listing random facts and topics you learned about doesn’t show the application of that knowledge.  Based on all of the knowledge you gain, look for ways to apply it to problems currently facing your company.

Implement Solutions/Value

Once you have this knowledge and some way in which to apply it, the next step I would consider in writing a great experience report is to actually implement the ideas in the report.  If the experience report is just some document that gets filed into the nether regions of the company storage banks, where is the value in that?

Allow the lessons learned to extend out of the conference, and off the page of the experience report and actually work to implement what you learned.  I was able to do so with what I learned at KWSQA and doing so made the experience (and the experience report) much more valuable.

Below is the text of my experience report from KWSQA (sanitized a bit for safety reasons) for an example of these suggestions in practice:

Targeting Quality 2012 Conference Attendance Report

-Wade Wachs-

After spending a couple days at the Targeting Quality 2012 conference sponsored by KWSQA, I came back to the office with a few items that I feel would benefit the culture and outcomes of the development and QA teams in our company.  Those items are listed and explained below.

Reduce/Remove any Us vs. Them culture

This is one of the biggest actionable items I came away with from the conference.  This applies in several dimensions that our company is already taking actions to accomplish.

Dev vs. QA

I think we have managed a pretty decent relationship between the development team and the testers in our company, but we have consistently thought of these as two separate teams.  One of the big things that I heard at the conference was the idea of considering the testers as part of the development team.

Paul Carvalho talked about this in terms of the fact that Scrum recognizes only three roles: Product Owner, Scrum Master, and Developer.  That is not to say that only those who write code count as developers, but that all members of the team who are not managing or defining the requirements should be working to build a quality product.  I had several conversations with Paul and others suggesting that a cultural shift to include the testing role in the team of developers could have a significant impact by tightening the feedback loop between code creation and testing.

We have already made significant steps in the last couple weeks to work towards a goal of integrating the code writers and testers better.  Conversations are in the works to continue this integration further.

Office 1 vs. Office 2

Selena Delsie made a comment I really liked, along the lines that having a small team practicing agile in a larger, more waterfall organization is typical, but greater benefits can be realized if the whole organization works together in a more agile manner.  This really hit home for me, as I have felt that Office 1 has been going more and more agile while Office 2 is still struggling to understand how we do things.  I wrote in my notebook in Selena’s session, “The WHOLE company needs to BE agile, not just development DO agile.”

After conversations with an internal employee last week, I think we are taking some good steps in this direction with the inception of monthly blackout dates and taking the time to all meet together as a company and discuss what we are all doing.  I am cautiously optimistic that these meetings could have a significant positive impact on the quality of the software we are producing as we reduce the feedback loops between those of us producing the software and those teaching how to use the software.

The Software Testing Ice Cream Cone

In his tutorial about pitfalls in agile organizations, Paul Carvalho talked about the balance of manual and automated testing.  Based on concepts from Brian Marick (one of the Agile Manifesto signers) and a couple of others, there needs to be a push for manual testers to do business-facing testing that critiques the product, and to spend as little time as possible on base functionality and regression checking.  The testing effort can be drawn as a pyramid with unit tests at the bottom, integration and then functional tests on top of that, and manual exploratory testing depicted as a cloud on top of the pyramid, supported by the bottom three layers.

However, in many organizations (ours included) the actual testing effort is an inverted pyramid, with very little automated unit and integration testing, a little automated functional testing, and lots of cloud-shaped manual testing, which ends up looking like an ice cream cone.  I have already talked with Steve about turning that ice cream cone around by adding some additional effort in unit testing and better-supported automation.  This goal is in the process of being implemented via the talent reviews with QA and developers.

Effective Metrics

There was a great keynote from Paul Holland where he gave a few techniques on how to effectively provide metrics to management while maintaining the integrity of the narrative.  The concepts I would like to investigate more and implement are:

- Provide metrics along with narrative to provide the full story behind the metrics.  This narrative can contain any of the potential pitfalls or dangerous conclusions from the metrics or other qualitative information not captured in numbers.

- Use a dashboard to provide a better picture of testing activities.

- Make more effective use of sticky-note boards for managing the testing effort and displaying the work being done.

I was also party to a couple of side discussions on this topic at the conference.  I hope these conversations will be helpful in moving us toward our goal of identifying useful performance measures and providing that information up the management chain.

All in all it was a very enjoyable conference.  The intangibles of the conference were many, but include an increased passion in continuing to push forward, a feeling that the company values me as an employee enough to invest the funds to send me to training, and an increased connection to the testing community to further relationships that will be sustaining in the future.  I truly appreciate the investment and would like to attend further conferences in the future as we get a better handle on this current list of improvements.

Are there any decent testing metrics?

Last week I was asked by my company to define some decent metrics to start tracking for my team.  I have been thinking on this for a while, and the report I ended up with is one that seemed very appropriate for a blog post.

I read a tweet the other day from Janet Gregory:

“I like visible metrics that are aimed at measuring a problem that we are trying to solve. Once the problem is solved, stop.”  (original tweet)

That resonates with me. So what problems are we actually trying to solve by collecting metrics? The way these metrics were framed by $person, we are looking for numbers to provide to corporate to keep them from forcing us to gather metrics that may not be applicable in our scenario. $person also mentioned justification of new hires and additional resources.

I appreciate the heads up and the suggestion to get some metrics in place before we are forced to do so. I don’t believe that 2 or 3 numbers will provide any real insight into our team, especially at the corporate level. I also lack an understanding of what would satisfy their need for data. This doesn’t seem like an actual problem that I know how to solve.

As far as justification for additional resources, I understand ‘resources’ to include hiring new people onto the team as well as funding for training and improvement of the current team. Through the page coverage metrics below, we will be able to show the page coverage possible with our current team. These metrics will not show any inefficiencies in time usage, but it seems that with the coming of $newtool we will get a complete suite of metrics that can track that. $testerperson will also be tracking time spent on helpdesk tasks to help justify hiring a full-time helpdesk tech.

The problem I chose to address is one that is authentic to me and my team.  The coverage metrics listed below are ones I think will assist in managing my team’s testing effort.  As each release date draws nearer, these coverage metrics will provide insight into areas of the application that may need further testing.  As mentioned below, coverage metrics are fallible; this is only one tool to aid in testing and in showing the story of our testing to others in the company.

Metrics for QA

Hot fixes required to fix critical bugs in production

This is the most important metric for tracking the efficacy of QA, as far as I am concerned. Hot fixes are pushed to resolve critical issues clients face. This metric should be as close to 0 as possible for any given period of time. While we have not been tracking this explicitly, we have noticed a decrease recently. I think a brief description of each of these incidents should also be recorded.
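As a sketch of how such a record might look (the structure and the incidents below are invented for illustration, not our actual data), each hot fix gets a date and a brief description, and the metric for any period is simply a count:

```python
# Hypothetical sketch of a running log of production hot fixes.
# The goal metric is the count per period, kept as close to 0 as possible.
from datetime import date

hotfixes = [
    {"date": date(2012, 6, 14), "description": "Report export crashed on empty data set"},
    {"date": date(2012, 9, 2), "description": "Login failures after certificate update"},
]

def hotfix_count(log, start, end):
    """Count hot fixes shipped between start and end (inclusive)."""
    return sum(1 for fix in log if start <= fix["date"] <= end)

print(hotfix_count(hotfixes, date(2012, 1, 1), date(2012, 12, 31)))  # 2
```

Keeping the description alongside the date means the number never travels without its narrative, which is the point of recording each incident.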

Time Spent on Support Tasks

One of the problems we are facing right now is the lack of a full time support employee. $testerperson and Wade (the two main QA members that work with customers) will track their time spent on support related tasks to assist in hiring a full time employee.

Page Coverage in Automation

This week we will be adding a tool to $app that will allow us to track all page views by user. We will then be able to compare this against the current count of pages in the app and come up with a decent metric for % coverage. This is only a coverage metric for how many pages are viewed; it is not branch coverage, code coverage, logic coverage, or any other sort of coverage someone may think of, but it will give us some insight into what portion of the app is being tested by automation.
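As a rough sketch of the calculation (the page names and list format here are hypothetical, since the tracking tool isn't in place yet), the metric is just the ratio of distinct known pages seen in the view log to the total page count:

```python
# Hypothetical sketch of the page-coverage calculation described above.
# Assumes two exports: the full list of pages in the app, and the pages
# the automation run (or the manual testers) actually viewed.

def page_coverage(all_pages, viewed_pages):
    """Return the percentage of known pages that were viewed at least once."""
    known = set(all_pages)
    if not known:
        return 0.0
    viewed = set(viewed_pages) & known  # ignore views of unknown pages
    return 100.0 * len(viewed) / len(known)

pages = ["/home", "/reports", "/admin/users", "/admin/settings"]
seen = ["/home", "/reports", "/home"]  # duplicate views only count once
print(f"{page_coverage(pages, seen):.1f}% page coverage")
```

Duplicate views collapse to one, and views of pages outside the known list are ignored, which matches the caveat above: this says nothing about branch or code coverage, only whether a page was visited at all.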

Page Coverage in Manual Testing

With the same tracking tool that will allow us to track automation’s progress, we will also be able to track manual coverage. This will be useful for tracking coverage at each release cycle.

These are the metrics I plan to track for my team in the coming months.  As I mentioned above, the page coverage metric is fallible, but I think in my context we will be able to use it for some useful information.  I don’t necessarily want this metric to reach 100% coverage, as there are pages in our app that are not actively used by clients, or even available to them.

Once I get a feel for how these metrics start to illuminate our process, the metrics or our processes might change.  Until then, this is what we will use.

Where does innovation come from?

I have been involved in a rather interesting discussion this week with an individual I will refer to as Mr. Higherup; he’s one of the higher-ups here at the company (I refer to him this way only because I haven’t asked his permission to use his name in this blog post).  There was a plea for more innovation to come from our company, and that sparked a debate between the two of us on ways to structure the company so that innovation can occur.

As I reflect on this topic, many of the arguments we have made can easily be applied to testing.  Exploratory testing, to me, is a very innovative process.  Allowing your last step to influence the next requires a steady stream of ideas about where the next step should be.  Below is my initial response to the request for the company to be more innovative:

Mr. Higherup,

I see a few cultural obstacles that need to be broken down for ideas to really flow out of this company.

The software we write exists to solve problems. In the case of [the division of the company in which I work], Mr. So-and-so and his companions noticed a problem in the justice sector, and they came up with an idea for a solution to solve the problem. This idea came from people that were very intimately connected to the problem that needed to be solved. I imagine that many of the ideas that this company currently monetizes have roots from ideas of people connected to different industries.

The first obstacle I see is that we don’t know what problems exist out there. A large portion of [this division], and by abstraction [the company as a whole], is made up of technical people who create and implement these ideas but have no connection to the industries or problems we are solving. Meanwhile, the members of the company who do have that connection to the industry generally don’t understand, as well as the technical side of the company does, how the technology could solve the problems they are facing.

Steven Johnson talks about the coffee shop being the idea center of the Enlightenment because of the exchange of information and ideas that happen in that space leading to new and exciting ideas. I believe that the ideas you seek will breed in an environment where an understanding of existing problems mix with an understanding of technology solutions. The challenge is to create that environment.

I don’t believe that replacing [our e-mail system] or implementing corporate social networking will create that environment.

The next obstacle in creating these ideas is having time available to devote to creating them. We don’t necessarily need to go the direction of Google’s 20% time or Atlassian’s model of quarterly idea days, but we need time to be able to look up from the day-to-day noise and talk about the problems we could be solving. This isn’t going to happen passively, or by writing a couple hundred words on a blog. We all have dueling demands on our time, and if ideas are important to the company, then time needs to be dedicated to creating them.

I will offer one brief anecdote on this topic. One of our developers at [this division] just got back from a conference where he had the ability to sit and talk with people in the industry about some of the problems they are facing. He came back today with an idea that could be the killer app in the justice sector. He had time to mix his ideas with others with very positive results.

I will close with one last aspect that I see. For people to want to invest their ideas into the company, many people want to feel like the company is invested in them. [In this environment this is especially important].  Missteps in this area can lead to significant morale problems that will stomp out any desire to invest in a company that is not invested in its employees.

The take-away message: it’s one thing to ask for ideas; it’s another to create an environment where ideas can be created. Hopefully this feedback is useful.

-Wade-

The main points I made in this initial response are that employees have basic needs that must be met for innovation to happen: an understanding of a problem, time and resources to focus on that problem, and a desire to allocate that time and those resources to solving it.  I believe these same needs apply to good software testing (for some definition of good).

Understanding the Problem

I had the opportunity last month to spend some time in Toronto at the Toronto Workshop on Software Testing (TWST).  At TWST we had some great conversations about identifying stakeholders and our ethical responsibility to them as software testing professionals.  We started a mindmap on a flipchart to identify all of the potential stakeholders of a project and very quickly filled an entire page with probably 50+ individuals and groups, and ideas were still flowing from the group as we moved on.  Each of these stakeholders represented another person our projects can impact, and each represented a potentially different problem set that we face as we develop and test software.  The trick here is to identify the key stakeholders in each specific project and understand what problems they have that need to be solved; you can then use that knowledge to guide your testing on that project.  New projects have new stakeholders and new problems that should be guiding testing.

Requirements and other documentation can be used heuristically to develop a rudimentary understanding of a problem while testing.  I often find that brief conversations with customers, managers, executives, marketing, support, and other important stakeholders can shed a lot of light on a project.

Most of my posts on this blog have covered this topic, and even though I have learned a lot since those initial posts, I will not cover it in much more detail here.  However, there is one important stakeholder I have never mentioned on this blog before: yourself.  For me, it is critical to understand how specific projects and tasks fit into and affect my goals.  Earlier this year I packed my family up and moved across the country to start a new position, in part because I had a firm understanding of what my goals were and the projects I was working on at the time did not fit them.  I believe that we as individuals are the most important stakeholders in any project, and we need to understand that.

Time and Resources to Focus on the Problem

Within certain frames and contexts, especially in business and software projects, time and resources are limited.  In these contexts we have to make decisions on where we allocate the resources and time that are available.  Often these are allocated based on our understanding of the problem.  In testing, resources are allocated to the portions of the software where someone understands there to be the most significant risks to solving the problem we are facing.

The problem with this method of allocation is that it is only as good as our understanding of the actual problem.  With a perfect understanding of the problem, we can attack the most important items in order until the resources are spent, at which point we call the project ‘done’ and move on to the next.  This perfect understanding is only possible in the most trivial of circumstances, however.  In the more real-world and complex cases, our understanding is often flawed.  Going back to the map we built at TWST, understanding the full problem set of those 50+ stakeholders is incredibly complex.  Given that situation, we can either dedicate resources to developing a better understanding of the problem (which will likely leave fewer resources and an understanding that is still flawed), or find a way to allocate resources that compensates for an imperfect understanding.

In my team, we use requirements and a brief regression checklist to drive most of our testing each sprint.  These are the things we have identified as the areas of the software that present the highest risks to reaching the goals of the company.  However, on each sprint I try to make sure that we have at least a few hours, if not a full day at the end, and a day or two at the beginning of each sprint to focus on areas that the testers feel need some attention.  I rely on the intuition and expertise of the individual testers to define what is tested during this time.  This time has helped us identify and track down quite a few important bugs that would not have been found by only focusing on our flawed understanding of what certain stakeholders care about.

Desire to Allocate Resources

Understanding a problem and having resources available to solve it don’t matter in the slightest if you don’t care enough to actually do anything about it.  In my letter to Mr. Higherup, I mentioned the company’s responsibility to manage itself in a way that promotes employees’ desire to help the company grow.  The same dynamic exists in testing.

This is a really interesting point that I haven’t heard much about in this context, but I feel it is incredibly important.  I have read quite a bit lately on motivation, and I think it has such a large effect on our products that it should be talked about far more than it is.

In the TWiST podcast that just came out today I was part of a conversation with Matt Heusser, Michael Larsen, and Jon Bach on the topic of motivation.  In this conversation Jon gave an interesting bit of insight into how relationships affect his motivation to allocate his currently limited resources.  He says, “When I look at my todo list, it’s not a matter of, ‘oh look at all of this stuff to do’.  I think about the people that I’m serving.  In fact that often helps me to remember my todo list!  To just think about the people that I work with and go, ‘Oh. OK.  Probably isn’t the highest priority, but it is most motivating to me right now to get back to Thomas about something I told him I would get back to him about.’  I still have another 2 weeks but right now I feel like ‘hey he helped me out today.  I owe him, I want to do that now.’  That motivates me.  That’s part of respect and reputation I suppose, but it’s really a sense of being a part of what I think is cool.”

Right before that Jon defined cool as inspiring.  What this means to me is that he is, and by abstraction many of us are (including myself), inspired to do work based on relationships.  How does this relate to testing?  Several ways.

I often see the world from a Test Manager bias, because I function as a Test Manager.  The way that I see this connecting to testing is that I have a responsibility to create a relationship with my team.  Cem Kaner has stated, “Managers get work done through other people.”  There are many different styles of management, but the way that I choose to manage my team is by creating a relationship of mutual respect and leveraging that respect to get the job done.  In the time since we recorded TWiST, I have seen vast improvements in the team of testers I work with because I finally took a more proactive approach to investing in the relationships I have with my team.  I now understand them much better, they understand me, and we have a stronger relationship of mutual respect.

This respect has led to some great improvements on my team.  I see more initiative.  Based on the very positive response from customers and management over the last few months, our testing has improved.  I have more trust in my team to get a job done when asked.  My team is doing a great job at the work I am asking them to do, and I believe that is a direct benefit of building relationships, and of the desire those relationships create to accomplish the work we are here to do.

I don’t yet know what the outcome of my conversation with Mr. Higherup will be.  His mind could be opened and significant changes could come, or he may not even respond again.  I do know that in this process I made some great connections and understand my world a little better.  Now that you have read this, I hope you understand your world better as well.

Paradoxes as a testing exercise?

I had a friend ask an interesting question.  I wanted to share my initial response to his question as an exercise for anyone else who wants to take the same challenge.  I see several flaws in my logic that could be cleaned up, but I like the rawness of my initial thought process.  Please share your own responses in the comments.  I would love to hear some feedback.

This question leads me to some very interesting thoughts on how we can use dilemmas like this to teach new testers to ask questions and dig a bit deeper.  I like this idea; I will have to use it.

My conversation with a friend:

Friend: q for you…. what would happen if Pinoccio said “My nose will grow now”?

wadewachs: it would not grow
that is different from saying ‘my nose will grow right this second”
there is also the possibility that at the time he is saying it he is a real boy
in which case it wouldn’t matter whether or not he is telling a lie
so we have to assume the puppet pinochio as opposed to the real boy pinochio

Friend: lol

wadewachs: you then also have to define the scenario in which his nose grows
from the story, we understand it to be when he tells a lie
but what is a lie?

Friend: i guess that depends on intent

wadewachs: is it any mis-truth, or is it only when there is intent to mis-guide
so, if you look at his intentions, when he makes the comment ‘my nose will grow’ is he planning on lying in the future
if so, he believes it will grow, and could be argued that is not a lie
however, if he intends to tell the truth for the remainder of the time that he is an enchanted puppet, and he understands that lying is what causes his nose to grow, and he understands lying requires intent to do so, then he creates a bit of a paradox for himself, by lying about his intention not to lie
in which case the nose would grow because of the intent not to lie
did that answer your question?

Stakeholders wanting to see the green? (checkmarks, that is)

Michael Bolton made an interesting post a month ago titled Gaming the Tests where he explores a situation where we are asked to provide incomplete or inaccurate information.  I would suggest reading the scenario he creates about this topic as this post will be talking about a possible approach to handling that situation.

Jason Strobush commented via Twitter about my previous post creating a situation similar to what Michael talks about in his post:

@WadeWachs Ah, but what if it is MANAGEMENT that likes to see the pretty, green, meaningless checkmarks?

Fast-forward two weeks to this morning, when I was making my way through the daily RSS feeds and came across the following quote:

Trying to be a first-rate reporter on the average American newspaper is like trying to play Bach’s ‘St. Matthew’s Passion’ on a ukulele.

That quote is known as Bagdikian’s Observation.  Ben Bagdikian is a professor of investigative journalism, an author, a former editor, and an expert in his field.  In reading a bit about Bagdikian, I have been thinking that the role of an investigative reporter is very similar to that of a tester.  An investigative reporter digs into society to find the defects that will cause harm to the general public.  A definition from Hugo de Burgh (via Wikipedia) that I particularly like says that, “An investigative journalist is a man or woman whose profession it is to discover the truth and to identify lapses from it in whatever media may be available.”

Is that not the same thing testers do?  We find the differences between the way software is expected to work and the way it actually works.  Those differences are merely ‘lapses in truth’ that need to be identified, which are then reported through our available media, typically a bug report.  Investigative reporter … bug report … tester = reporter … QED.

I digress.  The point I want to make here is: what do you do if your stakeholders are asking for bad information?  What if all they want to see is a page full of meaningless green checkmarks without any real testing going on?

A bad tester would simply produce whatever information management wants, regardless of accuracy.  Test results (if run at all) would be falsified to make management happy.

A mediocre tester would likely run through as many test cases as possible, perhaps even intelligently picking features to test that are known to be working, giving management the information they are looking for to help their team look better so the project moves forward.  (OK, good is a relative term.)

A better tester would know which tests are more important than others, and would make informed decisions about which areas of the software are likely to have problems, test those early in the testing time, and provide feedback to the devs so the problems can be fixed, while still providing the magical green checkmarks that management so desperately wants.

Great testers, however, know better than all of that.  If we want to be first-rate testers and improve our craft, we need to look for a higher method of dealing with issues like this.  There are lots of aspects that go into being a great tester, and I won’t go into all of them right now, but for the purposes of this post I will define great testing as identifying the truth about a piece of software, and reporting that truth accurately.

This is where Bagdikian’s Observation applies to us.  We can’t exist as first-rate testers in situations where great testing is not expected or possible.  The most intelligent tester will never have the ability to shine when only allowed to produce meaningless information for management that doesn’t care.  Bagdikian, in an interview with PBS about similar compromising positions that journalists get stuck in, made this comment, “I know a lot of journalists, I’ve taught them for a while … what happens to some of the best people … is that when things like that happen, they in effect say I don’t want to be in this business anymore, and they leave.”  Leaving the field is not the only option, but how many great testers are we losing to bad situations?

I would like to explore two other options for what great testers can do in situations like this.  The first is to change management’s perception of what testers do.  This happens through open, honest communication.  Don’t be afraid of management.  Don’t beat around the bush.  Don’t tell management one thing and then do another.  Work with them to define a reasonable set of expectations for what testing can do, then (if you come to a consensus) do it.  If you have to do some patching of bad promises made in the past (be it by yourself or someone else), then start now, move forward, and make progress in the right direction.  I don’t care how powerless you feel, or where you fit in the corporate structure; if you want to be a great tester, then create an environment where you can do so.

In my current company, I started in the call center, the bottom of the company.  After a couple open and honest conversations with our CEO, and a year of working my tail off, I was sitting in weekly meetings with department heads defining the direction of the company.  I often felt like a fish out of water; I was a grunt worker coming up out of the trenches to sit and talk about the specifics of the company with the men who ran it.  I held my own, however, voiced my opinions, gained the confidence of those around me, and within a few months I too became a department head.  I now manage our QA department.  There is more to the story, but a lot of it has to do with not being afraid to talk to management and being able to have that open and honest communication with them.  You can create change for the better.

Now, I don’t know the political climate of every organization out there.  I’m sure there are some people who get stuck in situations that they truly can’t change.  In these situations you have the option to settle at one of the levels mentioned previously, or you can go find a place where you can be truly great.  That may mean leaving your current company and finding a place where you can grow and find your own greatness.  James Bach throws around a couple numbers related to this topic: 90% of the testing positions out there may be suited for mediocre testers, places where potential is stifled and there is no room for greatness.  That still leaves 10% of all testing positions where great testers can truly move forward, better the craft, and better themselves.  James is happy working in that 10%, and I am confident that there is plenty of room for more great testers in that job market.

If you want to be great, then don’t settle for a mediocre position.  Push yourself, build your name and reputation, and refuse to compromise your integrity.  I don’t know the whole story behind Ben Simo’s recent employment situation, but from what I have read on Twitter, it sounds like he was a great tester stuck in a mediocre position.  In a tweet a couple weeks ago, Ben commented that the decision to leave his previous employer was one of the best he has ever made.

Now in case my boss is reading this, I am very happy in my current position and I know I have plenty of room to reach towards greatness.  But what about your current position?

The Shifting Schema

I mentioned in my first post that there would be more to come about how my perceptions of “real” testing have changed.  This is that promised post.

As I said in “Serious Schema Shift”, my uneducated guess at what formal testing was included a plethora of documentation, a heavily formalized structure for tests and testers, and more planning than actual testing.  Looking back on this perception, I believe this schema was rooted in my natural tendency towards heavy amounts of documentation and process.  I enjoy defining processes, sorting things into groups, and bringing order to chaos, and therefore I often try to create formal structure in situations.

As a young kid, one of my favorite pastimes was to go to my grandma’s house and help her sort through the possessions she had amassed over the years.  We would spend days defining and implementing organization systems to cut the clutter in her house, and I loved it.  Later, in high school, my friends and I spent many weekends creating new versions of board games such as Risk and Monopoly, complete with loads of new rules that needed to be defined.  Even to this day, I am such a stickler for rules and documentation that my wife has affectionately dubbed me the ‘Rule Nazi’ when playing games with friends.  All of that to make the point that I often see the world in terms of rules and regulations, and that perspective definitely shaped a good deal of what I expected of the testing industry.

One of the mantras I have heard repeated in my current company is “just enough process and documentation.”  The theory is that the time we would spend defining every detail of every product and process would be a waste of money when we could be actually implementing and monetizing our products.  I have struggled against this mentality over the last couple years, but the mistakes that have slipped through due to the lack of planning have been much cheaper than the amount of time it would have taken to meticulously plan all of the projects that launched without problems.

Due to this climate, our QA department has been run in a similar way.  We have documented and defined as little as possible, but as much as necessary.  While this approach worked well for the company as a whole, the QA team was struggling to succeed with it.  I was in one meeting where I was told that my department was the most broken in the entire company.  Something needed to change, and I thought the answer was more formalized testing procedures that forced the QA team to push forward and be better.

To this end, I made my way to the StarWest conference hoping to find the tools and processes I would need to fix my seemingly broken department.  As I planned my class schedule for the conference I looked for the speakers that would tell me about how to develop a successful test plan and how to define strategies that would pull my team up to the next level.  The first day of the conference I wanted to attend Janet Gregory’s tutorial ‘Planning Your Agile Testing: A Practical Guide’, but at registration her tutorial was sold out, so I settled for my second pick, James Bach’s ‘Critical Thinking for Testers’.

Before I came along, my team had been hired for their critical thinking skills, the driving principle being that testing is more of a mindset than a skill set.  With this as the history of my team, I was anxious to see what James had to say about using those critical thinking skills in testing.  The tutorial lasted all day, and I left with a few nuggets of information, but I hadn’t really heard any incredibly new concepts that felt very applicable to my team.  I enjoyed listening to James, and the day was entertaining, but I didn’t feel like I was any closer to having the test plans I was looking for.

The next day I spent in a full-day tutorial titled “Successful Test Automation”.  I was introduced to exactly what I had come to learn, but it felt all wrong.  The presentation was all about documentation, metrics, formulas, and red tape in relation to test automation.  This was exactly the stuff I was looking for, but it was all nonsense.  The metrics were measuring flawed information, the formulas were using the bad metrics, and the whole crowd of testers was listening and agreeing with the presenter.  I couldn’t believe it!  I brought up my concerns about the flaws I saw and was immediately shot down, not only by the presenter, but by several of the attendees in the class.

I remembered hearing James Bach say the day before that he no longer attended other presenters’ classes because they often couldn’t handle the comments he would make.  I felt the same way: I was in the class trying to learn, but when I brought up my concerns I felt shunned by the entire class.  Many of the ideas and principles James had taught suddenly seemed so much more alive and real than they had the day before.  His concepts had mostly seemed like common sense at first, but I could now see that the sense he taught was not as common as I had once thought.

At the lunch break I brought up some of my concerns about the flaws I was seeing in what was being taught.  The one individual at the table who was attending the same tutorial disagreed with every point I brought up.  It seemed to me that the tutorial was teaching testing as a means to documentation, as opposed to documentation as a means of assisting testing.  The conversation at the lunch table didn’t give me much hope that I would find anyone who really understood what I was trying to say.

In the interest of keeping this post a readable length, I will break more about this session into another post at a later time.  The outcome of my experience was that I realized my testing team was doing a pretty good job.  Yes, we have a lot of room for improvement.  However, the answer does not lie in endless piles of documented test cases and paperwork.  The improvement we need is to cut through a lot of the mindless regression checking that we are currently doing, so my team can step out and use the critical thinking minds we hired them for to do some intelligent testing.  James Bach has some good processes for developing intelligent testers and giving them the tools they need to be effective.  We are implementing some of these tools, and I am already getting positive feedback from management above me.  I am glad I was able to adjust my model of what good testing is.  Now I feel like my team can move forward.