Be Careful What You Test For

My teams are great. I’m not on them…I own them. At least as far as an executive ‘owns’ his teams. In an Agile world, when we say ‘own’ we kind of mean ‘is responsible for the success of’ or some such thing. What I’m trying to say is: don’t take umbrage at the words I used. I’m responsible for a bunch of people in a company, and those people are mostly organized into Scrum teams. So, my teams are great.

They’ve only been doing Scrum for five months now and they’re doing well. They’ve been producing potentially shippable software each sprint from the very beginning. They inspect and adapt, although there’s still a lot of help required from their managers in that. They do stories and grooming and planning. And they’re now writing automated tests. Wowie.

And then an unexpected thing happened, interesting enough for me to write about. Here’s what it is: our quality went down.

Yup. We’re doing more software. We’re releasing more often. We’re writing automated tests, both unit and functional. What could possibly go wrong?

In the old days (before last June), they used to release maybe three or four times a year max. And for each release, they would spend a week or two or more testing. Everybody testing. And the software was pretty good and the testing was pretty good and so it went out and the production software was pretty good.

Then we became Agile and started releasing more. We started writing automated tests (over 225 done on the back-end system, and the handset application team has begun writing its own automated tests). We prepared to release each sprint. And we stopped doing the huge manual testing marathons we had done in the past. Everything seemed OK until maybe Sprint 6, when we started to notice small problems in production after each release. Bad SQL. Broken JavaScript. Small stuff, but noticeable.

So what happened? We stopped testing, is what. But Agile says don’t do manual testing, do automated testing, and we’re doing that. Aha, but writing automated tests isn’t quite ‘doing automated testing’ until you have enough automated tests in place to really make a difference. That’s what we seem to have missed. We stopped our manual testing (those three-week marathons) because we figured that since we had started writing automated tests, everything would be OK.

Somebody smart must have already said that “A poor test that is executed is way better than a great test that isn’t.” (In case nobody has, I say it all the time, so I’ll take credit.) We forgot that. We didn’t have enough tests to execute, so in effect we weren’t testing, and we didn’t recognize it and supplement with the old manual tests. Now we’re figuring that out, and it’s tricky, because we don’t have three weeks each sprint to spend on manual testing. Unfortunately, we’re probably a couple thousand test cases away from being up to speed on our automated testing. We’ll have to inspect and adapt a few times on this.
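To make the “poor test that is executed” point concrete, here’s a minimal sketch of the kind of cheap check that would have caught our “bad SQL” bugs, had it actually been running every sprint. Everything here is hypothetical: `validate_sql` is a stand-in, not code from our system — the point is that even a crude, executed check beats an elaborate test plan sitting on a shelf.

```python
def validate_sql(query: str) -> bool:
    """A deliberately crude sanity check: does the query even start
    with a known SQL verb? Not a real parser -- just a poor test
    that can run on every build."""
    q = query.strip().rstrip(";").upper()
    return q.startswith(("SELECT", "INSERT", "UPDATE", "DELETE"))

def test_accepts_basic_select():
    assert validate_sql("SELECT id FROM users;")

def test_rejects_typo_in_verb():
    # The kind of small, noticeable production bug described above.
    assert not validate_sql("SELEC id FROM users")

if __name__ == "__main__":
    test_accepts_basic_select()
    test_rejects_typo_in_verb()
    print("ok")
```

It wouldn’t catch much, but it runs in milliseconds, so there’s no excuse to skip it — which is exactly the property our unexecuted “great” tests lacked.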

And yes, you’re right, the title doesn’t really match the article, does it? Still, I like it.

