Friday, January 16, 2015

Test Metrics when using Scrums

I had some interesting discussions this week about testing metrics for my team.  We are not doing anything out of the ordinary compared to other agile development shops, so like others, I have management asking me to provide metrics on the work my group is doing and how I know testing is done.

We are using Jira to track stories, tasks and defects, and TestRail to track our test cases, sets and milestones.  I'm not saying either of those is better than any other tool, but they work for us.  We completed a big-bang redesign of our website late last year and these tools worked really well for us.  Specifically, we were able to track our manual tests in TestRail through monthly milestones and provide our management with a daily pass/fail/block/na/no-run status, which was really helpful in keeping them informed and cutting down on questions.

But now that the big-bang release is done, we are back to our scrums.  We have 6 scrum teams with 1 QA resource on each team.  We have the ability to do daily releases, so how are we going to track how many test cases we write and then run, while still maintaining rapid releases?

So far, we have settled on using TestRail milestones, sets and test cases for new features and functions.  For bug fixes, we will put our test results on the Jira ticket, and we will log a dummy test to capture our automated results.  We have the option to import our automated tests out of Jenkins into TestRail, but I'm not sure if that is needed.
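If we do end up pushing Jenkins results into TestRail, a minimal sketch of the mapping step might look like the following.  This is just an illustration, not our actual setup: the `C1234_` naming convention for tying a JUnit test to a TestRail case ID is an assumption, as are the default TestRail status IDs (1 = Passed, 5 = Failed).  The resulting payloads would be posted to TestRail's `add_result_for_case` API endpoint.

```python
# Sketch: map Jenkins JUnit XML results to TestRail result payloads.
# Assumes each JUnit test name embeds a TestRail case ID, e.g. "C1001_homepage_loads",
# and TestRail's default status IDs (1 = Passed, 5 = Failed).
import re
import xml.etree.ElementTree as ET

STATUS_PASSED, STATUS_FAILED = 1, 5

def junit_to_testrail(junit_xml):
    """Return payloads suitable for TestRail's add_result_for_case endpoint."""
    payloads = []
    for case in ET.fromstring(junit_xml).iter("testcase"):
        match = re.match(r"C(\d+)_", case.get("name", ""))
        if not match:
            continue  # skip tests that don't name a TestRail case
        failed = case.find("failure") is not None or case.find("error") is not None
        payloads.append({
            "case_id": int(match.group(1)),
            "status_id": STATUS_FAILED if failed else STATUS_PASSED,
        })
    return payloads

sample = """
<testsuite tests="2">
  <testcase name="C1001_homepage_loads" time="0.4"/>
  <testcase name="C1002_checkout_total" time="1.2">
    <failure message="totals differ"/>
  </testcase>
</testsuite>
"""
results = junit_to_testrail(sample)
```

Something this small could run as a post-build step in Jenkins, which is part of why I'm not sure a full TestRail import integration is worth it yet.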

We have had really good throughput on the fixes since the release, but we have not had to report any work metrics, and I'm not sure I can continue to report nothing.  I'm going to try this approach for a while and revisit it after we get some releases under our belt.  After that, I'll figure out whether this approach works or whether we need to do something different to show what we did.

Ping me @todddeaton if you have a good way of showing test results when you have disparate scrum teams.

Friday, January 9, 2015

I'm Back

First of all, for those of you who were following me back in the day and wondered what happened to me, I'm sorry for falling off the face of the earth.  I started this blog because I was doing some cool things with tools for testing and development, and thought I had information people could use on how organizations manage and evaluate different tools.  In 2011, I went through a reorg and ended up bouncing around to various shops doing contract work.  I didn't think I had anything to discuss, so I just stopped posting and pretty much forgot about this blog.

Fast forward to 2015, and I've been a QA manager for a couple of years, testing web applications with various test management, automation and release management tools.  I was told I have a pretty good story to tell about testing and test tools, so I felt it was time to resurrect this blog.

The one thing I thought was missing from my blog was more of the operational view and a voice for how someone actually tests.  In my previous job I was managing a portfolio of test and development tools, but not really using them to get products out the door.  Now I'm more on the front lines and have more experience using the techniques and tools to get web products in front of millions of people.  It hasn't been an easy ride, but the journey has been fun and I hope people can learn something from it.

I have a couple of motivations for this other than just helping folks.  My resolution for this year is to write more, and our department is also tasked with getting our story out, so I want to use this forum to talk about what works for us and what our challenges are.  I'm open to hearing from anyone about topics to discuss, so feel free to reach out to me @todddeaton.  Some of the things I want to explore are agile methodologies, lighter testing tools, testing in the cloud, automation and CI, and branching and releases.

I'm looking forward to 2015 and hope you enjoy reading this as much as I will enjoy writing it.