Friday, April 10, 2015

More on test metrics

Last post I wrote a bit about test metrics for our scrums.  It was pretty light, but it started the conversation.  Since then, we have met a couple of times to continue it and narrow down what we want to report.

The problem with reporting is we have 2 "masters" for our reports:

  • Product - to ensure the product has been tested for each feature or deployment.
  • Management - to show how the organization is working and to have a measure of how well individuals are performing.

The management reports are usually pretty easy.  Most groups use some kind of test management system, and any good one can pull test cases and test runs by tester.  In an agile environment, we sometimes substitute tickets completed and sub-bugs opened, but whatever the method, we have a consistent metric that all test engineers use in their scrums.
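Most test management tools expose this through an API.  As an illustration (not our exact setup), here's a minimal sketch of pulling per-tester result counts out of TestRail's REST API; the instance URL, credentials, and run ID are placeholders:

```python
# A minimal sketch of pulling per-tester result counts from TestRail's
# REST API (v2). The instance URL, credentials, and run ID below are
# placeholders, not our real setup.
from collections import Counter

import requests

BASE = "https://example.testrail.io/index.php?/api/v2"  # hypothetical instance
AUTH = ("user@example.com", "api-key")                   # hypothetical credentials

def results_for_run(run_id):
    resp = requests.get(f"{BASE}/get_results_for_run/{run_id}", auth=AUTH)
    resp.raise_for_status()
    data = resp.json()
    # Newer TestRail versions paginate and wrap the list; older ones return it bare.
    return data["results"] if isinstance(data, dict) else data

# Default TestRail status IDs: 1=Passed, 2=Blocked, 4=Retest, 5=Failed.
counts = Counter()
for result in results_for_run(42):           # 42 is a stand-in run ID
    if result["status_id"] is not None:      # skip comment-only entries
        counts[(result["created_by"], result["status_id"])] += 1

for (user_id, status_id), n in sorted(counts.items()):
    print(f"user {user_id}, status {status_id}: {n}")
```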

The product reports are a little harder, depending on how you work.  For a big bang project, the daily/interval reports are easy, but for agile feature deployments it has been a struggle for us.  The issue we ran into is that we are now shipping small features that don't really lend themselves to the full pass/fail test case execution reports we had on a big bang project.

A sample of one of our big bang project reports looked something like this:

Testing Summary
| Testing Type | # Planned | # Actual | Passed | Blocked | Retest | Failed | N/A | No Run |
| --- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| Ads/Meetrics | 264 | 264 | 211 | 0 | 8 | 8 | 37 | 0 |
| Alerts | 716 | 716 | 652 | 0 | 0 | 48 | 16 | 0 |
| Articles/TV/Hurricane/Tornado | 228 | 228 | 214 | 0 | 0 | 2 | 12 | 0 |
| Commuter Forecast | 371 | 371 | 342 | 0 | 0 | 0 | 29 | 0 |
| Haircast/Farmer/Fishing/Flu | 2806 | 2806 | 2661 | 0 | 0 | 0 | 145 | 0 |
| Header/Footer/Navigation/Search/Location | 1290 | 1290 | 1232 | 1 | 4 | 6 | 47 | 0 |
| Homepage | 1850 | 1850 | 1482 | 0 | 9 | 9 | 350 | 0 |
| Maps | 1480 | 1480 | 1436 | 0 | 1 | 6 | 37 | 0 |
| Social Share | 574 | 574 | 531 | 0 | 0 | 1 | 42 | 0 |
| Titan | 652 | 652 | 652 | 0 | 0 | 0 | 0 | 0 |
| User Profile/UGC | 1480 | 1480 | 1377 | 0 | 8 | 37 | 58 | 0 |
| Video | 2384 | 2384 | 1586 | 0 | 0 | 106 | 692 | 0 |
| Today/Hourly/Weekend/Day | 2006 | 2006 | 2006 | 0 | 0 | 0 | 0 | 0 |
| Pages | 596 | 596 | 596 | 0 | 0 | 0 | 0 | 0 |
| Video - 10/22 AMP Build | 816 | 816 | 753 | 0 | 0 | 1 | 62 | 0 |
| Automated Test | 2399 | 2399 | 2378 | 0 | 0 | 21 | 0 | 0 |
| Total | 19912 | 19912 | 18109 | 1 | 30 | 245 | 1527 | 0 |
| % Complete | | 100.0% | 90.9% | 0.0% | 0.2% | 1.2% | 7.7% | 0.0% |
Failure rate goal is <5%
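
For anyone checking the math, the "% Complete" row is just each status count divided by the planned total:

```python
# The "% Complete" row above, reproduced from the table's totals.
planned = 19912
passed, blocked, retest, failed, na = 18109, 1, 30, 245, 1527

print(f"passed: {passed / planned:.1%}")   # 90.9%
print(f"failed: {failed / planned:.1%}")   # 1.2%, under the <5% goal
print(f"n/a:    {na / planned:.1%}")       # 7.7%
```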

This is a really good report for management.  They get a feel for the number of tests being run and what the failure rate is.  The tough part is this doesn't lend itself to agile.

We have started using milestone reports for our major features in our scrums.  The milestone reports are based on the milestone feature in TestRail.

[Screenshot: TestRail milestone report]

This is similar to the above testing summary report, but a bit easier to generate because it comes from our test management tool.  My folks are not big fans of having to write test cases and document execution during the sprint, but they see the value of it when product or management recognize the work and appreciate the status.
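
If you ever want the same rollup outside the tool, the numbers behind a milestone report can be pulled from TestRail's API, since each test run already carries per-status counts.  A rough sketch, with a placeholder instance and IDs:

```python
# A rough sketch of rolling up a milestone the way TestRail's milestone
# report does. The instance URL, credentials, and IDs are placeholders.
import requests

BASE = "https://example.testrail.io/index.php?/api/v2"  # hypothetical instance
AUTH = ("user@example.com", "api-key")                   # hypothetical credentials

def get(path):
    resp = requests.get(f"{BASE}/{path}", auth=AUTH)
    resp.raise_for_status()
    return resp.json()

project_id, milestone_id = 1, 7  # stand-in IDs
data = get(f"get_runs/{project_id}&milestone_id={milestone_id}")
runs = data["runs"] if isinstance(data, dict) else data  # newer versions paginate

# Each run object already carries per-status counts.
totals = {s: 0 for s in ("passed", "blocked", "retest", "failed", "untested")}
for run in runs:
    for status in totals:
        totals[status] += run[f"{status}_count"]

print(totals)
```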

One big issue is the tool doesn't show the automated test results.  We run automated tests against our test and production environments at least once a day and display the results on a dashboard, shown on one of the TVs in our area, so everyone can see our current status.

[Screenshot: automated test results dashboard]
At first the dashboard was kind of hard to read, but more people are reviewing the results and seeing the value in keeping our failures under 1% prior to release.  Combined with a milestone report, product owners can see if their features are ready to go.
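
The 1% bar itself is easy to script.  Here's a minimal sketch, assuming the automated runs publish a JUnit-style XML file with aggregate counts on the root element; the path is a placeholder:

```python
# A minimal sketch of the check behind the dashboard number: parse a
# JUnit-style results file and flag failure rates at or above 1%.
# The file path is a placeholder, and this assumes aggregate counts
# sit on the root element (true for a single <testsuite> file).
import xml.etree.ElementTree as ET

root = ET.parse("results/junit.xml").getroot()  # hypothetical path
tests = int(root.get("tests", 0))
failed = int(root.get("failures", 0)) + int(root.get("errors", 0))

rate = failed / tests if tests else 0.0
print(f"{failed}/{tests} failed ({rate:.1%})")
if rate >= 0.01:
    raise SystemExit("failure rate >= 1% -- hold the release")
```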

Reporting on results and work is still a struggle throughout our organization, but we are trying to do better and give people useful information.  I am interested to know what people are using for their reports.  If you have a report that works well within your agile scrums, please let me know.  I'm always looking for a better way of doing something.

Friday, January 16, 2015

Test Metrics when using Scrums

I had some interesting discussions this week about testing metrics for my team.  We are not doing anything out of the ordinary compared to other agile development shops, so like others, I have management asking me to provide metrics on the work my group is doing and how I know testing is done.

We are using Jira to track stories, tasks and defects, and TestRail to track our test cases, sets and milestones.  I'm not saying either one of those is better than any other tool, but they work for us.  We completed a big bang redesign of our website late last year and these tools worked really well for us.  Specifically, we were able to track our manual tests in TestRail through monthly milestones and provide our management with a daily pass/fail/block/na/no run status, which kept them informed and cut down on the questions.

But now that the big-bang release is done, we are back to our scrums.  We have 6 scrum teams with 1 QA resource for each team.  We have the ability to do daily releases, so how are we going to track how many test cases we write and then run, while still maintaining rapid releases?

So far, we have settled on using TestRail milestones, sets and test cases for new features and functions.  For bug fixes we will put our test results on the Jira ticket, and we will record a dummy test to capture our automated results.  We have the option to import our automated test results from Jenkins into TestRail, but I'm not sure if that is needed.
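
For reference, if we do end up pushing the Jenkins results into TestRail, the relevant API call is add_result_for_case.  A minimal sketch with placeholder credentials and IDs:

```python
# A minimal sketch of pushing one automated result into TestRail via
# add_result_for_case. The instance URL, credentials, and IDs are
# placeholders.
import requests

BASE = "https://example.testrail.io/index.php?/api/v2"  # hypothetical instance
AUTH = ("user@example.com", "api-key")                   # hypothetical credentials

def report(run_id, case_id, passed, comment=""):
    # Default TestRail status IDs: 1 = Passed, 5 = Failed.
    payload = {"status_id": 1 if passed else 5, "comment": comment}
    resp = requests.post(
        f"{BASE}/add_result_for_case/{run_id}/{case_id}",
        json=payload,
        auth=AUTH,
    )
    resp.raise_for_status()

report(run_id=42, case_id=1001, passed=True, comment="Jenkins build #123")
```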

We have had really good throughput on the fixes since the release, but we have not had to report any work metrics, so I'm not sure I can keep reporting nothing.  I'm going to try this approach for a while and then revisit it after we get some releases under our belt.  Once I do, I will figure out if this approach works or if we need to do something different to show what we did.

Ping me @todddeaton if you have a good way of showing test results when you have disparate scrums.

Friday, January 9, 2015

I'm Back


First of all, for those of you who were following me back in the day, and wondered what happened to me, I'm sorry for falling off the face of the earth.  I started this blog because I was doing some cool things with tools for testing and development, and thought I had information people could use on how organizations manage and evaluate different tools. In 2011, I went through a reorg and ended up bouncing around to various shops doing contract work.  I didn't think I had anything to discuss, so I just stopped posting and pretty much forgot about this blog.

Fast forward to 2015 and I've been a QA manager for a couple of years testing web applications with various test management, automation and release management tools.  I was told I have a pretty good story to tell about testing and test tools, so I felt it was time to resurrect this blog.  

The one thing I thought was missing from my blog was the operational view and a voice for how someone actually tests. In my previous job I was managing a portfolio of test and development tools, but not really using them to get products out the door. Now I'm more on the front lines and have more experience using the techniques and tools to get web products in front of millions of people. It hasn't been an easy ride, but the journey has been fun and I hope people can learn something from it.

I have a couple of motivations for this other than just helping folks.  My resolution for this year is to write more, and our department is also tasked with getting our story out, so I want to use this forum to talk about what works for us and our challenges.  I'm open to hearing from anyone about topics to discuss, so feel free to reach out to me @todddeaton.  Some of the things I want to explore are agile methodologies, lighter testing tools, testing in the cloud, automation and CI, and branching and releases.

I'm looking forward to 2015 and hope you enjoy reading this as much as I will enjoy writing it.