Friday, April 10, 2015

More on test metrics

Last post I wrote a bit about test metrics for our scrums.  It was pretty light, but it started the conversation.  Since then, we have met a couple of times to continue that conversation and narrow down what we want to report.

The problem with reporting is that we have two "masters" for our reports:

  • Product - to ensure the product has been tested for each feature or deployment.
  • Management - to show how the organization is working and to have a measure of how well individuals are working.

The management reports are usually pretty easy.  Most groups use some kind of test management system, and pulling test cases and test runs by tester is something any good test management system can give you.  In an agile environment, we sometimes substitute tickets completed and sub-bugs opened, but whichever method we use, we have a consistent metric that all test engineers report in their scrums.
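
Just to show how little there is to it, here is a rough sketch of pulling those ticket counts straight from Jira's REST API.  This is only an illustration, not our actual report: the Jira URL, credentials, project key, sprint name and the "Done" status are all placeholders and will differ in any real setup.

```python
# Rough sketch: count tickets completed and bugs opened per tester for a sprint
# using the Jira REST API.  The URL, credentials, project key and sprint name
# below are placeholders, not a real configuration.
from collections import Counter
import requests

JIRA = "https://jira.example.com"        # placeholder Jira instance
AUTH = ("report.user", "secret")         # placeholder credentials

def search(jql, fields):
    """Return all issues matching a JQL query, following pagination."""
    issues, start = [], 0
    while True:
        resp = requests.get(f"{JIRA}/rest/api/2/search",
                            params={"jql": jql, "startAt": start,
                                    "maxResults": 100, "fields": fields},
                            auth=AUTH)
        resp.raise_for_status()
        page = resp.json()
        issues.extend(page["issues"])
        start += len(page["issues"])
        if start >= page["total"]:
            return issues

def count_by(issues, field):
    """Tally issues by the display name of a user field (assignee/reporter)."""
    return Counter((i["fields"].get(field) or {}).get("displayName", "Unassigned")
                   for i in issues)

done = search('project = WEB AND sprint = "Sprint 42" AND status = Done', "assignee")
bugs = search('project = WEB AND sprint = "Sprint 42" AND issuetype = Bug', "reporter")
print("Tickets completed:", count_by(done, "assignee"))
print("Bugs opened:      ", count_by(bugs, "reporter"))
```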

The product reports are a little harder, depending on how you work.  For a big bang project, the daily/interval reports are easy, but for agile and feature deployments it was a struggle for us.  The issue we ran into is that we are now doing small features that don't really lend themselves to the full pass/fail test case execution reports we had in a big bang project.

A sample of one of our big bang project reports looked something like this:

Testing Summary
Testing Type                              | # Planned | # Actual | Passed | Blocked | Retest | Failed |  N/A | No Run
Ads/Meetrics                              |       264 |      264 |    211 |       0 |      8 |      8 |   37 |      0
Alerts                                    |       716 |      716 |    652 |       0 |      0 |     48 |   16 |      0
Articles/TV/Hurricane/Tornado             |       228 |      228 |    214 |       0 |      0 |      2 |   12 |      0
Commuter Forecast                         |       371 |      371 |    342 |       0 |      0 |      0 |   29 |      0
Haircast/Farmer/Fishing/Flu               |      2806 |     2806 |   2661 |       0 |      0 |      0 |  145 |      0
Header/Footer/Navigation/Search/Location  |      1290 |     1290 |   1232 |       1 |      4 |      6 |   47 |      0
Homepage                                  |      1850 |     1850 |   1482 |       0 |      9 |      9 |  350 |      0
Maps                                      |      1480 |     1480 |   1436 |       0 |      1 |      6 |   37 |      0
Social Share                              |       574 |      574 |    531 |       0 |      0 |      1 |   42 |      0
Titan                                     |       652 |      652 |    652 |       0 |      0 |      0 |    0 |      0
User Profile/UGC                          |      1480 |     1480 |   1377 |       0 |      8 |     37 |   58 |      0
Video                                     |      2384 |     2384 |   1586 |       0 |      0 |    106 |  692 |      0
Today/Hourly/Weekend/Day                  |      2006 |     2006 |   2006 |       0 |      0 |      0 |    0 |      0
Pages                                     |       596 |      596 |    596 |       0 |      0 |      0 |    0 |      0
Video - 10/22 AMP Build                   |       816 |      816 |    753 |       0 |      0 |      1 |   62 |      0
Automated Test                            |      2399 |     2399 |   2378 |       0 |      0 |     21 |    0 |      0
Total                                     |     19912 |    19912 |  18109 |       1 |     30 |    245 | 1527 |      0
% Complete                                |           |   100.0% |  90.9% |    0.0% |   0.2% |   1.2% | 7.7% |   0.0%
Failure rate goal is <5%
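
The summary rows are just simple arithmetic over the column totals.  For anyone rebuilding this in a script, here is a quick sketch of the math using the totals from the table above; the <5% goal applies to the Failed column:

```python
# Quick sketch of the summary math, using the totals from the table above.
# Each percentage is the column total divided by the planned total, and the
# release goal is a failure rate under 5%.
planned = 19912
totals = {"Passed": 18109, "Blocked": 1, "Retest": 30,
          "Failed": 245, "N/A": 1527, "No Run": 0}

for column, count in totals.items():
    print(f"{column:>7}: {count / planned:6.1%}")

failure_rate = totals["Failed"] / planned          # works out to about 1.2%
print("Failure goal met" if failure_rate < 0.05 else "Failure goal missed")
```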

This is a really good report for management.  They get a feel for the number of tests being run and what the failure rate is.  The tough part is that this doesn't lend itself to agile.

We have started using milestone reports for our major features in our scrums.  The milestone reports are based on the milestone feature in TestRail.

[Screenshot: TestRail milestone report]

This is similar to the above testing summary report, but a bit easier to generate because it comes from our test management tool.  My folks are not big fans of having to write test cases and document execution during the sprint, but they see the value of it when product or management recognize the work and appreciate the status.

One big issue is that the tool doesn't show the automated test results.  We run automated tests on our test and production environments at least once a day and display the results on a dashboard that shows our current status on one of the TVs in our area.

[Screenshot: automated test results dashboard]

At first the dashboard was kind of hard to read, but more people are reviewing the results now and see the value in keeping our failures under 1% prior to release.  Combined with a milestone report, product owners can see if their features are ready to go.
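
For anyone curious how a dashboard like this comes together, the core of it is just polling Jenkins for the latest JUnit counts and flagging anything at or above the 1% line.  Here is a rough sketch; the Jenkins URL and job names are made up, not our actual jobs:

```python
# Rough sketch of the check behind a results dashboard: pull the latest JUnit
# counts from Jenkins and flag any job whose failure rate is 1% or higher.
# The Jenkins URL and job names are placeholders.
import requests

JENKINS = "https://jenkins.example.com"
JOBS = ["web-regression-test", "web-regression-prod"]   # placeholder job names

for job in JOBS:
    url = f"{JENKINS}/job/{job}/lastCompletedBuild/testReport/api/json"
    report = requests.get(url).json()
    failed = report["failCount"]
    total = failed + report["passCount"] + report["skipCount"]
    rate = failed / total if total else 0.0
    status = "OK to release" if rate < 0.01 else "hold the release"
    print(f"{job}: {failed}/{total} failed ({rate:.2%}) -> {status}")
```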

Reporting results and work is still a struggle throughout our organization, but we are trying to do better and provide people with useful information.  I am interested to know what people are using for their reports.  If you have a report that works well within your agile scrums, please let me know.  I'm always looking for a better way of doing something.

Friday, January 16, 2015

Test Metrics when using Scrums

I had some interesting discussions this week about testing metrics for my team.  We are not doing anything out of the ordinary compared to other agile development shops, so like others, I have management asking me to provide metrics on the work my group is doing and to explain how I know testing is done.

We are using Jira to track stories, tasks and defects, and TestRail to track our test cases, sets and milestones.  I'm not saying either one of those is better than any other tool, but they work for us.  We completed a big bang redesign of our website late last year and these tools worked really well for us.  Specifically, we were able to track our manual tests in TestRail through monthly milestones and provide our management with a daily pass/fail/block/na/no run status, which was really helpful in keeping them informed and cutting down on questions.

But now that the big-bang release is done, we are back to our scrums.  We have 6 scrum teams with 1 QA resource for each team.  We have the ability to do daily releases, so how are we going to track how many test cases we write and then run, while still maintaining rapid releases?

So far, we have settled on using TestRail milestones, sets and test cases for new features and functions.  For bug fixes we will put our test results on the Jira ticket, and we will log a dummy test to capture our automated results.  We have the option to import our automated test results from Jenkins into TestRail, but I'm not sure that is needed.
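
If we do decide to push the Jenkins results in, TestRail's v2 API makes the plumbing pretty small.  Here is a sketch of what that could look like; the TestRail URL, the project and suite IDs, and the test-name-to-case-ID mapping are placeholders, not our real setup:

```python
# Sketch of an optional Jenkins -> TestRail import: read a JUnit XML report and
# push pass/fail results into a TestRail run through its v2 API.  The TestRail
# URL, project/suite IDs and the test-name-to-case-ID mapping are placeholders.
import xml.etree.ElementTree as ET
import requests

TESTRAIL = "https://example.testrail.io"
AUTH = ("user@example.com", "api-key")        # TestRail user + API key
CASE_IDS = {"test_homepage_loads": 101,       # placeholder mapping of JUnit
            "test_alerts_banner": 102}        # test names to TestRail case IDs

def api_post(endpoint, payload):
    resp = requests.post(f"{TESTRAIL}/index.php?/api/v2/{endpoint}",
                         json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()

# Create a run covering just the automated cases (project 1, suite 2 assumed).
run = api_post("add_run/1", {"suite_id": 2, "name": "Nightly automation",
                             "include_all": False,
                             "case_ids": list(CASE_IDS.values())})

# Map each JUnit <testcase> to passed (1) or failed (5) and post in one call.
results = []
for case in ET.parse("junit-results.xml").getroot().iter("testcase"):
    case_id = CASE_IDS.get(case.get("name"))
    if case_id is None:
        continue
    broken = case.find("failure") is not None or case.find("error") is not None
    results.append({"case_id": case_id, "status_id": 5 if broken else 1})

api_post(f"add_results_for_cases/{run['id']}", {"results": results})
```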

We have had really good throughput on fixes since the release, but we have not had to report any work metrics, so I'm not sure I can continue to not report anything.  I'm going to try this approach for a while and then revisit it after we get some releases under our belt.  Once I do, I will figure out if this approach works or if we need to do something different to show what we did.

Ping me @todddeaton if you have a good way of showing test results when you have disparate scrums.

Friday, January 9, 2015

I'm Back


First of all, for those of you who were following me back in the day, and wondered what happened to me, I'm sorry for falling off the face of the earth.  I started this blog because I was doing some cool things with tools for testing and development, and thought I had information people could use on how organizations manage and evaluate different tools. In 2011, I went through a reorg and ended up bouncing around to various shops doing contract work.  I didn't think I had anything to discuss, so I just stopped posting and pretty much forgot about this blog.

Fast forward to 2015 and I've been a QA manager for a couple of years testing web applications with various test management, automation and release management tools.  I was told I have a pretty good story to tell about testing and test tools, so I felt it was time to resurrect this blog.  

The one thing I thought was missing from my blog was more of the operational view and a voice for how someone actually tests.  In my previous job I was managing a portfolio of test and development tools, but not really using them to get products out the door.  Now I'm more on the front lines and have more experience using the techniques and tools to get web products in front of millions of people.  It hasn't been an easy ride, but the journey has been fun and I hope people can learn something from it.

I have a couple of motivations for this other than just helping folks.  My resolution for this year is to write more, and our department is also tasked with getting our story out, so I want to use this forum to talk about what works for us and about our challenges.  I'm open to hearing from anyone about topics to discuss, so feel free to reach out to me @todddeaton.  Some of the things I want to explore are agile methodologies, lighter-weight testing tools, testing in the cloud, automation and CI, and branching and releases.

I'm looking forward to 2015 and hope you enjoy reading this as much as I will enjoy writing it.

Thursday, March 17, 2011

CI Tools

After the news about Oracle messing with Hudson, I had my folks do some research on continuous integration (CI) tools to determine what we should offer as a corporate standard.  One of my architects, Aparna Annapareddy, found a good comparison site, picked a few tools for us to consider, and gave a short pro/con take on each.  I thought I would share this in case anyone else is in the same place we are.

All of these tools support multiple platforms, multiple languages (including Java and C#), distributed builds and parallel builds.

1. AntHill Professional ALA:
Features: IDE plug-ins, full life cycle traceability, more than 60 out-of-the-box integrations with a variety of tools including QC, a robust API to write our own integrations (REST, SOAP), scalable architecture, support for multi-environment builds, metrics and build reports, pre-tested commits, a build artifact repository for sophisticated auditing, and professional support.
Cons: Not open source (I cannot find a direct quote for licensing); the user interface is not very friendly compared to Hudson.

2. JetBrains TeamCity:
Features: Tight IDE integrations, easy setup, 50 ready-to-use plug-ins, sophisticated notifications, advanced build metrics and reports, audits, scalable architecture, a plug-in API, features for troubleshooting memory issues and abnormal behavior in code, pre-tested commits, real-time build monitoring, professional support, and not very pricey.
Cons: QC integration is not available yet, not many out-of-the-box integrations are available, and it is not free.

3. Electric Cloud:
Features: IDE integration, more than 80 out-of-the-box integrations with a variety of tools including QC, Agile support, an API to write our own integrations, metrics and build reports, pre-tested commits, scalable architecture, support for multi-environment builds, and different levels of professional support.
Cons: Not open source (price information is not available).

4. OpenMake Mojo and Meister:
Mojo is free and Meister is a paid tool.
Features: IDE plug-ins, out-of-the-box integrations, QC integration, a SOAP API for creating integrations, build audits, impact analysis, build metrics and reports, real-time build monitoring, the best support for both Agile and waterfall, pre-commit builds, easy install and setup, a free tool (Mojo), and support.

5. Atlassian Bamboo:
Features: IDE plug-ins, IDE notifications, a few out-of-the-box integrations, an API to create our own customizations, and metrics and reporting.
Cons: Not free; support is through partners.

6. ParaBuild (Team Edition is free, Cluster Edition is not free):
Features: Easy setup, integrations with a few SCMs and test tools, a SOAP API, trending reports, build archives, and professional support; the Team Edition is free and supports up to 50 users and 50 build agents, while the Cluster Edition is more scalable.
Cons: IDE integrations are not yet available; the Cluster Edition is $375 per build machine.

Thursday, February 17, 2011

When Proprietary eats Open Source (minus a fork)

Interesting news recently: Oracle laid claim to the Hudson name and demanded control of the Hudson CI tool project.  Hudson founder Kohsuke Kawaguchi decided he wasn't going to take this lying down and put a fork in the project, creating Jenkins.  We did some poking around, and the new Hudson site now includes a Terms of Use (granted, it is for Java.Net) and a Legal link (that goes to the Oracle legal disclaimer site).  The Jenkins site looks remarkably like the old Hudson site.

Why is this important?  Well, this situation has spurred a litany of questions in my group and is making us take a serious look at the tools we use to make software.

Being part of a large company, we tend to purchase enterprise-worthy tools which provide what I call a "throat to choke" in case something goes wrong.  We like to deal with vendors who have established support organizations and dedicated customer representatives, so if (actually... when) we have problems with their software, we can grab a hostage/customer rep and make the vendor fix our issues, while the hostage/customer rep buys us lunch/dinner and says compassionate things about our problems.

But sometimes, we go with open source solutions, like Hudson, and build plug-ins or adapters to fit our other tools or processes.  Depending on the situation, we will either make those additions available to the community, or we will keep them for our internal use.

So when a proprietary company (i.e. Oracle) eats an Open Source project (i.e. Hudson), it gives us pause because it introduces unknowns, which may or may not affect our work.  Things like:

  • Why would they want control over a project?  
  • Will they charge for it in the future?
  • If we have add-ins we didn't share, do we have to give them to the proprietary company?
  • What's up with the legal disclaimer and how does it affect us?
I'm not sure where we are going to go with this, but I sent a note to our legal group asking for an opinion.  We will also gather our groups who use Hudson to see what they want to do.  Either way, I didn't see this coming, but now that I know what to look for, I'm going to check our other tools to make sure I have a plan if this happens again.

Wednesday, January 26, 2011

HP QC10 and ALM11 on Windows 7

We are going through our Windows 7 beta program and I was selected as an early adopter.  Not sure why they selected me, but it was probably because I know how to write up a defect with somewhat accurate steps to reproduce.  I have a Dell D630 2GHz Dual Core with 2GB of RAM.  That's a kind of light machine from everything I've read about Windows 7, but it should still be adequate.

I loaded the corporate image of Windows 7 with Internet Explorer 8 and tested some of the basic apps I use every day, like Outlook, Office Communicator (OC), IE, Firefox, Ziepod (I'm not a big iTunes fan and all I listen to at work are podcasts), Yammer, Clarity, Hyperion and Business Availability Center (BAC).  Hyperion and BAC were giving me fits with IE8, so I downloaded the IE9 beta.  Still no luck getting Hyperion or BAC to work (we are only on BAC 7.5, so that is probably the reason), but the rest of my apps that use IE work fine with IE9, so I got a virtual desktop for Hyperion and BAC and went on with life.

I then accessed QC10 through IE9, and before it started the initial load, it gave me an error message asking me to install a .NET component, with a link.  I followed the link, installed the component, and then went on to installing QC.  I'm not sure the install was any longer than with XP, but it took enough time to notice.

Moving through QC10 was relatively painless.  I created requirements, tests, test folders, releases and test sets without any issues.  I ran some tests individually and through test sets without any issues as well.  I didn't run any automated tests, but I'll do that at a later date.  I ran a couple of Excel reports; they ran to completion and I was able to save them to my desktop.  I didn't go into the Dashboard because we don't have anything in there outside of what is delivered by HP, but I will need to look at that later as well.

While running QC10 on IE9, I also had Outlook 2010, OC 2007, Yammer, Firefox 3.6, Chrome 8.0.x and Ziepod running at the same time.  I really didn't have any issues running QC10 and swapping between other apps.  QC10 took about 100MB to 125MB of RAM by itself, with about 80% RAM use across all my apps (high, but still functional).

The reason I noticed the RAM use is that when I then tried ALM11, with the same apps open, I started to get low memory errors pretty much right away.  I reviewed the memory and ALM11 was taking about 175MB to 200MB.  While that is a relatively significant increase for just the app, overall it should have been fine, until I noticed I was now pegging about 90 - 95% memory use, with spikes topping out at 100%.

This got me to review the features in ALM11, and I noticed HP tried to improve speed with a feature where, if you don't log off, you come back to where you left off without reloading the app.  This tells me HP is like everyone else and sees RAM as relatively cheap, so they are engineering their app to put more in RAM.  My 2GB of RAM no longer cuts it with Windows 7, so I got another GB (I would have added 2GB, but adding a second stick to the D630 is a pain) and decided we may have to use a virtual desktop if we want to do extensive testing work in ALM11.

We will probably move the company to Windows 7 before we move everyone to ALM11, so we will need to make sure our QC/ALM users know they will need to beef up their machines or go to virtual desktops when they get Windows 7.  Overall, QC10/ALM11 works pretty well with Windows 7, just be prepared to check your machines so you don't run into any performance issues.

Friday, January 7, 2011

Thoughts on HP ALM 11.0

I was asked by HP to do a couple of press interviews in conjunction with their release of HP ALM 11.0.  I had a good time doing them, but was hoping to expand a bit on what we found valuable in ALM 11.0 beyond the couple of sentences included in the print articles to help bolster their story.

Luckily, I got to do a follow-up podcast with Dana Gardner talking about how we believe ALM 11.0 will work within our company and the benefits of ALM tools in general.

Kind of weird hearing myself talk, but I think it came out pretty well.  Hope you find it informative and if you have any questions or comments, feel free to shoot them to me.