Making testing visible in the Tracker workflow


As a feature story progresses through the Tracker workflow, a lot of testing activity is also underway. Team members are collaborating to turn examples of desired behaviors into business-facing tests that guide development. Testers are performing manual exploratory testing on stories as they are delivered. Performance or security testing may also be happening at various points in the development process.

A testing workflow?

To keep things simple, Tracker’s states are limited to Not Started, Started, Finished, Delivered, Accepted and Rejected. Only the “Accepted” and “Rejected” states seem directly related to testing. Testing activities such as specification by example, acceptance testing, exploratory testing, load testing, and end-to-end testing aren’t reflected in the Tracker workflow, but they’re going on nevertheless. Testers, coders, product owners and other team members continually talk about how a feature should work, and what to verify before accepting a story as “done”. But details can still be overlooked or lost. If stories are rejected multiple times because of missed or misunderstood requirements, or if problems slip by and aren’t discovered until after a production release, testing activities need more attention.

We’re working on enhancing collaboration and communication in Tracker, with increased flexibility that will help with tracking testing activities. Meanwhile, how can Tracker users follow testing along with other development activities? It would be helpful to have a place to specify test cases, note plans for executing different types of tests, and make notes about what was tested. Accomplishing this requires a bit of creativity, but it’s possible to keep testing visible in the current Tracker workflow. Here are some ways we do this on our own Pivotal Tracker team.

Testable stories

First of all, we work hard to slice and dice our features into small stories that are still testable. We read the stories in the backlog to make sure we understand what each one should deliver, and how it can be tested. If I have a question about an upcoming story when we’re not in a planning meeting, I note it in a task or comment to make sure we talk about it. Iteration planning meetings are a good place for the team to start discussing how each story will be tested. Some teams get together with their business experts to help write the stories with this in mind.

We make sure we know how we’ll test all the stories in the upcoming iteration. There are a couple of different ways to get enough of this information into the story.

Using tasks and comments

Test cases and testing notes can be added to a feature story as tasks. They’re easy to see in the story, and can be marked as completed when done. We often include links to additional details documented in a wiki page, or to automatable functional tests used for acceptance test-driven development (ATDD). As teammate Joanne Webb points out, sharing test cases before implementing a story clarifies requirements, and gives developers clues about problems to avoid introducing. In our experience, this shortens the accept/reject cycle for stories.
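
If your team scripts against Tracker, test-case tasks can also be created programmatically. Here’s a minimal sketch using Python’s requests library against Tracker’s v5 REST API; the token, project ID, story ID, and test case text below are all placeholder values.

```python
import requests

# A minimal sketch of adding test cases to a story as tasks, using
# Tracker's v5 REST API. The token, project ID, story ID, and test
# case text are placeholders -- substitute your own values.
API = "https://www.pivotaltracker.com/services/v5"
HEADERS = {"X-TrackerToken": "your-api-token"}

def add_test_case_tasks(project_id, story_id, test_cases):
    """Create one task on the story for each test case."""
    for case in test_cases:
        resp = requests.post(
            f"{API}/projects/{project_id}/stories/{story_id}/tasks",
            headers=HEADERS,
            json={"description": case},
        )
        resp.raise_for_status()

add_test_case_tasks(
    project_id=99,
    story_id=555,
    test_cases=[
        "Verify login fails cleanly with an expired password",
        "Verify the error message links to the password reset page",
    ],
)
```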

Comments are another good place to add information about requirements and test cases, especially since you can also attach files with additional information, screenshots, pictures of diagrams, and mockups. And if team members have questions they can’t get answered in person right away, comments provide a place to record a written conversation, and email notifications can alert the story owner and requester so they can answer questions.

Visibility and workflow through labels

We can find ways to record conversations about requirements, but how do we incorporate a testing workflow into the larger development workflow for a Tracker story?

Labels are a handy way to keep stories progressing through all coding and testing tasks. In our Tracker project, automating functional tests is part of development. The story isn’t marked finished until both unit tests and functional tests are checked in, along with the production code. Once a feature story is delivered, someone (usually a tester or the product owner, but it could be a programmer who didn’t work on coding the feature) picks up the story to do manual exploratory testing.

To make this visible, we put a label on the story with our name, for example “lisa_testing”. Not only do we conduct exploratory testing, we verify that there are adequate automated regression tests for the story, and that necessary documentation is present and accurate. Once we’re done testing a feature story, we put a brief description of what we tested in a comment, remove the “testing” label, and add another label to show the story is ready for the product owner to verify. This might be “lisa_done_testing” or “ready_for_dan”. Sometimes the product owner gets to the story first, and uses similar labels to show he’s in the process of testing or finished with his own acceptance testing. Once all involved parties are happy with the story, we can accept it. Using labels is a bit of extra overhead, but it gives us the flexibility to continually improve our acceptance process.
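
This hand-off can be scripted as well. Below is a rough sketch of the step just described, recording the testing notes as a comment and then swapping labels. It uses the same placeholder token and IDs as the earlier sketch, and it assumes the v5 stories endpoint accepts labels as a plain array of name strings on update.

```python
import requests

# A rough sketch of the tester-to-product-owner hand-off using
# Tracker's v5 REST API. Token, IDs, and label names are placeholders.
API = "https://www.pivotaltracker.com/services/v5"
HEADERS = {"X-TrackerToken": "your-api-token"}

def hand_off_story(project_id, story_id, notes, done_label, next_label):
    """Record testing notes, then swap the tester's label for a 'ready' one."""
    base = f"{API}/projects/{project_id}/stories/{story_id}"
    # Leave a comment describing what was tested.
    requests.post(f"{base}/comments", headers=HEADERS,
                  json={"text": notes}).raise_for_status()
    # Replace the tester's label with the next label in the workflow.
    story = requests.get(base, headers=HEADERS).json()
    labels = [lbl["name"] for lbl in story.get("labels", [])
              if lbl["name"] != done_label]
    labels.append(next_label)
    # Assumes labels may be sent as an array of name strings on update.
    requests.put(base, headers=HEADERS,
                 json={"labels": labels}).raise_for_status()

hand_off_story(99, 555,
               notes="Explored edge cases around expired sessions; all good.",
               done_label="lisa_testing", next_label="ready_for_dan")
```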

Putting together a bigger picture

Some testing activities extend beyond one story, especially since we usually keep our stories small. It’s possible to write a feature story or chore for the testing activity. For example, you might write a story for end-to-end testing of an epic that consists of many stories and spans more than one iteration. Writing a chore for performance testing, security testing, or usability testing may also be useful.

However, as my teammate Marlena Compton points out, there are advantages to making sure testing is integrated with the feature stories themselves. If a story remains in the delivered state for several days while we complete system testing related to it, the labels we put on the story convey the testing activities underway. Completing all testing before accepting a story helps ensure that stories meet customer expectations the first time. As Elisabeth Hendrickson says, testing isn’t a phase; it’s an integral part of software development, along with coding and other work. Having our Tracker stories reflect that helps keep us on target.

As we do exploratory testing on a feature story, we might discover issues or missing requirements that don’t make the story unshippable, but may need to be addressed later. We can create separate feature stories, bugs or chores for those, and connect them back to the original story with links or labels.
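
Those follow-up stories can be filed programmatically too. The sketch below creates a bug tagged with a label that traces back to the original story; all IDs, names, and the label format are illustrative rather than prescribed, and changing story_type to “chore” would cover the performance or security testing chores mentioned earlier.

```python
import requests

# A sketch of filing a follow-up bug found during exploratory testing,
# labeled so it can be traced back to the original story. IDs, names,
# and the "follow-up-<id>" label convention are illustrative only.
API = "https://www.pivotaltracker.com/services/v5"
HEADERS = {"X-TrackerToken": "your-api-token"}

def file_follow_up_bug(project_id, original_story_id, name, details):
    """Create a bug story carrying a label that points at its origin."""
    resp = requests.post(
        f"{API}/projects/{project_id}/stories",
        headers=HEADERS,
        json={
            "name": name,
            "story_type": "bug",  # "chore" would work for cross-cutting testing work
            "description": f"{details}\n\nFound while testing story #{original_story_id}.",
            "labels": [f"follow-up-{original_story_id}"],
        },
    )
    resp.raise_for_status()
    return resp.json()["id"]

bug_id = file_follow_up_bug(
    99, 555,
    name="Validation message overlaps the submit button on narrow screens",
    details="Cosmetic only; the form still submits correctly.",
)
```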

We track some testing information outside of Tracker, for example on our team wiki. However, we find that tracking testing activities in Tracker helps ensure they get done in a timely manner, and keeping tests visible makes it more likely that stories meet customer expectations the first time they’re delivered. Integrating testing activities with coding tasks keeps our testing efforts aligned with other development efforts.

While we work to make Tracker more flexible for teams and testers, we hope these ideas help you make your testing more visible in the Tracker workflow right now. Check out our blog post http://pivotallabs.com/2013-update-new-features-new-api-new-design/ to get an overview of some of the plans for Tracker this year, and come back periodically for the latest news. We’d also love to hear how your team incorporates testing in agile development. Please leave a comment, or write to us at tracker@pivotallabs.com.

16 Comments

  1. Steven Vore says:

    Creating tasks for each of the testing details makes sense; we’ve also seen that it helps the developers (if it’s done before a story’s started) by making sure they’re aware of details they may have otherwise missed.

    With regard to using labels to show workflow (who’s working on the story): we’ve just been changing the Owner field (i.e. who currently “owns” the work being done). Is there a reason not to do that, i.e. is Owner better used for something else?

    April 8, 2013 at 12:18 pm

    • Lisa Crispin says:

      Thanks for that feedback on using tasks; we’ve experienced the same good results, with developers proactively anticipating things we will test and making sure those work before delivering the story.

      For the workflow, changing the owner sounds fine too. Labels have worked well for us because in most cases we want both a tester and the product owner to deem a feature story acceptable. It’s easy to glance at the Current panel and see what’s going on with the delivered stories. But using the owner field lets each person watch their “My Work” panel for stories that are ready for them. Visibility, either way.

      April 8, 2013 at 12:43 pm

  2. Alan Ridlehoover says:

    Nice post, Lisa. Using tasks seems like a natural thing to do. And, I like your creative use of labels. But, this sounds like a whole lot of ceremony. Does it feel that way in practice?

    Is your project a large one? How many people are involved? And, what’s the average cycle time of a single story (from started to accepted)?

    April 8, 2013 at 4:30 pm

    • Lisa Crispin says:

      Hi Alan, thanks, glad you like it! Using labels as I described feels lightweight in practice, and it helps minimize confusion about who should be testing a given story at a given time, which saves time.

      We have 20 or so people on the Tracker team, including designers and marketing folk. We don’t keep cycle time statistics, but most stories are ready for acceptance testing within a couple days of being started.

      April 8, 2013 at 5:19 pm

  3. James Majcen says:

    Hopefully blogging about this topic means that the challenges of using PT for a formal testing process are being explored and baked in for a future release.

    We also use tasks for test steps and it works pretty well. We’ve actually created epics to group stories that are tested and ready for production release, as well as for other stages. It seems to work well for getting us by.

    It may just be our workflow, but once a story is Accepted (tested) it is a challenge to track whether the story has been released to production or not. Perhaps a “Deploy” button/status/step after “Accept” would work nicely?

    July 25, 2013 at 1:04 pm

    • Lisa Crispin says:

      Hi James, thank you for your comment. Indeed, we are looking for better ways to incorporate testing activities into the Tracker workflow.

      I like your “Deploy” button idea, definitely worth considering. Please keep sending suggestions on how Tracker can be improved with respect to testers and testing activities.
      – Lisa

      July 25, 2013 at 6:04 pm

  4. Max de Grunwald says:

    In my team we add test plans to the body of the story, and then add any bugs found with the feature (or edge case requirements which don’t warrant a separate story) as tasks. We just use a naming convention so the Developers can add “Task – Set up db” and the Product Manager can add “Issue – If no first name is present we should display username”.

    July 25, 2013 at 1:24 pm

    • Lisa Crispin says:

      That sounds like a workable approach too. We’re working on adding markdown for text input fields, which might help with formatting the information in the tasks to clearly distinguish bugs, edge case requirements and so on.

      July 25, 2013 at 6:05 pm

  5. Jason M. says:

    I appreciate the feedback regarding the use of tasks to provide more visibility on the progress of a story. Regarding tasks, I was wondering if you guys are following this thread, and can you provide some idea as to whether there are plans in the backlog to allow individual tasks to have an owner?

    http://community.pivotaltracker.com/pivotal/topics/task_ownership_independent_of_story_ownership?utm_content=topic_link&utm_medium=email&utm_source=reply_notification

    July 25, 2013 at 8:32 pm

  6. richard w says:

    Tasks seem a nice, pragmatic way of tracking testing, but the list of tasks on a story can very quickly get unwieldy, e.g. if you had dev tasks, then test tasks, then doc tasks, etc. Supporting at least a one-level hierarchy for tasks within a story would aid managing and organising them.

    Definite +1 for allowing tasks to have Owners btw

    July 26, 2013 at 6:29 am

  7. Casper says:

    We are using Pivotal Tracker along with Tracker Tracker to keep track of testing. Tracker Tracker lets you see a flowchart, like on a whiteboard, with Unstarted – Started – In QA – Passed QA – Delivered – Accepted/Done, and lets you drag’n’drop stories across the states. It uses labels to differentiate “in QA” from “passed QA”, since both are actually just “Finished” in Pivotal Tracker. In my opinion, customization of the different states would be awesome. That way you could add any number of states between “Started”, “Finished” and “Delivered” or “Done”/“Accepted”. We don’t use the “Delivered” state, for example. We do internal testing first and then deliver the story to the customer’s test environment for them to test. Whenever we deliver the story we consider it done/accepted, because we use 14-day sprints and can’t wait another week or two for them to finish testing before marking the story as complete. That means that in our case “Delivered” should either be removed or placed after “Accept”/“Reject”. Enabling customization of these states, and maybe also a flowchart view (like in Tracker Tracker), would make things easier for some of us :)

    July 28, 2013 at 10:14 am

  8. Lisa Crispin says:

    Jason, sorry not to get back to you sooner. We are currently looking at either allowing multiple owners on a story, or allowing task ownership. I don’t have a timeframe yet; we need to decide on a design. We will also provide the ability to do @mentions in a task, as you can in comments. Thanks for your feedback!

    Richard, others have suggested the task hierarchy idea. We will be providing the ability to use markdown in tasks soon, which might help with that a bit.

    Casper, thanks for the pointer to TrackerTracker, that sounds really useful. I need to try that out.

    Sorry for the group reply, but I appreciate all the comments!

    August 2, 2013 at 3:06 pm

  9. Brent says:

    Your post finally inspired me to blog about some of the things we are doing with testing. I like the idea of using a wiki to share acceptance tests. We are using Google Drive as a shared area to write and discuss our Cucumber acceptance tests; the main advantage of this is that it allows people to collaborate on an acceptance test in real time.

    Please read more about it here:

    http://blog.brentgreeff.com/using-google-drive-to-collaborate-on-cucumber-features-06-08-2013/

    August 6, 2013 at 4:08 am

    • Lisa Crispin says:

      Hi Brent,
      Thanks for sharing that. For some reason the link isn’t working for me right now, but I’ll try again later.
      – Lisa

      August 6, 2013 at 10:02 am

  10. Magne says:

    I second what James Majcen said.

    Why not just allow custom states?

    Then people won’t have to remember what label to tag the story with, or to notify each other when they put the labels on.

    Custom states could help enforce a particular workflow, and ease communication since when stories go from one state to another, a particular team member could be notified by default.

    Besides, it’s not always simple to define what “Accepted” means, but if one could define and use custom states, one could divide it up into more fine-grained and unambiguous “Deployed” and “Tested” states.

    http://community.pivotaltracker.com/pivotal/topics/more_acceptance_states_to_support_testing_on_multiple_environments

    This is THE feature that would make Pivotal Tracker appeal to Kanban enthusiasts, and to everyone else who needs a specific workflow (e.g. Lean Startups that need a “Hypothesis validated” state). I see that this feature request is over 5 years old… and not even on your 2013 roadmap. I sincerely hope you will reconsider prioritizing this feature!

    PS: I think you are excluding a lot of customers by having so few, non-changeable states. It was the first thing I thought of when considering Pivotal Tracker a year ago, and the reason I decided not to use it then, in favor of Asana.

    Best,
    - Magne

    September 20, 2013 at 6:36 am

    • Lisa Crispin says:

      Hi Magne,
      Thanks for your feedback. We’re working on the best way to allow sensible workflows in Tracker that take testing into account. It’s my understanding that we’ve avoided custom states because we don’t want to make Tracker too heavyweight. But you make a good argument. I’m passing it along to our designers.
      Thanks,
      Lisa

      September 20, 2013 at 10:08 am
