Process, People, and Pods


Sunday 2 September 2007

...And How Are You Going to USE That Information?

In my last post, Counting Wastes Time, I cited a case of wasteful counting. I ended by giving my favorite response when someone asks to collect data about the project:

...And how will you use that information?

Don't get me wrong. I really want a great answer to that question. I seek metrics that help us understand the project. To paraphrase the NRA:

Metrics don't kill projects; people do.

But I digress. Some additional examples of useless information gathering will help you see the importance of the question, "And how will you use that information?"

Example 1: Revised Estimates

On a recent fixed-price project, we dutifully re-estimated stories at the start of each iteration. We had release-level estimates with which we did our original planning. The iteration estimates were supposed to set our "commitment" level for the iteration; they were supposed to set client expectations.

We over-delivered by 100%. We repeated the exercise, and over-delivered again. So on the third cycle, we asked the question, "How did we use the estimates we made?" The team, including the project manager, acknowledged that we hadn't looked at them since we collected the numbers. So we dropped them.

(By the way, as I noted in yesterday's post, Counting Wastes Time, I have rarely found value in resizing stories "just in case things have changed.")

Example 2: Defect Tracking Database

We haven't tracked bugs in the last three development projects I have participated in. How can this be?

One of the key Lean practices is feedback, and feedback is most effective when given rapidly. This leads to a positive metric, story cycle time: the elapsed time from the start of analysis until story acceptance by the customer. So if a bug blocks a story, the clock keeps running on that story until the bug is fixed and the story moves forward. Developers on a Lean-oriented team will attack bugs rather than work on new stories. The net result is that few bugs live for more than a day. On a recent project I tracked, more than half the days ended with zero open bugs.
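To make the metric concrete, here is a minimal sketch of the bookkeeping; the function, the data layout, and the timestamps are mine, invented for illustration, not part of any tool we actually used:

    from datetime import datetime

    def cycle_time_days(analysis_start, accepted):
        # Elapsed days from start of analysis to customer acceptance.
        # The clock keeps running while a story is blocked by a bug,
        # which is exactly what makes bugs expensive under this metric.
        return (accepted - analysis_start).total_seconds() / 86400.0

    # Hypothetical stories: (start of analysis, customer acceptance)
    stories = [
        (datetime(2007, 8, 20, 9), datetime(2007, 8, 22, 16)),
        (datetime(2007, 8, 21, 9), datetime(2007, 8, 24, 11)),
    ]
    times = [cycle_time_days(start, done) for start, done in stories]
    print(f"average cycle time: {sum(times) / len(times):.1f} days")

A blocked story shows up immediately as a long cycle time, which is about all the bug "tracking" a co-located team needs.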

Instead, we record each bug on a red task card on our standard card wall. When the bug is fixed and verified, we take the card down. I would like to say that we throw the card away, but we still like to put the cards in a stack and admire them (for some reason). I do have to admit to a certain curiosity myself. But in the last couple of projects, I have yet to see anyone look back at those cards!

There are great bug-tracking tools on the market. Many IT shops have standardized on one and have process guidelines requiring its use. An Agile/Lean project shouldn't need any of them.

[In a bit of cruel fate for a vendor, I was recently asked to comment on a newly announced bug-tracking tool that purports to solve all your tracking needs; indeed, it bills itself as the next generation of tracking tools, rendering all others obsolete. Needless to say, I commented that it was a solution in search of a problem as far as I was concerned. If you need that level of sophistication, your project is going to fail anyway.]

Bad Reasons to Collect Information

There are two reasons for collecting metrics that are, in fact, good reasons not to collect metrics:

Because I might need them: I consider this a euphemism for "I don't know." I lump "We always do it this way" and "We have to" into the same category. In the case of "I don't know," we drop the counting for an iteration or so and see if we miss it. It is always hard to argue against a short-term experiment. In the case of "we always do it this way," start a discussion to drill down into why we did it that way originally, and see if the reason still exists (especially if you have introduced Agile and/or Lean practices). This is good development practice in general, and good Lean practice in particular. In the case of "we have to," it is time to engage the process architects and gauge whether this is a battle worth fighting. I tend to relish that idea, having done the process role at IBM many, many years ago, but we won't discuss my personality defects here. Regardless, the tactic is the same: find out what the information was useful for, and see if the need still exists.

I need to understand how well everyone is doing. Translation: "I need to pin the blame on someone when things go wrong." Now you have hit a hot button of mine. Blame is not a way to run a team. Blame creates fear, and fear kills motivation and productivity. And kills fun! Agile teams, especially co-located Agile teams, have many positive ways to address inhibitors to productivity, including many ways of addressing the skill deficiencies that will always exist in software projects because of the wicked pace of change. So if someone attempts to collect information for the primary purpose of assigning blame, well, let's just say they have to get past me first.

There Are Good Metrics

Lest I have led you astray (and you use these posts to justify collecting no information at all): there are many good uses of collected information. I will cite my favorite Agile/Lean project metrics in a future post.

But for now, do you have examples of gathering useless information?

Saturday 1 September 2007

Counting Wastes Time

It's there. You see it. You have this overwhelming urge to count it. It's a moral obligation, a call to arms, your personal mission.

I am referring to counting and measuring artifacts. I am talking about project managers and team leads, and sometimes programmers, testers, and analysts.

And if you stop to think about it, it is a big time waster. A better core philosophy:

"Never count your money while you're sitting at the table. There'll be time enough for counting when the dealings done." -- Kenny Rogers from The Gambler

This thinking is based in Lean, a process borrowed from manufacturing. One of the practices Lean espouses is the identification and elimination of waste. Waste is broadly defined as anything that does not add value to the delivered product.

Agile software development produces loads of artifacts, like stories, tasks, test cases, and code. All of these artifacts have different useful lives. Code endures. Tasks are transient artifacts of development. There is a time and place for counting each. But if you count outside of those boundaries: waste!

Here is a real-life example from an Agile project. It occurred between the client's project administrator (sort of a cross between an administrator and a project manager) and me, in my role as tech lead and process coach. The discussion was about task cards. A task is a small unit of development work identified by the programmers for a story; a story averages 10 or so tasks, with huge variance.

  • PA: "I need to track task cards."
  • Me: "There are going to be a lot of them."
  • PA: "I still want to track them."
  • Me: "Could you just track stories? Development only takes a couple of days."
  • PA: "I still want to track them."
  • Me (seeing a pattern here): "Well, if you really want to..."
  • PA: "Great. We need to number them."
  • Me (I am beginning to catch on): "So that you can track them?"
  • PA: "Yes."
  • Me: "You are free to put any number on them you want. They're on the wall." (Note the technique of delegating to the person who seems to have all the extra time.)
  • PA: "And I need the initials of the pair that worked on them."
  • Me (can't see a quick way around that): "Okay. We mark tasks finished as we go. We'll put our initials on them then."
  • PA: "I need an estimate for each task."
  • Me: "Two-to-four hours."
  • PA: "No, I need an estimate for each task."
  • Me: "Two-to-four hours. We break the story down into 2-4 hour tasks. That is the estimate."
  • [Repeat several more times.]
  • PA: "Okay, I will mark a range of 2-4 hours. Then I will need the actuals."
  • Me: "No." I really try not to use this word with clients, but sometimes there isn't a better word.
  • PA: "But how can I track how long it actually took?"
  • Me (trying to use powerful Vulcan logic): "If we gave you a number, it is only an estimate of the actuals. We have to estimate to accommodate meetings, distractions, bathroom breaks, and the like. Even we don't know how long it actually took."
  • PA (after long processing pause): "But I need actuals."
  • Me (feeling a little guilty suggesting it): "You can track how long it takes to do the story, and just divide it." (See the sketch below.)
  • PA: "Okay, I guess that will work."

I was being a bit unfair to our project administrator. The whole conversation was rendered moot as soon as she saw the number of tasks created, started, and completed each day: 20 to 25 on an average day. The counting lasted only a week.
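For the record, here is the divide-it arithmetic I had suggested, with made-up figures; note that every task on a story ends up with exactly the same "actual":

    # Made-up figures: one story that took 22 elapsed hours, with 8 tasks.
    story_hours = 22.0
    task_count = 8
    per_task_actual = story_hours / task_count
    print(f'each task "actually" took {per_task_actual:.1f} hours')  # 2.8

A tidy, uniform number that measures nothing in particular, which is rather the point.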

Most consultants would (and do) succumb to the pressure to count. Some may even advocate it. Frankly, I exploited my gray hair and used my authority voice to carry the day in this case.

Imagine the overhead if this process had been established! Even a few minutes per task add up when there are 3000 tasks on a project.
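A quick back-of-the-envelope, with per-task figures I am assuming purely for illustration:

    # Assumed figures: 3 minutes of bookkeeping per task, 3000 tasks.
    minutes_per_task = 3
    task_count = 3000
    total_hours = minutes_per_task * task_count / 60
    print(f"{total_hours:.0f} hours of overhead")            # 150 hours
    print(f"about {total_hours / 40:.1f} forty-hour weeks")  # ~3.8 weeks

That is nearly a person-month spent producing numbers nobody reads. And the killer question for this: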

...And how will you use that information?