We Have Come To Deliver Value!

The Agile Manifesto, in my opinion, is a decent document. By no means perfect, but still pretty good. I happen to interact a bit with a few of the principals on social media, and I get the sense that if they were writing the document today, they might use different language in some places. But I could be completely wrong about that. Nonetheless, the Agile Manifesto is a pretty good and useful document that anyone interested in Agile should pay close attention to. (Many close “Agile” friends disagree with me on this point – they are all wrong).

Why should we pay attention? Because I believe that wrong, yet popular, interpretations of certain principles lead to some confusion. Case in point: most people I speak with would have you believe that an Agile principle is “we deliver value.” That sounds like something any reasonable person would want to do. Never mind that the concept of “value” is a bit nebulous. Ron Jeffries, in his book The Nature of Software Development, defines value as “what we want.” Well, who is “we”? Mark Schwartz, in his book The Art of Business Value, makes the case that business value can be tricky to define. (Both great books, by the way. If you don’t own a copy, I’m not sure what you’re waiting for!)

The point is that value is hard to define. Value might not even be a fixed target. You know, complexity, double-loop learning, and all that jazz. For example, in an Enterprise IT setting where team members develop internal tools for other team members to use as part of the business processes, how would you define value? Where is the value created? When is it created? Who is the value created for? All great questions that require answers if we’re going to walk around telling people we “deliver value.”

But here’s the catch. The Agile Manifesto says something slightly different from what I hear people say. This is what Principle Numero Uno says:

Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

Wait a second. Valuable software? Hmm. Is value shorthand for valuable software? Is that what people mean when they say they delivered value? Is that what you mean when you talk about value? Not in my experience. I think people have grander aspirations than valuable software. I think they’re thinking outcomes and impacts. I’m thinking it too. In fact, I’m not just thinking about it, I’m writing about it. I want the software I develop to help my organization achieve desired outcomes and have the intended impact. But does software have to create value (whatever that is) before we consider it valuable?

In my opinion, valuable software is software that people find useful or helpful. It’s software that is important. It’s software that makes those who use it more effective at their jobs. It’s software that’s beneficial to the organization. (Yes, I did use synonyms of valuable). It’s software that addresses the needs of the stakeholders. We know our stakeholders, right?

Valuable software may not always (at least not immediately) create value (desired outcomes and impacts) because value is impacted (excuse the pun) by many variables, many of which have nothing to do with software. In other cases, valuable software delivered is not intended to create value (as we often define it). And yet in other cases, connecting software delivered with value is near-impossible or at least extremely difficult.

Just because we can’t easily tie software changes to business value doesn’t mean we shouldn’t care about outcomes and impacts; we should. Our processes should give us confidence that our valuable software will eventually create value. But we can’t wait until it creates value to consider it valuable, can we? I don’t think so.

So what do we do in the meantime? I suggest we focus on delivering valuable software.

Should a Scrum Master Be “Technical”?

This is a pretty popular question in Scrum circles that recently came up on LinkedIn (again).

In order to answer this question, I believe we first need to define the terms “Scrum Master” and “technical”.

Merriam-Webster defines the term technical as “having special and usually practical knowledge especially of a mechanical or scientific subject”, while the Business Dictionary offers “pertaining to computers or technology”.  For the purposes of this post, and in the context of Scrum as a process framework for the development of software products, technical will simply mean “understanding the technologies and being able to engage in the software development activities involved in the process of creating software products”. Technical is not a synonym for “software developer” or “computer science major”.

For the definition of Scrum Master, we don’t have to look any further than the Scrum Guide which provides us with the following as it relates to the Development Team:

The Scrum Master serves the Development Team in several ways, including:

  • Coaching the Development Team in self-organization and cross-functionality;
  • Helping the Development Team to create high-value products;
  • Removing impediments to the Development Team’s progress;
  • Facilitating Scrum events as requested or needed; and,
  • Coaching the Development Team in organizational environments in which Scrum is not yet fully adopted and understood.

I highlighted “helping the Development Team to create high-value products” because it stands out separately from “coaching the team”, “removing impediments” and “facilitating events like retrospectives, refinement, and standups”. What does it mean for the Scrum Master to “help the Development Team create high-value products”? Pretty vague, isn’t it? It would seem to me that the interpretation of what help they can provide is largely left to the Scrum Master (and the Development Team). In my opinion, helping includes the Scrum Master being directly engaged in the value creation process.  In other words, the Scrum Master can directly act on the software product as it’s being developed. (For what it’s worth, previous versions of the Scrum Guide said something much stronger when it came to helping the Development Team: https://webgate.ltd.uk/scrum-guide-2013). In helping the team, the Scrum Master still needs to respect the empowerment given to the Development Team.

I don’t know how a Scrum Master can effectively engage in core software development activities without understanding what’s going on from a technical perspective. In the Scrum community, there is a ton of focus on coaching, facilitating and removing impediments. I have no intention to diminish the importance and necessity of these activities but these activities do not act directly on the product that is being created.

I don’t see a lot of conversation around how the Scrum Master can help with the core activities of software development if need be.  In fact, I read many posts that seemingly discourage Scrum Masters from becoming directly involved in value creation. It’s taboo to ask (for example) if a Scrum Master can test, write stories or even code if need be. I’ve interviewed many a Scrum Master who didn’t care how things actually got done or worked because they believed this needn’t be a concern of theirs or, as they often put it, “I’m not technical and I was told during my training that I didn’t need to be”.

Maybe we’re compensating for bad practices we observed, such as making the Lead Developer a part-time Scrum Master. However, I don’t believe that we effectively address dysfunction by reinforcing other dysfunctional practices.

So, should a Scrum Master be technical? I believe the answer is yes, but only if the Scrum Master wants to be able to act directly on the software product being developed. Otherwise, the answer is obviously no. I encourage all the Scrum Masters I work with to be as technical as they can comfortably be.  As a Scrum Master, the choice is ultimately yours.

Are We Done Yet?

Recently, there has been a bit of talk about being done in the context of Agile software development.  If I’m not mistaken, as far as Agile frameworks go, it would seem that the Definition of Done as described by Scrum influences many an Agile team. We also know that there are variations on this, such as Done-Done and Done-Done-Done.

But when are we really DONE?

In some regard this depends on how we define done.  Is a movie done when production is over?  Is a presentation done when the slide deck has been prepared?  Is a meal done once it has been cooked?  Is a software increment done when coding and testing are complete?

Teams engaged in real (there is a lot of fake and dark Agile out there) Agile software development are guided by a set of 4 values and 12 principles.  The very first principle in the Agile Manifesto goes as follows:

Our highest priority is to satisfy the customer through early and continuous delivery of valuable software

When we use this principle to guide how we define done, I cannot understand how we can consider ourselves done until we’ve delivered an increment of value to our customers. Not just deployed (and hidden behind feature toggles or something like that) but delivered such that our customers can interact with the increment in some form or fashion.

So the next time your team debates whether you are done, ask yourselves if your customers are using the increment.

(Thanks to Mia Augustine for inspiring this post).

P.S. IMHO Scrum’s Definition of Done (in the best case scenario) is really (more appropriately named) Definition of Ready To Be Delivered.

Tips For Working Without Timeboxes

My previous piece on timeboxes led to a fair amount of dialogue on Twitter.  Some of it was positive and some of it was critical.  However, a particular tweet (that led to some back and forth) stood out to me.

In the absence of “real deadlines”, do we need timeboxes to rein in our development teams?

As promised in the previous post, I’d like to share some approaches that teams I have been on have used to reduce their dependency on timeboxes.  I do want to be very clear that I have also used these techniques in conjunction with timeboxes.

Ruthlessly Delivering Small Increments Of  Value

Regardless of whether you use stories or not (all our work does not have to be represented as stories), I encourage you to decompose your work into small bites of customer value.  The INVEST criteria (associated with user stories) are a good guide to use. Each “increment of value” should take a few days to complete.

There is a common misconception that I often encounter which suggests that the bites of value a team works on should be exactly what will be “released” to the customer.  I believe this to be a grave mistake.  Let me illustrate with a common example.

Imagine a team is developing an API that supports GET, PUT and POST as methods.  Let’s also imagine that the team decided the API will not be made available to customers until all three methods are implemented.  Should this be treated as one bite of value or three bites of value?  Most technology folk I know will argue that it should be one bite of value because it will not be released until all the methods are supported.  I probably would have argued for this earlier in my career as well.  Well now I beg to differ.

It is clear (at least to me) that there are at least three units of value available with this API, and while it is true that all three methods need to be implemented before the API is made available, each method can independently be developed and validated for proper function. If each method were treated as its own “story”, we’d find that each story meets the INVEST criteria.
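To make the splitting concrete, here is a minimal sketch of the idea (the in-memory store and handler names are my own illustration, not a real API): each method is a unit that can be developed and validated on its own, even though all three ship together.

```python
# Toy sketch: each HTTP method of the API is its own "story" that can be
# built and validated independently, even though all three ship together.
store = {}

def handle_post(value):
    """Story 1: create a resource and return the generated key."""
    key = str(len(store) + 1)
    store[key] = value
    return key

def handle_get(key):
    """Story 2: read a resource (None if absent)."""
    return store.get(key)

def handle_put(key, value):
    """Story 3: create or replace a resource at a known key."""
    store[key] = value

# Each "story" can be exercised for internal feedback on its own:
k = handle_post("hello")
assert handle_get(k) == "hello"
handle_put(k, "world")
assert handle_get(k) == "world"
```

Nothing about this requires waiting for all three methods before getting feedback on the first.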

But what difference does it make?  After all, nothing will be released until all three methods have been implemented.  That is a fair point.  And yet I’m compelled to ask the following questions:

  • Why would we delay (internal) feedback on the first method just because the second and third methods haven’t been implemented?
  • Why would we couple the release of value with deployment?
  • What if it so happens that we learn that we can release the API progressively?
  • Why would we knowingly limit the options of how we approach the work?

As I learned many years ago, “bite small, chew fast”.  There are many tips on how to split/slice stories.  Google is our friend.

Limit Work-In-Progress

Timeboxing, done right, helps us to be extremely selective about what we want to accomplish in a given timeframe.  We take the time to understand the capacity available to us and then select the most appropriate work that we think we can get done in that time period.

The good news is that we shouldn’t require a time constraint to explicitly limit our work in progress.  It’s a proven fact that the more work we have going on concurrently, the less we actually get done.  Unfortunately, many organizations don’t pay attention to how much they have going on and instead attempt to operate near 100% utilization, which invariably slows things down.  Teams are primarily rewarded for how busy they look.  I see many Product Owners struggle with limiting work in progress.

A word to the wise about limiting work in progress. If we don’t limit the size of the unit of value as well, we’ll still be working on items for an extended period of time with little to show for it.
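One way to make the cost of unlimited work in progress concrete is Little's Law from queueing theory (my addition here, not something the post or any Agile framework mandates): average cycle time equals average WIP divided by average throughput. With throughput roughly fixed by team capacity, more WIP simply means every item takes longer to finish:

```python
# Little's Law: average cycle time = average WIP / average throughput.
throughput = 5  # items finished per week (hypothetical, capacity-bound)

for wip in (5, 15, 30):
    cycle_time_weeks = wip / throughput
    print(f"WIP {wip:2d} -> average cycle time {cycle_time_weeks:.1f} weeks")
```

Tripling the work in progress triples how long each item sits in flight, with nothing extra delivered.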

Quantifying Desired Product Qualities

I am a fan of the work of Tom Gilb, EVO and Planguage.  If you’re not familiar with any of these, you should probably look them up (and also watch this talk by Tom).  I believe that there are ideas from Tom’s work that can be of help to those of us in the business of developing software products.

For example, when we say that our API should be fast, what do we mean by fast?  Or when we say that the website should handle a lot of users, what do we mean by a lot?  These are qualities our solution is supposed to have that are often described only in qualitative terms.  Qualitative attributes leave things up to the imagination of whoever is responsible for ensuring that the product has the desired quality.  In the absence of explicit quality attributes, teams succumb to the pitfall of gold plating.

Specifying in quantitative terms what “goodness” will look like is critical if we want to stay focused and avoid going down rabbit holes.  One could argue that this is the Specific (S) and Measurable (M) from SMART Goals applied to our work items.
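For illustration, here is a tiny sketch (the numbers and names are hypothetical, and this only loosely follows the spirit of Gilb's Planguage) of turning “the API should be fast” into a quantified, checkable target:

```python
# Quantifying "fast": a scale, a baseline and a goal instead of an adjective.
SCALE = "milliseconds to serve the 95th-percentile request"
PAST = 800   # hypothetical baseline measured on the current release
GOAL = 200   # the number that defines "fast" for the next release

def is_fast_enough(measured_p95_ms: float) -> bool:
    """The quantified replacement for arguing about the word 'fast'."""
    return measured_p95_ms <= GOAL

print(is_fast_enough(150))  # True: meets the quantified goal
print(is_fast_enough(450))  # False: better than PAST, but not yet "fast"
```

With an explicit GOAL there is also a natural stopping point, which is exactly what guards against gold plating.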

In summary, while I’m still a fan of using timeboxes, there are ways of working that can reduce a team’s dependency on explicit timeboxes.  Regardless of whether you use timeboxes or not, you will still benefit from adopting these techniques as you do your work.

Should A Scrum Master Have A Goal To Make Themselves Unneeded?

It has become quite common to come across articles or posts that state that the goal of a Scrum Master should be to make themselves unneeded or as a colleague recently put it, “put him or herself out of a job”.  I disagree.

The role of the Scrum Master is clearly defined in the Scrum Guide.  To summarize, the Scrum Master provides service and help to the:

  • Product Owner
  • Development Team
  • Organization at large

Suggesting that the goal of the Scrum Master is to make themselves unneeded is to suggest that the Scrum Master should be focused on either ensuring that (a) the services they provide are no longer needed and/or (b) someone else besides them provides these services.  In other words, ensuring that they are temporary!  In my opinion, this completely misses the point, is an unnecessary distraction and a potential source of much frustration.

The goal of every team member should be to make a contribution to the team in support of its quest to be successful and this includes the Scrum Master.  The goal of the Scrum Master should be to provide the help needed so that the team can effectively and efficiently deliver software that is valuable.  It is not to make themselves unnecessary.

How about coaching? Is that a permanent thing?  The Scrum Master coaches three distinct groups, and even if the Product Owner and Development Team arrive at a point where they no longer need coaching, I’m not convinced the “Organization at large” in a complex environment (ever) arrives at such a point.  Never mind that the Scrum (Agile) body of knowledge continues to evolve; rarely does the system remain the same.  If you know of any organizations that have arrived at such a point where they could no longer benefit from continued coaching, please feel free to share.

But for grins and giggles, let’s agree that it is actually quite possible for an organization to get to the point where (a) the help provided by the Scrum Master is no longer needed or (b) other roles in the organization can effectively and efficiently provide this help as it changes (because it will change).  In my opinion, this then is simply a by-product of the Scrum Master’s continued service to the organization and is not a goal that is defined upfront.

And yet, if you insist that the goal of the Scrum Master is to make themselves unneeded, then I have a book recommendation for you.  Check out  Obliquity: Why Our Goals Are Best Achieved Indirectly.  It might be beneficial to you on your journey to make yourself unnecessary or unneeded.

Thoughts on “Managing” Software Development

I’ve written in the past on management and managers and you can find some of those thoughts here; however, I want to share some additional thoughts.  These thoughts are basically my answer to the question: what does a Software Delivery/Development/Engineering Manager do?

(Even if you’re one of those who believe managers are not needed in knowledge work, I crave your indulgence to read on).

(I do believe that Software Development Manager and Software Engineering Manager are better role names than Software Delivery Manager, but I digress).

It is dangerous to define the job duties of a role without taking into consideration the guiding philosophies of an organization.  For example, let’s just look at the concept of the manager. The definition of management (and hence managers) in an organization that adopts the principles of Frederick Taylor will be markedly different from that of an organization that adopts the principles of W. Edwards Deming.  In other words, roles are not defined in the abstract.  The organizational philosophy ultimately defines the role.

I belong to the school of Deming when it comes to management, and as a result, my definition and description of a software delivery manager is going to be based on Deming’s view on management and managers.  I posit that real Agile and/or Lean organizations take a similar stance.

A software delivery manager is a “manager of software delivery” and if we decompose the role into its component parts we have manager and software delivery.

Douglas McGregor basically describes the Taylorist manager as a person who works with the assumption that:

…workers have an inherent dislike of work and hence need to be directed, controlled and threatened with punishment in order to get them to work hard to achieve company objectives

The Taylorist manager is all about control and external direction; ergo, this manager focuses on defining how the work should be done by the individuals on the team and then measuring individuals to ensure they are meeting the pre-determined targets the manager defined.  This manager leaves little room for individuals and teams to self-direct, as these managers need to do all the planning and provide all the direction.

So how does Deming describe the role of the manager?  According to Deming (excuse the gender-specific tone), a manager:

  1. Understands how the work of the group fits with the aims of the company. He teaches his people to understand how the work of the group supports these aims.
  2. Works with preceding stages and following stages.
  3. Tries to create joy in work for everybody.
  4. Is a coach and counsel, not a judge.
  5. Uses figures to help understand his people.
  6. Works to improve the system that he and his people work in.
  7. Creates trust.
  8. Does not expect perfection.
  9. Listens and learns.
  10. Enables workers to do their jobs.

Deming is not alone in his thinking.  Other management thought leaders such as McGregor, Drucker, Ackoff, and Senge share similar views.

If I were to summarize, I would say that the role of the Agile (and some would say modern) manager is to:

Create an environment where people and teams can do their best work for themselves and the organization in a manner that is both rewarding and fulfilling

The modern manager realizes that they are dependent on their team to be successful.  The team is also dependent on the manager.  In a nutshell, the team and the manager are interdependent.  The manager and the team share the responsibility of developing and delivering software.

(If you work for an organization that says it is Agile but exhibits Tayloristic management tactics, I hope for your sake that the organization is in the early stages of Agile cultural adoption.  If it isn’t, you’ve been warned).

So we’ve defined manager, but how about software delivery? Software delivery is simply the aim of the system that the manager and their team(s) belong to. This system has the goal of delivering software that addresses the needs of stakeholders.

What then are the duties of the software delivery manager?  Here are a few that immediately come to mind:

  • Provide clarity to team members and teams on organizational objectives and what achieving them entails
  • Define the operating constraints and boundaries within which teams operate
  • Provide teams with the appropriate authority, resources, information and accountability
  • Provide career guidance and coaching to individual team members and teams
  • Staff teams appropriately within budgetary constraints and manage the team budget responsibly
  • Partner and network with other managers in the organization in addressing organizational impediments
  • Provide feedback to the teams on the software that is being developed.  Engage in product “inspect and adapt” sessions.

What else would you add to this list?

While we’re talking about software delivery teams, I need to remind us that in Lean and Agile organizations, the work is done by cross-functional teams.  The implication is that software delivery teams will often be composed of individuals from more than one functional group in the organization.  A software delivery manager will often need to team up with managers from other functional groups (e.g. an infrastructure manager), and together they will provide the team the support it needs.

It’s also important to remember that Agile teams (regardless of the framework of choice), as a part of being cross-functional, have a role on the team responsible for championing the product (or something similar to a product).  This role, often referred to as the Product Owner, decides what the team should focus on, not the software delivery manager.

So there you have it.  These are my thoughts on the Software Delivery/Development/Engineering Manager.  I anticipate that some readers will disagree with me.  Please share your thoughts in the comments.  But as you disagree with me, take a moment to explore why it is that you disagree.  In fact, I implore you to examine the underlying philosophy and mental models that inform your definition. Where do they come from?  Do you know?  Are they based upon Taylor’s Principles of Scientific Management, Deming’s principles or something else?

Our actions reveal our beliefs.

Time-boxes, goals and realizing outcomes

A goal is defined as:

“the end toward which effort is directed” or “the object of a person’s ambition or effort; an aim or desired result.”

I don’t understand the debate over the Sprint Goal.

I also don’t understand how a team can function and focus without goals (be they implicit or explicit, although I would suggest that there is a danger in not making goals explicit).

If a team carves out a fixed set of time (a time-box) to do some work, they have already, albeit implicitly, established a goal, i.e. completing a certain amount of work in a certain amount of time.  I don’t consider this type of goal to be the best goal we could come up with because it’s output-focused, and you know I favor outcomes over outputs.  I will concede, however, that this type of goal is probably better than no goal at all.

From the Scrum Guide, the Sprint Goal is:

an objective that will be met within the Sprint through the implementation of the Product Backlog, and it provides guidance to the Development Team on why it is building the Increment.

Seems pretty straightforward to me with a lot of latitude for the Scrum team to create their goal.  What do you think?

If your time-boxed work period has no goals, no aims, no objectives, no desired end, what exactly is your team doing?  Where are you headed?

(I’m pretty sure some folks reading this are thinking to themselves that time-boxing is an archaic practice and should be dropped altogether.  Stay tuned for my next post).

And while this post has focused up to this point on teams working with time-boxes, I will go even further and suggest that any team developing and delivering software in support of a broader organizational vision – yes, I’m looking at all the teams out there that say they are using Kanban – that doesn’t have time-bound goals (see Agile Manifesto Principle #3) is operating irresponsibly.

Goals are very useful.  Use them wisely.  Use them well.

Predictability in Software Development – Part III

In Part I and Part II of this series, I challenged the popular perspective on predictability in software (product) development and suggested that if an organization needs to respond rapidly to the changes in its ecosystem, it needs to value adaptability over predictability, at least “in the large”.  However, if innovation is not critical, then valuing predictability over adaptability may be the better thing to do.

What role (if any) does predictability play in product development?  This is especially important because – as mentioned in the first post of this series – many people in positions of significant authority demand predictability from their teams.

Even though it may seem that I’ve spoken against predictability, I do believe there is often a need for teams to be predictable; I just consider this to be predictability in the small.  Teams that are predictable in the small consistently complete about the same amount of work over a series of short time periods. Candidly, what counts as a “short time period” depends on your strategic and operational approaches to addressing business opportunities.  As a rule of thumb, however, I recommend that “small” be 4 weeks or less.

For example, let’s say that a team forecasts the completion of ten items each iteration over the course of four two-week iterations and then has the following results:

             Forecasted  Delivered
Iteration 1      10          8
Iteration 2      10         11
Iteration 3      10          9
Iteration 4      10          9

Even though the team never delivered exactly ten items, we can see that they completed about the same amount of work in each iteration.  A basic understanding of variation tells us that this team exhibits predictability.
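My own quick arithmetic on the table above makes the point: the delivered counts cluster tightly around the forecast.

```python
forecast = 10
delivered = [8, 11, 9, 9]

mean = sum(delivered) / len(delivered)
spread = max(delivered) - min(delivered)

print(mean)    # 9.25 -- close to the forecast of 10
print(spread)  # 3 -- the delivered counts vary only a little
```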

But wait a second: this seems to be all about outputs, and I thought we were all about outcomes?  Actually, we care about both outcomes and outputs, and even though we value outcomes over outputs, we know that our outputs are needed for our desired outcomes to be realized.  So, understanding how much work we can complete in the small can help us determine what outcomes can be achieved in the small.

What practices can an organization and their teams adopt in order to help them be predictable in the small? Here are three:

Bite Small, Chew Fast

As a team, focus on value adding items that can be completed in a couple of days. Use techniques such as INVEST (for user stories) to decompose large initiatives into small chunks of value.

Less Is More

Take an essentialist approach to how much work the team takes on.  Less is often more. Minimize the amount of work that is in progress.  Team members with different skills should collaborate on work items with the goal of finishing them as quickly as possible.

Minimize Process Loss

Ivan Steiner came up with the equation AP = PP – PL; that is, the actual productivity of a group equals its potential productivity minus process losses.  Process losses are those things that prevent our teams from being as productive as they could be.  They dampen the good we could potentially do.

Teams need to take the time to reflect on the things that are negatively impacting their performance and then address those things with both rigor and discipline.
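As a toy illustration of Steiner's equation (the loss categories and numbers below are invented):

```python
# AP = PP - PL: actual productivity equals potential minus process losses.
potential = 100  # hypothetical units of potential productivity
process_losses = {
    "waiting on approvals": 15,
    "context switching": 10,
    "flaky builds": 5,
}

actual = potential - sum(process_losses.values())
print(actual)  # 70 -- the losses, not the people, are the lever to pull
```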

(To be fair, these three things are good for any Agile team to pay attention to regardless of whether they have predictability demands or not)

The Conclusion of the Matter

And yet with all this talk about predictability, I’d be remiss if I didn’t make observations about predictability and the potential unintended consequences that arise from pursuing it in an unbridled manner:

  1. Describing knowledge work teams (especially product development teams) as factories or engines needs to be done with care.  Software development teams are not machines that are executing repetitive or pre-programmed activities.  Innovation means discovery.
  2. Just because a team is predictable doesn’t mean they are high performing or that they are delivering value. (Subject of a future post).
  3. Predictability, even in the small, is often at odds with innovation and creativity.  If you are challenging your teams to be creative and innovative and also demanding high levels of predictability from them, something will have to give.  Eventually.

So after three posts on predictability, where have we landed?  Are we adaptable in the large? Are we predictable in the small?  If you’re part of a real Agile organization, then adaptability and predictability are characteristics that your organizational operating model needs to support.  It’s critical that leaders provide their teams with clarity on the business landscape and guidance on how to balance adaptability and predictability in software (product) development.

Velocity is NOT about Productivity

I happen to participate in a couple of Lean and Agile subject matter groups on LinkedIn. This is my way of learning and sharing.  In my opinion, any serious Agile practitioner should join and participate in these types of groups (no, it doesn’t have to be on LinkedIn). I personally learn more from these groups than from workshops and certification classes.

I’ve observed over the past year that not a week or two goes by without someone having a question around Agile velocity on one of these discussion groups.  Interestingly enough, their questions are never about how to use velocity for forecasting and/or planning. No, the questions are always about how to increase or improve their team velocity.  As an example, check out this very recent velocity conversation.

Why are so many of the velocity questions focused primarily on productivity?

Unfortunately, many of these well-intentioned posters find themselves in organizations where velocity is considered to be a primary measure of productivity.  Not only do organizational leaders use velocity as a measure of productivity, we find some Agile practitioners (sometimes unintentionally) doing so as well.  I’ve seen postings on LinkedIn where teams are congratulated for increasing their velocity.  I’m saddened and disappointed that in 2016 we’re still talking about velocity in this way.  I’m not sure what got us here but instead of simply complaining, I am compelled to share my thoughts in the hopes that it may help someone or some organization now or in the future.  Paraphrasing Peter Block, start with the room you’re currently in.

First of all, let’s agree on what velocity means in the context of Agile software development. Operational definitions are extremely important, and I think defining velocity will do us some good.  Velocity can simply be defined as:

The sum of effort estimates completed in an iteration (see the Agile Alliance reference for more information on velocity).

Any other usage of the term in an Agile software development context is a redefinition of the term.  Agile velocity doesn’t refer to how fast a team works.  It’s not even the count of items completed in an iteration (that would be throughput). It is how much of our effort estimates we completed in an iteration.  No more, no less.  That simple.

As the Agile Alliance article makes clear, velocity is primarily a planning instrument. Carefully read the article as it provides a pretty good explanation of the purpose of velocity and what it should be used for.  Pay attention to the common pitfalls as well.

(However, I must point out that many Agile teams effectively forecast and plan without using velocity.  Stated differently, velocity is not a requirement for forecasting and planning.)

But maybe you still think that we should be able to use velocity to gauge how productive our team is.  Maybe the information presented so far is not convincing enough for you. In that case, let’s take a moment to dissect what it means for a team to increase its velocity.

If velocity is the sum of our effort estimates, increasing our velocity means increasing the sum of the effort estimates we complete.  On the surface, this may seem easy to do: all a team has to do is complete more items and its velocity will increase.  However, completing more items doesn’t guarantee a velocity increase, because we can’t assume that the estimates are the same size or larger than the estimates from previous iterations.  In fact, they could be smaller, in which case our velocity could actually decrease, and that could be a good thing!

For example, let’s assume a team had a velocity of 20 points in Iteration 1, having completed 4 stories estimated at 5 points each.  Then in Iteration 2, they had a velocity of 18 points after completing 6 stories estimated at 3 points each. Were they less productive in Iteration 2 because their velocity went down?  What if they completed 7 stories in Iteration 3 and each of those stories had an estimate of 1 point? Are they still less productive?  How do we know?
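To make the arithmetic concrete, here’s a minimal sketch that computes velocity alongside throughput for the hypothetical iterations above (the story data is made up to match the example, not taken from any real team):

```python
def velocity(estimates):
    """Velocity: the sum of effort estimates completed in an iteration."""
    return sum(estimates)

def throughput(estimates):
    """Throughput: the count of items completed in an iteration."""
    return len(estimates)

# Hypothetical iteration data from the example above.
iterations = {
    "Iteration 1": [5, 5, 5, 5],           # 4 stories at 5 points each
    "Iteration 2": [3, 3, 3, 3, 3, 3],     # 6 stories at 3 points each
    "Iteration 3": [1, 1, 1, 1, 1, 1, 1],  # 7 stories at 1 point each
}

for name, estimates in iterations.items():
    print(f"{name}: velocity={velocity(estimates)}, "
          f"throughput={throughput(estimates)}")
```

Note how velocity falls from 20 to 18 to 7 while the number of completed stories rises from 4 to 6 to 7. The two numbers answer different questions, which is exactly why a dropping velocity tells you nothing about productivity on its own.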

Don’t forget that velocity is calculated using “effort estimates”.  Estimates are subject to many factors that I won’t get into in this article, yet it’s important to remind ourselves that each iteration the team is solving problems it has never solved before. The team’s effort estimates could go up due to uncertainty and complexity, and that would have nothing to do with how productive they are.

Determining how productive a team is based on how much of their “effort estimates” they complete totally misses the point.  I think Deming would refer to this as management malpractice.  It’s that bad.

And yet teams can easily increase their velocity.  In fact, it’s not hard at all.  All they need to do is make their effort estimates larger.  I’ve seen many teams do this in my career. I’m not sure your organization really wants that.  Riffing off of Goldratt:

Tell me how you’ll measure me and I’ll show you how I’ll behave.

So encouraging teams to increase their velocity is simply something that organizations intentionally adopting Agile should not be doing, especially if they don’t want their teams to game the system.  I will go further and suggest that using velocity to determine predictability (in the small) is in serious danger of missing the point of velocity as well. There are better ways to do this, and I will address that point as I finish my series on predictability.  If you missed the first two posts, check out Part I and Part II.

After all this talk about velocity, I am moved to remind us that the primary measure of progress for an Agile team (and organization) is the amount of software that has been delivered to users and that the users find valuable. All other measures are secondary and tertiary.  Velocity can be used to help forecast and plan (though there are possibly better alternatives), but let’s not use velocity as a measure of productivity.  If you must use velocity, use it responsibly.