Category Archives: Norms & Principles

November 3 – Mike Beedle on Enterprise Scrum

This was a blast of an event, with one of the co-signers of the Agile Manifesto, Mike Beedle, as our honored guest.

Mike’s presentation (outlined below) was followed by a comprehensive Q&A.  Some folks were wise enough to have Mike hand-sign a copy of the Agile Manifesto 🙂

Summary of Mike’s talk:
  • Enterprise Scrum (ES) goes much further than Technology alone.  ES grows Unicorns and transforms Dinosaurs.  ES is:
    • Balanced Management
    • Integrated Agility
    • Principles & Techniques
    • Management Framework
  • ES mandates a new breed of management.  Today, most of the management taught at colleges and universities is completely outdated.
  • Ten years from now, “S&P 500 membership” will look different than it does today, and companies that don’t become more agile will be the first to know it.
  • Conventional performance management is an outdated process that must be redefined
  • The most effective way to ensure a successful agile transformation (especially at a large organization) is to “spin off” a small internal entity that would be free of certain existing counter-agile organizational laws, mandates, norms and behaviors.  Such an organization should be ‘real’, having its own “front-” and “back-end”.
  • Financial firms that don’t also think of themselves as technology firms (or, worse yet, outsource their technical solutions elsewhere) will face difficult times in the years to come.
Some Kodak moments below:


October 18 – LeSS Talks: LeSS PBR Session Simulation, with User Story Mapping

This was another very exciting and highly collaborative Large Scale Scrum (LeSS) meetup session, which took place in NYC this week.

The meetup group simulated a LeSS Overall Product Backlog Refinement session, with a few feature teams collaborating with each other and with the Product Owner while using the Story Mapping technique.

The “Product” in scope was the virtual collaboration tool itself – Nureva Span.  All of its existing (currently available in production) functionality was “reverse-engineered” into “done” user stories (this was done by me, Gene, before the session), which were placed on the Story Map canvas.  The purpose of this preliminary exercise was to give the group a point of reference for future product brainstorming.

The role of the product owner was played by one of the facility hosts (Ellen) – she knew the tool very well, both from the standpoint of its existing features and of the strategic business plans of the company that produces it.  Another facility host (Geoff) was asked to take on the role of a well-informed SME who could answer the teams’ questions without depending on the product owner.

The group of attendees was split into two “feature teams”; skill sets and familiarity with the tool itself were deliberately mixed (some attendees were meetup regulars, whereas others were newbies).

The product owner described to the teams her most important strategic goals (they were based on real-life demands from the company’s sales force and market feedback obtained from existing customers).  She also gave a few examples of possible user stories that she would find most valuable (highest priority).  This primed both teams to engage in a very collaborative discussion: fleshing out additional user stories, mapping them, and aligning them with the larger, overarching features that already existed in the product and were displayed on the virtual canvas.

The product owner used the concept of iterative delivery to “schedule” hypothetical releases and guided the teams on the order in which user stories would need to be developed to deliver the highest value first.

Both teams also attempted to size user stories, using a Small/Medium/Large scale (there was no time to use Planning Poker).
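
For illustration only, here is a minimal, hypothetical sketch of how a story map like the one built in this session could be captured as data. The feature and story names, sizes and release numbers below are invented assumptions, not the actual content produced by the teams.

```python
# Hypothetical story map: overarching features form the "backbone"; user stories
# hang under them with a Small/Medium/Large size and a value-ordered release slice.
# All names and numbers are invented for illustration.

story_map = {
    "Share canvas with remote participants": [
        {"story": "Invite a guest by link", "size": "S", "release": 1},
        {"story": "Control guest permissions", "size": "M", "release": 2},
    ],
    "Export session results": [
        {"story": "Export notes as a file", "size": "M", "release": 1},
        {"story": "Push notes to an external tracker", "size": "L", "release": 3},
    ],
}

# Walking the map release by release mirrors how the product owner "scheduled"
# hypothetical releases so that the highest-value slice is delivered first.
for release in (1, 2, 3):
    print(f"Release {release}:")
    for feature, stories in story_map.items():
        for s in stories:
            if s["release"] == release:
                print(f"  [{s['size']}] {feature} -> {s['story']}")
```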

During the meetup summary, the product owner noted that it would be worthwhile for her and the SME to export all of the created user stories from the canvas and pass them on to the actual product development teams responsible for designing and supporting the collaboration tool, in the hope that the wishes of end customers (the meetup attendees) would eventually materialize as product features.

P.S.  Before the event, the meetup group was advised to review the following references:

 

Kodak Moments:


August 30-31st: The 1st Large Scale Scrum Conference in Amsterdam

The first ever Large Scale Scrum (LeSS) Conference took place on August 30–31, 2016.  Budgeted to accommodate around 150 people, it was oversubscribed, with around 180 attending.

The event took place in the beautiful city of Amsterdam, in the cathedral-style audience hall Rode Hoed.

Summarized below are the topics that were most interesting to me (Gene) personally.  My friend and colleague Vicky Morgan also made a thorough recap of the entire event, which appears at the bottom of this page; please spend some time reviewing it as well.

Also, please check out:


But first, a few entertaining Kodak moments:


Large Scale Scrum Communities Discussion

There was very high interest from many conference attendees in local LeSS communities.  People wanted to learn how they can ensure continuity of learning LeSS once they leave the conference and return home.  Since the NYC LeSS community is the oldest and the biggest at this moment, I (Gene) was called upon to share my experience of creating and supporting the community.  I covered the following aspects of the NYC LeSS community: birth/life span, count/composition/growth, venue/frequency, RSVP vs. attendance statistics, format, benefits for the local community, potential for global networking, communications/announcements, and public vs. intra-company communities (pros & cons).

Among the many people that attended the discussion, the folks from Germany/Berlin, UK/London, Italy/Rome and Finland/Helsinki seemed to have the most interest.  As an immediate result, the Berlin LeSS Community was born:


LeSS Bazaar

At the beginning of the conference, all participants were split into multiple teams, with each team working throughout the conference to produce something “shippable”. It could be an idea, a concept, a tool…anything.  At the end of the second day, there was the LeSS Bazaar, facilitated by Craig Larman, where each team presented its product to the others.  Each conference member was asked to use “LeSS money” to vote for what they felt was the best product.



Event Summary by Vicky Morgan


Self-Designing Teams Workshop, by Ahmad Fahmy

  • Forming Communities – Facilitation Techniques – Constellations
  • Discussion vs. Dialogue – Peter Senge

Story of LeSS, by Bas Vodde

  • We can go really fast in the wrong direction or slowly in the right direction; LeSS is about focusing on going in the right direction

  • Shu – ha – ri
    • The original books were written at the ha-ri level
    • New book is at the shu level
  • Framework Prescriptiveness
    • add prescription around the points that create transparency – Scrum does this well
    • LeSS is prescriptive on organizational structure as structure has to be changed in order to have an effective transformation
  • LeSS
    • Basic rules – LeSS rules
    • Guides clarify how you adopt the rules
    • LeSS complete picture
    • LeSS principles are retrospectively added
  • More with LeSS
    • Build your method up – don’t tailor it down
    • LeSS is about how you do scrum with multiple teams vs. one of the product teams doing scrum

Port Of Rotterdam, by Rutger Van Dijk

  • Business value – what’s important and how do you measure it?
  • Simple visual reporting to upper management; do not try to force “agile” reporting onto upper management
  • Exploring user stories
    • Analyzing users and their behaviours
    • Gemba – go see
    • Users visit the team
  • Obtaining Management buy-in
    • Stop building features for management, build for customers / end users; PO has biased assumptions
    • Physically delete stories – indicate symbolically that you cannot build everything
  • Distributed team
    • Team bonding activities
  • Culture
    • How are consultants/freelancers treated? Are they “resources” / a pair of hands, or valued team members?
  • Hire for mindset
    • Learn while doing
    • Lunch sessions with online courses (Pluralsight)
    • Pair-programming (with expert)
    • Special code camps
  • This worked:
    • Build what you know will be needed, not what you think will be needed
    • People need to be around to explain why something is needed. If they are not around, then do not build it, or cancel the sprint
  • This did not work:
    • Every team member working on their own stories
      • Less motivated
      • Less productive
      • No project hygiene
      • Complete disarray
    • Reduced involvement of management (and you definitely need them)
  • The Wave
    • There is no sea without waves
    • Everything is changing all the time
    • Experimentation and learning

 

Owning (versus Renting) Your System, by Craig Larman 

  • We need to own our ideas
    • Employees should be involved with the decisions themselves – leads to employee engagement
    • When you give a person an idea, they are renting the idea and thereby do not own the idea
  • 2014 study: 5 keys to a successful Google team
    • Psychological safety was far & away the most important of the 5 dynamics unearthed
    • The foundation of psychological safety is that teams are empowered to ask “why”, enabling them to understand “why” and thus to own ideas
  • Meaningfulness
    • Connection with real customers
    • Involved in the whole rather than the part (the degree to which employees are involved end-to-end)
  • Why Feature teams vs. Component teams
    • In LeSS feature teams work directly with real customers
      • This intentionally translates into teams owning the ideas and improving the system
      • In addition to reducing coordination and hand-off waste
  • Local thinking vs. system thinking
    • System Optimizing Goal
      • Although there are many system optimizing goals, what is really important is that the group decides what the system optimizing goal is. This is a very important question
      • Is the current system / team optimized for this goal?
      • What is the behaviour of the organization/system?
    • Why would employees care about the system optimizing goal?
      • Are the team involved in strategy?
      • Are the team directly engaged in owning the idea of the product?
      • Are the team directly engaged with the customer?
      • Are the team just ditch diggers?
  • Owning the answer
    • Why is there 1 and only 1 backlog in LeSS?
      • Team should find the answer themselves

 

Technical Practices in LeSS. Why Are They So Important and So Hard?  by Terry Yin

  • Organizational agility is constrained by technical agility
  • Technical agility = people problem
  • Organization
    • Self-leading teams instead of maximizing resource utilization
      • Resource utilization – utilization of what people already know – known-known
        • Delivery culture
        • Prevents learning from happening
      • Focus on known-unknown instead of known-known
  • Education
    • Help people to overcome the CS cliff
    • Be the village
  • Model
    • Adopt the craftsmanship metaphor
      • Craftsman model
        • Apprentice
        • Journeyman
        • Master
    • Don’t cheat too much
  • Technical Excellence
    • Iterations: product > feedback > improve
    • Practice & More Practice
    • If the organization doesn’t promote practicing then you cannot have technical excellence

 

Impact Mapping with Innovation Games, by Gojko Adzic

  • Squirrel-driven product management
    • Agile at scale – BBC – 75M spent with no clear definition of benefits
    • Large scale failure – FBI case delivery system: $450M spent before discovering something was wrong
    • Status report was green and occasionally amber for years
    • Analysis: 10,000 inefficiencies in the system. Nobody was following any process. The only reporting was reporting on delivered activity
  • Scrum reporting focuses on delivery activity – delivered user stories/ features / burn-up & burn-down charts / velocity / story points
    • velocity reporting = this many people working this many days of the week.
    • Story points = how much money you have spent
  • Underpants gnomes progress reporting (story points)
    • Phase 1: Collect underpants
    • Phase 2: ?
    • Phase 3: profit (lagging indicator)
  • Study: 1/3 of initiatives delivered expected results. 1/3 no visible business impact, 1/3 = damaged the organization
  • Can we find something to report on that is indicative of success? Anton Zadorozhniy.
  • Successful initiatives tend to change somebody’s way of working.
  • Douglas Hubbard – “How to Measure Anything”
    • People measure what they think is easy to measure
  • Impact map = visualization of the plan (see the minimal sketch after this list)
    • Helps visualize what you are doing and why you are doing it
    • Helps solve underpants gnomes reporting problem – report on achievements instead of delivery reporting
    • Helps with business analysis in general (business analysis often fails as you analyze an option that someone else had already chosen [Tim Brown])
  • Innovation Games: Impact Trump Cards
    • A lot more options to analyze behavioural changes
  • Open Impact Mapping
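
Relating to the impact map bullet above: here is a minimal, hypothetical sketch of the usual impact-map structure (why/goal → who/actors → how/impacts, i.e. behaviour changes → what/deliverables). The goal, actors and deliverables below are invented examples, not content from Gojko’s talk.

```python
# Hypothetical impact map. Reporting at the "impact" (behaviour change) level tracks
# achievements rather than delivered activity. All entries are invented examples.

impact_map = {
    "goal": "Reduce support costs by 20%",  # why
    "actors": {                             # who
        "End users": {
            # impact (behaviour change): deliverables that might cause it
            "submit fewer duplicate tickets": ["searchable knowledge base"],
            "resolve simple issues themselves": ["guided troubleshooting flow"],
        },
        "Support agents": {
            "close tickets faster": ["canned-response library"],
        },
    },
}

# Print the map as an indented tree, mirroring the usual visualization.
print(impact_map["goal"])
for actor, impacts in impact_map["actors"].items():
    print(f"  {actor}")
    for impact, deliverables in impacts.items():
        print(f"    {impact}")
        for deliverable in deliverables:
            print(f"      {deliverable}")
```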

August 23 – LeSS Talks: Agile Budgeting & Finances: unveiling conventional management mistakes


Last night’s meetup produced an exceptional turnout.  There were some guests from the UK: friends and colleagues.

The following key points were covered:

  • Triple Constraint Triangle of Conventional Management
  • Why does the Agile Community understand Budgeting better than some finance people?
  • Why is the Management Area so “untapped” in terms of improvements?
  • Why did Borealis abolish traditional budgeting?
  • Decomposing Budget into: Forecasts, Targets and Resource Allocation
  • Forecasts vs. Targets
  • “Rolling” Forecasts vs. Dynamic Forecasts
  • KPIs: good, bad
  • Balanced Scorecards against Budgets – what usually wins?
  • Splitting a bag of cash
  • Does Meeting a Budget Drive Individual Performance?
  • What do Monetary Incentives do to People?
  • Why do we need Partnership between HR and Finance?
  • Frequently ignored scientific evidence
  • How to overcome resistance?
  • Evolution vs. Revolution: what is better?
  • Who is doing “this”?
  • Agile budgeting for scaling

Note: a number of folks approached me, asking to share the materials presented. Please use the form at the bottom of this page to receive the materials.

Some Kodak moments captured:


Quotes from: Implementing Beyond Budgeting: Unlocking the Performance Potential, by Bjarte Bogsnes



Bjarte Bogsnes has a long international career in both Finance and HR. He is currently heading up the Beyond Budgeting implementation at Statoil, Scandinavia’s largest company, with operations in 36 countries and a turnover of 130 bn USD. On the Fortune 500, the company was recently ranked #1 on social responsibility and #7 on innovation. Transparency International has named Statoil the most transparent listed company globally.

Bjarte is a popular international business speaker and a winner of a Harvard Business Review/McKinsey Management Innovation award. He is the author of “Implementing Beyond Budgeting: Unlocking the Performance Potential”, where he writes about his implementation experiences. Statoil realized that traditional leadership and management practices no longer work in today’s competence organizations, which operate in business environments more complex, dynamic and unpredictable than ever.


The summary below (selected quotes from “Implementing Beyond Budgeting: Unlocking the Performance Potential”) has been prepared for Senior Leaders, Finance and HR people who still have to “do budgets” the old way, by combining Targets, Forecasts and Resource Allocation into one set of budget numbers.  They are highly encouraged to read the entire book and draw their own conclusions.
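
To make the book’s “separate, then improve” idea concrete before diving into the quotes, here is a minimal sketch (mine, not from the book) of the difference between one negotiated budget number and three separately owned numbers. The figures are invented for illustration.

```python
# Illustrative sketch only: a traditional budget forces one number to serve as
# target, forecast and resource allocation at once; Beyond Budgeting separates them.

from dataclasses import dataclass


@dataclass
class TraditionalBudget:
    annual_number: float  # one negotiated "in-between" number for everything


@dataclass
class SeparatedPurposes:
    target: float      # an aspiration: what we want to happen (ambitious, possibly relative)
    forecast: float    # an expectation: what we think will happen (updated as things change)
    allocation: float  # resources made available as needed, not pre-allocated in detail


def gap(p: SeparatedPurposes) -> float:
    """An ambitious target above a realistic forecast is simply a gap to close;
    squeezing both into one budget number hides both signals."""
    return p.target - p.forecast


if __name__ == "__main__":
    old = TraditionalBudget(annual_number=1000.0)
    new = SeparatedPurposes(target=1150.0, forecast=1020.0, allocation=1020.0)
    print(f"Old: one number for everything -> {old.annual_number}")
    print(f"New: target {new.target}, forecast {new.forecast}, allocation {new.allocation}, gap {gap(new)}")
```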

  • Today, however, we are in very different times. Not only have our business environments become much more dynamic and unpredictable, but they are just as much about people: the birth of the knowledge worker and the demise of organizations as obedient machines. In this environment, budgeting has become more of a barrier than a support for great performance, something that instead prevents organizations from performing to their full potential.
  • Today, there is so much more VUCA out there: Volatility, Uncertainty, Complexity, and Ambiguity.
  • Traditional management has more in common with how the Soviet Union was run than with the principles and beliefs of a true democracy.
  • Much of the internal communication would also benefit greatly both in trustworthiness and usefulness by turning down somewhat the one-way “aren’t we great” messages. The result is often the opposite— cynical employees laughing about all those polished corporate messages. Instead, we need much more employee-driven discussions and information exchange. Why are there, for instance, so few internal company blogs when the external world is full of them? We need more horizontal communication: sharing, challenging, and learning. But there seems to be a fear of people using these forums to speak up, voicing critical viewpoints that might fit badly with the image companies try to paint of themselves. Again, the parallel to totalitarian regimes is disturbing. It’s mushroom management; keep them in the dark and feed them shit.
  • Traditional management fears transparency because it threatens control. But as Jeremy Hope, cofounder of the Beyond Budgeting Roundtable, put it, “Transparency is the new control system.” There is a reason why thieves and crooks prefer to operate at night (although in some businesses it seems to happen during daytime, too).
  • One of the most stubborn myths in traditional management is that the only way to manage cost is through detailed annual cost budgets, with a tight follow-up to ensure that no more is spent than is handed out. The many problems this practice creates are not necessarily among the most serious ones, but I have chosen to address them early as the consequences of removing the cost budget are definitely what worries managers the most when considering Beyond Budgeting.
  • Great performance! What is the problem? It works; managers did not spend more than they were given. We have cost under control, right? Unfortunately, this is just half the story. That ceiling works just as well and often better as a floor for the same costs. Cost budgets tend to be spent, even when the initial budget assumptions changed (which they almost always do). Managers do not necessarily behave like this to cheat; they do it because the system encourages them to do so.
  • The problem gets bigger because not only one bag is handed out. There are a lot of smaller bags inside: “Of course, we cannot just give you one big bag of money!” We are talking about a huge mountain of bags, labeled salary, overtime, travel, consultants, and so on, often split further into even smaller monthly bags.
  • There is a lot of work involved in negotiating the right size of all of these bags, which often stimulates behaviors bordering on the unethical. As the budget-approving manager, this is a game you are bound to lose. You will always have less information than those below you about the real need for resources, status on ongoing activities and projects, and the quality of new projects.
  • To make sure that money is spent from the right bag, there is also the detailed monthly follow-up of actual costs against the year-to-date budget (the one we were trusted to make ourselves). Variances are spotted with accounting accuracy. Never mind the fact that our monthly reference point becomes more and more obsolete and irrelevant as months go by, assumptions change, and the real world moves on.
  • Another mantra is low costs. Costs should be as low as possible and cutting the budget is an effective way of achieving that. What we want, however, is not necessarily the lowest possible cost level. What we want is the optimal level, the one that maximizes value creation.
  • In an increasing number of businesses, however, this resource is no longer the main constraint, at least not all the time. Instead, human capital is often taking this role. Our processes struggle with this shift. Finance has spent decades developing and perfecting the financial language and process: common charts of accounts; international financial reporting standards; and systems for data capture, reporting, and audit. HR, however, is still in the very early days of trying to do something similar with human capital.
  • “What if you are in a business where margins are wafer-thin? What if the financial situation is so bad that tight cost management is a question of life or death?”
  • Again, Beyond Budgeting is not about ignoring the need for good cost management. On the contrary, it is about better cost management, better optimization of scarce resources than what the traditional budget offers.
  • Control is an important word in the management vocabulary. Some Finance people are even called Controllers. I recall the first time I got that title. I felt pretty good about myself! When managers are asked about their biggest concern in abandoning traditional management practices, including budgeting, invariably the answer is “losing control.”
  • Transparency, as already discussed, is a great example of a good control mechanism. So is a strong, values-based culture. There are, however, two other types of control that we want much less of. The first one is too much controlling of what people shall and shall not do, through detailed budgets, tight mandates, detailed job descriptions, rigid organizational structures, smartly constructed bonus schemes, and all other Theory X– driven control mechanisms. Some of these controls might seem real and effective, but are often nothing but illusions of control. People are smart, and any system can be gamed if people want to. The second type of control we need less of is maybe an even bigger illusion. It is the perceived control of the future, the one we think we get if we only have enough details in our plans and forecasts.
  • The budget variance analysis is another classical example of control illusion. Detailed explanations of the difference between actual and budgeted numbers might provide a comforting feeling that the past is both understood and well explained.
  • There are more illusions: “If we don’t manage performance, there will be no performance. If we don’t develop people, there will be no development.” Many Finance and HR functions seem to be built on such assumptions.
  • When managers are setting goals for their employees, it is useful to understand their full performance potential. I don’t think I will know mine before the day I pass away. Key Performance Indicators (KPIs) are often used to set targets. As we will discuss later, we must remember what the I stands for. KPIs are trying to indicate if we are moving toward where we want to be, but they are not always able to reveal the full truth. They are not called KPTs, Key Performance Truths!
  • Precision does not always equal relevance. The more accounting oriented we are in our performance thinking, the more we tend to emphasize precision and sacrifice relevance.
  • Let us not lose track of what performance management is meant to be about. Remember Albert Einstein’s wise words: “Not everything that can be counted counts, and not everything that counts can be counted.” I recommend using the SMART principles with caution. Here is some advice to ensure they actually help and not hinder the ultimate goal, which is the best possible performance given the circumstances:
    • Specific— but not a straitjacket
    • Measurable— but do not forget words
    • Achievable— but do not forget Michelangelo (see below)
    • Relevant— do not forget strategy
    • Time bound— but do not leave it all for year-end
  • Someone once added ethical and reasonable to the acronym to make it SMARTER. Nice! A target can easily create the same ceiling/floor situation we discussed on cost. After managers have negotiated and low-balled, they might strive to hit their targets, but they have normally few reasons to go beyond.
  • How targets are set is also important. There is a big difference between targets that you set for yourself, compared to those set for you.
  • Most Finance people believe that all target numbers must add up exactly to the corporate target, and that this can be achieved only through top-down cascading. The fact that such cascading often destroys ownership, commitment, and motivation is ignored: “That is HR stuff, we work in Finance.”
  • In my own private life I have set very few targets, if any. I certainly have had my dreams and aspirations, and I know quite well what good looks like. When I many years ago was diagnosed with diabetes, I knew I had to lose weight. I never set any targets about how much and by when. But I changed my lifestyle, and I measured frequently that things were moving in the right direction, both weight and blood sugar.
  • First, let it be clear that performance can be evaluated even if no targets are set. We normally know what good looks like when we see it, and there are always strategic direction and performance standards to relate to.
  • These purposes are: Feedback and development, Reward, Legal documentation
  • If instead the reward purpose dominates, it easily pulls the dialogue in the opposite direction. The rational employee might instead focus on “I’m so great” successes, and avoid anything that can taint the polished performance picture he or she is trying to paint.
  • The legal documentation purpose isn’t very motivating, either. It is about the employer needing to have the paperwork in case there should be a need for drastic action. The purpose is seldom the opposite: a legal justification needed for praise, promotion, or pay increase.
  • The development purpose requires no rating at all. The best appraisal dialogues I have experienced have been rating-free, with managers providing open, honest, and constructive feedback, focused more on my strengths than on my weaknesses. Rating can actually “dumb down” the dialogue because the number easily replaces or shortens the much more important words.
  • First, we need to remember that performance is not the same as results. A result is a measured outcome. Performance is the behavior and effort behind.
  • We need a broader and more intelligent performance language than the old one of “within budget” or “green KPI.”
  • True objectivity is therefore wishful thinking. There will always be subjectivity when targets are set.
  • I am no big fan of performance ratings, but when rating is also used in a forced ranking of employees we are entering the realm of stupidity. A number of companies are now abandoning this hopeless management practice. According to Washington Post, 10 percent of the Fortune 500 companies have now abolished the traditional annual performance appraisal, including Microsoft, Accenture, Deloitte, and Expedia. Even GE is experimenting with alternatives.
  • When presenting Beyond Budgeting in Europe, the first question I normally get is how cost can be managed without a budget? In the United States, the first question typically is, “What drives bonus if there is no budget?” The smallest problem with bonuses is that they often are tied to delivery of budget numbers, which as we have discussed is a language quite ill-suited for performance evaluation. A much more serious problem is the negative effect on motivation and performance, which this section is about. I have totally lost my belief in individual bonus systems. I am convinced they do much more harm than good.
  • The employer– employee relationship becomes a “principal– agent” contract, where the main focus for both parties is to maximize their own gain and benefit.
  • I believe that one day the idea of individual bonus will be driven out of town, shamed and undressed. But we need more little boys (or corporate rebels) raising their hand, shouting what everyone in the crowd also can see: This emperor has no clothes on. Let me explain why he is naked.
  • If it has to be money, does it have to be individual bonus? Why can’t a collective system be an alternative? Can sign-on fees sometimes be an alternative?
  • When Australian Atlassian decided to abandon their sales bonus system, they knew they would lose some of their salespeople. They did, but those they really wanted to keep stayed on.
  • In 2013, the pharmaceutical company GSK (GlaxoSmithKline) announced a new compensation program abolishing individual targets within sales.
  • Individual bonus can be a very effective motivational mechanism for simple work where there is little motivation in the job itself, where the link between individual efforts and outcomes is easy to measure, and where quantity is more important than quality. So for picking fruit, catching rats, and similar simple, repetitive work, individual bonus definitely works. But when moving to more complex tasks, where more cognitive skills and teamwork are required, research shows that individual bonus loses its power.
  • But for more complex tasks, external motivation typically has either no effect or a negative one, reducing the internal motivation. It is called the “crowding out” effect.
  • Giving blood is a great thing to do. Experiments have shown that when hospitals have introduced financial rewards in order to get people to give more blood, the effect has often been the opposite. Donors feel that it reduces the noble act of giving blood to something closer to “selling body liquids.” Hundreds of studies on individual bonus arrive at similar results, across borders and cultures. There is probably no other area where there is a bigger gap between what research says and business does. How come? Is it lack of knowledge or pure ignorance? Or is it simply laziness? Dangling a financial carrot in front of people is undoubtedly much simpler and easier than motivating through great leadership. Money is so much simpler. But again, that old craft called leadership is not meant to be easy.
  • Journalist and author David Sirota sees it like this: ”The main question for management is not how to motivate, but rather how management can be deterred from diminishing or even destroying motivation.” Bonus systems can definitely be one way of destroying motivation, although there are probably those who find satisfaction and motivation in cheating the system.
  • Even executives realize that something is wrong. Here is John Cryan, co-CEO at Deutsche Bank: “I have no idea why I was offered a contract with a bonus in it because I promise you I will not work any harder or any less hard in any year, in any day because someone is going to pay me more or less.” There are, however, a few camps in psychology that see things differently. The behaviorism theory of the American psychologist B.F. Skinner strongly advocates extrinsic motivation. The only small problem is that most of Skinner’s supporting studies and experiments were conducted on mice, rats, and pigeons. The studies were about simple, mechanical, and repetitive tasks where individual results are easily measured— not exactly what life in today’s knowledge organizations is about.
  • Team or collective bonuses are very different, as they are designed with a different purpose: hindsight reward for shared success. This is an important distinction. Individual bonus is intended to provide both up-front motivation and hindsight feedback. Collective bonuses are often criticized for not delivering that up-front motivation. But they are not meant to. Collective bonuses are meant to create a positive feeling around common efforts and shared success being rewarded in a fair way. Creating such positive vibrations has, of course, a positive indirect motivational effect.
  • We have discussed two different reasons why companies have bonus systems— market and motivation. There is actually a third— affordability. It can be a cheaper way of paying people, because bonus is variable, not fixed.
  • Fortunately, there is management innovation taking place also here. Companies like Google, HCL, and Zappos are experimenting with peer-to-peer bonuses or non-financial rewards. The thinking behind is that colleagues and people you work with often have a better view of your performance than your manager.
  • “most merit or performance-based pay plans share two attributes; they absorb vast amounts of management time and resources, and they make everybody unhappy.” Kohn recommends a simple way out of the misery: “Pay people fairly, and then do whatever possible to make them forget everything related to pay and money.”
  • There is more we do not like in the real world. Projects and activities that run past year-end also mess things up. An approved project stretching over several years must be reapproved every autumn. We need control!
  • There is an accordion rhythm to this way of forecasting, which seems to assume that the world ends December 31. One solution is rolling forecasting.
  • This is not an attack on coordination in general, only on the annual coordination stint. We need a coordination that is continuous and customized, where those who need to should communicate as they choose themselves, on a schedule and time horizon relevant for their business relation.
  • Budgets are used for setting targets, mostly financial. At the same time, those budget numbers shall also reflect an expectation of what next year might look like. Finally, cost and investment budgets are a pre-allocation of required resources. We therefore want three different things from this process: Good targets, Reliable forecasts, An effective resource allocation
  • As we will discuss later, it is hard to achieve any real quality improvement in target setting, forecasting, or resource allocation without first separating the three. A two-step approach is needed: separate and then improve. The Borealis and Statoil chapters will discuss in depth what this can mean in practice.
  • Why do we spend so much time and energy on budgets and budget reporting? One reason is the illusion of control that we discussed earlier. The more details and decimal places we churn out in our plans and budgets, the more control we believe we have, and the safer it feels to set sail in those treacherous business waters.
  • To conclude, performance is managed by someone who is not present in the situation, and decisions are not based on entirely fresh information. It is a simple, rules-based system.
  • What about the police officer in the middle of the crossing, whistling, waving, shouting, and pointing? Doesn’t that person also make local decisions based on fresh information about the actual situation on the ground? Absolutely, but who really needs that middle manager and his command and control when a self-regulating system can do the job just as well and much cheaper?
  • “Skaters work out things for themselves, and it works wonderfully well. I am not an anarchist, but I don’t like rules which are ineffective.”
  • On the management process side, the traditional budget typically needs to go or at least be radically changed. Relative targets should replace absolute targets where possible and where it makes sense.
  • This is what Beyond Budgeting is about: changing both leadership behaviors and management processes in a coherent and consistent way, with the aim of becoming more agile and more human
  • …we were approached by Société Générale, Airbus, Michelin, Danone, and GDF Suez (now Engie).
  • …budget problem was just one part of a larger systemic problem. The solution could not be found just in new tools and processes that could do the budget job in a better and more effective way. A set of leadership principles was also needed.
  • Leadership Principles:
    • Purpose— Engage and inspire people around bold and noble causes, not around short-term financial targets.
    • Values— Govern through shared values and sound judgment, not through detailed rules and regulations.
    • Transparency— Make information open for self-regulation, innovation, learning, and control; don’t restrict it.
    • Organization— Cultivate a strong sense of belonging and organize around agile and accountable teams; avoid hierarchical controls and bureaucracy.
    • Autonomy— Trust people with freedom to act; don’t punish everyone if someone should abuse it.
    • Customers— Connect everyone’s work with customer needs; avoid conflicts of interest.
  • Management Processes:
    • Rhythm— Organize management processes dynamically around business rhythms and events, not around the calendar year only.
    • Targets— Set directional, ambitious, and relative goals; avoid fixed and cascaded targets.
    • Plans and forecasts— Make planning and forecasting lean and unbiased processes, not rigid and political exercises.
    • Resource allocation— Foster a cost-conscious mindset and make resources available as needed, not through detailed annual budget allocations.
    • Performance evaluation— Evaluate performance holistically and with peer feedback for learning and development, not based on measurement only and not for rewards only.
    • Rewards— Reward shared success against competition, not against fixed performance contracts.
  • We are often asked about what kind of organizational structure Beyond Budgeting recommends. There is no single, simple answer. The principles do advocate agile and accountable teams with a strong customer focus, but this can be achieved in many different ways. The organization chart seldom tells the full story.
  • The most famous Beyond Budgeting case has long been Handelsbanken, a Swedish bank that today operates almost 900 branches in 24 European countries and is the fastest growing bank in the UK. What makes their story so fascinating is not just the fact that the bank decided to kick out the budget as part of a radical transformation of their management model back in 1970. It is equally fascinating to observe how the bank has performed since then:
  • More profitable than the average of its competitors, every year since 1972
  • Among the most cost-efficient universal banks in Europe
  • Never needed a bailout from the authorities because they messed it up
  • The strongest bank in Europe and one of the strongest in the world, according to financial information provider Bloomberg
  • Wallander’s bold steps included:
    • Much greater branch authority—“ the branch is the bank”
    • A flat structure with only a few layers
    • A focus on customers instead of products
    • Transparent performance data
    • No individual bonuses, only a collective profit-sharing system
    • A strong values-based culture
    • No budgets
  • Beyond having no budgets, it sets no targets and does very little of traditional planning.
  • All bonuses are collective, driven by how Handelsbanken is performing against other banks, which gives everybody a very good reason to share knowledge and best practices.
  • Stimulating internal knowledge sharing is not the only reason why Handelsbanken is shying away from individual bonuses. It is also very much about the customer. They want to make sure there is absolutely nothing that can create a conflict of interest when branch employees advise their customers.
  • Miles has been growing by double digits almost every year. Growth was never a goal, however, just a consequence of doing well and being attractive.
  • Miles does not operate with individual bonuses, but employees get a share in two different ways. There is a provision system where the employee gets a cut of the revenue he or she generates. They can choose the risk profile that best fits their private situation: high fixed or high variable. In addition, if the annual profit margin for their unit exceeds 10 percent, all employees with partners/spouses are invited for a weekend trip abroad.
  • Reitan hates bureaucracy: “I don’t want processes, I want decisions and execution, and the distance between the two as short as possible,” he says. He is no big fan of rules, either, but loves transparency. “Rules limit creativity, motivation, and enthusiasm. Transparency creates understanding and accountability.” Their marketing slogan “Simple is often best” (“ Det enkle er ofte det beste”) has become legendary.
  • There are no budgets. “Many companies have big departments only doing budgeting, sending numbers up and down the organization, before they are presented to the board to give them something to talk about. A waste of time and energy! A forecast, however, is something very different. It gives us a picture of how things might develop. If we don’t like what we see, it forces us to do something!” Reitan says.
  • Now, there is a lean four-quarter rolling forecast process, combined with an annual three-year forecast. When needed, the quarterly forecast is updated more frequently.
  • It creates all those local “profit centers” that allow for autonomy, benchmarking, and self-regulation, just as we saw in Handelsbanken.
  • “Why do we budget? What is the purpose of those budget numbers?” We simply had not thought about it from that angle before. It was another magic moment. Just like pushing a button, answers came pouring out: target setting, forecasting, cost/ investment management, and delegation of authority. It soon became clear that many of these purposes were not that closely related.
  • We got out of this simply by limiting our financial target setting to improving “return on average capital employed” (RoACE).
  • The solution we developed was the relative RoACE. This was not RoACE benchmarking between Borealis and competitors, or between Borealis business units. Instead, we calculated the historical relationship between market conditions and RoACE both for the company and for each of the business units.
  • Even if everyone knew that costs had to come down, the budget negotiation always included a number of convincing arguments for the opposite. The result was often higher budgets instead of lower ones. You know the game. But as soon as we started reporting against the new budget, things looked okay.
  • What were the drivers and purposes behind the costs we incurred? We found the answer in activity accounting. Activity accounting is about understanding the purpose of costs, not just the type of costs (accounts or cost items) and where they occur (cost centers).
  • “Every action has a purpose and every cost can consequently be attributed to an objective,”
  • The rolling financial forecast gave us a continuous and updated view of our shorter-term financial capacity while a longer-term forecast was made annually. The investment forecast was a combination of approved projects and projects in the pipeline. When the forecast signaled capacity constraints, the actual and planned investment level was reduced by delaying or turning down projects.
  • KPIs dominated, in target setting, follow-up, evaluation, and rewards, causing many of the negative side effects we discussed earlier.
  • Handelsbanken has of course been going since 1970 and seems to have a rock-solid foundation. Still, if something similar had happened in the bank after seven years, perhaps things would have looked quite different today. Fortunately, Handelsbanken has a policy of recruiting top management from within, which makes a big difference.
  • Today, nimblicity is one of the Borealis values. The company even has it copyrighted. Wonderful!
  • The Statoil journey actually made me understand the Borealis journey better:
  • Management Information in Statoil (MIS): Most scorecard implementations start at the top and are cascaded out into the organization. They are mainly about translating and communicating group strategies.
  • Compared to its sister unit responsible for the Norwegian continental shelf, INT operated in an even more dynamic and unpredictable environment. This provided us with a wealth of great examples and evidence of why traditional budgeting and planning is a flawed process.
  • But the process allowed for only one number to represent both an ambitious target and a realistic expected outcome. It was simply impossible. Not surprisingly, the result was a negotiated compromise, an “in-between number” that nobody was very happy about.
  • Another favorite of mine was the exploration budget. Exploration is about finding new oil and gas reserves. Before I share this story, I want to underline that I am not criticizing the Exploration management team. They were (and are) great people, and I highly respect the job they do. I am criticizing the system we asked them to operate under. They did not invent it; we did.
  • The scorecard was also connected to the bonus system: the greener the KPIs, the higher the bonus. It was not difficult to understand why the exploration budget had not been spent. It was not necessarily bonus driven.
  • Our proposal was twofold: First separate and improve the different budget purposes, as we had done in Borealis. In addition, we would let the scorecards introduced from 2000 to 2004 become the new cornerstone in the management process, under the name Ambition to Action. The latter definitely created comfort and helped to secure a yes. There would not be a big black hole where the budget had been. The proposal would also solve another problem, the conflict between scorecards and budgets. As we will discuss in Chapter 6, there are often conflicting signals coming from the two. Almost always, the budget wins, undermining the importance of the scorecard.
  • We identified a long list of budget problems: weak links to strategy, a time-consuming process, unethical behaviors, outdated assumptions, illusions of control, decisions made too early and too high up, budgeting as if the world ends December 31, and budgets ill-suited for performance evaluation.
  • The first wall in the larger room is the Statoil Book, a booklet given to everyone in the company.
  • The second wall is each unit’s Ambition to Action, which provides more concrete guidance and direction through strategic objectives, KPIs, and actions.
  • The third wall consists of a set of both financial and non-financial decision criteria, combined with a set of decision authorities stating how big an individual decision a manager can make before having to go one level up. This wall has been there all the time. What is new is that we no longer have “double decision making” with regard to also approving, for instance, annual investment budgets.
  • The fourth wall is sound judgment. The power of common sense should never be underestimated. I mistrust any model lacking this important component.
  • For instance, having a balanced scorecard is not unique at all. The way it is implemented and operated, however, makes a big difference. A scorecard can protect and reinforce a command-and-control regime, or it can do the opposite. Too many companies are in the first category. We aim to be in the second.
  • In earlier chapters we discussed the different purposes of a budget— target setting, forecasting, and resource allocation— and why combining the three causes serious problems. Let us quickly recap. One by one these do not represent a problem as long as each one is done in a meaningful way. The problem emerges when the three are combined in one process allowing for only one set of numbers, the budget numbers.
  • A target is an aspiration, what we want to happen. A forecast is an expectation, what we think will happen. A good sales forecast can’t also function as an ambitious sales target. Forecasts that also are applications for resources tend to carry a systematic “too high” bias, as managers hoard and secure room to negotiate before the axe comes out.
  • Here is one way of separating target setting and forecasting: Do target setting first, based on an outside-in perspective of what is possible. What have others been able to do? What does great look like?
  • The separation also calms the scared. There will always be managers who are frightened by the idea of abolishing budgets. By separating and then improving, we can assure these managers that we will continue doing what the budget tried to do for us, but in much better ways. That doesn’t sound too scary, does it?
  • Ambition to Action has three purposes: translating strategy from ambitions to actions; securing flexibility— room to act and perform; activating values and leadership principles
  • There is then the vertical translation between organizational levels.
  • It is not mandatory to have an Ambition to Action. Managers often ask us if their team should have one. We absolutely recommend trying it out, but we advise yes only if the team itself experience this as a sensible and value-adding way of managing themselves. If not, they are better off without.
  • Some managers still believe they have to have one, often with low ownership as the result. The best indication of missing ownership is when Ambition to Action is updated only before business review meetings with the level above.
  • The importance of tone and language used when formulating objectives is often underestimated. Strategic messages can easily be lost in too many words where “correct and precise” win over “makes people tick.” The consulting language is full of words to be avoided, because they do not reach people the way we think they do. Take the popular Excellence, for example. It is a worn-out term that I believe turns more people off than on. World class may be in a similar category. Actually, many would probably be much more fired up by “Let’s beat the s*** out of the competition!” Whatever language used, aim for the simple and natural, but paint big pictures that engage and that people can relate to and believe in.
  • BBI Core Team member Dag Larsson puts it like this: “Speed can never replace direction.”
  • I can sometimes be stubborn, and my hunt for those perfect KPIs continued for many years. Today, I have given up, simply because they do not exist. Again, they are not called KPTs – Key Performance Truths. This does not mean they are not useful. We just have to remember their limitations. Earlier, we discussed some of the characteristics of a good KPI. Here is the checklist we use:
    • Do they measure progress toward strategic objectives?
    • Do they measure real performance?
    • Is there a good mix of leading and lagging indicators?
    • Do they address areas where we want change or improvement (or is monitoring sufficient)?
    • Are the KPIs perceived as meaningful at the level they are used?
    • Can data be collected easily?
  • As already discussed, relative KPIs can be very effective. There are two types of relative KPIs. The first is about input/ output relations, focusing for instance on unit cost instead of absolute cost; the second is about benchmarking and comparing with others. The two can also be combined.
  • One solution is “indirect benchmarking”: comparing how well each unit improves their own performance.
  • The majority of our KPIs are actually in the “absolute” category. We simply try to use relative KPIs where it is possible and where it makes sense, but again, a Beyond Budgeting journey does not depend on these. If absolute KPIs and targets are used, ranges or rounded numbers are normally better than the decimal-loaded numbers. The more absolute KPIs and targets are used, the more important it is to also apply a holistic performance evaluation, where we also look at what measurement isn’t picking up. More about this later.
  • The more ambitious a target is, the less it must be perceived as imposed from above. Without ownership and commitment, ambitious targets become nothing but a numbers game.
  • A final reflection on targets. As discussed, a target is actually not the target or the goal. What we really want is the best possible performance, given the circumstances. Setting targets is one way of achieving this, but it is a medicine that often comes with a number of negative side effects. These include lowballing, negotiations and hidden (or not-so-hidden) agendas, and even more of it if a bonus is linked to target achievement. The rational (or cynical) manager has no reason whatsoever to set ambitious targets. On the contrary, it only reduces the chance of hitting the number and getting the reward. What if we could find other ways of getting people to do their best without these side effects? Relative benchmarking KPIs without targets is one alternative.
  • Which actions do we need to take in order to deliver on strategic objectives and KPI targets?
  • What are the expected consequences of these actions, expressed as a forecast, either against KPI targets or in other financial or operational areas where we need to understand what lies ahead (e.g., financial capacity)?
  • It is quite natural to have gaps between ambitious targets and realistic forecasts. The goal is of course to close such gaps, as deadlines and delivery time are approaching. A gap is not necessarily something negative; it just shows that we are aiming high while at the same time having a realistic view on where we believe we will end up as things look today.
  • Although our forecasting principles are simple, the practice of them is not necessarily so for several reasons. The first has to do with our heritage from the budget days, which are not that far behind us. In the old process, there was “one number” only. This was optimized depending on the main purpose: a “high” number if the main purpose was to ask for money and a “low” number if the main purpose was target negotiation.
  • A forecast is not a promise, not something to deliver on. People using that expression have not understood the difference between a forecast and a target. Again, a forecast is what we think will happen, an expectation; a target is what we want to happen, an aspiration. Sometimes we definitely don’t want to hit our forecasts.
  • Leadership behaviors are often to blame for good forecasts becoming bad ones.
  • Let us stay at sea. A supertanker needs a huge radar screen. It takes a long time to turn, so it is important to be able to discover dangers and obstacles early. A speedboat, however, hardly needs a radar screen. It can react and turn the very second something is observed. The speedboat is much more agile than the supertanker, which uses forecasting to compensate for its lack of agility. Maybe companies should put less effort into becoming better at forecasting, and more into becoming more agile? Dynamic forecasting and dynamic resource allocation are closely related. What is the point of having the world’s largest radar screen and the ability to sense and respond instantly, if there is no dynamic resource allocation ensuring that the necessary resources also can be instantly accessed or reallocated, instead of being locked up in a detailed annual budget?
  • Measuring forecasting accuracy is normally only relevant for external forecasting where we can’t influence the outcome (oil prices, exchange rates, etc.).
  • Here are a few simple but important forecasting principles. Forecasting should primarily be something you do for yourself to help you manage your own business. If a lot of your forecasting is triggered by requests from above, asking for data you otherwise would not have bothered with in order to manage your own business, then something is wrong. Why do others need this information if you do not? Local ownership is key for getting good data quality. There is always better quality if those making the forecast depend on the quality themselves. A forecast should also be actionable. If the information cannot be used to trigger any action, why do we forecast?
  • Cost budgets are definitely much easier, if that is the goal. But it isn’t! The goal is an optimal use of scarce resources, and we need something much more effective than the annual, preallocated, and detailed cost budget.
  • The mindset we need to move away from is the one expressed by “Do I have a budget for this?” as the main and sometimes the only question asked when a decision with cost implications shall be made. The answer is typically yes if there is budget money available; otherwise, it is no. I know that decision-making in a budget regime is somewhat more sophisticated, but there is still a big core of truth in this observation. Instead we want people to ask, “Is this the right thing to do? What is good enough? How is this creating value? Is this within my execution framework?”
  • What we want is a dynamic allocation of resources that is as self-regulating as possible. What we don’t want is the detailed budget preallocation, where all units are given a bag of money divided into detailed cost items: salary, overtime, travel, consultants, and all the other cost types in the chart of accounts, often further divided into monthly budgets. This is what creates the millions of preallocated bags in big organizations.
  • An absolute cost target (say, “1,000”) doesn’t have to be an annual number; it can also be a 12-month average target, valid until there is a need to change it, up or down.
  • The rigidity of an absolute number can also be reduced by moving from absolute to relative KPIs. A unit cost target is more flexible than an absolute target.
  • Unit cost benchmarking is even more self-regulating.
  • It is also possible to operate without cost KPIs at all, and instead rely on the self-regulating effect of a challenging bottom-line target such as operating profit or RoACE, absolute or relative.
  • Finally, it is possible to manage costs with no targets at all. Instead, we rely on the two other dimensions in Ambition to Action. Strategic objectives can, for instance, express what kind of cost mentality we want, such as “We spend company money as if it was our own.”
  • The model is based on trust, on the belief that the majority of people are mature and can be trusted to spend money wisely. The only thing we know for certain when trust is shown is that someone will abuse it; that, however, is a far smaller cost than designing the whole model around the few who might.
  • If we can’t be trusted to manage our own travel cost, how can we be trusted when we advise and recommend on million- and billion-dollar projects?
  • Absolute cost targets may be set if a significant change in activity and cost levels is required, but they must be set at the overall rather than the detailed level to secure the necessary flexibility. Even if no cost targets are set, both actual and forecast cost trends are monitored and corrective measures taken as required. All entities should continuously challenge their own efficiency, level of activity, and resource use.
  • The conflict between targets, forecasts, and resource allocation is also present in projects. Therefore, we no longer have only one single approved “budget” number. Instead, we have separated them and operate with three numbers here as well:
  • The project estimate (e.g. 1,000) is the expected cost estimate used in the profitability analysis of the project. Being merely a forecast, this number will continue to live throughout the project.
  • The more ambitious target cost (e.g. 900) is the cost level the project team aims for.
  • The resource allocation estimate (e.g. 1,100) or the mandate to spend is set higher than the 50/50 project estimate, to avoid on average every second project having to come back and ask for more money.
  • As a consequence, the phrase “project budget” is no longer very meaningful and is slowly (but very slowly) disappearing from our vocabulary.
  • Beyond Budgeting does not mean that cost is not important; it means that the constraints introduced are set at a much higher, less detailed level. Some also needed a reminder that Beyond Budgeting is about so much more than cost management. There are actually 11 other principles!
  • A key principle in our business follow-up is “forward looking and action oriented.” The KPI status (red/yellow/green) is therefore set by comparing forecasts with targets, instead of actuals versus budget or target year to date. “Green” means the forecast is better than the target, and “red” the opposite. The purpose is to shift the focus forward, away from the past and from explaining historical variances. This does not mean that we do not look at our actual figures; it is the comparison against an increasingly outdated year-to-date reference point that we have skipped. This focus triggers one of two questions: if the KPI is green, which risks could jeopardize what looks okay, and how are these risks addressed? If the KPI is red, which actions must be initiated to get back on track? (A minimal sketch of this status logic appears right after this list of notes.) I must admit I am ambivalent about the use of KPI colors. It works well in teams that mainly see them as a simple way of sharing status with each other. It works less well when perceived as part of a top-down control-and-reward regime, sometimes triggering gaming and unethical behaviors to change reds to greens. We are back to leadership again. The colors themselves are probably not to blame.
  • Five questions to test measured KPI results:
  • Did delivered results contribute toward the strategic objectives? If we consider what the KPI was unable to pick up, how does it look? There is normally a lot of hindsight information available. The answer might confirm what the KPI indicated, or reveal a more positive or negative picture.
  • How ambitious were the targets? Imagine two teams. One stretched and set themselves an ambitious target, but just missed. The other lowballed and negotiated and was able to get away with a much lower target, which they hit. We shouldn’t punish the first team and reward the second.
  • Are there changes in assumptions that should be taken into account? Was there significant tailwind or headwind that had nothing to do with performance? Was there an earthquake in Japan, making it all more difficult? Was there a competitor going bankrupt, making that sales target a piece of cake?
  • Were agreed or necessary actions taken? Were actions continuously established and executed as needed?
  • Are the results sustainable?
  • Or has there been sub-optimization, or were shortcuts taken, in order to hit the target? The intention of these questions is not to create a long list of excuses for not delivering. The purpose is to understand relevant background information and then conclude how much of it should be taken into account. It works both ways; it can just as well downgrade measured results.
  • The holistic evaluation is about using measured results as a starting point for revealing the true underlying performance. As we also discussed earlier, combining development, reward, and legal documentation in one process is problematic due to the conflicting purposes, especially between the first two.
  • Feedback and development should not be an annual stunt and could be more peer-based. Colleagues often have a better picture of a person’s performance than the manager has. Annual bonuses could be replaced or supplemented with spot bonuses, which are not dangling “do this/get that” carrots. Base pay adjustments could still be an annual exercise but decoupled from feedback and development.
  • It is, however, important to remember that we have neutralized some of the negative bonus effects. We have broken the fixed performance contract, the mechanical link between target and reward, by using relative KPIs where possible and by introducing the holistic performance evaluation. There is also the collective bonus scheme for all employees, based on how the company is performing against competition. The maximum bonus potential here is 10 percent. In addition, there is a very popular share savings program, where all employees can buy shares for up to 5 percent of their base salary each year and receive one free share for each one bought. Shares must be kept for a minimum of two years.
  • One source of uncertainty is definition uncertainty: how well do the chosen KPIs actually describe performance?
  • What we proposed was to introduce Dynamic Forecasting and also abandon annual versions of Ambition to Action in favor of a more dynamic and event-driven process.
  • “The world stops December 31.” One consequence is “forecasting against the wall,” or accordion forecasting as it deserves to be called.
  • Many companies going Beyond Budgeting solve this inconsistency by introducing rolling forecasting. The forecast is typically updated every quarter, and always with the same time horizon of, for instance, five or six quarters. This is definitely much better than accordion forecasting.
  • The solution became Dynamic Forecasting, with no fixed and predefined frequency or time horizon. Units update their forecasts when events occur or new information becomes available that they deem important enough to justify an update (external forecasting).
  • Dynamic forecasting does not necessarily mean more often; it means at the right time. For some, it could actually mean less often. Another benefit is a more even workload, although rolling forecasting also would have helped.
  • But why should we force all those with much shorter horizons to fill the outer buckets of a long forecasting horizon with data they have no need for themselves? We therefore encourage the levels that need the longer horizon to fill the gap themselves with more generic numbers, using their own knowledge of the business.
  • “What if we organized ourselves around business cycles instead of calendar cycles in the rest of the Ambition to Action process as well, not only in forecasting?”
  • Our strategy process was already quite continuous and issue driven. Strategic objectives can now be updated as needed, when strategy changes so much that new or revised objectives are required.
  • KPIs can be replaced at any time if strategic objectives change, or if we simply find better ones.
  • Even KPI targets can be changed if they have lost their meaning by becoming impossible to reach. Such targets don’t work. They don’t motivate and inspire. They have only one function left: punishment. It could also be the other way around; the target has become too low with no stretch whatsoever. We already had the “target review”; this was about strengthening this mechanism.
  • The target horizon can vary, depending on the type of business and what we aim to achieve. We want more natural target deadlines, driven by urgency and complexity. The more relative targets we use, the less need there will be for annual targets. “First quarter,” “above average,” and similar targets do not need to be reset every year. Actions were already meant to be continuously updated, but more dynamic strategic objectives and KPIs now make this even more obvious and natural.
  • Forecasting is more continuous and event driven, as described above.
  • Performance evaluation in People@Statoil is still done on an annual cycle, but it is now easier to change team and individual goals, as described later.
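To make the “forward looking” KPI status rule above a bit more concrete, here is a minimal sketch (my own illustration, not part of the original notes): a KPI color is derived by comparing a forecast with a target. The 5% tolerance band used for “yellow” is purely a hypothetical assumption.

    # Minimal illustration: KPI status derived from forecast vs. target.
    # Assumes the KPI is "better when higher" (e.g., profit); the 5% band
    # below the target that yields "yellow" is an arbitrary choice.
    def kpi_status(forecast: float, target: float, tolerance: float = 0.05) -> str:
        if forecast >= target:
            return "green"
        if forecast >= target * (1 - tolerance):
            return "yellow"
        return "red"

    print(kpi_status(forecast=950, target=900))  # green: forecast beats the target
    print(kpi_status(forecast=870, target=900))  # yellow: within 5% below the target
    print(kpi_status(forecast=700, target=900))  # red: actions needed to get back on track

The point is not the thresholds themselves but the reference point: status is driven by where we expect to end up, not by how actuals compare to a year-to-date budget line.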

July 7-8: Certified Scrum Product Owner Course with ScrumInc

On July 7th and 8th, in New York, a Certified Scrum Product Owner course was delivered by two co-trainers: Avi Schneier, representing Jeff Sutherland’s company ScrumInc, and Robin Dymond, CST, from Innovel.

cspo_6

The class (about the size of two scrum teams) was engaged in heated discussions, practical exercises, and games. I had the honor of attending and participating in the course and was able to make a series of observations. Below is the list of points that either strongly resonated with me, because they are already in line with my personal views, or that I plan to leverage to enrich my personal training methods and coaching techniques:

  • The Scrum Guide: Scrum Values (courage, commitment, focus, respect, openness) – the latest, very important Scrum Guide amendment
  • Management-Leadership: emphasis on management acting in a Servant-Leadership, not Command-and-Control, capacity. Exposing the most common problems with mid- and first-line management
  • Finding a good Product Owner: common challenges and good vs. bad candidates
  • Brooks’ Law: fallacious beliefs about how to bring late projects back on track
    • The N(N-1)/2 communication-channels formula (see the short sketch right after this list)
    • Communication saturation and “heat of communication” gone wild
  • Story Mapping: purpose, techniques, strategic advantages
    • Story Slicing
    • Story dependency discovery
  • “What to build first?”: Value vs. Effort conversation
  • Sprint Interrupts and Emergency Procedures: how to deal with sub-optimal Scrum situations
    • Use of Buffers for Emerging Requirements and Bugs and Customer Feedback
    • Individual/Team Capacity Management
  • Scrumming the Scrum: continuous, relentless improvement of Scrum itself
  • Definition of Ready vs. Definition of Done: importance of understanding and agreeing to what is being delivered (and what is not!)
  • Gerald Weinberg’s findings on the impact of task switching on productivity and throughput: a great practical in-class game
  • “Team Happiness is a Leading Indicator of Performance”
    • Measuring Individual/Team Happiness Factor vs. Velocity
    • Using Happiness Metrics as an Analytic Tool
  • Agile Strategy Objectives: Convergent & Divergent design vs. Process Predictability & Adaptability
  • Making story discovery real with Customer Personas
  • “Why is it important for Product Owner to understand principles of Story Sizing and Estimation?”
  • Importance of Business Value estimation in strategic planning (frequently forgotten by POs concept)
  • Creating Executive Action Team – provide support and remove impediments
  • Breaking the Iron Triangle of conventional Project Management
  • Release Management: burn-up/down charts as means of communicating status
  • Agile Metrics/Dashboards – why is it important for Senior Management to rely on empirical data produced by teams to measure progress
  • Three Common Approaches to Release Planning: Deadline-based, Regular-Departure, Value-Based
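A quick side note on the N(N-1)/2 formula mentioned above: it counts the pairwise communication channels in a team of N people, which is the arithmetic behind Brooks’ Law, since adding people to a late project adds channels faster than it adds capacity. A tiny sketch (my own, not course material):

    # Pairwise communication channels in a team of N people: N(N-1)/2
    def channels(n: int) -> int:
        return n * (n - 1) // 2

    for n in (3, 5, 7, 9):
        print(f"{n} people -> {channels(n)} channels")
    # 3 -> 3, 5 -> 10, 7 -> 21, 9 -> 36: channels grow roughly quadratically,
    # which is why a late project rarely speeds up when more people are added.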

It was also a great experience to observe (and participate in) a daily scrum call held by ScrumInc folks during the lunch breaks. Jeff Sutherland and his entire team practice what they preach: every day, around the same time (noon), they meet virtually, from multiple locations, to debrief each other on the progress of their sprint-in-flight. Below is an illustration (a collage of different locations) of how they do it (from Avi’s laptop):

Daily Scrum With Jeff Sutherland and ScrumInc Team

daily_scrum

My Kodak moment with Avi (center) and Robin (right):

avi_robin_gene

 

Additional Kodak moments from the course:

cspo_7

cspo_5

cspo_2

cspo_1

 

Managing Performance by Extrinsic “Motivation”

“The idea of a merit rating is alluring. The sound of the words captivates the imagination: pay for what you get; get what you pay for; motivate people to do their best, for their own good. The effect is exactly the opposite of what the words promise.”

-W. Edwards Deming, “Out of the Crisis”


This article took me almost 10 years to write… It has been a long journey for me. As an organizational and agile coach, I base my views not merely on feelings and emotions but on scientific evidence, research, literature reviews, and analysis of work by other credible sources that I have been collecting over the years. And, of course, on continuous assessment of my personal experience.

So, I wanted to start this discussion with the Wikipedia definition of Performance Appraisal:

A performance appraisal (PA), also referred to as a performance review, performance evaluation,[1] (career) development discussion,[2] or employee appraisal[3] is a method by which the job performance of an employee is documented and evaluated. Performance appraisals are a part of career development and consist of regular reviews of employee performance within organizations.”

Interestingly enough, while all three terms are included in the same definition (“review”, “evaluation”, “appraisal”), in practice companies predominantly use the term “review” to describe PA, as it implies less scrutiny of, and preconception toward, an employee. But does using less abrasive terminology change the essence of the process?

A typical PA process includes an employee setting individual career goals that should, presumably, be her own, but that nevertheless must be in line with organizational/departmental goals, which are usually set by senior management and then cascaded down to line management. Throughout the year, the employee is expected to steer herself toward these pre-set goals while performing her day-to-day job responsibilities.

Every company that supports a PA process has a scoring system (variations exist) to rank employees against other employees, based on the score each employee earns for her yearly accomplishments (goals set vs. goals achieved). Some organizations offer a mid-year (quarterly, at best) checkpoint, when the employee, together with her line manager, reviews how she is performing with respect to the goals she originally set. Practically no company handles PA as an actively managed, iterative agile process: there is typically one mid-year checkpoint and an end-of-year final decision.

For most companies, the whole PA process typically serves the following three main purposes:
1. To identify low-performing employees who are potential candidates for downsizing (or for being kept where they are)
2. To identify high-performing employees who are potential candidates for promotion and compensation increases
3. To decide how discretionary incentives (bonuses) should be distributed among employees

While on the surface PA still appears to be an effective way to ensure the quality of employees and to provide benefits to an organization, under the surface this process presents real challenges. These challenges become more apparent in organizations that attempt to adopt a more agile culture, since agile environments expose systemic organizational dysfunctions much more clearly.

But before we dive deeper into the discussion, let us first briefly refer to some credible research and studies that exist today:

In his book “Out of the Crisis”, originally published in 1982, W. Edwards Deming discusses the Seven Deadly Diseases of Management and refers to individual performance reviews and performance evaluation as Disease #3. Deming’s philosophy of transformational management addresses the seriousness of the barriers that management faces today in improving effectiveness and striving for continual improvement. Deming argues that by trying to evaluate and measure workers with the same yardstick, managers cause more harm than good to individuals and to companies.

In their book “Abolishing Performance Appraisals: Why They Backfire and What to Do Instead”, Tom Coens and Mary Jenkins offer specific suggestions on how to replace performance appraisals with a more effective system that emphasizes teamwork and empowerment. Coens and Jenkins discuss new alternatives that produce better results for both managers and employees.

In his Forbes article “Eliminating Performance Appraisals”, Edward E. Lawler III, a distinguished professor of business at the University of Southern California, argues that organizations should stop doing performance appraisals. Professor Lawler states that performance appraisals frequently do more damage than good, with the damage ranging from wasted time (least troublesome) to alienating employees and creating conflicts with their supervisors (most troublesome).

Garold Markle, an author, executive consultant, and speaker, leverages his studies and experience with systems theory, along with real-life examples, to explain why employees and managers alike have come to view the “ubiquitous performance evaluation as industry’s poorest performing, most ineffective, and least efficient personnel practice”. In his book “Catalytic Coaching: The End of the Performance Review”, Markle provides an innovative way to measure the ineffectiveness and inefficiency of performance evaluations and then introduces his catalytic coaching to replace them. His statement is awakening: “People hate performance reviews”.

In his book “Drive”, Daniel Pink offers a paradigm-shattering view of what truly motivates people in their lives. Pink draws on four decades of scientific research on human motivation to expose a mismatch “between what science knows and what business does”. Pink challenges the mistaken belief of many that people doing intellectual work will demonstrate higher performance when incentivized monetarily. Based on Pink’s research, it becomes clear that individual performance evaluations and individual appraisals that are linked to monetary rewards are not an effective way to steer individuals to become more efficient and productive. Therefore, they should be abolished.

Finally, in his book “Implementing Beyond Budgeting: Unlocking the Performance Potential”, Bjarte Bogsnes, who has had a long career in HR and budgeting, unveils the ineffectiveness of the conventional budgeting processes that so many companies still follow today. Bjarte describes the common fallacies associated with “accordion” or “against the wall” budgeting, done under the assumption that “…the world will end on December 31st…”. By offering many real-life examples and case studies of companies that have instituted alternative budgeting approaches, Bjarte forces his readers to fundamentally shift their mindset away from some outlived “de facto” concepts. For example, one of Bjarte’s recommendations is to decouple what has been mistakenly lumped together for years (Targets, Forecasts, and Resources) and treat each one as an independent system variable. The connection is astonishing.

On many occasions in his book, Bjarte connects the dots between the conventional budgeting process and the conventional performance management process, both of which harmfully feed off one another.

And the list goes on….

So now, let’s take a closer look at the problem at hand, with some specific examples:

 

Fabricating Goals to Game the System

Do the goals that employees officially set for themselves (in a system of record) truly reflect their genuine personal goals?

It is not uncommon that real personal goals are risky and challenging to achieve, or may take longer than initially expected. Some other goals may be situational or opportunistic: they may change as a situation changes or an unforeseen opportunity presents itself (job market trends, other job opportunities, personal life). People want the freedom and flexibility to adjust their goals to optimize their personal benefits, and this is human nature. There is no real personal benefit to an individual in “setting in stone” her personal development goals at year-start and then being locked to them at year-end, as if not meeting those goals equates to a penalty. In general, in order to set her real goals, a person needs to know that it is safe to actively manage them along the way and, if needed, to safely change and/or fail them, without fearing negative consequences.

But is there any safety in PA processes if job security, the ability to advance one’s career, and the ability to collect fair compensation are at risk? If there is no personal safety, the exercise of setting personal goals becomes nothing but a routine of faking objectives that are “definitely achievable”. People are forced into system-gaming to minimize the risk of being penalized by their management if goals are not met. Setting individual goals becomes just a formality that brings no true value to an employee.

The process of individual performance reviews becomes even less meaningful if people work in small teams, where swarming (working together on the same task) and collective ownership is important, while joint delivery is expected. In cases such as these, people are forced into unhealthy competition with each other over goals, trying to privatize what should be owned and worked on collectively.
Another challenge with evaluating employees’ individual career goals is that, in pursuit of personal goals, people frequently “drop the ball” and pay less attention to common goals. Again, this dysfunction becomes much more vivid in “going-agile” environments, where agile frameworks (e.g. Scrum, Kanban, LeSS) de-emphasize individual ownership and reinforce the importance of collective ownership. Often, close to mid-year and end-of-year performance reviews, collaboration and mutual support among team members worsen, as silos get created and everyone starts to think about their own goals at the expense of shared goals. This translates into a productivity drop: swarming, velocity and throughput go down; cycle time goes up, queues grow and handovers take longer.

So, let’s take a look at a few hypothetical examples that are based on real-life scenarios:

Example 1:

Jane is an employee of a large insurance company. She is asked to enter her personal goals into the company’s system of record: things that she intends to achieve throughout the year. Jane is smart, and in order to avoid any unwarranted risk where her personal success depends on the success of others, she creates goals that are free of dependencies. Jane creates a set of personal goals that other group members do not know about and don’t care about. Her line manager, John, also discourages Jane from sharing such information.

However, Jane does not work alone.  Her day-to-day work is tightly coupled to work of other people in her group: Jim, Jeff, Jill, Joe and Julie.

Jane really values teamwork. She also feels that by closely working with her group members, by swarming and sharing day-to-day activities, she can accomplish a lot more than if she worked by herself. This is where Jane decides to put her full focus: on teamwork. She does not feel that creating an additional set of personal goals can add real value to her professional growth. But Jane needs to “feed the beast”: she needs to provide her line manager with a list of “achievable” bullets that the latter can measure. At the same time, Jane does not want to create a conflict with her colleagues by diluting her focus on shared goals and shifting it to personal goals. So, what does Jane do? She fabricates her personal goals: “quick kills” and “low-hanging fruit”, something that she can easily claim as her “achievements” without jeopardizing the common interests of her team. Jane is forced to “game” the system to minimize harm to herself and her team.

In his book “Tribal Leadership”, David Logan describes five tribal stages of societal evolution. According to his research, corporate cultures typically oscillate between Stage 3 (“I am great and you are not”) and Stage 4 (“We are great and they are not”), with agile organizations trending more towards Stage 4. When individuals are motivated by force (a.k.a. “manipulated”) to think more about individual performance than about collective performance, they mentally descend to Tribal Stage 3 and, as a result, drag their organization down to this lower stage. It is very important for organizations and their senior leaders to understand that motivation is one of the most important factors driving the evolution of corporate culture.

Note: To understand how Motivation Evolution (defined by Daniel Pink in “Drive”) relates to Tribal Evolution (defined by David Logan in “Tribal Leadership”), please refer to this tool.

So, clearly, in the example above, Jane’s mindset is at Stage 4, yet the system pushes her down toward Stage 3, where she has to “play an unethical game”.

Unhealthy Competition, Rivalry and Jealousy

Let’s face it: overemphasizing individual performance evaluations and allowing them to affect job security, promotions and compensation of individuals does not come free of charge to organizations. Organizations pay, and they pay dearly. Bad norms and processes come at the expense of lowered collaboration, unwillingness to share knowledge and provide peer-to-peer support, and increased selfishness and self-centric behaviors. For individuals who are encouraged to work and produce collectively (e.g. Scrum or Kanban teams), unfair performance evaluations frequently result in jealousy and feelings of unfair treatment. These dysfunctions become more frequent around mid-year and end-of-year reviews. PAs have seasonal adverse effects on individuals’ ability to focus on work and, as a result, prevent them from producing high-quality products and focusing on satisfying customers.

It is worth mentioning, ironically, that when dysfunctions are uncovered, it is agile that becomes the target of blame. But agile is hardly at fault here, as it only provides transparency into, and a reflection of, already existing deep systemic dysfunctions.

Example 2:

Jane works alongside Jim, Jeff, Jill, Joe and Julie. All of them are smart, self-motivated and talented technical experts who cumulatively have more than 70 years of software development experience. Their work is intense: there are lots of deliverables and the timeframes are rigid. The group has served the same client for a number of years and, so far, the client is happy. The work this team performs requires a lot of collaboration, collective thinking and brainstorming, teaching and learning from each other and, of course, collective delivery.

But then comes a mid-year review period and Jill notices that Jeff is not as supportive of her as he was at the beginning of the year.  Jeff becomes less responsive to Jill’s requests, he does not share his knowledge as readily as he used to; he does not give advice. Tasks that used to be handled collectively by Jill and Jeff are now illogically split by Jeff as he tries to focus only on what he assigns to himself.

There is also a noticeable change in Julie’s behavior. Julie becomes very eager to be the one who stands in front of the client and presents the deliverables of the whole team. This responsibility used to be rotated from one person to another, with no one caring too much about being a “spokesman”. But as the mid-year review came, Julie clearly stepped up to be the main, customer-facing presenter. Julie also tries to make it very obvious to John (the group’s manager) that it is she, Julie, who presents to the customer. Julie wants to be viewed as the “centerpiece” and tries to capture most of the spotlight.

Jim’s contribution to the group’s efforts has also decreased.  Early in the year, Jim used to be a very active participant at the team’s brainstorming meetings and workshops.  As mid-year arrived, Jim started spending a significant share of his time working on items that are not related to the team’s shared work; his focus has noticeably shifted to personal work that he chooses not to discuss with others.

Since the beginning of the year, it has been customary for the group to go out for drinks at a local bar every Friday. But at mid-year this tradition is barely followed. There seems to be less desire for the group to socialize outside work, and everyone finds an excuse not to make it. The group’s synergy has gone down noticeably. What used to be a well-jelled team of great collective performers has turned into a group of self-centered individual achievers who want to be acknowledged for their heroics.

“Scripted” Ranking to Force-Fit into Bell-Shaped Curve
Typically, when an organization ranks its employees based on individual performance, a bell-shaped curve is produced, where samples (ranked employees) are distributed around the median: the majority of samples are centered (“center mass”), representing average-performing employees, the left tail represents low performers, and the right tail represents high performers (over-achievers). Statistically, a bell-shaped (approximately normal) curve is what a large sample tends to produce. The symmetrical shape of the curve (“bell”), however, can be influenced by three additional factors (forces):

  • A platykurtic (flatter) distribution lowers the number of samples around the median (average performers) and increases the number further from it (under-performers and over-achievers), equally on both sides. The curve remains symmetrical.
  • A leptokurtic (more peaked) distribution increases the number of samples around the median (average performers) and lowers the number further from it. The curve remains symmetrical.
  • A skewed (uneven) distribution of samples on the left and right sides of the median typically increases the number of samples in the left tail (under-performers) or the right tail (over-achievers), disturbing the evenness of the distribution around the median (average performers). The curve loses its symmetry.

This statistical distribution is tightly coupled to the actions that management takes towards its employees at year-end. However, the shape of the bell curve does not “drive” (as might be expected) managerial year-end decisions. On the contrary, managerial decisions shape the curve.

Managerial decisions are driven by the financial condition of an organization as well as by other strategic organizational plans. When managers review their employees, they have to account for such factors to make sure that the bell-shaped curve does not exceed the organization’s capacity for promoting employees and giving out money. Effectively, the entire process of performance assessment becomes a retro-fitting exercise that shapes the bell curve based on organizational capabilities. This makes the process practically staged, or “scripted”. What further adds to the irony of this situation is that at times an employee may report to a manager who does not even have sufficient skills to perform an objective assessment of the employee’s performance. For example, an architect or a software engineer who reports to a non-technical manager (e.g. PMO) has a much lower chance to objectively discuss her work accomplishments and receive objective feedback during PA.
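To illustrate the retro-fitting described above, here is a minimal sketch (hypothetical names and numbers, not drawn from any real company) of force-fitting a set of employee scores into predefined quota buckets, regardless of how close the actual scores are:

    # Hypothetical illustration of "scripted" forced ranking: employees are
    # sorted by score and pushed into predefined quota buckets, so the quota,
    # not the performance, ends up shaping the curve.
    def forced_ranking(scores, quotas):
        ranked = sorted(scores, key=scores.get, reverse=True)
        labels, start = {}, 0
        for bucket, share in quotas.items():
            count = round(share * len(ranked))
            for name in ranked[start:start + count]:
                labels[name] = bucket
            start += count
        for name in ranked[start:]:   # anyone left over due to rounding
            labels[name] = bucket
        return labels

    team = {"Jane": 92, "Jim": 90, "Jeff": 89, "Jill": 88, "Joe": 87, "Julie": 86}
    quotas = {"top": 0.15, "average": 0.70, "low": 0.15}   # the organizational "script"
    print(forced_ranking(team, quotas))
    # Even though all six scores are nearly identical, someone must be labeled
    # "top" and someone "low", because the quota, not performance, shapes the curve.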

There is a need for an alternative approach that will help deal with overly complex, overstaffed organizations that spend so much time and energy trying to price-tag their employees.

Here is an idea: how about more thorough background and reference checks, more rigorous interviewing processes that involve practical (hands-on) skills assessments, try-and-buy periods before hiring an individual full time, or some other, more objective methods?

Instead of attracting cohorts of workers of questionable quality and then dealing with inevitable force reduction, or worrying (or pretending to worry!) about employees maintaining and/or improving their quality, hiring managers should strive to acquire and retain smaller numbers of higher-quality workers: self-motivated, enthusiastic professionals with a proven track record and clearly defined career goals… AND be willing to pay them higher compensation. This may require offering more competitive base salaries and abolishing manipulative discretionary incentives: removing money from the table makes intellectual workers think more about work and less about getting paid. This approach would also ensure that the number of employees is kept to a minimum (which also lowers overhead, reduces complexity, and de-scales the organization) while maximizing quality. Such an alternative should render performance reviews much less important, or even obsolete, as there will be no need to reduce employees at year-end or thin-slice discretionary incentives among too many candidates.

 

Example 3:

John is a line manager for the development group. John has great organizational skills, he is well spoken, and he articulates his wishes clearly. But John has never developed software products; he is not technical. John knows that all of his team members are “good guys”: knowledgeable, enthusiastic, and mutually supportive. But when the team works together, John really cannot validate the quality of the work they produce. (Luckily, there is one reliable measurement of the team’s success – customer satisfaction.) The only thing that John can validate is the team’s vibe and spirit. But even when John notices disagreements or temporary misalignment among the team members, it is impossible for him to offer constructive advice or understand a root cause. What is even more challenging and frustrating for John is that, due to the nature of the team’s work (closely collaborative, collectively shared), he cannot objectively assess the individual performance of every team member. In conversations with John, the team members rarely use the word “I”; it is typically “we”.

John is in a tough position.  How can he decide who the best performer on his team is and who is not?  John needs to be able to ‘rank’ his people and based on ranking, decide who gets promoted and paid more at the end of the year.  Deep at heart, John feels that everyone deserves a promotion and monetary “thanks” but he cannot satisfy everyone.  John’s management informs him that only one person from his team can get promoted and the amount of discretionary money allocated to his group is limited; in fact, it is less than last year.

Around mid-year, John begins evaluating how each of his team members has performed to date. John does this based on the “achievable” goals that were set by each employee at year-start. John’s inability to truly understand the nature of his people’s technical work adds to his challenge… and frustration. He cannot objectively evaluate his employees, let alone rank them against each other.

Meanwhile, John’s management expects from him a ranking that will fit into the bigger picture of an overarching ranking model for a given year. It means that even if John feels that all of his team members are outstanding performers, he will not be able to recognize this officially. At most, he will be able to recognize that they have achieved their set goals. Further, based on what John learns from his management, he has to commit an even less noble act. Learning that a certain percentage of the company’s workforce has to be reduced, John has to identify people from his group for future downsizing. It is clear to John that people who are outstanding performers are not to be downsized (potential HR “cases”). Therefore, John decides to force-fit some of his team members into a bell-shaped curve, away from the right-sided tail, towards the middle (average performers) and the left-sided tail (underperformers). John uses the organizational “script” to play his own game. What John does is a wasteful act that is full of subjectivity and ambiguity. The process is also destructive to the team’s cohesiveness and morale. John is at risk of losing some good people sooner than he could imagine.

Truth be told, the natural “knee-jerk” reaction of any employee, when she is told by someone why she is not “perfect” and what she needs to do to improve, is to become defensive. The biggest reason she would become defensive is her resentment that someone will subjectively “evaluate” her and decide how much she is worth. Although an individual may keep her feelings and emotions concealed under the umbrella of political correctness and diplomacy, emotional harm is being done.

 

Generating Waste
Rarely do companies consciously analyze how much time and effort is spent on the performance evaluation process itself: by employees, by line management, by senior management and by HR. Unfortunately, for large, enterprise-size companies, these expenditures are already “budgeted for”. From the standpoint of lean thinking, today’s typical PA process conducted by line managers is a clear example of organizational overhead that slows cultural evolution and prevents companies from maturing to Logan’s Tribal Stage 4.

Example 4:

All members of the team (Jane, Jim, Jeff, Jill, Joe and Julie) spend a lot of time during the year writing and reviewing their personal goals. John spends a lot of time reviewing and discussing the personal goals of each team member. John also spends a significant portion of his time with his line management, discussing achievements and intended rankings for each of his subordinates. Overall, the amount of time this entire group of people spends on the PA process creates a lot of unnecessary procedural overhead and over-processing. Annually, PA processes cost companies hundreds of thousands of dollars in time wasted by employees at many organizational levels.
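A rough back-of-the-envelope sketch of this overhead (every number below is a hypothetical assumption, used only to show the order of magnitude):

    # Hypothetical estimate of annual PA overhead; all inputs are assumptions.
    employees = 500            # people being appraised
    hours_per_employee = 8     # writing goals, self-reviews, review meetings
    managers = 60              # line managers doing evaluations and calibration
    hours_per_manager = 40     # reviews, rankings, calibration meetings with HR
    loaded_hourly_rate = 75    # average fully loaded cost per hour, in dollars

    total_hours = employees * hours_per_employee + managers * hours_per_manager
    print(f"Hours spent on PA per year: {total_hours:,}")                     # 6,400
    print(f"Approximate annual cost: ${total_hours * loaded_hourly_rate:,}")  # $480,000

Even with these modest assumptions, the process consumes thousands of hours and roughly half a million dollars a year, before counting the harder-to-measure costs of lost focus and damaged morale.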

Alternative Approaches to Performance Reviews

Are there any working solutions to this problem? Is it possible to ensure that an organization’s behavior toward its employees (e.g. motivating and incentivizing) is more in line with what is best for organizational prosperity, customer satisfaction, waste reduction, the creation of a more pleasant work environment, and a Kaizen culture? Is there a way to depart from archaic, 100+-year-old Taylorist management principles, Skinnerian behaviorism, and outdated norms and behaviors, without causing too much stress to the organizational ecosystem, perhaps by offering alternative, less harmful solutions?

Let’s be clear on something: ideally, the end goal of any organization should be to abolish individual performance appraisals completely and substitute them with other, more effective methods of individual motivation, at least for intellectual workers who are expected to work in team settings.

But for now, let’s look at some possible alternatives that can help companies gradually depart from individual performance appraisals, towards less harmful approaches.

Here are some potential “second-best to complete abolishment” alternatives to the current PA and incentive-allocation process:

  • Instead of prizing individuals, prize teams, and do so based on what an entire team has produced, not a single individual. If individuals must work in tight collaboration with each other and are expected to cross-pollinate knowledge and domain expertise, what is the point of stressing the individual performance and superior excellence of each person? Let a team internally decide who is elevating them above the water and who is dragging them down to the bottom. Individual underperformers will be quickly identified in such settings, and a team will either expel them or help them improve. Also, please note that prizing a team (monetarily, with a team bonus) does not have to be coupled to a “performance assessment”. This could be done simply as a profit-sharing model between business and technology: if the work of technology has noticeably improved business profits, why can’t business say “thank you” to technology for its hard work in the form of shared profits?
  • Take the singleton decision-making power of defining what a team deserves (in terms of a monetary prize) out of line managers’ hands and spread it across multiple parties: base it on customer/stakeholder satisfaction, senior management satisfaction, third-party feedback, etc. But again, judge teams, not individuals (important!).
  • Make monetary incentive allocation more objective and formula-driven, rather than subjective and based on a single opinion. Here are a few suggested formulas for doing this (other options exist; a small sketch of the first two appears right after this list):
    1. Monetary incentives are equally allocated among all employees whose work is tightly coupled to a shared goal, and where collective ownership is expected
    2. Monetary incentives are allocated in proportion to the base salary of each employee: decide on an employee’s “cost basis” when she is hired (based on expertise, experience, etc.) and then fall back to option 1 above
    3. Monetary incentives are allocated based on team’s internal voting, done confidentially (incremental, 360 review by all team members).
  • Please visit the following links for a graphic illustration of the conventional and alternative incentive allocation schemas.
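For illustration, here is a minimal sketch of the first two allocation options above (an equal split, and a split proportional to base salary); the team, salaries, and bonus pool are hypothetical numbers:

    # Minimal sketch of team-level incentive allocation (options 1 and 2 above).
    # The bonus pool and salaries are hypothetical, for illustration only.
    def equal_split(pool, members):
        share = pool / len(members)
        return {name: round(share, 2) for name in members}

    def proportional_to_salary(pool, salaries):
        total = sum(salaries.values())
        return {name: round(pool * s / total, 2) for name, s in salaries.items()}

    salaries = {"Jane": 120_000, "Jim": 100_000, "Jeff": 100_000, "Jill": 80_000}
    pool = 40_000  # team bonus pool

    print(equal_split(pool, list(salaries)))        # everyone gets 10,000
    print(proportional_to_salary(pool, salaries))   # shares follow base salary

Either way, the allocation is applied to the team as a whole and is driven by a simple, transparent rule rather than by a single manager’s subjective ranking.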

Note: Consider the above options as temporary solutions, second-best to completely abolishing discretionary monetary incentives for intellectual workers who work in team settings. Although team-level incentives are less dangerous than individual incentives, they may still bring harm: they make people think about getting paid, not about doing work. There is still some risk that entire teams may engage in system-gaming, although the chances of that would seem to be lower than with system-gaming by individuals.

Ideally, for any kind of intellectual work, the topic of discretionary moneys should be removed from the table completely: people should be focused on doing work, not on how they can game the system to get a higher pay.

Conclusion

The famous quote from the book “Out of the Crisis” by W. Edwards Deming (originally published in 1982) summarizes this topic well:

“The idea of a merit rating is alluring. The sound of the words captivates the imagination: pay for what you get; get what you pay for; motivate people to do their best, for their own good. The effect is exactly the opposite of what the words promise.”

References

  • Deming, W. E. 1993. The New Economics for Industry, Government & Education. Cambridge: Massachusetts Institute of Technology Center for Advanced Engineering Study.
  • David Logan, John King, and Halee Fischer-Wright. 2008. Tribal Leadership: Leveraging Natural Groups to Build a Thriving Organization. New York: Collins
  • Tom Coens and Mary Jenkins. 2002. Abolishing Performance Appraisals: Why They Backfire and What to Do Instead
  • Daniel H. Pink. 2011. Drive: The Surprising Truth About What Motivates Us. Riverhead Books
  • Garold Markle. 2000. Catalytic Coaching: The End of the Performance Review.  Quorum Books
  • Edward E. Lawler III. 2014. Eliminating Performance Appraisals. Forbes. https://www.forbes.com/sites
  • Jeffrey Pfeffer and Robert I. Sutton. 2006. Hard Facts, Dangerous Half-Truths, and Total Nonsense: Profiting from Evidence-Based Management
  • Alfie Kohn. 1993. Punished by Rewards
  • Samuel A. Culbert. 2010. Get Rid of the Performance Review
  • Adobe Systems set to scrap annual appraisals, to rely on regular feedback to reward staff
  • Microsoft’s Downfall: Inside the Executive E-mails and Cannibalistic Culture That Felled a Tech Giant
  • Get Rid of the Performance Review!

SAFe: Market Share Increase. Rapid Growth. What is the recipe?


Some time ago, there was a webinar recorded by VersionOne: “How to use SAFe® to Deliver Value at Enterprise Scale” (a Q&A discussion with Dean Leffingwell). If you fast-forward to about 23 minutes, 20 seconds into the recording, you will hear the following statement: “…We don’t typically mess with your organizational structure because that is a pretty big deal…”

This statement somewhat puzzled me. While the graphic representation of the SAFe framework certainly does not shy away from organizational complexity, I was still under the impression that organizational design improvements and simplification were included in SAFe teaching. To me, the ability to influence first-degree system variables, such as Organizational Structure, is critical. Without this ability, any attempt to improve organizational agility and system dynamics will be short-lived and limited. Even such important second-degree system variables as organizational culture, values, norms, behaviors, policies, and agile engineering practices usually bring limited results if the organizational structure remains unchanged.

…But regardless of this recent new learning, I admitted to myself that SAFe still remains a very successful (financially) and popular product that many organizations are willing to buy, unwrap, and install… Fast-forwarding…


 

Lately, there has been a lot of buzz in the agile arena about scaled agile frameworks. I just came back from the Global Scrum Gathering in Orlando, where I heard a lot of discussion about agility at scale and the various agile frameworks that companies use. Following the Orlando discussions, I have seen a wave of email exchanges and blog posts on the same topic, some of which involved seasoned organizational coaches and trainers. I have noticed that there has been a lot of focus on SAFe (Scaled Agile Framework): opinions, comments, attempts to compare it to other agile frameworks. There are two things, in particular, that struck me as odd:

  1. It seemed that some seasoned coaches and trainers don’t explicitly state their views. When I read indirect statements or views, I was left wondering how the person really felt about the subject.
  2. Among the blogs and other posts I saw, I was not able to find any discussions that covered the aspects of SAFe that were of particular interest to me.

But before I go any further, here is my personal disclaimer: I am neither a SAFe practitioner nor a SAFe trainer or coach. I have not attended a comprehensive SAFe course… However… I have studied and researched SAFe extensively on my own. And I do know some companies that have implemented SAFe (I have talked with some of their employees). And I do know a significant number of individuals who have been trained in SAFe. And I do know a handful of respected coaches who recommend SAFe.

Now, let me put “SAFe” topic to the side, for a moment, and shift gears to something else (we will all come back to SAFe in a minute):

I want to bring up a topic that has been a beaten-to-death horse for a while for everyone who understands agility: the topic of tooling.

When it comes to discussions of agile tools, more experienced agile coaches have a long arsenal of arguments to use with their clients and prospects to explain why ‘agile tools’ are not what matters most for being agile. Here are some classic examples:

  • 1st postulate of Agile Manifesto: “Individuals and interactions over processes and tools”
  • “A fool with a tool is still a fool”
  • “The best tool in Scrum is a whiteboard (or excel, at most)”
  • “Agile tool is not a right solution for your deep organizational problem“
  • “Never begin your agile education with tools. Always learn principles and concepts first”
  • “An agile tool is a poor substitute for collaboration that you may otherwise never have. If you start exchanging information through a tool, you will lose the benefit of a live discussion. If you absolutely must introduce a tool, do it later in the process, when people have gained a sufficient amount of knowledge and experience”
  • Etc, etc, etc…

We, as coaches, are never shy to express our strong views (sometimes, overly strong) that tools are NOT a good solution to organizational problems and NOT the best method (by far) to transform organizations.   And I am glad we are not shy about that.   This is why we are called Organizational Coaches – we look at organizations holistically.  For us, tooling is just a tiny fraction of a much bigger organization puzzle.

<SIDE NOTE ON>

But I still want to confess with regards to tooling, so here is another personal disclaimer: over the last decade, I have been around and have gained a lot of experience with tools like JIRA, Version One, Rally and others… I consider this a personal ‘hobby’, but I know how to decouple it from the daily work that I have to do as an organizational coach. Over the years, I have gotten to know some great software engineers who built the tools mentioned above. I could probably easily pass for an in-house “agile tool expert” (that is, if I decided to change my profession one day) and find a job that says something like this: “Looking for a strong agile tool expert to transform our organization to the next level. PMP certification is a huge plus.” Yes, sadly, there are many job specs out there that sound just like this 🙁 .

On a brighter side, I could probably also leverage my ‘hobby’ and look at any agile tool used by a team or a group of teams that claims “to do” agile, and in about 5 minutes find a handful of signs of serious systemic dysfunction (in the tool alone!). So, there is actually some practical use for my ‘hobby’. In any case, I think I have earned the right to say that I know very well what tools can and CANNOT do for you. And this is why I strongly stand with all the other coaches who use the arguments I listed above.

<SIDE NOTE OFF>

Now I would like to come back to the topic of SAFe and set the stage for my questions by stating the following:

High Market Penetration of SAFe:

First of all, let’s take a look at some relevant data recently published on InfoQ, with the original source being the Version One 10th Annual State of Agile Survey: while still a relatively new framework, SAFe has acquired a significant share of the marketplace (23%), while demonstrating the highest rate of growth: “…the largest increase from 19% in 2014 to 27% in 2015…”

 

My understanding of safety that SAFe brings:

I have heard various opinions about what went into the thinking behind the acronym “SAFe”: was it an intention to make it sound phonetically “safe”, or was it just coincidence that the words Scaled Agile Framework, beginning with “S”, “A” and “F”, made up SAFe? I don’t know, and I don’t want to speculate.

But let me share my understanding of what makes SAFe – safe:

  • SAFe does not seem to be threatening to first-line management. Thanks to its first two layers (Team/Program & Value Stream) and the abundance of processes and roles present in both, everyone can find a place to work. The probability of being misplaced or losing a job within SAFe is relatively low. If we all recall what happens when implementing basic Scrum, where teams are expected to become self-organized and self-managed, and where the role of Project Manager is not explicitly discussed, we (coaches, trainers) frequently have to answer the following question, usually coming from managers: “what now happens to my role?” And of course, there are ways to handle this question properly and give good options to those who ask. My point is that I don’t expect this question to be asked as frequently with the introduction of SAFe. Why? Because SAFe seems to be a good way to harbor many existing management roles (role security).
  • SAFe looks “homey” to senior management. The SAFe graphic is very rich in colors, objects, lines, layers and icons that represent roles, groups, departments and interactions. At a glance, SAFe appears to be a natural fit and a comfortable habitat for many existing organizational constructs. SAFe does not challenge or simplify existing organizational design; there are no hints to change or simplify reporting lines or to flatten layers (de-scaling). No need to have unpleasant conversations with employees (!). Senior managers who are confident that their organizations are well designed and don’t need any major repairs see SAFe as a safe way to try agility.
  • SAFe does NOT explicitly compete with other agile practices. SAFe uses them all. In fact, a cute yellow smiley squeeze-toy that many folks picked up in Orlando from the SAFe kiosk explicitly says: “SAFe embraces Scrum”. Indeed, at its multiple layers, the SAFe diagram mentions Scrum, Kanban, XP… and many roles, artifacts, ceremonies and iterations that support all these practices. And this, IMO, makes SAFe really safe, in a very special way: if Company X already uses, perhaps inconsistently, some agile practices, it is relatively safe, and actually convenient, for a SAFe consultant to come in and say something like this: “we can help you retain most (if not all) of what you have adopted so far, but it will be much better structured under the overarching umbrella of SAFe”.

 

My understanding of SAFe Partnerships and Strategic Goals:

Here, I am listing only the top few references that I found online. The list could be much longer if I spent more time searching. I personally have attended a handful of webinars where the concepts of SAFe were presented alongside the benefits of tools (by the companies that hosted the webinars).

Please, finish reading the post first and then come back to the links.

Golden Sponsorship by Consultancies (not specialized in Agile):

With TFS/VSTS:

Note: TFS/VSTS are Microsoft products.  Their design and the "logic" behind them resemble MSFT Project Plan 🙂…

With Rally:

With Jira:

With Version One:

Beware of the "Triple Taxation" Problem:

Just to be clear for those who may not be as familiar with these tools as I am (you don't have to share my hobbies 🙂 ): each one of these tools now has a complex "strategic layer" that sits on top of the tool's "tactical" layer (epics/stories, backlogs, sprints, releases, team views, agile boards, story/task boards, workflow management, etc.) and is used by Project, Program and Portfolio Management.  At some companies where I have consulted, each one of these layers usually has a manager (Project Manager, Program Manager, Portfolio Manager, respectively), someone who is responsible for data collection and status reporting, just as it was before (or without) the implementation of SAFe.  The tools' complexity offers a nice fit to an existing organizational structure.

<SIDE NOTE ON>

What is also not a surprise to anyone is that there are many large companies that own tens of thousands of licenses for the above-mentioned tools.  I have consulted at a number of such companies and have seen these tools treated as a "hallmark of organizational agility".  Please note that very frequently "best practices of use", even for agile tools, reside within departments like Control & Governance, PMO, and Centers of Excellence, where decisions about "what is best" are made in a vacuum and then pushed down onto organizational domains that are thousands of miles away.

<SIDE NOTE OFF>

Here is another safety aspect of SAFe:

SAFe is very safe for client-to-vendor relationships: it does NOT disrupt existing million-dollar (depending, of course, on company size) contracts and license agreements between client companies and tool vendors.  It should be pretty safe, IMO, for a SAFe consultant to come in and say something like this: "if you are using JIRA or Rally or Version One or any other tool that has a Portfolio Management layer in it, it will be very complementary to what we can do for you in terms of agile scaling".   I think that the links I have provided above suggest exactly that.

SAFe seems to be a great complement and strategic ally to some agile tooling companies that have gained a lot of market share of their own.  And it does not matter that JIRA, Version One, Rally and others may be competitors of each other: they all seem to be great partners of SAFe (I will not speculate on exclusivity of relationships, but based on the links above, there is probably none).

Now that I have brought to light some relevant market data, shared some personal views on what I consider the "safety factors" of SAFe, and given a perspective on some possible strategic alignments that may exist between SAFe and industry leaders in the world of agile tooling, I would like to ask the following two (2) questions:

  • First Question: Do you think that the market penetration of SAFe and its adoption success could be attributed to the personal safety of companies' managers, as I have described above?  Do you feel that 'role security' of first-level management in particular is a significant contributor to SAFe's adoption rate?  I stress this last point because the role of first-level manager is in super-abundance today at many companies.
  • Second Question: Do you think that the market penetration of SAFe and its adoption success could be attributed (at least in part) to its direct or indirect alignment with industry leaders that build agile tools?  Do you think that "SAFe + XYZ tool" produces a stronger compounded effect on organizations, in terms of SAFe adoption, than SAFe applied alone?

Related Publications about SAFe by Agile Manifesto Co-signers and others:

Also, as a reference, some experience reports about the Spotify “Model”:

From LeSS Toolbox: Causal Loop Diagrams to visualize System Dynamics

Introduction:

When it comes to scaling, there is a common misconception that "bigger always means better".  This misconception is also traceable to the agile arena, where companies look for ways to expand their agile practices beyond a single organizational domain (e.g. many teams, numerous departments, multiple lines of business, etc.).  Usually, it is existing (inherited) organizational complexity that becomes the main reason why companies look for complex, multi-tiered scaling solutions.  And of course, where there is demand, there will be supply: there are a number of frameworks out there that hand-hold companies into comfortably "embracing" their existing complexity without feeling too uncomfortable about their own internal dysfunctions.

However, not all scaling solutions are as "forgiving" 🙂.  There are some agile frameworks that intentionally expose and boldly challenge organizational deficiencies. One such framework is Large Scale Scrum (LeSS).  In order to set the stage for the rest of this discussion, I would like to summarize a few points about LeSS here.

I would also like to express my appreciation and acknowledgement to Craig Larman (one of the co-founders of LeSS) for helping me deepen and broaden my understanding of organizational design and improve the system thinking that I have been developing over the years.

 

Brief Overview of LeSS:

LeSS is very easy to understand.  I like to speak metaphorically, so in describing LeSS, I sometimes use an analogy with the legendary AK-47 assault rifle, which has the following well-known characteristics:

  • it has very few moving parts and, therefore, its internal friction is pretty low; there are also not too many small pieces that can jam or break
  • it is simple to disassemble, inspect and reassemble (inspection & adaptation)
  • it is very reliable and dependable under tough conditions (it rarely fails in action)
  • if necessary, it can be modified and "expanded", at low cost/low effort

But there is something else about LeSS that makes its analogy to a weapon (probably not just the AK) appropriate: it assaults organizational dysfunctions.

LeSS also has two important characteristics:

  1. It is very simple in design and fully rests on the core principles of basic Scrum (effectively, LeSS is the same Scrum as described in the Scrum Guide, but performed by multiple teams)
  2. LeSS teachings rest on the pillars of:
    1. Lean Thinking: "watching the baton, not the runner", visual management, cadence, time-boxing, managers as teachers, continuous improvement
    2. System Thinking: Weinberg-Brooks' Law, Queueing Theory, indirect benefits of managing batch size and cycle time, being customer-centric, explaining the differences between local and system optimization

Thanks to these two key characteristics, LeSS is a very powerful mechanism that helps see an organization systemically/holistically, while identifying and exposing (an analogy to a high-power rifle scope is suitable here) the pain points that need to be addressed.

As a framework, LeSS is lean and transparent. It does not have any "secret pockets" or "special compartments" where organizational problems can find safe haven. No dysfunctions escape the sharp focus of LeSS: ineffectively applied processes or tools, ill-defined roles and responsibilities, unhealthy elements of organizational culture and other outdated norms all get vividly exposed when using LeSS. Interestingly, while LeSS is a scaling framework that allows scaling up (rolling up) the efforts of multiple Scrum teams, it requires organizational de-scaling to be performed first.  The metaphor that I often use here is: "you can get more with LeSS".  To put it another way, in order to build up Scrum effectively, an organization must remove whatever extra/unnecessary "muda" (waste) it has already accumulated that gets in the way of scaling Scrum.  It is almost like this: LeSS prefers a thin but very strong foundational layer over a thick, convoluted but unstable one, with the latter usually being a characteristic of an orthodox, archaic organizational design.

Another metaphor that I use to describe LeSS is that of an organizational design mirror.  By adopting LeSS, an organization sees its own reflection and, depending on its strategic goals and appetite for change, decides on necessary improvements. Similar to a person who takes his personal fitness training seriously and uses a mirror for "course correction", an organization may use LeSS to decide if any further re-shaping or "trimming" is required to get to the next maturity level.

LeSS is also a great guide to technical excellence.  I have used LeSS teachings extensively to coach the importance of continuous integration, continuous delivery, clean code, unit testing, architecture & design, test automation as well as some other techniques that make agile development so great.  LeSS stresses that mature engineering practices are paramount for effective adoption of agile across multiple organizational domains, not just IT.

 

Discussion

So, how can an organization take advantage of both the simplicity of the LeSS construct, on one hand, and its deep systemic views, on the other, to improve its organizational agility beyond a single team? How can the principles of lean and system thinking, along with an understanding of 'beyond-first-order' system dynamics, be leveraged to implement true Scrum without reducing, minimizing or downplaying the importance of its core values and principles?

As an organizational and agile coach, and someone who has been using LeSS extensively in his daily coaching work, I frequently witness situations where companies have to deal with this serious dilemma.  Here, I want to share the magic "glue" that helps me bring my thoughts together and deliver them to my clients.  This "glue" is one of the most effective tools that I have discovered for myself inside the LeSS toolbox.  It is called the Causal Loop Diagram (CLD).

CLDs are a great way to graphically illustrate cause-and-effect relationships between various elements of an organizational ecosystem.  CLDs help me effectively uncover second- and third-order system dynamics that may not be as apparent to the naked eye as first-order dynamics.  CLDs help me brainstorm complex organizational puzzles and conduct deep analysis of system challenges.  Ultimately, I have found that CLDs are a great way to communicate ideas to my customers, particularly to senior leadership.

Here are some elements of CLDs that I use in my graphics (a minimal code sketch of this notation follows the list):

  • Goals – a high, overarching/strategic goal that needs to be achieved
  • Variables – system elements that affect/influence other system elements (other variables)
  • Causal links – arrows that connect two related variables
  • Opposite effects – an "O" annotation near an arrow; suggests that the effect of one variable on another is opposite to what might be expected
  • Delayed effects – a "||" annotation that interrupts a causal link (arrow); implies that the effect of one variable on another is delayed
  • Extreme effects – one variable has an extreme (beyond normal) effect on another variable; represented by a thick arrow
  • Constraints – a "C" annotation near an arrow; implies that there is a constraint on a variable
  • Quick-fix reactions – a "QF" annotation near an arrow; an action that brings about a short-term, lower-cost effect
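To make this notation more concrete, below is a minimal Python sketch (purely illustrative; the names CausalLink, CausalLoopDiagram and add are my own and are not part of LeSS or any tool) of how the same CLD elements could be captured as plain data before drawing a diagram:

from dataclasses import dataclass, field

# Link annotations, mirroring the list above:
#   "O"  = opposite effect, "||" = delayed effect, "C" = constraint,
#   "QF" = quick-fix reaction; "E" is my own shorthand for the thick "extreme effect" arrow.
@dataclass
class CausalLink:
    cause: str                 # upstream variable
    effect: str                # downstream variable
    annotations: tuple = ()    # e.g. ("O", "||")

@dataclass
class CausalLoopDiagram:
    goal: str                  # overarching/strategic goal
    links: list = field(default_factory=list)

    def add(self, cause, effect, *annotations):
        # Record one causal link (arrow) between two variables.
        self.links.append(CausalLink(cause, effect, tuple(annotations)))
        return self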

 

At this point, I would like to provide an example of using CLDs to visually illustrate second- and third-order dynamics between key system variables that, as I often see, cause harm and unrest to organizations: performance-driven, discretionary monetary incentives.

I would like to follow the process of interaction between system variables as they come into play with one another, and uncover the impact they have on the overall system.

Every year, a company (hypothetical Company X) has to distribute a large sum of money to many of its employees in the form of discretionary bonuses.  In order to make the decision-making process less subjective, the company ties it to employees' individual performance: reviews and appraisals.  People that have demonstrated better performance get more money; people that have demonstrated poorer performance get less (or nothing).  This requires that every employee gets evaluated by her line manager, usually twice a year, at which time the employee gets some rough idea about "how much she is worth as a resource".  This serves as a guide to how much discretionary money an employee might expect to get as a bonus.  While on its surface the process of performance evaluations and appraisals may seem more objective than a line manager simply deciding on his own, it is still very subjective, as the employee's opinion is disregarded when decisions are made.  Furthermore, the process is harmful and causes deterioration of individuals' morale and relationships on multiple fronts.  The undesirable effects and short-/long-term damage of performance evaluations and appraisals have been studied for years; lots of research and statistical data is available today.   If a reader is not well familiar with this topic or requires additional background information to deepen his understanding, he may refer to the following resources prior to proceeding with reading:

 

Moving along with this discussion, I would like to highlight the following three downstream "system variables" that are directly (first-order dynamics) impacted by individual performance reviews.  This type of system variable interaction is mainly observed among technology groups.  Once we understand the first-order dynamics, we shall proceed to some other downstream ("beyond first order") variables.

 

Employee Happiness Factor

Many research studies have proved that employees don't like to be appraised.  An appraisal is equivalent to slapping a price tag on someone and is hardly an objective process, as the only opinion that really matters is that of a line manager.  Yet, the official version at almost any company is that an appraisal helps an employee grow and mature professionally and offers a way to improve her individual performance towards some arbitrarily set target.  Truth be told: were the intent of appraisals to help employees grow and continuously improve, the process would not be run once or twice a year, but rather more frequently, in ways that would allow an employee to make the necessary course corrections more iteratively.  After all, why wait six months to tell a worker that she needs to improve?

At the time of appraisal, a manager delivers to an employee her final and practically indisputable decision.  The employee has practically no effective way to challenge or dispute such a decision.  Frequently, even the line manager does not have control of the process (although this is rarely admitted): he or she is presented with a fixed "bag of cash" coming from management above, and this bag, somehow, has to be distributed among lower-ranking workers.   And to be fair to line managers who are not delusional about the dysfunction they have to entertain, most of them also dislike the process, as it leaves them alienated and resented by their own employees.

 

So, as time goes by, employees become less and less pleased with evaluations and appraisals.  The impact may not be observed immediately, because it usually takes time for an employee to mature to the point where she becomes conscious of, and begins to comprehend, the unfairness and flaws of the process.  (Of course, exceptions exist among people that have longer experience of dealing with this process and understand its ineffectiveness and harm.)

 

While leveraging CLDs in my discussions with senior management, I use the following graphic representation and annotation to convey the concept:

This graphic suggests that annual appraisals have a delayed and opposite effect on employees' happiness.
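Continuing the illustrative sketch from above (the variable names are my own paraphrase of the prose, not anything prescribed by LeSS), this first link could be recorded as:

cld = CausalLoopDiagram(goal="Distribute discretionary incentives")
# Appraisals lower happiness ("O" = opposite effect) and do so with a lag ("||" = delayed):
cld.add("Individual performance appraisals", "Employee happiness", "O", "||")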

 

Peer to Peer Support

Peer-to-peer support, willingness to share knowledge with colleagues, collective ownership of assignments and shared responsibility for deliverables – these are the hallmarks not only of feature teams' dynamics but of any agile environment.  In order for employees to be mutually supportive, they must operate in a non-compete environment, where they don't view each other as competitors or rivals.  This is practically impossible to achieve when every employee perceives another employee (at least within the same salary ranking tier) as a competing bonus collector.  And this is exactly what is observed in environments where bonuses are distributed based on individual performance: employees compete for the same, limited pool of cash.  But not everyone can be a winner: even in a group of the brightest individuals working together, someone within that group would have to be ranked higher and someone lower (and, by the way, people are frequently told this upfront).  How could we expect people to be supportive of each other if, effectively, the underperformance of one employee and her inability to collect extra money increase the chances of another employee bringing home more cash?  Performance appraisals and discretionary money drive employees apart, not together.

Again, the adverse results of appraisals may not be immediate: pain points become more obvious after bonuses are actually paid (end-of-year/early-next-year) – this is when employees start developing resentment and jealousy towards each other over paid bonuses.

 

While leveraging CLDs in my discussions with senior management, I use the following graphic representation and annotation to convey the concept:

This graphic suggests that annual appraisals have a delayed and opposite effect on peer-to-peer support.

 

Both variables above directly (first order) define employees' intrinsic motivation to work and their willingness to stay with a company.  After all, can we expect that an unhappy employee, in constant competition with his peers and deprived of the opportunity to safely experiment, would want to dedicate himself to a company for a long time?  Probably not.  As a result, employee retention should not be expected to be high, and as has been seen in many cases, good employees always leave first.

 

While leveraging CLDs in my discussions with senior management, I use the following graphic representation and annotation to convey the concept:

This graphic suggests that both employees' happiness and their willingness to support each other are directly related to their intrinsic motivation to work and their willingness to stay with a company, and as a downstream effect, this increases employee retention.  The opposite is true as well: lowering the values of the upstream (left-side) variables will lower the values of the downstream (right-side) variables.

 

 

“Environmental Safety” and Desire to Experiment

Innovation and experimentation are paramount for success in software development. This is what drives feature teams towards improvement.  Scrum, for example, requires continuous inspection and adaptation.  It is expected that, while experimenting, feature/Scrum teams may run into roadblocks or have short-term failures, at which point they will learn and improve.  But in order to be willing to experiment and take chances, teams need to be sure that they are safe to do so.  In other words, they need to be sure that they will not be judged and scrutinized for their interim failures.  Such "environmental safety" will always be jeopardized by individual performance appraisals. Why? Because the individual success (high individual performance) of an employee is defined by her ability to precisely meet individual goals, set in stone early on in the year.  The need to follow a "script" precisely kills any desire of an employee to experiment.  After all, why would a person want to take any chances if her failures will be perceived by line management as underperformance?

Since appraisals make working environments unsafe and kill individuals' desire to experiment, as soon as an employee is presented with her annual goals, she reacts self-protectively: she starts to "work to the script" while trying to document every personal achievement "for the record" (a.k.a. "CYA").

 

While leveraging CLDs in my discussions with senior management, I use the following graphic representation and annotation to convey the concept:

This graphic suggests that when employees feel safe and are not afraid to experiment, innovation and experimentation take place in the workplace.  Inversely, a lack of safety in the workplace and an absence of desire to experiment reduce the chances of innovation and improvement.

In the sections below, I would like to take a closer look at system dynamics that are beyond the first order of interaction, by tracing some additional downstream system variables:

 

Team synergy & stability:

In Scrum, we would like our teams to be stable and long-lived.  We would like to see team members enjoy being part of the same team, and do so as happy volunteers, not as prisoners constantly looking for opportunities to escape.  In fact, the best feature teams known have been created as a result of voluntary self-organization, not as a result of a managerial mandate.

Why do we want our Scrum/feature teams to remain stable?  Here are some good reasons:

  • Collaborative environment and desire to work together
  • Shared domain expertise and cross-pollination of technical knowledge
  • Predictable team Velocity and ability to plan/forecast more accurately

 

So, how do team synergy and stability get impacted by performance evaluations and appraisals? Here is how this happens, indirectly:

Via low Employee Retention – as employees leave a company, feature teams disintegrate.  This brings together new team members that have never worked together and who require time before they can 'form, storm and norm'.  As feature teams get dis- and re-assembled, velocities drop and become less reliable, and system variability increases (estimation becomes less accurate).  The effect is usually immediate.  In my personal experience, I have seen many feature teams breaking up and falling apart shortly after companies have announced annual bonuses.

While leveraging CLDs in my discussions with senior management, I use the following graphic representation to convey the concept:

This graphic suggests that high employee retention will lead to elevated team synergy and stability.  Inversely, low employee retention in a workplace lowers teams' synergy and stability.

 

Via high Internal Competition and Rivalry – once employees realize that they have to compete with their own teammates for discretionary dollars, collaboration deteriorates dramatically.  Individuals stop supporting each other in pursuit of common goals. Instead, everyone strives to be a superhero and solitary performer, trying to demonstrate her own efficiency and hyper-productivity to a manager.  Everyone wants to look better than her peers and teammates.  The race to demonstrate the best individual performance has a high cost: it happens at the expense of overall team performance.   Since collaboration, swarming and shared ownership of work are critical for healthy Scrum, the downstream effect of performance evaluations and appraisals now becomes clearer: lowered team synergy and increased instability.

While leveraging CLDs in my discussions with senior management, I use the following graphic representation and annotation to convey the concept:

 

This graphic suggests that internal competition and rivalry will have an extreme and opposite effect on team synergy and stability.

 

Healthy Scrum Dynamics:

There are many known system variables that interact with one another and define the effectiveness of basic Scrum.  Assuming that most readers of this post are familiar with Scrum, and in order to keep my focus on other important downstream system variables, I am going to leave detailed discussion of basic Scrum dynamics out. It will suffice to mention that the following classic Scrum-specific variables always have to be considered: feature velocity, number of defects, the $-rate at which developers are hired (low vs. common), number of low-skilled developers, cash supply, ability to guide and improve the system, etc.  If the reader is interested in exploring this in depth, the "Seeing System Dynamics: Causal Loop Diagrams" section of the https://less.works site describes these system dynamics in great detail, with the use of CLDs.

However, when leveraging CLDs in my discussions with senior management, I still use the following generalized graphic representation and annotation to convey this common-sense, overarching concept:

This graphic suggests that team synergy and stability lead to healthy Scrum dynamics, and that the feedback loop is positive (a value increase on the left leads to a value increase on the right).  In my experience, the effect is sometimes delayed; the time lag is usually due to previously gained momentum.
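Continuing the same illustrative sketch, the technology-side links discussed so far can be strung together, and a trivial breadth-first walk then shows how appraisals reach 'healthy Scrum dynamics' only through second- and third-order effects (all variable names are my own paraphrase of the prose above, and the downstream function is mine, not part of any LeSS tooling):

cld = CausalLoopDiagram(goal="Distribute discretionary incentives")
(cld.add("Individual performance appraisals", "Employee happiness", "O", "||")
    .add("Individual performance appraisals", "Peer-to-peer support", "O", "||")
    .add("Individual performance appraisals", "Environmental safety & desire to experiment", "O")
    .add("Individual performance appraisals", "Internal competition & rivalry")
    .add("Employee happiness", "Intrinsic motivation")
    .add("Peer-to-peer support", "Intrinsic motivation")
    .add("Intrinsic motivation", "Employee retention")
    .add("Employee retention", "Team synergy & stability")
    .add("Internal competition & rivalry", "Team synergy & stability", "O", "E")
    .add("Team synergy & stability", "Healthy Scrum dynamics", "||"))  # the delay here is situational

def downstream(cld, start):
    # Breadth-first walk over causal links, printing each hop with its annotations.
    frontier, seen = [start], {start}
    while frontier:
        nxt = []
        for var in frontier:
            for link in cld.links:
                if link.cause == var and link.effect not in seen:
                    print(f"{link.cause} -> {link.effect} {list(link.annotations)}")
                    seen.add(link.effect)
                    nxt.append(link.effect)
        frontier = nxt

downstream(cld, "Individual performance appraisals")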

So far, we have used CLDs to explore system dynamics that primarily impact technology teams.  At this point, I would like to shift my focus to the business side of the house and explore the part of system dynamics that involves customers.  In particular, I would like to provide some examples of how CLDs can expose the adverse impact of individual performance appraisals and discretionary monetary incentives on Product Ownership in Scrum.

 

Identification of GREAT Product Owner:

Finding a good candidate for the role of Product Owner has been one of the most challenging tasks in Scrum. Why?

The role of Product Owner combines certain characteristics that are not easily found in the same individual, and it is in organizations of high organizational complexity and Taylorian culture where this challenge is seen most. On one hand, a Product Owner is expected to have enough seniority and empowerment to make key strategic business decisions.  On the other hand, a Product Owner is expected to get intimately involved in day-to-day, and sometimes hour-by-hour, interaction with technology groups.  When these two sets of characteristics come together in the same person, we hit the jackpot: we get a great Product Owner – a person who is both Empowered and Engaged.  But truth be told, it is often challenging to identify a person that possesses both "Es".  In most Orange organizations (the predominant color of most modern corporations, as per the Laloux Culture Model), the definition of every job includes a fixed set of responsibilities that individuals are obligated to fulfill.  If we look at most job descriptions, as they are defined by HR departments of Orange companies, we will hardly ever see a job spec that has "slack time" for a person to take on the responsibilities of Product Owner in addition to his primary job, let alone a job spec that is fully dedicated to the role of PO.  For most organizations, Product Owner is still not a well-defined role and, as such, it is not perceived by employees as a step towards career advancement.  Today, many organizations that use Scrum have to experiment with the role of PO by looking for the right individuals internally.  Individuals that step up for the role of Product Owner have to make a conscious decision, with full acknowledgement that they will be taking on a very wide spectrum of new responsibilities.  For most people, this is risky because, effectively, it means that attention and focus on primary activities (as per job specs) will be diluted by secondary activities: fulfilling the role of PO.  Of course, this problem could easily be mitigated with full backing and support from senior leadership and HR, by redefining job specs and explicitly recognizing the criticality of the Product Owner role.  But this hardly ever happens (and when it does, mostly at product development companies).

It is hard to argue with the fact that people have to be recognized for the work that they do.  I doubt that anyone would object to the following statement: nobody should be working two jobs for the same paycheck.  People have to "feel safe" about stepping into new territory, learning new activities and developing work dynamics that they have not experienced before.  This brings us to the same concept that we discussed earlier, when we looked at technology groups: individuals need to feel safe in order to be willing to experiment with a new role. It would be unreasonable to expect an employee to take on more work that would not be "counted in" when the person is evaluated for his contribution to the organization.

So again, while leveraging CLDs in my discussions with senior management, I use the following graphic representation and annotation to convey the concept:

As the graphic suggests, when employees feel safe and are not afraid to experiment, it will be less difficult to identify a good Product Owner. Inversely, the opposite is true as well: a lack of safety and an inability to experiment make the process of Product Owner selection much more challenging.

At this point, it is worth mentioning one very common Quick Fix that organizations make to compensate for shortcomings in finding a good Product Owner:

Empowerment usually implies that a person occupies a senior organizational position.  As such, a business person's career has progressed beyond a certain point; she no longer has enough bandwidth (nor desire!) to deal directly with technology. Once she reaches a certain level of seniority, a person has "bigger fish to fry", and collaborating with individual technology (feature) teams is no longer her priority. So, while still retaining one of the "Es" (Empowered), the person is not able to demonstrate the other "E" (Engaged).  In order to compensate for the missing "E", another person needs to be "inserted" into the system to fill the gap between the real Product Owner and the technology teams.  This poorly-defined (or undefined) role is sometimes labeled "PO-proxy" – a surrogate who tries to act as PO but does not have the power. This role is usually occupied by someone from a lower organizational layer: a business analyst, a system analyst or another person – someone who is more accustomed to working directly with technology and for whom the activity itself is not perceived as "below pay grade". This creates a serious dysfunction in the Scrum operating model, as communication between the true customer (empowered Product Owner) and technology is now hindered: the surrogate PO-proxy usually lacks the strategic/holistic product vision and the power to make important business decisions within the short timeframes required by Scrum.

It is worth noting that the functional expertise of a business analyst or systems analyst is welcome in Scrum and usually resides within teams (although single-specialty individuals are viewed as less valuable than multi-skilled, a.k.a. T-shaped, individuals).

The reason why the delegation of responsibilities described above is problematic is that it artificially creates unnecessary communication layers between end customers and technology. This type of organizational design causes a variety of additional dysfunctions (miscommunication, hindrance to information flow, confusion of priorities, etc.) and is therefore strongly not recommended.

 

While leveraging CLDs in my discussions with senior management, I use the following graphic representation and annotation to convey the concept:

As the graphic suggests, difficulty identifying a good candidate for the role of Product Owner creates the need to look for quicker and cheaper solutions; the introduction of a powerless surrogate role of PO-proxy is a commonly seen, undesirable 'Plan B'.

The reason why this fix is "quick" is that it usually does not take long for a real Product Owner to realize that he/she is not able (or willing) to handle the additional responsibilities of the role.  In my practice, the risk of losing a real Product Owner and getting a proxy instead did not take long to materialize: usually a few weeks after Scrum was introduced.

 

Effectiveness of Product Backlog management:

Effective Product Backlog management is paramount in agile product development.  It is a fundamental concept introduced in simple Scrum, and it remains just as valid in scaled Scrum.  In fact, when an organization scales its Scrum by involving multiple technology teams, Product Owners and lines of business, effective product backlog management becomes even more critical: work coordination, resource management, impediment removal, alignment of business priorities, etc.

As we can guess, effective product backlog management, including work prioritization, story decomposition, etc., can be done most effectively with the participation of a real Product Owner.  And conversely, if an organization is missing the critical figure of the Product Owner, product backlog management will become ineffective.

While leveraging CLDs in my discussions with senior management, I use the following two graphic representations and annotations to convey these two related concepts:

 

As the graphic suggests, product backlog management suffers from both a lack of true Product Ownership and the presence of ineffective surrogate roles.  In my personal experience, the effect is usually extreme.
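In the same illustrative notation used earlier (names are mine; "QF" marks the quick fix, "E" the extreme effect), these related relationships might be recorded as follows:

# Difficulty identifying a great Product Owner invites the quick fix of a PO-proxy:
cld.add("Difficulty identifying a great Product Owner", "PO-proxy (surrogate role)", "QF")
# A real PO enables effective backlog management; the surrogate role hurts it, and the effect is extreme:
cld.add("Great Product Owner in place", "Effective Product Backlog management")
cld.add("PO-proxy (surrogate role)", "Effective Product Backlog management", "O", "E")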

 

Healthy Scrum dynamics (overall):

At this point, I usually provide senior managers with a partial summary of the CLD, by showing how 'healthy Scrum dynamics', while sitting much further downstream from individual performance evaluations, appraisals and bonuses, are still impacted by the latter via second-order dynamics (through secondary variables). CLDs do a great job of bringing many aspects of system thinking together and presenting them visually.

Below is the combined view of how four upstream system variables that we have discussed earlier relate to ‘healthy scrum dynamics’:

As the graphic suggests, the nature of the effect (positive vs. negative), the time of onset (immediate vs. delayed) and the magnitude of the impact (normal vs. extreme) can be unique for each variable.

 

Scaling Scrum and Organizational Agility:

In this section, I want to describe how I, with the use of CLDs, bring my discussions with senior management to culmination, by painting a bigger picture of organizational agility.

For most large organizations, success by a single team is not the end goal.  Organizations look for "bigger" solutions.  And their reasons are obvious: huge IT departments, many lines of business, many customers, multiple competing priorities, multi-year strategy, and many other elements that make organizational needs nothing less than huge.  Luckily, most organizational leaders that I have met in my practice understand that the ability to effectively scale basic agile frameworks (e.g. simple Scrum) will ultimately improve organizational agility and ensure that both customers and employees are happy.

Below is the graphic that summarizes this last, ‘common sense’ relationship:

 

Tying it all back:

What I would like to do at this point is to take one step back and describe what it takes to scale Scrum effectively:

This is where another powerful concept of LeSS comes to the rescue: in order to scale Scrum, an organization must be de-scaled first (please refer to "Less Agile or LeSS Agile?" by Craig Larman).  In other words, to construct a model of Scrum performed by multiple teams, an organization must first remove (deconstruct) its existing organizational complexity.  As stated at the beginning of this post, scaling does not imply making things more complex; unfortunately, this key concept is not always well understood.  Mistakenly, many people still think that in order to support existing organizational complexity they need to look for multi-tiered, complex agile frameworks that will provide "room and purpose" for every existing organizational element: roles, processes, tools and techniques.

The analogy that I frequently use to explain the concept of scaling to senior management is that building a skyscraper on a wobbly, porous foundation is dangerous because it will eventually crumble. The surface must be cleaned up first, flattened and hardened, and only then is there a chance to build something tall and strong.

Below is the graphic that summarizes this concept:

 

At this point, the most common request I get from senior leaders is to elaborate on what I mean by 'de-scaling' – and this is my favorite topic.  The question is natural, but I usually resist answering it immediately, since the topic is inherently large, complex and, at times, inflammatory; therefore, I request a dedicated discussion for it.

However, I still produce a CLD graphic illustration of the concept, as shown below, and offer a follow-up discussion to explore the details:

Ultimately, when such a discussion is held, I always tie it back to the present discussion and explain why the "Goal: distribute discretionary incentives" becomes so trivial with the identification and removal of system/organizational waste.  This discussion is usually long, and it requires challenging many outdated organizational norms and principles that some senior leaders are not willing to give up easily.

 

The CLD graphic illustration is a high-level generalization of the concept of the opposite (inverse) relationship between the two system variables:

As mentioned above, the variable in the dotted circle can be decomposed further into many, smaller system variables that have up- and downstream relationship with one another.

 

Summary:

The best summaries are short.  Therefore, I would like to summarize this post briefly, with one comprehensive CLD diagram that brings together the variables, relationships and annotations that were discussed so far:

Although it may take hours, or sometimes days, of brainstorming to produce a CLD, once complete it becomes a great communication vehicle.   A diagram like this one can be created in real time, in collaboration with others, on a whiteboard.  Alternatively, it can be created ahead of time by a coach or trainer and then used as a 'cheat sheet' when appropriate.  CLDs can also be shared with a wide audience ahead of time, to solicit questions and provoke interesting discussions at a later point.
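For readers who prefer text to pictures, here is a minimal sketch that continues the illustrative structure from above (the render function and the remaining variable names are my own paraphrase of the relationships discussed in this post, not an exact copy of the diagram) and keeps such a comprehensive CLD as a plain-text 'cheat sheet':

from collections import defaultdict

def render(cld):
    # Group links by their upstream variable and print an annotated adjacency list.
    by_cause = defaultdict(list)
    for link in cld.links:
        by_cause[link.cause].append(link)
    print(f"Goal: {cld.goal}")
    for cause, links in by_cause.items():
        print(cause)
        for link in links:
            marks = f" [{', '.join(link.annotations)}]" if link.annotations else ""
            print(f"  -> {link.effect}{marks}")

# Business-side and scaling links discussed above, appended to the earlier diagram:
(cld.add("Environmental safety & desire to experiment", "Difficulty identifying a great Product Owner", "O")
    .add("Effective Product Backlog management", "Healthy Scrum dynamics")
    .add("Healthy Scrum dynamics", "Ability to scale Scrum")
    .add("Organizational de-scaling (waste removal)", "Ability to scale Scrum")
    .add("Ability to scale Scrum", "Organizational agility"))

render(cld)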