Category Archives: Behavioral Science

July 7 – LeSS Talks: Framing a perspective: LeSS, Nexus, Scrum @ Scale, SAFe

This event was about comparing and contrasting four well-known scaled agile frameworks: LeSS, Nexus, Scrum @ Scale, and SAFe.

The discussion was very engaging but not conclusive, and many topics remain to be discussed at a future event.

Below, please find a post by one of the participants and contributors at the meetup, Sevina Sultanova:


Scaled Agile comes in different flavors. Knowing the differences and similarities between the various agile frameworks (LeSS, Nexus, Scrum @ Scale, and SAFe) can be largely beneficial and can put organizations at ease BEFORE they get underway with an agile transformation using any of these frameworks.

Consider the recent LeSS in NYC meetup event, in which attendees gathered around a virtual canvas projected onto the wall with the goal of building a perspective on the frameworks and comparing/contrasting them across multiple agile dimensions (e.g., dependencies, optimization, overall structure, artifacts, ceremonies/events, teaming, ability to improve system design, roles/responsibilities, strengths, challenges, etc.).


The participants were provided with a lightweight reference for each framework in the form of virtual stickies to “move” around the table, in order to understand the differences among the various scaling frameworks.

What I found particularly interesting was a discussion about the differences between SAFe and LeSS initiated by one of the attendees. Here are a few highlights from this discussion:

| Topic | SAFe | LeSS |
|---|---|---|
| Solving dependencies | Coordinates people | People work with the technology |
| Cost of dependencies | Coordination is seemingly necessary waste | Learning to work with the technology is an investment |
| Optimization | Resource coordination | Outcome optimization |
| Batch size [1] | Planning cycle of 3 months; big batch of work to reduce total cost | Planning in Sprint-long iterations to enable fast feedback |
| Main control mechanism [2] | Bureaucratic | Clan |
| Customer contact | Intermediated | Direct |
| Organizational maturity | Possible with lower skill; learning for the role; “natural” development | Higher skill needed; learning what is needed; skilled evolution, leading learning |

Takeaway: Embracing either framework is simple yet not easy, as technology, competence, identities, and culture all need to develop.

As Edgar H. Schein says, “There will always be learning anxiety…Learning only happens when survival anxiety is greater than learning anxiety.” [3] As with any enduring change, learning requires time, and there is sure to be some worry and resistance.

References:

  1. [1] Stefan Thomke and Donald Reinertsen, “Six Myths of Product Development,” Harvard Business Review, May 2012.
  2. [2] William G. Ouchi, “A Conceptual Framework for the Design of Organizational Control Mechanisms,” Management Science, Vol. 25, No. 9 (Sep. 1979), pp. 833-848.
  3. [3] Edgar H. Schein, “The Anxiety of Learning,” Harvard Business Review, March 2002.

If you have any questions for Sevina, please contact her directly here

Managing Performance by Extrinsic “Motivation”

“The idea of a merit rating is alluring. The sound of the words captivates the imagination: pay for what you get; get what you pay for; motivate people to do their best, for their own good. The effect is exactly the opposite of what the words promise.”

-W. Edwards Deming, “Out of the Crisis”


This article took me almost 10 years to write… It has been a long journey for me.  As an organizational and agile coach, I base my views not merely on feelings and emotions, but rather on hard scientific evidence, research, literature reviews, and analysis of work by other credible sources that I have been collecting over the years. And, of course, on continuous assessment of my personal experience.

So, I want to start this discussion with the Wikipedia definition of Performance Appraisal:

“A performance appraisal (PA), also referred to as a performance review, performance evaluation,[1] (career) development discussion,[2] or employee appraisal,[3] is a method by which the job performance of an employee is documented and evaluated. Performance appraisals are a part of career development and consist of regular reviews of employee performance within organizations.”

Interestingly enough, while all three terms (“review”, “evaluation”, “appraisal”) are included in the same definition, in practice companies predominantly use the term “review” to describe PA, as it implies less scrutiny and less preconception towards an employee. But does using less abrasive terminology change the essence of the process?

A typical PA process includes the setting of individual career goals by an employee. These should, presumably, be her own goals, but they nevertheless must be in line with organizational/departmental goals that are usually set by senior management and then cascaded down to line management.  Throughout the year, the employee is expected to steer herself towards the pre-set goals while performing her day-to-day job responsibilities.

Every company that supports a PA process has a scoring system (variations exist) to rank employees against other employees, based on the score an employee earns for her yearly accomplishments (goals set vs. goals achieved). Some organizations offer a mid-year (or, at best, quarterly) check-point at which an employee, along with her line manager, reviews how she is performing against the goals she originally set.  Practically no company handles PA as an actively managed, iterative process: there is typically one mid-year check-point and an end-of-year final decision.

For most companies, the whole PA process typically serves the following three main purposes:
1. To identify low-performing employees that are potentially subject to downsizing (or are kept where they are)
2. To identify high-performing employees that are potentially subject to promotions and compensation increases
3. To decide how discretionary incentives (bonuses) should be distributed among employees

While on the surface PAs still appear to be an effective way to ensure the quality of employees and to provide benefits to an organization, under the surface this process presents real challenges. These challenges become more apparent at organizations that attempt to adopt a more agile culture, since agile environments expose systemic organizational dysfunctions much more clearly.

But before we dive deeper into the discussion, let us first briefly refer to some of the credible research and studies that exist today:

In his book “Out of the Crisis”, originally published in 1982, W. Edwards Deming discusses the Seven Deadly Diseases of Management and refers to individual performance reviews and performance evaluations as Disease #3. Deming’s philosophy of transformational management stresses the seriousness of the barriers management faces while trying to improve effectiveness and striving for continual improvement. Deming argues that by trying to evaluate and measure workers with the same yardstick, managers cause more harm than good to individuals and to companies.

In their book “Abolishing Performance Appraisals: Why They Backfire and What to Do Instead,” Tom Coens and Mary Jenkins offer specific suggestions on how to replace performance appraisals with a more effective system that emphasizes teamwork and empowerment. Coens and Jenkins discuss new alternatives that produce better results for both managers and employees.

In his Forbes article “Eliminating Performance Appraisals”, Edward E. Lawler III, a distinguished professor of Business at the University of Southern California, argues that organizations should stop doing performance appraisals. Professor Lawler states that performance appraisals frequently do more damage than good, with the damage ranging from wasted time (least troublesome) to alienated employees and conflicts with their supervisors (most troublesome).

Garold Markle, an author, executive consultant, and speaker, leverages his studies and experience with systems theory to illustrate, with real-life examples, why both employees and managers have come to view the “ubiquitous performance evaluation as industry’s poorest performing, most ineffective, and least efficient personnel practice”. In his book “Catalytic Coaching: The End of the Performance Review”, Markle provides an innovative way to measure the ineffectiveness and inefficiency of performance evaluations and then introduces his catalytic coaching to replace them. His statement is a wake-up call: “People hate performance reviews”.

In his book “Drive”, Daniel Pink offers a paradigm-shattering view of what truly motivates people in their lives. Pink draws on four decades of scientific research on human motivation to expose a mismatch “between what science knows and what business does”. Pink challenges the mistaken belief of many that people doing intellectual work will demonstrate higher performance when incentivized monetarily. Based on Pink’s research, it becomes clear that individual performance evaluations and individual appraisals that are linked to monetary rewards are not an effective way to steer individuals towards becoming more efficient and productive. Therefore, they should be abolished.

Finally, in his book “Implementing Beyond Budgeting: Unlocking the Performance Potential“, Bjarte Bogsnes, who has had a long career with HR and Budgeting departments, unveils the ineffectiveness of the conventional budgeting processes that so many companies still follow today.  Bogsnes describes common fallacies associated with “accordion” or “against the wall” budgeting that is done under the assumption that “…the world will end on December 31st…”.   By offering many real-life examples and case studies of companies that have instituted alternative budgeting approaches, Bogsnes forces his readers to fundamentally shift their mindset away from some outdated “de facto” concepts.  For example, one of his recommendations is to decouple what has been mistakenly lumped together for years, Targets, Forecasts, and Resources, and to treat each one as an independent system variable.  The connection is astonishing.

On many occasions in his book, Bogsnes connects the dots between the conventional budgeting process and the conventional performance management process, both of which harmfully feed off one another.

And the list goes on….

So, now let’s take a closer look at the problem at hand, with some specific examples:

 

Fabricating Goals to Game the System

Are goals that employees officially set for themselves (in a system of record) truly reflecting their genuine, personal goals?

It is not uncommon for real personal goals to be risky and challenging to achieve, or to take longer than initially expected. Other goals may be situational/opportunistic: they may change as a situation changes or an unforeseen opportunity presents itself (job market trends, other job opportunities, personal life).  People want the freedom and flexibility to adjust their goals to optimize their personal benefits, and this is human nature.  There is no real benefit to an individual in “setting in stone” her personal development goals at year-start and then being locked into them at year-end, as if not meeting those goals equates to a penalty.  In general, in order to set her real goals, a person needs to know that it is safe to actively manage them along the way and, if needed, safely change and/or fail them, without fearing negative consequences.

But is there any safety in PA processes if job security, career advancement, and the ability to collect fair compensation are at risk? If there is no personal safety, the exercise of setting personal goals becomes nothing but a routine of faking objectives that are “definitively achievable”. People are forced into system-gaming to minimize the risk of being penalized by their management if goals are not met. Setting individual goals becomes just a formality that brings no true value to an employee.

The process of individual performance reviews becomes even less meaningful if people work in small teams, where swarming (working together on the same task) and collective ownership are important and joint delivery is expected. In cases such as these, people are forced into unhealthy competition with each other over goals, trying to privatize what should be owned and worked on collectively.

Another challenge with evaluating employees’ individual career goals is that, in pursuit of personal goals, people frequently “drop the ball” and pay less attention to common goals. Again, this dysfunction becomes much more vivid in “going-agile” environments, where agile frameworks (e.g., Scrum, Kanban, LeSS) de-emphasize individual ownership and reinforce the importance of collective ownership. Often, close to mid-year and end-of-year performance reviews, collaboration and mutual support among team members worsen, as silos get created and everyone starts to think about their own goals at the expense of shared goals. This translates into a productivity drop: swarming, velocity, and throughput go down; cycle time goes up, queues grow, and handovers take longer.

So, let’s take a look at a few hypothetical examples that are based on real-life scenarios:

Example 1:

Jane is an employee of a large insurance company.  She is requested to enter her personal goals into the company’s system of record: things she intends to achieve throughout the year.   Jane is smart, and in order to avoid any unwarranted risk where her personal success depends on the success of others, she creates goals that are free of dependencies.  Jane creates a set of personal goals that other group members do not know about and do not care about.  Her line manager, John, also discourages Jane from sharing such information.

However, Jane does not work alone.  Her day-to-day work is tightly coupled to the work of other people in her group: Jim, Jeff, Jill, Joe, and Julie.

Jane really values teamwork. She also feels that by working closely with her group members, by swarming and sharing day-to-day activities, she can accomplish a lot more than if she worked by herself.   This is where Jane decides to put her full focus: on teamwork. She does not feel that creating an additional set of personal goals can add real value to her professional growth.  But Jane needs to “feed the beast”: she needs to provide her line manager with a list of “achievable” bullets that the latter can measure.  At the same time, Jane does not want to create a conflict with her colleagues by diluting her focus on shared goals and shifting it to personal goals. So, what does Jane do?  She fabricates her personal goals: “quick kills” and “low-hanging fruit” that she can easily claim as her “achievements” without jeopardizing the common interests of her team.  Jane is forced to “game” the system to minimize harm to herself and her team.

In his book “Tribal Leadership”, David Logan describes five tribal stages of societal evolution. According to his research, corporate cultures typically oscillate between Stage 3 (“I am great and you are not”) and Stage 4 (“We are great and they are not”), with agile organizations trending more towards Stage 4. When individuals are motivated by force (a.k.a. “manipulated”) to think more about individual performance than about collective performance, they mentally descend to Tribal Stage 3 and, as a result, drag their organization down to this lower stage. It is very important for organizations and their senior leaders to understand that motivation is one of the most important factors driving the evolution of corporate culture.

Note: To understand how Motivation Evolution (defined by Daniel Pink in “Drive”) relates to Tribal Evolution (defined by David Logan in “Tribal Leadership”), please refer to this tool.

So, clearly, in the example above, Jane’s mindset is at Stage 4, but in order to survive the process she is forced to descend to Stage 3 and “play an unethical game”.

Unhealthy Competition, Rivalry and Jealousy

Let’s face it: overemphasizing individual performance evaluations and allowing them to affect individuals’ job security, promotions, and compensation does not come free of charge to organizations.  Organizations pay, and they pay dearly.  Bad norms and processes come at the expense of lowered collaboration, unwillingness to share knowledge and provide peer-to-peer support, and increased selfishness and self-centric behaviors. For individuals that are encouraged to work and produce collectively (e.g., Scrum or Kanban teams), unfair performance evaluations frequently result in jealousy and feelings of unfair treatment. These dysfunctions become more frequent around the times when employees are due for mid-year and end-of-year reviews. PAs have seasonal adverse effects on individuals’ ability to focus on work and, as a result, prevent them from producing high-quality products and focusing on satisfying customers.

Ironically, it is worth mentioning that when these dysfunctions are uncovered, it is agile that becomes the target of blame.  But agile is hardly at fault here, as it only provides transparency and a reflection of already existing, deep systemic dysfunctions.

Example 2:

Jane works alongside Jim, Jeff, Jill, Joe, and Julie.  All of them are smart, self-motivated, and talented technical experts who cumulatively have more than 70 years of software development experience.  Their work is intense: there are lots of deliverables, and the timeframes are rigid.  The group has served the same client for a number of years and, so far, the client is happy.  The work that this team performs requires a lot of collaboration, collective thinking and brainstorming, teaching and learning from each other, and, of course, collective delivery.

But then comes the mid-year review period, and Jill notices that Jeff is not as supportive of her as he was at the beginning of the year.  Jeff becomes less responsive to Jill’s requests, he does not share his knowledge as readily as he used to, and he does not give advice. Tasks that used to be handled collectively by Jill and Jeff are now illogically split by Jeff, as he tries to focus only on what he assigns to himself.

There is also a noticeable change in Julie’s behavior.  Julie becomes very eager to be the one who stands in front of the client and presents the deliverables of the whole team.  This responsibility used to rotate from one person to another, with no one caring too much about being the “spokesperson”.  But as the mid-year review approached, Julie clearly stepped up to be the main, customer-facing presenter.  Julie also tries to make it very obvious to John (the group’s manager) that it is she, Julie, who presents to the customer. Julie wants to be viewed as the “centerpiece” and tries to gain most of the spotlight.

Jim’s contribution to the group’s efforts has also decreased.  Early in the year, Jim used to be a very active participant in the team’s brainstorming meetings and workshops.  As mid-year arrived, Jim started spending a significant share of his time working on items that are not related to the team’s shared work; his focus has noticeably shifted to personal work that he chooses not to discuss with others.

Since the beginning of the year, it has been customary for the group to go out for drinks at a local bar every Friday.  But at mid-year, this tradition is barely followed.  There seems to be less desire for the group to socialize outside work settings; everyone finds an excuse not to make it.  The group’s synergy has gone down noticeably.  What used to be a well-jelled team of great collective performers has turned into a group of self-centered individual achievers who want to be acknowledged for their heroics.

“Scripted” Ranking to Force-Fit into a Bell-Shaped Curve
Typically, when an organization ranks its employees based on individual performance, a bell-shaped curve is produced, where the samples (ranked employees) are normally distributed around the mean: the majority of samples are centered (“center mass”), representing average-performing employees; the left tail represents low performers; and the right tail represents high performers (over-achievers). Statistically, a bell-shaped curve approximates the normal distribution of any sufficiently large sample. The symmetrical shape of the curve (“bell”), however, can be influenced by three additional factors (illustrated with a small simulation sketch after the list below):

  • Platykurtic distribution – lowers the number of samples around the mean (average performers) and increases the number of outliers (under-performers and over-achievers) equally on both sides. The curve remains symmetrical.
  • Leptokurtic distribution – increases the number of samples around the mean (average performers) and lowers the number of outliers (under-performers and over-achievers). The curve remains symmetrical.
  • Skewed (uneven) distribution – increases the number of samples on the left (under-performers) or right (over-achievers) tail of the curve, disturbing the evenness of the sample distribution around the mean (average performers). The curve loses its symmetry.
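
To make these distribution shapes concrete, here is a minimal Python sketch (with purely hypothetical sample sizes and score parameters) that generates a baseline bell curve plus platykurtic, leptokurtic, and skewed variants of a “ranking score” and prints their excess kurtosis and skew. It assumes numpy and scipy are available and illustrates only the statistics, not any real appraisal data.

```python
# A minimal sketch with hypothetical numbers: how kurtosis and skew reshape a "ranking" curve.
# Assumes numpy and scipy are installed; sample sizes and score ranges are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 10_000  # hypothetical number of ranked employees

curves = {
    "normal (baseline bell)":       rng.normal(loc=3.0, scale=0.5, size=n),
    "platykurtic (flatter middle)": rng.uniform(low=1.5, high=4.5, size=n),
    "leptokurtic (peaked middle)":  rng.laplace(loc=3.0, scale=0.25, size=n),
    "skewed (uneven tails)":        stats.skewnorm.rvs(a=6, loc=2.5, scale=0.7, size=n, random_state=42),
}

for name, scores in curves.items():
    print(f"{name:30s} mean={scores.mean():.2f} "
          f"excess_kurtosis={stats.kurtosis(scores):+.2f} skew={stats.skew(scores):+.2f}")
```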

This statistical distribution is tightly coupled to the actions that management takes towards its employees at year-end. However, the shape of the bell curve does not “drive” (as might be expected) managerial year-end decisions. On the contrary, managerial decisions shape the curve.

Managerial decisions are driven by the financial condition of an organization, as well as by other strategic organizational plans. When managers review their employees, they have to account for such factors to make sure that the bell-shaped curve does not exceed the organization’s capacity for promoting employees and giving out money. Effectively, the entire process of performance assessment becomes a retro-fitting exercise that shapes the bell curve based on organizational capabilities. This makes the process practically staged, or “scripted”.  What further adds to the irony of this situation is that at times an employee may report to a manager who does not even have sufficient skills to perform an objective assessment of that employee’s performance.  For example, an architect or a software engineer who reports to a non-technical manager (e.g., a PMO) has a much lower chance of objectively discussing her accomplishments and receiving objective feedback during a PA.

There is a need for an alternative approach that will help deal with overly complex, over-staffed organizations that spend so much time and energy trying to price-tag their employees.

Here is an idea: how about more thorough background and reference checks, more rigorous interviewing processes that involve practical (hands-on) skills assessments, try-and-buy periods before hiring an individual full-time, or other, more objective methods?

Instead of attracting cohorts of workers of questionable quality and then dealing with inevitable force reduction, or worrying (or pretending to worry!) about employees maintaining and/or improving their quality, hiring managers should be striving to acquire and retain smaller numbers of higher-quality workers: self-motivated, enthusiastic professionals with a proven track record and clearly defined career goals…AND be willing to pay them higher compensation. This may require offering more competitive base salaries and abolishing manipulative discretionary incentives: removing money from the table makes intellectual workers think more about work and less about getting paid. This approach would also keep the quantity of employees at a minimum (which also lowers overhead, reduces complexity, and de-scales the organization) while maximizing quality. Such an alternative should render performance reviews much less important, or even obsolete, as there would be no need to reduce headcount at year-end or to thin-slice discretionary incentives among too many candidates.

 

Example 3:

John is a line manager for the development group. John has great organizational skills, he is well-spoken, and he can articulate his wishes very well.  But John has never developed software products; he is not technical. John knows that all of his team members are “good guys”: knowledgeable, enthusiastic, and mutually supportive.  But when the team works together, John really cannot validate the quality of the work they produce.  (Luckily, there is one reliable measurement of the team’s success: customer satisfaction.)  The only thing that John can validate is the team’s vibe and spirit.  But even when John notices disagreements or temporary misalignment among the team members, it is impossible for him to offer constructive advice or understand a root cause.  What is even more challenging and frustrating for John is that, due to the nature of the team’s work (closely collaborative, collectively shared), he cannot objectively assess the individual performance of every team member.  In conversations with John, the team members rarely use the word “I”; it is typically “we”.

John is in a tough position.  How can he decide who the best performer on his team is and who is not?  John needs to be able to “rank” his people and, based on that ranking, decide who gets promoted and paid more at the end of the year.  Deep down, John feels that everyone deserves a promotion and a monetary “thanks”, but he cannot satisfy everyone.  John’s management informs him that only one person from his team can get promoted, and the amount of discretionary money allocated to his group is limited; in fact, it is less than last year.

Around mid-year, John begins evaluating how each of his team members has performed to date. John does this based on the “achievable” goals that were set by each employee at year-start.  John’s inability to truly understand the nature of people’s technical work adds to his challenge…and frustration.   He cannot objectively evaluate his employees, let alone rank them against each other.

Meanwhile, John’s management expects from him a ranking model that will fit into the bigger picture of an overarching ranking model for the year.  This means that even if John feels that all of his team members are outstanding performers, he will not be able to recognize this officially.  At most, he will be able to recognize that they have achieved their set goals.  Further, based on what John learns from his management, he has to commit an even less noble act.  Learning that a certain percentage of the company’s workforce has to be reduced, John has to identify people from his group for future downsizing.  It is clear to John that outstanding performers are not to be downsized (potential HR “cases”).  Therefore, John decides to force-fit some of his team members into the bell-shaped curve, away from the right-sided tail, towards the middle (average performers) and the left-sided tail (under-performers).  John uses the organizational “script” to play his own game.  What John does is a wasteful act, full of subjectivity and ambiguity.  The process is also destructive to the team’s cohesiveness and morale.  John is at risk of losing some good people sooner than he can imagine.

Truth be told, the natural “knee-jerk” reaction of any employee, when she is told by someone why she is not “perfect” and what she needs to do to improve, is to become defensive.  The biggest reason she becomes defensive is her resentment that someone will subjectively “evaluate” her and decide how much she is worth.  Although an individual may keep her feelings and emotions concealed under the umbrella of political correctness and diplomacy, emotional harm is being done.

 

Generating Waste
Rarely do companies consciously analyze how much time and effort is spent on the performance evaluation process itself: by employees, by line management, by senior management, and by HR. Unfortunately, for large, enterprise-size companies, these expenditures are already “budgeted for”. From the standpoint of lean thinking, today’s typical PA process, conducted by line managers, is a clear example of organizational overhead that slows cultural evolution and prevents companies from maturing to Logan’s Tribal Stage 4.

Example 4:

All members of the team (Jane, Jim, Jeff, Jill, Joe, and Julie) spend a lot of time during the year writing and reviewing their personal goals.  John spends a lot of time reviewing and discussing the personal goals of each team member.  John also spends a significant portion of his time with his own line management, discussing achievements and the intended ranking of each of his subordinates.  Overall, the amount of time this entire group of people spends on the PA process creates a lot of unnecessary procedural overhead and over-processing. Annually, PA processes cost companies hundreds of thousands of dollars in time wasted by employees at many organizational levels.

Alternative Approaches to Performance Reviews

Are there any working solutions to this problem? Is it possible to ensure that an organization’s behavior towards its employees (e.g., motivating and incentivizing) is more in line with what is best for organizational prosperity, customer satisfaction, waste reduction, the creation of a more pleasant work environment, and a Kaizen culture? Is there a way to depart from archaic, 100+ year-old Taylorist management principles, Skinnerian behaviorism, and outdated norms and behaviors, without causing too much stress to the organizational ecosystem, perhaps by offering alternative, less harmful solutions?

Let’s be clear on something: ideally, the end goal of any organization should be to abolish individual performance appraisals completely and to substitute them with other, more effective methods of individual motivation, at least for intellectual workers who are expected to work in team settings.

But for now, let’s look at some possible alternatives that can help companies gradually depart from individual performance appraisals, towards less harmful approaches.

Here are some potential alternatives (“second-best” to complete abolishment) to the current PA and incentive-allocation process:

  • Instead of prizing individuals, prize teams, and do so based on what the entire team has produced, not on what a single individual has produced. If individuals must work in tight collaboration with each other and are expected to cross-pollinate with knowledge and domain expertise, what is the point of stressing the individual performance and superior excellence of each individual? Let a team decide, internally, who is elevating them above the water and who is dragging them down to the bottom. Individual under-performers will be quickly identified in such settings, and the team will either expel them or help them improve. Also, please note that prizing a team (monetarily, with a team bonus) does not have to be coupled to “performance assessment”. This could be done simply as a profit-sharing model between business and technology: if technology’s work has noticeably improved business profits, why can’t the business say “thank you” to technology for its hard work in the form of shared profits?
  • Take the singleton decision-making capability of defining what a team deserves (in terms of a monetary prize) out of line managers’ hands and spread it across multiple parties: base it on customer/stakeholder satisfaction, senior management satisfaction, third-party feedback, etc. But again, judge teams, not individuals (important!).
  • Make monetary incentive allocation more objective and formula-driven, rather than subjective and based on a single opinion. Here are a few suggested formulas for doing this (other options exist; a small allocation sketch follows this list):
    1. Monetary incentives are allocated equally among all employees whose work is tightly coupled to a shared goal and where collective ownership is expected
    2. Monetary incentives are allocated in proportion to the base salary of each employee: decide on the employee’s “cost basis” when she is hired (based on expertise, experience, etc.) and then fall back to option 1 above
    3. Monetary incentives are allocated based on the team’s internal voting, done confidentially (an incremental, 360 review by all team members)
  • Please visit the following links for a graphic illustration of the conventional and alternative incentive allocation schemas.
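
As a concrete illustration of the first two formulas above, here is a minimal Python sketch with purely hypothetical names, salaries, and pool size (none of these figures come from the article): option 1 splits a team pool equally, while option 2 splits it in proportion to each employee’s base salary.

```python
# Minimal sketch with hypothetical names and numbers: formula-driven team incentive allocation.
team_pool = 60_000  # hypothetical discretionary pool granted to the whole team

base_salaries = {   # hypothetical "cost basis" decided at hire time
    "Jane": 120_000, "Jim": 100_000, "Jeff": 110_000,
    "Jill": 105_000, "Joe": 95_000, "Julie": 115_000,
}

# Option 1: equal allocation among everyone sharing the same goal
equal_share = {name: team_pool / len(base_salaries) for name in base_salaries}

# Option 2: allocation proportional to each employee's base salary
total_base = sum(base_salaries.values())
proportional_share = {
    name: team_pool * salary / total_base for name, salary in base_salaries.items()
}

for name in base_salaries:
    print(f"{name:6s} equal={equal_share[name]:>10,.2f}  proportional={proportional_share[name]:>10,.2f}")
```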

Note: Consider the above options as temporary solutions, second-best to completely abolishing discretionary monetary incentives for intellectual workers who work in team settings. Although team-level incentives are less dangerous than individual incentives, they may still bring harm: they make people think about getting paid, not about doing the work.  There is still some risk that entire teams may engage in system-gaming, although the chances of that seem lower than with system-gaming by individuals.

Ideally, for any kind of intellectual work, the topic of discretionary money should be removed from the table completely: people should be focused on doing the work, not on how they can game the system to get higher pay.

Conclusion

The famous quote from the book “Out of the Crisis” by W. Edwards Deming (originally published in 1982) summarizes this topic well:

“The idea of a merit rating is alluring. The sound of the words captivates the imagination: pay for what you get; get what you pay for; motivate people to do their best, for their own good. The effect is exactly the opposite of what the words promise.”

References

  • W. Edwards Deming. 1993. The New Economics for Industry, Government & Education. Cambridge: Massachusetts Institute of Technology Center for Advanced Engineering Study.
  • David Logan, John Paul King, Halee Fischer-Wright. 2008. Tribal Leadership: Leveraging Natural Groups to Build a Thriving Organization. New York: Collins.
  • Tom Coens, Mary Jenkins. 2012. Abolishing Performance Appraisals: Why They Backfire and What to Do Instead.
  • Daniel H. Pink. 2011. Drive: The Surprising Truth About What Motivates Us. Riverhead Books.
  • Garold Markle. 2000. Catalytic Coaching: The End of the Performance Review. Quorum Books.
  • Edward E. Lawler III. 2014. Eliminating Performance Appraisals. https://www.forbes.com/sites
  • Jeffrey Pfeffer, Robert I. Sutton. 2006. Hard Facts, Dangerous Half-Truths, and Total Nonsense: Profiting from Evidence-Based Management.
  • Tom Coens, Mary Jenkins, Peter Block. 2002. Abolishing Performance Appraisals: Why They Backfire and What to Do Instead.
  • Alfie Kohn. 1993. Punished by Rewards.
  • Samuel A. Culbert. 2010. Get Rid of the Performance Review.
  • Adobe Systems set to scrap annual appraisals, to rely on regular feedback to reward staff
  • Microsoft’s Downfall: Inside the Executive E-mails and Cannibalistic Culture That Felled a Tech Giant
  • Get Rid of the Performance Review!

May 11 – LeSS Talks: Coordination and Management in Large Scale Scrum


First, we covered basic dynamics of Scrum (roles, artifacts, ceremonies).
Then we shifted gears to ‘activities on a typical project’, and the group of 20 or so did a “brain dump” of all possible activities they could think of at the moment.  We removed redundancies, cleaned up the board, and played the “Who Stole My Cheese?” game by mapping all activities on the board to the three main roles in Scrum (Product Owner, Scrum Master, Team).  We had very few unassigned activities, which we labeled as ‘Other’.  Then we had more discussions about how a team should claim back management responsibilities as it matures over time.  Then we talked about management & coordination in LeSS, and about management & coordination in LeSS Huge.  Then we had more system design discussions: great questions and points of view.
Thanks to all participants for making this a live, collaborative event.

Gene is singing baritone 🙂

Mike is adding spice with bass 🙂


…”mass-entry” of project activities on the wall….


…assigning project activities to Scrum roles….


…the wall is getting busy….


… discussing project roles and posting them  through a tablet….


…more tablet entries…stickies go right on the wall…


…mini-group collaboration…entering through tablets…

…another group in action…..

 

 

SAFe: Market Share Increase. Rapid Growth. What is the recipe?


Some time ago, there was a webinar recorded by VersionOne: “How to use SAFe® to Deliver Value at Enterprise Scale (Q&A Discussion with Dean Leffingwell)”.   If you fast-forward to about 23 minutes, 20 seconds into the recording, you will hear the following statement: “…We don’t typically mess with your organizational structure because that is a pretty big deal…”

This statement somewhat puzzled me.  While the graphic representation of the SAFe framework does nothing to discourage organizational complexity, I was still under the impression that organizational design improvement/simplification was included in SAFe teaching.  To me, the ability to influence first-degree system variables, such as organizational structure, is critical.  Without this ability, any attempt to improve organizational agility and system dynamics will be short-lived and limited.  Even such important second-degree system variables as organizational culture, values, norms, behaviors, policies, and agile engineering practices usually bring limited results if the organizational structure remains unchanged.

…But regardless of my recent learning, I admitted to myself that SAFe still remains a very successful (financially) and popular product that many organizations are willing to buy-unwrap-install… Fast-forwarding…


 

Lately, there has been a lot of buzz in the agile arena about scaled agile frameworks.  I just came back from the Global Scrum Gathering in Orlando, where I heard a lot of discussions about agility at scale and the various agile frameworks that companies use.  Following the Orlando discussions, I have seen a wave of email exchanges and blog posts on the same topic, some of which involved seasoned organizational coaches and trainers.  I have noticed a lot of focus on SAFe (Scaled Agile Framework): opinions, comments, attempts to compare it to other agile frameworks.   Two things, in particular, struck me as odd:

  1. It seemed that some seasoned coaches and trainers don’t explicitly state their views.  When I read indirect statements or views, I was left wondering how the person really felt about the subject.
  2. Among the blogs and other posts that I saw, I was not able to find any discussions that covered the aspects of SAFe that were of particular interest to me.

But before I go any further, here is my personal disclaimer:  I am neither a SAFe practitioner, nor a SAFe trainer or coach.  I have not attended a comprehensive SAFe course… However… I have studied and researched SAFe extensively on my own. I do know some companies that have implemented SAFe (and have talked with some of their employees).  I do know a significant number of individuals who have been trained on SAFe.  And I do know a handful of respected coaches who recommend SAFe.

Now, let me put the “SAFe” topic to the side for a moment and shift gears to something else (we will come back to SAFe in a minute):

I want to bring up a topic that has been beaten to death for a while, for everyone who understands agility: the topic of tooling.

When it comes to discussions of agile tools, the more experienced agile coaches have a long arsenal of arguments to use with their clients and prospects to explain why ‘agile tools’ are not the most important part of being agile.  Here are some classic examples:

  • The first value of the Agile Manifesto: “Individuals and interactions over processes and tools”
  • “A fool with a tool is still a fool”
  • “The best tool in Scrum is a whiteboard (or Excel, at most)”
  • “An agile tool is not the right solution for your deep organizational problems“
  • “Never begin your agile education with tools. Always learn principles and concepts first”
  • “An agile tool is a poor substitute for collaboration that you may never have. If you start exchanging information through a tool, you will lose the benefit of a live discussion.  If you absolutely must introduce a tool, do it later in the process, when people have gained a sufficient amount of knowledge and experience”
  • Etc., etc., etc…

We, as coaches, are never shy to express our strong views (sometimes, overly strong) that tools are NOT a good solution to organizational problems and NOT the best method (by far) to transform organizations.   And I am glad we are not shy about that.   This is why we are called Organizational Coaches: we look at organizations holistically.  For us, tooling is just a tiny fraction of a much bigger organizational puzzle.

<SIDE NOTE ON>

But I still want to confess something with regard to tooling, so here is another personal disclaimer: over the last decade, I have been around and have gained a lot of experience with tools like JIRA, Version One, Rally, and others…  I consider this a personal ‘hobby’, but I know how to decouple it from the daily work that I have to do as an organizational coach.  Over the years, I got to know some great software engineers who built the tools mentioned above.  I could probably easily pass for an in-house “agile tool expert” (that is, if I decided to change my profession one day) and find a job whose description says something like this: “Looking for a strong agile tool expert to transform our organization to the next level. PMP certification is a huge plus.”  Yes, sadly, there are many job specs out there that sound just like this 🙁 .

On a brighter note, I could probably also leverage my ‘hobby’ and look at any agile tool used by a team or a group of teams that claims “to do” agile, and in about 5 minutes find a handful of signs of serious systemic dysfunctions (from the tool alone!).  So, there is actually some practical use for my ‘hobby’.  In any case, I think I have earned the right to say that I know very well what tools can and CANNOT do for you.  And this is why I stand firmly with all the other coaches who use the arguments listed above.

<SIDE NOTE OFF>

Now I would like to come back to the topic of SAFe and set the stage for my questions by stating the following:

High Market Penetration of SAFe:

First of all, let’s take a look at some relevant data recently published on InfoQ, with the original source being the VersionOne 10th Annual State of Agile Survey: while still being a relatively new framework, SAFe has acquired a significant share of the marketplace (23%), while demonstrating the highest rate of growth: “…the largest increase from 19% in 2014 to 27% in 2015…”

 

My understanding of safety that SAFe brings:

I have heard various opinions about the thinking behind the acronym “SAFe”: was it an intention to make it sound phonetically “safe”, or was it just coincidental that the words Scaled Agile Framework, which begin with “S”, “A”, and “F”, make up SAFe?  I don’t know.  And I don’t want to speculate.

But let me share my understanding of what makes SAFe “safe”:

  • SAFe does not seem to be threatening to first-line management. Thanks to its first two layers (Team/Program and Value Stream) and the abundance of processes and roles present in both, everyone can find a place to work.  The probability of being misplaced or losing a job within SAFe is relatively low.  If we all recall what happens when implementing basic Scrum, where teams are expected to become self-organized and self-managed, and where the role of Project Manager is not explicitly discussed, we (coaches, trainers) frequently have to answer the following question, usually coming from managers: “what now happens to my role?”  And of course, there are ways to handle this question properly and give good options to those who ask.  My point is that I don’t expect this question to be asked as frequently with the introduction of SAFe.  Why?  Because SAFe seems to be a good way to harbor many existing management roles (role security).
  • SAFe looks “homey” to senior management.  The SAFe graphic is very rich in colors, objects, lines, layers, and icons that represent roles, groups, departments, and interactions.  At a glance, SAFe appears to be a natural fit and a comfortable habitat for many existing organizational constructs.  SAFe does not challenge or simplify existing organizational design; there are no hints to change or simplify reporting lines or to flatten layers (de-scaling).  No need to have unpleasant conversations with employees (!).  Senior managers who are confident that their organizations are well designed and don’t need any major repairs see SAFe as a safe way to try agility.
  • SAFe does NOT explicitly compete with other agile practices. SAFe uses them all. In fact, the cute yellow smiley squeeze-toy that many folks picked up from the SAFe kiosk in Orlando explicitly says: “SAFe embraces Scrum“. Indeed, at its multiple layers, the SAFe diagram mentions Scrum, Kanban, XP…and many roles, artifacts, ceremonies, and iterations that support all these practices. And this, IMO, makes SAFe really safe in a very special way: if Company X already uses, perhaps inconsistently, some agile practices, it is relatively safe, and actually convenient, for a SAFe consultant to come in and say something like this: “we can help you retain most (if not all) of what you have adopted so far, but it will be much better structured under the overarching umbrella of SAFe”.

 

My understanding of SAFe Partnerships and Strategic Goals:

Here, I am listing only the top few references that I found online.  But the list could be much longer if I spent more time searching.  I personally have attended a handful of webinars where the concepts of SAFe were presented alongside the benefits of tools (by the companies that hosted the webinars).

Please, finish reading the post first and then come back to the links.

Golden Sponsorship by Consultancies (not specialized in Agile):

With TFS/VSTS:

Note: TFS/VSTS are Microsoft products.  Tool design and “logic behind” resemble MSFT Project Plan :)…

With Rally:

With Jira:

With Version One:

With Version One: Beware of the “Triple Taxation” Problem

Just to be clear, for those who may not be as familiar with these tools as I am (you don’t have to share my hobbies 🙂 ): each one of these tools now has a complex “strategic layer” that sits on top of the tool’s “tactical” layer (epics/stories, backlogs, sprints, releases, team views, agile boards, story/task boards, workflow management, etc.), and it is used by Project, Program, and Portfolio Management. At some companies where I have consulted, each one of these layers usually has a manager (Project Manager, Program Manager, Portfolio Manager, respectively), someone who is responsible for data collection and status reporting, just as it was before (or without) the implementation of SAFe.  This tool complexity offers a nice fit for an existing organizational structure.

<SIDE NOTE ON>

It is also not a surprise to anyone that there are many large companies that own tens of thousands of licenses for the above-mentioned tools.  I have consulted at a number of such companies and have seen these tools treated as a “hallmark of organizational agility”.  Please note that very frequently the “best practices of use”, even for agile tools, reside within departments like Control & Governance, the PMO, and Centers of Excellence, where decisions about “what is best” are made in a vacuum and then pushed down onto organizational domains that are thousands of miles away.

<SIDE NOTE OFF>

Here is another safety aspect of SAFe:

SAFe is very safe for client-to-vendor relationships: it does NOT disrupt existing million-dollar (depending, of course, on company size) contracts and license agreements between client companies and tool vendors.  It should be pretty safe, IMO, for a SAFe consultant to come in and say something like this: “if you are using JIRA or Rally or Version One or any other tool that has a Portfolio Management layer in it, it will be very complementary to what we can do for you in terms of agile scaling”.   I think that the links I provided above suggest exactly that.

SAFe seems to be a great complement and strategic ally to some agile tooling companies that have gained a lot of market share of their own.  And it does not matter that JIRA, Version One, Rally, and others may be competitors of each other; they all seem to be great partners of SAFe (I will not speculate on the exclusivity of these relationships, but based on the links above, there is probably none).

Now, having brought to light some relevant market data, shared some personal views on what I consider the “safety factors of SAFe”, and given a perspective on some possible strategic alignments that may exist between SAFe and the industry leaders in the world of agile tooling, I would like to ask the following two questions:

  • First Question: Do you think that the market penetration of SAFe and its adoption success could be attributed to the personal safety of companies’ managers, as I have described above?  Do you feel that the ‘role security’ of first-level management in particular is a significant contributor to the SAFe adoption rate?  I stress this last point because the role of first-level manager is in super-abundance today at many companies.
  • Second Question: Do you think that the market penetration of SAFe and its adoption success could be attributed (at least in part) to its direct or indirect alignment with the industry leaders that build agile tools?  Do you think that “SAFe + XYZ tool” produces a stronger compounded effect on organizations, in terms of SAFe adoption, than SAFe applied alone?

Related Publications about SAFe by Agile Manifesto Co-signers and others:

Also, as a reference, some experience reports about the Spotify “Model”:

From LeSS Toolbox: Causal Loop Diagrams to visualize System Dynamics

Introduction:

When it comes to scaling, there is a common misconception that “bigger always means better”.  This misconception is also traceable to the agile arena, where companies look for ways to expand their agile practices beyond a single organizational domain (e.g., many teams, numerous departments, multiple lines of business, etc.).  Usually, it is existing (inherited) organizational complexity that becomes the main reason why companies look for complex, multi-tiered scaling solutions.  And of course, where there is demand, there will be supply: there are a number of frameworks out there that hand-hold companies to comfortably “embrace” their existing complexity and not feel too uncomfortable about their own internal dysfunctions.

However, not all scaling solutions are as “forgiving” 🙂.  There are some agile frameworks that intentionally expose and boldly challenge organizational deficiencies. One such framework is Large Scale Scrum (LeSS).  In order to set the stage for the rest of this discussion, I would like to summarize a few points about LeSS here.

I also would like to express my appreciation and acknowledgement to Craig Larman (one of the co-founders of LeSS) for helping me deepen and broaden my understanding of organizational design and improve the systems thinking that I have been developing over the years.

 

Brief Overview of LeSS:

LeSS is very easy to understand.  I like to speak metaphorically, so in describing LeSS I sometimes use an analogy with the legendary AK-47 assault rifle, which has the following well-known characteristics:

  • it has very few moving parts and, therefore, its internal friction is pretty low; there are also not too many small pieces that can jam or break
  • it is simple to disassemble, inspect, and reassemble (inspection & adaptation)
  • it is very reliable and dependable under tough conditions (it rarely fails in action)
  • if necessary, it can be modified and “expanded” at low cost/low effort

But there is something else about LeSS that makes its analogy to a weapon (probably, not just to AK) appropriate: it assaults organizational dysfunctions.

LeSS also has two important characteristics:

  1. It is very simple in design and rests fully on the core principles of basic Scrum (effectively, LeSS is the same Scrum as described in the Scrum Guide, but performed by multiple teams)
  2. LeSS teachings rest on the pillars of:
    1. Lean Thinking: “watching the baton, not the runner”, visual management, cadence, time-boxing, managers as teachers, continuous improvement
    2. Systems Thinking: Weinberg-Brooks’ Law, queueing theory, the indirect benefits of managing batch size and cycle time, being customer-centric, and explaining the differences between local and system optimization

Thanks to these two key characteristics, LeSS is a very powerful mechanism that helps one see an organization systemically/holistically, while identifying and exposing (an analogy to a high-power rifle scope is suitable here) the pain points that need to be addressed.

As a framework, LeSS is lean and transparent. It does not have any “secret pockets” or “special compartments” where organizational problems can find safe haven. No dysfunctions escape the sharp focus of LeSS: ineffectively applied processes or tools, ill-defined roles and responsibilities, unhealthy elements of organizational culture, and other outdated norms all get vividly exposed when using LeSS. Interestingly, while LeSS is a scaling framework that allows scaling up (rolling up) the efforts of multiple Scrum teams, it requires organizational de-scaling to be performed first.  The metaphor that I often use here is: “you can get more with LeSS”.  To put it another way, in order to build up Scrum effectively, an organization must remove whatever extra/unnecessary “muda” (waste) it has already accumulated that gets in the way of scaling Scrum.  It is almost like this: LeSS prefers a thin but very strong foundational layer over a thick, convoluted, but unstable foundational layer, with the latter usually being a characteristic of an orthodox, archaic organizational design.

Another metaphor that I use to describe LeSS is that it is an organizational design mirror.  By adopting LeSS, an organization sees its own reflection and, depending on its strategic goals and appetite for change, decides on the necessary improvements. Similar to a person who takes his personal fitness training seriously and uses a mirror for “course correction”, an organization may use LeSS to decide whether any further re-shaping or “trimming” is required to get to the next maturity level.

LeSS is also a great guide to technical excellence.  I have used LeSS teachings extensively to coach the importance of continuous integration, continuous delivery, clean code, unit testing, architecture & design, and test automation, as well as some other techniques that make agile development so great.  LeSS stresses that mature engineering practices are paramount for effective adoption of agile across multiple organizational domains, not just IT.

 

Discussion

So, how can an organization take advantage of both the simplicity of the LeSS construct, on the one hand, and its deep systemic views, on the other, to improve its organizational agility beyond a single team? How can the principles of lean and systems thinking, together with an understanding of ‘beyond-first-order’ system dynamics, be leveraged to implement true Scrum without reducing, minimizing, or downplaying the importance of its core values and principles?

As an organizational and agile coach, and someone who has been using LeSS extensively in his daily coaching work, I frequently witness situations in which companies have to deal with this serious dilemma.  Here, I want to share the magic “glue” that helps me bring my thoughts together and deliver them to my clients.  This “glue” is one of the most effective tools that I have discovered for myself inside the LeSS toolbox: Causal Loop Diagrams (CLDs).

CLDs are a great way to graphically illustrate cause-and-effect relationships between various elements of an organizational ecosystem.  CLDs help me effectively uncover second- and third-order system dynamics that may not be as apparent to the naked eye as first-order dynamics.  CLDs help me brainstorm complex organizational puzzles and conduct deep analysis of system challenges.  Ultimately, I have found that CLDs are a great way to communicate ideas to my customers, particularly to senior leadership.

Here are some elements of CLDs that I use in my graphics (a small data-structure sketch follows the list):

  • Goal – the high-level, overarching/strategic goal that needs to be achieved
  • Variables – system elements that influence other system elements (other variables)
  • Causal links – arrows that connect two related variables
  • Opposite effects – "O" annotation near an arrow; suggests that the effect of one variable on another is opposite to what could be expected
  • Delayed effect – "||" annotation that interrupts a causal link (arrow); implies that the effect of one variable on another is delayed
  • Extreme effects – represented by a thick arrow; one variable has an extreme (beyond-normal) effect on another
  • Constraints – "C" annotation near an arrow; implies that there is a constraint on a variable
  • Quick-fix reactions – "QF" annotation near an arrow; an action that brings about a short-term, lower-cost effect
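For readers who like to capture a diagram as text before drawing it, below is a minimal sketch in Python (my own illustration; the CausalLink and CLD names are hypothetical and not part of any LeSS or CLD tool) of how the elements above could be recorded as data and printed back for review.

```python
# Minimal, illustrative sketch only: a data model for the CLD elements listed above.
# The class and field names are hypothetical; any CLD drawing tool could replace this.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CausalLink:
    source: str                # upstream variable
    target: str                # downstream variable
    opposite: bool = False     # "O"  annotation: effect is opposite to what is expected
    delayed: bool = False      # "||" annotation: effect shows up after a time lag
    extreme: bool = False      # thick arrow: beyond-normal effect
    constraint: bool = False   # "C"  annotation: link represents a constraint
    quick_fix: bool = False    # "QF" annotation: short-term, lower-cost reaction

@dataclass
class CLD:
    goal: str                                       # overarching/strategic goal
    links: List[CausalLink] = field(default_factory=list)

    def add(self, link: CausalLink) -> None:
        self.links.append(link)

    def describe(self) -> None:
        """Print every causal link with its annotations, for quick review."""
        print(f"Goal: {self.goal}")
        for link in self.links:
            tags = [tag for tag, on in (("O", link.opposite), ("||", link.delayed),
                                        ("extreme", link.extreme), ("C", link.constraint),
                                        ("QF", link.quick_fix)) if on]
            print(f"  {link.source} -> {link.target} [{', '.join(tags) or 'direct'}]")
```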

 

At this point, I would like to provide an example of using CLDs to visually illustrate the second- and third-order dynamics between key system variables of a practice that I often see cause harm and unrest to organizations: performance-driven, discretionary monetary incentives.

I would like to follow the process of interaction between system variables as they come into play with one another and uncover the impact they have on the overall system.

Every year, a company (hypothetical Company X) has to distribute a large sum of money to many of its employees in the form of discretionary bonuses.  In order to make the decision-making process less subjective, the company ties it to employees' individual performance reviews and appraisals.  People who have demonstrated better performance get more money; people who have demonstrated poorer performance get less (or nothing).  This requires that every employee gets evaluated by her line manager, usually twice a year, at which time an employee gets some rough idea about "how much she is worth as a resource".  This serves as a guide to how much discretionary money an employee might expect to get as a bonus.  While on its surface the process of performance evaluations and appraisals may seem more objective than a line manager simply deciding on his own, it is still very subjective, as an employee's opinion is disregarded when decisions are made.  Furthermore, the process is harmful and causes deterioration of individuals' morale and relationships on multiple fronts.  The undesirable effects and short-/long-term damage of performance evaluations and appraisals have been studied for years; plenty of research and statistical data is available today.   If a reader is not familiar with this topic or requires additional background information, he may refer to the following resources prior to proceeding:

 

Moving along with this discussion, I would like to highlight the following three downstream "system variables" that are directly (first-order dynamics) impacted by individual performance reviews.  This type of system-variable interaction is mainly observed among technology groups.  Once we understand the first-order dynamics, we shall proceed to some other downstream ("beyond first order") variables.

 

Employee Happiness Factor

Many research studies have shown that employees don't like to be appraised.  An appraisal is equivalent to slapping a price tag on someone and is hardly an objective process, as the only opinion that really matters is that of a line manager.  Yet, the official version at almost any company is that an appraisal helps an employee grow and mature professionally and offers a way to improve her individual performance towards some arbitrarily set target.  Truth be told, if the intent of appraisals were to help employees grow and continuously improve, the process would not be implemented once or twice a year, but rather more frequently, in ways that would allow an employee to make necessary course corrections more iteratively.  After all, why wait six months to tell a worker that she needs to improve?

At the time of appraisal, a manager delivers to an employee her final and practically undisputed decision.  An employee has practically no effective way to challenge or dispute such a decision.  Frequently, even the line manager does not have control of the process (although this is rarely admitted): he or she is presented with a fixed "bag of cash" coming from management above, and this bag somehow has to be distributed among lower-ranking workers.   And to be fair to line managers who are not delusional about the dysfunction they have to entertain, most of them also dislike the process, as it leaves them alienated and resented by their own employees.

 

So, as time goes by, employees become less and less pleased with evaluations and appraisals.  The impact may not be observed immediately, because it usually takes time for an employee to mature to the point where she becomes conscious of, and begins to comprehend, the unfairness and harm of the process.  (Of course, exceptions exist among people who have longer experience dealing with this process and understand its ineffectiveness and harm.)

 

While leveraging CLDs in my discussions with senior management, I use the following graphic representation and annotation to convey the concept:

This graphic suggests that annual appraisals have a delayed and opposite effect on employees' happiness.

 

Peer to Peer Support

Peer-to-peer support, willingness to share knowledge with colleagues, collective ownership of assignments, and shared responsibility for deliverables – these are the hallmarks not only of feature-team dynamics but of any agile environment.  In order for employees to be mutually supportive, they must operate in a non-competitive environment, where they don't view each other as competitors or rivals.  This is practically impossible to achieve when every employee perceives another employee (at least within the same salary ranking tier) as a competing bonus collector.  And this is exactly what is observed in environments where bonuses are distributed based on individual performance: employees compete for the same, limited pool of cash.  But everyone cannot be a winner: even in a group of the brightest individuals working together, someone within that group would have to be ranked higher and someone lower (and, by the way, people are frequently told this upfront).  How could we expect people to be supportive of each other if, effectively, the underperformance of one employee and her inability to collect extra money increases the chances of another employee bringing home more cash?  Performance appraisals and discretionary money drive employees apart, not together.

Again, the adverse results of appraisals may not be immediate: pain points become more obvious after bonuses are actually paid (end-of-year/early-next-year) – this is when employees start developing resentment and jealousy towards each other over paid bonuses.

 

While leveraging CLDs in my discussions with senior management, I use the following graphic representation and annotation to convey the concept:

This graphic suggests that annual appraisals have a delayed and opposite effect on peer-to-peer support.

 

Both variables above directly (first order) define employees' Intrinsic Motivation to work and their willingness to stay with a company.  After all, can we expect that an unhappy employee, while being in constant competition with his peers and deprived of an opportunity to safely experiment, would want to dedicate himself to a company for a long time?  Probably not.  As a result, Employee Retention should not be expected to be high, and, as has been seen in many cases, good employees always leave first.

 

While leveraging CLDs in my discussions with senior management, I use the following graphic representation and annotation to convey the concept:

This graphic suggests that both employees' happiness and their willingness to support each other are directly related to their intrinsic motivation to work and their willingness to stay with a company; as a downstream effect, this increases employee retention.  The opposite is true as well: lowering the values of the upstream (left-side) variables will lower the values of the downstream (right-side) variables.
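Continuing the hypothetical sketch introduced earlier (reusing the same CausalLink and CLD classes), the first-order links described so far could be recorded like this; the variable names are simply the ones used in this post.

```python
# Continuing the illustrative sketch from the earlier block (reuses CausalLink and CLD).
cld = CLD(goal="Distribute discretionary incentives")

# Delayed, opposite effects of appraisals on the two upstream variables discussed above.
cld.add(CausalLink("Annual appraisals", "Employee happiness", opposite=True, delayed=True))
cld.add(CausalLink("Annual appraisals", "Peer-to-peer support", opposite=True, delayed=True))

# Direct, same-direction effects on intrinsic motivation and, downstream, on retention.
cld.add(CausalLink("Employee happiness", "Intrinsic motivation"))
cld.add(CausalLink("Peer-to-peer support", "Intrinsic motivation"))
cld.add(CausalLink("Intrinsic motivation", "Employee retention"))

cld.describe()
```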

 

 

“Environmental Safety” and Desire to Experiment

Innovation and experimentation are paramount for success in software development. This is what drives feature teams towards improvement.  Scrum, for example, requires continuous inspection and adaptation.  It is expected that, while experimenting, feature/Scrum teams may run into roadblocks or have short-term failures, at which point they will learn and improve.  But in order to be willing to experiment and take chances, teams need to be sure that they are safe to do so.  In other words, they need to be sure that they will not be judged and scrutinized for their interim failures.  Such "environmental safety" will always be jeopardized by individual performance appraisals. Why? Because the individual success (high individual performance) of an employee is defined by her ability to precisely meet individual goals, set in stone early in the year.  The need to follow a "script" precisely kills any desire of an employee to experiment.  After all, why would a person want to take any chances if her failures will be perceived by line management as underperformance?

Since appraisals make working environments unsafe and kill individuals' desire to experiment, as soon as an employee is presented with her annual goals, she reacts self-protectively by starting to "work to the script", while trying to document every personal achievement "for the record" (a.k.a. "CYA").

 

While leveraging CLDs in my discussions with senior management, I use the following graphic representation and annotation to convey the concept:

This graphic suggests that when employees feel safe and are not afraid to experiment, innovation and experimentation take place in the workplace.  Inversely, a lack of safety in the workplace and the absence of a desire to experiment reduce the chances of innovation and improvement.

In the sections below, I would like to take a closer look at system dynamics that are beyond the first order of interaction, by tracing some additional downstream system variables:

 

Team synergy & stability:

In Scrum, we would like our teams to be stable and long-lived.  We would like to see team members enjoy being a part of the same team, and do so as happy volunteers, not as prisoners constantly looking for opportunities to escape.  In fact, the best feature teams known have been created as a result of voluntary self-organization, not as a result of a managerial mandate.

Why do we want our Scrum/feature teams to remain stable?  Here are some good reasons:

  • Collaborative environment and desire to work together
  • Shared domain expertise and cross-pollination of technical knowledge
  • Predictable team velocity and the ability to plan/forecast more accurately

 

So, how do team synergy and stability get impacted by performance evaluations and appraisals? Here is how this happens, indirectly:

Via low Employee Retention – as employees leave a company, feature teams disintegrate.  This brings together new team members who have never worked together and who require time before they can 'form, storm, and norm'.  As feature teams get dis- and re-assembled, velocities drop and become less reliable, and system variability increases (estimation becomes less accurate).  The effect is usually immediate.  In my personal experience, I have seen many feature teams breaking loose and falling apart shortly after companies announced annual bonuses.

While leveraging CLDs in my discussions with senior management, I use the following graphic representation to convey the concept:

This graphic suggests that high employee retention will lead to elevated team synergy and stability.  Inversely, low employee retention in a workplace lowers teams' synergy and stability.

 

Via high Internal Competition and Rivalry – once employees realize that they have to compete with their own teammates for discretionary dollars, collaboration deteriorates dramatically.  Individuals stop supporting each other in pursuit of common goals. Instead, everyone strives to be a superhero and solitary performer, trying to demonstrate her own efficiency and hyper-productivity to a manager.  Everyone wants to look better than other peers and teammates.  The race to demonstrate the best individual performance has a high cost: it happens at the expense of overall team performance.   Since collaboration, swarming, and shared ownership of work are critical for healthy Scrum, the downstream effect of performance evaluations and appraisals now becomes clear: lowered team synergy and stability.

While leveraging CLDs in my discussions with senior management, I use the following graphic representation and annotation to convey the concept:

 

This graphic suggests that internal competition and rivalry will have an extreme and opposite effect on team synergy and stability.

 

Healthy Scrum Dynamics:

There are many known system variables that interact with one another and define the effectiveness of basic Scrum.  Assuming that most readers of this post are familiar with Scrum, and in order to keep my focus on other important downstream system variables, I am going to leave detailed discussion of basic Scrum dynamics out. It will suffice to mention that the following classic Scrum-specific variables always have to be considered: feature velocity, # of defects, rate at which developers are hired (low-cost vs. common), # of low-skilled developers, cash supply, ability to guide and improve the system, etc.  If the reader is interested in exploring this in depth, the "Seeing System Dynamics: Causal Loop Diagrams" section of the https://less.works site describes these system dynamics well, with the use of CLDs.

However, when leveraging CLDs in my discussions with senior management, I still use the following generalizing graphic representation and annotation to convey this common-sense, overarching concept:

This graphic suggests that team synergy and stability lead to healthy Scrum dynamics and that the feedback is positive (a value increase on the left leads to a value increase on the right).  In my experience, the effect is sometimes delayed; the time lag is usually due to previously gained momentum.
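The second-order links discussed in this and the previous subsection could be appended to the same hypothetical structure; note the extreme flag for internal competition and the delayed flag for the momentum effect mentioned above.

```python
# Continuing the same illustrative sketch: second-order links around team synergy.
cld.add(CausalLink("Employee retention", "Team synergy & stability"))
cld.add(CausalLink("Internal competition & rivalry", "Team synergy & stability",
                   opposite=True, extreme=True))
cld.add(CausalLink("Team synergy & stability", "Healthy Scrum dynamics", delayed=True))
```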

So far, we have used CLDs to explore system dynamics that primarily impact technology teams.  At this point, I would like to shift my focus to the business side of the house and explore the part of system dynamics that involves customers.  In particular, I would like to provide some examples of how CLDs can expose the adverse impact of individual performance appraisals and discretionary monetary incentives on Product Ownership in Scrum.

 

Identification of GREAT Product Owner:

Finding a good candidate for the role of Product Owner has been one of the most challenging tasks in Scrum. Why?

The role of Product Owner combines certain characteristics that are not easily found within the same individual, and it is in organizations of high organizational complexity and Taylorian culture where this challenge is seen most. On one hand, a Product Owner is expected to have enough seniority and empowerment to make key strategic business decisions.  On the other hand, a Product Owner is expected to get intimately involved in day-to-day, and sometimes hour-by-hour, interaction with technology groups.  When these two sets of characteristics come together in the same person, we hit the jackpot: we get a great Product Owner – a person who is both Empowered and Engaged.  But, truth be told, it is often challenging to identify a person who possesses both "Es".  In most Orange organizations (the predominant color of most modern corporations, as per the Laloux culture model), the definition of every job includes a fixed set of responsibilities that individuals are obligated to fulfill.  If we look at most job descriptions, as they are defined by HR departments of Orange companies, we will hardly ever see a job spec that has "slack time" for a person to take on the responsibilities of Product Owner in addition to his primary job, let alone a job spec that is fully dedicated to the role of PO.  For most organizations, Product Owner is still not a well-defined role, and as such, it is not perceived by employees as a step towards career advancement.  Today, many organizations that use Scrum have to experiment with the role of PO by looking for the right individuals internally.  Individuals who step up for the role of Product Owner have to make a conscious decision, with full acknowledgement that they will be taking on a very wide spectrum of new responsibilities.  For most people, this is risky because, effectively, it means that attention and focus on primary activities (as per job specs) will be diluted by secondary activities – fulfilling the role of PO.  Of course, this problem could be easily mitigated with full backing and support from senior leadership and HR, by redefining job specs and explicitly recognizing the criticality of the Product Owner role.  But this hardly ever happens (and when it does, it is mostly at product development companies).

It is hard to argue with the notion that people have to be recognized for the work they do.  I doubt that anyone would object to the following statement: nobody should be working two jobs for the same paycheck.  People have to "feel safe" about stepping into new territory, learning new activities, and developing work dynamics that they have not experienced before.  This brings us back to the same concept that we discussed earlier, when we looked at technology groups: individuals need to feel safe in order to be willing to experiment with a new role. It would be unreasonable to expect an employee to take on more work that would not be "counted in" when the person gets evaluated for his contribution to the organization.

So again, while leveraging CLDs in my discussions with senior management, I use the following graphic representation and annotation to convey the concept:

As the graphic suggests, when employees feel safe and are not afraid to experiment, it will be less difficult to identify a good Product Owner. Inversely, the opposite is true as well: a lack of safety and an inability to experiment make the process of Product Owner selection much more challenging.

At this point, it is worth mentioning one very common Quick Fix that organizations frequently make to compensate for shortcomings in finding a good Product Owner:

Empowerment usually implies that a person occupies a senior organizational position.  As such, a business person's career has progressed beyond a certain point; she no longer has enough bandwidth (nor desire!) to deal directly with technology. Once she reaches a certain level of seniority, a person has "bigger fish to fry", and collaborating with individual technology (feature) teams is no longer her priority. So, while still retaining one of the "Es" (Empowered), the person is not able to demonstrate the other "E" (Engaged).  In order to compensate for the missing "E", another person is "inserted" into the system to fill the gap between the real Product Owner and the technology teams.  This poorly defined (undefined) role is sometimes labeled "PO-proxy" – a surrogate who tries to act as the PO but does not have the power. This role is usually occupied by someone from a lower organizational layer: a business analyst, a systems analyst, or another person – someone who is more accustomed to working directly with technology and for whom the activity itself is not perceived as "below pay grade".  This creates a serious dysfunction in the Scrum operating model, as communication between a true customer (an empowered Product Owner) and technology is now hindered: the surrogate role of PO-proxy usually lacks the strategic/holistic product vision and the power to make important business decisions within the short timeframes required by Scrum.

It is worth noting that the functional expertise of a business analyst or systems analyst is welcomed in Scrum and usually resides within teams (although single-specialty individuals are viewed as less valuable than multi-skilled, a.k.a. T-shaped, individuals).

The delegation of responsibilities described above is problematic because it artificially creates unnecessary communication layers between end customers and technology. This type of organizational design causes a variety of additional dysfunctions (miscommunication, hindered information flow, confused priorities, etc.) and is therefore strongly not recommended.

 

While leveraging CLDs in my discussions with senior management, I use the following graphic representation and annotation to convey the concept:

As the graphic suggests, difficulty identifying a good candidate for the role of Product Owner creates the need to look for quicker and cheaper solutions; introducing a powerless surrogate role of PO-proxy is a commonly seen, undesirable 'Plan B'.

The reason this fix is "quick" is that it usually does not take long for a real Product Owner to realize that he or she is not able (or willing) to handle the additional responsibilities of the role.  In my practice, the risk of losing a real Product Owner and getting a proxy instead did not take long to materialize: usually within a few weeks after Scrum was introduced.

 

Effectiveness of Product Backlog management:

Effective Product Backlog management is paramount in agile product development.  This is a fundamental concept introduced in simple Scrum, and it remains just as valid in scaled Scrum.  In fact, when an organization scales its Scrum by involving multiple technology teams, Product Owners, and lines of business, effective product backlog management becomes even more critical: work coordination, resource management, impediment removal, alignment of business priorities, etc.

As we can guess, effective product backlog management, including work prioritization, story decomposition, etc., can be done most effectively with the participation of a real Product Owner.  Conversely, if an organization is missing the critical figure of the Product Owner, product backlog management will become ineffective.

While leveraging CLDs in my discussions with senior management, I use the following two graphic representations and annotations to convey these two related concepts:

 

As the graphic suggests, product backlog management suffers from both a lack of true Product Ownership and the presence of ineffective surrogate roles.  In my personal experience, the effect is usually extreme.

 

Healthy Scrum dynamics (overall):

At this point, I usually provide senior managers with a partial summary of the CLD, by showing how 'healthy Scrum dynamics', while sitting much further downstream from individual performance evaluations, appraisals, and bonuses, are still impacted by the latter group via second-order dynamics (through secondary variables). CLDs do a great job of bringing many aspects of systems thinking together and presenting them visually.

Below is the combined view of how the four upstream system variables that we discussed earlier relate to 'healthy Scrum dynamics':

As the graphic suggests, the nature of the effect (positive vs. negative), the time of onset (immediate vs. delayed), and the impact (normal vs. extreme) can be unique for each variable.

 

Scaling Scrum and Organizational Agility:

In this section, I want to describe how, with the use of CLDs, I bring my discussions with senior management to a culmination by painting a bigger picture of organizational agility.

For most large organizations, success by a single team is not the end goal.  Organizations look for "bigger" solutions.  And their reasons are obvious: huge IT departments, many lines of business, many customers, multiple competing priorities, multi-year strategies, and many other elements that make organizational needs nothing less than huge.  Luckily, most of the organizational leaders whom I have met in my practice understand that the ability to effectively scale basic agile frameworks (e.g., simple Scrum) will ultimately improve organizational agility and ensure that both customers and employees are happy.

Below is the graphic that summarizes this last, ‘common sense’ relationship:

 

Tying it all back:

What I would like to do at this point is take one step back and describe what it takes to scale Scrum effectively:

This is where another powerful concept of LeSS comes to the rescue: in order to scale Scrum, an organization must be de-scaled first (please refer to "Less Agile or LeSS Agile?" by Craig Larman).  In other words, to construct a model of Scrum performed by multiple teams, an organization must remove (deconstruct) its existing organizational complexity first.  As stated at the beginning of this post, scaling does not imply making things more complex, but unfortunately, this key concept is not always well understood.  Mistakenly, many people still think that in order to support existing organizational complexity they need to look for multi-tiered, complex agile frameworks that will provide "room and purpose" for every existing organizational element: roles, processes, tools, and techniques.

The analogy that I frequently use to deliver the concept of scaling to senior management is that building a skyscraper on a wobbly, porous foundation is dangerous because it will eventually crumble. The surface must be cleaned up first, flattened, and hardened; only then will there be a chance to build something tall and strong.

Below is the graphic that summarizes this concept:

 

At this point, the most common request I get from senior leaders is to elaborate on what I mean by 'de-scaling' – and this is my favorite topic.  This question is natural, but I usually resist answering it immediately: the topic is inherently large, complex, and, at times, inflammatory, and therefore I request a dedicated discussion for it.

However, I still produce a CLD graphic illustration of the concept, as shown below, and offer a follow-up discussion to explore the details:

Ultimately, when such a discussion is held, I always tie it back to the present discussion and explain why the goal "distribute discretionary incentives" becomes so trivial once system/organizational waste is identified and removed.  This discussion is usually long, and it requires challenging many outdated organizational norms and principles that some senior leaders are not willing to give up easily.

 

The CLD graphic illustration is a high-level generalization of the concept of the opposite (inverse) relationship between the two system variables:

As mentioned above, the variable in the dotted circle can be decomposed further into many smaller system variables that have upstream and downstream relationships with one another.

 

Summary:

The best summaries are short.  Therefore, I would like to summarize this post briefly, with one comprehensive CLD that brings together the variables, relationships, and annotations discussed so far:

Although it may take hours, or sometimes days, of brainstorming to produce a CLD, once complete it becomes a great communication vehicle.   A diagram like this one can be created in real time, in collaboration with others, on a whiteboard.  Alternatively, it can be created ahead of time by a coach or trainer and then used as a 'cheat sheet' when appropriate.  CLDs can also be shared with a wide audience ahead of time, to solicit questions and provoke interesting discussions at a later point.
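One practical option for sharing such a diagram ahead of time is to emit it as plain text that a graph renderer can draw. The sketch below (still the same hypothetical structure from the earlier blocks) prints Graphviz DOT output; any DOT-compatible viewer could render it, though this is only one of many possible approaches.

```python
# Illustrative only: turn the hypothetical CLD structure into Graphviz DOT text,
# which can be pasted into any DOT-compatible renderer and shared with an audience.
def to_dot(diagram: CLD) -> str:
    lines = ["digraph CLD {", f'  label="Goal: {diagram.goal}";']
    for link in diagram.links:
        label = "".join(tag for tag, on in (("O", link.opposite), ("||", link.delayed),
                                            ("C", link.constraint), ("QF", link.quick_fix)) if on)
        width = 3 if link.extreme else 1
        lines.append(f'  "{link.source}" -> "{link.target}" '
                     f'[label="{label}", penwidth={width}];')
    lines.append("}")
    return "\n".join(lines)

print(to_dot(cld))   # paste the printed text into a Graphviz/DOT viewer
```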

Global SCRUM GATHERING® Orlando 2016

It was a great event, with more than 1200 people attending from all over the world, and tons of great presentations and collaborative sessions.  Below are some captured moments with my peers and colleagues – the people who made my personal experience at the gathering so rich and memorable.  The coaches and trainers of the Scrum Alliance have always been the main driving force behind this and many similar agile events around the globe.

With Coaches and Trainers during Pre-Retreat


 

Graphic work produced at the Gathering


APRIL 6TH-8TH: CERTIFIED LESS PRACTITIONER COURSE WITH CRAIG LARMAN | NYC



Please join the Large Scale Scrum (LeSS) Meetup group in NYC


Day 1


Day 2


Day 3



 

LeSS_Course_Logistics_Requirements

Coach’s Experience Report: Putting LeSS Teachings to Work

 

The following Coach's Experience Report describes various teachings of the Large Scale Scrum (LeSS) framework in the context of their practical use by an Agile Coach. What follows does not represent a single case with a single organization or company; rather, experiences with multiple organizations, under different conditions, are described. By the same reasoning, not every LeSS teaching described below has been experimented with at every organization whose experience is drawn upon in this report.


Coach's Discovery: Scrum teams have been struggling to gain autonomy and independence due to close monitoring and constant involvement of line management. Teams' decisions made during sprint planning were continuously overruled by management. Mandatory requests coming from management were frequently in conflict with priorities coming from Product Owners. Teams were unable to conduct sprint retrospectives privately and safely, with management insisting on being present in ceremonies and/or reviewing retrospectives' outcomes.

LeSS Teaching by Coach: Taylorian carrots and sticks used to be effective during the American Industrial Revolution, when they were applied to people performing mundane, unskilled, manual labor. But in the modern work settings of the 21st century, they (for the most part) don't work when applied to knowledge workers. Command & Control behaviors suppress individuals' willingness to explore and innovate, discover and experiment; they demotivate and demoralize workers, and therefore lower productivity.

Senior management needs to order first-level line management to step back and allow teams to norm and gain autonomy and independence. Close oversight and supervision will not allow teams to fully explore their potential and achieve higher productivity.

Overall Result: Positive. Many teams were liberated from Type X management and treated with Type Y management instead.


Coach's Discovery: Team members were expected to work closely together, share knowledge, and help each other grow complementary expertise. Teams were also asked to deliver together, "as a whole", at the end of each sprint, and to demonstrate shared ownership and swarming during sprints.   Team members were expected to take turns in the driver's seat during showcases, to gain equal visibility in the eyes of Product Owners and customers.

But at the same time, each team member was being stack-ranked, during an individual performance appraisal, against his/her own team members, as well as against members of other, neighboring teams. Each ranked individual understood that he/she competed with other teammates for discretionary money that would come in the form of a bonus at the end of the year. As mid-year and end-of-year reviews approached, team dynamics worsened and bad behaviors were observed inside practically every team: less collaboration, emphasis on private ownership and individual deliverables, selfishness, blame-gaming, and finger-pointing. As a result, teams' velocities dropped, quality went down, and customer satisfaction was lowered.

LeSS Teaching by Coach: "The idea of a merit rating is alluring" (as per E. Deming). Individual performance appraisals linked to monetary incentives lead to demotivation, loss of enthusiasm, and bad behaviors, such as internal competition, rivalry, selfishness, and organizational degradation. Linking individual performance appraisals to the distribution of discretionary monetary incentives, such as bonuses and salary increases, worsens the situation even further.

Overall Result: Mostly Negative. Line management did not accept the fact that merit ratings and individual appraisals had such a harmful downstream effect on team dynamics and caused organizational degradation. Senior management seemed to understand that the problem existed and was serious, but was still too hesitant to 'rock the boat', as many fundamental organizational norms and policies, many of which were set by HR, would be challenged. In rare situations, however, management was able to emphasize team performance and collective results as the main attributes of individual performance/results.


Coach's Discovery: Recently trained teams fell under close surveillance and scrutiny by line management. Line management viewed agile/Scrum as a magic wand that would miraculously resolve all their existing problems. Management started paying too much attention to metrics (e.g., velocity) and set unreasonable expectations for teams' productivity during initial sprints. When teams initially failed, management blamed agile/Scrum for the failures, instead of treating it as a "mirror" that just painfully reflected existing broken processes.

 

LeSS Teaching by Coach: In Scrum, when a team has just been trained and set sail, private sprints with a "fake" Product Owner (if a real one has not yet been identified) are recommended. Why? A team may want to practice/dry-run Scrum dynamics (roles, artifacts, ceremonies, feedback loops) but may not necessarily want this information to be publicly disseminated across the organization, to avoid premature judgments and "mis-measurements of success". A team is not obligated to announce to the rest of the world that it is experimenting with new ways of working UNTIL everyone who is involved is ready and comfortable.

Overall Result: Positive. Teams no longer viewed the last day of Scrum training as a commitment point at which they had to announce to the rest of the organization that "they were agile now". Teams became more comfortable transitioning into new dynamics, and did so gradually, "playing it safe" before publicizing their intentions or results. In cases where a real Product Owner was not immediately available, teams used a surrogate to play this role (e.g., a senior BA or SME).


Coach's Discovery: A team was experiencing a lot of distraction coming from stakeholders and customers. Instead of going to the Product Owner with requests, customers went directly to the team. Frequently, competing priorities arose: a solution that addressed one request conflicted with a solution that addressed another. The Product Owner took advantage of his overly proactive clients, stepped back, and did not do his job.

LeSS Teaching by Coach: When it comes to feature (Scrum) team communication, there are three main types:

  • Requests: From Customers/Users and Product Owner
  • Prioritization: From Product Owner to Team
  • Clarification: Between Customers/Users and Team (also can come from PO)

Effectively, this allows business requests to flow from various areas/departments of an organization to the Product Owner, and then to be prioritized and fed to the team/backlog by the Product Owner himself, in a controlled fashion. While a team is shielded from Customers'/Users' ad hoc, and sometimes competing, requests, it still has the right to go to Customers/Users for clarification.

Overall Result: Positive. The team learned how to say 'NO' to customers and defer their requests to stakeholders. The Product Owner was 'forced' to step up to the plate and practice one of his key responsibilities – being the voice of the customer, facing the team.


Coach's Discovery: An organization had wide geographic distribution, with technology resources present in India, Eastern Europe, and South America. The long-standing goal of outsourcing was to find the cheapest resources for a single specialty. For example, front-end Java developers were all sourced from India, Flash and UI experts from Eastern Europe, and architects from Argentina. This caused a lot of broken communication and unnecessary coordination among feature team members: language barriers, geographic and time-zone distribution, etc.  Also, end customers were in the US, which further added to the complexity. The inefficiency of highly distributed teams trying to coordinate ceremonies and optimize time overlap was painfully noticeable.

LeSS Teaching: Given today's global marketplace, geographical distribution of skilled workers is practically inevitable for most companies. However, when it comes to teaming, it is critical to avoid geographical distribution within a single team. Companies should support co-location of members of the same cross-functional team (note: even with the latter approach, doing this with componentized teams presents other problems). Also, bringing business (stakeholders, SMEs, Product Owners) closer to teams is highly desirable.

Overall Result: Partially Positive. The leadership agreed to reconsider the current geographic co-location strategy. Having a group of single-specialty experts located in one place, communicating across many time zones with another group of single-specialty experts, became a less preferred option. The leadership started to see more value in co-locating individuals based on their need to work together on the same features. At first, it was more expensive to procure certain expertise where it was not as abundant and its cost was higher (e.g., a Flash developer in India), but over time, increased efficiency and a higher rate of business value output by each team made the changes worthwhile.


Coach's Discovery: During the initial stages of agile transformation, senior leadership came to the realization that restructuring the organization to improve overall organizational performance would require some internal "waste management" activities. Specifically, it became clear that certain processes, artifacts, and roles were redundant, unnecessary, and costly. As such, they had to be reduced or removed from the system altogether. This raised a particular concern for senior management, as removing certain elements of the organizational structure could become politically inflammatory. For example, an excessive number of business analysts and project managers (PMO) represented two pretty thick organizational layers that were primarily focused on producing heavy documentation and less-than-reliable reporting, respectively. Reducing these two layers would effectively mean downsizing certain individuals – something that could loudly resonate across the rest of the organization.

LeSS Teaching: Organizational leadership needs to understand the difference between Local Optimization (e.g., improving the performance of a single organizational layer, functional silo, or reporting structure) and System Optimization (e.g., improving the performance of an entire system). Looking at an organization from the standpoint of System Optimization, an organization should care to provide Job Security to its employees, not Role Security. Ultimately, the goal of any organization is to continuously strive towards improving its efficiency, not to provide a safe haven for roles that make it less efficient.

Further, from a System Optimization perspective, it is wiser not to have individuals who are all just specialists in a particular field or domain; an organization needs a good number of generalists to avoid workflow management dysfunctions and bottlenecks. The presence of T-shaped individuals is highly desirable.

Overall Result: Positive but WIP. While removing organizational waste, senior leadership tried to strike a happy balance between simplifying the organizational structure and removing redundant/unnecessary roles on one hand, and providing job security and alternative career paths for knowledgeable and highly qualified individuals on the other.


Coach's Discovery: After more than a dozen sprints, a group of feature teams still could not show any progress in their ability to deliver potentially working software at sprint end.   At the end of each sprint, teams still produced code that needed additional testing, test automation activities, integration with other teams' code, extensive UAT, and other "Undone" activities. The Definition of Done (DoD) initially proposed at the time the teams were trained did not change much, and before releasing to production, teams still required at least one 'hardening' sprint.

LeSS Teaching: As a feature team matures, it should gradually extend its Definition of Done, bringing itself closer to the point where, upon finishing a sprint, its deliverable is production-ready.

Overall Result: Partial Success. The teams were encouraged to identify, during retrospectives, at least one or two elements of the DoD that were either missing or needed improvement. Based on this, the teams were able to gain momentum and, with every subsequent sprint, improve the production readiness of their code.   However, certain organizational impediments still prevented teams from delivering faster: external dependencies on organizational layers that were "outside of the agile sphere of influence" prevented the DoD from becoming fully inclusive.


Coach's Discovery: Individuals who had been elected (or appointed) to the role of Product Owner did not have time to do the job. They were either too senior within the organization "to deal with IT directly" or already had too much on their plates to take on yet another full-time role. This created a serious gap that made Scrum extremely ineffective and overall agility low. To fill this gap, Product Owners found other people within their own reporting structures to fulfill the role. They delegated most PO responsibilities to this new, artificially created role of Product Owner "proxy".   This brought about a lot of dysfunction and hindered the process, as the "proxy" did not have the same level of empowerment as a real PO.

In some situations, existing terminology that had a completely different purpose and meaning was overloaded. For example, the term Area Product Owner (a LeSS term) was used to describe a role that did not fit the definition of an Area PO. Effectively, the Area PO term was used to describe the role and behavior of a PO "proxy".

LeSS Teaching: A business person who represents a single area of a complex product is called an Area Product Owner. An Area PO is in close communication with other Area POs responsible for other areas of the same product, as well as with the (overall) Product Owner who oversees the entire product. The Area Product Owner is not to be confused with the ill-defined role of Product Owner proxy. The latter term is not really defined in Scrum; it exists in places where a real PO is not able or not willing to do his job (no time, not enough interest/motivation). A PO-proxy is the PO's surrogate who interfaces with the team(s) to mimic the PO (minus the authority) – an unnecessary organizational layer.

Overall Result: Situational Success. In situations where the organizational structure (on the business side) was relatively flat, the success rate of identifying an effective Product Owner and bringing him closer to teams was much higher.   In situations where the business organizational structure was more complex, with multiple reporting layers, the success rate of identifying a Product Owner who was equally knowledgeable, empowered, and engaged was lower. Another consistent observation: whenever a Product Owner came from a middle organizational tier (e.g., Operations, i.e., not a true end customer), the chances were higher that the role of "proxy" would emerge.


Coach's Discovery: This was a large organization, with a complex structure, heavy tooling, and internal processes, that was looking for scaled agile solutions to accommodate its inherent historic complexity. The organization was looking for agile frameworks that would seamlessly fit its existing dynamics while not requiring too many changes. The organization was not really trying to improve existing dysfunctions and repair problems. Instead, it was looking for ways to "improve" its existing condition by overlaying agile norms, terms, and principles on top of its current system. This included introducing more agile roles, ceremonies, processes, organizational layers, handovers, and bureaucracy on top of what already existed. Over time, this worsened the situation even further, rather than improving it.

LeSS Teaching: In order to improve organizational agility and be able to implement agile at scale (e.g., have more teams involved in the same large-scale Scrum, as opposed to having many unrelated teams attempting to do their own Scrum), organizational de-scaling is required first. This includes removing organizational waste, lowering bureaucracy, flattening the organizational structure, removing non-value-adding roles, reassigning responsibilities to key roles, and discontinuing norms and behaviors that have been statistically proven harmful.

Overall Result: Partial Success. The organization understood the principles of LeSS, especially its core Lean thinking. The organization understood that organizational de-scaling (removing what exists, instead of adding more to it) should come before any attempts to scale agility across broader organizational boundaries. However, the organization was still not fully prepared to deal with all the consequences of waste removal. There were concerns about the political and legal implications of such bold actions.


To Be Continued:

TBD – more Coach’s Discoveries and respective LeSS Teachings that were used to remedy problems:

  • LeSS Teaching: The following elements and attributes lead to "The Contract Game": componentized organizational structure, heavy/non-negotiable documentation, bureaucracy, functional silos, lack of cross-functional experts (T-shaped people), and merit ratings/performance appraisals/bonuses or other forms of local optimization (e.g., harboring teams of BAs and PMs).
  • LeSS Teaching: Lack of proper understanding of cross-functional, customer-centric feature development leads to creating fake "products" or "projects" (e.g., server-side work, back-end coding, database, UI). This further leads to creating fake product and project portfolios, which in turn creates the need for excessive coordination that mandates unnecessary roles such as fake portfolio managers and the like.
  • LeSS Teaching: By analyzing the system's feedback loops – velocity, bugs, # of developers, budget supply – it becomes clear that, for example, an increase in funding (budget) does not necessarily translate into increased velocity or improved product quality. Negative feedback loops are just as important to consider as positive feedback loops: more money may help hire more developers, who will produce more bugs (a toy numeric sketch of this loop follows below).
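As a companion to the last bullet, here is a toy numeric sketch of that feedback loop; every rate and cost below is a made-up assumption purely for illustration, not data from any organization. Budget adds developers, developers add raw velocity, but they also add defects that eat part of that velocity back.

```python
# Toy illustration only: every number below is an assumption, not data.
# It mimics the loop "more budget -> more developers -> more output, but also more bugs".
def simulate(quarters: int = 8, budget_per_quarter: float = 300_000.0) -> None:
    developers = 5
    open_defects = 0.0
    hire_cost = 50_000.0                                      # assumed cost to add one developer per quarter
    for quarter in range(1, quarters + 1):
        developers += int(budget_per_quarter // hire_cost)    # budget converts into headcount
        raw_velocity = developers * 8.0                       # assumed feature points per developer
        open_defects += developers * 1.5                      # more people, more defects introduced
        defect_drag = min(raw_velocity, open_defects * 0.5)   # fixing defects consumes capacity
        net_velocity = raw_velocity - defect_drag
        print(f"Q{quarter}: devs={developers:3d}  raw={raw_velocity:7.1f}  "
              f"defects={open_defects:7.1f}  net velocity={net_velocity:7.1f}")

simulate()
```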