Experience Report by Guest-Blogger Kurt Nielsen
Unforgettable two days at the 3rd Global LeSS Conference, held at the Angel Orensanz Foundation – a historic landmark in NYC.
Experience Report by Guest-Blogger Ram Srinivasan
Experience Report by Guest-Blogger Mark Uijen de Kleijn
LeSS Graphic Art
Personal Memorable Moments
Next LeSS conference (2019) – Munich, Germany
- There is a frequently seen confusion with respect to the definition of agile coaching: coaching focus (e.g. enterprise vs. team) is confused with coaching alignment (centralized vs. decentralized) within an organization
- Centralized coaching departments run the risk of turning into single-specialty organizational silos that are locally optimized for their own expansion and personal success; they are also removed from real action. The reasoning behind them – standardization – has its weaknesses.
- Centralized coaching is often limited to being “responsible for introducing KPIs, documentation of script-style-one-size-fits-all best practices and cookie-cutting approaches”. This leads to system gaming by other departments and organizational silos that must “meet numbers goals”
- Centralized Agile coaching makes sense only when it takes place within an organization that is small enough to be effectively managed front-to-back (including all of its organizational layers) and that is genuinely supportive of its own coaches, providing them with “organizational immunity” and operational safety to enable them to perform their challenging duties
- The main advantage of a decentralized coaching approach is that coaches are close to real action: deeply engaged with products/services and intimately engaged with senior leadership. Decentralized coaching is deep and narrow (as opposed to broad and shallow) and takes time to cause meaningful, sustainable organizational change.
The 2018 Business Agility Conference @ NYC is in the books. More than 300 people attended, coming from all over the world to listen to selected speakers on great topics. Evan Leybourn (the main organizer) also announced the birth of the Business Agility Institute.
On impulse, and with a lot of excitement, a NYC-based meetup was created: Big Apple Business Agility (BABA)
Special thanks to Stuart Young (on the right, below) from the UK – the legendary professional Business Visualiser who has immortalized many agile events with his amazing graphic art:
Speakers’ Quotes & Main Take-Away Points
Below are some highlights of the most memorable quotes from selected presenters. (Note: some notes are captured verbatim, others are transcribed from presenters’ slides, and still others are closely paraphrased. Underlined items resonated especially strongly. Please forgive any omissions – and if you find any, please request a correction.)
>>> Nancy Taylor of IBM
- “Beware of people that say: I am a coach, but I really never coached anyone.”
- [There are too many] “box-checking” agile transformations by companies out there
>>> Jonathan Smart of Barclays
- “Business agility is the future”
- “Descale work, descale the org – DON’T scale”
- “Substitute agile for nimble (without a capital A)” [to see if the meaning remains the same]
- Change your language from “Hi, I am John, I will enforce agile on you” to “Hi, I am John, I will deliver better products for you.”
- [Chasing] “increased productivity leads to churning and faking. Instead, focus on better value and safer environment”
- “You need enterprise agility, not just [agility for] IT”
- “The better your brakes, the faster you can go”
- [Move from] “task-based definition of success” to “outcome definition of success”
- [You should be ] “moving from hard/fixed budgets to rolling scorecards”
Irony in some of Jonathan’s slides:
- “Our people are not suited for self-organizing, we’ve got the wrong ones…”
- “We want best of both worlds”
- “Easy job. Just fire the managers and tell the teams they are in charge now”
- “It can be done without restructuring the back office.”
“It won’t work, it is just a hype.”
>>> Andy Cepio of Target
- “Don’t crush the chips”: the smallest and most impactful innovation to keep fragile chips at the bottom of a box from being crushed was to cut two holes, one on each side of the box, as hand grips
>>> Steve Denning of the Learning Consortium
- [There is] “lots of agile faking. In the Learning Consortium, companies have safety, and this is where they can share their experiences.”
>>> Jimmy Allen of Bain & Company
- “The purpose of good organizational design is to create conflicts”
- “When you get to a certain organizational size, any initiative takes about 18 months…to fail“
- “Micro teams MUST report directly into senior Leadership, periodically. Otherwise, mid-level management will kill agility, as they hate agile teams”
- “Jumping to playbooks is silly.”
- “[You have to] create a vocabulary that describes the misery [of your people], so you can speak about it”
- “The biggest problem of failed agile efforts is distance b/w sr. leadership and doers”
- “Modern organizations must flatten”
- “Only 1 in 11 companies grows sustainably – and yet in only 15% of cases do those that fail to grow blame the market”
“Scaling as a capability: 10 lessons from the Masters”
- Recognize that scaling will be critical to your success, demand that your leaders remain in balance
- Winning repeatable models demand an iterative process; don’t declare victory after a good prototype
- Don’t jump to playbooks; there are different scaling models depending on the degree of tailoring needed
- The best scaling models consider the “unit of scaling” to identify resource bottlenecks early
- Address bottlenecks and “Everyone wants Brent” problem from Day 1
- Don’t underestimate behavioral change required especially across functional hierarchies
- Understand the role of the three communities; especially the Scaling Community which acts as a bridge
- Scaling well demands dynamic resource allocation; shift resources fast behind a “winner”
- Eventually scaling will demand changes to your operating model
- Use Engine 2 to build specific capabilities
>>> Jutta Eckstein and John Buck of The Sociocracy Group
- “Every company is a software development company, but some don’t know it yet”
- “There is no such thing as “Spotify model”. If you take their model, inspect and adapt, then maybe it is OK….So they say.”
- “Always use a lower-case ‘a’ in ‘agile’ (substitute ‘nimble’, whenever you can)”
- “Fixed budgets will kill organizational agility (refer to Beyond Budgeting)”
>>> Laurence Jourdain of BNP Paribas Fortis
- [You should] “keep a small group of internal coaches but hire many professional coaches from” [reputable places]
>>> Joshua Seckel of Sevatec
- “I don’t have a PowerPoint today. I may not have any power, but I do have a point to make.”
>>> Sudhir Nelvagal of General Electric
- [The company, at over 125 years old,] “is transforming into a lean start-up”
- [You have to place a] “huge focus on Senior Leadership coaching”
- Using burn-down charts with Senior Management has pluses and minuses. (E.g. velocity policing [is a big minus])
>>> Susan Courtney of Blue Cross Blue Shield
- Problem statement: “Leadership did not know how to reward talent”
- [Had to] bring in Leadership Development coach
- “Culture change is not negotiable”
- Lessons Learned:
- “Build critical mass around the journey – find like-minded people”
- “Right people – in the right roles (nice does not = good fit)”
- “Identify and remove toxic people, especially leaders”
- “Value culture fit as much as functional skills”
- “Clarity & co-creation of road-maps”
- “Do this WITH, OR in spite of, HR”
- “CULTURE-CULTURE-CULTURE…if you say it’s who you are, you have to mean it (actions not just words)”
>>> David Horowitz & Matias Nino of REI Systems
- “We should stop thinking that ‘everything that happens in retros stays in retros’. We should produce a lot of retrospective radiators.”
>>> Melissa Boggs of Agile42
- “You have to change organizational culture [where it acts] as a barrier to agile success”
- “It makes sense to focus on principles not on practices”
>>> Jason Tice of World Wide Technology
“If you want to be able to speak to HR, you have to learn how to speak their language”
>>> Amanda Bellwood of Sky Betting and Gaming
- “Embed HR people onto teams”
- “Have HR run their own daily stand-ups and have them come and see what other teams are doing at their stand-ups”
Some personal Kodak moments at the conference:
- 1st row (Left to Right): w/ Steve Denning, w/ Mike Beedle
- 2nd row (Left to Right): w/ Zuzi Sochova, w/ Jeff Lopez-Stuit
The purpose of this post is to summarize two very important and independent topics and then integrate them together, into a joint discussion. The topics are:
- Moving from rigid annual budgets to rolling forecasts (super important in agile/adaptive product development environments!)
- Quality of scaling in agile product development, specifically Scrum
…and tying effective scaling of Scrum to dynamic financial forecasting.
Rigid Annual Budgets vs. Dynamic/Rolling-Wave Forecasting
Challenges presented by rigid annual budgets have been known for a long time. For people who are new to the topic, a great way to stay on top of the most recent research and publications is to follow what is going on at BBRT.org (Beyond Budgeting Round Table). One of BBRT’s core team members, Bjarte Bogsnes, in his book “Implementing Beyond Budgeting: Unlocking the Performance Potential” (please refer to the book’s highlights here), clearly summarizes the problems with conventional, end-of-year rigid budgets. They are as follows:
- Budgets represent a retrospective look at past situations and conditions that may not be applicable in the future
- Assumptions made as part of a budgeting process, even if somewhat accurate at the beginning, quickly get outdated
- Budgeting, in general, is a very time-consuming process and adds additional financial overhead to organizations
- Rigid budgets can prevent important, value-adding activities, and often lead to fear of experimenting, researching and innovating (crucial for incremental development)
- Budget reports are frequently based on subjective metrics, as they take the form of RAG statuses, with the latter introducing additional errors and omissions (for details, please refer to Red, Yellow, Green or RYG/RAG Reports: How They Hide the Truth, by M. Levison, and The Fallacy of Red, Amber, Green Reporting, by G. Gendel)
- Budgets, when used as a yardstick to assess individual performance, often lead to unethical behaviors (e.g. “churning & burning” cash at year-end to get as much or more next year) or other system-gaming activities
…The list of adverse effects caused by traditional budgeting is long…
By contrast, a rolling-wave forecast respects the fact that environmental conditions are almost never static, and recognizes that placing too much reliance on prior years’ financial situation may lead to miscalculations. Rolling-wave forecasts are based on frequent reassessment of a small handful of strong KPIs, as opposed to the large number of weak KPIs frequently used in conventional budgeting. The more frequently forecasts are made, the higher the chance that the most relevant and reliable information will be used in assessments. One good way to decide on the cadence of rolling forecasts is to align them with meaningful business-driven events (e.g. merchandise shipments, production code deployments, etc.). It is natural to assume that for incremental/iterative product development (e.g. Scrum), where production deployments are made frequently and in small batches, rolling-wave forecasting could run as a concurrent financial process. The short cycle time of market feedback could provide good guidance for future funding decisions.
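The difference between a frozen annual number and a rolling forecast can be sketched in a few lines of Python. This is a minimal illustration with hypothetical per-sprint revenue figures and an assumed smoothing factor, not a prescribed financial model: the forecast is simply recomputed after every sprint-aligned business event, so it tracks the most recent market feedback, while the budget figure stays frozen at its planning-time value.

```python
def rolling_forecast(observations, alpha=0.5):
    """Exponentially weighted forecast: each new observation
    (e.g. revenue measured at a production deployment) pulls
    the estimate toward the most recent market feedback."""
    forecast = observations[0]
    for obs in observations[1:]:
        forecast = alpha * obs + (1 - alpha) * forecast
    return forecast

# Hypothetical per-sprint revenue figures (in $k); demand shifts upward mid-year.
per_sprint_revenue = [100, 102, 98, 150, 160, 170]

annual_budget = 100  # frozen at planning time, never revisited
forecast = rolling_forecast(per_sprint_revenue)

print(round(forecast, 1))  # → 156.2 – tracks the recent 150-170 range
```

With the demand shift in the second half of the data, the rolling estimate lands around 156 while the frozen budget still says 100 – that gap is exactly the "outdated assumptions" problem listed above.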
It is worth noting that one of the key challenges Scrum teams face today is the “iron triangle” of conventional project management, with all three of its corners (time, scope, budget) rigidly locked. And while the most common approach in Scrum is to make scope flexible, ‘clipping’ the budget corner brings additional advantages to teams. Above all other benefits, rolling-wave forecasts address the fourth problem listed above, as they provide safety to teams that want to innovate and experiment.
But what if there is not one but many Scrum teams, each working on their own initiatives, running on different cadences (unsynchronized sprints) and servicing different customers? How many independent rolling-wave forecasts can one organization or department adopt before things become too complicated? How much is too much, and where do you draw the line?
Before we try to answer this question, let’s review what is frequently seen when organizations attempt to scale Scrum.
Proper Scaling vs. “Copy-Paste” Scaling
Let’s look at the following two situations: (1) more than one Scrum team, each independently doing their own Scrum, and (2) more than one Scrum team working synchronously on the same product, for the same customer, sharing the same product backlog and domain knowledge. The former case is referred to as “Copy-Paste” Scrum, clearly described by Cesario Ramos. The latter case can be seen in skillful Large Scale Scrum (LeSS) adoptions. Here are some of the most classic characteristics of both scaling approaches:
| (1) – “Copy-Paste” Scrum | (2) – Large Scale Scrum (LeSS) |
| --- | --- |
Note: Please refer to Scaling Organizational Adaptiveness (a.k.a. “Agility”) with Large Scale Scrum (LeSS) for additional graphic illustration.
Based on the above, the following also becomes apparent:
In “copy-paste” Scrum, development efforts, marketing strategies and sales (ROI) are not treated as constituents of the same unified ecosystem. In this scenario, it is almost impossible to fund teams by funding real, customer-centric products. Why? There are too many independent, ad-hoc activities taking place and artifacts being created. There is no uniform understanding of work size and complexity shared by all teams. Estimations and forecasts made by each individual team are not understood by other teams. Team stability (and, subsequently, cost per team member) is low, as individuals are moved from project to project and shared across many projects. Further, with multiple teams reporting into different lines of management, there is a much higher chance of internal competition for budget. By the same token, there is a low chance that a real paying customer would be able to step in and influence funding decisions for any given team: too many independent, competing requests are going on at the same time.
In organizations where “copy-paste” Scrum is seen (and is often mistakenly taken for scaled Scrum, due to lack of education and expert leadership), there is still a strong preference for fake programs and fake portfolio management. Under such conditions, unrelated activities and, subsequently, data/metrics (often fudged and RAG-ed) are collected from all over the organization and “stapled” together. All this information rolls up to senior leadership, customers and sponsors. What rolls down, subsequently, is not dynamic funding of well-defined, customer-centric, revenue-generating products, but rather rigid budgets for large portfolios and programs composed of loosely coupled initiatives, performed by unrelated Scrum teams (secondary to conventional departmental budgeting). As rigid budgets cascade down from the top onto individual teams, they further solidify the “iron triangle” of conventional project management and hinder teams’ ability to do research, experimentation and adaptive planning.
On the other hand, in Large Scale Scrum, things are different:
- When up to eight LeSS teams work synchronously, together (side by side), on the same widely-defined (real) product, their shared understanding of work type and complexity is significantly better (having certain Scrum events together really helps!). As a result, when it comes to forecasting completion of certain work (features), eight LeSS teams will do a better job than eight loosely coupled teams working completely independently on unrelated initiatives.
- Since all LeSS teams work for the same customer (Product Owner), there is a much higher chance that they will develop a shared understanding of product vision and strategy – they are getting it from an authentic source – and therefore will be able to plan more effectively.
- Having a more direct correlation between the development efforts of LeSS teams (output, in the form of a shared PSPI) and business impact (outcome, in the form of overall ROI) makes strategic decisions about funding much more thoughtful. When real customers can directly sponsor product-centric development efforts – getting real-time feedback from the marketplace and deciding on future strategy – they (the customers) become much more interested in dynamic forecasting, as it allows them to invest in what makes the most sense. Dynamic forecasting in LeSS makes it possible to flexibly increase or decrease the number of Scrum teams involved in product development, in response to increased/decreased market demand and/or product expansion/contraction.
It is noteworthy that in LeSS Huge cases – when product breadth has outgrown the capacity of a single Product Owner and requires work by more than eight teams – dynamic forecasting can still be a great approach for the (overall) Product Owner and Area Product Owners (APOs): they can strategize funding of different product areas and make timely adjustments to each area’s size/growth as market conditions change.
All of the above, as described in the LeSS scenario, will decrease organizational dependency on fixed budgets: there will be less interest in outdated financial information, in favor of the flexibility provided by rolling-wave forecasting, which brings much closer together “the concept” (where value is built – teams) and “the cash” (where value is consumed – customers).
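As a rough sketch of the decision this enables – all product names and figures below are hypothetical, and proportional allocation is just one possible policy – a fixed pool of feature teams can be redistributed across products whenever the rolling forecasts are refreshed, rather than honoring last year's fixed split:

```python
def allocate_teams(total_teams, forecasts):
    """Split a fixed pool of feature teams across products in
    proportion to each product's latest rolling-wave forecast,
    using largest-remainder rounding so the counts add up."""
    total = sum(forecasts.values())
    raw = {p: total_teams * f / total for p, f in forecasts.items()}
    alloc = {p: int(v) for p, v in raw.items()}
    # Hand any leftover teams to the products with the largest remainders.
    leftovers = sorted(raw, key=lambda p: raw[p] - alloc[p], reverse=True)
    for p in leftovers[: total_teams - sum(alloc.values())]:
        alloc[p] += 1
    return alloc

# Hypothetical: eight teams, three products, latest forecasts in $k.
latest = {"product_a": 600, "product_b": 300, "product_c": 100}
print(allocate_teams(8, latest))  # → {'product_a': 5, 'product_b': 2, 'product_c': 1}
```

Re-running the allocation after each forecast refresh is the "increase/decrease the number of teams in response to market demand" idea in miniature.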
Another Large-Scale Scrum Training (CLP), taught by Craig Larman in NYC, is in the books.
More than thirty people from all around the globe (North America, South America, Europe) came together for this brain-jelling learning experience! The group consisted of product owners/managers, software engineers, managers and organizational design consultants (scrum masters, coaches and trainers) – people coming from different backgrounds, with a focus on different aspects of organizational agility. What united them all, however, was their eagerness to learn in depth about the principles of organizational design and the implications of Scrum adoption at scale in complex organizational settings.
With the exception of a few rare questions/clarifications, the class spent NO time discussing basic Scrum. It was implicitly assumed that everyone in class had strong knowledge of, and hands-on experience with, the basic framework. On occasion, the topics discussed would bump into “…oh, this is not even LeSS-specific; this is just basic Scrum…”, but those cases were rare.
Not until day three did the class take a deeper dive into the LeSS Framework and LeSS-specific events, artifacts and roles. Why was it not done sooner? Well…
- LeSS is Scrum. It is the very same Scrum described by Ken Schwaber and Jeff Sutherland in the Scrum Guide, but done by multiple teams working together, on the same product, for the same Product Owner. LeSS is not “…something that IT does, buried in a company’s basement, under many layers of organizational complexity…”. LeSS is an organizational design that uses the Scrum team as a building block. Understanding basic Scrum made understanding LeSS very easy for everyone.
- The class was made up of people who had completed all assigned homework (self-study) before attending. People knew what the LeSS picture looks like 😉 when coming in. Everyone in class was an educated customer. Importantly: there were no attempts to change LeSS (or the training content 😊 of LeSS) to make it better fit the conditions of the organizations people came from.
- Spending the first two days on understanding system modelling techniques and the differences between causation and correlation (as well as other dynamics) among many system variables made full understanding of LeSS on day three come more naturally.
The class learned how to see ‘the whole’ – the full picture of the organizational ecosystem – and came to appreciate why Organizational Design is the first-order variable that defines system dynamics (followed by everything else: culture, policies, norms, processes, etc.)
One of my (Gene’s) biggest take-away points (on top of an excellent LeSS refresher from Craig himself), which I plan on using immediately, was a fact from history that was discussed at the beginning of the course (and is, sadly, forgotten or never known by many). It goes as follows:
…Back in 2001, at Snowbird, UT, where a group of seventeen entrepreneur-product-developers met and came up with what is known today as the ‘Agile Manifesto’, the two contending terms were adaptive (suggested by Jim Highsmith, the author of Adaptive Software Development) and agile (suggested by Mike Beedle). ‘Agile’ won, for the reasons described here. Truth be told, because the English meaning of ‘agile’ is not as intuitive as the meaning of ‘adaptive’, today there is a huge number of fads and terminology overloading/misuse that leave the original meaning of agile diluted and abused…. As it was meant to be: Agile == Adaptive == Flexible. We all have to be careful with the meaning of the words we use, to avoid this painful irony 😉.
Here are some Kodak moments from the event:
Nowadays, for too many organizations, Agile Maturity Metrics (AMM) have become a trusted way to measure improvements in agility at the personal (individual), team and organizational levels.
However, it is not always apparent to everyone that AMMs are different from agile check-lists (e.g. the classic Scrum Checklist by H. Kniberg), and this can often lead to problems and dysfunctions:
Check-lists are just a set of attributes that are usually viewed on par with one another; they are not bucketed into states of maturity (though other logical groupings could be applied)
By contrast, AMMs place attributes in buckets that represent different states of maturity, with one state following another sequentially.
With very rare exceptions (favorably designed organizational ecosystems), there are three potential challenges that companies face when relying on bucketed AMMs:
1 – System gaming: If achieving a higher degree of agile maturity is coupled with monetary incentives/perks or other political gains (for many companies driven by scorecards and metrics, this is the case), there will always be attempts by individuals/teams to claim successes/achievements by ‘playing the system’, in pursuit of recognition and a prize.
Note: Translation of the text in red: “(Пере)выполним годовой план за три квартала!!!” = “(We) will meet/exceed the annual plan in three quarters!!!”
2 – The attribute-to-maturity-level relationship is conditional, at best: Placing agile attributes in maturity buckets implies that attributes in higher-maturity buckets have more weight than attributes in lower-maturity buckets. However, this is not always a fair assumption: the weight/importance that an organization/team places on any given attribute, while defining its own maturity, is unique to that organization/team. For example, for one team, “…being fully co-located and cross-functional…” could be much more important than “…having the Product Owner co-located with the team…”. For another team, it could be the other way around.
3 – Correlation between attributes is not linear at the system level: Regardless of the buckets they are placed in, many agile attributes are systemically interrelated and impact one another in ways that are not apparent to the naked eye. For example, placing the “Scrum Master is effective in resolving impediments” attribute in a maturity bucket that comes before the bucket holding the “…Organization provides strong support, recognition and a career path to the Scrum Master role…” attribute dismisses the real cause-and-effect relationship between these two variables, misleads, and sets false expectations.
To avoid the issues described above, it is more advisable to treat every identified agile attribute as a system variable, on par with other system variables, while assuming that it has upstream and downstream relationships. In many situations, instead of spending a lot of time and resources trying to improve a downstream variable (e.g. trying to understand why it is so difficult to prioritize a backlog), it is more practical to fix an upstream variable that has much deeper systemic roots (e.g. finding an empowered and engaged product owner who has the right to set priorities).
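The upstream/downstream framing can be made concrete with a small dependency graph. In this sketch the attribute names and edges are illustrative assumptions drawn from the examples above, not a standard model; a topological ordering simply surfaces which variables are upstream and therefore worth fixing first:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Flat list of agile attributes (system variables); each maps to the
# set of upstream attributes it depends on. Names/edges are illustrative.
dependencies = {
    "backlog is prioritized": {"empowered, engaged Product Owner"},
    "SM is effective resolving impediments": {"org supports the SM role"},
    "empowered, engaged Product Owner": set(),
    "org supports the SM role": set(),
}

# Topological order: upstream variables (deep systemic roots) come first.
order = list(TopologicalSorter(dependencies).static_order())
print(order)
```

Fixing the root variables first – finding the empowered Product Owner, providing organizational support for the Scrum Master role – is exactly the "fix the upstream variable" advice from the paragraph above.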
Below is a list of agile attributes (a.k.a. system variables) that are logically grouped (check-list style) but are not pre-assigned to levels of maturity (all flat). Some examples of suggested system-level correlations between different attributes are provided (cells are pre-populated).
Please click on the image to download the matrix to your desktop, amend the list of attributes if your situation calls for modification, and then use the “Dependency on Other Attributes?” column to better visualize system-level correlations between the attributes of interest to you and other related attributes (some examples are provided).
Tonight – a great presentation by Malik Graves-Pryor of Natoma Consulting, in which he shared how his company leveraged LeSS to achieve stunning results, while facing challenges and learning lessons.
At the Thursday, October 12, 2017 NYC Large Scale Scrum Meetup, Malik Graves-Pryor shared his company’s LeSS case study, “Web and Mobile Applications Agile Transformation”. He covered the extensive issues the company faced at the beginning – ranging from only 1–2 releases a year, with hundreds of defects – and how, over the course of several months, it transformed into an organization that released monthly, and then continuously, with few defects and high customer satisfaction and engagement.
The discussion covered the merger of the Sales and Product Management Pipelines, adoption of technical practices leading to a DevOps-focused culture, how to take the necessary steps to build trust and cooperation within the organization, as well as the road-map they used to iteratively migrate the organization to continuous integration and deployment.
The interactive discussion spanned two hours with attendees raising questions and issues about the case study, as well as correlating them with their own challenges and aspirations.
Presentation deck is available at Natoma Consulting website for download.
Agile frameworks (e.g. Scrum, Kanban, XP), individuals’ roles & responsibilities, processes & tools, metrics & reporting, burn-up charts, estimation techniques, backlog prioritization, agile engineering practices, agile maturity models, etc. – all are important attributes of a typical agile transformation. However, NONE of them are first-order system variables responsible for transformation success. Most of them are good superficial lagging indicators of agility, but they are all corollary (secondary and tertiary) to another, much more important system variable.
What is the most important system variable that defines a company’s agility? It is Organizational Design – the most deeply rooted element of the organizational ecosystem, which defines most system dynamics.
When organizational leadership decides to take an organization on an agile transformation journey (it can sometimes take years), it [leadership] needs to acknowledge that real, sustainable agile change is only possible if deep, systemic organizational improvements are made. For that, leadership needs to be prepared to provide its organization with much more than just support in spirit, accompanied by organizational messages of encouragement and statements of vision. Leadership must be prepared to engage intimately with the rest of the organization, by doing a lot of real “gemba” (genchi genbutsu (現地現物)) and changing/challenging things that for decades, and sometimes for centuries, have been treated as de facto standards.
What does it really mean for leadership to engage at the system level? First, it is important to identify what the system is: what are its outer boundaries? For example, one of the most common mistakes companies make when deciding on the “scope of agile transformation” is limiting their efforts to a stand-alone organizational vertical, e.g. Technology – and focusing just there. Although this could bring a lot of local (to IT) success, it may also create unforeseen and undesirable friction between the part of the organization that has decided to change (IT) and the parts that have decided to remain ‘as is’ (e.g. Operations, Marketing). For example, if Scrum teams successfully adopt CI/CD, TDD or other effective engineering practices that enable them to deliver a PSPI at the end of every sprint, but the business is not able to keep up with consumption of deliverables (too many approvals, sign-offs, red tape), then the whole purpose of delivering early and often is defeated. Then, instead of delivering to customers soon, in exchange for timely feedback, teams end up delivering in large batches, too far apart on the time scale.
A successful Agile Leader must treat an organization that is expected to transform like a sushi roll. Just as seaweed alone does not provide a full spectrum of flavors and does not represent a complete, healthy meal, one single department (e.g. IT) is not sufficient to carry agile transformation efforts. Other organizational layers need to be included as well when identifying a slice for an agile transformation experiment. A slice does not have to be too thick; in fact, if the organizational slice is too thick, it might be too big to “swallow and digest”. But even when sliced thinly, the slice must include enough layers to be considered a ‘complete meal’.
Note: A great example of treating an organization as a sushi roll, while making it more agile, is a Large Scale Scrum (LeSS) adoption.
So, what are some key focus areas that every Agile Leader must keep in mind while setting an organization on an agile transformation course?
- Location strategies. Geographic locations.
- HR policies (e.g. career growth opportunities, compensation, promotions)
- Budgeting & Finance
- Intra-departmental boundaries and spheres of influence
- Organizational Leadership Style
- And some other areas that historically have been considered as …untouchable…
All of the areas listed above are defined by Organizational Design and can be better understood through self-assessment, done by organizational leaders at all levels.