Category Archives: Scaling

HR-Related LeSS Experiments – Deciphered

Large Scale Scrum has a history of more than a decade. The first book about LeSS was published by C. Larman and B. Vodde (the co-creators of LeSS) in 2008, and two more books on LeSS followed in 2010 and 2016.  It is no surprise that the collection of LeSS experiments from the field is so valuable: the authors have documented more than 600 experiments, based on their personal experience with LeSS adoptions, as well as on feedback and information collected from other organizational design consultants, coaches and early adopters of LeSS around the globe.
Today, references to LeSS Guides and Experiments can be found in various places on the internet and on the intranets of many companies that have decided to experiment with LeSS.
This writing is about a small sub-set of LeSS experiments that are specifically related to HR norms, policies and practices. They are all listed in the guide referenced above, under the section “Organization”, which implies that they are directly related to organizational design – the first-order factor responsible for the success of LeSS adoptions and of agile transformations at large.
Experiments with Performance Appraisals:
“Avoid… Performance appraisals – p. 273” — There is a lot of research and evidence supporting the view that individual performance evaluations and individual appraisals linked to monetary rewards are not an effective way to make individuals more efficient and productive.  When a manager appraises an employee, usually only one opinion in the room matters: the manager’s.  Feedback that is delivered once or twice a year is not timely and therefore is hardly actionable by an employee, and thus useless for the most part.  Neither the individual who delivers an appraisal nor the individual who receives it likes the process.  The process is also quite expensive, as it consumes a lot of a company’s resources: it involves lots of documentation, coordination and person-hours spent by many people, from first-line management to HR.
It is worth noting that there is an indirect relationship between the conventional budgeting process and the conventional performance management process – the two harmfully feed off of one another. This is described in the book “Implementing Beyond Budgeting: Unlocking the Performance Potential” by Bjarte Bogsnes.  In his work, Bjarte refers to performance appraisals as a “legal trail for a rainy day”.

“Avoid… ScrumMasters do performance appraisals – p. 275” — Just like performance appraisals done by agile coaches, which can lead to serious dysfunctions (page 130), performance appraisals done by ScrumMasters are extremely harmful.  Drafting a ScrumMaster into this role creates a serious conflict of interest and hinders the ScrumMaster’s ability to influence the natural growth and evolution of learning among team members.  A ScrumMaster’s impartiality and neutrality are highly important; becoming an appraiser takes away this advantage.  Only by remaining neutral and non-authoritative (a performance appraisal is an exhibition of authority) can a ScrumMaster help a team self-discover, self-improve, and become autonomous on its journey to success.

“Try… De-emphasize incentives – p. 270” | “Avoid… Putting incentives on productivity measures – p. 271” — If achieving higher productivity (output, velocity) is coupled with monetary incentives/perks or other political gains (typical of many companies that overuse scorecards, metrics, KPIs, RAGs), there will always be attempts by individuals/teams to claim successes/achievements by ‘playing the system’, in pursuit of recognition and a prize.  For example, in pursuit of ‘higher productivity’, teams may start inflating estimates to claim higher velocity, or deliver work that is low in priority but simple to deliver – just to create an illusion of volume.  Incentivizing ‘higher velocity’ is an invitation to move from “low Fibonacci numbers to high Fibonacci numbers” during estimation.  (Also, see Addressing Problems, Caused by AMMS.)

“Try… Team incentives instead of individual incentives – p. 272” — The process of individual performance reviews loses its original meaning when people work on the same team, where swarming (working together on the same task) and collective ownership are encouraged.  Offering individual incentives would just polarize people and push them in opposite directions, towards becoming selfish individual performers and super-heroes.  In cases such as these, people may be easily drawn into unhealthy competition with each other over claims of success, trying to privatize what should be owned and worked on collectively.  Companies that continue incentivizing individual performance with monetary perks just keep widening the gap between “what science knows and what business does” (quote from Daniel Pink).

“Try… Team-based targets without rewards – p. 273” — Clearly, team-level behavior is an extension of individual behavior.  Just like individuals could be inclined to ‘game the system’, so could whole teams, under certain conditions.  Just like individuals, whole teams could be drawn into unethical conspiracies to game numbers in pursuit of meeting targets, or of beating other teams (e.g. producing ‘higher velocity’), whenever monetary rewards are at stake.  If it is absolutely necessary to set targets for individual teams that work on par with one another in the same organization, it is best to decouple team targets from team rewards.  The latter could be handled through some sort of profit-sharing formula, based on a company’s financial success that is traceable back to each team’s work.
Experiments with Job Titles:
“Avoid… Job titles – p. 276 | Try… Create only one job title | Try… Let people make their own titles; encourage funny titles – p. 277” — In pursuit of job titles, individuals may also seek to gain authority and an “upper hand” over their peers and colleagues.  This may lead to artificial organizational complexity and hierarchy, as well as a caste system.  Individual job titles can also polarize people and drive them in opposite directions, away from shared ownership.  It is for this reason that on agile teams (e.g. Scrum) there is only one title – Developer.  This approach encourages people to think of each other as peers and to grow into T-shaped, multi-skilled, cross-functional, willing-to-swarm workers.  In situations where some distinction between individual jobs is absolutely necessary, funny job titles are recommended.  For example, instead of calling someone a QA Tester, the person could be called “Bug Finder and Exterminator”.
“Try… (if all else fails) Generic title with levels – p. 277” — If it is absolutely necessary to have title distinctions (e.g. to signify different levels of seniority/expertise), try using a leveling system.  For example: Developer level 1 (junior), Developer level 2 (mid-level), Developer level 3 (senior), and so on.  However, care should be exercised not to explicitly associate different title levels with different levels of pay.
Experiments with Jobs:
“Try… Simple general job descriptions – p. 278” — Do not overcomplicate job descriptions.  Precision in a description may lead to a contractual perception of what a person should and should not do in the workplace.  It may also limit a person’s willingness to step out of their comfort zone, learn other areas of work and other skills, and become multi-faceted.  It may then further lead to “managing by objectives” based on detailed job descriptions, and subsequently bring about the problems with performance appraisals described above.  Complex job descriptions also have a tendency to attract under-qualified external candidates whose resumes are excessively long, as they are ‘tailored to closely match complex job descriptions’.  (Relatedly, attracting bad agile coaches by creating inappropriate job descriptions is a known problem.)
“Try… Job rotation – p. 279 | Try… Start people with job rotation – p. 280” — Give individuals opportunities to learn new domains, technologies and lines of business.  This will reduce the risk of a person becoming uninterested in, or bored with, their current job.  Further, by rotating from one job to another, a person may discover where they fit best and deliver the most value.  By having this opportunity, a person will also have a higher chance of bridging the gap between “having to do a job” and “wanting to do a job”.  This is especially important for newly hired people with limited industry experience (e.g. recent college graduates).
Experiments with Hiring:
“Try… Hire the best – p. 280 | Avoid… Hiring when you cannot find the best – p. 281” — Do not settle for less than the “best people your money can buy”.  It is better to rely on the fewer great people you already have on staff than to bring on more under-qualified people to speed up work, especially at the end of a project that is already late (Brooks’s Law).  From a systems thinking perspective, if you try to increase a Scrum team’s velocity (output) by adding more developers procured on a low budget (low pay will most likely buy you low-skilled developers), you will most likely reduce velocity instead, because low-skilled developers introduce more bugs into the system. Please see why.
“Try… Team does the hiring – p. 281” — If you plan on hiring an individual to join a team, please make sure that the team does most of the interviewing and vetting.  That way, not only will the person’s skills and experience be examined, but it will also become more apparent whether the person can organically gel with the team: whether there is compatibility, chemistry and synergy with other team members.  Panel interviews by whole teams are usually much more effective, since they include practical tests, real-life simulations and hands-on exercises.  They also allow some people to observe while others ask questions, and then rotate.  Try to reduce the level of influence that HR personnel and first-line management have on the process, as much as legally possible.  This will reduce the amount of subjective, administrative, frequently biased and error-prone screening (refer to the top of page 17).
Conclusion:
As a summary, please consider the following quote by C. Larman, describing the sushi-roll-like organizational design of Large Scale Scrum (LeSS) (also explained in detail in Agile Organization, as a Sushi Roll):
In it, HR policies are listed as one of the vital elements of overall organizational agility.

BABA Meetup – Does Agile Really Work in Sales?

Business Agility is at the top of many workplace conversations. The Big Apple Business Agility (BABA) Meetup launched on Monday, March 11, with an interactive presentation, “Does Agile Really Work in Sales?”, by Marina Alex, Business Agility Transformation Coach.
Marina related several of her experiences applying agile to sales, from banks to an Agile Museum to a chain of dental clinics, and shared data showing that improvements in sales were recorded rapidly: in one case, 50% in two months and 127% twelve months later. Of course, a shift in culture was at the heart of the process and the biggest challenge, but the outstanding results led teams to want to work this way.  A copy of the presentation can be downloaded here.
For the first time, the SWAY Framework guide has been publicly released.  To download a copy, please click here.
Some of the steps to success were adopting a backlog that was also qualitative, and becoming collaborative through stand-ups, retrospectives and cross-functional teams. One significant hurdle that needed to be overcome was identifying leaders who would take ownership. Marina has adopted an agile framework – SWAY – which she shared with the group. One of the highlights of the evening was engaging the participants with the content via the Nureva Wall + Span Workspace. The interactive wall and collaborative software enabled them to make predictions and add their thoughts to the conversation.
SWAY Framework Guide

[Download Meetup Presentation]

Session Feedback

 

SWAY – Agile Sales Framework 1.0

Meetup-recap.  TBA.

 

Mentor-Guided LeSS Case Study Writing Experience Report




This writing is about a mentor-assisted LeSS adoption case study, written by Certified LeSS Trainer-Candidate Gene G [MENTEE]: Certified Enterprise & Team Coach (CEC/CTC), Certified LeSS-Friendly Scrum Trainer (LFST) / LeSS Trainer-Candidate, Certified in Agile Leadership (CAL) | Certified in Scrum @Scale (CS@S), and assisted throughout by Jurgen D. S. [MENTOR]: Certified LeSS Trainer, Licensed Management 3.0 Trainer, Innovation Games Qualified Instructor, Black Belt Collaboration Architect.

Purpose of a case study:

The purpose of writing a case study was to re-live the experience of a Large-Scale Scrum (LeSS) adoption, by going back in time and memory to everything that was done by me – the agile coach, trainer and organizational design consultant at a large financial institution.  This engagement was done in conjunction/partnership with my former trusted colleague Stuart P. (also an experienced agile and software engineering coach).  Writing this case study gave me a great opportunity to self-reflect (retrospect) and think about what I could have done differently back then, if I had to go through the adoption again.  The name of the organization, as well as the names of people, products, projects, applications, components, etc. that were involved in the study, are intentionally withheld for confidentiality and privacy protection reasons.  Nevertheless, the case study, when published on less.works, will hopefully serve as a guideline to others in their attempts to experiment with LeSS adoptions in their respective organizations.  It is worth noting that many existing LeSS case studies on less.works provided my former colleague and me with some great references when we worked on our own piece.


More About my Mentor:

My mentor, one of the relatively few Certified LeSS Trainers, was very knowledgeable about LeSS (as a trainer, coach and practitioner) and very supportive of my case study work.  He and I have met more than once in real life, at various agile- and LeSS-related public events (conferences, retreats), and this allowed for some in-person mentoring sessions.  Video technology took care of the rest and made our remote sessions effective as well (note: I am based in the US, he is based in Europe).

Dynamics of Case Study writing:

The process was very iterative all along.  My mentor and I used Google Docs as a communication medium, which allowed us to work incrementally and transparently with one another: typically, I would capture my thoughts directly in the Google document, iterate through them multiple times and then, once feeling comfortable enough, share them with the mentor, asking for his feedback.  The mentor would provide feedback, ask questions and suggest clarifications.  My former colleague and peer-coach, who also had full access to the case study, would attend to it at any point in time, leave his comments, provide clarifications and add his details to mine.  Notably, my former colleague-coach helped me significantly by recalling facts, decisions, ideas and events that we lived through together (the LeSS adoption took place a few years before the case study was incepted).  He also helped me significantly in those areas of the case study that talked about technology: architecture, design, and development.  In all fairness, this was ‘our’ case study, not just ‘mine’.
Regularly, at least once a month, when meeting with my mentor, I would receive feedback on those parts of the case study that required further refinement and re-work.  Many times, my mentor would ask me questions that initially seemed intentionally tricky or even irrelevant.  But I always gave my mentor the benefit of the doubt that he, being a deep systems thinker just like me, was trying to get me to think deeper, broader and more systemically about the matter, helping me discover better ways to formulate my thoughts.  In particular, many of his questions made me go backwards from the LeSS experiments that were leveraged during the case study to the underlying LeSS principles – and make the connection.

From time to time, my mentor would also share his own experiences and give his own perspective on situations like mine, or on related ones.  This made our mentoring more interactive, engaging and fulfilling.


How did I decide on the scope of my case study?

One of the most important mentoring ‘aha moments’ for me was deciding how many of the LeSS experiments actually used during the LeSS adoption I really wanted to describe in detail as a part of my case study.  Here, one of the LeSS adoption concepts came to the rescue: Deep & Narrow is better than Broad & Shallow.  I consulted with my former colleague-coach on how many of our LeSS experiments and experiences we really wanted to discuss, and how deeply.  We agreed on a shorter list of experiments that represented the crux of our work and could be aligned with the logical and chronological sequence of events, as we remembered them.  We selected the described experiments based on what we felt was most important during the adoption, most relevant to the case study and most memorable to us as coaches.  I consulted with my mentor on the final list and the overall approach and, based on his recommendations, proceeded with deeper dives into the case study.


A picture is worth a thousand words.

During one of the many case study reviews with my mentor, it became obvious that long paragraphs and dry text would bore many readers.  This is when I decided to spice up the case study with graphic illustrations and other visual artifacts (e.g. causal loop diagrams, tabular data).  I made a dedicated iteration through the whole case study and introduced graphics where they seemed most appropriate.  Ultimately, this made the case study more readable and informative.

Overall experience.

My overall experience of writing the case study was amazing.  It took me through the process of additional deep re-learning and self-discovery.  It made me reassess my past decisions, now seeing them through the prism of additional experience acquired during the last three years of professional work.

September 17-19th: Certified LeSS Practitioner Course With Bas Vodde | NYC

Experience Report by Guest-Blogger Heitor Roriz Filho
“I am not going to entertain your hypothetical situation,” answers Bas during the LeSS training over the last three days in NYC. His modesty in answering the questions posed by participants, whether advanced or more basic, really struck me. The strong influence of Systems Thinking brings to mind the importance of experiments and hypothesis validation, one thing that most companies using Scrum today have completely misunderstood. Overall, the three days of training were entertaining and helped me consolidate the knowledge acquired during the first LeSS training I attended in Minneapolis with Craig Larman earlier this year.
LeSS (Large Scale Scrum) is a very strong and solid option to scale Scrum in organizations. This is due to the fact that LeSS, as pointed out by Bas Vodde, was actually the result of systems modeling exercises and discussions. As a consequence, LeSS explores the organizational ability and desire to be more adaptive and to create and maintain customers by producing products or services they actually love. Systems Thinking applied in practice to actual problems organizations face, Product Owner responsibilities, team accountability and several real-life examples and case studies were the things that stood out in the training.
If you are willing to learn more about LeSS, or to become a LeSS trainer, you need to attend both classes: the one with Bas and the one with Craig. One complements the other in such a way that someone who is passionate about Agile can feel reinvigorated to go back to their clients and promote real agility. Both instructors teach theory and practice, but Craig’s class stands out for laying more of the theoretical and philosophical foundations (crucial for true agility), while Bas brings that to the trenches (crucial to get your hands dirty).

Experience Report by Guest-Blogger Michelle Lee
This was the first training class I have attended in several years. I’ve been reading Bas’s books and visiting the LeSS.works website to learn about this scaling framework. I ended up in New York by accident: I was supposed to attend this same training a week prior in Atlanta, but that class was cancelled due to scheduling issues. I am so glad it was.

I have been interested in LeSS for about 5 years. What attracted me to this framework over others was the simplicity of the principles. For anyone who has done Scrum with a team, the principles just make sense, period. Had I attended the previously scheduled class in Atlanta, I would not have had Bas as the facilitator and I don’t think I would have learned as much. No offense to the trainer I would have learned from, but the opportunity to learn from one of the co-creators made the class all the better. Bas does a great job telling stories and giving examples; he doesn’t pretend to know the answer to everything and he is honest about it. Just as all good Scrum Masters know, you can set up the guardrails, but until you experiment with what works for your company, team and style, it’s just an opinion.

The content of the class was what I expected, and more. To be honest, I was frustrated after the first day. Why was I frustrated? I was frustrated because my table group and I were storming during our first exercise. Most of the people in the class are used to being the coach, not the player! When you are used to being the coach, jumping “in the game” requires you to look at the problem from a different angle and I wasn’t used to looking from that angle!! The exercises Bas put together forced each of us into having to play the game, listen to our teammates and self-manage our time to accomplish the outcomes. Sometimes we did well, sometimes we didn’t – sometimes we failed. I was reminded that failing is hard. Yet, we coach teams through failure all the time? We coach teams to learn from their failures, in fact, as Bas shared, most of the time we know an idea will fail, and we let it play out because we know the learning will be worth it!

The three days have made me look differently at how I coach, and I thank the “banking table” team and Bas for allowing me the opportunity to fail, to learn, and to improve! New York was also a great city and the location was amazing; nothing against Atlanta. 🙂

Sept 13 -14 | 3rd Global LeSS Conference | NYC


Unforgettable two days at the 3rd Global LeSS Conference, at the Angel Orensanz Foundation – a historic landmark in NYC.


Conference Space and Our People
Experience Report by Guest-Blogger Ram Srinivasan

Though I have been associated with the Large Scale Scrum (LeSS) community for about five years (even before the “community” formally existed, I can think of my association with like-minded folks), this is my first LeSS conference. While I used to attend a lot of conferences in the past, I have started focusing more on deep learning (by attending focused workshops) than on conferences. But this year, I had to make an exception for the LeSS conference. Why? (a) It was the first LeSS conference in North America; (b) it was not very far; (c) I was thinking that I might meet some of the smartest people in the LeSS community whom I may not meet otherwise; and (d) I had heard that it is a “team-based” conference (unlike other conferences where you are on your own) and I wanted to find out what the heck that was. I was not disappointed.

The venue itself was very different from conventional Agile conferences – not a hotel. That definitely caught my attention!! I was pleasantly surprised to see both Howard Sublett (the new Chief Product Owner of Scrum Alliance) and Eric Engelmann (the Chairman of the Board of Directors of Scrum Alliance).  Howard and I had good discussions on LeSS, Scrum Alliance, the marketplace, and scaling.
Some sessions that I attended and major takeaways:
  • Day 1 morning keynote –  Nokia LTE  implementation  – Takeaway – Yes, you can do Scrum with more than 5000 engineers
  • Day 2 keynote  by Craig Larman. I always find Craig’s thinking fascinating and learnt quite a few interesting facts about cognitive biases (and strategies to overcome them).
  • LeSS Games – component team and feature team simulation led by Pierluigi Pugliese – very interesting simulation – I used a variation of this in my CSM class this past weekend and people liked it. I hope to write about it sometime in the coming days
  • LeSS roles exercise by Michael James –  I have always been a fan of MJ. Very interesting exercise which reinforces the concept of LeSS roles
  • TDD in a flip chart – Guess I was there again, with MJ. Well, just learned that you do not need a computer to learn about TDD.
  • An open space session with Howard Sublett on LeSS and Scrum Alliance partnership (yours truly was the scribe) – Lot of interesting discussions on market, strategy, and positioning of the LeSS brand.  I personally got some insights from Rafael Sabbagh and Viktor Grgic.
Two days were short!! Time flew by.  It was a great experience!! And I wish we could have a North American LeSS conference every year!!

Experience Report by Guest-Blogger Mark Uijen de Kleijn

I attended the 2018 LeSS Conference – my first – at the Angel Orensanz Center in New York. I was really inspired by the many great speakers, experiments and experiences, and was glad I could help Jurgen de Smet with his workshop on Management 3.0 practices that can complement LeSS with experiments.

A couple of notes on the Conference: it was the first conference I have attended in years where I actually learned a lot, not only from the many speakers, experiments and experiences, but from my ‘team’ as well. As the LeSS Conference is a team-based conference, we reflected on the content and our insights during the Conference, which accelerated my learning.

As I use many games and practices in organizations and courses, I saw several great new games that I can use myself. The ‘building agile structures’ game by Tomasz Wykowski and Justyna Wykowska was the most outstanding game for me, because it makes the differences between component and feature teams very clear when scaling work, and I will certainly use it in the future. The experiences at Nokia shared by Tero Peltola were very inspiring, and I will especially take with me the focus on the competences (of everybody) and on technical excellence.

The thoughts that will stick with me most after the conference: the focus on technical excellence (including e.g. automation, code quality, engineering practices, etc.) and the importance of the structure of the organization, following Larman’s fifth law, ‘Culture follows structure’. The latter I am already familiar with, but it needs to be reprioritized in my mind. The former will be my main learning goal in the coming period, and I will need to dust off my former experiences.

Interesting quote to think about, by Bas Vodde: ‘we should maximize dependencies between teams’ (to increase collaboration between teams).


Games and Team Activities

LeSS Graphic Art


My partner in crime (Ari Tikka) and me  – Presenting on Coaching

Click here to download presentation: Ari’s deck | Gene’s deck.


Personal Memorable Moments


Next LeSS conference (2019) – Munich, Germany

May 30th-June 1st: Certified LeSS Practitioner Course With Craig Larman | NYC

Another LeSS Training (CLP) with Craig Larman is in the CompuBox.  This highly engaging training brought together 35 attendees from all over the globe.  One of the attendees was Chet Hendrickson.  A bit about Chet:
Chet has been involved with Agile Software Development since 1996 and is the first signatory to the Agile Manifesto. Along with his long-time friend and colleague Ron Jeffries, Chet has made the following important contributions to the global agile community:
  • Wrote Extreme Programming Installed (also with Ann Anderson)
  • In 2009, developed the Certified Scrum Developer program for Scrum Alliance
  • Taught the first Certified Scrum Developer (CSD) course
  • Has been curating the Scrum Alliance’s Agile Atlas website
  • Created the SA’s official Scrum description, Core Scrum
  • Speaks at conferences, bringing an interesting mix of humor and deep knowledge, and the odd cat picture

This is what Chet had to say about the course:

“Chet Went to Craig’s LeSS Course”

Many years ago, I wrote an article entitled “Inside every 100-person project is a 10-person project trying to get out.”  That pretty much sums up my feelings about Agile at scale.

My interests have always been with the programmers and their safety and not with how to “Agilize” the organization.  Some of this was a reaction to the failure of most Agile transformations.
But, as someone deeply rooted in the Agile movement, I feel it is important to pay some attention to the “scaling” end of things.  A couple of years ago,  Ron Jeffries and I took (most) of the four-day Implementing SAFe course.  You can read about that at https://ronjeffries.com/xprog/articles/safe-good-but-not-good-enough/.
I have also been paying attention to Craig Larman and Bas Vodde’s Large Scale Scrum (LeSS).  So, when I saw that Craig was teaching a LeSS Practitioner course in New York during a week I was not working, I signed up.
There were a couple of reasons for me to take some time away from my wife and cats to do this.  First, after having read the LeSS books, I wanted to learn more.  And, secondly, I have always enjoyed my interactions with Craig and wanted to spend some more time with him.
The course is three full days, 8:30 to 6:00, and involves a great deal of hands-on work.  And I do mean work.
Craig starts the class by saying that “you won’t successfully be able to return to your workplace and ‘give a summary’ of your insights; it is futile & won’t be understood.”
He is right about this.  But I will try and give you my impressions of the course.
One of the key takeaways from the course is something I already believed, which is: don’t scale.  Do everything possible to build your product with one team.  If that is not enough, find ways of descaling your problem.  Only if that fails should you take the steps required to turn your organization into one that can build large products with Scrum.  Doing this effectively will require many changes, most of which are about removing management and simplifying information flows.

Craig’s organizing principle for the course is that in order to successfully use these ideas, you must own them.  Having an instructor, no matter how good they are, no matter the depth of their experience, teach you something is nowhere near as good as discovering the answers yourself.  To this end, we spent most of the course learning and practicing organizational modeling to derive the practices and structures that align with our goals.

In the course, our goals were to create a learning organization that has the ability to “turn on a dime for a dime.”  You may have other goals, but these tools will help you better align with them, no matter what they are.
Only on the afternoon of the last day did we turn to a full discussion of LeSS.  This was very insightful and was a fitting way to close out the course.
If you are interested in Scrum at scale, I highly recommend this course.  If you are interested in bringing your organization into sync with its goals, then this is the place to start.

 

Some more Kodak moments from the event are below:

Proper Scaling of Scrum and Dynamic Financial Forecasting


The purpose of this post is to summarize two very important and independent topics and then integrate them into a joint discussion.  The topics are:

  1. Moving from rigid annual budgets to rolling forecasts (super important! in agile/adaptive product development environments)
  2. Quality of scaling in agile product development, specifically Scrum

…and tying effective scaling of Scrum to dynamic financial forecasting.


Rigid Annual Budgets vs. Dynamic/Rolling-Wave Forecasting

Challenges presented by rigid annual budgets have been known for a long time.  For people who are new to the topic, a great way to stay on top of the most recent research and publications is to follow what is going on at BBRT.org (Beyond Budgeting Round Table).  One of BBRT’s core team members, Bjarte Bogsnes, in his book “Implementing Beyond Budgeting: Unlocking the Performance Potential” (please refer to the book’s highlights here), clearly summarizes the problems with conventional, end-of-year rigid budgets. They are as follows:

  1. Budgets represent a retrospective look at past situations and conditions that may not be applicable in the future
  2. Assumptions made as a part of a budgeting process, even if somewhat accurate at the beginning, quickly become outdated
  3. Budgeting, in general, is a very time-consuming process, and it adds additional financial overhead to organizations
  4. Rigid budgets can prevent important, value-adding activities, and often lead to fear of experimenting, researching and innovating (crucial for incremental development)
  5. Budget reports are frequently based on subjective metrics, as they take on the form of RAG statuses, with the latter introducing additional errors and omissions (for details, please refer to Red, Yellow, Green or RYG/RAG Reports: How They Hide the Truth, by M. Levison, and The Fallacy of Red, Amber, Green Reporting, by G. Gendel)
  6. Budgets, when used as a yardstick to assess individual performance, often lead to unethical behaviors (e.g. “churning & burning cash” at year-end, to secure as much or more next year) or other system-gaming activities

…The list of adverse effects caused by traditional budgeting is long…

On the contrary, a rolling-wave forecast respects the fact that environmental conditions are almost never static, and recognizes that placing too much reliance on prior years’ financial situation may lead to miscalculations.  Rolling-wave forecasts are based on frequent reassessment of a small handful of strong KPIs, as opposed to a large number of weak KPIs, as is frequently done in conventional budgeting.  The more frequently forecasts are made, the higher the chance that the most relevant/reliable information will be used in assessments.  One good way to decide on the cadence of rolling forecasts is to align them with meaningful business-driven events (e.g. merchandise shipments, production code deployments, etc.).  It is natural to assume that for incremental/iterative product development (e.g. Scrum), where production deployments are made frequently and in small batches, rolling-wave forecasting could be a concurrent financial process.  The short cycle time of market feedback could provide good guidance for future funding decisions.
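
As a toy illustration of the mechanics (not from the original post; the spend figures are purely hypothetical), a rolling forecast can simply be re-projected from the trailing window of actuals after every release event, instead of being fixed once a year:

```python
# Minimal sketch (hypothetical numbers): a rolling-wave forecast that is recomputed
# after each business-driven event (e.g. a production deployment), using only the
# most recent actuals instead of a rigid annual plan.

from statistics import mean

def rolling_forecast(actuals, window=4, periods_ahead=2):
    """Project the next `periods_ahead` periods from the trailing `window` of actuals."""
    recent = actuals[-window:]              # only the freshest data drives the projection
    return [round(mean(recent), 1)] * periods_ahead

# Hypothetical spend per monthly release cycle, in $k
actuals = [120, 130, 145, 150, 160, 158]

print(rolling_forecast(actuals))            # [153.2, 153.2] -- re-run after every new actual
```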

It is worth noting that one of the key challenges Scrum teams face today is the “iron triangle” of conventional project management, with all three of its corners (time, scope, budget) rigidly locked. While the most common approach in Scrum is to make scope flexible, ‘clipping’ the budget corner brings an additional advantage to teams.  Above all other benefits, rolling-wave forecasts address the problem described in #4 above, as they provide safety to teams that want to innovate and experiment.

But what if there is not one but many Scrum teams, each working on its own initiatives, running on different cadences (unsynchronized sprints) and serving different customers?  How many independent rolling-wave forecasts can one organization or department adopt before things become too complicated?  What is too much, and where do we draw the line?

Before we try to answer this question, let’s review what is frequently seen when organizations attempt to scale Scrum.

 

Proper Scaling vs. “Copy-Paste” Scaling

Let’s look at the following two situations: (1) more than one Scrum team, each independently doing its own Scrum, and (2) more than one Scrum team working synchronously, on the same product, for the same customer, sharing the same product backlog and domain knowledge.  The former case is referred to as “Copy-Paste” Scrum, clearly described by Cesario Ramos. The latter case can be seen in skillful Large Scale Scrum (LeSS) adoptions. Here are some of the most typical characteristics of both scaling approaches:

(1) – “Copy-Paste” Scrum:
  • Product definition is weak. Applications and components that don’t have strong customer alignment are treated as products
  • “Doing Scrum” efforts are often a result of trying to meet the goals of an agile transformation (some annual % goals must be met), set at the enterprise level
  • Tight subsystem code ownership
  • Top-down, “command & control” governance, with little autonomy and self-management at team level
  • The importance of Scrum dynamics and its roles is viewed as secondary to existing organizational structure blueprints
  • Too many single-specialty experts and very few T-shaped workers
  • No meaningful HR changes to support Scrum team design

(2) – Large Scale Scrum (LeSS):
  • Simplified organizational design. Reduction of silos, handovers, translation layers and bureaucracy
  • Scrum is implemented by coordinated, feature-centric teams, working on a widely defined Product, for the same Product Owner
  • Local optimization by single specialists is eradicated
  • Scrum is a building block of the IT organizational structure
  • Teams are collocated; multi-site development is used for multiple locations
  • Strong reliance on technical mentoring and Communities of Practice
  • No subsystem code ownership
  • Reduction of “undone” work and the “undone department”
  • Focus on customer value
  • Strong support by senior leadership & intimate involvement of HR

Note: Please refer to Scaling Organizational Adaptiveness (a.k.a. “Agility”) with Large Scale Scrum (LeSS) for additional graphic illustration.

Based on the above, the following also becomes apparent:

In “copy-paste” Scrum, development efforts, marketing strategies and sales (ROI) are not treated as constituents of the same unified ecosystem.  In this scenario, it is almost impossible to fund teams by means of funding real, customer-centric products.  Why?  There are too many independent, ad-hoc activities taking place and artifacts being created.  There is no uniform understanding of work size and complexity shared by all teams.  Estimation and forecasting made by each individual team are not understood by other teams.  Team stability (and, subsequently, cost per team member) is low, as individuals are moved around from project to project and shared across many projects.  Further, with multiple teams reporting into different lines of management, there is a much higher chance of internal competition for budget.  By the same token, there is a low chance that a real paying customer would be able to step in and influence funding decisions for any given team: too many independent and competing requests are going on at the same time.

In organizations where “copy-paste” Scrum is seen (and is often mistakenly taken for scaled Scrum, due to a lack of education and expert leadership), there is still a strong preference for fake programs and fake portfolio management.  Under such conditions, unrelated activities and, subsequently, data/metrics (often fudged and RAG-ed) are collected from all over the organization and “stapled” together.  All this information rolls up to senior leadership, customers and sponsors.  Subsequently, what rolls down is not dynamic funding of well-defined, customer-centric, revenue-generating products, but rather rigid budgets for large portfolios and programs composed of loosely coupled work initiatives, performed by unrelated Scrum teams (secondary to conventional departmental budgeting).  As rigid budgets cascade down from the top onto individual teams, they further solidify the “iron triangle” of conventional project management and hinder teams’ ability to do research, experimentation and adaptive planning.

On the other hand, in Large Scale Scrum, things are different:

  • When up to eight LeSS teams work synchronously, together (side by side), on the same widely defined (real) product, their shared understanding of work type and complexity is significantly better (having certain Scrum events together really helps!). As a result, when it comes to forecasting the completion of certain work (features), eight LeSS teams will do a better job than eight loosely coupled teams that work completely independently, on unrelated initiatives.
  • Since all LeSS teams work for the same customer (Product Owner), there is a much higher chance that they will develop a shared understanding of product vision and strategy – they are getting it from an authentic source – and therefore will be able to plan more effectively.
  • Having a more direct correlation between the development efforts of LeSS teams (output, in the form of a shared PSPI) and business impact (outcome, in the form of overall ROI) makes strategic decisions about funding much more thoughtful.  When real customers can directly sponsor product-centric development efforts, by getting real-time feedback from the marketplace and deciding on future strategy, they (the customers) become much more interested in dynamic forecasting, as it allows them to invest in what makes the most sense.  Dynamic forecasting in LeSS allows the number of Scrum teams involved in product development to be increased or decreased flexibly, in response to increased/decreased market demand and/or product expansion/contraction (see the sketch after this list).
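
To make the forecasting point concrete, here is a toy sketch (not from the original post; the backlog size and velocities are purely hypothetical) of how teams that share one Product Backlog and one understanding of item size can forecast completion, and how adding a team immediately changes that forecast, which is exactly the kind of signal dynamic funding can react to:

```python
# Minimal sketch (hypothetical numbers): forecasting completion of one shared Product
# Backlog worked on by several LeSS teams that share a common understanding of item
# size. Adding or removing a team changes the forecast immediately.

import math

def sprints_to_complete(remaining_points, team_velocities):
    """Forecast how many sprints the remaining backlog needs, given per-team velocities."""
    combined_velocity = sum(team_velocities)
    return math.ceil(remaining_points / combined_velocity)

backlog_remaining = 960                   # points left in the single shared Product Backlog
velocities = [40, 35, 45, 38]             # four teams, one shared definition of size

print(sprints_to_complete(backlog_remaining, velocities))            # 7 sprints
print(sprints_to_complete(backlog_remaining, velocities + [40]))     # 5 sprints with a fifth team
```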

It is noteworthy that in LeSS Huge cases, when a product’s breadth has outgrown the capacity of a single Product Owner and requires work by more than eight teams, dynamic forecasting can still be a great approach for the (overall) Product Owner and the Area Product Owners (APOs): they can strategize the funding of different product areas and make timely adjustments to each area’s size/growth as market conditions change.


Conclusion:

All of the above, as described in the LeSS scenario, will decrease organizational dependency on fixed budgets, as there will be less interest in outdated financial information and more interest in the flexibility provided by rolling-wave forecasting, which brings “the concept” (where value is built: teams) much closer to “the cash” (where value is consumed: customers).

December 6th-8th: Certified LeSS Practitioner Course With Craig Larman | NYC


Another Large-Scale Scrum Training (CLP), taught by Craig Larman in NYC, is in the CompuBox.

More than thirty people from all around the globe (North America, South America, Europe) came together for this brain-jelling learning experience! The group consisted of product owners/managers, software engineers, managers and organizational design consultants (Scrum Masters, coaches and trainers) – people coming from different backgrounds and with a focus on different aspects of organizational agility. What united them all, however, was their eagerness to learn in depth about the principles of organizational design and the implications of Scrum adoption at scale in complex organizational settings.

Course Highlights

With the exception of a few rare questions/clarifications, the class spent NO time discussing basic Scrum.  It was implicit (assumed) that everyone in class had strong knowledge of, and hands-on experience with, the basic framework.  On occasion, the topics discussed would bump into “…oh, this is not even LeSS-specific; this is just basic Scrum…”, but those cases were rare.

Not until day three did the class take a deeper dive into the LeSS framework and LeSS-specific events, artifacts and roles…. Why was it not done sooner?   Well…

  • LeSS is Scrum. It is the very same Scrum described by Ken Schwaber and Jeff Sutherland in the Scrum Guide, but done by multiple teams working together, on the same product, for the same Product Owner.  LeSS is not “…something that IT does, that is buried in a company’s basement, under many layers of organizational complexity…”. LeSS is an organizational design that uses Scrum (the team) as a building block.  Understanding basic Scrum made understanding LeSS very easy for everyone.
  • The class was made up of people who had completed all assigned homework (self-study) before attending. People knew what the LeSS picture looks like 😉 when coming in.  Everyone in class was an educated customer.  Importantly: there were no attempts to change LeSS (or the training content 😊 of LeSS) to make it better fit the conditions of the organizations people came from.
  • Spending the first two days on understanding system modelling techniques and the differences between causation and correlation (as well as other dynamics) among many system variables made the full understanding of LeSS on day three come more naturally.

The class learned how to see ‘the whole’/full picture of the organizational ecosystem and learned to appreciate why organizational design is the first-order variable that defines system dynamics (followed by everything else: culture, policies, norms, processes, etc.).

One of my (Gene’s) biggest take-away points (on top of an excellent LeSS refresher from Craig himself), which I plan on using immediately, was a fact from history that was discussed at the beginning of the course (and, sadly, forgotten or not known by many).  It goes as follows:

…Back in 2001, at Snowbird, UT, where the group of seventeen entrepreneurs-product-developers met and came up with what is known today as the ‘Agile Manifesto’, the two contending terms were adaptive (suggested by Jim Highsmith, the author of Adaptive Software Development) and agile (suggested by Mike Beedle).  ‘Agile’ won, for the reasons that are described here.  Truth be told, because the English meaning of ‘agile’ is not as intuitive as the meaning of ‘adaptive’, today there is a huge number of fads and of terminology overloading/misuse that make the original meaning of agile diluted and abused…. As it was meant to be: Agile == Adaptive == Flexible.  We all have to be careful with the meaning of the words we use, to avoid this painful irony 😉.


Here are some Kodak moments from the event:

Addressing Problems, Caused by AMMS


Nowadays, for too many organizations, Agile Maturity Metrics (AMM) have become a trusted way to measure improvements of agility at the personal (individual), team and organizational levels.

However, it is not always apparent to everyone that AMMs are different from agile check-lists (e.g. the classic Scrum Checklist by H. Kniberg), and this can often lead to problems and dysfunctions:

Check-lists are just a set of attributes that are usually viewed on par with one another; they are not bucketed into states of maturity (though other logical groupings could be applied).

On the contrary, AMMs place attributes in buckets that represent different states of maturity, with one state following another, sequentially.

With very rare exceptions (favorably designed organizational ecosystems), there are three potential challenges that companies face when relying on bucketed AMMs:

1 – System gaming: If achieving a higher degree of agile maturity is coupled with monetary incentives/perks or other political gains (for many companies that are driven by scorecards and metrics, this is the case), there will always be attempts by individuals/teams to claim successes/achievements by ‘playing the system’, in pursuit of recognition and a prize.

Note: Translation of the text in red: “(Пере)выполним годовой план за три квартала!!!” = “Will meet/exceed the annual plan in three quarters!!!”

2 – The attribute-to-maturity-level relationship is conditional, at best: Placing agile attributes in maturity buckets implies that attributes in higher-maturity buckets have more weight than attributes in lower-maturity buckets. However, this is not always a fair assumption: the weight/importance that every organization/team places on any given attribute, while defining its own maturity, is unique to that organization/team.  For example, for one team, “…being fully co-located and cross-functional…” could be much more important than “…having the Product Owner collocated with the team…”. For another team, it could be the other way around.

3 – Correlation between attributes is not linear at the system level: Regardless of the buckets they are placed in, many agile attributes are interrelated systemically and impact one another in ways that are not apparent to the naked eye.  For example, placing the “Scrum Master is effective in resolving impediments” attribute in a maturity bucket that comes before the maturity bucket with the “…Organization provides strong support, recognition and career path to the Scrum Master role…” attribute dismisses the real cause-and-effect relationship between these two variables, misleads, and sets false expectations.

To avoid the issues described above, it is more advisable to treat every identified agile attribute as a system variable that is on par with other system variables, while assuming that it has upstream and downstream relationships.  In many situations, instead of spending a lot of time and resources trying to improve a downstream variable (e.g. trying to understand why it is so difficult to prioritize a backlog), it is more practical to fix an upstream variable that has much deeper systemic roots (e.g. finding an empowered and engaged product owner who has the right to set priorities).
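
As a simple illustration of this upstream/downstream view (not part of the original post; the attribute names and dependencies below are hypothetical), agile attributes can be held in a small dependency graph and walked upstream to find the variable worth fixing first:

```python
# Minimal sketch (hypothetical attributes): treating agile attributes as system
# variables in a dependency graph, so upstream causes can be found and addressed
# before spending effort on their downstream symptoms.

upstream_of = {
    # downstream attribute                  : attributes it depends on (upstream)
    "Backlog is prioritized every sprint":    ["Empowered, engaged Product Owner exists"],
    "Scrum Master resolves impediments":      ["Organization supports the Scrum Master role"],
    "Team delivers a 'done' increment":       ["Definition of Done covers all 'undone' work"],
}

def root_causes(attribute, graph):
    """Walk upstream from a struggling attribute to the variables worth fixing first."""
    causes = []
    for upstream in graph.get(attribute, []):
        causes.extend(root_causes(upstream, graph) or [upstream])
    return causes

print(root_causes("Backlog is prioritized every sprint", upstream_of))
# ['Empowered, engaged Product Owner exists']
```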

Below is the list of agile attributes (a.k.a. system variables) that are logically grouped (a check-list) but are not pre-assigned to levels of maturity (all flat).  Some examples of suggested system-level correlations between different attributes are provided (those cells are pre-populated).
Please click on the image to download the matrix to your desktop, amend the list of attributes if you feel that your situation calls for modification, and then use the “Dependency on Other Attributes?” column to better visualize the system-level correlation between the attributes that are of interest to you and other related attributes (some examples are provided).
Please, click on the image to download the matrix to your desktop, amend the list of attributes if you feel that your situation calls for modification, and then use “Dependency on Other Attributes?” column to better visualize system-level correlation between the attributes are of interest to you and other related attributes (some examples are provided).