
Meeting Customer Expectations One Release at a Time

The goal for Portland General Electric’s IT department this year seemed simple enough: complete our customer-valued work in a predictable manner. In practice, that meant shifting our customers’ perceptions of IT application development. We also wanted to close an organizational gap around focus and prioritization that our employee engagement survey had revealed. We knew that both goals would be challenging, but we had to respond to rising expectations from both our customers and our employees.

To meet these goals, PGE needed to find a way to take Agile software development methodology to the next level. Lean, a systematic method for the elimination of waste within a manufacturing system, was a logical next step. But we wondered: could a manufacturing improvement method provide similar value with software delivery? We decided to find out.

Structure for Success

In 2009, PGE started using Scrum (an iterative, Agile methodology for software development) as a proof of concept. The proof of concept paid off, and to keep moving forward, the applications organization was divided into 14 Scrum teams assigned to support specific lines of business. Scrum’s structure was straightforward, but for departments that had not participated in the proof of concept, it introduced a new way of interacting with us. Over the next 100 sprints, we continually looked for ways to improve the process, creating Business Sponsor Groups and project gates and introducing the Business Relationship Manager (BRM) role to act as the liaison between IT and the business units. With the BRM role in place, predictable delivery of business value became a regular request from the business, and one that needed our response.

Lean was the perfect next step for our organization. In 2014, we put the foundation for Lean into place, bringing in consultants to get us started. The focus of the first year was getting as much of the organization as possible through Lean foundations training. We ended up getting about half of our teams through it, but that was enough to get started. After a year of training, regular check-ins and coaching, we took off the training wheels and were ready to translate our preparation into action.


By the end of the year, most of our operations support teams had started to shift from Scrum to Scrumban (a blend of Scrum and Kanban, which is an inventory control system). The shift was driven by a goal to better support the immediate and changing demands of keep-the-lights-on work. With the teams’ work and workflow visible on Kanban boards, we could measure process times across our value streams and experiment with work-in-progress limits within each process.
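
To make the work-in-progress idea concrete, here is a minimal sketch, in Python, of the kind of limit a Kanban column enforces. The column names and limits are illustrative examples, not our actual board configuration:

```python
# A minimal sketch of a work-in-progress (WIP) limit on a Kanban board:
# a story may be pulled into a column only while it is under its limit.
# Column names and limits here are illustrative examples.

wip_limits = {"Development": 4, "System test": 2}


def can_pull(board, column):
    """Return True when the column has room under its WIP limit."""
    limit = wip_limits.get(column)
    return limit is None or len(board[column]) < limit


board = {"Development": ["S-101", "S-102"], "System test": ["S-103", "S-104"]}
print(can_pull(board, "Development"))  # True: 2 of 4 slots in use
print(can_pull(board, "System test"))  # False: already at its limit of 2
```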

Another benefit of Lean was a focused improvement direction for our organization. Scrum had already introduced a cadence of regularly scheduled retrospective meetings, but those improvement efforts were focused at the team level, not across the organization. By establishing a challenge goal for all teams, we were able to coordinate our improvement efforts to ensure a greater overall impact.

Standardize Progress Flow and Metrics

Before we could jump into our improvement efforts, we needed to put a few things into place. First, we needed to standardize how our teams tracked their work from the time they received a request to when they fulfilled it. Each team had been tracking its progress in whatever way provided the most value to that team, but we needed a workflow that was consistent across all teams. We put together a core team to take the lead on our Lean efforts, and it established the standardized workflow whose work states are defined below (a brief illustrative sketch follows the definitions):

Queue for planning: An item of work that has been added to the backlog but is not yet being prepared for work by the business analyst, product owner or team.

Planning: An item of work that has started analysis but does not yet meet the requirements for the team’s definition of ready.

Queue for development: An item of work that meets the team’s definition of ready and can be pulled into a sprint for development.

Development: An item of work currently being developed, tested and demonstrated to clients.

Queue for system test: An item of work that has been accepted by the product owner as meeting client expectations and is ready for installation into the test environment.

System test: An item of work that has been installed into the test environment and that is being tested.

Queue for release: An item of work that has passed the system test and that is ready to move into production.

Done: An item of work installed into production that is providing value to the client.
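
As promised above, here is a minimal Python sketch of how these eight work states and a story’s movement through them might be modeled. The class and field names are hypothetical illustrations, not the schema of our Agile project management tool:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class WorkState(Enum):
    """The eight work states of the standardized workflow."""
    QUEUE_FOR_PLANNING = "Queue for planning"
    PLANNING = "Planning"
    QUEUE_FOR_DEVELOPMENT = "Queue for development"
    DEVELOPMENT = "Development"
    QUEUE_FOR_SYSTEM_TEST = "Queue for system test"
    SYSTEM_TEST = "System test"
    QUEUE_FOR_RELEASE = "Queue for release"
    DONE = "Done"


@dataclass
class Story:
    """One item of work, recording the date it entered each state."""
    story_id: str
    entered: dict = field(default_factory=dict)

    def move_to(self, state: WorkState, on: date) -> None:
        # The entry dates are the raw data behind cycle times and CFDs.
        self.entered[state] = on


story = Story("PGE-101")
story.move_to(WorkState.QUEUE_FOR_PLANNING, date(2016, 1, 4))
story.move_to(WorkState.PLANNING, date(2016, 1, 11))
```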

If you walk the floor in a manufacturing environment, it’s easy to see a physical item making its way through the production line. In software development, the work product is not as visible, so the work has to be defined before progress can be measured consistently across teams. We initially thought about measuring an item of work at the release level, but releases were too inconsistent in size; that approach would have doomed our metrics from the start. So we decided to measure an item of work at the story/feature level. Stories/features were used by all teams, and we could get much closer to consistently sized items with them than we ever could with releases.

We needed to determine which metrics would provide the most value in tracking our improvement efforts. We needed metrics that would help us:

  • Understand our current condition;
  • Identify trends at the team level;
  • See where to focus our improvement; and
  • Measure the impact of improvement efforts.

Our Scrum teams were used to metrics such as the percent of sprint commitment completed and velocity, but these metrics wouldn’t serve us well as we changed how we tracked our work. Having the same metrics across all teams (development and operations) enabled us to merge individual team metrics to expand our view to the organizational level. Having an organizational view contributed to a better understanding of our beginning-to-end process, and it helped identify areas to focus on for our broader improvement efforts.

The key metrics were:

  • Throughput – stories finished per day
  • Exit rate – days per finished story
  • Total cycle time – days from request to release
  • Planning cycle time – days from planning started to release
  • Development cycle time – days from development started to release

To reduce the effort required for teams and functional managers to access their metrics, we built a dashboard that provides throughput and cycle time trending by team for any duration of time.
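
As an illustration of how these metrics fall out of the tracked dates, here is a minimal Python sketch; the sample stories, field names and measurement window are hypothetical:

```python
from datetime import date

# Each record holds the dates a story hit the workflow events we measure.
# The two sample stories below are hypothetical.
stories = [
    {"requested": date(2016, 1, 4), "planning": date(2016, 1, 11),
     "development": date(2016, 1, 25), "released": date(2016, 2, 12)},
    {"requested": date(2016, 1, 6), "planning": date(2016, 1, 18),
     "development": date(2016, 2, 1), "released": date(2016, 2, 19)},
]

WINDOW_DAYS = 126  # 18 weeks of data, i.e., six three-week sprints


def avg_days(start_key, end_key):
    """Average days between two workflow events across all stories."""
    spans = [(s[end_key] - s[start_key]).days for s in stories]
    return sum(spans) / len(spans)


throughput = len(stories) / WINDOW_DAYS          # stories finished per day
exit_rate = WINDOW_DAYS / len(stories)           # days per finished story
total_cycle = avg_days("requested", "released")  # request to release
planning_cycle = avg_days("planning", "released")
development_cycle = avg_days("development", "released")

print(f"throughput={throughput:.3f}/day, total cycle={total_cycle:.1f} days")
```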

Cumulative Flow Diagrams

Once we had decided what an item of work consisted of and which process states we would use, we needed a tool to make our data visible. We chose cumulative flow diagrams (CFDs). A CFD is a stacked area chart that visualizes where the stories are in the value stream across time (see Figure 1 below).

[Figure 1: Cumulative flow diagram showing stories by work state across time]

The goal of a CFD is to see a consistent flow of stories across the value stream. Each time we saw queues building up, we focused our improvement efforts in that area.
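
For readers who want to generate their own, the sketch below builds a simple CFD with Python and matplotlib. It assumes each story records the date it entered each state; the story data shown is hypothetical:

```python
from collections import Counter
from datetime import date, timedelta

import matplotlib.pyplot as plt

STATES = ["Queue for planning", "Planning", "Queue for development",
          "Development", "Queue for system test", "System test",
          "Queue for release", "Done"]

# Hypothetical data: for each story, the date it entered each state.
transitions = {
    "PGE-101": {"Queue for planning": date(2016, 1, 4),
                "Planning": date(2016, 1, 11),
                "Development": date(2016, 1, 25),
                "Done": date(2016, 2, 12)},
}


def state_on(entered, day):
    """The state a story occupied on a day: its latest entry on or before it."""
    past = {s: d for s, d in entered.items() if d <= day}
    return max(past, key=past.get) if past else None


start, end = date(2016, 1, 4), date(2016, 2, 19)
days = [start + timedelta(n) for n in range((end - start).days + 1)]

# For each day, count the stories in each state; stacking the counts
# across all states yields the cumulative flow picture.
counts = {s: [] for s in STATES}
for day in days:
    on_day = Counter(state_on(e, day) for e in transitions.values())
    for s in STATES:
        counts[s].append(on_day.get(s, 0))

plt.stackplot(days, [counts[s] for s in STATES], labels=STATES)
plt.legend(loc="upper left")
plt.title("Cumulative flow: stories by work state across time")
plt.show()
```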

By using the key metrics and CFDs, we have been able to have the right conversations about where to improve, and we could see the impact of our improvement efforts promptly. To keep our efforts visible, we keep updated CFDs for each team printed on the wall in our main conference room and review them regularly. We also built a feature into our dashboard that generates CFDs on demand, so functional managers have visibility into their team’s value stream at any time.

Lean in Practice

Our ScrumMasters were the key to our success when it came to implementing Lean. They ensured that the work of each team was tracked through the value stream, entered the data required to generate our CFDs and, most important, acted as Lean coaches to build a culture of continuous improvement on their teams. The ScrumMasters were responsible for executing the Lean kata, as developed by Mike Rother in his book, Toyota Kata.

As coaches, the ScrumMasters helped the teams determine what improvement target condition they wanted to accomplish over the next three weeks (aligned with our sprint cycles) and what obstacles stood in the way of meeting it. The target condition was set by reviewing the CFDs to identify which process in the value stream was delaying the delivery of business value. The ScrumMasters then facilitated two to three coaching sessions a week to walk through the efforts that had been attempted to eliminate the obstacles. During these sessions, the ScrumMasters guided the team through a “plan, do, check, act” cycle and asked the coaching kata questions developed by Mike Rother, listed below.

1. What is the target condition?
2. What is the actual condition now?
3. What obstacles do you think are preventing you from reaching the target condition? Which one are you addressing now?
4. What is your next step? What do you expect?
5. How quickly can we go and see what we have learned from taking that step?

Failure Demand Versus Value Demand

As we practiced the Lean kata, we determined that one of our largest obstacles to delivering predictable business value was technical debt. Technical debt is defined by Techopedia as a “concept in programming that reflects the extra development work that arises when code that is easy to implement in the short run is used instead of applying the best overall solution. That is, it implies that restructuring existing code (refactoring) is required as part of the development process.”

Technical debt was causing our operations teams to spend more time completing workarounds for prior development than providing new value to the business. We again brought in consultants to hold workshops to help us determine how to eliminate existing technical debt and avoid creating new technical debt.

For the workshop, we focused on a single operations team. We were surprised to find that 47 percent of the development performed fell under what we considered technical debt. However, the consultant team helped us determine that what we were calling technical debt was actually failure demand. Failure demand is a concept identified by occupational psychologist and author Professor John Seddon as “demand caused by a failure to do something or do something right for the customer.” Seddon makes the distinction between failure demand and value demand, which is what the service exists to provide. Failure demand represents a common type of waste found in service organizations.

For us, failure demand was caused during larger projects when a decision was made to release before all features were developed, despite knowing that workarounds or extra effort would be needed by the team supporting the feature. Those decisions usually were based on time or budget constraints. Most of what we had been calling technical debt turned out to be failure demand.
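
Measuring the split is straightforward once each finished item is tagged as value demand or failure demand; here is a back-of-the-envelope Python sketch, with hypothetical data:

```python
# Tag each finished work item as value demand or failure demand, then
# compute failure demand's share of the team's output. Data is hypothetical.
items = [
    {"id": "OPS-1", "demand": "value"},
    {"id": "OPS-2", "demand": "failure"},  # workaround for a prior release
    {"id": "OPS-3", "demand": "value"},
]

failure = sum(1 for item in items if item["demand"] == "failure")
print(f"failure demand: {failure / len(items):.0%} of completed work")
```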

The focus of our improvement efforts was to reduce failure demand on the operations teams, which freed up their capacity. We also worked closely with our project teams to identify where technical debt and failure demand were originating, and we partnered with the business to ensure they were fixed during the project rather than left as a burden for our operations teams. In the end, spending extra time on failure demand during a project allowed our operations teams to provide more value features to the business and reduced our total cost of ownership.

Benefits

By focusing on Lean, we were able to:

  • Establish a current condition for each team to track our improvement efforts;
  • Standardize the metrics for each team;
  • Average two Lean coaching sessions a week;
  • Develop a strong team of Lean coaches;
  • Start up a Lean advanced group to drive the design and strategy;
  • Use CFDs to drive improvement efforts;
  • Increase the throughput of business valued work by 21 percent; and
  • Get a better picture of the teams’ backlog from a value delivered perspective.

 

To top it off, groups outside of IT are now asking for our support with their improvement efforts. The road was not entirely smooth, however.

Our experience resulted in six lessons we can pass along:

1. ScrumMasters: Tracking work progress, making the work visible and facilitating regular coaching sessions was challenging for teams that did not have ScrumMasters.

 – The teams without ScrumMasters are our smaller teams of two or three people, generally in system administrator roles.

 – The ScrumMasters facilitated workshops to help these teams make their work visible so they could generate CFDs, but these teams do not participate in the regular coaching sessions.

2. Leadership support: It was difficult for our leadership team to reprioritize their time to champion the effort.

 – Top-down support was vital to the success of our Lean effort.

3. Making the call: Deciding whether, and how, to track incidents across the teams’ value streams.

 – We needed to find a way to represent the emerging keep-the-lights-on work to have the full picture of where the teams were spending their time.

 – We decided not to include incidents in our throughput calculations, since the work didn’t provide new value or features to the business.

4. Missing metrics: Calculating each team’s current state was challenging because we were not tracking our work in the way needed to calculate our metrics.

 – The initial team metrics took a lot of effort because we wanted our current conditions to be based on 18 weeks of data (six sprints).

 – Upgrading our Agile project management software provided the custom reporting we needed to automate the effort.

5. Standardize workload: Tracking throughput was difficult with different-sized stories/features.

 – We observed a lot of benefits from teams that were able to standardize the size of each story/feature that they worked on.

6. Lack of effective coaches: Improvements with Lean are driven by the quality of coaching interactions, not the quantity. Good coaches are a must for success.

We consider our Lean efforts to be a success. Our focus on continuous improvement supported the predictable delivery of business value in a department where predictability was difficult. We encourage others to try the approach that worked for us and expect that they will derive as much value as we did.

About the Author

Rick VanBeek, Portland General Electric
Rick VanBeek serves the information technology organization as a manager on the applications delivery leadership team at Portland General Electric (PGE). His areas of responsibility include Agile software delivery, Lean process ownership and leadership development.