
Accelerate

Nicole Forsgren, Jez Humble, Gene Kim
18 chapters · Time: ~50m · Level: advanced

Chapter Summaries

01

What's Here for You

In today's rapidly evolving technological landscape, clinging to outdated methods of value delivery is a sure path to obsolescence. "Accelerate" by Nicole Forsgren, Jez Humble, and Gene Kim is your definitive guide to not just surviving, but thriving. This book promises to equip you with the scientific understanding and practical tools needed to achieve consistently high performance in software development. You will gain a profound insight into how culture, technical practices, architecture, and management approaches are interconnected and crucial for success. Forget guesswork and anecdotal evidence; this book is grounded in rigorous research, revealing the true drivers of high performance and providing a clear, evidence-based roadmap. Prepare to move beyond the limitations of traditional project management and agile frameworks that often fail to deliver their full potential. You'll learn how to integrate essential functions like information security seamlessly, build sustainable development processes that prevent burnout, and foster employee satisfaction and engagement. This is not just about delivering software faster, but about delivering it better, safer, and more sustainably. The tone is intellectually stimulating yet deeply practical, empowering leaders and managers with the knowledge to transform their organizations. You'll discover how to measure what truly matters, understand the science behind effective change, and implement strategies that lead to tangible improvements in profitability, market share, and overall organizational greatness. "Accelerate" is your invitation to a new era of high-performing technology organizations, where innovation flourishes and human potential is fully realized.

02

ACCELERATE

In a world where the ground beneath us is constantly shifting, the authors Nicole Forsgren, Jez Humble, and Gene Kim reveal that the old ways of delivering value – through large, slow-moving projects – are no longer sufficient to thrive. Instead, a new paradigm is emerging, one embraced by high-performing organizations across every sector, from finance to government. These leaders are cultivating small, agile teams that operate in short cycles, keenly attuned to user feedback, building products and services that not only delight customers but also rapidly deliver tangible value. They possess a relentless drive for improvement, navigating high stakes and profound uncertainty with a singular focus on getting better, faster. The engine of this acceleration, they argue, is software, acting as the critical differentiator in today's economy. Banks no longer hoard gold; they trade faster and more securely. Retailers win with seamless digital and physical experiences. Governments leverage technology for more effective public service. This transformation, rooted in the principles of the DevOps movement, is not merely for tech giants like Netflix or Amazon, but is demonstrably beneficial for large, established enterprises as well. Yet, the journey is far from complete; a significant portion of the industry still lags behind, often overestimating their progress. This discrepancy between executive perception and practitioner reality underscores a crucial point: the imperative to accurately measure and communicate progress, focusing on capabilities rather than a static notion of maturity. Maturity models, the authors contend, offer a false sense of completion, a linear path that doesn't account for the dynamic, ever-changing landscape of technology and business. Instead, they advocate for a capabilities-based approach – one that is multidimensional, context-aware, and, most importantly, outcome-driven.
This shift allows organizations to continuously improve, focusing on the specific levers that drive measurable gains in speed, stability, and overall organizational performance. Their extensive research has identified 24 key capabilities that consistently predict success, offering a clear, evidence-based path forward. The value of adopting these capabilities, particularly those championed by DevOps, is staggering: high performers deploy code 46 times more frequently, achieve lead times from commit to deploy 440 times faster, recover from downtime 170 times faster, and experience a change failure rate five times lower than their counterparts. This isn't about choosing speed over stability; it's about building quality into the process to achieve both, a feat high performers accomplish by relentlessly improving the right capabilities across all types of organizations, regardless of size, industry, or technological stack.

03

MEASURING PERFORMANCE

In the intricate world of software development, where progress often feels invisible and work can be arbitrary, Nicole Forsgren, Jez Humble, and Gene Kim embark on a crucial quest: to scientifically define and measure what 'good' truly means. They reveal the pervasive flaws in past attempts to quantify performance, noting how metrics like lines of code, velocity, and even utilization often fall short, leading to unintended consequences like bloated codebases, gaming the system, and cripplingly long lead times. Imagine a factory floor where the product is unseen, and the assembly line is simultaneously designing and building – that’s the challenge. The authors then introduce a more robust framework, emphasizing global outcomes over local outputs, and team performance over individual metrics. They present four key measures: delivery lead time, deployment frequency, time to restore service, and change fail rate. These are not just abstract numbers; they are vital signs of a healthy software delivery engine. Delivery lead time, for instance, tracks the journey from code commit to production, a crucial indicator of agility. Deployment frequency acts as a proxy for batch size, a cornerstone of Lean principles, ensuring work moves in small, manageable pieces. Time to restore service addresses the inevitability of failure in complex systems, shifting focus to resilience. And change fail rate directly measures quality, ensuring that speed doesn't come at the cost of stability. Through rigorous cluster analysis, Forsgren, Humble, and Kim discovered that high performers excel across all these metrics, shattering the myth that speed and quality are mutually exclusive. They paint a vivid picture: the high-performing cluster pulling away, a testament to continuous improvement, while lower performers falter, often trying to force speed without addressing underlying issues. 
The impact is profound, showing that superior software delivery performance directly correlates with exceeding organizational goals, both commercial and non-commercial, demonstrating a clear competitive advantage. This isn't just about faster code; it's about driving business success. The authors urge us to move beyond correlation to prediction, to use these evidence-based insights to drive change, and to foster cultures where measurement enables improvement, not control. The journey begins with understanding the metrics, but the true transformation lies in cultivating a culture that embraces learning and continuous evolution, setting the stage for the next critical topic: culture itself.
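The four measures lend themselves to direct computation. The sketch below is a minimal illustration, assuming a simple record shape of (commit time, deploy time, failed?, minutes to restore); the sample data and format are invented, not from the book.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: (commit_time, deploy_time, failed, restore_minutes)
deployments = [
    (datetime(2024, 1, 1, 9), datetime(2024, 1, 1, 10), False, 0),
    (datetime(2024, 1, 1, 11), datetime(2024, 1, 1, 13), True, 30),
    (datetime(2024, 1, 2, 9), datetime(2024, 1, 2, 9, 30), False, 0),
    (datetime(2024, 1, 3, 14), datetime(2024, 1, 3, 15), False, 0),
]

# Delivery lead time: median time from code commit to running in production.
lead_times = sorted(d[1] - d[0] for d in deployments)
median_lead_time = lead_times[len(lead_times) // 2]

# Deployment frequency: deploys per day over the observed window.
days = (deployments[-1][1] - deployments[0][1]).days + 1
deploys_per_day = len(deployments) / days

# Change fail rate: share of deployments that degraded service.
failures = [d for d in deployments if d[2]]
change_fail_rate = len(failures) / len(deployments)

# Time to restore service: mean minutes to recover from failed changes.
mttr = sum(d[3] for d in failures) / len(failures) if failures else 0.0

print(median_lead_time, deploys_per_day, change_fail_rate, mttr)
```

In practice these numbers would be pulled from version control and deployment tooling rather than hand-entered, but the arithmetic is the same: all four are team-level vital signs, not individual metrics.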

04

MEASURING AND CHANGING CULTURE

In the intricate landscape of modern technology, the authors Nicole Forsgren, Jez Humble, and Gene Kim embark on a crucial exploration: the profound impact of organizational culture. They recognized a pervasive belief in DevOps circles that culture is paramount, yet its intangible nature presented a significant challenge. Their quest led them to Ron Westrum's well-defined typology, a scientific framework that could be measured and, crucially, possessed predictive power. Westrum's model, delineating pathological, bureaucratic, and generative cultures, offers a lens through which to understand how organizations handle information and failure. A pathological culture, steeped in fear and hoarding, acts like a choked dam, preventing vital information from flowing. In contrast, a generative culture, focused on the mission and fostering high cooperation, allows information to flow freely, like a clear river. The authors discovered that this cultural framework isn't just descriptive; it's measurable. By translating Westrum's typology into Likert-scale survey questions, they could statistically validate and reliably gauge an organization's cultural leanings. This rigorous approach revealed a powerful truth: culture isn't merely a soft skill; it's a predictor of tangible outcomes. Their research demonstrated that a generative culture, characterized by trust and collaboration, directly correlates with higher software delivery performance and overall organizational effectiveness. This finding resonates with Google's own research, which found that team dynamics, how members interact, structure work, and perceive their contributions, are far more critical than individual skills alone. The central tension, then, is how to move from cultures that stifle innovation and learning to those that foster them. The authors offer a compelling resolution: culture can be changed by changing behavior. 
By implementing practices rooted in Lean management and continuous delivery, organizations can actively cultivate a more generative culture, transforming how teams operate and, ultimately, how they succeed. It's a powerful testament to the idea that you can act your way to a better culture, transforming the very fabric of how work gets done, one practice at a time.
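As one illustration of how a typology becomes a measurement, the sketch below aggregates hypothetical Likert-scale responses into a single culture score. The item wording and scoring scheme are invented to echo the generative-culture themes; this is not the validated instrument from the research.

```python
# Hypothetical Westrum-style Likert items (1 = strongly disagree, 5 = strongly agree).
items = [
    "On my team, information is actively sought.",
    "On my team, failures are treated as opportunities to learn.",
    "On my team, responsibilities are shared.",
    "On my team, cross-functional collaboration is encouraged.",
]

# One inner list of answers per respondent, one value per item above.
responses = [
    [5, 4, 5, 4],
    [4, 4, 3, 5],
    [5, 5, 4, 4],
]

def westrum_score(responses):
    """Mean of per-respondent means; higher suggests a more generative culture."""
    per_person = [sum(r) / len(r) for r in responses]
    return sum(per_person) / len(per_person)

print(round(westrum_score(responses), 2))
```

The statistical validation the authors describe goes further than a simple mean (checking that items cluster together and measure one construct), but the basic move is the same: turning "culture" into numbers that can be tracked and correlated with outcomes.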

05

TECHNICAL PRACTICES

In the quest for software that is delivered safely, quickly, and sustainably, Nicole Forsgren, Jez Humble, and Gene Kim illuminate the profound impact of technical practices, moving them from the periphery to the heart of agile methodologies. They reveal that while frameworks like Scrum often emphasize management and team dynamics, the bedrock of success, particularly in achieving continuous delivery, lies in robust technical foundations. Imagine a finely tuned orchestra, where each instrument, each technical practice, must play in perfect harmony to produce a symphony of high-quality software. The authors challenge the notion that technical practices are secondary, presenting research that demonstrates their vital role in boosting software delivery performance, fostering a positive organizational culture, and crucially, reducing team burnout and deployment pain. At its core, continuous delivery is a capability, a set of principles designed to get any change – be it a feature, a bug fix, or an experiment – into the hands of users with speed and reliability. This is achieved by building quality in from the start, eschewing the costly practice of relying solely on inspection, and instead investing in systems and culture that catch issues early when they are cheap to resolve. Work is broken down into small batches, allowing for rapid feedback and course correction, much like a sculptor chipping away at stone, revealing the form piece by piece rather than attempting to carve the whole statue at once. This approach, coupled with the relentless pursuit of continuous improvement and a shared sense of responsibility across development, testing, and operations, transforms the economics of software delivery, making the cost of individual changes remarkably low. 
Key enablers emerge: comprehensive configuration management, where environments and deployments are automated from version control; continuous integration, where code is merged frequently to keep branches short-lived and issues immediately addressed; and continuous testing, where automated tests are an integral part of the development process, providing fast feedback on every commit. The impact is transformative: higher software delivery performance, lower change fail rates, a stronger sense of organizational identification, and a more generative, performance-oriented culture. Even the very felt experience of work improves, making technology investments feel like investments in people, fostering a sustainable pace of development. Quality itself is redefined not just by the absence of defects, but by the significant reduction in unplanned work and rework, a clear indicator of having built quality in from the outset. The authors stress that while practices like comprehensive version control, reliable test automation primarily driven by developers, effective test data management, trunk-based development, and integrating information security early are crucial, the true challenge often lies in the underlying architecture and the commitment to substantial investment in automation. Ultimately, embracing these technical practices is not just about building better software; it's about building a more resilient, engaged, and effective organization.

06

ARCHITECTURE

In the realm of software development, the authors Nicole Forsgren, Jez Humble, and Gene Kim illuminate a crucial truth: while continuous delivery can transform team performance and reduce burnout, the very architecture of our systems can become a formidable barrier. They reveal that high performance isn't confined to the pristine landscape of 'greenfield' projects or modern web applications; it's achievable even within the labyrinthine complexity of mainframe systems or the dreaded 'big ball of mud' enterprise environments, provided a singular, vital characteristic is present: loose coupling. This architectural property acts like a well-designed circulatory system, allowing individual components or services to be modified and deployed independently, enabling organizations to scale their productivity without their growth becoming a tangled mess. Their research dispels the myth that certain system types, like packaged software or systems of record, are inherently performance bottlenecks, finding that the correlation is weak. Instead, the true differentiators lie in deployability and testability – the ability to test without a fully integrated environment and to release independently. Imagine a grand orchestra where each musician can tune their instrument and practice their part in isolation, only coming together for the final, synchronized performance, rather than a chaotic jam session where every change requires the attention of the entire ensemble. This independence, they explain, is the biggest contributor to continuous delivery, even more so than automation itself. It empowers teams to make large-scale changes without seeking permission or depending on others, fostering autonomy and accelerating progress. This concept echoes Melvin Conway's observation that system designs mirror communication structures, leading to the 'inverse Conway maneuver' – deliberately shaping team structures to achieve desired architectures. 
By embracing loose coupling, organizations can not only boost delivery tempo and stability but also scale their engineering efforts linearly, or even better, defying the orthodox view that adding more developers inevitably slows things down. Furthermore, Forsgren, Humble, and Kim champion the power of team autonomy in tool selection, arguing that engineers on the ground are best positioned to choose technologies that enhance their work, a principle that, when applied thoughtfully, can significantly boost performance. Ultimately, they stress that architects should focus not on the ephemeral trends of tools and technologies, but on the enduring outcomes and the engineers who drive them, fostering an environment where systems enable teams to deliver value efficiently and autonomously, much like a master craftsman equips an apprentice with the finest tools for the task at hand.

07

INTEGRATING INFOSEC INTO THE DELIVERY LIFECYCLE

The authors, Nicole Forsgren, Jez Humble, and Gene Kim, illuminate a critical blind spot in the evolution of software development, revealing how the very spirit of DevOps, intended to foster collaboration and system-level thinking, often falters when it excludes vital functions like information security. They explain that the original aim was to bridge the gap between disparate teams, preventing the detrimental practice of 'throwing work over the wall.' Yet, this fragmentation persists, particularly with information security teams, who, despite the omnipresent threat landscape, are frequently understaffed—sometimes with a ratio as stark as one infosec professional for every hundred developers. This under-resourcing often leads to their involvement only late in the delivery cycle, transforming necessary security enhancements into costly and painful rework, a scenario akin to trying to reinforce the foundations of a skyscraper after its upper floors are already built. A core insight emerges: this is not just an operational challenge, but a fundamental misunderstanding of how to achieve true agility and security. The authors champion the concept of 'shifting left on security,' a proactive approach where security is woven into the fabric of the entire software delivery lifecycle, from initial design through to operations, rather than being an afterthought. This means security experts are embedded in the design process, offer feedback on software demonstrations, and ensure security is a part of automated testing. Furthermore, they advocate for empowering developers by providing readily available, pre-approved tools and libraries, shifting the responsibility from inspection to enablement—making it easier for developers to 'do the right thing.' 
This integration, they posit, not only streamlines the delivery process, enabling faster deployments and improving delivery performance, but also significantly enhances security quality, with high-performing organizations spending up to 50% less time remediating security issues. This philosophy extends to movements like 'Rugged DevOps,' which, through the 'Rugged Manifesto,' calls for recognizing the inherent risks and adversarial nature of modern software and choosing to build resilient, secure systems as a matter of necessity. Ultimately, the chapter argues that by making security an integral, collaborative part of daily work, organizations move from a reactive posture of fixing vulnerabilities to a proactive stance of building inherently secure and high-performing systems, creating a win-win that aligns perfectly with the deeper goals of DevOps.

08

MANAGEMENT PRACTICES FOR SOFTWARE

The landscape of software delivery management has dramatically shifted, moving from traditional project frameworks like PMI and PRINCE2 to the rapid adoption of Agile principles following the 2001 Manifesto. Parallel to this, the profound philosophy of Lean manufacturing, born from Toyota's quest to produce diverse cars efficiently for a smaller market, began to influence software development. Toyota's relentless pursuit of improvement allowed them to outpace competitors in speed, cost, and quality, a revolution that eventually compelled the US auto industry to adapt. Mary and Tom Poppendieck were instrumental in translating these Lean ideas for the software world, and this chapter delves into how these practices can ignite software delivery performance. The authors, Nicole Forsgren, Jez Humble, and Gene Kim, identify three core components of Lean management in software: first, the crucial practice of limiting Work in Progress (WIP) and using these limits to fuel process improvement and boost throughput; second, creating and maintaining vibrant visual displays that showcase key quality and productivity metrics, alongside the real-time status of work and defects, making this information accessible to everyone from engineers to leaders and aligning it with operational goals; and third, leveraging daily data from application performance and infrastructure monitoring to inform critical business decisions. While WIP limits and visual displays are familiar Lean tools, their true power emerges not in isolation, but in their synergy. When combined with a feedback loop from production monitoring that informs delivery teams and the business, these practices create a powerful engine for improvement. The research reveals that WIP limits alone are insufficient; they must expose obstacles to flow and lead to tangible process improvements that increase throughput. 
Similarly, visual displays are most effective when they facilitate broad information sharing, making quality and productivity data, including failure rates, readily accessible and transparent. This visibility fosters high-quality communication, a cornerstone of effective teamwork. The impact of these integrated Lean practices extends beyond mere delivery speed, positively influencing team culture and reducing burnout, creating a more generative environment. Furthermore, the chapter scrutinizes change management processes, revealing that mandatory approvals by external bodies like a Change Advisory Board (CAB) or managers are not only ineffective but detrimental to delivery performance, significantly increasing lead times and negatively impacting restore times without improving change fail rates. Instead, the research strongly supports a lightweight, peer-review-based approach—such as pair programming or intra-team code reviews—coupled with a robust deployment pipeline. This streamlined process, applicable to all changes including code, infrastructure, and databases, ensures that changes are validated and tracked, providing auditors with a complete, automated record, satisfying regulatory requirements without the bureaucratic drag. The authors posit that external approvals often represent a form of "risk management theater," a superficial adherence to process that delays progress without genuinely enhancing stability. True risk management, they argue, lies in empowering teams with visibility and facilitating their improvement through practices known to enhance quality and speed, rather than imposing external gatekeepers. This journey from rigid oversight to empowered autonomy is the essence of accelerating software delivery.
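The mechanics of a WIP limit can be sketched in a few lines. This toy board (column names and limits invented) shows the key behavior the research highlights: a pull beyond the limit is refused, which makes the blocked flow visible instead of letting it hide in a growing queue.

```python
# Minimal Kanban-style WIP-limit sketch; illustrative only, not from the book.
class Board:
    def __init__(self, wip_limits):
        self.wip_limits = wip_limits                   # e.g. {"doing": 2}
        self.columns = {col: [] for col in wip_limits}

    def pull(self, col, item):
        """Pull an item into a column only if its WIP limit allows it.

        Returning False is the point: the team must finish or unblock
        existing work before starting more, exposing obstacles to flow.
        """
        if len(self.columns[col]) >= self.wip_limits[col]:
            return False
        self.columns[col].append(item)
        return True

board = Board({"doing": 2, "review": 1})
print(board.pull("doing", "feature-a"))   # True
print(board.pull("doing", "feature-b"))   # True
print(board.pull("doing", "feature-c"))   # False: limit reached, flow is blocked
```

As the chapter stresses, the limit alone changes nothing; the refused pull only helps if the team treats it as a signal to investigate and remove the obstacle, raising throughput over time.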

09

PRODUCT DEVELOPMENT

The authors, Nicole Forsgren, Jez Humble, and Gene Kim, delve into the heart of modern product development, revealing that while the Agile methodology has largely triumphed in name, its true spirit is often lost in translation. Many organizations, particularly larger ones, still cling to outdated practices—months of budgeting and requirements gathering, followed by massive projects and infrequent releases, with customer feedback treated as an afterthought. This contrasts sharply with the principles championed by Lean product development and the Lean Startup movement, popularized by Eric Ries. Forsgren, Humble, and Kim emphasize a crucial shift: from lengthy, upfront planning to a dynamic, experimental approach. They highlight the power of testing product designs and business models early and often, drawing from Steve Blank's insights on building and validating prototypes from the outset. The core tension lies in bridging the gap between perceived Agile adoption and genuine Lean principles. The research presented identifies four critical Lean product development capabilities: 1) slicing work into small batches completable in under a week, often using Minimum Viable Products (MVPs) for validated learning; 2) maintaining clear visibility into the flow of work from inception to customer delivery; 3) actively seeking and incorporating customer feedback throughout the product lifecycle; and 4) empowering development teams with the authority to create and modify specifications without external approval bottlenecks. These capabilities, the authors demonstrate, are not mere suggestions but statistically significant predictors of higher software delivery performance, improved organizational outcomes like productivity, market share, and profitability, and a healthier organizational culture that reduces burnout. 
The narrative unfolds to reveal a fascinating reciprocal relationship, a virtuous cycle, where improved software delivery performance actually enhances the ability to work in small batches and incorporate customer feedback, which in turn drives further delivery improvements. Imagine a sculptor not chipping away at a massive block of marble for months, but instead, delicately shaping small, manageable pieces, constantly holding them up to the light and to the discerning eye of a client, refining with each gentle touch. This iterative sculpting, this constant dialogue with the material and the audience, is the essence of effective Lean product development. The ability of teams to experiment, to try new ideas and adjust course based on real-world feedback, is paramount. When development teams are empowered to respond to customer insights without needing endless approvals, innovation flourishes, leading to products that truly delight customers and deliver tangible business value. This isn't about unchecked freedom, but about informed autonomy, where experimentation is guided by visibility, small batches, and customer feedback, ensuring that every decision is well-reasoned and aligned with organizational goals. Ultimately, Forsgren, Humble, and Kim present a compelling case: embracing Lean product development isn't just about faster delivery; it's about building a more resilient, responsive, and successful organization.

10

MAKING WORK SUSTAINABLE

The authors, Nicole Forsgren, Jez Humble, and Gene Kim, delve into the critical concept of making software delivery performance sustainable, moving beyond brute force to address the human cost. They reveal that the friction and anxiety surrounding code deployments, termed 'deployment pain,' is a potent indicator of underlying issues in software delivery, organizational performance, and culture. This pain arises from the disconnect between development and operations, a chasm widened by differing environments, processes, mindsets, and even language. Microsoft's Azure team, for instance, saw a dramatic leap in work-life balance satisfaction scores, from 38 to 75, after implementing continuous delivery practices, freeing engineers from manual, stressful deployment processes and allowing them to keep work stresses contained within work hours. Conversely, a lack of awareness or visibility into deployments by development and testing teams signals potential barriers and isolation from the consequences of their work, often leading to poorer outcomes. Forsgren, Humble, and Kim found that the technical practices that enable speed and stability in software delivery—such as comprehensive test and deployment automation, continuous integration, trunk-based development, and loosely coupled architectures—are precisely those that reduce deployment pain. The more painful deployments are, the poorer the IT performance, organizational performance, and culture become; in essence, a brittle, complex deployment process, often stemming from software not written with deployability in mind, manual production changes, or siloed handoffs, directly correlates with organizational distress. 
Beyond deployment pain, the chapter confronts the pervasive issue of burnout, defined not just as overwork but as a deep exhaustion that renders work meaningless and often leads to feelings of helplessness, impacting individuals and organizations with significant costs in lost productivity and employee turnover. They highlight that managers often err by trying to fix the person rather than the work environment, when the latter, encompassing factors like a lack of control, insufficient rewards, a breakdown of community, absence of fairness, and value conflicts, holds the key to prevention. The authors present a compelling insight: improving technical capabilities, like those supporting continuous delivery and Lean practices, directly correlates with reduced feelings of burnout. This is because these practices foster environments where work is meaningful, supportive, and aligned with strategic objectives, empowering employees and reducing the chronic stress that erodes well-being. Ultimately, the research presented by Forsgren, Humble, and Kim demonstrates that investments in technology and Lean management are not merely about better software delivery; they are profound investments in the quality of professionals' work lives, fostering environments where employees thrive rather than merely survive, a crucial distinction for long-term success and innovation.

11

EMPLOYEE SATISFACTION, IDENTITY, AND ENGAGEMENT

In the relentless current of technological transformation, the human element stands as the bedrock. Forsgren, Humble, and Kim, in their exploration of employee satisfaction, identity, and engagement, reveal that the true engine of progress isn't just code or infrastructure, but the people who craft and wield it. They found that high-performing organizations, those that truly accelerate, cultivate a profound sense of loyalty among their employees, a loyalty measurable by the employee Net Promoter Score, or eNPS. Imagine a team, not just completing tasks, but actively championing their workplace, eager to recommend it as a place of growth and innovation. This isn't mere sentiment; it's a powerful indicator that translates directly into tangible business outcomes: increased profitability, enhanced productivity, and a stronger market share. The authors illuminate how this loyalty is intertwined with a deeper sense of identity. When individuals feel their values align with the organization's, when they see their contributions making a real impact, they become more than just employees; they become invested stakeholders. This connection acts as a potent antidote to burnout, a shield against the corrosive effects of a values mismatch. It's like finding your true north in a bustling city, a sense of belonging that fuels purpose. Furthermore, the research underscores the transformative power of job satisfaction, which is not merely about feeling good, but about having the autonomy, the tools, and the resources to excel. Practices like automation, when implemented thoughtfully, liberate minds from rote tasks, allowing individuals to engage their critical thinking and problem-solving skills, the very essence of fulfilling work. This creates a virtuous cycle: better tools lead to greater satisfaction, which fuels better performance, and ultimately, superior organizational outcomes. Finally, the chapter turns its gaze to the vital issue of diversity and inclusion. 
While research consistently shows that diverse teams are smarter and achieve better performance, the reality in tech often falls short. The authors present stark data on the underrepresentation of women and minorities, highlighting that diversity alone is insufficient; it must be paired with an inclusive culture where every voice is valued and heard, fostering a sense of belonging that allows unique talents to truly flourish. The message is clear: investing in people, fostering their engagement, and championing their unique identities isn't just good management; it's the strategic imperative for any organization aiming to thrive in the modern technological landscape.
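The eNPS mentioned above follows the standard Net Promoter scoring, which is simple to compute; the sample scores below are invented for illustration.

```python
def enps(scores):
    """Employee Net Promoter Score: % promoters (9-10) minus % detractors (0-6).

    `scores` are answers, on a 0-10 scale, to a question such as
    "How likely are you to recommend this organization as a place to work?"
    Passives (7-8) count in the denominator but neither add nor subtract.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

print(enps([10, 9, 8, 7, 6, 9, 10, 3]))  # → 25
```

The resulting score ranges from -100 (all detractors) to +100 (all promoters), which is what lets the authors compare loyalty in high- and low-performing organizations on one scale.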

12

LEADERS AND MANAGERS

The authors, Nicole Forsgren, Jez Humble, and Gene Kim, illuminate a crucial, often overlooked, element in the engine of technological transformation: leadership. They reveal that while grassroots efforts can spark change, true acceleration requires the guiding hand of engaged leaders. Gartner's stark prediction that half of CIOs lacking transformed capabilities will be displaced by 2020 underscores the urgency; leadership isn't just about organizational charts, but about inspiring innovation, architecting robust systems, and fostering Lean principles that directly impact profitability and market share. Forsgren, Humble, and Kim present transformational leadership as the bedrock, defining it through five key dimensions: vision, inspirational communication, intellectual stimulation, supportive leadership, and personal recognition. Their research unequivocally shows that high-performing teams are led by those exhibiting strong behaviors across all these dimensions, while low-performing teams languish with leaders demonstrating these traits minimally. It's a stark correlation: teams with the least transformative leaders are half as likely to be high performers, a testament to the fact that leaders, though powerful, cannot achieve goals in isolation; they need empowered teams and sound technical practices. The authors emphasize that leaders amplify the work of their teams, and their influence flows indirectly through the technical and Lean capabilities they enable. Managers, as a specific subset of leaders responsible for people and resources, play a critical role in bridging strategic objectives with daily work. They can foster high-performing environments by creating psychological safety, investing in employee development, and proactively removing obstacles. Crucially, managers can improve culture and performance by enabling DevOps practices, visibly investing in professional development, and making deployments less painful. 
The research points to three pillars for improving culture and supporting teams: fostering cross-functional collaboration built on trust, creating a climate for learning through dedicated budgets and safe-to-fail environments, and ensuring teams have effective tools they can choose. This deliberate investment in leadership, culture, and tools, the authors conclude, is not merely an expenditure but a vital investment in a team's, technology's, and organization's future success, transforming potential into tangible value.

13

THE SCIENCE BEHIND THIS BOOK

In a world awash with promises of transformation, how do we discern genuine drivers of change from mere coincidences? This is the central question Nicole Forsgren, Jez Humble, and Gene Kim explore, anchoring their work in the bedrock of rigorous primary research. They distinguish primary research, where data is collected anew by the researchers themselves—like the decennial U.S. Census—offering unparalleled control and insight, from secondary research, which repurposes existing data, often faster and cheaper but less tailored. Imagine trying to understand a complex ecosystem by only looking at old maps versus going out and observing the flora and fauna firsthand; primary research is that direct observation. The authors then navigate the landscape of qualitative versus quantitative research, highlighting that while qualitative data offers rich descriptive narratives, it's the quantitative approach, with its numerical precision—often derived from Likert-type scales where responses are assigned numerical values—that forms the backbone of their investigation. This quantitative data then undergoes analysis, moving through stages of increasing complexity: descriptive analysis, which simply summarizes and reports what is, much like a census detailing population demographics; exploratory analysis, which seeks correlations and patterns, like spotting that cheese consumption and bedsheet strangulations might trend together, a fascinating but often spurious link that doesn't imply causation; and inferential predictive analysis, the crucial third stage where hypotheses are tested against theory to understand impacts and drive outcomes. Forsgren, Humble, and Kim emphasize that their research, spanning four years and employing quantitative survey data, utilizes this inferential predictive analysis, allowing them to rigorously test hypotheses about what capabilities truly drive software delivery and organizational performance. 
They deliberately exclude the more complex predictive, causal, and mechanistic analyses due to data limitations, while also introducing classification analysis, a method used to group entities, such as categorizing software delivery teams into high, medium, and low performers based on key metrics. Ultimately, the chapter serves as a foundational guide, demystifying the scientific underpinnings of their findings and empowering readers to critically evaluate claims of technological and organizational improvement.
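To make classification analysis concrete, the sketch below groups hypothetical teams into high, medium, and low tiers by scoring them on the four key delivery metrics the authors use (lead time, deployment frequency, time to restore service, and change fail rate). All data, thresholds, and scoring here are invented for illustration; the book's actual analysis used statistical cluster analysis on survey responses, not fixed cut-offs.

```python
# Hypothetical illustration of classification analysis: grouping teams
# into performance tiers from the four key delivery metrics.
# All data and thresholds below are invented for illustration only.

def performance_tier(lead_time_hours, deploys_per_day,
                     restore_hours, change_fail_rate):
    """Score a team 1 (low) to 3 (high) on each metric, then average."""
    scores = [
        3 if lead_time_hours < 24 else 2 if lead_time_hours < 168 else 1,
        3 if deploys_per_day >= 1 else 2 if deploys_per_day >= 1 / 7 else 1,
        3 if restore_hours < 1 else 2 if restore_hours < 24 else 1,
        3 if change_fail_rate < 0.15 else 2 if change_fail_rate < 0.30 else 1,
    ]
    avg = sum(scores) / len(scores)
    return "high" if avg >= 2.5 else "medium" if avg >= 1.5 else "low"

teams = {
    "team-a": (4, 5.0, 0.5, 0.05),    # deploys on demand, recovers fast
    "team-b": (72, 0.2, 12, 0.20),    # roughly weekly cadence
    "team-c": (720, 0.01, 96, 0.45),  # monthly releases, slow recovery
}
tiers = {name: performance_tier(*m) for name, m in teams.items()}
```

The point of the sketch is the shape of the method, not the numbers: teams are grouped by observed outcomes across several metrics at once, rather than ranked on any single measure.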

14

INTRODUCTION TO PSYCHOMETRICS

The authors, Nicole Forsgren, Jez Humble, and Gene Kim, begin by confronting a fundamental question about their research: can we truly trust data gleaned from surveys, as opposed to system-generated metrics? This skepticism, they explain, often stems from a poor understanding of survey design, with many people’s only exposure being to manipulative 'push polls' or poorly constructed questionnaires. These flawed surveys, characterized by leading questions that steer respondents, loaded questions that force assumptions, and unclear language, can indeed yield unreliable data. Imagine trying to gauge someone's true feelings about a policy with a question like, 'Do you agree with President Trump's media strategy to cut through the media's noise and deliver our message straight to the people?' Such questions are designed to elicit agreement rather than genuine insight, creating a fog of bias. However, Forsgren, Humble, and Kim reveal a powerful way to navigate this complexity: the concept of latent constructs. These are phenomena that cannot be measured directly, like organizational culture. Instead, we measure them indirectly by observing their component parts, or 'manifest variables,' through carefully crafted survey items. Think of it like piecing together a mosaic; each tile (survey item) contributes to the larger picture (the latent construct). By defining our constructs rigorously—for instance, understanding organizational culture not as a vague feeling, but as specific behaviors like shared responsibility and active information seeking, as Dr. Ron Westrum’s typology illustrates—we can build more robust measures. This approach shields us from the pitfalls of individual bad data points or deliberate manipulation. When multiple, carefully validated survey items all point to the same underlying concept, it creates a powerful assurance of trustworthiness. 
This is akin to triangulating a signal; a single faulty sensor might mislead, but multiple sensors confirming the same reading provide confidence. Furthermore, statistical tests for validity and reliability ensure that the survey items accurately measure what they intend to and are interpreted consistently by respondents. This rigorous process, Forsgren, Humble, and Kim emphasize, is crucial not only for survey data but also for system-generated metrics, as all measurements are, in essence, proxies. By using multiple measures that look for similar patterns, we can better detect anomalies and gain a more accurate understanding of complex phenomena, moving from mere data points to meaningful insights.
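One standard reliability test of this kind is Cronbach's alpha, which checks whether several survey items intended to measure the same latent construct are answered consistently. The sketch below computes it over hypothetical Likert-scale responses (1-7) to three items; the data and the example construct are invented, and the authors' actual validation also includes convergent and discriminant validity tests beyond this single statistic.

```python
# Cronbach's alpha: an internal-consistency check for survey items that
# are meant to measure one latent construct. Responses are hypothetical
# Likert scores (1-7); one row per respondent, one column per item.

def variance(xs):
    """Sample variance with an (n - 1) denominator."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(responses):
    k = len(responses[0])  # number of items measuring the construct
    item_vars = [variance([row[i] for row in responses]) for i in range(k)]
    total_var = variance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Five respondents answering three items about, say, information flow.
responses = [
    (6, 7, 6),
    (5, 5, 6),
    (2, 3, 2),
    (7, 6, 7),
    (3, 3, 4),
]
alpha = cronbach_alpha(responses)
```

High alpha (conventionally 0.7 or above) indicates the items move together and plausibly reflect one underlying construct; low alpha suggests the items are measuring different things and should not be combined into a single score.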

15

WHY USE A SURVEY

In the intricate dance of understanding software delivery performance, the authors Nicole Forsgren, Jez Humble, and Gene Kim illuminate a crucial question: why, in an age of ubiquitous system data, should we still turn to surveys? While instrumenting toolchains to gather system data, offering metrics like lead time, is a common starting point, it often paints an incomplete picture. The authors reveal that surveys offer a powerful, often indispensable, complement. One compelling reason is sheer speed and ease; imagine collecting data from thousands of organizations worldwide over a few weeks – a feat nearly impossible with system data alone due to legal and technical hurdles. Even when system data is obtainable, the challenge of cleaning and interpreting it is immense, as terms like 'lead time' and 'cycle time' can be used interchangeably or ambiguously across teams, leading to significant analysis problems. This is where carefully crafted surveys, with their standardized questions and definitions, act as a unifying force, ensuring everyone is working on the same page, much like a precisely tuned orchestra playing from a single score. Furthermore, system data, by its very nature, is confined within system boundaries. It can tell us what's happening inside the box, but not necessarily why or how it interacts with the outside world. Forsgren, Humble, and Kim illustrate this with an anecdote of a performance engineer at IBM whose team, despite having exhaustive system logs, missed critical performance degradations occurring at the customer interface. It was only by listening to customer feedback – a form of survey data, albeit informal – that they uncovered the true bottleneck. Similarly, systems only know what they 'see'; they can track file commits but not the percentage of all files that *should* be in version control. People, however, possess the broader context. 
They can see beyond the immediate system boundaries, offering insights into perceptions, feelings, and opinions that objective data alone cannot capture. The authors address the common skepticism towards survey data, contrasting it with the often-blind trust placed in system data. Yet, they argue, system data is not inherently immune to errors or malicious manipulation; a single line of flawed code or a 'bad actor' can corrupt it, often undetected for years. In contrast, while individual survey responses can be skewed, the sheer scale of well-designed surveys, coupled with anonymity, acts as a robust safeguard against widespread manipulation, requiring concerted, large-scale deception to significantly alter results. Ultimately, surveys are the sole conduit for understanding subjective realities – organizational culture, psychological safety, and how people *feel* about their work. While proxies like employee turnover might seem objective, they often fail to capture the nuances of culture, as illustrated by cases where good culture can *drive* turnover for positive reasons, or where managers game retention metrics. To truly grasp these vital, performance-predictive elements, we must ask. The authors conclude that by employing rigorous psychometric methods, ensuring anonymity, and analyzing large datasets, survey data becomes a reliable, invaluable tool, offering a window into the human element that system data alone cannot provide, resolving the tension between objective measurement and subjective reality.

16

THE DATA FOR THE PROJECT

The authors, Nicole Forsgren, Jez Humble, and Gene Kim, embarked on a significant research journey, driven by a fundamental question: how can technology elevate organizations and how can organizations harness technology for greatness? Their focus zeroed in on the burgeoning methodologies of Agile and Lean, particularly the paradigm then known as DevOps, which emphasized trust, seamless information flow, and small, cross-functional teams. To gather their insights, they designed a research approach that, while necessarily focused, aimed for deep explanatory power. They cast a wide net, targeting professionals familiar with DevOps concepts through emails and social media, understanding that extensive explanations of technical terms like continuous integration would deter participation. This strategic focus, akin to a skilled artisan choosing the right tool for a specific task, allowed them to delve deeply into the practices of those already engaged with modern software development. However, they acknowledge the inherent trade-off: by excluding those unfamiliar with these practices, they might miss the organizations performing at the lowest levels, thus not capturing the full spectrum of potential transformation. Yet, this deliberate narrowing amplified their ability to analyze the behaviors and outcomes within a defined, technologically advanced cohort. To mitigate potential bias from respondents eager to present their teams favorably, Forsgren, Humble, and Kim ingeniously steered clear of direct 'yes/no' questions about practices like continuous integration. Instead, they probed deeper, asking about observable, core components, such as whether automated tests are triggered upon code check-in, a method designed to surface genuine implementation rather than mere aspiration. 
They recognized that while their sample was deliberately chosen, the underlying principles of version control, automated testing, and a culture of transparency, trust, and innovation hold broad applicability, like universal constants in a shifting universe, impacting performance across diverse industries from healthcare to aviation. The authors then navigated the complexities of sampling, opting for non-probability methods, specifically referral or snowball sampling, because compiling an exhaustive list of DevOps professionals is, quite simply, impossible – unlike professions with clear certifications, the field lacks a central registry. This method, where participants invite others, proved essential for reaching a population often wary of being studied, a caution born from historical instances where organizational studies led to workforce reductions. The snowball effect, like a single voice amplified through a crowd, allowed the research to grow organically. To counteract the inherent limitations of snowball sampling – the potential for initial bias and the strong influence of early participants – they cast an exceptionally wide and diverse net. They combined multiple email lists, actively reached out to underrepresented groups, and leveraged social media, ensuring their initial 'informants' were as varied as possible. The authors further refined their understanding by actively engaging with the industry, seeking feedback at conferences, comparing notes with peers, and inviting external experts to review their hypotheses annually, a process of triangulation that weaves together data, experience, and expert opinion to create a robust tapestry of understanding, ensuring their findings remain grounded and relevant in the ever-evolving landscape of technology and organizational development.

17

HIGH-PERFORMANCE LEADERSHIP AND MANAGEMENT

The authors, Nicole Forsgren, Jez Humble, and Gene Kim, illuminate a profound truth: leadership is not merely a function, but a powerful catalyst for organizational success, shaping everything from code delivery to profitability and market share. Yet, they observe, its critical role in technological transformation has been surprisingly overlooked. In an era defined by rapid value creation and consumption through technology, peak technical performance alone is insufficient; we must also master the art of connecting enterprise strategy with tangible action, fostering a lightweight, high-performance management framework that streamlines the flow of ideas to value. Consider the inspiring transformation at ING Netherlands, a global financial institution that, under leaders like Jannes Smit, has moved from being an offsite function to a central engine of digital innovation, with IT teams now situated just below the C-suite. Their journey, marked by the adoption of Lean management practices, showcases a radical reimagining of organizational structure and workflow. We witness the emergence of 'Tribes' and 'Squads,' self-steering, cross-functional teams guided by product owners and empowered by principles like the 'Two Pizza Rule,' a tangible embodiment of efficient collaboration. These teams operate within visually rich 'Obeya' rooms, where strategic objectives, performance metrics, and action items are transparently displayed, fostering a shared understanding and rapid feedback loop. The narrative unfolds with a virtual visit, detailing the vibrant, open workspaces, adjustable-height desks, and the daily stand-up ritual—a concise, fifteen-minute cadence of communication that drastically cuts down on meeting time while accelerating problem-solving through a structured 'catchball' communication system. This system ensures that learning flows vertically and horizontally, connecting frontline teams with strategic priorities and customer insights with leadership. 
This practice, akin to Hoshin Kanri, creates a continuous cycle of learning, testing, and adjustment, fostering a generative culture where improvement is not an afterthought but an integral part of the work. The authors emphasize that this transformation is not about mimicking practices but about cultivating an environment where leaders act as coaches, empowering individuals to not only do the work but to improve it and, crucially, to develop themselves, fostering psychological safety for open discussion and experimentation. Jordi de Vos, a chapter lead, exemplifies this, experimenting with security improvements and fostering team safety. A poignant moment arrives when Jannes reassures his teams, 'If the quality isn't there, don't release. I'll cover your back,' a statement that underscores the courage required from leaders to prioritize quality over mere speed, ultimately leading to greater long-term effectiveness and customer trust. The journey is iterative, a continuous stretching and learning, where leaders like Jannes must first learn to learn themselves before they can guide their teams. The narrative concludes by reinforcing that true transformation is not about implementing a checklist or outsourcing change, but about nurturing an organization's unique capacity for learning and adaptation, making it your own, driven by discipline, patience, and relentless practice, ultimately leading to improved quality, speed, innovation, and sustained competitive advantage. The research consistently shows that these practices have a measurable impact on profitability, productivity, market share, and customer satisfaction, achieving broader organizational goals.

18

Conclusion

“Accelerate” fundamentally reshapes our understanding of high-performing technology organizations by establishing a data-driven, scientific foundation for success. The core takeaway is that achieving both speed and stability in software delivery isn't a trade-off, but a mutually reinforcing outcome of adopting specific, evidence-based technical and management practices. The book dismantles outdated notions that equate output with outcome, highlighting that true progress lies in focusing on capabilities that drive key results, such as delivery lead time, deployment frequency, time to restore service, and change fail rate. This shift necessitates moving away from large, monolithic projects towards small, rapid-cycle teams that prioritize user feedback. Crucially, the authors reveal that organizational culture is not an intangible obstacle but a measurable, scientific construct that can be cultivated. A generative culture, characterized by trust, cooperation, and psychological safety, is paramount, and it's not achieved by changing minds but by changing behaviors and implementing practices that foster such an environment. Technical excellence, from continuous delivery and automated testing to loosely coupled architectures and proactive security integration ('shifting left'), forms the bedrock upon which high performance is built. These practices, far from being solely about efficiency, are profound investments in people, directly combating burnout and improving the quality of work life. The emotional lessons are clear: fear and anxiety around deployments ('deployment pain') are indicators of deeper systemic issues, and addressing them through robust practices leads to greater well-being and job satisfaction. The book underscores that empowered teams, armed with the right tools and architectural freedom, are the engines of innovation. 
Leadership, particularly transformational leadership, acts as a critical multiplier, enabling these practices and fostering a culture of continuous learning. Ultimately, “Accelerate” offers practical wisdom by providing a clear, scientific roadmap: measure what matters, invest in technical and cultural capabilities, foster generative leadership, and embrace continuous improvement. By doing so, organizations can unlock significant competitive advantage, achieve superior business outcomes, and create more sustainable, satisfying work environments.

Key Takeaways

1

Organizations must shift from large, long-lead-time projects to small, rapid-cycle teams that prioritize user feedback to remain competitive.

2

Software and technology are the primary drivers of value and differentiation in modern business, transcending industry boundaries.

3

Continuous improvement, measured by capabilities rather than static maturity, is essential for navigating the dynamic technological landscape.

4

A capabilities-based measurement model, focused on driving key outcomes, provides clearer strategic direction than traditional maturity models.

5

High-performing organizations achieve both speed and stability by building quality into their processes and focusing on specific, evidence-based capabilities.

6

The gap between high and low performers in software delivery is widening, highlighting the critical need for organizations to adopt proven, evidence-based practices.

7

Traditional software performance metrics like lines of code, velocity, and utilization are flawed because they focus on output rather than outcome, and individual or local measures rather than global team performance, leading to counterproductive behaviors.

8

Effective software delivery performance measurement requires focusing on global outcomes and team-level metrics, specifically tracking delivery lead time, deployment frequency, time to restore service, and change fail rate.

9

High software delivery performance is not a trade-off with stability or quality; rather, high-performing organizations excel across all key metrics, demonstrating that speed and quality are mutually reinforcing.

10

Superior software delivery capability directly translates into significant competitive advantage, enabling organizations to consistently exceed both commercial and non-commercial goals.

11

The adoption of scientific measurement and data-driven decision-making in software delivery allows for evidence-based improvements, moving beyond correlation to test predictive relationships between practices, culture, and outcomes.

12

A learning culture is essential for effective performance measurement; in pathological or bureaucratic cultures, measurement can become a tool for control, leading to inaccurate data and hindering genuine improvement.

13

Organizational culture, often considered intangible, can be scientifically modeled and measured using frameworks like Ron Westrum's typology (pathological, bureaucratic, generative).

14

A generative organizational culture, characterized by high cooperation, trust, and mission focus, facilitates superior information flow, which is critical for high-tempo, high-consequence environments.

15

Westrum's cultural typology has predictive power, correlating generative cultures with improved software delivery performance, organizational performance, and higher job satisfaction.

16

Effective organizational change, particularly in culture, begins not with changing how people think, but by changing what they do – their behaviors and practices.

17

Implementing specific technical and management practices, such as those found in Lean and Continuous Delivery, can actively influence and improve an organization's culture towards a more generative state.

18

Failure investigations in complex systems should start with human error as a point of inquiry into systemic improvements, rather than ending with blame, aligning with generative cultural principles.

19

Technical practices, often overlooked, are fundamental enablers of continuous delivery and high software delivery performance, directly impacting organizational culture and team well-being.

20

Building quality into the product from the outset, rather than relying on post-development inspection, drastically reduces costs and increases speed and reliability.

21

Working in small batches and automating repetitive tasks frees up human capacity for higher-value problem-solving and innovation.

22

Shared responsibility and collaboration across all roles in the software delivery lifecycle are essential for achieving system-level outcomes like quality and stability.

23

Continuous delivery practices significantly reduce deployment pain and team burnout by making releases routine, on-demand activities rather than high-stress events.

24

Investments in technical practices like continuous delivery are direct investments in people, fostering a sustainable pace and a more positive work experience.

25

High delivery performance is achievable across diverse system types, including legacy and complex enterprise environments, provided systems and teams are loosely coupled.

26

The core architectural characteristics of deployability and testability, allowing independent changes and validation, are more critical for high performance than the system type itself.

27

Loose coupling enables organizations to scale their engineering efforts and increase developer productivity linearly or better, counteracting the typical overhead of growth.

28

Empowering teams to choose their own tools, based on their specific needs and expertise, significantly contributes to software delivery and organizational performance.

29

Architects should prioritize enabling teams to achieve desired outcomes and facilitate independent work, rather than focusing on specific tools or technologies, by fostering autonomous team structures and supporting independent system evolution.

30

The traditional separation of information security from the core software development lifecycle creates bottlenecks and increases costs, hindering both delivery speed and security quality.

31

Proactively integrating security into every phase of the software delivery process, known as 'shifting left,' leads to improved delivery performance and significantly reduces time spent on security remediation.

32

Empowering developers with easy-to-use, pre-approved security tools and resources is more effective than relying solely on late-stage security reviews by often overstretched infosec teams.

33

The 'Rugged DevOps' movement emphasizes building resilience and security into software from the outset, acknowledging the inherent adversarial nature of the modern digital landscape.

34

True DevOps principles extend beyond development and operations to encompass all functions within the software delivery value stream, fostering system-level thinking and shared responsibility.

35

The synergistic combination of limiting Work in Progress (WIP), using visual displays for transparency, and integrating production monitoring feedback is essential for driving significant software delivery performance improvements, rather than relying on individual practices.

36

Effective Lean management in software necessitates that WIP limits actively expose obstacles to flow, prompting teams to engage in continuous process improvement that demonstrably increases throughput.

37

Lightweight change management processes, centered on peer review and automated deployment pipelines, dramatically outperform rigid, external approval processes in enhancing both speed and stability, exposing 'risk management theater' as ineffective.

38

Visibility and high-quality communication, enabled by readily accessible and broadly shared metrics on quality and productivity, are critical drivers of improved software delivery performance and healthier team cultures.

39

External approval processes for changes, often perceived as risk mitigation, are negatively correlated with key performance indicators like lead time and restore time, and do not improve change fail rates, highlighting their ineffectiveness.

40

The widespread adoption of 'faux Agile' practices, characterized by traditional large-batch processes and delayed customer feedback, hinders true organizational performance, contrasting with the validated learning approach of Lean product development.

41

Four core Lean product development capabilities—small batch work (including MVPs), visible workflow, active customer feedback incorporation, and empowered team decision-making on specifications—are critical drivers of high software delivery and overall organizational performance.

42

Empowering development teams with the authority to iterate on specifications based on real-time feedback, without external approval delays, is essential for innovation and creating products that delight customers and achieve business results.

43

A virtuous cycle exists between software delivery performance and Lean product management practices, where improvements in one mutually reinforce and enhance the other, leading to escalating organizational success.

44

Lean product development practices directly contribute to a healthier organizational culture by reducing burnout and fostering environments where informed experimentation and customer-centricity are prioritized.

45

Deployment pain, characterized by fear and anxiety during code releases, is a direct and measurable indicator of poor software delivery performance, organizational health, and cultural issues, highlighting the critical need to address the friction between development and operations.

46

Implementing robust technical practices such as comprehensive test and deployment automation, continuous integration, and loosely coupled architectures not only enhances software delivery speed and stability but also significantly reduces the stress and anxiety associated with deployments, thereby improving employee well-being.

47

Burnout is a critical sustainability issue, stemming from organizational risk factors like work overload, lack of control, and unfairness, and can be effectively prevented or reversed by focusing on improving the work environment rather than solely on individual coping mechanisms.

48

Organizations can significantly reduce burnout by fostering supportive, blame-free cultures, ensuring work is meaningful and aligned with strategic objectives, investing in employee development, and empowering teams with the authority to make decisions affecting their work.

49

Investments in DevOps, Lean management, and continuous delivery practices yield dual benefits: improved software delivery performance and enhanced quality of work life for professionals, demonstrating that technological advancement and human sustainability are intrinsically linked.

50

A misalignment between an individual's core values and the organization's lived values creates a fertile ground for burnout, underscoring the importance of cultivating environments where stated and actual organizational values are in harmony to foster employee satisfaction and retention.

51

Employee loyalty, measured by eNPS, is a critical predictor of organizational performance, directly impacting profitability, productivity, and market share.

52

A strong sense of organizational identity, rooted in aligned values and perceived impact, is crucial for reducing employee burnout and fostering greater commitment.

53

Job satisfaction is significantly enhanced when employees are provided with the right tools and resources, allowing them to utilize their skills and judgment on challenging problems rather than rote tasks.

54

DevOps practices, particularly those involving automation and proactive monitoring, contribute to job satisfaction by freeing up employees to focus on higher-level decision-making and problem-solving.

55

Diversity in teams, especially regarding gender and underrepresented minorities, is linked to improved cognitive abilities and team performance, but requires an inclusive environment to be effective.

56

Organizations that foster a culture of experimentation and learning, coupled with investments in technical and management capabilities, create a virtuous cycle of employee engagement and superior software delivery performance.

57

Transformational leadership, characterized by vision, inspirational communication, intellectual stimulation, supportive leadership, and personal recognition, is a statistically significant predictor of high software delivery performance.

58

Engaged leaders are essential for large-scale technology transformations, providing the authority, budget, and 'air cover' needed to implement necessary changes and shift organizational incentives.

59

While leaders amplify team capabilities, they cannot achieve high outcomes alone; success requires leaders to enable teams to execute work with sound architecture, good technical practices, and Lean principles.

60

Managers, as leaders responsible for people and resources, significantly impact team performance by creating safe work environments, investing in employee development, and removing obstacles.

61

Fostering cross-functional collaboration, a climate of learning, and providing effective tools are critical, research-backed strategies for improving team culture and supporting high performance.

62

Investing in leadership development is a strategic investment in a team's technology and products, directly impacting organizational value and success.

63

The effectiveness of technical and organizational practices is amplified by transformational leadership, not replaced by it, making leadership a multiplier of existing efforts.

64

Distinguish between primary research, which collects new data for specific questions, and secondary research, which uses existing data, to ensure research is directly relevant to the inquiry.

65

Understand that quantitative research, using numerical data and scales like the Likert scale, provides a measurable basis for statistical analysis, crucial for testing hypotheses rigorously.

66

Recognize the limitations of correlation found in exploratory analysis, as it indicates association but not causation, preventing misleading conclusions from spurious relationships.
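
To make the distinction concrete, here is a small sketch with invented data: two unrelated quantities correlate strongly only because both trend upward over time, and the association largely disappears once the shared trend is removed.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two unrelated quantities that both happen to trend upward over time,
# e.g. a team's deploy count and its coffee budget (hypothetical data).
months = np.arange(36)
deploys = 10 + 2.0 * months + rng.normal(0, 5, 36)
coffee_budget = 500 + 40.0 * months + rng.normal(0, 100, 36)

# Pearson correlation is high purely because both share a time trend...
r = np.corrcoef(deploys, coffee_budget)[0, 1]
print(f"raw correlation: {r:.2f}")

# ...and largely vanishes after detrending (taking month-to-month
# differences), illustrating association without causation.
r_detrended = np.corrcoef(np.diff(deploys), np.diff(coffee_budget))[0, 1]
print(f"detrended correlation: {r_detrended:.2f}")
```

The spurious correlation here comes from a shared confounder (time), which is exactly the kind of relationship exploratory analysis can surface but not explain.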

67

Appreciate that inferential predictive analysis, grounded in theory-driven hypotheses, is essential for understanding the impact of practices and driving desired organizational outcomes.

68

Grasp the different levels of data analysis—descriptive, exploratory, inferential, predictive, causal, mechanistic—to critically assess the depth and validity of research claims.

69

Acknowledge that classification analysis can reveal distinct groups within data, such as performance tiers in software delivery, offering insights into nuanced differences.
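
As an illustration of classification analysis, the toy sketch below splits teams into tiers with a tiny hand-rolled k-means on a single metric. The data and the choice of k = 2 are invented, and the book's analysis used more variables and a different clustering procedure; this only shows the idea of letting groups emerge from the data.

```python
import numpy as np

# Hypothetical delivery lead times in hours for 12 teams: some ship in
# hours, some in weeks -- cluster analysis can surface such tiers.
lead_times = np.array([2, 3, 4, 5, 6, 8, 150, 160, 170, 180, 200, 220], dtype=float)

# Minimal k-means (k=2): alternate between assigning each point to its
# nearest centroid and recomputing centroids, until assignments settle.
centroids = np.array([lead_times.min(), lead_times.max()])
for _ in range(20):
    labels = np.abs(lead_times[:, None] - centroids[None, :]).argmin(axis=1)
    new_centroids = np.array([lead_times[labels == j].mean() for j in range(2)])
    if np.allclose(new_centroids, centroids):
        break
    centroids = new_centroids

print("cluster centers (hours):", centroids)
print("tier labels:", labels)
```

With this data the algorithm recovers a fast tier and a slow tier without anyone defining the boundary in advance, which is the sense in which classification reveals "distinct groups within data".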

70

Skepticism toward survey data is understandable, particularly among respondents who have encountered biased 'push polls' and poorly designed questionnaires that fail to capture genuine perceptions.

71

Latent constructs provide a framework for measuring abstract concepts (e.g., organizational culture) by identifying and measuring their observable component parts (manifest variables).

72

Rigorous definition of constructs and the use of multiple, validated survey items (manifest variables) create a robust measurement system that safeguards against individual data errors or manipulation.

73

Statistical tests for validity (measuring what's intended) and reliability (consistent interpretation) are essential for confirming that survey measures accurately reflect the underlying latent construct.
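
One standard reliability statistic is Cronbach's alpha, sketched below on invented Likert responses. It rises toward 1 when the items move together across respondents, as they should if they all tap the same latent construct.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency estimate for a set of survey items.

    items: respondents x items matrix of Likert scores.
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 1-5 Likert responses to four items intended to measure one
# construct (say, "climate for learning"). Rows are respondents.
responses = np.array([
    [5, 4, 5, 4],
    [4, 4, 4, 5],
    [2, 1, 2, 2],
    [3, 3, 2, 3],
    [5, 5, 4, 5],
    [1, 2, 1, 1],
])

alpha = cronbach_alpha(responses)
print(f"Cronbach's alpha: {alpha:.2f}")  # values above ~0.7 are conventionally acceptable
```

Alpha addresses reliability only; validity (whether the items measure the intended construct at all) needs separate checks such as factor analysis.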

74

The principles of using latent constructs and multiple measures to ensure data integrity apply equally to system-generated data, as all metrics serve as proxies for underlying phenomena.

75

Careful deliberation on what is being measured and how it is defined is a critical first step in obtaining meaningful data, whether through surveys or system metrics.

76

Surveys provide rapid data collection across organizational boundaries, overcoming the logistical and legal barriers often faced by system data instrumentation.

77

Standardized survey questions, when rigorously designed, resolve ambiguity in terminology (e.g., lead time vs. cycle time) that plagues system data analysis.

78

System data is limited to internal operations, while surveys capture external perceptions and interactions crucial for understanding the full system context.

79

Despite skepticism, well-designed, anonymized surveys, especially at scale, are more resilient to manipulation than system data, which can be subtly corrupted.

80

Certain critical, performance-predictive aspects of work, such as organizational culture and psychological safety, can only be reliably measured through direct inquiry via surveys.

81

The research design intentionally targets professionals familiar with modern software development practices (like DevOps) to maximize explanatory power, acknowledging that this focus may exclude the lowest performers, thus limiting the scope of observed transformation.

82

To mitigate survey bias and capture genuine practice, Forsgren, Humble, and Kim focus questions on observable, constituent behaviors (e.g., automated tests on check-in) rather than self-reported adoption of broad concepts (e.g., continuous integration).

83

Fundamental practices like version control and automated testing, alongside a culture valuing transparency and trust, are believed to yield positive results across various software development methodologies and industries, acting as foundational principles for performance.

84

Probability sampling is often infeasible for niche professional groups like DevOps practitioners due to the lack of a comprehensive, identifiable population registry, necessitating alternative methods.

85

Snowball sampling is a critical tool for studying populations that may be difficult to identify or are historically averse to external study, allowing research to grow through participant referrals and fostering trust.

86

To counteract the inherent limitations of snowball sampling (potential bias and influence of initial participants), researchers must cast an exceptionally wide and diverse net for initial recruitment and continuously engage with the broader industry for validation.

87

High-performance leadership is foundational to technological success and business outcomes, influencing productivity, profitability, and market share.

88

Sustainable competitive advantage requires a lightweight, high-performance management framework that connects strategy to action and facilitates continuous learning.

89

Organizational transformation is driven by cultivating a learning culture where leaders act as coaches, prioritizing psychological safety and empowering teams to improve work and develop people.

90

Effective problem-solving and improvement arise from rigorous analysis, hypothesis testing, and the integration of learning into standard work, applied at all organizational levels.

91

True organizational change is an emergent process of experimentation and adaptation, emphasizing 'making it your own' rather than copying external models or outsourcing transformation.

92

Developing leaders as learners themselves, who then foster learning within their teams through disciplined practice and patience, is the key to building a continuously improving organization.

Action Plan

  • Replace large, monolithic projects with small, cross-functional teams working in short, iterative cycles.

  • Implement mechanisms to continuously gather and act upon user feedback to guide product development.

  • Shift measurement focus from static maturity levels to dynamic, context-specific capabilities that drive desired outcomes.

  • From the 24 key capabilities Forsgren, Humble, and Kim identify, prioritize those most relevant to your organization's goals.

  • Invest in building quality into the development and deployment process to achieve both speed and stability.

  • Foster a culture of continuous learning and improvement, recognizing that technology transformations are ongoing journeys, not endpoints.

  • Identify and discard flawed performance metrics like lines of code or raw velocity that focus on output rather than outcome.

  • Implement the four key software delivery performance measures: delivery lead time, deployment frequency, time to restore service, and change fail rate.
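
A minimal sketch of deriving the four measures from deployment records; the field names and sample data below are invented, and a real implementation would pull from version control and incident tooling.

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical deployment log: commit time, deploy time, whether the
# change failed in production, and how long restoration took if it did.
deploys = [
    {"committed": datetime(2024, 3, 1, 9),  "deployed": datetime(2024, 3, 1, 15), "failed": False, "restore": None},
    {"committed": datetime(2024, 3, 2, 10), "deployed": datetime(2024, 3, 2, 14), "failed": True,  "restore": timedelta(minutes=45)},
    {"committed": datetime(2024, 3, 3, 11), "deployed": datetime(2024, 3, 3, 13), "failed": False, "restore": None},
    {"committed": datetime(2024, 3, 4, 8),  "deployed": datetime(2024, 3, 4, 16), "failed": False, "restore": None},
]

period_days = 30  # length of the observation window

# 1. Delivery lead time: median time from commit to running in production.
lead_time = median(d["deployed"] - d["committed"] for d in deploys)

# 2. Deployment frequency: deploys per day over the observed period.
frequency = len(deploys) / period_days

# 3. Time to restore service: median restoration time for failed changes.
restore_times = [d["restore"] for d in deploys if d["failed"]]
time_to_restore = median(restore_times) if restore_times else None

# 4. Change fail rate: share of deploys that caused a failure.
change_fail_rate = sum(d["failed"] for d in deploys) / len(deploys)

print(lead_time, frequency, time_to_restore, change_fail_rate)
```

Note that medians are used for the time-based measures because deployment data is typically skewed by a few outliers.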

  • Analyze your organization's performance against industry benchmarks using these four key metrics to identify areas for improvement.

  • Foster a culture where collaboration between development and operations is encouraged and measured globally, not in silos.

  • Ensure that efforts to increase speed (tempo) do not come at the expense of system stability and quality (change fail rate).

  • Evaluate whether your organization's measurement practices are fostering a learning culture or enabling control and fear.

  • Champion the idea that improving software delivery performance is a strategic driver of overall organizational success, not just an IT initiative.

  • Assess your organization's current culture using survey questions based on Westrum's typology to establish a baseline.

  • Identify specific practices from Lean management or Continuous Delivery that can be implemented to foster collaboration and trust.

  • Shift the focus of failure investigations from assigning blame to understanding systemic causes and improving information flow.

  • Encourage open communication and the sharing of information across teams and hierarchies, actively training 'messengers' rather than neglecting or shooting them.

  • Prioritize organizational mission and performance over departmental turf wars and adherence to rigid rules when making decisions.

  • Begin implementing small, observable behavioral changes within teams and observe their impact on team dynamics and outcomes.

  • Implement comprehensive configuration management by ensuring all environments, builds, and deployments are automated and controlled via version control.

  • Adopt continuous integration by keeping feature branches short-lived (less than a day) and merging into the main trunk frequently, addressing any build or test failures immediately.

  • Integrate continuous testing by ensuring automated unit and acceptance tests run on every commit and that developers have fast feedback loops on their workstations.

  • Prioritize building quality in by investing in test automation that is reliable and maintained primarily by developers, ensuring code is testable from the start.

  • Break down work into smaller batches to enable quicker feedback loops and reduce the overhead and risk associated with large releases.

  • Foster collaboration across development, testing, and operations teams to create transparency around system-level outcomes and shared goals.

  • Invest in managing test data effectively to ensure automated test suites can run reliably and on demand.

  • Integrate information security personnel into the software delivery process from the design phase onwards, ensuring security is part of daily work rather than a bottleneck.

  • Analyze your current system architecture to identify dependencies and opportunities for decoupling services or components.

  • Prioritize investments in practices that enhance testability, such as enabling testing without a fully integrated environment.

  • Actively work to enable teams to deploy their services independently, reducing reliance on coordinated releases with other teams.

  • Encourage cross-functional teams and explore the 'inverse Conway maneuver' to align organizational structure with desired loose coupling.

  • Delegate tool selection to development teams, empowering them to choose technologies best suited for their specific tasks and challenges.

  • Architects should engage directly with engineers to understand their workflow and challenges, focusing on enabling outcomes rather than dictating specific technologies.

  • Evaluate the impact of third-party or outsourced custom software on delivery performance and consider bringing critical capabilities in-house.

  • Integrate security professionals into the early stages of application design and development.

  • Develop and provide developers with readily accessible, pre-approved security libraries, toolchains, and processes.

  • Incorporate security testing into the automated test suites for all major features.

  • Train developers on common security risks, such as the OWASP Top 10, and how to prevent them.

  • Conduct security reviews for all major features in a way that does not impede the development process.

  • Adopt principles of 'Rugged DevOps' to foster a mindset of building resilient software from the ground up.

  • Implement WIP limits on your team's workflow and actively identify and remove the obstacles that these limits expose.
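
A toy sketch of the mechanics: a workflow column that refuses to start new work past its WIP limit, so the limit itself becomes the visible signal to fix whatever is stuck. The class and item names are hypothetical.

```python
class KanbanColumn:
    """Minimal sketch of a workflow column with a WIP limit."""

    def __init__(self, name: str, wip_limit: int):
        self.name = name
        self.wip_limit = wip_limit
        self.items: list[str] = []

    def pull(self, item: str) -> bool:
        # Refuse new work once the limit is hit: the blocked pull is the
        # cue to swarm on existing work and remove the obstacle, rather
        # than starting something else.
        if len(self.items) >= self.wip_limit:
            return False
        self.items.append(item)
        return True

    def finish(self, item: str) -> None:
        self.items.remove(item)

in_progress = KanbanColumn("In Progress", wip_limit=2)
assert in_progress.pull("story-101")
assert in_progress.pull("story-102")
assert not in_progress.pull("story-103")  # limit exposed: ask why work is stuck
in_progress.finish("story-101")
assert in_progress.pull("story-103")      # capacity freed by finishing, not by starting more
```

The point is not the data structure but the policy: capacity is created only by completing work, never by starting more.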

  • Create and maintain visual displays (e.g., Kanban boards, dashboards) that show key quality and productivity metrics and the current status of work, making them accessible to all team members.

  • Establish a feedback loop from production monitoring tools to your delivery team and business stakeholders to inform daily decision-making.

  • Transition from external change approval processes to a lightweight, peer-review-based system (e.g., pull requests, intra-team code reviews) for all types of changes.

  • Develop or enhance a deployment pipeline to automate the testing and deployment of changes, ensuring a verifiable audit trail and rejecting bad changes.
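
A minimal sketch of that gating behavior, with hypothetical stage names: each validation stage runs in order, every outcome is logged to an audit trail, and the first failure rejects the change.

```python
from datetime import datetime, timezone

audit_trail = []  # verifiable record of every validation outcome

def run_pipeline(change_id, stages):
    """Run each validation stage in order; reject on the first failure.

    stages: mapping of stage name -> callable returning True/False.
    """
    for name, check in stages.items():
        passed = check()
        audit_trail.append((datetime.now(timezone.utc).isoformat(), change_id, name, passed))
        if not passed:
            return False  # bad change rejected; later stages never run
    return True

# Hypothetical stages; real ones would invoke the build, test suites, etc.
ok = run_pipeline("change-42", {
    "build": lambda: True,
    "unit_tests": lambda: True,
    "acceptance_tests": lambda: False,  # simulate a failing acceptance run
})
print("deployed" if ok else "rejected", "-", len(audit_trail), "audited steps")
```

Because every stage result is timestamped and tied to a change, the trail doubles as the audit evidence that often motivates external approval boards in the first place.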

  • Empower teams to manage their own change processes through peer review and automated validation, rather than relying on external bodies for approval.

  • Evaluate current product development processes to identify where work is batched into large, infrequent releases and seek opportunities to break tasks into smaller, weekly deliverable units.

  • Implement mechanisms for regularly collecting and actively incorporating customer feedback at multiple stages of the product development lifecycle, not just at the end.

  • Assess the level of autonomy development teams have in creating and modifying specifications, and advocate for granting them more authority to respond to learnings.

  • Establish clear visibility into the flow of work, from initial concept to customer delivery, to identify bottlenecks and areas for improvement.

  • Begin experimenting with Minimum Viable Products (MVPs) to validate core assumptions and gather early, actionable user insights with minimal investment.

  • Foster open communication channels between development teams and other stakeholders to ensure that informed decisions made through experimentation are shared throughout the organization.

  • Actively measure and discuss 'deployment pain' within your team, identifying specific sources of anxiety or friction during code releases.

  • Investigate and implement technical practices that automate testing and deployment processes, such as continuous integration and trunk-based development.

  • Assess your work environment for the six organizational risk factors for burnout (work overload, lack of control, insufficient rewards, breakdown of community, absence of fairness, value conflicts) and address them.

  • Foster a blame-free culture that emphasizes learning from failures rather than assigning fault, and clearly communicate the purpose and strategic objectives of the team's work.

  • Seek to align organizational values with individual employee values by ensuring the company's lived practices reflect its stated principles, particularly in areas of social responsibility or ethical conduct.

  • Empower team members by giving them the authority to make decisions that directly affect their work and outcomes, especially in areas where they are responsible for results.

  • Encourage experimentation and learning by providing employees with dedicated time, space, and resources to explore new ideas and develop new skills, such as through '20% time' initiatives.

  • Implement employee Net Promoter Score (eNPS) surveys to gauge employee loyalty and identify areas for improvement.
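
The eNPS arithmetic follows the standard NPS convention, sketched below on invented scores: promoters (9-10) minus detractors (0-6) as a percentage of all respondents.

```python
def enps(scores):
    """Employee Net Promoter Score from 0-10 survey responses.

    Promoters score 9-10, detractors 0-6; passives (7-8) count only in
    the denominator. Returns a value between -100 and 100.
    """
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical responses to "How likely are you to recommend this
# company as a place to work?"
print(enps([10, 9, 9, 8, 7, 6, 5, 9, 10, 4]))  # → 20
```

A positive score means promoters outnumber detractors; tracking the trend over time matters more than any single reading.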

  • Actively seek to align organizational values with individual employee values through clear communication and demonstrated actions.

  • Invest in providing employees with the necessary tools, resources, and autonomy to perform their jobs effectively and engage their skills.

  • Leverage automation for rote tasks to free up employee time and cognitive resources for more challenging and fulfilling problem-solving.

  • Prioritize the recruitment and retention of diverse talent, focusing on gender and underrepresented minorities.

  • Cultivate an inclusive environment where all employees feel welcomed, valued, and empowered to contribute their unique perspectives.

  • Encourage a culture of experimentation and continuous learning, providing feedback loops that enable employees to see the impact of their work.

  • Cultivate a clear vision for your team or organization and communicate it inspirationally, even amidst uncertainty.

  • Actively challenge your team to think about problems in new ways, fostering intellectual stimulation.

  • Demonstrate care and consideration for your team members' personal needs and feelings, embodying supportive leadership.

  • Personally recognize and praise achievements and improvements in work quality, reinforcing positive behaviors.

  • Invest in your team's professional development by creating training budgets, offering dedicated learning time, and encouraging conference attendance.

  • Enable cross-functional collaboration by building trust with other teams and actively rewarding collaborative work.

  • Foster a climate of learning by creating safe spaces for experimentation, treating failures as learning opportunities, and holding blameless postmortems.

  • Ensure your team has access to and can choose the tools that best enable them to perform their work effectively.

  • When evaluating claims about new strategies or tools, ask whether the evidence is based on primary research.

  • Familiarize yourself with the differences between correlation and causation to avoid misinterpreting data relationships.

  • Seek out research that employs quantitative methods and clearly defines its analytical approach.

  • Consider how Likert-type scales or similar numerical measurements could be used to gather data in your own context.

  • When reading reports or studies, identify which level of analysis (descriptive, exploratory, inferential) is being presented.

  • Be mindful of the potential for spurious correlations and look for underlying theoretical explanations for observed patterns.

  • If discussing organizational performance, aim to base hypotheses on established theories rather than just observed trends.

  • When designing surveys, clearly define the latent construct you intend to measure before writing any questions.

  • Break down abstract concepts into smaller, measurable components (manifest variables) to form the basis of your survey items.

  • Employ statistical validation and reliability tests on your survey measures to ensure they accurately and consistently capture the intended construct.

  • When collecting system data, use multiple related metrics as proxies for a single underlying phenomenon to increase detection of anomalies.

  • Periodically re-evaluate the psychometric properties of both survey and system data measures, especially if the environment or system changes.

  • Recognize that all data points, whether from surveys or systems, are proxies and critically assess their underlying meaning.

  • When initiating a data collection effort, consider surveys for their speed and broad reach, especially across organizational lines.

  • Design survey questions with precise definitions and clear language to avoid the ambiguity that plagues system data terminology.

  • Actively seek out feedback from individuals to uncover system performance bottlenecks that lie outside purely technical instrumentation.

  • Challenge assumptions about the inherent trustworthiness of system data by looking for potential sources of error or manipulation.

  • Prioritize surveying subjective elements like team morale, perceived safety, and cultural norms, as these are often strong predictors of performance.

  • When designing surveys for specialized groups, focus questions on observable behaviors rather than self-reported adoption of broad concepts to ensure data accuracy.

  • Recognize that even with a focused research design, core principles like version control, automated testing, and a culture of trust have broad applicability across different methodologies.

  • If your target population is difficult to identify or list exhaustively, consider using snowball or referral sampling methods, while taking steps to diversify the initial recruitment pool.

  • Actively engage with your community and industry peers to validate research findings and stay attuned to emerging trends, triangulating data from multiple sources.

  • When probing sensitive topics or practices, ask about underlying components rather than the overarching label to elicit more genuine responses.

  • Be transparent about the limitations of your research design, particularly regarding the specific population studied and any potential blind spots.

  • Continuously seek feedback on research hypotheses and methodologies from external subject matter experts to ensure relevance and accuracy.

  • Begin by questioning your own assumptions and experimenting with new behaviors as a leader.

  • Implement visual management systems (like Obeya or team visual boards) to increase transparency and facilitate rapid feedback.

  • Establish regular, concise team stand-ups to improve communication and accelerate problem identification and resolution.

  • Actively practice 'catchball' to ensure learning flows effectively between different levels and functions of the organization.

  • Dedicate a fixed percentage of time for improvement activities, treating them as integral parts of the work, not separate tasks.

  • Shift from a command-and-control leadership style to a coaching approach, focusing on helping teams improve their work and develop their capabilities.

  • Encourage psychological safety by creating an environment where team members can openly discuss problems and obstacles without fear of reprisal.

  • When implementing new practices, focus on understanding the underlying principles and adapting them to your specific organizational context, rather than blindly copying.
