Category: ITSM

  • Big Ideas That ITIL® Product Brings to the Table

    With the imminent release of ITIL® Product (Gray et al., 2026), PeopleCert has expanded the scope of traditional Service Management by combining Digital Product and Service Management. While product concepts are present throughout all of the new books, ITIL® Product reframes product development as an end‑to‑end lifecycle that runs from discovering opportunities to operating and continually improving digital products and services, rather than as a narrow “build” phase inside projects. This mirrors the historical shift from early brand management, where a single manager was accountable for a product’s commercial performance over time, to modern product management, which owns outcomes across discovery, design, delivery, and operations. By explicitly structuring the lifecycle into activities such as Discover, Design, Acquire, Build, Transition, and Operate, ITIL® Product codifies the same holistic view that has gradually emerged in the history of product development as organizations moved away from one‑off project thinking toward continuous product stewardship.

    A second big idea is the strong emphasis on aligning product work with vision, strategy, and portfolio, which reflects the evolution from “feature factories” to outcome‑driven product organizations. In the Discover and Design chapters, ITIL® Product repeatedly stresses the need to understand context and inputs, agree direction and objectives, and ensure that product roadmaps stay anchored in organizational strategy and value creation. Historically, this echoes the move from reactive, sales‑driven roadmaps toward strategic product management, where decisions are guided by portfolio trade‑offs, positioning, and long‑term customer value rather than just near‑term delivery capacity.

    ITIL® Product also embeds cross‑functional collaboration and agile ways of working as core success factors, which parallels the industry’s shift from siloed development and operations to integrated product teams. The Build and Operate chapters highlight critical success factors such as strong collaboration between engineering, product management, design, and quality assurance; agile cadences; automated CI/CD pipelines; and incremental delivery with feature toggles and safe rollouts. These practices track closely with the historical rise of Agile, DevOps, and empowered product teams that own the full lifecycle, breaking down the old handoffs between development, operations, and service management.
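
    The feature toggles mentioned above are worth a concrete illustration. The sketch below is my own minimal example of the pattern (the class and flag names are invented, not taken from ITIL® Product): a flag gates the new code path, and each user is hashed into a stable bucket so a rollout can move from a small canary percentage toward 100% – or back to 0% – without a redeploy.

```python
import hashlib

class FeatureToggles:
    """Minimal feature-toggle sketch; names are illustrative only."""

    def __init__(self, flags=None):
        # flags maps a feature name -> rollout percentage (0-100)
        self.flags = flags or {}

    def is_enabled(self, feature, user_id):
        # Hash the user into a stable 0-99 bucket so each user keeps the
        # same experience across sessions during a gradual rollout.
        # (A real library would also support kill switches and segments.)
        pct = self.flags.get(feature, 0)
        digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
        bucket = int(digest, 16) % 100
        return bucket < pct

toggles = FeatureToggles({"new_checkout": 25})  # 25% canary rollout
if toggles.is_enabled("new_checkout", "user-1234"):
    pass  # new code path, shipped dark until the flag opens
else:
    pass  # stable fallback path
```

    The point for safe rollouts is that the build is always deployed; exposure is a runtime decision that can be reversed instantly, which separates “release” from “deploy.”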

    Another major theme is the product‑and‑service value chain: ITIL® Product systematically ties product decisions to the realities of cloud sourcing, service providers, and operational environments, including activities like acquiring cloud services, planning transitions, and coordinating responsibilities between vendors and service providers. This reflects how modern product development history has moved from shipping boxed software or discrete projects to operating live digital services in complex ecosystems. As organizations adopted SaaS, cloud platforms, and managed services, product management had to expand its remit to include sourcing, deployment, and long‑term operability—precisely the terrain ITIL® Product formalizes.

    Finally, the book’s focus on metrics and continuous improvement—such as delivery velocity, cycle time, defect leakage, and team health—as part of each lifecycle activity reflects the historical maturation of product development into a data‑driven discipline. ITIL® Product treats these measures and their associated “critical success factors” as integral to managing products effectively, not as optional reporting after the fact. This aligns with the broader trajectory from intuitive, craft‑driven product development to modern product operations and analytics practices, where teams continuously learn from production feedback and telemetry to refine products and processes over time. ITIL® Product is definitely a contribution to the field and connects to the broader concepts of ITIL® nicely.
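
    Two of the measures named above have simple, widely used definitions that can be sketched in a few lines (these are the conventional formulas, not quotations from the book):

```python
from datetime import date

def cycle_time_days(started: date, delivered: date) -> int:
    """Elapsed days from when work on an item starts to when it is delivered."""
    return (delivered - started).days

def defect_leakage(found_before_release: int, found_after_release: int) -> float:
    """Share of all known defects that 'leaked' past release into production."""
    total = found_before_release + found_after_release
    return found_after_release / total if total else 0.0

# Example: a 7-day cycle and 10% leakage (45 caught in test, 5 in production)
ct = cycle_time_days(date(2026, 3, 2), date(2026, 3, 9))
leakage = defect_leakage(found_before_release=45, found_after_release=5)
```

    A defect leakage near zero suggests testing is catching problems before release; a rising value is an early warning that quality is effectively being inspected in production by customers.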

    #ProductManagement #ITSM #PeopleCert

    Bibliography

    Gray, V., Konageski, W., & McDonald, S. (2026). ITIL Product [Book]. PeopleCert.

  • I’m BAAAACK

    You may have noticed that I haven’t been active on my blog site (https://adaptiman.com) for some time. Good news – nothing happened to me; I’m not sick or burned out. I spent the last year working with PeopleCert on the authoring team writing the newest version of ITIL®. The authoring and editorial team at PeopleCert is unbelievable and worth a longer description. In the coming days, I will share my insights on the new ITIL®, the authoring process, and how the new version will affect our profession moving forward.

  • The New ITIL is Here

    Three weeks ago, PeopleCert released the latest version of ITIL® Foundation. This is the fifth version of ITIL® released since 1989 and, in my opinion, could be the best yet. I was one of the authors for the new version. The experience was eye-opening on a number of levels, which I will describe below. But first, most colleagues want to know what is different with this version – why would I want to spend money to get certified for version 5?

    What’s New?

    There are three main changes to Foundation with a number of smaller but meaningful updates. Most importantly, ITIL® has expanded its scope beyond IT Service Management (ITSM) to include Product Management. The language in Foundation has changed from “ITSM” to “DPSM” (Digital Product and Service Management). This shift, while subtle, is seismic. The product management community has had a long run operating outside the scope of ITSM and created their own tribe. Product managers have their own way of looking at the world of digital products and services. So for PeopleCert to expand into their territory feels a little like stolen land without an acknowledgement. Even so, the purpose in doing this is mainly to address criticism that ITIL® has been too “operational” over the years, not focused enough on product and service strategy, creation, design, and transition. I agree with this criticism and believe it’s the right thing to do to provide more comprehensive coverage of “this thing we do.”

    This shift becomes apparent in a number of changes. The new ITIL® Digital Product and Service Lifecycle represents the Service Value Chain as a cycle of activities that move from the product management side of the model (Discover, Design, Acquire, Build, Transition) to the service management side of the model (Transition, Operate, Deliver, Support). This is a redux of the ITIL® version 3 Service Lifecycle within the context of the Service Value Chain Activities. I think it’s a more complete way of looking at the whole.

    Another notable shift is a greater focus on role capabilities rather than organizational capabilities. This is apparent in the new course designations: Product, Service, Experience, Strategy, and Implementation. The first four are focused more on organizational roles than previous versions of ITIL®, while Implementation addresses the need for clear “how to” advice – a frequent criticism of ITIL® over the years. Developing best practices around role-based capabilities helps practitioners answer the question, “Where do I fit?”

    Lastly, the new ITIL® is “AI Native.” This is not a recommendation of specific AI technologies, but the development of recommendations that help organizations become disruptors rather than disrupted within increasingly VUCA environments. As with most ITIL® practices, the material related to AI provides solid recommendations that are designed to be timeless.

    The Editorial Process

    The process to create the books belonging to this version of ITIL® took more than a year of our time to complete. The authoring teams, led by two to three lead authors for each book, would write each version of the material in short two-week sprints. At the end of each sprint, the version would be reviewed by the larger team for feedback and revision. Key ideas and concepts would be discussed in detail, with decisions on what to include – and what to leave out – going back to the core team. Surveys on contested ideas would be distributed between authoring sprints to settle sticky questions.

    Between these sprints, the excellent PeopleCert editorial team would prepare the versions, clean them up, format them, and redistribute them to the authoring teams. This process was very efficient, bringing to bear the collective wisdom of the experts around the room. Even though I’ve been in ITSM for three decades, I was in awe of the expertise. The process resembled a modified Delphi research model with the world’s leading experts in our field shaping the collective direction.

    All of this adds up to a reshaping of our profession to broaden IT Service Management into Digital Product and Service Management – a long overdue upgrade to the venerable compendium of best practices we all know and love.

  • The Message IS NOT the Medium

    In ITSM, we’ve been talking about “products and services” for a long time. It seems as if these terms live together – you can’t state one without the other. But what do they really mean? Are they the same thing? How are they related? To figure this out, as usual, I’ll go off on a tangent older than most reading this post.

    Forty years ago, Richard Clark wrote a seminal paper on instructional design in which he made the case that the medium of instruction had no effect on learning. He made this point famously when he stated, “The best current evidence is that media are mere vehicles that deliver instruction but do not influence student achievement any more than the truck that delivers our groceries causes changes in our nutrition.” But would we get any nutrition if the truck never came?

    Robert Kozma, who believed that the medium DID matter, would say “no.” He engaged in a friendly debate with Clark over the next twenty years, which played out in a myriad of journals. Is learning realized through the message or the medium that carries it? To this day, the question is still unsettled in the research. But for me, the question was clarified during a conversation I had with my librarian wife the other day:

    David: “What are you reading?”

    Allyson: “A book…”

    David: “Is it a good book?”

    Allyson: “Yes, why do you ask?”

    David: “How do you know it’s a good book?”

    Allyson: “(getting annoyed) Because the ideas in it are interesting.”

    David: “Do you value the physical book or the ideas the book conveys more?”

    Allyson: “The ideas, naturally.”

    David: “But would you have learned those ideas WITHOUT the physical book?”

    Allyson: “Well, I don’t really need the physical book. I could’ve read the words somewhere else.”

    David: “Like where?”

    Allyson: “Like another book, or heard them through an audio book, or someone could have told me.”

    David: “So in each case, you need the medium of the printed words, the recorded words, or the spoken words to get the ideas?”

    Allyson: “Well, yes. You have to have some kind of medium to transfer the ideas.”

    I believe the Clark-Kozma debate lingered because the debaters always took one side or the other, never thinking outside the two halves of the question. But the real answer is that both the medium and the message are necessary for learning. The message contains learning (i.e., value) for the learner, which is transferred through the medium.

    This story has a lot in common with the products and services puzzle confronting ITSM professionals. To set it up, in the “good old days,” the difference between products and services was pretty clear. As an example, software designers produced software products. Services such as order fulfillment and technical support were left to the “operational” side of the house to deliver and support the product. There was a clear conceptual distinction between products and services. Products were “things.” Services were “actions.”

    These days, with the proliferation of digital “things”, the boundary between products and services (or development and operations) is not that clear. Software has become more service-oriented with the Software as a Service (SaaS) model. In fact, many products are now delivered via the network. Other models (e.g., PaaS, IaaS, etc.) continue to evolve and gain dominance (that’s a lot of aaSes). Services contain both the thing being delivered and the delivery mode. So a consulting service is a thing (i.e., advice) being delivered as a service (the engagement, communication, reports, etc.). Is there a difference between products and services, and if so, what is their relationship?

    As true ITSM professionals, we can begin to answer this by asking from where does the value come? Is the value realized through the product or the service? I think digital products represent potential value to the consumer, but the value is only realized when it is delivered through a service. This service may take two forms – access or a service action related to the product. This is true even in the case of a good. The good must be delivered to a consumer for value to be realized. This delivery is a form of a service. Products by themselves are of no value in the same way that an axe is of no value unless you pick it up (i.e., access) and swing it (i.e., service action). Value is contained in products and delivered via services. You cannot provide value with only one – both product and service are necessary and related.

  • How much is that DOGE in the window?

    I was re-reading a really good book this week. A quote stuck out:

    In most governmental services, there is no market to capture. In place of capture of the market, a governmental agency should deliver economically the service prescribed by law or regulation. The aim should be distinction in service. Continual improvement in government service would earn appreciation of the American public and would hold jobs in the service, and help industry to create more jobs.

    W. Edwards Deming: Out of the Crisis, 1982, MIT Press.

    This seems especially relevant this week as we had the first meeting of the DOGE Subcommittee of the US House Oversight and Government Reform Committee. This is not to be confused with President Trump’s DOGE, headed by Elon Musk. There has been a lot of ink spilled on the nature and relationship between these two DOGEs – enough to perplex and confuse most of the American public. I’m not here to lend an opinion about the relationship or constitutionality of the two organizations. I want to focus on higher questions in light of the above quote, because it seems we may be losing sight of the bigger picture: one of the purposes of DOGE is to improve our government by making it more fiscally efficient.

    No Market to Capture

    “No market to capture” means no competition. No competition results in an organizational culture of complacency and mediocrity operating with increasing inefficiency and producing less valuable programs and services unless/until someone/something holds them to account. It’s clear that an organization with no competition is an abnormal condition in a capitalist society. This is the crux of the Marxist argument – that competition should be replaced with socialism and eventually communism. But the end result is centralized control of the means of production, and we have seen what kind of society that leads to.

    Deming makes a key observation. He asserts that since it has a captured market, government has an exceptional duty to deliver economically efficient services in the absence of market forces. Is our government delivering on this promise?

    Distinction in Service

    Distinction in service would seem to indicate that the efficiency and effectiveness of government programs should be exemplary. The way to achieve exemplary services in any sector is to engage in a culture of continual improvement. Since our government services don’t appear to be exemplary in many cases, is this an indication of a lack of focus on continual improvement? How do we change that?

    The first two steps of the ITIL Continual Improvement process are 1) What is the vision? and 2) Where are we now? The vision (or strategy, if you will) comes from our executive branch, i.e., the president. This is the way our government is structured, whether we like it or not. Where are we now? I would point out that the debt-to-GDP ratio of the U.S. over the last 45 years has increased nearly four-fold. In 1980, the ratio was 31%. Today, the ratio is 120%. We can argue about whether or not this fiscal path is sustainable, but that’s not my point. It would seem obvious to anyone that our current state is not efficient and arguably not effective. It is definitely not exemplary. How do we change this?

    Continual Improvement

    Deming points out HOW this is done – by focusing on continual improvement. As an ITSM practitioner and educator, I frequently think about continual improvement and how it affects value. Having worked in the government sector, I have seen how a lack of competition can lead to complacency and mediocrity. But I’ve also seen the results of having the RIGHT people in charge. My observation is that the biggest difference between the right people and the wrong people is a focus on developing a culture of continual improvement within the organization. In the case of our government, these people understand that they have an awesome and sacred responsibility to use their position with honesty and integrity, and in so doing will earn the respect and appreciation of the American people. This is what I believe our government can and should become.

  • CrowdStrike Outage “Not What You Thought”

    It’s been six months since the CrowdStrike outage – enough time to reflect on the incident and take stock. I had lunch with my CISO about a week after the outage. It was the first time we had seen each other in several weeks. “So,” I asked sheepishly, “how have you been since the outage?” “I’ve been fine. But the Service Desk has been swamped. Since my security team wasn’t that busy, we pitched in to help remediate the outage. They touched 15,000 servers and client machines in three days.” I inquired further. His role focused on the management of encryption keys that were necessary to unlock and manually patch the operating systems of the affected machines. “The hard part of the recovery was managing the keys,” he said. As his team was jointly responsible for the security of those keys, that was the extent of his involvement. You see, CrowdStrike pushed a bad patch – one file – but an important one that loads at the kernel level. This caused all of those Windows machines to “blue screen.”

    Something didn’t compute. I thought he was going to be falling asleep at the table, eyes bloodshot, bags under them, a quart jug of coffee in his hand. Instead, he seemed rather chipper. Then it hit me. This wasn’t a security incident. Rather, it was what we call in ITSM a deployment and release management issue. It’s not that Security Management wasn’t involved – they were. But it was apparent early in the Problem cycle that this wasn’t a cyberattack.

    The response from our university IT was quick and appropriate. Within thirty seconds of the patches being applied, customers began to call and report “blue screens.” This spawned a number of related incidents at the Service Desk. These incidents were quickly correlated into a Problem record, which was upgraded to a major incident (i.e., outage) record in less than an hour, all of this happening around midnight on July 19th. During the early morning hours, an incident response team did a root cause analysis and quickly determined the problem was a vendor patch.

    The vendor response was quick and the patch was available by early morning, although the CEO of CrowdStrike was criticized in subsequent days for not issuing a timely apology. The damage to CrowdStrike’s reputation was done. After all, the outage affected roughly 8.5 million computers. CrowdStrike was quickly seen as the responsible party and IT folks around the world became heroes as the outage response progressed. But Microsoft was also responsible for letting CrowdStrike play in the Windows kernel. Microsoft distanced themselves from responsibility by asserting, “Although this was not a Microsoft incident, given it impacts our ecosystem, we want to provide an update on the steps we’ve taken with CrowdStrike and others to remediate and support our customers.” In this instance, Microsoft was acting as an integrator, more specifically, as a Service Guardian, where they managed both a third-party vendor (CrowdStrike) and provided services (Windows). Here, ITIL best practices dictate that we have a high level of communication and trust with the integrator, but also acknowledge that our customers will hold us – not our vendors – responsible. After all, who are our customers going to blame – us or our vendor?

    I see a double failure here. CrowdStrike failed by deploying a service with a critical bug in it, which they should’ve uncovered in their acceptance testing. This is not George Kurtz’s first high-visibility failure. In 2010, he was CTO of McAfee when a similar outage occurred. The second failure was Microsoft’s mismanagement of their vendor. One may ask why they allowed a vendor to deploy a file at the kernel level without sufficient testing. You would also expect Microsoft to have caught the error prior to approving the release of the errant file. Was Microsoft’s trust of CrowdStrike so great that they didn’t do acceptance testing and simply passed the updates through? If so, they need to review their Deployment and Release Management practices. Of course, this is pure speculation.

    Meanwhile, back at “the ranch,” the IRT created a Change Request that included testing of the patch on a number of machines. Procedures to apply the patch were documented at both the individual asset level and the more strategic coordination level. On the communication side, customer communication began as soon as the Problem was identified, about an hour into the incident, with a number of communications happening in the early morning hours via IT staff in the colleges and university communications to stakeholders. Communication continued through the next few days as the incidents were remediated and non-reported servers and endpoints patched. An After Action Review was conducted less than a week after the initial incident was reported. Lessons learned were documented. DONE!! YAY!!!

    Since I retired from IT, I’m an “observer” these days and I can tell you that I don’t miss the excitement surrounding outages. Been there, done that, got the t-shirt. But I must say that I’m very proud of the way our university handled this major incident – responsive, professional, by the book. I don’t think our response would’ve been as good five years ago. We’ve come a long way in our journey in understanding ITSM.

    In summary, what ITSM practice areas were involved in this outage?

    1. Service Desk
    2. Incident Management
    3. Problem Management
    4. Continuity Management (via Major Incident/Outage)
    5. Vendor Management
    6. Asset Management
    7. Relationship Management (i.e., communication with stakeholders)
    8. Change Management
    9. Security Management (indirectly)

    This is a pretty impressive slice of the ITIL ITSM Practices for a single issue. I think our IT folks would report that we have varying levels of maturity in each of the Practice areas, but I can tell you from experience that this kind of outage hones our skills to respond better the next time. Iron sharpens iron.

  • Lost Improvements: An Analogy to Defects

    Defects are not free. Somebody makes them, and gets paid for making them.

    W. Edwards Deming

    To summarize Deming’s teaching on defects, they cost an organization thrice. First, the defect is made, which robs the organization of a “working” product or service. Second, the defect must be identified, which also takes time and resources. Lastly, the defect must be resolved, thus taking more resources away from producing non-defective products and services. If this isn’t bad enough, these costs don’t include opportunity costs which could be mitigated with improvements.

    In manufacturing (and IT ;-)), a defect happens because of a quality failure either at the source or somewhere upstream. Once a defect is built into a product, there are two ways to detect it. First, it may be detected prior to shipping. Second, the customer may see the defect, which is significantly worse from a CX perspective. To draw the analogy to lost improvements, if there is no system in place to record improvements, that’s the equivalent of allowing a defect to get to the customer. Lack of improvement causes more technical debt and operational overhead down the line and will be reflected in much of the work that is done by the organization. These defects will be visible to customers, one way or another. How does an organization create a culture of continual improvement?

    First, an organization must embrace a culture of improvement. According to ITIL4, a culture of improvement requires three things: transparency, managing by example, and building trust (CDS, 2.3.4, 2.3.8). I’ll treat these three topics in more detail in a future post, but suffice it to say that my perspective is that the former are dependent on the latter – that is, trust is the “coin of the realm” and other aspects of an improvement culture are dependent on it. For example, organizations that have a high degree of trust manifest a corresponding high level of transparency.

    Trust is the “coin of the realm” and other aspects of an improvement culture are dependent on it.

    Second, an organization must provide mechanisms for conserving, prioritizing, and executing improvement initiatives. Starting with a Continual Improvement Register (CIR) is a good first step. If systems are too prescriptive, or improvement processes are not defined, team members don’t feel empowered (or able) to record improvement ideas. Without improvement, the organization will continue to produce defects. Making the CIR accessible at all levels of the organization is also recommended. Appointing a small, dedicated improvement person or team responsible for prioritizing and executing on those improvement opportunities closes the loop. Communicating the status of improvement opportunities creates buy-in from the organization and keeps the suggestions rolling in. In my experience, organizations go awry in the second requirement. They may build a culture of trust and improvement, but that culture must be operationalized to realize the true benefits.
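
    To make the second requirement concrete, here is a minimal sketch of what a CIR might look like as a data structure. The field names and the simple value‑over‑effort ordering are my own illustration – ITIL does not prescribe a schema – but the two behaviors shown are the essential ones: anyone can log an idea, and someone regularly prioritizes the backlog.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Improvement:
    """One entry in a Continual Improvement Register (CIR); fields are illustrative."""
    title: str
    submitted_by: str
    value_estimate: int       # 1 (low) to 5 (high) expected benefit
    effort_estimate: int      # 1 (low) to 5 (high) expected cost/effort
    status: str = "proposed"  # proposed -> prioritized -> in progress -> done
    logged_on: date = field(default_factory=date.today)

class ImprovementRegister:
    def __init__(self):
        self.entries = []

    def log(self, item: Improvement):
        # The key cultural point: logging must be open to everyone.
        self.entries.append(item)

    def prioritized(self):
        # A simple value-over-effort ordering; a real CIR would also weigh
        # risk, urgency, and alignment with strategy.
        return sorted(self.entries,
                      key=lambda i: i.value_estimate / i.effort_estimate,
                      reverse=True)

cir = ImprovementRegister()
cir.log(Improvement("Automate password resets", "service desk", 5, 2))
cir.log(Improvement("Rewrite KB search", "ops", 3, 4))
top = cir.prioritized()[0].title  # the high-value, low-effort idea surfaces first
```

    Even a shared spreadsheet with these columns delivers most of the benefit; the tooling matters far less than the habit of logging and regularly re-prioritizing.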

  • The Non-Technical Economy

    It seems that everything these days is about AI and how the world as we know it will end. Some are prophesying that entire swaths of our economy will be replaced by AI. Writers are lining up on both sides of the argument. As IT professionals, much has been written (or assumed) about the use of AI in IT.

    In 2019, Brian Merchant wrote, “A robot is not ‘coming for’, or ‘stealing’ or ‘killing’ or ‘threatening’ to take away your job. Management is.” Is that true?

    While most people are focusing on those things that AI can do to replace human work, there are a significant number of things that AI can’t do better than humans. Interestingly, I asked AI (i.e., Microsoft Copilot) for a list of things that humans do better. Here’s the list:

    1. Emotional Intelligence: Humans are naturally equipped with the ability to understand, interpret, and respond to emotions in a way that AI cannot. This includes empathy, compassion, and the ability to read subtle social cues, which are essential for meaningful human interactions.
    2. Creativity and Innovation: While AI can generate new content based on existing data, humans have the innate ability to think outside the box, imagine completely new concepts, and create innovative solutions to problems.
    3. Ethical and Moral Judgment: Humans can navigate complex ethical dilemmas and make decisions based on a nuanced understanding of morality, societal norms, and personal values. AI lacks the consciousness to make such judgments.
    4. Adaptability and Flexibility: Humans can easily adapt to new situations, learn from diverse experiences, and apply knowledge in various contexts. AI systems, on the other hand, can struggle with tasks that deviate from their training data or require adaptation to new environments.
    5. Complex Problem-Solving: Humans are capable of solving poorly defined or novel problems that require intuition, judgment, and the ability to find connections between seemingly unrelated information. AI systems are typically limited to solving algorithmic problems and can struggle with tasks that cannot be systematically defined.
    6. Physical Dexterity and Mobility: Humans have fine motor skills and the ability to navigate complex physical environments, which are challenging for AI and robotics to replicate with the same level of proficiency.
    7. Interpersonal Skills: The ability to build relationships, collaborate effectively, and navigate social networks is a distinctly human trait. AI can assist with communication but cannot fully replicate the depth of human interaction.
    8. Learning from Limited Data: Humans can learn new skills or understand new concepts from a few examples, whereas AI often requires large datasets to learn effectively.
    9. Understanding Context and Nuance: Humans excel at understanding context, sarcasm, irony, and nuanced language, which can be challenging for AI to interpret correctly.

    It’s important to recognize that AI is a tool designed to augment human abilities, not replace them. The collaboration between human intelligence and AI has the potential to enhance productivity and innovation across various fields.

    What’s interesting about this list is that most of these skills are closely related to those needed to provide excellent IT service management. As the emphasis in IT has grown over the last three decades from technical to customer-service competencies, the identification of these soft skills has been one of the ways the profession has defined and delineated itself. Take, for example, the list of skills necessary to provide excellent service desk support (ITIL4 Foundation Training, 2024):

    • Customer service
    • Empathy
    • Incident analysis and prioritization
    • Effective communication
    • Emotional Intelligence

    It would appear, at least at this moment in time, that AI will not be able to do some of the fundamental things we do in IT service management. Indeed, a survey of those industries most susceptible to “takeover” by AI includes manufacturing, finance, healthcare, cybersecurity, and education. Note that these fields don’t rely heavily on stakeholder interactions to be effective.

    So why are “managers” still trying to replace us? I think the answer is that they are thinking in a binary way – either we use AI to do work or we use humans. The real answer is that AI will augment and complement humans in IT service management, not replace them. This is reflected in the newest ITIL4 Create, Deliver, Support curriculum, which stresses the effective integration of AI, among other tools. Mature IT managers will realize that AI is a tool that can automate steps of the value stream, but at the end of the day, customers will have better outcomes and realize more value if humans are left to do what humans do best.