Project Management, A Process or Practice?

Is the work of project management a process or practice?

I think project management is a work of both process and practice, so what is the distinction?

A process focuses on being consistent and repeatable – you should be able to get predictable results from a process.

The PMBOK Guide defines five process groups, with ten knowledge areas intersecting those groups across the various stages of a project.

Practice focuses on applying knowledge, judgment, and wisdom to achieve the desired outcome and dealing with changes. All project managers need to produce results by balancing the triple constraints: scope, time, and cost. Those constraints can often change during project execution.

A good project management practice, established and executed by qualified personnel, will deliver the desired results even within those constraints.

These days, just about everything we do requires a mix of process and practice. For example, implementing the ITIL processes verbatim from the framework, without putting the necessary effort into building an ITSM practice, will just yield generalized paper processes.

With the pace of changes picking up and the predictability of our environment shrinking, developing a practice is just as important as developing the required processes.

Golf and Tennis

Both golf and tennis are competitive sports. Many people play the games, but there are only so many highly skilled players.

Both sports involve hitting a small object with an apparatus, but it takes two different approaches to succeed in them.

Playing great golf takes a well-executed process. When the swing is executed consistently with little or no error, the golf ball will always travel to the spot where the golfer wants it to be, assuming, of course, the same environmental conditions such as wind and ground texture.

Playing great tennis takes a well-executed practice. Practice is where the knowledge, skills, and judgment of the player come together in response to the current situation on hand.

Playing golf takes consistency and predictability, so each swing will always get the ball to where it needs to go.

Playing tennis takes flexibility and unpredictability, so that each shot sends the ball where your opponent least expects it.

Playing golf with unpredictable swings will likely result in a poor score.

Playing tennis with predictable consistency will likely get you beaten soundly by your opponent.

Many organizations spend a great amount of effort to perfect their processes. At the same time, they need to spend just as much time to develop a practice that can deal with change and unpredictability thrown at them.

People, Process, Technology

“People, Process, and Technology” (or PPT) is a popular formula used in many process frameworks, ITIL included.

This model has worked well for the corporate factory age. In today’s increasingly idea and connection-oriented economy, an updated model is needed.

In “The Cognitive Enterprise,” Lewis and Lee propose Customers, Communities, and Capabilities, which I believe are the new PPT.

Customers do not appear in the PPT formula, but customers ultimately define value.

Communities promote the connection between people, internal and external. Communities and connections enable the free flow of ideas.

Capabilities are more than a combination of processes and technologies. Capabilities further enable the Customers and Communities to achieve results.

The “Three C’s” are the new PPT for our connection economy.

Book Review: An Integrated Requirements Process by Peter Brooks

Summary: Compelling recommendations for instituting an integrated requirements management process in any enterprise

After managing IT projects and practicing IT service management for a number of years, the idea of having an integrated requirements process (IRP) for an enterprise intrigues me. I am certified in ITIL and have studied IIBA’s BABOK and ISACA’s COBIT frameworks, so I was particularly interested in reading Peter’s recommendations for managing enterprise requirements.

The author proposes the IRP based on the premises that:

  • Requirements are corporate assets and should be methodically captured, tracked, managed, and re-used for the benefit of the enterprise.
  • Many frameworks describe the need to capture and manage requirements but do not go into much detail on how requirements should be properly captured and managed
  • A unified view of requirements is necessary and can be leveraged by other IT frameworks and activities

Why would you want to read this book and examine the proposed process? I think the book is relevant if you are looking for:

  • A starting point into a more organized and formalized requirement management process for your organization
  • Ways to capture requirements from discrete projects into a centralized enterprise repository and to leverage their re-use
  • Recommendations for integrating requirement management more seamlessly with other IT activities/lifecycles such as application development, business analysis (BABOK), ITSM (ITIL), and IT governance/audit (COBIT).

How would this book help you? After reading the book, I think you will be able to:

  • Define or design a requirements management process for your organization, including, for example, the process flow, roles and responsibilities, and recommended CSFs and KPIs
  • Define or design categories and statuses to enable a requirements management workflow for logging, tracking, and re-using requirements
  • Define or design the necessary measurements for evaluating the IRP’s effectiveness
  • Understand or identify the necessary controls for governing and sustaining IRP
  • Understand or identify the integration points between IRP, BABOK, ITIL, and COBIT
  • Understand or identify supporting tool requirements

In summary, Peter has provided some compelling reasons and recommendations for instituting an integrated requirements management process in any enterprise. The book defines all the necessary elements for designing, implementing, and governing the IRP. Peter has also taken a great deal of care to include plenty of worked examples that help explain the process. I believe his recommendations provide an excellent starting point for those who are ready to manage requirements as corporate assets rather than one-time project occurrences.

Change Management Process Design – Part 2


This post is part two of a series where we discuss the ITIL-based Change Management process and how to put one together. In the previous post, I presented some design considerations such as goal/purpose; the intended scope; and roles and responsibilities. In this follow-up post, I will discuss additional process governance and planning elements.

Categorization and Prioritization

Why do we categorize? Proper categorization can facilitate or drive certain governance decisions. Categorization can drive the lead time required for review and approval, and it can determine which process workflow or approval authorities are required to facilitate the change. ITIL recommends three types of change request: Standard, Emergency, and Normal. If ITIL’s definitions work for your organization, go ahead and adopt them. However, that categorization alone is probably not sufficient to describe changes and how they affect business operations. Finding a way to describe the risk and impact associated with a change is also important. Risk can be used to measure the potential disruption to business operations associated with a change request; impact can be used to measure how far-reaching a change can be. The key idea is to find a way of properly assessing changes and managing risks.
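As a sketch of how categorization can drive governance decisions, the hypothetical lookup below maps a change type and risk level to a minimum lead time and approval authority. The specific combinations and values are illustrative assumptions for one imaginary organization, not ITIL prescriptions:

```python
# Hypothetical governance rules: (change type, risk) -> (lead time in days, approver).
# The values here are illustrative assumptions, not ITIL mandates.
GOVERNANCE_RULES = {
    ("standard", "low"): (0, "pre-authorized"),
    ("normal", "low"): (3, "change manager"),
    ("normal", "medium"): (5, "CAB"),
    ("normal", "high"): (10, "CAB + executive sponsor"),
    ("emergency", "high"): (0, "emergency CAB"),
}

def routing_for(change_type, risk):
    """Return (lead_time_days, approval_authority) for a change request.

    Unknown combinations fall back to the most conservative path.
    """
    return GOVERNANCE_RULES.get((change_type, risk), (10, "CAB + executive sponsor"))

print(routing_for("normal", "medium"))  # (5, 'CAB')
```

Encoding the rules as data rather than prose makes the lead times and approval paths easy to review with the CAB and easy to change as the process matures.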

When designing a categorization scheme for changes, I recommend examining your existing incident and problem categorization schemes and attempting to keep categorization consistent across the ITSM processes. I believe a uniform categorization will make risk assessment more reliable, and it could also make the design and analysis of reports more meaningful. Some organizations choose separate categorization schemes for incidents, problems, and changes, a decision sometimes influenced by organizational boundaries or the tools on hand. Just keep in mind that a change management exercise is very much a risk management exercise as well.

Workflow and Documentation

Once you have the change requests categorized, you will need a workflow to process them for review and approval, with well-defined lead times. These lead times give the change manager and the stakeholders adequate time to review and approve each change request. Many organizations I have worked with held weekly or semi-weekly CAB review cycles, which means the approval and scheduling windows need to be well defined so the CAB and change manager can review, discuss, approve, or even escalate a change with sufficient time. Different change types or risk/impact levels will likely require different lead times, or perhaps different workflows altogether. Some organizations may also need to impose change freeze windows to protect critical business systems or processes.

Just like other ITSM artifacts, change requests should be documented and, ideally, captured in a tool. The level of detail to be captured will vary from one organization to another, but most organizations should capture a baseline of required data such as:

  • Change requester, owner, and implementer
  • Type, category, risk, impact, priority as defined in the process document
  • Configuration items (systems, applications, devices, etc.) affected by the change
  • Summarized and detailed description of the change
  • Business justification
  • Proposed schedule or implementation timing
  • Dependencies and required resources identified
  • Key approvers needed
  • Final report on the closure of the change
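If you are capturing change requests in a homegrown tool or spreadsheet export, the baseline fields above might translate into a record like the following sketch. The field names are my own illustrative choices, not taken from any specific ITSM product:

```python
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    """Minimal change request record covering the baseline data fields.

    Field names are illustrative; adapt them to your own tool's schema.
    """
    requester: str
    owner: str
    implementer: str
    change_type: str          # e.g. "standard", "normal", "emergency"
    category: str
    risk: str                 # as defined in the process document
    impact: str
    priority: str
    affected_cis: list = field(default_factory=list)   # configuration items
    summary: str = ""
    description: str = ""
    business_justification: str = ""
    proposed_schedule: str = ""
    dependencies: list = field(default_factory=list)
    approvers: list = field(default_factory=list)
    closure_report: str = ""

rfc = ChangeRequest(
    requester="jdoe", owner="itops", implementer="dba-team",
    change_type="normal", category="database", risk="medium",
    impact="single service", priority="medium",
    affected_cis=["ora-prod-01"],
    summary="Apply quarterly security patch",
)
print(rfc.change_type, rfc.affected_cis)  # normal ['ora-prod-01']
```

Starting from an explicit record like this also makes it obvious which fields are mandatory at logging time and which, like the closure report, are filled in later in the workflow.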

Once the changes are captured and processed, the process guide also needs to define the necessary communication and coordination mechanisms to support the change management activities.

Metrics and Measurements

Tracking and measurement are the key elements to the continual process improvement. Depending on the goal and purpose defined for the change management process, we can further define the critical success factors and the key performance indicators we will need to track in order to measure the effectiveness, efficiency, and the quality of the process. The process design should spell out the metrics requirements and make sure the tools can support the required metrics.
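To make the metrics requirement concrete, here is a small sketch that computes two commonly tracked change management KPIs, the change success rate and the emergency-change ratio, from a list of closed change records. The KPI selection is an illustrative assumption, not an ITIL mandate:

```python
def change_kpis(changes):
    """Compute two illustrative change management KPIs from closed change records.

    Each record is a dict with at least 'outcome' ('success' or 'failed')
    and 'type' ('standard', 'normal', or 'emergency').
    """
    total = len(changes)
    if total == 0:
        return {"success_rate": 0.0, "emergency_ratio": 0.0}
    successes = sum(1 for c in changes if c["outcome"] == "success")
    emergencies = sum(1 for c in changes if c["type"] == "emergency")
    return {
        "success_rate": successes / total,
        "emergency_ratio": emergencies / total,
    }

closed = [
    {"type": "normal", "outcome": "success"},
    {"type": "emergency", "outcome": "failed"},
    {"type": "standard", "outcome": "success"},
    {"type": "normal", "outcome": "success"},
]
print(change_kpis(closed))  # {'success_rate': 0.75, 'emergency_ratio': 0.25}
```

Whatever KPIs you settle on, verifying early that the tool captures the underlying fields (outcome, type, timestamps) is exactly the "make sure the tools can support the required metrics" step above.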

Process Integration

It will also be useful to identify the ITSM processes that connect to CHM. The incident, problem, change, configuration, and release management processes all have activities closely tied to one another. Which processes will trigger the CHM process from an upstream workflow, and which will receive CHM output downstream? If an incident requires changes to restore service, how will those changes be handled? If you practice problem management in your environment, how will the CHM process be injected into root-cause or deficiency remediation activities? Will incident tickets, problem records, configuration items, and requests for change be linked in some fashion? These are just some governance-related questions that should be considered upfront, as they will affect how you plan and design the CHM process.

In the last two posts, we went over a number of governance and planning elements for the change management process: its scope and purpose, the essential roles for executing it, and the necessary categorization, prioritization, workflow, metrics, and integration with other ITSM processes. In the next post, we will go over an example process flow and spell out more details for the change management activities.

Change Management Process Design – Part 1

This is the first post of a series where we do a tutorial and deep-dive on an ITIL-based change management (CHM) process design. In the next few posts, we will go over process design considerations such as the goal/purpose; the intended scope; roles and responsibilities; RFC categorization and prioritization; scheduling; integration; and metrics and measurements. Toward the end, we will examine how all these planning considerations come together as we design an example process flow.

Goal and Purpose of Change Management

When designing an ITSM process such as change management, one of the most fundamental questions to ask is why your organization needs a formalized process. By ITIL’s definition, the intent of the change management process is to control the lifecycle of all changes. By exercising proper control over changes, we put ourselves in a better position to benefit from them and to minimize potential disruptions to our IT environment. Do most organizations need a formalized change management process? I believe so, because change is a constant in our complex IT and business environments. There are a number of reasons and objectives for implementing a well-organized change management process, including:

  • Accommodate changes but not at the expense of system or application availability
  • Identify and categorize changes that may have different approval workflow or implementation lead time
  • Determine a better approach to assessing the levels of risk and the potential impact
  • Ensure that each request for change (RFC) is properly evaluated for technical merit and balanced against business needs

Depending on your organization’s intent or requirement, it is important to determine your objectives and fully answer the question of “why.” For many organizations, having a well-thought-out and documented change management process can help reduce the risks brought on by poorly planned changes. That, in turn, can go far in strengthening the IT organization’s ability to deliver timely and useful technology services that meet the needs of business.

Scope and Policy Implications

In defining your change management process, it will be useful to define a few scope or policy related items upfront. Some of my thoughts are:

  • To what organizational boundaries will the CHM process apply? Who should initiate, review, and/or authorize CHM activities? As with most ITSM processes, the benefits will be more far-reaching and visible when everyone adopts a unified approach and vocabulary. Considering just how connected business systems are these days in supporting business processes and services, it is important to understand and determine the scope boundaries of the CHM process beforehand.
  • What activities constitute changes, and will all changes receive the CHM treatment? Fundamentally, I believe all alterations to the enterprise-wide computing environment should be treated as changes, and those changes should be handled, in some fashion, by the CHM process. Depending on the technical nature and business implications of a change, some organizations may require different levels of review or scrutiny for different categories of change. To have an effective CHM process, all changes should be identified, addressed, and documented by the process in some way.

Roles & Responsibilities

A change management process can involve a number of participants. Here are some typical roles to be factored into the design.

  • Requester: Who can initiate an RFC? How will the requester participate in the overall CHM process?
  • Change Management Process Owner: The process owner makes sure that the process is defined, documented, maintained, and communicated at all levels within the organization. The process owner is not necessarily the one doing the actual work, but process ownership comes with the accountability of ensuring a certain level of quality in the process execution. The process owner also drives the continual improvement activities for the CHM process.
  • Change Manager: The change manager is the main actor and the most visible role in the CHM process. The change manager ensures all RFCs receive proper handling and review by chairing the Change Advisory Board (CAB). As an outcome of the CAB meeting, the change manager publishes and communicates the schedule of changes. The change manager is also responsible for reporting the metrics to the process owner for quality assurance and continual improvement purposes.
  • Change Implementer: The change implementer role is often played by the subject matter experts who perform the implementation work. After the change is executed, the change implementer also reports the outcome of change to the change manager for documentation and further actions.
  • Stakeholder: There could be several different types of stakeholders involved in CHM process. The stakeholders should be identified as members of the CAB or for RFC approval purposes. Ideally, the CHM process could use at least one executive-level stakeholder who can act in a governing or mediation capacity when conflicts arise.

We will discuss additional topics such as RFC categorization and prioritization; scheduling; integration; and metrics and measurements in subsequent posts. Stay tuned, and I welcome your feedback.

COBIT 5 and What You Can Leverage for ITSM Work

ISACA recently released COBIT 5, a governance and management framework that can help organizations create optimal value from IT. If you are familiar with COBIT, hopefully you have already downloaded the framework documents. If you are not familiar with COBIT or ISACA, follow this link to get more information on the framework. In this post, I will outline some of the useful information you can leverage from COBIT to help you in your ITSM journey, based on my early perusal of the framework.

Good Practices

For a number of the processes we use in ITSM, there is a corresponding one in COBIT. For example, DSS02 in COBIT, “Manage Service Requests and Incidents,” maps approximately to the Incident Management and Service Request Management processes in ITIL. Within DSS02, COBIT breaks the process down into seven management practices, each with a number of associated activities. If you want to implement or improve an ITIL Incident Management process for your organization and wonder what is considered good practice, these activities can provide valuable insight for your effort. Tailor them into exactly what you would do in your organization, and you have a list of good practices for your shop.

Metrics

For each process, COBIT 5 outlines the IT-related and process goals that the process contributes directly toward. Next to each goal, COBIT lists recommended metrics for measuring it. Of course, depending on your organization and the availability of certain service management data, you will have to fine-tune those metrics for your environment. Still, the list offers an excellent starting point for defining the metrics you plan to capture.

RACI Chart

For each process, COBIT 5 has a RACI chart that shows who is responsible and/or accountable for certain key management practices within the process. Granted, the RACI chart can be high-level and somewhat generic. It nevertheless offers a good starting point for those working on a process design exercise or simply wanting to better define roles and responsibilities within their environment.
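A RACI chart like COBIT’s can also be kept as simple structured data, which makes it easy to query during a process design exercise. The practices and assignments below are hypothetical placeholders, not copied from COBIT 5:

```python
# Hypothetical RACI chart: practice -> {role: RACI letter}.
# R = Responsible, A = Accountable, C = Consulted, I = Informed.
RACI = {
    "log and classify incidents": {
        "service desk": "R", "incident manager": "A", "users": "I",
    },
    "investigate and resolve": {
        "support teams": "R", "incident manager": "A", "problem manager": "C",
    },
}

def roles_with(letter, practice):
    """List the roles holding a given RACI assignment for a practice."""
    return sorted(r for r, v in RACI[practice].items() if v == letter)

print(roles_with("A", "log and classify incidents"))  # ['incident manager']
```

A quick query like this can catch common RACI mistakes early, such as a practice with no Accountable role or with more than one.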

In summary, I must say I like what I have seen from COBIT 5 so far, because the framework offers a great deal of good information to use in your ITSM work. I definitely recommend downloading the new framework and checking it out further. On Tuesday, April 17, 2012, Debbie Lew of Ernst & Young and Robert Stroud of CA hosted an education session on COBIT 5 during the ISACA Los Angeles Chapter’s annual spring conference. Normally the presentation deck is available only to attendees of the conference, but Ms. Lew has graciously given me permission to make it available via this blog. Check out their deck for more information on COBIT 5, and feel free to post questions and comments.

DIY Process Assessment Wrap-up – Constructing the Report and Presenting the Results

This is the concluding post of the DIY Process Assessment series. In the previous posts, we went through lining up the approaches and resources, planning various aspects of the assessment, running the assessment and collecting the data, and finally making sense of the data collected. The last major steps are to write up the report and present the results to the stakeholders.

Writing up the Report

The final report should summarize the assessment effort, provide solid findings on the current maturity level, and suggest both near-term and long-term actions for improvement. Generally, the assessment report will contain the following elements:

  • Executive Summary
    • Short summary of the project background and problem definition
    • Brief description of the assessment methodology used
    • Summary of maturity scores for each process assessed
    • Discussion of the integration between processes and other comparative benchmark information
  • Project Scope – the processes and organizational units covered under the assessment
  • Overall Conclusion, Recommendations, and Next Steps
    • Did the conclusions appear to be logically drawn from the data gathered?
    • Did the results confirm the perceived problem?
    • Are the recommendations aligned logically with the conclusions?
    • A roadmap showing the sequence of actions and the dependencies between them
  • Analysis of the Processes (for each process)
    • Scores or maturity levels by process
    • Process goals, intended outcomes, and perceived importance
    • Process-specific conclusions and recommendations
  • Organizational Considerations
    • Any noteworthy factors encountered during the assessment that could provide more insight or context for the conclusions
    • Any other organization-related factors that should be taken into account when implementing the recommendations or actions

Presenting the Results

When presenting the results, keep the following suggestions in mind.

  • Depending on your organization, you may use different types of meetings or communication vehicles to present the results. At a minimum, I feel the project sponsor should host at least one presentation with all assessment participants and the senior leadership team.
  • Hold additional meetings with the process owners to discuss the results and to identify quick-wins or other improvement opportunities.
  • Anticipate questions and how to address them, especially the ones that could be considered emotional or sensitive due to organization politics or other considerations.

It took seven posts in total to cover this process assessment topic, and I feel we have still only covered it at a somewhat rudimentary level. There is more to drill into in depth, but everything we have covered so far makes a very good starting point. As you can see from the steps involved, an assessment is not a trivial effort. So before you go off and start planning the next assessment, it is worth answering the question some people will inevitably ask: “Why bother?” I can think of a few good reasons for taking the time to plan and run an assessment.

  1. Most organizations do not have their processes at the minimally effective level needed to support their business or operations. They want to fix or improve those processes, and a process assessment can help identify where things might be broken and need attention. The problem definition is a key area to spend some effort on.
  2. Many organizations undertake process improvement projects and need some way to measure progress. Process assessment helps not only to establish the initial benchmark but also to provide subsequent benchmarks for calculating progress. A lot of us measure by gut feel, and intuition can sometimes be right about these things, but a more concrete measurement is much better.
  3. Along the same line of reasoning, I cannot think of a better way to show evidence of process improvement or ROI to your management or project sponsor than with a formal assessment. Many people run process improvement initiatives as grassroots or informal efforts with internal funding due to organizational realities. At some point, you may find yourself needing to go to management and ask for a real budget for time, people, and tools. Having a structured way to show the potential contributions or ROI down the road can only help your cause.

In conclusion, process assessment can be an effective way to understand where your process pain points are, how to address them, and how far your organization has come in terms of improvement. All meaningful measurements usually take two or more data points to calculate a delta. Conducting process assessments periodically provides the data points you need to measure your own effectiveness and to justify further improvement work.


DIY Process Assessment Execution – Analyzing Results and Evaluating Maturity Levels

In the previous post, I gave an example of a process assessment survey. Using a one-to-five scale, you can arrive at a weighted (or simple average) score for a given process after collecting data from the assessment participants. The more data points (survey results) you collect, the more realistic, and hopefully accurate, the process maturity score will be. Before you take the process maturity scores and start making improvement plans, I think there are two other factors to consider when analyzing and evaluating the overall effectiveness of your processes:
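The weighted score mentioned above can be sketched as follows. The question scores and weights are made-up example values; with equal weights, the calculation reduces to a simple average:

```python
def weighted_maturity(responses):
    """Compute a weighted maturity score from (score, weight) pairs.

    Each pair is one survey question's 1-to-5 score and its weight.
    With equal weights this reduces to a simple average.
    """
    total_weight = sum(w for _, w in responses)
    if total_weight == 0:
        raise ValueError("weights must not sum to zero")
    return sum(s * w for s, w in responses) / total_weight

# Three questions: scores 4, 3, 5 with weights 2, 1, 1 (illustrative values).
print(weighted_maturity([(4, 2), (3, 1), (5, 1)]))  # 4.0
```

Weighting lets you emphasize the questions that represent the practices most critical to your environment, rather than treating every survey question as equally important.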

  • Perceived importance of the process:

In addition to measuring the maturity level of a process, I think it is also important to measure how your customers and the business perceive the importance of that process. That information is valuable when gauging and prioritizing the investments that should go into the improvement plan for a process. For example, a process with a low maturity level but perceived to be of high importance to the business may be a good candidate for serious, well-planned investment. On the other hand, a process that has a high maturity level in IT’s eyes but is perceived to have lower importance to the business may signal that you should take a further look at the current investment level and see whether some scaling back or reallocation of funds could be an option. After all, we want the investment in any process to yield the most value for the organization overall, and we simply cannot make decisions on improvement plans without understanding the perceived business value.

Measuring perceived importance accurately requires asking the right questions of the right audience. People on the senior management team, or IT customers who are considered power users, are probably in a better position than others to provide this insight. Simply asking IT customers how important a process is to the organization may not be effective, because those customers are not likely to be as familiar with the nitty-gritty of IT processes as we are. We will need to extract the information by framing the questions in a way our customers can understand and respond to, without too much technical jargon.

As an example, the result of this analysis could be a bar chart showing the maturity level and the perceived importance level for the processes under assessment.

  • Degree of Integration Between Processes

Another factor to consider before taking a process maturity score and making an improvement plan is how well the processes integrate with one another. ITSM processes rarely act alone, and the effectiveness of an overall ITSM program also depends on the level of integration between processes. Assessing how well one process integrates with another generally involves looking at how well the output from one process is used in the others. Some examples of process integration for problem management include:

    • Processes Providing Input Into Problem Management:
      • Capacity management process could provide historical usage and capacity trends information to aid the root cause analysis or formulation of permanent solutions.
      • Incident management process could provide incident details for the root cause analysis activities. Incident data can also enable proactive problem management through the use of trend analysis.
      • Configuration management process could provide relationship information between configuration items, which can help in determining the impact of problems and potential resolutions.
    • Processes Receiving Output from Problem Management:
      • Incident management process could receive known error records and details of temporary fixes in order to minimize the impact of incidents.
      • Change management process could receive requests for change triggered by problem management to implement permanent solutions to known errors.

What scale should you use to rate the integration between processes? I think a simple scale of one to five should work just fine. For example:

    • One could indicate the output from the originating process is used inconsistently by the target process
    • Two could indicate the output from the originating process is used consistently but only informally by the target process
    • Three could indicate the output from the originating process is used consistently by the target process in a documented manner
    • Four could indicate the output from the originating process is used consistently to support the target process in a managed way
    • Five could indicate the output from the originating process is used consistently to support the target process in an optimized way

Define what the scale really means for your environment in a way that is easily understood by your team. Also keep in mind that not every process must integrate seamlessly with every other process for an ITSM program to be effective; however, good use of the integration scores can help uncover opportunities to capitalize on strengths or improve on challenges. For example, a low integration score between the incident and problem management processes could signal an opportunity to improve how those two processes exchange and consume each other’s output. If we find the known error database is not being used as much as it should be during incident triage, we should dig in further and see what actions could improve the information flow. If problem management is hampered by a lack of accurate incident information from incident management, the integration score should point to the need to raise the quality of the information exchange between the two processes.

As an example, the result of the process integration analysis could be a two-by-two chart showing the integration scores between processes.
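As a sketch, the pairwise integration scores could be held in a small matrix keyed by (originating, target) process, making it easy to surface the weakest links as improvement candidates. The processes and scores below are invented for illustration:

```python
# Hypothetical 1-to-5 integration scores, keyed by (originating, target) process.
INTEGRATION = {
    ("incident", "problem"): 2,   # incident data feeding root cause analysis
    ("problem", "incident"): 3,   # known errors feeding incident triage
    ("problem", "change"): 4,     # RFCs raised for permanent fixes
    ("capacity", "problem"): 1,   # usage trends rarely reach problem analysis
}

def weakest_links(scores, threshold=2):
    """Return process pairs scoring at or below the threshold,
    i.e. the most promising integration improvement opportunities."""
    return sorted(pair for pair, s in scores.items() if s <= threshold)

print(weakest_links(INTEGRATION))  # [('capacity', 'problem'), ('incident', 'problem')]
```

The threshold is an arbitrary cut-off; the point is simply to turn the assessment data into a ranked list of integration gaps to discuss with the process owners.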

We have come a long way in this DIY process assessment journey, from gathering the potential resources and planning for the assessment to executing the assessment and analyzing the results. In the next and concluding post on the process assessment topic, we will discuss presenting the assessment results and suggest some quick-win items to consider as part of the follow-up activities.

DIY Process Assessment Execution – Process Survey Example

In the last DIY assessment post, we discussed the data gathering methods and instruments to use for the surveys, workshops, and interviews. No matter what method(s) you end up deploying for your assessment, you will need a list of good/effective/best practices for a process in order to formulate the assessment questions. In the first post of the series, we talked about what reference sources you can use to come up with a list of good practices for a given process. In this post, we will illustrate an example of what the good practices and survey questions might look like for Problem Management.

Problem Management Process Assessment Questionnaire Example

As you look through the example document, I would like to point out the following:

  1. Each question in the questionnaire represents a good practice as part of what a mature process would look like. To come up with the list of practices, I leveraged information from ISO/IEC 20000 Part 2: Guidance on the Application of Service Management Systems. Helpful information sources like ITIL, ISO 20000, and COBIT provide a great starting point for us DIY’ers, and for the most part there is no reason to reinvent the wheel.
  2. To rank the responses and calculate the maturity level, I plan to use the 5-point CMMI maturity scale: 1) Initial, 2) Repeatable, 3) Defined, 4) Managed, and 5) Optimized. However, your survey audience will not likely know these maturity levels very well, so we need another way for them to rank their answers. As you can see from the example, I used either the scale of 1) Never, 2) Rarely, 3) Sometimes, 4) Often, 5) Always or the scale of 1) Not at All, 2) Minimally, 3) Partially, 4) Mostly, 5) Completely. You don’t have to use both scales – it all depends on how you phrase the questions, and I could have asked every question using just one of the two. In my example, I chose to mix things up a bit by using both, just to illustrate that either scale is viable for what we need to do.
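Since both answer scales map onto the same 1-5 range, a small translation layer keeps the scoring uniform. This is a sketch of my own, assuming the two scales described above:

```python
# Map both illustrative answer scales onto the same 1-5 numeric range
# used for the CMMI-style maturity calculation.
FREQUENCY_SCALE = {"Never": 1, "Rarely": 2, "Sometimes": 3, "Often": 4, "Always": 5}
EXTENT_SCALE = {"Not at All": 1, "Minimally": 2, "Partially": 3, "Mostly": 4, "Completely": 5}

def score_response(answer: str) -> int:
    """Translate a survey answer from either scale into a 1-5 score."""
    for scale in (FREQUENCY_SCALE, EXTENT_SCALE):
        if answer in scale:
            return scale[answer]
    raise ValueError(f"unrecognized answer: {answer!r}")

print(score_response("Often"))       # → 4
print(score_response("Completely"))  # → 5
```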
  3. Some questions are better asked with closed-end options like Yes or No instead of a scale. Those questions tend to deal with whether you have certain required artifacts or deliverables. For example, you either have a documented problem management process and procedures, or you don’t.
  4. As you can see, the scale questions translate nicely when calculating the maturity level. You may calculate the maturity level as a simple average of all responses to the scale questions, where every question carries an equal weight. Depending on your environment or organizational culture, you may also assign a different weight to each question, emphasizing certain practices over others. For the closed-end questions, you will need to think about what the responses of “Yes” and “No” mean when you calculate the final maturity level. For example, you may decide that a “Yes” for a group of questions earns a score of 3 out of 5, while a “No” equals 1. For some questions, you may even say a “Yes” equals 5.
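The calculation above can be sketched in a few lines. The Yes = 3 / No = 1 mapping is the example from the text, not a fixed rule, and the weighting scheme is one possible choice:

```python
# Illustrative maturity calculation following the simple model above:
# scale answers average into a 1-5 maturity level, with closed-end
# Yes/No answers mapped to scores first.
def yes_no_score(answer: str, yes_score: int = 3, no_score: int = 1) -> int:
    """Map a closed-end answer to a 1-5 score (example mapping: Yes=3, No=1)."""
    return yes_score if answer.strip().lower() == "yes" else no_score

def maturity_level(scores, weights=None):
    """Weighted average of 1-5 question scores; equal weights by default."""
    if weights is None:
        weights = [1] * len(scores)
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

responses = [4, 3, 5, yes_no_score("Yes"), yes_no_score("No")]
print(round(maturity_level(responses), 2))  # → 3.2
```

Swapping in a non-uniform `weights` list is all it takes to emphasize certain practices over others.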
  5. This is a simplistic model for assessing and calculating the maturity level via a DIY approach. You will need to construct a similar good-practice model for each process you plan to assess. Coming up with a good-practice model to assess against can turn into a significant time investment; however, the majority of the effort is upfront, and you can re-use the model for subsequent assessments. If you contract the assessment exercise out to a consultant, the best-practice model used to evaluate your processes is normally a deliverable from the consultant. Be sure to spend some time understanding your consultant’s model, and make sure it is applicable to your organization. That is an important way to ensure the assessment results will be meaningful and easy for everyone to understand.

Please have a look at the example document and let me know what you would do to improve it. In the next post, we will continue the discussion of the assessment execution phase by examining how to analyze the results and evaluate the maturity levels. We will also discuss how inter-process integration, as well as organization and culture, could play a part in the maturity level assessment.