
    Metrics and Performance Indicators: A Bibliography

    I’m writing a series of posts about Generalizing Apdex. This is #3.

    In my previous post on Core Apdex Qualities, I set out to enumerate the essential characteristics of the Apdex metric. What aspects of the Apdex method, and the metric it produces, would make it useful for reporting measurements in domains other than application response time, the focus of the Apdex specification today?

    The Apdex goals already identified in section 2.1 of the spec are certainly a good starting point, but I felt that to properly assess where a generalized Apdex metric might potentially be applied, I needed to establish a general picture of the use of metrics and performance indicators in management. This is easier said than done. How, for example, can we discuss the applicability of an Apdex metric as a Key Performance Indicator (KPI), when no standard definition of that concept exists?

    If you spend a couple of weeks sorting through Google’s ~4 million references to key performance indicators, two things will happen. First, you will see that confusion abounds; there is no generally accepted framework we can use to position Apdex. Second, you will begin to assemble your own short list of useful reading that will eventually reveal some broad conclusions about the field of management metrics. This post records my own experience doing this, with potential applications of a generalized Apdex standard in mind.

    Goals, Metrics, and Management

    I begin with two excellent introductory papers. Written in 1994, two years before the publication of The Balanced Scorecard by Norton and Kaplan, Thoughts on Goals and Metrics by David Walden WALD94 provides a readable and surprisingly useful summary of the role of metrics in management. I could write an entire post consisting only of quotations from this paper; my appreciation for Walden’s commonsense advice grew the longer I spent reading other articles on this topic.

    Twelve years later, Kenneth Rau’s opening statement in the 2006 Cutter IT Journal devoted to The CIO Dashboard and IT Performance Management RAU06 demonstrates how the passage of time has served only to confuse and complicate the subject. While the tremendous growth in interest in Business Intelligence (BI) and Business Performance Management (BPM) has spawned countless data warehouses, analytics vendors, dashboard tools and metrics specialists, it has not really altered the fundamental principles so concisely described by Walden.

    WALD94
    Thoughts on Goals and Metrics, David Walden, Center for Quality of Management Journal, Volume 3, Number 1, Winter 1994, pp. 33-38. [37Kb pdf]
    RAU06
    The CIO Dashboard and IT Performance Management. Opening Statement, Kenneth Rau, 2006. [Cutter IT Journal Vol 19 No.4, April 2006, pp 3-5. | 3.1Mb pdf]

    Metrics Soup

    Here are some short articles that testify to the widespread confusion over metrics, especially about the distinction between metrics and KPIs. Curt Hall HALL04 notes that one reason is sloppy usage, a common problem in IT. He begins with the admission “I’m guilty of this, along with many other people who tend to throw the terms around without giving much thought to their actual meanings”.

    Another reason is the proliferation of packaged solutions. Each of the four articles by Kent Bauer BAUE04a, BAUE04b, BAUE04c, BAUE05 begins with some variant of this statement: “Selecting and defining KPIs is not as easy as it sounds. In the current marketplace, any time you purchase business intelligence (BI), enterprise resource planning (ERP), supply chain management (SCM), customer relationship management (CRM) or business performance management (BPM) systems, you have the dilemma of choosing 15 to 20 KPIs from the several hundred (or thousand) metrics that are included in the package” BAUE04a. All four articles discuss schemes for classifying and organizing this multitude of metrics. However, bottom-up methodologies like these do not sound nearly as effective as the simple top-down approach advocated by David Walden WALD94.

    HALL04
    Key Performance Indicators and Metrics, Curt Hall, Senior Consultant, Cutter Consortium, 2004. [The Cutter Edge, 23 November 2004]
    BAUE04a
    KPIs – The Metrics that Drive Performance Management, Kent Bauer, 2004. [Information Management Magazine, September 2004]
    BAUE04b
    Key Performance Indicators: The Multiple Dimensions, Kent Bauer, 2004. [Information Management Magazine, October 2004]
    BAUE04c
    KPIs: Not All Metrics are Created Equal, Kent Bauer, 2004. [Information Management Magazine, December 2004]
    BAUE05
    Key Performance Indicators: Taming the Metrics Chaos, Kent Bauer, 2005. [Information Management Magazine, January 2005]

    Metrics Distinctions

    Now some papers that focus on defining key performance indicators. Wayne Eckerson, Director of Research at The Data Warehousing Institute, is widely cited for his list of ten characteristics ECKE05. Wayne begins his article with this observation: “(KPIs) … are the backbone of scorecards and dashboards, which have become an irresistible way for organizations to present performance information to executives and staff. Unfortunately, BI developers seem to focus more on creating visual metaphors (dials, gauges, arrows, etc.) than understanding what constitutes a good KPI that delivers long-term value to the organization”. It’s a good list, but it still focuses on how to recognize a KPI, as opposed to the process for selecting one. Perhaps that’s because the IT staff who manage data warehouses are also not the people who would actually select and use a KPI.

    Jonathan Becher was CEO of Pilot Software before it was acquired by SAP in 2007; PilotWorks became SAP Strategy Management. His short paper, Mitigating Metrics Madness: How to Tell KPIs from Mere Metrics BECH06, offers a readable and convincing discussion of the ideal characteristics of a KPI. Because the article is written for an IT audience, he also presents a bottom-up approach, writing (for example) that the process of “determining appropriate targets for your organization can be a bit of a scavenger hunt”. I suspect I may return to quote from this paper in a future post, as I think about the possible uses of a generalized Apdex metric.

    David Parmenter has written a book about Key Performance Indicators PARM07. I have not read it; if you have, please post your review in the comments. I have read his short paper, The New Thinking on KPIs PARM10, which distinguishes four types of indicator: performance indicator (PI), key performance indicator (KPI), results indicator (RI), and key results indicator (KRI). The key difference is that “result indicators … summarise activities and performance indicators … are tied to a precise activity”.

    Andrew Smart SMAR09 introduces two more types of metrics to be distinguished from KPIs: Key Risk Indicators and Key Control Indicators. His company, Manigent, delivers performance and risk management solutions. And his description of their Risk-based performance scoring methodology SMAR08 is relevant to Apdex — see Statistics and Scoring below.

    ECKE05
    Ten Characteristics of a Good KPI, Wayne Eckerson, TDWI, 2005. [The Data Warehousing Institute]
    BECH06
    Mitigating Metrics Madness: How to Tell KPIs from Mere Metrics, Jonathan Becher, 2006. [From Cutter IT Journal Vol. 19 No. 4, April 2006, pp13-16]
    PARM07
    Key Performance Indicators: Developing, Implementing, and Using Winning KPIs, David Parmenter, 2007. [Amazon Books]
    PARM10
    The New Thinking on KPIs, David Parmenter, 2010. [Extract from “Implementing winning KPIs”, 19 April 2010]
    SMAR09
    KPIs, KRIs & KCIs – Are they different? If so, does it really matter?, Andrew Smart, Manigent, 2009. [Risk-based performance, January 10, 2009]

    Statistics and Scoring

    The StatTrek Web page Statistics Tutorial: Measures of Central Tendency STATa states: “When the focus is on the degree to which a population possesses a particular attribute, the measure of interest is a percentage or a proportion”. Apdex is a weighted proportion, in which measurements are weighted 1 or 1/2 based on their proximity to a target value. Because of the importance of goals and targets in business performance management, I expected to find more discussion of metrics like Apdex that reflect degrees of success in meeting goals. Such a metric does not even appear, for example, in Bauer’s list of six KPI categories BAUE04b. Maybe these discussions are out there, and I’ve simply missed them. If so, please post a link in the comments.
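    The Apdex formula itself is simple enough to state in a few lines of code. The sketch below assumes the spec’s standard thresholds: a sample is satisfied at or below the target time T, tolerating between T and 4T, and frustrated above 4T.

```python
def apdex(samples, target):
    """Apdex score for a list of response-time samples.

    Satisfied samples (t <= T) are weighted 1, tolerating samples
    (T < t <= 4T) are weighted 1/2, and frustrated samples (t > 4T)
    are weighted 0; the score is the weighted proportion of the total.
    """
    if not samples:
        raise ValueError("need at least one sample")
    satisfied = sum(1 for t in samples if t <= target)
    tolerating = sum(1 for t in samples if target < t <= 4 * target)
    return (satisfied + tolerating / 2) / len(samples)

# With a target of T = 2 seconds: two satisfied, two tolerating,
# and one frustrated sample give (2 + 2/2) / 5 = 0.6
print(apdex([0.5, 1.2, 3.0, 5.0, 9.0], 2.0))  # → 0.6
```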

    I found Andrew Smart’s description of Manigent’s Risk-based performance scoring methodology SMAR08 particularly interesting. Like Apdex, it is a simple approach based on three quality zones, termed out of control, out of tolerance, and within tolerance, which can be displayed graphically using red, amber, and green (RAG) respectively. I will discuss what’s involved in generalizing this aspect of Apdex in more detail in a future post.
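    For comparison, three-zone RAG scoring of a single measurement can be sketched as below. The zone boundaries and parameter names here are hypothetical illustrations for a metric where lower is better, not Manigent’s actual methodology.

```python
def rag_zone(value, tolerance, control_limit):
    """Classify a measurement into one of three quality zones.

    Hypothetical thresholds, assuming lower values are better:
    within tolerance -> green, out of tolerance but still within
    the control limit -> amber, out of control -> red.
    """
    if value <= tolerance:
        return "green"   # within tolerance
    if value <= control_limit:
        return "amber"   # out of tolerance
    return "red"         # out of control

print(rag_zone(1.5, 1.0, 2.0))  # → amber
```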

    STATa
    Statistics Tutorial: Measures of Central Tendency. [StatTrek: Teach Yourself Statistics]
    SMAR08
    Risk-based performance scoring methodology, Andrew Smart, Manigent, 2008. [Risk-based performance, August 17, 2008]

    Dashboards and Indexes

    Data is prolific but usually poorly digested, often irrelevant and some issues entirely lack the illumination of measurement.

    – John D.C. Little in “Models and Managers: The Concept of a Decision Calculus”, Management Science, Vol 16 Number 8, April 1970

    Dashboards allow metrics and indicators to be made widely visible, as opposed to being hidden in a database. Koen Pauwels et al PAUW08 begin with Little’s quote (above), highlighting “the tension between the abundance of marketing data at our disposal and the lack of actionable insights that derive from it”. They define a dashboard as “a relatively small collection of integrated key performance metrics and underlying performance drivers that reflects both short and long-term interests to be viewed in common throughout the organization”.

    Jim Love and Alex Resnick LOVE06 claim that the term dashboard is an inaccurate metaphor, because “car dashboards tell us what has already happened.” They argue that companies must not create “the simplistic dashboard of the automobile when what they need is closer to the sophisticated instrumentation found in the navigation system of an airplane. On an airplane’s instrument panel, you can see the current status — speed, fuel consumption, and the like. But you can also get information that can help guide you to where you need to be in the future.”

    Sateesh Andra ANDR06 also employs an automotive analogy, stating that “a dashboard is a visual representation of data that is normally hidden, such as the engine temperature or the amount of gas remaining in a gas tank”. He presents a top-down process for creating an effective dashboard, using a conceptual pyramid to illustrate the hierarchy of data summary levels to be defined. He concludes that “information … not acted upon represents a wasted opportunity, (which) can have dangerous consequences … The dashboard is there to help the organization keep score and adjust so that informed decisions are made to produce desired results”.

    PAUW08
    Dashboards & Marketing: Why, What, How, and What Research is Needed?, Koen Pauwels et al, University of Frankfurt, 2008. [389Kb pdf]
    LOVE06
    Getting on the Same Page: Dashboard Development from Planning to Implementation, Jim Love and Alex Resnick, 2006. [Cutter IT Journal Vol 19 No.4, April 2006, pp 6-12. | 3.1Mb pdf]
    ANDR06
    Action-Oriented Metrics for IT Performance Management, Sateesh Andra, 2006. [Cutter IT Journal Vol 19 No.4, April 2006, pp 17-21. | 3.1Mb pdf]

    Conceptual Pyramids

    A recurring theme in many discussions of metrics is the conceptual pyramid, used to illustrate relationships among various types of metrics or indicators. For examples, see the three documents or Web pages listed immediately below, and two — BAUE04a and ANDR06 — reviewed above. If these pyramids illustrated a common conceptual framework, they could provide a useful way to explain the role of a generalized Apdex metric. In practice, although they share some similarities, their details all differ. I plan to compare and contrast some of these pyramids in a future post, to see just what we can say about where Apdex fits.

    KAIS04
    Background Paper: Could a Quality Index Help Us Navigate the Chasm?, Kaiser Permanente, 2004 [274Kb pdf]
    WILS09
    What is “Analysis?”, Tim Wilson, 2009. [Gilligan on Data, May 5th, 2009]
    AHO10
    The Distinction Between Business Intelligence And Corporate Performance Management, Mika Aho, Mini Conference on Scientific Publishing (MCSP), Tampere, Finland, 2010. [60Kb pdf]

    Conclusions

    I make no claim to be an expert in the fields of business intelligence (BI) and business performance management (BPM). Also, I collected this bibliography as a by-product of my focus on generalizing Apdex, not in an attempt to document the essence of BI and BPM. I’m sure I could have cited many other useful documents and Web sites, if I had found them while searching the Web over the last month. So if you have bookmarked useful material, please share a link and a brief review in a comment.

    Note: I have been extending my original list of 20 references, appending newer discoveries of relevant online resources as comments. Eventually I intend to consolidate all this material into a single annotated bibliography on the Apdex site, which can then be referenced within posts on the Apdex Exchange. Until then, I will continue to extend this post with comments, and simply repeat any relevant references within my posts.

    4 comments to Metrics and Performance Indicators: A Bibliography

    • Chris Loosley

      Web Analytics and KPI’s

      Here’s a useful blog post to add to the bibliography. Avinash Kaushik is a Web Analytics guru, author of two books on the subject, and the Analytics Evangelist for Google. His blog, Occam’s Razor, is a regular source of useful insight, presented simply and clearly. This recent post begins by explaining and defining the terms ‘Business Objective’, ‘Goal’, ‘Metric’, ‘Key Performance Indicator’, ‘Target’, ‘Dimension’, and ‘Segment’, and ends with a simple example of their use in a Web Analytics Measurement Framework.

      KAUS10
      Web Analytics 101: Definitions: Goals, Metrics, KPIs, Dimensions, Targets, Avinash Kaushik, 2010. [Occam’s Razor blog, April 19, 2010]
    • Chris Loosley

      Web Analytics and KPI’s

      Eric Peterson is another Web Analytics expert. He’s the author of three books: Web Site Measurement Hacks, Web Analytics Demystified, and The Big Book of Key Performance Indicators. I own all three; copies of the last two can now be downloaded free from Eric’s Web site, which is also called Web Analytics Demystified.

      In defining key performance indicators in The Big Book …, Eric writes that they “are always rates, ratios, averages or percentages; they are never raw numbers. Raw numbers … because they don’t provide context, are less powerful …”. This important observation is relevant to a discussion of the value of the Apdex metric.

      PETE04
      Web Analytics Demystified, Eric T. Peterson, 2004. [3.7Mb pdf]
      PETE06
      The Big Book of Key Performance Indicators, Eric T. Peterson, 2006. [1.1Mb pdf]
    • Chris Loosley

      Statistics and Scoring

      If you are managing Internet service levels, improper use of traditional averages and standard deviations can mislead you, even hurt you financially. Using empirical data from an Internet/WAN distributed Web response time measurement system, this paper explores the relative applicability and usefulness of the geometric mean and geometric standard deviation, and introduces the lognormal distribution for quantifying the response time measurements.

      These statistics are particularly useful in the areas of Web content performance comparison and SLA monitoring of service providers such as Content Providers, CDNs, ASPs, MSPs, and Web Hosting companies.

      CIEM01
      What Do You ’Mean’?: Revisiting Statistics for Web Response Time Measurements, David M. Ciemiewicz, CMG Intl. Conference 2001. [224Kb pdf]
    • Chris Loosley

      Executive Dashboards

      For an introduction to executive dashboard design, two online discussions begun in 2003 are still worth reading today. Alex Kirtland provides a useful primer on dashboard purposes, usage, design and implementation. On information design and visualization, he writes: “For an information architect … this is the most exciting challenge: Organizing data in a way that is meaningful for the user, as opposed to reflecting how the systems collect and manage data. It is the essential Tufte challenge: how to take massive amounts of data and clearly tell the story inherent within it.”

      He is, of course, referring to Edward Tufte, the renowned author and expert in the visual communication of information. Tufte’s site contains a long-running thread on executive dashboards. Spanning more than 6 years, this wide-ranging discussion is a nice complement to Kirtland’s more prescriptive content. Reader contributions cover dashboard motivations, information design, the politics of dashboard implementation, and more. There seems to be broad agreement that “Dashboard” is a lame metaphor, but also that we’re stuck with it anyway, because it’s simple and easily grasped.

      KIRT03
      Executive Dashboards, Alex Kirtland, 2003-2008. [Article and discussion thread on “Boxes and Arrows”]
      TUFT03
      Executive Dashboards, Edward Tufte and others, 2003-2009. [Discussion thread on “Ask E.T.”]
