From Davis Balestracci -- Statistical Stratification...of Sorts

Published: Mon, 09/08/14


Note to plain text readers:  To see the figure in the body of the newsletter, I suggest you read it by clicking on the "View as Web Page" link immediately below.

[~ 975 words: take 4-6 minutes to read over a break or lunch]

View as Web Page

SWAGs Remain Alive and Well


Hi, Folks,

I chatted about u-charts for rates last time, and today's newsletter was going to be about p-charts for percentage data.  These are the two major charts for dealing with count data, and both are very helpful for stratifying a stable section of process performance.  But something recently happened that saddens me and has become all too common in many of the organizations in which I have consulted.  It reminded me of the need to warn you about a very common approach used to (allegedly) stratify data -- finding the "bad" performers.  I have a wonderful data set of percentages on which I will demonstrate the proper analysis and interpretation via p-charts next time, but I am going to use it today to make a major point about something to be avoided at all costs.

I have been mentoring a very good data analyst for the past three years.  Despite the support of the medical director, it has been pretty much an all-out war with the C-suite executives to implement "data sanity" -- resistance, to put it mildly, has been fierce from the start.  I received a note from him last week:

"I'm sorry to report that it appears control charts [of key indicators] are nearly dead...As of last week, they have been pulled off all but one report...

"In other news [organization] has moved towards Lean Six Sigma.  The first Black Belt 'course' is being offered right now -- and I have been 'drafted' to teach the statistics portion... I have been working on my slides over the past couple of weeks -- and I must say that I don't understand where any of this is going to come in handy for Quality Directors...I spent half of an hour trying to find information on calculating the confidence interval for the correlation coefficient by hand.  It involves the inverse hyperbolic tangent function... I'm sure everyone will get that one, right?!?

"It all seems a little ridiculous to me."

And it's courses like these that lead to consequences and analyses like the one I'm about to describe -- techniques inappropriately used to convert a wild-ass guess (WAG) into a statistical wild-ass guess (SWAG).

I Can't Make This Stuff Up

Published rankings with feedback are very often used as a cost-cutting measure to identify and motivate "those bad workers." Some are even derived, er... uh... "statistically?"

In an effort to reduce unnecessarily expensive prescriptions, a pharmacy administrator developed a proposal to monitor and compare individual physicians' tendency to prescribe the most expensive drug within a class. Data were obtained for each of a peer group of 51 physicians -- the total number of prescriptions written and, of that number, how many were for the target drug.

Someone was kind enough to send me this proposal -- while begging not to be identified as the source.  I quote it verbatim.

Given the 51 physician results:

1. "Data will be tested for the normal distribution,"

2. "If distribution is normal -- Physicians whose prescribing deviates greater than one or two standard deviations from the mean are identified as outliers,"

3. "If distribution is not normal -- Examine distribution of data and establish an arbitrary cutoff point above which physicians should receive feedback (this cutoff point is subjective and variable based on the distribution of ratio data)."

For my own amusement, I tested the data for normality and it "passed" (p-value of 0.277).  Yes, I said "for my own amusement" because this test is moot and inappropriate for percentage data (the number of prescriptions in the denominator ranged from 30 to 217)...but the computer will do anything you want.
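Here, for what it's worth, is a small Python sketch of that step.  The 51 actual percentages aren't reproduced in this newsletter, so the numbers below are hypothetical stand-ins; the point is only what the computer will cheerfully do when asked.

    import numpy as np
    from scipy import stats

    # "Test the data for the normal distribution," as the proposal instructs.
    # Hypothetical stand-in percentages for the 51 physicians (real data not shown).
    rng = np.random.default_rng(0)
    pct = np.clip(rng.normal(loc=20, scale=10.7, size=51), 0, 100)

    stat, p_value = stats.shapiro(pct)      # one common normality test
    print(f"normality test p-value: {p_value:.3f}")
    # A p-value above 0.05 "passes" -- but the test is beside the point for
    # percentage data built on denominators ranging from 30 to 217 prescriptions.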

The scary issue here is the "analysis" proposed to follow, depending on whether the data are normal.  If data are normally distributed, doesn't that mean there are no outliers? But suppose outliers are present -- doesn't this mean they're atypical? In fact, wouldn't their presence tend to inflate the traditional calculation of standard deviation? But wait, the data passed the normality test... it's all so confusing!

Yet that doesn't seem to stop our quality police from lowering the "gotcha" threshold to two or even one standard deviation to find outliers (in my experience, a very common practice).

Returning to the protocol, even scarier is what's proposed if the distribution isn't normal: Establish an arbitrary cutoff point -- a WAG for what the administrator feels it "should" be.

I'll play his game:  Because the data pass the normality test, the graph below shows the suggested analysis with one, two and three standard deviation lines drawn in around the mean. (The standard deviation of the 51 numbers was 10.7.)



Get Out the Ouija Boards!


Depending on the analyst's mood and the standard deviation criterion subjectively selected, he or she could claim to statistically find one -- or 10 -- upper outliers (what about lower outliers?).  Even worse, he or she could just as easily have used the WAG approach, decided that 15 percent was what the standard "should" be, and given feedback to the 27 physicians above 15 percent.  Or maybe he or she could even set a "tougher" standard of 10 percent, in which case 35 physicians would receive the feedback -- which consisted of a wealth of educational material.  And then there is the tried-and-true, "Let's go after the top quartile (or 10%...or 15%...or 20%)."  When I ask a roomful of doctors what they do with such "helpful" feedback, there is raucous laughter and a collective pantomime of people throwing things into the garbage.
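To make the arbitrariness concrete, here is the same sketch carried one step further -- again with hypothetical stand-in percentages, since the real 51 values aren't shown here.  The only point is how wildly the number of physicians "flagged" swings with whichever cutoff the analyst happens to fancy.

    import numpy as np

    # How many physicians get "feedback" under each arbitrary cutoff?
    # Hypothetical stand-in percentages; the real 51 values are not published here.
    rng = np.random.default_rng(0)
    pct = np.clip(rng.normal(loc=20, scale=10.7, size=51), 0, 100)
    mean, sd = pct.mean(), pct.std(ddof=1)

    for k in (1, 2, 3):                     # the "statistical" SWAG cutoffs
        cutoff = mean + k * sd
        print(f"mean + {k} SD ({cutoff:4.1f}%): {(pct > cutoff).sum():2d} physicians flagged")

    for cutoff in (15, 10):                 # the plain WAG cutoffs
        print(f"fixed cutoff ({cutoff}%):      {(pct > cutoff).sum():2d} physicians flagged")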

What's not so funny -- this and similar SWAGs are fast becoming "simple...obvious...and wrong" techniques in the current pay-for-performance craze in healthcare.  Who knows?  Maybe some of these schemes will even involve the inverse hyperbolic tangent function, so my friend's training will not have gone to waste.

As my poor friend said, "It all seems a little ridiculous to me."

Until next time...

Kind regards,
Davis
=======================================================================
P.S. I PROMISE never to mention either normality or the inverse hyperbolic tangent function again.
=======================================================================

If you are interested in hearing about or applying the innovative ideas of my upcoming revised edition of Data Sanity, I can help you with that by...

* ...spicing up your professional or internal conferences as a plenary speaker

* ...sharpening your and your staff's skills with a retreat

* ...mentoring you to create awareness that you are surrounded by similar opportunities.  Solve these and watch the resulting breakthrough in your thinking and effectiveness...and respect for your role.

Please contact me to discuss these opportunities, to ask for the Preface and Introduction of the revised Data Sanity, or for just about any other reason!  I love corresponding with my readers and answering their questions. [ davis@davisdatasanity.com ]

=========================================================
Was this forwarded to you?  Would you like to sign up?
=========================================================
If so, please visit my web site -- www.davisdatasanity.com -- and fill out the box in the left margin on the home page, then click on the link in the confirmation e-mail you will immediately receive.

==========================================================
Want a concise summary of Data Sanity...in my own words?
==========================================================
Listen to my 10-minute podcast. Go to the bottom left of this page:
www.davisdatasanity.com .