Stats Made Easy

Practical Tools for Effective Experimentation

Sunday, December 30, 2007

Medical writer's '08 resolution: Do not report results from poorly designed experiments

Earlier this month,* Newsweek’s “Health Matters” columnist Jerry More resolved that
“I will not report on any amazing new treatments for anything, unless they were tested in large, randomized, placebo-controlled, double-blind clinical trials published in high-quality peer-reviewed medical journals.”

He was “shamed” into this by biostatistician R. Barker Bausell’s book Snake Oil Science. Drawing in part on observations of his own loved ones, Bausell explains how immensely powerful placebo effects can make almost any medical treatment appear to work, even when the benefit stems from faith alone. Those who feel better, whether the relief comes from a placebo or from a physical effect verified by double-blind testing, usually do not care how it comes to pass. However, Bausell makes the case that placebo effects tend to be mild and temporary.

In an ironic twist on Jerry More’s resolution, Bausell advises those afflicted with chronic health problems to seek a promising remedy from an enthusiastic promoter and take the plunge with no reservations, thus maximizing any placebo effect!

Obviously one must be extremely cautious about side effects from any such purported therapy or substance. For example, some years ago my wife and I hosted a weight-obsessed 16-year-old exchange student from Mexico who harbored strong faith in alternative medicines (an oxymoron?) for curbing appetite. One day while shopping at a local mall, she slipped away from me and made a beeline for a health-food store. I found her at the counter with half a dozen bottles of very potent natural (?) substances that the teenage clerk had recommended to her. I hustled us out of there without making that purchase.

Blind faith just does not work well for me. I like the double-blind approach much better.

* A Big Dose of Skepticism

Monday, December 24, 2007

Tidings and tools for Yule

I get asked all the time why experimenters should license our dedicated software for design of experiments (DOE) when their company already provides a general statistics program with DOE capability. My answer is that our package makes it incredibly easy to design experiments, analyze (and graph!) the results, and find the optimal system setup based on the resulting predictive models. In other words, it is just the right tool for the job. Similarly, although I can do wonders with a channel-lock wrench and a couple of screwdrivers (they do double duty for pounding things), a box full of specialized tools makes the work go far more smoothly.

Likewise, when it comes to shoveling snow off my driveway, an array of five specialized shovels works really well for me. The biggest one scoops large amounts very quickly and slides them over the berms that build up as the winter progresses. A low-energy twist dumps the load in a satisfying puff of powder. Next I scrape the residue with my heavy plow shovel. An ice chopper takes up the tire-compressed remainders, and a light aluminum shovel provides some cleanup around the edges. The worst part comes after the city plow clears our street. Then I get my scoop shovel and dig out – taking it slow, in layers. My goal is to keep a pace of exercise that does not exceed my daily cardiovascular workout on an elliptical machine. I love the snow and enjoy our Minnesota winters!

PS. Happy tidings to all of you this holiday season! By the way, as I wrote this I happened to be watching The History Channel show “It’s a Wonderful Time to be Weird” (2005), which featured ‘Santa Math’ by statistician William Briggs. The TV hosts acted very goofy about all the numbers and equations, but I gathered that Briggs assumed some millions of children believe in Santa, minus those who must be subtracted for being on the naughty list (not nice). After a lot of calculations, he figured that Santa’s sleigh would have to move at such a speed that it would vaporize all the homes – not good. However, it turns out that by applying chaos theory in his gift-momentum and gift-probability equations (energized by Santa’s secret force), Briggs shows that the requested gifts do get delivered every Christmas season. Cheers!

Sunday, December 16, 2007

Sports ‘randomination’?

In a recent guest post to the Freakonomics blog, Yale economist Ian Ayres suggests that sports teams run randomized experiments to improve their winning ways. He solicited feedback from anyone who has done such an experiment, but so far no one has come forward with an affirmative post.

It turns out that I performed such an experiment back in my days as a slow-pitch softball player. At that time my position was ‘rover’ – a tenth man who augmented the three outfielders normally fielded in baseball. Depending on my whim, I would play ‘long’ – in line with the other outfielders – or ‘short’ – nearer the infield, in a gap where I guessed a batter might want to drop a hit. It occurred to me that by randomly positioning myself inning by inning from one game to the next and measuring the opposition’s success, I might develop statistics showing whether short or long was the better general practice. Our team was originally sponsored by the General Mills Chemical Technical Center, so my mates met this proposal with surprising enthusiasm. After all, our prospects for winning in our Class D (lowest level) league were never very good.

As Yogi Berra said, “We were overwhelming underdogs.”

One thing we could count on was that during any given inning the opposition would be sure to get some hits, if not runs. However, it seemed likely that while playing short might cut off singles, it would lead to more doubles and triples from batters knocking the softball over my head. Therefore my teammates and I agreed that total bases would be a good measure of success: we counted a single as one, a double as two, and so forth. (If you are not familiar with the game of baseball, see these simplified baseball rules from Wikipedia.) Since opponents varied in their quality of play, I laid out a randomized block experiment game by game (results in the first graph – the points labeled "2" represent two innings with the same total bases).
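For readers who would like to set up something similar, here is a minimal sketch in Python of how such a randomized block layout might be generated. The function name, number of games, and innings per game are my illustrative assumptions, not the actual plan I scratched out back then.

```python
import random

# Sketch of a randomized block layout for the rover experiment:
# each game is a block, and within each block the two treatments
# ("short" and "long") are randomly assigned to innings.
def randomized_block_layout(n_games=8, innings_per_game=7, seed=2007):
    rng = random.Random(seed)
    plan = {}
    for game in range(1, n_games + 1):
        # Balance the two positions within the block, then shuffle the order.
        innings = ["short", "long"] * (innings_per_game // 2)
        if innings_per_game % 2:
            innings.append(rng.choice(["short", "long"]))
        rng.shuffle(innings)
        plan[game] = innings
    return plan

if __name__ == "__main__":
    for game, innings in randomized_block_layout().items():
        print(f"Game {game}: {innings}")
```

Blocking by game keeps differences in opponent quality from muddying the short-versus-long comparison, while the shuffle within each game guards against lurking time effects such as tiring pitchers or fading daylight.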

As the experiment proceeded I assessed the results after each game to see if the cumulative data produced a significant outcome. Patience proved to be the key. During one particularly bad inning – 16 total bases by the opposition – my fellow outfielder screamed at me to abandon my prescribed position and go to the opposite choice. “We are sure to lose,” he yelled – ready to knock sense into my statistically addled brain. However, my teammates stepped up to protect me and my experiment. “Yes, we may lose this particular game,” they said, “but from what we learn our team will win more games over the course of this season and seasons to come.” Indeed we did: With the knowledge gained from this experiment and other strategic moves, our team of techies went on to win our Class D league the following season. True, we did get decimated in the first round of the State Tourney, but at least we got there!

PS. As shown on the second graph, positioning myself short in the outfield proved to be significantly better.
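For anyone curious how blocked data like this could be crunched nowadays, here is a hedged sketch of a randomized block analysis in Python using statsmodels, with the opposition’s total bases as the response, my position as the treatment, and the game as the block. The numbers in the table are made-up placeholders for illustration only, not the actual results plotted in the graphs.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Placeholder data for illustration only -- NOT the real softball results.
# Each row is one inning: the game serves as the block, the rover position
# as the treatment, and the opposition's total bases as the response.
data = pd.DataFrame({
    "game":        [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3],
    "position":    ["short", "long"] * 6,
    "total_bases": [2, 5, 1, 4, 3, 6, 0, 3, 2, 4, 1, 5],
})

# Randomized block analysis: model the treatment (position) plus the block (game),
# then test whether position matters after adjusting for game-to-game differences.
model = smf.ols("total_bases ~ C(position) + C(game)", data=data).fit()
print(anova_lm(model, typ=2))
```

The F-test on the position term answers the question the experiment posed; the game term simply soaks up the variation between weak and strong opponents.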

Saturday, December 08, 2007

Is anyone out there, or is it just bla, bla blog?

San Francisco-based blog search engine Technorati recently reported* these stats about the blogging phenomenon:
-- Over 100 million blogs up on the Internet
-- One for every 23 people with access
-- Over 99 percent of blogs get zero (0) hits per year!

I asked my blogmeister, son Hank, to help me track how many people read this StatsMadeEasy blog. He set us up on a free web tracker called STATCOUNTER. It produced the bar graph shown here, which exhibits a characteristic spike in hits every month with the broadcast of the DOE FAQ Alert – a monthly ezine that I publish for aficionados of design of experiments (DOE), providing answers to frequently asked questions (FAQs) on the design and analysis of experiments. Click this link to subscribe.

Thank you for paying attention to this blog. Do not be shy about posting comments. This fall I attended a conference of industrial statisticians, one of whom went out of his way to say “Hi” and tell me that the StatsMadeEasy blog provides a bright break from tedious times in his work week. That’s my mission!

*(Source: Patrick T. Reardon, Chicago Tribune, “Welcome to obscurity: Blogs and the real world.”)

Saturday, December 01, 2007

Extrapolating beyond the point where you have a leg to stand on

In the November issue of The American Statistician, the Editor published this comment by David J. Finney, Professor Emeritus of Statistics at the University of Edinburgh: “On a day in 1919 or 1920…I murmured silently to myself: ‘David, you are becoming a clever boy…you have learnt to stand on one leg. What next?’…I showed creditable numerical ability in saying to myself ‘2-1-0.’ I willed myself to make…an effort to lift the foot. I rapidly became aware of a sore bottom…I never tried again but memory of the pain has given me a prejudice against any form of extrapolation from particular to general.”

In 1798, Thomas Malthus created one of the most controversial extrapolations in history in An Essay on the Principle of Population. Based on the sketchy statistics of the time, he assumed that population grows geometrically (exponentially), whereas food supplies increase at only an arithmetic rate (linearly). No wonder he became known as “Gloomy” Malthus! Fortunately, thanks to the development of new agricultural regions and improvements in productivity from these lands, the mass misery of his projections has not happened...yet.
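In modern notation (my paraphrase, not Malthus’s own symbols), his two assumptions amount to

\[
P(t) = P_0\, r^{\,t} \quad (r > 1), \qquad F(t) = F_0 + c\, t \quad (c > 0),
\]

so the food available per person, \(F(t)/P(t)\), dwindles toward zero no matter how generous the constants – hence the gloom.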

In 1972, I read The Limits to Growth, a book from the Club of Rome think tank. It presented a series of Malthusian projections with consumption increasing exponentially relative to linear expansion of resources. I found it terribly depressing. It seemed certain to me from all the graphs that mankind would not last beyond 2000. As subsequent developments averted what seemed like sure disaster, I became ever more wary of extrapolations, especially ones that predict doom and gloom.

“Human history becomes more and more a race between education and catastrophe.”
-- H. G. Wells

On the other hand, I’ve seen forecasts fizzle and be forgotten, only to come true much later. For example, in the spring of 1989 I co-taught a DOE class in the San Francisco Bay Area. One night my top-floor room shook so hard that it woke me up – an earthquake that registered a bit above 5 on the Richter scale. The next morning another earthquake rocked our training room, which was built on stilts just offshore on the Bay; it measured near 6. That led to a forecast of a 20 percent probability of a really major quake by week’s end. It did not materialize. However, in October, San Francisco suffered the devastating Loma Prieta earthquake – nearly 7 on the Richter scale.

So, whereas little Finney took a fall from misguided extrapolation, Bay Area residents found themselves floored by what turned out to be a prophetically forecast temblor. I suppose one way of dealing with scenarios for the future is to expect, and plan for, the worst but hope for the best – that is, hedge your bets. For example, try to keep two feet on the ground as often as possible, keep a hold on the bannister, and look before you leap.