"J. Clarke" wrote in news: snipped-for-privacy@hamster.jcbsbsdomain.local:
I'm as suspicious as any person when it comes to statistics. Moreover, I hate statistics, because as you imply (I think) with statistics you can "prove" almost anything. Nevertheless, there are ways to make statistics at least more scientific. It starts with a "null" hypothesis, namely that there is no difference between treatments A and B, or populations, or whatever collections of data. Next you need to determine whether the variations between individual measurements of A and B follow a Gaussian, or normal, distribution. Then you need to know whether you have enough data points, obtained in a non-biased manner. Then, after you let the statistics program loose on the data, it will report the significance of the deviation from the null hypothesis as a p-value: the probability of seeing a difference at least that large purely by chance if the null hypothesis were true. p=0.5 means a difference that size would show up by chance half the time, so it tells you nothing. p=0.05 means there is only a 5% chance the difference arose by chance alone, p=0.005 only a 0.5% chance, and so on. People who gamble go for
1:1 million chances to win the jackpot. Sometimes a small chance is still very significant, and judged very important. Vioxx was taken off the market as a painkiller because when a group of patients was (supposedly totally unbiased) split in two, with half the patients receiving placebo and the other half Vioxx, Vioxx looked really bad: the study reported 29 deaths (2.7 percent) among 1,067 rofecoxib patients and 17 deaths (1.6 percent) among 1,075 placebo patients. Almost double the death rate, but still only about 27 in 1,000 versus 16 in 1,000. So what are you going to do if aspirin doesn't help your arthritis pain, and Vioxx did?
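As a sanity check on those numbers, here's a minimal Python sketch of a two-proportion z-test on the quoted death counts. This is my own back-of-the-envelope calculation under a normal approximation, not the trial's actual analysis (the trial's endpoints and methods were more involved):

```python
import math

# Death counts quoted above: deaths / patients in each arm
deaths_vioxx, n_vioxx = 29, 1067
deaths_placebo, n_placebo = 17, 1075

p1 = deaths_vioxx / n_vioxx        # ~2.7% death rate on rofecoxib
p2 = deaths_placebo / n_placebo    # ~1.6% death rate on placebo

# Pooled proportion under the null hypothesis (no real difference)
pooled = (deaths_vioxx + deaths_placebo) / (n_vioxx + n_placebo)
se = math.sqrt(pooled * (1 - pooled) * (1 / n_vioxx + 1 / n_placebo))

z = (p1 - p2) / se                     # standardized difference
p_value = math.erfc(z / math.sqrt(2))  # two-sided p from the normal tail

print(f"z = {z:.2f}, two-sided p = {p_value:.3f}")
```

Run as-is, it gives z of roughly 1.8 and a two-sided p of about 0.07, which illustrates the point nicely: a nearly doubled death rate on counts this small still hovers near the conventional 0.05 cutoff rather than clearing it decisively.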