
Re: [LUG] OT - placebos

 

On Sun, 20 Dec 2009 15:10:49 +0000
bas <baslake@xxxxxxxxxxxxxxxx> wrote:

> > If 'better' requires a human assessment, it suffers from bias. If 
> > "better" can be reduced to merely the absolute measurement against
> > a universally accepted scale, then the comparison can be objective. 
> > Therefore, 1 decibel can be objectively compared with 4 decibels.
> > "Too loud" is entirely subjective and cannot be assessed without
> > bias. So you could compare the results objectively by measuring
> > signal:noise ratios or some other absolute measurement but whether
> > that measurement means that the equipment is "better" is not
> > necessarily objective unless everyone agrees on what type of
> > measurement result IS better. 
> surely the reference point would be the object that is being replaced.

No, the reference point depends on the parameter being measured.

You have to be able to calculate absolute parameters for the two
objects *AND* agree ranges for those parameters that are universally
accepted as comparable. 

Merely comparing two values without reference to an accepted range is
meaningless. If both values are within "normal limits", any difference
is without merit. The reference point is therefore not a single point
but a range, carrying an error margin of its own that depends on how
it was measured.

DeviceA could give 24 decibels and DeviceB could give 25 decibels but
if the measurement itself has an error margin of +/- 2 decibels, the
comparison is meaningless. The reference has to be a range, beyond
which you can statistically show that the difference could not have
arisen from pure chance. Hopefully you'd use a measurement device that
is more accurate than +/- 2 decibels but the point is that a range must
still exist, even if it is +/- 0.05 decibels.
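That overlap test can be sketched in a few lines of Python (illustrative only; treating each reading as value +/- margin and using interval overlap as the "could be pure chance" condition is my simplification):

```python
def intervals_overlap(value_a, value_b, margin):
    """True when the error bars [v - margin, v + margin] overlap,
    i.e. the measured difference could be measurement error alone."""
    return abs(value_a - value_b) <= 2 * margin

# DeviceA reads 24 dB, DeviceB reads 25 dB
print(intervals_overlap(24.0, 25.0, 2.0))   # True  -> the comparison is meaningless
print(intervals_overlap(24.0, 25.0, 0.05))  # False -> the difference exceeds the error range
```

With a +/- 2 dB instrument the two readings are indistinguishable; with a +/- 0.05 dB instrument the same 1 dB gap is real.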

That also then mandates repeated measurements in multiple circumstances.
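Why repetition helps can be shown with a toy simulation (my own sketch, assuming the error is uniformly distributed within the margin): the mean of n readings narrows roughly as 1/sqrt(n).

```python
import random
import statistics

random.seed(0)  # reproducible demo

def noisy_reading(true_db=24.0, margin=2.0):
    # one reading of a 24 dB source with uniform error inside +/- margin
    return true_db + random.uniform(-margin, margin)

# average of n repeated readings, for increasing n
means = {n: statistics.mean(noisy_reading() for _ in range(n))
         for n in (1, 10, 100, 1000)}
for n, m in means.items():
    print(f"{n:5d} readings -> mean {m:.3f} dB")
```

A single reading can be off by nearly the full 2 dB; the average of 1,000 readings sits very close to the true 24 dB.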

The most common problem with "scientific trials" in junk science is
insufficient sample size. The second most common is the lack of a true
control sample, which again introduces bias. Even "respectable"
scientific data can be flawed if the comparative ranges are not
universally agreed before the trial is designed. This is the
"end-point" problem that makes the evaluation of pharmaceuticals
against their cost so contentious.

Patient groups value the extension of a single human life more highly
than the representatives of those who actually pay the bill. Equally,
patient groups put a higher emphasis on the one patient who does
improve, while the NHS has to put the higher emphasis on the many
patients who do NOT improve despite the NHS paying for treatment.
Because the comparative range is not agreed, the limits of the trial
cannot be assessed without bias. Bias leads to contention and
controversy, and even with the best possible double-blind randomised
crossover trials involving thousands of patients, some outcomes simply
cannot be measured or assessed without bias. There won't ever be a
universally acceptable method for quantifying the monetary value of a
few months of extra life.

A good trial can show that DrugA provides X extra months per 1,000
patients versus control, but applying that (objective) data against the
cost of the treatment cannot be done without bias. If DrugA could
*cure* the patient, the analysis is different but still not necessarily
objective. For example, if DrugA costs £5m per patient to achieve a
cure (or costs £5,000 per patient but only cures 1 in 1,000 patients,
which amounts to the same thing), the value-for-money assessment is
still going to be controversial.
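The arithmetic behind that equivalence is trivial but worth spelling out (hypothetical figures, as in the example above):

```python
def cost_per_cure(cost_per_patient, cure_rate):
    """Expected spend to achieve one cured patient."""
    return cost_per_patient / cure_rate

# the two hypothetical scenarios come out the same per cure
a = cost_per_cure(5_000_000, 1.0)   # £5m drug that cures every patient
b = cost_per_cure(5_000, 1 / 1000)  # £5,000 drug that cures 1 in 1,000
print(a, b)
```

Both scenarios cost about £5m per cure, which is exactly why the "cheap but rarely effective" drug is no easier to fund.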

-- 


Neil Williams
=============
http://www.data-freedom.org/
http://www.linux.codehelp.co.uk/
http://e-mail.is-not-s.ms/


-- 
The Mailing List for the Devon & Cornwall LUG
http://mailman.dclug.org.uk/listinfo/list
FAQ: http://www.dcglug.org.uk/linux_adm/list-faq.html