Each theory starts as a guess, a hypothesis, e.g. blood sugar levels above X over Y time cause eye blood vessel problems.
In reality it is up to scientists not only to show that the hypothesis may be valid but to try every way possible to see whether it can be invalidated.
A theory stands over time for as long as it withstands all attempts to disprove it.
The theory/hypothesis may need to be altered as more information or variability is found, as other factors may influence the result, or be abandoned altogether if an alternative hypothesis seems to better fit the facts as these become better known.
Statistics, though, seems to me to be a mathematical attempt to rationalise things, to make sense of the results of an experiment and determine whether that ineffable thing called chance is playing a part in them. There is a saying: lies, **** lies and statistics.
There used to be a term bandied about called Evidence Based Medicine, EBM for short. EBM was supposed to epitomise the very best current and evolving theories about medical conditions etc., and statistics were considered a solid part of that functioning framework. Proof that this worked and that over there did not.
Now there are multiple examples of research findings shown to be biased through things like conflicts of interest, failure to reproduce the same results with identical experiments, use of flawed statistics, poor research design and flawed data collection techniques.
If you can 'bend' the statistical rules you might be able to obtain the findings and conclusions you wish for, rather than the results and findings as they fall out from proper research design and correct application of statistics.
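As a toy illustration of how chance alone can hand you a "finding" if you go looking for one (all numbers here are invented and have nothing to do with any real study):

```python
import random

random.seed(0)

def fake_study(n=30):
    """Compare two groups drawn from the SAME population, so any
    'difference' between them is pure chance, not a real effect."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    return abs(sum(a) / n - sum(b) / n)

# Run 100 such "studies" and count how many show a sizeable-looking
# difference anyway. A researcher who only reports the hits is
# 'bending' the rules.
hits = sum(fake_study() > 0.5 for _ in range(100))
print(hits, "out of 100 null studies looked like a finding")
```

The point of the sketch: even when there is nothing to find, some fraction of comparisons will look impressive by luck, which is why selective reporting and multiple testing can manufacture "results".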
For example, from one of Zoe Harcombe's blogs: research was suggesting that red meat consumption was associated with cancer. What she knew was that processed meat consumption had been found to be associated with cancer. Note associated, but not proven as a cause. But what did the researchers do? They lumped red meat AND processed meat together and assumed that the increased rate of cancer was due to the red meat. How such a finding was allowed past the editor's desk is unbelievable.
In another study the researchers found that women who consumed over a certain amount of gluten in their diet during pregnancy were twice as likely to have children who developed T1D in the first ten years of life. This was reviewed and suggested as a pretty strong case, though not an absolute one, for making some intervention. A 2x association finding is mentioned by @Mbaker above.
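To make the "twice as likely" figure concrete, here is a minimal sketch of how a relative risk of about 2 falls out of a 2x2 table. The counts below are made up purely for illustration; they are not from the gluten study:

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk of the outcome in the exposed group divided by the risk
    in the unexposed group."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# Hypothetical counts: 20 of 1000 high-gluten pregnancies vs
# 10 of 1000 low-gluten pregnancies with the outcome.
rr = relative_risk(20, 1000, 10, 1000)
print(rr)  # 2.0 -- "twice as likely", yet still only an association
```

Note that a relative risk of 2 says nothing by itself about cause, about the absolute size of the risk (here 2% vs 1%), or about confounding.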
But we have to remember that vested interests may hold a share in the journal in which the paper is published, and so shoddy research might get through.
Why should we then believe anything we read? Or what HCPs or NICE tell us?
The best answer seems to be a rating of the evidence. How well does a research paper meet the criteria for being exemplary in its design etc.?
NICE does this with its work to try to ensure the best studies are given precedence and the not so good given less or no credence.
That should give us the best near proofs possible - provided the vetting process is sound, the conflict of interest exclusion practices robust, and the resultant guidelines flexible and open to change. The translation from research to application to patients is also deplorably long - is that because the 'near proof' is not sound, or because services and clinicians are slow on the uptake, or economic factors intervene?
Good research is valuable, but as others have said, its findings need to be applied carefully, with all proper precautions, unlike the medical device industry in the USA (see the film 'The Bleeding Edge', a very powerful one).
Two final points:
How often does some treatment, test etc. work, but not for the reason given, or not for the correct reason? Sometimes I have heard of situations where something is found to work when it was not expected to. There is no reason or clear idea why it works, so does that mean the 'magic cure' is not valid? It may only be later on that someone is able to connect the dots and provide the answer. Discoveries have been made in the past this way.
What if someone in authority decides not to believe new research findings despite their being rigorously tested and statistically valid, or refuses to weigh up work that throws a past theory into great doubt, and instead just seeks to confirm previous findings based on shonky statistics and findings?
That could place a lot of patients in continuing jeopardy.
This is very much the debate about cholesterol and statins, where there is no association or definite proof of cause and effect. Yet doctors keep prescribing statins. Go figure!