
https://fivethirtyeight.com/features/statisticians-found-one-thing-they-can-agree-on-its-time-to-stop-misusing-p-values/

P-values are disproved. Quantum biology teaches us that every human being is a random sample, every microorganism is a random sample, even every quantum energy state is a random sample.

There is no direct link between genotype and phenotype: look up the dark matter of our genome, or, for instance, Sandwalk: Basic Concepts: The Central Dogma of Molecular Biology.

Discussion: (Excerpt)

Quoted from a Hacker News (news.ycombinator.com) discussion:

The article submitted here leads to the American Statistical Association's statement on the meaning of p-values,[1] the first such methodological statement ever formally issued by the association. It's free to read and download. The statement boils down to the following main points, with further explanation in the text of the statement.

"What is a p-value?

"Informally, a p-value is the probability under a specified statistical model that a statistical summary of the data (for example, the sample mean difference between two compared groups) would be equal to or more extreme than its observed value.

"Principles

"1. P-values can indicate how incompatible the data are with a specified statistical model.

"2. P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.

"3. Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.

"4. Proper inference requires full reporting and transparency.

"5. A p-value, or statistical significance, does not measure the size of an effect or the importance of a result.

"6. By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis."

[1] "The ASA's statement on p-values: context, process, and purpose"

http://amstat.tandfonline.com/doi/abs/10.1080/00031305.2016.
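The informal definition quoted in the statement can be made concrete with a small simulation. The sketch below uses a permutation test: it estimates the probability, under the null model that group labels are exchangeable, of a mean difference at least as extreme as the observed one. This is my own illustration (the function name and sample data are invented), not code from the statement.

```python
import random

def permutation_p_value(group_a, group_b, n_permutations=10_000, seed=0):
    """Estimate a two-sided p-value for the difference in group means.

    The p-value is the fraction of label-shuffled datasets whose mean
    difference is at least as extreme as the one actually observed.
    """
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

# Two small samples from clearly different distributions (made-up data):
p = permutation_p_value([2.1, 2.5, 2.3, 2.8, 2.6], [1.1, 1.4, 1.0, 1.3, 1.2])
```

Note that even here the p-value only says how incompatible the data are with the exchangeability model (principle 1); it says nothing about effect size or about the probability that the groups "really" differ.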

thearn4 104 days ago

My feeling after my first undergraduate course in statistics was essentially that we stated these principles, then spent the remaining weeks invalidating them without offering any real alternatives. My professor may have been more careful than I remember, but if so, the subtlety was lost on me at the time.

The testing of statistical hypotheses always seemed like an odd area of the mathematical sciences to me, even after later taking a graduate mathematical statistics sequence. It felt like an academic squabble between giants in the field of frequentist inference (Fisher vs. Neyman and Pearson) that ended suddenly without resolution, after which the scientific community decided to sloppily merge the two positions for the purposes of publication and forge onward.

wfunction 104 days ago

> 2. P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.

Sure, but is there not a significant correlation between the two in practice? Or would you trust something that gives a 1% p-value as much as something that gives a 99% p-value?

(Yes, I realize it's easy to construct counterexamples, which is why I asked "in practice".)
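The "in practice" question can be explored with a toy simulation: mix studies of true effects with studies of true nulls, and ask what fraction of p < 0.05 results reflect a real effect. The sketch below is my own (the function name, the base rate, and the use of a two-sample z-test with known sigma are all illustrative assumptions); it shows that small p-values do correlate with truth, but how strongly depends on the base rate of real effects among hypotheses tested.

```python
import math
import random

def z_pvalue(a, b, sigma=1.0):
    """Two-sided p-value for a difference in means, known sigma."""
    n = len(a)
    z = (sum(a) / n - sum(b) / n) / (sigma * math.sqrt(2 / n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def ppv_of_significance(base_rate=0.1, effect=0.8, n=30,
                        n_studies=4000, seed=2):
    """Among results with p < 0.05, what fraction reflect a real effect?

    base_rate is the prior fraction of tested hypotheses that are true;
    the answer depends heavily on it, which is why a p-value alone is
    not the probability that the hypothesis is true (principle 2).
    """
    rng = random.Random(seed)
    sig_real, sig_total = 0, 0
    for _ in range(n_studies):
        real = rng.random() < base_rate
        shift = effect if real else 0.0
        a = [rng.gauss(shift, 1) for _ in range(n)]
        b = [rng.gauss(0, 1) for _ in range(n)]
        if z_pvalue(a, b) < 0.05:
            sig_total += 1
            sig_real += int(real)
    return sig_real / sig_total

ppv = ppv_of_significance()
```

With these made-up settings, significance raises the probability of a real effect well above the 10% base rate, so the correlation exists; but change the base rate or power and the same p < 0.05 can mean something quite different.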

wdewind 104 days ago

> "3. Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.

There is no lower threshold at which the data becomes non-predictive?
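One way to see what a threshold does and does not buy you: simulate studies where the null is exactly true. A fraction of roughly alpha of them clears any threshold alpha, so lowering the threshold shrinks the false-positive rate but never makes a single result conclusive on its own. This sketch is my own (two-sample z-test with known sigma is an assumed model, not anything from the thread):

```python
import math
import random

def z_pvalue(a, b, sigma=1.0):
    """Two-sided p-value for a difference in means, known sigma."""
    n = len(a)
    z = (sum(a) / n - sum(b) / n) / (sigma * math.sqrt(2 / n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def null_pvalues(n_studies=2000, n=30, seed=1):
    """p-values from studies where the null is exactly true (no effect)."""
    rng = random.Random(seed)
    pvals = []
    for _ in range(n_studies):
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(0, 1) for _ in range(n)]
        pvals.append(z_pvalue(a, b))
    return pvals

pvals = null_pvalues()
# Under the null, p-values are uniform, so about a fraction alpha of
# studies clears any threshold alpha purely by chance:
below_05 = sum(p < 0.05 for p in pvals) / len(pvals)
below_005 = sum(p < 0.005 for p in pvals) / len(pvals)
```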

carbocation 104 days ago

More likely they mean the opposite: that a statistically significant p-value (by whatever threshold you decide to use) should not, by itself, be used to drive policy decisions. Internally, the effect size still matters. Externally, there are numerous other factors that should drive decision-making.
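The point about effect size can be put numerically: with a large enough sample, a negligible effect produces a tiny p-value, while a large effect in a tiny study can fail to reach significance. A sketch under the assumption of a two-sided z-test with known sigma (the function name and numbers are mine, for illustration only):

```python
import math

def z_pvalue_from_stats(mean_diff, sigma, n):
    """Two-sided p-value for a mean difference, known sigma, n per group."""
    z = mean_diff / (sigma * math.sqrt(2 / n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# A trivially small effect (0.01 sd) is "highly significant" given enough data:
small_effect_big_n = z_pvalue_from_stats(0.01, 1.0, 1_000_000)

# A large effect (1 sd) is "not significant" in a tiny study:
big_effect_small_n = z_pvalue_from_stats(1.0, 1.0, 3)
```

This is principle 5 in miniature: the p-value reflects sample size as much as it reflects the size or importance of the effect.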

Further reading:

http://fivethirtyeight.com/features/not-even-scientists-can-easily-explain-p-values/