Dec. 23
Type I and Type II errors (difference)
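Quick refresher: a Type I error is rejecting H0 although it is true (false positive), a Type II error is failing to reject H0 although the alternative is true (false negative). Here is a minimal simulation sketch in Python; the effect size of 0.5, the sample size of 30, and the significance level are made-up values purely for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, alpha, n_sim = 30, 0.05, 10_000

# Type I error: generate data under H0 (true mean = 0) and count how
# often the one-sample t-test falsely rejects H0.
false_positives = sum(
    stats.ttest_1samp(rng.normal(loc=0.0, scale=1.0, size=n), popmean=0.0).pvalue < alpha
    for _ in range(n_sim)
)

# Type II error: generate data under the alternative (true mean = 0.5)
# and count how often the test falsely fails to reject H0.
false_negatives = sum(
    stats.ttest_1samp(rng.normal(loc=0.5, scale=1.0, size=n), popmean=0.0).pvalue >= alpha
    for _ in range(n_sim)
)

print(f"Estimated Type I error rate:  {false_positives / n_sim:.3f}")  # close to alpha = 0.05
print(f"Estimated Type II error rate: {false_negatives / n_sim:.3f}")  # equals 1 - power
```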
Dec. 21
In case you learned that a 95% confidence interval (e.g., [0.3,0.6] for some true parameter) is interpreted as "with probability 0.95 the true parameter is between 0.3 and 0.6", then STOP IT! It's wrong!
The correct interpretation is:
If you could resample data from the true data generating process infinitely many times, then 95% of the resulting confidence intervals would contain the true (unknown) parameter (e.g. the mean).
Best visualization I know: https://rpsychologist.com/d3/ci/
The one single confidence interval [0.3,0.6] computed from your data either contains the true parameter or it does not; you never know which.
This makes it extremely difficult to properly interpret a given single (frequentist) confidence interval.
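To see the coverage interpretation in action yourself, here is a minimal simulation sketch in Python; the true mean of 0.45, the sample size, and the number of repetitions are arbitrary choices for illustration. It repeatedly draws samples from a known data generating process, computes a 95% t-interval each time, and counts how often the intervals cover the true mean.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_mean, n, n_sim = 0.45, 50, 10_000

covered = 0
for _ in range(n_sim):
    x = rng.normal(loc=true_mean, scale=1.0, size=n)
    # 95% t-based confidence interval for the mean of this one sample
    se = x.std(ddof=1) / np.sqrt(n)
    lo, hi = stats.t.interval(0.95, df=n - 1, loc=x.mean(), scale=se)
    covered += (lo <= true_mean <= hi)

# Across many repetitions roughly 95% of the intervals cover the true mean,
# but each single interval either covers it or it does not.
print(f"Coverage: {covered / n_sim:.3f}")
```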
Dec. 18
Falsely interpreting the p-value as the probability that the null hypothesis (H0) is true leads to the misconception that insignificant test results (large p-values) can be read as evidence that H0 is true.
Due to this and similar errors, the American Statistical Association (@AmstatNews) published this statement: https://amstat.org/asa/files/pdfs/p-valuestatement.pdf
This error appears even in highly ranked science journals, including top medical journals like the BMJ.
Therefore, D. G. Altman wrote his famous paper: Absence of evidence is not evidence of absence https://doi.org/10.1136/bmj.311.7003.485
But sometimes one really wants to show that there is no real difference between groups, e.g. that children are as infectious as adults.
How to do this? By using Equivalence Tests! Unfortunately, almost nobody seems to know this.
But now you know! https://en.wikipedia.org/wiki/Equivalence_test
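As a minimal sketch of how such a test looks in practice, here is the two one-sided tests (TOST) procedure via statsmodels; the two groups, the simulated data, and the equivalence margin of ±0.2 are made-up illustrations, not values from any real study:

```python
import numpy as np
from statsmodels.stats.weightstats import ttost_ind

rng = np.random.default_rng(1)
# Hypothetical measurements for two groups (purely illustrative data)
children = rng.normal(loc=1.00, scale=0.5, size=120)
adults = rng.normal(loc=1.02, scale=0.5, size=120)

# Equivalence margin: call the groups "equivalent" if the true mean
# difference lies within (-0.2, 0.2). The margin must be justified by
# subject-matter knowledge, never chosen after looking at the data.
p_tost, lower_test, upper_test = ttost_ind(children, adults, low=-0.2, upp=0.2)

# A small TOST p-value is evidence that the difference lies inside the
# equivalence margin, i.e. evidence of (practical) absence of a difference.
print(f"TOST p-value: {p_tost:.4f}")
```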
Chapter 2
Conditionally on X
Chapter 3
Coefficient of determination R²
Chapter 4
H0 vs. HA