International Journal of Data Science and Big Data Analytics
Volume 1, Issue 3, November 2021
Research Paper | Open Access
The Uncertainty of the Statistical Data
Andrea Berdondini1* |
|
1Independent Researcher, Ravenna, Italy. E-mail: andrea.berdondini@libero.it
*Corresponding Author
Int.J.Data.Sci. & Big Data Anal. 1(3) (2021) 22-26, DOI: https://doi.org/10.51483/IJDSBDA.1.3.2021.22-26
Received: 07/08/2021 | Accepted: 25/10/2021 | Published: 05/11/2021
Any result can be generated randomly, and any random result is useless. Traditional methods define uncertainty as a measure of dispersion around the true value and rest on the hypothesis that any divergence from uniformity is the result of a deterministic event. The problem with this approach is that non-uniform distributions can also be generated randomly, and the probability of this event rises as the number of hypotheses tested increases. Consequently, there is a risk of treating a random, and therefore non-repeatable, hypothesis as deterministic. Indeed, we believe this way of proceeding is the cause of the high number of non-reproducible results. Therefore, we argue that the probability of obtaining an equal or better result randomly is the true uncertainty of the statistical data, because it represents the probability that the data are useful; the validity of any other analysis therefore depends on this parameter.
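The measure proposed in the abstract can be estimated by Monte Carlo simulation: generate many purely random versions of the data and count how often they produce a result at least as good as the observed one. The sketch below is illustrative only; the choice of statistic (absolute correlation), the shuffling scheme, and all function names are assumptions for the example, not the author's prescribed procedure.

```python
import numpy as np

def random_result_probability(x, y, observed_stat, n_trials=10_000, seed=0):
    """Estimate the probability that a purely random pairing of the data
    yields a statistic at least as good as the observed one (a hypothetical
    implementation of the uncertainty measure described in the abstract)."""
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_trials):
        shuffled = rng.permutation(y)               # destroy any real relation
        stat = abs(np.corrcoef(x, shuffled)[0, 1])  # same statistic on random data
        if stat >= observed_stat:
            count += 1
    return count / n_trials

# Usage: the closer this probability is to zero, the less likely the observed
# result is a random artifact, and the more useful the statistical data.
x = np.linspace(0, 1, 50)
y = 2 * x + np.random.default_rng(1).normal(0, 0.5, 50)
observed = abs(np.corrcoef(x, y)[0, 1])
print(random_result_probability(x, y, observed))
```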
Keywords: Uncertainty, Statistics, Statistical inference