Randomness
In common parlance, randomness is the apparent or actual lack of pattern or predictability in events.[1][2] A random sequence of events, symbols or steps often has no order and does not follow an intelligible pattern or combination. Individual random events are, by definition, unpredictable, but if the probability distribution is known, the frequency of different outcomes over repeated events (or "trials") is predictable.[3][note 1] For example, when throwing two dice, the outcome of any particular roll is unpredictable, but a sum of 7 will tend to occur twice as often as 4. In this view, randomness is not haphazardness; it is a measure of uncertainty of an outcome. Randomness applies to concepts of chance, probability, and information entropy.
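The two-dice claim above can be checked with a short simulation (an illustrative sketch; the variable names are ours, not part of the article):

```python
import random

random.seed(1)
trials = 100_000
counts = {4: 0, 7: 0}
for _ in range(trials):
    # Sum of two fair six-sided dice.
    s = random.randint(1, 6) + random.randint(1, 6)
    if s in counts:
        counts[s] += 1

# P(sum=7) = 6/36 and P(sum=4) = 3/36, so 7 should occur about twice as often.
ratio = counts[7] / counts[4]
```

Although no individual roll is predictable, the ratio of frequencies over many trials settles near 2, as the probability distribution dictates.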
The fields of mathematics, probability, and statistics use formal definitions of randomness. In statistics, a random variable is an assignment of a numerical value to each possible outcome of an event space. This association facilitates the identification and the calculation of probabilities of the events. Random variables can appear in random sequences. A random process is a sequence of random variables whose outcomes do not follow a deterministic pattern, but follow an evolution described by probability distributions. These and other constructs are extremely useful in probability theory and the various applications of randomness.
Randomness is most often used in statistics to signify well-defined statistical properties. Monte Carlo methods, which rely on random input (such as from random number generators or pseudorandom number generators), are important techniques in science, particularly in the field of computational science.[4] By analogy, quasi-Monte Carlo methods use quasi-random number generators.
Random selection, when narrowly associated with a simple random sample, is a method of selecting items (often called units) from a population where the probability of choosing a specific item is the proportion of those items in the population. For example, with a bowl containing just 10 red marbles and 90 blue marbles, a random selection mechanism would choose a red marble with probability 1/10. Note that a random selection mechanism that selected 10 marbles from this bowl would not necessarily result in 1 red and 9 blue. In situations where a population consists of items that are distinguishable, a random selection mechanism requires equal probabilities for any item to be chosen. That is, if the selection process is such that each member of a population, say research subjects, has the same probability of being chosen, then we can say the selection process is random.[2]
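The marble example can be simulated directly (a minimal sketch; the bowl contents follow the paragraph above):

```python
import random

random.seed(42)
bowl = ["red"] * 10 + ["blue"] * 90  # 10 red, 90 blue, as in the example

# Draw with replacement many times; each draw picks any marble equally likely.
draws = [random.choice(bowl) for _ in range(100_000)]
freq_red = draws.count("red") / len(draws)
```

Over many draws, the observed frequency of red marbles approaches 1/10, even though any single batch of 10 draws need not contain exactly one red marble.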
According to Ramsey theory, pure randomness is impossible, especially for large structures. Mathematician Theodore Motzkin suggested that "while disorder is more probable in general, complete disorder is impossible".[5] Misunderstanding this can lead to numerous conspiracy theories.[6] Cristian S. Calude stated that "given the impossibility of true randomness, the effort is directed towards studying degrees of randomness".[7] It can be proven that there is an infinite hierarchy (in terms of quality or strength) of forms of randomness.[7]
History
In ancient history, the concepts of chance and randomness were intertwined with that of fate. Many ancient peoples threw dice to determine fate, and this later evolved into games of chance. Most ancient cultures used various methods of divination to attempt to circumvent randomness and fate.[8][9]
The Chinese of 3000 years ago were perhaps the earliest people to formalize odds and chance. The Greek philosophers discussed randomness at length, but only in non-quantitative forms. It was only in the 16th century that Italian mathematicians began to formalize the odds associated with various games of chance. The invention of calculus had a positive impact on the formal study of randomness. In the 1888 edition of his book The Logic of Chance, John Venn wrote a chapter on "The conception of randomness" that included his view of the randomness of the digits of pi, by using them to construct a random walk in two dimensions.[10]
The early part of the 20th century saw a rapid growth in the formal analysis of randomness, as various approaches to the mathematical foundations of probability were introduced. In the mid-to-late 20th century, ideas of algorithmic information theory introduced new dimensions to the field via the concept of algorithmic randomness.
Although randomness had often been viewed as an obstacle and a nuisance for many centuries, in the 20th century computer scientists began to realize that the deliberate introduction of randomness into computations can be an effective tool for designing better algorithms. In some cases, such randomized algorithms even outperform the best deterministic methods.[11]
In science
Many scientific fields are concerned with randomness:
In the physical sciences
In the 19th century, scientists used the idea of random motions of molecules in the development of statistical mechanics to explain phenomena in thermodynamics and the properties of gases.
According to several standard interpretations of quantum mechanics, microscopic phenomena are objectively random.[12] That is, in an experiment that controls all causally relevant parameters, some aspects of the outcome still vary randomly. For example, if a single unstable atom is placed in a controlled environment, it cannot be predicted how long it will take for the atom to decay—only the probability of decay in a given time.[13] Thus, quantum mechanics does not specify the outcome of individual experiments, but only the probabilities. Hidden variable theories reject the view that nature contains irreducible randomness: such theories posit that in the processes that appear random, properties with a certain statistical distribution are at work behind the scenes, determining the outcome in each case.
In biology
The modern evolutionary synthesis ascribes the observed diversity of life to random genetic mutations followed by natural selection. The latter retains some random mutations in the gene pool due to the systematically improved chance for survival and reproduction that those mutated genes confer on individuals who possess them.
Several authors also claim that evolution (and sometimes development) requires a specific form of randomness, namely the introduction of qualitatively new behaviors. Instead of the choice of one possibility among several pre-given ones, this randomness corresponds to the formation of new possibilities.[14][15]
The characteristics of an organism arise to some extent deterministically (e.g., under the influence of genes and the environment), and to some extent randomly. For example, the density of freckles that appear on a person's skin is controlled by genes and exposure to light; whereas the exact location of individual freckles seems random.[16]
As far as behavior is concerned, randomness is important if an animal is to behave in a way that is unpredictable to others. For instance, insects in flight tend to move about with random changes in direction, making it difficult for pursuing predators to predict their trajectories.
In mathematics
The mathematical theory of probability arose from attempts to formulate mathematical descriptions of chance events, originally in the context of gambling, but later in connection with physics. Statistics is used to infer the underlying probability distribution of a collection of empirical observations. For the purposes of simulation, it is necessary to have a large supply of random numbers—or means to generate them on demand.
Algorithmic information theory studies, among other topics, what constitutes a random sequence. The central idea is that a string of bits is random if and only if it is shorter than any computer program that can produce that string (Kolmogorov randomness), which means that random strings are those that cannot be compressed. Pioneers of this field include Andrey Kolmogorov and his student Per Martin-Löf, Ray Solomonoff, and Gregory Chaitin. For the notion of infinite sequence, mathematicians generally accept Per Martin-Löf's semi-eponymous definition: An infinite sequence is random if and only if it withstands all recursively enumerable null sets.[17] The other notions of random sequences include, among others, recursive randomness and Schnorr randomness, which are based on recursively computable martingales. It was shown by Yongge Wang that these randomness notions are generally different.[18]
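The incompressibility idea can be illustrated with an ordinary compressor standing in for the (uncomputable) shortest program (a rough sketch only; a general-purpose compressor gives an upper bound on Kolmogorov complexity, not the true value):

```python
import os
import zlib

patterned = b"ab" * 500          # 1000 bytes with an obvious pattern
random_bytes = os.urandom(1000)  # 1000 bytes from the OS entropy source

compressed_patterned = len(zlib.compress(patterned))
compressed_random = len(zlib.compress(random_bytes))
# The patterned string shrinks to a tiny fraction of its size; the random
# bytes barely shrink at all (and can even grow due to format overhead).
```

A string that every compressor leaves essentially unchanged behaves, in this operational sense, like a random string.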
Randomness occurs in numbers such as log(2) and pi. The decimal digits of pi constitute an infinite sequence and "never repeat in a cyclical fashion." Numbers like pi are also considered likely to be normal:
Pi certainly seems to behave this way. In the first six billion decimal places of pi, each of the digits from 0 through 9 shows up about six hundred million times. Yet such results, conceivably accidental, do not prove normality even in base 10, much less normality in other number bases.[19]
In statistics
In statistics, randomness is commonly used to create simple random samples. This allows surveys of randomly selected groups of people to provide data that is representative of the population. Common methods of doing this include drawing names out of a hat or using a random digit chart (a large table of random digits).
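Drawing a simple random sample is a one-line operation in most statistical software; a sketch (the subject identifiers are invented for illustration):

```python
import random

random.seed(0)
# A population of 1000 distinguishable subjects.
population = [f"subject_{i:03d}" for i in range(1000)]

# random.sample draws without replacement; every subset of 50 subjects
# is equally likely, which is the defining property of a simple random sample.
sample = random.sample(population, 50)
```

Because every subject has the same probability of inclusion, summary statistics computed on the sample are unbiased estimates of the corresponding population quantities.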
In information science
In information science, irrelevant or meaningless data is considered noise. Noise consists of numerous transient disturbances, with a statistically randomized time distribution.
In communication theory, randomness in a signal is called "noise", and is opposed to that component of its variation that is causally attributable to the source, the signal.
In the development of random networks, communication randomness rests on the two simple assumptions of Paul Erdős and Alfréd Rényi: that the network has a fixed number of nodes, which remains fixed for the life of the network, and that all nodes are equal and are linked to one another at random.[20]
In finance
The random walk hypothesis considers that asset prices in an organized market evolve at random, in the sense that the expected value of their change is zero but the actual value may turn out to be positive or negative. More generally, asset prices are influenced by a variety of unpredictable events in the general economic environment.
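The zero-expected-change property of the random walk hypothesis can be seen in a small simulation (an illustrative sketch; the step distribution and parameters are assumptions, not part of the hypothesis itself):

```python
import random

random.seed(7)

def final_price(steps, start=100.0):
    """Simulate one price path whose increments have zero mean."""
    price = start
    for _ in range(steps):
        price += random.gauss(0, 1)  # expected change per step is zero
    return price

# Average the net change over many independent one-year (250-step) paths.
finals = [final_price(250) for _ in range(2000)]
mean_change = sum(finals) / len(finals) - 100.0
```

Any single path may end well above or below its start, but the average net change across many paths is close to zero, matching the hypothesis that the expected change is zero while the realized change may be positive or negative.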
In politics
Random selection can be an official method to resolve tied elections in some jurisdictions.[21] Its use in politics has a long history: many offices in ancient Athens were filled by lot rather than by voting.
Randomness and religion
Randomness can be seen as conflicting with the deterministic ideas of some religions, such as those where the universe is created by an omniscient deity who is aware of all past and future events. If the universe is regarded as having a purpose, then randomness can be seen as impossible. This is one of the rationales for religious opposition to evolution, which states that non-random selection is applied to the results of random genetic variation.
Hindu and Buddhist philosophies state that any event is the result of previous events, as is reflected in the concept of karma. As such, this conception is at odds with the idea of randomness, and any reconciliation between the two would require an explanation.[22]
In some religious contexts, procedures that are commonly perceived as randomizers are used for divination. Cleromancy uses the casting of bones or dice to reveal what is seen as the will of the gods.
Applications
In most of its mathematical, political, social and religious uses, randomness is used for its innate "fairness" and lack of bias.
Politics: Athenian democracy was based on the concept of isonomia (equality of political rights), and used complex allotment machines to ensure that the positions on the ruling committees that ran Athens were fairly allocated. Allotment is now largely restricted to situations where "fairness" is approximated by randomization, such as selecting jurors in Anglo-Saxon legal systems and military draft lotteries.
Games: Random numbers were first investigated in the context of gambling, and many randomizing devices, such as dice, shuffled playing cards, and roulette wheels, were first developed for use in gambling. The ability to produce random numbers fairly is vital to electronic gambling, and, as such, the methods used to create them are usually regulated by government Gaming Control Boards. Random drawings are also used to determine lottery winners. In fact, randomness has been used for games of chance throughout history, and to select individuals for an unwanted task in a fair way (see drawing straws).
Sports: Some sports, including American football, use coin tosses to randomly select starting conditions for games or seed tied teams for postseason play. The National Basketball Association uses a weighted lottery to order teams in its draft.
Mathematics: Random numbers are also employed where their use is mathematically important, such as sampling for opinion polls and for statistical sampling in quality control systems. Computational solutions for some types of problems use random numbers extensively, such as in the Monte Carlo method and in genetic algorithms.
Medicine: Random allocation of a clinical intervention is used to reduce bias in controlled trials (e.g., randomized controlled trials).
Religion: Although not intended to be random, various forms of divination such as cleromancy see what appears to be a random event as a means for a divine being to communicate their will (see also Free will and Determinism for more).
Generation
It is generally accepted that there exist three mechanisms responsible for (apparently) random behavior in systems:
 Randomness coming from the environment (for example, Brownian motion, but also hardware random number generators).
 Randomness coming from the initial conditions. This aspect is studied by chaos theory, and is observed in systems whose behavior is very sensitive to small variations in initial conditions (such as pachinko machines and dice).
 Randomness intrinsically generated by the system. This is also called pseudorandomness, and is the kind used in pseudorandom number generators. There are many algorithms (based on arithmetic or cellular automata) for generating pseudorandom numbers. The behavior of the system can be determined by knowing the seed state and the algorithm used. These methods are often quicker than getting "true" randomness from the environment.
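The third mechanism can be illustrated with a classic linear congruential generator (a minimal sketch; the multiplier and increment are the well-known Numerical Recipes constants, and real applications should prefer a library generator):

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: state evolves deterministically
    as state = (a * state + c) mod m; yields floats in [0, 1)."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m

# Two generators with the same seed produce identical "random" sequences,
# demonstrating that the output is fully determined by seed and algorithm.
g1, g2 = lcg(12345), lcg(12345)
first = [next(g1) for _ in range(5)]
second = [next(g2) for _ in range(5)]
```

The sequence looks statistically random, yet anyone who knows the seed and the algorithm can reproduce it exactly—the defining trait of pseudorandomness.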
The many applications of randomness have led to many different methods for generating random data. These methods may vary as to how unpredictable or statistically random they are, and how quickly they can generate random numbers.
Before the advent of computational random number generators, generating large amounts of sufficiently random numbers (which is important in statistics) required a lot of work. Results would sometimes be collected and distributed as random number tables.
Measures and tests
There are many practical measures of randomness for a binary sequence. These include measures based on frequency, discrete transforms, complexity, or a mixture of these, such as the tests by Kak, Phillips, Yuen, Hopkins, Beth and Dai, Mund, and Marsaglia and Zaman.[23]
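The simplest of these is a frequency test. A sketch of a monobit-style frequency test in the spirit of the NIST test suite (our implementation, not one of the cited tests):

```python
import math
import random

def monobit_p_value(bits):
    """Frequency (monobit) test: under the null hypothesis of fair,
    independent bits, the normalized sum is approximately Gaussian,
    and erfc converts it into a p-value."""
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * len(bits)))

random.seed(3)
good = [random.getrandbits(1) for _ in range(10_000)]
biased = [1] * 6_000 + [0] * 4_000  # 60% ones: clearly non-random

p_good = monobit_p_value(good)
p_biased = monobit_p_value(biased)
```

A small p-value (say below 0.01) is evidence against randomness; the balanced sequence passes while the biased one fails decisively. Passing a frequency test is necessary but far from sufficient, which is why batteries of complementary tests are used in practice.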
Quantum nonlocality has been used to certify the presence of a genuine or strong form of randomness in a given string of numbers.[24]
Misconceptions and logical fallacies
Popular perceptions of randomness are frequently mistaken, and are often based on fallacious reasoning or intuitions.
A number is "due"
This argument is, "In a random selection of numbers, since all numbers eventually appear, those that have not come up yet are 'due', and thus more likely to come up soon." This logic is only correct if applied to a system where numbers that come up are removed from the system, such as when playing cards are drawn and not returned to the deck. In this case, once a jack is removed from the deck, the next draw is less likely to be a jack and more likely to be some other card. However, if the jack is returned to the deck, and the deck is thoroughly reshuffled, a jack is as likely to be drawn as any other card. The same applies in any other process where objects are selected independently, and none are removed after each event, such as the roll of a die, a coin toss, or most lottery number selection schemes. Truly random processes such as these do not have memory, which makes it impossible for past outcomes to affect future outcomes. In fact, there is no finite number of trials that can guarantee a success.
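The contrast between sampling with and without replacement can be checked by simulation (an illustrative sketch; the deck encoding is ours):

```python
import random

random.seed(5)
# 52 cards as (rank, suit) pairs; rank 10 stands for the jack.
deck = [(rank, suit) for rank in range(13) for suit in range(4)]

def prob_next_is_jack(returned, trials=20_000):
    hits = 0
    for _ in range(trials):
        d = list(deck)
        d.remove((10, 0))        # one jack has just been drawn
        if returned:
            d.append((10, 0))    # ...and is returned before reshuffling
        random.shuffle(d)
        hits += d[0][0] == 10    # is the next card a jack?
    return hits / trials

p_without = prob_next_is_jack(returned=False)  # about 3/51 ≈ 0.059
p_with = prob_next_is_jack(returned=True)      # about 4/52 ≈ 0.077
```

Removing the jack genuinely lowers the chance of drawing another jack, but once it is returned and the deck reshuffled, the process is memoryless and the probability is restored—no card is ever "due".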
A number is "cursed" or "blessed"
In a random sequence of numbers, a number may be said to be cursed because it has come up less often in the past, and so it is thought that it will occur less often in the future. A number may be assumed to be blessed because it has occurred more often than others in the past, and so it is thought likely to come up more often in the future. This logic is valid only if the randomisation is biased, for example with a loaded die. If the die is fair, then previous rolls can give no indication of future events.
In nature, events rarely occur with perfectly equal frequency, so observing outcomes to determine which events are more probable makes sense. However, it is fallacious to apply this logic to systems designed to make all outcomes equally likely, such as shuffled cards, dice, and roulette wheels.
Odds are never dynamic
In the beginning of a scenario, one might calculate the probability of a certain event. However, as soon as one gains more information about the scenario, one may need to recalculate the probability accordingly.
For example, when told that a woman has two children, one might be interested in knowing whether either of them is a girl, and, if so, what the probability is that the other child is also a girl. Considering the two events independently, one might expect that the probability that the other child is female is ½ (50%), but by building a probability space illustrating all possible outcomes, one would notice that the probability is actually only ⅓ (33%).
To be sure, the probability space does illustrate four ways of having these two children: boy-boy, girl-boy, boy-girl, and girl-girl. But once it is known that at least one of the children is female, this rules out the boy-boy scenario, leaving only three ways of having the two children: boy-girl, girl-boy, girl-girl. From this, it can be seen that only ⅓ of these scenarios would have the other child also be a girl[25] (see Boy or girl paradox for more).
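This enumeration of the probability space can be written out directly (a minimal sketch):

```python
from itertools import product

# The four equally likely birth orders: BB, BG, GB, GG.
outcomes = list(product("BG", repeat=2))

# Condition on the information "at least one child is a girl".
at_least_one_girl = [o for o in outcomes if "G" in o]
both_girls = [o for o in at_least_one_girl if o == ("G", "G")]

prob = len(both_girls) / len(at_least_one_girl)  # 1/3
```

The conditioning step removes the boy-boy outcome, leaving three equally likely cases of which only one has two girls.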
In general, by using a probability space, one is less likely to miss out on possible scenarios, or to neglect the importance of new information. This technique can be used to provide insights in other situations such as the Monty Hall problem, a game show scenario in which a car is hidden behind one of three doors, and two goats are hidden as booby prizes behind the others. Once the contestant has chosen a door, the host opens one of the remaining doors to reveal a goat, eliminating that door as an option. With only two doors left (one with the car, the other with another goat), the player must decide to either keep their decision, or to switch and select the other door. Intuitively, one might think the player is choosing between two doors with equal probability, and that the opportunity to choose another door makes no difference. However, an analysis of the probability spaces would reveal that the contestant has received new information, and that changing to the other door would increase their chances of winning.^{ [25]}
See also
 Aleatory
 Chaitin's constant
 Chance (disambiguation)
 Frequency probability
 Indeterminism
 Nonlinear system
 Probability interpretations
 Probability theory
 Pseudorandomness
 Random.org—generates random numbers using atmospheric noise
 Sortition
Notes
 ^ Strictly speaking, the frequency of an outcome will converge almost surely to a predictable value as the number of trials becomes arbitrarily large. Nonconvergence or convergence to a different value is possible, but has probability zero.
References
 ^ The Oxford English Dictionary defines "random" as "Having no definite aim or purpose; not sent or guided in a particular direction; made, done, occurring, etc., without method or conscious choice; haphazard."
 ^ ^{a} ^{b} "Definition of randomness  Dictionary.com". www.dictionary.com. Retrieved 21 November 2019.
 ^ "The Definitive Glossary of Higher Mathematical Jargon – Arbitrary". Math Vault. 1 August 2019. Retrieved 21 November 2019.
 ^ Third Workshop on Monte Carlo Methods, Jun Liu, Professor of Statistics, Harvard University
 ^ Hans Jürgen Prömel (2005). "Complete Disorder is Impossible: The Mathematical Work of Walter Deuber". Combinatorics, Probability and Computing. Cambridge University Press. 14: 3–16. doi: 10.1017/S0963548304006674.
 ^ TED.com (May 2016). The origin of countless conspiracy theories
 ^ ^{a} ^{b} Cristian S. Calude (2017). "Quantum Randomness: From Practice to Theory and Back", in The Incomputable: Journeys Beyond the Turing Barrier, edited by S. Barry Cooper and Mariya I. Soskova, 169–181. doi:10.1007/978-3-319-43669-2_11.
 ^ Handbook to life in ancient Rome by Lesley Adkins 1998 ISBN 0195123328 page 279
 ^ Religions of the ancient world by Sarah Iles Johnston 2004 ISBN 0674015177 page 370
 ^ Annotated readings in the history of statistics by Herbert Aron David, 2001 ISBN 0387988440 page 115. Note that the 1866 edition of Venn's book (on Google books) does not include this chapter.
 ^ Reinert, Knut (2010). "Concept: Types of algorithms" (PDF). Freie Universität Berlin. Retrieved 20 November 2019.
 ^ Zeilinger, Anton; Aspelmeyer, Markus; Żukowski, Marek; Brukner, Časlav; Kaltenbaek, Rainer; Paterek, Tomasz; Gröblacher, Simon (April 2007). "An experimental test of non-local realism". Nature. 446 (7138): 871–875. arXiv: 0704.2529. Bibcode: 2007Natur.446..871G. doi: 10.1038/nature05677. ISSN 1476-4687. PMID 17443179. S2CID 4412358.
 ^ "Each nucleus decays spontaneously, at random, in accordance with the blind workings of chance." Q for Quantum, John Gribbin
 ^ Longo, Giuseppe; Montévil, Maël; Kauffman, Stuart (1 January 2012). No Entailing Laws, but Enablement in the Evolution of the Biosphere. Proceedings of the 14th Annual Conference Companion on Genetic and Evolutionary Computation. GECCO '12. New York, NY, USA: ACM. pp. 1379–1392. arXiv: 1201.2069. CiteSeerX 10.1.1.701.3838. doi: 10.1145/2330784.2330946. ISBN 9781450311786. S2CID 15609415.
 ^ Longo, Giuseppe; Montévil, Maël (1 October 2013). "Extended criticality, phase spaces and enablement in biology". Chaos, Solitons & Fractals. Emergent Critical Brain Dynamics. 55: 64–79. Bibcode: 2013CSF....55...64L. doi: 10.1016/j.chaos.2013.03.008.
 ^ Breathnach, A. S. (1982). "A long-term hypopigmentary effect of thorium-X on freckled skin". British Journal of Dermatology. 106 (1): 19–25. doi: 10.1111/j.1365-2133.1982.tb00897.x. PMID 7059501. S2CID 72016377. "The distribution of freckles seems entirely random, and not associated with any other obviously punctuate anatomical or physiological feature of skin."
 ^ Martin-Löf, Per (1966). "The definition of random sequences". Information and Control. 9 (6): 602–619. doi: 10.1016/S0019-9958(66)80018-9.
 ^ Yongge Wang: Randomness and Complexity. PhD Thesis, 1996. http://webpages.uncc.edu/yonwang/papers/thesis.pdf
 ^ "Are the digits of pi random? researcher may hold the key". Lbl.gov. 23 July 2001. Retrieved 27 July 2012.
 ^ Albert-László Barabási (2003). Linked, "Rich Get Richer", p. 81.
 ^ Municipal Elections Act (Ontario, Canada) 1996, c. 32, Sched., s. 62 (3) : "If the recount indicates that two or more candidates who cannot both or all be declared elected to an office have received the same number of votes, the clerk shall choose the successful candidate or candidates by lot."
 ^ Reichenbach, Bruce (1990). The Law of Karma: A Philosophical Study. Palgrave Macmillan UK. p. 121. ISBN 9781349118991.
 ^ Terry Ritter, Randomness tests: a literature survey. ciphersbyritter.com
 ^ Pironio, S.; et al. (2010). "Random Numbers Certified by Bell's Theorem". Nature. 464 (7291): 1021–1024. arXiv: 0911.3427. Bibcode: 2010Natur.464.1021P. doi: 10.1038/nature09008. PMID 20393558. S2CID 4300790.
 ^ ^{a} ^{b} Johnson, George (8 June 2008). "Playing the Odds". The New York Times.
Further reading
 Randomness by Deborah J. Bennett. Harvard University Press, 1998. ISBN 0674107454.
 Random Measures, 4th ed. by Olav Kallenberg. Academic Press, New York, London; Akademie-Verlag, Berlin, 1986. MR 0854102.
 The Art of Computer Programming. Vol. 2: Seminumerical Algorithms, 3rd ed. by Donald E. Knuth. Reading, MA: Addison-Wesley, 1997. ISBN 0201896842.
 Fooled by Randomness, 2nd ed. by Nassim Nicholas Taleb. Thomson Texere, 2004. ISBN 158799190X.
 Exploring Randomness by Gregory Chaitin. Springer-Verlag London, 2001. ISBN 1852334177.
 Random by Kenneth Chan includes a "Random Scale" for grading the level of randomness.
 The Drunkard’s Walk: How Randomness Rules our Lives by Leonard Mlodinow. Pantheon Books, New York, 2008. ISBN 9780375424045.
External links
Wikiversity has learning resources about Random 
Look up randomness in Wiktionary, the free dictionary. 
Wikiquote has quotations related to: Randomness 
Wikimedia Commons has media related to Randomness. 
 QuantumLab Quantum random number generator with single photons as interactive experiment.
 HotBits generates random numbers from radioactive decay.
 QRBG Quantum Random Bit Generator
 QRNG Fast Quantum Random Bit Generator
 Chaitin: Randomness and Mathematical Proof
 A Pseudorandom Number Sequence Test Program (Public Domain)
 Dictionary of the History of Ideas: Chance
 Computing a Glimpse of Randomness
 Chance versus Randomness, from the Stanford Encyclopedia of Philosophy