The concept of a random sequence is essential in
probability theory and
statistics. The concept generally relies on the notion of a sequence of random variables and many statistical discussions begin with the words "let X1,...,Xn be independent random variables...". Yet as
D. H. Lehmer stated in 1951: "A random sequence is a vague notion... in which each term is unpredictable to the uninitiated and whose digits pass a certain number of tests traditional with statisticians".
Axiomatic probability theory deliberately avoids a definition of a random sequence. Traditional probability theory does not state whether a specific sequence is random, but generally proceeds to discuss the properties of random variables and stochastic sequences assuming some definition of randomness. The
Bourbaki school considered the statement "let us consider a random sequence" an
abuse of language.
Émile Borel was one of the first mathematicians to formally address randomness in 1909. In 1919
Richard von Mises gave the first definition of algorithmic randomness, which was inspired by the law of large numbers, although he used the term collective rather than random sequence. Using the concept of the
impossibility of a gambling system, von Mises defined an infinite sequence of zeros and ones as random if it is not biased, i.e. it has the frequency stability property (the frequency of zeros tends to 1/2) and every sub-sequence we can select from it by a "proper" method of selection is also not biased.
The sub-sequence selection criterion imposed by von Mises is important, because although 0101010101... is not biased, selecting the odd positions yields 000000..., which is not random. Von Mises never totally formalized his definition of a proper selection rule for sub-sequences, but in 1940
Alonzo Church defined it as any
recursive function which, having read the first N elements of the sequence, decides whether to select element number N + 1. Church was a pioneer in the field of computable functions, and the definition he made relied on the Church–Turing thesis for computability. This definition is often called Mises–Church randomness.
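To make the Mises–Church criterion concrete, here is a minimal Python sketch (an illustrative toy; the function names are invented for this example): a Church-style selection rule is a computable function of the prefix read so far, and applying one such rule to the alternating sequence extracts a heavily biased sub-sequence.

```python
def select_odd_positions(prefix):
    """A Church-style selection rule: having read the first N elements
    (the prefix), decide whether to select element number N + 1.
    This rule selects exactly the odd positions (1st, 3rd, 5th, ...)."""
    return len(prefix) % 2 == 0   # element N + 1 sits at an odd position when N is even

def apply_rule(rule, sequence):
    """Apply a selection rule to a finite initial segment of a sequence."""
    selected = []
    for n in range(len(sequence)):
        if rule(sequence[:n]):            # the rule sees only what was already read
            selected.append(sequence[n])  # ...and only then is element n + 1 revealed
    return selected

seq = [0, 1] * 500                        # 0101...: frequency of zeros is exactly 1/2
sub = apply_rule(select_odd_positions, seq)
print(sum(sub) / len(sub))                # 0.0: the selected sub-sequence is all zeros
```

Because the rule consults only the elements already read, it is a legitimate computable selection rule, and the biased sub-sequence it extracts is precisely why 0101010101... fails von Mises' test.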
During the 20th century, various technical approaches to defining random sequences were developed, and now three distinct paradigms can be identified. In the mid-1960s,
A. N. Kolmogorov and
D. W. Loveland independently proposed a more permissive selection rule. In their view Church's recursive-function definition was too restrictive, in that it read the elements in order. Instead they proposed a rule based on a partially computable process which, having read any N elements of the sequence, decides if it wants to select another element which has not been read yet. This definition is often called Kolmogorov–Loveland stochasticity. But this method was considered too weak by
Alexander Shen, who showed that there is a Kolmogorov–Loveland stochastic sequence which does not conform to the general notion of randomness.
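The difference between the two kinds of rule can be seen in a short Python sketch (a toy illustration; the visiting order here is chosen arbitrarily for the example): a Kolmogorov–Loveland process may examine positions out of order, and it must commit to selecting a position before its bit is revealed.

```python
def kl_process(seen):
    """One step of a Kolmogorov-Loveland style selection process (toy example).
    `seen` maps the positions already read to their bits.  Based only on what
    has been read, return the next unread position to examine and whether to
    select it; the selection decision is made BEFORE the new bit is revealed.
    This process visits positions in the order 1, 0, 3, 2, 5, 4, ..., which
    no monotone Church-style rule can imitate."""
    n = len(seen)                        # number of bits read so far
    pos = n + 1 if n % 2 == 0 else n - 1
    return pos, True                     # select every position it examines

def apply_kl(process, sequence, steps):
    seen, selected = {}, []
    for _ in range(steps):
        pos, select = process(seen)
        if select:
            selected.append(sequence[pos])  # commitment precedes reading
        seen[pos] = sequence[pos]           # only now is the bit at pos revealed
    return selected

seq = [0, 1] * 50
print(apply_kl(kl_process, seq, 8))      # [1, 0, 1, 0, 1, 0, 1, 0]
```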
Three basic paradigms for dealing with random sequences have now emerged:
The frequency / measure-theoretic approach. This approach started with the work of Richard von Mises and Alonzo Church. In the 1960s Per Martin-Löf noticed that the sets coding such frequency-based stochastic properties are a special kind of measure zero set, and that a more general and smooth definition can be obtained by considering all effectively measure zero sets.
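In modern notation (a standard formulation, with symbols introduced here only for orientation), an effectively measure zero set is one covered by a Martin-Löf test, and randomness means avoiding every such test:

```latex
% A Martin-Löf test is a uniformly effectively open sequence of sets
% (U_n) with mu(U_n) <= 2^{-n}; its intersection is an effectively
% measure zero set.  A sequence x in {0,1}^N is Martin-Löf random iff
% it avoids every such test:
\[
  x \text{ is Martin-L\"of random}
  \iff
  x \notin \bigcap_{n \in \mathbb{N}} U_n
  \quad \text{for every Martin-L\"of test } (U_n)_{n \in \mathbb{N}},
  \ \mu(U_n) \le 2^{-n}.
\]
```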
The complexity / compressibility approach. This paradigm was championed by A. N. Kolmogorov along with contributions from
Leonid Levin and
Gregory Chaitin. For finite sequences, Kolmogorov defines randomness of a binary string of length n as the entropy (or
Kolmogorov complexity) normalized by the length n. In other words, if the Kolmogorov complexity of the string is close to n, it is very random; if the complexity is far below n, it is not so random. The dual concept of randomness is compressibility: the more random a sequence is, the less compressible it is, and vice versa.
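Kolmogorov complexity itself is uncomputable, but every real compressor yields a computable upper bound on it, so the idea can be illustrated with an off-the-shelf compressor. A minimal Python sketch, using zlib as a crude stand-in for K(x) (only a proxy, not the true complexity):

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed length divided by original length: a computable
    upper-bound proxy for K(x)/n.  True Kolmogorov complexity is
    uncomputable; this is illustrative only."""
    return len(zlib.compress(data, level=9)) / len(data)

patterned = b"01" * 5000        # highly regular: complexity far below n
noisy = os.urandom(10000)       # OS entropy: close to incompressible

print(compression_ratio(patterned))  # tiny ratio, well under 0.01
print(compression_ratio(noisy))      # ratio near (or slightly above) 1
```

The patterned string compresses to a tiny fraction of its length, while the noisy one does not shrink at all, matching the complexity-based reading of randomness.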
The predictability approach. This paradigm is due to
Claus P. Schnorr and uses a slightly different definition of constructive martingales than the martingales used in traditional probability theory. Schnorr showed how the existence of a selective betting strategy implied the existence of a selection rule for a biased sub-sequence. If one only requires a recursive martingale to succeed on a sequence, instead of constructively succeeding on it, then one gets the concept of recursive randomness. Yongge Wang showed that this recursive randomness concept is different from Schnorr's randomness concept.
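As an illustration of the betting-strategy viewpoint (a minimal sketch with invented names, not Schnorr's actual construction): a martingale assigns to each finite prefix the gambler's capital, subject to the fairness condition d(w) = (d(w0) + d(w1))/2, and it succeeds on a sequence if the capital grows without bound along its prefixes. On the non-random sequence 0101..., even a trivial strategy succeeds:

```python
def bet_alternating(prefix):
    """Betting strategy: stake half the current capital on the guess
    that the next bit differs from the previous one."""
    guess = 1 - prefix[-1] if prefix else 0
    return guess, 0.5                    # (guessed bit, fraction of capital staked)

def run_martingale(strategy, sequence, capital=1.0):
    """Play the strategy along the sequence with fair payouts: a correct
    guess wins the stake, a wrong one loses it, so the capital function
    satisfies the martingale condition d(w) = (d(w0) + d(w1)) / 2."""
    for i, bit in enumerate(sequence):
        guess, fraction = strategy(sequence[:i])
        stake = fraction * capital
        capital += stake if guess == bit else -stake
    return capital

seq = [0, 1] * 20                        # 0101... is not random:
print(run_martingale(bet_alternating, seq))  # 1.5 ** 40, i.e. unbounded growth
```

The unbounded growth of the capital on 0101... is the betting-strategy counterpart of the biased sub-sequence selected earlier.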
In most cases, theorems relating the three paradigms (often equivalence theorems) have been proven.