In mathematics, reduction refers to the rewriting of an expression into a simpler form. For example, the process of rewriting a fraction into one with the smallest whole-number denominator possible (while keeping the numerator a whole number) is called "reducing a fraction". Rewriting a radical (or "root") expression with the smallest possible whole number under the radical symbol is called "reducing a radical". Minimizing the number of radicals that appear underneath other radicals in an expression is called denesting radicals.
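Both kinds of reduction can be sketched in a few lines of Python. This is an illustrative sketch (the function names are chosen here, not standard): a fraction is reduced by dividing out the greatest common divisor, and a radical sqrt(n) is reduced by pulling every square factor out from under the root.

```python
from math import gcd

def reduce_fraction(numerator: int, denominator: int) -> tuple:
    """Reduce a fraction to lowest terms by dividing out the GCD."""
    if denominator == 0:
        raise ValueError("denominator must be nonzero")
    g = gcd(numerator, denominator)
    return numerator // g, denominator // g

def reduce_radical(n: int) -> tuple:
    """Write sqrt(n) as a*sqrt(b) with the smallest possible b."""
    a, b = 1, n
    k = 2
    while k * k <= b:
        # Pull each square factor k*k out from under the radical.
        while b % (k * k) == 0:
            b //= k * k
            a *= k
        k += 1
    return a, b

print(reduce_fraction(12, 18))  # (2, 3): 12/18 reduces to 2/3
print(reduce_radical(18))       # (3, 2): sqrt(18) = 3*sqrt(2)
```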
In linear algebra, reduction refers to applying simple rules to a series of equations or matrices to change them into a simpler form. In the case of matrices, the process involves manipulating either the rows or the columns of the matrix and so is usually referred to as row-reduction or column-reduction, respectively. Often the aim of reduction is to transform a matrix into its "row-reduced echelon form" or "row-echelon form"; this is the goal of Gaussian elimination.
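Row-reduction can be sketched directly from the three elementary row operations: swap two rows, scale a row, and add a multiple of one row to another. The following is a minimal illustration (not a production routine), using exact fractions to avoid rounding:

```python
from fractions import Fraction

def rref(matrix):
    """Row-reduce a matrix to reduced row-echelon form using
    elementary row operations (Gauss-Jordan elimination)."""
    m = [[Fraction(x) for x in row] for row in matrix]
    rows, cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(cols):
        # Find a row at or below pivot_row with a nonzero entry here.
        pr = next((r for r in range(pivot_row, rows) if m[r][col] != 0), None)
        if pr is None:
            continue  # no pivot in this column
        m[pivot_row], m[pr] = m[pr], m[pivot_row]   # swap rows
        pivot = m[pivot_row][col]
        m[pivot_row] = [x / pivot for x in m[pivot_row]]  # scale pivot to 1
        # Eliminate this column's entry from every other row.
        for r in range(rows):
            if r != pivot_row and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return m

print(rref([[1, 2, 3], [4, 5, 6]]))  # [[1, 0, -1], [0, 1, 2]]
```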
In calculus, reduction refers to using the technique of integration by parts to evaluate integrals by reducing them to simpler forms.
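As an example of such a reduction, integration by parts with u = x^{n} and dv = e^{x} dx yields a reduction formula that lowers the exponent by one at each step:

```latex
\int x^{n} e^{x} \, dx = x^{n} e^{x} - n \int x^{n-1} e^{x} \, dx
```

Applying the formula n times reduces the integral to the elementary case of the exponential function alone.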
In dynamic analysis, static reduction refers to reducing the number of degrees of freedom. Static reduction can also be used in finite element analysis to refer to simplification of a linear algebraic problem. Since a static reduction requires several inversion steps, it is an expensive matrix operation and is prone to some error in the solution. Consider the following system of linear equations in an FEA problem:

[ K_{11}  K_{12} ] [ x_{1} ]   [ F_{1} ]
[ K_{21}  K_{22} ] [ x_{2} ] = [ F_{2} ]

where K and F are known and K, x and F are divided into submatrices as shown above. If F_{2} contains only zeros, and only x_{1} is desired, K can be reduced to yield the following system of equations:

K_{reduced} x_{1} = F_{1}

K_{reduced} is obtained by writing out the set of equations as follows:

K_{11} x_{1} + K_{12} x_{2} = F_{1}   (1)
K_{21} x_{1} + K_{22} x_{2} = 0   (2)

Equation (2) can be solved for x_{2} (assuming invertibility of K_{22}):

x_{2} = -K_{22}^{-1} K_{21} x_{1}

And substituting into (1) gives

K_{11} x_{1} - K_{12} K_{22}^{-1} K_{21} x_{1} = F_{1}

Thus

K_{reduced} = K_{11} - K_{12} K_{22}^{-1} K_{21}
In a similar fashion, any row i of F with a zero value may be eliminated if the corresponding value x_{i} is not desired. A reduced K may be reduced again. Note that since each reduction requires an inversion, and each inversion is an operation with computational cost O(n^{3}), most large matrices are pre-processed to reduce calculation time.
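The condensation above can be checked numerically on a small made-up system. The following sketch uses a hypothetical 3-degree-of-freedom system (all values invented for illustration), partitions it so x_{1} holds the first two unknowns and x_{2} the last, and verifies that solving the reduced 2x2 system reproduces the solution of the full system:

```python
from fractions import Fraction

# Hypothetical 3-DOF system K x = F with F_2 = 0; exact Fractions
# avoid rounding in the final check. Values are illustrative only.
K11 = [[Fraction(4), Fraction(1)],
       [Fraction(1), Fraction(3)]]
K12 = [[Fraction(2)], [Fraction(1)]]
K21 = [[Fraction(2), Fraction(1)]]
K22 = [[Fraction(5)]]            # 1x1 block, so its inverse is 1/K22
F1 = [Fraction(6), Fraction(4)]

# K_reduced = K11 - K12 K22^{-1} K21
inv_K22 = 1 / K22[0][0]
K_red = [[K11[i][j] - K12[i][0] * inv_K22 * K21[0][j] for j in range(2)]
         for i in range(2)]

# Solve the 2x2 reduced system K_red x1 = F1 by Cramer's rule.
det = K_red[0][0] * K_red[1][1] - K_red[0][1] * K_red[1][0]
x1 = [(F1[0] * K_red[1][1] - K_red[0][1] * F1[1]) / det,
      (K_red[0][0] * F1[1] - F1[0] * K_red[1][0]) / det]

# Recover the condensed unknown: x2 = -K22^{-1} K21 x1.
x2 = -inv_K22 * (K21[0][0] * x1[0] + K21[0][1] * x1[1])

# Verify every row of the original 3x3 system K x = F.
x = x1 + [x2]
K = [[4, 1, 2], [1, 3, 1], [2, 1, 5]]
F = [6, 4, 0]
assert all(sum(K[i][j] * x[j] for j in range(3)) == F[i] for i in range(3))
print(x1, x2)
```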
In the 9th century, Persian mathematician Al-Khwarizmi's Al-Jabr introduced the fundamental concepts of "reduction" and "balancing", referring to the transposition of subtracted terms to the other side of an equation and the cancellation of like terms on opposite sides of the equation. This is the operation which Al-Khwarizmi originally described as al-jabr.^{[1]} The name "algebra" comes from the "al-jabr" in the title of his book.
It is not certain just what the terms al-jabr and muqabalah mean, but the usual interpretation is similar to that implied in the translation above. The word al-jabr presumably meant something like "restoration" or "completion" and seems to refer to the transposition of subtracted terms to the other side of an equation, which is evident in the treatise; the word muqabalah is said to refer to "reduction" or "balancing"—that is, the cancellation of like terms on opposite sides of the equation.