with a_{n} > 0 for all n. The signs of the general terms alternate between positive and negative. Like any series, an alternating
series converges if and only if the associated sequence of partial sums
converges.
The functions sine and cosine used in
trigonometry can be defined as alternating series in
calculus even though they are introduced in elementary algebra as the ratio of sides of a right triangle.
The theorem known as the "Leibniz test" or the
alternating series test states that an alternating series converges if the terms $a_{n}$ converge to 0
monotonically.
Proof: Suppose the sequence $a_{n}$ converges to zero and is monotone decreasing. If $m$ is odd and $m<n$, we obtain the estimate $S_{n}-S_{m}\leq a_{m}$ via the following calculation:
$$S_{n}-S_{m}=a_{m+1}-a_{m+2}+a_{m+3}-\cdots\pm a_{n}=a_{m+1}-(a_{m+2}-a_{m+3})-(a_{m+4}-a_{m+5})-\cdots\leq a_{m+1}\leq a_{m}.$$
Since $a_{n}$ is monotonically decreasing, each difference $a_{k}-a_{k+1}$ is nonnegative, so every bracketed term $-(a_{k}-a_{k+1})$ is negative or zero. Thus, we have the final inequality $S_{n}-S_{m}\leq a_{m}$. Similarly, it can be shown that $-a_{m}\leq S_{n}-S_{m}$, so $|S_{n}-S_{m}|\leq a_{m}$. Since $a_{m}$ converges to $0$, our partial sums $S_{m}$ form a
Cauchy sequence (i.e., the series satisfies the
Cauchy criterion) and therefore converge. The argument for $m$ even is similar.
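As a quick numerical illustration (not part of the original argument), the estimate $|S_{n}-S_{m}|\leq a_{m}$ can be checked directly; the sketch below uses the alternating harmonic series $a_{k}=1/k$ as an illustrative choice:

```python
# Numerical check of the estimate |S_n - S_m| <= a_m from the proof,
# using the alternating harmonic series a_k = 1/k (illustrative choice).

def partial_sum(n):
    """S_n = sum_{k=1}^{n} (-1)^(k+1) / k."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

for m in (5, 51, 501):              # odd m, as in the proof
    for n in (m + 1, m + 10, m + 100):
        gap = abs(partial_sum(n) - partial_sum(m))
        assert gap <= 1 / m         # a_m = 1/m bounds the gap
print("estimate |S_n - S_m| <= a_m holds in all checked cases")
```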
Approximating sums
The estimate above does not depend on $n$. So, if $a_{n}$ approaches 0 monotonically, the estimate provides an
error bound for approximating infinite sums by partial sums:
$$\left|S-S_{m}\right|\leq a_{m},$$
where $S$ denotes the sum of the series.
This does not mean that the bound always identifies the first partial sum whose error is already less than the modulus of the next term. For example, take $1-1/2+1/3-1/4+\cdots=\ln 2$ and try to find a partial sum whose error is at most 0.00005. The inequality above shows that the partial sum up through $a_{20000}$ is enough, but in fact this is twice as many terms as needed: the error after summing the first 9999 terms is 0.0000500025, so the partial sum up through $a_{10000}$ is already sufficient. This series happens to have the property that the series with terms $a_{n}-a_{n+1}$ is again an alternating series to which the Leibniz test applies, which makes this simple error bound suboptimal. The Calabrese bound,^{[1]} discovered in 1962, shows that this property yields an error bound half the size of the Leibniz bound. This in turn is not optimal for series in which the property applies two or more times, a case described by the Johnsonbaugh error bound.^{[2]} If the property can be applied an infinite number of times,
Euler's transform applies.^{[3]}
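The numbers in this example are easy to reproduce; the sketch below (an illustrative check, not drawn from the cited sources) compares the actual error of the partial sums of $1-1/2+1/3-\cdots$ with the threshold 0.00005:

```python
import math

# Partial sums of 1 - 1/2 + 1/3 - ... approach ln 2; the error after
# m terms is bounded by the next term 1/(m+1) (the Leibniz bound),
# but the true error can cross the threshold much earlier.

def S(m):
    return sum((-1) ** (k + 1) / k for k in range(1, m + 1))

print(abs(math.log(2) - S(9999)))   # about 0.0000500025: just over 0.00005
print(abs(math.log(2) - S(10000)))  # already below 0.00005
```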
Absolute convergence
A series ${\textstyle \sum a_{n}}$ converges absolutely if the series ${\textstyle \sum |a_{n}|}$ converges.
Theorem: Absolutely convergent series are convergent.
Proof: Suppose ${\textstyle \sum a_{n}}$ is absolutely convergent. Then, ${\textstyle \sum |a_{n}|}$ is convergent and it follows that ${\textstyle \sum 2|a_{n}|}$ converges as well. Since ${\textstyle 0\leq a_{n}+|a_{n}|\leq 2|a_{n}|}$, the series ${\textstyle \sum (a_{n}+|a_{n}|)}$ converges by the
comparison test. Therefore, the series ${\textstyle \sum a_{n}}$ converges as the difference of two convergent series ${\textstyle \sum a_{n}=\sum (a_{n}+|a_{n}|)-\sum |a_{n}|}$.
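As a simple finite-sum sanity check (illustrative only, with $a_{n}=(-1)^{n}/n^{2}$ chosen as a sample absolutely convergent series), the decomposition used in the proof can be verified numerically:

```python
# Check the decomposition sum a_n = sum (a_n + |a_n|) - sum |a_n|
# on the first N terms of the sample series a_n = (-1)^n / n^2.

N = 1000
a = [(-1) ** n / n**2 for n in range(1, N + 1)]

lhs = sum(a)
rhs = sum(x + abs(x) for x in a) - sum(abs(x) for x in a)
assert abs(lhs - rhs) < 1e-12

# Each term of the middle series is squeezed as in the proof:
assert all(0 <= x + abs(x) <= 2 * abs(x) for x in a)
print("decomposition and squeeze inequality hold for the first", N, "terms")
```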
For any series, we can create a new series by rearranging the order of summation. A series is
unconditionally convergent if every rearrangement converges to the same sum as the original series.
Absolutely convergent series are unconditionally convergent. But the
Riemann series theorem states that a conditionally convergent series can be rearranged to converge to any value whatsoever, or even to diverge.^{
[4]} The general principle is that addition of infinite sums is only commutative for absolutely convergent series.
For example, one false proof that 1=0 exploits the failure of associativity for infinite sums.
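The Riemann series theorem is easy to observe numerically on the conditionally convergent series $1-1/2+1/3-\cdots$; the sketch below (an illustrative rearrangement, taking one positive term followed by two negative terms) uses exactly the same terms yet converges to $(\ln 2)/2$ instead of $\ln 2$:

```python
import math

# Rearranging 1 - 1/2 + 1/3 - ... as 1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ...
# (one positive term, then two negative terms) changes the sum to (ln 2)/2.

def rearranged_sum(blocks):
    """Sum `blocks` groups of (one positive term, two negative terms)."""
    total = 0.0
    for i in range(1, blocks + 1):
        total += 1 / (2 * i - 1)                 # positives 1, 1/3, 1/5, ...
        total -= 1 / (4 * i - 2) + 1 / (4 * i)   # negatives 1/2, 1/4, 1/6, ...
    return total

print(rearranged_sum(100000))   # close to (ln 2)/2 ~ 0.3466, not ln 2
```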
In practice, the numerical summation of an alternating series may be sped up using any one of a variety of
series acceleration techniques. One of the oldest techniques is that of
Euler summation, and there are many modern techniques that can offer even more rapid convergence.
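A minimal sketch of Euler's transformation, assuming its classical form $\sum_{n\geq 0}(-1)^{n}a_{n}=\sum_{k\geq 0}(-1)^{k}\,\Delta^{k}a_{0}/2^{k+1}$, where $\Delta$ is the forward difference operator; the series for $\ln 2$ is used as the test case:

```python
import math

# Euler's transformation: accelerate an alternating series sum (-1)^n a_n
# by summing (-1)^k * (Delta^k a_0) / 2^(k+1) over forward differences.

def euler_transform(a, terms):
    """Accelerate sum (-1)^n a[n] using `terms` forward differences of a[0]."""
    diffs = list(a)                  # current row of forward differences
    total = 0.0
    for k in range(terms):
        total += (-1) ** k * diffs[0] / 2 ** (k + 1)
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
    return total

a = [1 / (n + 1) for n in range(30)]   # a_n = 1/(n+1), so the sum is ln 2
approx = euler_transform(a, 20)
print(abs(approx - math.log(2)))       # tiny compared with the ~0.025 error
                                       # of 20 directly summed terms
```

The transformed series here has terms $1/((k+1)2^{k+1})$, so it converges geometrically rather than at the slow $1/m$ rate of the direct partial sums.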