with $a_n > 0$ for all $n$. The signs of the general terms alternate between positive and negative. Like any series, an alternating
series converges if and only if the associated sequence of partial sums
converges.
The functions sine and cosine used in
trigonometry can be defined as alternating series in
calculus even though they are introduced in elementary algebra as the ratio of sides of a right triangle. In fact,
$$\sin x = \sum_{n=0}^\infty (-1)^n \frac{x^{2n+1}}{(2n+1)!}$$
and
$$\cos x = \sum_{n=0}^\infty (-1)^n \frac{x^{2n}}{(2n)!}.$$
When the alternating factor $(-1)^n$ is removed from these series one obtains the hyperbolic functions sinh and cosh used in calculus.
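To make the convergence of these alternating series concrete, here is a minimal Python sketch (not part of the article; the function names and the ten-term truncation are illustrative choices) that sums the first terms of the two series and compares them with the standard library's sine and cosine:

```python
# A minimal check of the two series above (the function names and the
# ten-term truncation are illustrative choices, not from the article).
import math

def sin_series(x, terms=10):
    # sin x = sum_{n>=0} (-1)^n x^(2n+1) / (2n+1)!
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1) for n in range(terms))

def cos_series(x, terms=10):
    # cos x = sum_{n>=0} (-1)^n x^(2n) / (2n)!
    return sum((-1) ** n * x ** (2 * n) / math.factorial(2 * n) for n in range(terms))

x = 1.2
print(sin_series(x), math.sin(x))   # both ~ 0.9320391
print(cos_series(x), math.cos(x))   # both ~ 0.3623578
```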
For integer or positive index $\alpha$ the Bessel function of the first kind may be defined with the alternating series
$$J_\alpha(x) = \sum_{m=0}^\infty \frac{(-1)^m}{m!\,\Gamma(m+\alpha+1)} \left(\frac{x}{2}\right)^{2m+\alpha},$$
where $\Gamma(z)$ is the gamma function.
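As a quick sanity check, the following sketch truncates this alternating series after 20 terms (an arbitrary cutoff) using math.gamma for $\Gamma$; bessel_j is our own helper name, not a library routine:

```python
# A small sketch evaluating the alternating series for J_alpha; bessel_j is
# our own helper name, the 20-term cutoff an arbitrary choice.
import math

def bessel_j(alpha, x, terms=20):
    # J_alpha(x) = sum_{m>=0} (-1)^m / (m! * Gamma(m + alpha + 1)) * (x/2)^(2m + alpha)
    return sum(
        (-1) ** m / (math.factorial(m) * math.gamma(m + alpha + 1)) * (x / 2) ** (2 * m + alpha)
        for m in range(terms)
    )

print(bessel_j(0, 1.0))       # ~ 0.7651977, the tabulated value of J_0(1)
print(bessel_j(0, 2.404826))  # ~ 0, since x is near the first zero of J_0 (~ 2.40483)
```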
The theorem known as the "Leibniz test" or the alternating series test states that an alternating series converges if the terms $a_n$ converge to 0 monotonically.
Proof: Suppose the sequence $a_n$ converges to zero and is monotonically decreasing. Writing $S_m = \sum_{k=0}^m (-1)^k a_k$ for the partial sums, if $m$ is odd and $m < n$, we obtain the estimate $S_n - S_m \le a_m$ via the following calculation:
$$S_n - S_m = \sum_{k=0}^n (-1)^k a_k - \sum_{k=0}^m (-1)^k a_k = \sum_{k=m+1}^n (-1)^k a_k = a_{m+1} - a_{m+2} + a_{m+3} - a_{m+4} + \cdots = a_{m+1} - (a_{m+2} - a_{m+3}) - (a_{m+4} - a_{m+5}) - \cdots.$$
Since $a_n$ is monotonically decreasing, the bracketed terms $-(a_k - a_{k+1})$ are nonpositive. Thus, we have the final inequality $S_n - S_m \le a_{m+1} \le a_m$. Similarly, it can be shown that $-a_m \le S_n - S_m$. Since $a_m$ converges to $0$, our partial sums $S_m$ form a Cauchy sequence (i.e., the series satisfies the Cauchy criterion) and therefore converge. The argument for $m$ even is similar.
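The estimate in the proof can also be observed numerically. The sketch below uses the illustrative choice $a_k = 1/(k+1)$ (not taken from the article) and checks that no later partial sum differs from $S_m$ by more than $a_{m+1}$:

```python
# A numeric illustration of the estimate in the proof, using the illustrative
# choice a_k = 1/(k+1), which decreases monotonically to 0.
def partial_sum(n, a):
    """S_n = sum_{k=0}^{n} (-1)^k a_k."""
    return sum((-1) ** k * a(k) for k in range(n + 1))

a = lambda k: 1.0 / (k + 1)

m = 10
S_m = partial_sum(m, a)
bound = a(m + 1)   # a_{m+1} = 1/12
# No later partial sum differs from S_m by more than a_{m+1}
# (the bound is attained at n = m + 1 and never exceeded afterwards).
assert all(abs(partial_sum(n, a) - S_m) <= bound + 1e-12 for n in range(m + 1, 300))
print("bound a_{m+1} =", bound)
```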
Approximating sums
The estimate above does not depend on $n$, so it carries over to the limit. Thus, if $a_n$ is approaching 0 monotonically, the estimate provides an error bound for approximating infinite sums by partial sums:
$$\left| \sum_{k=0}^\infty (-1)^k a_k - \sum_{k=0}^m (-1)^k a_k \right| \le |a_{m+1}|.$$
That does not mean that this estimate always finds the very first partial sum whose error is less than the modulus of the next term in the series. Indeed, if you take the series $1 - \tfrac12 + \tfrac13 - \tfrac14 + \cdots = \ln 2$ and try to find a partial sum whose error is at most 0.00005, the inequality above shows that summing about 20000 terms is enough, but in fact this is twice as many terms as needed: the error after summing the first 9999 terms is 0.0000500025, so the partial sum through the 10000th term is already sufficient. This series happens to have the property that constructing a new series with the terms $a_n - a_{n+1}$ also gives an alternating series to which the Leibniz test applies, which makes this simple error bound not optimal. This was improved by the Calabrese bound,[1] discovered in 1962, which shows that this property allows an error bound half the size of the Leibniz error bound. In turn, that bound is not optimal for series where this property applies two or more times, which is described by the Johnsonbaugh error bound.[2] If one can apply the property an infinite number of times, Euler's transform applies.[3]
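The figures quoted above are easy to reproduce. This small Python sketch (the helper name `partial` is ours, not from the article) evaluates partial sums of $1 - \tfrac12 + \tfrac13 - \cdots$ and compares the true error with the Leibniz bound:

```python
# Reproducing the figures quoted above for 1 - 1/2 + 1/3 - ... = ln 2
# (the helper name `partial` is ours, not from the article).
import math

target = math.log(2)

def partial(n):
    """Sum of the first n terms of 1 - 1/2 + 1/3 - ..."""
    s = 0.0
    for k in range(1, n + 1):
        s += (-1) ** (k + 1) / k
    return s

print(abs(target - partial(9999)))    # ~ 0.0000500025: just misses the 0.00005 target
print(abs(target - partial(10000)))   # ~ 0.0000499975: within the target
print(1 / 20001)                      # ~ 0.0000499975: the Leibniz bound a_{m+1} only
                                      # guarantees this after summing 20000 terms
```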
Theorem: Absolutely convergent series are convergent.
Proof: Suppose $\sum a_n$ is absolutely convergent. Then $\sum |a_n|$ is convergent, and it follows that $\sum 2|a_n|$ converges as well. Since $0 \le a_n + |a_n| \le 2|a_n|$, the series $\sum (a_n + |a_n|)$ converges by the comparison test. Therefore, the series $\sum a_n$ converges as the difference of two convergent series: $\sum a_n = \sum (a_n + |a_n|) - \sum |a_n|$.
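A brief numerical sketch of this decomposition, with the illustrative choice $a_n = (-1)^{n+1}/n^2$, a series that converges absolutely with sum $\pi^2/12$ (the cutoff of 100000 terms is arbitrary):

```python
# A numerical sketch of the decomposition sum a_n = sum(a_n + |a_n|) - sum |a_n|,
# using the illustrative absolutely convergent series a_n = (-1)^(n+1) / n^2.
import math

N = 100_000
a = [(-1) ** (n + 1) / n**2 for n in range(1, N + 1)]

lhs = sum(a)                                                  # sum a_n
rhs = sum(x + abs(x) for x in a) - sum(abs(x) for x in a)     # the decomposition
print(lhs, rhs, math.pi ** 2 / 12)                            # all ~ 0.8224670
```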
For any series, we can create a new series by rearranging the order of summation. A series is
unconditionally convergent if every rearrangement creates a series that converges to the same limit as the original series.
Absolutely convergent series are unconditionally convergent. But the
Riemann series theorem states that a conditionally convergent series can be rearranged to converge to any value at all, or even to diverge.[4] The general principle is that addition of infinite sums is only commutative for absolutely convergent series.
For example, one false proof that $1 = 0$ exploits the failure of associativity for infinite sums: regrouping $0 = (1-1) + (1-1) + \cdots$ as $1 + (-1+1) + (-1+1) + \cdots$ appears to yield $1$. As another example, the Mercator series gives
$$\ln 2 = \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} = 1 - \frac12 + \frac13 - \frac14 + \cdots.$$
But, since the series does not converge absolutely, we can rearrange the terms to obtain a series for $\tfrac12 \ln 2$:
$$\left(1 - \frac12\right) - \frac14 + \left(\frac13 - \frac16\right) - \frac18 + \left(\frac15 - \frac1{10}\right) - \frac1{12} + \cdots = \frac12 - \frac14 + \frac16 - \frac18 + \frac1{10} - \frac1{12} + \cdots = \frac12\left(1 - \frac12 + \frac13 - \frac14 + \cdots\right) = \frac12 \ln 2.$$
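The effect of this rearrangement is easy to see numerically. In the sketch below (the function names and term counts are illustrative choices), the original ordering approaches $\ln 2$ while the "one positive, two negative" ordering approaches $\tfrac12 \ln 2$:

```python
# The original ordering tends to ln 2, while the "one positive, two negative"
# rearrangement shown above tends to (1/2) ln 2. Function names and term
# counts are illustrative choices.
import math

def original(terms):
    return sum((-1) ** (k + 1) / k for k in range(1, terms + 1))

def rearranged(blocks):
    # Block i contributes 1/(2i-1) - 1/(4i-2) - 1/(4i), i.e.
    # 1 - 1/2 - 1/4 + 1/3 - 1/6 - 1/8 + ...
    return sum(1 / (2 * i - 1) - 1 / (4 * i - 2) - 1 / (4 * i) for i in range(1, blocks + 1))

print(original(100_000), math.log(2))         # ~ 0.693142  vs  0.693147
print(rearranged(100_000), math.log(2) / 2)   # ~ 0.346571  vs  0.346574
```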
Series acceleration
In practice, the numerical summation of an alternating series may be sped up using any one of a variety of
series acceleration techniques. One of the oldest techniques is that of
Euler summation, and there are many modern techniques that can offer even more rapid convergence.
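As a rough illustration of the speed-up such techniques can give, the sketch below applies the classical Euler transformation, which rewrites $\sum_{n\ge 0} (-1)^n a_n$ as $\sum_{k\ge 0} (-1)^k \Delta^k a_0 / 2^{k+1}$ with $\Delta$ the forward difference, to the alternating harmonic series; the function name and the 20-term truncation are illustrative choices, not a prescription from the article:

```python
# One common form of Euler's transformation applied to 1 - 1/2 + 1/3 - ...;
# the function name and the 20-term truncation are illustrative choices.
import math

def euler_transform_sum(a, terms):
    """Approximate sum_{n>=0} (-1)^n a(n) by sum_{k<terms} D_k / 2^(k+1),
    where D_k = sum_{j<=k} (-1)^j C(k, j) a(j) equals (-1)^k times the
    k-th forward difference of a at 0."""
    total = 0.0
    for k in range(terms):
        diff = sum((-1) ** j * math.comb(k, j) * a(j) for j in range(k + 1))
        total += diff / 2 ** (k + 1)
    return total

a = lambda n: 1.0 / (n + 1)                        # 1 - 1/2 + 1/3 - ... = ln 2

print(sum((-1) ** n * a(n) for n in range(20)))    # ~ 0.6688: 20 raw terms
print(euler_transform_sum(a, 20))                  # ~ 0.6931471: 20 transformed terms
print(math.log(2))                                 # 0.6931471805599453
```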