Latest revision as of 14:32, 8 December 2008
We will start this from the beginning with the series:
$ 1+r+r^2+r^3+\cdots+r^N=\frac{1}{1-r}-\frac{r^{N+1}}{1-r} $
From here we substitute $ r=-x^2 $ to get
$ 1-x^2+x^4-x^6+\cdots+(-1)^Nx^{2N}=\frac{1}{1+x^2}-\frac{(-1)^{N+1}x^{2(N+1)}}{1+x^2} $
Now integrate both sides from 0 to 1 with respect to x.
$ \int_0^1\left(1-x^2+x^4-x^6+\cdots+(-1)^Nx^{2N}\right)dx=\int_0^1\frac{1}{1+x^2}dx-\int_0^1\frac{(-1)^{N+1}x^{2(N+1)}}{1+x^2}dx $
$ 1-1/3+1/5-1/7+\cdots+\frac{(-1)^N}{2N+1}=\frac{\pi}{4}+\int_0^1\frac{(-1)^{N}x^{2(N+1)}}{1+x^2}dx $
Notice that $ \lim_{N\to\infty}\int_0^1\frac{(-1)^{N}x^{2(N+1)}}{1+x^2}dx=0 $, so the more terms we use on the left side of the equation, the closer we get to $ \frac{\pi}{4} $. This integral can therefore be thought of as the error term.
For the sake of space, we will give labels to several parts.
$ S(N)=1-1/3+1/5-1/7+\cdots+\frac{(-1)^N}{2N+1} $
$ E(N)=\int_0^1\frac{(-1)^{N}x^{2(N+1)}}{1+x^2}dx $
$ S(N)=\frac{\pi}{4}+E(N) $
We can call S(N) the sum and E(N) the error, since as you add more and more terms to the sum, the error becomes closer and closer to zero.
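As a quick numerical sanity check, here is a minimal Python sketch of these two quantities (the function names `S` and `E` are just labels mirroring the ones above; `E` is recovered from the identity $ S(N)=\frac{\pi}{4}+E(N) $ rather than by evaluating the integral):

```python
from math import pi

def S(N):
    """Partial sum 1 - 1/3 + 1/5 - ... + (-1)^N / (2N+1)."""
    return sum((-1)**n / (2*n + 1) for n in range(N + 1))

def E(N):
    """Error term, recovered from the identity S(N) = pi/4 + E(N)."""
    return S(N) - pi / 4

# The error alternates in sign: positive for even N, negative for odd N.
print(E(10) > 0, E(11) < 0)
```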
Just how quickly does this sum approach $ \frac{\pi}{4} $? To find out, we can estimate the error for any N by the following comparison:
for $ 0\le x\le 1 $
$ 0\le x^2\le 1 $
$ 1\le 1+x^2 \le 2 $
$ \frac{1}{2}\le \frac{1}{1+x^2}\le 1 $
so
$ \frac{x^{2(N+1)}}{1+x^2}\le x^{2(N+1)} $
And finally
$ \begin{align} |E(N)|&=\left|\int_0^1\frac{(-1)^{N}x^{2(N+1)}}{1+x^2}dx\right|\\ &=\int_0^1\frac{x^{2(N+1)}}{1+x^2}dx\\ &\le\int_0^1x^{2(N+1)}dx \end{align} $
Integrating the estimate gives us:
$ |E(N)|\le\frac{1}{2(N+1)+1}\le\frac{1}{2N} $
Wow, the sum of N terms can really only be guaranteed to be within $ \frac{1}{2N} $ of $ \frac{\pi}{4} $. That's a lot of terms we have to take to guarantee even two digits of accuracy. And it's even worse when we multiply everything by four to estimate $ \pi $.
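We can see how slow this really is with a short sketch (`S` is the same partial-sum helper named above), checking the $ \frac{1}{2N} $ bound for a few values of N:

```python
from math import pi

def S(N):
    """Partial sum 1 - 1/3 + 1/5 - ... + (-1)^N / (2N+1)."""
    return sum((-1)**n / (2*n + 1) for n in range(N + 1))

for N in (10, 100, 1000):
    err = abs(S(N) - pi / 4)
    # The bound 1/(2N) holds, but convergence is painfully slow.
    print(N, err, 1 / (2 * N))
```

Even at N = 1000 the plain sum pins down only about three digits of $ \pi $.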
We need a faster way. There's lots of faster ways. But let's say we're too lazy to find another method. We just want to work with this series. What can we do to make it converge to $ \frac{\pi}{4} $ faster?
First, let's notice that the partial sums alternate: each one overshoots $ \frac{\pi}{4} $ and the next undershoots it, so $ \frac{\pi}{4} $ always lies between two consecutive sums. Dr. Bell noticed this and posed the challenge to exploit this property in order to build a new estimate that converges more quickly. He said we should try taking the average of two consecutive sums. Since the target lies between the two sums, their average should yield a new estimate that is closer than either one. The challenge was to prove this true.
So we notice that for
$ \text{For N odd, } E(N)<0 $
$ \text{For N even, } E(N)>0 $
Now we add two consecutive sums together and divide by two to get the average:
$ \begin{align} \frac{S(N) + S(N+1)}{2} &= \frac{2\frac{\pi}{4}+E(N)+E(N+1)}{2}\\ &=\frac{\pi}{4}+\frac{E(N)+E(N+1)}{2} \end{align} $
So now we have a new error term, which also goes to zero:
$ \begin{align} \lim_{N\to\infty}\frac{E(N)+E(N+1)}{2}&=\lim_{N\to\infty}\frac{E(N)}{2}+\lim_{N\to\infty}\frac{E(N+1)}{2}\\ &=0+0\\ &=0 \end{align} $
And so the average still goes to $ \frac{\pi}{4} $ as N goes to infinity because:
$ \begin{align} \lim_{N\to\infty}\frac{S(N) + S(N+1)}{2}&=\lim_{N\to\infty}\frac{\pi}{4}+\frac{E(N)+E(N+1)}{2}\\ &=\frac{\pi}{4} + 0 \\ &= \frac{\pi}{4}\end{align} $
Okay, so the new series still approaches $ \frac{\pi}{4} $, but does it approach it more quickly? Again, we can use error estimation to predict the convergence of the series.
$ |E(N)|\le\frac{1}{2(N+1)+1}\le\frac{1}{2N} $
$ |E(N+1)|\le\frac{1}{2(N+2)+1}\le\frac{1}{2N+2} $
Also remember that exactly one of E(N) and E(N+1) is negative and that $ |E(N)|>|E(N+1)| $ for all N. So if we sum them, the absolute value of the sum will be the difference of their absolute values.
$ |E(N)+E(N+1)|=|E(N)|-|E(N+1)|\le\frac{1}{2N}-\frac{1}{2N+2}=\frac{2}{(2N)(2N+2)} $
Finally, the new error function is this sum divided by two.
$ |E'(N)|=\frac{|E(N)+E(N+1)|}{2}\le\frac{\frac{2}{(2N)(2N+2)}}{2}=\frac{1}{(2N)(2N+2)} $
Wow! By taking the average of two consecutive sums we can guarantee that the new error will be less than $ \frac{1}{(2N)^2} $.
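A sketch of this averaged estimate (assuming the same `S` helper as before; `S1` is my label for the average), checking the $ \frac{1}{(2N)(2N+2)} $ bound numerically:

```python
from math import pi

def S(N):
    """Partial sum 1 - 1/3 + 1/5 - ... + (-1)^N / (2N+1)."""
    return sum((-1)**n / (2*n + 1) for n in range(N + 1))

def S1(N):
    """Average of two consecutive partial sums."""
    return (S(N) + S(N + 1)) / 2

for N in (10, 100):
    err = abs(S1(N) - pi / 4)
    assert err <= 1 / ((2 * N) * (2 * N + 2))  # the new, quadratic bound
```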
So taking the mean of two consecutive sums is in fact much more accurate than the simple series, especially for large N. But what if we were to take the mean of two consecutive means? Does the same principle apply? Let's review.
The first average worked because for any two consecutive sums, one is too high and one too low. Furthermore, the absolute value of the second error is less than the absolute value of the first. Because of these two important characteristics, when we take the average of two consecutive sums, the absolute value of the average of their errors is actually half the difference of their absolute values. Upon further investigation, we see that since the absolute value of the first error is greater than that of the second, the error of the average has the same sign as the first error. In other words:
$ \text{For N even, }E'(N)>0 $
$ \text{For N odd, }E'(N)<0 $
And from our new error estimates we see that $ |E'(N)|>|E'(N+1)| $
Hooray! The same characteristics that allowed us to take averages of the original series and find a new estimate that was more accurate than both of the previous estimates are now present in this new method. So let's try to take the average of two consecutive averages from the beginning and see what happens.
$ \begin{align} \dfrac{\dfrac{S(N)+S(N+1)}{2}+\dfrac{S(N+1)+S(N+2)}{2}}{2}&=\frac{S(N)+2S(N+1)+S(N+2)}{4}\\ &=\frac{\pi+E(N)+2E(N+1)+E(N+2)}{4}\\ &=\frac{\pi}{4}+\frac{E(N)+2E(N+1)+E(N+2)}{4} \end{align} $
Once again we get a new error term, which also goes to zero: since each individual error goes to zero, their sum goes to zero as well. But again, how fast does it go to zero? Let's do some tinkering.
Remember from earlier that $ E'(N) = \frac{E(N)+E(N+1)}{2} $. So the average of two consecutive mean errors is
$ \begin{align} \frac{E'(N)+E'(N+1)}{2}&=\frac{\frac{E(N)+E(N+1)}{2}+\frac{E(N+1)+E(N+2)}{2}}{2}\\ &=\frac{E(N)+2E(N+1)+E(N+2)}{4}\end{align} $
Great! That's the same as the error of the new estimate. So the error of the mean of two consecutive means is the same as the mean of the errors of two consecutive means. Keeping track of what level the error is at is getting confusing. To make it clearer which error we're speaking of, let's use a subscript on the error function E(N) to denote how many rounds of averaging have been taken. This will come in handy later. So
$ E_0(N)=\text{error of original series}. $
$ E_1(N)=\frac{E_0(N)+E_0(N+1)}{2} $
$ E_2(N)=\frac{E_1(N)+E_1(N+1)}{2} $
$ E_k(N)=\frac{E_{k-1}(N)+E_{k-1}(N+1)}{2} $
Subscripts on the series function, S(N), can be used in the same way.
So, back to calculating the new error, $ E_2(N) $
$ |E_2(N)|=\frac{|E_1(N)+E_1(N+1)|}{2} $
Using the characteristics outlined above, we can make this next step
$ \begin{align} \frac{|E_1(N)+E_1(N+1)|}{2}&=\frac{|E_1(N)|-|E_1(N+1)|}{2}\\ &\le\frac{\frac{1}{(2N)(2N+2)}-\frac{1}{(2N+2)(2N+4)}}{2}\\ &=\frac{\frac{2N+4-2N}{(2N)(2N+2)(2N+4)}}{2}\\ &=\frac{2}{(2N)(2N+2)(2N+4)}\end{align} $
$ |E_2(N)|\le\frac{2}{(2N)(2N+2)(2N+4)}\le\frac{2}{(2N)^3} $
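Continuing the sketch, the average of two consecutive averages is the weighted sum $ \frac{S(N)+2S(N+1)+S(N+2)}{4} $, and the cubic bound can be checked numerically (again assuming the `S` helper from earlier; `S2` is my label):

```python
from math import pi

def S(N):
    """Partial sum 1 - 1/3 + 1/5 - ... + (-1)^N / (2N+1)."""
    return sum((-1)**n / (2*n + 1) for n in range(N + 1))

def S2(N):
    """Average of two consecutive averages of partial sums."""
    return (S(N) + 2 * S(N + 1) + S(N + 2)) / 4

for N in (10, 50):
    assert abs(S2(N) - pi / 4) <= 2 / (2 * N) ** 3  # the cubic bound
```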
Wow, so this is even faster convergence! Now the error shrinks like one over N cubed! And this is all with just the first N+3 terms of the series (indices 0 through N+2) and some adding and dividing.
I don't think I need to continue doing this to convince you that as we take more terms and more averages, we get a better estimate of $ \frac{\pi}{4} $. As we continue to take averages, the same telescoping estimate bounds the error by:
$ |E_k(N)|\le\frac{k!}{(2N)^{k+1}} $
So we can get really fast convergence with only a few extra terms. By reviewing the process of taking averages, we see that k is in fact the number of terms after N we need in order to form this new average. So we need N+k+1 terms in all (since there is a 0th term).
But part of the reason for showing that taking averages makes the series converge faster is to find an easier way to approximate $ \pi $. Taking all these averages doesn't sound very easy. Actually, it is: we can use a row of Pascal's triangle as the coefficients of consecutive sums and then divide by the sum of those coefficients to get the repeated average.
For instance, say we want to find $ S_4(N) $.
The 4th row of Pascal's triangle is 1 4 6 4 1; use these as the coefficients of the sums like this:
$ S_4(N)=\frac{S(N)+4S(N+1)+6S(N+2)+4S(N+3)+S(N+4)}{16} $
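As a sketch, $ S_4(N) $ can be computed directly from the row 1 4 6 4 1 (the `S` helper is the plain partial sum from earlier; `S4` is my label for this estimate):

```python
from math import pi

def S(N):
    """Partial sum 1 - 1/3 + 1/5 - ... + (-1)^N / (2N+1)."""
    return sum((-1)**n / (2*n + 1) for n in range(N + 1))

def S4(N):
    """Four rounds of averaging, via row 4 of Pascal's triangle."""
    coeffs = [1, 4, 6, 4, 1]
    return sum(c * S(N + i) for i, c in enumerate(coeffs)) / 16

# Four rounds of averaging beat the plain partial sum by orders of magnitude.
print(abs(S(10) - pi / 4), abs(S4(10) - pi / 4))
```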
To make matters even easier, notice that the coefficients in row k of Pascal's triangle always sum to $ 2^k $, which is exactly the divisor we need. So let's wrap this up by writing a generalized formula for this method.
Suppose we start at term N and want to use k more terms to make k rounds of averaging. And let $ P(S_k(N)) $ mean that you use the kth row of Pascal's triangle as the coefficients for the terms $ S(N), S(N+1), \ldots, S(N+k) $ respectively.
$ S_k(N)=\frac{P(S_k(N))}{2^k}=\frac{\pi}{4}+E_k(N) $
Where
$ |E_k(N)|\le\frac{k!}{(2N)^{k+1}} $
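The whole method fits in a few lines. In this sketch, `Sk(N, k)` is my name for an implementation of $ S_k(N)=\frac{P(S_k(N))}{2^k} $, using binomial coefficients for row k of Pascal's triangle:

```python
from math import pi, comb

def S(N):
    """Partial sum 1 - 1/3 + 1/5 - ... + (-1)^N / (2N+1)."""
    return sum((-1)**n / (2*n + 1) for n in range(N + 1))

def Sk(N, k):
    """k rounds of averaging: row k of Pascal's triangle over 2^k."""
    return sum(comb(k, i) * S(N + i) for i in range(k + 1)) / 2 ** k

# Each extra round of averaging buys roughly another factor of 1/(2N).
for k in range(5):
    print(k, abs(Sk(10, k) - pi / 4))
```

Note that `Sk(N, 0)` is just the plain partial sum, so this one function covers every level of averaging discussed above.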
And that concludes the exploration of taking averages of averages of alternating series. Further exploration, I believe, would show that this method generalizes to any alternating series whose terms decrease in absolute value toward zero, since those are exactly the characteristics that make it work for this series.