Cool, I follow it.  Interesting to see that I more or less postulated the error estimate for alternating series in the original discussion.

As a future EE student, you should find some estimates of how long each of these operations would take, processor-wise, and compare the efficiency of your algorithm for computing X digits to that of the simple arctangent integral estimate.

Another thought: Would it be better to start averaging immediately, say, for terms 1-20, or would it be better to go up to 10 and then begin averaging 10-20? Obviously the first would be better as far as digits go, but how much better, and which is more efficient? --[[User:Jmason|John Mason]]

* Good ideas! However, I'm going to have to hold off on those until I know how to answer them. Although, I think whether it's better to start later or earlier depends on how many digits you want. Remember that N is the base of the exponent, so the higher you start N, the more accurate the estimate is, obviously. Some day, I'll write an algorithm that figures out its efficiency. [[User:Jhunsber|His Awesomeness, Josh Hunsberger]]
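
A minimal sketch of how one might test the start-early-versus-start-late question empirically (my own illustration, assuming the repeated-averaging scheme quoted from Wikipedia in the section below, applied to the Leibniz series; the helper names are hypothetical):

<pre>
import math

def partial_sums(n):
    """First n partial sums of the Leibniz series 4/1 - 4/3 + 4/5 - ..."""
    s, out = 0.0, []
    for k in range(n):
        s += 4.0 * (-1) ** k / (2 * k + 1)
        out.append(s)
    return out

def repeated_average(row):
    """Average consecutive entries until a single value remains."""
    while len(row) > 1:
        row = [(a + b) / 2 for a, b in zip(row, row[1:])]
    return row[0]

sums = partial_sums(20)
early = repeated_average(sums)       # start averaging at term 1
late = repeated_average(sums[9:])    # only average partial sums 10 through 20
print(abs(early - math.pi), abs(late - math.pi))
</pre>

Both variants spend the same 20 series terms; starting early just buys more averaging passes, so the efficiency question comes down to how cheap an averaging pass is relative to computing another term.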

== The Old Stuff ==

Yikes, wish I had better experience with series.
 
This seems plausible and all, and I can see how it would work for the challenge problem, but it seems to me that other averages may not actually make the error smaller, which is a fact I think you realize.  I guess only sums that oscillate around their ultimate value will act this way.  I'd rather take the easy way out and take more points. --[[User:Jmason|John Mason]]
 
* Well, I also realized about halfway through that it most definitely isn't a Taylor series, is it? Major brain fart.

But yes, the averages of averages should oscillate around the actual value, since each average goes back and forth between being too big and too small, and since they're consecutive we can show how the error bound gets smaller. (Whether the actual error gets smaller I'm not sure, but I know that I can guarantee it is within a certain amount with more and more accuracy.)
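
That oscillation claim is easy to check numerically. A small sketch (my addition, again on the Leibniz series): at each averaging level the entries alternate between overshooting (+) and undershooting (-) π, so the true value is trapped between neighbours and the largest neighbour gap bounds the error.

<pre>
import math

# Partial sums of the Leibniz series 4/1 - 4/3 + 4/5 - ...
row, s = [], 0.0
for k in range(12):
    s += 4.0 * (-1) ** k / (2 * k + 1)
    row.append(s)

for level in range(4):
    signs = ''.join('+' if x > math.pi else '-' for x in row)
    gap = max(abs(a - b) for a, b in zip(row, row[1:]))
    print(f"level {level}: {signs}  largest neighbour gap {gap:.2e}")
    row = [(a + b) / 2 for a, b in zip(row, row[1:])]  # next level of averages
</pre>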

* I've changed the name now. Still not descriptive enough, though. [[User:Jhunsber|His Awesomeness, Josh Hunsberger]]

Last night I had a much better idea on how to show the average of consecutive averages, which I will put in the page when I have time. It involves rows of Pascal's triangle over powers of two. [[User:Jhunsber|His Awesomeness, Josh Hunsberger]]
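
For what it's worth, that identity can be verified numerically: averaging consecutive partial sums <math>i</math> times weights the original partial sums by row <math>i</math> of Pascal's triangle divided by <math>2^i</math>. A minimal sketch of the check (my own, not Josh's write-up):

<pre>
from math import comb

def leibniz_partial(j):
    """j-th partial sum (j >= 1) of the Leibniz series 4/1 - 4/3 + 4/5 - ..."""
    return sum(4.0 * (-1) ** k / (2 * k + 1) for k in range(j))

def avg_recursive(i, j):
    """i-fold average of consecutive partial sums, via the recurrence."""
    if i == 0:
        return leibniz_partial(j)
    return (avg_recursive(i - 1, j) + avg_recursive(i - 1, j + 1)) / 2

def avg_binomial(i, j):
    """The same value written directly as a Pascal row over a power of two."""
    return sum(comb(i, k) * leibniz_partial(j + k) for k in range(i + 1)) / 2 ** i

print(avg_recursive(5, 3))   # the two agree up to float rounding
print(avg_binomial(5, 3))
</pre>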

* I've decided to start completely over from scratch. I'm going to start from the beginning, 1+r+r^2 and all. From there I will show the error of estimating pi and then the decreasing error by using multimeans. [[User:Jhunsber|His Awesomeness, Josh Hunsberger]]

* Okay, so I decided to see what Wikipedia has to say:

π can also be calculated using purely mathematical methods. Most formulas used for calculating the value of π have desirable mathematical properties, but are difficult to understand without a background in [[trigonometry]] and [[calculus]]. However, some are quite simple, such as this form of the [[Leibniz formula for pi|Gregory-Leibniz series]]:<ref>{{cite book |first=Pierre |last=Eymard |coauthors=Jean-Pierre Lafon |others=Stephen S. Wilson (translator) |title=The Number &pi; |url=http://books.google.com/books?id=qZcCSskdtwcC&pg=PA53&dq=leibniz+pi&ei=uFsuR5fOAZTY7QLqouDpCQ&sig=k8VlN5VTxcX9a6Ewc71OCGe_5jk |accessdate=2007-11-04 |year=2004 |month=02 |publisher=American Mathematical Society |isbn=0821832468 |pages=53 |chapter=2.6}}</ref>

:<math>\pi = \frac{4}{1}-\frac{4}{3}+\frac{4}{5}-\frac{4}{7}+\frac{4}{9}-\frac{4}{11}\cdots\!</math>.

While that series is easy to write and calculate, it is not immediately obvious why it yields π. In addition, this series converges so slowly that 300 terms are not sufficient to calculate '''π''' correctly to 2 decimal places.<ref>{{cite journal |url=http://www.scm.org.co/Articulos/832.pdf |format=PDF |title=Even from Gregory-Leibniz series &pi; could be computed: an example of how convergence of series can be accelerated |journal=Lecturas Mathematicas |volume=27 |year=2006 |pages=21–25 |first=Vito |last=Lampret |accessdate=2007-11-04}}</ref> However, by computing this series in a somewhat more clever way by taking the midpoints of partial sums, it can be made to converge much faster. Let

:<math>\pi_{0,1} = \frac{4}{1},\ \pi_{0,2} = \frac{4}{1}-\frac{4}{3},\ \pi_{0,3} = \frac{4}{1}-\frac{4}{3}+\frac{4}{5},\ \pi_{0,4} = \frac{4}{1}-\frac{4}{3}+\frac{4}{5}-\frac{4}{7},\ \cdots</math>

and then define

:<math>\pi_{i,j} = \frac{\pi_{i-1,j}+\pi_{i-1,j+1}}{2}</math> for all <math>i,j\ge 1</math>,

then computing <math>\pi_{10,10}</math> will take similar computation time to computing 150 terms of the original series in a brute-force manner, and <math>\pi_{10,10}=3.141592653\cdots</math>, correct to 9 decimal places. This computation is an example of the [[Van Wijngaarden transformation]].<ref>A. van Wijngaarden, in: Cursus: Wetenschappelijk Rekenen B, Process Analyse, Stichting Mathematisch Centrum (Amsterdam, 1965), pp. 51-60.</ref>
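
A minimal sketch reproducing that computation from the recurrence above (my addition; plain Python floats, so the trailing digits are at the mercy of rounding):

<pre>
import math

# pi_{0,1} through pi_{0,20}: partial sums of the Leibniz series.
row, s = [], 0.0
for k in range(20):
    s += 4.0 * (-1) ** k / (2 * k + 1)
    row.append(s)

# Apply the averaging recurrence ten times; row[j] then holds pi_{10,j+1}.
for _ in range(10):
    row = [(a + b) / 2 for a, b in zip(row, row[1:])]

approx = row[9]  # pi_{10,10}
print(approx, abs(approx - math.pi))  # 9 correct decimals, as claimed above
</pre>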

This makes me slightly less excited, but I will try my best to explain to you all why it works. [[User:Jhunsber|His Awesomeness, Josh Hunsberger]]

Wow, that got ''much'' simpler. Makes sense, though, and I can actually visualize this result. Are you working on what the error is for this new series? --[[User:Jmason|John Mason]]

* Yes, I plan on showing the error for any number of averages you may take. [[User:Jhunsber|His Awesomeness, Josh Hunsberger]]

* The preparation is complete. Now the next thing I will do is prove that the error decreases after taking one average, then two, then any number of averages after that, and then propose the method with which to calculate this estimate. [[User:Jhunsber|His Awesomeness, Josh Hunsberger]]
