
Taylor Series


Learning about Taylor series can be quite painful. When you first see them, you don’t really know what’s going on; then you’ll be frustrated by having to do potentially tricky exercises with computations of the convergence radius of such series; and in the end you’ll be left wondering what the heck these series are for. And that’s just Analysis I. Once you start working with several variables, you may meet Taylor series again in a form that either looks exactly like the one-dimensional formula, just with every index now being a ‘multi-index’ (which means there is a lot of implicit cleverness hiding beneath the surface), or in a form that’s more explicit and useful but a downright notational nightmare as well.

The Taylor expansion of f at the point a:

    f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} (x - a)^n

Only later do you start appreciating Taylor series. Possibly in complex analysis, where they gain a lot of power and simplify things considerably, or, for the more practically minded, because Taylor’s theorem gives you a way to approximate things and get a bound for the error. Physicists are particularly notorious for saying things like ‘using the first two terms of the Taylor series will give a precise result’. Which is totally in character, as the statement is usually blatantly wrong, but still true for all practical purposes.

The most famous Taylor series is probably that of the exponential function, expanded at 0. Evaluate it at 1 and you can approximate Euler’s number. It’s surprisingly easy to do and converges surprisingly quickly.

Computing exp(1):

    e = \exp(1) = \sum_{n=0}^{\infty} \frac{1}{n!} = 1 + 1 + \frac{1}{2} + \frac{1}{6} + \frac{1}{24} + \dots

I.e. the first five terms already give 2.5 + 0.1666… + 0.04166… = 2.70833…, with the remaining error being of the order of the first omitted term, 1/120 (it actually comes to just under 1/100). Not too shabby.
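For the curious, here is a minimal C sketch (nothing official, just naively summing the series) that prints the partial sums:

    /* Approximate Euler's number e = exp(1) by summing the first
       few terms of the Taylor series of exp at 0. */
    #include <stdio.h>

    int main(void) {
        double term = 1.0; /* current term 1/n!, starting at n = 0 */
        double sum = 0.0;  /* running partial sum of the series */
        for (int n = 0; n < 10; n++) {
            sum += term;
            printf("%2d terms: %.10f\n", n + 1, sum);
            term /= n + 1; /* turn 1/n! into 1/(n+1)! */
        }
        return 0;
    }

After five terms this prints 2.7083333333, matching the hand computation above.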

To perhaps make students think about this and its relevance a bit more, I thought I should point out that when their calculators or computers need to compute trigonometric functions, Taylor series expansions are one way they could achieve that magic. And then I started wondering whether in practice they actually do so. That turned out to be a bit tricky to find out.
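Just to make the textbook idea concrete before going on, a naive implementation along those lines might look like the following sketch (only valid for smallish arguments, and, as it turns out below, not how real libraries do it):

    /* sin(x) via its truncated Taylor series at 0:
       x - x^3/3! + x^5/5! - ...; fine for small x. */
    #include <stdio.h>

    static double taylor_sin(double x) {
        double term = x; /* current term, starting with x^1/1! */
        double sum = x;
        for (int n = 1; n <= 10; n++) {
            /* derive the next odd-degree term from the previous one */
            term *= -x * x / ((2 * n) * (2 * n + 1));
            sum += term;
        }
        return sum;
    }

    int main(void) {
        /* for comparison: sin(1) = 0.841470984807897... */
        printf("taylor_sin(1) = %.15f\n", taylor_sin(1.0));
        return 0;
    }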

Xcode’s help didn’t mention how sin and cos are implemented on the Mac. The best I could find there was a note on Mac OS X PowerPC numerics. The document contains choice passages like this one:

The cosine, sine, and tangent functions use an argument reduction based on the remainder function (see page 5-11) and the constant pi, where pi is the nearest approximation of π with 53 bits of precision. The cosine, sine, and tangent functions are periodic with respect to the constant pi, so their periods are different from their mathematical counterparts and diverge from their counterparts when their arguments become very large.

And that strongly suggests that no Taylor series are used there. In fact, the whole thing is based on a constant π. Which of course is wrong because computer architectures aren’t good at storing irrational numbers. Eeek! Another document on the web (link, as well as many of the others, kindly provided by Tom) suggests that the problem isn’t really PPC specific and that people found Java’s performance ‘bad’ on ‘x87’ because the virtual machine tries to avoid that problem. Amusing quote from that text:

So Java is accurate, but slower. I’ve never been a fan of “fast, but wrong” when “wrong” is roughly random(). Benchmarks rarely test accuracy. “double sin(double theta) { return 0; }” would be a great benchmark-compatible implementation of sin(). For large values of theta, 0 would be arguably more accurate since the absolute error is never greater than 1. fsin/fcos can have absolute errors as large as 2 (correct answer=1; returned result=-1).

While Java might do better by moving all arguments for these functions into the [0, 2π[ range, this suggests it still doesn’t necessarily use Taylor series. The next question would be whether Taylor series expansions actually make sense performance-wise for computers or whether programmers came up with more efficient algorithms for their limited usage cases.
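One remark on the performance side, for what it’s worth: evaluating a polynomial itself is cheap once you have one, since a degree-n polynomial costs just n multiply-adds with Horner’s scheme; the interesting questions are how the coefficients are chosen and how the argument is reduced. A small illustration (using Taylor coefficients purely for the example):

    /* Horner's scheme: evaluate c[0] + c[1]*y + ... + c[n]*y^n
       with n multiply-adds. */
    #include <stdio.h>

    static double horner(const double *c, int n, double y) {
        double r = c[n];
        for (int i = n - 1; i >= 0; i--)
            r = r * y + c[i];
        return r;
    }

    int main(void) {
        /* sin(x) ~ x * (1 - x^2/6 + x^4/120 - x^6/5040) for small x */
        const double c[] = { 1.0, -1.0 / 6, 1.0 / 120, -1.0 / 5040 };
        double x = 0.5;
        printf("%.15f (sin(0.5) = 0.479425538604203)\n",
               x * horner(c, 3, x * x));
        return 0;
    }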

All this still doesn’t make a load of sense to me. Most likely because I’m not a programmer and because strategies for implementing such non-trivial computations can vary wildly with the times, fashion, the hardware they run on and the program they run in. Now I’ve become curious and I’d like to know how various processors and maths libraries compute their trigonometry. So what about making a little table with results for sin(1), sin(100000) and sin(100000000) run on different systems? In fact, we have started making one already, which will appear in the next post. Be sure to join in and provide more examples – particularly if you have exotic hardware.
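If you want to contribute, a tiny test program like this one (assuming a C compiler and the system’s math library; compile with something like cc sintest.c -lm) is all it takes:

    /* Print the three test values using the system's libm. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const double args[] = { 1.0, 100000.0, 100000000.0 };
        for (int i = 0; i < 3; i++)
            printf("sin(%.0f) = %.17g\n", args[i], sin(args[i]));
        return 0;
    }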


January 19, 2008, 20:02

Tagged as software, uni.

Comments

Comment by d.w.:

I’m fairly hopeless at advanced math, but thanks to the business my employer is in I have access to a pretty wide array of computing platforms. If you have access to somewhat portable C or Fortran code I could probably run some trials on a fairly wide range of architectures.

January 20, 2008, 0:50

Comment by ssp:

That would be great. We’ve already got a few results which I’ll put up soon, but a slightly broader range would be very cool. I’ll send you the details in an e-mail.

January 20, 2008, 1:13

Comment by Steve Canon:

As it happens, I work in the numerics group at Apple. We’re the ones who implement these functions for OS X. I’m not going to go into much detail here, but I will tell you that we do not, in fact, use Taylor series to compute sin( ) in the C math library.

Taylor series are incredibly useful for analysis, but they are decidedly suboptimal for computing approximations of transcendental functions. The reason for this is that the approximation error of a Taylor polynomial is not spread evenly across the interval of approximation, but is instead generally bunched up at the end of the interval. Other types of polynomial approximations (Chebyshev and Minimax polynomials are the most commonly used) have much better error characteristics from a numerical standpoint. You might look at the (surprisingly accurate) wikipedia page on Approximation theory for more information: http://en.wikipedia.org/wiki/Approximation_theory
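To see this bunching numerically, one can compare the degree-7 Taylor polynomial of sin against the real thing on [0, π/4]: near 0 it is essentially exact, while at the end of the interval the error grows to about 3·10⁻⁷ (a quick illustrative sketch, not library code):

    /* How the error of a Taylor polynomial bunches up at the end of
       the interval: degree-7 Taylor polynomial of sin on [0, pi/4].
       (M_PI is a POSIX extension to <math.h>.) */
    #include <math.h>
    #include <stdio.h>

    static double taylor7(double x) {
        double x2 = x * x;
        /* x - x^3/6 + x^5/120 - x^7/5040, in nested form */
        return x * (1 - x2 / 6 * (1 - x2 / 20 * (1 - x2 / 42)));
    }

    int main(void) {
        for (int i = 0; i <= 4; i++) {
            double x = i * (M_PI / 4) / 4; /* 0, pi/16, ..., pi/4 */
            printf("x = %.4f  |error| = %.3e\n",
                   x, fabs(taylor7(x) - sin(x)));
        }
        return 0;
    }

The error grows towards π/4 roughly like x⁹/9!, which is exactly the skewed error distribution that Chebyshev and minimax approximations avoid.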

I would also make the following notes for you:

  1. In the Apple math library the computation of many transcendental functions breaks roughly into two stages — argument reduction and a core approximation. Polynomial approximations are generally only used in the core approximation. For sin, the bulk of the computational difficulty is in argument reduction, not in the core approximation.

  2. The PowerPC Numerics document you found is just that, a PowerPC numerics document. It does not accurately reflect the behavior of the standard math library on Intel processors. As of OS X Leopard, we use a so-called “infinite pi” argument reduction on Intel machines, not the 53-bit reduction used on PPC. This allows us to supply a very accurate result from the trig functions on the entire range of double-precision numbers. [Incidentally, using a 53-bit approximation isn’t as damaging to computations as you might think. No less an authority than William Kahan (Turing Award winner, one of the key authors of the first IEEE-754 arithmetic spec) has often been known to espouse the merits of finite-pi reductions, as they are easier to make performant, and the loss of accuracy is very infrequently detrimental as long as certain key equalities are still approximately satisfied].

  3. If you’re curious about the algorithms used in the math library, you should download the source and have a look for yourself! libm is part of Darwin, and as such the source is freely available from www.opensource.apple.com. If you’re really interested in how we construct approximations for transcendental functions, I’d be happy to answer more questions via email.

February 13, 2008, 9:29

Comment by ssp:

Thanks for your detailed answer, Steve. That’s easily more helpful than anything I found in the Google results I got. Even though it confirms my suspicion that something more efficient than Taylor series is used on the computer.

Also thanks for mentioning libm – Xcode has managed to ‘just work’ enough to completely hide the name of that library from me so far. I wouldn’t even have known where to look. (I had searched that open source page for ‘math’ before, because I included math.h in my code, but that didn’t find anything.)

That said, I’d need to spend more time with that code to fully understand and appreciate the techniques it uses, as I have very little knowledge about all the approximation and algorithm things and it’s been ages since I programmed anything lower-level than Cocoa. (To be honest I’m not even entirely sure which function gets called… blame my lack of enthusiasm to know what stuff like extern means.)

A – admittedly somewhat tangential – question I’d have is why the absurd computation of sin(10^50) gives different values on 32- and 64-bit Intel, as well as on the G4 (with the 64-bit Intel one being what I’d consider correct, if that term makes sense in this context).

February 13, 2008, 23:26

Comment by Steve Canon:

You should get the same results for sin( ) on OS X Leopard on Intel whether the program is running in 64- or 32-bit mode. You should only be able to get different results by running the program on a different OS, or by running on a PPC machine. The breakdown is something like this:

OS X on PPC: argument reduction with 53-bit precise pi.

Pre-Leopard on Intel: argument reduction with 67-bit precise pi (what you get using the fprem1 instruction on the x87 FPU).

Leopard on Intel: argument reduction with “infinitely precise” pi. (We don’t actually have an infinitely precise representation of pi, of course — but you only need about 1100 bits of precision to give an accurate result over the entire number range, so it’s just as good as having an infinitely precise representation.)
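The effect of the pi precision in the reduction is easy to reproduce: reduce a large argument modulo 2π using pi at different precisions before calling sin (a rough sketch, assuming long double is wider than double, as on x87; the exact digits printed will vary by platform):

    /* Reduce a large argument modulo 2*pi with a 53-bit double pi
       versus an extended-precision long double pi, then take sin of
       the reduced argument. The two reductions disagree because the
       rounding error in pi is multiplied by the number of periods
       folded away. (M_PI is a POSIX extension to <math.h>.) */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double x = 100000000.0;
        long double pil = 3.14159265358979323846264338327950288L;
        double r53 = fmod(x, 2.0 * M_PI);                        /* 53-bit pi  */
        double rext = (double)fmodl((long double)x, 2.0L * pil); /* ~64-bit pi */
        printf("53-bit reduction:   sin = %.17f\n", sin(r53));
        printf("extended reduction: sin = %.17f\n", sin(rext));
        printf("libm directly:      sin = %.17f\n", sin(x));
        return 0;
    }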

February 14, 2008, 0:46

Comment by ssp:

I see – the machine we tested 32-bit Intel on was still running X.4. So that would explain the difference.

Thanks for clarifying!

February 14, 2008, 8:23
