To truly compute it, you would probably have to use a Maclaurin series:
http://mathworld.wolfram.com/InverseTangent.html
There are likely numerical methods to speed that up, however.
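For reference, the series on that page is atan(x) = x - x^3/3 + x^5/5 - x^7/7 + ..., valid for |x| <= 1 (and slow to converge near |x| = 1). A minimal C sketch, just to show the shape of it (the function name and cutoffs are my own):

```c
#include <math.h>

/* Maclaurin series for atan(x), valid for |x| <= 1:
 *   atan(x) = x - x^3/3 + x^5/5 - x^7/7 + ...
 * Converges quickly for |x| well below 1, very slowly near |x| = 1. */
double atan_maclaurin(double x)
{
    double term = x;   /* x^(2k+1), without the 1/(2k+1) factor */
    double sum  = 0.0;
    double sign = 1.0;
    for (int k = 0; k < 10000 && fabs(term) > 1e-15; ++k) {
        sum  += sign * term / (2 * k + 1);
        term *= x * x;   /* advance to the next odd power */
        sign  = -sign;   /* alternate the sign */
    }
    return sum;
}
```

For arguments outside [-1, 1] you'd first reduce via atan(x) = pi/2 - atan(1/x).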
Depending on your problem domain and range, and the processor's memory
capacity, it is frequently simpler to precompute a range of values,
store them in a table, and then interpolate as needed at runtime. This
tends to work well in PIC-type applications where you are not usually
looking for 14 digits of precision. This would be my preferred approach,
assuming I could store enough values to make interpolation reasonable
for the desired precision in the problem set...
--
Yep, that's the way to do it. Unfortunately, atan is a nasty function to
interpolate over a fixed-interval domain. I've found that building a
logarithmic interval domain using the binary bit pattern is a useful approach.
One way I've achieved this in the past is by working with fixed-point math,
with an integer part in the high byte of a 16-bit word and a fractional
part in the low-order byte. Thus the values 1 to 65535 can represent the
numbers 1/256 to 255 and 255/256ths, or about 0.004 to 255.996. A
logarithmically indexed lookup table of 32 entries can be made using the
values 1, 2, 3, 4, 6, 8, 12, ... (that is, 2^n, 2^n + 2^(n-1), 2^(n+1),
2^(n+1) + 2^n, ...). To get greater accuracy at the expense of a larger
table you can use 3 bits instead of 2. You can still get a reasonable
approximation with a linear interpolation from this. I worked out the
maximum error once but can't remember it; I think it was on the order of
0.1 degree for 32 entries.
I'm not sure you'll find this method in any textbook, as I came up with
it independently (but it probably does exist and is named after some Indian
mathematician somewhere).
Cheers