While we all know that matrix exponentiation is an asymptotically fast way to compute the Nth Fibonacci number, there is a still faster way (fewer multiplications per step):
If we know F(k) and F(k+1), then we can find
F(2k) = F(k) · [2F(k+1) − F(k)]
F(2k+1) = F(k+1)^2 + F(k)^2
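As a quick sanity check of the two identities, here is a small sketch plugging in k = 5 (so F(5) = 5 and F(6) = 8, in the 1-indexed sequence 1, 1, 2, 3, … used in this post):

```python
# Check the doubling identities for k = 5, with F(5) = 5 and F(6) = 8.
fk, fk1 = 5, 8
f10 = fk * (2 * fk1 - fk)   # F(2k)   = F(k) * (2*F(k+1) - F(k))
f11 = fk1**2 + fk**2        # F(2k+1) = F(k+1)^2 + F(k)^2
print(f10, f11)  # 55 89
```

Both match the sequence: F(10) = 55 and F(11) = 89.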
This is called fast doubling. Here is my Python implementation:
# Fast Fibonacci via fast doubling, with memoization
mem = dict()

def fib(n):
    if n <= 2:
        return 1
    if n in mem:
        return mem[n]
    k = n // 2
    a = fib(k + 1)
    b = fib(k)
    if n % 2 == 1:
        ans = a * a + b * b    # F(2k+1) = F(k+1)^2 + F(k)^2
    else:
        ans = b * (2 * a - b)  # F(2k) = F(k) * (2*F(k+1) - F(k))
    mem[n] = ans
    return ans

if __name__ == "__main__":
    n = int(input("Enter n (0 to quit): "))
    while n:
        print(fib(n))
        n = int(input("Enter n (0 to quit): "))
Note: I have taken the sequence as 1, 1, 2, 3, … However, some people consider the sequence as 0, 1, 1, 2, 3, …
Note 2: For large ‘n’ (~ 10^8), the intermediate values are very large. Hence most of the time is spent multiplying such big numbers.
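To see why multiplication dominates, here is a back-of-the-envelope sketch (my own estimate, not from the post): F(n) grows like φ^n / √5, so it has roughly n·log10(φ) ≈ 0.209·n decimal digits. For n ~ 10^8 that is about 2 × 10^7 digits per operand.

```python
import math

# Estimate the number of decimal digits of F(n) from Binet's formula:
# F(n) ~ phi**n / sqrt(5), so digits(F(n)) ~ n*log10(phi) - log10(5)/2.
phi = (1 + math.sqrt(5)) / 2

def approx_digits(n):
    return int(n * math.log10(phi) - math.log10(5) / 2) + 1

# F(100) = 354224848179261915075, which has 21 digits.
print(approx_digits(100))  # 21
```

Multiplying integers of this size is far from constant time, so the O(log n) count of arithmetic operations does not translate directly into O(log n) running time.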
Hi, good one. Just noted this last comment: “Note: I have taken the sequence as 1, 1, 2, 3… However some people consider the sequence as 0, 1, 1, 2, 3…”
In fact both sequences are the same (and both are correct). When some people start the sequence at 0, the index n starts at 0; when others start the sequence at 1, n starts at 1. Every standard reference I have seen does this. That means both are the same; we just need to say “n starts at 1” rather than “the sequence starts at 1”.
This is the paper: http://www.m-hikari.com/imf-password2008/1-4-2008/ulutasIMF1-4-2008.pdf
Your Python implementation is poor; the double recursion turns what should be an O(log n) algorithm into an O(n) one.
Hi Jason,
Thanks for pointing this out. I have added memoization to the code, so it should now be of the order of O(log n). However we can’t really claim O(log n) running time, because for large ‘n’ we must factor in the multiplication cost as well.
Memoization doesn’t really help much here (vs. in the “slow” recursive approach of fib(n) = fib(n-1) + fib(n-2)) because the call to fib(n) is “sparse” in terms of the recursion steps. Try profiling it.
… in other words, in the “slow” memoized definition of fib, we do things like this. (And NOTE that we call fib(n-2) *before* calling fib(n-1) )
fib(6) = fib(4) + fib(5)
fib(4) = fib(2) + fib(3)
fib(2) = fib(0) + fib(1)
fib(0) = 1
fib(1) = 1
–> fib(2) = 2
fib(3) = fib(1) + fib(2) = 3 [memoized lookup]
–> fib(4) = 5
fib(5) = fib(3) + fib(4) = 8 [memoized lookup]
–> fib(6) = 13
But in this version of fib(), the call tree looks like this (worst case: n = 2^k − 1):
fib(63) requires fib(31), fib(32)
fib(31) requires fib(15), fib(16)
fib(32) requires fib(16), fib(17)
fib(15) requires fib(7), fib(8)
fib(16) requires fib(8), fib(9)
fib(17) requires fib(8), fib(9)
fib(7) requires fib(3), fib(4)
fib(8) requires fib(4), fib(5)
fib(9) requires fib(4), fib(5)
fib(3) requires fib(1), fib(2)
fib(4) requires fib(2), fib(3)
fib(5) requires fib(2), fib(3)
and then the rest of the calls are either base case or memoized.
Great mathematical approach. It is more about maths than programming.
It’s worth mentioning that a bitwise AND with 1 (n & 1) can be faster than modulo (n % 2) for testing parity.
http://stackoverflow.com/questions/1089936/is-faster-than-when-checking-for-odd-numbers
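For completeness, the two tests agree for non-negative integers, since n & 1 extracts the lowest bit (a quick sketch; in CPython the speed difference is usually small, and the linked discussion concerns other languages too):

```python
# n & 1 is the lowest bit of n, which equals n % 2 for n >= 0.
for n in range(10):
    assert (n & 1) == (n % 2)
print(7 & 1, 8 & 1)  # 1 0
```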