Sometimes, the divide and conquer approach seems appropriate but fails to produce an efficient algorithm.

We all know the recursive algorithm for calculating Fibonacci numbers:

```c
int fib( int n ) {
    if ( n < 2 ) return n;
    else return fib(n-1) + fib(n-2);
}
```

This algorithm is commonly used as an example of the elegance of recursion as a programming technique. However, when we examine its time complexity, we find that it's far from elegant!

#### Analysis

If *t*_{n} is the time required to calculate *f*_{n}, where *f*_{n} is the *n*^{th} Fibonacci number, then, by examining the function above, it's clear that

*t*_{n} = *t*_{n-1} + *t*_{n-2}

and

*t*_{1} = *t*_{2} = *c*

where *c* is a constant.

Therefore

*t*_{n} = *cf*_{n}, i.e. *t*_{n} = O(*f*_{n})

= O(1.618...^{n})

since *f*_{n} grows as φ^{n}, where φ = (1+√5)/2 ≈ 1.618 is the golden ratio, the dominant root of x² = x + 1, the characteristic equation of the Fibonacci recurrence.

#### An Iterative Solution

However, this simple alternative:

```c
int fib( int n ) {
    int k, f, f1, f2;
    if ( n < 2 ) return n;
    else {
        f = f1 = f2 = 1;
        for( k = 2; k < n; k++ ) {
            f = f1 + f2;
            f2 = f1;
            f1 = f;
        }
        return f;
    }
}
```

runs in O(n) time.

This algorithm solves the problem of calculating *f*_{n} by calculating *f*_{0} and *f*_{1} first, then *f*_{2} from these, then *f*_{3} from *f*_{2} and *f*_{1}, and so on. Thus, instead of dividing the large problem into two (or more) smaller problems and solving those problems (as we did in the **divide and conquer** approach), we start with the simplest possible problems. We solve them (usually trivially) and save these results. These results are then used to solve slightly larger problems which are, in turn, saved and used to solve larger problems again.

#### Free Lunch?

As we know, there's never one! Dynamic algorithms obtain their efficiency by solving and *storing* the answers to small problems. Thus they usually trade **space** for increased speed. In the Fibonacci case, the extra space is insignificant - just the variables f, f1 and f2 - but in some more complex dynamic algorithms, we'll see that the space used is significant.

## Key terms

**Dynamic Algorithm**- A general class of algorithms which solve problems by solving smaller versions of the problem, saving the solutions to the small problems and then combining them to solve the larger problem.