Example: Fibonacci numbers
F_{0} = 1
F_{1} = 1
F_{N} = F_{N-1} + F_{N-2}    for N >= 2     <---- recurrence relation

int N;

for ( i = 0; i < N; i++ )
    do_marker();         // Marker operation

(Clearly, C_{n} = n
What we want to do now is to find C_{n} using a recurrence relation)
   i             =  0   1   2   3   ...   n-1
   # do_marker() :  1   1   1   1   ...    1
                    \_______________________/
                    amount of work done = C_{n}
So we have that:

   C_{n+1} = C_{n} + 1

(when N grows from n to n+1, the loop executes do_marker() exactly one more time)
Typical initial conditions:

   the value of C_{n} for the smallest input size, e.g., C_{0} or C_{1}

Example: to solve

   C_{n+1} = C_{n} + 1

you need one initial condition, because the relation contains 2 unknowns: C_{n} and C_{n+1}

Example:
int N = 1;

for ( i = 0; i < N; i++ )
    do_marker();         // Marker operation
The running time (= # times that do_marker() is executed) when N = 1 is: 1
Therefore:

   C_{1} = 1


rather than:

Example:
   C_{n+1} = C_{n} + 1

   Substitution:  n = m - 1

   Result:  C_{(m-1)+1} = C_{(m-1)} + 1

            C_{m} = C_{m-1} + 1

   And after replacing (renaming) m by n:

            C_{n} = C_{n-1} + 1
Therefore, the following recurrence relations are equivalent:

   Recurrence relation 1          Recurrence relation 2

   C_{n+1} = C_{n} + 1            C_{n} = C_{n-1} + 1

We can substitute n with n-1 directly.
Example:
   C_{n+1} = C_{n} + 1

   Substitution:  n => n - 1

   Result:  C_{(n-1)+1} = C_{(n-1)} + 1

            C_{n} = C_{n-1} + 1
int N;

for ( i = 0; i < N; i++ )
    for ( j = 0; j < N; j++ )
        do_marker();         // Marker operation

(Clearly, C_{n} = n^{2}
What we want to do now is to find C_{n} using a recurrence relation)
          i =  0   1   2   3   ...  n-1  |  n
   j = 0       *   *   *   *   ...   *   |  +
   j = 1       *   *   *   *   ...   *   |  +
   j = 2       *   *   *   *   ...   *   |  +
   ...                                   |
   j = n-1     *   *   *   *   ...   *   |  +
   ---------------------------------------------
   j = n       +   +   +   +   ...   +   |  +

The entries marked * are the times that do_marker() is executed when N = n.
The entries marked + are the additional times that do_marker() is executed when N = n+1 (there are 2n + 1 of them).

   C_{n+1} = C_{n} + 2n + 1

   n => n-1   ==>  C_{(n-1)+1} = C_{(n-1)} + 2(n-1) + 1

             <==>  C_{n} = C_{n-1} + 2n - 2 + 1

             <==>  C_{n} = C_{n-1} + 2n - 1
Therefore, we need to discover one initial condition
Compute the running time for N = 1:
int N = 1;

for ( i = 0; i < N; i++ )
    for ( j = 0; j < N; j++ )
        do_marker();         // Marker operation
We find that:

   C_{1} = 1

You can take advantage of the fact that the items in the array are sorted to speed up the search: compare x against the middle item; if they are equal, the search is done; if x is smaller, continue the search in the first half of the remaining array; otherwise, continue the search in the second half.

Example:
input:  int item[N];

int left, right;

left = 0;
right = N - 1;

while ( left <= right )
{
    int middle = (left + right)/2;

    if ( item[middle] == x )    // found
        return( middle );

    /* -------------------------------
       Not found, continue search...
       ------------------------------- */
    if ( x < item[middle] )
    {
        right = middle - 1;   // Search in first half of remaining array
    }
    else
    {
        left = middle + 1;    // Search in second half of remaining array
    }                         // Marker operation <----
}
In other words:

   each iteration of the while-loop executes 3 statements
So if the while-loop is executed p times, then the running time is 3×p.
As you know, 3×p is O(p).
Rule of thumb:

   when each iteration of a loop performs a constant amount of work, count each iteration as 1 step; the constant factor does not change the O() behavior

Therefore:
C_{n} = 1 + C_{n/2} 
(Number of statements needed to find a value in an array of n unsearched values = one (compare) statement plus the number of statements needed to find a value in an array of n/2 unsearched values)
Therefore, we need to discover one initial condition
Compute the running time for N = 1:
input:  int item[N];

int left, right;

left = 0;
right = 0;     // Only item[0] needs to be tested

while ( left <= right )
{
    int middle = (left + right)/2;   // ==> (0+0)/2 = 0

    if ( item[middle] == x )    // found
        return( middle );

    /* -------------------------------
       Not found, continue search...
       ------------------------------- */
    if ( x < item[middle] )
    {
        right = middle - 1;   // right = -1, loop will end
    }
    else
    {
        left = middle + 1;    // left = 1, loop will end
    }                         // Marker operation <----
}
We find that:

   C_{1} = 1
