Sunday 4 February 2018

Time Complexity



The time complexity of an algorithm signifies the total time required by the program to run to completion. The time complexity of an algorithm is most commonly expressed using big O notation, an asymptotic notation for describing how the running time grows.



Time complexity is most commonly estimated by counting the number of elementary steps an algorithm performs to finish execution.


Let us take 2 examples to understand this concept better:


/*
  we have to calculate the square of n by repeated addition
*/
result = 0
for i = 1 to n
    do result = result + n
// when the loop ends, result holds the square of n
return result

Or, we can simply use a mathematical operator * to find the square.

/*
  we have to calculate the square of n
*/
return n * n



In the above two simple algorithms, you saw how a single problem can have many solutions. While the first solution required a loop that executes n times, the second solution used the mathematical operator * to return the result in one line. So which one is the better approach? Of course, the second one.

In the first example the loop runs n times, so the time complexity is at least proportional to n, and as the value of n increases the time taken also increases. For the second solution, the time complexity is constant: it does not depend on the value of n and always gives the result in one step.
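If you prefer to see the two approaches as real code, here is a minimal runnable C sketch of both (the function names square_loop and square_mul are just for illustration, and we assume n is small enough that n*n does not overflow an int):

#include <stdio.h>

/* O(n): builds the square by adding n to itself n times */
int square_loop(int n)
{
    int result = 0;
    for (int i = 1; i <= n; i++)
        result = result + n;
    return result;
}

/* O(1): a single multiplication, independent of the value of n */
int square_mul(int n)
{
    return n * n;
}

int main(void)
{
    printf("%d %d\n", square_loop(7), square_mul(7));   /* prints: 49 49 */
    return 0;
}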


And since an algorithm's performance may vary with different types of input data, we usually quote the worst-case time complexity of an algorithm, because that is the maximum time taken for any input of a given size.
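For example, here is a small C sketch of a linear search (this example is ours, not one of the snippets above): if the target happens to be the first element we finish in one step, but in the worst case we examine all N elements, so we describe the algorithm as O(N).

/* Linear search: best case 1 comparison, worst case N comparisons.
   We quote its time complexity by the worst case, O(N). */
int linear_search(int list[], int N, int target)
{
    for (int i = 0; i < N; i++)
        if (list[i] == target)
            return i;      /* found: may happen as early as i = 0 */
    return -1;             /* not found: we looked at all N elements */
}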



How to calculate time complexity?



This topic can be difficult to understand at first, but we will use a simple technique to show you how to calculate time complexity.


Now the most common metric for calculating time complexity is Big O notation. This removes all constant factors so that the running time can be estimated in relation to N, as N approaches infinity. In general, you can think of it like this:


statement;

Above we have a single statement. Its time complexity is constant, O(1): the running time of the statement does not change in relation to N.


for(i=0; i < N; i++)
{
    statement;
}


The time complexity of the above loop is linear, O(N). The running time of the loop is directly proportional to N: when N doubles, so does the running time.

for(i=0; i < N; i++)
{
    for(j=0; j < N; j++)
    {
        statement;
    }
}


This time, the time complexity of the above code is quadratic, O(N²). The running time of the two nested loops is proportional to the square of N: when N doubles, the running time increases four-fold.

while(low <= high) 
{
    mid = (low + high) / 2;
    if (target < list[mid])
        high = mid - 1;
    else if (target > list[mid])
        low = mid + 1;
    else break;
}


This is an algorithm that repeatedly breaks a set of numbers into halves to search for a particular value. This algorithm has a logarithmic time complexity, O(log N). The running time is proportional to the number of times N can be divided by 2 (here N is high - low), because the algorithm halves the working area with each iteration.
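For completeness, here is one way to wrap that loop into a full C function (a sketch, assuming list[] is already sorted in ascending order; the name binary_search is ours):

/* Binary search over a sorted array: O(log N), because the range
   [low, high] is halved on every iteration of the loop. */
int binary_search(int list[], int N, int target)
{
    int low = 0, high = N - 1;
    while (low <= high)
    {
        int mid = (low + high) / 2;
        if (target < list[mid])
            high = mid - 1;
        else if (target > list[mid])
            low = mid + 1;
        else
            return mid;    /* found the target at index mid */
    }
    return -1;             /* target is not in the list */
}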



void quicksort(int list[], int left, int right)
{
    if (left >= right)
        return;                         /* base case: zero or one element */
    int pivot = partition(list, left, right);   /* pivot's final index */
    quicksort(list, left, pivot - 1);
    quicksort(list, pivot + 1, right);
}

Taking the previous algorithm forward, above we have the core logic of Quick Sort. In Quick Sort, we divide the list into two parts on each call, but we still process all N elements across every level of recursion (where N is the size of the list). Hence the time complexity is N*log(N) on average (a consistently bad choice of pivot can degrade it to N², but that is the exceptional case, not the typical one). The running time consists of N amounts of work repeated a logarithmic number of times, so the algorithm is a combination of linear and logarithmic.
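The partition step is not shown above. One common way to write it is the Lomuto scheme, sketched below (this particular partition is our illustration, not necessarily the one the original code assumes): it uses the last element of the range as the pivot, places it in its final position, and returns that position.

/* Lomuto partition: uses list[right] as the pivot, moves smaller
   elements to its left, and returns the pivot's final index. */
int partition(int list[], int left, int right)
{
    int pivot = list[right];
    int i = left - 1;
    for (int j = left; j < right; j++)
    {
        if (list[j] <= pivot)
        {
            i++;
            int tmp = list[i]; list[i] = list[j]; list[j] = tmp;
        }
    }
    int tmp = list[i + 1]; list[i + 1] = list[right]; list[right] = tmp;
    return i + 1;
}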



Asymptotic Notations:

Following are the commonly used asymptotic notations to calculate the running time complexity of an algorithm.
  • Ο Notation
  • Ω Notation
  • θ Notation

 

O-notation:


To denote asymptotic upper bound, we use O-notation. For a given function g(n), we denote by O(g(n)) (pronounced “big-oh of g of n”) the set of functions:
O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c*g(n) for all n ≥ n0 }


Ω-notation:


To denote asymptotic lower bound, we use Ω-notation. For a given function g(n), we denote by Ω(g(n)) (pronounced “big-omega of g of n”) the set of functions:
Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ c*g(n) ≤ f(n) for all n ≥ n0 }


Θ-notation:


To denote asymptotic tight bound, we use Θ-notation. For a given function g(n), we denote by Θ(g(n)) (pronounced “big-theta of g of n”) the set of functions:
Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that 0 ≤ c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0 }
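As a quick worked illustration of these three definitions (this example is ours, not part of the definitions): take f(n) = 3n + 2 and g(n) = n.
  • For all n ≥ 2 we have 3n + 2 ≤ 4n, so with c = 4 and n0 = 2, f(n) = O(n).
  • For all n ≥ 1 we have 3n + 2 ≥ 3n, so with c = 3 and n0 = 1, f(n) = Ω(n).
  • Putting both together, 3n ≤ 3n + 2 ≤ 4n for all n ≥ 2, so with c1 = 3, c2 = 4 and n0 = 2, f(n) = Θ(n).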




Time complexity notations:


While analyzing an algorithm, we mostly consider O-notation because it gives us an upper limit on the execution time, i.e. the execution time in the worst case.


To compute the O-notation of a function, we ignore the lower-order terms and constant factors, since they are relatively insignificant for large input.

Let f(N) = 2N² + 3N + 5
O(f(N)) = O(2N² + 3N + 5) = O(N²)
Let's consider some examples:

1. int count = 0;
for (int i = 0; i < N; i++) 
    for (int j = 0; j < i; j++) 
        count++;

Let's see how many times count++ will run.

When i=0, it will run 0 times.
When i=1, it will run 1 time.
When i=2, it will run 2 times, and so on.
The total number of times count++ will run is 0 + 1 + 2 + ... + (N-1) = N(N-1)/2. So the time complexity will be O(N²).




2. int count = 0;
for (int i = N; i > 0; i /= 2) 
    for (int j = 0; j < i; j++) 
        count++;


This is a tricky case. At first look, it seems like the complexity is O(N*log(N)): N for the j loop and log(N) for the i loop. But that's wrong. Let's see why.
Think about how many times count++ will run.
When i=N, it will run N times.
When i=N/2, it will run N/2 times.
When i=N/4, it will run N/4 times and so on.
The total number of times count++ will run is N + N/2 + N/4 + ... + 1 ≈ 2N. So the time complexity will be O(N).



