
Basics of Algorithms Part 1

1. What is an Algorithm?

Given an input I, an algorithm is a series of well-defined computational steps that transforms I into the desired output O.

For example, given a sequence of numbers I = {9, 18, 3, 14, 1}, find O, where O is the minimum of all the numbers in the sequence. The algorithm for this problem is trivial to design: iterate over all the elements of the sequence, keeping track of the smallest element seen so far.

However, consider the famous sorting problem. Given a sequence of numbers I = {9, 18, 3, 14, 1}, find O, where O is I sorted in ascending order. There are several possible solutions to this problem. One of them can be built from the algorithm we designed previously: find the minimum number in I, append it to O, and remove that element from I. Continue extracting the minimum of I and appending it to the end of O until I is empty. This is actually a well-known sorting algorithm, known as Selection Sort. There are quite a few algorithms that do the same job, but the key differentiator between any two algorithms is their efficiency. By the end of this article, you will know how we define this ‘efficiency’ and see why this algorithm is not the best choice for sorting.

 

2. Complexity Analysis

An algorithm is roughly judged on the basis of two major metrics:

a) Time Complexity: It is an indicator of the running time of the algorithm.
b) Space Complexity: It is an indicator of the computational memory required by the algorithm.

 

2.1 Time Complexity

The notion of time complexity deals with the running time of the algorithm. It gives us an idea of the number of steps the algorithm executes, which lets us judge the algorithm from a machine-independent point of view.

Let us look at the pseudo-code for the algorithm we designed for finding the minimum number in the sequence I.
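A minimal sketch in Python, following the description above (the name find_min is illustrative):

    def find_min(I):
        minimum = I[0]           # assume the first element is the minimum
        for x in I[1:]:          # examine every remaining element
            if x < minimum:      # found a smaller element
                minimum = x      # update the running minimum
        return minimum

    print(find_min([9, 18, 3, 14, 1]))   # prints 1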

Now, let us associate a cost with the execution of each statement, and count the number of times each statement is executed. Assume the size of I to be n.
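Assuming the sketch above, and letting c1, …, c5 denote the constant cost of executing each statement once, the analysis can be tabulated as follows (following the convention that the loop test runs once more than the loop body):

    Statement                       Cost    Times executed
    minimum = I[0]                  c1      1
    for x in I[1:]  (loop test)     c2      n
        if x < minimum              c3      n - 1
            minimum = x             c4      t, where 0 <= t <= n - 1
    return minimum                  c5      1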

Now, we can see that the total cost of running this algorithm for an I of size n is T(n) = c1 + c2·n + c3·(n - 1) + c4·t + c5, which, grouping terms, is of the form an + b for some constants a and b.

We now realize that the cost of running this algorithm is linear with respect to n; that is, the cost grows linearly as n increases. For the purpose of complexity analysis, we are interested only in the highest-order term, because as n grows, the contribution of the lower-order terms to the running time becomes insignificant.

Now, let us study the sorting algorithm that we discussed earlier.
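A sketch of Selection Sort in Python follows (the standard in-place formulation, which is equivalent to repeatedly extracting the minimum and appending it to the output; the names are illustrative):

    def selection_sort(I):
        n = len(I)
        for i in range(n - 1):
            m = i                        # index of the minimum of I[i:]
            for j in range(i + 1, n):
                if I[j] < I[m]:          # found a smaller element
                    m = j
            I[i], I[m] = I[m], I[i]      # move the minimum into position i
        return I

    print(selection_sort([9, 18, 3, 14, 1]))   # prints [1, 3, 9, 14, 18]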

Now, reconstructing the table in the same fashion as we did for the previous algorithm, we can prove that the T(n) for Selection Sort is of the form an² + bn + c. Thus, T(n) grows with the second power of n.
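To see why, note that on the i-th pass Selection Sort scans the remaining n - i elements to find their minimum, so the total number of comparisons is

    (n - 1) + (n - 2) + … + 1 = n(n - 1)/2 = ½n² - ½n,

which, combined with the constant per-statement costs, yields a T(n) of exactly this quadratic form.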

 

2.1.1 The Big-O Notation

Though we can express the running time of any algorithm exactly, as we did earlier, doing so is usually very tedious. We are only interested in how the running time grows with the size of the input; the extra precision involved in an exact description of T(n) is not worth the effort. If the running time of an algorithm A grows more slowly than that of an algorithm B performing the same task, A will usually be the more efficient of the two. Thus, what we are really after is the asymptotic efficiency of the algorithm.

The Big-O notation (pronounced “big-oh”) is one of the most commonly used notations for describing the complexity of an algorithm. According to [1], for a given function g(n),

    O(g(n)) = { f(n) : there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀ }.

If the running time T(n) of an algorithm belongs to O(g(n)), then g(n) is said to be an upper bound for the algorithm.

For the two algorithms above, the Big-O complexities are O(n) and O(n²) respectively. There are other asymptotic notations for describing the complexity of algorithms; you can read more about them in [1]. If an algorithm has a T(n) of the form T(n) = a, its time complexity is O(1) and the algorithm is said to operate in constant time. If T(n) is of the form T(n) = a·nˣ + b·nˣ⁻¹ + … + c for a fixed x, its time complexity is O(nˣ) and the algorithm is said to operate in polynomial time. Similarly, if T(n) = aⁿ + b with a > 1, its time complexity is O(aⁿ) and it is said to be an exponential-time algorithm. Exponential-time algorithms are the least desirable, and we usually strive to replace them with polynomial-time solutions.
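To get a feel for these growth rates, here is a rough comparison of step counts for a few input sizes (taking all constants to be 1; the figures are approximate):

    n        n          n log₂ n    n²           2ⁿ
    10       10         33          100          1,024
    100      100        664         10,000       ~1.3 × 10³⁰
    1,000    1,000      9,966       1,000,000    ~10³⁰¹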

 

2.1.2 Best, Average and Worst Case Analysis

Let us look at another algorithm to sort I. This algorithm is known as Bubble Sort and is quite intuitive.

This algorithm iterates over the input sequence and swaps adjacent elements that are out of order. The sequence is traversed once more whenever a swap was performed during the previous traversal.
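A sketch in Python, matching this description (the swap flag records whether another traversal is needed):

    def bubble_sort(I):
        swapped = True
        while swapped:                         # repeat until a clean pass
            swapped = False
            for j in range(len(I) - 1):
                if I[j] > I[j + 1]:            # adjacent pair out of order
                    I[j], I[j + 1] = I[j + 1], I[j]
                    swapped = True             # another pass will be needed
        return I

    print(bubble_sort([9, 18, 3, 14, 1]))   # prints [1, 3, 9, 14, 18]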

In several algorithms, T(n) varies with the input. For example, for most inputs, the time complexity of the above algorithm is O(n²); this is known as the average case. But notice that if the input is already sorted in ascending order, the algorithm makes a single pass over I without performing any swaps, so the time complexity becomes O(n). Thus, the best-case complexity of Bubble Sort is O(n). The worst case is the input that forces the most work; for Bubble Sort this is a sequence sorted in descending order, for which the complexity is again O(n²).

 

2.1.3 How efficient is Selection Sort?

It turns out that the best, average and worst-case complexities for Selection Sort are all O(n²). However, there are other algorithms, such as Quick Sort, whose average-case complexity is O(n log n). Similarly, Merge Sort also has an average complexity of O(n log n). There are other sorting methods, such as Counting Sort, which do not rely on comparing elements and achieve even better complexities, but we shall not get into their details here.

We realize that since algorithms like Quick Sort have an average-case complexity of O(n log n), the number of steps required to sort a sequence I with Quick Sort will generally be far fewer than the number required to sort I with Selection Sort. Sorting with Selection Sort becomes quite tedious once n grows beyond about 10⁴, whereas an O(n log n) algorithm can still sort a 10⁷-element array with relative ease.
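To put rough numbers on this, ignoring constant factors: for n = 10⁷, n log₂ n is about 10⁷ × 23 ≈ 2.3 × 10⁸ steps, whereas n² is 10¹⁴ steps, roughly four hundred thousand times as many.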

 

2.2 Space Complexity

You already have the tools to find the space complexity of an algorithm: it is simply an asymptotic bound on the memory the algorithm uses for an input of size n.

Just as we associated the Big-O notation with time complexity, we can do the same with space complexity, and find the best-case, average-case and worst-case space complexities of an algorithm.
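As an illustration, consider two ways of reversing a sequence (a Python sketch; the function names are illustrative):

    def reverse_in_place(I):
        # O(1) auxiliary space: only two index variables, regardless of n.
        lo, hi = 0, len(I) - 1
        while lo < hi:
            I[lo], I[hi] = I[hi], I[lo]
            lo += 1
            hi -= 1
        return I

    def reverse_copy(I):
        # O(n) auxiliary space: allocates a whole new sequence of size n.
        return I[::-1]

Both run in O(n) time, but the first has O(1) space complexity while the second has O(n), and the difference matters once n grows large.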

 

3. Tasks for you

  1. Study the other asymptotic notations for time complexity.
  2. Derive the time complexity of Insertion Sort.

 

References

[1] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein, Introduction to Algorithms, Second Edition, MIT Press, 2001.


Gaurav is a final-year student of Computer Engineering at Thadomal Shahani Engineering College, University of Mumbai (India). His handle on TopCoder is ‘red.dragon’. His areas of interest are Algorithms, Grid Computing and Cryptography.
