
Asymptotic complexity of algorithms: what kind of beast is it?

Asymptotic complexity of algorithms comes up everywhere. Here is an accessible explanation of what it is and where and how it is used.

The asymptotic complexity of an algorithm describes the time and memory your program will need as it runs. It is one thing to know that there are linear or logarithmic algorithms, but quite another to understand what lies behind all this.

With Big O notation, you can mathematically describe how a program will behave in the worst case on a large input. For example, if you use bubble sort to arrange the items of an integer array in ascending order, the worst case is an array sorted in descending order: with that arrangement, your algorithm will need the most operations.
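To make the worst case concrete, here is a minimal bubble sort sketch in C (the function name bubble_sort and the sample values are illustrative, not from the original article). When the input is sorted in descending order, every pass has to swap adjacent elements all the way through:

// Sorts arr in ascending order; worst case: O(n^2) comparisons and swaps.
void bubble_sort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {          // up to n - 1 passes
        for (int j = 0; j < n - 1 - i; j++) {  // compare adjacent pairs
            if (arr[j] > arr[j + 1]) {         // out of order: swap
                int tmp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = tmp;
            }
        }
    }
}

Calling bubble_sort on {5, 4, 3, 2, 1} triggers exactly that worst case: every adjacent pair is out of order on the first pass.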

Understanding how Big O notation works helps you optimize your programs and write more efficient code.

The complexities O(1) and O(n) are the easiest to understand.

O(1) means an operation takes constant time. For example, looking up an item in a hash table takes constant time because you address the item directly, without making any comparisons. The same applies to accessing the i-th element of an array. These operations do not depend on the size of the input.
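For instance, indexing into a C array is a constant-time operation regardless of the array's size (a minimal sketch; the variable names are illustrative):

int arr[] = {2, 4, 8, 16, 32};
int x = arr[3];  // O(1): the address is computed directly, no comparisons needed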

If your hash function is poor and produces a large number of collisions, a lookup may take O(n), where n is the number of objects in the hash table: at worst you will have to make n comparisons, that is, walk through every element of the structure. As this shows, the running time of linear algorithms, that is, algorithms that run in O(n), depends on the number of inputs. In the worst case you have to perform some operation on each element.
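Linear search is the textbook O(n) case: if the item is missing or sits at the very end, every element gets compared once. A minimal sketch in C (the function name linear_search is illustrative):

// Returns the index of key in arr, or -1 if absent; worst case O(n).
int linear_search(const int arr[], int n, int key) {
    for (int i = 0; i < n; i++)  // at worst, n comparisons
        if (arr[i] == key)
            return i;
    return -1;
}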

Binary search is one of the classic examples of an algorithm that runs in O(log₂ n): each step halves the number of elements left to examine.

For example, if you have a sorted array of 8 items, at worst you will need log₂(8) = 3 comparisons to find the item you want.

Consider a sorted one-dimensional array of 16 elements.

Let's say we need to find the number 13.

We select the median of the array and compare the element at index 7 with the number 13.

Since the array is sorted and the element at index 7 (16 in this example) is larger than 13, we no longer need to consider the elements at index 7 and higher. We have just discarded half of the array.

Next, we select the median of the remaining half of the array.

We compare and find that 8 is less than 13, so we discard the lower half. We halve the array again, pick the middle element, and make another comparison, repeating until a single element remains. The final comparison yields the desired item. In the best case, the very first median you pick is your number; in the worst case, you will make log₂(n) comparisons.
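In code, the same procedure might look like this minimal iterative sketch in C (identifier names are illustrative, not from the original article):

// Returns the index of key in the sorted array arr, or -1 if absent; O(log n).
int binary_search(const int arr[], int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;  // median of the remaining range
        if (arr[mid] == key)
            return mid;                // found the desired item
        else if (arr[mid] > key)
            hi = mid - 1;              // discard the upper half
        else
            lo = mid + 1;              // discard the lower half
    }
    return -1;                         // key is not in the array
}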

The complexity O(n * log n) can be seen as a combination of O(log n) and O(n). You get it when one for loop is nested inside another, and one of the loops runs in O(n) while the other runs in O(log n):

for (int i = 0; i < n; i++)            // executes n times
    for (int j = 1; j < n; j = j * 2)  // executes about log n times: j doubles each step
        printf("hello\n");             // constant-time work

The complexity O(n²) comes up quite often in sorting algorithms. It arises whenever a program nests two for loops that each run in O(n).

For example:

for (int i = 0; i < n; i++)      // executes n times
    for (int j = 0; j < n; j++)  // executes n times
        printf("hello\n");       // constant-time work

The same complexity arises when you have to walk through all the elements of a two-dimensional array.
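For example, summing all the cells of an n × n matrix touches each of the n * n elements exactly once (a minimal sketch; the names matrix and sum are illustrative):

long sum = 0;
for (int i = 0; i < n; i++)      // n rows
    for (int j = 0; j < n; j++)  // n columns in each row
        sum += matrix[i][j];     // constant-time work per cell: O(n^2) total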

Algorithms described as O(n² + n + 3), O(n² - 100000), or even O(n² + 100000000) are all still considered quadratic. With a large input, only the highest power of n matters; constants and coefficients are usually dropped. The same applies to linear and logarithmic functions: O(n + 1000) and O(1000 * n) both still run in O(n).

Relying on algorithms that run in O(n²) or worse (for example, the exponential O(2ⁿ)) is bad practice. Avoid them if you want your program to run in acceptable time and use a reasonable amount of memory.
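A classic example of exponential growth is the naive recursive Fibonacci: each call spawns two more, so the work roughly doubles as n grows (a minimal sketch in C, added here as an illustration):

// Runs in roughly O(2^n) time: the same subproblems are recomputed over and over.
long fib(int n) {
    if (n < 2)
        return n;                    // base cases: fib(0) = 0, fib(1) = 1
    return fib(n - 1) + fib(n - 2);  // two recursive calls per invocation
}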
