What does it mean for an algorithm to be "fast" or "efficient"? When we say an algorithm is fast, we are not referring to an explicit measure of time, like seconds or minutes. The reason lies in the distinction between hardware and software - the real-time performance of a program can be drastically affected by the hardware it runs on.
In order to account for this distinction, while still providing an objective measurement of a program's efficiency, computer scientists rely on Asymptotic Measurement and a notation format called Big O Notation to describe such measurements. The formal definition is as follows:
f(x) ∈ O(g(x)) if and only if there exist constants C > 0 and x₀ such that

f(x) ≤ C • g(x) for all x > x₀
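To make the definition concrete, here is one worked illustration (the specific numbers are ours, chosen purely for this example): f(x) = 3x + 5 is in O(x), because choosing C = 4 and x₀ = 5 gives 3x + 5 ≤ 4 • x whenever x > 5. Constant multipliers and small values of x are ignored; only the growth rate matters.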
In less theoretical terms: as the size of your input increases toward infinity, how does the runtime of your program grow?
For example, imagine counting the number of characters in a string in the simplest way: reading each character one at a time and incrementing a running total. Such an algorithm is said to run in linear time with respect to the number of characters "n" in the string. This is notated by saying the runtime is O(n) (or "Big O of n").
Using this approach, the time to traverse the entire string is directly proportional to the number of characters (it is linear). Counting the characters in a 20-character string will take twice as long as counting the characters in a 10-character string, because you must look at every character and each character takes the same amount of time to review.
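As a rough sketch of the idea in Python (the function name count_chars is ours, purely for illustration):

    def count_chars(s):
        count = 0
        for ch in s:      # one step per character, so n characters -> n steps
            count += 1
        return count

    print(count_chars("hello world"))  # 11

Doubling the length of s doubles the number of loop iterations, which is exactly the linear growth that O(n) describes.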
What if linear time, O(n), is not fast enough for your task? For very large datasets, even a linear scan can take too long.
Suppose instead that we store the number of characters in the string in a variable, "len", computed earlier in the program when the string was first built. In that scenario, determining the string's length merely requires reading the variable. Such an operation - checking a predetermined variable - is referred to as an asymptotically constant operation, or O(1) ("Big O of 1"). Why is this label appropriate? Remember what asymptotic means in this context - how does the runtime of your program grow as the input grows? With this type of operation, the runtime remains unchanged by the size of the input. It is constant, regardless of input. Whether your string is one character or a million characters, the operation runs in the same amount of time. There is, however, a drawback to this method - you must reserve memory to store the variable, and there is a small performance cost to writing it in the first place.
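A minimal sketch of the trade-off, reusing the count_chars function from the sketch above (variable names are ours):

    s = "hello world"
    length = count_chars(s)   # pay the O(n) cost once, up front

    # Later, answering "how long is s?" is a single variable read.
    # It costs the same whether s has one character or a million: O(1).
    print(length)

(As an aside, Python's built-in len() already works this way: strings store their own length, so len(s) is a constant-time lookup.)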
There are a variety of common classes of algorithms in Big O notation. O(n^2) algorithms, for example, tend to be slower than linear O(n) algorithms. O(n^2) algorithms are not *always* slower, but they fall behind as their input grows, because their runtime grows quadratically: doubling the input roughly quadruples the work. For large datasets, O(n^2) algorithms are usually impractical. For smaller datasets, however, they can be appropriate, and may even run faster than O(n) algorithms.
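A typical source of O(n^2) behavior is a pair of nested loops. Here is one sketch (the function name has_duplicate is ours):

    def has_duplicate(items):
        n = len(items)
        for i in range(n):
            for j in range(i + 1, n):   # roughly n*n/2 comparisons -> O(n^2)
                if items[i] == items[j]:
                    return True
        return False

Every item is compared against every other item, so doubling the list length roughly quadruples the number of comparisons.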
Another common class is O(log n). The classic "binary search" algorithm for presorted datasets is one such algorithm. Doubling the size of the array adds only one more operation to a binary search. Binary search and other logarithmic algorithms are incredibly powerful when working with large datasets.
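A sketch of binary search in Python, in its iterative form (names are ours):

    def binary_search(sorted_items, target):
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2            # look at the middle element
            if sorted_items[mid] == target:
                return mid                  # found it
            elif sorted_items[mid] < target:
                lo = mid + 1                # discard the lower half
            else:
                hi = mid - 1                # discard the upper half
        return -1                           # not in the array

Each pass through the loop discards half of the remaining range, so an array twice as large needs only one extra pass: O(log n).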
There is an element of chance in the performance of such algorithms. With binary search, for example, there is always some chance that the desired result sits in the exact center of the array. In such a case, the binary search completes in one operation, no matter the size of the array. Because of these probabilistic outcomes, asymptotic notation also includes expressions for "best case" and "worst case" outcomes, or the lower and upper bounds of run time.
Again using binary search as our example, the best case scenario is the one just mentioned: the target sits right in the middle of the array. That best case always takes exactly one operation, regardless of input size, so the best case is constant. Best case scenarios are noted using Ω, so binary search is said to run in Ω(1).
In the worst case scenario, the binary search is O(log n). Worst case scenarios are expressed simply using the Big O notation already covered.
Using linear search as an example, we see that linear algorithms also run in Ω(1), because in the best case the item being searched for is the first item of the array. In the worst case the item is the last item of the array (or absent entirely), expressed as O(n).
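For completeness, a sketch of linear search (names are ours):

    def linear_search(items, target):
        for i, item in enumerate(items):
            if item == target:
                return i    # best case: found immediately, Ω(1)
        return -1           # worst case: scanned everything, O(n)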
One last notation we will cover is Θ. Θ is used for algorithms where the best and worst case scenarios are the same. One example where Θ is appropriate is the variable example we used earlier, where the run time always remains the same. With the variable example, the runtime is both Ω(1) and O(1), so it is Θ(1).