As you can see in the diagram, the bigger the input size, the longer it took to find the element we're looking for: the runtime grew proportionally with the input size. Below is some of the data I used to create this chart. The first column shows the size of the array the algorithm was executed on, and the second the time the execution took in milliseconds:
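A linear search like the one timed above can be sketched as follows (an illustrative implementation; the author's exact benchmark code isn't shown):

```python
def linear_search(arr, target):
    """Scan the array front to back; return the index of target, or -1."""
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

# Worst case: the target is the last element, so every element is visited.
print(linear_search([4, 8, 15, 16, 23, 42], 42))  # -> 5
```

Because every element may need to be visited, the running time grows in direct proportion to the array's length, which matches the linear trend in the chart.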
The initial search region is the entire array, meaning if the target exists, it must lie somewhere within it.
5. If the target is smaller than the middle element, the new search region becomes the lower (or left) half of the array — by setting _upper bound_ to _mid - 1_.
This process is then repeated until a match is found or the search terminates as unsuccessful. Here's a visualisation of this procedure:
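The steps above can be sketched in Python (a minimal version; the variable names follow the _lower bound_ / _upper bound_ terminology used here):

```python
def binary_search(arr, target):
    """Return the index of target in the sorted array arr, or -1 if absent."""
    lower, upper = 0, len(arr) - 1          # search region: whole array
    while lower <= upper:
        mid = (lower + upper) // 2          # middle of the current region
        if arr[mid] == target:
            return mid                      # match found
        elif target < arr[mid]:
            upper = mid - 1                 # continue in the left half
        else:
            lower = mid + 1                 # continue in the right half
    return -1                               # region is empty: unsuccessful
```

Note that the array must already be sorted — the algorithm relies on that to discard half of the search region at every step.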
You may have already noticed from the y-axis how much faster binary search is. It peaks a little above 0.1 ms — in comparison, linear search reached 250 ms with larger arrays on my computer. The most important difference, though, is that the linear pattern is gone, replaced by a logarithmic one. That is because with each iteration, the algorithm halves the search region. You start with $n$ elements, then you get to $\frac{n}{2}$, then $\frac{n}{4}$, and so on, until you get to a problem of size 1. This process can be described the following way:
The question is, how many times does the algorithm have to divide $n$ by 2 to get to 1? If we solve $\frac{n}{2^x} = 1$ for $x$, we get $x = \log_2 n$. Thus, we say that binary search runs in $O(\log n)$ time.
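This halving count can be checked with a few lines of code (a quick sketch, not from the original article):

```python
import math

def halvings(n):
    """Count how many times n can be halved (integer division) to reach 1."""
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

# For powers of two, the count is exactly log2(n).
print(halvings(1024), math.log2(1024))  # -> 10 10.0
```

Doubling the input therefore adds only a single extra iteration, which is why the curve flattens out so quickly.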
Below is a list of common functions programmers encounter when they analyse their algorithms, from slowest to fastest growing:
This can also be visualised if we graph the two functions:
### How to get the Big O of any function?
</div>
Visualised:
### How to get the Big Omega of any function?
Which reads as: $f(n)$ is big $\Theta$ of $g(n)$ if and only if there exist positive constants $c_1$, $c_2$, and $n_0$ such that $c_1 \cdot g(n) \le f(n) \le c_2 \cdot g(n)$ for all $n \ge n_0$.
**Example:**
Previously, we saw that for $f(n) = 2n + 10$, $f(n)$ is $O(n)$ and also $\Omega(n)$. Thus, we can say that $f(n)$ is $\Theta(n)$ - and as you can see, $f(n)$ is "sandwiched" between two constant multiples of $g(n)$:
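The sandwich can be verified numerically for $f(n) = 2n + 10$ and $g(n) = n$ (the constants below are one illustrative choice, e.g. $c_1 = 2$, $c_2 = 3$, $n_0 = 10$, not necessarily the ones used in the article):

```python
def f(n):
    return 2 * n + 10

def g(n):
    return n

# Candidate constants for the Theta sandwich (illustrative choice):
# 2n <= 2n + 10 holds for every n, and 2n + 10 <= 3n holds once n >= 10.
c1, c2, n0 = 2, 3, 10

for n in range(n0, 10_000):
    assert c1 * g(n) <= f(n) <= c2 * g(n)
print("f(n) stays between c1*g(n) and c2*g(n) for all tested n >= n0")
```

Any other constants satisfying the inequality would do just as well — the definition only requires that *some* such $c_1$, $c_2$, $n_0$ exist.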
However, if you want to show how well an algorithm runs and a tight big $\Theta$ bound is unachievable, then the next best thing to use is big $O$.