## Why it (almost) does not matter
Now, imagine that we have a program manipulating integers in base $b$.
Converting numbers to base $b'$ results in numbers that use $\log_2(b) / \log_2(b')$ times more (or less!) space.
For example, going from base $10$ to base $2$ means that $b = 10$ and $b' = 2$, hence we need $\log_2(10) / \log_2(2) \approx 3.322$ times more space to store and manipulate the integers.
This corresponds intuitively to 32 bits being able to store at most a 10-digit number (2,147,483,647).
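A quick way to check this factor is to count digits directly (a minimal Python sketch; the `digit_count` helper is my own, not part of the course material):

```python
import math

def digit_count(n: int, base: int) -> int:
    """Number of digits needed to write n in the given base."""
    if n == 0:
        return 1
    return math.floor(math.log(n, base)) + 1

n = 2_147_483_647          # largest signed 32-bit integer
print(digit_count(n, 10))  # → 10 digits in base 10
print(digit_count(n, 2))   # → 31 digits in base 2 (a sign bit makes 32)
print(math.log2(10))       # the ratio log2(10) / log2(2), about 3.322
```

The digit counts differ by roughly the constant factor $\log_2(10) \approx 3.322$, independently of the value being stored.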
If our program in base $b$ uses $O(g(n))$, it means that a program performing the same task, with the same algorithm, but using integers in base $b'$, would require $O((\log_2(b) / \log_2(b')) \times g(n))$.
By adapting the constant factor principle of the big O notation, we can see that this is a negligible factor that can be omitted.
However, if the base $b'$ is $1$ (unary), then the new program will use $O(n \times g(n))$: if $g(n)$ is more than linear, this makes a difference!
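To see why unary breaks the constant-factor argument, compare representation lengths (a Python sketch; `unary_len` and `binary_len` are hypothetical helpers for illustration):

```python
def unary_len(n: int) -> int:
    # Unary writes n as n repeated marks: its length grows linearly.
    return n

def binary_len(n: int) -> int:
    # Binary length grows logarithmically with n.
    return n.bit_length()

for n in (10, 1_000, 1_000_000):
    print(n, unary_len(n), binary_len(n))
# The ratio unary_len(n) / binary_len(n) behaves like n / log2(n):
# it is not bounded by any constant, unlike a change between bases >= 2.
```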
Of course, unary representation is *not* reasonable, so we will always assume that our representations are related by some constant factor, making the function's order of magnitude insensitive to such details.
You can have a look at [the complexity of various arithmetic functions](https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations#Arithmetic_functions) and see that the representation is not even discussed, as those results are insensitive to it, provided it is "reasonable".
# Types of Bounds
## Foreword
When considering orders of magnitude, we always reason *asymptotically*, i.e., we consider that the input grows forever.
The Big-O notation above furthermore corresponds to the *worst case*, but two other cases are sometimes considered:
- Best case,
- Average case.
The first type of study requires understanding the algorithm very well, to see what type of input can be processed easily. The second requires considering all possible inputs, and knowing the distribution of cases.
The worst case is generally preferred because:
- Worst case gives an upper bound that is useful in practice,
- Best case is considered unreliable, as an algorithm can easily be tweaked to improve it, and it may not be representative of the algorithm's resource consumption in general,
- Average case is difficult to compute, and not necessarily useful, as worst and average complexity are often the same.
## Examples
### Linear search algorithm
The [linear search algorithm](https://princomp.github.io/lectures/data/search#finding-a-particular-value) looks for a particular value in an array. The version that exits the loop prematurely when the target value is found has the following complexity:
163
+
164
+
- The **best case** is when the target is the very first value; in this case, the time complexity is $O(c)$.
165
+
- The **worst case** is when the target is the very last value, or absent; in this case, the time complexity is $O(n)$, where $n$ is the size of the array.
166
+
- The **average case** is also $O(n)$: if the target is equally likely to be at any position, about $n/2$ values are inspected on average.
167
+
168
+
Note that the space usage of this algorithm is $O(c)$: it requires only one additional variable if we do not copy the array.
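The early-exit version can be sketched as follows (a Python sketch; the function name and the convention of returning $-1$ when the target is absent are my own choices):

```python
def linear_search(values, target):
    """Return the index of target in values, or -1 if it is absent.

    Best case: target at index 0 (constant number of comparisons).
    Worst case: target at the end, or absent (n comparisons).
    Space: a single loop variable, regardless of the array's size.
    """
    for i, v in enumerate(values):
        if v == target:
            return i  # early exit: stop as soon as the target is found
    return -1

print(linear_search([4, 8, 15, 16, 23, 42], 15))  # → 2
print(linear_search([4, 8, 15, 16, 23, 42], 7))   # → -1
```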
169
+
170
+
### Matrix Multiplication
Consider the ["schoolbook algorithm for multiplication"](https://en.wikipedia.org/wiki/Computational_complexity_of_matrix_multiplication#Schoolbook_algorithm)
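As a sketch, the schoolbook algorithm multiplies two $n \times n$ matrices with three nested loops, hence $O(n^3)$ multiplications (Python, with matrices as plain lists of lists; `matmul` is an illustrative name, not a course-provided function):

```python
def matmul(A, B):
    """Schoolbook product of two square matrices given as lists of lists."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):          # one iteration per row of A...
        for j in range(n):      # ...per column of B...
            for k in range(n):  # ...per term of the dot product: n^3 steps
                C[i][j] += A[i][k] * B[k][j]
    return C

I = [[1, 0], [0, 1]]
M = [[1, 2], [3, 4]]
print(matmul(M, I))  # → [[1, 2], [3, 4]]
```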