Do you really understand Big O? If so, then this will refresh your understanding before an interview. If not, don’t worry — come and join us for some endeavors in computer science.
If you have taken some algorithm related courses, you’ve probably heard of the term Big O notation. If you haven’t, we will go over it here, and then get a deeper understanding of what it really is.
Big O notation is one of the most fundamental tools computer scientists use to analyze the cost of an algorithm. It is good practice for software engineers to understand it in depth as well.
This article is written with the assumption that you have already tackled some code. Also, some in-depth material also requires high-school math fundamentals, and therefore can be a bit less comfortable to total beginners. But if you are ready, let’s get started!
In this article, we will have an in-depth discussion about Big O notation. We will start with an example algorithm to open up our understanding. Then, we will go into the mathematics a little bit to have a formal understanding. After that we will go over some common variations of Big O notation. In the end, we will discuss some of the limitations of Big O in a practical scenario. A table of contents can be found below.
So let’s get started.
“Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation.”
— Wikipedia’s definition of Big O notation
In plain words, Big O notation describes the complexity of your code using algebraic terms.
To understand what Big O notation is, we can take a look at a typical example, O(n²), which is usually read aloud as “Big O of n squared”. The letter “n” here represents the input size, and the function “g(n) = n²” inside the “O()” gives us an idea of how complex the algorithm is with respect to the input size.
A typical algorithm with O(n²) complexity is the selection sort algorithm. Selection sort is a sorting algorithm that iterates through the list to ensure that the element at index i is the ith smallest (or largest) element of the list.
The algorithm can be described by the following code. In order to make sure the ith element is the ith smallest element in the list, this algorithm first iterates through the list with a for loop. Then for every element it uses another for loop to find the smallest element in the remaining part of the list.
SelectionSort(List) {
    for(i from 0 to List.Length - 1) {
        SmallestIndex = i
        for(j from i + 1 to List.Length - 1) {
            if(List[SmallestIndex] > List[j]) {
                SmallestIndex = j
            }
        }
        Swap(List[i], List[SmallestIndex])
    }
}
In this scenario, we consider the variable List as the input, so the input size n is the number of elements inside List. Assume that the if statement, and the assignment it guards, take constant time. Then we can find the big O notation for the SelectionSort function by analyzing how many times its statements are executed.
First the inner for loop runs its statements n-1 times. Then, after i is incremented, the inner for loop runs n-2 times… until it runs just once, and both for loops reach their terminating conditions.
This actually ends up giving us an arithmetic series, and with some high-school math we find that the inner loop repeats (n-1) + (n-2) + … + 1 times, which equals n(n-1)/2 times. If we multiply this out, we end up with n²/2 - n/2.
When we calculate big O notation, we only care about the dominant term, and we do not care about the coefficients. Thus we take n² as our final big O. We write it as O(n²), which again is read as “Big O of n squared”.
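To make the count concrete, here is a runnable Python version of the pseudocode above, instrumented with a comparison counter (the function and variable names are my own):

```python
def selection_sort(lst):
    """Sort lst in place, returning the number of comparisons made."""
    comparisons = 0
    n = len(lst)
    for i in range(n):
        smallest_index = i
        for j in range(i + 1, n):
            comparisons += 1  # one comparison per inner-loop iteration
            if lst[smallest_index] > lst[j]:
                smallest_index = j
        lst[i], lst[smallest_index] = lst[smallest_index], lst[i]
    return comparisons

data = [5, 2, 9, 1, 7, 3]
count = selection_sort(data)
print(data)   # [1, 2, 3, 5, 7, 9]
print(count)  # 15, i.e. n(n-1)/2 with n = 6
```

For a list of length 6, the counter comes out to 5 + 4 + 3 + 2 + 1 = 15, exactly n(n-1)/2.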
Now you may be wondering, what is this “dominant term” all about? And why do we not care about the coefficients? Don’t worry, we will go over them one by one. It may be a little bit hard to understand at the beginning, but it will all make a lot more sense as you read through the next section.
Once upon a time there was an Indian king who wanted to reward a wise man for his excellence. The wise man asked for nothing but some wheat that would fill up a chess board.
But here were his rules: in the first tile he wants 1 grain of wheat, then 2 on the second tile, then 4 on the next one…each tile on the chess board needed to be filled by double the amount of grains as the previous one. The naïve king agreed without hesitation, thinking it would be a trivial demand to fulfill, until he actually went on and tried it…
So how many grains of wheat does the king owe the wise man? We know that a chess board has 8 squares by 8 squares, which totals 64 tiles, so the final tile alone holds 2⁶³ grains of wheat, and the board as a whole holds 2⁶⁴ - 1 grains. If you do the calculation, you end up with about 1.8446744×10¹⁹, that is, roughly 18 followed by 18 zeroes. Assuming each grain of wheat weighs 0.01 grams, that gives us 184,467,440,737 tons of wheat. And 184 billion tons is quite a lot, isn’t it?
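A few lines of Python confirm the arithmetic, using the same 0.01 g per grain assumption:

```python
total_grains = sum(2**i for i in range(64))  # 1 + 2 + 4 + ... + 2**63
assert total_grains == 2**64 - 1

print(total_grains)  # 18446744073709551615, about 1.84 * 10**19

tons = total_grains * 0.01 / 1_000_000  # 0.01 g per grain, 10**6 g per ton
print(round(tons))   # 184467440737 tons, about 184 billion
```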
Numbers grow quite fast under exponential growth, don’t they? The same logic applies to computer algorithms: if the effort required to accomplish a task grows exponentially with respect to the input size, it can end up becoming enormously large.
Now, the square of 64 is 4096. If you add that number to 2⁶⁴, it is lost outside the significant digits. This is why, when we look at the growth rate, we only care about the dominant terms. And since we want to analyze the growth with respect to the input size, the coefficients, which merely scale the function rather than grow with the input size, carry no useful information.
Below is the formal definition of Big O:
The formal definition is useful when you need to perform a math proof. For example, the time complexity for selection sort can be defined by the function f(n) = n²/2-n/2 as we have discussed in the previous section.
If we let our function g(n) be n², we can find a constant c = 1 and an N₀ = 0 such that, so long as n > N₀, n² is always greater than n²/2 - n/2. We can easily prove this by subtracting n²/2 from both functions; then we can easily see that n²/2 > -n/2 holds whenever n > 0. Therefore we can conclude that f(n) = O(n²); in other words, selection sort is O(n²).
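A quick numeric spot check of this claim (it illustrates the proof, it does not replace it):

```python
def f(n):  # exact comparison count of selection sort
    return n * n / 2 - n / 2

def g(n):  # candidate bound
    return n * n

# f(n) <= c * g(n) for all n > N0, with c = 1 and N0 = 0
assert all(f(n) <= 1 * g(n) for n in range(1, 10_000))
print(f(100), g(100))  # 4950.0 10000
```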
You might have noticed a little trick here: if you make g(n) grow super fast, way faster than anything, then O(g(n)) will always be great enough. For example, for any polynomial function, you are always right to say it is O(2ⁿ), because 2ⁿ will eventually outgrow any polynomial.
Mathematically, you are right, but generally when we talk about Big O, we want to know the tight bound of the function. You will understand this more as you read through the next section.
But before we go on, let’s test your understanding with the following question. The answer is given in a later section, so it won’t be a throwaway.
Question: An image is represented by a 2D array of pixels. If you use a nested for loop to iterate through every pixel (that is, you have a for loop going through all the columns, then another for loop inside to go through all the rows), what is the time complexity of the algorithm when the image is considered as the input?
Big O: “f(n) is O(g(n))” iff for some constants c and N₀, f(N) ≤ cg(N) for all N > N₀
Omega: “f(n) is Ω(g(n))” iff for some constants c and N₀, f(N) ≥ cg(N) for all N > N₀
Theta: “f(n) is Θ(g(n))” iff f(n) is O(g(n)) and f(n) is Ω(g(n))
Little O: “f(n) is o(g(n))” iff f(n) is O(g(n)) and f(n) is not Θ(g(n))
—Formal Definition of Big O, Omega, Theta and Little O
In plain words:
Big O (O()) describes the upper bound of the complexity.
Omega (Ω()) describes the lower bound of the complexity.
Theta (Θ()) describes the exact bound of the complexity.
Little O (o()) describes the upper bound excluding the exact bound.
For example, the function g(n) = n² + 3n is O(n³), o(n⁴), Θ(n²) and Ω(n). But you would still be right if you say it is Ω(n²) or O(n²).
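These bounds are easy to check numerically; the constants c = 2 and N₀ = 3 below are chosen just for illustration:

```python
def g(n):
    return n * n + 3 * n

for n in range(4, 1000):
    assert g(n) <= 2 * n**2   # O(n^2): c = 2, N0 = 3
    assert g(n) >= 1 * n**2   # Omega(n^2): c = 1, so Theta(n^2) overall
    assert g(n) <= n**3       # O(n^3) also holds, but it is not tight
```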
Generally, when we talk about Big O, what we actually mean is Theta. Giving an upper bound far larger than the scope of the analysis is rather meaningless. It would be similar to solving an inequality by putting ∞ on the larger side, which almost always makes you right.
But how do we determine which functions are more complex than others? We will learn that in detail in the next section.
When we are trying to figure out the Big O for a particular function g(n), we only care about the dominant term of the function. The dominant term is the term that grows the fastest.
For example, n² grows faster than n, so if we have something like g(n) = n² + 5n + 6, it will be O(n²). If you have taken some calculus before, this is very similar to the shortcut for finding limits of rational functions, where in the end you only care about the dominant terms of the numerator and the denominator.
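You can watch the dominant term take over numerically; as n grows, the ratio of g(n) to n² tends to 1:

```python
def g(n):
    return n**2 + 5*n + 6

for n in (10, 100, 10_000):
    print(n, g(n) / n**2)
# the ratio approaches 1: 1.56, then 1.0506, then 1.00050006
```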
But which function grows faster than the others? There are actually quite a few rules.
Often called “constant time”: if you can create an algorithm that solves the problem in O(1), you are probably at your best. In some scenarios the complexity may even go below O(1); we can then analyze it by finding its O(1/g(n)) counterpart. For example, O(1/n) is more complex than O(1/n²).
As this complexity often arises from divide and conquer algorithms, O(log(n)) is generally a good complexity you can reach for searching algorithms. O(log(n)) is less complex than O(√n), because the square root function can be considered a polynomial with exponent 0.5.
For example, O(n⁵) is more complex than O(n⁴). Due to the simplicity of it, we actually went over quite many examples of polynomials in the previous sections.
O(2ⁿ) is more complex than O(n⁹⁹), but O(2ⁿ) is actually less complex than O(n!). We generally take 2 as the base for exponentials and logarithms because things tend to be binary in computer science, but the base can be changed by a constant coefficient in the exponent. If not specified, the base of a logarithm is assumed to be 2.
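The base-change fact is easy to verify numerically: 3ⁿ equals 2 raised to n·log₂3, so switching the base only rescales the exponent by a constant, and likewise for logarithms:

```python
import math

for n in range(1, 20):
    # 3^n == 2^(n * log2(3)), up to floating-point error
    assert math.isclose(3**n, 2 ** (n * math.log2(3)))
    # log2(x) == log3(x) / log3(2): bases differ by a constant factor
    assert math.isclose(math.log2(n + 1), math.log(n + 1, 3) / math.log(2, 3))
```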
If you are interested in the reasoning, look up the Gamma function, it is an analytic continuation of a factorial. A short proof is that both factorials and exponentials have the same number of multiplications, but the numbers that get multiplied grow for factorials, while remaining constant for exponentials.
When multiplying, the resulting complexity is greater than each factor alone, but no greater than multiplying by something more complex. For example, O(n · log(n)) is more complex than O(n) but less complex than O(n²), because O(n²) = O(n · n) and n is more complex than log(n).
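A small table (my own, for illustration) shows n · log(n) sandwiched between n and n²:

```python
import math

print(f"{'n':>8} {'n log n':>12} {'n^2':>14}")
for n in (10, 1_000, 100_000):
    # n log n grows faster than n but far slower than n^2
    print(f"{n:>8} {round(n * math.log2(n)):>12} {n * n:>14}")
```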
To test your understanding, try ranking the following functions from the most complex to the least complex. The solutions, with detailed explanations, can be found in a later section. Some of them are meant to be tricky and may require a deeper understanding of math. As you get to the solutions, you will understand them better.
Question: Rank the following functions from the most complex to the least complex.
Solution to Section 2 Question:
It was actually meant to be a trick question to test your understanding. The question tries to make you answer O(n²) because there is a nested for loop. However, n is supposed to be the input size. Since the image array is the input, and every pixel was iterated through only once, the answer is actually O(n). The next section will go over more examples like this one.
So far, we have only been discussing the time complexity of the algorithms. That is, we only care about how much time it takes for the program to complete the task. What also matters is the space the program takes to complete the task. The space complexity is related to how much memory the program will use, and therefore is also an important factor to analyze.
Space complexity works similarly to time complexity. For example, selection sort has a space complexity of O(1), because it only stores one minimum value and its index for comparison; the maximum space used does not increase with the input size.
Some algorithms, such as bucket sort, have a space complexity of O(n), but can cut the time complexity down to O(n). Bucket sort sorts the array by keeping a count for every possible element value, incrementing the count whenever that value is encountered. In the end, the sorted array is the list of possible values, each repeated by its count.
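The description above matches what is often called counting sort, a simple form of bucket sort. Here is a minimal sketch, assuming the elements are small non-negative integers:

```python
def counting_sort(lst, max_value):
    """Sort small non-negative integers in O(n + k) time, O(k) extra space."""
    counts = [0] * (max_value + 1)
    for x in lst:              # tally every element
        counts[x] += 1
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count)  # emit each value `count` times
    return result

print(counting_sort([3, 1, 4, 1, 5, 9, 2, 6], 9))
# [1, 1, 2, 3, 4, 5, 6, 9]
```

Note the trade-off: the `counts` array costs extra memory proportional to the range of possible values, which is what buys the linear running time.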
The complexity can also be analyzed as best case, worst case, average case and expected case.
Let’s take insertion sort as an example. Insertion sort iterates through all the elements in the list. If an element is smaller than its previous element, it moves the element backwards until the element before it is no larger.
If the array is initially sorted, no moves are made. The algorithm just iterates through the array once, which results in a time complexity of O(n). Therefore, we say that the best-case time complexity of insertion sort is O(n). A complexity of O(n) is also often called linear complexity.
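Here is an instrumented insertion sort (the names are my own) that shows the best case: on already-sorted input the inner loop never fires, so the work is linear:

```python
def insertion_sort(lst):
    """Sort lst in place, returning the number of element shifts."""
    shifts = 0
    for i in range(1, len(lst)):
        current = lst[i]
        j = i - 1
        while j >= 0 and lst[j] > current:  # shift larger elements right
            lst[j + 1] = lst[j]
            shifts += 1
            j -= 1
        lst[j + 1] = current
    return shifts

print(insertion_sort([1, 2, 3, 4, 5]))  # 0 shifts: best case, O(n)
print(insertion_sort([5, 4, 3, 2, 1]))  # 10 shifts: worst case, n(n-1)/2
```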
Sometimes an algorithm just has bad luck. Quick sort, for example, takes O(n²) time if the elements are sorted in the opposite order (with a naïve pivot choice), but on average it sorts the array in O(n · log(n)) time. Generally, when we evaluate the time complexity of an algorithm, we look at its worst-case performance. More on that, and on quick sort, will be discussed in the next section.
The average-case complexity describes the expected performance of the algorithm, and sometimes involves calculating the probability of each scenario. The details can get complicated and are therefore not discussed in this article. Below is a cheat-sheet on the time and space complexity of typical algorithms.
Solution to Section 4 Question:
By inspecting the functions, we should be able to immediately rank the polynomials from most complex to least complex with rule 3, where the square root of n is just n to the power of 0.5.
Then by applying rules 2 and 6, we get the following. A base-3 log can be converted to base 2 with the log base conversion rule; it still grows a little more slowly than a base-2 log, and is therefore ranked after it.
The rest may look a little bit tricky, but let’s try to unveil their true faces and see where we can put them.
First of all, 2 to the power of (2 to the power of n) is greater than 2 to the power of n, and the +1 pushes it up even more.
And then, since we know that 2 to the power of log(n), with base 2, is equal to n, we can convert the following. The log raised to the power 0.001 grows a little faster than a constant, but more slowly than almost anything else.
The one with n to the power of log(log(n)) is actually a variation of the quasi-polynomial, which is greater than polynomial but less than exponential. Since log(log(n)) grows more slowly than log(n), its complexity is a bit less. The one with the inverse log converges to a constant, since n to the power of 1/log(n) is exactly the base of the log.
The factorials can be represented by multiplications, and thus can be converted to additions outside the logarithmic function. The “n choose 2” can be converted into a polynomial with a cubic term being the largest.
And finally, we can rank the functions from the most complex to the least complex.
!!! — WARNING — !!!
Contents discussed here are generally not accepted by most programmers in the world. Discuss it at your own risk in an interview. People actually blogged about how they failed their Google interviews because they questioned the authority, like here.
!!! — WARNING — !!!
We previously learned that the worst-case time complexity is O(n²) for quick sort but O(n · log(n)) for merge sort, so merge sort should be faster, right? You have probably guessed that the answer is no. The algorithms are just wired up in a way that makes quick sort the “quick sort”.
To demonstrate, check out this trinket.io I made. It compares the time taken by quick sort and merge sort. I have only managed to test it on arrays up to a length of 10000, but as you can see so far, the time for merge sort grows faster than that of quick sort. Despite quick sort having a worst-case complexity of O(n²), the likelihood of hitting it is really low. Thanks to its smaller constant factors, quick sort ends up with better average performance than merge sort, despite merge sort’s tighter O(n · log(n)) bound.
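If you would like to reproduce a comparison like that yourself, below is a minimal sketch of both algorithms; the pivot choice, input size, and timing setup are my own, and absolute timings will vary by machine:

```python
import random
import timeit

def quick_sort(lst):
    if len(lst) <= 1:
        return lst
    pivot = lst[len(lst) // 2]          # middle element as pivot
    left = [x for x in lst if x < pivot]
    mid = [x for x in lst if x == pivot]
    right = [x for x in lst if x > pivot]
    return quick_sort(left) + mid + quick_sort(right)

def merge_sort(lst):
    if len(lst) <= 1:
        return lst
    half = len(lst) // 2
    left, right = merge_sort(lst[:half]), merge_sort(lst[half:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):  # merge the two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

data = [random.randrange(100_000) for _ in range(10_000)]
for sort in (quick_sort, merge_sort):
    assert sort(data) == sorted(data)
    t = timeit.timeit(lambda: sort(data), number=5)
    print(f"{sort.__name__}: {t:.3f}s")
```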
I have also made the graph below to compare the ratio of the time they take, as the absolute values are hard to read at small sizes. And as you can see, the percentage of time taken by quick sort decreases as the array grows.
The moral of the story is that Big O notation is only a mathematical analysis that provides a reference on the resources consumed by an algorithm. In practice, the results may differ. But it is generally good practice to try to reduce the complexity of our algorithms, unless we run into a case where we know what we are doing.
I like coding, learning new things and sharing them with the community. If there is anything in which you are particularly interested, please let me know. I generally write on web design, software architecture, mathematics and data science. You can find some great articles I have written before if you are interested in any of the topics above.
Hope you have a great time learning computer science!!!
Translated from: https://www.freecodecamp.org/news/big-o-notation-why-it-matters-and-why-it-doesnt-1674cfa8a23c/