Understanding Time Complexity and Space Complexity in One Article

The first step in learning data structures and algorithms

Time complexity

What are the most common time complexities?

  • O(1): constant complexity
  • O(log n): logarithmic complexity
  • O(n): linear complexity
  • O(n^2): quadratic complexity
  • O(n^3): cubic complexity
  • O(2^n): exponential complexity
  • O(n!): factorial complexity

When analyzing time complexity, constant coefficients are ignored. For example, O(1) does not mean the code performs exactly one operation; it can be 2, 3, 4... operations. As long as the count is a constant independent of n, we write it as O(1).

How do you determine the time complexity of a piece of code?

The most common way is to read the code directly and count how many times it runs as n varies.

O(1)
$n=100000;
echo 'hello';

O(?)
$n=100000;
echo 'hello1';
echo 'hello2';
echo 'hello3';

The first piece of code executes only once no matter what n is, so its time complexity is O(1). The second is effectively the same, since we do not care about constant coefficients: although it executes the echo output 3 times, it executes 3 times regardless of n, so its time complexity is also constant, i.e. O(1).

Look at the following two pieces of code:

O(n)
for($i = 1; $i <= $n; $i++) {
    echo 'hello';
}

O(n^2)
for($i = 1; $i <= $n; $i++) {
    for($j = 1; $j <= $n; $j++) {
        echo 'hello';
    }
}

For these two pieces of code, the number of executions changes with n. The first piece executes a number of times linear in n, so its time complexity is O(n).

The second piece of code is a nested loop: when n is 100, the inner output statement executes 10,000 times, so its time complexity is O(n^2). If the two loops were sequential rather than nested, the count would be 2n; since we do not care about the constant coefficient in front, the time complexity would be O(n).
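The sequential-loop case can be checked directly by counting operations. This is a minimal Python sketch (the article's snippets are PHP); the `count_ops` name is mine, introduced just for illustration:

```python
def count_ops(n):
    """Count the echo-equivalent operations of two sequential (non-nested) loops."""
    ops = 0
    for i in range(1, n + 1):  # first loop: n operations
        ops += 1
    for j in range(1, n + 1):  # second loop: another n operations
        ops += 1
    return ops  # 2n in total -> O(n) once the constant factor is dropped

print(count_ops(100))  # 200
```

Doubling n doubles the count, the signature of linear growth; only nesting the loops would produce the n^2 behavior.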

O(log n)
for($i = 1; $i <= $n; $i = $i*2) {
    echo 'hello';
}

O(k^n)

function fib($n) {
    if ($n < 2) {
        return $n;
    }

    return fib($n - 1) + fib($n - 2);
}

In the first piece of code, the loop variable doubles on each iteration, so the loop body executes roughly log2(n) times; the time complexity is therefore logarithmic, O(log n). The second piece computes the Fibonacci sequence. It uses plain recursion, which raises the question of how to calculate the time complexity of a recursive program. The answer is O(k^n), where k is a constant, i.e. exponential order, so computing the Fibonacci sequence by naive recursion is very slow. How this exponential time complexity is derived will be explained in detail later. Let's take a look at the curves of the various time complexities.
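The logarithmic loop can be verified by counting its iterations. A small Python sketch (the article's loop is PHP; `doubling_iterations` is a name I introduce for illustration):

```python
import math

def doubling_iterations(n):
    """Count iterations of: for (i = 1; i <= n; i *= 2)."""
    count = 0
    i = 1
    while i <= n:
        count += 1
        i *= 2
    return count

# The count is floor(log2(n)) + 1, which is O(log n).
print(doubling_iterations(1024))  # 11
assert doubling_iterations(1024) == math.floor(math.log2(1024)) + 1
```

Squaring n only adds a constant number of iterations, which is why logarithmic algorithms scale so well.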

It can be seen from this figure that when n is small (within 10 or so), the different time complexities are actually similar. But as n grows, the exponential curves blow up very quickly. Therefore, when writing a program, if we can reduce the time complexity, say from 2^n to n^2, the curve shows that the payoff for large n is enormous. This also tells us that when developing business code, we must understand its time and space complexity, and develop the habit of subconsciously analyzing both after writing any piece of code.

The figure also shows that if your time complexity is bad, the company's machine and resource costs grow as n grows; if you can simplify it, you save the company a lot of money.

Different programs that accomplish the same goal can have different time complexities depending on how they are written. Let's look at a simple example:

Sum the integers from 1 to n.

As everyone learned in elementary school math, there are two methods. Method one solves it by brute force: loop from 1 to n and accumulate. This is a single loop that executes the accumulation n times, so its time complexity is O(n).

$sum = 0;
for ($i=1; $i <=$n; $i++) {
    $sum += $i;
}

Method two is to use a mathematical summation formula:

y = n*(n+1)/2

Using this formula, the program is only one line, so its time complexity is O(1). So although different implementations produce the same final result, their time complexities can be very different.
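Both methods can be put side by side in a short Python sketch (the article's loop version is PHP; the function names here are mine):

```python
def sum_loop(n):
    """O(n): accumulate 1..n one term at a time."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    """O(1): closed-form n*(n+1)/2, a single arithmetic expression."""
    return n * (n + 1) // 2

print(sum_loop(100), sum_formula(100))  # 5050 5050
```

Same answer, but the loop does n additions while the formula does a fixed amount of work no matter how large n gets.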

How do you analyze the time complexity of recursion?

For recursion, the key is to understand the recursive process: how many recursive calls are executed in total. A loop is easy to reason about: a loop of n iterations executes n times. Recursion, however, nests layer by layer, so in many cases we draw a tree based on the execution order of the recursive calls, called the recursion tree. Take the earlier example of finding the nth term of the Fibonacci sequence:

Fib: 0, 1, 1, 2, 3, 5, 8, 13, 21...

F(n) = F(n-1)+F(n-2)

I once encountered this question in an interview and implemented it the easiest way, recursively:

function fib($n) {
    if ($n < 2) {
        return $n;
    }

    return fib($n - 1) + fib($n - 2);
}

As mentioned earlier, its time complexity is O(k^n). To see why, suppose n is 6 and trace how this code executes to calculate Fib(6).

To calculate F(6), the code branches into F(5) and F(4): at least two more operations.

To calculate F(5), by the same reasoning, you need F(4) and F(3); to calculate F(4), you need F(3) and F(2). Two phenomena can be observed here:

  • Each layer has twice as many nodes as the layer above: the first layer has 1 node, the second has 2, the third has 4, the next 8. The number of nodes, i.e. the number of executions, grows exponentially layer by layer, so by depth n roughly 2^n calls have been executed
  • Duplicate nodes appear in the recursion tree, such as F(4) and F(3) above. If you keep expanding the tree, you will find that F(4), F(3), and F(2) are each calculated many times

It is precisely because of all these redundant calculations that computing the 6th Fibonacci number takes on the order of 2^6 operations. So when you encounter this type of question in an interview, try not to write it this way, or the interview will go badly. You can add a cache for the intermediate results (store them in an array or hash so repeatedly needed values are looked up instead of recomputed), or simply write it as a loop.
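The caching idea mentioned above can be sketched in a few lines of Python (the article's version is PHP; the names `fib_memo` and `fib_loop` are mine):

```python
def fib_memo(n, cache=None):
    """Fibonacci with a cache: each F(k) is computed once, so O(n) time."""
    if cache is None:
        cache = {}
    if n < 2:
        return n
    if n not in cache:
        cache[n] = fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    return cache[n]

def fib_loop(n):
    """Iterative version: O(n) time and O(1) extra space."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_memo(30), fib_loop(30))  # 832040 832040
```

Naive recursion would make millions of calls for n = 30; with the cache (or the loop), each F(k) is computed exactly once.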

Master theorem

Let me introduce something called the master theorem. It matters because the master theorem is the tool for determining the time complexity of recursive, divide-and-conquer functions. Its mathematical proof is fairly involved (for the master theorem, see Wikipedia: https://zh.wikipedia.org/wiki/%E4%B8%BB%E5%AE%9A%E7%90%86 ).

In other words, the time complexity of any divide-and-conquer or recursive function can be calculated, and the way to calculate it is via the master theorem. Rather than working through the general form, in practice it usually suffices to remember the following four cases.

These four cases cover most recursive situations that come up in interviews and everyday work.

Binary search: typically used when a sequence is sorted and you search for a target in it, halving the range each time and checking only one side. The final time complexity is O(log n).
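As a concrete illustration of the halving, here is a minimal binary search sketch in Python (the article's other snippets are PHP; the function name is mine):

```python
def binary_search(sorted_nums, target):
    """Return the index of target in sorted_nums, or -1 if absent.
    The search range halves each step, so the time complexity is O(log n)."""
    lo, hi = 0, len(sorted_nums) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_nums[mid] == target:
            return mid
        elif sorted_nums[mid] < target:
            lo = mid + 1   # target can only be in the right half
        else:
            hi = mid - 1   # target can only be in the left half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```

For a million elements this loop runs at most about 20 times, which is exactly the O(log n) behavior described above.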

Binary tree traversal: the time complexity is O(n). By the master theorem, each step splits the problem in two, and each half costs the same, giving the recurrence T(n) = 2T(n/2) + O(1), which resolves to O(n). There is also a simpler way to see it: traversing a binary tree visits each node exactly once, and only once, so the time complexity is O(n).
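The "each node exactly once" argument is easy to see in code. A minimal Python sketch (node class and names are mine, for illustration):

```python
class Node:
    """A binary tree node."""
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def inorder(node, out):
    """In-order traversal: each node is visited exactly once, so O(n) time."""
    if node is None:
        return
    inorder(node.left, out)   # T(n/2) for the left subtree
    out.append(node.val)      # O(1) work at this node
    inorder(node.right, out)  # T(n/2) for the right subtree

root = Node(2, Node(1), Node(3))
result = []
inorder(root, result)
print(result)  # [1, 2, 3]
```

The append runs once per node, so the total work is proportional to the number of nodes, matching the O(n) result from the recurrence.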

Sorted two-dimensional matrix search (optimal sorted matrix search): a binary-search-style search performed in a sorted two-dimensional matrix. By the master theorem, the time complexity is O(n); just remember it.

Merge sort: the best comparison-based sorts run in O(n log n), and merge sort achieves this; its time complexity is O(n log n).

Common interview questions about time complexity

Traversal of a binary tree (preorder, inorder, postorder): what is the time complexity?

Answer: O(n), where n is the total number of nodes in the binary tree. No matter which order you traverse in, each node is visited exactly once, so the complexity is linear in the total number of nodes, i.e. O(n).

Graph traversal: what is the time complexity?

Answer: O(n); each node in the graph is likewise visited exactly once, so the time complexity is also O(n), where n is the total number of nodes in the graph.

Search algorithm: What is the time complexity of DFS (depth first) and BFS (breadth first)?

Answer: O(n); a later article will cover these two algorithms in detail (n is the total number of nodes in the search space).

Binary search: What is the time complexity?

Answer: O(log n)

Space complexity

Space complexity is analyzed much like time complexity, but it is simpler. There are two main principles:

If your code allocates an array, the array's length is basically your space complexity. Allocating a one-dimensional array of length n gives space complexity O(n); allocating a two-dimensional array of size n^2 gives space complexity O(n^2).

If there is recursion, the maximum recursion depth is your space complexity. If the program also allocates an array inside the recursion, the space complexity is the maximum of the two.
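The recursion-depth principle can be demonstrated with the Fibonacci example: even though naive recursion takes exponential time, only one chain of calls is on the stack at any moment. A Python sketch (the `fib_depth` name and depth-tracking parameter are mine, purely for illustration):

```python
def fib_depth(n, depth=1):
    """Return the maximum call-stack depth reached while computing F(n).
    The deepest chain is F(n) -> F(n-1) -> ... -> F(1), so space is O(n),
    even though the time complexity of this naive recursion is exponential."""
    if n < 2:
        return depth
    return max(fib_depth(n - 1, depth + 1), fib_depth(n - 2, depth + 1))

print(fib_depth(6))  # 6
```

So recursive Fibonacci is O(2^n) time but only O(n) space: the stack grows along one branch at a time.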

Finding what stays constant amid rapidly changing technology is the core competitiveness of a technical person. Unite knowledge and action; combine theory with practice.


Origin blog.csdn.net/self_realian/article/details/107531012