Do you know these ten practical basic algorithms every programmer should have at hand?

When it comes to algorithms, what is the first thing you dig up from your treasure trove of knowledge? Haha, can't remember? It doesn't matter~ let me reintroduce you to these old friends.

Ride-or-Die Friend No. 1: Quicksort

Quicksort is a sorting algorithm developed by Tony Hoare. On average, sorting n items requires O(n log n) comparisons; in the worst case it requires O(n²) comparisons, though this is uncommon. In practice, quicksort is usually significantly faster than other O(n log n) algorithms because its inner loop can be implemented efficiently on most architectures.

Quicksort uses a divide and conquer strategy to divide a list into two sub-lists.

Algorithm steps:

1. Pick an element from the sequence, called the "pivot".

2. Reorder the sequence so that all elements smaller than the pivot come before it and all elements larger than the pivot come after it (elements equal to the pivot can go to either side). After this pass, the pivot sits in its final position in the sequence. This is called the partition operation.

3. Recursively sort the sub-array of elements smaller than the pivot and the sub-array of elements larger than the pivot.

The base case of the recursion is a sequence of size zero or one, which is always sorted. Although the algorithm keeps recursing, it is guaranteed to terminate, because each iteration puts at least one element into its final position.
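The three steps above can be sketched in a few lines of Python (a minimal illustration that copies sub-lists rather than partitioning in place, so it trades memory for clarity):

```python
def quicksort(seq):
    """Quicksort following the three steps above (list-copying sketch)."""
    if len(seq) <= 1:                                    # base case: size 0 or 1 is sorted
        return seq
    pivot = seq[0]                                       # step 1: pick a pivot
    smaller = [x for x in seq[1:] if x < pivot]          # step 2: partition around the pivot
    larger = [x for x in seq[1:] if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)  # step 3: recurse on both sides
```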

Heart-to-Heart Friend No. 2: Heapsort

Heapsort is a sorting algorithm designed around the heap data structure. A heap is a structure that approximates a complete binary tree and satisfies the heap property: the key of each child node is always less than (or always greater than) that of its parent node.

The average time complexity of heapsort is Ο(nlogn).

Algorithm steps:

1. Create a heap H[0..n-1]

2. Swap the heap head (maximum value) and the heap tail

3. Reduce the size of the heap by 1 and call shift_down(0) to sift the new top element down to its correct position

4. Repeat steps 2 and 3 until the heap size is 1
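A minimal Python sketch of these steps, using a max-heap and a sift_down helper (called shift_down in the steps above; an illustration, not a tuned implementation):

```python
def sift_down(a, start, end):
    """Restore the max-heap property for the subtree rooted at index `start`."""
    root = start
    while 2 * root + 1 <= end:                 # while root has at least one child
        child = 2 * root + 1
        if child + 1 <= end and a[child] < a[child + 1]:
            child += 1                         # pick the larger child
        if a[root] < a[child]:
            a[root], a[child] = a[child], a[root]
            root = child                       # continue sifting down
        else:
            return

def heapsort(a):
    n = len(a)
    for start in range(n // 2 - 1, -1, -1):    # step 1: build the max-heap
        sift_down(a, start, n - 1)
    for end in range(n - 1, 0, -1):            # steps 2-4: swap head/tail, shrink, sift
        a[0], a[end] = a[end], a[0]
        sift_down(a, 0, end - 1)
    return a
```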

Trusted Friend No. 3: Merge Sort

Merge sort (mergesort) is an efficient sorting algorithm based on the merge operation, and a classic application of the divide-and-conquer paradigm.

Algorithm steps:

1. Allocate space equal in size to the two sorted sequences combined; this space will hold the merged sequence

2. Set two pointers, initially at the starting positions of the two sorted sequences

3. Compare the elements the two pointers point to, place the smaller one into the merge space, and advance that pointer

4. Repeat step 3 until a pointer reaches the end of the sequence

5. Copy all remaining elements of the other sequence directly to the end of the merged sequence  
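The five merge steps, plus the recursive split that drives them, might look like this in Python (an illustrative sketch):

```python
def merge(left, right):
    """Merge two sorted lists following steps 1-5 above."""
    merged = []                                   # step 1: space for the merged sequence
    i = j = 0                                     # step 2: two pointers at the starts
    while i < len(left) and j < len(right):       # steps 3-4: pick the smaller, advance
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])                       # step 5: copy the remainder
    merged.extend(right[j:])
    return merged

def mergesort(seq):
    """Divide, sort each half recursively, then merge."""
    if len(seq) <= 1:
        return seq
    mid = len(seq) // 2
    return merge(mergesort(seq[:mid]), mergesort(seq[mid:]))
```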

Intimate Friend No. 4: Binary Search

The binary search algorithm finds a specific element in a sorted array. The search starts at the middle element of the array. If the middle element is exactly the element sought, the search ends. Otherwise, if the target is greater (or smaller) than the middle element, the search continues in the half of the array above (or below) the middle element, again starting from the middle element of that half. If at some step the remaining range is empty, the element is not in the array. Each comparison halves the search range, so the time complexity is O(log n).
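A minimal iterative sketch in Python (returns the index of the target, or -1 once the remaining range becomes empty):

```python
def binary_search(arr, target):
    """Binary search on a sorted list; returns an index or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:                 # non-empty range remains
        mid = (lo + hi) // 2
        if arr[mid] == target:      # middle element is exactly the target
            return mid
        elif arr[mid] < target:
            lo = mid + 1            # continue in the upper half
        else:
            hi = mid - 1            # continue in the lower half
    return -1                       # range emptied: not found
```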

Nodding Acquaintance No. 5: BFPRT (Linear-Time Selection)

The problem the BFPRT algorithm solves is a classic: selecting the k-th largest (or k-th smallest) element from a sequence of n elements. Through clever analysis, BFPRT guarantees linear time complexity even in the worst case. The idea is similar to that of quicksort; of course, to keep the algorithm at O(n) in the worst case, its five authors (Blum, Floyd, Pratt, Rivest, and Tarjan) applied some careful engineering.

Algorithm steps:

1. Divide the n elements into ⌈n/5⌉ groups of 5 elements each.

2. Find the median of each group, using any sorting method such as insertion sort.

3. Recursively call the selection algorithm to find the median of all the medians from the previous step; call it x. If the number of medians is even, choose the smaller of the two middle values.

4. Partition the array around x. Let k be the number of elements less than or equal to x, so n−k elements are greater than x.

5. If i == k, return x; if i < k, recursively look for the i-th smallest element among the elements less than x; if i > k, recursively look for the (i−k)-th smallest element among the elements greater than x.

Termination condition: when n = 1, the remaining element is the i-th smallest.
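The five steps can be sketched in Python as follows (1-based i; the helper name is illustrative, and duplicates are handled with an explicit `equal` list):

```python
def select(arr, i):
    """Return the i-th smallest element (1-based) via median-of-medians."""
    if len(arr) == 1:                          # termination: n == 1
        return arr[0]
    # steps 1-2: groups of 5, median of each group by sorting
    medians = []
    for j in range(0, len(arr), 5):
        group = sorted(arr[j:j + 5])
        medians.append(group[len(group) // 2])
    # step 3: recursively find the (lower) median of the medians
    x = select(medians, (len(medians) + 1) // 2)
    # step 4: partition around x
    less = [v for v in arr if v < x]
    equal = [v for v in arr if v == x]
    greater = [v for v in arr if v > x]
    k = len(less)
    # step 5: recurse into the side that contains the answer
    if i <= k:
        return select(less, i)
    elif i <= k + len(equal):
        return x
    else:
        return select(greater, i - k - len(equal))
```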

Wish-We'd-Met-Sooner Friend No. 6: DFS (Depth-First Search)

Depth-first search (DFS) is a search algorithm. It traverses a tree's nodes by depth, following each branch as deep as possible. When all edges of a node v have been explored, the search backtracks to the node from which v was discovered. This continues until every node reachable from the source has been discovered. If undiscovered nodes remain, one of them is selected as a new source and the process repeats, until all nodes have been visited. DFS is a blind (uninformed) search.

Depth-first search is a classic algorithm in graph theory. Using depth-first search, a topological ordering of the target graph can be generated, and such an ordering makes it easy to solve many related graph problems, such as the longest-path problem. The stack data structure is generally used to assist in implementing DFS.

Depth-first traversal graph algorithm steps:

1. Visit vertex v;

2. Starting in turn from each unvisited neighbor of v, perform a depth-first traversal of the graph, until every vertex that shares a path with v has been visited;

3. If there are still unvisited vertices in the graph at this time, starting from an unvisited vertex, the depth-first traversal is performed again until all vertices in the graph have been visited.

The description above may seem abstract, so here is an example:

After DFS visits some starting vertex v in the graph, it proceeds from v to visit any one of its neighbors, w1; then from w1 it visits a vertex w2 that is adjacent to w1 but not yet visited; then from w2 it makes a similar visit, and so on, until it reaches a vertex u all of whose neighbors have already been visited.

Then it backtracks one step to the vertex visited just before, and checks whether that vertex has other unvisited neighbors. If it does, DFS visits one of them and proceeds from that vertex as above; if not, it backtracks another step. This process repeats until all vertices in the connected graph have been visited.
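A recursive sketch of depth-first traversal in Python, assuming the graph is given as an adjacency dict (an assumed representation, not from the original text):

```python
def dfs(graph, start, visited=None):
    """Depth-first traversal; returns vertices in visit order.

    `graph` maps each vertex to a list of its neighbors (assumed shape).
    """
    if visited is None:
        visited = []
    visited.append(start)               # visit vertex v
    for w in graph.get(start, []):      # each neighbor of v in turn
        if w not in visited:
            dfs(graph, w, visited)      # go as deep as possible before backtracking
    return visited
```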

Like-Minded Friend No. 7: BFS (Breadth-First Search)

Breadth-first search (BFS) is a graph search algorithm. Simply put, BFS starts from the root node and traverses the nodes of the tree (or graph) level by level, across its breadth. The algorithm terminates once all nodes have been visited. BFS is also a blind search. The queue data structure is generally used to assist in implementing BFS.

Algorithm steps:

1. First put the root node into the queue.

2. Remove the first node from the queue and verify that it is the target.

If the target is found, end the search and return the result.

Otherwise add all its unexamined direct children to the queue.

3. If the queue is empty, the entire graph has been examined, meaning the target is not in the graph. End the search and return "target not found".

4. Repeat step 2. 
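The queue-based steps above can be sketched in Python, again assuming an adjacency-dict representation:

```python
from collections import deque

def bfs(graph, root, target):
    """Breadth-first search; True if `target` is reachable from `root`."""
    queue = deque([root])            # step 1: put the root into the queue
    seen = {root}
    while queue:                     # step 3: empty queue means not found
        node = queue.popleft()       # step 2: take the first node out
        if node == target:
            return True              # target found, end the search
        for child in graph.get(node, []):
            if child not in seen:    # enqueue unexamined direct children
                seen.add(child)
                queue.append(child)
    return False                     # "target not found"
```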

Caring Friend No. 8: Dijkstra's Algorithm

Dijkstra's algorithm was proposed by the Dutch computer scientist Edsger Dijkstra. It uses a breadth-first-like strategy to solve the single-source shortest-path problem on directed graphs with non-negative weights, ultimately producing a shortest-path tree. The algorithm is often used in routing, and as a subroutine of other graph algorithms.

The input to the algorithm consists of a weighted directed graph G and a source vertex S in G. We denote the set of all vertices in G by V. Each edge in the graph is an ordered pair of vertices: (u,v) means there is an edge from vertex u to vertex v. We denote the set of all edges in G by E, and edge weights are defined by a weight function w: E→[0,∞]. Thus w(u,v) is the non-negative weight of the edge from vertex u to vertex v. The weight of an edge can be thought of as the distance between the two vertices, and the weight of a path between any two points is the sum of the weights of all edges on that path. Given vertices s and t in V, Dijkstra's algorithm finds the lowest-weight path (i.e., the shortest path) from s to t. The algorithm can also find the shortest paths from one vertex s to every other vertex in the graph. For directed graphs without negative weights, Dijkstra's algorithm is the fastest known single-source shortest-path algorithm.

Algorithm steps:

1. Initially, let S = {V0} and T = {the remaining vertices}. For each vertex Vi in T, set its distance value:

if the arc (V0, Vi) exists, d(V0, Vi) is the weight on that arc

if it does not exist, d(V0, Vi) is ∞

2. From T, select the vertex W with the smallest distance value that is not in S, and add it to S

3. Update the distance values of the remaining vertices in T: if going through W as an intermediate vertex shortens the distance from V0 to Vi, update that distance value.

4. Repeat steps 2 and 3 above until S contains all vertices.
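A common way to realize these steps in Python is with a priority queue (heapq); the adjacency-list shape {u: [(v, weight), ...]} is an assumption for illustration:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths; returns {vertex: distance from source}.

    `graph` maps each vertex u to a list of (v, weight) pairs (assumed shape).
    """
    dist = {source: 0}
    heap = [(0, source)]                        # step 1: only the source is settled-ready
    while heap:
        d, u = heapq.heappop(heap)              # step 2: closest unsettled vertex
        if d > dist.get(u, float('inf')):
            continue                            # stale heap entry, skip
        for v, w in graph.get(u, []):           # step 3: relax edges through u
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd                    # shorter path via u found
                heapq.heappush(heap, (nd, v))
    return dist                                 # step 4: loop ends when heap is empty
```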

Ready-to-Help Friend No. 9: Dynamic Programming

Dynamic programming is a method used in mathematics, computer science and economics to solve complex problems by breaking the original problem into relatively simple sub-problems. Dynamic programming is often suitable for problems with overlapping subproblems and optimal substructure properties, and the time consumption of dynamic programming methods is often much less than that of naive solutions.

The basic idea behind dynamic programming is very simple. Roughly speaking, to solve a given problem, we solve its different parts (i.e., sub-problems) and then combine the sub-problem solutions into a solution of the original problem. Often many sub-problems are very similar, so dynamic programming attempts to solve each sub-problem only once, reducing the amount of computation: once the solution to a given sub-problem has been computed, it is memoized, and the next time the same sub-problem is needed the answer is simply looked up in a table. This practice is especially useful when the number of repeated sub-problems grows exponentially with the size of the input.

The most classic problem about dynamic programming is the knapsack problem.

Algorithm steps:

1. The optimal-substructure property. If the sub-problem solutions contained in an optimal solution of the problem are themselves optimal, we say the problem has the optimal-substructure property (i.e., it satisfies the principle of optimality). This property provides an important clue that a problem can be solved by dynamic programming.

2. The overlapping-subproblems property. When a problem is solved top-down by a recursive algorithm, the sub-problems generated are not always new; some sub-problems recur many times. Dynamic programming exploits this: it computes each sub-problem only once, saves the result in a table, and simply looks the result up when the same sub-problem arises again, achieving high efficiency.
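As an example of both properties, here is a table-filling sketch of the classic 0/1 knapsack problem mentioned above (Python; the one-dimensional dp array is a standard space optimization):

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack: maximum total value within the weight capacity.

    dp[c] holds the best value achievable with capacity c after the
    items processed so far (each sub-problem computed exactly once).
    """
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # iterate capacities backwards so each item is used at most once
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```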

Friend-in-Need No. 10: Naive Bayes Classification

The Naive Bayes classification algorithm is a simple probabilistic classifier based on Bayes' theorem. Bayesian classification rests on probabilistic reasoning: completing inference and decision-making tasks when various conditions are uncertain and only their probabilities of occurrence are known. Probabilistic reasoning is the counterpart of deterministic reasoning. The Naive Bayes classifier is built on an independence assumption: each feature of a sample is assumed to be unrelated to the other features.

The Naive Bayes classifier relies on an exact generative probability model and can achieve very good classification results on a supervised training set. In many practical applications, Naive Bayes model parameters are estimated by maximum likelihood; in other words, the model can be used without committing to Bayesian probability or any other Bayesian methods.

Despite these naive ideas and oversimplified assumptions, the Naive Bayes classifier can still achieve quite good results in many complex real-world situations.
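To make the independence assumption concrete, here is a tiny categorical Naive Bayes sketch in Python (the data shapes and the add-one smoothing are illustrative choices, not from the original text):

```python
from collections import Counter, defaultdict

def train(samples):
    """samples: list of (feature_tuple, label) pairs (assumed shape)."""
    label_counts = Counter(label for _, label in samples)
    feat_counts = defaultdict(Counter)      # (label, position) -> value counts
    for feats, label in samples:
        for i, f in enumerate(feats):
            feat_counts[(label, i)][f] += 1
    return label_counts, feat_counts

def predict(features, label_counts, feat_counts):
    """Pick the label maximizing prior * product of per-feature likelihoods."""
    total = sum(label_counts.values())
    best, best_p = None, -1.0
    for label, c in label_counts.items():
        p = c / total                       # prior P(label)
        for i, f in enumerate(features):    # independence assumption: multiply
            counts = feat_counts[(label, i)]
            # add-one smoothing; denominator uses seen values + 1 as a rough vocabulary size
            p *= (counts[f] + 1) / (sum(counts.values()) + len(counts) + 1)
        if p > best_p:
            best, best_p = label, p
    return best
```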

via CSDN big data

This is igeekbar.com. More like-minded friends are welcome to come and exchange ideas. If anything here falls short, corrections are appreciated, and thank you for reading.

 
