Grokking Algorithms: an algorithm primer as interesting as a novel

Links: https://pan.baidu.com/s/1FJTA9xEiiOtsLJBgP2Tn7A

Extraction code: q3vz

Chapter 1 Introduction to Algorithms

"Binary search": Compared to simple sequential search, the algorithm is applied in an ordered list will look to enhance efficiency, especially with the increasing time list.

Big O notation: describes how running time grows with input size (not the running time itself), and refers to the worst case, for example O(log n) for binary search and O(n) for sequential search.

Chapter 2 Selection Sort

Array: elements are stored at adjacent memory addresses. Reads are fast and random access is supported, O(1). Insertion (reserving extra slots easily wastes memory, and existing elements have to be shifted) and deletion are inconvenient, O(n).

Linked list: each element stores the address of the next element. Reading all elements in sequence is efficient, but jumping to an arbitrary element is inefficient, O(n). Inserting and deleting elements is fast, O(1).

Selection sort: repeatedly find the largest/smallest element among the remaining elements and append it to the sorted portion; time complexity is O(n²).
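
A short sketch of selection sort, assuming we build a new sorted list by repeatedly pulling out the smallest remaining element:

```python
def selection_sort(items):
    """Repeatedly pick the smallest remaining element; O(n^2) overall."""
    result = []
    remaining = list(items)  # work on a copy so the input is untouched
    while remaining:
        smallest_index = 0
        for i in range(1, len(remaining)):
            if remaining[i] < remaining[smallest_index]:
                smallest_index = i
        result.append(remaining.pop(smallest_index))
    return result

print(selection_sort([5, 3, 6, 2, 10]))  # [2, 3, 5, 6, 10]
```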

Chapter 3 Recursion

Recursion: a function calls itself. A recursive function consists of a base case and a recursive case.

The recursive case is where the function calls itself; the base case is where the function stops calling itself, which prevents an infinite loop.
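
A tiny illustration with a factorial function (my own example), marking the base case and the recursive case:

```python
def factorial(n):
    if n <= 1:                       # base case: stop the recursion
        return 1
    return n * factorial(n - 1)      # recursive case: the function calls itself

print(factorial(5))  # 120
```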

Call stack: every function call is pushed onto the call stack and popped off when it completes. The call stack can grow very deep (especially with recursion), which consumes a lot of memory; if that becomes a problem, switch to a loop or use tail recursion.

Chapter 4 Quick Sort

Divide and conquer (D&C): a well-known recursive technique for solving problems: ① identify a simple base case; ② figure out how to reduce the problem until it reaches the base case.

Quicksort: ① pick a pivot value (ideally at random); ② partition the array into two sub-arrays: elements smaller than the pivot and elements larger than the pivot; ③ quicksort the two sub-arrays recursively.
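
A sketch of quicksort along these lines, with a randomly chosen pivot (the helper name is my own):

```python
import random

def quicksort(array):
    """Sort by partitioning around a randomly chosen pivot."""
    if len(array) < 2:
        return array  # base case: 0 or 1 element is already sorted
    pivot = random.choice(array)
    less    = [x for x in array if x < pivot]
    equal   = [x for x in array if x == pivot]
    greater = [x for x in array if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([10, 5, 2, 3]))  # [2, 3, 5, 10]
```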

Big O notation: c is a fixed amount of time (the constant), which is usually ignored, but it can make a difference when two algorithms have the same big-O complexity. Quicksort's average running time is O(n log n); its worst case is O(n²).

Chapter 5 Hash Tables

Hash function: always maps the same input to the same index, and maps different inputs to different indexes.

Hash table: a key-value data structure that combines an array with a hash function; the hash function determines where each element is stored. Lookup, insertion, and deletion are all very fast. Hash tables are used to model mappings, prevent duplicates, and cache data.
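
Python's built-in dict is a hash table; a small sketch of those three uses (mapping, preventing duplicates, caching), with made-up data:

```python
# Mapping: a phone book
phone_book = {"jenny": "867-5309", "emergency": "911"}
print(phone_book["jenny"])

# Preventing duplicates: has this person already voted?
voted = {}
def check_voter(name):
    if name in voted:
        print(f"{name}: kick them out!")
    else:
        voted[name] = True
        print(f"{name}: let them vote!")

check_voter("tom")
check_voter("tom")

# Caching: remember results that were already computed
cache = {}
def get_page(url):
    if url not in cache:
        cache[url] = f"<html>data for {url}</html>"  # pretend this was a slow fetch
    return cache[url]

print(get_page("example.com"))
```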

Collision: two keys are assigned to the same slot. A simple solution: when two keys map to the same slot, store a linked list at that slot. But if such a list grows very long, hash table performance degrades sharply. To avoid collisions, you need: a low load factor (the number of elements the hash table contains divided by the total number of slots; once the load factor exceeds 0.7, the hash table should be resized) and a good hash function (one that maps keys uniformly across the slots).

Chapter 6 Breadth-First Search (BFS)

Graph: made up of nodes and edges. A node can be directly connected to many other nodes; those nodes are called its neighbors. Graphs are divided into directed and undirected graphs.

Queue: first in, first out (FIFO). Stack: last in, first out (LIFO).

BFS answers two kinds of questions. First: starting from node A, is there a path to node B? Second: starting from node A, which path to node B is the shortest?

Algorithm: ① create a queue to hold the nodes to be examined; ② pop a node off the queue; ③ check whether it is the target; ④ if not, add all of its neighbors to the queue; ⑤ go back to step ② until the queue is empty or the target is found. (Note: once a node has been checked, mark it as checked so it is never checked again.)
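
A sketch of these steps on a small hypothetical graph, where the "target" test is simply a name ending in "m":

```python
from collections import deque

graph = {  # hypothetical adjacency list
    "you":    ["alice", "bob", "claire"],
    "alice":  ["peggy"],
    "bob":    ["anuj", "peggy"],
    "claire": ["thom", "jonny"],
    "anuj": [], "peggy": [], "thom": [], "jonny": [],
}

def bfs(start, is_target):
    queue = deque([start])      # ① queue of nodes to examine
    searched = set()            # nodes already checked, never checked twice
    while queue:                # ⑤ repeat until the queue is empty
        node = queue.popleft()  # ② pop the next node
        if node in searched:
            continue
        if is_target(node):     # ③ is it the target?
            return node
        searched.add(node)
        queue.extend(graph[node])  # ④ otherwise enqueue all its neighbors
    return None

print(bfs("you", lambda name: name.endswith("m")))  # thom
```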

Running time: O(V + E), where V is the number of vertices and E is the number of edges.

Chapter 7 Dijkstra's Algorithm

Tree: a special kind of graph in which no edges point back.

Dijkstra's algorithm: finds the shortest path in a weighted graph, but only works when all edge weights are positive; if the graph contains negative-weight edges, use the Bellman-Ford algorithm instead.

Algorithm: ① find the "cheapest" node, i.e. the node that can be reached at the lowest cost so far; ② update the costs of that node's neighbors; ③ repeat until this has been done for every node in the graph; ④ compute the final path.

The key idea behind Dijkstra's algorithm: once you have found the cheapest node in the graph, there is guaranteed to be no cheaper path to that node. This assumption holds only when the graph has no negative-weight edges.
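
A sketch of those four steps on a small hypothetical weighted graph (node names and weights are made up):

```python
import math

# Hypothetical weighted graph: start -> a/b -> finish
graph = {
    "start":  {"a": 6, "b": 2},
    "a":      {"finish": 1},
    "b":      {"a": 3, "finish": 5},
    "finish": {},
}

def dijkstra(graph, start, finish):
    costs = {node: math.inf for node in graph}  # cheapest known cost to each node
    costs[start] = 0
    parents = {}
    processed = set()

    def cheapest_unprocessed():
        candidates = [n for n in costs if n not in processed]
        return min(candidates, key=costs.get) if candidates else None

    node = cheapest_unprocessed()
    while node is not None:                      # ① cheapest node not yet processed
        for neighbor, weight in graph[node].items():
            new_cost = costs[node] + weight
            if new_cost < costs[neighbor]:       # ② found a cheaper path to neighbor
                costs[neighbor] = new_cost
                parents[neighbor] = node
        processed.add(node)                      # ③ repeat for every node
        node = cheapest_unprocessed()

    path = [finish]                              # ④ reconstruct the final route
    while path[-1] != start:
        path.append(parents[path[-1]])
    return costs[finish], list(reversed(path))

print(dijkstra(graph, "start", "finish"))  # (6, ['start', 'b', 'a', 'finish'])
```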

Chapter 8 Greedy Algorithms

Set: similar to a list, but cannot contain duplicate elements; supports union, intersection, and difference operations.

Greedy algorithm: at each step pick the locally optimal choice, in the hope of ending up with the globally optimal solution.

Signs a problem may be NP-complete: ① the algorithm runs fast with few elements, but becomes very slow as the number of elements grows; ② problems that involve "all combinations" are often NP-complete; ③ if the problem cannot be broken into smaller sub-problems and you must consider every possible case, it may be NP-complete; ④ if the problem involves a sequence (such as the sequence of cities in the traveling salesman problem) and is hard to solve, it may be NP-complete; ⑤ if the problem involves sets (such as the set of broadcast stations) and is hard to solve, it may be NP-complete; ⑥ if the problem can be reduced to the set covering problem or the traveling salesman problem, it is definitely NP-complete.

For NP-complete problems, no fast solution has been found; the best approach is to use an approximation algorithm (judged by how fast it is and how close its answer is to the optimal solution).
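
A sketch of a greedy approximation for a small made-up set covering instance (radio stations covering states), which also exercises the set operations mentioned above:

```python
# Hypothetical radio-station set-cover instance, solved greedily (approximate)
states_needed = {"mt", "wa", "or", "id", "nv", "ut", "ca", "az"}
stations = {
    "kone":   {"id", "nv", "ut"},
    "ktwo":   {"wa", "id", "mt"},
    "kthree": {"or", "nv", "ca"},
    "kfour":  {"nv", "ut"},
    "kfive":  {"ca", "az"},
}

final_stations = set()
while states_needed:
    # Greedy choice: the station covering the most still-uncovered states
    best_station, states_covered = None, set()
    for station, states in stations.items():
        covered = states_needed & states        # set intersection
        if len(covered) > len(states_covered):
            best_station, states_covered = station, covered
    states_needed -= states_covered             # set difference
    final_stations.add(best_station)

print(final_stations)  # e.g. {'kone', 'ktwo', 'kthree', 'kfive'}
```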

Chapter 9 Dynamic Programming

Dynamic programming: finds the optimal solution under given constraints. When a problem can be decomposed into discrete sub-problems that are independent of each other, dynamic programming can be used to solve it.

Every dynamic programming solution involves a grid, and the value in each cell is the quantity being optimized; each cell is a sub-problem, so think about how to break the problem into sub-problems, which helps you identify the grid's axes.
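
As one illustration, a longest-common-substring grid, where each cell holds the length of the common substring ending at that pair of characters (the example strings are my own):

```python
def longest_common_substring(a, b):
    """cell[i][j] = length of the common substring ending at a[i] and b[j]."""
    cell = [[0] * len(b) for _ in range(len(a))]
    best = 0
    for i in range(len(a)):
        for j in range(len(b)):
            if a[i] == b[j]:
                # Sub-problem: extend the common substring ending one character earlier
                cell[i][j] = cell[i - 1][j - 1] + 1 if i > 0 and j > 0 else 1
                best = max(best, cell[i][j])
    return best

print(longest_common_substring("fish", "hish"))   # 3 ("ish")
print(longest_common_substring("fish", "vista"))  # 2 ("is")
```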

Chapter 10 K-nearest neighbor algorithm

KNN: used for classification and regression; it works by considering the k nearest neighbors. Classification means assigning to a group (orange or grapefruit); regression means predicting a value (such as a number).

Feature extraction: converting an item (such as a fruit or a user) into a list of comparable numbers. Picking the right features is critical to the success of KNN (features should be directly relevant to the task and unbiased).

Distance calculation: the Pythagorean (Euclidean) formula, or cosine similarity.
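
A sketch of both distance measures on two made-up feature vectors:

```python
import math

def euclidean_distance(a, b):
    """Pythagorean/Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_similarity(a, b):
    """1.0 means the two feature vectors point in exactly the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical user ratings used as features
priyanka = (3, 4, 4, 1, 4)
justin   = (4, 3, 5, 1, 5)
print(euclidean_distance(priyanka, justin))  # 2.0
print(cosine_similarity(priyanka, justin))   # ~0.98
```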

Chapter 11 Where to Go Next

Binary search tree: for each node, the values in its left subtree are smaller than it and the values in its right subtree are larger. Searching for a node in a binary search tree takes O(log n) time on average, but O(n) in the worst case. Compared with a sorted array, a binary search tree is much faster at insertion and deletion, but it has drawbacks, such as not supporting random access. Related structures worth learning about include B-trees, red-black trees, heaps, and splay trees.
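
A minimal sketch of lookup in a binary search tree (the tiny tree is my own example):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def bst_contains(node, target):
    """Average O(log n) lookup: go left for smaller values, right for larger ones."""
    while node is not None:
        if target == node.value:
            return True
        node = node.left if target < node.value else node.right
    return False

# Hypothetical tree:   7
#                     / \
#                    3   9
root = Node(7, Node(3), Node(9))
print(bst_contains(root, 9))  # True
print(bst_contains(root, 4))  # False
```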

Inverted index: a hash table that maps each word to the pages that contain it; commonly used to build search engines.
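
A toy inverted index built with a dict, mapping each word to the set of (made-up) pages containing it:

```python
# Hypothetical pages and their text
pages = {
    "page1": "hi there",
    "page2": "hi world",
}

index = {}
for page, text in pages.items():
    for word in text.split():
        index.setdefault(word, set()).add(page)  # word -> pages containing it

print(index["hi"])     # {'page1', 'page2'}
print(index["world"])  # {'page2'}
```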

Fourier transform: it can break a song down into its individual frequencies, so you can strengthen the parts you care about, for example boosting the bass and hiding the treble. The Fourier transform is well suited to processing signals and can be used to compress music.

Parallel algorithms: hard to design; it is difficult to ensure they work correctly and deliver the expected speedup. One thing is certain: the speedup is not linear, because you must account for the overhead of managing parallelism and for load balancing.

MapReduce: a distributed algorithm built on two simple concepts, the map function and the reduce function.
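
Python's map and functools.reduce illustrate the two concepts on a plain list (a local toy, not a distributed setup):

```python
from functools import reduce

numbers = [1, 2, 3, 4, 5]
doubled = list(map(lambda x: x * 2, numbers))         # map: transform every element
total   = reduce(lambda acc, x: acc + x, doubled, 0)  # reduce: fold everything into one value
print(doubled, total)  # [2, 4, 6, 8, 10] 30
```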

Bloom filter: a probabilistic data structure; the answer it gives may be wrong, but it is very likely to be correct. With a hash table the answer is always reliable, while with a Bloom filter it is only probably correct. The advantage of a Bloom filter is that it takes up very little storage space. HyperLogLog: approximates the number of distinct elements in a set; like a Bloom filter, it cannot give an exact answer, but it uses far less memory.

Secure hash algorithm (SHA) functions: given a string, SHA returns its hash value. SHA is widely used to compute cryptographic hashes and is a one-way hash algorithm.
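
A one-line illustration using Python's standard hashlib (SHA-256 here, as one member of the SHA family):

```python
import hashlib

# One-way hash: the same string always gives the same digest,
# but the digest cannot be reversed back into the string.
print(hashlib.sha256(b"hello world").hexdigest())
```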

Locality-sensitive hashing: sometimes you want the opposite behavior, a hash function that is locally sensitive; when you need to check how similar two items are, Simhash is useful.

Diffie-Hellman algorithm: it and its replacement, RSA, are still widely used. It uses two keys, a public key and a private key, to encrypt and decrypt.

Linear programming: used to maximize a specified objective under given constraints.

Source: blog.csdn.net/u014211007/article/details/93732848