[AI combat] Quickly master Tensorflow (1): basic operations

TensorFlow is Google's open-source deep learning framework. It grew out of the Google Brain research project and was built on top of DistBelief, Google's first-generation distributed machine learning framework. TensorFlow was open-sourced on GitHub in November 2015, and distributed support was added in April 2016. The latest version is 1.10, and a preview of TensorFlow 2.0 is expected in the second half of 2018. TensorFlow is still iterating rapidly, constantly adding new features and performance optimizations. It has become the most popular open-source machine learning framework in the world today and is an essential tool for learning and researching AI.

Next, the "Quickly Master TensorFlow" series of articles will be published to help you get up to speed with TensorFlow quickly.

1. What is Tensorflow?
TensorFlow is currently the most popular deep learning framework. It is both an interface for expressing deep learning algorithms and a framework for executing them. TensorFlow's front end supports Python, C++, Java, Go and other languages, while the back end is written in C++ and CUDA, and it runs on many systems, including Windows, macOS, Linux, Android and iOS.
The official website of TensorFlow is http://www.tensorflow.org and its GitHub repository is https://github.com/tensorflow/tensorflow

2. What are the characteristics of Tensorflow?
The main feature of TensorFlow is that it performs numerical computation using data flow graphs. A data flow graph consists of nodes (Nodes) and edges (Edges): nodes represent operations on data, and edges represent the data passed between nodes. The data flowing along the edges is called a tensor, hence the name TensorFlow. As shown below:
[Figure: Tensors Flowing]

[Note] In deep learning, data flow graphs are usually drawn from bottom to top, building up layer by layer, whereas ordinary flowcharts mostly run from top to bottom, so data flow graphs may look unfamiliar at first.

TensorFlow is not strictly a "neural network" library and is not limited to deep learning: as long as a computation can be expressed as a data flow graph, TensorFlow can be used to run it.
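As a minimal illustration (a sketch only; the variable names are chosen for this example), the following TensorFlow 1.x snippet builds a tiny data flow graph with two constant nodes and one add node, then executes it in a session:

import tensorflow as tf

# Nodes: two constants and an add operation; edges: the tensors flowing between them
a = tf.constant(3.0, name='a')
b = tf.constant(4.0, name='b')
c = tf.add(a, b, name='c')   # c is the tensor produced by the add node

with tf.Session() as sess:
    print(sess.run(c))   # 7.0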

3. Install Tensorflow
The [AI Combat] series of articles has already covered setting up the basic AI environment (Ubuntu + Anaconda + TensorFlow + GPU + PyCharm), so it will not be repeated here. For details, please refer to the article: AI Basic Environment Construction.
The code for this series of articles is based on Python 3.6 and Tensorflow 1.10.

4. Hello World
Every programmer starts with "Hello World". After installing TensorFlow, write a Hello World program to check whether the installation succeeded. The code is as follows:

import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))

If "Hello, TensorFlow!" is successfully output, it means that TensorFlow is installed successfully.

As can be seen from this Hello World code:
(1) To use TensorFlow, first import the library with import tensorflow, which is conventionally abbreviated as tf (import tensorflow as tf).
(2) To execute a computation, first create a Session. A Session is the interface through which the user interacts with TensorFlow: the computational graph is built first and then executed via the Session's run method.
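For reference, a Session is also commonly used as a context manager so that it is closed automatically when the block ends; a minimal sketch:

with tf.Session() as sess:
    print(sess.run(hello))   # the session is closed automatically on exiting the block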

5. Learn basic operations
TensorFlow's data flow graph consists of nodes (Nodes) and edges (Edges). The following introduces TensorFlow's basic operations in terms of nodes (operations) and edges (tensors).
(1) Tensor
Tensor is the main data structure of TensorFlow and is used to operate the computational graph.
A tensor can be understood simply as an array of arbitrary dimension; the rank of a tensor is its number of dimensions, and tensors of different rank go by different names, as illustrated in the sketch after this list:
a. scalar: a tensor of rank 0, i.e. a single real number
b. vector: a tensor of rank 1
c. matrix: a tensor of rank 2
d. tensor: a tensor of rank 3 or higher
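A minimal sketch showing one constant of each rank (the values are arbitrary examples):

scalar = tf.constant(3)                           # rank 0
vector = tf.constant([1, 2, 3])                   # rank 1
matrix = tf.constant([[1, 2], [3, 4]])            # rank 2
tensor3 = tf.constant([[[1], [2]], [[3], [4]]])   # rank 3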

There are four main ways to create a tensor:
a. Create a fixed tensor
Create a constant tensor:
constant_ts=tf.constant([1,2,3,4,5])


Create a zero tensor:
zero_ts=tf.zeros([row_dim, col_dim])


Create an all-ones tensor:
ones_ts=tf.ones([row_dim,col_dim])


Create a tensor filled with a given constant:
filled_ts=tf.fill([row_dim,col_dim],123)

b. Create tensors with the same shape as an existing tensor
Create a zeros tensor of the same shape:
zeros_like=tf.zeros_like(constant_ts)


Create a ones tensor of the same shape:
ones_like=tf.ones_like(constant_ts)

c. Create a sequence tensor
Specify the start and end values:
linear_ts = tf.linspace(start=0.0, stop=2.0, num=6)
The result is [0.0, 0.4, 0.8, 1.2, 1.6, 2.0]; note that the result includes the stop value, and that tf.linspace requires floating-point start and stop arguments.


Specify the increment:
seq_ts = tf.range(start=4, limit=16, delta=4)
The result is [4, 8, 12]; note that the result does not include the limit value.

d. Random tensor
Generate uniformly distributed random numbers
randunif_ts=tf.random_uniform([row_dim,col_dim],minval=0,maxval=1)
The result is uniformly distributed random numbers in the range from minval (inclusive) to maxval (exclusive)


Generate a normally distributed random number
randnorm_ts=tf.random_normal([row_dim,col_dim],mean=0.0,stddev=1.0)
where mean represents the mean and stddev represents the standard deviation
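These calls only add operations to the graph; to see the values you still need to run them in a session. A minimal sketch evaluating a few of the tensors above, assuming concrete dimensions row_dim = 2 and col_dim = 3:

row_dim, col_dim = 2, 3
constant_ts = tf.constant([1, 2, 3, 4, 5])
zero_ts = tf.zeros([row_dim, col_dim])
linear_ts = tf.linspace(start=0.0, stop=2.0, num=6)
randunif_ts = tf.random_uniform([row_dim, col_dim], minval=0, maxval=1)

with tf.Session() as sess:
    print(sess.run(constant_ts))   # [1 2 3 4 5]
    print(sess.run(zero_ts))       # 2x3 matrix of zeros
    print(sess.run(linear_ts))     # [0.  0.4 0.8 1.2 1.6 2. ]
    print(sess.run(randunif_ts))   # 2x3 matrix of uniform random values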

(2) Placeholders and variables
Placeholders and variables are the key tools for working with the TensorFlow computational graph, and they serve different purposes:
a. Variables: the parameters of a TensorFlow algorithm; the model is optimized by adjusting the state of these variables.
b. Placeholders: TensorFlow objects that represent the format of input and output data, allowing data of the specified type and shape to be fed in.

Creating variables
A variable is created by wrapping a tensor with the tf.Variable() function, for example:
my_var=tf.Variable(tf.zeros([row_dim,col_dim]))

[Note] After a variable is declared, it must be initialized before it can be used. The most common approach is to initialize all variables at once with the following function:
init_op=tf.global_variables_initializer()
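A minimal sketch of declaring a variable and running the initializer in a session (the variable name and shape are just for this example):

my_var = tf.Variable(tf.zeros([2, 3]))
init_op = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init_op)          # actually initializes all declared variables
    print(sess.run(my_var))    # 2x3 matrix of zeros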

Creating placeholders
A placeholder only declares where data will go, i.e. it reserves a place in the graph; the actual data is passed in through feed_dict when the session runs. Sample code:

a = tf.placeholder(tf.float32, shape=[1, 2])
b = tf.placeholder(tf.float32, shape=[1, 2])
adder_node = a + b   # "+" here is shorthand for tf.add(a, b)
print(sess.run(adder_node, feed_dict={a: [[2, 4]], b: [[5.2, 8]]}))

(The values fed must match the declared shape [1, 2], hence the nested lists.)

The output is [[7.2 12.]]

(3) Basic mathematical operations
The basic TensorFlow operations for adding, subtracting, multiplying, dividing and taking the modulo of tensors are tf.add(), tf.subtract(), tf.multiply(), tf.divide() and tf.mod(). For example:

a = tf.placeholder(tf.float32, shape=[1, 2])
b = tf.placeholder(tf.float32, shape=[1, 2])
adder_node = tf.add(a, b)
print(sess.run(adder_node, feed_dict={a: [[2, 4]], b: [[5.2, 8]]}))

The output is [[7.2 12.]]

Among these, multiplication and division have some special variants: for floor (integer-style) division of floating-point numbers, use tf.floordiv(); to compute the cross product of two tensors, use tf.cross().
Other commonly used mathematical functions include tf.abs(), tf.square(), tf.sqrt(), tf.exp(), tf.log(), tf.pow(), tf.maximum() and tf.minimum().
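A minimal sketch of a few of these operations on constant tensors (the values are arbitrary examples):

x = tf.constant([9.0, 8.0, 7.0])
y = tf.constant([2.0, 4.0, 6.0])

with tf.Session() as sess:
    print(sess.run(tf.subtract(x, y)))   # [7. 4. 1.]
    print(sess.run(tf.multiply(x, y)))   # [18. 32. 42.]
    print(sess.run(tf.floordiv(x, y)))   # [4. 2. 1.]  (floor division of floats)
    print(sess.run(tf.cross(x, y)))      # [ 20. -40.  20.]  (cross product of 3-D vectors)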

Through this brief introduction, you should now have a feel for TensorFlow's characteristics and basic operations. The next articles will dig deeper into TensorFlow with hands-on cases, so stay tuned!

 

Welcome to follow my WeChat public account "Big Data and Artificial Intelligence Lab" (BigdataAILab) for more information

 

