Visualization of Neural Network Principles

Foreword

Neural networks have a strong learning ability and a self-adaptive, self-organizing ability, and that learning ability generally grows stronger as the number of hidden layers increases. Neural networks are therefore used in many scenarios today, the most familiar being deep learning, for example AlphaGo.

About Neural Networks

There are already many variants of the basic neural network, such as convolutional neural networks and recurrent neural networks.

The perceptron is the most basic neural network; it has only an input layer and an output layer. A perceptron can only handle linearly separable problems, while nonlinear problems require a multi-layer neural network. Typical multi-layer networks are shown in the figure below: the one on the left contains an input layer, one hidden layer, and an output layer, while the one on the right contains two hidden layers. The neurons in each layer are fully connected to the neurons in the next layer, and neurons within the same layer are not connected. The input layer receives the input, the hidden layers transform it, and the output layer produces the final output.

[Figure: two example networks, one with a single hidden layer (left) and one with two hidden layers (right)]
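To make the structure concrete, here is a minimal NumPy sketch of a forward pass through a network shaped like the right-hand one above. The layer sizes, random weights, and sigmoid activation are illustrative assumptions, not a reference implementation.

```python
import numpy as np

def sigmoid(z):
    """Logistic activation: squashes values into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical layer sizes: 2 inputs -> 4 hidden -> 2 hidden -> 1 output.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)   # input    -> hidden 1
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # hidden 1 -> hidden 2
W3, b3 = rng.normal(size=(1, 2)), np.zeros(1)   # hidden 2 -> output

def forward(x):
    """Each layer is fully connected to the next: a weighted sum plus a bias,
    followed by a nonlinear activation."""
    h1 = sigmoid(W1 @ x + b1)
    h2 = sigmoid(W2 @ h1 + b2)
    return sigmoid(W3 @ h2 + b3)

print(forward(np.array([0.5, -1.0])))  # a single prediction in (0, 1)
```

Each layer simply computes a weighted sum of the previous layer's outputs, adds a bias, and applies a nonlinearity; stacking such layers is what allows the network to go beyond linearly separable problems.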

How to train a multilayer network

Multi-layer networks are usually trained with the error backpropagation algorithm. The familiar BP neural network refers specifically to a multi-layer feedforward network trained with error backpropagation, although other types of neural networks can also be trained with this algorithm.

In general, error backpropagation uses gradient descent: the error is propagated backwards through the network, and the weights are adjusted step by step so as to minimize the sum of squared errors at the output layer.
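As a rough sketch of this idea (not the exact procedure of any particular library), the following trains a single-hidden-layer network on the XOR problem by propagating the error backwards and taking gradient descent steps on the squared error. The layer sizes, learning rate, and epoch count are arbitrary choices for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy dataset: XOR, a classic problem that a single perceptron cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input  -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output
lr = 1.0  # illustrative learning rate

for epoch in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)      # hidden activations, shape (4, 4)
    out = sigmoid(h @ W2 + b2)    # predictions, shape (4, 1)

    # Backward pass: gradients of 0.5 * sum((out - y)^2).
    d_out = (out - y) * out * (1 - out)   # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # error propagated back to the hidden layer

    # Gradient descent step on every weight and bias.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(3))  # predictions should move towards [0, 1, 1, 0]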

Visual experiment

TensorFlow provides an interactive experimental platform at https://playground.tensorflow.org that lets us understand neural networks better through visualization.

The figures below show two fairly simple classification tasks; after training, the network can tell the two classes apart. We choose x1 and x2 as the input features and use two hidden layers, with 4 neurons and 2 neurons respectively.

[Figures: the trained networks separating the two classes on the two simple datasets]
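For reference, a model roughly equivalent to the playground configuration above (two hidden layers with 4 and 2 neurons) could be sketched with the Keras API as below. The toy dataset, tanh activation, optimizer, and epoch count are assumptions chosen to mimic, not reproduce, the playground.

```python
import numpy as np
import tensorflow as tf

# Toy stand-in for the playground's 2D data: two Gaussian blobs.
rng = np.random.default_rng(0)
class_a = rng.normal(loc=-1.5, size=(200, 2))
class_b = rng.normal(loc=+1.5, size=(200, 2))
X = np.vstack([class_a, class_b]).astype("float32")
y = np.concatenate([np.zeros(200), np.ones(200)]).astype("float32")

# Two hidden layers with 4 and 2 neurons, matching the screenshots above.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="tanh"),
    tf.keras.layers.Dense(2, activation="tanh"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="sgd", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=50, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy] on the training set
```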

If the dataset is more complex, more hidden layers (or more neurons) are needed, as in the example below.

[Figure: a deeper network trained on a more complex dataset]

During the process you can see the input and output of every neuron in every layer, and you can also step through the training yourself. This experimental platform helps beginners understand the principles and workflow of neural networks, and it is well worth playing with.

------------- Recommended reading -------------

My 2017 article summary - machine learning
My 2017 article summary - Java and middleware
My 2017 article summary - deep learning
My 2017 article summary - JDK source code
My 2017 article summary - natural language processing
My 2017 article summary - Java concurrency


