
Questions and Answers
What is the matrix dot product in deep learning?
Deep Neural Network with Matrices
https://matrices.io/deep-neural-network-from-scratch/
Calculus For Deep Learning with Matrices
http://explained.ai/matrix-calculus/index.html
https://arxiv.org/pdf/1802.01528.pdf
Matrix Differentiation
https://atmos.washington.edu/~dennis/MatrixCalculus.pdf
Vector and Tensor Algebra
The tensor product combines two lower-rank tensors into a higher-rank one:
(T⊗S)(v1,…,vr,w1,…,ws) = T(v1,…,vr)·S(w1,…,ws)
http://www.mate.tue.nl/~peters/4K400/VectTensColMat.pdf
Matrix Dot Product
http://www.math.odu.edu/~bogacki/math316/transp/1_3
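The two operations above can be sketched in a few lines of NumPy (the array values here are arbitrary, chosen only to illustrate the shapes):

```python
import numpy as np

# Matrix dot product: entry (i, j) of C is row i of A dotted with column j of B.
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])
C = A @ B            # same as np.dot(A, B)
# C[0, 0] = 1*5 + 2*7 = 19

# Tensor product of two rank-1 tensors yields a rank-2 tensor:
# (v ⊗ w)[i, j] = v[i] * w[j]
v = np.array([1, 2])
w = np.array([3, 4, 5])
T = np.tensordot(v, w, axes=0)   # shape (2, 3)
```

Note how the tensor product raises the rank (two rank-1 inputs give a rank-2 output), while the dot product contracts an index and keeps the result at rank 2.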

How to set up OpenCV in Android Studio?

OpenCV, Android Studio, Tensorflow
https://docs.opencv.org/3.4.0/d0/d6c/tutorial_dnn_android.html
https://www.tensorflow.org/mobile/android_build
https://firebase.google.com/docs/ml-kit/android/use-custom-models
Why is a loss (cost) function needed in a neural network?
A loss (cost) function measures the error between the value your model predicts and the actual value.
A loss function Loss(x, y, w) quantifies how unhappy you would be if you used the weights w to make
a prediction on x when the correct output is y.

The loss (cost) is calculated from the difference between the actual output and the predicted output.
The loss is minimized by modifying the weights and biases in the "correct" direction.
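A minimal sketch of that idea, assuming a one-weight linear model y_pred = w*x + b trained with gradient descent on an L2 loss (the data, learning rate, and iteration count are illustrative choices, not from the original text):

```python
import numpy as np

# Toy data generated from y = 2x + 1; the "correct" weights are w=2, b=1.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0

w, b = 0.0, 0.0          # start from deliberately wrong weights
lr = 0.05                # learning rate

for _ in range(2000):
    y_pred = w * x + b
    # L2 loss: mean squared difference between predicted and actual output
    loss = np.mean((y_pred - y) ** 2)
    # Gradients of the loss with respect to w and b
    grad_w = np.mean(2 * (y_pred - y) * x)
    grad_b = np.mean(2 * (y_pred - y))
    # Move w and b in the "correct" direction: against the gradient
    w -= lr * grad_w
    b -= lr * grad_b
```

After training, w and b land close to 2 and 1, and the loss is near zero; this is exactly "modifying the weights and biases in the correct direction" made concrete.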

For example, the L2 loss function calculates the square of the difference between
a model's predicted value for a labeled example and the actual value of the label.
The goal of training a model is to find a set of weights and biases that have low loss.
The objective in training a classifier is to minimize the number of errors
(zero-one loss) on unseen examples.
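To make the two losses above concrete, a small sketch computing both the L2 loss and the zero-one loss on a handful of illustrative predictions (the labels and scores are invented for the example):

```python
import numpy as np

y_true = np.array([1.0, 0.0, 1.0, 1.0])   # actual labels
y_pred = np.array([0.9, 0.2, 0.4, 0.8])   # model's predicted values

# L2 loss: mean of squared differences between prediction and label
l2_loss = np.mean((y_pred - y_true) ** 2)

# Zero-one loss: fraction of misclassified examples after thresholding at 0.5.
# Here only the third example (0.4 vs label 1) is wrong, so the loss is 1/4.
y_class = (y_pred >= 0.5).astype(float)
zero_one_loss = np.mean(y_class != y_true)
```

The L2 loss is differentiable and so is what gradient descent actually minimizes during training; the zero-one loss is the error count we ultimately care about on unseen examples but cannot differentiate directly.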
http://ml-cheatsheet.readthedocs.io/en/latest/loss_functions.html 
http://iopscience.iop.org/article/10.1088/1742-6596/1004/1/012027/pdf
https://developers.google.com/machine-learning/crash-course/descending-into-ml/training-and-loss
References:
Training and Loss https://developers.google.com/machine-learning/crash-course/descending-into-ml/training-and-loss
Optimization for Training Deep Learning Models http://www.deeplearningbook.org/contents/optimization.html
Applying Gradient Descent in Convolutional Neural Networks http://iopscience.iop.org/article/10.1088/1742-6596/1004/1/012027/pdf
Loss Functions http://ml-cheatsheet.readthedocs.io/en/latest/loss_functions.html
Vector and Tensor Algebra http://www.mate.tue.nl/~peters/4K400/VectTensColMat.pdf
Matrix Differentiation https://atmos.washington.edu/~dennis/MatrixCalculus.pdf
Introduction to Tensor Calculus http://www.ita.uni-heidelberg.de/~dullemond/lectures/tensor/tensor.pdf
Notes on Tensor Products https://www.math.uwaterloo.ca/~kpurbhoo/spring2012-math245/tensor.pdf
Definition and properties of Tensors https://www.uio.no/studier/emner/matnat/math/MAT-INF2360/v12/tensortheory.pdf
Dot Product and Matrix http://www.math.odu.edu/~bogacki/math316/transp/1_3
Matrix Calculus For Deep Learning https://arxiv.org/pdf/1802.01528.pdf
The Matrix Calculus For Deep Learning http://explained.ai/matrix-calculus/index.html
Deep Neural Network https://matrices.io/deep-neural-network-from-scratch/


Reposted from blog.csdn.net/cindywry/article/details/83828207