
Deep Learning with Python: A Hands-on Introduction

Cover
Copyright
Contents at a Glance
Contents
About the Author
1: Introduction to Deep Learning
    Historical Context
    Advances in Related Fields
    Prerequisites
    Overview of Subsequent Chapters
    Installing the Required Libraries
2: Machine Learning Fundamentals
    Intuition
    Binary Classification
    Regression
    Generalization
    Regularization
    Summary
3: Feed Forward Neural Networks
    Unit
    Overall Structure of a Neural Network
    Expressing the Neural Network in Vector Form
    Evaluating the Output of the Neural Network
    Training the Neural Network
    Deriving Cost Functions using Maximum Likelihood
        Binary Cross Entropy
        Cross Entropy
        Squared Error
        Summary of Loss Functions
    Types of Units/Activation Functions/Layers
        Linear Unit
        Sigmoid Unit
        Softmax Layer
        Rectified Linear Unit (ReLU)
        Hyperbolic Tangent
    Neural Network Hands-on with AutoGrad
    Summary
4: Introduction to Theano
    What is Theano
    Theano Hands-On
    Summary
5: Convolutional Neural Networks
    Convolution Operation
    Pooling Operation
    Convolution-Detector-Pooling Building Block
    Convolution Variants
    Intuition behind CNNs
    Summary
6: Recurrent Neural Networks
    RNN Basics
    Training RNNs
    Bidirectional RNNs
    Gradient Explosion and Vanishing
    Gradient Clipping
    Long Short Term Memory
    Summary
7: Introduction to Keras
    Summary
8: Stochastic Gradient Descent
    Optimization Problems
    Method of Steepest Descent
    Batch, Stochastic (Single and Mini-batch) Descent
        Batch
        Stochastic Single Example
        Stochastic Mini-batch
        Batch vs. Stochastic
    Challenges with SGD
        Local Minima
        Saddle Points
        Selecting the Learning Rate
        Slow Progress in Narrow Valleys
    Algorithmic Variations on SGD
        Momentum
        Nesterov Accelerated Gradient (NAG)
        Annealing and Learning Rate Schedules
        Adagrad
        RMSProp
        Adadelta
        Adam
        Resilient Backpropagation
        Equilibrated SGD
    Tricks and Tips for using SGD
        Preprocessing Input Data
        Choice of Activation Function
        Preprocessing Target Value
        Initializing Parameters
        Shuffling Data
        Batch Normalization
        Early Stopping
        Gradient Noise
    Parallel and Distributed SGD
        Hogwild
        Downpour
    Hands-on SGD with Downhill
    Summary
9: Automatic Differentiation
    Numerical Differentiation
    Symbolic Differentiation
    Automatic Differentiation Fundamentals
        Forward/Tangent Linear Mode
        Reverse/Cotangent/Adjoint Linear Mode
    Implementation of Automatic Differentiation
        Source Code Transformation
        Operator Overloading
    Hands-on Automatic Differentiation with Autograd
    Summary
10: Introduction to GPUs
    Summary
Index