
TensorFlow 1.x Deep Learning Cookbook

Preface
What this book covers
What you need for this book
Who this book is for
Sections
Getting ready
How to do it...
How it works...
There's more...
See also
Conventions
Reader feedback
Customer support
Downloading the example code
Errata
Piracy
Questions
TensorFlow - An Introduction
Introduction
Installing TensorFlow
Getting ready
How to do it...
How it works...
There's more...
Hello world in TensorFlow
How to do it...
How it works...
Understanding the TensorFlow program structure
How to do it...
How it works...
There's more...
Working with constants, variables, and placeholders
How to do it...
How it works...
There's more...
Performing matrix manipulations using TensorFlow
How to do it...
How it works...
There's more...
Using a data flow graph
How to do it...
Migrating from 0.x to 1.x
How to do it...
There's more...
Using XLA to enhance computational performance
Getting ready
How to do it...
Invoking CPU/GPU devices
How to do it...
How it works...
TensorFlow for Deep Learning
How to do it...
There's more...
Different Python packages required for DNN-based problems
How to do it...
See also
Regression
Introduction
Choosing loss functions
Getting ready
How to do it...
How it works...
There's more...
Optimizers in TensorFlow
Getting ready
How to do it...
There's more...
See also
Reading from CSV files and preprocessing data
Getting ready
How to do it...
There's more...
House price estimation - simple linear regression
Getting ready
How to do it...
How it works...
There's more...
House price estimation - multiple linear regression
How to do it...
How it works...
There's more...
Logistic regression on the MNIST dataset
How to do it...
How it works...
See also
Neural Networks - Perceptron
Introduction
Activation functions
Getting ready
How to do it...
How it works...
There's more...
See also
Single-layer perceptron
Getting ready
How to do it...
There's more...
Calculating gradients of the backpropagation algorithm
Getting ready
How to do it...
How it works...
There's more...
See also
MNIST classifier using MLP
Getting ready
How to do it...
How it works...
Function approximation using MLP - predicting Boston house prices
Getting ready
How to do it...
How it works...
There's more...
Tuning hyperparameters
How to do it...
There's more...
See also
Higher-level APIs - Keras
How to do it...
There's more...
See also
Convolutional Neural Networks
Introduction
Local receptive fields
Shared weights and bias
A mathematical example
ConvNets in TensorFlow
Pooling layers
Max pooling
Average pooling
ConvNets summary
Creating a ConvNet to classify handwritten MNIST numbers
Getting ready
How to do it...
How it works...
Creating a ConvNet to classify CIFAR-10
Getting ready
How to do it...
How it works...
There's more...
Transferring style with VGG19 for image repainting
Getting ready
How to do it...
How it works...
There's more...
Using a pretrained VGG16 net for transfer learning
Getting ready
How to do it...
How it works...
There's more...
Creating a DeepDream network
Getting ready
How to do it...
How it works...
There's more...
See also
Advanced Convolutional Neural Networks
Introduction
Creating a ConvNet for Sentiment Analysis
Getting ready
How to do it...
How it works...
There's more...
Inspecting what filters a VGG pre-built network has learned
Getting ready
How to do it...
How it works...
There's more...
Classifying images with VGGNet, ResNet, Inception, and Xception
VGG16 and VGG19
ResNet
Inception
Xception
Getting ready
How to do it...
How it works...
There's more...
Recycling pre-built Deep Learning models for extracting features
Getting ready
How to do it...
How it works...
Very deep InceptionV3 Net used for Transfer Learning
Getting ready
How to do it...
How it works...
There's more...
Generating music with dilated ConvNets, WaveNet, and NSynth
Getting ready
How to do it...
How it works...
There's more...
Answering questions about images (Visual Q&A)
How to do it...
How it works...
There's more...
Classifying videos with pre-trained nets in six different ways
How to do it...
How it works...
There's more...
Recurrent Neural Networks
Introduction
Vanishing and exploding gradients
Long Short-Term Memory (LSTM)
Gated Recurrent Units (GRUs) and Peephole LSTM
Operating on sequences of vectors
Neural machine translation - training a seq2seq RNN
Getting ready
How to do it...
How it works...
Neural machine translation - inference on a seq2seq RNN
How to do it...
How it works...
All you need is attention - another example of a seq2seq RNN
How to do it...
How it works...
There's more...
Learning to write as Shakespeare with RNNs
How to do it...
How it works...
First iteration
After a few iterations
There's more...
Learning to predict future Bitcoin value with RNNs
How to do it...
How it works...
There's more...
Many-to-one and many-to-many RNN examples
How to do it...
How it works...
Unsupervised Learning
Introduction
Principal component analysis
Getting ready
How to do it...
How it works...
There's more...
See also
k-means clustering
Getting ready
How to do it...
How it works...
There's more...
See also
Self-organizing maps
Getting ready
How to do it...
How it works...
See also
Restricted Boltzmann Machine
Getting ready
How to do it...
How it works...
See also
Recommender system using RBM
Getting ready
How to do it...
There's more...
DBN for Emotion Detection
Getting ready
How to do it...
How it works...
There's more...
Autoencoders
Introduction
See also
Vanilla autoencoders
Getting ready
How to do it...
How it works...
There's more...
Sparse autoencoder
Getting ready
How to do it...
How it works...
There's more...
See also
Denoising autoencoder
Getting ready
How to do it...
See also
Convolutional autoencoders
Getting ready
How to do it...
How it works...
There's more...
See also
Stacked autoencoder
Getting ready
How to do it...
How it works...
There's more...
See also
Reinforcement Learning
Introduction
Learning OpenAI Gym
Getting ready
How to do it...
How it works...
There's more...
See also
Implementing a neural network agent to play Pac-Man
Getting ready
How to do it...
Q-learning to balance Cart-Pole
Getting ready
How to do it...
There's more...
See also
Game of Atari using Deep Q Networks
Getting ready
How to do it...
There's more...
See also
Policy gradients to play the game of Pong
Getting ready
How to do it...
How it works...
There's more...
AlphaGo Zero
See also
Mobile Computation
Introduction
TensorFlow, mobile, and the cloud
Installing TensorFlow mobile for macOS and Android
Getting ready
How to do it...
How it works...
There's more...
Playing with TensorFlow and Android examples
Getting ready
How to do it...
How it works...
Installing TensorFlow mobile for macOS and iPhone
Getting ready
How to do it...
How it works...
There's more...
Optimizing a TensorFlow graph for mobile devices
Getting ready
How to do it...
How it works...
Profiling a TensorFlow graph for mobile devices
Getting ready
How to do it...
How it works...
Transforming a TensorFlow graph for mobile devices
Getting ready
How to do it...
How it works...
Generative Models and CapsNet
Introduction
So what is a GAN?
Some cool GAN applications
Learning to forge MNIST images with simple GANs
Getting ready
How to do it...
How it works...
Learning to forge MNIST images with DCGANs
Getting ready
How to do it...
How it works...
Learning to forge Celebrity Faces and other datasets with DCGAN
Getting ready
How to do it...
How it works...
There's more...
Implementing Variational Autoencoders
Getting ready
How to do it...
How it works...
There's more...
See also
Learning to beat the previous MNIST state-of-the-art results with Capsule Networks
Getting ready
How to do it...
How it works...
There's more...
Distributed TensorFlow and Cloud Deep Learning
Introduction
Working with TensorFlow and GPUs
Getting ready
How to do it...
How it works...
Playing with Distributed TensorFlow: multiple GPUs and one CPU
Getting ready
How to do it...
How it works...
Playing with Distributed TensorFlow: multiple servers
Getting ready
How to do it...
How it works...
There's more...
Training a Distributed TensorFlow MNIST classifier
Getting ready
How to do it...
How it works...
Working with TensorFlow Serving and Docker
Getting ready
How to do it...
How it works...
There's more...
Running Distributed TensorFlow on Google Cloud (GCP) with Compute Engine
Getting ready
How to do it...
How it works...
There's more...
Running Distributed TensorFlow on Google CloudML
Getting ready
How to do it...
How it works...
There's more...
Running Distributed TensorFlow on Microsoft Azure
Getting ready
How to do it...
How it works...
There's more...
Running Distributed TensorFlow on Amazon AWS
Getting ready
How to do it...
How it works...
There's more...
Learning to Learn with AutoML (Meta-Learning)
Meta-learning with recurrent networks and with reinforcement learning
Meta-learning blocks
Meta-learning novel tasks
Siamese Network
Applications of Siamese Networks
A working example - MNIST
Tensor Processing Units (TPUs)
Components of TPUs
Advantages of TPUs
Accessing TPUs
Resources on TPUs
TensorFlow 1.x Deep Learning Cookbook
Over 90 unique recipes to solve artificial-intelligence-driven problems with Python
Antonio Gulli
Amita Kapoor
BIRMINGHAM - MUMBAI
TensorFlow 1.x Deep Learning Cookbook

Copyright © 2017 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing and its dealers and distributors, will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: December 2017
Production reference: 1081217

Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham B3 2PB, UK.

ISBN 978-1-78829-359-4

www.packtpub.com
Credits

Authors: Antonio Gulli, Amita Kapoor
Reviewers: Nick McClure, Narotam Singh, Corrado Zoccolo
Copy Editors: Safis Editing, Vikrant Phadkay
Commissioning Editor: Sunith Shetty
Acquisition Editor: Tushar Gupta
Content Development Editor: Tejas Limkar
Technical Editor: Danish Shaikh
Project Coordinator: Manthan Patel
Proofreader: Safis Editing
Indexer: Rekha Nair
Graphics: Tania Dutta
Production Coordinator: Deepika Naik
About the Authors

Antonio Gulli is a transformational software executive and business leader with a passion for establishing and managing global technological talent for innovation and execution. He is an expert in search engines, online services, machine learning, information retrieval, analytics, and cloud computing. So far, he has been lucky enough to gain professional experience in four different countries in Europe and to manage teams in six different countries in Europe and America. Currently, he works as site lead and director of cloud at Google Warsaw, driving European efforts for Serverless, Kubernetes, and Google Cloud UX. Previously, Antonio helped to innovate academic search as the vice president for Elsevier, a worldwide leading publisher. Before that, he drove query suggestions and news search as a principal engineer for Microsoft. Earlier, he served as the CTO for Ask.com, driving multimedia and news search. Antonio has filed for 20+ patents, published multiple academic papers, and served as a senior PC member at multiple international conferences. He truly believes that to be successful, you must have a great combination of management and research skills, a just-get-it-done attitude, and a selling attitude.

I thank every reader of this book for your attention and trust. I am humbled by the number of comments received on LinkedIn and Facebook: you, the reader, provided immense help in making this book better. I would also like to thank various people for providing support during the process of writing the book. In no particular order: Susana, Ewa, Ignacy, Dawid, Max, Jarek, Jerzy, Nina, Laura, Antonella, Eric, Ettore, Francesco, Liubov, Marco, Fabio, Giacomo, Saskia, Christina, Wieland, and Yossi. I am very grateful to my coauthor, Amita, for her valuable comments and suggestions. I am extremely thankful to the reviewers of this book, Eric Brewer, Corrado Zoccolo, and Sujit Pal, for going through the entire book content. Special thanks to my manager, Eyal, for supporting me during the writing process and for his constant trust. Part of this book was written at Charlotte Menora (http://bistrocharlotte.pl/), a pub in Warsaw where I found myself writing pages after work. It is an inspirational place, modern and cool like the city of Warsaw itself these days, and I definitely recommend it if you are visiting Poland. Last but not least, I am grateful to the entire editorial team at Packt, especially Tushar Gupta and Tejas Limkar, for all the support, constant reminders regarding the schedule, and continuous motivation. Thanks for your patience.

Amita Kapoor is an associate professor in the Department of Electronics, SRCASW, University of Delhi. She has been actively teaching neural networks for the last 20 years. She completed her master's in electronics in 1996 and her PhD in 2011. During her PhD, she was awarded the prestigious DAAD fellowship to pursue part of her research work at the Karlsruhe Institute of Technology, Karlsruhe, Germany. She was awarded the best presentation award at the International Conference Photonics 2008 for her paper. She is a member of professional bodies such as the OSA (Optical Society of America), IEEE (Institute of Electrical and Electronics Engineers), INNS (International Neural Network Society), and ISBS (Indian Society for Buddhist Studies). Amita has more than 40 publications in international journals and conferences to her credit. Her present research areas include machine learning, artificial intelligence, neural networks, robotics, Buddhism (philosophy and psychology), and ethics in AI.

This book is an attempt to summarize all that I have learned in the field of deep neural networks. I have presented it in a manner that readers will find easy to understand and apply, and so the prime motivation for this book comes from you, the readers. I thank every reader of this book for being consistently present at the back of my mind, especially when I felt lazy. I would also like to thank Professor Parongama Sen, University of Calcutta, for introducing me to the subject in 1994, and my friends Nirjara Jain and Shubha Swaminathan for the hours spent in the college library discussing Asimov, his stories, and the future that neural networks hold for our society. I am very grateful to my coauthor, Antonio Gulli, for his valuable comments and suggestions, and to the reviewers of this book, Narotam Singh and Nick McClure, for painstakingly going through the entire content and rechecking the code. Last but not least, I am grateful to the entire editorial team at Packt, especially Tushar Gupta and Tejas Limkar, for all the support, constant reminders regarding the schedule, and continuous motivation.