OpenAI Scholars 2019 - The Syllabus


This week I started a 3 month program as a scholar at OpenAI - an amazing opportunity to study Deep Learning for 2 months & to work on a project for the final month. I am focusing my efforts on getting to grips with Generative Models & the visualization of deep networks. Until now, I have focused on applying DL models - gaining ‘just enough’ understanding to get them working well for specific tasks, particularly in AR & VR.

I was lucky enough to learn (IRL) at the Fast.ai Practical Deep Learning for Coders session last year, and it made me fall in love with DL in a whole new way. If you don’t know Fast.ai & the work of Jeremy Howard & Rachel Thomas, then you are missing out on an “awesome renegade group of DL researchers” (in the words of MIT’s Lex Fridman… ;-)). In all seriousness, it was an amazing course, with a focus on applying DL techniques from the get-go rather than building up slowly from a theoretical foundation. (An approach in keeping with that of the educator David Perkins: if we taught baseball the way we teach maths, young kids would be learning spatial geometry & physics before ever being allowed onto the pitch with a ball.) In that same way, Fast.ai starts out applying & working with DL applications, then fills in the details & foundational theory from there.

That is all to say: with this program & my time at OpenAI, I now want to take the opportunity to step back & really ground myself in the fundamentals, as well as the application, of DL networks. For reference, the outline of the syllabus I will be working from is below - the full overview in the image, with links to some of the readings & papers below that.


[Image: one-page overview of the full syllabus]

Stanford’s famous CS231n & CS236 will form a key part of my curriculum, as well as parts of CS228. In addition, I will be reading from core texts (e.g., the Deep Learning book) & relevant research papers; an initial reading list is included below.

Week 1: Set up for success

Papers/Reading:

• Review Part 1 of the Deep Learning book (Applied Math & Machine Learning Basics)

Microsoft Research paper discussions (used as an intro to how to approach reading ML research papers):

• FaceNet: A Unified Embedding for Face Recognition and Clustering – Paper, Video & GitHub repo

• Stable Tensor Neural Networks for Rapid Deep Learning – Paper & Video

• Learning Fine-grained Image Similarity with Deep Ranking (2014) – Paper & Video (like FaceNet, this trains embeddings with a triplet loss; see the sketch below)
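Both FaceNet & Deep Ranking optimize a triplet loss: pull an anchor embedding toward a positive example & push it away from a negative one by at least a margin. A minimal PyTorch sketch (the 128-d embeddings & 0.2 margin echo FaceNet’s choices, but the random inputs & batch size are purely illustrative):

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Hinge on squared L2 distances: positives must sit closer
    # to the anchor than negatives, by at least the margin.
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()

# Toy usage with random, L2-normalized 128-d embeddings (FaceNet-style).
make = lambda: F.normalize(torch.randn(8, 128), dim=1)
loss = triplet_loss(make(), make(), make())
```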

Begin Research into Visualization:

• Lessons from a year of distilling ML research – video

• Machine Learning for Visualization – video


Week 2: Review & Cement the Key Foundations of DL

Papers/Reading:

• Deep Learning Ch. 6, Deep Feedforward Networks

• Begin investigating visualization in DL with The Building Blocks of Interpretability

• Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps – Paper

• Investigate some of the ways batch size, learning rate, batch norm et al. impact results (see the sketch after this list):
– A Disciplined Approach to Neural Network Hyper-Parameters – Paper
– Train Longer, Generalize Better – Paper
– Don’t Decay the Learning Rate, Increase the Batch Size – Paper
– Rethinking ImageNet Pre-training – Paper
– How Does Batch Normalization Help Optimization? – Paper
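To make those readings concrete, here is a minimal PyTorch sketch combining two of the ideas above: a batch-normalized network & a one-cycle learning-rate schedule in the spirit of Smith’s disciplined approach. Every size & hyperparameter value here is an arbitrary stand-in, not a recommendation from the papers:

```python
import torch
import torch.nn as nn

# Tiny model with batch norm, per "How Does Batch Normalization Help Optimization?"
model = nn.Sequential(nn.Linear(32, 64), nn.BatchNorm1d(64),
                      nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# One-cycle schedule: ramp the LR up, then anneal it down over training,
# instead of holding it fixed or only decaying it.
sched = torch.optim.lr_scheduler.OneCycleLR(opt, max_lr=0.4, total_steps=100)

loss_fn = nn.CrossEntropyLoss()
for step in range(100):
    x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))  # stand-in batch
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
    sched.step()  # LR changes every step, not every epoch
```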


Week 3: Training Neural Nets, with a Focus on CNNs

Papers/Reading:

• Deep Learning Ch. 9, Convolutional Neural Networks

• ImageNet Classification with Deep Convolutional Neural Networks – Paper

• Improvement: Deep Residual Learning for Image Recognition – Paper (see the residual-block sketch after this list)

• Multi-Scale Context Aggregation by Dilated Convolutions – Paper (used in WaveNet, a model referred to later in the GAN section)

• Densely Connected Convolutional Networks – Paper

• Fun applications: Using Deep Learning & Google Street View to Estimate the Demographic Makeup of the US – Paper, & What Makes Paris Look Like Paris – Paper
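The core trick of the ResNet paper fits in a few lines of code: each block learns a residual F(x) & adds the input back in, which keeps very deep networks trainable. A minimal PyTorch sketch (the dilation argument is my addition, to show how the dilated-convolution paper’s idea slots into the same block; channel sizes are illustrative):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic block from "Deep Residual Learning": learn F(x), then output F(x) + x."""
    def __init__(self, channels, dilation=1):
        super().__init__()
        # dilation > 1 enlarges the receptive field without pooling -
        # the idea behind "Multi-Scale Context Aggregation by Dilated Convolutions".
        # padding = dilation keeps the spatial size unchanged for a 3x3 kernel.
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # the skip connection

x = torch.randn(1, 64, 32, 32)
y = ResidualBlock(64, dilation=2)(x)  # same shape as x
```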


Week 4: Sequence Models & Introduction to Generative Models

Papers/Reading:

• Deep Learning Ch. 10, Sequence Modeling

• Char-RNN blog post

• RNN Regularization paper

• DeepSpeech paper

• Detection – YOLO9000

• Style transfer – A Neural Algorithm of Artistic Style & Paper

• Attention architecture – The Annotated Transformer, the ‘Attention Is All You Need’ paper & blog post (see the attention sketch after this list)
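The heart of the Transformer is a single formula: Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. A minimal PyTorch sketch of that scaled dot-product attention (the tensor shapes are illustrative, & this omits the multi-head projections & masking from the full paper):

```python
import math
import torch

def attention(q, k, v):
    # softmax(Q Kᵀ / sqrt(d_k)) V, from "Attention Is All You Need".
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # similarity of queries & keys
    return torch.softmax(scores, dim=-1) @ v           # weighted average of values

q = k = v = torch.randn(2, 10, 64)  # (batch, sequence, d_k): self-attention
out = attention(q, k, v)            # same shape as v
```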


Week 5: Deeper Dive – Generative Models

Read Visualization Papers & investigate the Lucid codebase (see the feature-visualization sketch after this list):

• Differentiable Image Parameterizations (Sections 1–6)

• Activation Atlas

• Lucid repo
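The technique underlying Lucid & these papers is feature visualization by optimization: start from noise & gradient-ascend the input image to maximize a chosen activation. Lucid itself is a TensorFlow library; the following is a bare-bones PyTorch analogue, with an arbitrary layer & channel choice & none of Lucid’s regularizers or image parameterizations:

```python
import torch
import torchvision.models as models

# Freeze a pretrained feature extractor; we optimize the image, not the weights.
model = models.vgg16(pretrained=True).features.eval()
for p in model.parameters():
    p.requires_grad_(False)

img = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from noise
opt = torch.optim.Adam([img], lr=0.05)

for _ in range(100):
    opt.zero_grad()
    acts = img
    for i, layer in enumerate(model):
        acts = layer(acts)
        if i == 10:  # stop at an intermediate conv layer (arbitrary choice)
            break
    loss = -acts[0, 42].mean()  # gradient ascent on channel 42's mean activation
    loss.backward()
    opt.step()
```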

Use the readings & resources below for Weeks 5–8, as most relevant:

Background & Generative Models overall:

• Chapter 10, Directed Graphical Models (Bayes Nets), of Machine Learning: A Probabilistic Perspective by Kevin Murphy

• Andrej Karpathy’s post on the power of autoregressive models

• PixelRNN / PixelCNN paper (see the masked-convolution sketch after this list)

• PixelCNN++ paper

• Image Transformer paper

• Additional tutorials, if needed, from those listed below:
– Tutorial on Deep Generative Models. Aditya Grover and Stefano Ermon. International Joint Conference on Artificial Intelligence, July 2018.
– Tutorial on Generative Adversarial Networks. Computer Vision and Pattern Recognition, June 2018.
– Tutorial on Deep Generative Models. Shakir Mohamed and Danilo Rezende. Uncertainty in Artificial Intelligence, July 2017.
– Tutorial on Generative Adversarial Networks. Ian Goodfellow. Neural Information Processing Systems, December 2016.
– Learning Deep Generative Models. Ruslan Salakhutdinov. Annual Review of Statistics and Its Application, April 2015.
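What makes PixelRNN/PixelCNN autoregressive is the masked convolution: the kernel is zeroed so each pixel is predicted only from the pixels above it & to its left, enforcing the factorization p(x) = ∏ᵢ p(xᵢ | x₍<ᵢ₎). A minimal PyTorch sketch of the type-A mask (layer sizes are illustrative):

```python
import torch
import torch.nn as nn

class MaskedConv2d(nn.Conv2d):
    """Type-A mask from the PixelRNN/PixelCNN paper: each output pixel may
    only see pixels above it and strictly to its left (not itself)."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        _, _, h, w = self.weight.shape
        mask = torch.ones_like(self.weight)
        mask[:, :, h // 2, w // 2:] = 0  # current pixel & everything to its right
        mask[:, :, h // 2 + 1:, :] = 0   # all rows below
        self.register_buffer("mask", mask)

    def forward(self, x):
        self.weight.data *= self.mask  # zero out "future" connections every step
        return super().forward(x)

layer = MaskedConv2d(1, 16, kernel_size=5, padding=2)
out = layer(torch.randn(1, 1, 28, 28))  # same spatial size, 16 channels
```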

Normalizing Flow Models

• Tutorial by Eric Jang, as needed

• Original normalizing flow paper

• NICE: Non-linear Independent Components Estimation

• Density Estimation Using RealNVP, & Glow (see the coupling-layer sketch below)
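The workhorse of NICE, RealNVP & Glow is the coupling layer: transform half the dimensions with a scale & shift computed from the other half, so the inverse is trivial & the Jacobian is triangular, making its log-determinant just the sum of the log-scales. A minimal RealNVP-style sketch in PyTorch (the hidden width & tanh squashing are my choices, not the paper’s exact architecture):

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP-style coupling: y1 = x1, y2 = x2 * exp(s(x1)) + t(x1)."""
    def __init__(self, dim):
        super().__init__()
        # Small net maps the first half to a scale s and shift t for the second half.
        self.net = nn.Sequential(nn.Linear(dim // 2, 64), nn.ReLU(),
                                 nn.Linear(64, dim))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(s)                  # keep scales numerically well-behaved
        y2 = x2 * torch.exp(s) + t
        log_det = s.sum(dim=1)             # log|det Jacobian|, per sample
        return torch.cat([x1, y2], dim=1), log_det

x = torch.randn(8, 4)
y, log_det = AffineCoupling(4)(x)
```

Because x1 passes through unchanged, inverting the layer only requires subtracting t & dividing by exp(s); stacking many such layers (permuting which half is transformed) gives an expressive yet exactly invertible model.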

AutoEncoders

• Deep Learning Ch. 14, Autoencoders

• Neural Ordinary Differential Equations, incl. ODE flows

• Auto-Encoding Variational Bayes (the original VAE paper; see the sketch after this list)

• VAE tutorial paper

• Semi-Supervised Learning with Deep Generative Models paper

• Making your posterior more flexible: Improving Variational Inference with Inverse Autoregressive Flow (paper) & related blog post

• Improve your estimate of the lower bound: the IWAE paper

• Variational Lossy Autoencoder paper
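The key move in Auto-Encoding Variational Bayes is the reparameterization trick: sample z = μ + σ · ε with ε ~ N(0, I), so gradients flow through the sampling step, & train on the negative ELBO (reconstruction term plus KL). A minimal PyTorch sketch (single linear encoder/decoder & MNIST-ish sizes, purely for illustration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal VAE: encode x to a Gaussian q(z|x), sample via
    reparameterization, decode, and return the negative ELBO."""
    def __init__(self, x_dim=784, z_dim=20):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)  # outputs mu and log-variance
        self.dec = nn.Linear(z_dim, x_dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        recon = torch.sigmoid(self.dec(z))
        # Negative ELBO = reconstruction loss + KL(q(z|x) || N(0, I))
        rec = F.binary_cross_entropy(recon, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return rec + kl

loss = VAE()(torch.rand(16, 784))  # e.g. a batch of flattened MNIST-sized inputs
```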


GANs & Variants

• Deep Learning Ch. 20, Deep Generative Models

• GAN tutorial – an overview of generative modeling and GANs

• Tutorial by Ian Goodfellow – Video

• Original GAN paper & 2015 paper (see the training-loop sketch at the end of this list)

• Unpaired image-to-image translation (horses to zebras) – the CycleGAN paper

• Image super-resolution with a GAN – paper

• Investigate types of GANs – DCGAN, Wasserstein GAN, E-GAN – & video

• Lilian Weng’s blog post covering GANs, their problems & improved training methods such as WGAN

• A Note on the Evaluation of Generative Models – paper
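The adversarial game from the original GAN paper also fits in a few lines: the discriminator learns to separate real from generated samples while the generator learns to fool it (here with the non-saturating generator loss). A toy PyTorch sketch on made-up 2-D data; every size & learning rate is an illustrative stand-in:

```python
import torch
import torch.nn as nn

# Toy generator and discriminator over 2-D data.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(64, 2) + 3.0  # stand-in "real" distribution
    fake = G(torch.randn(64, 8))     # generate from noise

    # Discriminator step: real -> 1, fake -> 0 (fake detached so G is untouched).
    opt_d.zero_grad()
    d_loss = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator step: non-saturating loss - make D label fakes as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```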

