Advanced PyTorch Deep Learning: Autoencoders, GANs, Variational AEs

Jan 1, 2018 · Tel Aviv-Yafo, Israel

Abstract: 


The seminar covers advanced deep learning topics and is aimed at experienced data scientists with a sound mathematical background.

For the labs, we shall use PyTorch. 

Topics will include:

·  Autoencoders, Denoising Autoencoders, Stacked Denoising Autoencoders

·  Bayesian Deep Learning

·  Variational Inference

·  Adversarial Variational Bayes

·  Variational Autoencoders

·  Generative Adversarial Networks

·  Advanced Deep Learning Architectures
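For a flavor of the first topic, a denoising autoencoder can be sketched in PyTorch as follows (layer sizes and the Gaussian corruption level are illustrative choices, not taken from the seminar materials):

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Minimal denoising autoencoder: corrupt the input with Gaussian
    noise, then train the network to reconstruct the clean input."""
    def __init__(self, in_dim=784, hidden_dim=64, noise_std=0.3):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden_dim, in_dim), nn.Sigmoid())

    def forward(self, x):
        noisy = x + self.noise_std * torch.randn_like(x)  # corrupt the input
        return self.decoder(self.encoder(noisy))

model = DenoisingAutoencoder()
x = torch.rand(8, 784)                   # a batch of flattened images
recon = model(x)
loss = nn.functional.mse_loss(recon, x)  # reconstruct the *clean* input
```

Stacking several such encoder/decoder pairs and training them greedily gives the stacked denoising autoencoder variant mentioned above.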

References:

https://github.com/handong1587/handong1587.github.io/tree/master/_posts/deep_learning


2 The Full Story F. Van Veen, “The Neural Network Zoo” (2016)

3 Inception module

4 Inception module / Network in Network Szegedy, Christian, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. "Going deeper with convolutions." CVPR 2015 (GoogLeNet)

5 Lin, Min, Qiang Chen, and Shuicheng Yan. "Network in network." ICLR 2014. Inception module / Network in Network
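The Inception idea can be sketched in PyTorch as parallel convolution branches (1x1, 3x3 and 5x5, plus pooling) whose outputs are concatenated along the channel dimension; channel counts here are illustrative, not GoogLeNet's exact configuration:

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Simplified Inception block: parallel 1x1, 3x3 and 5x5 convolutions
    plus a pooling branch, concatenated along the channel dimension.
    The 1x1 convolutions act as Network-in-Network-style bottlenecks."""
    def __init__(self, in_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 16, kernel_size=1)
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, 8, 1),   # 1x1 bottleneck
                                nn.Conv2d(8, 16, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, 8, 1),
                                nn.Conv2d(8, 16, 5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, 16, 1))

    def forward(self, x):
        # Concatenate all branches: 16 + 16 + 16 + 16 = 64 output channels.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

y = InceptionModule(32)(torch.randn(1, 32, 28, 28))  # -> (1, 64, 28, 28)
```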

6 Deep Residual Networks

7 Deep Residual Networks He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Deep residual learning for image recognition." CVPR 2016 [slides]

8 Deep Residual Networks Residual learning: reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Deep residual learning for image recognition." CVPR 2016 [slides]
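The residual reformulation above amounts to computing F(x) + x through an identity shortcut; a minimal PyTorch sketch of a basic residual block (channel counts illustrative):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: the stacked layers learn the residual F(x)
    with reference to the input, and the output is F(x) + x."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        residual = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        return self.relu(residual + x)  # identity shortcut connection

x = torch.randn(2, 16, 8, 8)
y = ResidualBlock(16)(x)  # same shape as the input
```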

9 Deep Residual Networks He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Deep residual learning for image recognition." CVPR 2016 [slides] Residual connections vs. non-residual connections

10 Deep Residual Networks 3.6% top-5 error with 152 layers!

11 Deep Residual Networks He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Deep residual learning for image recognition." CVPR 2016 [slides]

12 Deep Residual Networks Brendan Jou and Shih-Fu Chang. 2016. Deep Cross Residual Learning for Multitask Visual Recognition. ACM MM 2016. Residuals for multi-task learning Cross-residuals between tasks Independent networks for each task Branch-out for each task

13 Deep Residual Networks Xie, Saining, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. "Aggregated residual transformations for deep neural networks." arXiv preprint arXiv:[masked] (2016). [code] ResNext

14 Deep Residual Networks F. Van Veen, “The Neural Network Zoo” (2016)

15 Skip connections Figure: Kilian Weinberger

16 Skip connections Long, Jonathan, Evan Shelhamer, and Trevor Darrell. "Fully convolutional networks for semantic segmentation." CVPR 2015 & PAMI 2016.

17 Skip connections Ronneberger, Olaf, Philipp Fischer, and Thomas Brox. "U-net: Convolutional networks for biomedical image segmentation." In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234-241. Springer International Publishing, 2015
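The skip connections used by FCNs and U-Net can be sketched as concatenating an encoder feature map with the upsampled decoder path, so fine spatial detail bypasses the bottleneck (a toy network with illustrative sizes, not the published U-Net architecture):

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Toy U-Net-style network: the encoder feature map is concatenated
    with the upsampled decoder feature map via a skip connection."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Conv2d(1, 8, 3, padding=1)
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Conv2d(8, 8, 3, padding=1)
        self.up = nn.Upsample(scale_factor=2, mode='nearest')
        self.dec = nn.Conv2d(16, 1, 3, padding=1)  # 16 = 8 skip + 8 upsampled

    def forward(self, x):
        e = torch.relu(self.enc(x))            # encoder feature map
        m = torch.relu(self.mid(self.down(e)))  # bottleneck
        u = self.up(m)                          # decoder path, upsampled
        return self.dec(torch.cat([u, e], dim=1))  # skip connection

y = TinyUNet()(torch.randn(1, 1, 32, 32))  # output at input resolution
```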

18 Manel Baradad, Amaia Salvador, Xavier Giró-i-Nieto, Ferran Marqués (work in progress) Skip connections

19 Dense connections Dense Block of 5 layers with a growth rate of k=4 Huang, Gao, Zhuang Liu, Kilian Q. Weinberger, and Laurens van der Maaten. "Densely connected convolutional networks." arXiv preprint arXiv:[masked] (2016). [code] Connect every layer to every other layer of the same feature map size.
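The dense-connectivity pattern can be sketched in PyTorch: each layer takes the concatenation of all previous feature maps and contributes k new ones (a toy block matching the slide's 5 layers and growth rate k=4; kernel sizes are illustrative):

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """DenseNet-style block: every layer receives the concatenation of
    all earlier feature maps and adds growth_rate new channels."""
    def __init__(self, in_ch, growth_rate=4, n_layers=5):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Conv2d(in_ch + i * growth_rate, growth_rate, 3, padding=1)
            for i in range(n_layers))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = torch.relu(layer(torch.cat(features, dim=1)))
            features.append(out)  # every later layer sees this output
        return torch.cat(features, dim=1)

# 8 input channels + 5 layers x growth rate 4 = 28 output channels.
y = DenseBlock(in_ch=8)(torch.randn(1, 8, 16, 16))
```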

20 Dense connections Huang, Gao, Zhuang Liu, Kilian Q. Weinberger, and Laurens van der Maaten. "Densely connected convolutional networks." arXiv preprint arXiv:[masked] (2016). [code]

21 Dense connections Huang, Gao, Zhuang Liu, Kilian Q. Weinberger, and Laurens van der Maaten. "Densely connected convolutional networks." arXiv preprint arXiv:[masked] (2016). [code]

22 Dense connections Jégou, Simon, Michal Drozdzal, David Vazquez, Adriana Romero, and Yoshua Bengio. "The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation." arXiv preprint arXiv:[masked] (2016). [code] [slides]

23 Differentiable Neural Computers (DNC) Add a trainable external memory to a neural network. Graves, Alex, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo et al. "Hybrid computing using a neural network with dynamic external memory." Nature 538, no. 7626 (2016): 471-476. [Post by DeepMind]
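The core mechanism, a differentiable read from external memory, can be sketched as content-based soft addressing (a heavily simplified toy version of NTM/DNC addressing; all dimensions are illustrative):

```python
import torch

# A content-based soft read over an external memory matrix: because the
# addressing uses softmax weights rather than a hard index, the whole
# read stays differentiable and the controller can be trained end-to-end.
N_SLOTS, SLOT_DIM = 10, 6
memory = torch.randn(N_SLOTS, SLOT_DIM)
key = torch.randn(SLOT_DIM)  # query key emitted by the controller network

similarity = torch.nn.functional.cosine_similarity(memory, key.unsqueeze(0))
weights = torch.softmax(similarity, dim=0)  # soft addressing weights, sum to 1
read_vector = weights @ memory              # differentiable weighted read
```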

24 Differentiable Neural Computers (DNC) DNC can solve tasks reading information from a trained memory. Graves, Alex, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo et al. "Hybrid computing using a neural network with dynamic external memory." Nature 538, no. 7626 (2016): 471-476. [Post by DeepMind]

25 F. Van Veen, “The Neural Network Zoo” (2016) Graves, Alex, Greg Wayne, and Ivo Danihelka. "Neural turing machines." arXiv preprint arXiv:[masked] (2014). [slides] [code] Differentiable Neural Computers (DNC)

26 Reinforcement Learning (RL) Mnih, Volodymyr, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. "Playing atari with deep reinforcement learning." arXiv preprint arXiv:[masked] (2013).

27 Reinforcement Learning (RL) Bernhard Schölkopf, “Learning to see and act.” Nature 2015.

28 Reinforcement Learning (RL)

https://www.theguardian.com/technology/2014/jan/27/google-acquires-uk-artificial-intelligence-startup-deepmind

29 Reinforcement Learning (RL) An agent (the decision-maker) interacts with the environment and learns through trial and error. Slide credit: UCL Course on RL by David Silver

30 Reinforcement Learning (RL) An agent (the decision-maker) interacts with the environment and learns through trial and error. We model the decision-making process as a Markov Decision Process (MDP). Slide credit: UCL Course on RL by David Silver

31 Reinforcement Learning (RL) ● There is no supervisor, only a reward signal ● Feedback is delayed, not instantaneous ● Time really matters (sequential, non-i.i.d. data) Slide credit: UCL Course on RL by David Silver
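These three properties show up already in a few lines of tabular Q-learning on a toy chain environment (the environment and hyperparameters are invented here for illustration and are not from the slides):

```python
import random

# Toy chain MDP: states 0..4, reward only on reaching state 4.
# This illustrates the slide's points: no supervisor (only a reward
# signal), feedback delayed until the goal, and sequential non-i.i.d. data.
N_STATES, ACTIONS = 5, (-1, +1)  # move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.3

random.seed(0)
for episode in range(300):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection: explore or exploit.
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0  # delayed reward at the goal
        # Temporal-difference update toward r + gamma * max_a' Q(s', a').
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# The greedy policy learned purely from the reward signal.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)])
          for s in range(N_STATES - 1)}
```

After training, the greedy policy moves right from every state, even though no step-by-step supervision was ever given.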

32 Reinforcement Learning (RL) What is Reinforcement Learning? “A way of programming agents by reward and punishment without needing to specify how the task is to be achieved.” [Kaelbling, Littman, & Moore, 96] Slide credit: Míriam Bellver

33 Reinforcement Learning (RL) Deep Q-Network (DQN) Mnih, Volodymyr, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves et al. "Human-level control through deep reinforcement learning." Nature 518, no. 7540 (2015): 529-533.

34 Reinforcement Learning (RL) Deep Q-Network (DQN) Source: Tambet Matiisen, Demystifying Deep Reinforcement Learning (Nervana). Figure: naive DQN vs. refined DQN architectures.
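The DQN ingredients can be sketched as a Q-network mapping state to per-action Q-values, epsilon-greedy exploration, and a TD target built from a separately synced target network (dimensions and hyperparameters are illustrative, not those of the paper):

```python
import random
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 4, 2

# Online Q-network and a periodically synced target network.
q_net = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(),
                      nn.Linear(32, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(),
                           nn.Linear(32, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())

def select_action(state, eps=0.1):
    """Epsilon-greedy: explore with probability eps, else exploit."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return q_net(state).argmax().item()

# One TD-learning step on a single (s, a, r, s') transition.
s, s2 = torch.randn(STATE_DIM), torch.randn(STATE_DIM)
a, r, gamma = select_action(s), 1.0, 0.99
with torch.no_grad():
    target = r + gamma * target_net(s2).max()  # bootstrapped TD target
loss = nn.functional.mse_loss(q_net(s)[a], target)
```

In a full implementation, transitions would come from a replay buffer and the target network would be refreshed every few thousand steps.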

35 Reinforcement Learning (RL) Deep Q-Network (DQN) Andrej Karpathy, “ConvNetJS Deep Q Learning Demo”

36 Reinforcement Learning (RL) Actor-Critic algorithm: the actor performs an action given the state; the critic assesses how good the action was (a q-value), and the gradients are used to train both the actor and the critic. Slide credit: Míriam Bellver. Grondman, Ivo, Lucian Busoniu, Gabriel AD Lopes, and Robert Babuska. "A survey of actor-critic reinforcement learning: Standard and natural policy gradients." IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 42, no. 6 (2012): 1291-1307.

37 Reinforcement Learning (RL) Silver, D., Huang, A., Maddison, C.J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M. and Dieleman, S., 2016. Mastering the game of Go with deep neural networks and tree search. Nature,[masked]), pp[masked]

38 Reinforcement Learning (RL) Miriam Bellver, Xavier Giro-i-Nieto, Ferran Marques, and Jordi Torres. "Hierarchical Object Detection with Deep Reinforcement Learning." In Deep Reinforcement Learning Workshop (NIPS). 2016.

39 Reinforcement Learning (RL) OpenAI Gym + keras-rl. keras-rl implements some state-of-the-art deep reinforcement learning algorithms in Python and seamlessly integrates with the deep learning library Keras. Just like Keras, it works with either Theano or TensorFlow, which means that you can train your algorithm efficiently either on CPU or GPU. Furthermore, keras-rl works with OpenAI Gym out of the box. Slide credit: Míriam Bellver

40 Reinforcement Learning (RL) OpenAI Universe environment

41 Reinforcement Learning (RL) Deep Learning TV, “Reinforcement learning - Ep. 30”

42 Reinforcement Learning (RL) David Silver, “Reinforcement Learning Course” (Google DeepMind)

43 Reinforcement Learning (RL) Nando de Freitas, “Deep Reinforcement Learning - Policy search” (University of Oxford)

44 Adversarial Networks Generative Adversarial Networks (GANs): a Generator G(·) maps a random seed (z) to synthetic samples, and a Discriminator D(·) classifies samples from the real world as real or synthetic. Slide credit: Víctor Garcia. More details in D2L5.
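One GAN training step can be sketched in PyTorch: the discriminator learns to separate real from synthetic samples, and the generator learns to fool it starting from a random seed z (a toy low-dimensional data distribution with illustrative sizes):

```python
import torch
import torch.nn as nn

DATA_DIM, Z_DIM, BATCH = 2, 8, 16
G = nn.Sequential(nn.Linear(Z_DIM, 16), nn.ReLU(), nn.Linear(16, DATA_DIM))
D = nn.Sequential(nn.Linear(DATA_DIM, 16), nn.ReLU(),
                  nn.Linear(16, 1), nn.Sigmoid())
bce = nn.BCELoss()
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)

real = torch.randn(BATCH, DATA_DIM)  # "real world" samples (toy data)
z = torch.randn(BATCH, Z_DIM)        # random seed
fake = G(z)

# Discriminator step: push real samples toward 1, synthetic toward 0.
d_loss = (bce(D(real), torch.ones(BATCH, 1))
          + bce(D(fake.detach()), torch.zeros(BATCH, 1)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: make D label the synthetic samples as real.
g_loss = bce(D(fake), torch.ones(BATCH, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

A conditional GAN (next slide) follows the same loop, with the condition concatenated to the inputs of both G and D.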

45 Adversarial Networks Conditional Adversarial Networks: both the Generator G(·) and the Discriminator D(·) receive an additional condition alongside the real-world data and the real/synthetic decision. Slide credit: Víctor Garcia

46 Adversarial Networks A computer vision problem such as visual saliency prediction can be trained with a perceptual cost that compares (BCE) the generated saliency map against the ground-truth saliency map for a given input. Junting Pan, Cristian Canton, Kevin McGuinness, Noel E. O’Connor, Jordi Torres, Elisa Sayrol and Xavier Giro-i-Nieto. “SalGAN: Visual Saliency Prediction with Generative Adversarial Networks.” arXiv. 2017.

47 Adversarial Networks ...can benefit from adding an adversarial loss: adversarial cost + perceptual cost. Junting Pan, Cristian Canton, Kevin McGuinness, Noel E. O’Connor, Jordi Torres, Elisa Sayrol and Xavier Giro-i-Nieto. “SalGAN: Visual Saliency Prediction with Generative Adversarial Networks.” arXiv. 2017.

48 Adversarial Networks Víctor Garcia and Xavier Giró-i-Nieto (work in progress). Generator and discriminator trained with a binary cross-entropy (real = 1 / synthetic = 0) GAN loss.

49 Adversarial Networks Isola, Phillip, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. "Image-to-image translation with conditional adversarial networks." arXiv preprint arXiv:[masked] (2016). The discriminator compares generated pairs against real-world ground-truth pairs with a BCE loss. Slide credit: Víctor Garcia

50 Adversarial Networks Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. "Generative adversarial nets." NIPS 2014 Goodfellow, Ian. "NIPS 2016 Tutorial: Generative Adversarial Networks." arXiv preprint arXiv:[masked] (2016). F. Van Veen, “The Neural Network Zoo” (2016)

51 The Full Story F. Van Veen, “The Neural Network Zoo” (2016)
