Rutgers Machine Learning Reading Group
A talk series and reading group for deep networks and other machine learning topics. Meetings are weekly on Thursdays at 5 pm (Spring 2016) in CBIM.
- Rutgers Machine Learning Reading Group
- Upcoming Schedule
- Reading Stack
- RNN (theory and hands-on)
- Memory Networks
- Deep Learning Theory
- Structured Prediction
- Semantic Segmentation
- Attention-based models
- Bayesian Neural Networks
- Variational Inference & Variational Autoencoders
- Image Captioning
- Hallucination & Dreaming
- Deep Networks & NLP
- Kernel Methods
- Other links
- Past Meetings
- Members and their presentation dates
(upcoming and archived)
|Jan 28||Dong Yang||Fully convolutional networks for semantic segmentation; Semantic image segmentation with deep convolutional nets and fully connected CRFs; Learning Deconvolution Network for Semantic Segmentation|
|Feb 9 (special, 5 pm)||Mohamed Elhoseiny||Zero-Shot Event Detection by Multimodal Distributional Semantic Embedding of Videos|
|Feb 11||Babak Saleh||Toward a Taxonomy and Computational Models of Abnormalities in Images|
|Feb 18||Aditya Chukka||Unifying distillation and privileged information|
|Feb 25||Brian McMahan||Deep Compositional Question Answering with Neural Module Networks; Learning to Compose Neural Networks for Question Answering|
|Mar 17||Spring break|
|Mar 24||Mohamed Elhoseiny||Teaching Machines to See with Language|
|April 7||Qi Chang||Stroke Segmentation Using Convolutional Neural Network|
|April 14||Han Zhang|
|April 21||Colin Rennie|
|April 28||Zacharias Psarakis|
Put interesting papers here that people can choose from or learn about.
RNN (theory and hands-on)
Neural Turing Machines Alex Graves, Greg Wayne, Ivo Danihelka
An Empirical Exploration of Recurrent Network Architectures Rafal Jozefowicz, Wojciech Zaremba, Ilya Sutskever
Memory Networks
Memory Networks Jason Weston, Sumit Chopra, Antoine Bordes
End-To-End Memory Networks Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, Rob Fergus
Deep Learning Theory
- On the saddle point problem for non-convex optimization
Bayesian Neural Networks
Variational Inference & Variational Autoencoders
Hallucination & Dreaming
Deep Networks & NLP
- Learning Distributed Word Representations for Natural Logic Reasoning
- Skip-Thought Vectors
- A Diversity-Promoting Objective Function for Neural Conversation Models
- Do Deep Nets Really Need to be Deep?
| Date | Person | Information |
| --- | --- | --- |
| October 26th | Shaojun Zhu | Recent applications of deep learning in robotics (perhaps focusing on grasping). Some relevant papers: [1-4] are on detecting grasping locations for robots using deep learning. The last one, from Pieter Abbeel's group at Berkeley, is more interesting in that it tries to learn control policies directly from visual input. His group is one of the first advocates of deep learning in robotics and has a series of related work. I have not read it in detail, so I may not cover it this time. |
| November 9th | Yan Zhu | RNN and memory networks. I will mainly go over two excellent blog posts: "The Unreasonable Effectiveness of Recurrent Neural Networks" by Andrej Karpathy and "Understanding LSTM Networks" by Christopher Olah. I will then cover my understanding of memory networks (see the reference paper below). |
| November 16th | Brian McMahan | Reinforcement learning with neural nets. I will discuss recent work combining neural networks and reinforcement learning. Specifically, I will go over Williams (1992), who introduced the REINFORCE algorithms for combining reinforcement learning with neural network backpropagation. This is the algorithm that underlies the model of Mnih, Heess, Graves, and Kavukcuoglu (2014), "Recurrent Models of Visual Attention". If there is time or interest, I will also talk about deep Q-learning, and I can give an overview of RL. |
| November 23rd | Babak Saleh | "A Unified Framework for Fine-art Painting Classification". In this talk I will go through some basics of metric learning and show its applications in my research on fine-art painting analysis. |
| November 30th | Amr Bakry | This talk will be about my PhD work on using image manifold analysis to solve a couple of computer vision problems. More specifically, I will talk about lip-reading in videos, and object recognition and pose estimation in images. |
| December 7th | Chetan Tonde | "Do Deep Nets Really Need to be Deep?" and "A new learning paradigm: Learning using privileged information", related papers presented together. |
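The REINFORCE idea mentioned in the November 16th abstract can be sketched in a few lines: sample an action from a parameterized policy, then nudge the parameters in the direction of the score-function gradient scaled by the reward. The two-armed bandit, reward values, and learning rate below are illustrative assumptions for a minimal demo, not material from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)                  # one logit per action (softmax policy)
true_reward = np.array([0.2, 0.8])   # arm 1 pays more on average

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for step in range(2000):
    pi = softmax(theta)
    a = rng.choice(2, p=pi)                 # sample an action from the policy
    r = rng.normal(true_reward[a], 0.1)     # noisy scalar reward
    # Score-function gradient of log pi(a) for a softmax policy:
    # d/dtheta log pi(a) = onehot(a) - pi
    grad_log_pi = -pi
    grad_log_pi[a] += 1.0
    theta += 0.05 * r * grad_log_pi         # REINFORCE update (no baseline)

print(softmax(theta))  # learned policy; should favor arm 1
```

In practice a baseline is subtracted from the reward to reduce gradient variance; this bare version is the 1992 algorithm in its simplest form.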
Members and their presentation dates
(upcoming and archived)
|Ana Echavarria Uribe|
|Behnam Babgholami Mohamada|
|Bhuvan Chandra Inampudi|
- Members of the reading group will take turns presenting a paper or their own work to the group
- There are no ‘bystander’ attendees.
- Presenting a paper does not have to be an immense endeavor. Reading a paper and telling the group what it is about is necessary and sufficient.
- How is the presenters’ order generated?
- The presenters’ order is generated from the member list in a FIFO manner.
- What should I do if I cannot present at the scheduled time?
- You are responsible for resolving the conflict: first, let the organizer know your situation as early as possible; second, contact other presenters on the list and see if they are willing to swap with you.
- What happens if a new event takes place and we have to change the schedule?
- To minimize disruption, the conflicted slot is moved to the rear of the list after confirming with the original presenter, while all other slots remain unchanged.
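The rescheduling rule above amounts to a simple queue operation: remove the conflicted presenter from their slot and append them to the rear, leaving everyone else in place. A minimal sketch (the names are placeholders, not actual members):

```python
from collections import deque

def move_to_rear(queue, presenter):
    """Move `presenter` to the end of the queue, preserving all other order."""
    q = deque(p for p in queue if p != presenter)
    q.append(presenter)
    return list(q)

schedule = ["Alice", "Bob", "Carol", "Dave"]
print(move_to_rear(schedule, "Bob"))  # ['Alice', 'Carol', 'Dave', 'Bob']
```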
- How do I do X?
- There is a how-to page.
- If something isn’t on there, email vincentzhu122[at]gmail[dot]com or brian[dot]c[dot]mcmahan[at]gmail[dot]com