Distilling AI is finally happening! The first 20 devoted and passionate guinea pigs can sign up here. Before you sign up, please read below to make sure you understand what you’re in for :)
Distilling AI is a study group where we discuss different AI-related concepts together in order to create intuitive abstractions and visualizations for them.
To avoid confusion, we want to emphasize that the main idea of these sessions is to create intuitions, not to understand or learn about existing intuitions or the topic at hand. This means that participants should already be knowledgeable about the topics being discussed and should be excited about actively contributing to the creation of intuitions.
These sessions are experimental and not everything about the format has been decided yet (in fact, your feedback will be crucial for shaping the future of the event), but here’s the info for the first series of sessions:
• 1st session - Preliminaries. Why do we need regularization? Maximum likelihood estimation, bias vs. variance, ill-posed problems, and overfitting.
• 2nd session - The L1 and L2 methods.
• 3rd session - Other methods. Regularization through modifying the training data, Bayesian interpretations of regularization methods such as dropout, and interpretations of regularization in terms of information theory. How do these relate to the L1 and L2 methods?
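To give a flavor of what the L1 and L2 sessions will dig into, here is a minimal, illustrative sketch (not part of the session materials): fitting a noisy linear model by gradient descent, once without regularization, once with an L2 ("ridge") penalty, and once with an L1 ("lasso") penalty. The data and hyperparameters are made up for the demo; the point is simply that L2 shrinks all weights smoothly, while L1 pushes uninformative weights toward exactly zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 5
X = rng.normal(size=(n, d))
true_w = np.array([3.0, -2.0, 0.0, 0.0, 0.0])  # only 2 informative features
y = X @ true_w + rng.normal(scale=0.5, size=n)

def fit(X, y, penalty="none", lam=0.5, lr=0.01, steps=2000):
    """Gradient descent on least squares, with an optional penalty term."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        if penalty == "l2":
            grad = grad + lam * w           # ridge: smooth shrinkage
        elif penalty == "l1":
            grad = grad + lam * np.sign(w)  # lasso: constant pull toward 0
        w -= lr * grad
    return w

w_plain = fit(X, y)
w_l2 = fit(X, y, penalty="l2")
w_l1 = fit(X, y, penalty="l1")
```

Running this, `w_l2` has a smaller norm than `w_plain`, and the three uninformative coefficients of `w_l1` end up essentially at zero, which is exactly the intuition we want to build and visualize together.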
• ~1h of small-group discussions.
• ~10m break.
• Remaining time for a wrap-up to share insights with the other groups.
There are only 20 available spots for this first series of sessions, so don't wait too long to sign up ;-) Note that we are expecting you to join all three sessions if you decide to attend.
Please contact the organizers Carl Samuelsson, Reynaldo Boulogne or Amir Hossein Rahnama with feedback, questions, or if you would like to help out with the events.