Inside the "black box" - Understanding machine learning models

Aug 24, 2021 · Tel Aviv-Yafo, Israel

Join us for the Riskified Technology and superwise.ai meetup, where we’ll hear from Melih and Liran all about explainability in data science.
At this event, Melih will cover why model explainability (specifically using SHAP values) is also important during the research phase and how it can help not just the end-user but also the data scientists building the models.
Liran will share how we can take feature importance in general, and SHAP values in particular, beyond the research phase and use them in a production environment to monitor machine learning models.

Agenda:
18:30 - 19:00 - Mingling, Beer and snacks
19:00 - 19:30 - Explaining the Explainability: Why and How of Explainability in Research - Melih Bahar, Data Scientist, Riskified
19:30 - 20:00 - Feature importance in production - Liran Nahum, Data Scientist, Superwise.ai
20:00 - 20:30 - More beer and mingling

The talks will be delivered in Hebrew.
This is an in-person event and will take place in accordance with Green Pass regulations.
You will be asked to present a Green Pass or a negative COVID test from the past 24 hours at the entrance.

*Due to the uncertain situation caused by COVID-19, the event might change to an online meetup.
We will provide relevant updates here.

------------
//Explaining the Explainability: Why and How of Explainability in Research - Melih Bahar, Data Scientist, Riskified

The harder the questions we are trying to solve, the more sophisticated our machine learning models become, making them almost impossible to interpret. This might mean more features, more complex algorithms, or more complex patterns.
Explainable AI (XAI) has been a trending topic recently, aiming to explain the outcomes of these models, mostly from the point of view of the end-user. In research, however, the machine learning models we use are largely taken for granted as black boxes, because we usually focus on performance and don’t really need to explain the predictions to anyone else.
In this talk, I will cover why model explainability (specifically using SHAP values) is also important during the research phase and how it can help both the end-user and the data scientists building the models. We will see several different ways of looking at a model and its predictions that can help us improve performance even before the production phase.
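As a rough illustration of the kind of research-phase inspection the talk refers to, here is a minimal sketch using the shap Python package with a tree-based model. The dataset, model, and plot choices are illustrative assumptions, not taken from the talk:

```python
# Minimal sketch: inspecting a model with SHAP values during the research phase.
# Assumes the `shap` and `scikit-learn` packages; the dataset and model are
# illustrative choices, not the ones used in the talk.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # one row of attributions per prediction

# Global view: which features drive the model's predictions, and in which direction.
shap.summary_plot(shap_values, X_test)

# Per-feature view: how one feature's value relates to its contribution.
shap.dependence_plot("bmi", shap_values, X_test)
```

Plots like these can surface leakage, redundant features, or unexpected interactions well before the model reaches production.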

Melih is a data scientist at Riskified, which he joined almost 2.5 years ago. Today, he works mainly on research and improvement of the ATO product.
Originally from Turkey and coming from an engineering background, he pivoted his way into the Data Science/Machine Learning world to follow his passion for data and AI.
He believes in constant learning and endless curiosity. When not doing DS/ML, you can find him playing just about any sport or tasting new whisky.

Twitter: https://twitter.com/melih_bhr
Linkedin: https://www.linkedin.com/in/melih-bahar/

//Feature importance in production - Liran Nahum, Data Scientist, Superwise.ai

Feature importance is a common technique in the model development process, used both for feature selection and as an explainability tool. SHAP has recently gained popularity and has become, for many, the go-to feature importance method; however, it is still mainly used in the research phase as part of model development. In this session, we’ll talk about how we can take feature importance in general, and SHAP values in particular, beyond the research phase and use them in a production environment to monitor machine learning models.
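To make the idea concrete, here is one hedged way such monitoring could look: compare the mean absolute SHAP value per feature on incoming production batches against a baseline computed at training time, and flag features whose share of the total importance shifts sharply. The helper names and threshold are illustrative assumptions, not Superwise.ai's actual implementation:

```python
# Illustrative sketch only: monitoring feature-importance drift in production
# by comparing mean |SHAP| per feature against a training-time baseline.
# The structure and threshold are assumptions, not Superwise.ai's implementation.
import numpy as np
import pandas as pd
import shap


def mean_abs_shap(explainer: shap.TreeExplainer, X: pd.DataFrame) -> pd.Series:
    """Global importance per feature: mean absolute SHAP value over a batch."""
    shap_values = explainer.shap_values(X)
    return pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)


def importance_drift(baseline: pd.Series, production: pd.Series,
                     threshold: float = 0.5) -> pd.Series:
    """Flag features whose share of total importance changed by more than
    `threshold` (relative change), sorted by severity."""
    base_share = baseline / baseline.sum()
    prod_share = production / production.sum()
    relative_change = (prod_share - base_share).abs() / base_share
    return relative_change[relative_change > threshold].sort_values(ascending=False)


# Usage sketch: `model`, `X_train`, and `X_prod_batch` come from your own pipeline.
# explainer = shap.TreeExplainer(model)
# baseline = mean_abs_shap(explainer, X_train)
# current = mean_abs_shap(explainer, X_prod_batch)
# print(importance_drift(baseline, current))
```

The appeal of monitoring importance rather than raw feature distributions is that it highlights drift the model actually reacts to, rather than every change in the input data.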

Liran Nahum is a data scientist and researcher at Superwise.ai, an AI assurance platform. He has over 7 years of data science experience in enterprises and startups, researching and participating in AI activities across several verticals and functions.

Linkedin: https://www.linkedin.com/in/nahumliran/

Event organizers
  • Meetups at Riskified

    At our beautiful office in Tel Aviv, we'll be hosting meetups related to fintech, eCommerce, development, machine learning, deep learning, Scala, microservices, and big data, among other interesting topics. There's always delicious food, cold beer, and good vibes!  Riskified is the world's leading eCommerce fraud-prevention company. We use cutting-edge technology, machine learning algorithms, and behavioural analytics to outsmart eCommerce fraud and help our merchants grow. Read more at www.riskified.com.
