Building Explainable Machine Learning Systems: The Good, the Bad, and the Ugly

Apr 30, 2018 · New York, United States of America

Hi Makers,

We're excited to be in NYC for this great meetup. We'll also be handing out Patrick Hall and Navdeep Gill's recent booklet, written in collaboration with O'Reilly, "An Introduction to Machine Learning Interpretability."

Can't wait for the meetup? Download your copy here: http://www.oreilly.com/data/free/an-introduction-to-machine-learning-interpretability.csp

Special thanks to Marsee Henon and the O'Reilly team for hosting our meetup.

See you there!

Agenda:
6:30 - 7:00pm - Light bites and check-in
7:00 - 7:45pm - Patrick and Navdeep's talk
7:45 - 8:00pm - Q&A and networking

Abstract:
The good news is that building fair, accountable, and transparent machine learning systems is possible. The bad news is that it’s harder than many blogs and software package docs would have you believe. The truth is that nearly all interpretable machine learning techniques generate approximate explanations, that the fields of eXplainable AI (XAI) and Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) are very new, and that few best practices have been widely agreed upon. This combination can lead to some ugly outcomes!

This talk aims to make your interpretable machine learning project a success by describing fundamental technical challenges you will face in building an interpretable machine learning system, defining the real-world value proposition of approximate explanations for exact models, and then outlining the following viable techniques for debugging, explaining, and testing machine learning models:

* Model visualizations, including decision tree surrogate models, individual conditional expectation (ICE) plots, partial dependence plots, and residual analysis.
* Reason code generation techniques such as LIME, Shapley explanations, and Treeinterpreter.
* Sensitivity analysis.
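To give a flavor of the first bullet, here is a minimal sketch of a decision tree surrogate model: a shallow, human-readable tree is trained to mimic a complex model's predictions, and its fidelity to that model is measured. The dataset, model choices, and depth limit below are illustrative assumptions, not details from the talk (see the linked repo for the authors' own examples).

```python
# Sketch of a decision tree surrogate model (assumed setup, for illustration).
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

X, y = make_regression(n_samples=500, n_features=5, random_state=0)

# "Exact" model: accurate, but hard to interpret directly.
complex_model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Surrogate: fit a shallow tree to the complex model's *predictions*,
# not the original labels, so the tree approximates the model itself.
preds = complex_model.predict(X)
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, preds)

# Fidelity: how closely does the surrogate track the complex model?
fidelity = r2_score(preds, surrogate.predict(X))
print(f"surrogate fidelity (R^2 vs. complex model): {fidelity:.2f}")
```

A key point the talk makes: the surrogate's explanation is only as trustworthy as its fidelity, which is why the fidelity score is reported alongside the tree itself.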

Plenty of guidance on when, and when not, to use these techniques will also be shared, and the talk will conclude by providing guidelines for testing generated explanations themselves for accuracy and stability.
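One simple stability check along the lines described above can be sketched as follows: generate the same explanation (permutation importance is used here as a stand-in) on two bootstrap resamples and compare the resulting feature rankings. All names, data, and thresholds below are illustrative assumptions, not the talk's prescribed procedure.

```python
# Sketch of an explanation *stability* test (assumed setup, for illustration).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=600, n_features=6, random_state=0)
rng = np.random.default_rng(0)

def explanation_ranking(seed):
    idx = rng.integers(0, len(X), size=len(X))  # bootstrap resample
    model = RandomForestClassifier(random_state=seed).fit(X[idx], y[idx])
    imp = permutation_importance(model, X[idx], y[idx], random_state=seed)
    return np.argsort(-imp.importances_mean)  # features, most important first

rank_a, rank_b = explanation_ranking(1), explanation_ranking(2)

# A stable explanation should rank the top features consistently
# across resamples; large disagreement is a warning sign.
top_k = 3
overlap = len(set(rank_a[:top_k]) & set(rank_b[:top_k]))
print(f"top-{top_k} feature overlap across resamples: {overlap}/{top_k}")
```

The same pattern applies to any of the techniques in the list above: regenerate the explanation under perturbation and measure how much it moves.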

Open source examples (with lots of comments and helpful hints) for building interpretable machine learning systems are available to accompany the talk at: https://github.com/jphall663/interpretable_machine_learning_with_python

Bio:
Patrick Hall is senior director for data science products at H2O.ai, where he focuses mainly on model interpretability. Patrick is also currently an adjunct professor in the Department of Decision Sciences at George Washington University, where he teaches graduate classes in data mining and machine learning. Prior to joining H2O.ai, Patrick held global customer-facing and research and development roles at SAS Institute.

Navdeep Gill
Navdeep Gill is a Software Engineer & Data Scientist at H2O.ai, where he focuses on model interpretability, GPU-accelerated machine learning, and automated machine learning. He graduated from California State University, East Bay with an M.S. in Computational Statistics, a B.S. in Statistics, and a B.A. in Psychology (with a minor in Mathematics). During his education, he developed interests in machine learning, time series analysis, statistical computing, data mining, and data visualization.

Before joining H2O.ai, he worked at Cisco Systems, focusing on data science and software development. Before stepping into industry, he worked as a researcher and analyst in neuroscience labs at institutions such as California State University, East Bay; the University of California, San Francisco; and the Smith-Kettlewell Eye Research Institute. His work across these labs spanned behavioral, electrophysiological, and functional magnetic resonance imaging research. Connect with Navdeep on Twitter @Navdeep_Gill_.

Event organizers
  • NYC Big Data Science

    Welcome to the group. We’re excited to bring you the latest happenings in AI, Machine Learning, Deep Learning, Data Science and Big Data. Who are we? We’re H2O.ai, creators of the world’s leading open source deep learning and machine learning platform, used by more than 90,000 data scientists and 9,000 organizations around the world. Our goal is to congregate with data enthusiasts from all over NYC and discuss trending topics in the world of AI.

