Same place as last month! The JLR offices!
If you need parking, it's all street parking.
6:00 p.m.: Food, beverage, and networking
6:40 p.m.: Welcome message by Karl Fezer
6:45 p.m.: Speaker 1: Pamela Harrison, "Automated Driving Landscape"
7:30 p.m.: Speaker 2: Julio Barros, "Explainability in ML"
8:15-8:30 p.m.: Project ideas: pitch yours to the meetup group
Speaker 1 Details:
Pamela will give an overview of the current concerns, activities, and expectations in the automation space – focusing on automated driving – and where machine learning fits into the picture. She will talk about the expected roadmap for autonomy levels 4 and 5 (we are currently at automated driving level 3). She will explain the difficulties and limitations introduced by Functional Safety (FuSa) requirements, from the hardware level through the software level. She will also cover various technologies in use today and some expected to be introduced soon. This presentation touches on machine learning but is not about machine learning; rather, it is intended to provide food for thought that will be useful to the local machine learning community.
Pamela Harrison is a Technical Consulting Engineer at Intel. She supports performance optimization libraries, particularly the Intel® Autonomous Driving Library (ADL). She ensures that developers understand customers’ needs for Intel® ADL and that customers are getting optimal use of this performance optimization library. In addition, she is responsible for understanding the requirements of Functional Safety (FuSa) – ISO 26262 – ensuring that the Intel® ADL team adheres to and is aware of FuSa process requirements.
Prior to this role, Pamela was a software engineer at several companies in the tech industry, large and small, and taught computer science for several years. She is passionate about helping people – she has coached robotics and actively mentors students and colleagues. Pamela earned her BA in Language (Russian and Portuguese) at the University of California, Riverside, and her MS in Computer Science at California State University, Northridge.
Speaker 2 Details:
Title: Introduction to Explainability
Recently, there has been a lot of interest in the explainability of machine learning models. In many situations we have to choose between a higher-performing but hard-to-interpret model and an easier-to-explain model with lower performance.
There are some standard tricks for coaxing interpretability out of black-box models, and recent advances such as LIME and SHAP values help us navigate this trade-off between performance and interpretability.
In this overview we'll go over the concept of explainability, discuss what it means, and introduce techniques to better understand what our models are doing.
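As a taste of the kind of "standard trick" the talk mentions for probing black-box models, here is a minimal sketch of permutation importance in plain Python: shuffle one feature at a time and see how much accuracy drops. The `predict` function and the toy dataset are invented for illustration, not from the talk itself.

```python
import random

# Hypothetical black-box model: we can only call predict(), not inspect it.
# In this toy, it secretly uses only feature 0; feature 1 is noise.
def predict(row):
    return 1 if row[0] > 0.5 else 0

# Tiny synthetic dataset whose labels depend only on feature 0.
random.seed(0)
data = [[random.random(), random.random()] for _ in range(200)]
labels = [1 if row[0] > 0.5 else 0 for row in data]

def accuracy(rows):
    return sum(predict(r) == y for r, y in zip(rows, labels)) / len(labels)

baseline = accuracy(data)

# Permutation importance: shuffle one column at a time and measure
# the accuracy drop. Important features cause large drops.
importances = []
for col in range(2):
    shuffled = [row[col] for row in data]
    random.shuffle(shuffled)
    perturbed = [row[:col] + [v] + row[col + 1:]
                 for row, v in zip(data, shuffled)]
    importances.append(baseline - accuracy(perturbed))
    print(f"feature {col}: importance = {importances[col]:.3f}")
```

On this toy data, shuffling feature 0 wrecks accuracy while shuffling feature 1 changes nothing, which is exactly the signal a model-agnostic importance method is after.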
Julio Barros is a machine learning consultant in Portland, Oregon.
He has been developing software for over 20 years and loves all things related to data, AI/ML, technology, teaching and mentoring.
Julio holds Bachelor's and Master's degrees in Computer Science from GMU and UVA, respectively, is active in the community, and runs the PDX Clojure, Deep Learning, and Probabilistic Programming meetups.