OPEN TO THE PUBLIC - NO CONFERENCE TICKET IS NEEDED
Room is Sutton South (Set 116 TH/CR)
THIS SESSION WILL BE RECORDED AND POSTED TO THE FOLLOWING:
Talk 1: Real-Time, Continuous ML/AI Model Training, Optimizing, and Predicting with Scikit-Learn, TensorFlow, Spark ML, GPU, TPU, Kafka, and Kubernetes
Chris Fregly, Founder @ PipelineAI, will walk you through a real-world, complete, end-to-end pipeline-optimization example. We highlight hyper-parameters, and model pipeline phases, that have never before been exposed for tuning.
Through a series of live demos, Chris will create and deploy a model ensemble using the PipelineAI Platform with GPUs, Google's TPUs, TensorFlow, and Scikit-Learn.
We will do a deep dive on Google TPUs!!
While most hyper-parameter optimizers stop at the training phase (i.e. learning rate, tree depth, EC2 instance type, etc.), we extend model validation and tuning into a new post-training optimization phase that includes 8-bit reduced-precision weight quantization and neural-network layer fusing, among many other framework- and hardware-specific optimizations.
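To make the 8-bit quantization idea concrete, here is a minimal NumPy sketch of symmetric per-tensor int8 weight quantization. This is only an illustration of the general technique, not PipelineAI's actual implementation; the function names and the single-scale scheme are assumptions for the example.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric 8-bit quantization: map float32 weights to int8
    using a single per-tensor scale factor (illustrative only)."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return q.astype(np.float32) * scale

# Toy weight matrix: int8 storage is 4x smaller than float32
w = np.array([[0.5, -1.2], [0.03, 0.9]], dtype=np.float32)
q, scale = quantize_int8(w)
w_approx = dequantize(q, scale)
# Reconstruction error is bounded by half the quantization step
assert np.max(np.abs(w - w_approx)) <= scale / 2 + 1e-6
```

In practice, production toolchains also use per-channel scales and calibration data, and pair quantization with the layer-fusing mentioned above.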
Next, we introduce hyper-parameters at the prediction phase, including request-batch sizing and chipset (CPU vs. GPU vs. TPU). We’ll continuously learn from all phases of our pipeline, including the prediction phase, and we’ll update our model in real-time using data from a Kafka stream.
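As a rough sketch of why request-batch size is a tunable prediction-phase hyper-parameter, the toy batcher below accumulates incoming requests and flushes them to the model in fixed-size batches. The class and method names are hypothetical, not any real PipelineAI API; larger batches improve hardware utilization at the cost of latency.

```python
from collections import deque

class RequestBatcher:
    """Toy illustration: buffer prediction requests and flush them
    in fixed-size batches. batch_size is the tunable
    prediction-phase hyper-parameter."""

    def __init__(self, batch_size, predict_fn):
        self.batch_size = batch_size
        self.predict_fn = predict_fn  # runs inference on a whole batch
        self.pending = deque()

    def submit(self, request):
        """Queue one request; run the model once a full batch is ready."""
        self.pending.append(request)
        if len(self.pending) >= self.batch_size:
            return self.flush()
        return None  # still waiting for a full batch

    def flush(self):
        """Drain the queue and run inference on everything pending."""
        batch = [self.pending.popleft() for _ in range(len(self.pending))]
        return self.predict_fn(batch)

# Stand-in "model" that doubles each input
batcher = RequestBatcher(batch_size=4, predict_fn=lambda xs: [x * 2 for x in xs])
results = [batcher.submit(i) for i in range(4)]
# The first three submits buffer; the fourth triggers a batched prediction
assert results[:3] == [None, None, None]
assert results[3] == [0, 2, 4, 6]
```

A real serving system would also flush on a timeout so low-traffic requests are not stuck waiting for a full batch.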
Lastly, we determine a PipelineAI Efficiency Score of our overall Pipeline including Cost, Accuracy, and Time. We show techniques to maximize this PipelineAI Efficiency Score using our massive PipelineDB along with the Pipeline-wide hyper-parameter tuning techniques mentioned in this talk.
Chris Fregly is Founder and Applied AI Engineer at PipelineAI, a Real-Time Machine Learning and Artificial Intelligence Startup based in San Francisco.
He is also an Apache Spark Contributor, a Netflix Open Source Committer, founder of the Global Advanced Spark and TensorFlow Meetup, and author of the O’Reilly training and video series "High Performance TensorFlow in Production with Kubernetes and GPUs."
Previously, Chris was a Distributed Systems Engineer at Netflix, a Data Solutions Engineer at Databricks, and a Founding Member and Principal Engineer at the IBM Spark Technology Center in San Francisco.
Talk 2: Training and Serving ML Models Using TensorFlow (by Ruhua Jiang, Twitter Cortex)
Ruhua will present Twitter’s newest machine learning platform, a framework built on top of TensorFlow that integrates seamlessly with Twitter’s open source technology stack (Mesos, Aurora, Finagle, Thrift, HDFS, DataRecords). He will discuss its architecture, its advantages, how it is used, and how it can make ML more accessible to Twitter and the broader community.
Ruhua Jiang is a software engineer on the Core Environment team at Twitter Cortex (https://cortex.twitter.com/). Before Twitter, he was a software engineer in Akamai’s platform infrastructure group.
Talk 3: Distributed Deep Learning on Apache Spark with BigDL: dissecting customer use-cases by Sergey Ermolin, Solution Architect @ Intel Deep Learning
BigDL is one of the few deep learning libraries for Apache Spark. Written in Scala, it integrates natively with, and takes advantage of, the underlying Spark distributed architecture.
In this talk, we will *very briefly* review the features of the latest BigDL release 0.5 and then do a code walkthrough of two customer use-cases:
* Code deep-dive: customer/merchant propensity recommendation engine for FinTech industry (Scala)
* Code deep-dive: image transfer learning (Python)
Sergey Ermolin is a Software Solutions Architect for deep learning, Spark analytics, and big data technologies at Intel. A Silicon Valley veteran with a passion for machine learning and artificial intelligence, Sergey has been interested in neural networks since 1996, when he used them to predict the aging behavior of quartz crystals and cesium atomic clocks made by Hewlett-Packard. Sergey holds an MSEE and a certificate in Mining Massive Datasets from Stanford, and BS degrees in both physics and mechanical engineering from California State University, Sacramento.