[FREE] Optimizing Distributed TensorFlow

Aug 28, 2017 · San Francisco, United States of America

Free event, RSVP here: https://fellowship.ai/techtalk/

Illia Polosukhin, one of TensorFlow's top contributors, will host a free preview talk ahead of his upcoming workshop at the Scaling Deep Learning Conference (the conference itself is a paid event).

Attendees of this Meetup will receive a special discount code for one regular-priced ticket to the Scaling Deep Learning Conference (Sept 16).

Talk

TensorFlow supports distributed training, but making the most of your hardware still takes significant work.

In this talk, you will learn:

• How to set up distributed TensorFlow across multiple CPUs and GPUs.

• How to analyze the TensorFlow timeline to identify bottlenecks.

• How to tune components of the training stack for optimal training speed.
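As a taste of the first two topics, here is a minimal sketch of the kind of cluster configuration and timeline tracing covered in the talk, using the TensorFlow 1.x distributed API current at the time. The hostnames are placeholders, and the snippet assumes an existing session `sess` and training op `train_op`; it is illustrative, not runnable standalone.

```python
import tensorflow as tf
from tensorflow.python.client import timeline

# Describe the cluster: parameter servers hold variables,
# workers run the training steps. Hostnames are placeholders.
cluster = tf.train.ClusterSpec({
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222",
               "worker1.example.com:2222"],
})

# Each process starts a server for its own job/task.
server = tf.train.Server(cluster, job_name="worker", task_index=0)

# To profile a step, request a full trace and collect run metadata
# (assumes `sess` and `train_op` are defined elsewhere).
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()
sess.run(train_op, options=run_options, run_metadata=run_metadata)

# Dump a Chrome-trace-format timeline to inspect in chrome://tracing
# and spot bottlenecks across devices.
tl = timeline.Timeline(run_metadata.step_stats)
with open("timeline.json", "w") as f:
    f.write(tl.generate_chrome_trace_format())
```

Opening `timeline.json` in Chrome's `chrome://tracing` view shows per-device op execution, which is the starting point for the bottleneck analysis the talk covers.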
