Hadoop Meetup - Tuesday, September 7, 2010 6:30pm-8:30pm

Sep 7, 2010 · Washington, United States of America

Hello Hadoop'ers! The next Hadoop User Group-DC meeting is scheduled for Tuesday, September 7th at 6:30pm. Save the date! I'd like to thank our sponsors:

This meetup is being held in coordination with several other meetup groups from the area. I'd personally like to thank them for working with us and helping to organize this event:
I'm excited to say we have two very special guests presenting at this meetup: Tom White and Aaron Cordova. Tom has been instrumental in shaping the direction and development of Hadoop and many projects in Hadoop's ecosystem. He is also the author of the book "Hadoop: The Definitive Guide" and part of the team at Cloudera. Aaron Cordova is a UMD graduate working at Booz Allen Hamilton. Before joining Booz Allen, Aaron played a key role in defining the critical large-scale data analytics infrastructure and applications at the NSA. He currently focuses on helping government organizations manage and analyze large amounts of data using technologies such as Hadoop, Hive, and HBase, enabling them to make better decisions and answer key questions important to business and operations.
Location: Kaiser Family Foundation - Public Affairs Center, 1330 G St. NW, Washington, DC 20005, one block from the Metro Center Metro station.
Agenda:

  • 6:30 - 7:00 Food and Refreshments, Socialize

  • 7:00 - 7:30 Tom White - Hadoop's powerful parallel processing paradigm provides a great generalized framework for storing and analyzing data. For all its raw power, however, practical use of Hadoop requires more: an integrated stack of components that makes it easier to develop and run real-world applications in a production environment. Tom will discuss the evolving Hadoop platform, its components, and how each fills a critical role in making Hadoop more useful in the enterprise. (A rough illustration of the basic MapReduce model appears after the agenda.)

  • 7:30 - 7:40 Short Break

  • 7:40 - 8:10 Aaron Cordova - One of the barriers to scaling Hadoop to 10,000 machines is the single HDFS NameNode. Recent benchmarks [1] show that HDFS needs to handle an order of magnitude more writes per second to reach 10,000 nodes. The most promising way to do this is to create a distributed NameNode. Aaron will discuss the issues surrounding distributing NameNode functionality across multiple machines, including how to automatically and organically partition the namespace, how to keep operations serialized and durable, and how recovery from failure changes. (A toy partitioning sketch follows the reference below.)

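For readers new to the parallel processing paradigm Tom's talk builds on, here is a minimal word-count sketch written against Hadoop's standard MapReduce API. It is an illustration only, not material from the talk; the class names and input/output paths are placeholders.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Map phase: emit (word, 1) for every token in the input split.
      public static class TokenizerMapper
          extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);
          }
        }
      }

      // Reduce phase: sum the counts emitted for each word.
      public static class IntSumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // e.g. /user/demo/input
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // e.g. /user/demo/output
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }
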
1. Shvachko, Konstantin. "HDFS Scalability: The Limits to Growth." USENIX ;login: magazine, April 2010.
http://www.usenix.org...
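To make the namespace-partitioning idea in Aaron's abstract concrete, here is a hypothetical toy sketch (not Aaron's design, and not anything in HDFS): each path is routed to one of several NameNode shards by hashing its parent directory, so sibling entries and directory listings stay on a single shard. A real design would also have to handle cross-shard renames, rebalancing, and the serialization, durability, and recovery concerns the talk covers. All names and addresses below are made up for illustration.

    import java.util.Arrays;
    import java.util.List;

    public class NamespacePartitioner {

      // Addresses of the NameNode shards (illustrative values).
      private final List<String> nameNodes;

      public NamespacePartitioner(List<String> nameNodes) {
        this.nameNodes = nameNodes;
      }

      // Route a path to a shard by hashing its parent directory, so all entries
      // of one directory (and thus a directory listing) live on one shard.
      public String shardFor(String path) {
        String parent = parentOf(path);
        int bucket = (parent.hashCode() & Integer.MAX_VALUE) % nameNodes.size();
        return nameNodes.get(bucket);
      }

      private static String parentOf(String path) {
        int slash = path.lastIndexOf('/');
        return slash <= 0 ? "/" : path.substring(0, slash);
      }

      public static void main(String[] args) {
        NamespacePartitioner p = new NamespacePartitioner(
            Arrays.asList("nn0:8020", "nn1:8020", "nn2:8020"));
        // Files that share a parent directory resolve to the same shard.
        System.out.println(p.shardFor("/logs/2010/09/07/part-00000"));
        System.out.println(p.shardFor("/logs/2010/09/07/part-00001"));
      }
    }
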
Don't forget: registration is open for Cloudera's Hadoop World in NYC. More information here:
http://www.cloudera.c... We look forward to seeing you there!