Meet-ups to help educate businesses, professionals & individuals and demystify "Big Data" & "Cloud Computing" technologies.
6:00 pm – 6:15 pm: Registration & Mixer
6:15 pm – 6:45 pm: The Changing Big Data Landscape - Empowering the Business User with Analytics-Driven Insight - Alex Villamil

The exponential growth of structured and unstructured data has overwhelmed traditional BI solutions. Data analysts, managers and executives want to be able to easily correlate new unstructured data with legacy data sitting on tape or disk platters to gain complete insight into customer behavior, business and IT operations without having to worry about the economics. This session will discuss:
* The evolution of Big Data and the challenges it presents to business users
* The role Hadoop and NoSQL technologies play today
* The challenges that result from the growth of this data, and possible solutions
7:00 pm – 7:45 pm: Hadoop Disk Failures - How to Deal with Them! - Bharath Mundlapudi

One challenge in running Hadoop clusters on commodity hardware is coping with disk failures. In a cluster where each node has a dozen disks, the failure of a single disk should not result in the removal of the entire node from service. A node out of service may represent 36 TB of unavailable storage, and when a node quits service, HDFS must recreate the lost block replicas and user tasks may have to be restarted. In this talk, we will walk through the lessons learned in making Hadoop clusters more tolerant of disk failures. We'll describe the methodology used to simulate disk failures, share some good practices for writing code that handles disk failures, and go over recent changes to Hadoop that address disk failure issues.
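A rough back-of-the-envelope sketch of why whole-node eviction is so costly (the 3 TB per-disk size and HDFS's default replication factor of 3 are illustrative assumptions consistent with the 36 TB figure above, not details from the talk):

```python
# Back-of-the-envelope cost of evicting a whole node vs. retiring one failed disk.
# Assumptions (illustrative only): 12 disks/node, 3 TB/disk, replication factor 3.
DISKS_PER_NODE = 12
TB_PER_DISK = 3

node_capacity_tb = DISKS_PER_NODE * TB_PER_DISK  # storage taken offline if the node is evicted
single_disk_loss_tb = TB_PER_DISK                # storage lost if only the failed disk is retired

# Blocks on the lost storage must be re-replicated from surviving replicas elsewhere
# in the cluster, so the re-replication traffic scales with the storage taken offline:
print(f"Evict whole node: re-replicate ~{node_capacity_tb} TB")   # ~36 TB
print(f"Retire one disk:  re-replicate ~{single_disk_loss_tb} TB") # ~3 TB
```

Under these assumptions, tolerating a single-disk failure in place cuts the re-replication work by roughly a factor of twelve compared with removing the whole node.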
7:45 pm – 9:00 pm: Q&A Session, Networking, Mixer
Alex Villamil
Alex Villamil is a veteran in big data, data warehousing and business intelligence who has worked with many different platforms and technologies across many use cases, including transactional, web, social media and banking. Most recently, he has been working with large financial institutions where volume and scale have been critical.

Bharath Mundlapudi
Bharath Mundlapudi has worked at large-scale internet companies such as Netflix and Yahoo!, where he was one of the core engineers in the Hadoop engineering group. He has made many contributions to the Hadoop community and is known for his work on the 'Hadoop Disk Fail Inplace' effort. He has experience in cost optimization, performance tuning, best practices and pipeline architecture for Hadoop clusters and related technologies. Prior to that, he was a JVM engineer at Sun Microsystems, where he worked on JVM and J2EE (XML, Web Services and REST) performance improvements. His interests are in distributed systems and data science.
Light snacks & drinks will be served.