How Spark can improve your Hadoop Cluster

Jun 22, 2016 · Hamburg, Germany

Apache Spark is a fast-growing framework that can improve your Big Data infrastructure and the processing of your data. It is written in Scala and reduces the overhead of writing MapReduce jobs in plain Java. After this talk, you will be able to start experimenting with Apache Spark and will know the different parts and benefits of the framework.

- Brief Introduction to Apache Hadoop
- Apache Spark Benefits
  - Caching
  - Lazy Evaluation
  - Spark Streaming
  - Machine Learning
  - GraphX
  - Spark SQL
- Code Examples for the explained Spark Features in Scala
- Example Stack of a Spark Infrastructure at Wer liefert was?
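The Caching and Lazy Evaluation points from the outline can be sketched in a few lines of Scala. This is a minimal illustration, not material from the talk itself: it assumes a local Spark installation (`spark-core` on the classpath), and the object name, sample data, and filter predicate are made up for the example.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object LazyCachingDemo {
  def main(args: Array[String]): Unit = {
    // Local mode for experimentation; on a real Hadoop cluster you
    // would point the master at YARN instead of local[*].
    val conf = new SparkConf().setAppName("LazyCachingDemo").setMaster("local[*]")
    val sc = new SparkContext(conf)

    val lines = sc.parallelize(Seq("spark is fast", "hadoop is robust", "spark on hadoop"))

    // Transformations are lazy: this filter builds a plan, nothing runs yet.
    val sparkLines = lines.filter(_.contains("spark"))

    // cache() marks the RDD to be kept in memory after its first computation,
    // so later actions reuse it instead of recomputing the lineage.
    sparkLines.cache()

    // Actions trigger evaluation.
    println(sparkLines.count())          // first action: computes and caches
    println(sparkLines.collect().toList) // second action: served from the cache

    sc.stop()
  }
}
```

The key distinction the talk's outline points at: transformations like `filter` only describe a computation, while actions like `count` and `collect` actually execute it, and `cache()` controls whether intermediate results are kept in memory between actions.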

Event organizers
  • Scala Hamburg

Scala Hamburg is all about the Scala language and everything around it, such as Akka, Play, sbt, etc. The regular gatherings can also include topics like functional programming, massively parallel computing, or the JVM language landscape in general. You can submit talk ideas at https://github.com/scala-hamburg/scala-hamburg/issues. Bottom line: this is where Scala enthusiasts from Northern Germany meet to discuss Scala, share experiences, and network.

