Apache Spark is a general-purpose, high-performance cluster-computing framework, widely used for big data processing. Spark is often described as the successor to MapReduce: it removes much of the pain of writing long, complicated applications on Hadoop. By unifying different workloads under a simple, concise API, Spark lets us build sophisticated pipelines in minutes. In the new IoT world, enterprises want streaming solutions that can run SQL queries and machine learning algorithms over hundreds of terabytes while also building and analyzing complex social graphs. Spark supports all of this without compromising performance, running up to 100x faster than classic solutions on Hadoop. A few points stand out about Apache Spark: ease of use; performance, since few frameworks can match it; and extensibility, as every component is open enough to be adapted to almost any workload. Spark represents the future of high-performance computing, and it is something we want to get our hands on.