Once the DAG is built, the Spark scheduler creates a physical execution plan. As mentioned above, the DAG scheduler splits the graph into multiple stages, and the stages are created based on the transformations: narrow transformations are grouped (pipelined) together into a single stage. So for our example, Spark will create a two-stage execution as follows:
The DAG scheduler then submits the stages to the task scheduler. The number of tasks submitted depends on the number of partitions in the textFile. For example, if we have 4 partitions, then 4 sets of tasks will be created and submitted in parallel, provided there are enough slaves/cores. The diagram below illustrates this in a bit more detail:
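In addition to the diagram, here is a minimal sketch of such a two-stage job. The article's own example code is not reproduced here, so this uses the classic word-count pattern; the input path, app name, and partition count are placeholders:

```scala
import org.apache.spark.sql.SparkSession

object TwoStageExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("two-stage-example")
      .master("local[4]")                      // 4 local cores, so 4 tasks can run in parallel
      .getOrCreate()
    val sc = spark.sparkContext

    // Read the text file into 4 partitions (the path is a placeholder).
    val lines = sc.textFile("hdfs:///data/input.txt", 4)

    // flatMap and map are narrow transformations, so they are pipelined
    // together into the first stage: one task per partition (4 tasks here).
    val counts = lines
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      // reduceByKey requires a shuffle, which marks the boundary of the second stage.
      .reduceByKey(_ + _)

    // The action triggers the job; the DAG scheduler builds the two stages
    // and the task scheduler launches one task per partition in each stage.
    counts.collect().foreach(println)

    spark.stop()
  }
}
```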
For more detailed information, I suggest you go through the following YouTube videos, where the Spark creators give in-depth details about the DAG, the execution plan, and the lifetime of a job.
The diagram below shows how Apache Spark works internally:
Here is some jargon from Apache Spark that I will be using.
Job:- A piece of code that reads some input from HDFS or the local file system, performs some computation on the data, and writes some output data.
Stages:- Jobs are divided into stages. Stages are classified as map or reduce stages (it's easier to understand if you have worked on Hadoop and want to correlate). Stages are divided based on computational boundaries; not all computations (operators) can happen in a single stage, so a job is split across many stages.
Tasks:- Each stage has some tasks, one task per partition. One task is executed on one partition of data on one executor (machine).
DAG:- DAG stands for Directed Acyclic Graph; in the present context it's a DAG of operators.
Executor:- The process responsible for executing a task.
Driver:- The program/process responsible for running the Job over the Spark engine.
Master:- The machine on which the Driver program runs
Slave:- The machine on which the Executor program runs
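To see where these terms show up in practice, here is a minimal, hypothetical sketch; the master URL, memory/core settings, and data are illustrative placeholders:

```scala
import org.apache.spark.sql.SparkSession

object JargonExample {
  def main(args: Array[String]): Unit = {
    // This main() is the Driver: it runs on the Master machine (or wherever it
    // is launched) and coordinates the Job over the Spark engine.
    val spark = SparkSession.builder()
      .appName("jargon-example")
      .master("spark://master-host:7077")       // cluster manager the Driver connects to (placeholder URL)
      .config("spark.executor.memory", "2g")    // each Executor process gets 2 GB
      .config("spark.executor.cores", "2")      // and 2 cores on which it runs Tasks
      .getOrCreate()

    // 8 partitions -> 8 tasks per stage, spread across the Executors on the slaves.
    val rdd = spark.sparkContext.parallelize(1 to 1000000, numSlices = 8)

    // One action = one Job. This Job has a single stage (no shuffle), and the
    // stage runs one Task per partition; each Task executes on one Executor.
    val total = rdd.map(_.toLong).reduce(_ + _)
    println(s"total = $total")

    spark.stop()
  }
}
```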
All jobs in Spark comprise a series of operators and run on a set of data. All the operators in a job are used to construct a DAG (Directed Acyclic Graph). The DAG is optimized by rearranging and combining operators where possible. For instance, let's assume you have to submit a Spark job that contains a map operation followed by a filter operation. The Spark DAG optimizer would rearrange the order of these operators, since filtering reduces the number of records that have to undergo the map operation.
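Below is a small sketch of that map-then-filter case (the input path is a placeholder). At the RDD level the two narrow operators are pipelined into a single stage, and toDebugString lets you inspect the resulting operator DAG (lineage):

```scala
import org.apache.spark.sql.SparkSession

object MapFilterExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("map-filter-example")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    // Both operators are narrow, so Spark pipelines them into a single stage:
    // each record is parsed and filtered in one pass over its partition.
    val evens = sc.textFile("hdfs:///data/numbers.txt")   // placeholder path
      .map(_.trim.toInt)        // map: parse each line into an integer
      .filter(_ % 2 == 0)       // filter: keep only even values

    // toDebugString prints the RDD lineage, i.e. the operator DAG that the
    // scheduler will turn into stages and tasks.
    println(evens.toDebugString)
    println(evens.count())

    spark.stop()
  }
}
```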
Spark has a small code base, and the system is divided into various layers. Each layer has its own responsibilities, and the layers are independent of each other.
Spark caches the data to be processed in memory, allowing it to be up to 100 times faster than Hadoop. Spark is highly configurable and is capable of utilizing components that already exist in the Hadoop ecosystem. This has allowed Spark to grow rapidly, and in a short time many organisations have adopted it in production.
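As a quick sketch of the caching idea (the input path and filter condition are placeholders), explicitly persisting an RDD in memory lets repeated actions reuse it:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

object CachingExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("caching-example")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    // Placeholder input: keep only the error lines from a log file.
    val errors = sc.textFile("hdfs:///data/app.log")
      .filter(_.contains("ERROR"))

    // Keep the filtered data in memory so repeated actions reuse it instead of
    // re-reading and re-filtering the file each time (cache() is shorthand for this).
    errors.persist(StorageLevel.MEMORY_ONLY)

    println(errors.count())            // first action materializes the cache
    errors.take(10).foreach(println)   // later actions read from the cached partitions

    spark.stop()
  }
}
```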