What is Yarn in Hadoop?

The Hadoop ecosystem is continuously evolving, and its processing frameworks are evolving with it. Hadoop 2.0 moves past the limitations of Hadoop 1.0's batch-oriented MapReduce framework toward specialized and interactive processing models.

Apache Hadoop was introduced in 2005 and took over the industry with its ability to do distributed processing of large datasets using the MapReduce engine. Over time, Hadoop has gone through modifications that have made it a better, more advanced framework, one that supports various other distributed processing models alongside the traditional MapReduce model.

Data giants such as Facebook and Yahoo adopted Apache Hadoop to gain new heights, combining HDFS storage with MapReduce processing and its resource management environment. But over time, heavy Hadoop users found issues with the Hadoop 1.0 architecture: the batch-oriented MapReduce model was unable to keep up with all the information flooding in during data collection.

Introduction of Yarn (Hadoop 2.0)

Yarn is an acronym for Yet Another Resource Negotiator, the resource management layer in Hadoop. It was introduced in 2013 with the Hadoop 2.0 architecture to overcome the limitations of MapReduce. Yarn supports the various other distributed computing paradigms that are deployed on Hadoop.
Yahoo rewrote parts of the Hadoop code to separate resource management from job scheduling, and the result was Yarn. This improved Hadoop: the standalone resource-management component can now be used with other software such as Apache Spark, or we can build our own applications on top of Yarn. Applications created using Yarn can run on different distributed architectures.

Limitations of MapReduce that paved the way for Yarn (Hadoop 2.0)

Hadoop MapReduce was used for Big Data processing, but it had some architectural drawbacks that came to light when dealing with huge datasets.

Limitations of MapReduce (Hadoop 1.0)


The JobTracker did the job scheduling and kept track of jobs. If the JobTracker failed for any reason, all running jobs had to be restarted; the architecture had a single point of failure.


The JobTracker performed many tasks: job scheduling, task scheduling, resource management, and monitoring. With all these responsibilities, it could not focus fully on job scheduling, so the different nodes were not utilized to the fullest, and the system's scalability was limited.


The MapReduce engine dedicates the nodes of the cluster to a single programming model. As the cluster grows in size, it cannot be employed to run different processing models.

Problems with real-time processing

MapReduce is batch driven: processing and analysis are done in batches, and results arrive only after several hours. If we need real-time analysis, as in fraud detection, MapReduce is of no use.

There are several other issues with MapReduce (Hadoop 1.0) that need to be taken into consideration, such as running ad-hoc queries, cascading failures, inefficient utilization of resources, problems with message passing, and problems running non-MapReduce applications.

Yarn Architecture

The Yarn framework consists of the "Resource Manager" (a master daemon), the "Node Manager" (a slave daemon), and an "Application Master" per application.

Resource Manager

The Resource Manager is the rack-aware master daemon in Yarn. It is responsible for pooling the system's resources and assigning them to applications; competing applications get system resources as the Resource Manager arbitrates between them.

The Resource Manager has two components:

Scheduler

The job of the Scheduler is to assign resources to the running applications. It is a pure scheduler: it does no tracking or monitoring of the applications, so it cannot react to failures caused by hardware or application errors, and it does not restart failed tasks.
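The scheduler implementation is pluggable. As a sketch, in Apache Hadoop it is selected in yarn-site.xml (the Capacity Scheduler is the common default; the Fair Scheduler is an alternative):

```xml
<!-- yarn-site.xml: choosing the pure scheduler used by the Resource Manager -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <!-- Capacity Scheduler is the default in Apache Hadoop; the Fair Scheduler
       (org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler)
       can be swapped in here instead -->
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
```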

Application Manager

The Application Manager accepts job submissions and is responsible for monitoring the Application Master and restarting it in case of failure on any node.
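How many times a failed Application Master will be relaunched is controlled by a yarn-site.xml setting (a sketch below; 2 is the usual default, and the value shown is illustrative):

```xml
<!-- yarn-site.xml: cap on Application Master relaunch attempts -->
<property>
  <name>yarn.resourcemanager.am.max-attempts</name>
  <value>2</value>
</property>
```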

Node Manager

The Node Manager is the slave daemon in Yarn. It acts as a per-node agent that oversees the lifecycles of the containers on its node. It also monitors container resource usage and communicates with the Resource Manager periodically. The Node Manager's role is similar to that of the Task Tracker, but where Task Trackers had a fixed number of map and reduce slots for scheduling, Node Managers manage dynamically created Resource Containers of varying size. Resource Containers can be used for map tasks, reduce tasks, or tasks from other frameworks.
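As a sketch (the values here are illustrative, not recommendations), the resources a Node Manager advertises and the range of container sizes the scheduler may grant are set in yarn-site.xml:

```xml
<!-- yarn-site.xml: memory the Node Manager offers, and the
     minimum/maximum container sizes the scheduler will allocate -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>8192</value>
</property>
```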

Application Master

Yarn creates a dedicated Application Master instance for every application running on Hadoop. The instance lives in its own container on one of the nodes in the cluster. Each Application Master periodically sends a heartbeat message to the Resource Manager and, if needed, requests additional resources. The Resource Manager assigns additional resources through Container Resource leases, which also serve as reservations for containers on the Node Managers.

The Application Master oversees the full lifespan of an application, from requesting additional containers from the Resource Manager to issuing container release requests to the Node Manager.

Resource Manager Restart

The Resource Manager works as the central authority for managing resources and scheduling the applications running on Yarn. There are two types of Resource Manager restart:

Non-work-preserving Resource Manager Restart

This enhancement makes the RM persist application/attempt state in a pluggable state-store. On restart, the Resource Manager reloads that information from the state-store and re-kicks the previously running applications. Users are not required to re-submit their applications.

During RM downtime, Node Managers and clients keep polling the RM until it comes back up. When the RM comes up, it asks the Node Managers and Application Masters that contact it to re-sync; in a non-work-preserving restart, the Node Managers kill their managed containers and re-register, and the applications are then restarted from the persisted state.
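RM recovery is enabled through yarn-site.xml by switching on recovery and naming a state-store; the sketch below uses the ZooKeeper-based store, with a hypothetical ZooKeeper quorum address:

```xml
<!-- yarn-site.xml: persist application/attempt state for RM restart -->
<property>
  <name>yarn.resourcemanager.recovery.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.store.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<!-- illustrative ZooKeeper ensemble; replace with your own hosts -->
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>zk1:2181,zk2:2181,zk3:2181</value>
</property>
```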

Work-preserving Resource Manager Restart

This focuses on reconstructing the running state of the RM on restart by combining the container requests from Application Masters with the container statuses reported by Node Managers. Unlike the non-work-preserving restart, previously running applications are not stopped when the RM restarts, so no in-flight processing is lost.

When a Node Manager re-syncs with the restarted RM, it reports the statuses of its running containers, which helps the RM recover its previous running state. The Node Manager does not kill the containers: it continues to manage them, and their statuses are sent across to the RM when it registers again.
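Work-preserving recovery is a separate switch on top of RM recovery; as a sketch, it is enabled in yarn-site.xml like this (it defaults to true in recent Hadoop releases):

```xml
<!-- yarn-site.xml: keep containers running across an RM restart -->
<property>
  <name>yarn.resourcemanager.work-preserving-recovery.enabled</name>
  <value>true</value>
</property>
```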

Final Words

Yarn has completely changed the game for implementing and running distributed applications on a cluster of commodity servers. It overcomes the limitations of MapReduce and is more flexible, scalable, and efficient. Companies are migrating from MRv1 to Yarn, and there is little reason not to.

The best Big Data online courses cover everything you should know about Yarn. Get familiar with the various concepts of Yarn in Hadoop and take a step toward a bright Big Data Hadoop career!
