In MRv1, the cluster was managed by a service called the JobTracker, and each datanode ran a process called the TaskTracker to launch tasks. In MRv2, the functions of the JobTracker have been split among three services: the ResourceManager, the ApplicationMaster, and the JobHistoryServer. The TaskTracker has been replaced with a process/service called the NodeManager.
- ResourceManager: A persistent YARN service that receives and runs applications (note: a MapReduce job is an application). It contains the scheduler.
- NodeManager: The replacement for the TaskTracker; it manages resources and task deployment on a datanode.
- ApplicationMaster: A per-application, framework-specific library tasked with negotiating resources from the ResourceManager and working with the NodeManagers to execute and monitor the tasks.
- JobHistoryServer: Allows users to get status on finished applications.
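A quick way to see this split on a running cluster is to check which Java daemons each node runs, for example with jps. The output below is only a sketch (process IDs are illustrative, and the JobHistoryServer only shows up if it has been started, e.g. via mr-jobhistory-daemon.sh start historyserver):

On the master node:
$ jps
2102 NameNode
2314 ResourceManager
2531 JobHistoryServer

On a datanode:
$ jps
1790 DataNode
1876 NodeManager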
Why break up the JobTracker?
- It avoids scaling problems faced by MRv1 (JobTracker becomes a resource bottleneck when the cluster scales out)
- It makes it possible to run frameworks other than MapReduce
- YARN supports ResourceManager HA to make a YARN cluster highly-available
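As a rough illustration of the last point, ResourceManager HA is driven by a handful of yarn-site.xml properties. The sketch below is only a minimal example assuming two ResourceManagers with failover coordinated through ZooKeeper; the hostnames and the ZooKeeper quorum are placeholders, and exact property support depends on your Hadoop version:

<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>yarn-cluster</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>master1.hostname.com</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>master2.hostname.com</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>zk1.hostname.com:2181,zk2.hostname.com:2181,zk3.hostname.com:2181</value>
</property>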
Configuration
To configure MapReduce to run on YARN, you need to change two files. Since MRv1 functionality has been split into two parts, the cluster resource-management configuration goes into yarn-site.xml and the MapReduce-specific configuration goes into mapred-site.xml. The old configuration parameters pertaining to the JobTracker or TaskTracker are gone. Getting your MapReduce job running on YARN is easy; you can start with just the minimum configuration parameters:
yarn-site.xml configuration:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>you.hostname.com</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
mapred-site.xml configuration:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
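With these two files in place, you can sanity-check the setup by submitting one of the example jobs that ship with Hadoop. A minimal smoke test might look like the following (the script locations and the examples jar path vary between Hadoop versions and distributions):

# start HDFS, YARN, and the history server
$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/start-yarn.sh
$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver

# run the bundled pi estimator as a quick test
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10

If everything is wired up correctly, the job appears in the ResourceManager web UI (port 8088 by default) and completes with a pi estimate.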
Important configuration parameters:
yarn-site.xml:
- yarn.scheduler.minimum-allocation-mb - Minimum limit of memory to allocate to each container request at the ResourceManager.
- yarn.scheduler.maximum-allocation-mb - Maximum limit of memory to allocate to each container request at the ResourceManager.
- yarn.scheduler.minimum-allocation-vcores - The minimum allocation for every container request at the RM, in terms of virtual CPU cores. Requests lower than this won't take effect; they will be allocated this minimum value.
- yarn.scheduler.maximum-allocation-vcores - The maximum allocation for every container request at the RM, in terms of virtual CPU cores. Requests higher than this won't take effect, and will get capped to this value.
- yarn.nodemanager.resource.memory-mb - Physical memory, in MB, to be made available to running containers.
- yarn.nodemanager.resource.cpu-vcores - Number of CPU cores that can be allocated for containers.
- yarn.nodemanager.aux-services - The shuffle service that needs to be set for MapReduce applications to run.
- yarn.nodemanager.vmem-pmem-ratio - The maximum ratio by which virtual memory usage of tasks may exceed physical memory; the virtual memory (physical + paged memory) upper limit for each Map or Reduce task is this ratio times the container's physical memory. For example, since our mapred-site.xml gives each map task 128M and the ratio here is 20.1, the virtual memory upper limit is 128 * 20.1 ≈ 2572M.
mapred-site.xml:
- yarn.app.mapreduce.am.resource.mb - The amount of memory to allocate to the MapReduce ApplicationMaster container for a particular job. For example, if you give 512M here, the ApplicationMaster will only be launched on a NodeManager that has at least that much memory available.
- yarn.app.mapreduce.am.command-opts - In YARN, the ApplicationMaster (AM) is responsible for securing the necessary resources, and this property defines the Java options (such as the heap size) used to run the AM itself. Don't confuse this with the NodeManagers, where the tasks themselves will be executed.
- mapreduce.framework.name - Execution framework.
- mapreduce.map.cpu.vcores - The number of virtual cores required for each map task.
- mapreduce.reduce.cpu.vcores - The number of virtual cores required for each reduce task.
- mapreduce.map.memory.mb - The amount of memory to request for each map task; in our example we give each map task 128M. This should be <= the maximum container memory allowed in yarn-site.xml (see the example configuration after this list).
- mapreduce.map.java.opts - Each container runs a JVM for its map task. The JVM heap size (-Xmx) should be set lower than the map memory defined above, so that it stays within the bounds of the container memory allocated by YARN.
- mapreduce.reduce.memory.mb - The amount of memory to request for each reduce task.
- mapreduce.reduce.java.opts - Likewise, the JVM heap size for each reduce task should be set lower than mapreduce.reduce.memory.mb, so that it stays within the container memory allocated by YARN.
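Putting the memory-related parameters together, a rough example configuration might look like the one below. The values are purely illustrative (they echo the 128M map task and the 20.1 vmem ratio used as examples above, and assume small worker nodes with about 2 GB for containers); tune them to the actual RAM and core counts of your nodes:

yarn-site.xml (excerpt):

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>2</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>128</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2048</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>20.1</value>
</property>

mapred-site.xml (excerpt):

<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>512</value>
</property>
<property>
  <name>yarn.app.mapreduce.am.command-opts</name>
  <value>-Xmx400m</value>
</property>
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>128</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx100m</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>128</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx100m</value>
</property>

Note that each JVM heap (-Xmx) is set below its container's memory so the task fits inside the allocation YARN grants it.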