YARN is a resource-scheduling platform: it supplies server compute resources to computation programs and acts like a distributed operating system, while computation programs such as MapReduce are the applications that run on top of that operating system.
Submission flow -> YARN
Submission flow -> MapReduce
Hadoop currently offers three job schedulers: the FIFO scheduler, the Capacity Scheduler, and the Fair Scheduler.
Check the default in yarn-default.xml:
<property>
    <description>The class to use as the resource scheduler.</description>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
FIFO scheduler:
Pros: simple and easy to understand.
Cons: no support for multiple concurrent queues; rarely used in production.
1. The CapacityScheduler is a multi-user scheduler developed by Yahoo.
It caps the resources taken by the jobs submitted by any single user.
2. Capacity Scheduler resource allocation algorithm:
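The allocation proceeds top-down in three steps: pick a queue, pick an application inside it, then serve that application's container requests. Below is a minimal sketch of that selection with hypothetical class names; it is an illustration of the idea, not the real CapacityScheduler source.

import java.util.*;

// Illustrative sketch only — hypothetical names, not the real
// org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity classes.
public class CapacityAllocationSketch {
    static class Q {
        String name; double used, cap;
        Q(String n, double u, double c) { name = n; used = u; cap = c; }
        double util() { return used / cap; }
    }
    static class App {
        String id; int prio; long submitTime;
        App(String i, int p, long t) { id = i; prio = p; submitTime = t; }
    }
    public static void main(String[] args) {
        // 1. Pick the queue with the lowest resource utilization.
        List<Q> queues = Arrays.asList(new Q("default", 30, 40), new Q("hive", 20, 60));
        Q queue = Collections.min(queues, Comparator.comparingDouble(Q::util)); // hive: 0.33 < 0.75
        // 2. Inside the queue: higher priority first, FIFO among equal priorities.
        List<App> apps = new ArrayList<>(Arrays.asList(new App("app-2", 0, 200), new App("app-1", 5, 300)));
        apps.sort(Comparator.comparingInt((App a) -> -a.prio).thenComparingLong(a -> a.submitTime));
        System.out.println(queue.name + " serves " + apps.get(0).id); // hive serves app-1
        // 3. That application's container requests are then satisfied by request
        //    priority and data locality: node-local > rack-local > any node.
    }
}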
1. The Fair Scheduler is a multi-user scheduler developed by Facebook.
1) Similarities with the Capacity Scheduler: multiple queues, resource borrowing between queues, multi-user support.
2) Differences from the Capacity Scheduler:
Core scheduling policy:
Capacity Scheduler: prefers the queue with the lowest resource utilization.
Fair Scheduler: prefers the job with the largest resource deficit ratio.
Per-queue allocation policy (configurable per queue):
Capacity Scheduler: FIFO, DRF.
Fair Scheduler: FIFO, FAIR (default), DRF.
- The Fair Scheduler's design goal is that, over time, every job receives a fair amount of resources. The gap between the resources a job should hold at a given moment and the resources it actually holds is called its "deficit".
- The scheduler allocates resources to the job with the largest deficit first.
2. Fair Scheduler with the allocation policy set to FIFO:
- If a queue's allocation policy is set to FIFO, the Fair Scheduler degenerates into the Capacity Scheduler described above.
3. Fair Scheduler resource allocation algorithm:
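As a worked example of the deficit rule, the sketch below (hypothetical names) computes each job's weighted fair share and picks the job that is furthest below it. For simplicity it ranks by absolute deficit, whereas the scheduler considers the deficit relative to the fair share.

import java.util.*;

// Illustrative sketch only: weighted fair shares and deficits for three jobs.
public class FairDeficitSketch {
    public static void main(String[] args) {
        double clusterMemMb = 12288;                       // 12 GB to divide
        String[] jobs   = { "jobA", "jobB", "jobC" };
        double[] weight = { 1.0, 1.0, 2.0 };
        double[] actual = { 2048, 4096, 4096 };            // MB currently held

        double totalWeight = 0;
        for (double w : weight) totalWeight += w;          // 4.0

        final double[] deficit = new double[jobs.length];
        for (int i = 0; i < jobs.length; i++) {
            double fairShare = clusterMemMb * weight[i] / totalWeight;
            deficit[i] = fairShare - actual[i];            // the "deficit"
        }
        // jobA: 3072-2048=1024, jobB: 3072-4096=-1024, jobC: 6144-4096=2048
        Integer[] order = { 0, 1, 2 };
        Arrays.sort(order, (a, b) -> Double.compare(deficit[b], deficit[a]));
        System.out.println("served first: " + jobs[order[0]]);   // jobC
    }
}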
hadoop-3.3.1]$ cd /var/opt/hadoopSoftware/hadoop-3.3.1
# Run a sample job
hadoop-3.3.1]$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar wordcount /tinput /toutput1
Inspecting jobs:
1. Via the monitoring web UI
# YARN ResourceManager UI
http://hadoop2:8088
2. Via yarn commands
# 1. List all applications
$ yarn application -list
# 2. Filter by application state: yarn application -list -appStates <state> (states: ALL, NEW, NEW_SAVING, SUBMITTED, ACCEPTED, RUNNING, FINISHED, FAILED, KILLED)
$ yarn application -list -appStates FINISHED
# 3. Kill an application
$ yarn application -kill <Application-Id>
# 4. Fetch an application's logs
$ yarn logs -applicationId <Application-Id>
# 5. Fetch a container's logs (the ContainerId can be found via step 6)
$ yarn logs -applicationId <ApplicationId> -containerId <ContainerId>
# 6. List an application's attempts
$ yarn applicationattempt -list <ApplicationId>
# 7. Print an ApplicationAttempt's status
$ yarn applicationattempt -status <ApplicationAttemptId>
# 8. List the containers the app currently holds
$ yarn container -list <ApplicationAttemptId>
# 9. Print a container's status
$ yarn container -status <ContainerId>
# 10. List node states
$ yarn node -list -all
# 11. Reload the queue configuration (refresh YARN settings on the fly)
$ yarn rmadmin -refreshQueues
# 12. Print queue information
$ yarn queue -status <QueueName>
e.g.: yarn queue -status default
Queue Information :
Queue Name : default
State : RUNNING
Capacity : 100.00%
Current Capacity : 0.00%
Maximum Capacity : 100.00%
Default Node Label expression : <DEFAULT_PARTITION>
Accessible Node Labels : *
Preemption : disabled
Intra-queue Preemption : disabled
(Tune these parameters to your servers' capacity and your applications' needs.)
# Site configuration: $HADOOP_HOME/etc/hadoop/yarn-site.xml
# Default values: $HADOOP_HOME/share/doc/hadoop/hadoop-yarn/hadoop-yarn-common/yarn-default.xml
1) ResourceManager
# The scheduler class; defaults to the Capacity Scheduler (large companies with ample, high-performance hardware may choose the Fair Scheduler)
<property>
<name>yarn.resourcemanager.scheduler.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
# Number of threads the ResourceManager uses to handle scheduler requests; default 50
<property>
<name>yarn.resourcemanager.scheduler.client.thread-count</name>
<value>50</value>
</property>
2) NodeManager
# Whether the NodeManager auto-detects the node's hardware and configures itself accordingly; default false
<property>
<name>yarn.nodemanager.resource.detect-hardware-capabilities</name>
<value>false</value>
</property>
# Whether logical processors (hyper-threads) are counted as CPU cores; default false
<property>
<description>Flag to determine if logical processors(such as hyperthreads) should be counted as cores. Only applicable on Linux when yarn.nodemanager.resource.cpu-vcores is set to -1 and yarn.nodemanager.resource.detect-hardware-capabilities is true.
</description>
<name>yarn.nodemanager.resource.count-logical-processors-as-cores</name>
<value>false</value>
</property>
# Multiplier for converting physical cores to vcores; default 1.0, i.e. one physical CPU counts as one vcore (e.g. 8 physical cores x 2.0 = 16 vcores). If part of the CPU is already reserved for other server processes, this can be raised to 2 or 3
<property>
<description>Multiplier to determine how to convert phyiscal cores to vcores. This value is used if yarn.nodemanager.resource.cpu-vcores is set to -1(which implies auto-calculate vcores) and yarn.nodemanager.resource.detect-hardware-capabilities is set to true. The number of vcores will be calculated as number of CPUs * multiplier.
</description>
<name>yarn.nodemanager.resource.pcores-vcores-multiplier</name>
<value>1.0</value>
</property>
# Physical memory the NodeManager may hand out to containers; default 8 GB
<property>
<description>Amount of physical memory, in MB, that can be allocated for containers. If set to -1 and yarn.nodemanager.resource.detect-hardware-capabilities is true, it is automatically calculated(in case of Windows and Linux). In other cases, the default is 8192MB.
</description>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>-1</value>
</property>
# Memory the NodeManager reserves for the system (physical memory set aside for non-YARN processes, in MB)
<property>
<description>Amount of physical memory, in MB, that is reserved for non-YARN processes. This configuration is only used if yarn.nodemanager.resource.detect-hardware-capabilities is set to true and yarn.nodemanager.resource.memory-mb is -1. If set to -1, this amount is calculated as 20% of (system memory - 2*HADOOP_HEAPSIZE)
</description>
<name>yarn.nodemanager.resource.system-reserved-memory-mb</name>
<value>-1</value>
</property>
# Whether physical-memory limits are enforced on containers; enabled by default
<property>
<description>Whether physical memory limits will be enforced for
containers.</description>
<name>yarn.nodemanager.pmem-check-enabled</name>
<value>true</value>
</property>
# Whether virtual-memory limits are enforced on containers; enabled by default (this check is often disabled in practice, because the JVM's virtual-memory allocation on CentOS 7/8 conflicts with it and causes spurious container kills)
<property>
<description>Whether virtual memory limits will be enforced for
containers.</description>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>true</value>
</property>
# Ratio of virtual to physical memory; default 2.1 (a container allocated 1 GB of physical memory may therefore use up to 2.1 GB of virtual memory)
<property>
<description>Ratio between virtual memory to physical memory when setting memory limits for containers. Container allocations are expressed in terms of physical memory, and virtual memory usage is allowed to exceed this allocation by this ratio.
</description>
<name>yarn.nodemanager.vmem-pmem-ratio</name>
<value>2.1</value>
</property>
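A plain-arithmetic illustration of how the three settings above interact; this mirrors the checks conceptually and is not NodeManager source:

public class MemCheckSketch {
    public static void main(String[] args) {
        long pmemLimitMb = 1024;                      // the container's physical allocation
        double ratio = 2.1;                           // yarn.nodemanager.vmem-pmem-ratio
        double vmemLimitMb = pmemLimitMb * ratio;     // 2150.4 MB virtual ceiling

        long usedPmemMb = 900, usedVmemMb = 2300;     // sample usage figures
        boolean pmemKill = usedPmemMb > pmemLimitMb;  // false: 900 <= 1024
        boolean vmemKill = usedVmemMb > vmemLimitMb;  // true: 2300 > 2150.4
        System.out.println("pmemKill=" + pmemKill + ", vmemKill=" + vmemKill);
    }
}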
3) Containers
# Minimum container memory; default 1 GB
<property>
<description>The minimum allocation for every container request at the RM in MBs. Memory requests lower than this will be set to the value of this property. Additionally, a node manager that is configured to have less memory than this value will be shut down by the resource manager.</description>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>1024</value>
</property>
# Maximum container memory; default 8 GB
<property>
<description>The maximum allocation for every container request at the RM in MBs. Memory requests higher than this will throw an InvalidResourceRequestException.</description>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>8192</value>
</property>
# Minimum vcores per container; default 1
<property>
<description>The minimum allocation for every container request at the RM in terms of virtual CPU cores. Requests lower than this will be set to the value of this property. Additionally, a node manager that is configured to have fewer virtual cores than this value will be shut down by the resource manager.
</description>
<name>yarn.scheduler.minimum-allocation-vcores</name>
<value>1</value>
</property>
# Maximum vcores per container; default 4
<property>
<description>The maximum allocation for every container request at the RM in terms of virtual CPU cores. Requests higher than this will throw an InvalidResourceRequestException.
</description>
<name>yarn.scheduler.maximum-allocation-vcores</name>
<value>4</value>
</property>
After changing the parameters, distribute the file to all servers and restart (xsync and myhadoop are this cluster's custom sync and start/stop helper scripts):
hadoop1 $ xsync $HADOOP_HOME/etc/hadoop/yarn-site.xml
# stop
hadoop1 $ myhadoop stop
# start
hadoop1 $ myhadoop start
How should queues be created in production?
A. The scheduler ships with a single default queue, which is not enough for production.
B. By framework: hive / spark / flink jobs each go to a dedicated queue (not especially common in industry).
C. By business module: login/registration, shopping cart, ordering, business unit 1, business unit 2.
What do multiple queues buy you?
A. Containment: a careless job (say, runaway recursive code) cannot drain the entire cluster's resources.
B. Task degradation: during special periods such as the Nov 11 and June 18 shopping festivals, the important queues keep guaranteed resources:
business unit 1 (critical) => business unit 2 (important) => ordering (normal) => shopping cart (normal) => login/registration (minor)
Requirement 1: the default queue gets 40% of total memory with a maximum capacity of 60% of total resources; the hive queue gets 60% of total memory with a maximum capacity of 80%.
Configure $HADOOP_HOME/etc/hadoop/capacity-scheduler.xml as follows:
(1) Declare the queues and adjust the default queue:
<property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default,hive</value>
    <description>
      The queues at the this level (root is the root queue).
    </description>
</property>
<property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>40</value>
</property>
<property>
    <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
    <value>60</value>
</property>
(2) Add the required properties for the new queue:
<!-- capacity of the hive queue: 60% of root -->
<property>
    <name>yarn.scheduler.capacity.root.hive.capacity</name>
    <value>60</value>
</property>
<!-- how much of the queue a single user may occupy, as a multiple of the queue capacity -->
<property>
    <name>yarn.scheduler.capacity.root.hive.user-limit-factor</name>
    <value>1</value>
</property>
<!-- maximum capacity the queue may grow to: 80% of root -->
<property>
    <name>yarn.scheduler.capacity.root.hive.maximum-capacity</name>
    <value>80</value>
</property>
<!-- queue state: RUNNING or STOPPED -->
<property>
    <name>yarn.scheduler.capacity.root.hive.state</name>
    <value>RUNNING</value>
</property>
<!-- who may submit applications to the queue (* = everyone) -->
<property>
    <name>yarn.scheduler.capacity.root.hive.acl_submit_applications</name>
    <value>*</value>
</property>
<!-- who may administer the queue (* = everyone) -->
<property>
    <name>yarn.scheduler.capacity.root.hive.acl_administer_queue</name>
    <value>*</value>
</property>
<!-- who may set application priorities in the queue (* = everyone) -->
<property>
    <name>yarn.scheduler.capacity.root.hive.acl_application_max_priority</name>
    <value>*</value>
</property>
<!-- hard upper bound on application lifetime, in seconds; -1 means unlimited -->
<property>
    <name>yarn.scheduler.capacity.root.hive.maximum-application-lifetime</name>
    <value>-1</value>
</property>
<!-- default application lifetime, in seconds; -1 means unlimited -->
<property>
    <name>yarn.scheduler.capacity.root.hive.default-application-lifetime</name>
    <value>-1</value>
</property>
Distribute the configuration file, then either restart YARN or run yarn rmadmin -refreshQueues to reload the queues; http://hadoop2:8088 will then show both queues:
hadoop1 $ xsync $HADOOP_HOME/etc/hadoop/capacity-scheduler.xml
hadoop1 $ yarn rmadmin -refreshQueues
1) Via hadoop jar
# Choose the queue at submission time (the default is default), e.g. submit to the hive queue:
hadoop-3.3.1]$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar wordcount -D mapreduce.job.queuename=hive /input /output2
2) Inside the packaged jar (recommended)
Jobs are submitted to the default queue unless told otherwise. To target another queue, set it in the Driver:
public class WcDriver {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        conf.set("mapreduce.job.queuename", "hive");
        // 1. Get a Job instance
        Job job = Job.getInstance(conf);
        ……
        ……
        // 6. Submit the job
        boolean b = job.waitForCompletion(true);
        System.exit(b ? 0 : 1);
    }
}
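Once packaged, the job lands in the hive queue with no extra flags. The jar name and output path below are placeholders for your own build:
hadoop2 hadoop-3.3.1]$ hadoop jar wc.jar com.leojiang.mapreduce.wordcount2.WcDriver /input /output3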
The Capacity Scheduler supports task priorities: when resources are tight, higher-priority tasks obtain resources first. By default YARN caps every task's priority at 0, so the feature must be unlocked before use.
1) Add the following parameter to yarn-site.xml:
hadoop1 $ vim $HADOOP_HOME/etc/hadoop/yarn-site.xml
<property>
<name>yarn.cluster.max-application-priority</name>
<value>5</value>
</property>
2) Distribute the file and restart YARN
hadoop1 $ xsync $HADOOP_HOME/etc/hadoop/yarn-site.xml
hadoop2 $ $HADOOP_HOME/sbin/stop-yarn.sh
hadoop2 $ $HADOOP_HOME/sbin/start-yarn.sh
3) To simulate resource pressure, keep submitting the job below until a newly submitted job can no longer obtain resources:
hadoop1 hadoop-3.3.1]$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar pi 5 2000000
4) Then resubmit with a high priority (each job's priority is visible on the YARN UI at http://hadoop2:8088):
hadoop2 hadoop-3.3.1]$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar pi -D mapreduce.job.priority=5 5 2000000
5) The priority of a running application can also be changed:
# usage: yarn application -appID <ApplicationId> -updatePriority <priority>
hadoop-3.3.1]$ yarn application -appID application_1611133087930_0009 -updatePriority 5
Create two queues, test and fancyry (named after the users' group). Desired behaviour: if a user names a queue at submission time, the job runs in that queue; otherwise jobs from user test run in the root.group.test queue and jobs from user fancyry run in the root.group.fancyry queue (where group is the user's primary group).
Fair Scheduler configuration involves two files: yarn-site.xml and the queue allocation file fair-scheduler.xml (the file name is configurable).
Add the following parameters to $HADOOP_HOME/etc/hadoop/yarn-site.xml:
<!-- use the Fair Scheduler -->
<property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
<!-- point at the queue allocation file -->
<property>
    <name>yarn.scheduler.fair.allocation.file</name>
    <value>/var/opt/hadoopSoftware/hadoop-3.3.1/etc/hadoop/fair-scheduler.xml</value>
</property>
<!-- disable resource preemption between queues -->
<property>
    <name>yarn.scheduler.fair.preemption</name>
    <value>false</value>
</property>
Create the allocation file $HADOOP_HOME/etc/hadoop/fair-scheduler.xml with the two queues:
<?xml version="1.0"?>
<allocations>
    <!-- default cap on the share of a queue that ApplicationMasters may use -->
    <queueMaxAMShareDefault>0.5</queueMaxAMShareDefault>
    <!-- default maximum resources per queue -->
    <queueMaxResourcesDefault>4096mb,4vcores</queueMaxResourcesDefault>
    <!-- queue test -->
    <queue name="test">
        <minResources>2048mb,2vcores</minResources>
        <maxResources>4096mb,4vcores</maxResources>
        <maxRunningApps>4</maxRunningApps>
        <maxAMShare>0.5</maxAMShare>
        <weight>1.0</weight>
        <schedulingPolicy>fair</schedulingPolicy>
    </queue>
    <!-- queue fancyry; type="parent" allows per-user sub-queues beneath it -->
    <queue name="fancyry" type="parent">
        <minResources>2048mb,2vcores</minResources>
        <maxResources>4096mb,4vcores</maxResources>
        <maxRunningApps>4</maxRunningApps>
        <weight>1.0</weight>
        <schedulingPolicy>fair</schedulingPolicy>
    </queue>
    <!-- placement rules, tried in order: the queue named at submission time,
         otherwise a per-user queue nested under the user's primary-group queue -->
    <queuePlacementPolicy>
        <rule name="specified" create="false"/>
        <rule name="nestedUserQueue" create="true">
            <rule name="primaryGroup" create="false"/>
        </rule>
        <rule name="reject"/>
    </queuePlacementPolicy>
</allocations>
Tips:
maxAMShare: limits the fraction of the queue's fair share that may be used to run ApplicationMasters. It applies only to leaf queues. For example, 1.0f lets the AMs in a leaf queue take up to 100% of both the memory and the CPU fair share; -1.0f disables the feature, and the amShare is not checked. The default is 0.5f.
hadoop1 $ xsync $HADOOP_HOME/etc/hadoop/yarn-site.xml
hadoop1 $ xsync $HADOOP_HOME/etc/hadoop/fair-scheduler.xml
hadoop2 $ $HADOOP_HOME/sbin/stop-yarn.sh
hadoop2 $ $HADOOP_HOME/sbin/start-yarn.sh
1. Refresh the page to see the new scheduler
# YARN ResourceManager UI
http://hadoop2:8088
2. Submitting with an explicit queue: per the placement rules, the job goes to the named root.test queue
hadoop2 hadoop-3.3.1]$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar pi -D mapreduce.job.queuename=root.test 1 1
3. Submitting without a queue: per the placement rules, the job lands under root in a queue named after whichever user submitted it
hadoop2 hadoop-3.3.1]$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar pi 1 1
Recap: the word count example
hadoop2 hadoop-3.3.1]$ hadoop jar helloworld-maven-java.jar com.leojiang.mapreduce.wordcount2.WordCountDriver /input /output
We would like to pass parameters dynamically, but the run fails: the driver does not parse generic options, so -Dmapreduce.job.queuename=root.test is mistaken for the first input path.
hadoop2 hadoop-3.3.1]$ hadoop jar helloworld-maven-java.jar com.leojiang.mapreduce.wordcount2.WordCountDriver -Dmapreduce.job.queuename=root.test /input /output2
The pom.xml is as follows:
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.leojiang.yarn</groupId>
    <artifactId>YarnDemo</artifactId>
    <version>1.0-SNAPSHOT</version>
    <properties>
        <maven.compiler.source>8</maven.compiler.source>
        <maven.compiler.target>8</maven.compiler.target>
    </properties>
    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.13</version>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>1.7.30</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>3.3.4</version>
        </dependency>
    </dependencies>
</project>
Create the WordCount class in package com.leojiang.yarn:
package com.leojiang.yarn;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import java.io.IOException;
// (IDE tip: Alt+Enter generates the unimplemented Tool methods)
public class WordCount implements Tool {
    private Configuration conf;

    // Core driver (conf must be passed in via setConf)
    @Override
    public int run(String[] args) throws Exception {
        Job job = Job.getInstance(conf);
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    @Override
    public void setConf(Configuration conf) {
        this.conf = conf;
    }

    @Override
    public Configuration getConf() {
        return conf;
    }

    // Mapper
    public static class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private Text outK = new Text();
        private IntWritable outV = new IntWritable(1);

        @Override
        protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            // Read one line
            String line = value.toString();
            // String[] words = line.split("");
            // Strip leading/trailing whitespace
            String tline = line.trim();
            // \s matches any whitespace character; + matches one or more occurrences
            String[] words = tline.split("\\s+");
            for (String word : words) {
                outK.set(word);
                context.write(outK, outV);
            }
        }
    }

    // Reducer
    public static class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable outV = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable value : values) {
                sum += value.get();
            }
            outV.set(sum);
            context.write(key, outV);
        }
    }
}
package com.leojiang.yarn;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import java.util.Arrays;
public class WordCountDriver {
    private static Tool tool;

    public static void main(String[] args) throws Exception {
        // Create the configuration
        Configuration conf = new Configuration();
        switch (args[0]) {
            case "wordcount":
                tool = new WordCount();
                break;
            default:
                throw new RuntimeException("no such tool: " + args[0]);
        }
        // Run the selected tool; args[0] has been consumed, pass the rest on
        int run = ToolRunner.run(conf, tool, Arrays.copyOfRange(args, 1, args.length));
        System.exit(run);
    }
}
Find the Driver's fully qualified class name: com.leojiang.yarn.WordCountDriver.
Prepare the /input directory on HDFS, then submit the jar to the cluster:
hadoop1 hadoop-3.3.1]$ yarn jar $HADOOP_HOME/test-jar/YarnDemo-1.0-SNAPSHOT.jar com.leojiang.yarn.WordCountDriver wordcount /input /output4
Note the three arguments: the first selects which Tool to instantiate, the second and third are the input and output paths. To add settings, place them right after wordcount; ToolRunner.run feeds them through GenericOptionsParser, so -D options are consumed before the remaining args reach run(). For example:
hadoop1 hadoop-3.3.1]$ yarn jar $HADOOP_HOME/test-jar/YarnDemo-1.0-SNAPSHOT.jar com.leojiang.yarn.WordCountDriver wordcount -Dmapreduce.job.queuename=root.test /input /output5
1) Three schedulers: FIFO, capacity, fair.
2) Apache Hadoop defaults to the Capacity Scheduler; CDH defaults to the Fair Scheduler.
3) The capacity and fair schedulers both start with a single default queue; multiple queues must be created.
4) Characteristics of each scheduler:
Shared: multiple queues, resource borrowing between queues, multi-user support.
Different:
Capacity Scheduler: satisfies the jobs that arrive first.
Fair Scheduler: jobs within a queue share the queue's resources fairly.
5) Choosing in production: the Capacity Scheduler when cluster resources are limited; the Fair Scheduler for large companies with ample, high-performance hardware and high concurrency demands.
This chapter covered:
1) How queues and schedulers work
2) Common yarn commands
3) Core parameter tuning
4) Configuring the capacity and fair schedulers
5) Using the Tool interface