Note: take a Linux snapshot before adjusting the parameters below; otherwise the later examples will require rebuilding the cluster.
1) Requirement: count the number of occurrences of each word in 1 GB of data. There are 3 servers, each with 4 GB of memory and a 4-core, 4-thread CPU.
2) Analysis:
1 GB / 128 MB = 8 MapTasks, plus 1 ReduceTask and 1 MRAppMaster, for 10 tasks in total.
10 tasks / 3 nodes ≈ 3 tasks per node (e.g., 4 / 3 / 3 across the three nodes).
3) Modify the following parameters in yarn-site.xml:
```xml
<!-- The class to use as the resource scheduler. -->
<property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>

<!-- Number of threads to handle scheduler interface. -->
<property>
    <name>yarn.resourcemanager.scheduler.client.thread-count</name>
    <value>8</value>
</property>

<!-- Enable auto-detection of node capabilities such as memory and CPU. -->
<property>
    <name>yarn.nodemanager.resource.detect-hardware-capabilities</name>
    <value>false</value>
</property>

<!-- Flag to determine if logical processors (such as hyperthreads) should be counted as cores.
     Only applicable on Linux when yarn.nodemanager.resource.cpu-vcores is set to -1 and
     yarn.nodemanager.resource.detect-hardware-capabilities is true. -->
<property>
    <name>yarn.nodemanager.resource.count-logical-processors-as-cores</name>
    <value>false</value>
</property>

<!-- Multiplier to determine how to convert physical cores to vcores. This value is used if
     yarn.nodemanager.resource.cpu-vcores is set to -1 (which implies auto-calculate vcores) and
     yarn.nodemanager.resource.detect-hardware-capabilities is set to true.
     The number of vcores will be calculated as number of CPUs * multiplier. -->
<property>
    <name>yarn.nodemanager.resource.pcores-vcores-multiplier</name>
    <value>1.0</value>
</property>

<!-- Amount of physical memory, in MB, that can be allocated for containers. If set to -1 and
     yarn.nodemanager.resource.detect-hardware-capabilities is true, it is automatically
     calculated (in case of Windows and Linux). In other cases, the default is 8192 MB. -->
<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>4096</value>
</property>

<!-- Number of vcores that can be allocated for containers. This is used by the RM scheduler
     when allocating resources for containers. This is not used to limit the number of CPUs
     used by YARN containers. If it is set to -1 and
     yarn.nodemanager.resource.detect-hardware-capabilities is true, it is automatically
     determined from the hardware in case of Windows and Linux. In other cases, the number of
     vcores is 8 by default. -->
<property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>4</value>
</property>

<!-- The minimum allocation for every container request at the RM in MBs. Memory requests lower
     than this will be set to the value of this property. Additionally, a NodeManager that is
     configured to have less memory than this value will be shut down by the ResourceManager. -->
<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
</property>

<!-- The maximum allocation for every container request at the RM in MBs. Memory requests higher
     than this will throw an InvalidResourceRequestException. -->
<property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>
</property>

<!-- The minimum allocation for every container request at the RM in terms of virtual CPU cores.
     Requests lower than this will be set to the value of this property. Additionally, a
     NodeManager that is configured to have fewer virtual cores than this value will be shut
     down by the ResourceManager. -->
<property>
    <name>yarn.scheduler.minimum-allocation-vcores</name>
    <value>1</value>
</property>

<!-- The maximum allocation for every container request at the RM in terms of virtual CPU cores.
     Requests higher than this will throw an InvalidResourceRequestException. -->
<property>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>2</value>
</property>

<!-- Whether virtual memory limits will be enforced for containers. -->
<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>

<!-- Ratio between virtual memory to physical memory when setting memory limits for containers.
     Container allocations are expressed in terms of physical memory, and virtual memory usage
     is allowed to exceed this allocation by this ratio. -->
<property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>2.1</value>
</property>
```
Supplement: why the virtual memory check is disabled:
On CentOS 7 with Java 8 and above, the virtual memory Linux reserves for a Java process is not shared with the Java heap, so the virtual-memory figure greatly overstates real usage; enforcing the check would waste a large amount of resources and could kill containers whose physical memory usage is actually fine.
4) Distribute the configuration.
Note: if the hardware resources of the cluster nodes differ, configure each NodeManager individually.
5) Restart the cluster:

```sh
sbin/stop-yarn.sh
sbin/start-yarn.sh
```
6) Run the WordCount program.
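For instance, using the examples jar that ships with Hadoop (the jar path matches the cluster layout used later in these notes; /input is assumed to hold the 1 GB test data):

```sh
hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /input /output
```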
7) Observe the job on the YARN task page (the ResourceManager web UI at master:8088).
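The per-node resources registered with the ResourceManager can also be checked from the CLI, assuming a Hadoop 3.x `yarn node` command that supports the `-showDetails` flag:

```sh
yarn node -list -showDetails
```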
1) How are queues created in production?
(1) The scheduler ships with only the single default queue, which cannot meet production needs.
(2) By framework: hive / spark / flink jobs each go to a dedicated queue (not especially common in industry).
(3) By business module: login/registration, shopping cart, order placement, business unit 1, business unit 2 (the common approach).
2) What are the benefits of multiple queues?
(1) It guards against a careless employee submitting, say, a runaway recursive job that exhausts all cluster resources.
(2) It allows graceful degradation: during special periods, important task queues are guaranteed sufficient resources.
For example, around the 11.11 and 6.18 shopping festivals:
business unit 1 (important) => business unit 2 (fairly important) => order placement (normal) => shopping cart (normal) => login/registration (minor)
Requirement 1: the default queue gets 40% of total memory with a maximum resource capacity of 60%; the hive queue gets 60% of total memory with a maximum resource capacity of 80%.
Requirement 2: configure queue priorities.
(1) Modify the following configuration in capacity-scheduler.xml:

```xml
<!-- The queues at this level (root is the root queue). -->
<property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default,hive</value>
</property>

<!-- Capacity of the default queue: 40% -->
<property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>40</value>
</property>

<!-- Maximum capacity of the default queue: 60% -->
<property>
    <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
    <value>60</value>
</property>
```
(2) Add the necessary properties for the new queue:

```xml
<!-- Capacity of the hive queue: 60% -->
<property>
    <name>yarn.scheduler.capacity.root.hive.capacity</name>
    <value>60</value>
</property>

<!-- Maximum share of the queue's resources a single user may use -->
<property>
    <name>yarn.scheduler.capacity.root.hive.user-limit-factor</name>
    <value>1</value>
</property>

<!-- Maximum capacity of the hive queue: 80% -->
<property>
    <name>yarn.scheduler.capacity.root.hive.maximum-capacity</name>
    <value>80</value>
</property>

<!-- Queue state: RUNNING or STOPPED -->
<property>
    <name>yarn.scheduler.capacity.root.hive.state</name>
    <value>RUNNING</value>
</property>

<!-- Who may submit applications to the queue -->
<property>
    <name>yarn.scheduler.capacity.root.hive.acl_submit_applications</name>
    <value>*</value>
</property>

<!-- Who may administer the queue -->
<property>
    <name>yarn.scheduler.capacity.root.hive.acl_administer_queue</name>
    <value>*</value>
</property>

<!-- Who may set application priorities in the queue -->
<property>
    <name>yarn.scheduler.capacity.root.hive.acl_application_max_priority</name>
    <value>*</value>
</property>

<!-- Maximum lifetime in seconds of an application in the queue; -1 means no limit -->
<property>
    <name>yarn.scheduler.capacity.root.hive.maximum-application-lifetime</name>
    <value>-1</value>
</property>

<!-- Default lifetime in seconds of an application in the queue; -1 means no limit -->
<property>
    <name>yarn.scheduler.capacity.root.hive.default-application-lifetime</name>
    <value>-1</value>
</property>
```
Distribute the modified capacity-scheduler.xml to all nodes with the xsync script.
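After distribution, either restart YARN or reload the queue definitions in place with the standard refresh command:

```sh
yarn rmadmin -refreshQueues
```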
Submit a job to the hive queue from the command line:

```sh
# -D overrides a configuration property at run time
hadoop jar Wordcount.jar wordcount -D mapreduce.job.queuename=hive /input /output
```
By default, jobs are submitted to the default queue. To submit a job to a different queue, declare the queue in the Driver:
```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

import java.io.IOException;

public class Driver {
    public static void main(String[] args) throws IOException,
            ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        // Declare the hive queue
        conf.set("mapreduce.job.queuename", "hive");
        // 1. Get a Job instance
        Job job = Job.getInstance(conf);
        // ...... omitted here
        // 6. Submit the Job
        boolean b = job.waitForCompletion(true);
        System.exit(b ? 0 : 1);
    }
}
```
On master:8088 you can see that the job is now running in the hive queue.
The Capacity Scheduler supports task priorities: when resources are tight, higher-priority tasks obtain resources first. By default, YARN limits all task priorities to 0, so to use the priority feature this limit must be raised.
1) Modify yarn-site.xml and add the following parameter:

```xml
<!-- Maximum priority an application may request; defaults to 0 -->
<property>
    <name>yarn.cluster.max-application-priority</name>
    <value>5</value>
</property>
```
2) Distribute the configuration and restart YARN:

```sh
xsync yarn-site.xml
sbin/stop-yarn.sh
sbin/start-yarn.sh
```
3) To simulate a resource-starved environment, submit the following job repeatedly until newly submitted jobs can no longer obtain resources.
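A long-running pi job works well for this; the jar path follows the cluster layout used above, and the argument values (5 maps, 2,000,000 samples each) are only illustrative:

```sh
hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar pi 5 2000000
```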
4) Then submit a job with a higher priority.
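The priority can be attached at submission time via -D mapreduce.job.priority (0 up to the configured maximum, here 5); the jar path and arguments are the same illustrative values as above:

```sh
hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar pi -D mapreduce.job.priority=5 5 2000000
```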
5) The priority of a running application can also be changed from the command line:

```sh
yarn application -appId <ApplicationId> -updatePriority <Priority>
```
Create two queues, test and pcz (named after the users' groups). Desired behavior: if a user specifies a queue at submission, the job runs in that queue; otherwise, jobs submitted by user test run in root.group.test and jobs submitted by user pcz run in root.group.pcz (where group is the user's primary group).
Configuring the Fair Scheduler involves two files: yarn-site.xml and the Fair Scheduler queue allocation file fair-scheduler.xml (the file name is configurable).
(1) Configuration reference:
https://hadoop.apache.org/docs/r3.1.3/hadoop-yarn/hadoop-yarn-site/FairScheduler.html
(2) Queue placement rule reference:
https://blog.cloudera.com/untangling-apache-hadoop-yarn-part-4-fair-scheduler-queue-basics/
Add the following to yarn-site.xml:

```xml
<!-- Use the Fair Scheduler -->
<property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>

<!-- Point to the Fair Scheduler queue allocation file -->
<property>
    <name>yarn.scheduler.fair.allocation.file</name>
    <value>/opt/module/hadoop-3.1.3/etc/hadoop/fair-scheduler.xml</value>
</property>

<!-- Disable resource preemption between queues -->
<property>
    <name>yarn.scheduler.fair.preemption</name>
    <value>false</value>
</property>
```
Then define the queues in fair-scheduler.xml:

```xml
<?xml version="1.0"?>
<allocations>
    <!-- Default maximum share of a queue's resources that ApplicationMasters may use (0-1) -->
    <queueMaxAMShareDefault>0.5</queueMaxAMShareDefault>
    <!-- Default maximum resources of a single queue -->
    <queueMaxResourcesDefault>4096mb,4vcores</queueMaxResourcesDefault>

    <!-- Add a queue named test -->
    <queue name="test">
        <!-- Minimum queue resources -->
        <minResources>2048mb,2vcores</minResources>
        <!-- Maximum queue resources -->
        <maxResources>4096mb,4vcores</maxResources>
        <!-- Maximum number of applications running in the queue at the same time -->
        <maxRunningApps>4</maxRunningApps>
        <!-- Maximum share of the queue's resources that ApplicationMasters may use -->
        <maxAMShare>0.5</maxAMShare>
        <!-- Queue weight, default 1.0 -->
        <weight>1.0</weight>
        <!-- Scheduling policy inside the queue -->
        <schedulingPolicy>fair</schedulingPolicy>
    </queue>

    <!-- Add a queue named pcz; type="parent" allows user queues to be created beneath it -->
    <queue name="pcz" type="parent">
        <minResources>2048mb,2vcores</minResources>
        <maxResources>4096mb,4vcores</maxResources>
        <maxRunningApps>4</maxRunningApps>
        <maxAMShare>0.5</maxAMShare>
        <weight>1.0</weight>
        <schedulingPolicy>fair</schedulingPolicy>
    </queue>

    <!-- Queue placement rules, matched top to bottom (per the requirement stated above) -->
    <queuePlacementPolicy>
        <!-- Use the queue specified at submission; if none was specified, fall through -->
        <rule name="specified" create="false"/>
        <!-- Otherwise place the job in root.<primaryGroup>.<user> -->
        <rule name="nestedUserQueue" create="true">
            <rule name="primaryGroup" create="false"/>
        </rule>
        <!-- The last rule must be reject or default -->
        <rule name="reject"/>
    </queuePlacementPolicy>
</allocations>
```
Distribute both files and restart YARN:

```sh
xsync yarn-site.xml
xsync fair-scheduler.xml
sbin/stop-yarn.sh
sbin/start-yarn.sh
```
1) When a queue is specified at submission, the job goes to the specified root.test queue by the placement rules:

```sh
hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar pi -Dmapreduce.job.queuename=root.test 1 1
```
2) When no queue is specified, the job goes to the root.pcz.pcz queue by the placement rules:

```sh
hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar pi 1 1
```
(1) Create a new Maven project (here named YarnDemo) with the following pom.xml:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.atguigu.hadoop</groupId>
    <artifactId>yarn_tool_test</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>3.1.4</version>
        </dependency>
    </dependencies>
</project>
```
(2) Create the package com.pcz.yarn.
(3) Create a WordCount class implementing the Tool interface:
```java
package com.pcz.yarn;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;

import java.io.IOException;

public class WordCount implements Tool {

    private Configuration conf;

    @Override
    public int run(String[] args) throws Exception {
        Job job = Job.getInstance(conf);
        job.setJarByClass(WordCountDriver.class);

        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        return job.waitForCompletion(true) ? 0 : 1;
    }

    @Override
    public void setConf(Configuration conf) {
        this.conf = conf;
    }

    @Override
    public Configuration getConf() {
        return conf;
    }

    public static class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

        private Text outK = new Text();
        private IntWritable outV = new IntWritable(1);

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Split each line on spaces and emit (word, 1) pairs
            String line = value.toString();
            String[] words = line.split(" ");
            for (String word : words) {
                outK.set(word);
                context.write(outK, outV);
            }
        }
    }

    public static class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable outV = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // Sum the counts for each word
            int sum = 0;
            for (IntWritable value : values) {
                sum += value.get();
            }
            outV.set(sum);
            context.write(key, outV);
        }
    }
}
```
(4) Create the WordCountDriver class:
```java
package com.pcz.yarn;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

import java.util.Arrays;

public class WordCountDriver {

    private static Tool tool;

    public static void main(String[] args) throws Exception {
        // 1. Create the configuration
        Configuration conf = new Configuration();

        // 2. Pick the Tool implementation by name
        switch (args[0]) {
            case "wordcount":
                tool = new WordCount();
                break;
            default:
                throw new RuntimeException("No such tool: " + args[0]);
        }

        // 3. Run the tool via ToolRunner, which parses generic options such as -D;
        //    Arrays.copyOfRange strips the tool name so only the remaining args are passed on
        int run = ToolRunner.run(conf, tool, Arrays.copyOfRange(args, 1, args.length));
        System.exit(run);
    }
}
```
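Before submitting, package the project and copy the jar to the cluster; the name YarnDemo.jar used below assumes the build artifact was renamed (or configured) accordingly:

```sh
mvn clean package
```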
```sh
yarn jar YarnDemo.jar com.pcz.yarn.WordCountDriver wordcount /input /output
```
Note the three arguments passed here: the first selects the Tool implementation, and the second and third are the input and output directories. To pass extra settings, add -D parameters after wordcount, for example:

```sh
yarn jar YarnDemo.jar com.pcz.yarn.WordCountDriver wordcount -Dmapreduce.job.queuename=root.test /input /output1
```
After completing all of the above, roll back to the snapshot or manually restore the configuration files to their previous state: the cluster's resources are limited to begin with, and leaving them split across this many queues makes later testing inconvenient.