Hadoop Development -- IDEA (III)

I. Plugin Installation

  1. Source code
    https://github.com/fangyuzhong2016/HadoopIntellijPlugin
    Download the code:
git clone https://github.com/fangyuzhong2016/HadoopIntellijPlugin.git

Note: the source downloaded from GitHub must be compiled before it can be used.

  2. Build
    ①、The Intellij plugin for hadoop source is built and packaged with Maven, so make sure JDK 1.8 and Maven 3 or later are installed before building.
    ②、The plugin was developed against IntelliJ IDEA Ultimate 2017.2, so IntelliJ IDEA Ultimate 2017 or later is required.
    ③、Enter the source directory ../HadoopIntellijPlugin/ and edit pom.xml, mainly the Hadoop version and the IntelliJ IDEA installation path:

    <!-- Edit the properties: Hadoop version and IDEA installation path.
         The exact tag names were lost in extraction; check the plugin's own pom.xml. -->
    <properties>
        <hadoop.version>3.0.0-alpha2</hadoop.version>
        <idea.home>C:\Program Files\JetBrains\IntelliJ IDEA 2018.2</idea.home>
    </properties>

④、Run the Maven commands. First:

C:\Users\Administrator>d:
D:\>cd HadoopIntellijPlugin
D:\HadoopIntellijPlugin>mvn clean
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  40.727 s
[INFO] Finished at: 2019-11-21T13:54:47+08:00
[INFO] ------------------------------------------------------------------------

Then run:

D:\HadoopIntellijPlugin>mvn assembly:assembly
[INFO] Reading assembly descriptor: assembly.xml
[INFO] artifact net.minidev:json-smart: checking for updates from aliyun-repos
[INFO] artifact net.minidev:json-smart: checking for updates from central
[INFO] artifact net.minidev:json-smart: checking for updates from dynamodb-local-oregon
[INFO] artifact net.minidev:json-smart: checking for updates from apache.snapshots.https
[INFO] artifact net.minidev:json-smart: checking for updates from repository.jboss.org
[INFO] HadoopIntellijPlugin/lib/HadoopIntellijPlugin-1.0.jar already added, skipping
[INFO] Building zip: D:\HadoopIntellijPlugin\target\HadoopIntellijPlugin-1.0.zip
[INFO] HadoopIntellijPlugin/lib/HadoopIntellijPlugin-1.0.jar already added, skipping
[INFO]
[INFO] <<< maven-assembly-plugin:2.2-beta-5:assembly (default-cli) < package @ HadoopIntellijPlugin <<<
[INFO]
[INFO]
[INFO] --- maven-assembly-plugin:2.2-beta-5:assembly (default-cli) @ HadoopIntellijPlugin ---
[INFO] Reading assembly descriptor: assembly.xml
[INFO] HadoopIntellijPlugin/lib/HadoopIntellijPlugin-1.0.jar already added, skipping
[INFO] Building zip: D:\HadoopIntellijPlugin\target\HadoopIntellijPlugin-1.0.zip
[INFO] HadoopIntellijPlugin/lib/HadoopIntellijPlugin-1.0.jar already added, skipping
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  01:00 min
[INFO] Finished at: 2019-11-21T16:28:22+08:00
[INFO] ------------------------------------------------------------------------

After the build finishes, .../target/HadoopIntellijPlugin-1.0.zip is the plugin installation package; install it into IntelliJ.
D:\HadoopIntellijPlugin\target

(Screenshot: generated files)
  3. Install HadoopIntellijPlugin
    In Settings > Plugins, choose "Install plugin from disk…" and select the generated zip.

    (Screenshots: installing the plugin)
  4. Change the GUI Designer setting
    Reopen IDEA. Because the plugin is built on IDEA's GUI framework, the UI has to be generated by IDEA at run time: in Settings, find GUI Designer and set it to generate the GUI into Java source code.

    (Screenshots: GUI Designer setting; GUI file system)
  5. HDFS settings

    (Screenshots: connection settings)

    Filling in the parameters:
    Only the HDFS address is required, e.g. hdfs://master:9000 (the NameNode address used later in this article).
    Note: the Test function appears to be incomplete; it may report a connection failure, but clicking OK anyway lets the plugin work normally.

  6. Every file change may require logging in with the right user permissions, which is inconvenient. This can be turned off in hdfs-site.xml:

    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>

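An alternative that avoids disabling permissions cluster-wide is to have the client identify itself as the owner of the target paths. A minimal sketch, to be placed at the top of main() before any FileSystem call; "root" is an assumption, use whichever user owns the target HDFS directories:

    // Sketch: make the HDFS client act as this user ("root" is an assumption)
    System.setProperty("HADOOP_USER_NAME", "root");
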
Common problems:

  1. The Test button reports a failure, but the connection works anyway.

    (Screenshot: connection failed)

    The connection test is implemented in the source class
com.fangyuzhong.intelliJ.hadoop.fsconnection.ConnectionManager.

II. Intellij plugin for hadoop: Configuration and Source Code Notes

1. Source code layout

(Screenshot: plugin source structure)

①、core package: the plugin's core, a library of shared components -- common UI, multi-threaded operations, the Hadoop connection settings base classes, generic Hadoop file system operations, general plugin settings classes, and other utilities.
②、fsconnection package: the Hadoop file system connection implementation and its configuration classes.
③、fsobject package: the file system object model (for HDFS, how directory and file tree nodes are organized).
④、fsbrowser package: the plugin's main UI, including reading and displaying HDFS data and creating, downloading, deleting, and uploading file system objects, among other operations.
⑤、globalization package: multi-language support.
⑥、options package: plugin settings.
⑦、mainmenu package: main menu actions.

2. Plugin configuration
The plugin configuration lives under .../resources/ and consists of HadoopNavigator_en_US.properties, HadoopNavigator_zh_CN.properties, and plugin.xml.
HadoopNavigator_en_US.properties holds the English UI strings.
HadoopNavigator_zh_CN.properties holds the Simplified Chinese UI strings.
Only Simplified Chinese and English are currently supported; other languages require building your own language pack. The initial default language is the operating system's default.
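
These files are standard Java resource bundles, resolved by locale at run time. A minimal sketch of the lookup (the base name matches the plugin's resource files; the key is hypothetical):

import java.util.Locale;
import java.util.ResourceBundle;

public class BundleDemo {
    public static void main(String[] args) {
        // Resolves HadoopNavigator_zh_CN.properties on a zh_CN system,
        // HadoopNavigator_en_US.properties on an en_US one.
        ResourceBundle bundle = ResourceBundle.getBundle("HadoopNavigator", Locale.getDefault());
        System.out.println(bundle.getString("some.key")); // hypothetical key
    }
}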

III. Using the Plugin

  1. Create a directory (did not work in my test)

    (Screenshot: creating a directory)
  2. Download a file

    (Screenshot: downloading a file)
  3. Upload a file

    (Screenshot: uploading a file; see the API sketch below)
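
The same three operations are also available programmatically through the HDFS Java API. A minimal sketch, assuming the hdfs://master:9000 NameNode address used later in this article and hypothetical local paths:

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsOpsDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // hdfs://master:9000 is an assumption -- use your NameNode address
        FileSystem fs = FileSystem.get(URI.create("hdfs://master:9000"), conf);

        fs.mkdirs(new Path("/demo"));                                       // create a directory
        fs.copyFromLocalFile(new Path("D:/input.txt"), new Path("/demo/")); // upload
        fs.copyToLocalFile(new Path("/demo/input.txt"), new Path("D:/"));   // download

        fs.close();
    }
}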

IV. A Hadoop Programming Example

  1. Create the project

    (Screenshots: create the project, choose Maven, set the package name, point to your Maven installation, choose the project location)
  2. Add the Maven dependencies


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.xtsz</groupId>
    <artifactId>hadoop-exercise</artifactId>
    <version>1.0.0</version>
    <name>hadoop-exercise</name>
    <description>Hadoop exercises</description>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <!-- JDK 1.8 source/target -->
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <version.hadoop>2.9.2</version.hadoop>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>${version.hadoop}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>${version.hadoop}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>${version.hadoop}</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <configuration>
                    <appendAssemblyId>false</appendAssemblyId>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                    <archive>
                        <manifest>
                            <!-- main class written into MANIFEST.MF -->
                            <mainClass>com.xtsz.WordCount</mainClass>
                        </manifest>
                    </archive>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>assembly</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

  3. Write the code
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    /**
     * Mapper: splits each input line into tokens and emits (word, 1).
     */
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context
        ) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    /**
     * Reducer: sums the counts for each word (also used as the combiner).
     */
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
                           Context context
        ) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

  4. Package an executable jar
    Using the maven-assembly-plugin:

    <plugin>
        <artifactId>maven-assembly-plugin</artifactId>
        <configuration>
            <appendAssemblyId>false</appendAssemblyId>
            <descriptorRefs>
                <descriptorRef>jar-with-dependencies</descriptorRef>
            </descriptorRefs>
            <archive>
                <manifest>
                    <!-- main class written into MANIFEST.MF -->
                    <mainClass>com.xtsz.WordCount</mainClass>
                </manifest>
            </archive>
        </configuration>
        <executions>
            <execution>
                <id>make-assembly</id>
                <phase>package</phase>
                <goals>
                    <goal>assembly</goal>
                </goals>
            </execution>
        </executions>
    </plugin>

(Screenshots: packaging the executable jar; the packaging result)
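
The jar can also be built from the command line rather than the IDE; a sketch, run from the project root (the directory name is an assumption):

D:\hadoop-exercise>mvn clean package
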
  5. Upload and run

    (Screenshot: uploading the jar file)
root@master:~# hadoop jar hadoop-exercise-1.0.0.jar  hdfs://master:9000/wordcount hdfs://master:9000/output

    (Screenshots: run results)

    Run results:
hello   7
jerry   1
jone    1
kitty   1
marquis 1
tom 2
world   1
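
The counts can also be read back directly from HDFS; a sketch, assuming the reducer's default output file name:

root@master:~# hdfs dfs -cat /output/part-r-00000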

V. Importing the Jars Without Maven

1. Create a Java project

(Screenshots: create the project, create the package, add the package name, add the WordCount class)

2. Import the jar files

The jars can be found under Hadoop's share/hadoop directory. In Project Structure, select the module, click the small + on the right, and choose "JARs or directories…".
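
On the server, the share/hadoop layout looks roughly like this (a sketch, assuming the Hadoop 2.9.2 install path used elsewhere in this article):

root@master:~# ls /usr/local/hadoop-2.9.2/share/hadoop
common  hdfs  httpfs  kms  mapreduce  tools  yarn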


(Screenshots: importing the jars manually)

Import the jars from the following directories (each label below corresponds to a screenshot of that directory's jars):

common
common/lib
hdfs
mapreduce
yarn

3. Write the code

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;
import java.util.StringTokenizer;

public class WordCountTest {
    /**
     * Mapper: splits each input line into tokens and emits (word, 1).
     */
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context
        ) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    /**
     * Reducer: sums the counts for each word (also used as the combiner).
     */
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values,
                           Context context
        ) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCountTest.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.setInputPaths(job, new Path("hdfs://192.168.71.130:9000/wordcount"));
        FileOutputFormat.setOutputPath(job, new Path("hdfs://192.168.71.130:9000/result"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
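
Note: unlike the Maven version, this class hard-codes the HDFS input and output paths. When running it directly from IDEA on Windows, Hadoop 2.x clients typically also need HADOOP_HOME and winutils.exe configured; this is a common stumbling block rather than anything specific to this example.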

4. Run the test

(Screenshots: running the test and the result)

VI. Common Problems

  1. "Unable to import maven project: See logs for details"
    Change the Maven version to 3.5.4.
  2. Clearing logs
root@master:/usr/local/hadoop-2.9.2/logs# echo "">hadoop-root-namenode-master.log
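
To truncate every log file at once, a sketch:

root@master:/usr/local/hadoop-2.9.2/logs# for f in *.log; do > "$f"; done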

  3. Dependency packaging plugin

    <plugin>
        <artifactId>maven-dependency-plugin</artifactId>
        <configuration>
            <!-- also copy transitive dependencies -->
            <excludeTransitive>false</excludeTransitive>
            <!-- strip version numbers from the copied jar names -->
            <stripVersion>true</stripVersion>
            <!-- copy the dependencies into ./lib -->
            <outputDirectory>./lib</outputDirectory>
        </configuration>
    </plugin>
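
With this configuration the jars are copied into ./lib by the plugin's copy-dependencies goal:

mvn dependency:copy-dependencies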

  4. Executable jar plugin

    <plugin>
        <artifactId>maven-assembly-plugin</artifactId>
        <configuration>
            <appendAssemblyId>false</appendAssemblyId>
            <descriptorRefs>
                <descriptorRef>jar-with-dependencies</descriptorRef>
            </descriptorRefs>
            <archive>
                <manifest>
                    <!-- main class written into MANIFEST.MF -->
                    <mainClass>com.xtsz.WordCount</mainClass>
                </manifest>
            </archive>
        </configuration>
        <executions>
            <execution>
                <id>make-assembly</id>
                <phase>package</phase>
                <goals>
                    <goal>assembly</goal>
                </goals>
            </execution>
        </executions>
    </plugin>

  5. java.lang.InterruptedException
    When a DFSStripedOutputStream is closed, the streamer threads are not shut down if flushing data back to the data/parity blocks fails. The same problem exists in DFSOutputStream#closeImpl, which always force-closes the threads and can therefore trigger an InterruptedException.
  6. Missing tools.jar

    <dependency>
        <groupId>jdk.tools</groupId>
        <artifactId>jdk.tools</artifactId>
        <version>1.8</version>
        <scope>system</scope>
        <systemPath>${JAVA_HOME}/lib/tools.jar</systemPath>
    </dependency>
