黑猴子的家: Integrating Hue with HDFS

1. The Cluster Environment

                  linux01    linux02    linux03
NameNode          √
DataNode          √          √          √
ResourceManager   √
NodeManager       √          √          √
JobHistoryServer  √
Zookeeper         √          √          √
HMaster           √          √
RegionServer      √          √          √

2. Configure HDFS

hdfs-site.xml



<property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
    <description>Enable WebHDFS (REST API) in Namenodes and Datanodes.</description>
</property>

<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
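With dfs.webhdfs.enabled set to true, the NameNode exposes a REST API under the /webhdfs/v1 path prefix. A minimal sketch of how such a request URL is formed (the host linux01 and the default NameNode HTTP port 50070 are assumptions for illustration):

```python
# Sketch: build a WebHDFS REST URL for an HDFS path and operation.
# Host and port are assumptions (linux01, default NameNode HTTP port 50070).
def webhdfs_url(path, op, host="linux01", port=50070):
    return "http://%s:%d/webhdfs/v1%s?op=%s" % (host, port, path, op)

print(webhdfs_url("/user/victor", "LISTSTATUS"))
# http://linux01:50070/webhdfs/v1/user/victor?op=LISTSTATUS
```

You can hit such a URL with curl to confirm WebHDFS is answering before wiring up Hue.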

core-site.xml


<property>
    <name>hadoop.proxyuser.victor.hosts</name>
    <value>*</value>
</property>

<property>
    <name>hadoop.proxyuser.victor.groups</name>
    <value>*</value>
</property>

<property>
    <name>hadoop.proxyuser.httpfs.hosts</name>
    <value>*</value>
</property>

<property>
    <name>hadoop.proxyuser.httpfs.groups</name>
    <value>*</value>
</property>

httpfs-site.xml



<property>
    <name>httpfs.proxyuser.hue.hosts</name>
    <value>*</value>
</property>

<property>
    <name>httpfs.proxyuser.hue.groups</name>
    <value>*</value>
</property>
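The proxyuser entries above allow the hue/httpfs service user to impersonate end users. On the wire this shows up as two query parameters: user.name (the authenticated service user) and doas (the user being impersonated). A sketch of the request HttpFS would see, assuming host linux01 and the default HttpFS port 14000:

```python
# Sketch: the query string HttpFS sees when Hue impersonates user "victor".
# Host, port, and user names are illustrative assumptions.
from urllib.parse import urlencode

params = urlencode({"op": "LISTSTATUS", "user.name": "hue", "doas": "victor"})
url = "http://linux01:14000/webhdfs/v1/user/victor?" + params
print(url)
# http://linux01:14000/webhdfs/v1/user/victor?op=LISTSTATUS&user.name=hue&doas=victor
```

If the proxyuser configuration is missing, such a request is rejected with an impersonation error.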



3. Sync the Configuration with scp

[root@node1 hadoop]$ scp -r etc/ root@node2:/opt/module/cdh/hadoop-2.5.0-cdh5.3.6/
[root@node1 hadoop]$ scp -r etc/ root@node3:/opt/module/cdh/hadoop-2.5.0-cdh5.3.6/

△4. Start the Hadoop HttpFS Service

[root@node1 hadoop]$ sbin/httpfs.sh start

5. Configure hue.ini

File location: /opt/module/cdh/hue-3.7.0-cdh5.3.6/desktop/conf/hue.ini



[[hdfs_clusters]]
    # HA support by using HttpFs
    [[[default]]]
      # Enter the filesystem uri
      fs_defaultfs=hdfs://hadoop102:9000 
      # fs_defaultfs=hdfs://mycluster
      ## Set this to your own NameNode port (9000 in this example)

      # NameNode logical name.
      # If HDFS HA is enabled, configure the logical name as follows:
      ## logical_name=mycluster

      # Use WebHdfs/HttpFs as the communication mechanism.
      # Domain should be the NameNode or HttpFs host.
      # Default port is 14000 for HttpFs.
      ## webhdfs_url=http://localhost:50070/webhdfs/v1
      webhdfs_url=http://hadoop102:14000/webhdfs/v1
      # Change this if your HDFS cluster is Kerberos-secured
      ## security_enabled=false

      # Default umask for file and directory creation, specified in an octal value.
      ## umask=022

      # Directory of the Hadoop configuration
      ## hadoop_conf_dir=$HADOOP_CONF_DIR when set or '/etc/hadoop/conf'
      hadoop_conf_dir=/opt/module/cdh/hadoop-2.5.0-cdh5.3.6/etc/hadoop
      hadoop_hdfs_home=/opt/module/cdh/hadoop-2.5.0-cdh5.3.6
      hadoop_bin=/opt/module/cdh/hadoop-2.5.0-cdh5.3.6/bin
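The commented-out umask=022 default above controls the permissions of files and directories Hue creates in HDFS. The effect of an octal umask can be checked with plain octal arithmetic (this is generic umask math, not Hue code):

```python
# Sketch: how an octal umask of 022 maps to effective permissions.
umask = 0o022
file_mode = 0o666 & ~umask   # files start from 666 -> 644 (rw-r--r--)
dir_mode = 0o777 & ~umask    # directories start from 777 -> 755 (rwxr-xr-x)
print(oct(file_mode), oct(dir_mode))
# 0o644 0o755
```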

6. Test

Start the Hue service:

[root@node1 hue]$ build/env/bin/supervisor

Open the Hue web UI and manage HDFS from there.

If the page reports an error saying the root directory should be owned by hdfs, edit the Python variable in the following file:
/opt/module/cdh/hue-3.7.0-cdh5.3.6/desktop/libs/hadoop/src/hadoop/fs/webhdfs.py
Change the variable to:

DEFAULT_HDFS_SUPERUSER = 'victor'

The value here is the user name (victor or ...); then restart the Hue service.

[root@node1 hue]$ build/env/bin/supervisor

When restarting the Hue service, kill the previous Hue process first. If it reports that the address is already in use, use the following command to find the process occupying port 8888, then kill it:

[root@node1 hue]$  netstat -tunlp | grep 8888
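netstat -tunlp prints the owning process in its last column as PID/program. A small sketch of extracting the PID from such a line (the sample line below is fabricated for illustration):

```python
# Sketch: extract the PID from a netstat -tunlp output line.
# The sample line is made up for illustration.
line = "tcp  0  0 0.0.0.0:8888  0.0.0.0:*  LISTEN  2764/python2.7"
pid = int(line.split()[-1].split("/")[0])
print(pid)  # 2764
```

With the PID in hand, kill that process and start the Hue supervisor again.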
