Playing with Big Data Computing: Hue

Hue version: we are using Hue 3.11.0

Installing Hue

  • Download:
    https://dl.dropboxusercontent.com/u/730827/hue/releases/3.11.0/hue-3.11.0.tgz
    and copy the archive to your installation directory; on my machine that is /Users/****/apps
  • Modify the Hadoop configuration files
    First stop the Hadoop services:
 $HADOOP_HOME/sbin/stop-all.sh

Edit core-site.xml and add the following inside the <configuration> element:

        
<property>
    <name>hadoop.proxyuser.hue.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.hue.groups</name>
    <value>*</value>
</property>

Edit hdfs-site.xml and add the following inside the <configuration> element:

  
<property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
</property>

Then restart the Hadoop services:

 $HADOOP_HOME/sbin/start-all.sh &
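As an optional check after the restart, you can confirm that WebHDFS is answering on the NameNode web port (assuming the default port 50070, which is also used later in hue.ini):

curl "http://localhost:50070/webhdfs/v1/?op=LISTSTATUS"

A JSON listing of the HDFS root directory indicates that WebHDFS is enabled.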
  • Installation
    Install the dependencies. Hue needs quite a few dependency packages, and they differ from one operating system to another; for details see https://github.com/cloudera/hue#development-prerequisites
    On macOS the dependencies are installed as follows:
brew install gmp
brew install mysql
brew install openssl
export LDFLAGS=-L/usr/local/opt/openssl/lib && export CPPFLAGS=-I/usr/local/opt/openssl/include

Extract the archive, enter the extracted directory, and build (see the build note after the commands below):

tar zxvf hue-3.11.0.tgz
cd hue-3.11.0
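The text above mentions a build/install step but does not show the command. Assuming the standard Hue source build described in the project README, it is typically run from the extracted directory:

make apps

This creates the bundled Python virtual environment under build/env/, which the later commands (build/env/bin/hue, build/env/bin/supervisor) rely on.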
  • Edit the configuration file
vim desktop/conf/hue.ini

See the official guide: http://gethue.com/how-to-configure-hue-in-your-hadoop-cluster/
Update the [desktop], [hadoop], [beeswax], and [database] sections as follows:

[desktop] section:

[desktop]

  # Set this to a random string, the longer the better.
  # This is used for secure hashing in the session store.
  secret_key=

  # Execute this script to produce the Django secret key. This will be used when
  # 'secret_key' is not set.
  ## secret_key_script=

  # Webserver listens on this address and port
  http_host=0.0.0.0
  http_port=8888

  # Time zone name
  time_zone=Asia/Chongqing

  # Enable or disable Django debug mode.
  django_debug_mode=false

  # Enable or disable database debug mode.
  ## database_logging=false

  # Whether to send debug messages from JavaScript to the server logs.
  ## send_dbug_messages=false

  # Enable or disable backtrace for server error
  http_500_debug_mode=false

  # Enable or disable memory profiling.
  ## memory_profiler=false

  # Server email for internal error messages
  ## django_server_email='[email protected]'

  # Email backend
  ## django_email_backend=django.core.mail.backends.smtp.EmailBackend

  # Webserver runs as this user
  server_user=hue
  server_group=hue

  # This should be the Hue admin and proxy user
  default_user=hue

  # This should be the hadoop cluster admin
  default_hdfs_superuser=hdfs
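Note that secret_key is left empty above, and Hue will warn about this at startup. Any sufficiently long random string works; as a sketch, one could be generated with a one-liner like the following and pasted after secret_key= in hue.ini:

python -c "import random, string; print(''.join(random.choice(string.ascii_letters + string.digits) for _ in range(48)))"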

[hadoop] section:

[hadoop]

  # Configuration for HDFS NameNode
  # ------------------------------------------------------------------------
  [[hdfs_clusters]]
    # HA support by using HttpFs

    [[[default]]]
      # Enter the filesystem uri
      fs_defaultfs=hdfs://localhost:9000

      # NameNode logical name.
      ## logical_name=

      # Use WebHdfs/HttpFs as the communication mechanism.
      # Domain should be the NameNode or HttpFs host.
      # Default port is 14000 for HttpFs.
      webhdfs_url=http://localhost:50070/webhdfs/v1

      # Change this if your HDFS cluster is Kerberos-secured
      ## security_enabled=false

      # In secure mode (HTTPS), if SSL certificates from YARN Rest APIs
      # have to be verified against certificate authority
      ## ssl_cert_ca_verify=True

      # Directory of the Hadoop configuration
      hadoop_conf_dir=$HADOOP_HOME/etc/hadoop

  # Configuration for YARN (MR2)
  # ------------------------------------------------------------------------
  [[yarn_clusters]]

    [[[default]]]
      # Enter the host on which you are running the ResourceManager
      resourcemanager_host=localhost

      # The port where the ResourceManager IPC listens on
      ## resourcemanager_port=8032

      # Whether to submit jobs to this cluster
      submit_to=True

      # Resource Manager logical name (required for HA)
      ## logical_name=

      # Change this if your YARN cluster is Kerberos-secured
      ## security_enabled=false

      # URL of the ResourceManager API
      resourcemanager_api_url=http://localhost:8088

      # URL of the ProxyServer API
      ## proxy_api_url=http://localhost:8088

      # URL of the HistoryServer API
      history_server_api_url=http://localhost:19888
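An optional sanity check that the ResourceManager REST API configured above is reachable (assuming the default port 8088):

curl http://localhost:8088/ws/v1/cluster/info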

[beeswax] section:

[beeswax]

  # Host where HiveServer2 is running.
  # If Kerberos security is enabled, use fully-qualified domain name (FQDN).
  hive_server_host=localhost

  # Port where HiveServer2 Thrift server runs on.
  hive_server_port=10000

  # Hive configuration directory, where hive-site.xml is located
  hive_conf_dir=$HIVE_HOME/conf
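Before pointing Hue at HiveServer2, it can help to confirm that the Thrift endpoint is actually up, for example with Beeline (assuming HiveServer2 is already running on the host and port configured above):

$HIVE_HOME/bin/beeline -u jdbc:hive2://localhost:10000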

[database] section:

[[database]]
    # Database engine is typically one of:
    # postgresql_psycopg2, mysql, sqlite3 or oracle.
    #
    # Note that for sqlite3, 'name', below is a path to the filename. For other backends, it is the database name
    # Note for Oracle, options={"threaded":true} must be set in order to avoid crashes.
    # Note for Oracle, you can use the Oracle Service Name by setting "host=" and "port=" and then "name=:/".
    # Note for MariaDB use the 'mysql' engine.
    engine=mysql
    host=localhost
    port=3306
    user=root
    password=hive123456
    # Execute this script to produce the database password. This will be used when 'password' is not set.
    ## password_script=/path/script
    name=hue
  • Create the Hue database
    Connect to the local MySQL server:
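A typical client invocation, using the root account and password configured in hue.ini above, would be:

mysql -u root -p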


(Figure 1: hue-1.png)

Run the following commands in order:

create database hue;
grant all privileges on hue.* to root@'%' identified by 'hive123456';
flush privileges;

Then switch to the Hue installation directory and run:

build/env/bin/hue syncdb
(Figure 2: hue-2.png)
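Depending on the exact Hue 3.x build, syncdb alone may not create every table in an external MySQL database; if the next step complains about missing tables, the Django migrations can be applied as well:

build/env/bin/hue migrate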

Finally, you can start the Hue service:

build/env/bin/supervisor  
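As an optional check that the web server came up before opening the browser:

curl -I http://127.0.0.1:8888/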
  • Open Hue
    Open http://127.0.0.1:8888/ in your browser:
(Figure 3: hue-3.png)
(Figure 4: hue-4.png)

Upcoming articles will use Hue to query Hive, HBase, Spark, and more.
