Hadoop / Hive / Spark Installation and Configuration Guide

1 Installing Hadoop

1.1 Download the hadoop-2.7.x tarball, extract it to the target directory, and edit the following files under $HADOOP_HOME/etc/hadoop:

  • hadoop-env.sh: check that JAVA_HOME and HADOOP_CONF_DIR are set correctly;
  • core-site.xml: add the following configuration:
<property>
       <name>hadoop.tmp.dir</name>
       <value>file:/data/hadoop-2.7.3/tmp</value>
</property>
<property>
       <name>fs.defaultFS</name>
       <value>hdfs://localhost:8000</value>
</property>
  • hdfs-site.xml: add the following configuration:
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.http.address</name>
        <value>0.0.0.0:50070</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/data/hadoop-2.7.3/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/data/hadoop-2.7.3/tmp/dfs/data</value>
    </property>
  • yarn-site.xml: add the following configuration:
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
  • mapred-site.xml: add the following configuration:
<property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
</property>
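For a quick sanity check that these values are actually picked up, hdfs getconf can print the effective configuration (a minimal check, not part of the original steps; adjust the install path to your own):

/data/hadoop-2.7.3/bin/hdfs getconf -confKey fs.defaultFS      # expect hdfs://localhost:8000
/data/hadoop-2.7.3/bin/hdfs getconf -confKey dfs.replication   # expect 1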

1.2 Edit /etc/profile and add the following environment settings:

export JAVA_HOME=/data/jdk1.8.0_141
export SCALA_HOME=/data/scala-2.11.8
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export HADOOP_HOME=/data/hadoop-2.7.3
export HIVE_HOME=/data/apache-hive-2.1.0
export YARN_HOME=$HADOOP_HOME
export HIVE_CONF_DIR=$HIVE_HOME/conf
export SPARK_HOME=/data/spark-2.4.0-bin-hadoop2.7
export PATH=$PATH:$JAVA_HOME/bin:$SCALA_HOME/bin:$HADOOP_HOME/bin:$HIVE_HOME/bin:$SPARK_HOME/bin
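After saving /etc/profile, reload it in the current shell and confirm the variables resolve (a quick check using only the tools installed above):

source /etc/profile
echo $HADOOP_HOME      # should print /data/hadoop-2.7.3
hadoop version         # should report Hadoop 2.7.3
java -version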

1.3 Set up passwordless SSH login for the current account

Generate a key pair with ssh-keygen, pressing Enter through every prompt;
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
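If ssh localhost still prompts for a password, directory permissions are the usual culprit on CentOS; the following tightens them and verifies the passwordless login (a common extra step, not from the original text):

chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys
ssh localhost date     # should print the date without asking for a password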

1.4 Initialize HDFS

hdfs namenode -format
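On success the output ends with a "has been successfully formatted" message for the name directory. If you ever need to reformat an existing installation, clear the data directories first or the DataNode may refuse to start with a clusterID mismatch (a common pitfall; paths as configured above):

# only when re-formatting an already-initialized install
rm -rf /data/hadoop-2.7.3/tmp/dfs/name /data/hadoop-2.7.3/tmp/dfs/data
hdfs namenode -format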

1.5 Verify the Hadoop installation

$HADOOP_HOME/sbin/start-all.sh
Watch the console output; you should see each Hadoop component start up in turn (if any errors appear, Google and fix them one by one):
Starting namenodes on [localhost]
...
Starting secondary namenodes [0.0.0.0]
...
starting yarn daemons
...
starting resourcemanager
...
localhost: starting nodemanager
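The console log alone can hide failures; jps plus a trivial HDFS write is a more reliable smoke test (assumes the ports and paths configured above):

jps                                               # expect NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager
hdfs dfs -mkdir -p /tmp/smoke && hdfs dfs -ls /   # HDFS accepts writes
# Web UIs: http://localhost:50070 (HDFS), http://localhost:8088 (YARN)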

2 Installing Hive

2.1 Download and install MySQL

sudo yum localinstall https://dev.mysql.com/get/mysql57-community-release-el7-9.noarch.rpm
sudo yum install mysql-community-server
sudo service mysqld start
grep 'A temporary password' /var/log/mysqld.log |tail -1
mysql -h localhost -u root -p${temporary password from above}
mysql> CREATE USER 'hive'@'%' IDENTIFIED BY 'yourpasswd';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;
mysql> create database hive_db;
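On MySQL 5.7 the temporary root password must be changed before any other statement is accepted, and the new password has to satisfy the default password policy. If the CREATE USER above is rejected with "You must reset your password", run the following inside the mysql session first and then repeat the statements above; the last line is a shell-side check that the hive account works (passwords here are placeholders):

mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'YourNewRootPass1!';
mysql -u hive -pyourpasswd -e 'show databases;'    # run from the shell; should list hive_db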

2.2 Download the hive-2.1.x tarball, extract it to the target directory, and edit the following files under $HIVE_HOME/conf:

  • hive-env.sh: add the following configuration:
HADOOP_HOME=/data/hadoop-2.7.3
# Hive Configuration Directory can be controlled by:
export HIVE_CONF_DIR=/data/apache-hive-2.1.0/conf
# Folder containing extra libraries required for hive compilation/execution can be controlled by:
export HIVE_AUX_JARS_PATH=/data/apache-hive-2.1.0/lib
  • hive-site.xml: modify or add the following configuration:

<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive_db?createDatabaseIfNotExist=true</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>yourpasswd</value>
</property>
<property>
    <name>hive.exec.local.scratchdir</name>
    <value>/data/apache-hive-2.1.0/iotmp</value>
    <description>Local scratch space for Hive jobs</description>
</property>
<property>
    <name>hive.downloaded.resources.dir</name>
    <value>/data/apache-hive-2.1.0/iotmp/${hive.session.id}_resources</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/data/apache-hive-2.1.0/warehouse</value>
    <description>location of default database for the warehouse</description>
</property>
<property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>/data/apache-hive-2.1.0/iotmp/operation_logs</value>
    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
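The directories referenced above are not created automatically, and because fs.defaultFS points at HDFS, hive.metastore.warehouse.dir is resolved on HDFS rather than on the local disk; a small preparation step, assuming the paths configured above:

mkdir -p /data/apache-hive-2.1.0/iotmp                     # local scratch and operation-log root
hdfs dfs -mkdir -p /data/apache-hive-2.1.0/warehouse       # warehouse directory on HDFS
hdfs dfs -chmod g+w /data/apache-hive-2.1.0/warehouse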

2.3 Initialize the Hive schema

cp mysql-connector-java-5.1.37.jar /data/apache-hive-2.1.0/lib
schematool -dbType mysql -initSchema --verbose
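If initialization succeeds, the metastore tables are created in hive_db; a quick confirmation using the MySQL account from 2.1:

mysql -u hive -pyourpasswd hive_db -e 'show tables;'   # expect metastore tables such as DBS, TBLS, VERSION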

2.4 Verify the installation

hive --service metastore &
hive -e "show databases;"
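A slightly more complete smoke test than show databases, using a throwaway database name (illustrative, not from the original text):

hive -e "create database if not exists smoke_db; show databases; drop database smoke_db;"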

3 Installing Spark

3.1 Download, extract, and configure Scala

The process is similar to installing the JDK and is not repeated here.

3.2 Edit the Spark configuration files

  • Edit spark-env.sh and add the following settings (a shorter way to derive SPARK_DIST_CLASSPATH is sketched after this list):
    JAVA_HOME=/data/jdk1.8.0_91
    SCALA_HOME=/data/scala-2.12.8
    SPARK_MASTER_HOST=localhost
    SPARK_MASTER_IP=localhost
    SPARK_MASTER_PORT=7077
    SPARK_MASTER_WEBUI_PORT=8080
    SPARK_WORKER_MEMORY=4g
    HADOOP_HOME=/data/hadoop-2.7.3
    HADOOP_CONF_DIR=/data/hadoop-2.7.3/etc/hadoop
    SPARK_DIST_CLASSPATH=/data/hadoop-2.7.3/etc/hadoop:/data/hadoop-2.7.3/share/hadoop/common/lib/*:/data/hadoop-2.7.3/share/hadoop/common/*:/data/hadoop-2.7.3/share/hadoop/hdfs:/data/hadoop-2.7.3/share/hadoop/hdfs/lib/*:/data/hadoop-2.7.3/share/hadoop/hdfs/*:/data/hadoop-2.7.3/share/hadoop/yarn/lib/*:/data/hadoop-2.7.3/share/hadoop/yarn/*:/data/hadoop-2.7.3/share/hadoop/mapreduce/lib/*:/data/hadoop-2.7.3/share/hadoop/mapreduce/*:/data/hadoop-2.7.3/contrib/capacity-scheduler/*.jar
  • Edit the slaves file:
    cp $SPARK_HOME/conf/slaves.template $SPARK_HOME/conf/slaves && echo localhost >> $SPARK_HOME/conf/slaves
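Instead of maintaining the long SPARK_DIST_CLASSPATH by hand, Spark's documentation for Hadoop-provided builds derives it from the hadoop command; an equivalent, shorter line for spark-env.sh (assumes the Hadoop install above):

export SPARK_DIST_CLASSPATH=$(/data/hadoop-2.7.3/bin/hadoop classpath)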

3.3 Start Spark and verify the installation

$SPARK_HOME/sbin/start-master.sh
$SPARK_HOME/sbin/start-slave.sh spark://localhost:7077
$SPARK_HOME/bin/spark-shell
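Besides the shell starting cleanly, the master web UI at http://localhost:8080 should list one worker, and a one-line job can be run against the standalone master (a minimal check; the Scala expression is piped into spark-shell):

echo 'println(sc.parallelize(1 to 100).sum())' | $SPARK_HOME/bin/spark-shell --master spark://localhost:7077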

4 Open issue

After installing Spark on the test server, starting spark-shell fails with the following error:

Failed to initialize compiler: object java.lang.Object in compiler mirror not found.
** Note that as of 2.8 scala does not assume use of the java classpath.
** For the old behavior pass -usejavacp to scala, or if using a Settings
** object programmatically, settings.usejavacp.value = true.

Changing the Spark, Scala, and Java versions, as well as passing -Dscala.usejavacp=true, did not resolve it; the problem does not occur on the local development machine, so it is left open for now.
Kernel version: 3.10.0-514.26.2.el7.x86_64
OS version: CentOS Linux release 7.2.1511 (Core)

