Hive and Spark configuration files in a Hadoop cluster

Environment variable configuration (append to /etc/profile)

# /etc/profile: system-wide .profile file for the Bourne shell (sh(1))
# and Bourne compatible shells (bash(1), ksh(1), ash(1), ...).
#jdk1.8.0_171
export JAVA_HOME=/opt/jdk1.8.0_171
export PATH=$JAVA_HOME/bin:$PATH
#hadoop
export HADOOP_HOME=/opt/hadoop
export PATH=$HADOOP_HOME/bin:$PATH
#spark
export SPARK_HOME=/opt/spark
export PATH=$SPARK_HOME/bin:$PATH
#hive
export HIVE_HOME=/opt/hive
export PATH=$HIVE_HOME/bin:$PATH
#scala
export SCALA_HOME=/opt/scala
export PATH=$SCALA_HOME/bin:$PATH
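Each export above prepends one bin directory, so the most recently added entry wins command lookup. A minimal sketch of how two of those lines compose PATH (paths mirror the profile above):

```shell
# Each export prepends a bin directory to PATH; the most recently
# prepended directory is searched first.
export JAVA_HOME=/opt/jdk1.8.0_171
export PATH=$JAVA_HOME/bin:$PATH
export HADOOP_HOME=/opt/hadoop
export PATH=$HADOOP_HOME/bin:$PATH
# The front of PATH now lists hadoop's bin before the JDK's:
echo "$PATH" | cut -d: -f1-2
```

After editing /etc/profile, run `source /etc/profile` (or log in again) for the variables to take effect in the current shell.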

Hive configuration

Add the following to hive-env.sh:

export JAVA_HOME=/opt/jdk1.8.0_171
export HIVE_HOME=/opt/hive

export HADOOP_HOME=/opt/hadoop

Add the following properties to hive-site.xml (inside the <configuration> element):


<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
</property>



<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
    <description>Username to use against metastore database</description>
</property>
 
<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>000000</value>
    <description>password to use against metastore database</description>
</property>

 
<property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>/hive/tmp/${system:user.name}/operation_logs</value>
    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>

 
<property>
    <name>hive.exec.scratchdir</name>
    <value>/hive/tmp</value>
    <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
</property>


<property>
    <name>hive.exec.local.scratchdir</name>
    <value>/hive/tmp/${system:user.name}</value>
    <description>Local scratch space for Hive jobs</description>
</property>

 
<property>
    <name>hive.downloaded.resources.dir</name>
    <value>/hive/tmp/${hive.session.id}_resources</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
</property>


<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
</property>


<property>
    <name>hive.querylog.location</name>
    <value>/hive/logs</value>
    <description>Location of Hive run time structured log file</description>
</property>
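The HDFS paths referenced above (hive.exec.scratchdir and hive.metastore.warehouse.dir) must exist before the first Hive job runs. A minimal sketch, assuming HDFS is already running and `hadoop` is on PATH:

```shell
# Create the Hive working directories in HDFS.
hadoop fs -mkdir -p /hive/tmp /hive/warehouse
# The scratch dir is expected to be world-writable (733), per the
# hive.exec.scratchdir description above.
hadoop fs -chmod 733 /hive/tmp
```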

Configure the slave nodes (slaver1/slaver2) to connect to Hive over Thrift by pointing them at the metastore running on master:


<property>
    <name>hive.metastore.uris</name>
    <value>thrift://master:9083</value>
    <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property>
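With hive.metastore.uris set, a quick check from a slave node (assuming the same hive-site.xml has been copied there and the metastore process is already running on master) is a metadata-only query, which goes through the Thrift service rather than opening MySQL directly:

```shell
# Run on slaver1/slaver2: resolves metadata via thrift://master:9083,
# so no MySQL driver or credentials are needed on the slave.
hive -e "show databases;"
```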

Initialize the Hive metastore schema (run from $HIVE_HOME/bin):

./schematool -dbType mysql -initSchema

Start the Hive metastore process:

 hive --service metastore &
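Started this way, the metastore dies with the login shell. A common variant (a sketch; the log path is an assumption) keeps it alive with nohup and then confirms the Thrift port from hive.metastore.uris is listening:

```shell
# Keep the metastore running after logout; log location is arbitrary.
nohup hive --service metastore > /opt/hive/metastore.log 2>&1 &
sleep 10
# 9083 is the port configured in hive.metastore.uris above.
netstat -tln | grep 9083
```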

Spark setup

Add the following to spark-env.sh:

#jdk1.8.0_171
export JAVA_HOME=/opt/jdk1.8.0_171
#hadoop
export HADOOP_HOME=/opt/hadoop
#spark
export SPARK_HOME=/opt/spark
#scala
export SCALA_HOME=/opt/scala

List the worker nodes in conf/slaves (master is listed too, so it also runs a Worker):

master
slaver1
slaver2

Start Spark:

cd /opt/spark/sbin/

./start-all.sh
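To confirm the cluster came up, jps should show a Master on master (plus a Worker, since master is also listed in slaves) and a Worker on each slave node; the standalone Master web UI listens on port 8080 of master by default:

```shell
jps               # on master: expect Master and Worker
ssh slaver1 jps   # expect Worker
ssh slaver2 jps   # expect Worker
```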
