HiveServer2 connection fails with the following error:
Error: Could not open client transport with JDBC Uri: jdbc:hive2://hadoop01:10000: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
1. Check whether the hiveserver2 service is running
[root@hadoop01 ~]# jps
5101 RunJar # running normally
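If jps shows no RunJar process, hiveserver2 is not running. A minimal sketch of starting it and confirming it listens on port 10000 (assumes $HIVE_HOME/bin is on your PATH; the port can take a minute to come up):
[root@hadoop01 ~]# nohup hive --service hiveserver2 > /tmp/hiveserver2.log 2>&1 &
[root@hadoop01 ~]# netstat -nltp | grep 10000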
2. Check whether Hadoop safe mode is off
[root@hadoop01 ~]# hdfs dfsadmin -safemode get
Safe mode is OFF # this is what you want
If it says Safe mode is ON, see https://www.cnblogs.com/-xiaoyu-/p/11399287.html for how to handle it.
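If you only need to get out of safe mode quickly (e.g. after an unclean shutdown), you can force it off; this is just a sketch, and you should first make sure no blocks are actually missing:
[root@hadoop01 ~]# hdfs dfsadmin -safemode leave
[root@hadoop01 ~]# hdfs dfsadmin -safemode get
Safe mode is OFF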
3. Open http://hadoop01:50070/ in a browser to check whether the Hadoop cluster started correctly
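If you prefer the command line to a browser, a rough equivalent check (sketch; assumes the default Hadoop 2.x NameNode web port 50070):
[root@hadoop01 ~]# curl -sI http://hadoop01:50070/ | head -n 1   # expect HTTP/1.1 200 OK
[root@hadoop01 ~]# hdfs dfsadmin -report | head -n 20            # live datanodes should be greater than 0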
4. Check whether the MySQL service is running
[root@hadoop01 ~]# service mysqld status
Redirecting to /bin/systemctl status mysqld.service
● mysqld.service - MySQL 8.0 database server
Loaded: loaded (/usr/lib/systemd/system/mysqld.service; disabled; vendor preset: disabled)
Active: active (running) since Sun 2020-01-05 23:30:18 CST; 8min ago
Process: 5463 ExecStartPost=/usr/libexec/mysql-check-upgrade (code=exited, status=0/SUCCESS)
Process: 5381 ExecStartPre=/usr/libexec/mysql-prepare-db-dir mysqld.service (code=exited, status=0/SUCCESS)
Process: 5357 ExecStartPre=/usr/libexec/mysql-check-socket (code=exited, status=0/SUCCESS)
Main PID: 5418 (mysqld)
Status: "Server is operational"
Tasks: 46 (limit: 17813)
Memory: 512.5M
CGroup: /system.slice/mysqld.service
└─5418 /usr/libexec/mysqld --basedir=/usr
Jan 05 23:29:55 hadoop01 systemd[1]: Starting MySQL 8.0 database server...
Jan 05 23:30:18 hadoop01 systemd[1]: Started MySQL 8.0 database server.
Active: active (running) since Sun 2020-01-05 23:30:18 CST; 8min ago means MySQL started normally.
If it is not running, start it with: service mysqld start
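On a systemd machine (as in the output above) you can also start it with systemctl and have it come up at boot; a sketch:
[root@hadoop01 ~]# systemctl start mysqld
[root@hadoop01 ~]# systemctl enable mysqld   # optional: start automatically after a reboot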
IMPORTANT:
Be sure to connect to the MySQL server with a local MySQL client tool and verify that the connection actually works!!! (This is only a check.)
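If you have no GUI tool handy, the same check can be done with the mysql command-line client from another machine (a sketch; assumes the client is installed there and port 3306 is open in the firewall):
mysql -h hadoop01 -P 3306 -uroot -p -e "select version();"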
If you cannot connect, do the following:
Configure MySQL so that the root user + password can log in from any host.
1. Enter mysql
[root@hadoop102 mysql-libs]# mysql -uroot -p000000
2. Show the databases
mysql>show databases;
3. Use the mysql database
mysql>use mysql;
4. Show all tables in the mysql database
mysql>show tables;
5. Show the structure of the user table
mysql>desc user;
6. Query the user table
mysql>select User, Host, Password from user;
(On MySQL 8.0 there is no Password column; use: select User, Host, authentication_string from user;)
7. Modify the user table, changing the Host column to %
mysql>update user set host='%' where host='localhost';
8. Delete the other host entries for the root user
mysql>delete from user where Host='hadoop102';
mysql>delete from user where Host='127.0.0.1';
mysql>delete from user where Host='::1';
9. Flush privileges
mysql>flush privileges;
10. Quit
mysql>quit;
Check whether the MySQL JDBC driver from mysql-connector-java-5.1.27.tar.gz has already been placed under /root/servers/hive-apache-2.3.6/lib
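If the JAR is not there yet, unpack the archive and copy it in; a sketch, assuming the tarball sits in the current directory (in the 5.1.x archives the driver JAR is named mysql-connector-java-5.1.27-bin.jar):
[root@hadoop01 ~]# tar -zxvf mysql-connector-java-5.1.27.tar.gz
[root@hadoop01 ~]# cp mysql-connector-java-5.1.27/mysql-connector-java-5.1.27-bin.jar /root/servers/hive-apache-2.3.6/lib/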
jdbc:mysql://hadoop01:3306/hive?createDatabaseIfNotExist=true
# Check whether the database named in the URL above (hive) exists in MySQL; if MySQL does not have it, see step 7
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| hive |
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
5 rows in set (0.01 sec)
The hive after 3306 is the metastore database; you can name it yourself, for example:
jdbc:mysql://hadoop01:3306/metastore?createDatabaseIfNotExist=true
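Whatever name you pick, it has to exist (or be creatable) in MySQL. A quick check and create from the shell, as a sketch:
[root@hadoop01 ~]# mysql -uroot -p -e "show databases like 'hive';"
[root@hadoop01 ~]# mysql -uroot -p -e "create database if not exists hive;"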
5. Check whether the Hadoop configuration file core-site.xml contains the following entries ("root" here is the current Linux user; mine is root):
<property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
</property>
If your Linux user has a different name, e.g. xiaoyu, then configure:
<property>
    <name>hadoop.proxyuser.xiaoyu.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.xiaoyu.groups</name>
    <value>*</value>
</property>
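After editing core-site.xml the proxy-user settings have to be reloaded. A sketch of the two usual options:
# option 1: restart HDFS so the new core-site.xml is picked up
[root@hadoop01 ~]# stop-dfs.sh && start-dfs.sh
# option 2: refresh without a full restart
[root@hadoop01 ~]# hdfs dfsadmin -refreshSuperUserGroupsConfiguration
[root@hadoop01 ~]# yarn rmadmin -refreshSuperUserGroupsConfiguration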
6. Other issues
HDFS file permission problems: add the following to hdfs-site.xml
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
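Instead of disabling permissions cluster-wide, you can also just open up the directories Hive needs; a sketch using common default locations (adjust to your own hive.metastore.warehouse.dir):
[root@hadoop01 ~]# hdfs dfs -chmod -R 777 /tmp
[root@hadoop01 ~]# hdfs dfs -chmod -R 777 /user/hive/warehouse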
7. If you get org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version, initialize the metastore schema:
schematool -dbType mysql -initSchema
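After the init you can confirm the schema is in place with schematool's -info option (a sketch; assumes $HIVE_HOME/bin is on your PATH):
[root@hadoop01 ~]# schematool -dbType mysql -info   # should print the metastore schema version without errors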
8. One last thing: do not download the wrong package.
Apache Hive 2.3.6 download address:
http://mirror.bit.edu.cn/apache/hive/hive-2.3.6/
Index of /apache/hive/hive-2.3.6
apache-hive-2.3.6-bin.tar.gz   23-Aug-2019 02:53   221M   (download this one)
apache-hive-2.3.6-src.tar.gz   23-Aug-2019 02:53   20M    (source code, not this one)
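A sketch of downloading and unpacking the binary package from that mirror:
[root@hadoop01 ~]# wget http://mirror.bit.edu.cn/apache/hive/hive-2.3.6/apache-hive-2.3.6-bin.tar.gz
[root@hadoop01 ~]# tar -zxvf apache-hive-2.3.6-bin.tar.gz -C /root/servers/
# the layout used in this post is /root/servers/hive-apache-2.3.6, so rename the extracted directory to match your own paths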
9. Important
You have checked everything and it still fails!!!
Use jps to check the processes running on every machine, shut them all down, reboot the machines, and then, in this order (see the command sketch below):
start ZooKeeper (if you have it)
start the Hadoop cluster
start the MySQL service
start hiveserver2
connect with beeline
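Put into commands, the restart order looks roughly like this (a sketch; assumes the ZooKeeper and Hadoop bin/sbin directories are on PATH, hiveserver2 runs on hadoop01, and the ResourceManager runs on hadoop02 as in the yarn-site.xml below):
[root@hadoop01 ~]# zkServer.sh start                 # on each ZooKeeper node, if you use ZooKeeper
[root@hadoop01 ~]# start-dfs.sh
[root@hadoop02 ~]# start-yarn.sh
[root@hadoop01 ~]# systemctl start mysqld
[root@hadoop01 ~]# nohup hive --service hiveserver2 > /tmp/hiveserver2.log 2>&1 &
[root@hadoop01 ~]# beeline -u jdbc:hive2://hadoop01:10000 -n root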
The configuration files are listed below for reference only; your own configuration takes precedence.
hive-site.xml
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://hadoop01:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>12345678</value>
</property>
<property>
    <name>hive.cli.print.current.db</name>
    <value>true</value>
</property>
<property>
    <name>hive.cli.print.header</name>
    <value>true</value>
</property>
<property>
    <name>hive.server2.thrift.bind.host</name>
    <value>hadoop01</value>
</property>
<property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
</property>
<property>
    <name>datanucleus.schema.autoCreateAll</name>
    <value>true</value>
</property>
core-site.xml
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop01:9000</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/root/servers/hadoop-2.8.5/data/tmp</value>
</property>
<property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
</property>
hdfs-site.xml
<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop03:50090</value>
</property>
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
mapred-site.xml
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop03:10020</value>
</property>
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop03:19888</value>
</property>
yarn-site.xml
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop02</value>
</property>
<property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
</property>
<property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
</property>
Original source: https://www.cnblogs.com/-xiaoyu-/p/12158984.html