Problems encountered when importing data from Phoenix and MySQL into HDFS and Hive with Sqoop

[root@node1 usr]# bin/sqoop import --connect 'jdbc:mysql://172.16.0.13:16045/active_user_stats?useUnicode=true&characterEncoding=utf-8&useSSL=FALSE&serverTimezone=GMT%2B8&convertToNull=CONVERT_TO_NULL&allowMultiQueries=true' --username 'etl01' -P 'sPg)[zq(cH2kj7X{' --table DIM_TIME --target-dir /staging/jzproduct/data_backups/mysql_backups/active_user_stats/dim_time7 -m 1 --columns "timekey,start_hour,end_hour,start_min,end_min,day_start_min,day_end_min,hour_dur" --direct
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.0.0-1634/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.0.0-1634/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.0.0-1634/accumulo/lib/slf4j-log4j12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
20/05/29 17:22:49 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7.3.0.0.0-1634
Enter password: 
20/05/29 17:23:06 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
20/05/29 17:23:06 INFO tool.CodeGenTool: Beginning code generation
Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
20/05/29 17:23:07 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `DIM_TIME` AS t LIMIT 1
20/05/29 17:23:07 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `DIM_TIME` AS t LIMIT 1
20/05/29 17:23:07 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/hdp/3.0.0.0-1634/hadoop-mapreduce
20/05/29 17:23:09 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/e4e8eaee1ccb7409fa454fee2645646b/DIM_TIME.jar
20/05/29 17:23:09 WARN manager.DirectMySQLManager: Direct-mode import from MySQL does not support column
20/05/29 17:23:09 WARN manager.DirectMySQLManager: selection. Falling back to JDBC-based import.
20/05/29 17:23:09 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
20/05/29 17:23:09 INFO mapreduce.ImportJobBase: Beginning import of DIM_TIME
20/05/29 17:23:11 INFO client.RMProxy: Connecting to ResourceManager at node1.etonedu.cn/192.168.2.151:8050
20/05/29 17:23:11 INFO client.AHSProxy: Connecting to Application History server at dn-node01.etonedu.cn/192.168.2.161:10200
20/05/29 17:23:12 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /user/root/.staging/job_1589357961712_0675
20/05/29 17:23:16 INFO db.DBInputFormat: Using read commited transaction isolation
20/05/29 17:23:16 INFO mapreduce.JobSubmitter: number of splits:1
20/05/29 17:23:17 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1589357961712_0675
20/05/29 17:23:17 INFO mapreduce.JobSubmitter: Executing with tokens: []
20/05/29 17:23:17 INFO conf.Configuration: found resource resource-types.xml at file:/etc/hadoop/3.0.0.0-1634/0/resource-types.xml
20/05/29 17:23:17 INFO impl.YarnClientImpl: Submitted application application_1589357961712_0675
20/05/29 17:23:17 INFO mapreduce.Job: The url to track the job: http://node1.etonedu.cn:8088/proxy/application_1589357961712_0675/
20/05/29 17:23:17 INFO mapreduce.Job: Running job: job_1589357961712_0675
20/05/29 17:23:25 INFO mapreduce.Job: Job job_1589357961712_0675 running in uber mode : false
20/05/29 17:23:25 INFO mapreduce.Job:  map 0% reduce 0%
20/05/29 17:25:46 INFO mapreduce.Job: Task Id : attempt_1589357961712_0675_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: java.lang.RuntimeException: com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure

The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
    at org.apache.sqoop.mapreduce.db.DBInputFormat.setDbConf(DBInputFormat.java:167)
    at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:158)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:77)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:137)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
Caused by: java.lang.RuntimeException: com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure

The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
    at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:220)
    at org.apache.sqoop.mapreduce.db.DBInputFormat.setDbConf(DBInputFormat.java:165)
    ... 10 more
Caused by: com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure

The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
    at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174)
    at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64)
    at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:835)
    at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:455)
    at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:240)
    at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:199)
    at java.sql.DriverManager.getConnection(DriverManager.java:664)
    at java.sql.DriverManager.getConnection(DriverManager.java:247)
    at org.apache.sqoop.mapreduce.db.DBConfiguration.getConnection(DBConfiguration.java:300)
    at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:213)
    ... 11 more
Caused by: com.mysql.cj.exceptions.CJCommunicationsException: Communications link failure

The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:61)
    at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:105)
    at com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:151)
    at com.mysql.cj.exceptions.ExceptionFactory.createCommunicationsException(ExceptionFactory.java:167)
    at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:91)
    at com.mysql.cj.NativeSession.connect(NativeSession.java:152)
    at com.mysql.cj.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:955)
    at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:825)
    ... 18 more
Caused by: java.net.ConnectException: Connection timed out (Connection timed out)
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at com.mysql.cj.protocol.StandardSocketFactory.connect(StandardSocketFactory.java:155)
    at com.mysql.cj.protocol.a.NativeSocketConnection.connect(NativeSocketConnection.java:65)
    ... 21 more

Cause: Sqoop runs the import as a MapReduce job, so the map tasks are distributed to YARN worker nodes. Routing to the MySQL host was not enabled on some of those cluster nodes, so the tasks could not open a database connection and timed out.
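
A quick way to narrow this down is to test TCP connectivity to the MySQL server from every worker node. A minimal sketch, assuming passwordless SSH and using the hostnames that appear in the logs above (adjust the list and the address for your cluster):

MYSQL_HOST=172.16.0.13; MYSQL_PORT=16045
for node in node1.etonedu.cn dn-node01.etonedu.cn dn-node02.etonedu.cn dn-node03.etonedu.cn dn-node04.etonedu.cn; do
    # open a raw TCP connection via bash's /dev/tcp; give up after 5 seconds
    ssh "$node" "timeout 5 bash -c 'exec 3<>/dev/tcp/$MYSQL_HOST/$MYSQL_PORT'" \
        && echo "$node: OK" \
        || echo "$node: cannot reach $MYSQL_HOST:$MYSQL_PORT (check routing/firewall)"
done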

======================================================================================================

  2. Importing data from Phoenix into Hive with Sqoop; the command hangs at this point:

     jdbc:hive2://dn-node01.etonedu.cn:2181,dn-node02.etonedu.cn:2181,node1.etonedu.cn:2181,dn-node03.etonedu.cn:2181,dn-node04.etonedu.cn:2181/default;password=hive;serviceDiscoveryMode=zooKeeper;user=hive;zooKeeperNamespace=hiveserver2
    

    Because of a user/credentials issue: the data has already been saved to HDFS by Sqoop, but when it is loaded into Hive, beeline needs a user name and password to open the Hive connection, so the job hangs on this statement.

Solution: in the Hive configuration directory ($HIVE_CONF_DIR), create a beeline-hs2-connection.xml file and add:

<configuration>
  <property>
    <name>beeline.hs2.connection.user</name>
    <value>hive</value>
  </property>
  <property>
    <name>beeline.hs2.connection.password</name>
    <value>hive</value>
  </property>
</configuration>
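
With this file in place, beeline no longer prompts for credentials, so Sqoop's Hive load step can proceed. A quick check on the client that runs Sqoop (beeline builds the URL from hive-site.xml and takes user/password from beeline-hs2-connection.xml):

beeline
# expected: it should auto-connect ("Connecting to jdbc:hive2://...") without asking for a user name or password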


=============================================================================================
3. Exception:

20/06/02 18:21:16 INFO mapreduce.Job: Task Id : attempt_1589357961712_0993_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: java.lang.RuntimeException: java.sql.SQLException: ERROR 726 (43M10):  Inconsistent namespace mapping properties. Cannot initiate connection as SYSTEM:CATALOG is found but client does not have phoenix.schema.isNamespaceMappingEnabled enabled
    at org.apache.sqoop.mapreduce.db.DBInputFormat.setDbConf(DBInputFormat.java:167)
    at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:158)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:77)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:137)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
Caused by: java.lang.RuntimeException: java.sql.SQLException: ERROR 726 (43M10):  Inconsistent namespace mapping properties. Cannot initiate connection as SYSTEM:CATALOG is found but client does not have phoenix.schema.isNamespaceMappingEnabled enabled
    at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:220)
    at org.apache.sqoop.mapreduce.db.DBInputFormat.setDbConf(DBInputFormat.java:165)
    ... 10 more
Caused by: java.sql.SQLException: ERROR 726 (43M10):  Inconsistent namespace mapping properties. Cannot initiate connection as SYSTEM:CATALOG is found but client does not have phoenix.schema.isNamespaceMappingEnabled enabled
    at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
    at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1113)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1501)
    at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2740)
    at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1114)
    at org.apache.phoenix.compile.CreateTableCompiler$1.execute(CreateTableCompiler.java:192)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1806)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2569)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2532)
    at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2532)
    at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
    at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
    at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
    at java.sql.DriverManager.getConnection(DriverManager.java:664)
    at java.sql.DriverManager.getConnection(DriverManager.java:270)
    at org.apache.sqoop.mapreduce.db.DBConfiguration.getConnection(DBConfiguration.java:298)
    at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:213)
    ... 11 more

Cause:
The local client environment and the AM / map-task environment are configured differently: the Phoenix connection works locally, but it fails in the AM because the tasks do not see the client-side namespace-mapping configuration.
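
One way to confirm the mismatch on the Sqoop client (a sketch; the hbase-site.xml path is the HDP default used in the command below and may differ on your cluster):

grep -A 1 phoenix.schema.isNamespaceMappingEnabled /usr/hdp/current/hbase-client/conf/hbase-site.xml
# the value printed here must match the server-side setting (true, since SYSTEM:CATALOG already exists)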

Fix:

  1. Put the Phoenix client jar (phoenix-client.jar) into Sqoop's lib directory (a copy command is sketched after the import example below).

  2. Point the job at hbase-site.xml by shipping it with -files, so the map tasks pick up the namespace-mapping setting:

    sqoop import -files /usr/hdp/current/hbase-client/conf/hbase-site.xml  --driver org.apache.phoenix.jdbc.PhoenixDriver --connect jdbc:phoenix:192.168.2.151:2181 --query "select * from WAREHOUSE.DIM_TIME where TIMEKEY is not null and \$CONDITIONS" --target-dir /staging/jzproduct/data_backups/mysql_backups/active_user_stats/dim_time23 -m 2 --columns "timekey,start_hour,end_hour,start_min,end_min,day_start_min,day_end_min,hour_dur" --direct --split-by TIMEKEY
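
    For step 1, the copy might look like this (the paths are typical HDP locations, assumed rather than taken from the original; adjust to your install):

    cp /usr/hdp/current/phoenix-client/phoenix-client.jar /usr/hdp/current/sqoop-client/lib/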
    

=================================
4. Exception:

ERROR tool.ImportTool: Import failed: java.io.IOException: Generating splits for a textual index column allowed only in case of "-Dorg.apache.sqoop.splitter.allow_text_splitter=true" property passed as a parameter

Solution:
Add -Dorg.apache.sqoop.splitter.allow_text_splitter=true immediately after sqoop import (generic -D options must come before the tool-specific arguments).
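
Applied to the Phoenix import from item 3, for example (a sketch, assuming the textual split column is TIMEKEY as in that command):

    sqoop import -Dorg.apache.sqoop.splitter.allow_text_splitter=true \
      -files /usr/hdp/current/hbase-client/conf/hbase-site.xml \
      --driver org.apache.phoenix.jdbc.PhoenixDriver \
      --connect jdbc:phoenix:192.168.2.151:2181 \
      --query "select * from WAREHOUSE.DIM_TIME where TIMEKEY is not null and \$CONDITIONS" \
      --target-dir /staging/jzproduct/data_backups/mysql_backups/active_user_stats/dim_time23 \
      --split-by TIMEKEY -m 2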

5. Exception:

20/06/17 17:04:09 INFO mapreduce.Job: Job job_1589357961712_2651 failed with state FAILED due to: Job setup failed : org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/":hdfs:hdfs:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:399)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:255)
    at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkDefaultEnforcer(RangerHdfsAuthorizer.java:512)
    at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:279)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:193)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1850)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1834)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1793)
    at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:59)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3135)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1126)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:707)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2411)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2385)
    at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1321)
    at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1318)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1335)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1310)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2326)
    at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.setupJob(FileOutputCommitter.java:354)
    at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobSetup(CommitterEventHandler.java:255)
    at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:235)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=WRITE, inode="/":hdfs:hdfs:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:399)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:255)
    at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkDefaultEnforcer(RangerHdfsAuthorizer.java:512)
    at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:279)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:193)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1850)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1834)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1793)
    at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:59)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3135)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1126)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:707)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)

    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1499)
    at org.apache.hadoop.ipc.Client.call(Client.java:1445)
    at org.apache.hadoop.ipc.Client.call(Client.java:1355)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
    at com.sun.proxy.$Proxy10.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:653)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy11.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2409)
    ... 13 more

Solution:
Switch users so that the user running the command has write permission on HDFS (here the job runs as root, but / is owned by hdfs:hdfs):
su -l hdfs
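
Alternatively (a sketch, an assumption rather than what was done here): stay as root but pre-create the target path as the hdfs superuser and hand ownership to root, reusing the --target-dir from the earlier imports:

su -l hdfs -c "hdfs dfs -mkdir -p /staging/jzproduct/data_backups/mysql_backups/active_user_stats"
su -l hdfs -c "hdfs dfs -chown -R root:hdfs /staging/jzproduct/data_backups"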
