1. Download and extract
wget https://archive.cloudera.com/cdh5/cdh/5/sqoop-1.4.6-cdh5.7.0.tar.gz
tar -zxvf sqoop-1.4.6-cdh5.7.0.tar.gz -C ~/app/
Add the Sqoop path to the environment variables and check ~/.bash_profile:
cat ~/.bash_profile
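The appended entries would look roughly like the following (a sketch assuming Sqoop was extracted to ~/app as above; SQOOP_HOME is an assumed variable name, adjust the path to your layout):
export SQOOP_HOME=/home/hadoop/app/sqoop-1.4.6-cdh5.7.0
export PATH=$SQOOP_HOME/bin:$PATH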
Make the environment variables take effect:
source ~/.bash_profile
2. Edit the configuration file
Like most big data frameworks, Sqoop keeps its configuration files in the conf directory under the Sqoop root directory.
1) Rename the configuration file
$ mv sqoop-env-template.sh sqoop-env.sh
2) Edit the configuration file sqoop-env.sh
export HADOOP_COMMON_HOME=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0
export HADOOP_MAPRED_HOME=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0
export HBASE_HOME=/home/hadoop/app/hbase-1.2.0-cdh5.7.0
export HIVE_HOME=/home/hadoop/app/hive-1.1.0-cdh5.7.0
export ZOOCFGDIR=/home/hadoop/app/zookeeper-3.4.5-cdh5.7.0
export ZOOKEEPER_HOME=/home/hadoop/app/zookeeper-3.4.5-cdh5.7.0
3. Copy the JDBC driver
Copy the JDBC driver into Sqoop's lib directory, for example:
cp /home/hadoop/software/mysql-connector-java-5.1.27-bin.jar /home/hadoop/app/sqoop-1.4.6-cdh5.7.0/lib
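To confirm the driver landed in the right place, you can list Sqoop's lib directory afterwards (a quick check, assuming the paths above):
ls /home/hadoop/app/sqoop-1.4.6-cdh5.7.0/lib | grep mysql-connector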
4. Verify Sqoop
We can verify that the Sqoop configuration is correct by running any Sqoop command, for example:
$ bin/sqoop help
Some warnings appear (the warning messages are omitted here), followed by the help output:
Available commands:
codegen Generate code to interact with database records
create-hive-table Import a table definition into Hive
eval Evaluate a SQL statement and display the results
export Export an HDFS directory to a database table
help List available commands
import Import a table from a database to HDFS
import-all-tables Import tables from a database to HDFS
import-mainframe Import datasets from a mainframe server to HDFS
job Work with saved jobs
list-databases List available databases on a server
list-tables List available tables in a database
merge Merge results of incremental imports
metastore Run a standalone Sqoop metastore
version Display version information
5. Test whether Sqoop can connect to the database
sqoop list-databases --connect jdbc:mysql://hadoop000:3306/ --username root --password root
The following output appears:
information_schema
metastore
mysql
oozie
performance_schema
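As a further check, you could list the tables in one specific database (a sketch using the mysql database shown in the output above; any of the listed databases would do):
sqoop list-tables --connect jdbc:mysql://hadoop000:3306/mysql --username root --password root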
Before using Sqoop, start both the HDFS and YARN services to avoid unexpected problems.
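If the cluster is not already running, a minimal sketch of starting both services, assuming the Hadoop installation path configured in sqoop-env.sh above:
/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/sbin/start-dfs.sh   # start HDFS (NameNode, DataNode, SecondaryNameNode)
/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/sbin/start-yarn.sh  # start YARN (ResourceManager, NodeManager)
jps   # confirm the daemons are running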