This article explains how to import MySQL data into Hive with Sqoop. The walkthrough is short and example-driven, covering two approaches: importing to HDFS and then loading into a pre-created Hive table, and importing straight into Hive in a single step.
The source MySQL table:
mysql> desc t3;
+----------------+------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+----------------+------------+------+-----+---------+-------+
| ISVALID | int(11) | YES | MUL | NULL | |
| CREATETIME | datetime | YES | | NULL | |
| UPDATETIME | datetime | YES | | NULL | |
| CONC_UNI_CODE | bigint(20) | YES | | NULL | |
| COM_UNI_CODE | bigint(20) | YES | | NULL | |
| FUND_INFW_REL | double | YES | | NULL | |
| MARK_MANI_REL | double | YES | | NULL | |
| STOCK_FREQ_REL | double | YES | | NULL | |
| STOCK_CONC_REL | double | YES | | NULL | |
+----------------+------------+------+-----+---------+-------+
9 rows in set (0.01 sec)
mysql>
Create the matching table in Hive yourself (MySQL `int(11)` maps to Hive `int`, `datetime` to `TIMESTAMP`, `bigint(20)` to `bigint`):
hive> create table tt1(
ISVALID int,
CREATETIME TIMESTAMP,
UPDATETIME TIMESTAMP,
CONC_UNI_CODE bigint,
COM_UNI_CODE bigint,
FUND_INFW_REL double,
MARK_MANI_REL double,
STOCK_FREQ_REL double,
STOCK_CONC_REL double)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' ;
hive>
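The `'\t'` in the DDL matters: Sqoop must write its output with the same field delimiter, or every value lands in the first Hive column as one long string. A minimal local sketch of how Hive's default SerDe will split a row (the file path is illustrative; the sample values are the first row from the import below):

```shell
# Write one row exactly as Sqoop emits it: 9 tab-separated fields, newline-terminated.
printf '0\t2015-06-12 10:00:04\t2016-07-28 18:00:16\t5001000008\t3000001022\t80.0\t90.0\t70.0\t85.0\n' > /tmp/row.tsv
# Hive splits each line on the delimiter declared in the DDL ('\t' here);
# awk with the same separator should therefore see exactly 9 fields, one per column.
awk -F'\t' '{ print NF }' /tmp/row.tsv
```

If the count printed here did not match the column count in the DDL, Hive would silently fill the missing trailing columns with NULL.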
1. Import MySQL data into Hive (with the Hive table created in advance)
(1) Import the data into HDFS. With a single mapper (-m 1) no --split-by column is needed, and --hive-table has no effect without --hive-import, so neither flag is passed here:
[hdfs@jingong01 ~]$ sqoop import --connect jdbc:mysql://172.16.8.93:3306/db_stktag --username wangying --password wangying --table t3 --target-dir /user/tong/123 -m 1 --direct --fields-terminated-by '\t'
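The long one-liner can also be kept in a Sqoop options file (read with Sqoop's `--options-file` flag, one argument per line), which keeps the password off the command line. A sketch using the same connection details as above; adjust host, database, and paths for your environment:

```shell
# Write the import arguments to an options file: one flag or value per line.
cat > import-t3.opts <<'EOF'
import
--connect
jdbc:mysql://172.16.8.93:3306/db_stktag
--username
wangying
--table
t3
--target-dir
/user/tong/123
-m
1
--fields-terminated-by
\t
EOF
# Run it with:  sqoop --options-file import-t3.opts -P
# (-P prompts for the password interactively instead of embedding it in the file or command)
grep -c '^--' import-t3.opts   # sanity check: the file holds five long-form flags
```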
(2) Load the data from HDFS into the Hive table:
hive> load data inpath '/user/tong/123' into table tt1;
hive> select * from tt1 limit 2;
OK
0 2015-06-12 10:00:04 2016-07-28 18:00:16 5001000008 3000001022 80.0 90.0 70.0 85.0
0 2015-06-12 10:00:04 2015-12-22 15:18:25 5001000008 3000078316 30.0 80.0 70.0 64.0
Time taken: 0.089 seconds, Fetched: 2 row(s)
hive>
2. Import from MySQL directly into Hive, with no separate LOAD DATA step
[hdfs@jingong01 ~]$ cat test.sql
create table test(
ISVALID int,
CREATETIME TIMESTAMP,
UPDATETIME TIMESTAMP,
CONC_UNI_CODE bigint,
COM_UNI_CODE bigint,
FUND_INFW_REL double,
MARK_MANI_REL double,
STOCK_FREQ_REL double,
STOCK_CONC_REL double)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
[hdfs@jingong01 ~]$ hive -f test.sql    # create the table
Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hive/lib/hive-common-1.1.0-cdh5.13.0.jar!/hive-log4j.properties
OK
Time taken: 6.709 seconds
[hdfs@jingong01 ~]$ sqoop import --connect jdbc:mysql://172.16.8.93:3306/db_stktag --username wangying --password wangying --table t3 --delete-target-dir -m 1 --hive-import --hive-table test --fields-terminated-by '\t'    # import the data
......
19/01/30 15:35:38 INFO hive.HiveImport: OK
19/01/30 15:35:38 INFO hive.HiveImport: Time taken: 6.207 seconds
19/01/30 15:35:38 INFO hive.HiveImport: Loading data to table default.test
19/01/30 15:35:38 INFO hive.HiveImport: Table default.test stats: [numFiles=1, totalSize=3571294]
19/01/30 15:35:38 INFO hive.HiveImport: OK
19/01/30 15:35:38 INFO hive.HiveImport: Time taken: 0.615 seconds
19/01/30 15:35:38 INFO hive.HiveImport: WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
19/01/30 15:35:38 INFO hive.HiveImport: WARN: Please see http://www.slf4j.org/codes.html#release for an explanation.
19/01/30 15:35:39 INFO hive.HiveImport: Hive import complete.
19/01/30 15:35:39 INFO hive.HiveImport: Export directory is contains the _SUCCESS file only, removing the directory.
[hdfs@jingong01 ~]$ hive
Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.13.0-1.cdh5.13.0.p0.29/lib/hive/lib/hive-common-1.1.0-cdh5.13.0.jar!/hive-log4j.properties
hive> select * from test limit 2;
OK
0 2015-06-12 10:00:04 2016-07-28 18:00:16 5001000008 3000001022 80.0 90.0 70.0 85.0
0 2015-06-12 10:00:04 2015-12-22 15:18:25 5001000008 3000078316 30.0 80.0 70.0 64.0
Time taken: 0.058 seconds, Fetched: 2 row(s)
hive>
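Whichever path you use, it is worth comparing row counts between the source and target tables afterwards. The cluster-side commands below use the article's host and credentials; here placeholder counts stand in for their output so the comparison logic itself is runnable:

```shell
# On the cluster you would capture the two counts like this:
#   mysql -N -h172.16.8.93 -uwangying -p -e 'SELECT COUNT(*) FROM db_stktag.t3;' > mysql_count.txt
#   hive -e 'SELECT COUNT(*) FROM default.test;' 2>/dev/null > hive_count.txt
# Placeholder values (hypothetical) stand in here:
echo 40210 > mysql_count.txt
echo 40210 > hive_count.txt
if [ "$(cat mysql_count.txt)" = "$(cat hive_count.txt)" ]; then
    echo "row counts match"
else
    echo "ROW COUNT MISMATCH"
fi
```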
That covers both ways of importing MySQL data into Hive with Sqoop. The two paths end with the same rows queryable in Hive; which fits best depends on your workflow, so verify the details against your own environment.