Sqoop Installation and Basic Usage
Published: 2019-06-09


I. Sqoop Installation

1. Unpack the source tarball.
2. Configure the environment variables.
3. In bin/configure-sqoop, comment out the checks that print errors on startup.
4. In conf/sqoop-env.sh, set the Hadoop and Hive home directories.
5. Copy the dependency jars into the lib directory: mysql-connector-java-5.1.46-bin.jar and hadoop-common-2.7.2.jar (from share/hadoop/common/ in the Hadoop installation).
6. Test the connection. (A shell sketch of all six steps follows.)
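A minimal shell sketch of the six steps above. The tarball name, install paths, and credentials are assumptions; the MySQL host sz01 and the /home/bigdata paths are taken from the log section later in this post:

    # 1-2. Unpack the tarball and export the environment variables (e.g. in ~/.bashrc)
    tar -zxvf sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz -C /opt
    export SQOOP_HOME=/opt/sqoop-1.4.7.bin__hadoop-2.6.0
    export PATH=$PATH:$SQOOP_HOME/bin

    # 3. Edit bin/configure-sqoop by hand and comment out the failing checks

    # 4. Point Sqoop at the Hadoop and Hive home directories
    cat >> $SQOOP_HOME/conf/sqoop-env.sh <<'EOF'
    export HADOOP_COMMON_HOME=/home/bigdata/hadoop-2.7.2
    export HADOOP_MAPRED_HOME=/home/bigdata/hadoop-2.7.2
    export HIVE_HOME=/home/bigdata/apache-hive-1.2.2-bin
    EOF

    # 5. Copy the dependency jars into lib/
    cp mysql-connector-java-5.1.46-bin.jar $SQOOP_HOME/lib/
    cp /home/bigdata/hadoop-2.7.2/share/hadoop/common/hadoop-common-2.7.2.jar $SQOOP_HOME/lib/

    # 6. Test the connection (-P prompts for the password)
    sqoop list-databases --connect jdbc:mysql://sz01:3306 --username root -P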

 

II. Command Overview

   

version
import
list-databases
list-tables
create-hive-table            create only the table structure in Hive
--connect                    JDBC connection URL; connection parameters are configured in hive-site.xml
--username
--password
--table
--columns
--query                      the WHERE clause must include AND $CONDITIONS (it errors out without it)
--delete-target-dir          delete the target directory if it already exists
--target-dir
--fields-terminated-by       field delimiter (default: comma)
--lines-terminated-by        line delimiter (default: newline)
--hive-import                import in Hive mode
--hive-database              import into the specified Hive database
--external-table-dir         create the Hive table as an external table
--hive-table                 import into the specified Hive table
--hive-overwrite             overwrite existing data on import
--split-by                   column used to split the job into map tasks
-m <n>                       number of map tasks (default: 4)

A command sketch combining several of these options follows.
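A sketch tying several of these options together; the host, database, and table (sz01, test, make) reuse the example from the log section below, while the column list id,name is a made-up placeholder:

    # Plain HDFS import of two columns from table `make`, four map tasks split on `id`
    sqoop import \
      --connect jdbc:mysql://sz01:3306/test \
      --username root -P \
      --table make \
      --columns "id,name" \
      --delete-target-dir \
      --target-dir /home/haha \
      --fields-terminated-by ',' \
      --lines-terminated-by '\n' \
      --split-by id \
      -m 4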

 

III. Data Import

   1. Importing data into HDFS

1. Specify the command (declare the options and pass their arguments).
2. The command statement is translated into a program and packed into a jar.
3. The jar (core program plus dependency jars) is submitted to the cluster.
4. A simple import has only a Map phase; the data source is a relational database (structured data).
5. Single-table import: every row is queried out, joined with the delimiter, and written; a Reduce phase would only output the records unchanged.
6. Four MapTasks run by default, producing four result files; the count is controlled with the -m parameter.
7. By default the first column splits the work, e.g. 1 (MIN(id)) to 100 (MAX(id)) divided into fixed ranges; --split-by names the split column.
8. Results are written under the target directory; when the SQL uses aggregate functions (or a UDAF), run with -m 1 (see the query sketch after this list).
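For point 8, a hedged sketch of a free-form query import: the WHERE clause must carry AND $CONDITIONS (Sqoop substitutes each split's range there), and since an aggregate query has no rows to split, it runs with a single map task. The target directory /home/haha_count is a placeholder:

    # Aggregating query: no natural split column, so force one map task with -m 1.
    # Single quotes keep the shell from expanding $CONDITIONS.
    sqoop import \
      --connect jdbc:mysql://sz01:3306/test \
      --username root -P \
      --query 'SELECT COUNT(*) AS cnt FROM make WHERE $CONDITIONS' \
      --target-dir /home/haha_count \
      -m 1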

 

   2. Importing data into Hive

  • The data files are first imported to HDFS, producing a temporary directory.
  • On success the data is loaded into Hive and the temporary HDFS files are deleted.
1. Import to HDFS first; the temporary files land in the HDFS home directory of the user running the sqoop command, under an auto-generated directory name (it can also be specified manually).
2. Some of the parameters given for the HDFS import (the delimiters) are read again during the Hive import.
3. If the target table does not exist, it is created (from the source table's structure plus the parameters given on the command line).
4. If the target table already exists, data is appended by default; the structures must stay consistent, and the target Hive table's own parameters (delimiters) apply.
5. Loading the data moves the files from the intermediate directory into the internal table's directory. (A full command sketch follows this list.)
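A Hive import sketch following these steps; the database and table names (user1.make) match the log output below, and the delimiter choice is an illustrative assumption:

    # Import table `make` into Hive database user1. Sqoop stages the files on HDFS
    # first, loads them into the table, then removes the staging directory.
    sqoop import \
      --connect jdbc:mysql://sz01:3306/test \
      --username root -P \
      --table make \
      --hive-import \
      --hive-database user1 \
      --hive-table make \
      --fields-terminated-by '\t'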

     

   3. Understanding the console output

Annotated console output from a plain HDFS import (the trailing 13:24 lines come from a later run with --hive-import):

    sqoop import --connect jdbc:mysql://sz01:3306/test --username root --password root --table make --delete-target-dir --target-dir /home/haha
    18/09/18 11:36:00 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7                                <- 1. Sqoop version
    18/09/18 11:36:00 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
    18/09/18 11:36:00 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.      <- 2. Fetch the MySQL result set
    18/09/18 11:36:00 INFO tool.CodeGenTool: Beginning code generation                              <- 3. SQL code generation
    18/09/18 11:36:01 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `make` AS t LIMIT 1
    18/09/18 11:36:01 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `make` AS t LIMIT 1
    18/09/18 11:36:01 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/bigdata/hadoop-2.7.2 <- 4. Hadoop -> MR
    Note: /tmp/sqoop-bigdata/compile/5b8add033feeb0f72f4b3eac44ff00e9/make.java uses or overrides a deprecated API.
    Note: Recompile with -Xlint:deprecation for details.
    18/09/18 11:36:04 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-bigdata/compile/5b8add033feeb0f72f4b3eac44ff00e9/make.jar   <- 5. Jar file generated
    18/09/18 11:36:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    18/09/18 11:36:05 INFO tool.ImportTool: Destination directory /home/haha is not present, hence not deleting.   <- 6. Confirm the target path
    18/09/18 11:36:05 WARN manager.MySQLManager: It looks like you are importing from mysql.
    18/09/18 11:36:05 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
    18/09/18 11:36:05 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
    18/09/18 11:36:05 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
    18/09/18 11:36:05 INFO mapreduce.ImportJobBase: Beginning import of make                        <- 7. Import begins
    18/09/18 11:36:05 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar   <- 8. Deprecated MR properties replaced
    18/09/18 11:36:05 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
    18/09/18 11:36:06 INFO client.RMProxy: Connecting to ResourceManager at sz01/192.168.18.130:8032   <- 9. Connect to the ResourceManager
    18/09/18 11:36:11 INFO db.DBInputFormat: Using read commited transaction isolation
    18/09/18 11:36:11 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(`id`), MAX(`id`) FROM `make`   <- 10. Split the work
    18/09/18 11:36:11 INFO db.IntegerSplitter: Split size: 67; Num splits: 4 from: 1 to: 272
    18/09/18 11:36:11 INFO mapreduce.JobSubmitter: number of splits:4
    18/09/18 11:36:11 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1537076834131_0011   <- 11. Submit the job
    18/09/18 11:36:12 INFO impl.YarnClientImpl: Submitted application application_1537076834131_0011
    18/09/18 11:36:12 INFO mapreduce.Job: The url to track the job: http://sz01:8088/proxy/application_1537076834131_0011/
    18/09/18 11:36:12 INFO mapreduce.Job: Running job: job_1537076834131_0011
    18/09/18 11:36:27 INFO mapreduce.Job: Job job_1537076834131_0011 running in uber mode : false   <- 12. Run the job
    18/09/18 11:36:27 INFO mapreduce.Job:  map 0% reduce 0%
    18/09/18 11:36:40 INFO mapreduce.Job:  map 25% reduce 0%
    18/09/18 11:36:41 INFO mapreduce.Job:  map 50% reduce 0%
    18/09/18 11:36:48 INFO mapreduce.Job:  map 75% reduce 0%
    18/09/18 11:36:50 INFO mapreduce.Job:  map 100% reduce 0%
    18/09/18 11:36:51 INFO mapreduce.Job: Job job_1537076834131_0011 completed successfully
    18/09/18 11:36:52 INFO mapreduce.Job: Counters: 31                                              <- 13. Print the counters
        File System Counters
            FILE: Number of bytes read=0
            FILE: Number of bytes written=548044
            FILE: Number of read operations=0
            FILE: Number of large read operations=0
            FILE: Number of write operations=0
            HDFS: Number of bytes read=405
            HDFS: Number of bytes written=2279
            HDFS: Number of read operations=16
            HDFS: Number of large read operations=0
            HDFS: Number of write operations=8
        Job Counters
            Killed map tasks=1
            Launched map tasks=4
            Other local map tasks=4
            Total time spent by all maps in occupied slots (ms)=34012
            Total time spent by all reduces in occupied slots (ms)=0
            Total time spent by all map tasks (ms)=34012
            Total vcore-milliseconds taken by all map tasks=34012
            Total megabyte-milliseconds taken by all map tasks=34828288
        Map-Reduce Framework
            Map input records=184
            Map output records=184
            Input split bytes=405
            Spilled Records=0
            Failed Shuffles=0
            Merged Map outputs=0
            GC time elapsed (ms)=470
            CPU time spent (ms)=5830
            Physical memory (bytes) snapshot=417902592
            Virtual memory (bytes) snapshot=8252805120
            Total committed heap usage (bytes)=121896960
        File Input Format Counters
            Bytes Read=0
        File Output Format Counters
            Bytes Written=2279
    18/09/18 11:36:52 INFO mapreduce.ImportJobBase: Transferred 2.2256 KB in 46.2286 seconds (49.2985 bytes/sec)
    18/09/18 11:36:52 INFO mapreduce.ImportJobBase: Retrieved 184 records.

    18/09/18 13:24:59 INFO mapreduce.ImportJobBase: Publishing Hive/Hcat import job data to Listeners for table make
    18/09/18 13:24:59 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `make` AS t LIMIT 1
    18/09/18 13:24:59 INFO hive.HiveImport: Loading uploaded data into Hive
    18/09/18 13:25:00 DEBUG hdfs.LeaseRenewer: Lease renewer daemon for [] with renew id 1 executed

    Logging initialized using configuration in jar:file:/home/bigdata/apache-hive-1.2.2-bin/lib/hive-jdbc-1.2.2-standalone.jar!/hive-log4j.properties
    OK
    Time taken: 5.158 seconds
    Loading data to table user1.make
    Table user1.make stats: [numFiles=4, totalSize=2279]
    OK
    Time taken: 1.244 seconds

 

IV. Logging

  1. Configure it yourself

    Put a log4j.properties in the conf directory: log level, plus the directory and name of the log files (a sketch follows).
    Put the log4j jars in the lib directory.
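A minimal log4j.properties sketch written via a heredoc; the INFO level, the /var/log/sqoop path, and the rolling-file settings are all illustrative assumptions, not values from the post:

    cat > $SQOOP_HOME/conf/log4j.properties <<'EOF'
    # Log level plus two appenders: console and a rolling log file
    log4j.rootLogger=INFO, console, file
    log4j.appender.console=org.apache.log4j.ConsoleAppender
    log4j.appender.console.layout=org.apache.log4j.PatternLayout
    log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
    # Directory and name of the log file (assumed path)
    log4j.appender.file=org.apache.log4j.RollingFileAppender
    log4j.appender.file.File=/var/log/sqoop/sqoop.log
    log4j.appender.file.MaxFileSize=10MB
    log4j.appender.file.MaxBackupIndex=5
    log4j.appender.file.layout=org.apache.log4j.PatternLayout
    log4j.appender.file.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
    EOF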

  2. When Sqoop pulls data, the real work is MapReduce jobs and Hive import/export

    Check Hadoop's logs/userlogs directory and the Hive logs.
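Two concrete ways to look those up; the application id below is the one from the log section above, and yarn logs only works for a finished job with log aggregation enabled:

    # Per-container MapTask logs on the NodeManager's local disk
    ls $HADOOP_HOME/logs/userlogs/
    # Aggregated logs for a finished job, fetched by application id
    yarn logs -applicationId application_1537076834131_0011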

 

Reposted from: https://www.cnblogs.com/OnTheWay-0518/p/9671252.html
