Adding UPDATE and DELETE Support to Hive

1. Configure hive-site.xml

On a CDH cluster, first open the Hive configuration page in Cloudera Manager, switch to the Advanced tab, and locate the "Hive Client Advanced Configuration Snippet (Safety Valve) for hive-site.xml" item.
Click the + button and add the following properties:

 
  hive.support.concurrency = true
  hive.enforce.bucketing = true
  hive.exec.dynamic.partition.mode = nonstrict
  hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
  hive.compactor.initiator.on = true
  hive.compactor.worker.threads = 1

Then click Save Changes and deploy the client configuration.
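
For a quick test before redeploying client configurations, the client-side properties can also be set per session in the Hive shell. A minimal sketch, with the caveat that the hive.compactor.* properties take effect in the metastore/HiveServer2 process, not in a client session:

  -- Session-level equivalents of the client-side settings above (sketch):
  SET hive.support.concurrency=true;
  SET hive.enforce.bucketing=true;
  SET hive.exec.dynamic.partition.mode=nonstrict;
  SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
  -- hive.compactor.initiator.on and hive.compactor.worker.threads must stay in
  -- hive-site.xml: the compactor runs server-side and ignores client SET commands.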

2. Create the table

To support DELETE and UPDATE, the table must be written with an AcidOutputFormat and must be bucketed.
At present only the ORC file format supports AcidOutputFormat, and on top of that the table has to be created with TBLPROPERTIES ('transactional' = 'true'):

 
  USE test;
  DROP TABLE IF EXISTS S1_AC_ACTUAL_PAYDETAIL;
  CREATE TABLE IF NOT EXISTS S1_AC_ACTUAL_PAYDETAIL
  (
  INPUTDATE STRING,
  SERIALNO STRING,
  PAYDATE STRING,
  ACTUALPAYDATE STRING,
  CITY STRING,
  PRODUCTID STRING,
  SUBPRODUCTTYPE STRING,
  ISP2P STRING,
  ISCANCEL STRING,
  CDATE STRING,
  PAYTYPE STRING,
  ASSETSOWNER STRING,
  ASSETSOUTDATE STRING,
  CPD DOUBLE,
  PAYPRINCIPALAMT BIGINT,
  PAYINTEAMT BIGINT,
  A2 BIGINT,
  A7 BIGINT,
  A9 BIGINT,
  A10 BIGINT,
  A11 BIGINT,
  A12 BIGINT,
  A17 BIGINT,
  A18 BIGINT,
  PAYAMT BIGINT,
  LOANNO STRING,
  CREATEDATE STRING,
  CUSTOMERID STRING,
  etl_in_dt string
  )
  CLUSTERED BY (SERIALNO) -- bucket by this column
  INTO 7 BUCKETS -- number of buckets
  ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  LINES TERMINATED BY '\n'
  STORED AS ORC
  LOCATION '/user/hive/test/S1_AC_ACTUAL_PAYDETAIL'
  TBLPROPERTIES('transactional'='true'); -- required for ACID; TBLPROPERTIES can also carry extra metadata such as last-modified info
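
Before loading data, it is worth confirming that the table really came out transactional and bucketed. A quick sketch (exact output varies a little between Hive versions):

  USE test;
  SHOW TBLPROPERTIES S1_AC_ACTUAL_PAYDETAIL('transactional');  -- should report true
  DESCRIBE FORMATTED S1_AC_ACTUAL_PAYDETAIL;  -- check "Num Buckets: 7" and the ORC input/output format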

Note: because CDH automatically creates the COMPACTION_QUEUE table in the metastore, the metastore problem described in the post appended below does not occur here.

3. Operations

Run:

 
  update test.S1_AC_ACTUAL_PAYDETAIL set city='023' where SERIALNO = '20688947002';

Running this against 100 rows averaged a little over 2 seconds per row, of which roughly 1 second was actual execution. That is still relatively acceptable.

 
  delete from test.S1_AC_ACTUAL_PAYDETAIL where SERIALNO = '20688947002';
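
A hypothetical sanity check for both statements, re-querying the key used above:

  -- After the UPDATE the row should show city = '023';
  -- after the DELETE the query should return no rows.
  SELECT SERIALNO, CITY FROM test.S1_AC_ACTUAL_PAYDETAIL WHERE SERIALNO = '20688947002';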

4. Summary

  • 1. With these settings Hive can modify and delete data, but nowhere near as fast as a traditional relational database.
  • 2. Because each ORC task writes a single file that carries its own index, and the data is bucketed, a change can be pinned to a very small file block, which makes relatively cheap file modifications possible (the resulting delta files are then merged by the compactor; see the sketch after this list). Bucketing is therefore crucial: form-style records are usually updated and deleted by their form ID, so the form ID is the recommended bucketing column.
  • 3. Frequent UPDATE and DELETE operations go against what Hive was designed for. Unless there is truly no alternative, incremental appends remain the better approach.
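
Every UPDATE or DELETE on an ACID table writes a delta file next to the base ORC data, and the compactor enabled in section 1 merges those deltas in the background. A sketch of how to watch or manually trigger that process with standard Hive statements:

  SHOW COMPACTIONS;  -- lists queued/working compactions (backed by the COMPACTION_QUEUE metastore table)
  -- Force a major compaction if delta files pile up; it runs asynchronously:
  ALTER TABLE test.S1_AC_ACTUAL_PAYDETAIL COMPACT 'major';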

Hive 0.14: testing insert, update, and delete operations

First, create a table with the most ordinary CREATE TABLE statement:

 

hive> create table test (id int, name string) row format delimited fields terminated by ',';

 

Test INSERT:

insert into table test values (1,'row1'),(2,'row2');

It fails with an error:

  java.io.FileNotFoundException: File does not exist: hdfs://127.0.0.1:9000/home/hadoop/git/hive/packaging/target/apache-hive-0.14.0-SNAPSHOT-bin/apache-hive-0.14.0-SNAPSHOT-bin/lib/curator-client-2.6.0.jar
  at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1128)
  at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)
  at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
  at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120)
  at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
  at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
  at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:99)
  at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
  at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:265)
  at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:301)
  at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:389)
  at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
  at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
  at java.security.AccessController.doPrivileged(Native Method)
  ......

Apparently it is looking for the jars on HDFS. A small problem; just upload the jars under lib to HDFS:

 
  hadoop fs -mkdir -p /home/hadoop/git/hive/packaging/target/apache-hive-0.14.0-SNAPSHOT-bin/apache-hive-0.14.0-SNAPSHOT-bin/lib/
  hadoop fs -put $HIVE_HOME/lib/* /home/hadoop/git/hive/packaging/target/apache-hive-0.14.0-SNAPSHOT-bin/apache-hive-0.14.0-SNAPSHOT-bin/lib/

 

After that, the INSERT runs fine. Next, test DELETE:

 

hive> delete from test where id = 1;

It errors out:

FAILED: SemanticException [Error 10294]: Attempt to do update or delete using transaction manager that does not support these operations.

It says the transaction manager in use does not support UPDATE or DELETE operations.

 

It turns out that extra configuration is required to support UPDATE and DELETE; see:

https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions#HiveTransactions-NewConfigurationParametersforTransactions

Configure hive-site.xml as instructed:

 

 
  hive.support.concurrency = true
  hive.enforce.bucketing = true
  hive.exec.dynamic.partition.mode = nonstrict
  hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
  hive.compactor.initiator.on = true
  hive.compactor.worker.threads = 1

With that configured I expected it to run smoothly, but instead it started throwing this error:

FAILED: LockException [Error 10280]: Error communicating with the metastore

Something is wrong with the metastore database. Set the log level to DEBUG to see the underlying error (e.g. by starting the CLI with hive --hiveconf hive.root.logger=DEBUG,console):

  2014-11-04 14:20:14,367 DEBUG [Thread-8]: txn.CompactionTxnHandler (CompactionTxnHandler.java:findReadyToClean(265)) - Going to execute query
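
The DEBUG output shows CompactionTxnHandler querying the compaction queue. The usual cause of this LockException is that the ACID transaction tables were never created in the metastore database, which is also why the note in section 2 points out that CDH creates COMPACTION_QUEUE automatically. A hedged way to check, assuming a MySQL-backed metastore whose database is named metastore (names vary by installation):

  -- Run against the metastore RDBMS, not inside Hive:
  USE metastore;  -- the database name is an assumption; adjust for your setup
  SHOW TABLES LIKE 'COMPACTION_QUEUE';  -- an empty result means the ACID schema is missing
  SHOW TABLES LIKE 'TXNS';              -- the transaction table should exist as well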