The escape character \ (notes for Hive, shell, and Java): the regex escape must be written as a double backslash, and the split function's delimiter is also parsed as a regex

The escape character

An escape character modifies the character that follows it: it lets a character with special meaning be treated as an ordinary one, or turns an ordinary character into one with special meaning.
It is used in practically every language: java, python, sql, hive, shell, and so on.
For example, in sql:
        "\""    
        "\'"
        "\t"
        "\n"
sql prints these directly as
        "
        '
        a tab
        a newline

Typical use of the escape character

"\"转义字符放到字符前面,如java和python输出内容用双引号标识,双引号中可以用转义字符\进行转义输出,比如输出双引号
java中 system.out.print("\"")
python中 print "\""

A special case: escaping the escape character itself

The special case is the escape character escaping itself: in Java, for example, you sometimes need two escape characters, "\\", or even four, "\\\\".

1) The two cases in Java
    Regex pattern matching and String's split function.
    In both cases, when the pattern contains the escape character "\", the escape character itself must first be escaped, i.e. you write two of them, "\\". (Java parses the string literal first, and the regex or split engine then does its own parsing of the result.)
    To match a literal backslash "\", you need four of them, "\\\\": Java (the compiler) parses them first, collapsing them into two, "\\", and the regex or split engine then collapses those into the single "\" that is actually matched.
    In other words, two rounds of parsing are needed before a lone backslash "\" appears in the pattern string, and only then can it escape (or be matched as) the character that follows. A runnable sketch is shown below.
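    A minimal runnable sketch of both cases (the class name and the sample strings here are made up for illustration):

    public class EscapeDemo {
        public static void main(String[] args) {
            // Case 1: the pattern contains an escape. To split on a literal '|',
            // the regex engine needs \| , so the Java source literal is "\\|".
            for (String s : "a|b|c".split("\\|")) {
                System.out.println(s);                  // a, b, c
            }

            // Case 2: matching a literal backslash. The regex engine needs \\ ,
            // so the Java source literal is "\\\\" (four backslashes in source).
            for (String s : "dir\\sub\\file".split("\\\\")) {
                System.out.println(s);                  // dir, sub, file
            }

            // The same doubling applies to regex matching, e.g. String.matches():
            System.out.println("a|b".matches("a\\|b")); // true
        }
    }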

2) split and regular expressions in Hive
    Hive is written in Java, so it behaves the same way: both cases also need two backslashes "\\". Taking split as the example:
    select 
    ad,
    '月资费类型' as feature,
    (CASE subscriptionfee_id
        when '0' then '无'
        when '1' then '[0,50)'
        when '2' then '[50,100]'
        when '3' then '[100,150]'
        when '4' then '[150,200)'
        when '5' then '>=200'
        else 'error_data' 
    END) as feature_detail,
    1 as type
    from mengniubi.dianxin_user_tags
    union all
    select 
    ad,
    '爱好分布' as feature,
    split(new_interest,'\\|')[1] as feature_detail,
    2 as type
    from mengniubi.dianxin_user_tags
    lateral view explode(interests) AllInterests as new_interest
    union all
    select 
    ad,
    '商品浏览' as feature,
    split(products,'\\|')[0] as feature_detail,
    4 as type
    from mengniubi.dianxin_user_tags
    lateral view explode(split(product_view_cates,',')) AllProducts as products
In this code, if the delimiter were the backslash "\" itself, four escape characters "\\\\" would be needed, i.e.
    split(products,'\\\\')[0] as feature_detail,

3) Running Hive statements from a shell script
The shell has its own escape character and processes it itself.
When a Hive statement is executed from a shell script, the shell escapes the text first and only then hands it to Hive, so the statement goes through yet another round of escaping.
If the Hive statement above were put into a shell script as-is, it would fail: the shell parses it first, turning '\\|' into '\|' before passing it to Hive; Hive then consumes that remaining escape, and split can no longer interpret the delimiter correctly.
So when Hive statements are executed from shell scripts, the escape characters have to be doubled again: Hive processes the statement after shell escaping, and it must still be correct at that point in order to run. The code above becomes the following shell script:

#!/bin/bash
##### execute hive sql for analyzing data #####
arg_count=$#
if [ $arg_count -lt 1 ];then
   echo "参数错误 [$*], Usage:$0 2015-08"
   exit 1
fi

if [ ! -d "$HIVE_HOME" ];then
   echo "HIVE_HOME not exists .. "
   exit 2
fi

month_arg=$1
echo "month : ${month_arg}"

echo "start ... "
########################  SQL EDIT AREA 1 BEGIN #####################################
msg="step1 t_bi_daily_ad_area_report .."
echo
echo
echo $msg
echo
echo
sql=$(cat <<!EOF
set mapred.queue.names=queue3;
SET mapred.reduce.tasks=14;

insert OVERWRITE table t_bi_figure_whole_network_report partition(month='${month_arg}')
select '-1' as brand,feature,feature_detail,type,count(ad) as count
from 
(
    select 
    ad,
    '月资费类型' as feature,
    (CASE subscriptionfee_id
        when '0' then '无'
        when '1' then '[0,50)'
        when '2' then '[50,100]'
        when '3' then '[100,150]'
        when '4' then '[150,200)'
        when '5' then '>=200'
        else 'error_data' 
    END) as feature_detail,
    1 as type
    from mengniubi.dianxin_user_tags
    union all
    select 
    ad,
    '爱好分布' as feature,
    split(new_interest,'\\\\|')[1] as feature_detail,
    2 as type
    from mengniubi.dianxin_user_tags
    lateral view explode(interests) AllInterests as new_interest
    union all
    select 
    ad,
    '商品浏览' as feature,
    split(products,'\\\\|')[0] as feature_detail,
    4 as type
    from mengniubi.dianxin_user_tags
    lateral view explode(split(product_view_cates,',')) AllProducts as products

) t1
group by feature,feature_detail,type
union all
select '-1' as brand,
    '搜索关键字' as feature,
    search_word as feature_detail,
    2 as type,
    count(1) as count 
from mengniubi.dianxin_user_tags
lateral view explode(split(search_keywords,',')) AllKeyWords as search_word
where search_word is not null and search_word <> '' 
group by search_word
order by count desc
limit 1000;
!EOF)
########### execute begin ##########
echo $sql
$HIVE_HOME/bin/hive -e "$sql"
exitCode=$?
if [ $exitCode -ne 0 ];then
   echo "[ERROR] $msg"
   exit $exitCode
fi
########### execute end  ###########
########################  SQL EDIT AREA 1 END #######################################

In Hive, the regex escape character is written as a double backslash

In Hive's split function, the delimiter is a regular expression

split(string str, string pat) 

Splits str around pat (pat is a regular expression).
The delimiter here is a regular expression. Splitting on a single ordinary character is unproblematic, e.g.
select 
    split(all,'~')
from
tb_pmp_log_all_lmj_tmp
limit 10
When the delimiter is the pipe '|' used directly, it is interpreted as regex alternation between two empty alternatives, which matches the empty string, so every single character of the field gets separated, as in the following example (a plain-Java illustration of the same behavior follows it):
    select 
        split(all,'|')
    from
    tb_pmp_log_all_lmj_tmp
    limit 10
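
For reference, the same behavior can be reproduced in plain Java (Hive's split is backed by Java regular expressions); the class and sample string below are only illustrative:

import java.util.Arrays;

public class BarePipeSplit {
    public static void main(String[] args) {
        // A bare '|' is alternation between two empty patterns, so it matches the
        // empty string and the input is cut apart between every character.
        System.out.println(Arrays.toString("ab|cd".split("|")));
        // e.g. [a, b, |, c, d]

        // Escaping the pipe matches it literally and splits as intended.
        System.out.println(Arrays.toString("ab|cd".split("\\|")));
        // [ab, cd]
    }
}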

Because the regex escape in Hive has to be written as two backslashes, a literal '|' is written as '\\|', e.g.

    select 
        split(all,'\\|')
    from
    tb_pmp_log_all_lmj_tmp
    limit 10

A pattern such as '|+' does not run at all and raises an error, because the '+' has nothing to quantify after the empty alternative.
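
The failure can be reproduced outside Hive with Java's own regex compiler, since Hive's split delegates pattern parsing to Java regular expressions; this is just an illustrative standalone check:

import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

public class PipePlusCheck {
    public static void main(String[] args) {
        try {
            Pattern.compile("|+");               // '+' directly follows an empty alternative
        } catch (PatternSyntaxException e) {
            System.out.println(e.getMessage());  // reports a dangling '+' metacharacter
        }
    }
}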

When the delimiter is a combination of several characters, remember that the regex escape is the double backslash \\; a single-backslash form such as '\|' will cause problems.
To split on the delimiter "|~|", for example:

select 
    split(all,'\\|~\\|')
from
tb_pmp_log_all_lmj_tmp
limit 10

Alternatively, list the characters inside a character class [], where '|' needs no escaping:

select 
    split(all,'[|~]+')
from
tb_pmp_log_all_lmj_tmp
limit 10
