Big data reference links

Spark Streaming writing to Redis: https://blog.csdn.net/qq_28666339/article/details/79234301

maven-antrun-plugin copy task (Ant copy task manual): http://ant.apache.org/manual/Tasks/copy.html

ECharts gallery: https://gallery.echartsjs.com/explore.html#sort=rank~timeframe=all~author=all

https://blog.csdn.net/qq_36275889/article/details/83383150

ECharts 3: showing a "no data" placeholder: https://github.com/apache/incubator-echarts/issues/4829

Spark reading a config file (ship it to the cluster with --files; the trailing "1" argument is read by the job as the isCluster flag):

nohup spark2-submit --master yarn --num-executors 2 --executor-cores 1 --executor-memory 3G --driver-memory 1G --class com.marfosec.ods.OdsAccessOutDetail --files config.properties bigdata-cernetbam-1.0.0.jar 1 &
import java.io.{FileInputStream, InputStream}
import java.util.Properties

val props: Properties = new Properties
if (isCluster == 1) {
    // On YARN, --files places config.properties in the container's working directory
    props.load(new FileInputStream("config.properties"))
} else {
    // Local run: read from the classpath. getResourceAsStream also works when the
    // file is packaged inside the jar; FileInputStream(resource.getPath) does not.
    val in: InputStream = this.getClass.getClassLoader.getResourceAsStream("config.properties")
    props.load(in)
}

Hive partitioning by date: https://blog.csdn.net/dylanzr/article/details/86187552

CDH high availability: https://blog.csdn.net/u011142688/article/details/82078132

Azkaban installation: https://blog.csdn.net/weixin_42179685/article/details/90716366

https://blog.csdn.net/hg_harvey/article/details/80342396

Troubleshooting: https://yq.aliyun.com/articles/648399

Azkaban in practice: https://blog.csdn.net/tototuzuoquan/article/details/73251616

https://www.jianshu.com/p/01188607a794?nomobile=yes

CDH Oozie:

https://blog.csdn.net/hxiaowang/article/details/78551106 

https://blog.csdn.net/qq_24908345/article/details/80017660

(Oozie is really painful to use; Azkaban is recommended instead.)

Spark tuning reference book: https://www.jb51.net/books/612370.html

Spark consuming Kafka data and loading it into Hive: https://blog.csdn.net/u012164361/article/details/79742201
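As a hedged sketch of the Kafka-to-Hive flow (broker, topic, table, and path names below are placeholders, not taken from the linked post; foreachBatch requires Spark 2.4+): Structured Streaming reads Kafka and appends each micro-batch into an existing Hive table.

```scala
// Sketch only: needs a running cluster; all names are placeholder assumptions.
import org.apache.spark.sql.{DataFrame, SparkSession}

val spark = SparkSession.builder()
  .appName("kafka-to-hive")
  .enableHiveSupport()          // requires hive-site.xml on the classpath
  .getOrCreate()

// Read the Kafka topic as a stream and keep the message value as text.
val stream = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker1:9092")
  .option("subscribe", "access_log")
  .load()
  .selectExpr("CAST(value AS STRING) AS line")

// Append each micro-batch into a pre-created Hive table.
stream.writeStream
  .option("checkpointLocation", "/tmp/ck/access_log")
  .foreachBatch { (batch: DataFrame, _: Long) =>
    batch.write.mode("append").insertInto("ods.access_log")
  }
  .start()
  .awaitTermination()
```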

Java: writing Kafka data to HDFS: https://blog.csdn.net/u013385018/article/details/80689546

Maven dependencies for CDH versions: https://blog.csdn.net/hexinghua0126/article/details/80292905

Installing Spark 2.3 on CDH: https://blog.csdn.net/lichangzai/article/details/82225494

https://www.jianshu.com/p/170ffe85c063/

https://www.cnblogs.com/zengxiaoliang/p/6478859.html (Spark 2.1.0)

Setting up a Spark development environment in IDEA: https://blog.csdn.net/yiluohan0307/article/details/79568363

Word count in Scala and Java: https://www.cnblogs.com/byrhuangqiang/p/4017725.html (Java on Spark is very verbose; learning Scala is recommended)
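To back up the note above, word count in Scala is a short pipeline. This sketch uses plain Scala collections so it runs without a cluster; on Spark the same chain applies to an RDD, with reduceByKey(_ + _) replacing the groupBy step.

```scala
// Word count on plain Scala collections (no Spark needed); the input is sample data.
val lines = Seq("to be or not to be")
val counts = lines
  .flatMap(_.split("\\s+"))                        // tokenize on whitespace
  .map(w => (w, 1))                                // pair each word with 1
  .groupBy(_._1)                                   // group pairs by word
  .map { case (w, ps) => (w, ps.map(_._2).sum) }   // sum counts per word

println(counts)
```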

Kafka: checking message backlog (consumer lag): https://blog.51cto.com/12473494/2420105

spark2-shell reading Hive data: https://www.cnblogs.com/xinfang520/p/7985939.html
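For reference, the basic flow looks like this in spark2-shell (the table name is a placeholder; it assumes hive-site.xml is deployed so the shell's prebuilt `spark` session has Hive support):

```scala
// Inside spark2-shell; `spark` is the SparkSession the shell pre-creates.
spark.sql("show databases").show()

// Query a Hive table into a DataFrame ("ods.access_log" is a placeholder name).
val df = spark.sql("select * from ods.access_log limit 10")
df.show()
df.printSchema()
```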

Sqoop importing MySQL into Hive: https://www.cnblogs.com/xuyou551/p/7998846.html
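A minimal Sqoop invocation in the spirit of the linked post (host, credentials, database, and table names are placeholder assumptions):

```shell
# Import one MySQL table into Hive; -P prompts for the password interactively.
sqoop import \
  --connect jdbc:mysql://dbhost:3306/testdb \
  --username root -P \
  --table orders \
  --hive-import \
  --hive-database ods \
  --hive-table orders \
  --num-mappers 1
```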
