Spark usage notes (cases)

case 1: Spark SQL caches Parquet metadata for better performance. When conversion of Hive metastore Parquet tables is enabled, that cached metadata is not refreshed after the table is modified. So whenever the table is changed by Hive or another tool, the metadata must be refreshed manually to keep it consistent. For example:

spark.sql("""REFRESH TABLE **** """)
or
spark.catalog.refreshTable("my_table")
sqlContext.refreshTable("my_table")

case 2: configuration for long-running Spark Streaming jobs

Partly based on: http://mkuthan.github.io/blog/2016/09/30/spark-streaming-on-yarn/

  • --conf spark.streaming.stopGracefullyOnShutdown=true
    References: https://www.jianshu.com/p/a108611129f5
    https://blog.csdn.net/qq_23146763/article/details/79152213
    On shutdown, wait for in-flight batches to finish and save the current state to the checkpoint, so that the restarted job picks up a correct checkpoint.
    Alternatively, register a hook via Runtime.getRuntime().addShutdownHook; it runs just before the JVM exits and calls streamingContext.stop, as shown below:
    (Stop the driver process with a plain kill command; do not force-kill it with -9, or the hook never gets to run.)
Runtime.getRuntime().addShutdownHook(new Thread() {
  override def run(): Unit = {
    // log with whatever logger the application uses
    println("Gracefully stop Spark Streaming")
    // stopSparkContext = true, stopGracefully = true: finish in-flight batches first
    streamingContext.stop(true, true)
  }
})

See the StreamingContext source; the graceful-stop logic lives in its stop function.

  • Kerberos ticket expiry in long-running streaming jobs
    --keytab /$HOME/****.keytab \
    --principal ****
    These options only take effect from Spark 2.2; in Spark 2.1 they have no real effect.

  • --conf spark.yarn.maxAppAttempts=4
    --conf spark.yarn.am.attemptFailuresValidityInterval=1h
    How many times the application is automatically restarted after failure, and the validity interval over which those attempts are counted (all the flags in this list are pulled together in the spark-submit sketch after the list).

  • --conf spark.yarn.max.executor.failures={8 * num_executors}
    --conf spark.yarn.executor.failuresValidityInterval=1h
    Maximum number of executor failures allowed, and the interval over which they are counted; the default is max(2 * num_executors, 3). For long-running jobs raise the limit and shorten the validity interval.

  • --conf spark.task.maxFailures=8
    Number of times a task is retried before the job fails.

  • --queue realtime_queue
    Use the YARN Capacity Scheduler and submit the job to a dedicated YARN queue.

  • --conf spark.speculation=true
    Enable speculative execution. Only turn it on when the Spark operations are idempotent, i.e. repeating an operation does not change the final result; compare the server/client consistency a payment system needs to avoid charging twice.
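
Pulling the flags above together, a hedged spark-submit sketch for such a long-running streaming job could look like the following; the main class, jar, keytab path, principal and executor count are placeholders, not values from this post (with 4 executors, 8 * num_executors gives the 32 used for spark.yarn.max.executor.failures):

spark-submit \
  --master yarn --deploy-mode cluster \
  --queue realtime_queue \
  --num-executors 4 \
  --keytab /path/to/app.keytab \
  --principal app_user@EXAMPLE.COM \
  --conf spark.streaming.stopGracefullyOnShutdown=true \
  --conf spark.yarn.maxAppAttempts=4 \
  --conf spark.yarn.am.attemptFailuresValidityInterval=1h \
  --conf spark.yarn.max.executor.failures=32 \
  --conf spark.yarn.executor.failuresValidityInterval=1h \
  --conf spark.task.maxFailures=8 \
  --conf spark.speculation=true \
  --class com.example.StreamingApp \
  streaming-app.jar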

case 3: web UI security mechanisms

Reference: https://www.jianshu.com/p/5454f9b24e83

  • --conf spark.acls.enable=true --conf spark.ui.view.acls=dr.who --conf spark.monitor.email=*****
    ACLs controlling who may view jobs or kill them from the web UI. spark.acls.enable turns the ACL mechanism on. dr.who is the default static user Hadoop's web UIs assign to anonymous requests (it can be changed with the hadoop.http.staticuser.user property in Hadoop's core-site.xml), so adding it to spark.ui.view.acls grants view access to requests coming in as that user.
    Related properties:
  • spark.acls.enable (default: false): Whether Spark acls should be enabled. If enabled, this checks to see if the user has access permissions to view or modify the job. Note this requires the user to be known, so if the user comes across as null no checks are done.
  • spark.ui.view.acls (default: empty): Comma separated list of users that have view access to the Spark web ui. By default only the user that started the Spark job has view access. Putting a "*" in the list means any user can have view access to this Spark job.
  • spark.modify.acls (default: empty): Comma separated list of users that have modify access to the Spark job. By default only the user that started the Spark job has access to modify it (kill it for example). Putting a "*" in the list means any user can have access to modify it.
  • spark.admin.acls (default: empty): Comma separated list of users/administrators that have view and modify access to all Spark jobs. This can be used if you run on a shared cluster and have a set of administrators or devs who help debug when things do not work. Putting a "*" in the list means any user can have the privilege of admin.

As for the Filter mechanism, which protects view and modify access with a username and password (a bit more involved; worth implementing when there is time), see the reference link above.
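
For orientation only, a minimal sketch of what such a filter could look like, assuming HTTP basic auth and a made-up class name (this is not the implementation from the linked post). The Spark web UI loads any standard javax.servlet Filter listed in spark.ui.filters, with parameters passed as spark.<filter class>.param.<name>=<value>:

import java.util.Base64
import javax.servlet._
import javax.servlet.http.{HttpServletRequest, HttpServletResponse}

// Hypothetical basic-auth filter for the Spark web UI.
class BasicAuthFilter extends Filter {
  private var expectedHeader = ""

  override def init(conf: FilterConfig): Unit = {
    // the expected user/password arrive as filter init parameters
    val user = conf.getInitParameter("user")
    val password = conf.getInitParameter("password")
    expectedHeader = "Basic " + Base64.getEncoder.encodeToString(s"$user:$password".getBytes("UTF-8"))
  }

  override def doFilter(req: ServletRequest, res: ServletResponse, chain: FilterChain): Unit = {
    val request = req.asInstanceOf[HttpServletRequest]
    val response = res.asInstanceOf[HttpServletResponse]
    if (expectedHeader == request.getHeader("Authorization")) {
      chain.doFilter(req, res)                       // credentials match, pass the request on
    } else {
      response.setHeader("WWW-Authenticate", "Basic realm=\"Spark UI\"")
      response.sendError(HttpServletResponse.SC_UNAUTHORIZED)
    }
  }

  override def destroy(): Unit = {}
}

It would then be wired up with something like --conf spark.ui.filters=com.example.BasicAuthFilter --conf spark.com.example.BasicAuthFilter.param.user=admin --conf spark.com.example.BasicAuthFilter.param.password=secret (class name and credentials are placeholders).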

case 4: Kafka-related configuration
  • --conf spark.streaming.kafka.maxRatePerPartition=10000
    Caps the number of messages consumed per second from each partition of the topic, so that a large backlog is split across several batches instead of all landing in the first one. To digest delayed messages faster, increase the compute resources and raise this value (a sketch in code follows).
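
A minimal sketch with the Kafka 0.10 direct stream API; the broker list, topic, consumer group and batch interval are made up:

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

val conf = new SparkConf()
  .setAppName("RateLimitedKafkaStream")
  .set("spark.streaming.kafka.maxRatePerPartition", "10000")  // per-partition cap, records/second

// With a 10s batch interval, each partition contributes at most 10 * 10000
// records per batch, so a backlog is drained over several batches.
val ssc = new StreamingContext(conf, Seconds(10))

val kafkaParams = Map[String, Object](
  "bootstrap.servers"  -> "broker1:9092",          // hypothetical brokers
  "key.deserializer"   -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id"           -> "realtime_group",        // hypothetical consumer group
  "auto.offset.reset"  -> "latest"
)

val stream = KafkaUtils.createDirectStream[String, String](
  ssc, PreferConsistent, Subscribe[String, String](Seq("my_topic"), kafkaParams))

stream.foreachRDD(rdd => println(s"records in this batch: ${rdd.count()}"))

ssc.start()
ssc.awaitTermination()
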
case 5: In Spark SQL, when an intermediate table is used several times, register it as a temporary view; if this sometimes throws a "not found key" error, check whether the view name collides with an existing table or column name (see the sketch below).
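
A minimal sketch of the pattern, with made-up table and column names; the point is that the view name should not collide with a source table or one of its columns:

// an intermediate result reused by several queries
val mid = spark.sql(
  """SELECT user_id, SUM(amount) AS total_amount
    |FROM orders
    |GROUP BY user_id""".stripMargin)

// name the view something distinct from the source tables and their columns
mid.createOrReplaceTempView("mid_user_amount")

val bigSpenders = spark.sql("SELECT * FROM mid_user_amount WHERE total_amount > 100")
val userCount   = spark.sql("SELECT COUNT(*) FROM mid_user_amount")
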
case 6: compress RDDs once more before storing them in memory or on disk, to save resources (a sketch follows the property below).

-- spark.rdd.compress true
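
A minimal sketch, assuming a made-up input path: spark.rdd.compress only affects serialized partitions, so pair it with a serialized storage level such as MEMORY_ONLY_SER, trading a little CPU for memory/disk space.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

val conf = new SparkConf()
  .setAppName("RddCompressDemo")
  .set("spark.rdd.compress", "true")       // compress serialized RDD partitions
val sc = new SparkContext(conf)

val rdd = sc.textFile("/path/to/input")    // hypothetical input path
rdd.persist(StorageLevel.MEMORY_ONLY_SER)  // stored serialized, hence compressed
println(rdd.count())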
