[Spark Basics] -- Using Spark SQL (programming and CLI)

What is Spark SQL?

Spark SQL is a distributed SQL query engine; official benchmarks have reported it running up to 100x faster than Hive SQL. Since Spark 2.2.0 it also ships a cost-based optimizer (CBO).
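The CBO is disabled by default in 2.2.0 (spark.sql.cbo.enabled=false) and costs plans using table statistics, so it only helps after you collect them. A minimal sketch of turning it on (the table web_logs and its columns are hypothetical names):

# start a session with the cost-based optimizer enabled
spark-sql --master yarn --conf spark.sql.cbo.enabled=true

-- inside the CLI: gather the statistics the optimizer needs
ANALYZE TABLE web_logs COMPUTE STATISTICS;
ANALYZE TABLE web_logs COMPUTE STATISTICS FOR COLUMNS user_id, url;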

How do you use Spark SQL?

1. Programmatic use

Example: https://spark.apache.org/docs/2.2.0/sql-programming-guide.html#getting-started
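A minimal Scala sketch of what that guide walks through: create a SparkSession, register a view, and query it with SQL (the JSON path is the one shipped with the Spark examples):

import org.apache.spark.sql.SparkSession

object SparkSqlExample {
  def main(args: Array[String]): Unit = {
    // SparkSession is the single entry point for Spark SQL since 2.0.
    val spark = SparkSession.builder()
      .appName("Spark SQL basic example")
      .getOrCreate()

    // Load a DataFrame and expose it to SQL as a temporary view ...
    val df = spark.read.json("examples/src/main/resources/people.json")
    df.createOrReplaceTempView("people")

    // ... then query it with plain SQL.
    spark.sql("SELECT name FROM people WHERE age BETWEEN 13 AND 19").show()

    spark.stop()
  }
}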

2. Command-line use

You can think of spark-sql as submitting a Spark job via spark-submit, with a Hive-style command line for executing SQL added on top.

Examples:

(1) Run SQL interactively in the Spark SQL CLI

spark-sql --master yarn

(2) Run a file containing SQL (see the sample sql.txt after these examples)

spark-sql -f sql.txt --master yarn

(3) Run a SQL statement given on the command line

spark-sql -e "show databases" --master yarn
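For (2), the file is just a plain-text SQL script with semicolon-terminated statements, executed in order with each result printed to stdout. A hypothetical sql.txt (the database and table names are only illustrations):

-- sql.txt
show databases;
use default;
select count(*) from src;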

The spark-sql command-line options are almost the same as spark-submit's:

$ spark-sql --help
Usage: ./bin/spark-sql [options] [cli option]

Options:
  --master MASTER_URL         spark://host:port, mesos://host:port, yarn, or local.
  --deploy-mode DEPLOY_MODE   Whether to launch the driver program locally ("client") or
                              on one of the worker machines inside the cluster ("cluster")
                              (Default: client).
  --class CLASS_NAME          Your application's main class (for Java / Scala apps).
  --name NAME                 A name of your application.
  --jars JARS                 Comma-separated list of local jars to include on the driver
                              and executor classpaths.
  --packages                  Comma-separated list of maven coordinates of jars to include
                              on the driver and executor classpaths. Will search the local
                              maven repo, then maven central and any additional remote
                              repositories given by --repositories. The format for the
                              coordinates should be groupId:artifactId:version.
  --exclude-packages          Comma-separated list of groupId:artifactId, to exclude while
                              resolving the dependencies provided in --packages to avoid
                              dependency conflicts.
  --repositories              Comma-separated list of additional remote repositories to
                              search for the maven coordinates given with --packages.
  --py-files PY_FILES         Comma-separated list of .zip, .egg, or .py files to place
                              on the PYTHONPATH for Python apps.
  --files FILES               Comma-separated list of files to be placed in the working
                              directory of each executor.

  --conf PROP=VALUE           Arbitrary Spark configuration property.
  --properties-file FILE      Path to a file from which to load extra properties. If not
                              specified, this will look for conf/spark-defaults.conf.

  --driver-memory MEM         Memory for driver (e.g. 1000M, 2G) (Default: 1024M).
  --driver-java-options       Extra Java options to pass to the driver.
  --driver-library-path       Extra library path entries to pass to the driver.
  --driver-class-path         Extra class path entries to pass to the driver. Note that
                              jars added with --jars are automatically included in the
                              classpath.

  --executor-memory MEM       Memory per executor (e.g. 1000M, 2G) (Default: 1G).

  --proxy-user NAME           User to impersonate when submitting the application.
                              This argument does not work with --principal / --keytab.

  --help, -h                  Show this help message and exit
  --verbose, -v               Print additional debug output
  --version,                  Print the version of current Spark

 Spark standalone with cluster deploy mode only:
  --driver-cores NUM          Cores for driver (Default: 1).

 Spark standalone or Mesos with cluster deploy mode only:
  --supervise                 If given, restarts the driver on failure.
  --kill SUBMISSION_ID        If given, kills the driver specified.
  --status SUBMISSION_ID      If given, requests the status of the driver specified.

 Spark standalone and Mesos only:
  --total-executor-cores NUM  Total cores for all executors.

 Spark standalone and YARN only:
  --executor-cores NUM        Number of cores per executor. (Default: 1 in YARN mode,
                              or all available cores on the worker in standalone mode)

 YARN-only:
  --driver-cores NUM          Number of cores used by the driver, only in cluster mode
                              (Default: 1).
  --queue QUEUE_NAME          The YARN queue to submit to (Default: "default").
  --num-executors NUM         Number of executors to launch (Default: 2).
  --archives ARCHIVES         Comma separated list of archives to be extracted into the
                              working directory of each executor.
  --principal PRINCIPAL       Principal to be used to login to KDC, while running on
                              secure HDFS.
  --keytab KEYTAB             The full path to the file that contains the keytab for the
                              principal specified above. This keytab will be copied to
                              the node running the Application Master via the Secure
                              Distributed Cache, for renewing the login tickets and the
                              delegation tokens periodically.

The spark-sql CLI additionally accepts Hive-style options:

CLI options:
 -d,--define <key=value>          Variable subsitution to apply to hive
                                  commands. e.g. -d A=B or --define A=B
    --database <databasename>     Specify the database to use
 -e <quoted-query-string>         SQL from command line
 -f <filename>                    SQL from files
 -H,--help                        Print help information
    --hiveconf <property=value>   Use value for given property
    --hivevar <key=value>         Variable subsitution to apply to hive
                                  commands. e.g. --hivevar A=B
 -i <filename>                    Initialization SQL file
 -S,--silent                      Silent mode in interactive shell
 -v,--verbose                     Verbose mode (echo executed SQL to the
                                  console)
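A hypothetical one-liner combining several of these options (the sales database, orders table, and dt column are illustrative). Note the single quotes: they stop the shell from expanding ${hivevar:dt} before spark-sql performs its own variable substitution:

spark-sql --master yarn --database sales --hivevar dt=2019-02-25 \
  -e 'select count(*) from orders where dt = "${hivevar:dt}"'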

 

References

1. Spark SQL guide

https://docs.databricks.com/spark/latest/spark-sql/index.html

2. Programming guide

https://spark.apache.org/docs/2.2.0/sql-programming-guide.html#getting-started

 
