Fun with Big Data 25: Spark Experiment 1 (Reading and Writing External Data Sources with spark-shell)

I. Introduction


            This experiment follows Chapter 9, "Data Sources", of Spark: The Definitive Guide.

II. Experiment Content


            Use spark-shell to read and write the following types of data sources:

                    csv

                    json

                    orc

                    parquet

                    jdbc

III. Preparation


        1. About the experiment data

                Following the earlier "Spark experiment preparation" post, download and prepare the code and data files that accompany Spark: The Definitive Guide.

                The data folder used in this experiment (viewed on Windows):

(screenshot: the flight-data folder)

                        Subfolders under flight-data:

(screenshot: subfolders under flight-data)

        2. Prepare the experiment data

         Run the following on Ubuntu.

         1) Load the data files needed for this experiment into HDFS

                hdfs dfs -mkdir -p /mylab/mydata/spark/spark_guide_data

                hdfs dfs -put -p /home/hadoop/share/mydata/spark/spark_guide_data/flight-data /mylab/mydata/spark/spark_guide_data

(screenshot: uploading files to HDFS)

                hdfs dfs -ls /mylab/mydata/spark/spark_guide_data/flight-data

(screenshot: folders under flight-data)

                hdfs dfs -ls /mylab/mydata/spark/spark_guide_data/flight-data/csv

(screenshot: CSV files)
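Later, once spark-shell is running (section IV), the same check can also be done from inside the shell through the Hadoop FileSystem API. A minimal sketch, assuming the HDFS paths created above (the variable names are only illustrative):

import org.apache.hadoop.fs.{FileSystem, Path}

// List the uploaded flight-data directory from inside spark-shell
val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
fs.listStatus(new Path("/mylab/mydata/spark/spark_guide_data/flight-data"))
  .foreach(status => println(status.getPath))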

           3. Change the log output level

                While spark-shell is running, a lot of messages may be printed to the screen; setting the log level controls how verbose this output is.

                One way is to edit conf/log4j.properties:

                        log4j.rootCategory=INFO, console        (INFO can be changed to ERROR, WARN, or DEBUG; note that a trailing # comment is not valid in a .properties file, so do not append one to this line)

                So, to keep the on-screen messages to a minimum, set it to:

                        log4j.rootCategory=ERROR, console 

                Then restart spark-shell.

                The other way is to set it from inside spark-shell after it starts; see the explanation below.
            

IV. Startup

            1. Start spark-shell

                  spark-shell

                  Wait until the big Spark ASCII banner and the scala> prompt appear.

(screenshot: spark-shell startup screen)

                2. Change the log output level

                            If the log output gets too noisy, you can change the log level with one of:

                                    sc.setLogLevel("WARN")

                                    sc.setLogLevel("ERROR")

                                    sc.setLogLevel("DEBUG")

                            (Note that the method is setLogLevel and the level is passed as a quoted string.)

                 3. spark-shell --help

Usage: ./bin/spark-shell [options]

Scala REPL options:

  -I <file>                   preload <file>, enforcing line-by-line interpretation

Options:

  --master MASTER_URL        spark://host:port, mesos://host:port, yarn,

                              k8s://https://host:port, or local (Default: local[*]).

  --deploy-mode DEPLOY_MODE  Whether to launch the driver program locally ("client") or

                              on one of the worker machines inside the cluster ("cluster")

                              (Default: client).

  --class CLASS_NAME          Your application's main class (for Java / Scala apps).

  --name NAME                A name of your application.

  --jars JARS                Comma-separated list of jars to include on the driver

                              and executor classpaths.

  --packages                  Comma-separated list of maven coordinates of jars to include

                              on the driver and executor classpaths. Will search the local

                              maven repo, then maven central and any additional remote

                              repositories given by --repositories. The format for the

                              coordinates should be groupId:artifactId:version.

  --exclude-packages          Comma-separated list of groupId:artifactId, to exclude while

                              resolving the dependencies provided in --packages to avoid

                              dependency conflicts.

  --repositories              Comma-separated list of additional remote repositories to

                              search for the maven coordinates given with --packages.

  --py-files PY_FILES        Comma-separated list of .zip, .egg, or .py files to place

                              on the PYTHONPATH for Python apps.

  --files FILES              Comma-separated list of files to be placed in the working

                              directory of each executor. File paths of these files

                              in executors can be accessed via SparkFiles.get(fileName).

  --conf, -c PROP=VALUE      Arbitrary Spark configuration property.

  --properties-file FILE      Path to a file from which to load extra properties. If not

                              specified, this will look for conf/spark-defaults.conf.

  --driver-memory MEM        Memory for driver (e.g. 1000M, 2G) (Default: 1024M).

  --driver-java-options      Extra Java options to pass to the driver.

  --driver-library-path      Extra library path entries to pass to the driver.

  --driver-class-path        Extra class path entries to pass to the driver. Note that

                              jars added with --jars are automatically included in the

                              classpath.

  --executor-memory MEM      Memory per executor (e.g. 1000M, 2G) (Default: 1G).

  --proxy-user NAME          User to impersonate when submitting the application.

                              This argument does not work with --principal / --keytab.

  --help, -h                  Show this help message and exit.

  --verbose, -v              Print additional debug output.

  --version,                  Print the version of current Spark.

Cluster deploy mode only:

  --driver-cores NUM          Number of cores used by the driver, only in cluster mode

                              (Default: 1).

Spark standalone or Mesos with cluster deploy mode only:

  --supervise                If given, restarts the driver on failure.

Spark standalone, Mesos or K8s with cluster deploy mode only:

  --kill SUBMISSION_ID        If given, kills the driver specified.

  --status SUBMISSION_ID      If given, requests the status of the driver specified.

Spark standalone, Mesos and Kubernetes only:

  --total-executor-cores NUM  Total cores for all executors.

Spark standalone, YARN and Kubernetes only:

  --executor-cores NUM        Number of cores used by each executor. (Default: 1 in

                              YARN and K8S modes, or all available cores on the worker

                              in standalone mode).

Spark on YARN and Kubernetes only:

  --num-executors NUM        Number of executors to launch (Default: 2).

                              If dynamic allocation is enabled, the initial number of

                              executors will be at least NUM.

  --principal PRINCIPAL      Principal to be used to login to KDC.

  --keytab KEYTAB            The full path to the file that contains the keytab for the

                              principal specified above.

Spark on YARN only:

  --queue QUEUE_NAME          The YARN queue to submit to (Default: "default").

  --archives ARCHIVES        Comma separated list of archives to be extracted into the

                              working directory of each executor.


V. Experiment Procedure


Unless stated otherwise, everything below is run at the scala> prompt; the code to type in is placed between the begin and end markers.

Note: for multi-line blocks like the ones below, first run :paste, then type or paste the lines, and finish with Ctrl-D (press the Ctrl and D keys together); the block is then executed.

1.Schema

=====================common Schema begin =====================

:paste

import org.apache.spark.sql.types._

val myManualSchema = new StructType(Array(

new StructField("DEST_COUNTRY_NAME", StringType, true),

new StructField("ORIGIN_COUNTRY_NAME", StringType, true),

new StructField("count", LongType, false)

))

Ctrl-D (press the Ctrl and D keys together)

(screenshot: schema definition)
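As an optional sanity check (a sketch only, reusing the CSV file from this experiment; the variable name is illustrative), you can let Spark infer a schema from the same file and compare it with myManualSchema:

:paste

// Infer the schema from the CSV header and compare it with myManualSchema
val inferred = spark.read.format("csv")
.option("header", "true")
.option("inferSchema", "true")
.load("hdfs:///mylab/mydata/spark/spark_guide_data/flight-data/csv/2010-summary.csv")
inferred.printSchema()

Ctrl-D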

=====================common Schema end =====================

2.CSV

=====================CSV begin=====================

:paste

val csv=spark.read.format("csv")

.option("header", "true")

.option("mode", "FAILFAST")

.schema(myManualSchema)

.load("hdfs:///mylab/mydata/spark/spark_guide_data/flight-data/csv/2010-summary.csv")

.take(5)    // take(5) returns an Array[Row]; print it with foreach(println) below

Ctrl-D

(screenshot: CSV read)

csv.foreach(println)

(screenshot: result)


:paste

val csvFile = spark.read.format("csv")

.option("header", "true")

.option("mode", "FAILFAST")

.schema(myManualSchema)

.load("hdfs:///mylab/mydata/spark/spark_guide_data/flight-data/csv/2010-summary.csv")

Ctrl-D

(screenshot: CSV write)

:paste

csvFile.write.format("csv")

.mode("overwrite")

.option("sep", "\t")

.save("hdfs:///mylab/mydata/spark/spark_guide_data/flight-data/csv/2010-summary-2.csv")

Ctrl-D

Open another terminal on Ubuntu:

        hdfs dfs -ls /mylab/mydata/spark/spark_guide_data/flight-data/csv/2010-summary-2.csv

(screenshot: result)
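To confirm the round trip from inside spark-shell as well, the tab-separated output written above can be read back. A sketch (the write above used the default header=false, so no header option is set here; csvBack is an illustrative name):

:paste

// Read back the tab-separated files written above, reusing the manual schema
val csvBack = spark.read.format("csv")
.option("sep", "\t")
.schema(myManualSchema)
.load("hdfs:///mylab/mydata/spark/spark_guide_data/flight-data/csv/2010-summary-2.csv")
csvBack.show(5)

Ctrl-D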

=====================CSV end===================== 

3. JSON

=====================JSON begin=====================

:paste

val json=spark.read.format("json")

.option("mode", "FAILFAST")

.schema(myManualSchema)

.load("hdfs:///mylab/mydata/spark/spark_guide_data/flight-data/json/2010-summary.json")

Ctrl-D

(screenshot: JSON read)

json.show(5)

(screenshot: result)


:paste

json.write.format("json")

.mode("overwrite")

.save("hdfs:///mylab/mydata/spark/spark_guide_data/flight-data/json/2010-summary-2.json")

Ctrl-D

(screenshot: JSON write)

Open another terminal on Ubuntu:

        hdfs dfs -ls hdfs:///mylab/mydata/spark/spark_guide_data/flight-data/json/2010-summary-2.json

(screenshot: result)
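Likewise, the JSON just written can be read back from inside spark-shell; a sketch reusing myManualSchema (jsonBack is an illustrative name):

:paste

// Read back the JSON files written above
val jsonBack = spark.read.format("json")
.schema(myManualSchema)
.load("hdfs:///mylab/mydata/spark/spark_guide_data/flight-data/json/2010-summary-2.json")
jsonBack.show(5)

Ctrl-D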

=====================JSON end=====================

4. Parquet

=====================Parquet begin=====================

:paste

val parquet=spark.read.format("parquet")

.load("hdfs:///mylab/mydata/spark/spark_guide_data/flight-data/parquet/2010-summary.parquet")

Ctrl-D

(screenshot: Parquet read)

parquet.take(5).foreach(println)

(screenshot: result)


:paste

parquet.write.format("parquet")

.mode("overwrite")

.save("hdfs:///mylab/mydata/spark/spark_guide_data/flight-data/parquet/2010-summary-2.parquet")

Ctrl-D

(screenshot: Parquet write)

:paste

json.write.format("parquet")

.mode("overwrite")

.save("hdfs:///mylab/mydata/spark/spark_guide_data/flight-data/parquet/2010-summary-3.parquet")

Ctrl-D

(screenshot: Parquet write)

Open another terminal on Ubuntu:

        hdfs dfs -ls /mylab/mydata/spark/spark_guide_data/flight-data/parquet

(screenshot: result)
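Unlike CSV and JSON, Parquet stores its schema with the data, so no manual schema is needed when reading it back. A sketch (parquetBack is an illustrative name):

:paste

// Read back the Parquet output; the schema comes from the Parquet metadata itself
val parquetBack = spark.read.format("parquet")
.load("hdfs:///mylab/mydata/spark/spark_guide_data/flight-data/parquet/2010-summary-2.parquet")
parquetBack.printSchema()
parquetBack.show(5)

Ctrl-D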

=====================Parquet end=====================

5. ORC

=====================Orc begin=====================

:paste

val orc=spark.read.format("orc")

.load("hdfs:///mylab/mydata/spark/spark_guide_data/flight-data/orc/2010-summary.orc")

orc.limit(5).show()

Ctrl-D


:paste

orc.write.format("orc")

.mode("overwrite")

.save("hdfs:///mylab/mydata/spark/spark_guide_data/flight-data/orc/2010-summary-2.orc")

Ctrl-D
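As with Parquet, ORC is self-describing; a quick sketch to read the output back and count the rows (orcBack is an illustrative name):

:paste

// Read back the ORC output and count the rows
val orcBack = spark.read.format("orc")
.load("hdfs:///mylab/mydata/spark/spark_guide_data/flight-data/orc/2010-summary-2.orc")
orcBack.printSchema()
println(orcBack.count())

Ctrl-D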

=====================Orc end=====================

6. MySQL

=====================MySQL  begin=====================

# The MySQL JDBC connector jar is assumed to already be on the spark-shell classpath (e.g. added via --jars at startup, as set up in the preparation article).

# In the MySQL client:

mysql -u test -p testdb

show tables;

create table student(

id int not null,

name varchar(20) not null

);

insert into student value(1,'zhang');

insert into student value(2,'zhao');

insert into student value(3,'li');

insert into student value(4,'qian');

create table score(

id int not null,

math int ,

eng int ,

phy int,

chem int

);

insert into score value(1,134,34,13,5);

insert into score value(2,375,388,647,656);

insert into score value(3,894,909,386,647);

insert into score value(4,184,148,334,1837);

select a.id, a.name, b.math, b.eng, b.phy, b.chem

from student a, score b

where a.id = b.id;


# Switch back to the spark-shell (Spark SQL) environment

# SELECT (method 1)

:paste

val df1=spark.read.format("jdbc")

.option("url","jdbc:mysql://master:3306/testdb")

.option("driver","com.mysql.cj.jdbc.Driver")

.option("dbtable","student")

.option("user","test")

.option("password","test")

.load()

Ctrl-D

(screenshot: select, method 1)

df1.show

(screenshot: result)
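For larger tables, the JDBC read can also be split into parallel partitions over a numeric column. The sketch below reuses the connection details above; the column, bounds and partition count are illustrative only:

:paste

// Partitioned JDBC read: Spark issues one query per partition over the id range
val dfPart = spark.read.format("jdbc")
.option("url","jdbc:mysql://master:3306/testdb")
.option("driver","com.mysql.cj.jdbc.Driver")
.option("dbtable","student")
.option("user","test")
.option("password","test")
.option("partitionColumn","id")
.option("lowerBound","1")
.option("upperBound","4")
.option("numPartitions","2")
.load()
dfPart.show

Ctrl-D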

# SELECT (method 2)

:paste

val df1=spark.read.format("jdbc")

.option("url","jdbc:mysql://master:3306")

.option("driver","com.mysql.cj.jdbc.Driver")

.option("dbtable","testdb.student")    // the Spark JDBC source has no dbname option; qualify the table as database.table instead

.option("user","test")

.option("password","test")

.load()

Ctrl-D

# Or like this:

:paste

val props = new java.util.Properties

props.setProperty("driver","com.mysql.cj.jdbc.Driver")

props.setProperty("user","test")

props.setProperty("password","test")

val url ="jdbc:mysql://master:3306/testdb"

val tablename="student"

val df1= spark.read.jdbc(url, tablename, props)

Ctrl-D

(screenshot: select, method 2)

df1.show

(screenshot: result)


# SELECT (method 3)

:paste

val props = new java.util.Properties

props.setProperty("driver","com.mysql.cj.jdbc.Driver")

props.setProperty("user","test")

props.setProperty("password","test")

val url ="jdbc:mysql://master:3306/testdb"

val queryname="""

(select a.id,a.name,b.math,b.eng,b.phy,b.chem

from student a,score b

where a.id=b.id) T

"""

val df1= spark.read.jdbc(url, queryname, props)

Ctrl-D

(screenshot: select, method 3)

df1.show

(screenshot: result)
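On Spark 2.4 and later, the JDBC source also accepts a query option, which avoids wrapping the subquery in parentheses with an alias. A sketch of the same join (dfQuery is an illustrative name):

:paste

// The query option replaces dbtable; do not set both at once
val dfQuery = spark.read.format("jdbc")
.option("url","jdbc:mysql://master:3306/testdb")
.option("driver","com.mysql.cj.jdbc.Driver")
.option("query","select a.id, a.name, b.math, b.eng, b.phy, b.chem from student a, score b where a.id = b.id")
.option("user","test")
.option("password","test")
.load()
dfQuery.show

Ctrl-D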


# INSERT

:paste

import java.util.Properties

import org.apache.spark.sql.types._

import org.apache.spark.sql.Row

val studentRDD = spark.sparkContext.parallelize(Array("13 cheng", "14 qian")).map(_.split(" "))

val schema = StructType(List(StructField("id", IntegerType, true), StructField("name", StringType, true)))

val rowRDD = studentRDD.map(p => Row(p(0).toInt, p(1).trim))

val studentDF = spark.createDataFrame(rowRDD, schema)

val prop = new Properties

prop.put("user", "test")

prop.put("password", "test")

prop.put("driver", "com.mysql.cj.jdbc.Driver")

studentDF.write.mode("append").jdbc("jdbc:mysql://master:3306/testdb", "student", prop)

Ctrl-D

(screenshot: insert)

# Check the result

:paste

val props = new java.util.Properties

props.setProperty("driver","com.mysql.cj.jdbc.Driver")

props.setProperty("user","test")

props.setProperty("password","test")

val url ="jdbc:mysql://master:3306/testdb"

val tablename="student"

val df1= spark.read.jdbc(url, tablename, props)

df1.show

Ctrl-D

(screenshot: result)
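The same student/score join can also be done on the Spark side after loading both tables over JDBC; a minimal sketch using the connection details above (props2, url2, students and scores are illustrative names):

:paste

val props2 = new java.util.Properties
props2.setProperty("driver","com.mysql.cj.jdbc.Driver")
props2.setProperty("user","test")
props2.setProperty("password","test")
val url2 = "jdbc:mysql://master:3306/testdb"
val students = spark.read.jdbc(url2, "student", props2)
val scores = spark.read.jdbc(url2, "score", props2)
// Inner join on the shared id column, performed by Spark rather than MySQL
students.join(scores, "id").show

Ctrl-D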


=====================MySQL  end=====================

Exit with :quit
