Bigdata: HBase + GeoMesa + GeoServer Cluster Deployment and Map Service Publishing

  • Basic environment configuration
  • Install Java
  • Install Hadoop
  • Install ZooKeeper
  • Install HBase
  • Install GeoMesa
  • Integrate GeoServer

Basic environment configuration

All nodes:
1. Disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service
2. Hostname mapping
[root@bdmaster ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.6.63 bdmaster
192.168.6.68 bdslave1
192.168.6.72 bdslave2
3. Disable SELinux
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

4. Time synchronization
yum install chrony
systemctl start chronyd
systemctl enable chronyd
bdmaster:
[root@bdmaster ~]# vi /etc/chrony.conf
Modify and add:
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 127.127.1.0

#allow 192.168.0.0/16
allow 192.168.6.0/24

[root@bdmaster ~]# chronyc -a makestep
[root@bdmaster ~]# chronyc sourcestats
[root@bdmaster ~]# chronyc sources -v

bdslave1 and bdslave2:
In vi /etc/chrony.conf, comment out the original server lines and add bdmaster's IP address:
server 192.168.6.63
Restart the service:
systemctl restart chronyd.service

#chronyc -a makestep
200 OK
#chronyc sourcestats
210 Number of sources = 1
...
^? bdmaster 0 6 0 - +0ns[ +0ns] +/- 0ns

Install Java

Refer to the single-node Java installation.
All nodes:
1. Remove the bundled Java:
rpm -qa|grep java
rpm -e --nodeps $(rpm -qa | grep java)
~]# java -version
-bash: /usr/bin/java: No such file or directory
2. Extract and install:
mkdir /usr/local/java/
tar -zxvf jdk-8u192-linux-x64.tar.gz -C /usr/local/java/
3. Configure environment variables:
vi /etc/profile
Add:
export JAVA_HOME=/usr/local/java/jdk1.8.0_192
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH

source /etc/profile

4. Verify
~]# java -version
java version "1.8.0_192"
Java(TM) SE Runtime Environment (build 1.8.0_192-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.192-b12, mixed mode)

Install Hadoop

1. Create the hadoop user on all nodes
useradd hadoop (create the user)
passwd hadoop (set the password; for simplicity it is easiest to use the same password, e.g. hadoop, on all three machines)
usermod -g root hadoop (add the user to the root group)

2. Configure passwordless SSH login
bdmaster:
[root@bdmaster ~]# su - hadoop
[hadoop@bdmaster ~]$ ssh-keygen -t rsa -P ''
[hadoop@bdmaster ~]$ cat .ssh/id_rsa.pub >> .ssh/authorized_keys
[hadoop@bdmaster ~]$ chmod 600 .ssh/authorized_keys

bdslave1:
[root@bdslave1 ~]# su - hadoop
[hadoop@bdslave1 ~]$ ssh-keygen -t rsa -P ''
[hadoop@bdslave1 ~]$ scp .ssh/id_rsa.pub hadoop@bdmaster:/home/hadoop/id_rsa_01.pub
bdslave2:
[root@bdslave2 ~]# su - hadoop
[hadoop@bdslave2 ~]$ ssh-keygen -t rsa -P ''
[hadoop@bdslave2 ~]$ scp .ssh/id_rsa.pub hadoop@bdmaster:/home/hadoop/id_rsa_02.pub

bdmaster:
Import the public keys from bdslave1 and bdslave2:
[hadoop@bdmaster ~]$ ls
id_rsa_01.pub id_rsa_02.pub
[hadoop@bdmaster ~]$ cat id_rsa_01.pub >> .ssh/authorized_keys
[hadoop@bdmaster ~]$ cat id_rsa_02.pub >> .ssh/authorized_keys

Distribute from bdmaster to the other nodes:
[hadoop@bdmaster ~]$ scp .ssh/authorized_keys hadoop@bdslave1:/home/hadoop/.ssh/authorized_keys
[hadoop@bdmaster ~]$ scp .ssh/authorized_keys hadoop@bdslave2:/home/hadoop/.ssh/authorized_keys

Change permissions on bdslave1 and bdslave2:
[hadoop@bdslave1 ~]$ chmod 600 .ssh/authorized_keys
[hadoop@bdslave2 ~]$ chmod 600 .ssh/authorized_keys
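
Before downloading Hadoop it is worth confirming that passwordless login now works between the nodes. A minimal check (plain ssh, nothing project-specific) is to run, for example:

[hadoop@bdmaster ~]$ ssh hadoop@bdslave1 hostname
[hadoop@bdmaster ~]$ ssh hadoop@bdslave2 hostname
[hadoop@bdslave1 ~]$ ssh hadoop@bdmaster hostname

Each command should print the remote hostname without asking for a password.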

3. Download Hadoop
$ wget https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-2.7.6/hadoop-2.7.6.tar.gz
$ tar xzvf hadoop-2.7.6.tar.gz
As root, edit the environment variables: vi /etc/profile
export HADOOP_HOME=/home/hadoop/hadoop-2.7.6
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin:$PATH

[root@bdmaster ~]# source /etc/profile

4. Modify the configuration files
[root@bdmaster ~]# su - hadoop
[hadoop@bdmaster ~]$ cd $HADOOP_HOME/etc/hadoop

hadoop-env.sh:
[hadoop@bdmaster hadoop]$ vi hadoop-env.sh
Modify JAVA_HOME and add:
export JAVA_HOME=/usr/local/java/jdk1.8.0_192
export HADOOP_PREFIX=/home/hadoop/hadoop-2.7.6

yarn-env.sh
[hadoop@bdmaster hadoop]$ vi yarn-env.sh
Add:
#export JAVA_HOME=/home/y/libexec/jdk1.6.0/
export JAVA_HOME=/usr/local/java/jdk1.8.0_192

core-site.xml:
[hadoop@bdmaster hadoop]$ vi core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://bdmaster:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/tmp</value>
  </property>
</configuration>

hdfs-site.xml:
[hadoop@bdmaster hadoop]$ vi hdfs-site.xml

<configuration>
  <property>
    <name>dfs.datanode.ipc.address</name>
    <value>0.0.0.0:50020</value>
  </property>
  <property>
    <name>dfs.datanode.http.address</name>
    <value>0.0.0.0:50075</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>

mapred-site.xml:
[hadoop@bdmaster hadoop]$ mv mapred-site.xml.template mapred-site.xml
[hadoop@bdmaster hadoop]$ vi mapred-site.xml
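The contents of mapred-site.xml are not shown here. A commonly used minimal configuration for running MapReduce on YARN (an assumption, not taken from the original) is:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>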

yarn-site.xml:
[hadoop@bdmaster hadoop]$ vi yarn-site.xml

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

slaves:
[hadoop@bdmaster hadoop]$ vi slaves
bdmaster
bdslave1
bdslave2

5. Distribute to the other two nodes
[hadoop@bdmaster ~]$ scp -r hadoop-2.7.6 hadoop@bdslave1:/home/hadoop/
[hadoop@bdmaster ~]$ scp -r hadoop-2.7.6 hadoop@bdslave2:/home/hadoop/

6. Format the NameNode
[hadoop@bdmaster bin]$ cd hadoop-2.7.6/bin/
[hadoop@bdmaster bin]$ hdfs namenode -format

7. Start HDFS and YARN
[hadoop@bdmaster bin]$ $HADOOP_HOME/sbin/start-dfs.sh
[hadoop@bdmaster bin]$ $HADOOP_HOME/sbin/start-yarn.sh

[hadoop@bdmaster bin]$ jps
12176 NodeManager
12067 ResourceManager
11864 SecondaryNameNode
11545 NameNode
11689 DataNode
[hadoop@bdslave1 ~]$ jps
9356 NodeManager
9197 DataNode
[hadoop@bdslave2 ~]$ jps
10756 NodeManager
10597 DataNode
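
Besides jps, the cluster can be sanity-checked from the command line or the web UIs (default Hadoop 2.7 ports, assuming they were not changed):

[hadoop@bdmaster ~]$ hdfs dfsadmin -report   # should report 3 live datanodes
[hadoop@bdmaster ~]$ hdfs dfs -ls /          # basic HDFS round trip
NameNode web UI: http://bdmaster:50070, ResourceManager web UI: http://bdmaster:8088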

Install ZooKeeper

1. Download and extract
[hadoop@bdmaster ~]$ wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz
[hadoop@bdmaster ~]$ tar -xzvf zookeeper-3.4.14.tar.gz
2. Modify environment variables
[root@bdmaster ~]# vi /etc/profile
export ZOOKEEPER_HOME=/home/hadoop/zookeeper-3.4.14
export PATH=$PATH:$ZOOKEEPER_HOME/bin:$ZOOKEEPER_HOME/conf
[root@bdmaster ~]# source /etc/profile

3. Configuration file
[hadoop@bdmaster ~]$ cd zookeeper-3.4.14/conf/
[hadoop@bdmaster conf]$ ls
configuration.xsl log4j.properties zoo_sample.cfg
[hadoop@bdmaster conf]$ mv zoo_sample.cfg zoo.cfg
[hadoop@bdmaster conf]$ vi zoo.cfg
Modify:
dataDir=/home/hadoop/storage/zookeeper
Add:
server.1=bdmaster:2888:3888
server.2=bdslave1:2888:3888
server.3=bdslave2:2888:3888

All nodes:
$ mkdir -p /home/hadoop/storage/zookeeper

4. Distribute
[hadoop@bdmaster ~]$ scp -r zookeeper-3.4.14 hadoop@bdslave1:/home/hadoop/
[hadoop@bdmaster ~]$ scp -r zookeeper-3.4.14 hadoop@bdslave2:/home/hadoop/

5. Set myid
All nodes:
$ cd /home/hadoop/storage/zookeeper
$ touch myid
$ echo "1" >> myid    (Note: 1 on bdmaster, 2 on bdslave1, 3 on bdslave2, as spelled out below)
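
Spelled out per node (same directory as above; using > so each file ends up containing exactly one digit):

[hadoop@bdmaster zookeeper]$ echo "1" > myid
[hadoop@bdslave1 zookeeper]$ echo "2" > myid
[hadoop@bdslave2 zookeeper]$ echo "3" > myid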

6. Start
Start ZooKeeper on the nodes in the order bdmaster > bdslave1 > bdslave2:
$ /home/hadoop/zookeeper-3.4.14/bin/zkServer.sh start

In this run the second node was elected leader:
[hadoop@bdmaster zookeeper]$ /home/hadoop/zookeeper-3.4.14/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
[hadoop@bdslave1 zookeeper]$ /home/hadoop/zookeeper-3.4.14/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: leader
[hadoop@bdslave2 zookeeper]$ /home/hadoop/zookeeper-3.4.14/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
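
Another way to confirm the ensemble is serving requests is to connect with the bundled CLI client; any node's address will do, and a fresh ensemble should show only the /zookeeper znode:

[hadoop@bdmaster ~]$ /home/hadoop/zookeeper-3.4.14/bin/zkCli.sh -server bdmaster:2181
[zk: bdmaster:2181(CONNECTED) 0] ls /
[zookeeper]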

Install HBase

1. Download and extract
[hadoop@bdmaster ~]$ wget https://mirrors.tuna.tsinghua.edu.cn/apache/hbase/1.3.4/hbase-1.3.4-bin.tar.gz
[hadoop@bdmaster ~]$ tar -xzvf hbase-1.3.4-bin.tar.gz

2. Environment variables
[root@bdmaster ~]# vi /etc/profile
Add:
export HBASE_HOME=/home/hadoop/hbase-1.3.4
export PATH=$PATH:$HBASE_HOME/bin:$PATH
[root@bdmaster ~]# source /etc/profile

3. Configuration files
[root@bdmaster ~]# su - hadoop
Last login: 四 6月 6 12:55:15 CST 2019 on pts/1
[hadoop@bdmaster ~]$ cd hbase-1.3.4/conf/

hbase-env.sh:
[hadoop@bdmaster conf]$ vi hbase-env.sh
Add:
#export JAVA_HOME=/usr/java/jdk1.6.0/
export JAVA_HOME=/usr/local/java/jdk1.8.0_192
export HBASE_MANAGES_ZK=false
Comment out (the hbase-env.sh comment notes "Configure PermSize. Only needed in JDK7. You can safely remove it for JDK8+", hence disabled here):
#export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m -XX:ReservedCodeCacheSize=256m"
#export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m -XX:ReservedCodeCacheSize=256m"

hbase-site.xml:
[hadoop@bdmaster conf]$ vi hbase-site.xml
Add:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://bdmaster:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>bdmaster,bdslave1,bdslave2</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hadoop/storage/zookeeper</value>
  </property>
</configuration>

regionservers:
[hadoop@bdmaster conf]$ vi regionservers
bdmaster
bdslave1
bdslave2

4. Distribute
[hadoop@bdmaster ~]$ scp -r hbase-1.3.4 hadoop@bdslave1:/home/hadoop/
[hadoop@bdmaster ~]$ scp -r hbase-1.3.4 hadoop@bdslave2:/home/hadoop/

5. Start
[hadoop@bdmaster ~]$ $HBASE_HOME/bin/start-hbase.sh

6. Verify
[hadoop@bdmaster ~]$ jps
12176 NodeManager
12067 ResourceManager
15236 Jps
14021 QuorumPeerMain
14966 HRegionServer
11864 SecondaryNameNode
11545 NameNode
11689 DataNode
14809 HMaster

[hadoop@bdslave1 ~]$ jps
10594 QuorumPeerMain
9197 DataNode
11133 HRegionServer
11341 Jps

[hadoop@bdslave2 ~]$ jps
10597 DataNode
12535 HRegionServer
12744 Jps
12009 QuorumPeerMain

[hadoop@bdmaster ~]$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hbase-1.3.4/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop-2.7.6/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.3.4, r5d443750f65c9b17df23867964f48bbd07f9267d, Mon Apr 15 02:17:47 UTC 2019
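
As a quick smoke test of the cluster, a throwaway table can be created and scanned from the shell (table and column family names here are arbitrary):

hbase(main):001:0> status
hbase(main):002:0> create 'smoke_test', 'cf'
hbase(main):003:0> put 'smoke_test', 'row1', 'cf:a', 'value1'
hbase(main):004:0> scan 'smoke_test'
hbase(main):005:0> disable 'smoke_test'
hbase(main):006:0> drop 'smoke_test'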

Install GeoMesa

Installation package locations:
[root@bdmaster geomesa_soft]# ls
apache-maven-3.5.4-bin.tar.gz geomesa-hbase_2.11-2.2.1-bin.tar.gz
apache-tomcat-9.0.16.tar.gz geoserver-2.14.2-war.zip
[root@bdmaster geomesa_soft]# pwd
/root/geomesa_soft

1. Extract and install
[root@bdmaster geomesa_soft]# tar -zxvf geomesa-hbase_2.11-2.2.1-bin.tar.gz -C /usr/local/
[root@bdmaster local]# chown -R hadoop:root geomesa-hbase_2.11-2.2.1/

2. Configure environment variables
[root@bdmaster geomesa_soft]# vi /etc/profile
Add:
export GEOMESA_HBASE_HOME=/usr/local/geomesa-hbase_2.11-2.2.1
export PATH=.:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${HBASE_HOME}/bin:${GEOMESA_HBASE_HOME}/bin:$PATH
[root@bdmaster geomesa_soft]# source /etc/profile

GeoMesa recommends configuring its environment variables in its own geomesa-env.sh:
[root@bdmaster geomesa_soft]# vi /usr/local/geomesa-hbase_2.11-2.2.1/conf/geomesa-env.sh
export HADOOP_HOME=/home/hadoop/hadoop-2.7.6
export HBASE_HOME=/home/hadoop/hbase-1.3.4
export GEOMESA_HBASE_HOME=/usr/local/geomesa-hbase_2.11-2.2.1
export PATH="${PATH}:${GEOMESA_HOME}/bin"

Distribute:
[root@bdmaster local]# scp -r geomesa-hbase_2.11-2.2.1 root@bdslave1:/usr/local/
[root@bdmaster local]# scp -r geomesa-hbase_2.11-2.2.1 root@bdslave2:/usr/local/

Change ownership:
[root@bdslave1 ~]# chown -R hadoop:root /usr/local/geomesa-hbase_2.11-2.2.1/
[root@bdslave2 ~]# chown -R hadoop:root /usr/local/geomesa-hbase_2.11-2.2.1/

3. Copy the GeoMesa distributed runtime JAR to the designated directory
[root@bdmaster ~]# su - hadoop
Last login: 四 6月 6 13:25:22 CST 2019 on pts/1
[hadoop@bdmaster ~]$ hdfs dfs -mkdir /hbase/lib
[hadoop@bdmaster ~]$ hdfs dfs -put /usr/local/geomesa-hbase_2.11-2.2.1/dist/hbase/geomesa-hbase-distributed-runtime_2.11-2.2.1.jar /hbase/lib
[hadoop@bdmaster ~]$ hadoop fs -ls /hbase/lib
Found 1 items
-rw-r--r-- 2 hadoop supergroup 40568843 2019-06-06 14:28 /hbase/lib/geomesa-hbase-distributed-runtime_2.11-2.2.1.jar

Note: to make sure this takes effect, the JAR can also be copied into the HBase lib directory on every node.
All nodes:
~]# su - hadoop
~]$ cp /usr/local/geomesa-hbase_2.11-2.2.1/dist/hbase/geomesa-hbase-distributed-runtime_2.11-2.2.1.jar /home/hadoop/hbase-1.3.4/lib/

4. Register the coprocessors
Assuming the distributed runtime JAR has been installed under hbase.dynamic.jars.dir, the coprocessors will be registered automatically when createSchema is called on a data store. Alternatively, the coprocessors may be registered manually; see the GeoMesa documentation on Manual Coprocessors Registration for details.
The documentation gives several ways to do this; the most convenient is to add the following to HBase's configuration file hbase-site.xml.
All nodes:
$ vi /home/hadoop/hbase-1.3.4/conf/hbase-site.xml

<property>
  <name>hbase.coprocessor.user.region.classes</name>
  <value>org.locationtech.geomesa.hbase.coprocessor.GeoMesaCoprocessor</value>
</property>
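
Note that changes to hbase-site.xml (and any JARs copied into HBase's local lib directory) only take effect after HBase is restarted, so after editing the file on all nodes:

[hadoop@bdmaster ~]$ $HBASE_HOME/bin/stop-hbase.sh
[hadoop@bdmaster ~]$ $HBASE_HOME/bin/start-hbase.sh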

5. Install two plugins
All nodes:
[hadoop@bdmaster ~]$ /usr/local/geomesa-hbase_2.11-2.2.1/bin/install-jai.sh
[hadoop@bdmaster ~]$ /usr/local/geomesa-hbase_2.11-2.2.1/bin/install-jline.sh
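
To confirm the command-line tools are on the PATH and can see their dependencies, the geomesa-hbase script can be run directly; with no arguments it prints the available subcommands, and the 2.x tools also ship a version command (shown here as a hedged example):

[hadoop@bdmaster ~]$ geomesa-hbase          # prints usage and the list of subcommands
[hadoop@bdmaster ~]$ geomesa-hbase version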

Integrate GeoServer

1. Install Tomcat
[root@bdmaster geomesa_soft]# mkdir /usr/local/tomcat
[root@bdmaster geomesa_soft]# cp apache-tomcat-9.0.16.tar.gz /usr/local/tomcat/
[root@bdmaster geomesa_soft]# tar -xzvf /usr/local/tomcat/apache-tomcat-9.0.16.tar.gz -C /usr/local/tomcat

2. Environment variables
[root@bdmaster geomesa_soft]# vi /etc/profile
export CATALINA_HOME=/usr/local/tomcat/apache-tomcat-9.0.16
export CATALINA_BASE=/usr/local/tomcat/apache-tomcat-9.0.16
export PATH=.:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${HBASE_HOME}/bin:${GEOMESA_HBASE_HOME}/bin:${M2_HOME}/bin:${CATALINA_BASE}/bin:$PATH
[root@bdmaster geomesa_soft]# source /etc/profile

[root@bdmaster ~]# vi /usr/local/tomcat/apache-tomcat-9.0.16/bin/catalina.sh
Add the following before the line "# OS specific support.  $var _must_ be set to either true or false.":
JAVA_HOME=/usr/local/java/jdk1.8.0_192
JRE_HOME=${JAVA_HOME}/jre

3. Deploy GeoServer into Tomcat
[root@bdmaster geomesa_soft]# unzip /root/geomesa_soft/geoserver-2.14.2-war.zip -d geoserver
[root@bdmaster geomesa_soft]# cp geoserver/geoserver.war $CATALINA_BASE/webapps/

4. Start the Tomcat service
[root@bdmaster geomesa_soft]# $CATALINA_BASE/bin/startup.sh
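
Once Tomcat has expanded the WAR (this can take a minute), GeoServer should answer on Tomcat's default port 8080 (an assumption; adjust if the connector port was changed):

[root@bdmaster ~]# curl -I http://bdmaster:8080/geoserver/web/
# expect HTTP 200, or a redirect to the GeoServer login page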

5. JAR dependencies
[root@bdmaster geomesa_soft]# cd /usr/local/geomesa-hbase_2.11-2.2.1/dist/gs-plugins/
[root@bdmaster gs-plugins]# tar -zxvf geomesa-hbase-gs-plugin_2.11-2.2.1-install.tar.gz -C $CATALINA_BASE/webapps/geoserver/WEB-INF/lib

Edit install-hadoop.sh and update the Hadoop and htrace versions in it, then run it:
[root@bdmaster gs-plugins]# $GEOMESA_HBASE_HOME/bin/install-hadoop.sh
...
Continue? (y/n) y
fetching zookeeper-3.4.14.jar
fetching commons-configuration-1.6.jar
fetching hadoop-auth-2.7.6.jar
fetching hadoop-client-2.7.6.jar
fetching hadoop-common-2.7.6.jar
fetching hadoop-hdfs-2.7.6.jar
fetching commons-logging-1.1.3.jar
fetching commons-cli-1.2.jar
fetching commons-io-2.5.jar
fetching servlet-api-2.4.jar
fetching netty-all-4.1.17.Final.jar
fetching netty-3.6.2.Final.jar
fetching metrics-core-2.2.0.jar
fetching htrace-core-3.1.0-incubating.jar
fetching guava-12.0.1.jar

Copy these JARs into the GeoServer lib directory under Tomcat:
[root@bdmaster ~]# cd /usr/local/geomesa-hbase_2.11-2.2.1/lib/
[root@bdmaster lib]# cp hadoop-* /usr/local/tomcat/apache-tomcat-9.0.16/webapps/geoserver/WEB-INF/lib/
[root@bdmaster lib]# cp commons-configuration-1.6.jar /usr/local/tomcat/apache-tomcat-9.0.16/webapps/geoserver/WEB-INF/lib/
[root@bdmaster lib]# cp commons-logging-1.1.3.jar /usr/local/tomcat/apache-tomcat-9.0.16/webapps/geoserver/WEB-INF/lib/
[root@bdmaster lib]# cp commons-cli-1.2.jar /usr/local/tomcat/apache-tomcat-9.0.16/webapps/geoserver/WEB-INF/lib/
[root@bdmaster lib]# cp commons-io-2.5.jar /usr/local/tomcat/apache-tomcat-9.0.16/webapps/geoserver/WEB-INF/lib/
[root@bdmaster lib]# cp zookeeper-3.4.14.jar /usr/local/tomcat/apache-tomcat-9.0.16/webapps/geoserver/WEB-INF/lib/
[root@bdmaster lib]# cp servlet-api-2.4.jar /usr/local/tomcat/apache-tomcat-9.0.16/webapps/geoserver/WEB-INF/lib/
[root@bdmaster lib]# cp netty-all-4.1.17.Final.jar /usr/local/tomcat/apache-tomcat-9.0.16/webapps/geoserver/WEB-INF/lib/
[root@bdmaster lib]# cp netty-3.6.2.Final.jar /usr/local/tomcat/apache-tomcat-9.0.16/webapps/geoserver/WEB-INF/lib/
[root@bdmaster lib]# cp metrics-core-2.2.0.jar /usr/local/tomcat/apache-tomcat-9.0.16/webapps/geoserver/WEB-INF/lib/
[root@bdmaster lib]# cp htrace-core-3.1.0-incubating.jar /usr/local/tomcat/apache-tomcat-9.0.16/webapps/geoserver/WEB-INF/lib/
[root@bdmaster lib]# cp guava-12.0.1.jar /usr/local/tomcat/apache-tomcat-9.0.16/webapps/geoserver/WEB-INF/lib/

6. Create a symbolic link
[root@bdmaster ~]# ln -s /home/hadoop/hbase-1.3.4/conf/hbase-site.xml /usr/local/tomcat/apache-tomcat-9.0.16/webapps/geoserver/WEB-INF/classes/hbase-site.xml

7. Restart Tomcat
$CATALINA_BASE/bin/shutdown.sh
$CATALINA_BASE/bin/startup.sh
8. Install geomesa-tutorials
Build and ingest as the hadoop user; keep the user consistent throughout, otherwise errors will occur when publishing the service in GeoServer. Also make sure the GeoMesa home directory and related directories are owned by hadoop.

]# git clone https://github.com/geomesa/geomesa-tutorials.git
]# cd geomesa-tutorials/
]# mvn clean install -pl geomesa-tutorials-hbase/geomesa-tutorials-hbase-quickstart -am

java -cp geomesa-tutorials-hbase/geomesa-tutorials-hbase-quickstart/target/geomesa-tutorials-hbase-quickstart-2.4.0-SNAPSHOT.jar org.geomesa.example.hbase.HBaseQuickStart --hbase.zookeepers bdmaster,bdslave1,bdslave2 --hbase.catalog geomesa_hbase
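
If the quickstart succeeds it creates the geomesa_hbase catalog table and writes the sample features into it. A quick way to double-check from the command line (option names as in the GeoMesa 2.x tools; treat this as a sketch) is:

$ geomesa-hbase get-type-names --catalog geomesa_hbase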

9. Ingest the Taiwan data
$ geomesa-hbase ingest --catalog gis_osm_buildings_a_free_1 --feature-name gis_osm_buildings_a_free_1 --input-format shp "/root/taiwan-latest-free.shp/gis_osm_buildings_a_free_1.shp"
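
After the ingest finishes, the schema and a few features can be inspected before publishing, again assuming the standard GeoMesa 2.x tool options:

$ geomesa-hbase get-type-names --catalog gis_osm_buildings_a_free_1
$ geomesa-hbase export --catalog gis_osm_buildings_a_free_1 --feature-name gis_osm_buildings_a_free_1 --max-features 10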

Then log in to GeoServer, create a new data store, and publish the layer.
