Installing TheHive on CentOS

The whole process was completed successfully on a virtual machine running CentOS Linux release 7.2.1511 (Core).

Steps required to install TheHive:

1. Install Elasticsearch, which stores the data; it can be installed either with Docker or from the RPM package.

2. Install Cortex, create the required user accounts and obtain the API key that TheHive will use.

3. Install TheHive and configure it with the Cortex API key so that it can call Cortex analyzers during investigations.

Common Elasticsearch curl commands: https://www.cnblogs.com/remainsu/p/elasticsearch-chang-yong-curl-ming-ling.html
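For quick sanity checks during the steps below, two commonly used curl calls against the Elasticsearch HTTP API (assuming it listens on 127.0.0.1:9200, as configured later) are:

curl 'http://127.0.0.1:9200/_cluster/health?pretty'   # cluster status: green / yellow / red
curl 'http://127.0.0.1:9200/_cat/indices?v'           # list the indices (e.g. the_hive, cortex) and their document counts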

Part 1: Install Elasticsearch

1.1 Install with Docker

Note: make sure the system time is synchronized before installing, otherwise certificate-related errors will occur: ntpdate cn.pool.ntp.org

Docker CE:https://docs.docker.com/install/#supported-platforms

Docker Compose:https://docs.docker.com/compose/install/

Git:https://git-scm.com/book/en/v2/Getting-Started-Installing-Git

Install Docker by following the links above.

Step 1:

sudo sysctl -w vm.max_map_count=262144
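This setting is lost on reboot; to make it permanent, it can also be written to a sysctl configuration file, for example (the file name below is arbitrary):

echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf   # file name is just an example
sudo sysctl --system   # reload all sysctl configuration files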

Step 2:

yum install -y java-1.8.0-openjdk

Step 3: create a directory on the host to persist the data:

mkdir -p /usr/share/elasticsearch/data

chmod 777 /usr/share/elasticsearch/data

Step 4: run Elasticsearch

docker run \
--name elasticsearch \
--hostname elasticsearch \
--rm \
--publish 127.0.0.1:9200:9200 \
--publish 127.0.0.1:9300:9300 \
--volume /usr/share/elasticsearch/data:/usr/share/elasticsearch/data \
-e "http.host=0.0.0.0" \
-e "transport.host=0.0.0.0" \
-e "xpack.security.enabled=false" \
-e "cluster.name=hive" \
-e "script.inline=true" \
-e "thread_pool.index.queue_size=100000" \
-e "thread_pool.search.queue_size=100000" \
-e "thread_pool.bulk.queue_size=100000" \
docker.elastic.co/elasticsearch/elasticsearch:5.6.0
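Once the container is running, a quick check that Elasticsearch answers on the published port (security is disabled in this Docker setup, so no credentials are needed):

curl http://127.0.0.1:9200
# expect a JSON document containing "cluster_name" : "hive" and "number" : "5.6.0"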

1.2 Install from the RPM package

Note: make sure the system time is synchronized before installing, otherwise certificate-related errors will occur:

ntpdate cn.pool.ntp.org

Reference: https://github.com/TheHive-Project/TheHiveDocs/blob/master/installation/install-guide.md

Step 1:

sudo sysctl -w vm.max_map_count=262144

Step 2:

yum install -y java-1.8.0-openjdk

Step 3: create elasticsearch.repo in /etc/yum.repos.d/ with the following content:

[elasticsearch-5.x]

name=Elasticsearch repository for 5.x packages

baseurl=https://artifacts.elastic.co/packages/5.x/yum

gpgcheck=1

gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch

enabled=1

autorefresh=1

type=rpm-md

mkdir -p /etc/yum.repos.d/

cd /etc/yum.repos.d/

Step 4:

  sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

  sudo yum install elasticsearch
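Since the X-Pack plugin installed in step 6 must match the Elasticsearch version exactly, it may be safer to pin the version explicitly, for example:

sudo yum install elasticsearch-5.6.16   # same version as the x-pack-5.6.16.zip used below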

Step 5:

  cd /etc/elasticsearch/

Add the following to /etc/elasticsearch/elasticsearch.yml:

network.host: 127.0.0.1

script.inline: true

cluster.name: hive

thread_pool.index.queue_size: 100000

thread_pool.search.queue_size: 100000

thread_pool.bulk.queue_size: 100000

xpack.security.enabled: true

It is strongly recommended not to expose this service to untrusted networks.

If Elasticsearch and TheHive run on the same host (and not in Docker), edit /etc/elasticsearch/elasticsearch.yml and set the network.host parameter to 127.0.0.1. TheHive uses dynamic scripts for partial updates, so they must be enabled with script.inline: true.

The cluster name must also be set (e.g. hive), and the thread pool queue sizes must be raised to a high value (100000); the default sizes are easily overloaded.

Step 6:

Manual X-Pack installation: download https://artifacts.elastic.co/downloads/packs/x-pack/x-pack-5.6.16.zip, then install it from /usr/share/elasticsearch/bin:

./elasticsearch-plugin install file:///root/x-pack-5.6.16.zip

-> Downloading file:///root/x-pack-5.6.16.zip

[=================================================] 100%  

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

@    WARNING: plugin requires additional permissions    @

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

* java.io.FilePermission \\.\pipe\* read,write

* java.lang.RuntimePermission accessClassInPackage.com.sun.activation.registries

* java.lang.RuntimePermission getClassLoader

* java.lang.RuntimePermission setContextClassLoader

* java.lang.RuntimePermission setFactory

* java.net.SocketPermission * connect,accept,resolve

* java.security.SecurityPermission createPolicy.JavaPolicy

* java.security.SecurityPermission getPolicy

* java.security.SecurityPermission putProviderProperty.BC

* java.security.SecurityPermission setPolicy

* java.util.PropertyPermission * read,write

* javax.net.ssl.SSLPermission setHostnameVerifier

See http://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html

for descriptions of what these permissions allow and the associated risks.

Continue with installation? [y/N]y

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

@        WARNING: plugin forks a native controller        @

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

This plugin launches a native controller that is not subject to the Java

security manager nor to system call filters.

Continue with installation? [y/N]y

-> Installed x-pack

Automatic installation:

Change to /usr/share/elasticsearch/bin:

./elasticsearch-plugin install x-pack

Then crack the license check in the following file (see https://blog.csdn.net/dymkkj/article/details/91043669 for the procedure):

/usr/share/elasticsearch/plugins/x-pack/x-pack-5.6.16.jar

Step 7:

sudo systemctl enable elasticsearch.service
sudo systemctl start elasticsearch.service
sudo systemctl status elasticsearch.service

Step 8: change the user password

The default username and password are elastic:changeme:

curl --user elastic:changeme http://127.0.0.1:9200

{

  "name" : "0Gxn_d0",

  "cluster_name" : "hive",

  "cluster_uuid" : "UC3wm1J-SRO_GwVBnEPxow",

  "version" : {

    "number" : "5.6.16",

    "build_hash" : "3a740d1",

    "build_date" : "2019-03-13T15:33:36.565Z",

    "build_snapshot" : false,

    "lucene_version" : "6.6.1"

  },

  "tagline" : "You Know, for Search"

}

Change the password (reference: https://www.jianshu.com/p/f1b009113e61):

curl -XPUT -u elastic '127.0.0.1:9200/_xpack/security/user/elastic/_password' -H "Content-Type: application/json" -d '{"password" : "2020@qwerty"}'

curl --user elastic:2020@qwerty http://127.0.0.1:9200

Part 2: Install Cortex

yum install https://dl.bintray.com/thehive-project/rpm-stable/thehive-project-release-1.1.0-2.noarch.rpm

yum install cortex

yum install python-pip python2.7-dev python3-pip python3-dev ssdeep libfuzzy-dev libfuzzy2 libimage-exiftool-perl libmagic1 build-essential git libssl-dev

(Note: this dependency list comes from the Debian-based instructions in the official guide; on CentOS the closest equivalents are roughly python-devel, python3-devel, gcc, gcc-c++, openssl-devel, file-devel, ssdeep-devel and perl-Image-ExifTool, most of them from the EPEL repository.)

pip3 install cortexutils
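If you prefer to run the analyzers from a local checkout instead of the online catalog URL used in the template below, a rough sketch (the /opt path is only an example, and individual analyzers may need pip2 or pip3 dependencies) is:

cd /opt
git clone https://github.com/TheHive-Project/Cortex-Analyzers
# install the Python dependencies of every analyzer (this can take a while)
for req in Cortex-Analyzers/analyzers/*/requirements.txt; do sudo pip install -r "$req"; done
# then set analyzer.path = ["/opt/Cortex-Analyzers/analyzers"] in /etc/cortex/application.conf instead of analyzer.urls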

sudo systemctl enable cortex.service

sudo systemctl start cortex.service

sudo systemctl status cortex.service

firewall-cmd --zone=public --add-port=9001/tcp --permanent

firewall-cmd --reload
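After opening the firewall port, a quick check that Cortex is listening on port 9001:

curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:9001   # expect a 2xx/3xx status code once the service is up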

The configuration files live in /etc/cortex and need a few changes; a template is given below.
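One simple way to generate a random value for play.http.secret.key (a similar one-liner is suggested in the official installation guide) is:

cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 64 | head -n 1   # prints a 64-character random secret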

# Sample Cortex application.conf file

## SECRET KEY

#

# The secret key is used to secure cryptographic functions.

#

# IMPORTANT: If you deploy your application to several  instances,  make

# sure to use the same key.

# Add the secret key here; see the official documentation for how to generate it

play.http.secret.key="XpN0jSW03tWKGQr2MSpI6mizC0oEa8BPY3FgENBgDvCKbk0obep19pDcy2oE7tad"

#http.port = "9001"

## ElasticSearch

search {

  # Name of the index

  index = cortex

  # Address of the ElasticSearch instance

  uri = "http://127.0.0.1:9200"

}

# ElasticSearch cluster name

cluster {

name = hive

}

## Cache

#

# If an analyzer is executed against the same observable, the previous report can be returned without re-executing the

# analyzer. The cache is used only if the second job occurs within cache.job (the default is 10 minutes).

cache.job = 10 minutes

## Authentication

auth {

method.basic = true

# "provider" parameter contains the authentication provider(s). It can be multi-valued, which is useful

# for migration.

# The available auth types are:

# - services.LocalAuthSrv : passwords are stored in the user entity within ElasticSearch). No

#  configuration are required.

# - ad : use ActiveDirectory to authenticate users. The associated configuration shall be done in

#  the "ad" section below.

# - ldap : use LDAP to authenticate users. The associated configuration shall be done in the

#  "ldap" section below.

provider = [local]

ad {

# The Windows domain name in DNS format. This parameter is required if you do not use

# 'serverNames' below.

#domainFQDN = "mydomain.local"

# Optionally you can specify the host names of the domain controllers instead of using 'domainFQDN

# above. If this parameter is not set, TheHive uses 'domainFQDN'.

#serverNames = [ad1.mydomain.local, ad2.mydomain.local]

# The Windows domain name using short format. This parameter is required.

#domainName = "MYDOMAIN"

# If 'true', use SSL to connect to the domain controller.

#useSSL = true

}

ldap {

# The LDAP server name or address. The port can be specified using the 'host:port'

# syntax. This parameter is required if you don't use 'serverNames' below.

#serverName = "ldap.mydomain.local:389"

# If you have multiple LDAP servers, use the multi-valued setting 'serverNames' instead.

#serverNames = [ldap1.mydomain.local, ldap2.mydomain.local]

# Account to use to bind to the LDAP server. This parameter is required.

#bindDN = "cn=thehive,ou=services,dc=mydomain,dc=local"

# Password of the binding account. This parameter is required.

#bindPW = "***secret*password***"

# Base DN to search users. This parameter is required.

#baseDN = "ou=users,dc=mydomain,dc=local"

# Filter to search user in the directory server. Please note that {0} is replaced

# by the actual user name. This parameter is required.

#filter = "(cn={0})"

# If 'true', use SSL to connect to the LDAP directory server.

#useSSL = true

}

}

## ANALYZERS

#

analyzer {

  # Absolute path where you have pulled the Cortex-Analyzers repository.

  # Location of the analyzers

  #path = ["/opt/Cortex-Analyzers/analyzers"]

  urls = ["https://dl.bintray.com/thehive-project/cortexneurons/analyzers.json"]

  # Sane defaults. Do not change unless you know what you are doing.

  fork-join-executor {

    # Min number of threads available for analysis.

    parallelism-min = 2

    # Parallelism (threads) ... ceil(available processors * factor).

    parallelism-factor = 2.0

    # Max number of threads available for analysis.

    parallelism-max = 4

  }

}

# Location of the responders

responder {

path = ["/opt/thehive/responders","/opt/cortex/github-aacgood/Cortex-Analyzers/Responders"]

}

# It's the end my friend. Happy hunting!

After opening the Cortex web UI for the first time, initialize the database, then:


Update the database
Create the administrator account
Create a new organization
Add a user under the new organization
Generate an API key for that user

The API key obtained here goes into TheHive's configuration file so that TheHive can use Cortex.
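Before wiring the key into TheHive, it can be tested directly against the Cortex API (Cortex uses bearer-token authentication; YOUR_CORTEX_API_KEY below is a placeholder):

curl -H 'Authorization: Bearer YOUR_CORTEX_API_KEY' http://127.0.0.1:9001/api/analyzer
# a valid key returns the JSON list of analyzers enabled for the user's organization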

Part 3: Install TheHive

yum install thehive

sudo systemctl enable thehive.service

sudo systemctl start thehive.service

sudo systemctl status thehive.service

firewall-cmd --zone=public --add-port=9000/tcp --permanent

firewall-cmd --reload
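Once the configuration described below is in place and the service has been restarted, TheHive's status endpoint is a convenient way to confirm that it is up and that the Cortex connector has been loaded (assuming TheHive listens on port 9000):

curl -s http://127.0.0.1:9000/api/status
# the JSON response lists the component versions and the configured connectors (including Cortex)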

After installation, the configuration file must be modified before the service is started:

# Secret Key

# The secret key is used to secure cryptographic functions.

# WARNING: If you deploy your application on several servers, make sure to use the same key.

# This needs to be changed

play.http.secret.key="PBnMSnyrQZD8sY5J69VL0Nj9jfEs0UJnNd3Pupv5MpA2nmJ9bSmYZAoxlAv4dNQn"

# Elasticsearch

search {

  ## Basic configuration

  # Index name.

  index = the_hive

  # ElasticSearch instance address.

  uri = "http://127.0.0.1:9200/"

  ## Advanced configuration

  # Scroll keepalive.

  #keepalive = 1m

  # Scroll page size.

  #pagesize = 50

  # Number of shards

  #nbshards = 5

  # Number of replicas

  #nbreplicas = 1

  # Arbitrary settings

  #settings {

  #  # Maximum number of nested fields

  #  mapping.nested_fields.limit = 100

  #}

  ## Authentication configuration

# Per https://github.com/TheHive-Project/TheHive/issues/1055: to enable Elasticsearch
# authentication, use the keys "user" and "password" here instead of "search.username"
# and "search.password" (reported to fix the issue on both RC1 and RC2).

  #search.username = ""

  #search.password = ""

  ## SSL configuration

  #search.keyStore {

  #  path = "/path/to/keystore"

  #  type = "JKS" # or PKCS12

  #  password = "keystore-password"

  #}

  #search.trustStore {

  #  path = "/path/to/trustStore"

  #  type = "JKS" # or PKCS12

  #  password = "trustStore-password"

  #}

}

# Authentication

auth {

  # "provider" parameter contains authentication provider. It can be multi-valued (useful for migration)

  # available auth types are:

  # services.LocalAuthSrv : passwords are stored in user entity (in Elasticsearch). No configuration is required.

  # ad : use ActiveDirectory to authenticate users. Configuration is under "auth.ad" key

  # ldap : use LDAP to authenticate users. Configuration is under "auth.ldap" key

  # oauth2 : use OAuth/OIDC to authenticate users. Configuration is under "auth.oauth2" and "auth.sso" keys

  provider = [local]

  # By default, basic authentication is disabled. You can enable it by setting "method.basic" to true.

  #method.basic = true

  ad {

    # The Windows domain name in DNS format. This parameter is required if you do not use

    # 'serverNames' below.

    #domainFQDN = "mydomain.local"

    # Optionally you can specify the host names of the domain controllers instead of using 'domainFQDN

    # above. If this parameter is not set, TheHive uses 'domainFQDN'.

    #serverNames = [ad1.mydomain.local, ad2.mydomain.local]

    # The Windows domain name using short format. This parameter is required.

    #domainName = "MYDOMAIN"

    # If 'true', use SSL to connect to the domain controller.

    #useSSL = true

  }

  ldap {

    # The LDAP server name or address. The port can be specified using the 'host:port'

    # syntax. This parameter is required if you don't use 'serverNames' below.

    #serverName = "ldap.mydomain.local:389"

    # If you have multiple LDAP servers, use the multi-valued setting 'serverNames' instead.

    #serverNames = [ldap1.mydomain.local, ldap2.mydomain.local]

    # Account to use to bind to the LDAP server. This parameter is required.

    #bindDN = "cn=thehive,ou=services,dc=mydomain,dc=local"

    # Password of the binding account. This parameter is required.

    #bindPW = "***secret*password***"

    # Base DN to search users. This parameter is required.

    #baseDN = "ou=users,dc=mydomain,dc=local"

    # Filter to search user in the directory server. Please note that {0} is replaced

    # by the actual user name. This parameter is required.

    #filter = "(cn={0})"

    # If 'true', use SSL to connect to the LDAP directory server.

    #useSSL = true

  }

  oauth2 {

    # URL of the authorization server

    #clientId = "client-id"

    #clientSecret = "client-secret"

    #redirectUri = "https://my-thehive-instance.example/index.html#!/login"

    #responseType = "code"

    #grantType = "authorization_code"

    # URL from where to get the access token

    #authorizationUrl = "https://auth-site.com/OAuth/Authorize"

    #tokenUrl = "https://auth-site.com/OAuth/Token"

    # The endpoint from which to obtain user details using the OAuth token, after successful login

    #userUrl = "https://auth-site.com/api/User"

    #scope = "openid profile"

  }

  # Single-Sign On

  sso {

    # Autocreate user in database?

    #autocreate = false

    # Autoupdate its profile and roles?

    #autoupdate = false

    # Autologin user using SSO?

    #autologin = false

    # Attributes mappings

    #attributes {

    #  login = "sub"

    #  name = "name"

    #  groups = "groups"

    #  #roles = "roles"

    #}

    # Name of mapping class from user resource to backend user ('simple' or 'group')

    #mapper = group

    # Default roles for users with no groups mapped ("read", "write", "admin")

    #defaultRoles = []

    #groups {

    #  # URL to retreive groups (leave empty if you are using OIDC)

    #  #url = "https://auth-site.com/api/Groups"

    #  # Group mappings, you can have multiple roles for each group: they are merged

    #  mappings {

    #    admin-profile-name = ["admin"]

    #    editor-profile-name = ["write"]

    #    reader-profile-name = ["read"]

    #  }

    #}

  }

}

# Maximum time between two requests without requesting authentication

session {

  warning = 5m

  inactivity = 1h

}

# Max textual content length

play.http.parser.maxMemoryBuffer= 1M

# Max file size

play.http.parser.maxDiskBuffer = 1G

# Cortex

# TheHive can connect to one or multiple Cortex instances. Give each

# Cortex instance a name and specify the associated URL.

#

# In order to use Cortex, first you need to enable the Cortex module by uncommenting the next line

play.modules.enabled += connectors.cortex.CortexConnector

# This needs to be changed

cortex {

  "LOCAL CORTEX" {

    url = "http://127.0.0.1:9001"

    key ="ZN4hcdcFSJP3DmfCvYZsjGC9GyOrKj7j"

    }

  #"CORTEX-SERVER-ID" {

  #  url = ""

  #  key = ""

  #  # HTTP client configuration (SSL and proxy)

  #  ws {}

  #}

}

# MISP

# TheHive can connect to one or multiple MISP instances. Give each MISP

# instance a name and specify the associated Authkey that must  be used

# to poll events, the case template that should be used by default when

# importing events as well as the tags that must be added to cases upon

# import.

# Prior to configuring the integration with a MISP instance, you must

# enable the MISP connector. This will allow you to import events to

# and/or export cases to the MISP instance(s).

#play.modules.enabled += connectors.misp.MispConnector

misp {

  # Interval between consecutive MISP event imports in hours (h) or

  # minutes (m).

  interval = 1h

  #"MISP-SERVER-ID" {

  #  # MISP connection configuration requires at least an url and a key. The key must

  #  # be linked with a sync account on MISP.

  #  url = ""

  #  key = ""

  #

  #  # Name of the case template in TheHive that shall be used to import

  #  # MISP events as cases by default.

  #  caseTemplate = ""

  #

  #  # Optional tags to add to each observable  imported  from  an  event

  #  # available on this instance.

  #  tags = ["misp-server-id"]

  #

  #  ## MISP event filters

  #  # MISP filters is used to exclude events from the import.

  #  # Filter criteria are:

  #  # The number of attribute

  #  max-attributes = 1000

  #  # The size of its JSON representation

  #  max-size = 1 MiB

  #  # The age of the last publish date

  #  max-age = 7 days

  #  # Organization and tags

  #  exclusion {

  #    organisation = ["bad organisation", "other organisations"]

  #    tags = ["tag1", "tag2"]

  #  }

  #

  #  ## HTTP client configuration (SSL and proxy)

  #  # Truststore to use to validate the X.509 certificate of the MISP

  #  # instance if the default truststore is not sufficient.

  #  # Proxy can also be used

  #  ws {

  #    ssl.trustManager.stores = [ {

  #      path = /path/to/truststore.jks

  #    } ]

  #    proxy {

  #      host = proxy.mydomain.org

  #      port = 3128

  #    }

  #  }

  #

  #  # MISP purpose defines if this instance can be used to import events (ImportOnly), export cases (ExportOnly) or both (ImportAndExport)

  #  # Default is ImportAndExport

  #  purpose = ImportAndExport

  #} ## <-- Uncomment to complete the configuration

}

TheHive also needs to be initialized (database update) on first access.
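Initialization is normally done by clicking "Update Database" on the first page of the web UI; as far as I know that button simply calls the maintenance endpoint below, so the migration can also be triggered from the command line (treat this as an assumption if your version differs):

curl -XPOST http://127.0.0.1:9000/api/maintenance/migrate   # assumed endpoint used by the "Update Database" button in TheHive 3
# after the migration completes, create the first admin user in the web UI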

Good luck!
