AWVS + Xray Integration (Detailed Guide)


AWVS has a powerful crawler, while Xray has a stronger scanning engine (and the Xray community edition cannot brute-force subdomains on its own). Chaining AWVS with Xray lets the two tools cover each other's weaknesses.

 

First, install AWVS (the AWVS 13 cracked edition is used here; download link at the end of the post). The archive includes its own installation notes, so the steps are not repeated here. After installation the interface looks like this:

[Screenshot 1: AWVS interface after installation]

Once AWVS is installed, start the listener in Xray (community edition; download link at the end of the post):

xray.exe webscan --listen 127.0.0.1:8888 --html-output 456.html

(Xray ships as a standalone executable, so it is run directly; it is not a Python script.)
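If you want to script the whole chain, the listener can also be launched from Python. A minimal sketch, assuming xray.exe sits in the working directory:

```python
import subprocess

def xray_listen_cmd(host="127.0.0.1", port=8888, output="456.html"):
    # Assemble the same webscan listener command shown above
    return ["xray.exe", "webscan",
            "--listen", f"{host}:{port}",
            "--html-output", output]

# Launch in the background (requires xray.exe; left commented out here):
# listener = subprocess.Popen(xray_listen_cmd())
```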

 

[Screenshot 2: Xray listener running]

 

Next, add Targets in AWVS. Note that the AWVS UI only supports importing targets from a CSV file:

[Screenshot 3: AWVS Add Targets page]
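If you prefer the built-in CSV import over the API, a target file can be generated like this. The two-column address,description layout is an assumption about the AWVS 13 import format; check a sample export from your own instance before relying on it.

```python
import csv

def write_targets_csv(path, urls):
    # One target per row; the "address,description" layout is assumed
    with open(path, "w", newline="", encoding="utf-8") as f:
        w = csv.writer(f)
        for u in urls:
            w.writerow([u.strip(), ""])

write_targets_csv("targets.csv", ["http://testphp.vulnweb.com/"])
```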

The following script adds the targets through the AWVS REST API:

import requests
import json
import urllib3
from urllib3.exceptions import InsecureRequestWarning

urllib3.disable_warnings(InsecureRequestWarning)

apikey = '1986ad8c0a5b3df4d7028d5f3c06e936c799964eceb1549099277fc38e6393765'  # AWVS API key
headers = {'Content-Type': 'application/json', 'X-Auth': apikey}

def addTask(url, target):
    try:
        url = ''.join((url, '/api/v1/targets/add'))
        data = {"targets": [{"address": target, "description": ""}], "groups": []}
        r = requests.post(url, headers=headers, data=json.dumps(data), timeout=30, verify=False)
        result = json.loads(r.content.decode())
        return result['targets'][0]['target_id']
    except Exception as e:
        return e

def scan(url, target, Crawl, user_agent, profile_id, proxy_address, proxy_port):
    scanUrl = ''.join((url, '/api/v1/scans'))
    target_id = addTask(url, target)
    if target_id:
        data = {"target_id": target_id, "profile_id": profile_id, "incremental": False,
                "schedule": {"disable": False, "start_date": None, "time_sensitive": False}}
        try:
            configuration(url, target_id, proxy_address, proxy_port, Crawl, user_agent)
            response = requests.post(scanUrl, data=json.dumps(data), headers=headers, timeout=30, verify=False)
            result = json.loads(response.content)
            return result['target_id']
        except Exception as e:
            print(e)

def configuration(url, target_id, proxy_address, proxy_port, Crawl, user_agent):
    configuration_url = ''.join((url, '/api/v1/targets/{0}/configuration'.format(target_id)))
    data = {"scan_speed": "fast", "login": {"kind": "none"}, "ssh_credentials": {"kind": "none"},
            "sensor": False, "user_agent": user_agent, "case_sensitive": "auto",
            "limit_crawler_scope": True, "excluded_paths": [],
            "authentication": {"enabled": False},
            "proxy": {"enabled": Crawl, "protocol": "http", "address": proxy_address, "port": proxy_port},
            "technologies": [], "custom_headers": [], "custom_cookies": [], "debug": False,
            "client_certificate_password": "", "issue_tracker_id": "", "excluded_hours_id": ""}
    r = requests.patch(url=configuration_url, data=json.dumps(data), headers=headers, timeout=30, verify=False)

def main():
    Crawl = True
    proxy_address = '127.0.0.1'
    proxy_port = '8888'
    awvs_url = 'https://127.0.0.1:3443'  # AWVS URL
    with open(r'C:\Users\随风\Desktop\edu.txt', 'r', encoding='utf-8') as f:
        targets = f.readlines()
    profile_id = "11111111-1111-1111-1111-111111111111"  # Full Scan
    user_agent = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.21 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.21"  # default scan User-Agent
    if Crawl:
        profile_id = "11111111-1111-1111-1111-111111111117"  # Crawl Only
    for target in targets:
        target = target.strip()
        if scan(awvs_url, target, Crawl, user_agent, profile_id, proxy_address, int(proxy_port)):
            print("{0} added successfully".format(target))

if __name__ == '__main__':
    main()

 

Replace the following values in the script:

  • Replace apikey with your own AWVS API key.

  • Put the URLs to scan (one per line) into a text file and point the open() path in main() at it (the script as written reads edu.txt).

  • Change awvs_url to your own AWVS address.

  • To chain with Xray, set Crawl to True.

  • Set proxy_address to the proxy address (the host Xray listens on).

  • Set proxy_port to the proxy port (Xray's listen port).

  • To use a different scan type, change profile_id.

  • To use a different User-Agent, change user_agent.
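Before a large batch run, it is worth confirming that the API key and URL are actually valid. A minimal sketch, assuming the GET /api/v1/targets listing endpoint from the same AWVS 13 REST API the script already uses:

```python
def auth_headers(apikey):
    # Same header shape the batch script uses
    return {"Content-Type": "application/json", "X-Auth": apikey}

def check_connection(awvs_url, apikey):
    import requests  # deferred so auth_headers() stays dependency-free
    # Quick sanity check: list existing targets; a 401 means a bad API key
    r = requests.get(awvs_url + "/api/v1/targets",
                     headers=auth_headers(apikey), timeout=10, verify=False)
    return r.status_code

# Example (requires a running AWVS instance):
# print(check_connection("https://127.0.0.1:3443", "YOUR_API_KEY"))
```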

Original script:

https://www.imzzj.com/2020/05/18/shi-yong-python-gei-awvs13-pi-liang-tian-jia-ren-wu.html

 

Because we want to route AWVS's traffic into Xray, the proxy feature must be enabled in AWVS, which is why Crawl is set to True in the script (see the replacement notes above).
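To confirm that traffic really flows through the listener, you can push a test request through the same proxy AWVS will use. A small sketch (the target URL is just an example):

```python
def xray_proxies(address="127.0.0.1", port=8888):
    # requests-style proxy mapping pointing at the Xray listener
    proxy = f"http://{address}:{port}"
    return {"http": proxy, "https": proxy}

# Manual check once the listener is up (requires network access):
# import requests
# requests.get("http://testphp.vulnweb.com/", proxies=xray_proxies(), verify=False)
```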

 

Once configured, run the script to import the targets:

[Screenshot 4: targets imported by the script]

Because Xray is listening, it receives the traffic and starts scanning. Sensitive information has been redacted.

[Screenshot 5: Xray receiving traffic and scanning]

 

Software downloads

Link: https://pan.baidu.com/s/1j4ydVn9zKrAXfMKRznC6Og  Extraction code: kwcl
