Downloading and Basic Usage of dirsearch

Download


Download link: https://github.com/maurosoria/dirsearch


  My machine runs Windows and I already have a Python 3.6 environment, so I could simply clone the repository locally and use it right away.
  The command hints are all on the download page linked above; here is the simplest possible command (script-kiddie special, though I really should study the underlying techniques properly):

python3 dirsearch.py -u <URL> -e <EXTENSIONS>
In plain terms:
python3 dirsearch.py -u <website address> -e <language/extension>
URL is obviously the target address; as for EXTENSION, more experienced users say it specifies the language (file extension) used by the site.
Examples:
py dirsearch.py -u www.xxx.com -e php
py dirsearch.py -u www.xxx.com -e *
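
The README below also shows that -e accepts a comma-separated list of extensions, which is usually more practical than a single value. A small sketch (www.xxx.com is just a placeholder target):

```sh
# Try several extensions in one scan (comma-separated, no spaces)
python3 dirsearch.py -u www.xxx.com -e php,asp,aspx,html
```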

After downloading, there is a readme file in the root directory, which is obviously the detailed usage guide. Since the file is rather long, I have just pasted it below (haha). It is written in English; a quick machine translation makes it roughly understandable, and maybe this saves you some effort.

dirsearch
=========

Current version: v0.3.9 (2019.11.26)


Overview
--------
dirsearch is a simple command line tool designed to brute force directories and files in websites.


Installation & Usage
------------

```sh
git clone https://github.com/maurosoria/dirsearch.git
cd dirsearch
python3 dirsearch.py -u <URL> -e <EXTENSIONS>
```

You can also use an alias like the one below to always send requests through a proxy:
`python3 /path/to/dirsearch/dirsearch.py --http-proxy=localhost:8080`
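
If you want that alias to persist, you could add something like the following to your shell profile (the path and the proxy address are the placeholders from the line above, so adjust them to your own setup):

```sh
# Always run dirsearch through a local HTTP proxy (e.g. Burp listening on 8080)
alias dirsearch='python3 /path/to/dirsearch/dirsearch.py --http-proxy=localhost:8080'

# After reloading the profile, a scan is simply:
dirsearch -u www.xxx.com -e php
```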


Options
-------


```
Options:
  -h, --help            show this help message and exit

  Mandatory:
    -u URL, --url=URL   URL target
    -L URLLIST, --url-list=URLLIST
                        URL list target
    -e EXTENSIONS, --extensions=EXTENSIONS
                        Extension list separated by comma (Example: php,asp)
    -E, --extensions-list
                        Use predefined list of common extensions

  Dictionary Settings:
    -w WORDLIST, --wordlist=WORDLIST
    -l, --lowercase
    -f, --force-extensions
                        Force extensions for every wordlist entry (like in
                        DirBuster)

  General Settings:
    -s DELAY, --delay=DELAY
                        Delay between requests (float number)
    -r, --recursive     Bruteforce recursively
    -R RECURSIVE_LEVEL_MAX, --recursive-level-max=RECURSIVE_LEVEL_MAX
                        Max recursion level (subdirs) (Default: 1 [only
                        rootdir + 1 dir])
    --suppress-empty, --suppress-empty
    --scan-subdir=SCANSUBDIRS, --scan-subdirs=SCANSUBDIRS
                        Scan subdirectories of the given -u|--url (separated
                        by comma)
    --exclude-subdir=EXCLUDESUBDIRS, --exclude-subdirs=EXCLUDESUBDIRS
                        Exclude the following subdirectories during recursive
                        scan (separated by comma)
    -t THREADSCOUNT, --threads=THREADSCOUNT
                        Number of Threads
    -x EXCLUDESTATUSCODES, --exclude-status=EXCLUDESTATUSCODES
                        Exclude status code, separated by comma (example: 301,
                        500)
    --exclude-texts=EXCLUDETEXTS
                        Exclude responses by texts, separated by comma
                        (example: "Not found", "Error")
    --exclude-regexps=EXCLUDEREGEXPS
                        Exclude responses by regexps, separated by comma
                        (example: "Not foun[a-z]{1}", "^Error$")
    -c COOKIE, --cookie=COOKIE
    --ua=USERAGENT, --user-agent=USERAGENT
    -F, --follow-redirects
    -H HEADERS, --header=HEADERS
                        Headers to add (example: --header "Referer:
                        example.com" --header "User-Agent: IE"
    --random-agents, --random-user-agents

  Connection Settings:
    --timeout=TIMEOUT   Connection timeout
    --ip=IP             Resolve name to IP address
    --proxy=HTTPPROXY, --http-proxy=HTTPPROXY
                        Http Proxy (example: localhost:8080)
    --http-method=HTTPMETHOD
                        Method to use, default: GET, possible also: HEAD;POST
    --max-retries=MAXRETRIES
    -b, --request-by-hostname
                        By default dirsearch will request by IP for speed.
                        This forces requests by hostname

  Reports:
    --simple-report=SIMPLEOUTPUTFILE
                        Only found paths
    --plain-text-report=PLAINTEXTOUTPUTFILE
                        Found paths with status codes
    --json-report=JSONOUTPUTFILE
```
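
As a rough illustration of how these options combine in practice, here is one possible command built only from the flags listed above (the target and all values are placeholders to adapt):

```sh
# 30 threads, recurse one level into discovered directories,
# skip 301/403/404/500 responses, and save found paths with status codes
python3 dirsearch.py -u www.xxx.com -e php,asp \
    -t 30 -r -R 1 \
    -x 301,403,404,500 \
    --plain-text-report=report.txt
```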


Operating systems supported (it looks like nearly everything is supported)
---------------------------
- Windows XP/7/8/10
- GNU/Linux
- MacOSX

Features
--------
- Multithreaded
- Keep alive connections
- Support for multiple extensions (-e|--extensions asp,php)
- Reporting (plain text, JSON)
- Heuristically detects invalid web pages
- Recursive brute forcing
- HTTP proxy support
- User agent randomization
- Batch processing
- Request delaying
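
Several of these features map directly to the options listed earlier; for example, user agent randomization, request delaying, proxy support and JSON reporting might be combined like this (again just a sketch with placeholder values):

```sh
# Random User-Agent per request, 0.5 s delay between requests,
# traffic routed through a local proxy, results written as JSON
python3 dirsearch.py -u www.xxx.com -e php \
    --random-agents -s 0.5 \
    --http-proxy=localhost:8080 \
    --json-report=report.json
```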

About wordlists
---------------
The dictionary must be a text file. Each line will be processed as is, except when the special word %EXT% is used: in that case it generates one entry for each extension passed as an argument (-e | --extensions).

Example:
- example/
- example.%EXT%

Passing the extensions "asp" and "aspx" will generate the following dictionary:
- example/
- example.asp
- example.aspx

You can also use -f | --force-extensions switch to append extensions to every word in the wordlists (like DirBuster).
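Putting the %EXT% rule together with a custom wordlist, a minimal test could look like this (the file name custom.txt is made up for the example):

```sh
# A tiny custom dictionary: %EXT% is replaced by every extension passed via -e
cat > custom.txt << 'EOF'
admin/
login.%EXT%
backup.%EXT%
EOF

# With -e asp,aspx this effectively tries admin/, login.asp, login.aspx, backup.asp and backup.aspx
python3 dirsearch.py -u www.xxx.com -e asp,aspx -w custom.txt
```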

## Support Docker
### Install Docker Linux
Install Docker
```sh
curl -fsSL https://get.docker.com | bash
```
> To use docker you need superuser (root) privileges

### Build Image dirsearch
To create image
```sh
docker build -t "dirsearch:v0.3.8" .
```
> **dirsearch** is the name of the image and **v0.3.8** is the version

### Using dirsearch
For using
```sh
docker run -it --rm "dirsearch:v0.3.8" -u target -e php,html,png,js,jpg
```
> The target is a website (e.g. www.xxx.com) or an IP (e.g. 127.0.0.1)
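
Since the container is removed after the run (--rm), you would normally mount a host directory if you want to keep a report; a hedged sketch, where the /reports path is an arbitrary choice rather than something the image defines:

```sh
# Mount ./reports from the host so the report survives the container
docker run -it --rm -v "$PWD/reports:/reports" "dirsearch:v0.3.8" \
    -u www.xxx.com -e php,html --plain-text-report=/reports/found.txt
```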

License
-------
Copyright (C) Mauro Soria (maurosoria at gmail dot com)

License: GNU General Public License, version 2


Contributors
---------
Special thanks to these people:

- mzfr
- Damian89
- Bo0oM
- liamosaur
- redshark1802
- SUHAR1K
- FireFart
- k2l8m11n2
- vlohacks
- r0p0s3c

In the db folder under the root directory there is a file called dicc.txt, which appears to control which directories and files get scanned; you can add the entries you want to scan for, e.g. www.php, and so on. Very convenient. That's it for now; I'll update with other usage when I have time. I'm a web newbie and hope to find people to exchange ideas with.
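
If you just want to append a few entries to that default dictionary, something like this works from the dirsearch root directory (the entries themselves are only examples):

```sh
# Add custom paths to the default dictionary used by dirsearch
echo "www.php" >> db/dicc.txt
echo "admin/backup.zip" >> db/dicc.txt
```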
