PhishBuster: Scan for Phishing Pages


In our last blog post, we delved into DirBuster and its utility in pinpointing particular phishing pages. In this installment, we’ll shift our attention to a Python script I’ve developed. This free tool lets you supply your own list of paths and asynchronously check whether those paths exist on a specific domain. It also gives you the flexibility to route requests through a proxy of your choice. The results are conveniently printed in the terminal.


How to install

Files in the directory
  • Create a new folder called “phishbuster”.
  • In that folder, create a new file called “default_paths.tsv”.
  • In the same folder, create a new Python file for the script (the name is up to you, for example “phishbuster.py”).
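
If you prefer to script the setup, the steps above can be sketched in Python. The folder and TSV names come from the list above; the script filename is only a suggestion:

```python
from pathlib import Path

# Create the working folder and the two empty files described above
base = Path("phishbuster")
base.mkdir(exist_ok=True)
(base / "default_paths.tsv").touch()
(base / "phishbuster.py").touch()  # suggested name for the script file
```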


Copy the data below and paste it into default_paths.tsv. Save this file.

enabled path
FALSE admin
TRUE pay
FALSE login code
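
For reference, here is how Python’s csv module reads that tab-separated file — a minimal sketch using an in-memory copy of the data above, where the first column marks whether a path is enabled and the second column holds the path itself:

```python
import csv
import io

# Sample rows matching default_paths.tsv (tab-separated, header first)
data = "enabled\tpath\nFALSE\tadmin\nTRUE\tpay\nFALSE\tlogin code\n"

reader = csv.reader(io.StringIO(data), delimiter='\t')
next(reader)  # skip the header row

# Keep only the paths marked TRUE in the enabled column
paths = [row[1] for row in reader if row[0].upper() == 'TRUE']
print(paths)  # ['pay']
```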

Copy the code below and paste it into the Python file you created previously. Save this file.

#By Reza Rafati 
#Feel free to use and edit.
#Goal: Scans for specific paths on designated domains
import aiohttp
import asyncio
import csv

async def fetch_url(session, url, proxy=None, user_agent=None):
    headers = {'User-Agent': user_agent} if user_agent else None

    try:
        async with session.get(url, proxy=proxy, headers=headers) as response:
            return url, response.status
    except aiohttp.ClientError:
        return url, None

async def parse_paths_async(file_path, proxy=None, user_agent=None):
    tasks = []
    results = []

    async with aiohttp.ClientSession() as session:
        with open(file_path, 'r') as file:
            reader = csv.reader(file, delimiter='\t')

            # Skip the header row
            next(reader, None)

            for row in reader:
                enabled, path = row[0], row[1]
                if enabled.upper() != 'TRUE':
                    continue  # only scan paths marked TRUE in the enabled column
                url = target + "/" + path
                tasks.append(asyncio.create_task(
                    fetch_url(session, url, proxy=proxy, user_agent=user_agent)))

            for task in asyncio.as_completed(tasks):
                results.append(await task)

    return results

file_path = 'default_paths.tsv'
target = ""  # fill in the target domain here, e.g. "https://example.com"
selected_proxy = None
selected_user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36'
#selected_proxy = ''

results = asyncio.run(parse_paths_async(file_path, proxy=selected_proxy, user_agent=selected_user_agent))

for url, status_code in results:
    if status_code is not None:
        print(f"URL: {url} | Status Code: {status_code}")
    else:
        print(f"URL: {url} | Failed to fetch")


Run the script with Python and wait for the output in the terminal.
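
If you only care about paths that actually resolve, you can filter the results list before printing. A small sketch, assuming results holds (url, status) tuples as returned by parse_paths_async — the URLs and status codes below are made up for illustration:

```python
# Hypothetical results, in the shape parse_paths_async returns
results = [
    ("https://example.com/pay", 200),
    ("https://example.com/admin", 404),
    ("https://example.com/login", None),  # fetch failed
]

# Keep only the paths that returned HTTP 200
hits = [(u, s) for u, s in results if s == 200]
for url, status in hits:
    print(f"HIT: {url} | Status Code: {status}")
```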

Reza Rafati

Reza Rafati, based in the Netherlands, is the founder of this site. An industry professional providing insightful commentary on infosec, cybercrime, cyberwar, and threat intelligence, Reza dedicates his work to bolstering digital defenses and promoting cyber awareness.
