PhishBuster: Scan for Phishing Pages


In our last blog post, we delved into DirBuster and its utility in pinpointing phishing pages. In this installment, we'll shift our attention to a Python script I've developed. This free tool lets you supply your own list of paths and asynchronously check whether those paths exist on a specific domain. It also gives you the option to route requests through a proxy of your choice. The results are printed in the terminal.

Phishbuster

How to install

Files in the directory
  • Create a new folder called "phishbuster".
  • In that folder, create a new file called "default_paths.tsv".
  • In the same folder, create a new file called "Scan.py".
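
If you prefer to script these setup steps, a quick sketch with Python's pathlib does the same thing:

```python
from pathlib import Path

# Create the "phishbuster" folder and the two empty files described above
project = Path("phishbuster")
project.mkdir(exist_ok=True)
(project / "default_paths.tsv").touch()
(project / "Scan.py").touch()
print(sorted(p.name for p in project.iterdir()))
```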

default_paths.tsv

Copy the data below and paste it into default_paths.tsv (the columns are separated by tabs). Save this file.

enabled path
FALSE admin
TRUE pay
FALSE login

Scan.py code

Copy the code below and paste it into Scan.py, which we created earlier. Save this file.

#By Reza Rafati 
#From Cyberwarzone.com
#Feel free to use and edit.
#Goal: Scans for specific paths on designated domains
import aiohttp
import asyncio
import csv

async def fetch_url(session, url, proxy=None, user_agent=None):
    headers = {'User-Agent': user_agent} if user_agent else None

    try:
        async with session.get(url, proxy=proxy, headers=headers) as response:
            return url, response.status
    except aiohttp.ClientError:
        return url, None


async def parse_paths_async(file_path, target, proxy=None, user_agent=None):
    tasks = []
    results = []

    async with aiohttp.ClientSession() as session:
        with open(file_path, 'r') as file:
            reader = csv.reader(file, delimiter='\t')

            # Skip the header row
            next(reader)

            for row in reader:
                enabled, path = row[0], row[1]

                # Only scan paths marked as enabled in the TSV
                if enabled.strip().upper() != 'TRUE':
                    continue

                url = target + "/" + path
                task = asyncio.create_task(fetch_url(session, url, proxy=proxy, user_agent=user_agent))
                tasks.append(task)

            for task in asyncio.as_completed(tasks):
                result = await task
                results.append(result)

    return results


file_path = 'default_paths.tsv'
target = "https://cyberwarzone.com"
selected_proxy = None
selected_user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36'
#selected_proxy = 'http://proxy.example.com:8080'


results = asyncio.run(parse_paths_async(file_path, target, proxy=selected_proxy, user_agent=selected_user_agent))

for url, status_code in results:
    if status_code is not None:
        print(f"URL: {url} | Status Code: {status_code}")
    else:
        print(f"URL: {url} | Failed to fetch")

Start Scan.py

Run the script with python Scan.py and wait for the output.


Reza Rafati, based in the Netherlands, is the founder of Cyberwarzone.com. An industry professional providing insightful commentary on infosec, cybercrime, cyberwar, and threat intelligence, Reza dedicates his work to bolster digital defenses and promote cyber awareness.
