Hacking at Scale: Crush Massive Target Scopes & Supercharge Your Bug Bounty
This article walks through a distributed command-execution system built on Celery and Redis for large-scale bug bounty recon: a master node queues tasks that multiple worker nodes run in parallel (httpx, nmap, and more), dramatically speeding up scanning.

Picture this: you're hunting on a bug bounty program with a scope like *.bigcorp.com. Your trusty amass and subfinder run just dumped a list of 2 million subdomains. Running httpx to check for live hosts on each one from a single machine? 😓
Now imagine splitting those 2 million subdomains across 10 cloud VMs, each running httpx in parallel, and finishing the scan in hours instead of days. That's the power of distributed command execution! Here's why this setup is a game-changer for bug hunters:

  • Tackle Massive Scopes: Distribute tasks across multiple servers to scan millions of subdomains lightning-fast.
  • Save Time: Parallel execution means you’re finding live hosts (and potential vulns) while others are still waiting.
  • Run Any Tool: From httpx to nmap to custom scripts, this system handles it all.
  • Find Bugs Faster: Speed up recon to focus on chaining vulns for that critical bounty payout.

A big part of my workflow relies on automation. In fact, about 60% of my tasks are automated with custom scripts that constantly look for bugs 🐞 in public projects. However, automation isn't just about executing scripts; it's about doing it intelligently.

Automating workflows

Here's how the pieces fit together:
  • Master Node: Your main machine queues commands (e.g., subfinder, amass, nuclei).
  • Worker Nodes: Other machines (VMs, cloud instances) execute those commands in parallel.
  • Redis: A fast message broker that coordinates tasks between the master and workers.
  • Round-Robin Scheduling: Tasks are distributed evenly across the worker queues, so no single machine gets overwhelmed (see the short sketch just after this list).
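
To make the round-robin idea concrete, here is a tiny sketch in plain Python (no Celery involved); the real scheduler shown later picks a queue for each command in exactly this way:

# Toy illustration of round-robin routing: each command goes to the next
# queue in turn, so work is spread evenly across the workers.
queues = ['queue1', 'queue2', 'queue3']
commands = ['httpx -l chunk_aa', 'httpx -l chunk_ab', 'httpx -l chunk_ac', 'httpx -l chunk_ad']

for i, cmd in enumerate(commands):
    print(f"{cmd} -> {queues[i % len(queues)]}")
# chunk_aa -> queue1, chunk_ab -> queue2, chunk_ac -> queue3, chunk_ad -> queue1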

Let's get to the fun part: setting up your own distributed system. You'll need a Linux machine (e.g., Kali) for the master and a few VMs or cloud instances as workers. I'll walk you through the setup.

On all nodes (master and workers):

sudo apt update
sudo apt install python3 python3-pip python3-venv redis-server
mkdir -p /root/distshell
cd /root/distshell
python3 -m venv venv
source venv/bin/activate
pip install celery redis

Set the Redis password in /etc/redis/redis.conf. Note that bind 0.0.0.0 exposes Redis to the network, so firewall the port so that only your worker IPs can reach it:

requirepass supersecret
bind 0.0.0.0

On the master node (or a dedicated Redis server), enable and start Redis (restart it if it was already running so the config change takes effect):

sudo systemctl enable redis
sudo systemctl start redis

Test Redis (use your Redis server IP, e.g., 192.168.206.130):

redis-cli -h 192.168.206.130 -p 6379 -a supersecret PING

Should return PONG. If it fails, check your firewall (sudo ufw allow 6379) or Redis config.

Save this as /root/distshell/tasks.py on all nodes (master and workers):

This is the heart of the system: it defines how commands are executed on the workers and how results are sent back to the master.
NOTE: Update the Redis password and Redis server IP in the broker/backend URLs to match your setup.

import os
import logging
import subprocess
from celery import Celery

# Set up logging
logging.basicConfig(level=logging.INFO, filename='/tmp/celery_tasks.log')
logger = logging.getLogger(__name__)

# Celery configuration
app = Celery('distshell',
             broker='redis://default:supersecret@192.168.206.130:6379/0',
             backend='redis://default:supersecret@192.168.206.130:6379/0')

app.conf.update(
    task_serializer='json',
    accept_content=['json'],
    result_serializer='json',
    timezone='UTC',
    enable_utc=True,
    task_track_started=True,
    task_time_limit=300,
    broker_connection_retry_on_startup=True,
)

@app.task(name='distshell.execute_command')
def execute_command(command, run_id):
    logger.info(f"Executing command: {command}")
    try:
        result = subprocess.run(command, shell=True, capture_output=True, text=True, timeout=300)
        output = result.stdout + result.stderr
        status = 'success' if result.returncode == 0 else 'error'
    except subprocess.TimeoutExpired:
        output = "Command timed out"
        status = 'error'
        logger.error(f"Command timed out: {command}")
    except Exception as e:
        output = f"Command failed: {str(e)}"
        status = 'error'
        logger.error(f"Command failed: {command}, error: {str(e)}")
    return {'worker': os.uname().nodename, 'command': command, 'output': output.strip(), 'result': status, 'run_id': run_id}
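
Before involving Redis or any workers, you can smoke-test the task function itself on the master. This is just a minimal sketch: Celery's apply() runs the task in the current process (no broker round-trip), so it only proves that command execution and result packaging behave as expected. Run it from /root/distshell with the venv active:

# python3, from the same directory as tasks.py
from tasks import execute_command

# apply() executes the task locally instead of sending it to a queue
result = execute_command.apply(args=['id', 'local-test'])
print(result.get())
# Expect a dict with 'worker', 'command', 'output', 'result' and 'run_id' keys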

Save this as /root/distshell/scheduler.py on the master node. It takes a single command as an argument, or multiple commands on stdin separated by blank lines, and round-robins them across the worker queues:

import sys
import time
import uuid
import logging
from tasks import execute_command

# Set up logging
logging.basicConfig(level=logging.INFO, filename='/tmp/scheduler.log')
logger = logging.getLogger(__name__)

def main():
    logger.info("Starting scheduler")
    run_id = str(uuid.uuid4())
    print(f"Started DistShell run {run_id}. Waiting for results...")

    # Read commands: a single command from argv, or multiple commands
    # from stdin separated by blank lines
    if len(sys.argv) > 1:
        commands = [' '.join(sys.argv[1:])]
    else:
        commands = []
        current_command = []
        for line in sys.stdin:
            line = line.strip()
            if line:
                current_command.append(line)
            elif current_command:
                commands.append(' '.join(current_command))
                current_command = []
        if current_command:
            commands.append(' '.join(current_command))

    # Queue tasks with round-robin routing
    task_results = []
    queues = ['queue1', 'queue2', 'queue3']
    for i, cmd in enumerate(commands):
        queue = queues[i % len(queues)]
        logger.info(f"Queuing command '{cmd}' to {queue}")
        result = execute_command.apply_async(args=[cmd, run_id], queue=queue)
        task_results.append((result, cmd))

    # Collect results as they complete
    completed = 0
    total = len(task_results)
    while completed < total:
        # Iterate over a copy so we can safely remove finished tasks
        for result, cmd in list(task_results):
            if result.ready():
                if result.successful():
                    res = result.get()
                    print(f"Result from {res['worker']} (Command: {cmd}):")
                    print(res['output'])
                    logger.info(f"Received result for '{cmd}' from {res['worker']}: {res['output']}")
                else:
                    print(f"Error for command '{cmd}': Task failed")
                    logger.error(f"Task failed for command '{cmd}'")
                completed += 1
                task_results.remove((result, cmd))
        time.sleep(1)

    print("All commands processed.")
    logger.info("All commands processed")

if __name__ == "__main__":
    main()

On each worker node, save this as /root/distshell/worker.py:

# worker.py simply pulls in the Celery app and task from tasks.py so that
# 'celery -A worker' can discover them on this node
from tasks import app, execute_command

Start workers (adjust queue and worker names):

cd /root/distshell
source venv/bin/activate
celery -A worker worker --loglevel=info --concurrency=4 -Q queue1 -n worker1@%h

On additional worker nodes, run the same thing but point each at its own queue (if you run fewer than three workers, trim the queues list in scheduler.py to match, or tasks routed to an unconsumed queue will never run):

celery -A worker worker --loglevel=info --concurrency=4 -Q queue2 -n worker2@%h
celery -A worker worker --loglevel=info --concurrency=4 -Q queue3 -n worker3@%h
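
The commands above tie a terminal to each worker. If you want workers to survive an SSH disconnect, one simple option is to background them; this is just a sketch (tmux, screen, or a systemd unit work equally well), using the same /root/distshell layout as above:

cd /root/distshell
source venv/bin/activate
# nohup keeps the worker alive after you log out; logs go to /tmp/celery_worker.log
nohup celery -A worker worker --loglevel=info --concurrency=4 -Q queue1 -n worker1@%h > /tmp/celery_worker.log 2>&1 &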

Let's put this to work. First, a quick sanity check from the master node:

cd /root/distshell
source venv/bin/activate
python scheduler.py 'echo $(hostname) says `id`'

Output:

Started DistShell run <some_uuid>. Waiting for results...
Result from worker1_hostname (Command: echo $(hostname) says `id`):
worker1_hostname says uid=0(root) gid=0(root) groups=0(root)
All commands processed.
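
The echo/id run above is only a sanity check. For a real scope, generate one command per chunk of your subdomain list and feed them to the scheduler on stdin; scheduler.py treats a blank line as the separator between commands and round-robins them across the queues. A sketch, assuming subdomains.txt came from your recon run, the chunk files are copied to the same path on every worker (scp, rsync, or shared storage), and httpx is installed on the workers:

cd /root/distshell
source venv/bin/activate

# Split the master list into 10,000-host chunks: chunk_aa, chunk_ab, ...
split -l 10000 subdomains.txt chunk_

# Distribute the chunks to the workers first, then queue one httpx command per chunk.
for f in chunk_*; do
    echo "httpx -l /root/distshell/$f -silent -o /tmp/live_$f.txt"
    echo ""
done | python scheduler.py

Each worker writes its own /tmp/live_* file, and the discovered live hosts also come back to the master in the task output.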

Check the logs to debug:

  • Scheduler: cat /tmp/scheduler.log
  • Workers: cat /tmp/celery_tasks.log

This setup is like having a personal botnet (ethical, of course!) for your recon:

  • Speed: Scan thousands of subdomains in parallel across multiple VMs.
  • Flexibility: Run any tool, from httpx and nmap to one-liners like curl -I.
  • Scalability: Spin up more workers on AWS or DigitalOcean to handle bigger scopes.
  • Visibility: Use Flower (pip install flower; celery -A tasks flower --port=5555) to monitor tasks live at http://<master_ip>:5555.

I’ve used this to blast through massive scopes, like running httpx on thousands of subdomains or chaining amass with dirb. It’s saved me hours and helped me find bugs faster.

Distributed command execution with Celery is your secret weapon for scaling bug bounty recon. It’s like having a team of hackers working for you, minus the extra Red Bulls.

With over 13 years of experience in the IT industry, I bring a wealth of knowledge and expertise to the table. My focus is Cyber Security (Red & Blue Team), cloud technologies (AWS/GCP), frameworks (MITRE ATT&CK, CIS), ISO standards, and advanced security tools (SIEM, IDS/IPS, DLP), where I have honed my skills in architecting robust and secure solutions. My forte is designing enterprise-level solutions tailored to the unique needs of organizations, leveraging the latest technologies and best practices.

I'm also a DevSecOps practitioner with deep expertise in cloud platforms like AWS, Azure, and GCP, and well-versed in modern DevOps tools such as Docker, Kubernetes, Jenkins, Terraform, and Ansible. I've worked extensively on CI/CD pipeline development, infrastructure automation, and container orchestration, helping organizations build scalable, efficient, and reliable DevOps & DevSecOps environments.

— — — —— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

Try it out, play with the setup, and share your results on X (@dheerajkmadhukar) or LinkedIn (@DheerajMadhukar). Got questions or epic bounties? DM me, and let’s keep making the internet safer, one bug at a time! 🐞
Happy hunting, and may your next bug be a critical one! 🚀

IF YOU WANT ME TO DEMONSTRATE THIS OR YOU NEED A VIDEO, DROP A COMMENT !

