Picture this: you’re hunting on a bug bounty program with a scope like *.bigcorp.com. Your trusty amass and subfinder run just dumped a list of 2 million subdomains. Running httpx to check for live hosts on every one of them from a single machine? 😓
Now imagine splitting those 2 million subdomains across 10 cloud VMs, each running httpx in parallel, and finishing the scan in hours instead of days. That’s the power of distributed command execution! Here’s why this setup is a game-changer for bug hunters:
Tackle Massive Scopes: Distribute tasks across multiple servers to scan millions of subdomains lightning-fast.
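To make the splitting part concrete, here’s a minimal sketch (my own illustration, not part of the setup below) of chopping a big subdomain list into one chunk per worker. The file names subdomains.txt and chunk_XX.txt, plus the worker count, are placeholders:

# split_subdomains.py - illustrative only: slice one big list into one chunk per worker VM
NUM_WORKERS = 10  # assumption: ten worker nodes

with open("subdomains.txt") as f:  # placeholder input file
    subdomains = [line.strip() for line in f if line.strip()]

chunk_size = -(-len(subdomains) // NUM_WORKERS)  # ceiling division
for i in range(NUM_WORKERS):
    chunk = subdomains[i * chunk_size:(i + 1) * chunk_size]
    if not chunk:
        break
    with open(f"chunk_{i:02d}.txt", "w") as out:
        out.write("\n".join(chunk) + "\n")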
A big part of my workflow relies on automation. In fact, about 60% of my tasks are automated with custom scripts that constantly look for bugs 🐞 in public projects. However, automation isn’t just about executing scripts; it’s about doing it intelligently with the right tools (subfinder, amass, nuclei, and friends).
Let’s get to the fun part: setting up your own distributed system. You’ll need a Linux machine (e.g., Kali) for the master and a few VMs or cloud instances for workers. I’ll walk you through the setup.
On all nodes (master and workers):
sudo apt update
sudo apt install python3 python3-pip python3-venv redis-server
mkdir -p /root/distshell
cd /root/distshell
python3 -m venv venv
source venv/bin/activate
pip install celery redis
Set a Redis password and allow remote connections in /etc/redis/redis.conf:
requirepass supersecret
bind 0.0.0.0
On the master node (or a dedicated Redis server), enable Redis and restart it so the config changes take effect:
sudo systemctl enable redis
sudo systemctl restart redis
Test Redis (use your Redis server IP, e.g., 192.168.206.130):
redis-cli -h 192.168.206.130 -p 6379 -a supersecret PING
It should return PONG. If it fails, check your firewall (sudo ufw allow 6379) or your Redis config.
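If you’d rather check from Python, redis-py is already in the virtualenv (pip install redis), so a quick sketch like this works too; swap in your own host and password:

import redis

# Broker sanity check using redis-py; host and password are the example values above.
r = redis.Redis(host="192.168.206.130", port=6379, password="supersecret")
print(r.ping())  # True means the broker is reachable and the password is accepted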
Save this as /root/distshell/tasks.py on all nodes (master and workers). This is the heart of the system: it defines how commands are executed on the workers and how results are sent back to the master.
NOTE: Make sure the Redis password and Redis server IP in the broker/backend URLs below match your setup.
import os
import logging
import subprocess

from celery import Celery

# Set up logging
logging.basicConfig(level=logging.INFO, filename='/tmp/celery_tasks.log')
logger = logging.getLogger(__name__)

# Celery configuration -- replace the password and IP with your own Redis details
app = Celery('distshell',
             broker='redis://default:supersecret@192.168.206.130:6379/0',
             backend='redis://default:supersecret@192.168.206.130:6379/0')

app.conf.update(
    task_serializer='json',
    accept_content=['json'],
    result_serializer='json',
    timezone='UTC',
    enable_utc=True,
    task_track_started=True,
    task_time_limit=300,
    broker_connection_retry_on_startup=True,
)


@app.task(name='distshell.execute_command')
def execute_command(command, run_id):
    """Run a shell command on the worker and return its output to the master."""
    logger.info(f"Executing command: {command}")
    try:
        result = subprocess.run(command, shell=True, capture_output=True, text=True, timeout=300)
        output = result.stdout + result.stderr
        status = 'success' if result.returncode == 0 else 'error'
    except subprocess.TimeoutExpired:
        output = "Command timed out"
        status = 'error'
        logger.error(f"Command timed out: {command}")
    except Exception as e:
        output = f"Command failed: {str(e)}"
        status = 'error'
        logger.error(f"Command failed: {command}, error: {str(e)}")
    return {'worker': os.uname().nodename, 'command': command, 'output': output.strip(), 'result': status, 'run_id': run_id}
Save this as /root/distshell/scheduler.py on the master node. It reads commands (from the command line or stdin), round-robins them across the worker queues, and prints results as they arrive:
import sys
import time
import uuid
import logging

from tasks import execute_command

# Set up logging
logging.basicConfig(level=logging.INFO, filename='/tmp/scheduler.log')
logger = logging.getLogger(__name__)


def main():
    logger.info("Starting scheduler")
    run_id = str(uuid.uuid4())
    print(f"Started DistShell run {run_id}. Waiting for results...")

    # Read commands: a single command from the CLI arguments,
    # or blank-line-separated commands from stdin
    if len(sys.argv) > 1:
        commands = [' '.join(sys.argv[1:])]
    else:
        commands = []
        current_command = []
        for line in sys.stdin:
            line = line.strip()
            if line:
                current_command.append(line)
            elif current_command:
                commands.append(' '.join(current_command))
                current_command = []
        if current_command:
            commands.append(' '.join(current_command))

    # Queue tasks with round-robin routing across the worker queues
    task_results = []
    queues = ['queue1', 'queue2', 'queue3']
    for i, cmd in enumerate(commands):
        queue = queues[i % len(queues)]
        logger.info(f"Queuing command '{cmd}' to {queue}")
        result = execute_command.apply_async(args=[cmd, run_id], queue=queue)
        task_results.append((result, cmd))

    # Collect results as they complete
    pending = list(task_results)
    while pending:
        still_pending = []
        for result, cmd in pending:
            if result.ready():
                if result.successful():
                    res = result.get()
                    print(f"Result from {res['worker']} (Command: {cmd}):")
                    print(res['output'])
                    logger.info(f"Received result for '{cmd}' from {res['worker']}: {res['output']}")
                else:
                    print(f"Error for command '{cmd}': Task failed")
                    logger.error(f"Task failed for command '{cmd}'")
            else:
                still_pending.append((result, cmd))
        pending = still_pending
        time.sleep(1)

    print("All commands processed.")
    logger.info("All commands processed")


if __name__ == "__main__":
    main()
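A note on input: the scheduler takes a single command as an argument, or multiple commands on stdin separated by blank lines. Here’s a hypothetical helper (my own example, with placeholder file names and simplified httpx flags) that turns chunk files like chunk_00.txt into one httpx command each, ready to pipe in with python scheduler.py < commands.txt:

# make_commands.py - hypothetical: one httpx command per chunk, separated by blank lines
import glob

with open("commands.txt", "w") as out:
    for chunk in sorted(glob.glob("chunk_*.txt")):
        live = chunk.replace("chunk_", "live_")
        out.write(f"httpx -l {chunk} -o {live}\n\n")  # the blank line ends each command

Remember that each worker runs its command locally, so the chunk files need to exist on (or be synced to) the worker nodes first.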
On each worker node, save this as /root/distshell/worker.py:
from tasks import app, execute_command
Start workers (adjust queue and worker names):
cd /root/distshell
source venv/bin/activate
celery -A worker worker --loglevel=info --concurrency=4 -Q queue1 -n worker1@%h
For additional workers (if you have more nodes):
celery -A worker worker --loglevel=info --concurrency=4 -Q queue2 -n worker2@%h
celery -A worker worker --loglevel=info --concurrency=4 -Q queue3 -n worker3@%h
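Once at least one worker is up, you can fire a quick smoke test straight from a Python shell on the master (inside the venv) before touching the scheduler. This is just an optional sanity check, assuming a worker is consuming queue1:

from tasks import execute_command

# Queue a single command on queue1 and block until the worker replies.
result = execute_command.apply_async(args=['whoami', 'smoke-test'], queue='queue1')
print(result.get(timeout=60))  # dict with the worker's hostname, output, and status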
Now let’s put the scheduler to work end to end. On the master node:
cd /root/distshell
source venv/bin/activate
python scheduler.py 'echo $(hostname) says `id`'
Output:
Started DistShell run <some_uuid>. Waiting for results...
Result from worker1_hostname (Command: echo $(hostname) says `id`):
worker1_hostname says uid=0(root) gid=0(root) groups=0(root)
All commands processed.
Check the logs to debug:
cat /tmp/scheduler.log
cat /tmp/celery_tasks.log
This setup is like having a personal botnet (ethical, of course!) for your recon:
Swap in any tool you like: httpx, nmap, or even custom scripts like curl -I.
Add Flower (pip install flower; celery -A tasks flower --port=5555) to monitor tasks live at http://<master_ip>:5555.
I’ve used this to blast through massive scopes, like running httpx on thousands of subdomains or chaining amass with dirb. It’s saved me hours and helped me find bugs faster.
Distributed command execution with Celery is your secret weapon for scaling bug bounty recon. It’s like having a team of hackers working for you, minus the extra Red Bulls.
With over 13 years of experience in the IT industry, I bring a wealth of knowledge and expertise to the table. My focus lies in Cyber Security (Red & Blue Team), cloud technologies (AWS/GCP), frameworks (MITRE ATT&CK, CIS), ISO standards, and advanced security tools (SIEM, IDS/IPS, DLP), where I have honed my skills in architecting robust and secure solutions. My forte is designing enterprise-level solutions tailored to the unique needs of organizations, leveraging the latest technologies and best practices.
I’m also a DevSecOps practitioner with deep expertise in cloud platforms like AWS, Azure, and GCP, and well-versed in modern DevOps tools such as Docker, Kubernetes, Jenkins, Terraform, and Ansible. I have worked extensively on CI/CD pipeline development, infrastructure automation, and container orchestration, helping organizations build scalable, efficient, and reliable DevOps and DevSecOps environments.
— — — —— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —
Try it out, play with the setup, and share your results on X (@dheerajkmadhukar) or LinkedIn (@DheerajMadhukar). Got questions or epic bounties? DM me, and let’s keep making the internet safer, one bug at a time! 🐞
Happy hunting, and may your next bug be a critical one! 🚀
IF YOU WANT ME TO DEMONSTRATE THIS OR YOU NEED A VIDEO, DROP A COMMENT!