Part 02 What To Do After Choosing a Target? | Bug Bounty
2023-11-11 16:03:10 · Source: infosecwriteups.com

Om Arora

InfoSec Write-ups

Hello Everyone,

Welcome to Part 02 of this series.

In the last part, we covered these recon topics:

  1. Subdomain Enumeration
  2. Automatic Scanners
  3. Finding Known Tech
  4. Tools For Known Bugs
  5. Screenshots
  6. URLs

So in this part we will start with:

What Is Google Dorking?

Google Dorking or Google hacking refers to using Google search techniques to hack into vulnerable sites or search for information that is not available in public search results.

The Google search engine behaves like an interpreter for search strings and operators: when you combine certain search strings with specific operators, Google surfaces results that ordinary queries never show.

This is one of the most important parts of recon; you can find things here that you cannot find anywhere else.

For the basics of Google dorks, you can refer to this:

From a hacking perspective, Exploit-DB is one of the best websites for good dorks, which are updated regularly by real users:

It is a very good collection of ready-made Google dorks you can use directly, or you can craft your own.

There are also websites that build dorks for you at the click of a button.

In my experience, these are some of the best websites to help you.
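For illustration, here are a few common dork patterns. `example.com` is a placeholder for your target; `site:`, `ext:`, `inurl:`, `intitle:`, and the `|` (OR) are standard Google search operators:

```
# Exposed logs, dumps, and backups
site:example.com ext:log | ext:sql | ext:bak

# Admin panels
site:example.com inurl:admin

# Open directory listings
site:example.com intitle:"index of"
```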

JS File Analysis

This is one of my favourite things to do, because it is easy and efficient.

First, I take all the URLs I gathered from the subdomains with waybackurls and use grep to pull out the JS files:

cat urls.txt | grep '\.js$'

This finds all the URLs that end with .js (the dot is escaped so it matches a literal dot, not any character).
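Here is a quick self-contained sketch of that filter on sample data (the file name and URLs are illustrative):

```shell
# Demo input: a few URLs, stand-ins for real waybackurls output.
printf 'https://a.com/app.js\nhttps://a.com/page.html\nhttps://a.com/lib.js\n' > urls.txt

# The escaped dot matches a literal ".", so only true .js endings pass.
cat urls.txt | grep '\.js$'
# https://a.com/app.js
# https://a.com/lib.js
```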

You can also find JS files using other tools, such as katana:

katana -u https://test.com -jc -d 2 | grep '\.js$' | sort -u > js.txt

After finding all the JS files, combine them into a single file and use httprobe or httpx to find the ones that are live, for example:

cat js.txt | httpx -mc 200 -o live-js.txt

You can analyze them manually, which gets very hard on a large scope, so you can use tools like SecretFinder; it is not very thorough, but it is faster than manual review when you have many files.

cat js.txt | while read url; do python3 SecretFinder.py -i "$url" -o cli >> secrets.txt; done

This command reads each URL from js.txt, scans it for possible information disclosures, and reports details such as API keys, passwords, etc.
If you find an API key and don't know what to do with it, use this:

https://github.com/streaak/keyhacks

Search for the type of key you found, and it will show you how to validate or exploit it, which you can then include in your report.

You can also analyze files that are no longer live via the Wayback Machine, if it has a snapshot of the old file.

You can also mine these files for directories and endpoints on the website, which can be very useful.
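A minimal sketch of pulling endpoint-like paths out of a downloaded JS file with grep; the file name, sample content, and regex are illustrative and should be adapted to the target's URL style:

```shell
# Demo input: a sample JS file, a stand-in for a real downloaded bundle.
printf 'fetch("/api/users");\nconst login = "/admin/login";\n' > app.js

# Pull out quoted path-like strings and de-duplicate them.
grep -oE '"/[a-zA-Z0-9_/.-]+"' app.js | tr -d '"' | sort -u > endpoints.txt

cat endpoints.txt
# /admin/login
# /api/users
```

The resulting endpoints.txt can then feed directly into your content-discovery tooling.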

Content Discovery

This is also a very important part of recon.

Earlier we took screenshots of all the subdomains. Go through them, open the interesting ones, play with their functionality, and decide which targets you want to hunt on before moving on to content discovery.

So content discovery can be done with many tools.

Some of the most famous ones are:

  1. FeroxBuster
  2. GoBuster
  3. Dirb
  4. Dirbuster

You can run them against all the interesting domains to find interesting endpoints. Feroxbuster can recurse into the directories it discovers, for example:

If https://google.com/test is working, it will automatically search for further directories under it, such as https://google.com/test/test123. A typical invocation (wordlist path illustrative) looks like:

feroxbuster -u https://test.com -w wordlist.txt --depth 2 -o report.txt

I have written a script using feroxbuster where you enter the subdomains, and it automatically enumerates directories for each one and saves a report; writing such a script yourself is easy too. If you want me to share it, let me know and I will.

One more important thing here is using the right wordlists. Most people use a single wordlist for everything without doing any research. Earlier we used tools to fingerprint each website's technology; use that information to pick wordlists specific to those technologies. For example, if you run a generic wordlist against a WordPress site, you will not find anything. For good wordlists, you can use SecLists:
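As a sketch, the tech-to-wordlist mapping can be as simple as a case statement. The paths below assume SecLists is cloned at $HOME/SecLists and use file names from its Discovery/Web-Content folder; verify them against your own clone:

```shell
# Pick a SecLists wordlist based on the detected technology.
# Paths assume SecLists lives at $HOME/SecLists; adjust to your setup.
pick_wordlist() {
  case "$1" in
    wordpress) echo "$HOME/SecLists/Discovery/Web-Content/CMS/wordpress.fuzz.txt" ;;
    apache)    echo "$HOME/SecLists/Discovery/Web-Content/Apache.fuzz.txt" ;;
    *)         echo "$HOME/SecLists/Discovery/Web-Content/common.txt" ;;
  esac
}

# The chosen list can then be fed to feroxbuster's -w flag.
pick_wordlist wordpress
```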

So after finding the directories, you can visit them and look for some bugs or information disclosures.

That is it for the recon part. There is a lot more to recon than this, but it would take ages to cover everything, and this series is not only about recon. If you want to know more about recon, I will make a separate series on it.

With that, Part 02 also comes to an end. I hope it helped you in some way.

In the next part we will discuss the stage where most people get stuck:

  1. What to do after Recon ?
  2. What to look for in the website ?
  3. How to know where and what bug to find?

All these questions will be discussed in the next part. This part was late because I had college exams; the next part will be on time.

Meanwhile, here are some hand-picked bonus resources that could help you:

If you want more resources or free course links, feel free to DM me on Instagram:

https://www.instagram.com/om._.arora1603/

You can also connect with me on linkedin:

https://www.linkedin.com/in/om-arora-b88340213/

Thank you for reading till the end.

Please consider following and liking if you found this helpful.

You can also support me through:

