Exploiting an S3 Bucket with a Path Folder to Access PII Info of a Bank

Hi, everyone

My name is Santosh Kumar Sha, and I'm a security researcher from India (Assam). In this article, I will describe how I exploited an S3 bucket with a path folder to access PII info of a bank using the AWS CLI.

I am now offering 1:1 sessions to share my knowledge and expertise:

topmate.io/santosh_kumar_sha

SPECIAL COVID-19 Note:

Don't go outside without a reason. Stay home, be safe, and keep others safe too. A special request to my fellow bug-bounty hunters: take care of your health and get vaccinated.

TOOLS used for the exploitation

1. Subfinder (https://github.com/projectdiscovery/subfinder)

2. httpx (https://github.com/projectdiscovery/httpx)

3. gau (https://github.com/lc/gau)

4. waybackurls (https://github.com/tomnomnom/waybackurls)

Story Behind the bug:

This is the write-up of my latest finding, where I got access to the PII info of a bank by exploiting an S3 bucket with a path folder.

As the bank did not have a public program but was in the process of launching a private bug bounty program, reporting the issue was initially a difficult task. However, the bank was highly responsive, handled the vulnerability properly, and fixed the issue in a flash, as it was a critical one.
*The bank appreciated my effort and also made me the first private invite to their new bug-bounty program.*

Here it goes:

Suppose the target name is example.com; as it was a private bug bounty program, I can't disclose the real name. Let's call it XXXX Bank.
So I started hunting on the main domain.

example.com

To gather all the subdomains and URLs from internet archives, I used subfinder, waybackurls, and gau.

Commands used:

subfinder -d example.com -silent

gau -subs example.com

waybackurls example.com

There was still a chance of missing a subdomain, so in order to stay ahead of the game and not miss any subdomain for testing, I also piped subfinder into gau and waybackurls to get all the archived URLs for every subdomain, and saved everything to a file.

So, in order to expand my target scope, I decided to gather all subdomains and URL endpoints for the target. The final commands look like this:

gau -subs example.com | grep "=" >> vul1.txt

waybackurls example.com | grep "=" >> vul2.txt

subfinder -d example.com -silent | gau -subs | grep "=" >> vul3.txt

Now, collecting all the URLs in one file and sorting out the duplicates:

cat vul1.txt vul2.txt vul3.txt | sort -u >> unique_sub.txt
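For completeness, the subfinder-to-waybackurls variant mentioned above would look like this (a sketch on my part; its output file vul4.txt would simply be added to the sort -u merge above):

subfinder -d example.com -silent | waybackurls | grep "=" >> vul4.txt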

Now the actual S3 hunting starts:

While playing around the main domain for a couple of hours with my Burp, I came across an endpoint "XXXXXX.aspx" which got my attention, as it was used for transaction data. While going through the main app, I came to know that it was also used by many other functions, like product tax invoices and sales tax invoices.

This gave me the feeling that more than one function was using the endpoint "XXXXXX.aspx" for tax invoice data. So I started fuzzing for the vulnerable endpoint, but with no success. As I was aware that it could be used by other functions, I collected all the URLs from waybackurls and the internet archive.

Now I had all the URLs with endpoints from waybackurls, gau, and the internet archive. So I used my bash skills to collect all the endpoint paths from unique_sub.txt.

cat unique_sub.txt | sed 's/^.*\.com//' | sed 's/?.*//' >> endpoint_path.txt

The above command removes everything up to the target's .com host and anything after "?", leaving bare paths. For example, https://transaction.example.com/XXXXX/transaction?id=123 becomes:

/XXXXX/transaction

and so on for every URL.

Now that I had all the endpoint paths, I appended "XXXXXX.aspx" to each of them using bash:

cat endpoint_path.txt | sed 's/$/\/XXXXXX.aspx/'

Now that I had "XXXXXX.aspx" appended to all the endpoint paths, I used my magic parameter-spraying trick to find the vulnerable endpoint:

xargs -a endpoint_path.txt -I@ bash -c 'echo "http://example.com@/XXXXXX.aspx"' | httpx -silent -status-code -http-proxy http://127.0.0.1:8080
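Since the vulnerable endpoint later turned out to live on a subdomain, a hedged variant of the same trick sprays every collected path across every live subdomain. Here subs.txt is a hypothetical file of live hosts, not something from the original methodology:

subfinder -d example.com -silent | httpx -silent > subs.txt

xargs -a subs.txt -I@ bash -c 'sed "s|^|@|; s|$|/XXXXXX.aspx|" endpoint_path.txt' | httpx -silent -status-code -http-proxy http://127.0.0.1:8080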

So, in my Burp history, I came across an endpoint which caught my attention:

https://transaction.example.com/transaction/XXXXXX.aspx

But the issue was that it was giving me a 403 error. So I tried to use the cookie and session ID of the main domain, but no luck. I tried various other bypass techniques, but there was no success.
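For context, common 403-bypass probes look something like this; these are generic illustrations, not the exact requests from this test:

curl -s -o /dev/null -w "%{http_code}\n" https://transaction.example.com/transaction/XXXXXX.aspx
curl -s -o /dev/null -w "%{http_code}\n" -H "X-Forwarded-For: 127.0.0.1" https://transaction.example.com/transaction/XXXXXX.aspx
curl -s -o /dev/null -w "%{http_code}\n" "https://transaction.example.com/transaction/XXXXXX.aspx/."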

I tried every other process I knew to bypass the 403 error, but they all failed. As all other techniques failed, I decided to go with an old-school method.

Old-school method:

I used the OPTIONS HTTP method to see which methods the endpoint allowed. Then I tried the allowed methods one by one, and a miracle happened.

When I hit the endpoint with POST, it automatically downloaded a file from the S3 bucket that was storing the transaction PDFs.
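A minimal sketch of those two requests with curl (the testing itself was done through Burp, but this shows the idea, assuming the endpoint below):

curl -s -i -X OPTIONS https://transaction.example.com/transaction/XXXXXX.aspx
curl -s -X POST https://transaction.example.com/transaction/XXXXXX.aspx -o transaction.pdf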

S3 bucket URL:

https://XXXXX.s3.us-east-1.amazonaws.com/11XXXXXXXXXX012.pdf

So, I tried to access the AWS S3 bucket using the AWS CLI, but it failed due to a proper access control policy on the S3 bucket:

aws s3 ls s3://XXXXX --region us-east-1
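A denied listing from the AWS CLI typically fails with an error along these lines (illustrative; the exact wording depends on the CLI version and operation):

An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied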

Now, the actual process of how I got access to the PII info:

I had previously exploited a scenario where the AWS S3 bucket itself had a proper access control policy, but a folder inside the bucket did not. So I decided to try that out in this situation too; the main concern, though, was that no folder name was visible in the URL while downloading the transaction PDF.

In order to find a folder in the S3 bucket, I started looking into the source code and JavaScript files manually, and luckily I got an S3 bucket name with a folder in a JavaScript file:

https://XXXXX.s3.us-east-1.amazonaws.com/document/uploadXXXX.png
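That JavaScript hunt can also be automated. A minimal sketch, assuming the site's scripts have been saved locally under ./js/:

grep -rhoE "https?://[a-zA-Z0-9.-]+\.s3[a-zA-Z0-9.-]*\.amazonaws\.com/[^\"' ]+" ./js/ | sort -u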

So now I tried the AWS CLI command on the S3 bucket with the folder name document:

aws s3 ls s3://XXXXX/document/ --region us-east-1

But still no success. At this point I started to lose hope and wanted to stop, but I decided to give it one last try. Then I had an idea: why not do directory fuzzing on the S3 bucket to get a list of other directories? So I made a custom wordlist specifically containing words commonly used in banking services.
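A minimal command-line sketch of that fuzzing idea, assuming a hypothetical wordlist bank_wordlist.txt (the actual test, described below, used Burp Intruder):

while read -r dir; do
  # an existing prefix usually answers differently (e.g. 403) than a missing one (404)
  code=$(curl -s -o /dev/null -w "%{http_code}" "https://XXXXX.s3.us-east-1.amazonaws.com/${dir}/")
  echo "${code} /${dir}/"
done < bank_wordlist.txt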

You can use any directory brute-forcing tool, but here I used Burp Intruder so I could filter the results by response length. The output shocked me: I got a list of valid folders, such as:

/kyc_details/

/banking_info/

/transaction/

And fortunately, one of the folders lacked a proper access control policy:

aws s3 ls s3://XXXXX/banking_info/ --region us-east-1

My reaction after seeing the output…

And the output was a shocker for me, as I could see all the users' banking info and PII.

Takeaway

I'm sure a lot of security researchers have already seen this process, but this is how I approached exploiting an S3 bucket with a path folder to access the PII info of a bank.

So the real learning is to be creative when exploiting an issue, and not to give up until you achieve something or learn something new from it.

That's one of the reasons I wanted to share my experience, and also to highlight other techniques for exploiting such vulnerabilities.

Support me if you like my work! Buy me a coffee and follow me on Twitter.

https://www.buymeacoffee.com/killmongar1996

Thanks for reading :)
Stay Safe.

https://twitter.com/killmongar1996


Santosh Kumar Sha (@killmongar1996)

Cloud Security | Security Researcher | Pentester | Bug Bounty Hunter | VAPT | Penetration Tester | CTF Player | topmate.io/santosh_kumar_sha