“911, what’s your emergency?”
“It’s over… it’s all over… no more dev, no more prod, over two thousand customers’ websites offline…”
“OK, calm down and explain to me what happened.”
The customer’s security manager took a deep breath and whispered: “It’s like somebody has taken control of our AWS account. I’m afraid he has deleted everything… please, help me, my boss is going to kill me!”
We had already seen similar cases, so we dove in. After a quick assessment of their account and a close inspection of their last day of activity, we found that someone had pushed their AWS credentials to their public GitHub repository. And in case you are wondering: those credentials had Administrator access.
By reconstructing the timeline of the incident, we found that less than 15 minutes passed between the moment the credentials were accidentally published and the moment a crawler found them, logged into the account, and dropped a nuclear bomb that wiped out every environment. When our rescue team logged in, there was nothing left to save: no EC2 instances, no RDS databases and, worse still, no backup snapshots.
In their place, a large fleet of humongous EC2 instances was busily mining Bitcoin, running up an AWS bill that rose by the minute.
In this post-meltdown scenario, not everything was lost. Luckily, the customer had set up a cross-account backup policy, so there was a recent copy of their infrastructure on a twin AWS account. After a busy day of work, everything was up and running again.
Every time I think about this episode, I still get chills. What would have happened if we hadn’t been able to restore the infrastructure? I’m thinking about a large web agency with thousands of premium customers – one of them makes the kind of car every man dreams of having in his garage – that would have gone bankrupt overnight. Two hundred people out of a job, by the way.
And everything because of an easily avoidable human error.
So, how do you eliminate the risk of somebody stealing the secrets and access keys you use to develop on AWS? And what should you do if you have pushed your AWS admin credentials to GitHub?
Here’s what we did to save the situation when our customer called us.
Invalidating the compromised keys
The first and most urgent step is to immediately invalidate any access tied to the compromised keys, to prevent further illicit actions.
For IAM user credentials, invalidation is easy: from the IAM console, you can deactivate or delete the user’s access keys, which denies all access from that moment on.
So, if the keys belong to a plain IAM user, this step is enough to invalidate them (and that was the case in our story).
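The same step can be scripted, which is handy when seconds count. Here is a minimal sketch using the boto3 IAM API (`list_access_keys` / `update_access_key`); the user name is hypothetical, and a tiny stub stands in for `boto3.client("iam")` so the example runs without an AWS account.

```python
def disable_all_access_keys(iam, user_name):
    """Mark every access key of `user_name` as Inactive.

    Deactivating (rather than deleting) blocks API calls immediately
    while preserving the key IDs for the forensic timeline.
    """
    keys = iam.list_access_keys(UserName=user_name)["AccessKeyMetadata"]
    for key in keys:
        iam.update_access_key(UserName=user_name,
                              AccessKeyId=key["AccessKeyId"],
                              Status="Inactive")
    return [k["AccessKeyId"] for k in keys]

# In a real incident: iam = boto3.client("iam")
class _StubIAM:  # stand-in client so the sketch runs anywhere
    def __init__(self):
        self.disabled = []
    def list_access_keys(self, UserName):
        return {"AccessKeyMetadata": [{"AccessKeyId": "AKIAEXAMPLE1"},
                                      {"AccessKeyId": "AKIAEXAMPLE2"}]}
    def update_access_key(self, UserName, AccessKeyId, Status):
        self.disabled.append((AccessKeyId, Status))

iam = _StubIAM()
print(disable_all_access_keys(iam, "compromised-dev"))
# → ['AKIAEXAMPLE1', 'AKIAEXAMPLE2']
```

Keeping such a script ready means key revocation takes seconds instead of a panicked hunt through the console.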
Things change when the keys come from IAM roles: a role has no long-term credentials, so there is nothing to delete. Don’t panic: you can get the same result by revoking the role’s active sessions. A deny policy is attached to the role with a condition on the current timestamp and, by doing so, every session issued before that moment is denied all access.
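The deny policy in question is the one the IAM console’s “Revoke active sessions” button generates: a blanket `Deny` whose `aws:TokenIssueTime` condition matches every session issued before “now”. A sketch of building it (the role name in the comment is hypothetical):

```python
import json
from datetime import datetime, timezone

def revoke_sessions_policy(now=None):
    """Deny everything to sessions issued before `now` (default: current UTC time)."""
    ts = (now or datetime.now(timezone.utc)).strftime("%Y-%m-%dT%H:%M:%SZ")
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": ["*"],
            "Resource": ["*"],
            # Any session whose token was issued before `ts` loses all access.
            "Condition": {"DateLessThan": {"aws:TokenIssueTime": ts}},
        }],
    }

policy = revoke_sessions_policy()
print(json.dumps(policy, indent=2))
# With boto3 you would then attach it as an inline policy:
# iam.put_role_policy(RoleName="CompromisedRole",
#                     PolicyName="AWSRevokeOlderSessions",
#                     PolicyDocument=json.dumps(policy))
```

Note that new sessions assumed after the timestamp keep working, so legitimate workloads can resume as soon as the leak is plugged.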
Deleting the file from the Git repository
Deleting the file from the Git repository and from the local cache is the necessary next step… but it is still not enough: traces remain in the commit history, and they can still be read.
To address the problem, the GitHub documentation offers two precious resources:
- The git filter-branch command: it gets the job done, but it is a bit clunky to use.
- BFG Repo-Cleaner: a great project, and probably the cleanest way to delete any leftover trace.
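To make the first option concrete, here is a sketch of the `git filter-branch` cleanup, demonstrated end-to-end on a throwaway repository (the file name and fake key are invented for the example). On a real repository you would run the same `filter-branch` invocation, then expire the reflogs, garbage-collect, and force-push.

```python
import os
import subprocess
import tempfile

def run(*cmd, cwd):
    return subprocess.run(cmd, cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

with tempfile.TemporaryDirectory() as repo:
    run("git", "init", "-q", cwd=repo)
    run("git", "-c", "user.email=dev@example.com", "-c", "user.name=dev",
        "commit", "--allow-empty", "-q", "-m", "initial", cwd=repo)

    # Simulate the accident: commit a credentials file.
    with open(os.path.join(repo, "aws_credentials.txt"), "w") as f:
        f.write("AKIA...FAKEKEY\n")
    run("git", "add", "aws_credentials.txt", cwd=repo)
    run("git", "-c", "user.email=dev@example.com", "-c", "user.name=dev",
        "commit", "-q", "-m", "oops", cwd=repo)

    # Rewrite every commit on every ref, dropping the file from the index.
    env = dict(os.environ, FILTER_BRANCH_SQUELCH_WARNING="1")
    subprocess.run(
        ["git", "filter-branch", "--force", "--index-filter",
         "git rm --cached --ignore-unmatch aws_credentials.txt",
         "--prune-empty", "--", "--all"],
        cwd=repo, env=env, check=True, capture_output=True)

    # No commit in the rewritten history touches the file any more.
    # (In a real repo, also clean refs/original, expire reflogs, run
    # `git gc --prune=now`, and force-push the rewritten branches.)
    log = run("git", "log", "--name-only", cwd=repo)
    print("aws_credentials.txt" in log)  # → False
```

Remember that rewriting history does not un-leak the key: anyone who cloned or crawled the repository in the meantime may already have it, which is why invalidating the credentials always comes first.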
By completely removing the file from the Git repository and scrubbing every reference from its history, you can finally breathe again… maybe.
How can we make this process even more secure?
This time the customer was lucky enough to notice the problem in time, and we were able to contain the damage. But what if it had happened at the end of the day, and he had only noticed that everything was falling apart the next morning? We must provide developers with a secure environment that leaves the smallest possible margin for error.
#Tip 1: keep the attack surface to a minimum
Don’t store sensitive information inside your projects, especially if you use public Git repositories. There are plenty of cheap, easy services (AWS Secrets Manager and SSM Parameter Store, to name two) that can hold your secrets outside of your repositories. Approach every project by following security best practices.
#Tip 2: always enforce the least-privilege principle
Give your developers only the access they need to perform their job: by designing around this principle, users get only as much access as they require, and the blast radius of a human error shrinks dramatically.
(Curious about least-privilege principle? Then stay tuned: a dedicated article is coming soon!)
#Tip 3: always use short-term credentials
AWS lets you minimize how long credentials remain valid; the minimum possible lifetime is 15 minutes. Keeping credential validity to the minimum doesn’t prevent your access and secret keys from being stolen, but in most cases, a malicious actor will only find them after they have already expired. (WIN!)
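That 15-minute floor can be requested explicitly when assuming a role via STS (`DurationSeconds=900`). A sketch below; the role ARN and session name are hypothetical, and a stub stands in for `boto3.client("sts")` so the example runs without an AWS account.

```python
MIN_DURATION_SECONDS = 900  # the AWS minimum for an assumed-role session

def assume_short_lived(sts, role_arn, session_name):
    """Ask STS for credentials that expire as soon as AWS permits."""
    resp = sts.assume_role(RoleArn=role_arn,
                           RoleSessionName=session_name,
                           DurationSeconds=MIN_DURATION_SECONDS)
    # Contains AccessKeyId, SecretAccessKey, SessionToken (and Expiration
    # in a real response).
    return resp["Credentials"]

# In a real setup: sts = boto3.client("sts")
class _StubSTS:  # stand-in client so the sketch runs anywhere
    def assume_role(self, RoleArn, RoleSessionName, DurationSeconds):
        self.requested_duration = DurationSeconds
        return {"Credentials": {"AccessKeyId": "ASIAEXAMPLE",
                                "SecretAccessKey": "fake-secret",
                                "SessionToken": "fake-token"}}

sts = _StubSTS()
creds = assume_short_lived(sts, "arn:aws:iam::123456789012:role/DevRole",
                           "dev-session")
print(sts.requested_duration)  # → 900
```

Even if these temporary keys leak, they turn into useless strings a quarter of an hour after they were issued.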
We are developers on AWS too, and we know how tedious using short-term credentials for everyday work can be. What if there were a tool that took care of providing secure access to the development environment with nothing more required than a few simple clicks? That’s why, a few months ago, we came up with LookAuth, a tool that automatically generates and rotates short-term credentials, enabling a secure and hassle-free environment.
LookAuth is built by DevOps for DevOps.
Cool, isn’t it?
So, give it a try! LookAuth is in free beta: sign up now and let us know what you think about it! Your feedback is precious!
That’s all for this article. Feel free to contact us with questions, feedback, or simply to say hello.
Keep following our blog,