Amazon is planning how to patch their remaining EC2 hosts to protect against Meltdown and Spectre. Official details are here:
We’ll probably receive a small number of maintenance notifications in the next few days. I’ll try to forward those on to account owners in a timely fashion. Since Amazon is trying to fully remediate as quickly as possible, we can expect substantially less lead time than Amazon provides for normal maintenance.
Amazon’s work should protect their hypervisors, preventing the attacks from being used to break out of a VM, but you’ll still need to patch the OS inside your VM to protect at that level.
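On Linux guests with kernel 4.15 or newer, you can ask the kernel directly whether its Meltdown/Spectre mitigations are active by reading sysfs. A small sketch (the sysfs path is standard on recent kernels, but older kernels won’t have it, and `read_cpu_vulnerabilities` is just an illustrative helper):

```python
import glob
import os

def read_cpu_vulnerabilities(base="/sys/devices/system/cpu/vulnerabilities"):
    """Return a mapping of vulnerability name -> kernel-reported status.

    Returns an empty dict on kernels (or OSes) that don't expose this
    sysfs directory (it first appeared in Linux 4.15).
    """
    status = {}
    for path in glob.glob(os.path.join(base, "*")):
        with open(path) as f:
            status[os.path.basename(path)] = f.read().strip()
    return status

if __name__ == "__main__":
    # Entries like "meltdown: Mitigation: PTI" mean the patch is active.
    for name, state in sorted(read_cpu_vulnerabilities().items()):
        print(f"{name}: {state}")
```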
Amazon re:Invent was held last week in Las Vegas. We saw a lot of exciting announcements, some expected and some more surprising. Amazon has the major launches detailed here:
For our campus usage, I’m most excited about these:
- Fargate – run containers without managing the underlying servers or clusters.
- EKS (Elastic Container Service for Kubernetes) – managed Kubernetes, which may be how you’re already running containers.
- Hibernation for Spot Instances – your instance is paused rather than terminated when capacity is reclaimed, so you don’t lose in-progress work.
- New Spot Pricing Model – smooths out spot market pricing to avoid sudden surprises.
- Aurora Serverless – auto-scale database capacity, even down to zero (with a quick scale-up when you need it again)
- DynamoDB Backups – I can get rid of the scripts I wrote to back up DynamoDB; they don’t work as well as the new service.
- Comprehend – extract entities, key phrases, and sentiment from text using natural language processing.
- Translate – translate text between languages.
- SageMaker – build, train, and deploy machine learning models.
- Inter-Region VPC Peering – we’re evaluating how we can make the UOFI Active Directory available in regions outside us-east-2.
- PrivateLink – reach services over private VPC endpoints, without public internet exposure or complex VPC configuration.
- GuardDuty – use AWS’ behind-the-scenes machine learning to alert on unexpected behavior within your account.
We’ve seen a few account compromises on campus resulting from AWS IAM credentials checked into public GitHub repositories.
I encourage our customers to implement Amazon’s git-secrets package, which will automatically scan your code for keys and reject a git check-in if they’re found.
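To illustrate the kind of check git-secrets performs when you run `git secrets --register-aws`, here is a hypothetical scanner (not git-secrets itself) that flags lines matching the well-known AWS access key ID prefixes:

```python
import re

# AWS access key IDs are 20 characters starting with a known prefix
# (AKIA for long-term keys, ASIA for temporary STS keys).
AWS_KEY_ID = re.compile(r"\b(AKIA|ASIA)[0-9A-Z]{16}\b")

def find_suspect_lines(text):
    """Return (line_number, line) pairs that look like AWS access key IDs."""
    return [(i, line) for i, line in enumerate(text.splitlines(), 1)
            if AWS_KEY_ID.search(line)]
```

git-secrets wires checks like this into a pre-commit hook, so a commit containing a match is rejected before it ever reaches GitHub.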
But if you’re not putting keys in your code, where should they go? A few suggestions:
- If you’re running from an EC2 instance, you can use an EC2 role to grant access to any API calls originating from that instance. This is my preferred method because no key management is required.
- Create local profiles that store credentials outside your application. “aws configure” will get you started with the AWS CLI.
- Populate your environment variables, again pulling the data out of your code.
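To illustrate the profile approach: `aws configure` writes an INI-style file at `~/.aws/credentials`, which the CLI and SDKs read for you. A minimal sketch of what that lookup does (`load_profile` is a hypothetical helper; real SDKs also check environment variables and EC2 roles in their resolution chain):

```python
import configparser
import os

def load_profile(profile="default", path=None):
    """Read an access key pair from an AWS shared credentials file.

    The file is standard INI format, one section per profile:

        [default]
        aws_access_key_id = ...
        aws_secret_access_key = ...
    """
    path = path or os.path.expanduser("~/.aws/credentials")
    parser = configparser.ConfigParser()
    parser.read(path)
    if profile not in parser:
        return None
    section = parser[profile]
    return (section.get("aws_access_key_id"),
            section.get("aws_secret_access_key"))
```

Either way, the keys live outside your source tree, so they can’t be committed by accident.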
Amazon documents their best practices for managing AWS access keys, covering more options in more detail.
Besides handling credentials carefully, it’s useful to give your application the least privileges it needs. I recommend creating a dedicated IAM user or role for each application and granting it only the permissions it needs. Attackers tend to be most interested in credentials that allow them to launch EC2 instances. If your application doesn’t need that capability, you can dramatically limit the potential for attack.
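As a sketch, a least-privilege policy for a hypothetical application that only reads and writes one S3 bucket might look like this (the bucket name is a placeholder; attach the policy to the application’s dedicated user or role):

```python
import json

# A hypothetical least-privilege policy: the application can read and
# write objects in one bucket, and nothing else -- in particular, no
# EC2 permissions for an attacker to launch instances with.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::my-app-bucket/*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

If this credential leaks, the attacker gets one bucket’s objects, not a fleet of cryptocurrency-mining instances.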
Yesterday, Amazon announced per-second billing for EC2 and EBS. The new calculation will be used starting October 2.
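The difference is easy to quantify. A quick sketch comparing the two models (the hourly rate here is hypothetical; per-second billing carries a one-minute minimum per the announcement):

```python
def ec2_cost(seconds, hourly_rate):
    """Compare per-second billing against the old per-hour model.

    New model: billed per second of usage, with a one-minute minimum.
    Old model: any partial hour was rounded up to a full hour.
    """
    billable = max(seconds, 60)                   # one-minute minimum
    per_second = billable * hourly_rate / 3600    # new model
    per_hour = -(-seconds // 3600) * hourly_rate  # old model: ceil to hours
    return per_second, per_hour
```

A 10-minute job at a $0.10/hour rate now costs about $0.017 instead of the full $0.10 hour.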
Amazon S3 has been in the news lately:
Top Defense Contractor Left Sensitive Pentagon Files on Amazon Server With No Password
Cloud Leak: How A Verizon Partner Exposed Millions of Customer Accounts
S3’s default configuration does not allow public access to the contents of a bucket, but these stories all involve bucket or object permissions that were opened to the world. Clearly this is a common mistake, so how can we avoid it?
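One way to audit for this is programmatically: boto3’s `get_bucket_acl` and `get_object_acl` return a dict of grants, and any grant to the global AllUsers (or AuthenticatedUsers) group means the resource is open. A sketch of the check, written as a pure function over that dict shape so it’s easy to test (`public_grants` is a hypothetical helper, not an AWS API):

```python
# Group URIs AWS uses for "everyone" and "any AWS account" grants.
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(acl):
    """Return permissions granted to the world in an S3 ACL response.

    `acl` has the shape boto3's get_bucket_acl / get_object_acl return:
    {"Grants": [{"Grantee": {...}, "Permission": "READ"}, ...], ...}
    An empty result means no world-readable/writable grants were found.
    """
    return [g["Permission"] for g in acl.get("Grants", [])
            if g.get("Grantee", {}).get("URI") in PUBLIC_GROUPS]
```

Running a check like this across all your buckets on a schedule turns a silent misconfiguration into an alert.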