We’ve seen a few account compromises on campus resulting from AWS IAM credentials checked into a public GitHub repository.
I encourage our customers to implement Amazon’s git-secrets package, which automatically scans your code for keys and rejects a commit if any are found.
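Under the hood, git-secrets rejects commits that match prohibited patterns, such as the access-key-ID format that `git secrets --register-aws` installs. A minimal sketch of that kind of scan in Python (the regex approximates git-secrets’ AWS pattern; the sample value is Amazon’s documented placeholder key ID, not a real credential):

```python
import re

# AWS access-key IDs start with a known prefix (AKIA for long-term keys,
# ASIA for temporary ones) followed by 16 uppercase alphanumerics.
# This approximates the pattern `git secrets --register-aws` installs.
ACCESS_KEY_RE = re.compile(r"\b(?:AKIA|ASIA)[A-Z0-9]{16}\b")

def find_keys(text):
    """Return anything in text that looks like an AWS access key ID."""
    return ACCESS_KEY_RE.findall(text)

# AKIAIOSFODNN7EXAMPLE is Amazon's documented placeholder key ID.
print(find_keys('aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'))
```

A pre-commit hook running a check like this catches the mistake before the key ever reaches a public repository.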
But if you’re not putting keys in your code, where should they go? A few suggestions:
- If you’re running from an EC2 instance, you can attach an IAM role to the instance to grant access to any API calls originating from it. This is my preferred method because no key management is required.
- Create local profiles that store credentials outside your application. “aws configure” will get you started with the AWS CLI.
- Populate your environment variables, again pulling the data out of your code.
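For reference, `aws configure` writes named profiles to `~/.aws/credentials`, which your code selects by name instead of embedding keys. The values below are Amazon’s documented placeholder credentials, not real ones:

```ini
# ~/.aws/credentials -- created by `aws configure`
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```

Your application then picks a profile via the AWS_PROFILE environment variable or an SDK option, so nothing sensitive lives in the repository.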
Amazon documents their best practices for managing AWS access keys, which include more options and more detail.
Besides handling credentials carefully, it’s wise to grant your application the least privilege it needs. I recommend creating a dedicated IAM user or role for each application and granting it only the permissions it requires. Attackers tend to be most interested in credentials that allow them to launch EC2 instances; if your application doesn’t need that capability, withholding it dramatically limits the damage a leaked key can do.
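As a sketch of that idea, an IAM policy for an application that only reads from a single S3 bucket might look like the following (the bucket name is hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-app-bucket",
        "arn:aws:s3:::example-app-bucket/*"
      ]
    }
  ]
}
```

Because the policy grants nothing under `ec2:*`, these credentials can’t launch instances even if they leak.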
Yesterday, Amazon announced per-second billing for EC2 and EBS. The new calculation will be used starting October 2. Continue reading
Amazon S3 has been in the news lately:
Top Defense Contractor Left Sensitive Pentagon Files on Amazon Server With No Password
Cloud Leak: How A Verizon Partner Exposed Millions of Customer Accounts
S3’s default configuration does not allow public access to the contents of a bucket, but both of these stories feature bucket or object permissions that were open to the world. It’s evidently a common mistake, so how can we avoid it? Continue reading
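One sanity check worth automating: inspect each bucket’s ACL for grants to the global AllUsers group. A minimal sketch, assuming an ACL dict shaped like the response of S3’s GetBucketAcl API (the sample grant below is illustrative):

```python
# URI S3 uses for the "everyone" grantee in bucket and object ACLs
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def is_world_readable(acl):
    """Return True if any grant in the ACL targets the AllUsers group."""
    return any(
        grant.get("Grantee", {}).get("URI") == ALL_USERS
        for grant in acl.get("Grants", [])
    )

# Illustrative ACL in the shape GetBucketAcl returns
public_acl = {"Grants": [{"Grantee": {"Type": "Group", "URI": ALL_USERS},
                          "Permission": "READ"}]}
print(is_world_readable(public_acl))
```

Run over every bucket in an account, a check like this flags accidental world-readable permissions before they make the news.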
We’ll be holding free AWS labs throughout the Spring semester. Here’s the full schedule:
- January 11 – 1:00 to 4:00 p.m. in 1009 Mechanical Engineering Lab
- January 25 – 10:00 to 11:30 a.m. in 1009 Mechanical Engineering Lab
- February 8 – Remote labs from 10:00 to 11:30 a.m.
- February 22 – 9:30 to 11:00 a.m. in 1001 Mechanical Engineering Lab
- March 8 – 9:30 to 11:00 a.m. in 27 Illini Hall
- March 22 – 9:30 to 11:00 a.m. in 27 Illini Hall
- April 12 – 2:30 to 4:30 p.m. in 1009 Mechanical Engineering Lab
- April 26 – 2:30 to 4:30 p.m. in 27 Illini Hall
- May 10 – 9:30 to 11:00 a.m. in 27 Illini Hall
- June 28 – 9:30 to 11:00 a.m. in 27 Illini Hall
During each lab session, you’ll have your choice of topics:
- AWS 101: Introduction to EC2
- Identity and Access Management
- S3 and CloudFront for content distribution
- Relational Database Service
- Automating AWS with CloudFormation
- Introduction to Lambda
- Building clusters with Alces Flight
- Elastic MapReduce
You may run through multiple labs if time allows. An Amazon solutions architect will be on-site with our local staff to offer technical assistance and discuss cloud topics.
Technology Services will grant you access to a shared AWS account for the lab; you don’t need your own. Computers will be available onsite, though you’re welcome to bring your own laptop if you prefer.
Please register here to reserve your seat.
Amazon has posted their summary of this week’s S3 disruption in us-east-1. While this was just one of 60 services in one of 16 regions, it had an outsized impact on operations. A number of AWS components and third-party services depend on S3 in us-east-1, and the outage caused widespread service disruptions across the internet.
S3 was the first publicly available Amazon service, and us-east-1 was the first AWS region, which helps explain why so many services were built on this particular instance of the service.
In the summary, Amazon transparently details what went wrong as well as the measures they’re taking to ensure that this class of mistake cannot recur. The lesson I’m taking from this is to expect failures, but ensure that you never fail the same way twice.