Yesterday I attended the AWS Summit and wanted to post my impressions and notes from the event. As you can see in the image below, there were quite a few people there:
The keynote was given by Amazon.com's CTO Werner Vogels. Some notes from the keynote:
- It was quite a big event: 3000+ attendees
- Intel was introduced as a platinum partner. Last month I attended AWSome Day, where they mentioned that Intel has manufactured a special chip exclusively for EC2 (C4-type instances), designed specifically to address heat considerations.
- They now have more than 1 million active customers
- According to Gartner, AWS has 5 times the compute capacity of its 14 closest competitors combined (which also explains the specific requirement to reduce heat in chips)
- There were guest speakers who emphasized different features of AWS
- GoSquared: Experimentation, how it is easier to try out new technologies in a cost-effective way
- iTv: Microservices as opposed to monolithic applications
- OmniFone: Scalability
- Just Giving: Distributed regions and data analysis capabilities
- AWS's VP of public sector: usage in the public sector. This talk started after 12:00, and people were already leaving for lunch.
New features introduced
- Elastic File System: SSD-based, automatically replicated, auto-resized file system. Currently in preview mode, will be released this summer
- Machine Learning: Makes it easier to analyze big data for search intent, demand estimation etc.
- Lambda: Event-driven, fully-managed compute service
- EC2 Container Service: There is a big shift towards Docker and containers. He said: “The container is now the output of your development process”
- Generally, a microservices approach is favored: building large solutions from smaller blocks allows faster, more cost-effective solutions that can adapt more easily to change
- Security and compliance with major certification requirements were emphasized. But he didn't mention the shared-responsibility principle that AWS adopts: just because you use AWS doesn't mean you're compliant with all the regulations yourself.
- They support hybrid solutions, but in AWS's vision hybrid is not the destination, just a transition
- He made an analogy that fighting the cloud is like "fighting gravity": it's a fight you cannot win!
After the lunch break there were a lot of sessions about various AWS features. I picked Technical Track 1, which included EC2 Container Service, Lambda, CloudFormation and CodeDeploy.
EC2 Container Service
I know using containers is a big deal nowadays, but I still haven't had the chance to try them out myself. I was hoping this session would help me find out more, but I didn't benefit much from it as it didn't cover the very basics. In light of the keynote, though, it's obvious there's huge demand for containers, so it will be the first service I try next.
Lambda
This is a very cool new service. Instead of running every small job on small EC2 instances, we can now get rid of all the maintenance and costs and just let AWS run our code whenever an event is triggered.
- It currently supports a wide variety of event sources, such as objects put to S3 buckets, DynamoDB table updates, SNS messages, etc.
- SQS support is coming soon
- Currently it runs Node.js. It can already be used to launch a Java application, and native Java support is coming soon so it will be able to execute Java code directly.
- It even allows access to underlying processes, threads, file system and sockets
- It’s charged per invocation so you don’t pay anything for idle times.
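To make the event-driven model above concrete, here is a minimal handler sketch in Python. (As the notes say, Lambda only ran Node.js at the time; Python is used here purely for illustration, and the event shape follows the S3 "object put" notification format that Lambda delivers.)

```python
import json

def handler(event, context):
    """Minimal event-driven handler sketch: react to an S3 'object put' event.

    'event' follows the S3 notification structure (Records -> s3 -> bucket/object);
    'context' carries invocation metadata and is unused in this sketch.
    """
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # A real function would fetch and process the object here;
        # this sketch just records what arrived.
        results.append(f"processed s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps(results)}
```

The point of the model: you write only this function, and AWS invokes it (and bills you) per event, with no idle instances to maintain.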
Infrastructure as code
The focus of the session was CloudFormation, but an AWS customer also showed how they deploy with a single click from Eclipse, so it can be done in several ways. (That's why the title of the talk wasn't simply "CloudFormation".)
This is also a great tool for automatically launching a whole infrastructure based on JSON template files. I don't have much experience with it, but it was a nice session to see its capabilities in action.
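For a sense of what "infrastructure as code" looks like here: CloudFormation templates are JSON documents describing resources. Below is a minimal sketch built as a Python dict and serialized to JSON; the resource name, AMI ID, and instance type are placeholders, not recommendations.

```python
import json

# Hypothetical minimal CloudFormation template: a single EC2 instance.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal sketch: launch one EC2 instance",
    "Resources": {
        "WebServer": {                        # logical name, placeholder
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-12345678",    # placeholder AMI ID
                "InstanceType": "t2.micro",   # placeholder instance type
            },
        }
    },
}

print(json.dumps(template, indent=2))
```

Feeding a template like this to CloudFormation creates the whole described stack in one operation, which is what the session demonstrated at much larger scale.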
CodeDeploy
This is yet another cool new feature that has just come out of preview mode. You can automatically deploy new code based on your rules. For example, you can deploy one instance at a time: it verifies every deployment and moves on to the next one. Or you can deploy the new version to half of the instances, meaning half of your system stays available even if the deployment fails. Or, if you like an adrenaline rush, you can deploy to all instances at once :-)
You can specify pre- and post-deployment scripts that handle clean-up tasks and verify the deployment.
CodeDeploy is now GA (generally available), and two more services were introduced yesterday: CodePipeline and CodeCommit.
The idea is to fully automate the whole process from source-code check-in to deployment, which, according to the speaker, is a very time-consuming task in their own applications.
It was a nice, big event and I'm glad I had the chance to attend. The content was rich enough to cover every aspect of AWS. I decided to attend the sessions instead of the labs, as I can do the labs online here. It's also a bit overwhelming to see how much there is to learn, but that's always the challenge in this industry anyway. As Werner Vogels said in his keynote: "Innovation is continuous!"