CloudFormation is an infrastructure provisioning and management tool that enables us to effectively and efficiently create templates to manage the structure of a client’s network and resources. Each server, firewall rule and network connection is coded. We use technologies such as Chef to automatically configure and deploy software and applications on individual servers.
The templates allow us to easily replicate a client’s infrastructure at the click of a button, meaning that companies can instantly spawn a test environment for a new project, or run continuous deployment workloads during development.
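A template of the kind described above can be sketched in miniature. This is an illustrative example only: the resource names, instance type and AMI ID are placeholders, not taken from a real project. Building the template as a Python dictionary and emitting JSON shows how "each server and firewall rule is coded":

```python
import json

# A minimal CloudFormation template: one web server and the firewall
# rule (security group) in front of it. All names and IDs are placeholders.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Example: one web server plus its firewall rule",
    "Resources": {
        "WebSecurityGroup": {
            "Type": "AWS::EC2::SecurityGroup",
            "Properties": {
                "GroupDescription": "Allow inbound HTTP",
                "SecurityGroupIngress": [
                    {"IpProtocol": "tcp", "FromPort": 80,
                     "ToPort": 80, "CidrIp": "0.0.0.0/0"}
                ],
            },
        },
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": "t3.micro",
                "ImageId": "ami-00000000",  # placeholder AMI ID
                # Ref ties the server to the security group above
                "SecurityGroupIds": [{"Ref": "WebSecurityGroup"}],
            },
        },
    },
}

print(json.dumps(template, indent=2))
```

Because every resource and relationship is declared in the template, creating a second stack from the same JSON reproduces the whole environment, which is what makes one-click test environments possible.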
We find that CloudFormation gives clients certainty that their infrastructure is documented and understood. As a management tool it gives them oversight of any changes and makes it easier to track what can be optimised. CloudFormation is at the heart of many of the AWS projects we’re involved with.
Using AWS’ own Chef implementation, OpsWorks, we have built infrastructure including a project to automatically manage and deploy the application hosting for hundreds of websites. Utilising OpsWorks we can cut development time when building Chef solutions, and automate some of the CloudFormation work we would normally do.
Combining CloudFormation and OpsWorks allows the creation of powerful DevOps solutions which give development and sysops teams the confidence to roll out the latest versions of their applications to live environments, knowing that these have already been tested in an exact replica environment.
AWS EC2 (Elastic Compute Cloud) is the core of many AWS deployments. EC2 instances are virtualised versions of the traditional servers you’d see in a data centre, providing resizable compute capacity in the cloud and leaving you free of the hassle of physical hardware. Our clients utilise both Windows and Linux servers within their AWS environments. Recent projects we have consulted on include:
- The replication of an in-house Active Directory deployment onto AWS — for the purposes of centralised management, scalability (via CloudFormation) and disaster recovery.
- Scalable on-demand processing for files, based on Amazon Simple Queue Service (SQS), with a farm of EC2 servers scaling up and down to process those files.
- Automatic provisioning of EC2 instances using Chef, with automatic deployment of extract-transform-load (ETL) operations on data. This loaded large datasets from Amazon RDS (MSSQL) into Redshift (via S3, for parallel loading), ready for analysis in Jaspersoft business intelligence studio. With one API call the whole system would run in an hour and email reports to the business’ board; it was scheduled to run nightly using AWS scheduled events.
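The queue-driven pattern in the second project above can be sketched briefly. In production the queue is SQS, accessed through boto3’s receive/delete calls; here the queue client is a stub so the worker loop is self-contained, and the per-file work is a placeholder:

```python
from collections import deque

class StubQueue:
    """Stands in for an SQS client. The real worker would call
    receive_message / delete_message against a queue URL via boto3."""
    def __init__(self, messages):
        self._messages = deque(messages)

    def receive(self):
        return self._messages.popleft() if self._messages else None

    def delete(self, message):
        pass  # SQS needs an explicit delete once processing succeeds

def process_file(key):
    # Placeholder for the real per-file work (parse, transcode, etc.)
    return f"processed {key}"

def worker(queue):
    """Each EC2 server in the farm runs this loop; the farm itself
    scales up and down with the depth of the queue."""
    results = []
    while (msg := queue.receive()) is not None:
        results.append(process_file(msg))
        queue.delete(msg)  # delete only after successful processing
    return results

queue = StubQueue(["uploads/a.csv", "uploads/b.csv"])
print(worker(queue))
```

Deleting a message only after it has been processed means a worker that dies mid-job leaves the message on the queue for another server to pick up, which is what makes the farm safe to scale down at any moment.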
Amazon S3 is effectively bottomless storage for files and data. We help companies optimise their web hosting by moving static content to Amazon S3. S3 keeps data easily accessible within a client’s infrastructure, and you pay only for the storage and transfer you actually use.
CloudFront is AWS’ content delivery network (CDN). It uses edge locations around the world to serve data to end users from the nearest region, rather than directly from the organisation’s application servers.
Utilising CloudFront and S3 together means a client’s end users are served data and content quickly and consistently anywhere in the world. Media such as graphics and video, for example, is always served from a location near the end user, meaning more fluid streaming and less need to buffer.
S3 is the backbone of many of the document management tools we have built with and for clients. We make heavy use of S3 presigned URLs, which grant users access to specific files for a limited time, enabling one-off downloads and uploads.
Recently many of our projects have been focused on AWS Lambda. Lambda is a “serverless” system that runs functions written in languages like Python and NodeJS (the languages we primarily use) without the need for physical servers or large arrays of EC2 instances to handle traffic. Instead, Lambda functions perform tasks when triggered by various sources such as a push notification (via Amazon’s Simple Notification Service (SNS)), S3, CloudWatch or external REST API requests (via API Gateway), or in response to other changes in the AWS environment such as logs, changes in metrics, realtime streams, and so on.
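An S3-triggered function of the kind mentioned above is just a handler that receives the event S3 delivers. The sketch below uses the real shape of an S3 put notification (a Records list carrying bucket name and object key), though the bucket, key and per-object work are made up for illustration; locally the handler can simply be called with a sample event:

```python
import json

def handler(event, context):
    """A minimal Lambda handler for an S3 put notification.
    The per-object work here is illustrative; a real handler might
    resize an image, parse a file, and so on."""
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        processed.append(f"{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps({"processed": processed})}

# In AWS, Lambda invokes handler() itself whenever an object lands in
# the bucket; locally we can exercise it with a sample event.
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "uploads"},
                "object": {"key": "img/cat.png"}}}
    ]
}
print(handler(sample_event, context=None))
```

Because the function only exists as code plus a trigger, there is nothing running (and nothing billed) between events, which is the cost property discussed below.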
The added benefit of using Lambda is cost: we are able to build functions for a client’s infrastructure that only incur a cost when a request is processed. A complex API structure can be built with API Gateway, ready to take requests, but until data comes in and is processed there is no recurring fee for having the infrastructure available.
This is ideal for applications which start small and grow, startup companies, or companies building new products, because costs are kept low until users and requests increase.
Using CloudFormation stacks we have built many of our API Gateway solutions, enabling us to maintain control of the creation of the Lambda functions and create test environments quickly. We have also developed a deployment system for quickly updating the various Lambda functions. Solutions will often end up with hundreds or thousands of functions; this enables us to track the application as a whole, and deploy individual updates when required.
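The core of such a deployment loop is small: package each function’s code as a zip and push it with the Lambda API’s update_function_code call. This is a sketch under assumptions, not our actual tooling; the real calls go through boto3’s Lambda client, which is stubbed out here so the loop is self-contained, and the function names are invented:

```python
import io
import zipfile

class StubLambdaClient:
    """Stands in for boto3.client('lambda'); records the calls the
    deploy loop would make via update_function_code."""
    def __init__(self):
        self.updated = []

    def update_function_code(self, FunctionName, ZipFile):
        self.updated.append(FunctionName)
        return {"FunctionName": FunctionName}

def build_zip(source):
    """Package a handler's source into the in-memory zip that the
    Lambda API expects for a code update."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("handler.py", source)
    return buf.getvalue()

def deploy(client, functions):
    """Push new code to each changed function. With hundreds of
    functions in a solution, only the ones that changed are passed in."""
    for name, source in functions.items():
        client.update_function_code(FunctionName=name,
                                    ZipFile=build_zip(source))
    return list(functions)

client = StubLambdaClient()
deploy(client, {"orders-api": "def handler(e, c): ...",
                "reports-api": "def handler(e, c): ..."})
print(client.updated)
```

Keeping the stack definition in CloudFormation and only the code pushes in this loop is what lets individual functions be updated quickly without rebuilding the environment around them.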
We are able to implement complex interactions and even build whole applications based on Lambda, API Gateway, DynamoDB and AngularJS (hosted on S3). For example, many of our clients use Amazon SQS to manage data flowing between applications. We have built connectors which allow Lambda functions to be triggered from SQS, using scheduled events and a secondary Lambda function. Ultimately this means we can build whole, scalable web applications without any traditional servers, which have no cost (beyond some minimal S3 storage) until usage increases. We have also added user authentication to many of these apps, using AWS’ Cognito collection of services for identity management.
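The scheduled SQS connector just described amounts to a small poller: a function fired by a scheduled event that drains a batch from the queue and hands each message to the worker function. The sketch below stubs both the SQS client and the worker invocation (in production these would be boto3’s receive_message and an asynchronous Lambda invoke); the message bodies are invented:

```python
invoked = []

class StubSQS:
    """Stands in for boto3's SQS client (receive_message /
    delete_message against a queue URL)."""
    def __init__(self, bodies):
        self.bodies = list(bodies)

    def receive(self, max_messages=10):
        batch = self.bodies[:max_messages]
        self.bodies = self.bodies[max_messages:]
        return batch

def invoke_worker(payload):
    """Stands in for invoking the secondary worker Lambda
    asynchronously; here it just records the payload."""
    invoked.append(payload)

def poller_handler(event, context, queue):
    """Runs on each scheduled event: drain one batch from the queue
    and fan each message out to the worker function. The queue is
    passed in here only to keep the sketch testable."""
    batch = queue.receive()
    for body in batch:
        invoke_worker(body)
    return {"dispatched": len(batch)}

queue = StubSQS(["order-1", "order-2", "order-3"])
print(poller_handler({}, None, queue))
```

The poller itself does almost no work, so it stays cheap; the heavy lifting lands in the worker function, which scales out per message.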
AWS Elastic Beanstalk is a service for deploying and scaling web applications. We have several client projects running on EB, and generally integrate these with CloudFormation to manage them centrally. This gives total control over the application stack. Furthermore, on certain projects these stacks are also implemented into continuous deployment systems (for example, CodeShip). The CloudFormation integration then allows us to bring up new versions of the entire environment for testing, meaning we can have multiple environments running new features, or different environments for each end client, managed from a central codebase.
We often end up utilising a wide range of AWS tools in our projects. These regularly include:
- Elastic Container Service (ECS) – for running and managing Docker applications.
- Relational Database Service (RDS) – managed MSSQL, MySQL, Aurora and other databases.
- Kinesis Streams – for streaming data at any scale.
- Simple Email Service (SES) – a scalable outbound email service.
- QuickSight – AWS’ own business intelligence tool.
- Elasticsearch Service (ES) – a managed version of the open-source search and analytics engine.
Some of our clients use MongoDB clusters. As these are not native to AWS, we have built our own system for running and managing them using CloudFormation, Chef and EC2 instances, together with EC2 snapshots and LVM volumes to ensure backup and disaster recovery.