Welcome to Day 6 of the "How to AWS" series: Your first workload (part 1)
If you missed the introduction and drivers behind this series, check it out here.
In the previous blog post in this series we set up some common operational foundations for running in AWS. We:
- Set up a golden AMI pipeline for AMI management
- Set up EBS Snapshots for base-level backups
- Decided how you'll perform log management for your workloads
- Discussed the importance of secrets management
- Talked through options for sending email on AWS
Today, we will start to deploy our first workload onto AWS, focusing on:
- Identifying your first workload
- Your workload architecture & requirements
- Network security specific to your workload
Pick your workload!
We’re now at the point in this series where we need to select our first workload for deployment into AWS. Are you going to build new (greenfield) or migrate an existing workload into AWS?
Depending on your specific circumstances you’ll need to select the path which best suits you.
When selecting your workload, many businesses opt to pick the low-hanging fruit, such as a website or an isolated system, to move into AWS. This reduces complexity while also building confidence and skills in operating on AWS. If you’re new to AWS, this would likely be a great first step!
Know your requirements!
Now that you’ve selected your workload, you can ascertain its requirements.
Is this workload destined for production, or are you only testing and developing in a sandbox? The answer will likely determine the rigour you apply to the definition and architecture decisions for your workload.
Do you require high availability? I strongly encourage you to run everything in a highly available manner (spread across Availability Zones) where possible. Consider session management and load balancing requirements for this!
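As a concrete sketch of the Availability Zone spread, below are the kind of parameters you would pass when creating an Auto Scaling Group that spans two AZs. The group name and subnet IDs are placeholder assumptions; the dict mirrors the shape of boto3's `create_auto_scaling_group` call.

```python
# Sketch: an Auto Scaling Group spread across two Availability Zones.
# The group name and subnet IDs are hypothetical placeholders; the dict
# mirrors the parameters accepted by boto3's autoscaling
# create_auto_scaling_group API.
ha_asg_params = {
    "AutoScalingGroupName": "web-asg",  # hypothetical name
    "MinSize": 2,                       # at least one instance per AZ
    "MaxSize": 4,
    "DesiredCapacity": 2,
    # Comma-separated subnets in *different* AZs is what gives you HA:
    "VPCZoneIdentifier": "subnet-aaa111,subnet-bbb222",
}

# With MinSize >= 2 and subnets in two AZs, losing one AZ still leaves
# capacity serving traffic in the other.
print(ha_asg_params["VPCZoneIdentifier"].split(","))
```

Note that because instances come and go, session state should live somewhere shared (for example a cache or database) rather than on an individual server.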
Your application architecture may be constrained by licensing restrictions. Make sure you understand these, take them into consideration, and adhere to your specific licensing terms. My advice is to avoid heavily licensed and constrained technologies where it makes sense for your business (you may not always have a choice).
Will your application need to scale? Consider the expected traffic volumes and load. Wherever possible, design your application for horizontal scaling (adding servers, not making them bigger!). Leverage Auto Scaling to add servers and capacity to your workload on an as-needed basis. If you’re scaling your application workload, make sure you know which metrics to scale on. Often this will be CPU utilisation or network load.
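To make the metric choice concrete, here is a sketch of a target-tracking scaling policy that keeps average CPU across the group at 50%. The group and policy names are placeholder assumptions; the dict mirrors the parameters for boto3's `put_scaling_policy` call.

```python
# Sketch: a target-tracking scaling policy that keeps average CPU
# across the group at 50%. Names are hypothetical placeholders; the
# dict mirrors the parameters for boto3's autoscaling
# put_scaling_policy API.
cpu_policy = {
    "AutoScalingGroupName": "web-asg",   # hypothetical group name
    "PolicyName": "cpu-target-50",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            # Scale on average CPU utilisation across the group:
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        # Auto Scaling adds capacity above this value, removes it below:
        "TargetValue": 50.0,
    },
}
print(cpu_policy["TargetTrackingConfiguration"]["TargetValue"])
```

With target tracking you declare the steady state you want and AWS works out when to add or remove instances, which is simpler than hand-tuning step scaling thresholds.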
Do you need a load balancer to distribute traffic across your workload? If so, now is a good time to consider and select the best option for you. I recommend either the AWS Network Load Balancer or the Application Load Balancer. If you’re interested in knowing the difference between the AWS load balancer offerings, take a look at my blog post on this topic.
Consider your workload resource placement. This will stem from the architecture of your workload. With traditional infrastructure, most workloads fall into an n-tier architecture. Once you know your tiers, consider the placement of these resources. Leverage your virtual private cloud (VPC) and split workloads across the public/private subnets you set up in our earlier blog post.
Once you’ve decided your workload placement, I strongly recommend you document and draw a diagram depicting the placement and components of your workload. This will enable others to visualise and easily understand how your workload components will work together and be deployed in AWS.
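As an illustration, a classic three-tier placement can be captured as a simple mapping before you draw the diagram. The tier names and subnet groupings below are assumptions for a typical web workload, not a prescription.

```python
# Sketch: documenting an n-tier workload's subnet placement as data.
# Tier names and subnet groupings are hypothetical examples for a
# typical three-tier web workload.
placement = {
    "load-balancer": "public-subnets",   # internet-facing entry point
    "web-servers":   "private-subnets",  # reachable only via the LB
    "database":      "private-subnets",  # reachable only from web tier
}

for tier, subnets in placement.items():
    print(f"{tier} -> {subnets}")
```

Only the load balancer sits in public subnets; everything behind it stays private, which keeps your attack surface small.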
Design your network security!
Once you’ve worked out your workload architecture, it’s time to design the network security which will control access between components. Security Groups will be your go-to firewall within an AWS VPC and enable you to restrict both inbound and outbound network traffic.
As a general rule of thumb, each component of your workload should have its own security group, and you should permit only the specific ports required by your applications. Follow Defence in Depth principles. Designing and operating like this limits the blast radius and pivot options available to an attacker if a server is compromised.
It’s common practice to group like resources (e.g. the instances in an Auto Scaling Group) and have them share one or more security groups so that they all have the same firewall rules. A change to the security group then applies to every resource it is associated with.
Rather than relying solely on IP addresses (which are prone to change in dynamic AWS environments), security groups offer you dynamic firewall rules: you can permit access on a port from another security group.
For example, in a MySQL database security group you might permit inbound access on TCP port 3306 from the Web Server security group. As web servers are added to the environment and associated with the Web Server security group, they automatically gain access to the database.
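That database rule might look like the following sketch. The security group IDs are placeholder assumptions; the structure matches what you would pass to boto3's `authorize_security_group_ingress` call.

```python
# Sketch: permit the Web Server security group to reach MySQL
# (TCP 3306) in the database security group. Group IDs are
# hypothetical placeholders; the structure matches the parameters for
# boto3's ec2 authorize_security_group_ingress API.
WEB_SG = "sg-0aaa111"  # hypothetical Web Server security group
DB_SG = "sg-0bbb222"   # hypothetical database security group

db_ingress = {
    "GroupId": DB_SG,
    "IpPermissions": [{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        # Reference the web tier's security group instead of IP
        # addresses, so new web servers gain access automatically:
        "UserIdGroupPairs": [{"GroupId": WEB_SG}],
    }],
}
print(db_ingress["IpPermissions"][0]["UserIdGroupPairs"])
```

Because the rule references a group rather than addresses, it never needs updating as instances scale in and out.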
In this blog post we identified your first workload and discussed the need to understand and consider its requirements. We then began designing the workload architecture and discussed the network security that will need to be in place.
In our next blog post we’ll continue work on designing and deploying your first workload on AWS.