Introduction to AWS

In the previous post, we introduced Terraform. In this post, we'll quickly go over AWS and what we'll be doing with it throughout the other Terraform tutorials.

Who this chapter is for

If you've recently started managing cloud infrastructure, or have primarily used other providers, you may not be familiar with the intricacies of AWS.

This chapter is for those who might not be familiar with phrases like "Elastic IP Address", "Application Load Balancer", "Security Group", "VPC" or "EC2 Instance". If you are comfortable with those phrases, feel free to skip to the next chapter.

Some of you might be planning on setting up a container infrastructure utilizing Amazon ECS, Fargate or Elastic Beanstalk. This series will touch on those technologies in later chapters - after setting up an infrastructure with traditional servers. The knowledge gained from setting up traditional servers can be directly translated into setting up a container-based infrastructure.

Let's dive into it and define a few terms!

EC2 Instances

Elastic Compute Cloud (EC2) Instances are essentially servers that you can rent from Amazon to run your code. They utilize VM technology that makes it easy to create and destroy environments running Linux and other operating systems.
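
Since this series uses Terraform, here's a rough sketch of what renting one of these servers looks like in Terraform's configuration language. The AMI ID and tag name below are placeholders, not values we'll actually use:

```hcl
# A minimal EC2 instance definition.
resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890" # placeholder; real IDs vary by region
  instance_type = "t3.micro"

  tags = {
    Name = "example-web-server"
  }
}
```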

AMI

An Amazon Machine Image (AMI) is a snapshot of a machine at a given point in time. If you create a Linux server and install Ruby and MySQL on it, you can save a snapshot that makes it easier to create a server with that same configuration in the future.
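
In Terraform, you usually don't hard-code an AMI ID; you look one up. As a hedged example, this sketch finds a recent Ubuntu 22.04 image (the name filter is one common pattern, not something specific to this series):

```hcl
# Look up a recent official Ubuntu AMI by name pattern.
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical's AWS account ID

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}
```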

Auto-Scaling

Auto-scaling is a tool that monitors web applications and decides when to add or remove instances within the infrastructure, based on usage. If your web application suddenly gets a burst of traffic, auto-scaling makes sure, without manual intervention, that there are enough servers to handle it. If traffic dies down on the weekends, auto-scaling will use fewer servers (and by extension cost less) on the weekends.
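
As a taste of what this looks like in Terraform, here's a sketch of an Auto Scaling group that keeps between two and six instances running. The subnet and launch template references are placeholders that assume those resources are defined elsewhere:

```hcl
# An Auto Scaling group that maintains 2-6 instances across two subnets.
resource "aws_autoscaling_group" "web" {
  min_size            = 2
  max_size            = 6
  desired_capacity    = 2
  vpc_zone_identifier = [aws_subnet.a.id, aws_subnet.b.id] # assumed subnets

  launch_template {
    id = aws_launch_template.web.id # assumed launch template
  }
}
```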

Security Group

Security Groups allow you to create rules for which ports can be accessed, and from where. For example, if you wanted only your IP address to be able to SSH into a server, you'd manage that through Security Groups.
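
A hedged Terraform sketch of that exact scenario, allowing SSH only from a single IP address (the VPC reference and IP are placeholders):

```hcl
# A Security Group that only allows SSH from one IP address.
resource "aws_security_group" "ssh_only_me" {
  name   = "ssh-only-me"
  vpc_id = aws_vpc.main.id # assumes a VPC defined elsewhere

  ingress {
    description = "SSH from my IP only"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["203.0.113.10/32"] # placeholder IP address
  }
}
```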

Elastic IP Address (EIP)

Elastic IP Addresses (EIP) are static IP addresses that you can allocate from Amazon and attach to EC2 instances. These IP addresses can be used to point domain names at your rented servers. The ability to attach and detach them allows you to upgrade your servers without changing your DNS.
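
In Terraform, allocating and attaching one is only a few lines. This sketch assumes an `aws_instance` resource named `web` exists elsewhere:

```hcl
# Allocate an Elastic IP and attach it to an existing instance.
resource "aws_eip" "web" {
  instance = aws_instance.web.id # assumes an instance defined elsewhere
  domain   = "vpc"
}
```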

Application Load Balancer

When managing multiple servers, or deploying multiple times a day, it becomes beneficial to send all traffic through a load balancer. A load balancer is a layer between your servers and the internet that decides which server a given request will go to. It helps distribute heavy traffic, route around overloaded servers, and enable no-downtime deploys.
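
A minimal Terraform sketch of an Application Load Balancer spanning two subnets (the name and subnet references are placeholder assumptions):

```hcl
# An Application Load Balancer spread across two subnets.
resource "aws_lb" "web" {
  name               = "example-alb"
  load_balancer_type = "application"
  subnets            = [aws_subnet.a.id, aws_subnet.b.id] # assumed subnets
}
```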

Availability Zone

Amazon has multiple data centers spread throughout the world. An Availability Zone is an isolated data center (or group of data centers) within a region. Placing servers in multiple zones means that if one data center ever fails, your web application can remain online.

VPC

A virtual private cloud (VPC) is a private network in which you can place your servers. All servers within the virtual cloud are given private IP addresses that allow for faster and more secure communication between them. You can split a VPC into subnets spread across multiple availability zones.
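
Here's a hedged sketch of that idea in Terraform: a VPC split into two subnets, each in a different availability zone. The CIDR blocks and zone names are illustrative placeholders:

```hcl
# A VPC with subnets in two availability zones.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a" # placeholder zone
}

resource "aws_subnet" "b" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1b" # placeholder zone
}
```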

Putting it all together

These terms should make things clearer while we're setting up our infrastructure. Don't worry if you haven't quite memorized them yet; after using them in practice, they become second nature.

Throughout the rest of this series, we're going to use Terraform to put these pieces together into a production web application.

Goals of this series:

This series can be broken into five parts. Each part gets a little more advanced than the last and results in an infrastructure suited to a slightly larger company. The series is written this way to allow a beginner to go from setting up a small company's moderate-traffic application to configuring a large company's high-traffic, highly available infrastructure.

Part 1:
  • Create and configure an EC2 Instance
  • Install a Rails application on that EC2 Instance
  • Connect that Rails application to a PostgreSQL database
  • Attach a Static IP and Domain to that server

This part (the server setup) will allow you to set up an infrastructure that can easily host most web applications. If you're just starting out, and need to build an MVP for your idea, this will get you there. It will allow you to expand to a moderate amount of traffic and a few engineers.

Part 2:

  • Create a second server and send traffic through an Application Load Balancer

As a company and its traffic grow, the need for multiple servers will arise. Multiple servers allow for a larger number of concurrent connections. Equally important, a growing engineering team requires a way to push changes to the application with minimal downtime. This lets the company stay nimble, responding to an ever-changing business landscape while also providing the stability its existing userbase needs.

Part 3:

  • Build a code deploy process using Amazon CodeDeploy and CodePipeline

As an engineering team grows, the overhead of an intricate deploy process can become a challenge. If only one person knows the deploy process, they become a bottleneck. If everyone needs to learn a multi-step deploy process, mistakes happen and downtime occurs.

This part of the series will allow you to set up an automated testing and deploy process that is triggered by engineers committing to a Git repo. This simple deploy process can scale to any number of engineers and make everyone's lives a little bit better.

Part 4:

  • Convert our server-based infrastructure into a Docker Container infrastructure

Infrastructure as code takes the connective tissue between all our servers and immortalizes it in text. We can get similar benefits by immortalizing our web application configurations in code as containers. These mini-VM-like containers can make deploying a microservice architecture, something many large web applications want, much easier.

Containers are quickly taking the web development world by storm. We'll move our server-based infrastructure to a Docker and ECS-based infrastructure.

Part 5:

  • Set up auto-scaling to lower costs

Many web applications are accessed most at certain points in the day. B2B applications are accessed most during business hours. Social networking tools are utilized the most during non-business hours. Some applications are used the most on weekends.

By setting up auto-scaling, we can make sure that our applications are always kept running using the cheapest set of resources they need to make our users happy. This saves a ton in costs for large companies and allows the company to expand its infrastructure past what otherwise would be the limit.

Moving Forward

In the next chapter, we're going to jump right into Terraform. We'll create an EC2 instance with Rails installed on it that's accessible from the internet.
