NET702: Lab 12 Scale & Load Balance your Architecture

Task 1: Creating an AMI for Auto Scaling

Step 1: Open Vocareum -> Click on the lab -> My Work -> Start Lab. A pop-up window will appear; when it says “Lab status: ready”, close the window -> Click on the “AWS” tab in the top-right navigation bar.

Step 2: Click on “Services” -> Select “EC2” -> Click on “Instances” on the left navigation panel -> Wait until “Web Server 1” has 2/2 checks passed -> Select “Web Server 1” -> Click on the “Actions” menu and select “Image” -> Choose “Create Image”, enter the Image name as “Web Server AMI” and the Image description as “Lab AMI for Web Server” -> Click on “Create Image” and then click on “Close”.
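For reference, the same step can be scripted with boto3. This is a minimal sketch, assuming the lab runs in us-east-1 and that “Web Server 1” can be located by its Name tag; the region and the tag lookup are assumptions rather than part of the lab instructions.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed lab region

# Find "Web Server 1" by its Name tag instead of hard-coding the instance ID
resp = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Name", "Values": ["Web Server 1"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)
instance_id = resp["Reservations"][0]["Instances"][0]["InstanceId"]

# Create the AMI with the same name and description used in the console
image = ec2.create_image(
    InstanceId=instance_id,
    Name="Web Server AMI",
    Description="Lab AMI for Web Server",
)
print("Created image:", image["ImageId"])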

Task 2: Creating a Load Balancer

Step 1: Click on “Load Balancers” on the left navigation panel -> Click on “Create Load Balancer”.

Step 2: Click on the “Create” button under “Application Load Balancer” -> Enter the name as “LabELB” and the VPC as “Lab VPC” -> Select both availability zones under “Availability Zones” to see the available subnets -> Select “Public Subnet 1” and “Public Subnet 2”.

Step 3: Click on “Next: Configure Security Settings” -> Click on “Next: Configure Security Groups” -> Select the “Web Security Group” and deselect “Default” -> Click on “Next: Configure Routing” -> Enter the name as “LabGroup” -> Click on “Next: Register Targets” -> Click on “Next: Review” -> Click on “Create” and then “Close”.
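For reference, here is a minimal boto3 sketch of the same load balancer setup. The VPC, subnet, and security group IDs are placeholders that must be replaced with the Lab VPC, the two public subnets, and the Web Security Group; the region is also an assumption.

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")   # assumed lab region

vpc_id = "vpc-xxxxxxxx"                                   # placeholder: Lab VPC
public_subnets = ["subnet-aaaaaaaa", "subnet-bbbbbbbb"]   # placeholders: Public Subnet 1 and 2
web_sg = "sg-xxxxxxxx"                                    # placeholder: Web Security Group

# Application Load Balancer spanning both public subnets
lb = elbv2.create_load_balancer(
    Name="LabELB",
    Subnets=public_subnets,
    SecurityGroups=[web_sg],
    Scheme="internet-facing",
    Type="application",
)["LoadBalancers"][0]

# Target group that the Auto Scaling instances will be registered into
tg = elbv2.create_target_group(
    Name="LabGroup",
    Protocol="HTTP",
    Port=80,
    VpcId=vpc_id,
    TargetType="instance",
)["TargetGroups"][0]

# HTTP listener that forwards all traffic to the target group
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
print("Load balancer DNS name:", lb["DNSName"])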

Task 3: Creating a Launch Configuration and Auto Scaling Group

Step 1: Click on “Launch Configurations” on the left navigation panel -> Click on “Create launch configuration” -> Select “My AMIs” on the left navigation panel -> Click “Select” in the row for “Web Server AMI”.

Step 2: Select “t2.micro” under the “Type” column -> Click on “Next: Configure details”.

Step 3: Enter the Name as “LabConfig” -> Under the “Monitoring” section, select “Enable CloudWatch detailed monitoring” -> Click on “Next: Add Storage” -> Click on “Next: Configure Security Group”.

Step 4: Click on “Select an existing security group” -> Select “Web Security Group” -> Click on “Review” -> After reviewing click on “Create launch configuration”.

Step 5: Select the “I acknowledge that…” box in the “Select an existing key pair” dialog box -> Click on “Create launch configuration”.
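The launch configuration can also be created with boto3. This sketch assumes the region, uses a placeholder security group ID, and guesses the lab-provided key pair name (“vockey” is an assumption, not confirmed by the instructions); note that AWS now recommends launch templates over launch configurations, but the lab uses the latter.

import boto3

region = "us-east-1"                        # assumed lab region
ec2 = boto3.client("ec2", region_name=region)
autoscaling = boto3.client("autoscaling", region_name=region)

# Look up the AMI created in Task 1 by its name
ami = ec2.describe_images(
    Owners=["self"],
    Filters=[{"Name": "name", "Values": ["Web Server AMI"]}],
)["Images"][0]

# Launch configuration matching the console settings above
autoscaling.create_launch_configuration(
    LaunchConfigurationName="LabConfig",
    ImageId=ami["ImageId"],
    InstanceType="t2.micro",
    SecurityGroups=["sg-xxxxxxxx"],          # placeholder: Web Security Group ID
    KeyName="vockey",                        # assumption: the lab's existing key pair
    InstanceMonitoring={"Enabled": True},    # CloudWatch detailed monitoring
)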

Step 6: Click on “Create an Auto Scaling group using this launch configuration”.

Step 7: Enter the Group name as “Lab Auto Scaling Group”, the Group size as “2”, and the Network as “Lab VPC” -> Select “Private Subnet 1 (10.0.1.0/24)” and “Private Subnet 2 (10.0.3.0/24)” under “Subnet” -> Click on “Advanced Details” -> Select “Receive traffic from one or more load balancers” under “Load Balancing” -> Enter “LabGroup” under “Target Groups” -> Select “Enable CloudWatch detailed monitoring” in the “Monitoring” section -> Click on “Next: Configure scaling policies”.
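A boto3 sketch of the Auto Scaling group is below. The private subnet IDs are placeholders, the region is assumed, and the minimum/maximum of 2 and 6 instances (from the next step) and the “Name = Lab Instance” tag (from Step 9) are included here because the API call takes them together.

import boto3

region = "us-east-1"                        # assumed lab region
elbv2 = boto3.client("elbv2", region_name=region)
autoscaling = boto3.client("autoscaling", region_name=region)

# Look up the LabGroup target group created with the load balancer
tg_arn = elbv2.describe_target_groups(Names=["LabGroup"])["TargetGroups"][0]["TargetGroupArn"]

# Placeholders: Private Subnet 1 (10.0.1.0/24) and Private Subnet 2 (10.0.3.0/24)
private_subnets = "subnet-cccccccc,subnet-dddddddd"

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="Lab Auto Scaling Group",
    LaunchConfigurationName="LabConfig",
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    VPCZoneIdentifier=private_subnets,
    TargetGroupARNs=[tg_arn],
    Tags=[{"Key": "Name", "Value": "Lab Instance", "PropagateAtLaunch": True}],
)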

Step 8: Select “Use scaling policies to adjust the capacity of this group” -> Set the group to scale between “2” and “6” instances -> Under “Scale Group Size”, set the Metric type to “Average CPU Utilization” and the Target value to “60” -> Click on “Next: Configure Notifications”.
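The console’s “Average CPU Utilization” policy corresponds to a target tracking scaling policy, sketched below; the policy name is hypothetical and the region is assumed.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # assumed lab region

# Target tracking policy that keeps the group's average CPU utilization near 60%
autoscaling.put_scaling_policy(
    AutoScalingGroupName="Lab Auto Scaling Group",
    PolicyName="LabCPUTargetTracking",       # hypothetical policy name
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)

Target tracking creates the pair of CloudWatch alarms (scale out on high CPU, scale in on low CPU) that we check in Task 5.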

Step 9: Click on “Next: Configure Tags” -> Enter the key as “Name” and the value as “Lab Instance” -> Click on “Review” -> Click on “Create Auto Scaling group”; if any errors occur, click on “Retry Failed Tasks” until they are resolved -> Click on “Close”.

Task 4: Verifying that Load Balancer is Working

Step 1: Click on “Instances” on the left navigation panel -> We should see 2 instances named “Lab Instance” -> Click on “Target Groups” on the left navigation panel -> Click on the “Targets” tab -> There should be 2 instances listed in the target group.

Step 2: Click on “Load Balancers” on the left navigation panel -> Copy the “DNS name” of the load balancer, paste it into a new web browser tab, and press “Enter”. We should see the application appear, indicating that the load balancer is working.
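The same verification can be done from a script: a minimal sketch, assuming the region and that the LabELB and LabGroup names from the earlier tasks are in place.

import urllib.request
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")  # assumed lab region

# Confirm both Auto Scaling instances are registered and healthy in LabGroup
tg_arn = elbv2.describe_target_groups(Names=["LabGroup"])["TargetGroups"][0]["TargetGroupArn"]
for target in elbv2.describe_target_health(TargetGroupArn=tg_arn)["TargetHealthDescriptions"]:
    print(target["Target"]["Id"], target["TargetHealth"]["State"])

# Request the application through the load balancer's DNS name
dns_name = elbv2.describe_load_balancers(Names=["LabELB"])["LoadBalancers"][0]["DNSName"]
with urllib.request.urlopen("http://" + dns_name + "/") as resp:
    print("HTTP status:", resp.status)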

Task 5: Testing Auto Scaling

Step 1: Click on “Services” -> Select “CloudWatch” -> Click on “Alarms” on the left navigation panel. There should be 2 alarms. The lab instructions also show how to create alarms when none exist; since we already created 2 alarms during Task 3, we will skip this step.

Step 2: At this point, the “Low CPU Utilization” alarm should be in the “ALARM” state.

Step 3: Go to the browser tab with the web application and click on “Load Test” -> Return to the CloudWatch console; the “Low CPU Utilization” alarm should now be in the “OK” state, and after a few minutes the other alarm should enter the “ALARM” state -> Go back to “Services” -> Select “EC2” -> Click on “Instances”; we should see more than 2 “Lab Instance” instances, created by Auto Scaling when the alarm was triggered.
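Instead of refreshing the console, the alarm states can be polled with boto3 while the load test runs; a minimal sketch, assuming the region and that the only alarms in the account are the two created by the scaling policy.

import time
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # assumed lab region

# Print every alarm's state once a minute; scale-out may take a few minutes to show
for _ in range(10):
    for alarm in cloudwatch.describe_alarms()["MetricAlarms"]:
        print(alarm["AlarmName"], "->", alarm["StateValue"])
    print("---")
    time.sleep(60)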

Task 6: Terminating Web Server 1

Step 1: Select “Web Server 1” -> Click on “Actions” menu and select “Instance State” -> Click on “Terminate” -> Click on “Yes, Terminate” and the Instance should be terminated.
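For completeness, terminating “Web Server 1” can also be scripted; this sketch assumes the region and locates the instance by its Name tag.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed lab region

# Find "Web Server 1" by its Name tag and terminate it
resp = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Name", "Values": ["Web Server 1"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)
instance_id = resp["Reservations"][0]["Instances"][0]["InstanceId"]
ec2.terminate_instances(InstanceIds=[instance_id])
print("Terminating", instance_id)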

Reflection and Critical Thinking

In this lab, I learned how to create an AMI, a load balancer, a launch configuration and Auto Scaling group, and Amazon CloudWatch alarms, and how to monitor the performance of the infrastructure.

First, I created an AMI from an existing web server for Auto Scaling. Then I created a load balancer, for which I selected 2 availability zones and the Lab VPC. After that, I created the launch configuration for the Auto Scaling group, which defines how new EC2 instances are launched. In the launch configuration, I had to select the AMI, the instance type, a key pair, and a security group. Under “Scale Group Size”, I specified the metric type and the range of instances the group can scale between.

Once that was done, I went to the EC2 Instances page to check whether load balancing was working. I found 2 instances named “Lab Instance”, which had been deployed by Auto Scaling. Then I went to Target Groups in the Load Balancing section and found that the 2 instances mentioned earlier were part of the target group. Finally, I copied the DNS name of the load balancer and pasted it into a web browser, which displayed the application. This shows that the load balancer received the request, forwarded it to one of the EC2 instances, and returned the result.

The last part was testing Auto Scaling. If additional instances are added after the load increases, that means Auto Scaling is functional. For that, I first went to CloudWatch and then to Alarms. The lab instructions show how to create alarms before the final task, but I had already created them while configuring the launch configuration and Auto Scaling group, so under Alarms we should see 2 alarms, one of which will be in the ALARM state. In this case, go to the web application and click on Load Test. When I came back to CloudWatch, I saw that the status had changed back to OK, but a little later the other alarm entered the ALARM state. I went to Instances to check whether any new instances had appeared and found that a new instance had been launched. Once it was confirmed that Auto Scaling was functional, I terminated the web server as the last task of the lab; it had only been used to create the AMI and was no longer needed.

Load balancers optimize the use of available resources in order to minimize response time, maximize throughput, and avoid overloading any single resource. By distributing requests across multiple instances, a load balancer increases the reliability and availability of the architecture, spreads work evenly across the instances, and avoids dependence on a single component. Auto Scaling, on the other hand, monitors the compute resources in the AWS cloud and adjusts capacity so that the current traffic is handled properly. To keep up with the traffic load, Auto Scaling either removes instances or adds more, depending on whether the load is low or high.
