Module 5 - Networking and Content Delivery
This module covers three fundamental Amazon Web Services (AWS) offerings for networking and
content delivery: Amazon Virtual Private Cloud (Amazon VPC), Amazon Route 53, and
Amazon CloudFront.
• Networking basics
• VPC networking
• VPC security
• Amazon Route 53
• Amazon CloudFront
This module includes some activities that challenge you to label a network diagram and design
a basic VPC architecture.
You will watch a recorded demonstration to learn how to use the VPC Wizard to create a VPC
with public and private subnets.
You then get a chance to apply what you have learned in a hands-on lab where you use the VPC
Wizard to build a VPC and launch a web server.
Finally, you will be asked to complete a knowledge check that tests your understanding of key
concepts that are covered in this module.
• Create your own VPC and add additional components to it to produce a customized
network
• Identify the fundamentals of Amazon Route 53
1.2 IP addresses
Each client machine in a network has a unique Internet Protocol (IP) address that identifies it.
An IP address is a numerical label in decimal format. Machines convert that decimal number
to a binary format.
In this example, the IP address is 192.0.2.0. Each of the four dot (.)-separated numbers of the
IP address represents 8 bits (one octet), written in decimal format. That means each of the four
numbers can be anything from 0 to 255. The combined total of the four numbers for an IP
address is 32 bits in binary format.
IPv6 addresses, which are 128 bits, are also available. IPv6 addresses can accommodate more
user devices.
An IPv6 address is composed of eight groups of four hexadecimal characters that are separated
by colons (:). In this example, the IPv6 address is 2600:1f18:22ba:8c00:ba86:a05e:a5ba:00FF.
Each of the eight colon-separated groups of the IPv6 address represents 16 bits in hexadecimal
number format. That means each of the eight groups can be anything from 0 to FFFF. The
combined total of the eight groups for an IPv6 address is 128 bits in binary format.
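If you want to verify these sizes, here is a minimal Python sketch (standard library only, not part of the course material) that parses the two example addresses above and prints their lengths in bits and their individual groups.

import ipaddress

# IPv4: four 8-bit octets, 32 bits in total
ipv4 = ipaddress.ip_address("192.0.2.0")
print(ipv4.max_prefixlen)                                 # 32
print(".".join(f"{octet:08b}" for octet in ipv4.packed))  # each octet as 8 binary digits

# IPv6: eight 16-bit groups, 128 bits in total
ipv6 = ipaddress.ip_address("2600:1f18:22ba:8c00:ba86:a05e:a5ba:00ff")
print(ipv6.max_prefixlen)   # 128
print(ipv6.exploded)        # eight 4-character hexadecimal groups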
1.4 Classless Inter-Domain Routing (CIDR)
A common method to describe networks is Classless Inter-Domain Routing (CIDR). The CIDR
address is expressed as follows:
• An IP address (the first address of the network)
• A slash character (/)
• Finally, a number that tells you how many bits of the routing prefix must be fixed or
allocated for the network identifier
The bits that are not fixed are allowed to change. CIDR is a way to express a group of IP
addresses that are consecutive to each other.
In this example, the CIDR address is 192.0.2.0/24. The last number (24) tells you that the first
24 bits must be fixed. The last 8 bits are flexible, which means that 2^8 (or 256) IP addresses are
available for the network, which range from 192.0.2.0 to 192.0.2.255. The fourth dot-separated
number is allowed to change from 0 to 255.
If the CIDR were 192.0.2.0/16, the last number (16) tells you that the first 16 bits must be fixed.
The last 16 bits are flexible, which means that 2^16 (or 65,536) IP addresses are available for the
network, ranging from 192.0.0.0 to 192.0.255.255. The third and fourth dot-separated numbers
can each change from 0 to 255.
• Fixed IP addresses, in which every bit is fixed, represent a single IP address (for
example, 192.0.2.0/32). This type of address is helpful when you want to set up a
firewall rule and give access to a specific host.
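The same arithmetic can be checked with the Python ipaddress module. In this sketch, the /16 example is written in its network-address form (192.0.0.0/16) because the module requires the host bits to be zero; the counts match the figures above.

import ipaddress

for cidr in ("192.0.2.0/24", "192.0.0.0/16", "192.0.2.0/32"):
    network = ipaddress.ip_network(cidr)
    print(cidr, network.num_addresses, network[0], network[-1])

# 192.0.2.0/24 256 192.0.2.0 192.0.2.255
# 192.0.0.0/16 65536 192.0.0.0 192.0.255.255
# 192.0.2.0/32 1 192.0.2.0 192.0.2.0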
Amazon Virtual Private Cloud (Amazon VPC) is a service that lets you provision a logically
isolated section of the AWS Cloud (called a virtual private cloud, or VPC) where you can
launch your AWS resources.
Amazon VPC gives you control over your virtual networking resources, including the selection
of your own IP address range, the creation of subnets, and the configuration of route tables and
network gateways. You can use both IPv4 and IPv6 in your VPC for secure access to resources
and applications.
You can also customize the network configuration for your VPC. For example, you can create
a public subnet for your web servers that can access the public internet. You can place your
backend systems (such as databases or application servers) in a private subnet with no public
internet access.
Finally, you can use multiple layers of security, including security groups and network access
control lists (network ACLs), to help control access to Amazon Elastic Compute Cloud
(Amazon EC2) instances in each subnet.
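As a rough illustration of that kind of layout, the following boto3 sketch creates a VPC with one public and one private subnet and attaches an internet gateway. The CIDR blocks, Region, and Availability Zone are assumptions chosen for the example, not values prescribed by the course.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The VPC and its two subnets
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]
public_subnet = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.0.0/24", AvailabilityZone="us-east-1a")
private_subnet = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1a")

# An internet gateway gives the public subnet a path to the internet
igw = ec2.create_internet_gateway()
ec2.attach_internet_gateway(
    InternetGatewayId=igw["InternetGateway"]["InternetGatewayId"], VpcId=vpc_id)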
You can optionally associate an IPv6 CIDR block with your VPC and
subnets, and assign IPv6 addresses from that block to the resources in
your VPC. IPv6 CIDR blocks have a different block size limit.
The CIDR block of a subnet can be the same as the CIDR block for a VPC. In this case, the
VPC and the subnet are the same size (a single subnet in the VPC). Also, the CIDR block of a
subnet can be a subset of the CIDR block for the VPC. This structure enables the definition of
multiple subnets. If you create more than one subnet in a VPC, the CIDR blocks of the subnets
cannot overlap. You cannot have duplicate IP addresses in the same VPC.
In each subnet CIDR block, AWS reserves five IP addresses that you cannot assign to an instance:
• Network address (the first address in the block)
• VPC local router (the second address)
• Domain Name System (DNS) resolution (the third address)
• Future use (the fourth address)
• Network broadcast address (the last address in the block)
For example, suppose that you create a subnet with an IPv4 CIDR
block of 10.0.0.0/24 (which has 256 total IP addresses). The subnet has 256 IP addresses, but
only 251 are available because five are reserved.
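The arithmetic in this example can be sketched with the Python ipaddress module; the five reserved addresses are the first four and the last address of the block.

import ipaddress

subnet = ipaddress.ip_network("10.0.0.0/24")
reserved = [subnet[0], subnet[1], subnet[2], subnet[3], subnet[-1]]
print(subnet.num_addresses)                  # 256 total addresses
print(subnet.num_addresses - len(reserved))  # 251 usable addresses
print([str(ip) for ip in reserved])          # 10.0.0.0 through 10.0.0.3, and 10.0.0.255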
Additional costs might apply when you use Elastic IP addresses, so it is important to release
them when you no longer need them.
To learn more about Elastic IP addresses, see Elastic IP Addresses in the AWS Documentation
at https://docs.aws.amazon.com/vpc/latest/userguide/vpc-eips.html.
Each instance in your VPC has a default network interface (the primary network interface) that
is assigned a private IPv4 address from the IPv4 address range of your VPC. You cannot detach
a primary network interface from an instance. You can create and attach an additional network
interface to any instance in your VPC. The number of network interfaces you can attach varies
by instance type.
Each subnet in your VPC must be associated with a route table. The
main route table is the route table that is automatically assigned to your
VPC. It controls the routing for all subnets that are not explicitly associated with any other
route table. A subnet can be associated with only one route table at a time, but you can associate
multiple subnets with the same route table.
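One way to create a custom route table and associate it with a subnet is shown in the following boto3 sketch; the VPC and subnet IDs are placeholders for resources created earlier.

import boto3

ec2 = boto3.client("ec2")

route_table = ec2.create_route_table(VpcId="vpc-1234567890abcdef0")
route_table_id = route_table["RouteTable"]["RouteTableId"]

# A subnet can be associated with only one route table at a time
ec2.associate_route_table(
    RouteTableId=route_table_id, SubnetId="subnet-1234567890abcdef0")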
Key Points:
Some key takeaways from this section of the module include the purpose of a VPC, how
subnets divide the VPC CIDR block, and how route tables control the routing for subnets.
For more information about internet gateways, see Internet Gateways in the AWS
Documentation at https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html.
3.2 Network address translation (NAT) gateway
A network address translation (NAT) gateway enables instances in a private subnet to connect
to the internet or other AWS services, but prevents the internet from initiating a connection
with those instances.
To create a NAT gateway, you must specify the public subnet in which the NAT gateway should
reside. You must also specify an Elastic IP address to associate with the NAT gateway when
you create it. After you create a NAT gateway, you must update the route table that is associated
with one or more of your private subnets to point internet-bound traffic to the NAT gateway.
Thus, instances in your private subnets can communicate with the internet.
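For illustration, those steps might look like the following in boto3: allocate an Elastic IP address, create the NAT gateway in a public subnet, wait for it to become available, and then add a default route to it in the private subnet's route table. The subnet and route table IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")

eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0public0000000000", AllocationId=eip["AllocationId"])
nat_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the NAT gateway is available before adding the route
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Send internet-bound traffic from the private subnet to the NAT gateway
ec2.create_route(
    RouteTableId="rtb-0private000000000",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id)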
You can also use a NAT instance in a public subnet in your VPC instead of a NAT gateway.
However, a NAT gateway is a managed NAT service that provides better availability, higher
bandwidth, and less administrative effort. For common use cases, AWS recommends that you
use a NAT gateway instead of a NAT instance.
• Security groups – VPC sharing participants can reference each other's security group
IDs
• Efficiencies – Higher density in subnets, efficient use of VPNs and AWS Direct
Connect
• No hard limits – Hard limits can be avoided through simplified network architecture
(for example, the limit of 50 virtual interfaces per AWS Direct Connect connection)
• Optimized costs – Costs can be optimized through the reuse of NAT gateways, VPC
interface endpoints, and intra-Availability Zone traffic
VPC sharing enables you to decouple accounts and networks. You have fewer, larger, centrally
managed VPCs. Highly interconnected applications automatically benefit from this approach.
• Transitive peering is not supported. For example, suppose that you have three VPCs:
A, B, and C. VPC A is connected to VPC B, and VPC A is connected to VPC C.
However, VPC B is not implicitly connected to VPC C. To connect VPC B to VPC C,
you must explicitly establish that connectivity.
• You can only have one peering resource between the same two VPCs.
For more information about VPC peering, see VPC Peering in the AWS Documentation at
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-peering.html.
1. Create a new virtual gateway device (called a virtual private network (VPN) gateway)
and attach it to your VPC.
2. Define the configuration of the VPN device or the customer gateway. The customer
gateway is not a device but an AWS resource that provides information to AWS about
your VPN device.
3. Create a custom route table to point corporate data center-bound traffic to the VPN
gateway. You also must update security group rules. (You will learn about security
groups in the next section.)
4. Establish an AWS Site-to-Site VPN (Site-to-Site VPN) connection to link the two
systems together.
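For illustration only, steps 1, 2, and 4 might look like the following in boto3 (step 3, the route table update, follows the same pattern shown earlier for route tables). The VPC ID, public IP address, and Border Gateway Protocol (BGP) autonomous system number are placeholder assumptions for your own environment.

import boto3

ec2 = boto3.client("ec2")

# Step 1: create a virtual private network (VPN) gateway and attach it to the VPC
vgw = ec2.create_vpn_gateway(Type="ipsec.1")
vgw_id = vgw["VpnGateway"]["VpnGatewayId"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw_id, VpcId="vpc-1234567890abcdef0")

# Step 2: the customer gateway resource describes your on-premises VPN device
cgw = ec2.create_customer_gateway(
    Type="ipsec.1", PublicIp="203.0.113.12", BgpAsn=65000)

# Step 4: the Site-to-Site VPN connection links the two systems together
ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId=vgw_id)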
For more information about AWS Direct Connect (DX), see the AWS Direct Connect product page at
https://aws.amazon.com/directconnect/.
3.7 VPC endpoints
A VPC endpoint is a virtual device that enables you to privately connect your VPC to supported
AWS services and VPC endpoint services that are powered by AWS PrivateLink. Connection
to these services does not require an internet gateway, NAT device, VPN connection, or AWS
Direct Connect connection. Instances in your VPC do not require public IP addresses to
communicate with resources in the service. Traffic between your VPC and the other service
does not leave the Amazon network.
• An interface VPC endpoint (interface endpoint) enables you to connect to services that
are powered by AWS PrivateLink. These services include some AWS services, services
that are hosted by other AWS customers and AWS Partner Network (APN) Partners in
their own VPCs (referred to as endpoint services), and supported AWS Marketplace
APN Partner services. The owner of the service is the service provider, and you—as
the principal who creates the interface endpoint—are the service consumer. You are
charged for creating and using an interface endpoint to a service. Hourly usage rates
and data processing rates apply. See the AWS Documentation for a list of supported
interface endpoints and for more information about the example shown here at
https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html.
• Gateway endpoints: The use of gateway endpoints incurs no additional charge. Standard
charges for data transfer and resource usage apply.
For more information about VPC endpoints, see VPC Endpoints in the AWS Documentation at
https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html.
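As an example of the gateway endpoint type, the following boto3 sketch creates an endpoint for Amazon S3 so that S3-bound traffic stays on the Amazon network. The VPC ID, route table ID, and Region in the service name are placeholder assumptions.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-1234567890abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",   # the service to reach privately
    RouteTableIds=["rtb-1234567890abcdef0"])    # routes to S3 are added to this table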
Though you can use VPC peering to connect pairs of VPCs, managing point-to-point
connectivity across many VPCs without the ability to centrally manage the connectivity
policies can be operationally costly and difficult. For on-premises connectivity, you must attach
your VPN to each individual VPC. This solution can be time-consuming to build and difficult
to manage when the number of VPCs grows into the hundreds.
To solve this problem, you can use AWS Transit Gateway to simplify your networking model.
With AWS Transit Gateway, you only need to create and manage a single connection from the
central gateway into each VPC, on-premises data center, or remote office across your network.
A transit gateway acts as a hub that controls how traffic is routed among all the connected
networks, which act like spokes. This hub-and-spoke model significantly simplifies
management and reduces operational costs because each network only needs to connect to the
transit gateway and not to every other network. Any new VPC is connected to the transit
gateway, and is then automatically available to every other network that is connected to the
transit gateway. This ease of connectivity makes it easier to scale your network as you grow.
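A minimal boto3 sketch of this hub-and-spoke model is shown below: one transit gateway is created, and each VPC is attached to it through a subnet in that VPC. The VPC and subnet IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")

tgw = ec2.create_transit_gateway(Description="central network hub")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach each spoke VPC to the hub; traffic between the VPCs is then routed
# through the transit gateway instead of pairwise peering connections
for vpc_id, subnet_ids in [
    ("vpc-0aaaa00000000000a", ["subnet-0aaaa00000000000a"]),
    ("vpc-0bbbb00000000000b", ["subnet-0bbbb00000000000b"]),
]:
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id, VpcId=vpc_id, SubnetIds=subnet_ids)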
3.9 Activity: Label this network diagram
See if you can recognize the different VPC networking components that you learned about by
labeling this network diagram.
At the most basic level, a security group is a way for you to filter traffic to your instances.
Security groups have rules that control the inbound and outbound traffic. When you create a
security group, it has no inbound rules. Therefore, no inbound traffic that originates from
another host to your instance is allowed until you add inbound rules to the security group. By
default, a security group includes an outbound rule that allows all outbound traffic. You can
remove the rule and add outbound rules that allow specific outbound traffic only. If your
security group has no outbound rules, no outbound traffic that originates from your instance is
allowed.
Security groups are stateful, which means that state information is kept even after a request is
processed. Thus, if you send a request from your instance, the response traffic for that request
is allowed to flow in regardless of inbound security group rules. Responses to allowed inbound
traffic are allowed to flow out, regardless of outbound rules.
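To make this concrete, the following boto3 sketch creates a security group (which starts with no inbound rules) and adds a single inbound rule that allows HTTPS from anywhere. The VPC ID is a placeholder.

import boto3

ec2 = boto3.client("ec2")

sg = ec2.create_security_group(
    GroupName="web-servers",
    Description="Allow inbound HTTPS",
    VpcId="vpc-1234567890abcdef0")

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }])
# Because security groups are stateful, responses to this inbound traffic are
# allowed back out automatically; no matching outbound rule is required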
A network ACL has separate inbound and outbound rules, and each rule can either allow or
deny traffic. Your VPC automatically comes with a modifiable default network ACL. By
default, it allows all inbound and outbound IPv4 traffic and, if applicable, IPv6 traffic. The
default network ACL contains a rule that allows all traffic, plus a final catch-all rule (marked
with an asterisk) that denies any traffic that does not match a numbered rule.
Network ACLs are stateless, which means that no information about a request is maintained
after a request is processed.
A network ACL contains a numbered list of rules that are evaluated in order, starting with the
lowest numbered rule. The purpose is to determine whether traffic is allowed in or out of any
subnet that is associated with the network ACL. The highest number that you can use for a rule
is 32,766. AWS recommends that you create rules in increments (for example, increments of
10 or 100) so that you can insert new rules where you need them later.
For more information about network ACLs, see Network ACLs in the AWS Documentation at
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html.
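As an illustration of the rule numbering, the following boto3 sketch adds an inbound rule to an existing network ACL with rule number 100, leaving room to insert lower- or higher-numbered rules later. The network ACL ID is a placeholder.

import boto3

ec2 = boto3.client("ec2")

ec2.create_network_acl_entry(
    NetworkAclId="acl-1234567890abcdef0",
    RuleNumber=100,           # rules are evaluated from the lowest number up
    Egress=False,             # False means this is an inbound rule
    Protocol="6",             # protocol number 6 is TCP
    RuleAction="allow",
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 443, "To": 443})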
4.5 Security groups versus network ACLs
Here is a summary of the differences between security groups and network ACLs:
• Security groups act at the instance level, but
network ACLs act at the subnet level.
• For security groups, all rules are evaluated before the decision is made to allow traffic.
For network ACLs, rules are evaluated in number order, and the first rule that matches
the traffic is applied.
See if you can design a VPC that meets the following requirements:
• The first address of your network must be 10.0.0.0. Each subnet must have 256 IPv4
addresses.
• Your database server must be able to access the internet to make patch updates.
• Your architecture must be highly available and use at least one custom firewall layer.
• Create a VPC.
4.8 Begin Lab 2: Build Your VPC and Launch a Web Server
It is now time to start the lab. It should take you approximately 30 minutes to complete the lab.
You can use Amazon Route 53 to configure DNS health checks so that you can route traffic to
healthy endpoints or independently monitor the health of your application and its endpoints.
Amazon Route 53 traffic flow helps you manage traffic globally through several routing types,
which can be combined with DNS failover to enable various low-latency, fault-tolerant
architectures. You can use Amazon Route 53 traffic flow’s simple visual editor to manage how
your users are routed to your application’s endpoints—whether in a single AWS Region or
distributed around the globe.
Amazon Route 53 also offers Domain Name Registration—you can purchase and manage
domain names (like example.com), and Amazon Route 53 will automatically configure DNS
settings for your domains.
• Simple routing (round robin) – Use for a single resource that performs a given function
for your domain (such as a web server that serves content for the example.com website).
• Weighted round robin routing – Use to route traffic to multiple resources in proportions
that you specify. Enables you to assign weights to resource record sets to specify the
frequency with which different responses are served. You might want to use this
capability to do A/B testing, which is when you send a small portion of traffic to a server
where you made a software change. For instance, suppose you have two record sets that
are associated with one DNS name: one with weight 3 and one with weight 1. In this
case, 75 percent of the time, Amazon Route 53 will return the record set with weight 3,
and 25 percent of the time, Amazon Route 53 will return the record set with weight 1.
Weights can be any number between 0 and 255.
• Latency-based routing (LBR) – Use when you have resources in multiple AWS Regions and
you want to route traffic to the Region that provides the best latency. Latency routing
works by routing your customers to the AWS endpoint (for example, Amazon EC2
instances, Elastic IP addresses, or load balancers) that provides the fastest experience
based on actual performance measurements of the different AWS Regions where your
application runs.
• Geolocation routing – Use when you want to route traffic based on the location of your
users. When you use geolocation routing, you can localize your content and present
some or all of your website in the language of your users. You can also use geolocation
routing to restrict the distribution of content to only the locations where you have
distribution rights. Another possible use is for balancing the load across endpoints in a
predictable, easy-to-manage way, so that each user location is consistently routed to the
same endpoint.
• Geoproximity routing – Use when you want to route traffic based on the location of
your resources and, optionally, shift traffic from resources in one location to resources
in another.
• Failover routing (DNS failover) – Use when you want to configure active-passive
failover. Amazon Route 53 can help detect an outage of your website and redirect your
users to alternate locations where your application is operating properly. When you
enable this feature, Amazon Route 53 health-checking agents will monitor each location
or endpoint of your application to determine its availability. You can take advantage of
this feature to increase the availability of your customer-facing application.
• Multivalue answer routing – Use when you want Route 53 to respond to DNS queries
with up to eight healthy records that are selected at random. You can configure Amazon
Route 53 to return multiple values—such as IP addresses for your web servers—in
response to DNS queries. You can specify multiple values for almost any record, but
multivalue answer routing also enables you to check the health of each resource so that
Route 53 returns only values for healthy resources. It's not a substitute for a load
balancer, but the ability to return multiple health-checkable IP addresses is a way to use
DNS to improve availability and load balancing.
• Creating health checks to monitor the health and performance of your web applications,
web servers, and other resources. Each health check that you create can monitor one of
the following: the health of a specified resource, such as a web server; the status of
other health checks; or the status of an Amazon CloudWatch alarm.
1. Create two Canonical Name (CNAME) records for www with a routing policy of
Failover Routing. The first record is the primary route policy, which points to
the load balancer for your web application. The second record is the secondary route
policy, which points to your static Amazon S3 website.
2. Use Route 53 health checks to make sure that the primary is running. If it is, all traffic
defaults to your web application stack. Failover to the static backup site would be
triggered if either the web server goes down (or stops responding), or the database
instance goes down.
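For illustration, steps 1 and 2 could be expressed in boto3 as follows. The hosted zone ID, health check ID, and DNS names are placeholder assumptions for your own domain and resources.

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",
    ChangeBatch={"Changes": [
        # Primary record: points to the load balancer and is health checked
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "www.example.com", "Type": "CNAME", "TTL": 60,
            "SetIdentifier": "primary", "Failover": "PRIMARY",
            "HealthCheckId": "11111111-2222-3333-4444-555555555555",
            "ResourceRecords": [{"Value": "lb.example.com"}]}},
        # Secondary record: points to the static backup website
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "www.example.com", "Type": "CNAME", "TTL": 60,
            "SetIdentifier": "secondary", "Failover": "SECONDARY",
            "ResourceRecords": [{"Value": "backup-site.example.com"}]}},
    ]})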
• Amazon Route 53 is a highly available and scalable cloud DNS web service that
translates domain names into numeric IP addresses.
• You can use Amazon Route 53 failover to improve the availability of your applications.
A content delivery network (CDN) is a globally distributed system of caching servers. A CDN
caches copies of commonly requested files (static content, such as Hypertext Markup
Language, or HTML; Cascading Style Sheets, or CSS; JavaScript; and image files) that are
hosted on the application origin server. The CDN delivers a local copy of the requested content
from an edge cache or Point of Presence (PoP) that provides the fastest delivery to the requester.
CDNs also deliver dynamic content that is unique to the requester and is not cacheable. Having
a CDN deliver dynamic content improves application performance and scaling. The CDN
establishes and maintains secure connections closer to the requester. If the CDN is on the same
network as the origin, routing back to the origin to retrieve dynamic content is accelerated. In
addition, content such as form data, images, and text can be ingested and sent back to the origin,
thus taking advantage of the low-latency connections and proxy behavior of the PoP.
As objects become less popular, individual edge locations might remove those objects to make
room for more popular content. For the less popular content, CloudFront has Regional edge
caches. Regional edge caches are CloudFront locations that are deployed globally and are close
to your viewers. They are located between your origin server and the global edge locations that
serve content directly to viewers. A Regional edge cache has a larger cache than an individual
edge location, so objects remain in the Regional edge cache longer. More of your content
remains closer to your viewers, which reduces the need for CloudFront to go back to your
origin server and improves overall performance for viewers.
For more information about how Amazon CloudFront works, see How CloudFront Delivers
Content in the AWS Documentation at
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/HowCloudFrontWorks.html#HowCloudFrontWorksContentDelivery.
• Fast and global – Amazon CloudFront is massively scaled and globally distributed. To
deliver content to end users with low latency, Amazon CloudFront uses a global
network that consists of edge locations and regional caches.
• Deeply integrated with AWS – Amazon CloudFront is integrated with AWS, with both
physical locations that are directly connected to the AWS Global Infrastructure and
other AWS services. You can use APIs or the AWS Management Console to
programmatically configure all features in the CDN.
• Data transfer out – You are charged for the volume of data that is transferred out from
Amazon CloudFront edge locations, measured in GB, to the internet or to your origin
(both AWS origins and other origin servers). Data transfer usage is totaled separately
for specific geographic regions, and then cost is calculated based on pricing tiers for
each area. If you use other AWS services as the origins of your files, you are charged
separately for your use of those services, including storage and compute hours.
• HTTP(S) requests – You are charged for the number of HTTP(S) requests that are made
to Amazon CloudFront for your content.
• Invalidation requests – You are charged per path in your invalidation request. A path
that is listed in your invalidation request represents the URL (or multiple URLs if the
path contains a wildcard character) of the object that you want to invalidate from
CloudFront cache. You can request up to 1,000 paths each month from Amazon
CloudFront at no additional charge. Beyond the first 1,000 paths, you are charged per
path that is listed in your invalidation requests.
• Dedicated IP custom Secure Sockets Layer (SSL) – You pay $600 per month for each
custom SSL certificate that is associated with one or more CloudFront distributions that
use the Dedicated IP version of custom SSL certificate support. This monthly fee is
prorated by the hour. For example, if your custom SSL certificate was associated with
at least one CloudFront distribution for just 24 hours (that is, 1 day) in the month of
June, your total charge for using the custom SSL certificate feature in June is (1 day /
30 days) * $600 = $20.
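Because invalidation charges are per path, it can be useful to see what an invalidation request looks like. The following boto3 sketch submits one request with two paths; the distribution ID is a placeholder.

import time
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_invalidation(
    DistributionId="E1234567890ABC",
    InvalidationBatch={
        "Paths": {"Quantity": 2, "Items": ["/index.html", "/images/*"]},
        "CallerReference": str(time.time()),  # must be unique for each request
    })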
For the latest pricing information, see the Amazon CloudFront pricing page at
https://aws.amazon.com/cloudfront/pricing/.
• Amazon CloudFront is a fast CDN service that securely delivers data, videos,
applications, and APIs over a global infrastructure with low latency and high transfer
speeds.
• Highly programmable
• Cost-effective