Customers looking to create (green-field) or migrate workloads onto cloud infrastructure need to understand & analyze their network infrastructure requirements, and whether or not the cloud provider can support their workload;
A thorough analysis & understanding of the following is needed:
- migration plan - lift-and-shift OR phase-wise OR hybrid integration approach;
- hosting regions & availability zones, existing data center location(s) on-premise (if applicable)
- non-functional considerations / targets - performance, security, availability, reliability & scalability
- possible trade-offs by priority based on the requirements to support the target workload
- cost factor - evaluating options available to build network infrastructure on the cloud platform
- range of IP addresses needed to support communication between the different systems / applications that make up the workload
- network routing architecture - routers, gateways, filters, policies, rules; securing access across networks on the cloud
The consumer business needs to thoroughly understand the options supported by the cloud provider & evaluate a cost-effective, simple & workable solution based on business size (small / medium / large enterprise), required capacity (shared / private / hybrid infrastructure), and cloud migration & go-to-market strategy;
In this article, we cover the basic concepts related to AWS networking - required to design the network infrastructure
---------------------------------------------------------------------------------------------------------------------------
Ephemeral ports - short-lived transport protocol ports used in IP communications; dynamic ports; above the "well-known" ports (> 1023); the IANA-suggested range is 49152-65535 - older Windows versions use 1025 onward (modern Windows uses 49152-65535) & Linux typically uses 32768-60999;
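The ephemeral-port ranges above can be sketched with a small helper; a minimal illustration only - the range names and the `is_ephemeral` helper are made up for this note, not any official mapping:

```python
# Illustrative ephemeral port ranges per platform (assumed typical defaults),
# plus a helper to check whether a port falls inside one of them.
EPHEMERAL_RANGES = {
    "iana_suggested": (49152, 65535),
    "linux_default": (32768, 60999),   # see /proc/sys/net/ipv4/ip_local_port_range
    "windows_modern": (49152, 65535),
}

def is_ephemeral(port: int, platform: str = "iana_suggested") -> bool:
    lo, hi = EPHEMERAL_RANGES[platform]
    return lo <= port <= hi

print(is_ephemeral(53))                       # False - well-known DNS port
print(is_ephemeral(50000))                    # True
print(is_ephemeral(33000, "linux_default"))   # True
```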
AWS VPCs do not support multicast or broadcast; traffic is always unicast, meaning communication is direct, point-to-point;
- TCP - connection-oriented (web, email, FTP); stateful; receipt acknowledged
- UDP - connection-less (DNS, streaming); stateless
- ICMP (officially layer-3, but debated) - used by network devices for health checks & diagnostics - traceroute, ping, etc.
---------------------------------------------------------------------------------------------------------------------------
- Egress-only internet gateway --> for IPv6; IPv6 addresses are globally unique & public by default; the gateway is stateful (allows outbound connections & their replies, blocks inbound initiation)
- needs a custom route for ::/0 to the egress-only internet gateway
- Internet gateway --> supports both IPv4 & IPv6
- performs one-to-one translation between an instance's private IPv4 address & its public IPv4 address (no translation needed for IPv6)
- NAT instance --> an EC2 instance built from a special AWS-provided AMI
- a self-managed alternative to the managed NAT gateway
- must be created in a public subnet, uses an elastic IP for its public IP address, and requires the source / destination check flag to be disabled
- NAT gateway --> a fully managed NAT service, managed by AWS
- translates private IPv4 addresses to public when requests need to go out to the internet
- works together with an internet gateway, which is still needed to reach the public internet
- supports IPv4 only
- Route table entries are necessary for NAT instances & gateways, to route private-subnet traffic to them
---------------------------------------------------------------------------------------------------------------------------
- CIDR blocks
- network gateways
- subnets
- routing rules / tables
- VPC name tag
- CIDR blocks [choose carefully], tenancy [choose between default or dedicated]
- optionally associate an IPv6 address range (an IPv4 range is mandatory)
How many IP addresses are allowed in a VPC? # IPs = 2^(32 - prefix length); e.g. /16 = 2^(32 - 16) = 2^16 = 65536
- 10.0.64.0 - network logical address (example subnet: 10.0.64.0/21)
- 10.0.64.1 - reserved for AWS VPC router
- 10.0.64.2 - IP address of the DNS server
- 10.0.64.3 - reserved for AWS future use
- 10.0.71.255 - for network broadcasting / network broadcast address
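The address math above can be checked with Python's stdlib `ipaddress` module; a minimal sketch, assuming the example range 10.0.64.0 - 10.0.71.255 corresponds to a /21:

```python
import ipaddress

# The example subnet from the text: 10.0.64.0/21 spans 10.0.64.0 - 10.0.71.255.
subnet = ipaddress.ip_network("10.0.64.0/21")

total = subnet.num_addresses            # 2^(32 - 21) = 2048
print(total)                            # 2048

# The five addresses AWS reserves in every subnet:
reserved = [
    subnet.network_address,             # 10.0.64.0   - network address
    subnet.network_address + 1,         # 10.0.64.1   - VPC router
    subnet.network_address + 2,         # 10.0.64.2   - DNS
    subnet.network_address + 3,         # 10.0.64.3   - reserved for future use
    subnet.broadcast_address,           # 10.0.71.255 - broadcast (reserved; AWS has no broadcast)
]
print(total - len(reserved))            # 2043 usable addresses
```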
- a VPC can operate in dual-stack mode
- in dual-stack mode, IPv4 & IPv6 operate independently of one another
- routing & security components must be configured separately for IPv4 & IPv6
- IPv4 CIDR block size ranges between /16 & /28; the IPv6 CIDR block size is fixed at /56;
- IPv4 allows defining a private CIDR block range of our choice; for IPv6, AWS assigns the CIDR block - there is no choice of range;
- IPv4 has private & public addressing by default; IPv6 has no default private addressing - routing & security policies differentiate private from public;
Addressing within the VPC is handled by DHCP option sets, which are created & assigned when we create our VPC; any resources we create or launch within a subnet get their addressing dynamically via DHCP;
- VPC router is a standard router provided within a VPC
- this is where all routing decisions begin; network packets first hit the VPC router, where routing starts
- routing decisions are governed by route tables associated with VPCs & subnets
FQDN = fully qualified domain name = host / sub-domain(s) + second-level domain + top-level domain + root
- AWS reserves five of the allocated subnet IP addresses - the first four & the last
- *.1 is the VPC router, which also provides DHCP services in the VPC
- *.0 is the network address, *.2 is reserved for DNS
- *.3 is reserved for future use
- *.last (the broadcast address) is reserved, although broadcast is not supported
- specify the set of domain names, sub-domain names/ IP addresses for domain name resolution in the route53 service
- route53 determines how client requests are resolved & routed to endpoints in the VPC
- associating routing policies with route53, along with regular "health checks", can help achieve availability, fault tolerance, etc.
- routing policies include - simple routing, weighted routing, fail-over routing, latency-based routing, geo-location & geo-proximity routing
- all of these are configured as route53 routing policies
- weighted routing splits traffic by assigned weights; fail-over & latency-based routing pick the target dynamically - by health-check status & by latency measured at that instant, respectively
- geo-location routing routes by the user's location (country / continent); geo-proximity routing routes by distance to resources, with an adjustable bias
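Weighted routing from the list above can be sketched as a weighted random pick; an illustration only - the endpoint names and weights are made up, and this is not the route53 API:

```python
import random

# Hypothetical endpoints with route53-style weights 70 / 30.
endpoints = {"primary.example.com": 70, "secondary.example.com": 30}

def pick_endpoint(rng: random.Random) -> str:
    # Choose an endpoint with probability proportional to its weight.
    names = list(endpoints)
    return rng.choices(names, weights=[endpoints[n] for n in names])[0]

rng = random.Random(42)  # seeded for a repeatable demonstration
sample = [pick_endpoint(rng) for _ in range(10_000)]
share = sample.count("primary.example.com") / len(sample)
print(round(share, 2))   # roughly 0.70 - traffic split follows the weights
```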
- private hosted zones get linked to VPCs in a private zone / space
- public hosted zones are linked with external zones, outside of private zones
- using the CLI, route53 can link a private hosted zone to VPCs in other accounts, using the following commands
- create-vpc-association-authorization
- associate-vpc-with-hosted-zone
- EC2 instances placed on a public subnet to provide controlled entry are known as bastion hosts
- a bastion host is a computer on a network specifically designed to withstand attacks
- it usually runs a single service (e.g. SSH or a proxy server); all other services are removed / limited to reduce the threat to the computer
- it's hardened & located with a specific purpose - either outside a firewall OR in a "DMZ"
- it usually mediates access from untrusted networks or computers
Use a network load balancer (layer-4) in front of 2 bastion hosts for high availability; running bastion hosts behind an NLB is a comparatively expensive option;
- a cheaper option for dev / test environments is to configure auto-scaling with min & max instance counts set to "ONE";
- with this, even if the bastion on subnet-1 goes down, the auto-scaling group brings up a new bastion instance on subnet-2;
- write a script to re-associate the elastic IP address with the new bastion instance;
- note that this involves downtime to detect the failure and bring up / configure the new bastion instance
---------------------------------------------------------------------------------------------------------------------------
Elastic network interfaces (ENI), elastic IPs & internet gateways
- ENI is a virtual interface to attach to an instance in a VPC;
- ENI's are associated with a subnet
- there is a default ENI that gets created with an EC2 instance
- ENIs can be attached to & detached from an EC2 instance as we see fit
- ENIs can be attached to running instances (hot attach), stopped instances (warm attach) OR at launch (cold attach)
- ENIs are confined to a single availability zone
What is ENI composed of?
- has a primary private IPv4 address, a MAC address AND at least one security group
- can optionally have secondary private IPv4 addresses
- can have one or more elastic IP addresses
- one PUBLIC IPv4 address, one or more IPv6 addresses
- an elastic / external IP address, and a source & destination check flag
- the primary ENI cannot be detached from the instance it was originally created with
Multiple ENIs can be associated with SINGLE EC2 INSTANCE WITHIN THE SAME AZ
Why do we need multiple ENIs? This is called a dual-homed instance - a single EC2 instance serves external traffic as well as internal traffic across 2 different subnets; from a networking management & security perspective, it's easier to manage & control access using 2 separate subnets and security groups
- normal use case is single subnet per AZ associated with single EC2 instance
- another use case for multiple ENIs is scheduled maintenance on a backup server: clients connect to the backup server via a different subnet while the primary server is taken down for maintenance, with no user impact / downtime
- an additional use case relates to software licensing: a license bound to a specific ENI can be preserved by moving that ENI between primary & backup EC2 instances when the backup is brought up
- rule ordering for NACLs --> rules are evaluated in sequential order by rule number; the first match wins
- NACLs operate at the subnet boundary, while security groups operate at the instance / ENI level; ingress & egress rules apply to both, and a security group can self-reference to allow traffic between its members across subnets within an availability zone
- NACLs support explicit allow & deny rules; security groups can only specify what to allow (ranges of IP addresses, ports, or other security groups) - there are no deny rules, and all rules are evaluated rather than ordered
- NACLs are stateless, which means inbound & outbound rules are both required for traffic coming in & going out; security groups are stateful - return traffic is allowed automatically
- NACLs are applied to a subnet; security groups are applied to EC2 hosts / ENIs within the subnet
- what if I need to allow access between instances across subnets within the AZ? This is where security-group self-referencing comes into play
- NAT gateways are managed by AWS and are highly available
- NAT gateway bandwidth ranges from 5 Gbps, scaling up to 45 Gbps
- up to 5 NAT gateways can be configured per availability zone
- ephemeral port ranges should be open (e.g. in NACLs) so that return traffic can operate with NAT gateways
- flow logs capture ingress & egress information, they don't capture the network packets but the packet metadata such as the source, destination IPs, VPC id, subnet id, port numbers, etc;
- it captures the IP traffic coming into & going out of the VPC;
- captures logs at 3 levels - VPC level, subnet level & network interface level
- can be attached to subnets, VPCs OR ENIs
- are stored using Amazon CloudWatch Logs i.e. after creating flow logs, we can view & retrieve log data from Amazon CloudWatch Logs;
- once configured, flow logs cannot be edited - they must be re-created; they are not real-time & involve a delay;
- can be set up to deliver to CloudWatch Logs OR S3;
- need an IAM role with permission to write to CloudWatch Logs or S3
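Since flow logs capture packet metadata rather than packets, each record is just a space-separated line; a minimal parsing sketch using the default flow log record format (the sample record below is fabricated data):

```python
# Field names of the default VPC flow log format, in order.
FIELDS = ("version account_id interface_id srcaddr dstaddr srcport dstport "
          "protocol packets bytes start end action log_status").split()

# A fabricated sample record (protocol 6 = TCP, action ACCEPT).
record = ("2 123456789012 eni-0a1b2c3d 10.0.1.5 10.0.2.9 49152 443 "
          "6 10 840 1600000000 1600000060 ACCEPT OK")

# Zip the field names with the whitespace-split values.
parsed = dict(zip(FIELDS, record.split()))
print(parsed["srcaddr"], "->", parsed["dstaddr"], parsed["action"])
# 10.0.1.5 -> 10.0.2.9 ACCEPT
```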
- network load balancers provide stable IP addresses for your networks
- classic load balancers operate on layer-4 & layer-7
- application load balancer operates on layer-7
- application load balancer also adds the ability to route requests to "target groups", based on the type of requests
- web sockets are supported on both NLB & ALB
- SNI (server name indication) is supported on both NLB & ALB
- ALB specifically supports: path / host-based routing and user authentication
- ALB can also route based on HTTP headers, method-based routing, query string params & source IP address CIDR based routing
- many more options are available with ALB as against NLB
- NLB specifically supports: static & elastic IPs
- "X-Forwarded-For" header, when included --> indicates the IP address where the request originated from
- the classic load balancer has 2 health states - healthy or unhealthy;
- network & application load balancers have multiple states - initial, healthy, unhealthy, unused & draining;
- unused means the target is not registered to a target group or rule, OR the target is not registered in an AZ the load balancer routes to
BGP route selection on the public internet considers, among other things:
- the advertised prefixes (longest / most specific match)
- the weighted / preference configuration
- the shortest AS path to route through
- while Iceland has ~68 registered ASNs, the USA is in the order of 17,000 ASNs
- across the world, this goes beyond 67,000 ASNs
- this improves availability & performance of internet applications used by global audience
- a global accelerator comes with 2 static (anycast) IP addresses;
- each global accelerator includes ONE or more "listeners" --> they accept traffic globally at optimal edge locations;
- when we create a global accelerator ==> we get a default "DNS name", e.g. abcd1234.awsglobalaccelerator.com;
- traffic is routed through a network zone; from there, it's routed to listeners --> distributed to optimal endpoints within the endpoint groups associated with the listeners
- endpoint groups are associated with AWS regions; the actual endpoints are NLBs / ALBs / EC2 instances / elastic IP addresses, etc.
- Availability - NAT gateways are managed services, highly available within an AZ; NAT instances are managed by the customer;
- Bandwidth - NAT gateways range up to 45 Gbps; NAT instance bandwidth depends on the instance types;
- Maintenance - provided as a managed service for NAT gateways; managed by the customer for NAT instances;
- Performance - optimized for NAT gateways; Amazon Linux AMI configured for address translation - for NAT instances;
- Public IP - an elastic IP is always attached for NAT gateways; NAT instances have a detachable elastic IP;
- Security Groups - cannot be associated with NAT gateway; can be used with NAT instances;
- Bastion server - not supported for NAT gateway; NAT instances can be used as bastion servers;