
Scalable networking options on AWS

Network infrastructure on AWS involves network planning and architecture: defining public & private network boundaries, defining how subnets interact with each other, limiting & securing network access, planning the required set of public & private network addresses to operate the workload, etc.

Network infrastructure is provided as a service for SaaS, PaaS & IaaS consumers - hence the required network devices, gateways, routers & related hardware components are managed by the cloud provider; consumers, however, need to plan the network architecture to support their workloads;

VPC connectivity options with AWS

  • Customer network to Amazon VPC connectivity options - require non-overlapping IP ranges, hence a unique CIDR block for each VPC
    • AWS managed VPN - VPN connection from network equipment on the remote network to AWS managed network equipment; data-center redundancy & fail-over ensured by AWS; static routes & dynamic / BGP peering supported;
      • limitation - customer is responsible for redundancy & fail-over on the customer side; the device should support single-hop BGP (if leveraged)
    • AWS Direct Connect - private connection between on-premises and AWS networks; bandwidth ranges from 1-10 Gbps; BGP peering & routing policies supported;
      • limitation - cost factor; additional telecom & hosting provider relationships
    • AWS VPN CloudHub - hub & spoke model; AWS managed virtual private gateway (VGW) with redundancy & fail-over; BGP & routing policies supported;
      • limitation - latency, variability & availability are dependent on the internet; consumer-managed redundancy & fail-over;
    • Transit VPC - global network transit on AWS; AWS managed VPN connection between hub & spoke - redundancy managed by AWS;
      • limitation - HA managed by customer
  • Amazon VPC to VPC connectivity options
    • VPC Peering - connect VPCs within & across AWS regions; leverages the AWS network backbone; no additional hardware or appliances are involved since connectivity is direct;
      • limitation - transitive peering is not supported
    • software VPN to AWS managed VPN - connect VPCs between a customer-managed VPN appliance & an AWS managed VPN appliance; leverages the AWS network backbone - within & across regions; availability, redundancy & fail-over are managed by AWS;
      • limitation - HA for software VPN appliance endpoints is managed by the customer;
      • additional overheads: (a) compute costs, (b) personnel costs, (c) availability costs, (d) VPN performance, (e) no hardware acceleration
    • AWS managed VPN - routing across VPCs via IPsec VPN connections; reuses existing Amazon VPC VPN connections; redundancy & fail-over managed by AWS; static routes & BGP peering with routing policies are supported;
      • limitation - HA, fail-over & redundancy for VPC endpoints are managed by the customer
    • AWS Direct Connect & AWS PrivateLink - leverage logical connections & VPC interface endpoints / VPC endpoint services to connect VPCs; HA, fail-over & redundancy managed by AWS; static & BGP peering are supported with routing policies; no single point of failure;
      • limitation - additional hosting for Direct Connect; VPC endpoint services with PrivateLink are region specific;
  • Connect remote VPC users
    • connect remote users to VPC resources using a remote-access solution; leverage existing end-user internal & remote access policies + technologies
    • low-cost, elastic & secure remote access to services can be implemented with this option
    • limitation - requires existing user internal & remote access implementations

Steps to configure a site-to-site VPN connection:

  • configure a virtual private gateway (VGW) or transit gateway
  • confirm customer gateway device (CGD) meets requirements
  • configure customer gateway (CGW)
  • configure VPN connections
  • configure VPC route tables
  • configure VPN settings on CGD
  • for each site-to-site VPN connection, AWS provides two tunnel endpoints in different AZs;
    • the two tunnel connections operate in active/passive mode
    • meaning only one of the tunnel connections is active at any given time
    • each tunnel runs between the virtual private gateway (VGW) and the customer gateway device (CGD)
  • with site-to-site VPN connectivity, the limitation relates to network bandwidth & the capacity to route via a single tunnel
    • all network traffic, even if configured on separate VGWs in the cloud and different customer gateways on-premises, converges onto a single tunnel
    • this means total bandwidth is capped at 1.25 Gbps across all connections configured over the secure tunnel, no matter how many customer gateway devices / VGWs are inter-connected;
    • even return traffic - responses to requests - uses a single tunnel; each AWS IPsec VPN tunnel supports only a single pair of one-way security associations
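The single-tunnel cap described above can be sketched numerically. A minimal illustration - the per-site flow demands are made-up example numbers; only the ~1.25 Gbps per-tunnel figure comes from the text:

```python
# Per-tunnel throughput limit for an AWS site-to-site VPN (from the text).
TUNNEL_CAP_GBPS = 1.25

def effective_throughput(flow_demands_gbps):
    """All flows converge onto the single active tunnel, so aggregate
    throughput is capped at the tunnel limit - regardless of how many
    customer gateway devices / VGWs are configured."""
    return min(sum(flow_demands_gbps), TUNNEL_CAP_GBPS)

# three sites each pushing 0.8 Gbps still share one 1.25 Gbps tunnel
print(effective_throughput([0.8, 0.8, 0.8]))  # -> 1.25
```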
-------------------------------------------------------------------------------------------

For scalable connectivity across VPCs, AWS provides a variety of options for service consumers to support their workloads:

  • Virtual private clouds (VPCs) to segment consumer workloads, defining logical network boundaries
  • Inter-connect on-premises & cloud networks at scale using Direct Connect & VPN;
  • Inter-connect VPCs within & across regions at scale using VPC peering & transit gateways;
  • Centralized egress endpoints to access the internet using NAT gateways, VPC endpoints, PrivateLink, etc.;
  • DNS management - network address resolution & routing services;

-------------------------------------------------------------------------------------------

Connectivity solutions to communicate between VPCs

VPC peering - the simplest way to connect 2 VPCs on-cloud; cost incurred relates to the amount of data transferred between the VPCs; it's a point-to-point (P2P) direct connection - transitive (forwarding / proxy) connections are not possible with this option;

The problem with VPC peering relates to scalability - at scale (100s-1000s of VPCs) it results in a complex mesh of P2P connections; max limit = 125 peering connections per VPC;
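The mesh-growth problem is easy to quantify. A quick sketch comparing a full mesh of peering connections with a hub & spoke design (the VPC counts are arbitrary examples):

```python
def full_mesh_connections(n_vpcs: int) -> int:
    # every VPC pair needs its own peering connection: n*(n-1)/2
    return n_vpcs * (n_vpcs - 1) // 2

def hub_and_spoke_attachments(n_vpcs: int) -> int:
    # a hub (e.g. a transit gateway) needs just one attachment per VPC
    return n_vpcs

for n in (10, 50, 126):
    print(f"{n} VPCs: mesh={full_mesh_connections(n)}, hub={hub_and_spoke_attachments(n)}")

# with the 125-peering-connections-per-VPC quota, a full mesh tops out
# at 126 VPCs (each VPC peering with the other 125)
```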

Inter-region VPC peering is an AWS offering that ensures communication across VPCs is secure; no additional network interfaces, network appliances or gateways are required; network traffic always remains within the AWS network - packet delivery is ensured, secure from DDoS / malware intrusion, with no data broadcast to an external network / internet gateway; it requires unique CIDR block ranges (inter-region VPC peering does not work with overlapping CIDR blocks);
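Since peering fails with overlapping CIDR blocks, it is worth validating ranges up front. A minimal sketch using Python's stdlib ipaddress module; the VPC names & CIDRs are made-up examples:

```python
import ipaddress

vpcs = {
    "vpc-app":    "10.0.0.0/16",
    "vpc-data":   "10.1.0.0/16",
    "vpc-shared": "10.0.128.0/20",  # overlaps vpc-app - peering would fail
}

def overlapping_pairs(cidrs):
    """Return every pair of VPCs whose CIDR blocks overlap."""
    names = list(cidrs)
    nets = {n: ipaddress.ip_network(cidrs[n]) for n in names}
    return [(a, b)
            for i, a in enumerate(names)
            for b in names[i + 1:]
            if nets[a].overlaps(nets[b])]

print(overlapping_pairs(vpcs))  # -> [('vpc-app', 'vpc-shared')]
```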

Transit VPC - hub & spoke design for inter-VPC connectivity; leverages BGP over IPsec; transitive routing is supported - using an overlay VPN network; transit VPCs can use layer-7 firewall / IPS / IDS for enhanced security, integrating with 3rd-party vendor software deployed on EC2 instances;
 
Useful for implementing complex routing rules such as network address translation; supports transitive routing; a good option to manage network traffic across multiple AWS & customer regions / data-centers - for scenarios where customers have data centers distributed across the globe and prefer connecting to their nearest AWS region(s) / AZs;
Note that overlapping IP ranges aren't a problem between VPCs here - the virtual appliances can apply network address translation while routing between VPCs & on-premises networks;

Transit VPCs have higher costs for running virtual appliances & limited throughput per VPC (up to 1.25 Gbps per VPN tunnel); additional configuration & management is an overhead + redundancy is customer-managed;
 
Transit gateways - used to connect on-premises networks & VPCs on cloud as a fully-managed service --> at scale; no virtual appliances need to be provisioned; the hub & spoke model simplifies management & reduces operational cost; transit gateways allow transitive peering between thousands of VPCs and on-premises data centers;
transit gateways are region-specific; a max of 3 transit gateways can be attached to 1 Direct Connect gateway for hybrid connectivity; transit gateways can be peered across multiple regions to inter-communicate; they work with Direct Connect & software VPN connections;
 
compared to transit VPCs, transit gateways are:
  1. easier to manage & maintain VPCs at scale (>100s); transit gateways are available as a fully-managed service
  2. abstracts managing VPN connections
  3. availability & reliability managed & ensured by  AWS
  4. better performing, with improved bandwidth for VPC communication - burst speeds up to 50 Gbps per AZ
  5. reduces latency - does not require EC2 proxies;
  6. supports multi-cast (not supported by other AWS network services)
compared to peering VPCs:
  1. transit gateways are neater, with hub & spoke architecture avoiding P2P single / direct connections between VPCs
  2. VPC peering connections are lower in cost comparatively - no additional cost other than data transfer costs; no connection charges;
  3. VPC peering - being direct connection - does not have bandwidth limits; reduced latency - no additional hops;
  4. security groups referencing work for intra-region VPC peering, not supported with transit gateways
AWS PrivateLink - a service offered by AWS for cross-VPC communication, where connections are initiated by the consumer(s) towards the provider's VPC (limiting & securing access); e.g. a tiered architecture where a web-tier app in a public VPC consumes one or more service(s) exposed by the application tier in a private VPC; must be used with an NLB & ENI
 
Use a network load balancer with a VPC endpoint service configured - in order to connect the service consumer & service provider VPCs; this configuration creates an elastic network interface (ENI) in the consumer subnet;
 
VPC endpoints are powered by PrivateLink, without requiring an internet gateway, NAT device, VPN connection or AWS Direct Connect connection; AWS PrivateLink attaches 2 or more VPCs and is capable of attaching 100s or 1000s of VPCs; instances in the VPC do not require public IP addresses to communicate with resources in the service; VPC endpoints enable you to privately connect your VPC to supported AWS services; to establish AWS PrivateLink, we need a network load balancer & an elastic network interface (ENI);
 
AWS PrivateLink suits use cases where the consumer initiates requests to the service provider VPC; it is also useful in place of VPC peering when consumer & provider VPCs have overlapping IP addresses; and whenever a VPC needs to be exposed NOT outside to the internet BUT to other VPCs within the Amazon network;

ClassicLink - connect EC2-classic instances privately to VPC

AWS VPC sharing - used when VPC access is controlled across accounts; the VPC owner can grant access to selected subnet(s) to participating account(s); once shared, participating account(s) have access to resources in those subnets;
 
AWS VPC flow logs - can be set up at VPC / subnet / ENI level, for ACCEPT & REJECT traffic; used for security monitoring & detecting intrusion attacks at the network layer; can be integrated with Athena / CloudWatch to analyze the flow logs;
-------------------------------------------------------------------------------------------

 Hybrid connectivity

Connections to on-premises data centers require one-to-one connectivity or edge consolidation; one-to-one connectivity involves setting up a Direct Connect or VPN connection (using a virtual private gateway, VGW) - difficult to scale & maintain as customers scale their VPCs; edge consolidation, however, interacts with multiple VPC endpoints and is hence scalable;

AWS VPN termination options are:

  1. use a transit gateway with IPsec termination for site-to-site VPN;
  2. terminate VPN on an EC2 instance, with a transit VPC for edge consolidation;
  3. terminate VPN on a virtual private gateway (VGW);
Transit gateway with IPsec termination is the preferred option of the above, given its hub & spoke model to consolidate traffic across VPCs, lower management overhead - availability & reliability managed by AWS, support for static & BGP-based dynamic connections, and ability to operate at scale;

Direct Connect - enables consistent, low-latency, high-throughput & dedicated fiber connectivity between on-premises & AWS cloud networks; options to enable Direct Connect connectivity with on-premises:
  1. private VIF to a VGW attached to a VPC - limited to 50 VIFs per Direct Connect connection; one BGP peering per VPC; restricted to the AWS region the Direct Connect location connects to;
  2. private VIF to a Direct Connect gateway attached to multiple VGWs (each VGW attached to a VPC) - can connect up to 10 VGWs globally; a great option for a smaller number of VPCs, combining them into a single, managed connection;
  3. private VIF to a Direct Connect gateway associated with a transit gateway - scalable option to connect on-premises data centers to transit gateways (in turn attached to 1000s of VPCs) --> across regions & accounts; limitations - up to 3 transit gateways can connect to 1 Direct Connect gateway, 20 CIDR ranges can be advertised from 1 transit gateway to the on-premises router;
Connections range from 1-10 Gbps each; link aggregation groups (LAG) can be used to aggregate multiple 1G / 10G connections at a single AWS Direct Connect endpoint;
Direct Connect gateway - extends a Direct Connect connection across VPCs in different regions;

Centralized NAT gateway - egress to the internet: a NAT gateway can be deployed in each VPC, which can turn out to be costly when the number of VPCs is large; using a transit gateway in between is a viable option to re-route & centralize traffic; route table configurations can be applied to route all VPC traffic into the transit gateway, which in turn front-ends the egress traffic;
For high availability, the use of 2 NAT gateways distributed across AZs is recommended; security & access control can be applied via network ACL rules;
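The cost trade-off can be sketched with back-of-the-envelope math. All rates below are assumed placeholders (not AWS pricing); the sketch only illustrates how per-VPC NAT costs grow linearly with VPC count while the centralized design grows by cheaper attachments:

```python
HOURS_PER_MONTH = 730
NAT_HOURLY = 0.045        # assumed per-NAT-gateway hourly rate (placeholder)
TGW_ATTACH_HOURLY = 0.05  # assumed per-TGW-attachment hourly rate (placeholder)

def per_vpc_nat_cost(n_vpcs, nat_per_vpc=2):
    # an HA pair of NAT gateways in every VPC
    return n_vpcs * nat_per_vpc * NAT_HOURLY * HOURS_PER_MONTH

def centralized_egress_cost(n_vpcs, nat_count=2):
    # one egress VPC with an HA NAT pair + a TGW attachment per spoke
    # VPC (plus one attachment for the egress VPC itself)
    hourly = nat_count * NAT_HOURLY + (n_vpcs + 1) * TGW_ATTACH_HOURLY
    return hourly * HOURS_PER_MONTH

for n in (5, 20, 100):
    print(n, round(per_vpc_nat_cost(n), 2), round(centralized_egress_cost(n), 2))
```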
 
Egress-only internet gateway - similar to a NAT gateway, for IPv6 traffic

AWS VPN CloudHub - securely communicate between sites by adopting a hub & spoke model with VPN CloudHub; use existing internet connections to connect on-premises & AWS networks, connecting multiple branch offices into the AWS cloud network; a low-cost hub & spoke option, simple to operate & manage;
  • VPN CloudHub leverages the VPC virtual private gateway with multiple customer gateways, each with a unique BGP autonomous system number (ASN);
  • gateways advertise routes with BGP prefixes over their respective VPN connections; CloudHub allows the connected networks to learn about each other;
  • routing advertisements are re-advertised to each BGP peer in the network - to be able to send & receive data across BGP peers;
  • pre-requisites are that the sites have unique IP address ranges (overlapping IP addresses won't work) & a unique ASN for each spoke
  • can be combined with VPN + Direct Connect or other VPN options - as per requirements;
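The uniqueness pre-requisites above lend themselves to a quick validation pass. A minimal sketch; the spoke names, ASNs & CIDRs are made-up examples:

```python
from collections import Counter

spokes = [
    {"name": "branch-east", "asn": 65001, "cidr": "172.16.0.0/16"},
    {"name": "branch-west", "asn": 65002, "cidr": "172.17.0.0/16"},
    {"name": "branch-eu",   "asn": 65001, "cidr": "172.18.0.0/16"},  # duplicate ASN
]

def duplicate_asns(spokes):
    """CloudHub requires a unique BGP ASN per spoke - flag any reuse."""
    counts = Counter(s["asn"] for s in spokes)
    return sorted(asn for asn, count in counts.items() if count > 1)

print(duplicate_asns(spokes))  # -> [65001]
```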

When using BGP for routing, AWS limits the number of routes per BGP session to 100; it sends a reset & tears down the BGP connection when the number of routes exceeds 100 per session - whether advertised from the customer router or an MPLS (multi-protocol label switching) provider;
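One common way to stay under the 100-route limit is summarizing contiguous prefixes before advertising them; Python's ipaddress.collapse_addresses illustrates the idea (the prefixes are made-up examples):

```python
import ipaddress

# 200 contiguous /24 routes - well over the 100-routes-per-session limit
routes = [ipaddress.ip_network(f"10.0.{i}.0/24") for i in range(200)]

# collapse contiguous prefixes into the fewest covering supernets
summarized = list(ipaddress.collapse_addresses(routes))

print(len(routes), "->", len(summarized))   # 200 -> 3
print([str(n) for n in summarized])         # 10.0.0.0/17, 10.0.128.0/18, 10.0.192.0/21
```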

DNS - by default, when 'enableDnsSupport' is set to true on the VPC, the Route 53 resolver answers DNS queries for VPC domain names / EC2 instances / elastic load balancers; DNS resolution between an AWS landing zone & on-premises resources (referred to as hybrid DNS) involves integrating DNS for VPCs in an AWS region with DNS in the on-premises network;
  • a Route 53 resolver inbound endpoint lets on-premises networks resolve DNS queries against the VPC
  • a Route 53 resolver outbound endpoint forwards DNS queries from the VPC to the on-premises network
 
VPC endpoints - 2 types: interface VPC endpoints & gateway VPC endpoints; endpoints are region-scoped, intra-region; gateway endpoints are supported for S3 & DynamoDB only; interface endpoints are supported for the majority of services on AWS;
  • VPC endpoints are not extendable across VPC boundaries
  • DNS resolution must be enabled within the VPC
  • by default, the VPC endpoint policy is unrestricted
  • endpoint policies work in combination with resource policies (e.g. S3 bucket / S3 object / RDS policies) - access must be allowed by both
Interface VPC endpoints - consist of one or more elastic network interfaces with private IP addresses, serving as an endpoint for inbound traffic to AWS services; ensure transmitted traffic remains within the AWS network backbone;
expensive when customers need to interact with specific AWS services across multiple VPCs - interface VPC endpoints can be centralized to avoid this cost; a transit gateway enables flow from spoke VPCs into the centralized VPC endpoints;
  -------------------------------------------------------------------------------------------
