
Why Cloud Adoption? What are the necessary steps to migrate onto the cloud

Cloud services offered by different cloud providers have grown exponentially in recent years. Cloud adoption works for start-ups, small & medium enterprises (SMEs) as well as large enterprises, and has increasingly been seen to benefit businesses across sectors - banking, financial services, insurance, health care, manufacturing, automotive, travel & leisure, social media, gaming, etc. But is it the norm for every business to migrate to the cloud?

Cloud adoption removes the overhead of managing & maintaining hardware and paying for regular software maintenance & upgrades, letting enterprise customers focus on their core business functionality. That is why cloud adoption has shown an exponentially increasing trend over the last decade or so, and will be embraced even more in the years to come. A key driver for cloud adoption is moving from a Cap-Ex to an Op-Ex model.
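The Cap-Ex versus Op-Ex trade-off above can be sketched as a simple break-even comparison. All figures below are illustrative assumptions, not real AWS or data-centre prices:

```python
# Hypothetical Cap-Ex vs Op-Ex comparison; every number here is an
# invented assumption for illustration only.

def capex_total(hardware_cost: float, annual_maintenance: float, years: int) -> float:
    """Up-front hardware purchase plus yearly maintenance & upgrades."""
    return hardware_cost + annual_maintenance * years

def opex_total(monthly_cloud_bill: float, years: int) -> float:
    """Pay-as-you-go cloud spend, with no up-front investment."""
    return monthly_cloud_bill * 12 * years

on_prem = capex_total(hardware_cost=100_000, annual_maintenance=15_000, years=3)
cloud = opex_total(monthly_cloud_bill=3_000, years=3)

print(f"3-year on-premise (Cap-Ex): ${on_prem:,.0f}")  # $145,000
print(f"3-year cloud (Op-Ex):       ${cloud:,.0f}")    # $108,000
```

The point is not the numbers themselves but the shape of the decision: Op-Ex spend scales with actual usage, while Cap-Ex is committed up front regardless of utilization.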

A few major players / providers in the industry are Amazon, Google, Microsoft, IBM, etc. Gartner reports distributed cloud as one of the key trends for 2021. Each business can compare costs & decide which option meets its needs best. Cloud has now evolved into a utility service, from the days of distributed, grid & cluster computing.

Cloud adoption reduces the cost of maintenance and improves flexibility; all customers benefit from new services made publicly available to experiment with, at rapid speed, on a pay-per-use basis. Depending on the business case:

  1. customers can choose to avail cloud infrastructure to set up their own custom platform & services, OR

  2. deploy only their services / core logic as functions, with the platform & infrastructure managed by the cloud provider, OR

  3. forget about managing infrastructure / applications / services altogether and use cloud software deployed as-is to serve their business
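Option 2 above can be made concrete with a minimal sketch: deploying only the core logic as a function. The handler signature matches AWS Lambda's Python runtime; the business logic (an order-total calculator) is a made-up example:

```python
# A minimal sketch of option 2: only the core logic is deployed as a
# function; the provider manages the platform & infrastructure.
# The (event, context) signature is AWS Lambda's Python convention;
# the order-total logic itself is an invented example.

def handler(event, context):
    # 'event' carries the request payload; the provider supplies 'context'.
    items = event.get("items", [])
    total = sum(i["price"] * i["qty"] for i in items)
    return {"statusCode": 200, "total": total}

# Locally the same function can be invoked with no infrastructure at all:
result = handler({"items": [{"price": 9.5, "qty": 2}]}, None)
print(result)  # {'statusCode': 200, 'total': 19.0}
```

This is what makes the function model attractive: the unit of deployment is just business logic, testable on a laptop and runnable unchanged on the provider's platform.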

Pricing options, cost-benefit models, planning to effectively manage cost on the cloud, etc. are detailed in my AWS Pricing blog & the 'Cost optimization pillar' section of my AWS Architecture Pillars blog.

Cloud adoption requires the changes to be understood and the business impact analyzed & communicated across the different stakeholders in the organization; it also needs business drive to implement transformation programs successfully, especially in large enterprises. Collect application portfolio data & rationalize each application into one of the 6R strategies: re-host (lift & shift), re-platform, re-factor / re-architect, re-purchase, retire (applications no longer needed) & retain (applications still required as-is).
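Portfolio rationalization into the 6R strategies can be sketched as a decision function. The rules below are deliberately simplified assumptions for illustration; a real assessment weighs many more attributes per application:

```python
# A rough sketch of 6R portfolio rationalization. The attribute names
# and decision order are invented simplifications, not a standard rule set.

def classify_6r(app: dict) -> str:
    if not app.get("still_needed", True):
        return "retire"
    if app.get("must_stay_on_premise"):       # e.g. compliance constraints
        return "retain"
    if app.get("saas_alternative"):           # buy instead of migrate
        return "re-purchase"
    if app.get("needs_redesign"):             # cloud-native rewrite
        return "re-factor / re-architect"
    if app.get("minor_changes_only"):         # e.g. swap to a managed DB
        return "re-platform"
    return "re-host"                          # lift & shift by default

portfolio = [
    {"name": "legacy-reports", "still_needed": False},
    {"name": "crm", "saas_alternative": True},
    {"name": "billing", "needs_redesign": True},
    {"name": "intranet", "minor_changes_only": True},
]
for app in portfolio:
    print(app["name"], "->", classify_6r(app))
```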

Enterprises with existing on-premise infrastructure & software services need to plan their migration strategy carefully. Migration complexity varies depending on architecture considerations and the flexibility of the existing application set; a phase-wise migration approach is recommended to build early confidence. Among the 6R strategies, re-factor / re-architect is known from experience to be the most complex, while also being the best approach to optimize enterprise applications.

Thousands of start-ups realize the benefits of cloud & its implications; cloud platforms provide free & easy access to software developers, students, etc., so teams can realize their ideas through experimentation at an affordable cost.

Cloud adoption is flexible: hundreds of services & options, managed by the cloud provider, are available to choose from; depending on the business use case, business consumers need to evaluate which cloud services apply.

  • Cloud migration gels well with an agile development approach; experimentation & evaluation of techniques, ideas, performance, etc. is possible sooner, at the early stages of migration

    • this helps decide the best option, based on actual data / results from experiments;

    • it helps uncover unknown areas at an early stage, reducing the risk of program failure later;

  • Large enterprises run a wide variety of applications, so customers can evaluate services from multiple cloud providers and pick the one best suited to each sub-division within the enterprise;

  • "Cloud security", or the security of data (at rest & in transit), needs to be planned & evaluated so businesses can ensure integrity, confidentiality & availability for their customers

    • using static code analysis tools to detect possible software vulnerabilities is a practice to inculcate within development teams

    • data in filesystems, block storage, databases & object storage should be encrypted, with restricted access

    • securing data in transit, across local & wide area networks, is equally important to consider

    • data backup & archival processes, and recovery in case of a failure, should be planned aptly to reduce the risk of data loss in the event of an attack; my blog on AWS security services covers the details for the Amazon cloud;
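One small, concrete practice behind the backup point above is integrity checking: record a checksum when data is archived and verify it on restore, so corruption or tampering is detected before the data is trusted again. A minimal sketch using the standard library:

```python
# A small sketch of one backup-integrity practice: keep a SHA-256
# digest alongside the archive and verify it on restore. The data
# and filenames here are invented placeholders.

import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

archived = b"customer-ledger-2021"
stored_digest = checksum(archived)           # kept alongside the backup

restored = archived                          # simulate a clean restore
print(checksum(restored) == stored_digest)   # True

tampered = b"customer-ledger-2O21"           # simulate corruption
print(checksum(tampered) == stored_digest)   # False
```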

  • Continuous integration & automation (DevOps) is also a vital point to consider for cloud adoption;

    • cloud providers promote workload automation: build, test, deploy & monitor automatically, with the least possible manual intervention;

    • this helps businesses create software services, test them quickly, deploy them into production and go to market at a quick pace;

    • cloud adopters say "automate everything", meaning reduce manual intervention in building your code, application, platform & infrastructure, thereby reducing the chance of errors; promote interoperability, portability, and being platform / cloud-provider agnostic;
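The "automate everything" idea above can be sketched as a toy pipeline: build, test and deploy stages run in order with no manual step, and the run halts on the first failure. Stage names and logic are invented placeholders:

```python
# A toy sketch of an automated build-test-deploy pipeline; each stage
# is a placeholder returning True on success.

def build():  return True        # e.g. compile & package artefacts
def test():   return True        # e.g. run the automated test suite
def deploy(): return True        # e.g. roll out to an environment

def run_pipeline(stages) -> bool:
    """Run stages in order; stop at the first failure."""
    for stage in stages:
        if not stage():
            print(f"pipeline failed at: {stage.__name__}")
            return False
        print(f"{stage.__name__}: ok")
    return True

print(run_pipeline([build, test, deploy]))  # True when every stage passes
```

Real pipelines delegate each stage to tooling (a CI service, test runners, deployment automation), but the control flow is the same: unattended, ordered, fail-fast.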

Similar to agile, cloud adoption may prove to be a change in mindset for large organizations, with different teams of varying skill levels & work cultures; it is therefore important to staff & train employees within the organization for an effective transition.

Prioritizing the relevant IT skills for recruitment & training is important for large organizations; such a drive helps organizations embrace change & technology innovation;

In summary, cloud adoption is easier for start-ups, which can start from scratch; for large enterprises it poses an organizational change, requiring careful evaluation of migration options considering the application portfolio, supporting business streams, migration planning, security, DevOps & automation, governance & project management, as well as skill development within the organization.

AWS Elastic Compute Cloud (EC2) instance types

AWS Elastic Compute Cloud (EC2) offers several instance types for consumers to deploy their applications on; EC2 instances can use temporary / ephemeral instance storage, or be associated with Elastic Block Store (EBS) / Elastic File System (EFS) services for persistent storage options.

  • P3 & P4 are accelerated computing instances; P4d instances form EC2 UltraClusters: multi-node training across 4,000+ NVIDIA A100 Tensor Core GPUs with petabit-scale networking; each EC2 UltraCluster is effectively a supercomputer, with low-latency FSx for Lustre storage;
    • P3 - used for performance optimization, streaming multiprocessing (SM), machine learning (ML) & deep learning (DL);
  • G3 - for graphics applications, 3D visualization, application streaming, video encoding, 3D rendering, etc.;
  • FPGA instances & the Hardware Development Kit (HDK) - used for hardware-level programming; the hardware developed can be used on the cloud or in on-premise environments; F1 instances provide the interfaces for FPGA programming (the F1.* EC2 instance types);
  • Inf1 instances with AWS Inferentia chips - accelerate ML inference; Elastic Inference attaches acceleration to an EC2 instance with the right mix of memory & CPU;
  • Compute optimized instances - C6g; EBS-optimized with up to 19,000 Mbps of dedicated EBS bandwidth, even with encryption enabled; C6g instances support ENA-based enhanced networking;
  • General purpose T4g - up to 40% better price-performance over T3; 750 free hours per month are provided during the trial; current general purpose instances are built on the AWS Nitro System;
  • AWS Graviton2 processor - powers general purpose (and other) instances; it uses always-on 256-bit memory encryption, with keys generated within the host system that never leave it and are destroyed with the host; KMS integration & bring-your-own keys are not supported for this memory encryption;
  • A1 instances - Arm-based instances suited to run scale-out apps on Java / Python / Node.js; EBS-backed, Nitro System based, elastic network adapter (ENA) supported; bandwidth up to 10 Gbps within placement groups is achievable;
  • most EC2 instances support attaching up to 20 EBS volumes by default;
  • High memory instances - EC2 offerings with high-end memory, on the order of 24 TB in a single instance; suited to running large in-memory applications / workloads such as SAP HANA; Nitro System based EC2 bare-metal instances;
    • R6g - the latest generation of "memory optimized" EC2 instances, powered by Arm-based Graviton2 processors;
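The families above can be summarized as a simplified workload-to-family picker. The mapping below compresses the AWS catalogue heavily and is illustrative only; real sizing needs benchmarking and cost analysis:

```python
# A simplified, illustrative mapping from workload type to EC2 instance
# family, condensing the families discussed above; not a sizing guide.

FAMILY_BY_WORKLOAD = {
    "general purpose":   "T4g / M-family",
    "compute optimized": "C6g",
    "memory optimized":  "R6g / High Memory",
    "ml training":       "P3 / P4d",
    "ml inference":      "Inf1",
    "graphics":          "G3",
    "custom hardware":   "F1 (FPGA)",
}

def suggest_family(workload: str) -> str:
    """Fall back to general purpose when the workload type is unknown."""
    return FAMILY_BY_WORKLOAD.get(workload.lower(), "T4g / M-family")

print(suggest_family("compute optimized"))   # C6g
print(suggest_family("something unusual"))   # T4g / M-family
```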

Balancing 5 pillars on AWS to build an architecture

The main aspects to consider in balancing the 5 pillars when architecting solutions on AWS are listed below:

  1. utilize resources based on usage
  2. automate systems to enable flexibility
  3. test at scale for accuracy, even the data backups, not only during development but continually after deployment
  4. think of the architecture as evolving, like technology, rather than static
  5. derive architecture decisions from data; implement a data-driven approach
Detailed blog - AWS Architecture Pillars

Other than the 5 pillars of the well-architected framework, AWS offers a set of tools & options for architects to make use of:

server-less design patterns, using AWS Lambda, DynamoDB, Kinesis streams, API Gateway, and combinations of these to build a server-less application

  • use event-driven models and micro-services
  • good for eventual consistency; an eventually consistent model can deliver high throughput with server-less
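The event-driven model behind these patterns can be sketched in-process: producers emit events, and independently subscribed handlers react to them. In a real deployment the broker role would be played by a managed service such as Kinesis or SNS; the order-event names below are invented:

```python
# A minimal in-process sketch of the event-driven micro-services model:
# a publish/subscribe dispatcher where handlers react independently.
# Event names and handler logic are invented for illustration.

from collections import defaultdict

subscribers = defaultdict(list)
audit_log = []

def subscribe(event_type, handler):
    subscribers[event_type].append(handler)

def publish(event_type, payload):
    # each subscribed "service" handles the event on its own
    for handler in subscribers[event_type]:
        handler(payload)

# Two independent "micro-services" reacting to the same event:
subscribe("order.created", lambda p: audit_log.append(("bill", p["id"])))
subscribe("order.created", lambda p: audit_log.append(("ship", p["id"])))

publish("order.created", {"id": 42})
print(audit_log)  # [('bill', 42), ('ship', 42)]
```

Because neither handler knows about the other, each can be deployed, scaled and changed independently, which is exactly the decoupling that makes eventual consistency acceptable.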

High performance computing

Options to modernize traditional approaches to handling highly complex math logic: very big calculations requiring high compute capacity, the realm of super-computing.

  • take a distributed approach rather than a single big server
  • spawn new servers on demand for compute
  • use high-throughput computing models, such as server-less batch, queue-based, sequential or similar, and optimize them for high-performance computation

Originally, specific super-computers were built for specific jobs, made for a single purpose, which is no longer the case; hardware constraints no longer hold back the experiment cycle.
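The distributed, on-demand idea above can be sketched with a worker pool: a batch of independent tasks fans out across workers instead of queuing on one big server. Here a thread pool stands in for servers spawned on demand, and the workload is an invented placeholder:

```python
# A sketch of fanning independent tasks out to a pool of workers;
# the thread pool stands in for on-demand cloud servers, and
# heavy_calc is a placeholder for a real computation.

from concurrent.futures import ThreadPoolExecutor

def heavy_calc(n: int) -> int:
    # stand-in for one independent, compute-heavy task
    return sum(i * i for i in range(n))

batch = [10_000, 20_000, 30_000, 40_000]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(heavy_calc, batch))
print(len(results))  # 4, one result per task
```

On the cloud the same shape scales out horizontally: add workers (servers) when the batch grows, release them when it drains.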
 
Detailed blog -  AWS HPC Workloads

High throughput versus High performance computing

  • HTC is loosely coupled; highly iterative, learn by cycle; throughput matters most, since the computation need not be strictly sequential
  • HPC is tightly coupled; strictly sequential & math dependent; the accuracy of each calculation is critical, since each step is derived from the previous one
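The contrast can be shown in miniature: an HPC-style loop is strictly sequential because each step derives from the last, while an HTC-style batch is made of independent iterations that could run anywhere, in any order. Both functions below are toy stand-ins:

```python
# A toy contrast between the two models; the formulas are invented
# placeholders, chosen only to show the dependency structure.

def hpc_style(x0: float, steps: int) -> float:
    """Tightly coupled: each step depends on the previous result."""
    x = x0
    for _ in range(steps):
        x = x * 0.5 + 1.0      # cannot be parallelized across steps
    return x

def htc_style(inputs):
    """Loosely coupled: every task stands alone; order is irrelevant."""
    return [x * x for x in inputs]

print(hpc_style(0.0, 4))     # 1.875
print(htc_style([1, 2, 3]))  # [1, 4, 9]
```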

Make hard choices

  • compute services: AWS EC2, AWS ECS (Elastic Container Service), AWS Fargate, AWS Lambda
  • databases: run your own on EC2, or use AWS RDS, Amazon Aurora, Amazon DynamoDB, or query semi-old-school indexed data with Amazon Athena
  • choices are based on the business context, constraints, the problem at hand, the type of data being dealt with, timelines and similar measures
