AWS
Cloud Computing
Cloud Computing Models
Infrastructure as a Service (IaaS)
If we need to launch a Linux server and manage it ourselves, we would do that under the IaaS model.
Typically the cloud provider manages the underlying hardware but does not have access inside the server itself.
Ex.: VPC, EC2, EBS.
Cloud Computing Deployment Models
Public Cloud
Fully deployed in the cloud and all parts of the application run in the cloud.
Ex.: AWS, Azure, GCP.
Hybrid
A way to connect infrastructure and applications between cloud-based resources and existing resources that are not located in the cloud.
A mix of public and private.
Run AWS infrastructure and services on premises with AWS Outposts.
Private Cloud (On-Premise)
Deploying resources on-premise, using virtualization and resource management tools, is sometimes called "Private Cloud".
You manage it in your datacenter.
Serverless Computing
Allows you to build and run applications and services without thinking about servers.
Also referred to as Function as a Service (FaaS) or Abstracted services.
Ex.:
Amazon Simple Storage Service (S3): to store files.
AWS Lambda: to run code in the cloud.
Amazon DynamoDB: NoSQL databases.
Amazon Simple Notification Service (SNS): to send notification messages to your users.
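As a sketch of the FaaS idea: a Lambda-style function is just a handler that receives an event and returns a response. A minimal Python example you can invoke locally (the event shape and names here are hypothetical — in AWS, the invoking service defines the real event structure):

```python
import json

def handler(event, context):
    """Minimal Lambda-style handler: receives an event dict, returns a response.

    The {"name": ...} event shape is made up for illustration; real events
    come from the invoking service (API Gateway, SNS, etc.).
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation; in AWS, Lambda calls handler(event, context) for you.
print(handler({"name": "Ana"}, None))
```

The point is that you only write the function body; provisioning, scaling, and invocation are the provider's job.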
Business Case for AWS
Why use AWS or Cloud Computing?
6 Advantages of Cloud Computing
Cloud Architecture Design Principles
Design for Failure (Multi-AZ, Multi-Region)
Elasticity (Autoscaling)
Should expand and contract based on demand.
Loose Coupling
Services should be independent.
They should scale independently.
They should communicate through an Event Bus.
Basically, follow a Microservice Architecture.
AWS Well-Architected Framework Design Principles
Stop guessing your capacity needs.
Test systems at production scale.
Automate to make architectural experimentation easier.
Allow for evolutionary architectures.
Always keep improving the architecture.
Drive architectures using data.
Analyze the data your architecture produces in order to improve it.
Improve through game days.
Conduct simulations on your environment and try to forecast what is going to happen.
AWS Well-Architected Framework 6 Pillars
Operational Excellence: Focuses on running and monitoring systems, and continually improving processes and procedures. (Automating changes, responding to events)
Security: Focuses on protecting information and systems. (Confidentiality and integrity of data, managing user permissions)
Reliability: Focuses on workloads performing their intended functions and how to recover quickly from failure. (Distributed system design, recovery planning)
Performance Efficiency: Focuses on structured and streamlined allocation of IT and computing resources. (Selecting resource types and sizes, monitoring performance)
Cost Optimization: Focuses on avoiding unnecessary costs. (Selecting resources of the right type and quantity, scaling without overspending)
Sustainability: Focuses on minimizing the environmental impacts of running cloud workloads. (Shared responsibility model for sustainability, minimizing required resources)
Benefits of AWS Security
Keep your data safe: The AWS infrastructure puts strong safeguards in place to help protect your privacy.
Meet compliance requirements: AWS manages dozens of compliance programs in its infrastructure.
Save money: Cut costs by using AWS data centers. Maintain the highest standard of security without having to manage your own facility.
Scale quickly: Security scales with your AWS Cloud usage. No matter the size of your business, the AWS infrastructure is designed to keep your data safe.
Why change to AWS?
Cost Savings:
Changing from an upfront capital investment to a pay-as-you-go pricing model.
Free up budget for investment elsewhere.
Staff Productivity:
Staff no longer have to worry about managing physical servers.
Teams can work on higher value activities.
Operational Resilience:
Increased reliability, availability and security.
Business Agility:
Increased innovation and reduced time to market.
Migration Best Practices
Get stakeholders and senior leaders aligned.
Set top-down quantifiable goals - focused, not organic.
Trust the process - Assess -> Mobilize -> Migrate & Modernize.
Choose the right migration pattern:
Refactor: completely redesign your architecture and all of the underlying infrastructure.
Replatform: for instance, going from Windows Server to Linux.
Repurchase: move to a different product, typically SaaS.
Rehost (lift and shift): simply move the workload to another location.
Relocate (VMware, Hyper-V): relocate your virtual infrastructure.
Retain: keep the application where it is for now.
Retire: retire the old system.
AWS is compliant with:
SOC 1/SSAE 16/ISAE 3402
SOC 2
SOC 3
FISMA, DIACAP, and FedRAMP
DoD CSM Levels 1-5
PCI DSS Level 1
ISO 9001/ISO 27001
ITAR
FIPS 140-2
MTCS Level 3
Overview on AWS
AWS Global Infrastructure
Datacenters distributed globally.
On-demand delivery of IT resources.
Shared and Dedicated resources.
At the hypervisor level, accounts are isolated but share these resources, with the option of dedicated resources.
Datacenters are divided into Geographic Regions.
Geographic Regions
Physical locations around the world where AWS clusters data centers.
A Region consists of a group of logical data centers called Availability Zones (AZs).
Each Region consists of a minimum of 3 isolated and physically separate AZs within a geographic area.
You can choose the region of your server to:
Optimize latency.
Minimize costs.
Address regulatory requirements.
Availability Zones (AZ)
Each Region has multiple physically isolated Availability Zones.
Minimum of 3 AZs per Region.
They are separated by a meaningful distance, many kilometers, though all are within 100 km of each other.
This provides business continuity.
This provides business continuity.
Each Availability Zone has:
Independent power and cooling.
Physical security.
Each AZ is connected via redundant, ultra-low-latency networks.
AZs are also interconnected with dedicated metro fiber.
All traffic between them is encrypted.
Network performance is sufficient for synchronous replication between AZs.
Local Zones
Each Local Zone location is an extension of a Region.
They are used to place compute, storage, database, and other select AWS services closer to end users.
To run latency-sensitive applications.
They also offer multiple Availability Zone capability for high availability.
Connected to the parent Region's servers through fiber-optic cables.
Edge Locations
There are over 100 Edge Locations.
They are endpoints used by CloudFront (AWS's CDN) to cache content for high-performance delivery.
Edge Locations also provide DDoS protection.
There are more Edge Locations than Regions.
Other AWS Infrastructures
AWS Wavelength
Allows parts of your application to use AWS compute and storage services that are embedded within communications service providers' (CSP) datacenters at the edge of the 5G network.
This enables developers to build applications that deliver ultra-low latency to mobile devices, because those applications run at the service provider's datacenter without having to traverse multiple hops across the internet to reach their destination.
This is great for connected vehicles that need low latency for analysis of critical environmental data.
AWS Ground Station
Is a fully managed service that enables customers to easily command, control and downlink data from satellites.
Control satellite communications.
Process the data that is received.
And scale your operations without having to worry about building or managing your own satellite ground station infrastructure.
Located within the AWS Global Infrastructure footprint.
Contacts (reservations) are scheduled for an antenna to communicate with a satellite.
Contacts are scheduled for when the satellite is in proximity to the specific antenna.
This can save up to 80% on the cost of your ground station operations, since you pay only for the actual antenna time used while relying on AWS's global footprint of antennas.
Project Kuiper
Kuiper Systems, a subsidiary of Amazon, will be launching a proposed constellation of low-orbit satellites to deliver high-speed internet via high-performance customer terminal antennas.
It will extend low-latency access to the AWS cloud to all areas of the world.
AWS Billing and Pricing
Price Models
Pricing Fundamentals
There are 3 fundamental cost factors in AWS:
Computing (More processing power = More expensive)
Storage (More storage = lower per-GB rates)
Bandwidth
Price Policies
Pay according to use.
Pay less when reserving (e.g., reserving for 1 or 3 years).
Pay even less per unit when using more (e.g., storage rates go down the more GB you use).
Pay even less as AWS grows.
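The "pay even less per unit when using more" idea can be sketched as a small tiered-rate calculator. The tier boundaries and prices below are made up for illustration, not actual AWS rates:

```python
def tiered_storage_cost(gb, tiers):
    """Compute a monthly bill under volume-tiered per-GB rates.

    `tiers` is a list of (tier_limit_gb, price_per_gb) pairs, where each
    limit is cumulative; usage beyond the last limit uses the last rate.
    """
    cost, billed = 0.0, 0.0
    for limit, price in tiers:
        portion = min(gb, limit) - billed   # GB falling inside this tier
        if portion <= 0:
            break
        cost += portion * price
        billed += portion
    return cost

# Hypothetical rates: first 50 TB at $0.023/GB, everything above at $0.022/GB.
tiers = [(50_000, 0.023), (float("inf"), 0.022)]
print(round(tiered_storage_cost(60_000, tiers), 2))
```

With these made-up rates, 60,000 GB bills 50,000 GB at the first rate and the remaining 10,000 GB at the cheaper one.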
Price models vary depending on the service so:
Always optimize costs.
IT must no longer be treated as a periodic capital investment, but as efficient use of resources.
Maximize the power of flexibility.
AWS has independent and transparent costs, so you can choose what you need and avoid waste.
AWS does not require long-term contracts or minimum commitments, except when reserving.
You only pay while services are running, and for the time that they ran.
Use the right price model for the work to be done.
AWS offers several price models like:
On Demand.
Dedicated Instances.
Spot Instances. (Supply and Demand)
Reserved.
EC2 Pricing
Pricing models for EC2.
Lambda Pricing
By requests:
Free Tier: 1 million requests per month.
$0.20 per 1 million requests.
By duration:
400,000 GB-seconds per month free, up to 3.2 million seconds of execution time.
$0.00001667 for each GB-second used.
Additional costs may be charged if your Lambda functions use other services or transfer data.
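Putting the request and duration charges above together, a rough monthly estimate could look like the sketch below (it uses the rates and free-tier figures listed in these notes as defaults — always check current AWS pricing):

```python
def lambda_monthly_cost(requests, gb_seconds,
                        free_requests=1_000_000,
                        free_gb_seconds=400_000,
                        price_per_million_requests=0.20,
                        price_per_gb_second=0.00001667):
    """Estimate a monthly Lambda bill: request charge plus duration charge,
    each after subtracting the free tier. Rates are the ones noted above."""
    billable_requests = max(0, requests - free_requests)
    billable_gb_seconds = max(0, gb_seconds - free_gb_seconds)
    return (billable_requests / 1_000_000 * price_per_million_requests
            + billable_gb_seconds * price_per_gb_second)

# Example: 3 million requests and 1 million GB-seconds in a month.
print(round(lambda_monthly_cost(3_000_000, 1_000_000), 3))
```

Note the two dimensions are independent: a month entirely inside both free tiers costs nothing.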
EBS Pricing
Volume (per GB).
Snapshots (per GB).
Data transfer.
S3 Pricing
Storage.
Requests.
Storage class.
Data transfer.
Transfer Acceleration.
Cross-Region Replication.
RDS Pricing
Server uptime.
Database characteristics.
Database engine type.
Number of instances.
Provisioned storage.
Additional storage.
Requests.
Data transfer.
CloudFront Pricing
Traffic distribution.
Requests.
Data transfer out.
Free Services that AWS provides
Amazon VPC.
Elastic Beanstalk.
CloudFormation.
IAM.
Auto Scaling.
OpsWorks.
Consolidated Billing.
These services may be free, BUT some of them use or create other services that can be billed.
For instance, Elastic Beanstalk may create EC2 instances that will be billed.
AWS Support Plans
These cards show the features that are added or changed with each support plan.
Identity and Access Management (IAM)
What is IAM
A web service that allows you to securely control access to your AWS resources.
Manage permissions that control which resources Users can access.
Features
IAM is eventually consistent.
Meaning it achieves high availability by replicating data across multiple servers around the world.
However, these changes may take some time to propagate.
AWS recommends that IAM changes not be included in critical, high-availability code paths of your application, and that you verify the changes have propagated before production workflows depend on them.
Shared access to your AWS account.
Granular permissions.
Secure access to AWS resources for applications that run on Amazon EC2.
Identity Federation to grant permissions to users outside of AWS.
Access log auditing using CloudTrail.
Payment Card Industry (PCI) Data Security Standard (DSS) compliance.
IAM is integrated with many AWS services.
Free to use.
How it Works
Initial Steps
When you create an AWS account, you begin with one sign-in identity that has complete access to all AWS services and resources.
This identity is called the AWS account root user.
You can access it with the email and password provided when the account was created.
AWS strongly recommends that you don't use the root user for everyday tasks. (Use it only to perform tasks that only the root user can perform.)
Use IAM to set up IAM Users in addition to the root user, and grant them access to the resources they need for their tasks.
These new users will use their own sign-in credentials to authenticate.
Cycle
A human or application uses their sign-in credentials to authenticate with AWS.
Authentication is provided by matching the sign-in credentials to a principal (IAM User, Federated User, IAM Role, or application).
Next, a request is made to grant the principal access to resources.
E.g.: When you first sign in and are on the console Home page, you aren't accessing a specific service.
When you select a service, the request for authorization is sent to that service to see:
if you are in the list of authorized users,
what policies are being enforced to control the level of access granted,
and any other policies that might be in effect.
IAM Common Terms
Principals
A person or application that uses the AWS account root user, an IAM User, or an IAM Role to sign in and make requests to AWS.
Can be granted either permanent or temporary credentials.
Typically IAM Users and the root user are granted permanent credentials, while IAM Roles are granted temporary credentials.
Principals include Federated Users and assumed roles.
IAM Entity
Are IAM Resources that AWS uses for authentication.
Entities can be specified as a Principal in a resource-based policy.
Includes:
IAM Users
IAM Roles
IAM Identity
Are IAM Resources that can be authorized in policies to perform actions and to access resources.
Includes:
IAM Users
IAM Groups
IAM Roles
IAM Resource
Are stored in IAM, you can add, edit, and remove them from IAM.
The number and size of IAM Resources in AWS are limited.
IAM User
Represents the person or application accessing your account to interact with AWS.
Consist of a name and credentials.
Are identified by:
A friendly name;
An Amazon Resource Name (ARN), e.g. arn:aws:iam::account-ID-without-hyphens:user/Bill;
A unique identifier, which is returned only when you use the API, an SDK, Tools for Windows PowerShell, or the AWS CLI to create the user.
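ARNs like the one above are colon-delimited, so they can be split into named components. A small sketch (the helper name is my own, not an AWS API):

```python
def parse_arn(arn):
    """Split an ARN of the form
    arn:partition:service:region:account-id:resource
    into a dict of its components. IAM ARNs have an empty region field."""
    parts = arn.split(":", 5)  # resource part may itself contain colons
    keys = ("prefix", "partition", "service", "region", "account_id", "resource")
    return dict(zip(keys, parts))

parsed = parse_arn("arn:aws:iam::123456789012:user/Bill")
print(parsed["service"], parsed["account_id"], parsed["resource"])
```

Note the double colon in the example ARN: IAM is a global service, so the region field is empty.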
Credentials that can be associated with a user:
Console password.
Access keys (an access key ID and a secret access key).
SSH keys (for use with CodeCommit).
A user is associated with one and only one AWS account.
IAM Group
Are collections of IAM users.
Groups let you specify permissions for multiple users.
Users assume the permissions of the group.
Users can belong to multiple groups.
Groups cannot be nested, can only contain Users.
IAM Role
An IAM Identity that you can create that has specific permissions.
Roles are intended to be assumed by one or more Users or applications.
A Role does not have standard long-term credentials.
When you assume a Role, it provides you with temporary security credentials for your Role session.
You can use Roles to delegate access to Users, applications, or services that don't normally have access to your AWS resources.
E.g.: allow EC2 instances to access other AWS resources.
Identity federation using:
AWS Cognito.
OAuth.
Enterprise Single Sign On with LDAP or Active Directory.
Identity Providers and Federation
If you already manage users outside of AWS, you can use Identity Providers instead of creating IAM Users.
They can use OpenID Connect (OIDC) or SAML 2.0.
An IAM Role can be used to specify permissions for externally identified (federated) users.
Max 5,000 IAM Users per account; Identity Federation enables unlimited temporary credentials.
Users can be identified by your organization or a third-party identity provider, like:
OAuth.
Enterprise Single Sign-On with LDAP or Active Directory.
Access Management
You manage access in AWS by creating Policies and attaching them to IAM Identities or AWS Resources.
AWS checks each Policy that applies to the context of the request.
If a single Policy denies the request, AWS denies the entire request and stops evaluating policies. (Explicit Deny)
By default, all requests are implicitly DENIED.
Policy
Are JSON documents in AWS that, when attached to an Identity or Resource, define their Permissions.
A Policy may have one or more Permissions.
AWS evaluates these Policies when an IAM Principal makes a request.
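As a concrete example of such a JSON document, here is a minimal identity-based policy built in Python, allowing read-only access to one S3 bucket. The bucket name and Sid are hypothetical placeholders; the Version/Statement/Effect/Action/Resource structure is the standard policy grammar:

```python
import json

# Minimal identity-based policy: read-only access to one (hypothetical) bucket.
policy = {
    "Version": "2012-10-17",          # policy language version, not a date you pick
    "Statement": [
        {
            "Sid": "AllowBucketRead",  # optional statement identifier
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",     # the bucket itself (ListBucket)
                "arn:aws:s3:::example-bucket/*",   # objects in it (GetObject)
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Note the two Resource entries: bucket-level actions and object-level actions target different ARNs, a common source of confusion.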
Permission
What determines whether a request is allowed or denied in Policies.
AWS Organizations
Allows multiple AWS accounts used by an organization to be grouped into an Organizational Unit (OU).
Service Control Policies (SCPs) allow the whitelisting or blacklisting of services within an Organizational Unit.
A blacklisted service will not be available even if the IAM user or group policy allows it.
Benefits
Centrally manage policies across multiple AWS accounts.
Control access to AWS services.
Automate AWS account creation and management programmatically with APIs.
Consolidate billing across multiple AWS accounts.
IAM Best Practices
Grant Least Privilege access.
Give people access only to the minimum that they need.
Lock away your AWS account root user access keys.
Require human users and workloads to use temporary credentials.
Rotate credentials regularly.
Require Multi-Factor Authentication (MFA).
Regularly review and remove unused users, roles, permissions, policies and credentials.
Monitor activity in your AWS account. (e.g. with CloudTrail)
Use conditions in IAM Policies to further restrict access.
Use IAM Access Analyzer to validate your IAM Policies and ensure functional permissions.
Delegate by using Roles instead of by sharing credentials.
Responsibility Model
This responsibility model can change based on which service you are using. (Some services require more responsibility from AWS than from the customer, and vice versa.)
Customers are responsible for security IN the cloud.
AWS is responsible for security OF the cloud.
Infrastructure Services
Like: EC2, Amazon EBS, Auto Scaling and Amazon VPC.
Container Services
Like: Amazon RDS, Amazon EMR, AWS Elastic Beanstalk.
Managed Services
Like: Amazon S3, Amazon Glacier, DynamoDB, AWS Lambda, Amazon SQS, Amazon SES.
Compute Services
Elastic Compute Cloud (EC2)
Provides virtual servers in the AWS cloud.
You can launch one or thousands of instances simultaneously and only pay for what you use.
EC2 Autoscaling
Allows you to dynamically scale your EC2 capacity up or down automatically according to defined conditions.
It can also perform health checks on those instances and replace them when they become unhealthy.
Types of Autoscaling
Instance Types
General Purpose (T2, M3, M4): small and mid-size databases, data processing tasks that require additional memory, caching fleets, etc.
Compute Optimized (C3, C4): high-performance front-end fleets, web servers, batch processing, distributed analytics, high-performance science and engineering applications, etc.
Memory Optimized (X1, R3, R4): high-performance databases, distributed memory caches, in-memory analytics, genome assembly and analysis, large deployments of SAP, etc.
GPU/Accelerated Computing (G3, G2): 3D application streaming, machine learning, video encoding, and other server-side graphics or GPU compute workloads.
Storage Optimized:
(I3, I2): NoSQL databases like Cassandra and MongoDB, scale-out transactional databases, data warehousing, Hadoop, and cluster file systems.
(D2): Massively Parallel Processing (MPP) data warehousing, MapReduce and Hadoop distributed computing, log or data processing applications.
Instance States
start
stop (EBS-backed only): instance is shut down with no instance charges; you are still charged for EBS volumes.
Stop-Hibernate (EBS-backed only): suspend-to-disk; saves RAM to EBS.
Reboot: operating system reboot.
Terminate
Storage Options
Lightsail
Is the easiest way to launch virtual servers running applications.
AWS will provision everything you need, including DNS management and storage, to get up and running as quickly as possible.
Elastic Container Service (ECS)
Highly scalable high-performance container management service (orchestration) for Docker containers.
Provision AWS EC2 resources for running containers.
Configure the compute resources.
Deploy containerized applications to resources.
Scale using Auto Scaling.
Monitor resources with CloudWatch.
Applications can also be deployed to a serverless environment using AWS Fargate, instead of using EC2 instances.
Elastic Container Registry (ECR)
Application containers can be stored, shared, and deployed from Amazon Elastic Container Registry (ECR).
ECR is basically a repository for Docker containers.
It allows applications on those Docker containers to be stored in ECR.
These can then be shared or deployed from ECR.
This service is fully managed by Amazon.
You can use IAM to manage access security for these images.
Elastic Kubernetes Service (EKS)
Fully managed container service to run Kubernetes applications.
Kubernetes is an open-source platform for orchestrating containers.
It automates the deployment, scaling, and management of containerized applications, such as those built using Docker, as a cluster.
Clusters are built using the EKS Distro open-source Kubernetes distribution.
Lambda
Is a serverless service that lets you run code in the AWS cloud without having to worry about provisioning or managing servers.
You just upload your code, and AWS will take care of everything for you.
Storage Services
Simple Storage Service (S3)
Designed to store and access any type of data over the internet.
It's a serverless service, and as such, we don't need to worry about what is behind it.
You simply create a bucket and upload things to it.
The size of this bucket is theoretically unlimited.
You can store virtually any amount of data, all the way to exabytes with unmatched performance.
S3 is fully elastic, automatically growing and shrinking as you add and remove data.
You only pay for what you use.
Provides the most durable storage in the cloud and industry leading availability, backed by the strongest SLAs in the cloud.
Designed to provide 99.999999999% data durability.
And 99.99% availability.
S3 is secure, private and encrypted by default.
It supports numerous auditing capabilities to monitor access requests to S3 resources.
If you use S3 to host front-end websites, you must make the bucket publicly accessible.
S3 Storage Classes
Are a range of storage classes that you can choose from based on the data access, resiliency and cost requirements of your workloads.
S3 Intelligent-Tiering: for automatic cost savings on data with unknown or changing access patterns.
S3 Standard: for frequently accessed data.
S3 Express One Zone: for your most frequently accessed data.
S3 Standard-Infrequent Access: for less frequently accessed data.
S3 Glacier Instant Retrieval: for archive data that needs immediate access. (milliseconds retrieval time)
S3 Glacier Flexible Retrieval: (formerly S3 Glacier) for rarely accessed long-term data that does not require immediate access. (3-5 hours retrieval time)
S3 Glacier Deep Archive: for long-term archiving and digital preservation at the lowest-cost storage in the cloud. (retrieval within 12 hours)
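A toy heuristic for choosing between a few of these classes by access frequency. The thresholds and decision rule are illustrative only, not official AWS guidance:

```python
def suggest_storage_class(accesses_per_month, needs_instant_access=True):
    """Illustrative storage-class chooser.

    Frequently touched data stays in S3 Standard; cold data goes to a
    Glacier class depending on whether retrieval must be immediate.
    The 1-access-per-month threshold is an arbitrary example value.
    """
    if accesses_per_month >= 1:
        return "S3 Standard"
    if needs_instant_access:
        return "S3 Glacier Instant Retrieval"
    return "S3 Glacier Deep Archive"

print(suggest_storage_class(10))
print(suggest_storage_class(0, needs_instant_access=False))
```

In practice, S3 Intelligent-Tiering exists precisely so you don't have to hand-roll this decision when access patterns are unknown.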
Glacier
The S3 Glacier storage classes are purpose-built for data archiving.
All 3 storage classes provide virtually unlimited scalability and are designed for 99.999999999% data durability.
The storage classes range from options that deliver fast access to your archives down to the lowest-cost archive storage.
Bucket
S3 stores data as objects within Buckets.
An object is a file plus any metadata that describes the file.
Each object has a key, which is the unique identifier for the object within the Bucket.
The total volume of data and number of objects are unlimited.
Objects range from 0 bytes to a maximum of 5 TB.
The largest single object upload is 5 GB.
Bucket names must be globally unique (S3 has a universal namespace).
Security
S3 is secure by default and can only be accessed with explicitly granted permissions, which can be managed through:
IAM Policies on roles, users and groups.
Bucket Policies applied at the bucket level.
Access Control Lists (ACLs) applied at the bucket and/or object level.
Life Cycle Management
Object Deletion after expiry time.
Object Archiving to Glacier after expiry time.
Can be restored from Glacier back to S3.
Versioning
Preserves copies of objects inside a bucket (Like git).
Individual objects can be restored to previous versions.
Deleted objects can be recovered.
Cross Region Replication
CloudFront can distribute content for you, but for other reasons you may replicate the Buckets themselves.
Reduced latency for end users.
Both source and destination buckets need versioning enabled if using versioning.
ACL details will be migrated from the source to the destination Bucket.
If S3-managed encryption is in place on the source bucket, it will also be replicated to the destination bucket.
You need to copy existing objects to the new region yourself.
Replication always takes place between a pair of AWS Regions.
Destination buckets can themselves be source buckets for another cross-region replication.
S3 Transfer Acceleration
You can use this service to accelerate internet transfers of data to and from S3, and also transfers between AWS Regions.
The cost is:
Internet -> S3 (US, Europe and Japan): $0.04 per GB.
Internet -> S3 (all other): $0.08 per GB.
S3 -> Internet: $0.04 per GB.
Between AWS Regions: $0.04 per GB.
It is only advantageous for transfers up to 1 TB of data.
For more than that, consider using a Snowball Edge device.
Amazon Macie
Uses machine learning and pattern matching to discover, classify and protect confidential data stored in S3.
Sensitive data such as personally identifiable information (PII): personal data used to establish someone's identity.
Provides access controls and encryption.
Macie uses AI to recognize whether your S3 objects contain these PII types of data.
Jobs are per-Region.
No charge for analyzing up to 1 GB of data each month.
Elastic Block Store (EBS)
Is a scalable, high-performance block-storage service for use with Amazon EC2 instances. (It is specifically for attaching to servers launched with Amazon EC2.)
With EBS, you can scale your usage up or down within minutes.
Has high durability (99.999%), including replication within AZs.
Each EBS volume is automatically replicated within its AZ to protect from component failure.
You can encrypt your block storage without needing to build, maintain and secure your own key-management infrastructure.
Protect block storage with EBS Snapshots, which can be used to enable disaster recovery, migrate data across Regions, and improve backup compliance.
AWS further simplifies the lifecycle management of your snapshots by integrating with Amazon Data Lifecycle Manager.
It creates Policies to automate tasks like creation, deletion, retention and sharing of snapshots.
5 different volume types:
General Purpose (SSD) (gp3 and gp2).
Provisioned IOPS (SSD) (io2 Block Express and io1).
Throughput Optimized HDD (st1).
Cold HDD (sc1).
Magnetic.
Elastic File System (EFS)
Serverless, fully elastic file storage for EC2.
Meaning it automatically grows and shrinks as you add and remove files, with no need for management or provisioning.
Simple, scalable storage for Linux-based workloads, for use with cloud services and on-premises resources.
Built to scale on demand without disrupting applications.
Designed to provide massively parallel shared access to thousands of EC2 instances.
This makes it good shareable storage across different services.
Amazon EFS is a regional service, storing data within and across multiple Availability Zones (AZs) for extremely high durability (99.999999999%) and high availability (99.99%).
Supports the NFSv4 (Network File System v4) protocol.
Storage Gateway
Is a hybrid storage service that allows your on-premises applications to seamlessly use AWS Cloud storage.
You can use the service for:
Backup and archiving,
Disaster recovery,
Cloud data processing,
Storage tiering,
and migration.
The gateway connects to AWS storage services like S3, Glacier, EBS, and others, providing storage for files, volumes, and virtual tapes.
It delivers low-latency data access while leveraging the agility, economics and security of the AWS Cloud.
Snowball Edge Device
Is a portable, petabyte-scale data storage device that can be used to migrate large amounts of data from on-premises environments to the AWS Cloud.
Like a pen drive: AWS ships it to you so that you don't have to migrate all the data over the internet.
Database Services
Relational Database Service (RDS)
Is a fully-managed database service that makes it easy to launch database servers in the AWS Cloud and scale them when required.
RDS will take care of updates and patching.
RDS will auto-scale.
RDS will provide high availability automatically.
RDS will auto-backup.
Ideal for OLTP (Online Transaction Processing), where you gather specific data per transaction.
RDS usually uses EBS storage, and you can choose between different EBS volume classes.
Supported Engines
MySQL
MariaDB
PostgreSQL
Microsoft SQL Server
Oracle Database
IBM Db2
DB Instance Classes
You can change your instance class if your needs change.
The instance class determines the computation and memory capacity of an RDS instance.
db.m*: general purpose.
db.z*, db.x*, db.r*: memory optimized.
db.c*: compute optimized.
db.t*: burstable performance.
The * represents the chosen generation, optional attribute and size.
Example: db.m7g is a 7th-generation, general-purpose instance powered by AWS Graviton3 processors.
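The db.&lt;family&gt;&lt;generation&gt;&lt;attribute&gt;.&lt;size&gt; naming can be pulled apart programmatically; a small sketch (the helper name is my own, not an AWS API):

```python
def parse_db_instance_class(name):
    """Break a class name like db.m7g.large into family, generation,
    optional attribute suffix (e.g. 'g' for Graviton), and size."""
    prefix, spec, size = name.split(".")     # e.g. "db", "m7g", "large"
    family = spec[0]                          # 'm' = general purpose, etc.
    i = 1
    while i < len(spec) and spec[i].isdigit():
        i += 1                                # consume the generation digits
    generation = int(spec[1:i])
    attribute = spec[i:]                      # may be empty, e.g. for db.t3
    return {"family": family, "generation": generation,
            "attribute": attribute, "size": size}

print(parse_db_instance_class("db.m7g.large"))
```

So db.m7g.large decodes to family m (general purpose), generation 7, attribute g, size large — matching the example above.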
Backups
User initiated DB Snapshots of instance.
When instance is terminated, these backups are not deleted.
Automated DB backups to S3.
Deleted by default when the instance is terminated.
Disabled by setting backup retention period to 0.
Encryption of database and snapshots at rest available.
Deployment Strategies
Multi-AZ (Availability Zones)
Ideal for disaster recovery or availability (replication of data).
When one of the instances fails, AWS fails over to another instance automatically.
Route 53 will set up that routing automatically.
AWS automatically provisions and maintains one or more secondary standby DB instances in a different AZ.
The primary instance is replicated to the standby instances.
Provides redundancy and failover support.
It is possible to use the replicas to serve read traffic in Multi-AZ DB Cluster deployments only.
Your application should also be located in multiple AZs.
Available for all database engines.
Read Replicas
Ideal for performance.
The master instance focuses on writes, and reads are directed to the replicas.
Multiple read replicas (up to 15 for Aurora).
When the master instance fails, one replica is promoted.
But data consistency may then be lost, and the promoted replica gets a new DNS endpoint.
Supported for Aurora, PostgreSQL, MySQL and MariaDB.
Replicas cannot be put behind an AWS ELB.
Use an Aurora cluster for automatic distribution of requests, software-based routing, Route 53 routing, or HAProxy.
AWS Aurora
Aurora takes advantage of RDS features for management and administration.
Aurora management operations typically involve entire clusters of database instances that are synchronized through replication.
The automatic clustering, replication and storage allocation make it simple and cost-effective to set up, operate and scale.
Around 1/10th the cost of a commercial database.
You are only charged for the space that you use in an Aurora cluster volume.
Aurora Data
Aurora data is stored in the cluster volume, which is a single virtual volume that uses SSDs.
A cluster volume consists of copies of the data across three AZs in a single Region.
This data is automatically replicated, and Aurora also supports failover.
This architecture makes your data independent of the DB instances in the cluster.
So you can delete an instance from the cluster with zero data loss.
Aurora only erases the data if you delete the entire cluster.
Cluster volumes also grow automatically as the amount of data increases.
Supported Engines
MySQL
PostgreSQL
DB Instance Classes
Like RDS you can choose different instance classes for different needs.
Basic Price Models
You may choose a Serverless model, which automatically starts up or shuts down instances and scales capacity up or down based on need. In this model you pay only for the capacity consumed.
You can choose Provisioned On-Demand instances and pay per DB instance-hour.
You can also choose Provisioned Reserved instances for additional savings.
DynamoDB
AWS's Serverless, NoSQL fully managed database.
Provides zero downtime maintenance and it has no versions (major, minor, or patch).
As a NoSQL database it does not support JOIN operations.
It is recommended to denormalize your data models to reduce round trips and the processing power required by queries.
But it still provides read consistency and ACID transactions.
It is purpose-built for operational workloads that require consistent performance at any scale.
Ex.: It delivers single-digit millisecond performance for a shopping cart use case, whether you have 10 or 100 million users.
Uses IAM to help you securely control access to your DynamoDB resources.
Because of this, there are no usernames or passwords for accessing it.
By default DynamoDB encrypts all customer data at rest.
By default, DynamoDB automatically replicates your data across three AZs.
Continuous backups provide per-second granularity for recovery.
You can restore a table to any point in time up to the second during the last 35 days.
These backups and restores don't use provisioned capacity, and have no impact on the performance or availability of your applications.
Global Tables
DynamoDB global tables enable 99.999% availability and multi-Region resilience.
Capacity Modes
Provisioned
You specify the number of reads and writes per second that you expect your application to require.
Autoscaling can adjust provisioned capacity based on demand.
Use cases:
Predictable traffic.
Traffic is consistent or ramps gradually.
On-Demand
Pay-as-you-go pricing for read and write requests.
Instantly scales up or down your tables to adjust for capacity. (Zero capacity planning)
Also scales down to zero so you don't pay for throughput when your table doesn't have traffic.
No cold starts.
Use cases:
Unpredictable traffic.
Traffic hard to forecast.
Usual Table Example
Tables: ex.: Person.
Attributes: ex.: FirstName, LastName, Email, ...
Items: ex.: Persons table contains items of individual people.
Partition Keys and optional Sort Key defined for fast access to items.
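The key scheme above can be sketched with a tiny in-memory stand-in. This is illustrative only (not the DynamoDB API): it shows why a partition key plus optional sort key gives direct, scan-free access to items.

```python
# Hypothetical sketch of DynamoDB-style key access: an in-memory table
# keyed by (partition key, sort key). Names like "Email" are illustrative.

class MiniTable:
    def __init__(self, partition_key, sort_key=None):
        self.partition_key = partition_key
        self.sort_key = sort_key
        self.items = {}

    def _key(self, item):
        pk = item[self.partition_key]
        sk = item[self.sort_key] if self.sort_key else None
        return (pk, sk)

    def put_item(self, item):
        # Writes and reads are keyed lookups, not scans: cost does not
        # grow with table size, which is the point of the key design.
        self.items[self._key(item)] = item

    def get_item(self, pk, sk=None):
        return self.items.get((pk, sk))

persons = MiniTable(partition_key="Email")
persons.put_item({"Email": "a@example.com", "FirstName": "Ana", "LastName": "Silva"})
print(persons.get_item("a@example.com")["FirstName"])  # Ana
```

A table with a sort key would pass both parts, e.g. get_item("a@example.com", "2024-01-01"), which is how item collections under one partition key are addressed.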
Basic Price Models
DynamoDB charges for reading, writing and storing data in your tables, along with any optional features you choose to enable.
Price also depends on the capacity mode:
On-demand: Pay only for what you use.
Provisioned: You will be charged for the provisioned throughput capacity even if it is not fully utilized.
It offers a free tier that provides 25 GB of storage and can handle 200 million requests per month.
DocumentDB
Makes it easy to set up, operate and scale MongoDB-compatible databases.
Basically it is like running MongoDB in AWS.
Launched as instances in a cluster.
Supports instance-based clusters and elastic clusters.
It automatically grows the size of your storage volumes as your database increases.
You can increase read throughput for higher request volumes by creating more instance replicas in the cluster.
You can also scale compute and memory of the replicas up and down.
It runs in a VPC to isolate the database.
It automatically monitors the health of the cluster, replacing failing instances.
Automatic failover of instances.
Automatic, incremental and continuous backups to S3.
Backup capabilities allow you to restore your cluster to any second during your retention period, up to the last 5 minutes. (Retention period up to 35 days.)
Backups have zero impact on cluster performance.
You can encrypt your databases with KMS.
The cluster volume is replicated to three different AZs.
Clusters
Consists of 0 to 16 instances, and a cluster storage volume that manages the data for those instances.
All writes are done through the primary instance. (Meaning the other instances are read-only)
Basic Price Models
Instance hours (per-hour)
I/O requests (per 1 million requests per month)
Keyspaces (Cassandra)
Apache Cassandra compatible.
Compatible with Cassandra Query Language (CQL).
In provisioned capacity mode, you specify read and write capacity.
In on-demand mode, AWS manages capacity automatically.
Redshift
Is a fast, fully managed, petabyte-scale data warehouse, based on the PostgreSQL engine.
Perfect for Big Data storage solutions.
Specifically designed for Online Analytical Processing (OLAP) and Business Intelligence (BI) applications, which require complex queries against large datasets.
A Redshift cluster is a set of nodes, consisting of a leader node and one or more compute nodes.
You can choose the type and size of the nodes.
ElastiCache
Is an in-memory data store or cache in the cloud.
It allows you to retrieve information from fast, fully managed in-memory caches, instead of relying on slower disk-based databases.
Supports the Redis or Memcached engine.
It can be used on its own as a standalone key-value store, OR it can be used in front of another database, for example DynamoDB, to respond to requests for regularly accessed content.
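The "cache in front of another database" usage is commonly implemented as the cache-aside pattern. A minimal sketch, with plain dicts standing in for the cache engine and the backing database (none of these names are a real AWS API):

```python
# Cache-aside sketch: check the in-memory cache first, fall back to the
# (slow) backing store on a miss, then populate the cache for next time.
# `cache` stands in for Redis/Memcached; `database` for DynamoDB/RDS.

cache = {}
database = {"user:1": {"name": "Ana"}}
db_reads = 0  # counts round trips to the backing store

def get(key):
    global db_reads
    if key in cache:            # cache hit: fast, no DB round trip
        return cache[key]
    db_reads += 1               # cache miss: go to the backing store
    value = database.get(key)
    if value is not None:
        cache[key] = value      # populate for subsequent reads
    return value

get("user:1")
get("user:1")
print(db_reads)  # 1 — the second read was served from the cache
```

Real deployments add an expiry (TTL) so cached entries don't serve stale data indefinitely.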
Neptune
Is a fast, reliable, fully-managed graph database service.
Uses graph structures to represent and store data:
Nodes (data entities).
Edges (relationships).
Properties.
It has a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with millisecond latency.
Graph query languages: Gremlin and SPARQL.
Quantum Ledger Database (QLDB)
Fully managed ledger database.
Append only, immutable and transparent journal.
You cannot delete a transaction after it has occurred.
And any changes to a transaction will be visible.
Built-in cryptographic verification of transactions.
Critical applications. (e.g accounting, health, sensitive information)
Database Migration Services (DMS)
Orchestrates the migration of databases over to AWS easily and securely.
It can also migrate data from one database engine type to another totally different database engine type.
Deployment on AWS
Infrastructure as Code
Allows infrastructure to be managed in the same way as software. (In a text file, using the CloudFormation template language.)
Version Control can be applied to it.
Examples
CloudFormation Templates (JSON, YAML).
CloudFormation Designer.
Enables you to visually design the needed architecture on screen.
And auto-generates a CloudFormation Template.
Example - with OpsWork
Check out in OpsWorks Section.
Deployment Options
There are many deployment options in AWS.
You can use them individually or you can manage them and automate them across the entire lifecycle.
Manually
CodeCommit
You can manage the code with CodeCommit, which is a Git-based code repository.
Very similar to GitHub.
Updates on the repository can trigger Build Jobs, which can build the code automatically. (With CodeBuild.)
CodeBuild
Used to build applications from CodeCommit repositories.
Also used to implement testing, to make sure that the code not only compiles but also works as intended.
Once it's ready for production you can either:
Manually pass it to CodeDeploy, and afterwards set up CloudFormation and CloudWatch.
Or use OpsWorks or Elastic Beanstalk (deployment services), so AWS will automatically manage provisioning AND monitoring.
CodeDeploy
Used to get the code ready for deployment and pass it to CloudFormation.
It is also possible to use OpsWorks or Elastic BeanStalk (Deployment Service) instead, to manage provision and monitor of the resources.
CloudFormation
Will provision those compute resources.
Once it is all up and running, we can implement CloudWatch.
CloudWatch
That will continually monitor those deployment resources.
Automatically
With CodePipeline
CodePipeline will manage the entire process automatically.
Deployment with Containers
With Elastic Container Service (ECS)
Check the Elastic Container Service (ECS) section for more info.
With Elastic Kubernetes Service (EKS)
Check the Elastic Kubernetes Service (EKS) section for more info.
Deployment Strategies
There are a number of different strategies when doing deployments.
All at Once Deployments
Takes all of our instances and updates them at once. (At the same time.)
While they update, the Elastic Load Balancer won't be able to send traffic to any of the instances.
So there will be DOWNTIME.
No availability during deployment process if no new resources are created.
Availability can be maintained if resources are doubled (disposable resources).
You copy the environment, and tell Route 53 to redirect traffic to this new copied environment.
The original environment will be down while updating.
After the update finishes, revert the routes in Route 53 and dispose of the resources. (Deleting the copied environment.)
In-Place Deployments
Allows us to simply update each instance one by one. (Individually.)
While each one updates, the Elastic Load Balancer won't send traffic to the updating instance.
So the overall capacity will be REDUCED.
This can affect availability in peak access periods.
No new resources are created.
Rolling Deployments
Similar to In-Place, except that we add an additional instance to cover the instance that is being updated.
Overall capacity is maintained and no downtime will happen.
New resources are created, but only for the instance being updated.
Blue/Green Deployment
You will always have 2 identical environments. (Similar to the solution for downtime in All at Once Deployments.)
One is the Blue:
Which is the one actually running and receiving traffic from Route 53.
The Production environment.
The other is the Green:
Which is the one receiving updates and being tested for production.
The Development environment.
When the Green is ready and fully updated, it becomes the new Blue and traffic is redirected to it.
Then the old Blue becomes Green, and will be used for future updates. (And the cycle restarts.)
If problems are detected with the update (e.g. bad code), you can simply redirect the traffic back to the old environment. (Rollback.)
Will have minimal to none downtime.
BUT this will require DOUBLE the resources ALL THE TIME.
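The swap-and-rollback cycle above can be sketched as pure logic, with a pointer standing in for the Route 53 record. All names here are illustrative, not an AWS API:

```python
# Blue/Green sketch: two environments and a "live" pointer (standing in
# for the Route 53 record). Swapping redirects traffic; swapping again
# is the rollback.

class BlueGreen:
    def __init__(self, blue_version, green_version):
        self.envs = {"blue": blue_version, "green": green_version}
        self.live = "blue"  # production traffic target

    def deploy_to_idle(self, version):
        # Only the idle environment receives updates; the live one is untouched.
        idle = "green" if self.live == "blue" else "blue"
        self.envs[idle] = version

    def swap(self):
        # Redirect all traffic to the other environment.
        self.live = "green" if self.live == "blue" else "blue"

    def live_version(self):
        return self.envs[self.live]

d = BlueGreen("v1", "v1")
d.deploy_to_idle("v2")
d.swap()
print(d.live_version())  # v2 — the updated environment now serves traffic
d.swap()                 # rollback: point traffic back at the old environment
print(d.live_version())  # v1
```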
Canary Deployment
Is a two-stage Blue/Green deployment.
Where it differs is that instead of directing 100% of the traffic to the new environment after the update, you tell Route 53 to direct half of the traffic to the updated Green environment and keep half of the traffic on the Blue.
Once we are satisfied with the update, we put 100% of the traffic on the Green, make the shift Green -> Blue, and restart the cycle.
This method will have reduced risks after updating BUT will have increased deployment time.
Linear Deployment
Is an incremental Blue/Green deployment.
Instead of going 50/50 like the Canary, we start by sending only 10% of the traffic to the Green and keeping 90% on the Blue.
In the next stage, we increase from 10% to 20% on the Green.
We keep increasing traffic to the Green in each stage until all the traffic goes to the Green, then switch Green -> Blue.
This method minimizes the spread of problems to users.
Will have increased deployment time.
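The difference between the Canary and Linear strategies is just the traffic-shift schedule. A hypothetical sketch, expressed as the percentage of traffic sent to Green at each stage (the 50% and 10% steps mirror the text above; real schedules are configurable):

```python
# Traffic-shift schedules: each list entry is the percentage of traffic
# routed to the Green environment at that stage.

def canary_schedule():
    # Two stages: half the traffic to Green, then all of it.
    return [50, 100]

def linear_schedule(step=10):
    # Increase Green's share by `step` percent per stage until 100%.
    return list(range(step, 101, step))

print(canary_schedule())   # [50, 100]
print(linear_schedule())   # [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
```

More stages mean problems reach fewer users per stage, at the cost of a longer total deployment time — exactly the trade-off the notes describe.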
Networking & Content Delivery
CloudFront
Is a global content delivery network (CDN) that securely delivers your frequently requested content to over 100 edge locations across the globe.
By doing this, it achieves low latency and high transfer speeds for your end-users.
It also provides protection against DDoS attacks.
Virtual Private Cloud (VPC)
Lets you provision a logically isolated section of the AWS cloud.
Lets you launch the AWS resources in that virtual network that you yourself define.
This is your own personal private space within AWS cloud, and no one can enter it unless you allow it.
Subnets
A VPC can span multiple AZs by having subnets in multiple AZs.
A VPC cannot span multiple regions. (Use VPC Peering.)
Private Subnet
Only allows access from within the VPC.
Subnets are private by default.
Does not allow access from the public internet.
Any traffic that tries to access a private subnet from outside will be blocked by default.
Public Subnet
They can receive traffic from the internet.
Receives traffic from an Internet Gateway.
There needs to be an Internet Gateway in the VPC for them to work. (To receive traffic and direct it through to the public subnet.)
Must have a route defined to an Internet Gateway.
For the VPC service to know where to route the traffic, we need to define a route in a route table, and associate this route table with the public subnet.
Route Tables
Contains a set of rules (routes), that are used to determine where network traffic from your subnet or gateway is directed.
All subnets must be associated with a Route Table.
If a route table is not defined then the Main Route Table will be implicitly associated to the subnet.
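Route-table lookups pick the most specific route whose CIDR contains the destination (longest-prefix match). A sketch using the standard library; the "local" and "igw-1234" targets are illustrative placeholders mirroring a typical public-subnet route table:

```python
# Route-table lookup sketch: the most specific (longest-prefix) matching
# route wins, just like a VPC route table.
import ipaddress

routes = [
    ("10.0.0.0/16", "local"),   # traffic inside the VPC stays local
    ("0.0.0.0/0", "igw-1234"),  # everything else goes to the Internet Gateway
]

def route(destination):
    dest = ipaddress.ip_address(destination)
    best = None
    for cidr, target in routes:
        net = ipaddress.ip_network(cidr)
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, target)  # keep the most specific match so far
    return best[1] if best else None

print(route("10.0.1.5"))  # local — /16 is more specific than /0
print(route("8.8.8.8"))   # igw-1234 — only the default route matches
```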
VPC Peering
Used to connect VPCs, including VPCs in different regions, so that they behave as a single network.
Networking connection between two VPCs.
Instances communicate with each other as if they are within the same network.
Facilitates high-speed transfer of data through the AWS backbone across different VPCs, regions and even accounts.
Security (EC2 Security Groups)
Acts as a virtual firewall to an instance.
Has allow rules only. Any request not matching a rule is rejected.
Stateful - responses to requests are automatically allowed.
Meaning state is saved: if the request was allowed, its response will be allowed.
Can associate multiple security groups to an instance.
Default security group:
Blocks inbound traffic not using the same security group.
Allows all outbound traffic.
Network Access Control Lists (NACL)
Associated to a subnet.
Allow or deny rules. Rules evaluated in number order.
Default NACL allows all inbound and outbound traffic.
Custom NACL denies all traffic until you add rules.
If you don't explicitly associate a subnet with a NACL, the subnet is implicitly associated with the default NACL.
Stateless - responses to allowed inbound traffic are subject to the rules for outbound traffic.
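The "rules evaluated in number order" behaviour can be sketched directly. Rule numbers, CIDRs and ports below are illustrative; the key points are that the first matching rule wins and that anything unmatched hits the implicit final deny:

```python
# NACL evaluation sketch: rules are checked in ascending number order and
# the first match decides; unmatched traffic hits the implicit '*' deny.
import ipaddress

# (rule number, source CIDR, port, action); port 0 means "all ports" here.
nacl_rules = [
    (100, "0.0.0.0/0", 443, "allow"),    # allow HTTPS from anywhere
    (200, "203.0.113.0/24", 0, "deny"),  # deny everything else from this range
]

def evaluate(source_ip, port):
    src = ipaddress.ip_address(source_ip)
    for number, cidr, rule_port, action in sorted(nacl_rules):
        if src in ipaddress.ip_network(cidr) and rule_port in (0, port):
            return action  # first matching rule wins
    return "deny"          # implicit final '*' rule

print(evaluate("203.0.113.9", 443))  # allow — rule 100 matches before rule 200
print(evaluate("203.0.113.9", 80))   # deny  — rule 200
print(evaluate("198.51.100.1", 80))  # deny  — implicit '*'
```

Note the contrast with security groups: this function would have to be applied to the response traffic too (stateless), whereas a security group remembers the allowed request and lets the response through automatically (stateful).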
Web Application Firewall (WAF)
Protects your web applications or APIs against common web exploits and bots.
AWS Managed Rules for specific applications.
IP sets to blacklist or whitelist IP addresses.
Can be applied to CloudFront, the Application Load Balancers, Amazon API Gateway, or AWS AppSync.
Virtual Private Networking (VPN)
It uses a Customer Gateway (CGW), which sits on the client end.
Typically a device or a VPN software application running on the client side.
It connects to AWS by hitting a Virtual Private Gateway (VGW), which sits on the AWS end.
After hitting the VGW, traffic is then directed to the VPC.
The VPN connection is a dual-tunnel connection. (2 tunnels for redundancy.)
Direct Connect
Is a high speed dedicated network connection to AWS.
Enterprises can use it to establish private connections to the cloud in situations where a standard internet connection won't be adequate.
Great advantages when combining with VPC Peering.
Elastic Load Balancing (ELB)
Automatically distributes incoming traffic for applications across multiple EC2 instances in multiple Availability Zones, so if one Availability Zone goes down, traffic still goes to the others and your application continues to deliver responses.
It also allows you to achieve high availability and fault tolerance by distributing traffic evenly amongst those instances, and it can bypass unhealthy instances.
Route 53
Is a highly available and scalable Domain Name System (DNS) service that can handle traffic for your domain name and direct that traffic to your back-end web server.
Useful to register DNS.
Can also buy domains.
API Gateway
Is a fully managed service that makes it easy for developers to create and deploy secure application programming interfaces (APIs) at any scale.
It handles all of those tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls.
It's a serverless service, and as such, you don't need to worry about the underlying infrastructure.
Management & Governance Tools
Useful for:
Provisioning
Monitoring and Logging
Operations Management
Configuration Management
CloudFormation
Allows us to implement our infrastructure as code. (JSON or YAML templates.)
Version control capability.
Template describes all the AWS resources and CloudFormation takes care of provisioning and configuring.
Stacks
All of our related resources that are defined in our CloudFormation template (or multiple templates) can be managed as a single unit called a stack.
Stacks are managed using the console.
Before making changes to your resources, you can generate a change set.
This allows you to view the changes to your resources on a review screen before you actually implement those changes.
Template Sections
A template is made up of a number of different sections.
Format Version: the template format version the template conforms to.
What version of template you are actually using.
Description: must always follow Format Version.
Just a description of what the template is about.
Metadata: JSON objects and keys that provide additional info.
If you want to put something specific in the template, you can put it in as metadata.
Parameters: allow values to be passed at stack creation.
Very useful if other people will be using your template.
Used to define parameters so that when the template and the stack are being created, the CloudFormation service can prompt the person deploying for parameters.
E.g.: what type of EC2 instance you want to launch.
Mappings: match keys to corresponding name-value pairs.
Transforms: optional transforms such as SAM snippets.
To prepare the CloudFormation template for other services, such as the Serverless Application Model.
Outputs: declares output values.
Can be very useful when you want to see what is going on with your CloudFormation template.
You can add outputs at different stages that print messages to the console.
Resources (required): declares the resources to be included in the deployment.
Conditions: define when a resource can be created or a property defined.
E.g.: you may put in a condition that an EC2 resource cannot be deployed until the VPC has been created.
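The sections above fit together like this minimal illustrative template. The parameter name and AMI ID are placeholders, not real resources:

```yaml
# Minimal illustrative CloudFormation template (placeholder values).
AWSTemplateFormatVersion: "2010-09-09"
Description: Launch a single EC2 instance (illustrative example).
Parameters:
  InstanceTypeParam:
    Type: String
    Default: t2.micro
    Description: EC2 instance type to launch.
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref InstanceTypeParam
      ImageId: ami-12345678   # placeholder AMI ID
Outputs:
  InstanceId:
    Description: ID of the launched instance.
    Value: !Ref WebServer
```

Deploying this as a stack would prompt for InstanceTypeParam, create the instance under the Resources section, and echo its ID via Outputs.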
CloudFormation Designer
Visual tool that provides a drag-and-drop interface for adding resources to templates.
Similar to UML templates.
In the template you have:
Boxes indicating the type of resources.
Arrows linking resources to indicate conditions or any relationships between them.
Colored dots to indicate some configurations on the resources.
Supports JSON and YAML.
Changes made to the visual representation are automatically converted to JSON or YAML.
Service Catalog
Allows enterprises to catalog resources that can be deployed on the cloud.
This allows an enterprise to achieve common governance and compliance for its IT resources by clearly defining what is allowed to be deployed on the AWS cloud.
CloudWatch
Mainly monitors performance.
The Monitoring and Observability service to:
Collect logs, metrics and custom metrics of AWS resources.
Monitor metrics, statistics and alarms in dashboards.
Act on alarms and events. Implement corrective action.
Analyze metrics with CloudWatch Log Insights.
Compliance and security controlled with IAM, plus data encryption at rest and in transit.
It can be used for triggering scaling operations, or it can also be used for providing insight into your deployed resources.
Monitors resources like:
EC2 instances.
Autoscaling groups.
Elastic Load Balancers.
Route 53 health checks.
Performance of EBS volumes.
Storage Gateway latency.
CloudFront.
Custom Metrics
Publish your own metrics to CloudWatch using the AWS CLI or an API/SDK.
Standard resolution: one-minute granularity. (Samples taken every minute.)
High resolution: one-second granularity. (Samples taken every second.)
Metrics produced by AWS services are standard resolution by default.
Metrics include:
CPU
Network
Disk
Status check
Alarms
Billing alarms as well as resource alarms.
Integrates with SNS.
Three states:
OK
ALARM
INSUFFICIENT_DATA
If a metric is above the alarm threshold for the number of time periods defined by the evaluation period, an alarm is invoked.
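That evaluation rule can be sketched as a small function. A simplification of CloudWatch's real behaviour (it supports missing-data handling and "M out of N" datapoints); thresholds and datapoints below are illustrative:

```python
# Alarm evaluation sketch: the alarm fires only when the metric breaches
# the threshold for `evaluation_periods` consecutive periods.

def alarm_state(datapoints, threshold, evaluation_periods):
    if len(datapoints) < evaluation_periods:
        return "INSUFFICIENT_DATA"      # not enough periods observed yet
    recent = datapoints[-evaluation_periods:]
    if all(value > threshold for value in recent):
        return "ALARM"                  # every recent period breached
    return "OK"

cpu = [42, 55, 81, 85, 90]              # e.g. CPU utilization per period
print(alarm_state(cpu, threshold=80, evaluation_periods=3))       # ALARM
print(alarm_state([42, 90], threshold=80, evaluation_periods=3))  # INSUFFICIENT_DATA
print(alarm_state(cpu, threshold=95, evaluation_periods=3))       # OK
```

Requiring several consecutive breaches (rather than one) keeps a single spike from invoking the alarm and, say, triggering a scaling action.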
Logs
Agent installed on instance.
Monitor, store, and access your log files from EC2 instances, CloudTrail, or other sources.
Search and analyse data with CloudWatch Log Insights.
Systems Manager
Provides a unified user interface that allows you to view operational data from multiple AWS services and to automate tasks across those resources.
That helps to shorten the time to detect and resolve any operational problems.
CloudTrail
Mainly monitors API calls done on AWS.
Monitors and logs AWS account activity, including actions taken through the AWS management console, the AWS software development kits, the command line tools, and other AWS services.
So, this greatly simplifies security analysis and tracking of user activity in your account.
It can log calls to AWS services from the AWS API.
Logs are stored in a bucket and can be analysed. (Amazon Athena, EMR, etc.)
It logs which AWS user made the call, from what IP address, and the date of the call.
AWS Config
Enables you to access, audit, and evaluate the configurations of your AWS resources.
This greatly simplifies compliance auditing, security analysis, change management and control, and also operational troubleshooting.
OpsWorks
It is a configuration management service fully managed by AWS.
AWS OpsWorks for Chef Automate.
AWS OpsWorks for Puppet Enterprise.
AWS OpsWorks Stacks.
Define different parts of an application with layers.
Chef recipes define the configuration of layers.
These allow us to define different parts of our application with layers, and each layer has a Chef recipe to define the configuration and resources inside it.
Chef and Puppet can be used to configure and automate the deployment of AWS resources.
Example - with Chef or Puppet
Application instances are registered to a Chef or Puppet OpsWorks instance.
Chef or Puppet configurations are used to manage application instances.
Example - with Stacks
Stacks
A stack is divided into layers representing different parts of the application.
Chef recipes are used to define layer configurations.
Some AWS resources (e.g. Amazon RDS) need to be created outside OpsWorks and added to the layer.
Elastic BeanStalk (Deployment Service)
Usually used for Web Apps.
Uses CloudFormation under the hood.
Allows you to quickly deploy and manage applications in environments.
Automatically handles capacity provisioning, load balancing, scaling, and application health monitoring.
New versions of the code can be uploaded through the console or CLI, and also complete environments can be re-deployed.
Applications can be:
Docker containers.
Node.js, Java, .NET, PHP, Ruby, Python and Go.
On servers such as Apache, Nginx, Passenger and IIS.
Highly Available & Fault Tolerant Architecture
Beanstalk will automatically create one, without us having to do anything.
Deployment Options
All at Once: if you have e.g. 20 EC2 instances, it will deploy on all of them at once.
Immutable (All at Once - without downtime). (Two environments temporarily.)
It will duplicate the 20 EC2 instances as a "temporary backup", while the 20 original ones deploy at once.
It will double the resources, but will not have downtime.
Rolling Deployments (a batch at a time), Rolling with additional batch.
It will deploy on batches of X instances at a time.
Blue/Green Deployment (two environments).
It is like having two environments like "Development" and "Production" environment.
Then when ready for deploying, the "Development" will be deployed and will turn into the "Production", and the "Production" will become the "Development", so that there is no downtime.
Trusted Advisor
Similar to Amazon Inspector.
Is an online expert system that can analyze your AWS account in real time and the resources inside it, and then it can advise you on how to best achieve high security and best performance from those resources.
It helps you to reduce costs and raise performance and security, optimizing your AWS environment.
Check Categories
Cost Optimization
Basic (Limited)
Recommendations that can potentially save you money.
Performance
Basic (Limited); Developer (Full)
Recommendations that can improve the speed and responsiveness of your applications.
Security
Basic (Limited); Business (Full)
Recommendations for security settings.
Fault Tolerance
Basic (Limited); Developer (Full)
Recommendations that help increase the resiliency of your solutions.
Service Limits
Basic (Limited); Developer (Full)
Checks the usage for your account and whether your account approaches or exceeds limits.
Operational Excellence
Business (Full)
Recommendations to help you operate your AWS environment effectively.
Analytics
Elastic MapReduce (EMR)
Fully managed Hadoop framework as a service.
You can also run other frameworks in EMR that integrate with Hadoop, such as Apache Spark, Hive, HBase, Presto and Flink.
It runs over a cluster of EC2 instances, and these clusters can be automatically deleted upon task completion.
Data can be analyzed by EMR in several data stores, including S3 and DynamoDB.
EMR Studio is an IDE to create data science applications.
Athena
Allows you to analyze data stored in an S3 bucket using standard SQL statements.
You can use EMR to cleanse and/or transform the data before querying it with Athena.
Pay only for the queries you run, OR by TB analysed.
Serverless.
Good for analysing logs stored in S3, like ELB logs, S3 access logs or others.
AWS Glue
It is a serverless data integration service that makes it easy to discover, prepare and combine data for analytics, machine learning and application development.
It provides all of the capabilities needed for data integration so that you can start analyzing your data and putting it to use in minutes.
Redshift
Similar to Athena, used to analyze data in S3 with SQL.
But it can query exabytes of data.
Pay for queries you run AND the Redshift Cluster.
FinSpace
Is a petabyte-scale data management and analytics service, purpose-built for the financial services industry.
Also includes a library of over 100 financial analysis functions.
Kinesis
Allows you to collect, process, and analyze real-time streaming data.
It consists of a number of services:
Kinesis Data Streams: capture, process, and store data streams.
Kinesis Data Firehose: load data streams into AWS data stores.
Kinesis Data Analytics: analyze data streams with SQL or Apache Flink.
Kinesis Video Streams: capture, process, and store video streams.
QuickSight
Is a Business Intelligence (BI) reporting tool.
Similar to Tableau, or, if you're a Java programmer, similar to BIRT.
It allows you to visualize your analyzed data.
Uses a Super-fast, Parallel, In-memory Calculation Engine (SPICE).
1/10th the cost of traditional BI software.
CloudSearch
Is a fully managed search engine service that supports up to 34 languages.
It allows you to create search solutions for your website or application.
OpenSearch (previously ElasticSearch)
Is a fully managed ElasticSearch service.
It is a real-time distributed search and analytics engine.
It is the most popular enterprise search engine. (Facebook, Github, Stack Exchange, Quora, ...)
This allows high-speed crawling and analysis of data that is stored on AWS.
Analyzes data from:
S3
Kinesis Streams
DynamoDB Streams
CloudWatch logs
CloudTrail logs
Security, Identity & Compliance
AWS Shield
Provides protection against DDoS attacks.
The standard version of Shield is implemented automatically on all AWS accounts.
There are two tiers of AWS Shield:
Standard
Covers network flow monitoring.
Protection from DDoS attacks.
Advanced
Covers deeper (layer 7) application layers traffic monitoring.
More DDoS mitigation capabilities.
Visibility and reporting of attacks.
Like forensic reports.
Attack history reports.
DDoS response support team.
Cost protection, reimbursing related Route 53, CloudFront and ELB DDoS charges.
Web Application Firewall (WAF)
It is a web application firewall that allows you to monitor HTTP and HTTPS requests forwarded to CloudFront, a Load Balancer or API Gateway.
It also provides additional protection against common attacks such as SQL injection and cross-site scripting (XSS).
It has different sets of rules that can be used for different applications.
Like allow access to your content or not.
It allows 3 different behavior types:
Allow all requests except the ones you specify.
Block all requests except the ones you specify.
Count the requests that match the properties you specify.
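The three behaviours above can be sketched with a trivial rule predicate. The IP sets and addresses are illustrative; real WAF rules match on many more request properties:

```python
# WAF behaviour sketch: allow-all-except, block-all-except, and count.
# The predicate here just checks a source IP against an IP set.

BLOCKLIST = {"203.0.113.9"}   # illustrative IP set for allow-all-except
ALLOWLIST = {"198.51.100.7"}  # illustrative IP set for block-all-except

def allow_all_except(ip):
    # Default action is ALLOW; only listed IPs are blocked.
    return "BLOCK" if ip in BLOCKLIST else "ALLOW"

def block_all_except(ip):
    # Default action is BLOCK; only listed IPs are allowed.
    return "ALLOW" if ip in ALLOWLIST else "BLOCK"

def count_matches(ips, predicate):
    # Count mode: don't block anything, just tally matching requests.
    return sum(1 for ip in ips if predicate(ip))

print(allow_all_except("203.0.113.9"))  # BLOCK
print(block_all_except("203.0.113.9"))  # BLOCK
print(count_matches(["203.0.113.9", "198.51.100.7"],
                    lambda ip: ip in BLOCKLIST))  # 1
```

Count mode is typically used to trial a new rule: you observe how much traffic it would affect before switching it to allow or block.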
Identity and Access Management (IAM)
More info in the Identity and Access Management (IAM) section.
AWS Organizations
More info in the AWS Organizations section.
Amazon Inspector
Is an automated security assessment service.
It can help automatically identify vulnerabilities or areas of improvement within your AWS account.
After inspecting, it produces a detailed list of security vulnerabilities found ordered by importance.
Artifact
Is an online portal that provides access to AWS security and compliance documentation, and that documentation can be readily available when needed for auditing and compliance purposes.
Certificate Manager
Issues SSL certificates for HTTPS communication with your website.
It integrates with AWS services such as Route 53 and CloudFront, and the certificates that are provisioned through AWS Certificate Manager are completely free.
Amazon Cloud Directory
Is a cloud-based directory service that can have hierarchies of data in multiple dimensions.
Unlike conventional LDAP-based directory services that can only have a single hierarchy.
Directory Service
Is a fully managed Microsoft Active Directory service in the AWS cloud.
CloudHSM
Is a dedicated hardware security module in the AWS cloud.
This allows you to achieve corporate and regulatory compliance while at the same time greatly reducing your costs over using your own HSM in your own infrastructure.
Amazon Cognito
Provides sign-in and sign-up capability for your web and mobile applications.
You can also integrate that sign-up process with external OAuth providers such as Google and Facebook, as well as SAML 2.0 providers.
Key Management Service (KMS)
Makes it easy to create and control encryption keys for your encrypted data, and it also uses hardware security modules to secure your keys.
It's integrated well with AWS services such as S3, Redshift and EBS.
Application Integration
Step Functions
Makes it easy to coordinate the components of distributed applications and microservices using a visual workflow.
Simple WorkFlow Service (SWF)
Works in a similar way to Step Functions, coordinating multiple components of a business process.
For new applications, it is recommended to use Step Functions, not the SWF service.
Simple Notification Service (SNS)
Is a flexible, fully managed pub-sub messaging service.
What that means is that you can create a topic, and users can subscribe to that topic, and when you publish a message to that topic, those users will receive that message.
It can also be used for push notifications to mobile devices.
Simple Queue Service (SQS)
Is a fully managed message queuing service and that makes it easy to decouple your applications from demand.
What that means is that it allows messages to build up in a queue until the processing server that processes those messages can catch up with demand.
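That buffering behaviour can be sketched with an in-memory queue. This stands in for SQS and is not an AWS API; note also that FIFO ordering here is a property of this toy deque — standard SQS queues do not guarantee ordering (FIFO queues do):

```python
# Decoupling sketch: a producer enqueues a burst faster than the worker
# consumes; the queue absorbs the burst so no message is lost.
from collections import deque

queue = deque()     # stands in for the SQS queue
processed = []

def send_message(body):
    queue.append(body)           # producer: returns immediately

def poll_and_process(batch_size=2):
    # Consumer: drains up to batch_size messages per poll,
    # at whatever pace the processing server can sustain.
    for _ in range(min(batch_size, len(queue))):
        processed.append(queue.popleft())

for i in range(5):               # burst of 5 messages arrives at once
    send_message(f"order-{i}")

while queue:                     # the worker catches up over several polls
    poll_and_process()

print(processed)  # ['order-0', 'order-1', 'order-2', 'order-3', 'order-4']
```

The producer never waits on the consumer, which is the decoupling the notes describe: demand spikes fill the queue instead of overloading the processing server.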
Developer Tools
Cloud9
Is an integrated development environment running in AWS cloud.
It allows you to deploy servers directly to AWS from an integrated development environment.
CodeStar
Makes it easy to develop and deploy applications to AWS.
It can manage the entire CI/CD pipeline for you.
It has a project management dashboard, including an integrated issue tracking capability powered by Atlassian Jira software.
X-Ray
Makes it easy to analyze and debug applications.
This provides you with better insight into the performance of your application and the underlying services it relies upon.
CodeCommit
Is a Git repository just like GitHub, running in the AWS Cloud.
CodePipeline
Is a CI/CD service that can build, test and then deploy your code every time a code change occurs.
CodeBuild
Compiles your source code, runs tests and then produces software packages that are ready to deploy on AWS.
CodeDeploy
Is a service that automates software deployments to a variety of compute services, including EC2, Lambda, and even instances that are running on-premises.
AWS Command Line Interface (CLI)
CloudShell
A browser-based terminal available from the AWS Management Console.
Useful for performing tasks through a terminal, such as:
Creating buckets.
Creating databases.
Creating Lambda functions.
Most tasks that can be done through the Management Console.
Run AWS commands with
aws [command] [options]
.
AWS API
Provides the mechanism for communication with AWS through HTTPS calls.
Utilized by:
AWS Management Console.
AWS CLI.
AWS SDKs.
Other AWS Services.
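All of these clients authenticate their HTTPS calls by signing them with AWS Signature Version 4. The signing-key derivation step of that scheme can be sketched with the standard library; the secret key, date, region, and service below are placeholder values, not real credentials.

```python
# Sketch of the signing-key derivation in AWS Signature Version 4,
# the scheme behind authenticated HTTPS calls to the AWS API.
# The secret key, date, region, and service are placeholders.
import hashlib
import hmac

def hmac_sha256(key, msg):
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def signing_key(secret_key, date, region, service):
    # Each HMAC chains the previous result in as the key, scoping
    # the final key to a date, region, and service.
    k_date = hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = hmac_sha256(k_date, region)
    k_service = hmac_sha256(k_region, service)
    return hmac_sha256(k_service, "aws4_request")

key = signing_key("EXAMPLESECRETKEY", "20240101", "us-east-1", "s3")
```

The derived key is then used to sign a canonical form of each request; SDKs and the CLI do all of this transparently.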
AWS CLI Applications
CLI application available for Windows, Linux, and macOS.
Allows API commands to be sent to AWS using Windows or Linux terminals.
AWS Shell is a cross-platform integrated shell environment written in Python.
AWS Cloud9
Integrated development environment running on an EC2 instance accessed through the AWS Management Console.
A tree view of files enables drag-and-drop SFTP transfer of files.
Increased security, as IAM credentials are not saved on your computer.
Cloud9 IDE
It is similar to VS Code.
Makes AWS features easier to manage.
Runs in the browser.
Machine Learning Services
DeepLens
Is a deep learning-enabled video camera.
It has a deep learning SDK that allows you to create advanced vision system applications.
SageMaker
Is AWS's flagship machine learning product.
It allows you to build and train your own machine learning models and then deploy them to the AWS cloud and use them as a back end for your applications.
Rekognition
Provides deep learning-based analysis of video and images.
Amazon Lex
Allows you to build conversational chatbots.
These can be used in many applications, such as first-line support for customers.
Polly
Provides natural-sounding text to speech.
Comprehend
Can use deep learning to analyze text for insights and relationships.
This can be used for customer analysis or for advanced searching of documents.
Translate
Can use machine learning to accurately translate text to a number of different languages.
Transcribe
Is an automatic speech recognition service that can analyze audio files stored in S3 and then return the transcribed text.
Media Services
Elemental MediaConvert
Is a file-based video transcoding service for converting video formats for video-on-demand content.
MediaPackage
Prepares video content for delivery over the internet.
It can also protect against piracy through the use of digital rights management.
MediaTailor
Inserts individually targeted advertising into video streams.
Viewers receive streaming video with ads that are personalized for them.
Elemental MediaLive
Is a broadcast-grade live video processing service for creating video streams for delivery to televisions and internet-connected devices.
Elemental MediaStore
Is a storage service in the AWS cloud that is optimized for media.
Kinesis Video Streams
Streams video from connected devices through to the AWS cloud for analytics, machine learning, and other processing applications.
Mobile Services
Mobile Hub
Allows you to easily configure your AWS services for mobile applications in one place.
It generates a cloud configuration file which stores information about those configured services.
Device Farm
Is an app testing service for Android, iOS and Web applications.
It allows you to test your app against a large collection of physical devices in the AWS cloud.
AppSync
Is a GraphQL backend for mobile and web applications.
Migration Services
Application Discovery Service
Gathers information about an enterprise's on-premises data centers to help plan migration over to AWS.
Data that is collected is retained in an encrypted format in an AWS Application Discovery Service datastore.
Database Migration Service
Orchestrates the migration of databases over to the AWS cloud.
You can also migrate databases from one database engine type to a totally different database engine type.
Server Migration Service
Can automate the migration of thousands of on-premises workloads over to the AWS cloud.
This reduces costs and minimizes the downtime for migrations.
AWS Snowball
Is a portable petabyte-scale data storage device that can be used to migrate data from on-premises environments over to the AWS cloud.
You copy your data onto the Snowball device and then send it to AWS, who will then upload it to a storage service for you.
Internet of Things (IoT)
AWS IoT
Is a managed cloud platform that lets embedded devices such as microcontrollers and Raspberry Pis securely interact with cloud applications and other devices.
Amazon FreeRTOS
Is an operating system for microcontrollers such as the Microchip PIC32 that allows small, low-cost, low-power devices to connect to AWS IoT.
AWS Greengrass
Is software that lets you run local AWS Lambda functions, messaging, data caching, sync, and machine learning applications on AWS IoT connected devices.
It extends AWS services to devices so they can act locally on the data they generate while still using cloud-based AWS IoT capabilities.
Customer Engagement Services
Amazon Connect
Is a self-service contact center in the cloud, delivered on a pay-as-you-go pricing model.
It has a drag-and-drop graphical user interface that allows you to create process flows defining customer interactions without writing any code.
Amazon Pinpoint
Allows you to send email, SMS, and mobile push messages for targeted marketing campaigns, as well as direct messages to your individual customers.
Simple Email Service (SES)
Is a cloud-based bulk email sending service.
Business Productivity & App Streaming
Amazon WorkDocs (like Microsoft Office)
Is a secure, fully managed file collaboration and management service in the AWS cloud.
The web client allows you to view and provide feedback for over 35 different file types, including Microsoft Office file types and PDF.
Amazon WorkMail (like Outlook)
Is a secure, managed, business email and calendar service.
Amazon Chime (like Zoom)
Is an online meeting service in the AWS cloud.
It is great for online meetings, video conferencing, calls, chat, and sharing content both inside and outside of your organization.
Amazon WorkSpaces
Is a fully managed, secure desktop-as-a-service offering.
It can easily provision streaming cloud-based Microsoft Windows desktops.
Pricing combines a fixed cost plus an hourly rate.
Amazon AppStream 2.0
Is a fully managed secure application streaming service, that allows you to stream desktop applications from AWS to an HTML5 compatible web browser.
This is great for users who want access to their applications from anywhere.