
Thursday, November 30, 2023

AWS Shield Standard vs Advanced note

 



// AWS Shield Advanced with WAF


- protects against known attack signatures

- has ML capabilities  // can recognize new threats as they evolve


AWS Shield is a service that protects applications against DDoS attacks. AWS Shield provides two levels of protection: Standard and Advanced.






// standard

AWS Shield Standard automatically protects all AWS customers at no cost. It protects your AWS resources from the most common, frequently occurring types of DDoS attacks. 


As network traffic comes into your applications, AWS Shield Standard uses a variety of analysis techniques to detect malicious traffic in real time and automatically mitigates it. 




// advanced


AWS Shield Advanced is a paid service that provides detailed attack diagnostics and the ability to detect and mitigate sophisticated DDoS attacks. 




It also integrates with other services such as Amazon CloudFront, Amazon Route 53, and Elastic Load Balancing. Additionally, you can integrate AWS Shield with AWS WAF by writing custom rules to mitigate complex DDoS attacks.


==========

Amazon Security note

 // security mechanism


- shared responsibility




//customer

Customers are responsible for the security of everything that they create and put in the AWS Cloud.




When using AWS services, you, the customer, maintain complete control over your content. You are responsible for managing security requirements for your content, including which content you choose to store on AWS, which AWS services you use, and who has access to that content. You also control how access rights are granted, managed, and revoked.


 


The security steps that you take will depend on factors such as the services that you use, the complexity of your systems, and your company’s specific operational and security needs. Steps include selecting, configuring, and patching the operating systems that will run on Amazon EC2 instances, configuring security groups, and managing user accounts. 



============



// aws


AWS is responsible for security of the cloud.


 


AWS operates, manages, and controls the components at all layers of infrastructure. This includes areas such as the host operating system, the virtualization layer, and even the physical security of the data centers from which services operate. 


 


AWS is responsible for protecting the global infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure includes AWS Regions, Availability Zones, and edge locations.


 


AWS manages the security of the cloud, specifically the physical infrastructure that hosts your resources, which include:


Physical security of data centers

Hardware and software infrastructure

Network infrastructure

Virtualization infrastructure

Although you cannot visit AWS data centers to see this protection firsthand, AWS provides several reports from third-party auditors. These auditors have verified its compliance with a variety of computer security standards and regulations.



=============



AWS Identity and Access Management (IAM)



AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely. 




- user permission




> root account user  // can access and control any resource in the account


IAM users, groups, and roles

IAM policies

Multi-factor authentication









IAM user by default = 0 permissions.


only after being granted permissions can they add EC2 instances, etc.

============



// multi factor authentication



adds a randomized token. 

password + a second form of authentication


===========



principle of least privilege

- a user is granted only what they need


============


// IAM policy


JSON document that describes which API calls a user can or cannot make





effect = allow / deny


action = any AWS API call


resource = which AWS resource the call applies to
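As a sketch, the three fields above map onto a policy document like this (the bucket name is made up for illustration; the statement shape follows the standard IAM policy grammar):

```python
import json

# Hypothetical policy: allow listing one S3 bucket. The bucket name
# "example-bucket" is an illustration, not a real resource.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",                         # effect: allow / deny
            "Action": "s3:ListBucket",                 # action: an AWS API call
            "Resource": "arn:aws:s3:::example-bucket"  # resource it applies to
        }
    ]
}

print(json.dumps(policy, indent=2))
```

Attaching this document to a user permits exactly one API call on exactly one bucket, in keeping with least privilege.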



==========



// IAM group


simplifies policy management: attach a policy once to a whole group of users



==========



// IAM Roles 


- associated permissions

- no username or password

- allow or deny

- assumed for temporary amounts of time 

- grants temporary permissions


can be assumed by:

- users

- external identities

- applications

- other AWS Services



when a role is assumed, all previous permissions are abandoned and the role's permissions apply.



========



// aws organization


- central location to manage multiple aws accounts


- combine accounts into one organization


- one bill for everything / consolidated billing


- hierarchical groups of accounts become OUs / organizational units


developer OU 

admin OU

HR OU

legal OU



// service control policies.


- restrict the resources each role / individual user can access






In AWS Organizations, you can apply service control policies (SCPs) to the organization root, an individual member account, or an OU. An SCP affects all IAM users, groups, and roles within an account, including the AWS account root user.





=========



// compliance



- audit / follow the law




consumer data in the EU = GDPR / General Data Protection Regulation


healthcare in the US = HIPAA / Health Insurance Portability and Accountability Act




========



// AWS Artifact


- access to compliance reports produced by third-party auditors across a wide range of standards



// AWS Compliance center


- compliance information all in one place 


- includes the AWS risk and security whitepaper


==========




// AWS Key Management Services (KMS)

- key management services.


encryption = securing messages or data so that only authorized parties can access it




analogy: a key and a door.



1 encryption at rest

2 encryption in transit





encryption at rest is enabled on all dynamodb table data.


encryption in transit protects data moving between server and client




AWS Key Management Service (AWS KMS) enables you to perform encryption operations through the use of cryptographic keys. A cryptographic key is a random string of digits used for locking (encrypting) and unlocking (decrypting) data. You can use AWS KMS to create, manage, and use cryptographic keys. You can also control the use of keys across a wide range of services and in your applications.
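The lock/unlock idea can be sketched in a few lines. This toy XOR cipher is not real cryptography (KMS keys drive algorithms like AES-256); it only illustrates that the same key both locks and unlocks the data:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher: XOR each byte with the repeating key.
    # Applying it twice with the same key returns the original data.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = secrets.token_bytes(32)                        # the cryptographic key
ciphertext = xor_cipher(b"card number 1234", key)    # encrypt (lock)
plaintext = xor_cipher(ciphertext, key)              # decrypt (unlock)
```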


========


// Amazon Inspector


improves the security and compliance of your AWS-deployed apps through automated security assessments.



=========

Amazon Database note

 MySQL, PostgreSQL, Oracle, Microsoft SQL Server



========


//  Lift-and-Shift


migrate a database environment from on-premises to the cloud




This means you have control over the same variables you do in your on-premises environment, such as OS, memory, CPU, storage capacity, and so forth.




++ DATABASE MIGRATION SERVICE 



=========


// amazon RDS


one option for running your databases in the cloud is to use a more managed service called Amazon Relational Database Service, or RDS







Amazon Relational Database Service (Amazon RDS) is a service that enables you to run relational databases in the AWS Cloud.


Amazon RDS is a managed service that automates tasks such as hardware provisioning, database setup, patching, and backups. With these capabilities, you can spend less time completing administrative tasks and more time using data to innovate your applications. You can integrate Amazon RDS with other services to fulfill your business and operational needs, such as using AWS Lambda to query your database from a serverless application.


Amazon RDS provides a number of different security options. Many Amazon RDS database engines offer encryption at rest (protecting data while it is stored) and encryption in transit (protecting data while it is being sent and received).





// amazon RDS support


Amazon RDS is available on six database engines, which optimize for memory, performance, or input/output (I/O). Supported database engines include:


Amazon Aurora

PostgreSQL

MySQL

MariaDB

Oracle Database

Microsoft SQL Server


===========



// amazon aurora


- MySQL compatible

- PostgreSQL compatible



- price: 1/10 the cost of commercial databases



- data replication: six copies of your data at a time


- can add up to 15 read replicas  // offload reads and scale performance


- continuous backup to s3, ready to restore



- point-in-time recovery: can recover data from a specific period




=========


In a relational database, data is stored in a way that relates it to other pieces of data. 


An example of a relational database might be the coffee shop’s inventory management system. Each record in the database would include data for a single item, such as product name, size, price, and so on.


Relational databases use structured query language (SQL) to store and query data. This approach allows data to be stored in an easily understandable, consistent, and scalable way. For example, the coffee shop owners can write a SQL query to identify all the customers whose most frequently purchased drink is a medium latte.


ID | Product name               | Size   | Price

1  | Medium roast ground coffee | 12 oz. | $5.30

2  | Dark roast ground coffee   | 20 oz. | $9.27





=============




// Amazon DynamoDB


- serverless database




table -> items -> attributes


data is organized into items; each item has attributes.





- redundant across AZ

- high performance / ms response time 

- supports millions of users


- noSQL database

- non relational database

- schemaless

- can add or remove attributes in a table 


- simpler. fast.


- quick response times and highly scalable 

- fully managed






================




Nonrelational databases are sometimes referred to as “NoSQL databases” because they use structures other than rows and columns to organize data. One type of structural approach for nonrelational databases is key-value pairs. With key-value pairs, data is organized into items (keys), and items have attributes (values). You can think of attributes as being different features of your data.


In a key-value database, you can add or remove attributes from items in the table at any time. Additionally, not every item in the table has to have the same attributes. 




Key | Value

1   | Name: John Doe; Address: 123 Any Street; Favorite drink: Medium latte

2   | Name: Mary Major; Address: 100 Main Street; Birthday: July 5, 1994
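The two items above can be modeled as plain dictionaries. The sketch below shows the schemaless property: items share a key space but need not share attributes, and attributes can be added or removed at any time (the added birthday is an invented example value):

```python
# The two key-value items from the table above.
table = {
    "1": {"Name": "John Doe", "Address": "123 Any Street",
          "Favorite drink": "Medium latte"},
    "2": {"Name": "Mary Major", "Address": "100 Main Street",
          "Birthday": "July 5, 1994"},
}

# No fixed schema: add an attribute to one item, remove one from another.
table["1"]["Birthday"] = "January 1, 1990"   # invented example value
del table["2"]["Address"]
```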





Amazon DynamoDB is a key-value database service. It delivers single-digit millisecond performance at any scale.





==============



// rds vs dynamoDb


AWS Cloud Practitioners, welcome back to the championship chase of the database! In the relational corner: engineered to remove undifferentiated heavy lifting from your database administrators, with automatic high availability and recovery provided. You control the data, you control the schema, you control the network. You are running Amazon RDS. 




The NoSQL corner: using key-value pairs that require no advanced schema, able to operate as a global database at the touch of a button. It has massive throughput. It has petabyte-scale potential. It has granular API access. It is Amazon DynamoDB. 





rds: suited for business analytics.



============





// amazon redshift


Amazon Redshift(opens in a new tab) is a data warehousing service that you can use for big data analytics. It offers the ability to collect data from many sources and helps you to understand relationships and trends across your data.





data warehouse => for big data.


historical analytics, as opposed to operational analysis.




- data warehouse as a service



- scales to multiple petabytes


- up to 10 times higher performance than traditional relational databases






// amazon redshift spectrum 

- run a single sql query against exabytes of unstructured data in data lakes.








=============



// AWS Database Migration Service (AWS DMS)



migrate an existing db between a source and a target.


the source stays operational during the move


downtime is minimized for apps that rely on that database



source and target db don't need to be the same type 




mysql - amazon RDS


microsoft sql - amazon RDS


oracle - amazon RDS for oracle 





// compatible database

schema structure 

data type

database code





on premise ec2, amazon rds ------  cloud ec2, amazon rds








// heterogeneous database


source and destination are different database engines.




requires a 2-step process: first convert using the

AWS Schema Conversion Tool.



// these are converted by the aws schema conversion tool:

schema structure

data type

database code



============



// 3 other uses of DMS:


- development and test database migration  // migrate or copy data to a 2nd db

- database consolidation  // combine several dbs into 1 

- continuous database replication   // continuous db replication to multiple places




==========



// summary


dynamoDB : great for key value pair  






// amazon DocumentDB  ( with MongoDB Compatibility )



- great for small attributes


examples: full content management systems, catalogs, user profiles





// amazon Neptune

graph database


social media tracking

fraud detection

supply chain  // tracking assurance that nothing is lost 




// amazon Managed Blockchain

blockchain solution 


- decentralization components.



// amazon Quantum Ledger Database  ( QLDB )

immutable ledger.  entries can never be removed from the audit history.




// amazon ElastiCache 


- database accelerators.


adds a caching layer: improves response times from milliseconds to microseconds

no need to launch, uplift, or maintain the caching layer yourself.

comes in both memcached and redis flavors





// amazon DynamoDB Accelerator ( DAX )


- database accelerator for DynamoDB


improves read times for nonrelational data










=============


best for archival data:


Amazon S3 Glacier Flexible Retrieval

Amazon S3 Glacier Deep Archive



=========


========


AWS Storage note

 // storage access




block-level storage = a place to store files  // bytes stored in blocks on disk. 



laptops / PCs use block-level storage. ( hard drives )







// Instance Stores Volume



local instance store volume: the hard drive attached to the ec2 host


- attached to ec2 instances 

- temporary block level storage

- lifespan = lifespan of ec2 instance


if you stop or terminate the ec2 instance, all data written to the instance store volume is deleted.  // because instances are virtual, the underlying host may be reused by others when the instance runs again.




temporary file

scratch data

data easily recreated.




- don't write important data to the drives that come with an ec2 instance.




you don't want an important database deleted every time you stop an ec2 instance.









//  Amazon Elastic Block Store  ( EBS )


virtual hard drive / ebs volume.

can be attached to ec2 / directly attached

a hard drive that is persistent



- data can persist between stops and starts of an ec2 instance.



we define:

size 

type

config



volume that we need.





^ ebs supports snapshots => incremental backups of data.

^ important to take regular snapshot backups

^ if the hard drive gets corrupted we don't lose data

^ data can be restored from a snapshot





// incremental backup


An EBS snapshot is an incremental backup. This means that the first backup taken of a volume copies all the data. For subsequent backups, only the blocks of data that have changed since the most recent snapshot are saved. 
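The incremental idea can be sketched by modeling a volume as numbered blocks; the first snapshot copies everything, later ones keep only blocks that changed (a simplification of real snapshot chains):

```python
def snapshot(volume, previous=None):
    # First backup: copy every block. Later backups: only changed blocks.
    if previous is None:
        return dict(volume)
    return {blk: data for blk, data in volume.items()
            if previous.get(blk) != data}

volume = {0: "aaaa", 1: "bbbb", 2: "cccc"}   # a tiny 3-block volume
snap1 = snapshot(volume)                     # full copy: 3 blocks
volume[1] = "BBBB"                           # one block changes
snap2 = snapshot(volume, previous=snap1)     # incremental: 1 block
```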




==================



// amazon simple storage service   

// amazon S3


- storing files

- a data store that allows you to store and retrieve an unlimited amount of data at any scale

- stores objects in buckets





data that needs to be saved elsewhere.



receipt

images

spreadsheets

video

text file



maximum object size = 5 TB upload





objects can be versioned to retain versions / prevent accidental deletion



can create multiple buckets and store data in different classes or tiers



can create permissions for who can see and access objects



can stage data between different tiers




tiers:


data that needs to be used frequently

audit data that needs to be retained for several years

===================



// amazon s3 standard = 99.999999999% durability 


- 11 nines of durability


objects are expected to remain intact over 1 year



data is stored in a way that aws can sustain the concurrent loss of data in 2 separate storage facilities.




> data is stored in at least 3 facilities  // multiple copies reside across locations.




==================


// s3 static website hosting


- a collection of html files, images, etc.



^ can serve as an instant website





==================


// s3 standard-infrequent Access  ( s3 standard-IA)


- data accessed less frequently but needing rapid access when required.


- perfect for storing backups, disaster recovery files, and any object that requires long-term storage


===============


// s3 glacier flexible retrieval


- retain data for several years for auditing


- don't need to retrieve it very rapidly



can simply move data here 

or can create vaults then populate them with archives



Low-cost storage designed for data archiving

Able to retrieve objects within a few minutes to hours


S3 Glacier Flexible Retrieval is a low-cost storage class that is ideal for data archiving. For example, you might use this storage class to store archived customer records or older photos and video files. You can retrieve your data from S3 Glacier Flexible Retrieval from 1 minute to 12 hours.








// s3 glacier vault lock policy


retain data for a specific period of time.  //  lock your vault for a specific time




can create a rule =>  write once read many / WORM policy in s3 glacier


^ locks the policy from future edits



options for retrieval:

- minutes

- hours 


options for upload:

- directly to s3 glacier flexible retrieval

- using s3 lifecycle policies



==============


// s3 lifecycle management / policies


- move data automatically between tiers 



1  keep objects in standard for 90 days

2  move to s3 Standard-IA for the next 30 days

3  after 120 days total, auto move to s3 glacier flexible retrieval




^ set up this config without changing application code

^ performs those moves automatically
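The three-step policy above can be sketched as a function mapping an object's age to its storage class (the day thresholds come straight from the rules above):

```python
# Sketch of the lifecycle policy: choose a storage class by object age.
def storage_class(age_days):
    if age_days < 90:
        return "S3 Standard"                    # first 90 days
    if age_days < 120:
        return "S3 Standard-IA"                 # next 30 days
    return "S3 Glacier Flexible Retrieval"      # after 120 days total
```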




============



// s3 one zone-infrequent access  ( s3 one zone-IA )


Stores data in a single Availability Zone

Has a lower storage price than Amazon S3 Standard-IA

Compared to S3 Standard and S3 Standard-IA, which store data in a minimum of three Availability Zones, S3 One Zone-IA stores data in a single Availability Zone. This makes it a good storage class to consider if the following conditions apply:


You want to save costs on storage.

You can easily reproduce your data in the event of an Availability Zone failure.





// s3 glacier instant retrieval

Works well for archived data that requires immediate access


Can retrieve objects within a few milliseconds


When you decide between the options for archival storage, consider how quickly you must retrieve the archived objects. You can retrieve objects stored in the S3 Glacier Instant Retrieval storage class within milliseconds, with the same performance as S3 Standard.






// s3 glacier deep archive

Lowest-cost object storage class ideal for archiving

Able to retrieve objects within 12 hours

S3 Deep Archive supports long-term retention and digital preservation for data that might be accessed once or twice in a year. This storage class is the lowest-cost storage in the AWS Cloud, with data retrieval from 12 to 48 hours. All objects from this storage class are replicated and stored across at least three geographically dispersed Availability Zones.







// s3 intelligent-tiering


Ideal for data with unknown or changing access patterns

Requires a small monthly monitoring and automation fee per object

In the S3 Intelligent-Tiering storage class, Amazon S3 monitors objects’ access patterns. If you haven’t accessed an object for 30 consecutive days, Amazon S3 automatically moves it to the infrequent access tier, S3 Standard-IA. If you access an object in the infrequent access tier, Amazon S3 automatically moves it to the frequent access tier, S3 Standard.







// s3 outpost

Creates S3 buckets on Amazon S3 Outposts


Makes it easier to retrieve, store, and access data on AWS Outposts


Amazon S3 Outposts delivers object storage to your on-premises AWS Outposts environment. Amazon S3 Outposts is designed to store data durably and redundantly across multiple devices and servers on your Outposts. It works well for workloads with local data residency requirements that must satisfy demanding performance needs by keeping data close to on-premises applications.






============


// data metadata and key


In object storage, each object consists of data, metadata, and a key.

The data might be an image, video, text document, or any other type of file. Metadata contains information about what the data is, how it is used, the object size, and so on. An object’s key is its unique identifier.



When you modify a file in block storage, only the pieces that are changed are updated. When a file in object storage is modified, the entire object is updated.

==============





// EBS VS S3



ebs:

size up to 16 TiB

survives termination of the ec2 instance

ssd by default

hdd options



s3:

unlimited storage

individual objects up to 5 TB

write once / read many

99.999999999% durability





s3:

web enabled

regionally distributed

offer cost saving

serverless




object storage: docs, images, files   // every time an object changes, the entire file must be re-uploaded



block storage: blocks.   editing an 80gb video: edit, save, and the engine only updates the changed blocks




==============


// amazon Elastic File System / EFS


- managed file system

- shared file system across applications

- multiple instances can access the data in EFS at the same time 

- automatically scales up and down as needed





for ebs:

volumes attach to ec2 instances

an AZ-level resource

needs to be in the same AZ as the ec2 instance it attaches to

volumes do not auto scale -> a 5 TB volume stays 5 TB



for efs:

multiple instances can read and write simultaneously

a true linux file system

a regional resource / accessible from ec2 instances in the same region

automatically scales as you write data



==============

AWS Networking note

 // amazon VPC 

amazon virtual private cloud





// amazon virtual private cloud


lets you provision a logically isolated section of the

aws cloud.

- create a virtual network environment

- can be public facing or private ( with internet access or without )




public subnet

- talks to the internet. 


private subnet

- internal IPs only





===========


public traffic --- internet gateway / IGW --- attached to the vpc.



inside the vpc: 

elb

ec2 instance

db




===========


virtual private gateway --- attached to the vpc.



^ allows traffic only from approved networks



- can also create a vpn between a private network in your DC and the virtual private gateway


==========


// aws direct connect



- provides a physical line that connects your network to your aws vpc


a dedicated fiber connection from your DC to the AWS VPC



- work with a direct connect partner in your area to establish this connection



1 vpc might have multiple types of gateways attached for multiple types of resources.

all reside in the same vpc but in different subnets




===========



// vpc network and acl.




igw --- public subnet --- private subnet 






========


// network ACL

packets entering through the IGW --> are checked by the network access control list 



> The VPC component that checks packet permissions for subnets is a network access control list (ACL).

> A network ACL is a virtual firewall that controls inbound and outbound traffic at the subnet level.




=========


// security group


- every ec2 instance you create is placed into a security group

- by default blocks all incoming traffic

- by default allows all outbound traffic 




^ must be modified to allow certain types of traffic.






If you have multiple Amazon EC2 instances within the same VPC, you can associate them with the same security group or use different security groups for each instance. 


==========



// security group vs network acl


security group = stateful.  // by default denies all inbound traffic, but allows all return traffic

network acl = stateless.   // return traffic is not automatically allowed; it must be specified




^ packet flows must be defined explicitly.






// stateful

Security groups perform stateful packet filtering. They remember previous decisions made for incoming packets.






Network ACLs perform stateless packet filtering. They remember nothing and check packets that cross the subnet border each way: inbound and outbound. 



When a packet response for that request comes back to the subnet, the network ACL does not remember your previous request. The network ACL checks the packet response against its list of rules to determine whether to allow or deny.



// acl default 

It is stateless and allows all inbound and outbound traffic.
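The difference can be sketched as a minimal simulation (connection IDs stand in for real 5-tuples): a stateful filter remembers outbound requests and admits the matching responses, while a stateless filter checks every packet against its rule list:

```python
class StatefulFilter:
    """Security-group-like filter: remembers connections it has seen."""
    def __init__(self):
        self.tracked = set()

    def outbound(self, conn):
        self.tracked.add(conn)       # remember the outbound request
        return True                  # all outbound allowed by default

    def inbound(self, conn, rules):
        if conn in self.tracked:     # return traffic: allowed automatically
            return True
        return conn in rules         # new traffic: needs an explicit rule

class StatelessFilter:
    """Network-ACL-like filter: no memory, every packet is checked."""
    def inbound(self, conn, rules):
        return conn in rules         # even return traffic needs a rule

sg = StatefulFilter()
sg.outbound("conn-1")                # instance sends a request out
nacl = StatelessFilter()
```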


=========



// route 53


- directs dns lookups to public IPs

- able to register domain names. can buy and manage them right on aws

- directs traffic to different endpoints using several different policies, such as:


latency-based routing - users can be directed to the closest (lowest-latency) region


geolocation dns - based on the user's location; users can be directed to a nearby or specific region


geoproximity routing


weighted round robin
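Weighted round robin can be sketched as weighted random choice (the endpoint names and weights below are made up for illustration):

```python
import random

# Endpoints receive traffic in proportion to their weights.
weighted = {"us-east-1.example.com": 3, "eu-west-1.example.com": 1}

def pick_endpoint(weights, rng=random):
    # Draw one endpoint, biased by its weight (3:1 here).
    endpoints = list(weights)
    return rng.choices(endpoints, weights=[weights[e] for e in endpoints])[0]
```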






========


// amazon cloudfront - cdn.



========


// the flow


user -- amazon route 53 -- amazon cloudfront -- amazon elb -- amazon ec2 auto scaling -- amazon ec2 instances




=========




provisioning note

 everything is an API call



==========


invoke or call APIs to configure and manage aws resources



==========



// aws management console   == browser based

// aws CLI

// aws SDK

// aws cloudFormation



==========




// aws management console


- manual provisioning

- prone to manual errors. tiring click-through configuration


good for:

test environments

viewing aws billing

viewing monitoring

working with non-technical resources


==========



// aws CLI


- used to speed up configuration via the cli

- used in production

- makes actions scriptable and repeatable

- can run on a schedule or be triggered by another process

- enables automation






make api calls using the terminal on your machine 


==========



// aws SDK


interact with aws resources through various programming languages 


- able to create programs that use aws without the low-level api 




=========



// aws elastic beanstalk


managed provisioning tool for aws ec2-based apps



^ you provide app code and desired configuration to the aws elastic beanstalk service 




it auto builds the whole environment:


> us east region

> security group

> deploy elb

> deploy auto scaling

> launch 2 ec2 instances

> have 1 database running



- easy to save the environment configuration bundle and deploy it again later.



// goal task:

Adjust capacity

Load balancing

Automatic scaling

Application health monitoring









// aws cloudformation


- create automated and repeatable deployments

- infrastructure-as-code tool

- uses json / yaml format

- supports storage, db, analytics, machine learning




^ resources are declared in a template

^ cloudformation parses the template, then starts provisioning all resources

in parallel


^ behind the scenes, aws CF connects to the backend AWS APIs for each resource.



^ can build a template for 1 region, then make an identical clone to deploy to other regions.



^ less room for human error 


^ totally automated process
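A minimal sketch of what such a template looks like, written as a Python dict serialized to JSON (the instance type and image ID are placeholders, not recommendations):

```python
import json

# Minimal CloudFormation-style template declaring one EC2 instance.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebServer": {                          # logical resource name
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": "t2.micro",     # placeholder choice
                "ImageId": "ami-EXAMPLE",       # placeholder AMI ID
            },
        }
    },
}

print(json.dumps(template, indent=2))
```

Deploying the same template to a second region produces an identical clone of the stack.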









========


best practice = a minimum of 2 availability zones



==========

AWS Serverless note

 ec2 :

managing instances over time

patching instances

setting up scaling for instances

keeping them highly available




===========



// serverless

- you cannot see or access the underlying infra.



provisioning 

scaling

high availability 


are all handled by aws.




AWS LAMBDA

- serverless

- upload code to a lambda function

- triggered e.g. via a PutObject event => code runs in a managed environment



For example, a simple Lambda function might involve automatically resizing uploaded images to the AWS Cloud. In this case, the function triggers when uploading a new image. 
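A Python handler for that example might look like the sketch below. The handler(event, context) signature is the real Python runtime convention; the event shape is a simplified S3 put notification, and the resizing itself is stubbed out:

```python
# Sketch of a Lambda-style handler for the image-resize example.
def handler(event, context=None):
    resized = []
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]    # the uploaded image
        resized.append(f"thumbnails/{key}")    # stand-in for real resizing
    return {"resized": resized}

# Simulated trigger: a new image was uploaded.
result = handler({"Records": [{"s3": {"object": {"key": "cat.png"}}}]})
```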





1000 incoming triggers => lambda will scale your function to meet demand



lambda is designed to run code in under 15 min.


- not a good fit for deep learning.


- suited to quick processes like a web backend handling requests, or a backend expense report processing service, where each run takes less than 15 minutes to complete




goals:

- host short running functions

- service-oriented applications

- event driven applications

- no provision or manage server




==========


// container orchestration tools  => docker container 


- AMAZON ECS ( elastic container service ) = orchestration tool to manage containers without the hassle of managing your own container orchestration software



- AMAZON EKS ( elastic kubernetes service ) = similar to ecs but with different tooling and features





Amazon EKS is a fully managed Kubernetes service. Kubernetes is open-source software that enables you to deploy and manage containerized applications at scale.





docker = uses OS-level virtualization to deliver software in containers




container = a package for your code // dependencies + configuration




container orchestration = manage multiple docker




** ecs and eks can run on top of ec2 

** or can be deployed on aws fargate  ( a serverless compute platform )





goals:

run docker container-based workloads on aws



=========


// aws fargate :

serverless compute platform for deploying ecs / eks workloads  ( serverless environment )



========



// container use case


Suppose that a company’s application developer has an environment on their computer that is different from the environment on the computers used by the IT operations staff. The developer wants to ensure that the application’s environment remains consistent regardless of deployment, so they use a containerized approach. This helps to reduce time spent debugging applications and diagnosing differences in computing environments.




// why you need an orchestration tool 


- tens of hosts with hundreds of containers 



When running containerized applications, it’s important to consider scalability. Suppose that instead of a single host with multiple containers, you have to manage tens of hosts with hundreds of containers. Alternatively, you have to manage possibly hundreds of hosts with thousands of containers. At a large scale, imagine how much time it might take for you to monitor memory usage, security, logging, and so on.



=======



" just code and configuration "


=====

monolithic app vs microservices note

 // monolithic


Suppose that you have an application with tightly coupled components. These components might include databases, servers, the user interface, business logic, and so on. This type of architecture can be considered a monolithic application. 


In this approach to application architecture, if a single component fails, other components fail, and possibly the entire application fails.





===========


// microservices


In a microservices approach, application components are loosely coupled. In this case, if a single component fails, the other components continue to work because they are communicating with each other. The loose coupling prevents the entire application from failing. 







When designing applications on AWS, you can take a microservices approach with services and components that fulfill different functions. Two services facilitate application integration: Amazon Simple Notification Service (Amazon SNS) and Amazon Simple Queue Service (Amazon SQS).




============


Amazon ELB load balancing, Amazon SQS and SNS note

 elastic load balance:



- routes requests to multiple instances

- evenly distributes load across multiple ec2 instances

- monitors ec2 instances ( combined with auto scaling ): forwards requests only while a server is up, and stops forwarding to dead ec2 instances



- add more backends without interrupting the front-end process  // decoupled architecture




=============



- low demand period

- high demand period



============


// messaging and queueing



buffering = placing messages into a buffer





// tight coupled architecture


cashier talks straight to the barista.   // if a single component fails, it causes issues for other components or even the whole system.



if app A errors, app B errors too




// loosely coupled architecture


a single failure is isolated, so it won't cause cascading failures





app A -- Message Queue -- app B


if app B fails, app A does not fail

app A will keep sending to the message queue until app B is up again
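The app A -- message queue -- app B flow above can be simulated with an in-memory buffer (a stand-in for a real message broker; the app names and orders are illustrative):

```python
from collections import deque

# In-memory stand-in for a message queue: app A keeps producing even
# while app B is down; messages wait in the buffer until B recovers.

queue = deque()

def app_a_send(order):
    queue.append(order)              # A never talks to B directly

def app_b_process(up=True):
    """Drain the queue; if B is down, messages simply wait."""
    processed = []
    if not up:
        return processed             # B's failure is isolated
    while queue:
        processed.append(queue.popleft())
    return processed

app_a_send("latte")
app_a_send("espresso")
print(app_b_process(up=False))   # [] -- B is down, nothing is lost
print(app_b_process(up=True))    # ['latte', 'espresso'] -- B catches up
```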



============


// messaging and queueing




messages remain in the queue until they are consumed or deleted







// amazon simple queue service ( SQS - queue)


Using Amazon SQS, you can send, store, and receive messages between software components, without losing messages or requiring other services to be available. In Amazon SQS, an application sends messages into a queue. A user or service retrieves a message from the queue, processes it, and then deletes it from the queue.





- send, store, and receive messages between software components at any volume

- messages are placed in a queue until they are processed

- scales automatically; easy to configure and use

- can send notifications



data contained within a message is called the payload  // protected until delivery 




- person's name, coffee order, order time => combined into a payload and placed into SQS
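The send / receive / process / delete cycle can be modeled with a toy queue. This is an illustration, not the boto3 SQS API (real SQS adds visibility timeouts, batching, and more):

```python
import uuid

# Toy model of the SQS cycle: send a payload, receive it, process it,
# then delete it. Messages remain queued until explicitly deleted.

class ToyQueue:
    def __init__(self):
        self._messages = {}              # receipt handle -> payload

    def send_message(self, payload):
        handle = str(uuid.uuid4())
        self._messages[handle] = payload
        return handle

    def receive_message(self):
        for handle, payload in self._messages.items():
            return handle, payload       # still in the queue after this
        return None

    def delete_message(self, handle):
        del self._messages[handle]

q = ToyQueue()
# name + order + order time combined into a single payload, as above
q.send_message({"name": "Ana", "order": "coffee", "time": "09:10"})

handle, payload = q.receive_message()
print(payload["order"])                  # coffee
q.delete_message(handle)                 # processed -> remove from queue
print(q.receive_message())               # None
```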







// amazon simple notification service ( SNS )



can be an email, text message, push notification, or HTTP request. once published, it is sent to all subscribers





// for sending notifications to users   => can be email, text message, push notification, or HTTP request

publish / subscribe model


SNS topic: a channel for messages to be delivered


configure subscribers to a topic -> then publish messages to those subscribers


1 message to a topic => fanned out to many subscribers in one go




subscribers can also be endpoints such as:

- sqs queues

- aws lambda

- https / http web hook 



can also send notifications to end users via:

- mobile push

- sms

- email 
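The publish/subscribe flow above can be sketched in a few lines. The "queue" and "email" endpoints here are plain Python callbacks simulating SQS and email subscribers, not real AWS integrations:

```python
# One publish to a topic fans out to every subscriber.

class Topic:
    def __init__(self, name):
        self.name = name
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, message):
        for deliver in self._subscribers:    # one message, all subscribers
            deliver(message)

sqs_like, inbox = [], []
orders = Topic("coffee-orders")
orders.subscribe(sqs_like.append)                        # SQS-style endpoint
orders.subscribe(lambda m: inbox.append(f"email: {m}"))  # email-style endpoint

orders.publish("order ready")
print(sqs_like)   # ['order ready']
print(inbox)      # ['email: order ready']
```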



=============




================





Amazon EC2 auto scaling note

 idle resources in on-premises data centers



===========


provision exactly to demand

every hour



+ROI

===========


everything fails all the time

so plan for failure and nothing fails


==========


HA system with no failures


==========



// Amazon EC2 Auto Scaling


If you’ve tried to access a website that wouldn’t load and frequently timed out, the website might have received more requests than it was able to handle. This situation is similar to waiting in a long line at a coffee shop, when there is only one barista present to take orders from customers.



Amazon EC2 Auto Scaling enables you to automatically add or remove Amazon EC2 instances in response to changing application demand





there are 2 types of auto scaling:

- dynamic scaling

- predictive scaling





dynamic scaling:

responds to changing demand



predictive scaling:

automatically schedules the right number of Amazon EC2 instances based on predicted demand




** To scale faster, you can use dynamic scaling and predictive scaling together.



e.g., a quiet week = scale down EC2 instances

==========


// scale up vs scale out 



scale up = make instance bigger // adding more power


scale out = add more instances   // increasing the instance count




===========


happy customer 

happy ceo 

happy architecture



==========


1 set minimum

2 set desired

3 set maximum  / scale as needed



e.g., a minimum of 1 EC2 instance in an Auto Scaling group at initial configuration




**If you do not specify the desired number of Amazon EC2 instances in an Auto Scaling group, the desired capacity defaults to your minimum capacity.
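The minimum / desired / maximum rule above can be written as a small clamping function (the numbers are hypothetical; this sketches the logic, not the Auto Scaling API):

```python
# Desired capacity defaults to the minimum when unspecified, and is
# always clamped to the [minimum, maximum] range.

def desired_capacity(minimum, maximum, desired=None):
    if desired is None:
        desired = minimum            # the default noted above
    return max(minimum, min(desired, maximum))

print(desired_capacity(1, 4))        # 1 -- defaults to minimum
print(desired_capacity(1, 4, 2))     # 2 -- within bounds
print(desired_capacity(1, 4, 9))     # 4 -- capped at maximum
```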







============


Because Amazon EC2 Auto Scaling uses Amazon EC2 instances, you pay for only the instances you use, when you use them. You now have a cost-effective architecture that provides the best customer experience while reducing expenses.



============



Amazon ec2 pricing note

 // on demand 


- per hour

- per second 



are ideal for short-term, irregular workloads that cannot be interrupted. No upfront costs or minimum contracts apply. The instances run continuously until you stop them, and you pay for only the compute time you use.


Sample use cases for On-Demand Instances include developing and testing applications and running applications that have unpredictable usage patterns. On-Demand Instances are not recommended for workloads that last a year or longer because these workloads can experience greater cost savings using Reserved Instances.



===============


// EC2 Instance Savings Plans


1 / 3  year plan


a commitment to a consistent amount of usage measured in dollars per hour for a one or three-year term.


therefore provide savings of up to 72% on your AWS compute usage. This can lower prices on your EC2 usage, regardless of instance family, size, OS, tenancy, or AWS region. This also applies to AWS Fargate and AWS Lambda usage, which are serverless compute options that we will cover later in this course. 



=============


// reserved instance



steady-state workloads or ones with predictable usage and offer you up to a 75% discount versus On-Demand pricing. You qualify for a discount once you commit to a one or three-year term and can pay for them with three payment options: all upfront, where you pay for them in full when you commit; partial upfront, where you pay for a portion when you commit; and no upfront, where you don't pay anything at the beginning. 



there are 2 types:


- Standard Reserved Instances

- Convertible Reserved Instances



subscription term: 1 year / 3 years


3 years = more discount




----------------


// Standard Reserved Instances: This option is a good fit if you know the EC2 instance type and size you need for your steady-state applications and in which AWS Region you plan to run them. Reserved Instances require you to state the following qualifications:


Instance type and size: For example, m5.xlarge

Platform description (operating system): For example, Microsoft Windows Server or Red Hat Enterprise Linux

Tenancy: Default tenancy or dedicated tenancy

You have the option to specify an Availability Zone for your EC2 Reserved Instances. If you make this specification, you get EC2 capacity reservation. This ensures that your desired amount of EC2 instances will be available when you need them. 




// convertible = location can be moved between AZs

// can convert to a different instance type/size (e.g., m5.xlarge)



Convertible Reserved Instances: If you need to run your EC2 instances in different Availability Zones or different instance types, then Convertible Reserved Instances might be right for you. Note: You trade in a deeper discount when you require flexibility to run your EC2 instances.









=============



// spot instances



allow you to request spare Amazon EC2 computing capacity for up to 90% off of the On-Demand price. The catch here is that AWS can reclaim the instance at any time they need it, giving you a two-minute warning to finish up work and save state. You can always resume later if needed. So when choosing Spot Instances, make sure your workloads can tolerate being interrupted. A good example of those are batch workloads. 


Spot Instances are ideal for workloads with flexible start and end times, or that can withstand interruptions. Spot Instances use unused Amazon EC2 computing capacity and offer you cost savings at up to 90% off of On-Demand prices.
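A back-of-envelope comparison of the pricing models in this note, using hypothetical rates (real prices vary by instance type, Region, and commitment; the discount percentages are the "up to" figures quoted above):

```python
# HYPOTHETICAL on-demand rate; discounts are the "up to" figures above.
ON_DEMAND_HOURLY = 0.10      # assumed $/hour
RESERVED_DISCOUNT = 0.75     # up to 75% off vs On-Demand
SPOT_DISCOUNT = 0.90         # up to 90% off vs On-Demand

hours_per_year = 24 * 365

on_demand = ON_DEMAND_HOURLY * hours_per_year
reserved = on_demand * (1 - RESERVED_DISCOUNT)
spot = on_demand * (1 - SPOT_DISCOUNT)

print(f"on-demand: ${on_demand:.2f}/yr")   # $876.00/yr
print(f"reserved:  ${reserved:.2f}/yr")    # $219.00/yr
print(f"spot:      ${spot:.2f}/yr")        # $87.60/yr
```

The ordering (spot < reserved < on-demand) holds regardless of the assumed hourly rate; only the absolute dollar amounts change.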




============


// dedicated host



Dedicated Hosts, which are physical hosts dedicated for your use for EC2. These are usually for meeting certain compliance requirements and nobody else will share tenancy of that host.



============

Amazon ec2 notes

 multitenancy = sharing underlying hardware between VMs



=========


hypervisor = isolates VMs from each other as they share resources from the host





========


provision thousands of EC2 instances on demand, with a blend of operating systems and configurations

to power your business's different apps


can choose the OS + services that run on install


========



// vertical scaling


can make an instance bigger or smaller





// horizontal scaling


add more instances




========


// network 

public

or

private




========


EC2 instances are grouped; a group is called an instance family


each family is a combination of resources






1 general purpose

- balanced resources

- diverse workload

- web server

- code repository




2 compute optimized

- compute intensive task 

- gaming server

- high performance computing / hpc

- scientific modeling





3 memory optimized

- memory-intensive tasks

- ++ database performance


This scenario might be a high-performance database or a workload that involves performing real-time processing of a large amount of unstructured data. In these types of use cases, consider using a memory optimized instance. Memory optimized instances enable you to run workloads with high memory needs and receive great performance.







4 accelerated computing

- floating-point calculations

- graphics processing

- data pattern matching

- utilizes hardware accelerators





5 storage optimized

- high performance io for locally stored data







=========



Amazon Services Note

you only pay for what you use


- Amazon Elastic Compute Cloud (Amazon EC2)  = a virtual server 



- AWS Cost Explorer = visualize, understand, and manage your AWS costs and usage over time




- Amazon EC2 Auto Scaling = automatically scales EC2 servers based on user demand / in response to changing app demand (auto-adds instances and auto-decommissions them when not needed)



- Elastic Load Balancing (ELB) = a service that automatically distributes incoming application traffic across multiple resources, such as Amazon EC2 instances




- amazon simple queue service ( SQS - queue)


> send, store, and receive messages between software components at any volume

> messages are placed in a queue until they are processed

> scales automatically; easy to configure and use

> can send notifications




- amazon simple notification service ( SNS )


> sends notifications to users via a publish/subscribe model



subscribers can be:

- sqs queues

- aws lambda

- https / http web hook 


can also send notifications to end users via:

- mobile push

- sms

- email 








- AWS Lambda = run code without managing instances  // serverless

> suited for processes that run under 15 minutes






// container orchestration tools  => docker container 


- AMAZON ECS (Elastic Container Service) = an orchestration tool to manage containers without the hassle of managing your own container orchestration software



- AMAZON EKS (Elastic Kubernetes Service) = similar to ECS but with different tooling and features



// aws fargate :

a serverless compute platform for ECS / EKS



==========



HA system with no failures

auto scaling system based on user needs



=========



- regions 

a geographical area that contains AWS resources



- availability zones

a single data center or a group of data centers within a Region


========



- AWS Outposts



> installs a fully operational mini Region in the customer's own data center


========


// amazon virtual private cloud


lets you provision a logically isolated section of

the AWS Cloud


- create a virtual network environment

- can be public-facing or private (with or without internet access)




public subnet

- talks to the internet


private subnet

- internal IPs only





// VPC functions:

able to define private IPs for AWS resources




ELB and EC2 need IP configuration -> VPC






subnet = a chunk of IP addresses in your VPC that allows grouping resources together


controls whether services are publicly or privately available





=========



// aws direct connect



- provides a physical line that connects your network to your AWS VPC


a dedicated fiber connection from DC1 to the AWS VPC



- work with a Direct Connect partner in your area to establish this connection


========== 



//  Amazon Elastic Block Store  ( EBS )


virtual hard drive / EBS volume

can be attached to EC2 / directly attached

a hard drive that is persistent





==========



// amazon Elastic File System  ( EFS )


- managed file system

- shared file system across apps

- multiple instances can access the data in EFS at the same time

- scales up and down automatically




============


// amazon aurora





Amazon Aurora



an enterprise-class relational database. It is compatible with MySQL and PostgreSQL relational databases. It is up to five times faster than standard MySQL databases and up to three times faster than standard PostgreSQL databases.


Amazon Aurora helps to reduce your database costs by reducing unnecessary input/output (I/O) operations, while ensuring that your database resources remain reliable and available. 


Consider Amazon Aurora if your workloads require high availability. It replicates six copies of your data across three Availability Zones and continuously backs up your data to Amazon S3.







supports MySQL

supports PostgreSQL



- priced at 1/10 the cost of commercial databases



data replication: 6 copies at a time


can have up to 15 read replicas  // offload reads and scale performance


continuous backup to S3, ready to restore



point-in-time recovery: can recover data from a specific point in time




================



// amazon RDS


one way of running your databases in the cloud is to use a more managed service called Amazon Relational Database Service, or RDS



// amazon dynamoDB


a fully managed, high-performance, scalable, serverless NoSQL database




// Amazon DocumentDB is a document database service that supports MongoDB workloads.

===================



AWS Database Migration Service (AWS DMS)


a service to migrate existing databases between a source and a target



===================




AWS Identity and Access Management (IAM)


AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely.   


===============



// AWS Artifact


- access to compliance reports


- Access AWS security and compliance reports and special online agreements -



================



// Amazon Inspector


improves security and compliance of your AWS-deployed apps

by running automated security assessments


best practices

vulnerabilities

security issues and recommendations on how to fix them



3 components in Amazon Inspector:


network configuration reachability piece

amazon agent

security assessment service





+ can retrieve findings via API and perform remediation to fix issues




================


// amazon GuardDuty


threat detection



- analyzes continuous streams of metadata generated from your account and network activity,

found in AWS CloudTrail events, Amazon VPC Flow Logs, and DNS logs.

it uses integrated threat intelligence, such as known malicious IP addresses, anomaly detection, and machine learning, to identify threats more accurately




runs independently from other AWS services, so it won't affect their performance or availability


1 enable guardduty

2 guardduty continuously analyze network and account activity

3 guardduty intelligently detect threats

4 review detailed finding and take action




===============



// amazon cloudwatch


visibility


monitor health and operation app and infra aws in real time 


- Monitor applications and respond to system-wide performance changes 




// cloudwatch alarm

set a threshold for a metric

can generate alerts and trigger actions when the threshold is met

can integrate with SNS
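The alarm idea can be sketched as a threshold check. The "N consecutive breaching datapoints" rule is a common alarm pattern, and the metric values here are made up for illustration:

```python
# Alarm fires when the metric breaches the threshold for `periods`
# consecutive datapoints; the action taken on True (e.g., notifying
# an SNS topic) is left out of this sketch.

def check_alarm(metric_values, threshold, periods=3):
    streak = 0
    for value in metric_values:
        streak = streak + 1 if value > threshold else 0
        if streak >= periods:
            return True
    return False

cpu = [40, 55, 82, 86, 91, 70]
print(check_alarm(cpu, threshold=80))   # True  -- 82, 86, 91 breach it
print(check_alarm(cpu, threshold=95))   # False
```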




===============



// aws cloudtrail



- API auditing tool



every request made to AWS

gets logged to CloudTrail



can save logs to an S3 bucket



=============



// aws trusted advisor


Trusted Advisor compares its findings to AWS best practices in five categories: cost optimization, performance, security, fault tolerance, and service limits. For the checks in each category, Trusted Advisor offers a list of recommended actions and additional resources to learn more about AWS best practices. 


=============




// Lightsail


deploy ready-made application stacks

(a service that enables you to run virtual private servers)



============



// AWS Pricing calculator


The AWS Pricing Calculator  lets you explore AWS services and create an estimate for the cost of your use cases on AWS.



- has bulk discount pricing




==========



Consolidated billing also enables you to share volume pricing discounts across accounts. 


Some AWS services, such as Amazon S3, provide volume pricing discounts that give you lower prices the more that you use the service. In Amazon S3, after customers have transferred 10 TB of data in a month, they pay a lower per-GB transfer price for the next 40 TB of data transferred. 


In this example, there are three separate AWS accounts that have transferred different amounts of data in Amazon S3 during the current month: 


Account 1 has transferred 2 TB of data.

Account 2 has transferred 5 TB of data.

Account 3 has transferred 7 TB of data.
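The example can be worked through in code. The tier structure (the first 10 TB at one rate, a lower rate after that) comes from the note above; the per-GB dollar amounts are hypothetical, and the sketch only models these first two tiers:

```python
TIER_LIMIT_GB = 10 * 1024   # first 10 TB at the standard rate
PRICE_FIRST_TIER = 0.09     # assumed $/GB for the first 10 TB
PRICE_NEXT_TIER = 0.085     # assumed lower $/GB after that

def transfer_cost(tb):
    gb = tb * 1024
    first = min(gb, TIER_LIMIT_GB)
    rest = gb - first
    return first * PRICE_FIRST_TIER + rest * PRICE_NEXT_TIER

accounts_tb = [2, 5, 7]                         # the three accounts above

separate = sum(transfer_cost(tb) for tb in accounts_tb)
consolidated = transfer_cost(sum(accounts_tb))  # billed as 14 TB total

print(f"billed separately:    ${separate:,.2f}")
print(f"consolidated billing: ${consolidated:,.2f}")   # lower
```

Because the three accounts' usage is combined, 4 TB of the total falls into the cheaper tier that none of the accounts would reach on its own.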



=========



// aws budget


set custom budgets and alerts on usage




tag function: can be set per project; monitor DB usage



can create daily cost reports




=========


// aws cost explorer


visualize usage data.


=============




// beanstalk


AWS Elastic Beanstalk

Deploy and scale web applications



Businesses upload their application, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.



============



// Amazon CloudFront 


a content delivery service. 



===========


// amazon route 53


Connect user requests to infrastructure in AWS and outside of AWS.

Manage DNS records for domain names. 


Amazon Route 53 is a DNS web service. It gives developers and businesses a reliable way to route end users to internet applications that are hosted in AWS. 


 


Additionally, businesses can transfer DNS records for existing domain names that are currently managed by other domain registrars, or register new domain names directly within Amazon Route 53.



===========



// aws shield


A service that helps protect applications against distributed denial-of-service (DDoS) attacks 



============



// Amazon Augmented AI (Amazon A2I) 



provides built-in human review workflows for common machine learning use cases, such as content moderation and text extraction from documents. With Amazon A2I, a person can also create their own workflows for machine learning models built on Amazon SageMaker or any other tools.



=========


// Amazon Textract 


 a machine learning service that automatically extracts text and data from scanned documents.


===========


// Amazon Lex 


a service that builds conversational interfaces using voice and text.



============


// AWS Key Management Service (AWS KMS) 


a service that creates, manages, and uses cryptographic keys.


============


// Amazon Redshift 


a data warehousing service for providing big data analytics. It offers the ability to collect data from many sources and provides insight into relationships and trends across a data set. 



============


// Amazon Quantum Ledger Database (Amazon QLDB) 


a ledger database service. A person can use Amazon QLDB to review a complete history of all the changes that have been made to application data.



============


// AWS Snowball 

a device that transfers large amounts of data into and out of AWS.



============


// Amazon ElastiCache 


service that adds caching layers on top of databases to help improve the read times of common requests.


===========


// Amazon Neptune 


a graph database service. Amazon Neptune provides the capability to build and run applications that work with highly connected datasets, such as recommendation engines, fraud detection, and knowledge graphs.



============


// AWS DeepRacer 


is an autonomous 1/18 scale race car that tests reinforcement learning models.


===========