
Sunday, December 10, 2023

VPC Note

 // VPC



default vpc ip range = 172.31.0.0/16



==========


// how to create a vpc


1 select region



2 create the vpc

vpc > create vpc > ipv4 CIDR block       =>  enter the ip range you want to allocate




^ this creates the vpc id

^ a route table appears automatically

^ a nacl appears automatically 

^ dns hostnames = disabled by default
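
// the same steps as a rough CLI sketch ( the CIDR and ids below are placeholders, not values from above ):

aws ec2 create-vpc --cidr-block 10.0.0.0/16                                                          # returns the new vpc id
aws ec2 modify-vpc-attribute --vpc-id vpc-1234567890abcdef0 --enable-dns-hostnames '{"Value":true}'  # dns hostnames are off by default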








===========



// connect vpc to internet


1 create internet gateways


internet gateways > create internet gateway

- input name tag : ig-1


^ when created its status will be detached. it can then be attached to the vpc you already created.




2 attach the internet gateway to the vpc


internet gateways > actions > attach to vpc 

- select the vpc id you want to attach to




3 add a route table so traffic can be routed to the internet


route tables > create route table

- name tag: route_to_internet

- vpc : select existing vpc



4 set main route table


route table > actions > set main route table 



5 edit the route table, add 0.0.0.0/0


route tables > edit routes > add route


destination = 0.0.0.0/0  

target = ( select internet gateway ) 
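
// the same flow as a rough CLI sketch ( all ids are placeholders ):

aws ec2 create-internet-gateway                       # starts out detached
aws ec2 attach-internet-gateway --internet-gateway-id igw-1234567890abcdef0 --vpc-id vpc-1234567890abcdef0
aws ec2 create-route-table --vpc-id vpc-1234567890abcdef0
aws ec2 create-route --route-table-id rtb-1234567890abcdef0 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-1234567890abcdef0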






6 edit subnets 

// create at least 2 or 3 in different AZs, for HA.



subnets > create subnets >

- edit name tag

- select vpc 

- select AZ

- edit the ipv4 CIDR block ( its size must be smaller than / a subset of the vpc CIDR ) 




// create 3 public ipv4 cidr subnets :

10.0.1.0/24

10.0.0.0/24

10.0.2.0/24



^ action => edit auto-assign public ipv4 address : yes





// create 1 private ipv4 cidr subnet :

10.0.3.0/24



// create a new route table for the private ip segment

- route tables > create route table > private_ip_route


// set route table association


subnets -> edit route table association 

- edit route table id 

- change it to the private route table

- save
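
// rough CLI sketch of the subnet steps ( ids and the AZ are placeholders ):

aws ec2 create-subnet --vpc-id vpc-1234567890abcdef0 --cidr-block 10.0.0.0/24 --availability-zone us-east-1a
aws ec2 modify-subnet-attribute --subnet-id subnet-1234567890abcdef0 --map-public-ip-on-launch          # auto-assign public ipv4 = yes
aws ec2 associate-route-table --route-table-id rtb-1234567890abcdef0 --subnet-id subnet-1234567890abcdef0   # e.g. bind the private subnet to private_ip_route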









==========


// create ec2


- services > ec2 > instances > launch instances

select platform ( usually amazon linux 2 ami ( hvm ) => for the public subnet ) 


- select t2.micro


- network = select the vpc network you created


- subnet = select the public subnet you created


- create new IAM role 

to grant access permissions to s3


^ select ec2 > in filter policies, filter ssm  ( AmazonEC2RoleforSSM )

^ in filter policies, filter s3  ( AmazonS3FullAccess )

^ give it a role name > create role 



- select new IAM role





// optional


in advanced details you can input a user-data script to set up a web server
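
// example user-data script ( a minimal sketch, assuming amazon linux 2 ):

#!/bin/bash
# install and start apache, serve a test page
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
echo "<h1>hello from $(hostname -f)</h1>" > /var/www/html/index.html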




- storage => leave the defaults for now




// configure security group

- create new sec group > security group name = ec2-sec-group1


open port ssh

open port http


- edit the source you want to allow 






// create ec2 key pair

- used for remote access with a key. more secure.





// security groups => deny by default

// we make rules to allow services
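
// rough CLI version of the sec group steps ( ids and the admin ip range are placeholders ):

aws ec2 create-security-group --group-name ec2-sec-group1 --description "ssh + http" --vpc-id vpc-1234567890abcdef0
aws ec2 authorize-security-group-ingress --group-id sg-1234567890abcdef0 --protocol tcp --port 22 --cidr 203.0.113.0/24   # ssh, restrict the source
aws ec2 authorize-security-group-ingress --group-id sg-1234567890abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0        # http from anywhere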


==========


// create an ec2 instance for the private subnet


same steps as above, but pick a different subnet. => select the private one




=========


// edit nacl


- can be used to block a specific ip address

- nacls live at the subnet level



Subnets > select subnet > look below for the Network ACL menu > edit


- select inbound rules > add rule           // evaluated lowest to highest



- add rule 10 > source : ip_private/32  > deny 
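
// the same deny rule as a CLI sketch ( acl id and the blocked ip are placeholders ):

aws ec2 create-network-acl-entry --network-acl-id acl-1234567890abcdef0 --ingress --rule-number 10 --protocol=-1 --cidr-block 198.51.100.7/32 --rule-action deny   # -1 = all protocols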



========



// how to connect from outside to the private subnet

// can use a jumphost or bastion ( look in the marketplace ) 

^ for remote browser ssh into private instances. more secure

^ jumpbox = hardened instance 

^ can add google authenticator / mfa

^ supports screen recording 

^ has audit logs 


^ the alternative is session manager, but it has no screen recording






- instances > launch instances > aws marketplace > guacamole bastion host


- edit the network and subnet sections


- edit policies > filter guaws


- create role > ec2 > filter > AmazonEC2ReadOnlyAccess






========

NAT Gateway Note

 // nat


1 used to connect private ips to the internet

2 used when private network ips overlap / collide and still need outbound connectivity




============



// nat instances vs nat gateway



nat instance = an individual ec2 instance.


- a nat instance can go down

- you have to build more than 1 






// nat gateways


- managed service which launches redundant instances within the selected AZ.


- managed by aws


- there is redundancy behind the scenes. aws manages it.



** nat instances must be in a public subnet.



ec2 -> lives in the private subnet





^ all nat runs per AZ


=========




// nat instance and nat gateway notes++




// note nat instance


- when building a nat instance you must disable source and destination checks on the instance

- nat instances must be in a public subnet

- there must be a route out from the private subnet to the nat instance 

- the size of the nat instance determines how much traffic it can handle

- for high availability use auto scaling groups, multiple subnets in different AZs, and automated failover scripts   =>  more hassle than a nat gateway




// note nat gateway


- redundant within an AZ.

- you can only have 1 nat gateway per AZ / it cannot span AZs

- starts at 5Gbps and can scale up to 45Gbps


- nat gateways are what enterprises use


- no need to patch the nat gateway. no need to disable source/destination checks 

- a nat gateway is automatically assigned a public ip

- route tables for the nat gateway must be updated

- resources in multiple AZs sharing a gateway will lose internet access if the gateway goes down, unless you create a gateway in each AZ and configure routes accordingly
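
// rough CLI sketch: nat gateway in a public subnet, private subnet routed through it ( ids are placeholders ):

aws ec2 allocate-address --domain vpc                 # a nat gateway needs an elastic ip
aws ec2 create-nat-gateway --subnet-id subnet-1234567890abcdef0 --allocation-id eipalloc-1234567890abcdef0
aws ec2 create-route --route-table-id rtb-1234567890abcdef0 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-1234567890abcdef0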









========

Security Group Note

 // security group:


virtual firewall at the instance level


========


- inbound rules

- outbound rules

- no deny rules. all traffic is blocked by default unless a rule specifically allows it

- multiple instances across multiple subnets can belong to a security group



=========



- can specify /32 or a specific ip address


sg web app ->  db via ip




- can specify another sec group as the source



sg web app ->  db via sec group





- an instance can have multiple security groups applied ( nested ). the rules become permissive:

initially deny, then a 2nd sec group with an allow is applied. result = allow 




=========



// sec group limit



can have up to 10k sec groups in a region. // default 2500


can have 60 inbound rules and 60 outbound rules per sec group


16 sec groups per elastic network interface ( default is 5 ) 



========


- firewall at the instance level

- sec groups are stateful. // if traffic is allowed inbound, it is also allowed outbound

- unless specifically allowed, all inbound traffic is blocked by default

- all outbound traffic from the instance is allowed by default

- the source can be either an ip range, a single ip addr or another sec group

- any changes take effect immediately

- ec2 instances can belong to multiple sec groups

- a sec group can contain multiple ec2 instances 



- can't block a specific ip via a sec group


Thursday, December 7, 2023

AWS NACL NOTE

NACL = Network Access Control List 

NACL: an optional layer of security that acts as a firewall for controlling traffic in and out of a subnet



- virtual firewall at subnet level


- a vpc automatically gets a default nacl that allows all outbound and inbound traffic


- each subnet can only be associated with 1 nacl. associating another one overrides the previous association


- each nacl has rules that allow or deny traffic inbound ( into ) and outbound ( out of ) the subnet



- nacls have inbound and outbound rules 


- each rule has a number #  => determines the order of evaluation. from lowest to highest.   1 - 32766.  // recommended increments of 10 / 100


- can block a single ip address ( can't with security groups )


- rules are allow / deny


- stateless


- denies all traffic by default when you create a nacl











===================


// nacl use cases -- subnet level


- block a single ip address from the internet

- block all incoming ssh


=============


Tuesday, December 5, 2023

AWS VPC Note

 VPC / Virtual Private Cloud




- VPC = a personal datacenter


gives complete control over your virtual networking environment



region > vpc > AZ > 


public / private subnet ---- security group --- ec2 instance / rdsDB --- nat ---


NACL --- Route table --- router --- IGW --- internet



================



// VPC Key Features


- vpcs are region specific // don't span across regions


- can create 5 vpcs per region


- each region has 1 default vpc


- can create 200 subnets per VPC


- can use an ipv4 cidr block + an ipv6 cidr block 




features that cost nothing:

vpc

route tables

NACL

internet gateway 

security groups and subnets

VPC Peering




features that cost money :

NAT Gateway

VPC endpoint

VPN gateway

Customer gateway

DNS hostnames ( if an instance needs dns )





============



// default vpc


- there is a default vpc in each region so you can immediately deploy instances



1 creates a vpc with a size /16 cidr block.



2 creates a size /20 default subnet in each AZ



3 creates an internet gateway and connects it to the default vpc


4 creates a default security group and associates it with the default VPC



5 creates a default NACL / network access control list and associates it with the default VPC



6 associates the default dhcp options with the default vpc



7 when the vpc is created = a route table is automatically created



===========



0.0.0.0/0 = all possible ip addresses.



if specified in the route table for the IGW = allows internet access 


if specified in security group inbound rules = allows all traffic from the internet to our public resources



0.0.0.0/0 => gives access from anywhere / the internet



==========



// VPC peering


- allows connecting one vpc with another over a direct route using private IP Addresses




1 instances on peered vpcs behave like they are on the same network

2 able to connect vpcs across the same or different aws accounts and regions


3 peering uses a star configuration:  1 central vpc - 4 other vpcs


4 no transitive peering ( peering must take place directly between vpcs )

- need a one-to-one connection to the intermediary VPC



5 no overlapping CIDR block





VPC A = 10.0.0.0/16

VPC B = 172.31.0.0/16




   VPC PEERING CONNECTION

VPC A 10.0.0.4/32  ---------------  VPC B 172.31.0.8/32


==========



// route table


- route table = determines where network traffic is directed



- every subnet in a vpc must have a route table.



- 1 route table can be associated with multiple subnets




destination          target


10.0.0.0/16          local

0.0.0.0/0            igw-19asda21312ifsd







public subnet --- route table ---- router --- igw --- internet 


===========



// internet gateway ( IGW )


- allows the vpc to access the internet




functions:

1 provides a target inside the vpc for routes to the internet

2 performs NAT for instances that have been assigned a public ipv4 IP



TO ROUTE TO the internet you must add a route to the routing table 


destination = 0.0.0.0/0

target = igw





(route table) ---- router --- IGW --- internet 



==========



// bastion / jumpbox



bastion = an intermediate ec2 instance that has been hardened. // used to jump remote traffic from the internet to a private ec2 ip

- helps gain access via SSH / RDP to ec2 instances that live in a private subnet




** a bastion must not go through NAT ( security purposes ) 



// nat gateways


- nat gateway : used so that ec2 instances can get outbound internet access for security updates







** a bastion can be replaced with Session Manager ( found inside Systems Manager )


==========



// direct connect


- aws direct connect : establishes a dedicated network connection from an on-premises location to AWS



- helps reduce network cost

- increases bandwidth throughput 

- provides a more consistent network experience than typical internet based connections






++ very fast network  


there are 2 service tiers:

1 lower bandwidth 50Mbps-500Mbps 

2 higher bandwidth 1Gbps / 10Gbps




on premises customer ---- customer/partner cage ( router ) ---- aws cage ( router ) ---- vpc / ec2 





aws direct connect = the router in the middle ( between the customer / partner cage and the aws cage )



=========



// vpc endpoint




- a secret tunnel inside the aws private network

- privately connect a vpc to other AWS services, and VPC endpoint services

- eliminates the need for an internet gateway, NAT, VPN or AWS Direct Connect

- instances in the vpc don't need a public ip address to talk to certain services


- traffic between the vpc and the other service never leaves the aws network

- horizontally scaled, redundant and highly available VPC component

- allows secure communication between instances and services without adding availability risk or bandwidth constraints on your traffic





// no need to route traffic via the internet to access certain services

VPC -- VPC endpoint --- s3 bucket 



2 types of vpc endpoint :

1 interface endpoint

2 gateway endpoint






// interface endpoint 


- an ENI / elastic network interface with a private ip address.


the entry point for traffic going to a supported service.



interface endpoints are powered by AWS PrivateLink

- access services hosted on AWS easily and securely by keeping network traffic within the AWS network




// ENI Cost 


price per vpc endpoint per az ($/hour) = 0.01

price per GB data processed ($)  = 0.01


estimated $7.50 / month






ENI endpoints support the following services:


API GW

CloudFormation

CloudWatch

Kinesis

SageMaker

CodeBuild

AWS Config

EC2 API

ELB API

AWS KMS

Secrets Manager

Security Token Service

Service Catalog

SNS

SQS

Systems Manager

Marketplace partner services

endpoint services in other AWS accounts




// vpc gateway endpoint


- a gateway that is a target for a specific route in your routing table

- used for traffic destined for a supported AWS Service




to create a gateway endpoint you must specify the vpc and the target service to establish the connection to




aws gateway endpoints only support 2 services:

1 S3

2 DynamoDB



** gateway endpoints are free ( interface endpoints are billed as above )
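
// rough CLI sketch of a gateway endpoint for s3 ( ids and region are placeholders ):

aws ec2 create-vpc-endpoint --vpc-id vpc-1234567890abcdef0 --service-name com.amazonaws.us-east-1.s3 --route-table-ids rtb-1234567890abcdef0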


=========


Aws SnowFamily note

 //aws snowball


- petabyte-scale data transfer service

move data onto aws via a physical briefcase computer




============


low cost


- it costs thousands of dollars to transfer 100TB over high speed internet.


- snowball can reduce that to about 1/5



===========


// snowball feature 


- e-ink display

- tamper and weather proof

- 256-bit encryption

- uses a trusted platform module

- data transfer must be completed in 90d for security purposes

- can import and export from s3



comes in 2 sizes:

50TB ( 42TB of usable space )

80TB  ( 72TB per node ) 


===========



// snowball edge


- petabyte-scale data transfer service

move data onto aws via a physical briefcase computer


++ more storage 

++ more compute capabilities




// features:


- lcd display

- local processing and edge computing workloads

- can be used in a cluster of 5 - 10 devices



3 device options :


- storage optimized   /24 vcpu

- compute optimized  / 54 vcpu

- gpu optimized / 54 vcpu



2 size options:

100 tb / 83 tb usable space

100tb clustered / 45 tb per node





==========


// snow mobile


- 45-foot long shipping container pulled by a semi trailer truck

transfers 100PB per snowmobile


aws personnel will help connect your network to the snowmobile and when

the data transfer is complete they drive it back and import

into s3 / glacier




security features:

- gps tracking

- alarm monitoring

- 24/7 video surveillance

- escort security vehicle while in transit ( optional )





=========

Monday, December 4, 2023

Amazon S3 note

// s3 simple storage service



object based storage service 

serverless storage in the cloud


no need to worry about filesystems / disk space





==========



file system storage = manages data as files and a file hierarchy

block storage = manages data as blocks within sectors and tracks



s3 = unlimited storage. no need to think about the underlying infra

the s3 console provides an interface to upload and access data




s3 object = an object contains your data. like files.


- can store data with a size of 0 - 5 terabytes per object.




an object consists of:

1 key  : the object's name

2 value : the data, as a sequence of bytes

3 version ID  : if versioning is enabled, tags the object version

4 Metadata   : additional information



s3 bucket:

- a bucket = holds objects.  can also contain folders, whose contents are objects


bucket names must be unique.




=========


s3 storage class



1 standard

2 Intelligent-Tiering

3 standard-IA

4 one zone IA

5 Glacier

6 glacier deep archive



^ the further down the list, the cheaper



// 1 standard  ( by default )


fast, 99.99% availability ,  11 9's durability, replicated across at least 3 AZs



// 2 intelligent tiering


uses ML to analyze your object usage and determine the appropriate storage class.

data is moved to the most cost effective access tier without impact / added overhead



// 3 standard IA / infrequent access


cheaper, for files we access about once a month.

there is an additional fee for retrieval.

50% less than standard ( reduced availability )



// 4 one zone IA


cheaper than standard IA by 20%

objects exist in only 1 AZ. 99.5% availability

data could get destroyed

there is a retrieval fee




// 5 Glacier


long term cold storage, but data retrieval time is rather slow.

can be minutes to hours. but very cheap cost-wise



// 6 glacier deep archive


the lowest cost storage class

data retrieval = 12 hours




** glacier = looks like a service of its own but is actually part of S3


** all data is replicated across 3 or more AZs. except one-zone IA = data exists in only 1 AZ


** retrieval fees are charged per GB of data accessed



=========


// s3 security


- all new buckets are private by default


per-request logging can be turned on for a bucket.

logs are generated in a different bucket. // can be logged in a different aws account


access control is configured using:  1 BUCKET POLICIES and 2 ACL / Access control list





- Access control lists 

a legacy feature for controlling access to buckets and objects.



- bucket policies

use a policy to define complex use case



policy -> statements

e.g. bucket A may only be accessed via www.toro.com/*




example policy statement:


{

"Version": "2012-10-17",

"Statement": [

{

"Sid": "PublicReadGetObject",

"Effect": "Allow",

"Principal": "*",

"Action": "s3:GetObject",

"Resource":  "arn:aws:s3:::www.toro.com/*"

}

]

}



=========


// s3 encryption



traffic between your local pc and s3 is secured via SSL / TLS



- Server Side Encryption ( SSE ) - Encryption at Rest


Amazon helps you encrypt the object data


s3 managed keys - ( amazon manages all the keys )



3 types of SSE:


SSE-AES  = s3 handles the key, uses the aes-256 algorithm  ( 256 bits in length ) 

SSE-KMS  = envelope encryption, AWS KMS and you manage the keys

SSE-C    = customer provided key ( you manage the key )



- Client side encryption

customers encrypt their own files locally before uploading to s3




** KMS = key is encrypted by another key





** security in transit = file upload is done via ssl 


==========


// s3 data consistency



new object / puts

1 read after write consistency

when uploading a new s3 object = able to read immediately after writing





overwrite PUTs or DELETE object

2 eventual consistency

when you overwrite or delete an object, s3 takes time to replicate the data and version across AZs


if read immediately = it usually returns the old copy of the data.

it takes a few seconds before reading the updated object ( after replication completes ) 







========



// s3 cross region replication ( CRR )


a feature of s3 which, when enabled, automatically replicates every object uploaded to s3 to a different region


provides higher durability and potential disaster recovery for objects.





** must enable versioning on both the source and destination bucket if you want this feature enabled

** customers can do CRR replication to another AWS Account


=========


// s3 versioning



- stores all versions of an object in s3

- once enabled it cannot be disabled, only suspended on the bucket

- fully integrates with s3 lifecycle rules

- the MFA Delete feature provides extra protection against deletion of your data




//versioning


- versions are tagged by id.


key = gambar1.png

id=1111

   1112



if you accidentally delete the key with id 1112, you can still retrieve the file back with id 1111




** once versioning is enabled, if you delete the new file, the old one is automatically recoverable.


========



// s3 lifecycle management



- automates the process of moving objects to a different storage class or deleting objects altogether


- can be used together with versioning


- can be applied to current and previous versions





example:


1 customer creates an object in s3


2 after 7 days it is moved to glacier


3 after 365 days it is permanently deleted





** there is an option to set X days before moving an object to glacier 


=======



// s3 - Transfer Acceleration



fast and secure transfer over long distances between the end user and the s3 bucket.


- uses cloudfront's distributed edge locations.


- instead of uploading your data to the bucket, the user uses a distinct URL for an edge location ( the nearest edge location - DC )



when data arrives at the edge location, it is automatically routed to s3 over an optimized network path ( the amazon backbone network )




======


// s3 - Presigned url


used when temporary access is needed / allowing a user to download a file from a password protected part of your 

web APP. your web app generates a presigned url which will expire after X seconds.



aws s3 presign s3://mybucket/object1 --expires-in 500




^ generated as a url containing an accessKeyID and an Expires token & signature. only accessible for X amount of seconds



========


// s3 - multi factor auth delete 


ensures a user can't delete objects from the bucket unless they have an mfa code.


** only the bucket owner logged in as the root user / whoever has access to the MFA device can delete objects from the bucket.




aws s3api put-bucket-versioning \

--bucket XXX \

--versioning-configuration Status=Enabled,MFADelete=Enabled \

--mfa "mfa-serial-number mfa-code" 






1 AWS CLI must be used to turn on MFA

2 the bucket must have versioning turned on



==========



// public permission object configuration



1 amazon s3 > Permissions > uncheck block all public access


2 overview > make public 



object url -> access via browser


========= 


// versioning configuration



1 amazon s3 > properties > versioning


2 enable

- the only options are enable / suspend





3 check in s3 > overview > versions > show / hide

- the version ids appear




4 test uploading a file with the same name



=======



// s3 encryption configuration ( server side )


1 amazon s3 > properties > default encryption


2 turn on aes-256 / aws-kms


3 cek di s3 > overview > server-side encryption


=========



// s3 cli



aws s3 ls     // list all current buckets


aws s3 ls toro    // list the contents of a single bucket 




// download a file from an s3 bucket to local 


aws s3 cp   s3://toro/1/abc.jpg   ~/desktop/toro





// upload file from local to s3 bucket 


aws s3 cp ~/desktop/toro  s3://toro/1/abc.jpg



// create presigned url  expires in 500s

// create temporary access


aws s3 presign s3://toro  --expires-in 500






// change s3 storage class to save money $ 




1 enter bucket > properties > storage class


2 change it to standard / intelligent-tiering / standard-ia / one zone-ia / glacier / glacier deep archive





// add management lifecycle



1 enter s3 > management > lifecycle > + add lifecycle rule


2 add a 45 day rule, add a tag



3 select current version

- select transition to standard-ia after 45d   // minimum 30d
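
// the same rule as a CLI sketch ( bucket name is a placeholder ):

aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration '{
  "Rules": [{
    "ID": "45-day-rule",
    "Status": "Enabled",
    "Filter": {"Prefix": ""},
    "Transitions": [{"Days": 45, "StorageClass": "STANDARD_IA"}]
  }]
}'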





========



// cross region replication


copy files from 1 bucket to another bucket / across regions / a different aws acct.



1 create another bucket for destination bucket


2 enable versioning on the source and dest bucket

- properties > versioning


3 set replication

- s3 > properties > replication

- set source = entire bucket 

- choose destination bucket

- optional: change storage class

- optional : change object ownership to another aws acct

- create role



wait until replication complete








========



// setup bucket policies

// a json document for making complex access control




1 s3 > permissions > bucket policy

- create the policy as json

- can copy from the policy generator 

- paste into the bucket policy

- save



** you can make a policy for who is allowed to upload to s3 -> action : PutObject
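
// a minimal sketch of such an upload policy ( bucket name, account id and user are placeholders ):

aws s3api put-bucket-policy --bucket my-bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowUploadOnly",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:user/uploader"},
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::my-bucket/*"
  }]
}'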








Thursday, November 30, 2023

AWS shield Standard vs Advanced note

 



// AWS SHIELD advanced WITH WAF


- protect against signature atk

- have ML capabilities.  // can recognize new threat as they evolve


AWS Shield is a service that protects applications against DDoS attacks. AWS Shield provides two levels of protection: Standard and Advanced.






// standard

AWS Shield Standard automatically protects all AWS customers at no cost. It protects your AWS resources from the most common, frequently occurring types of DDoS attacks. 


As network traffic comes into your applications, AWS Shield Standard uses a variety of analysis techniques to detect malicious traffic in real time and automatically mitigates it. 




// advanced


AWS Shield Advanced is a paid service that provides detailed attack diagnostics and the ability to detect and mitigate sophisticated DDoS attacks. 




It also integrates with other services such as Amazon CloudFront, Amazon Route 53, and Elastic Load Balancing. Additionally, you can integrate AWS Shield with AWS WAF by writing custom rules to mitigate complex DDoS attacks.


==========

Amazon Security note

 // security mechanism


- shared responsibility




//customer

Customers are responsible for the security of everything that they create and put in the AWS Cloud.




When using AWS services, you, the customer, maintain complete control over your content. You are responsible for managing security requirements for your content, including which content you choose to store on AWS, which AWS services you use, and who has access to that content. You also control how access rights are granted, managed, and revoked.


 


The security steps that you take will depend on factors such as the services that you use, the complexity of your systems, and your company’s specific operational and security needs. Steps include selecting, configuring, and patching the operating systems that will run on Amazon EC2 instances, configuring security groups, and managing user accounts. 



============



// aws


AWS is responsible for security of the cloud.


 


AWS operates, manages, and controls the components at all layers of infrastructure. This includes areas such as the host operating system, the virtualization layer, and even the physical security of the data centers from which services operate. 


 


AWS is responsible for protecting the global infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure includes AWS Regions, Availability Zones, and edge locations.


 


AWS manages the security of the cloud, specifically the physical infrastructure that hosts your resources, which include:


Physical security of data centers

Hardware and software infrastructure

Network infrastructure

Virtualization infrastructure

Although you cannot visit AWS data centers to see this protection firsthand, AWS provides several reports from third-party auditors. These auditors have verified its compliance with a variety of computer security standards and regulations.



=============



AWS Identity and Access Management (IAM)



AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely. 




- user permission




> root account user  // can access and control any resource in the account


IAM users, groups, and roles

IAM policies

Multi-factor authentication









iam users by default = 0 permissions.


only after being given permissions can they add ec2 instances etc.

============



// multi factor authentication



add randomized token. 

password + adding second form of authentication


===========



principle of least privilege

- a user is granted only what they need


============


// IAM policy


a json document that describes what API calls a user can or cannot make





effect = allow / deny


action = any aws api call


resource = aws api resource
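
example policy document ( a minimal sketch; the action here is just an illustration ):

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "ec2:DescribeInstances",
    "Resource": "*"
  }]
}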



==========



// IAM group


makes policies easier. a grouping of users for policies



==========



// IAM Roles 


- associated permission

- no username or pass

- allow or deny

- assumed for temporary amounts of time 

- gain access to temporary permission


- users

- external identities

- applications

- other AWS Services



when roles are attached, all previous policies are abandoned. the role's policies apply.



========



// aws organization


- central location to manage multiple aws account


- combine accounts into 1


- billing combined into 1  / consolidated billing


- hierarchical groups of accounts form OUs / organizational units


developer OU 

admin OU

HR OU

legal OU



// service control policies.


- restrict the resources each role / individual user can access






In AWS Organizations, you can apply service control policies (SCPs) to the organization root, an individual member account, or an OU. An SCP affects all IAM users, groups, and roles within an account, including the AWS account root user.





=========



// compliance



- audit / follow the law




consumer data eu = GDPR / General Data Protection Regulation


healthcare us = HIPAA / Health Insurance Portability and Accountability Act




========



// AWS Artifact


- access to compliance reports done by 3rd parties across a wide range of standards



// AWS Compliance center


- compliance information all in one place 


- has the aws risk and security white paper


==========




// AWS Key Management Services (KMS)

- key management services.


encryption - securing a msg or data in a way that only authorized parties can access it




a key and a door.



1 encryption at rest

2 encryption in transit





encryption of data at rest is enabled on all dynamodb table data.


encryption of data in transit is between server and client




AWS Key Management Service (AWS KMS) enables you to perform encryption operations through the use of cryptographic keys. A cryptographic key is a random string of digits used for locking (encrypting) and unlocking (decrypting) data. You can use AWS KMS to create, manage, and use cryptographic keys. You can also control the use of keys across a wide range of services and in your applications.


========


// Amazon Inspector


improves the security and compliance of your aws-deployed apps.



=========

Amazon Database note

 MySQL, PostgreSQL, Oracle, Microsoft SQL Server,



========


//  Lift-and-Shift


migrate a db environment from on-prem to the cloud




This means you have control over the same variables you do in your on-premises environment, such as OS, memory, CPU, storage capacity, and so forth.




++ DATABASE MIGRATION SERVICE 



=========


// amazon RDS


running your databases in the cloud is to use a more managed service called Amazon Relational Database Service, or RDS







Amazon Relational Database Service (Amazon RDS) is a service that enables you to run relational databases in the AWS Cloud.


Amazon RDS is a managed service that automates tasks such as hardware provisioning, database setup, patching, and backups. With these capabilities, you can spend less time completing administrative tasks and more time using data to innovate your applications. You can integrate Amazon RDS with other services to fulfill your business and operational needs, such as using AWS Lambda to query your database from a serverless application.


Amazon RDS provides a number of different security options. Many Amazon RDS database engines offer encryption at rest (protecting data while it is stored) and encryption in transit (protecting data while it is being sent and received).





// amazon RDS support


Amazon RDS is available on six database engines, which optimize for memory, performance, or input/output (I/O). Supported database engines include:


Amazon Aurora

PostgreSQL

MySQL

MariaDB

Oracle Database

Microsoft SQL Server


===========



// amazon aurora


supports mysql

supports postgresql



- priced at 1/10 the cost of a commercial db



has data replication: 6 copies at a time


can apply 15 read replicas // offload reads and scale performance 


has continuous backup to s3, ready to restore 



has point-in-time recovery : can recover data from a specific period




=========


In a relational database, data is stored in a way that relates it to other pieces of data. 


An example of a relational database might be the coffee shop’s inventory management system. Each record in the database would include data for a single item, such as product name, size, price, and so on.


Relational databases use structured query language (SQL) to store and query data. This approach allows data to be stored in an easily understandable, consistent, and scalable way. For example, the coffee shop owners can write a SQL query to identify all the customers whose most frequently purchased drink is a medium latte.


ID   Product name                 Size     Price

1    Medium roast ground coffee   12 oz.   $5.30

2    Dark roast ground coffee     20 oz.   $9.27





=============




// Amazon DynamoDB


- serverless database




table -> 


data organized into items.

item -> attributes





- redundant across AZs

- high performance / ms response times 

- supports millions of users


- noSQL database

- non relational database

- schemaless

- add or remove attributes in a table 


- simpler. fast.


- quick response times and highly scalable 

- fully managed






================




Nonrelational databases are sometimes referred to as “NoSQL databases” because they use structures other than rows and columns to organize data. One type of structural approach for nonrelational databases is key-value pairs. With key-value pairs, data is organized into items (keys), and items have attributes (values). You can think of attributes as being different features of your data.


In a key-value database, you can add or remove attributes from items in the table at any time. Additionally, not every item in the table has to have the same attributes. 




Key  Value

1    Name: John Doe

     Address: 123 Any Street

     Favorite drink: Medium latte

2    Name: Mary Major

     Address: 100 Main Street

     Birthday: July 5, 1994





Amazon DynamoDB is a key-value database service. It delivers single-digit millisecond performance at any scale.
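
// a minimal CLI sketch of writing the first item above ( the table name "customers" and key attribute "Id" are assumptions ):

aws dynamodb put-item --table-name customers --item '{
  "Id": {"N": "1"},
  "Name": {"S": "John Doe"},
  "Address": {"S": "123 Any Street"},
  "FavoriteDrink": {"S": "Medium latte"}
}'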





==============



// rds vs dynamoDb


AWS Cloud Practitioners, welcome back to the championship chase of the database! In the relational corner, engineered to remove undifferentiated heavy lifting from your database administrators with automatic high availability and recovery provided. You control the data, you control the schema, you control the network. You are running Amazon RDS. Yes, Yeah. 




The NoSQL corner, using a key value pair that requires no advanced schema, able to operate as a global database at the touch of a button. It has massive throughput. It has petabyte scale potential. It has granular API access. It is Amazon DynamoDB. 





rds: business analytics.



============





// amazon redshift


Amazon Redshift is a data warehousing service that you can use for big data analytics. It offers the ability to collect data from many sources and helps you to understand relationships and trends across your data.





data warehouse => for big data.


historical analytics as opposed to operational analysis.




- data warehouse as a service



-  multiple petabyte size 


- 10 times higher performance than relational db






// amazon redshift spectrum 

- run single sql query against exabytes of unstructured data running in data lakes.








=============



// AWS Database Migration Service (AWS DMS)



migrates existing dbs between a source and a target.


the source stays operational while being migrated


downtime is minimized for apps that rely on that database



the source and target db don't need to be the same type 




mysql - amazon RDS


microsoft sql - amazon RDS


oracle - amazon RDS for oracle 





// compatible database

schema structure 

data type

database code





on premise ec2, amazon rds ------  cloud ec2, amazon rds








// heterogeneous databases


the source and destination have different database engines.




requires a 2 step process. convert first using the

AWS Schema Conversion Tool.



// these 2 get converted by the aws schema conversion tool

schema structure + 

data types -

database code  +



============



// 3 other uses of DMS:


- development and test database migration  // migrate or copy data to a 2nd db

- database consolidation  // combine several dbs into 1 

- continuous database replication   // continuous db replication in multiple places




==========



// summary


dynamoDB : great for key-value pairs  






// amazon DocumentDB  ( with MongoDB Compatibility )



- great for small attributes


examples: a full content management system, catalogs, user profiles 





// amazon Neptune

social web media tracking

fraud detection

supply chain. // track assurance that nothing is lost 




// amazon Managed Blockchain

blockchain solution 


- decentralization components.



// amazon Quantum Ledger Database  ( QLDB )

immutable ledger.  entries can never be removed from audits.




// amazon ElastiCache 


- database accelerators.


can add a caching layer. improves from milliseconds to microseconds

no need to launch, uplift, or maintain it.

comes in both memcached and redis flavors





// amazon DynamoDB Accelerator ( DAX )


- database accelerator for DynamoDB


improves read times for non-relational data










=============


best for archival data:


Amazon S3 Glacier Flexible Retrieval

Amazon S3 Glacier Deep Archive



=========


========


AWS Storage note

 // storage access




block level storage = a place to store files  // bytes stored on disk. 



laptop / pc => use block level storage. ( hard drive )







// Instance Stores Volume



local Instance Store Volume: the hard drive on the ec2 host


- attached to ec2 instances 

- temporary block level storage

- lifespan = the lifespan of the ec2 instance


if you stop / delete the ec2 instance, all data written to the instance store volume will be deleted.  // the underlying host gets used by other ec2 instances since it's virtual.




temporary file

scratch data

data easily recreated.




- don't write important data to the drives that come with an ec2 instance.




you don't want an important database deleted every time you stop an ec2 instance.









//  Amazon Elastic Block Store  ( EBS )


virtual hard drive / ebs volume.

can be attached to ec2 / directly attached

a hard drive that is persistent



- can persist between stop and start of an ec2 instance.



we define:

size 

type

config



volume that we need.





^ ebs has snapshots => incremental backups of data.

^ important to make regular snapshot backups

^ if the hard drive corrupts we don't lose data

^ data can be restored from a snapshot
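
// snapshot + restore as a rough CLI sketch ( volume/snapshot ids and AZ are placeholders ):

aws ec2 create-snapshot --volume-id vol-1234567890abcdef0 --description "regular backup"
aws ec2 create-volume --snapshot-id snap-1234567890abcdef0 --availability-zone us-east-1a   # restore to a new volume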





// incremental backup


An EBS snapshot is an incremental backup. This means that the first backup taken of a volume copies all the data. For subsequent backups, only the blocks of data that have changed since the most recent snapshot are saved. 




==================



// amazon simple storage service   

// amazon S3


- storing file

- a data store that allows storing and retrieving an unlimited amount of data at any scale

- store object in buckets





data that needs saving elsewhere.



receipt

images

excels

video

text file



maximum object size = 5 TB per upload





objects can be versioned to retain versions / prevent accidental deletes



can create multiple buckets and store data in different classes or tiers



can create permissions for who can see and access objects



can stage data between different tiers




tiers:


data that needs to be used frequently

audit data that needs to be retained for several years

===================



// amazon s3 standard = 99.999999999% durability 


- 11 9's of durability


objects are expected to remain intact over a year 



data is stored in a way that aws can sustain 2 concurrent losses of data in 2 separate storage facilities.




> data is stored in at least 3 facilities  // multiple copies reside across locations.




==================


// s3 static website hosting


- collection of html file, images, etc.



^ can become an instant website





==================


// s3 standard-infrequent Access  ( s3 standard-IA)


- data accessed less frequent but need rapid access when needed.


- perfect for storing backups, disaster recovery files, any object that requires long term storage


===============


// s3 glacier flexible retrieval


- retain data for several years for auditing


- don't need to retrieve it very rapidly



can simply move data here 

or can create vaults then populate them with archives



Low-cost storage designed for data archiving

Able to retrieve objects within a few minutes to hours


S3 Glacier Flexible Retrieval is a low-cost storage class that is ideal for data archiving. For example, you might use this storage class to store archived customer records or older photos and video files. You can retrieve your data from S3 Glacier Flexible Retrieval from 1 minute to 12 hours.








// s3 glacier vault lock policy


retaining data for a specific period of time.  //  lock your vault for a specific time




can make a rule =>  write once read many / WORM Policy in s3 glacier


^ locks the policy from future edits



3 options for retrieval:

- minutes

- hours 

- uploading directly to s3 glacier flexible retrieval / using s3 lifecycle policies



==============


// s3 lifecycle management / policies


- move data automatically between tiers 



1  keep objects in standard for 90d

2  move to s3 Standard-IA for the next 30d

3 after 120 days total, auto-move to s3 glacier flexible retrieval




^ make the config without changing application code

^ performs those moves automatically




============



// s3 one zone-infrequent


Stores data in a single Availability Zone

Has a lower storage price than Amazon S3 Standard-IA

Compared to S3 Standard and S3 Standard-IA, which store data in a minimum of three Availability Zones, S3 One Zone-IA stores data in a single Availability Zone. This makes it a good storage class to consider if the following conditions apply:


You want to save costs on storage.

You can easily reproduce your data in the event of an Availability Zone failure.





// s3 glacier instan retrieval

Works well for archived data that requires immediate access


Can retrieve objects within a few milliseconds


When you decide between the options for archival storage, consider how quickly you must retrieve the archived objects. You can retrieve objects stored in the S3 Glacier Instant Retrieval storage class within milliseconds, with the same performance as S3 Standard.






// s3 glacier deep archive

Lowest-cost object storage class ideal for archiving

Able to retrieve objects within 12 hours

S3 Deep Archive supports long-term retention and digital preservation for data that might be accessed once or twice in a year. This storage class is the lowest-cost storage in the AWS Cloud, with data retrieval from 12 to 48 hours. All objects from this storage class are replicated and stored across at least three geographically dispersed Availability Zones.







// s3 intelligent-tiering


Ideal for data with unknown or changing access patterns

Requires a small monthly monitoring and automation fee per object

In the S3 Intelligent-Tiering storage class, Amazon S3 monitors objects’ access patterns. If you haven’t accessed an object for 30 consecutive days, Amazon S3 automatically moves it to the infrequent access tier, S3 Standard-IA. If you access an object in the infrequent access tier, Amazon S3 automatically moves it to the frequent access tier, S3 Standard.







// s3 outpost

Creates S3 buckets on Amazon S3 Outposts


Makes it easier to retrieve, store, and access data on AWS Outposts


Amazon S3 Outposts delivers object storage to your on-premises AWS Outposts environment. Amazon S3 Outposts is designed to store data durably and redundantly across multiple devices and servers on your Outposts. It works well for workloads with local data residency requirements that must satisfy demanding performance needs by keeping data close to on-premises applications.






============


// data metadata and key


In object storage, each object consists of data, metadata, and a key.

The data might be an image, video, text document, or any other type of file. Metadata contains information about what the data is, how it is used, the object size, and so on. An object’s key is its unique identifier.



when you modify a file in block storage, only the pieces that are changed are updated. When a file in object storage is modified, the entire object is updated.

==============





// EBS VS S3



ebs:

size up to 16 TiB

survives termination of the ec2 instance

ssd by default

hdd options



s3:

unlimited storage

individual object up to 5tb

write once / read many

99.999999999% durability





s3:

web enabled

regionally distributed

offer cost saving

serverless




object storage: docs, images, files   // every time an object changes you must upload the entire file



block storage : blocks.   edit an 80gb video.  edit, save. the engine only updates the changed blocks




==============


// amazon Elastic File System / EFS


- managed filesystem

- shared filesystem across apps

- Multiple instances can access the data in EFS at the same time 

- auto scales up and down by the system





with ebs:

volumes attach to an ec2 instance

an AZ level resource

needs to be in the same AZ as the ec2 instance to attach

volumes do not auto scale -> if it's 5T it stays 5T



with efs:

multiple instances can be reading and writing simultaneously

a true linux file system

regional resource / can be shared between ec2 instances in the same region

automatically scales as you write data



==============

AWS Networking note

 // amazon VPC 

amazon virtual private cloud





// amazon virtual private cloud


lets you provision a logically isolated section of the

aws cloud.


- create a virtual network environment

- can be public facing / private ( with internet or private )




public subnet

- talks to the internet. 


private subnet

- internal ips





===========


public traffic --- internet gateway / IGW --- attach to vpc.



inside the vpc : 

elb

ec2 instance

db




===========


virtual private gateway --- attach to vpc.



^ allows traffic coming from approved networks



- can also create a vpn between a private network in the DC and the virtual private gateway


==========


// aws direct connect



- provides a physical line that connects your network to your aws vpc


a connected dedicated fiber connection from DC1 to the AWS VPC



- work with a direct connect partner in your area to establish this connection



1 vpc might have multiple types of gateways attached for multiple types of resources.

all reside in the same vpc but in different subnets




===========



// vpc network and acl.




igw --- public subnet --- private subnet 






========


// network ACL

packets entering the IGW --> get checked by the network access control list 



> The VPC component that checks packet permissions for subnets is a network access control list (ACL).

> A network ACL is a virtual firewall that controls inbound and outbound traffic at the subnet level.




=========


// security group


- every ec2 instance created goes into a security group

- by default blocks all incoming traffic

- by default allows all outbound traffic 




^ must be modified to allow certain types of traffic.






If you have multiple Amazon EC2 instances within the same VPC, you can associate them with the same security group or use different security groups for each instance. 


==========



// security group vs network acl


security group = stateful. // by default denies all inbound traffic, but allows all return traffic

network acl = stateless.   //  does not allow return traffic automatically. it needs to be specified




^ packet flows must be defined.






// stateful

Security groups perform stateful packet filtering. They remember previous decisions made for incoming packets.






Network ACLs perform stateless packet filtering. They remember nothing and check packets that cross the subnet border each way: inbound and outbound. 



When a packet response for that request comes back to the subnet, the network ACL does not remember your previous request. The network ACL checks the packet response against its list of rules to determine whether to allow or deny.



// acl default 

It is stateless and allows all inbound and outbound traffic.


=========



// route 53


- directs dns to public ips

- able to register domain names. can buy and manage them right on aws

- directs traffic to different endpoints using several different policies such as :


latency-based routing - can be directed to the lowest-latency region


geolocation dns - based on the user's location. can be directed to the nearest / a different region


geoproximity routing


weighted round robin






========


// amazon cloudfront - cdn.



========


// the flow


user -- amazon route 53 -- amazon cloudfront -- amazon elb -- amazon auto scaling --- amazon ec2 instance




=========




provisioning note

 everything is an API call



==========


invoke or call api to configure and manage aws instance



==========



// aws management console   == browser based

// aws CLI

// aws SDK

// aws cloudFormation



==========




// aws management console


- manual provisioning

- prone to manual error. exhausting to configure with next-next-next


test environments

view aws billing

view monitoring

work with non tech resources


==========



// aws CLI


- used to speed up configuration via the cli 

- used in production

- makes actions scriptable and repeatable

- can run on a schedule or be triggered by another process

- enables automation






make api call using the terminal on your machine 


==========



// aws SDK


interact with aws resources through various programming languages 


- able to create programs that use aws without the low level api 




=========



// aws elastic beanstalk


managed provisioning tool for aws ec2



^ you provide app code and the desired configuration to the aws elastic beanstalk service 




it auto-builds multiple environments.


> us east region

> security group

> deploy elb

> deploy auto scaling

> raise 2 ec2 instances

> have 1 database running



- easy to save an environment configuration bundle. deployed again easily.



// goal task:

Adjust capacity

Load balancing

Automatic scaling

Application health monitoring









// aws cloudformation


- create automated and repeatable deployments

- an infra as code tool

- uses json / yaml format

- supports storage, db, analytics, machine learning




^ written in the form of templates

^ cloudformation parses the template then starts provisioning all resources

in parallel


^ behind the scenes, aws CF connects to the backend AWS APIs for each resource.



^ can make a template for 1 region, then make an identical clone to deploy to another region.



^ less room for human error 


^ totally automated process
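
// a minimal template sketch ( just one s3 bucket; stack and resource names are made up ):

// template.json:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "NotesBucket": { "Type": "AWS::S3::Bucket" }
  }
}

aws cloudformation create-stack --stack-name demo-stack --template-body file://template.json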









========


best practice = a minimum of 2 availability zones



==========

AWS Serverless note

 ec2 :

manage instances over time

patching instances

setting up instance scaling

keeping them highly available




===========



// serverless

- cannot see or access the underlying infra.



provisioning 

scaling

high availability 


are already handled by aws.




AWS LAMBDA

- serverless

- upload code as a lambda function

- trigger via e.g. a PutObject event => code runs in a managed environment



For example, a simple Lambda function might involve automatically resizing uploaded images to the AWS Cloud. In this case, the function triggers when uploading a new image. 





1000 incoming triggers => lambda will scale your function to meet demand



lambda is designed to run code in under 15 min.


- not a good fit for deep learning.


- a good fit for quick processes like web backends, handling requests / a backend expense report processing service, where it takes less than 15 minutes to complete




goals:

- host short running functions

- service-oriented applications

- event driven applications

- no provision or manage server




==========


// container orchestration tools  => docker container 


- AMAZON ECS ( elastic container service ) = orchestration tool to manage containers without the hassle of managing your own container orchestration software



- AMAZON EKS ( elastic kubernetes service ) = similar to ecs with different tools and features





Amazon EKS is a fully managed Kubernetes service. Kubernetes is open-source software that enables you to deploy and manage containerized applications at scale.





docker = uses OS level virtualization to deliver software in containers




container = a package for your code // dependencies + configuration




container orchestration = managing multiple docker containers




** ecs and eks can run on top of ec2 

** or can be deployed on aws fargate  ( compute platform )





goals:

run docker container based workload on aws



=========


// aws fargate :

serverless compute platform for deploying ecs / eks   ( serverless environment )



========



// container use case


Suppose that a company’s application developer has an environment on their computer that is different from the environment on the computers used by the IT operations staff. The developer wants to ensure that the application’s environment remains consistent regardless of deployment, so they use a containerized approach. This helps to reduce time spent debugging applications and diagnosing differences in computing environments.




// kenapa butuh orchestration tool 


- 10 host with 100 container 



When running containerized applications, it’s important to consider scalability. Suppose that instead of a single host with multiple containers, you have to manage tens of hosts with hundreds of containers. Alternatively, you have to manage possibly hundreds of hosts with thousands of containers. At a large scale, imagine how much time it might take for you to monitor memory usage, security, logging, and so on.



=======



" just code and configuration "


=====

monolithic app vs microservices note

 // monolithic


Suppose that you have an application with tightly coupled components. These components might include databases, servers, the user interface, business logic, and so on. This type of architecture can be considered a monolithic application. 


In this approach to application architecture, if a single component fails, other components fail, and possibly the entire application fails.





===========


// microservices


In a microservices approach, application components are loosely coupled. In this case, if a single component fails, the other components continue to work because they are communicating with each other. The loose coupling prevents the entire application from failing. 







When designing applications on AWS, you can take a microservices approach with services and components that fulfill different functions. Two services facilitate application integration: Amazon Simple Notification Service (Amazon SNS) and Amazon Simple Queue Service (Amazon SQS).




============


Amazon ELB load balancing, Amazon SQS and SNS note

 elastic load balancer:



- routes requests to multiple instances

- evenly distributes load across multiple ec2 instances

- monitors ec2 instances ( combined with auto scaling ) to forward requests only to servers that are up. stops forwarding to dead ec2 instances



- add more backends without interrupting the front end process  // decoupled architecture




=============



- low demand period

- high demand period



============


// messaging and queueing



buffer = placing messages into a buffer





// tightly coupled architecture


cashier - talks straight to the barista.   // a single component failing causes issues for other components or even the whole system.



app a errors - app b errors too




// loosely coupled architecture


a single failure is isolated, so it won't cause cascading failures





app A -- Message Queue -- app B


if app B fails.. A does not fail.

app A will keep sending to the message Queue until app B is up again



============


// messaging and queueing




messages remain in the queue until they are consumed or deleted







// amazon simple queue service ( SQS - queue)


Using Amazon SQS, you can send, store, and receive messages between software components, without losing messages or requiring other services to be available. In Amazon SQS, an application sends messages into a queue. A user or service retrieves a message from the queue, processes it, and then deletes it from the queue.





- send, store, receive msgs between software components at any volume.

- msgs are placed in the queue until they are processed

- scales automatically, easy to configure and use

- can send notifications



data contained within a message is called the payload.  // protected until delivery 




- person's name, coffee order, order time  => combined into one payload and placed into SQS ( see the sketch below )
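

A minimal boto3 sketch of that flow, assuming a hypothetical queue named coffee-orders: the producer bundles the name, order, and time into one payload; the consumer receives it, processes it, then deletes it from the queue.

import json
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="coffee-orders")["QueueUrl"]

# producer: name + order + time combined into one payload
payload = {"name": "Ana", "order": "latte", "time": "09:15"}
sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps(payload))

# consumer: receive, process, then delete from the queue
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=5)
for msg in resp.get("Messages", []):
    print("processing:", json.loads(msg["Body"]))
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])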







// amazon simple notification service ( SNS )



can be an email, text message, push notification, or http request. once published, it is sent to all subscribers





// for sending notifications to users   => email, text message, push notification, or http request

publish / subscribe model


sns topic : a channel for messages to be delivered


configure subscribers to a topic -> then publish messages to those subscribers


1 message to a topic => fanned out to many subscribers in one go.




subscribers can also be endpoints like : 

- sqs queues

- aws lambda

- http / https webhooks 



can also send notifications to end users via ( sketch after this list ):

- mobile push

- sms

- email 
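

A minimal boto3 sketch of the publish/subscribe model: create a topic, attach subscribers, publish once, and SNS fans the message out to all of them. The topic name and endpoints are made-up examples (a real SQS subscription would also need a queue policy allowing SNS to send, omitted here).

import boto3

sns = boto3.client("sns")
topic_arn = sns.create_topic(Name="order-updates")["TopicArn"]   # hypothetical topic

# subscribers: an email address and an SQS queue (placeholder ARN)
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="user@example.com")
sns.subscribe(TopicArn=topic_arn, Protocol="sqs",
              Endpoint="arn:aws:sqs:us-east-1:123456789012:order-queue")

# one publish; SNS delivers to every subscriber in one go
sns.publish(TopicArn=topic_arn, Subject="order ready", Message="your coffee is ready")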



=============


Amazon ec2 auto scaling note

 idle resources in an on-premises datacenter.



===========


provision exactly to match demand,

every hour



+ROI

===========


everything fails all the time

so plan for failure and nothing fails


==========


HA system with no failures 


==========



// Amazon EC2 Auto Scaling


If you’ve tried to access a website that wouldn’t load and frequently timed out, the website might have received more requests than it was able to handle. This situation is similar to waiting in a long line at a coffee shop, when there is only one barista present to take orders from customers.



Amazon EC2 Auto Scaling enables you to automatically add or remove Amazon EC2 instances in response to changing application demand.





there are 2 types of auto scaling:

- dynamic scaling

- predictive scaling





dynamic scaling:

responds to changing demand



predictive scaling:

automatically schedules the right number of Amazon EC2 instances based on predicted demand




** To scale faster, you can use dynamic scaling and predictive scaling together.



e.g., a quiet week = scale the ec2 instances down

==========


// scale up vs scale out  ( sketch below )



scale up = make the instance bigger   // adding more power


scale out = add x more instances   // adding more instances
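

A boto3 sketch of the difference, with placeholder instance and group names: scaling up changes one instance's type to a bigger one (it must be stopped first), while scaling out just raises the instance count on an Auto Scaling group.

import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")
instance_id = "i-0123456789abcdef0"    # placeholder

# scale up: stop the instance, switch to a bigger type, start it again
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
ec2.modify_instance_attribute(InstanceId=instance_id,
                              InstanceType={"Value": "m5.2xlarge"})
ec2.start_instances(InstanceIds=[instance_id])

# scale out: add more instances by raising the group's desired capacity
autoscaling.set_desired_capacity(AutoScalingGroupName="web-asg",   # hypothetical group
                                 DesiredCapacity=4)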




===========


happy customer 

happy ceo 

happy architecture



==========


1 set minimum

2 set desired

3 set maximum  / scale as needed



minimum: e.g., 1 ec2 instance in the auto scaling group at initial configuration.




**If you do not specify the desired number of Amazon EC2 instances in an Auto Scaling group, the desired capacity defaults to your minimum capacity.
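

A boto3 sketch of those three settings on a hypothetical Auto Scaling group (the launch template and subnet IDs are placeholders); leaving DesiredCapacity out would make it default to MinSize, as noted above.

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=1,           # 1: never fewer than one instance
    DesiredCapacity=2,   # 2: omit this and it defaults to MinSize
    MaxSize=4,           # 3: scale as needed, but never beyond four
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",   # placeholder subnets
)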







============


Because Amazon EC2 Auto Scaling uses Amazon EC2 instances, you pay for only the instances you use, when you use them. You now have a cost-effective architecture that provides the best customer experience while reducing expenses.



============



Amazon ec2 pricing note

 // on demand 


- per hour

- per second 



On-Demand Instances are ideal for short-term, irregular workloads that cannot be interrupted. No upfront costs or minimum contracts apply. The instances run continuously until you stop them, and you pay for only the compute time you use.


Sample use cases for On-Demand Instances include developing and testing applications and running applications that have unpredictable usage patterns. On-Demand Instances are not recommended for workloads that last a year or longer because these workloads can experience greater cost savings using Reserved Instances.



===============


// ec2 instance savings plans


1 / 3  year plan


a commitment to a consistent amount of usage, measured in dollars per hour, for a one- or three-year term.


Savings Plans therefore provide savings of up to 72% on your AWS compute usage. This can lower prices on your EC2 usage, regardless of instance family, size, OS, tenancy, or AWS region. This also applies to AWS Fargate and AWS Lambda usage, which are serverless compute options that we will cover later in this course. 



=============


// reserved instance



Reserved Instances suit steady-state workloads or ones with predictable usage, and offer you up to a 75% discount versus On-Demand pricing. You qualify for a discount once you commit to a one- or three-year term and can pay with three payment options: all upfront, where you pay in full when you commit; partial upfront, where you pay for a portion when you commit; and no upfront, where you don't pay anything at the beginning. 



there are 2 types:


- standard reserved instances

- convertible reserved instances



subscription term: 1 year / 3 years.


3 years = more discount 




----------------


// Standard Reserved Instances: This option is a good fit if you know the EC2 instance type and size you need for your steady-state applications and in which AWS Region you plan to run them. Reserved Instances require you to state the following qualifications:


Instance type and size: For example, m5.xlarge

Platform description (operating system): For example, Microsoft Windows Server or Red Hat Enterprise Linux

Tenancy: Default tenancy or dedicated tenancy

You have the option to specify an Availability Zone for your EC2 Reserved Instances. If you make this specification, you get EC2 capacity reservation. This ensures that your desired amount of EC2 instances will be available when you need them. 




// convertible = the location can be moved between AZs.

// can be converted to a different instance type / size ( e.g., m5.xlarge )



Convertible Reserved Instances: If you need to run your EC2 instances in different Availability Zones or different instance types, then Convertible Reserved Instances might be right for you. Note: You trade in a deeper discount when you require flexibility to run your EC2 instances.









=============



// spot instances



allow you to request spare Amazon EC2 computing capacity for up to 90% off of the On-Demand price. The catch here is that AWS can reclaim the instance at any time they need it, giving you a two-minute warning to finish up work and save state. You can always resume later if needed. So when choosing Spot Instances, make sure your workloads can tolerate being interrupted. A good example of those are batch workloads. 


Spot Instances are ideal for workloads with flexible start and end times, or that can withstand interruptions. Spot Instances use unused Amazon EC2 computing capacity and offer you cost savings at up to 90% off of On-Demand prices.




============


// dedicated host



Dedicated Hosts are physical hosts dedicated for your use for EC2. These are usually for meeting certain compliance requirements; nobody else will share tenancy of that host.



============

Amazon ec2 notes

 multitenancy = sharing underlying hardware between VMs



=========


hypervisor = isolates VMs from each other as they share resources from the host.





========


provision thousands of ec2 instances, on demand, with a blend of operating systems and configurations

to power your business's different apps


can choose the OS + the services that run on install


========



// vertical scaling


can make an instance bigger or smaller




// horizontal scaling


add more instances ( increase the count )




========


// network 

public

or

private




========


ec2 instances come in groups called instance families


each family is a different combination of resources. ( launch sketch after the list below )






1 general purpose

- balanced resources

- diverse workloads

- web servers

- code repositories




2 compute optimized

- compute-intensive tasks 

- gaming servers

- high performance computing / hpc

- scientific modeling





3 memory optimized

- memory-intensive tasks 

- ++ database performance


This scenario might be a high-performance database or a workload that involves performing real-time processing of a large amount of unstructured data. In these types of use cases, consider using a memory optimized instance. Memory optimized instances enable you to run workloads with high memory needs and receive great performance.







4 accelerated computing

- floating-point calculations

- graphics processing

- data pattern matching

- utilizes hardware accelerators





5 storage optimized

- high-performance IO for locally stored data
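

The family is picked simply by choosing an instance type at launch. A boto3 sketch with a placeholder AMI ID; swapping the InstanceType string (e.g., m5.large for general purpose, r5.large for memory optimized) is the whole decision.

import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c5.xlarge",          # compute optimized, e.g. for an HPC job
    MinCount=1,
    MaxCount=1,
)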







=========



Amazon Services Note

you only pay for what you use


- Amazon Elastic Compute Cloud (Amazon EC2)  = a virtual server 



- AWS Cost Explorer = visualize, understand, and manage your AWS costs and usage over time




- Amazon EC2 Auto Scaling = automatically scales EC2 instances based on demand / in response to changing app demand ( auto-adds instances and auto-decommissions them when not needed ) 



- Elastic Load Balancing ( ELB ) = service that automatically distributes incoming application traffic across multiple resources, such as Amazon EC2 instances. 




- amazon simple queue service ( SQS - queue)


> send, store, receive messages between software components at any volume

> messages are placed in a queue until they are processed

> scales automatically, easy to configure and use

> can send notifications




- amazon simple notification service ( SNS )


> sends notifications to users via a publish / subscribe model.



subscribers can be:

- sqs queues

- aws lambda

- http / https webhooks 


can also send notifications to end users via:

- mobile push

- sms

- email 








- aws lambda = run code without managing instances. // serverless

> suited for processes under 15 min ( handler sketch below )
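

A minimal sketch of a Python Lambda handler: just the function, no instances to manage. The event shape here is a made-up example.

# lambda entry point: AWS calls this with the event payload
def lambda_handler(event, context):
    name = event.get("name", "world")          # hypothetical event field
    return {"statusCode": 200, "body": f"hello, {name}"}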






// container orchestration tools  => docker containers 


- AMAZON ECS ( elastic container service ) = orchestration tool to manage containers without the hassle of managing your own container orchestration software



- AMAZON EKS ( elastic kubernetes service ) = similar to ecs but with different tooling and features



// aws fargate :

serverless compute platform for ecs / eks ( run_task sketch below )
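

A boto3 sketch of running a task on Fargate, so ECS schedules the container with no instances to manage. The cluster, task definition, and subnet names are placeholders.

import boto3

ecs = boto3.client("ecs")

ecs.run_task(
    cluster="demo-cluster",            # hypothetical cluster
    launchType="FARGATE",              # serverless: no EC2 hosts to manage
    taskDefinition="web-task:1",       # placeholder task definition
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-aaaa1111"],    # placeholder subnet
            "assignPublicIp": "ENABLED",
        }
    },
)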



==========



HA system with no failures 

auto scaling based on user needs 



=========



- regions 

geographical area that contains aws resources 



- availability zones

a single DC or group of DCs within a region


========



- aws outposts



> automatically installs a fully operational mini region in the customer's own datacenter 


========


// amazon virtual private cloud


lets you provision a logically isolated section of the aws cloud.


- create a virtual network environment

- can be public facing or private ( with or without internet access )




public subnet

- talks to the internet 


private subnet

- internal IPs only





// vpc functions:

able to define private IPs for aws resources.




elb and ec2 need IP settings -> vpc






subnet = a chunk of IP addresses in your vpc that lets you group resources together.


controls whether services are publicly or privately available ( sketch below )
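

A boto3 sketch that mirrors these notes: one VPC, a public subnet that auto-assigns public IPs, and a private subnet that keeps internal IPs only. The AZ names are examples.

import boto3

ec2 = boto3.client("ec2")

vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

public = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.0.0/24",
                           AvailabilityZone="us-east-1a")
private = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.3.0/24",
                            AvailabilityZone="us-east-1b")

# public subnet: instances get public IPs so they can talk to the internet
ec2.modify_subnet_attribute(SubnetId=public["Subnet"]["SubnetId"],
                            MapPublicIpOnLaunch={"Value": True})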





=========



// aws direct connect



- provides a physical line that connects your network to your aws vpc


a dedicated fiber connection from your datacenter ( DC1 ) to the AWS VPC



- work with a direct connect partner in your area to establish this connection


========== 



//  Amazon Elastic Block Store  ( EBS )


virtual hard drive / ebs volume.

can be attached to ec2 / directly attached

a hard drive that is persistent





==========



// amazon Elastic File System  ( EFS )


- managed filesystem

- shared filesystem across apps

- multiple instances can access the data in EFS at the same time 

- scales up and down automatically




============


// amazon aurora








an enterprise-class relational database. It is compatible with MySQL and PostgreSQL relational databases. It is up to five times faster than standard MySQL databases and up to three times faster than standard PostgreSQL databases.


Amazon Aurora helps to reduce your database costs by reducing unnecessary input/output (I/O) operations, while ensuring that your database resources remain reliable and available. 


Consider Amazon Aurora if your workloads require high availability. It replicates six copies of your data across three Availability Zones and continuously backs up your data to Amazon S3.







supports mysql

supports postgresql



- price: 1/10 the cost of a commercial db



has data replication: 6 copies at a time


can apply up to 15 read replicas  // offload reads and scale performance 


has continuous backup to s3, ready to restore 



has point-in-time recovery: can recover data from a specific point in time




================



// amazon RDS


an easier way to run your databases in the cloud is to use a more managed service called Amazon Relational Database Service, or RDS



// amazon dynamoDB


fully managed NoSQL database; high-performance, scalable, serverless. ( put/get sketch below )
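

A boto3 sketch assuming a hypothetical table named orders with partition key order_id: write an item, read it back. No servers or capacity to manage.

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")   # assumes this table and key already exist

table.put_item(Item={"order_id": "42", "drink": "latte"})

resp = table.get_item(Key={"order_id": "42"})
print(resp.get("Item"))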




// Amazon DocumentDB is a document database service that supports MongoDB workloads.

===================



AWS Database Migration Service (AWS DMS)


a service to migrate existing databases between a source and a target.



===================




AWS Identity and Access Management (IAM)


AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely.   


===============



// AWS Artifact


- access to compliance reports


- access AWS security and compliance reports and special online agreements



================



// Amazon Inspector


improves the security and compliance of your aws-deployed apps

by running automated security assessments


checks for deviations from best practices,

vulnerabilities,

and security issues, with recommendations on how to fix them



3 components in amazon inspector:


network configuration reachability piece

amazon agent

security assessment service





+ can retrieve findings via API, then perform remediation to fix the issues




================


// amazon GuardDuty


threat detection



- analyzes continuous streams of metadata generated from your account and network activity,

found in aws cloudtrail events, amazon vpc flow logs, and dns logs.

it uses integrated threat intelligence such as known malicious IP addresses, anomaly detection, and machine learning to identify threats more accurately




runs independently from other aws services, so it won't affect their performance or availability


1 enable guardduty

2 guardduty continuously analyzes network and account activity

3 guardduty intelligently detects threats

4 review detailed findings and take action ( sketch below )
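

A boto3 sketch of steps 1 and 4: create a detector to enable GuardDuty, then list finding IDs for review. Steps 2 and 3 happen on the AWS side.

import boto3

guardduty = boto3.client("guardduty")

# step 1: enable guardduty by creating a detector
detector_id = guardduty.create_detector(Enable=True)["DetectorId"]

# step 4: review findings (IDs only here; get_findings returns the details)
finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
print(finding_ids)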




===============



// amazon cloudwatch


visibility


monitors the health and operations of your apps and aws infrastructure in real time 


- Monitor applications and respond to system-wide performance changes 




// cloudwatch alarm

set a threshold for a metric

can generate alerts and trigger actions when the threshold is met

can integrate with SNS ( sketch below )
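

A boto3 sketch of such an alarm: CPU above 80% for two 5-minute periods publishes to an SNS topic. The instance ID and topic ARN are placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                 # 5-minute periods
    EvaluationPeriods=2,        # two periods in a row
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder topic
)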




===============



// aws cloudtrail



- API auditing tool



every API request made to aws

gets logged to cloudtrail



can save logs to an s3 bucket



=============



// aws trusted advisor


Trusted Advisor compares its findings to AWS best practices in five categories: cost optimization, performance, security, fault tolerance, and service limits. For the checks in each category, Trusted Advisor offers a list of recommended actions and additional resources to learn more about AWS best practices. 


=============




// lightsail


deploy ready-made application stacks

(a service that enables you to run virtual private servers)



============



// AWS Pricing calculator


The AWS Pricing Calculator lets you explore AWS services and create an estimate for the cost of your use cases on AWS.



- there is bulk discount pricing




==========



Consolidated billing also enables you to share volume pricing discounts across accounts. 


Some AWS services, such as Amazon S3, provide volume pricing discounts that give you lower prices the more that you use the service. In Amazon S3, after customers have transferred 10 TB of data in a month, they pay a lower per-GB transfer price for the next 40 TB of data transferred. 


In this example, there are three separate AWS accounts that have transferred different amounts of data in Amazon S3 during the current month: 


Account 1 has transferred 2 TB of data.

Account 2 has transferred 5 TB of data.

Account 3 has transferred 7 TB of data.
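

Combined, the three accounts transferred 2 + 5 + 7 = 14 TB, so under consolidated billing the 4 TB above the 10 TB threshold is charged at the lower per-GB rate. A tiny Python sketch of that tiering math, with made-up rates (real S3 prices differ by region and over time):

# hypothetical per-GB rates, only to show the tiering arithmetic
BASE_RATE = 0.09        # $/GB for the first 10 TB
DISCOUNT_RATE = 0.085   # $/GB after the 10 TB threshold
THRESHOLD_TB = 10

transferred_tb = 2 + 5 + 7                               # all accounts combined = 14 TB
discounted_tb = max(0, transferred_tb - THRESHOLD_TB)    # 4 TB above the threshold

cost = (min(transferred_tb, THRESHOLD_TB) * 1024 * BASE_RATE
        + discounted_tb * 1024 * DISCOUNT_RATE)
print(f"{discounted_tb} TB billed at the lower rate; total ${cost:,.2f}")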



=========



// aws budgets


set custom budgets and alerts on usage




tag function: can be set up per project. monitor usage per db.



can create daily cost reports.




=========


// aws cost explorer


visualize usage data.


=============




// beanstalk


AWS Elastic Beanstalk

Deploy and scale web applications



Businesses upload their application, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.



============



// Amazon CloudFront 


a content delivery service. 



===========


// amazon route 53


Connect user requests to infrastructure in AWS and outside of AWS.

Manage DNS records for domain names. 


Amazon Route 53 is a DNS web service. It gives developers and businesses a reliable way to route end users to internet applications that are hosted in AWS. 


 


Additionally, businesses can transfer DNS records for existing domain names that are currently managed by other domain registrars, or register new domain names directly within Amazon Route 53.



===========



// aws shield


A service that helps protect applications against distributed denial-of-service (DDoS) attacks 



============



// Amazon Augmented AI (Amazon A2I) 



provides built-in human review workflows for common machine learning use cases, such as content moderation and text extraction from documents. With Amazon A2I, a person can also create their own workflows for machine learning models built on Amazon SageMaker or any other tools.



=========


// Amazon Textract 


 a machine learning service that automatically extracts text and data from scanned documents.


===========


// Amazon Lex 


a service that builds conversational interfaces using voice and text.



============


// AWS Key Management Service (AWS KMS) 


a service that creates, manages, and uses cryptographic keys.


============


// Amazon Redshift 


a data warehousing service for providing big data analytics. It offers the ability to collect data from many sources and provides insight into relationships and trends across a data set. 



============


// Amazon Quantum Ledger Database (Amazon QLDB) 


a ledger database service. A person can use Amazon QLDB to review a complete history of all the changes that have been made to application data.



============


// AWS Snowball 

a device that transfers large amounts of data into and out of AWS.



============


// Amazon ElastiCache 


a service that adds caching layers on top of databases to help improve the read times of common requests.


===========


// Amazon Neptune 


a graph database service. Amazon Neptune provides the capability to build and run applications that work with highly connected datasets, such as recommendation engines, fraud detection, and knowledge graphs.



============


// AWS DeepRacer 


is an autonomous 1/18 scale race car that tests reinforcement learning models.


===========