
Sunday, December 10, 2023

VPC Note

 // VPC



default vpc CIDR = 172.31.0.0/16



==========


// how to create a vpc


1 select region



2 create the vpc

vpc > create vpc > ipv4 CIDR block       =>  enter the ip range you want to allocate




^ a vpc id will be created

^ a route table appears automatically

^ a nacl appears automatically

^ dns hostnames = disabled by default
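
// the same two steps from the AWS CLI -- a minimal sketch, assuming the 10.0.0.0/16 range used later in these notes and a placeholder vpc id:

# create the vpc with the CIDR block you want to allocate
aws ec2 create-vpc --cidr-block 10.0.0.0/16

# dns hostnames are disabled by default; turn them on if instances need dns names
aws ec2 modify-vpc-attribute --vpc-id vpc-xxxxxxxx --enable-dns-hostnames '{"Value":true}'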








===========



// connect vpc to internet


1 create an internet gateway


internet gateways > create internet gateway

- input name tag : ig-1


^ when created, its status will be "detached". it can then be attached to the vpc that was already created.




2 attach the internet gateway to the vpc


internet gateways > actions > attach to vpc 

- select the vpc id you want to attach to




3 add a route table so there is a route to the internet


route tables > create route table

- name tag: route_to_internet

- vpc : select the existing vpc



4 set main route table


route table > actions > set main route table 



5 edit the route table, add 0.0.0.0/0


route tables > edit routes > add route


destination = 0.0.0.0/0  

target = ( select internet gateway ) 
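
// CLI sketch of steps 1-5 -- a minimal sketch, assuming placeholder IDs ( igw-xxxxxxxx, vpc-xxxxxxxx, rtb-xxxxxxxx ):

# 1-2: create the internet gateway and attach it to the vpc
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxxxxxx --vpc-id vpc-xxxxxxxx

# 3: create a route table in the vpc
aws ec2 create-route-table --vpc-id vpc-xxxxxxxx

# 5: add the default route ( 0.0.0.0/0 ) pointing at the internet gateway
aws ec2 create-route --route-table-id rtb-xxxxxxxx \
    --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxxxxxx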






6 edit subnets 

// create at least 2 or 3 in different AZs, so there is some HA.



subnets > create subnets >

- edit name tag

- select vpc 

- select AZ

- edit the ipv4 CIDR block ( its size must be smaller than / a subset of the vpc CIDR ) 




// create 3 public ipv4 CIDRs :

10.0.1.0/24

10.0.0.0/24

10.0.2.0/24



^ action => edit auto-assign public ipv4 address : yes





// create 1 private ipv4 CIDR :

10.0.3.0/24



// create a new route table for the private ip segment

- route tables > create route table > private_ip_route


// set route table association


subnets -> edit route table association 

- edit route table id 

- change it to the private route table

- save
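
// CLI sketch of step 6 plus the private route table association -- a minimal sketch, assuming placeholder subnet / route table IDs and us-east-1 AZs as an example:

# create a public subnet in one AZ ( repeat for 10.0.1.0/24 and 10.0.2.0/24 in other AZs )
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.0.0/24 \
    --availability-zone us-east-1a

# auto-assign public ipv4 addresses on the public subnets
aws ec2 modify-subnet-attribute --subnet-id subnet-xxxxxxxx --map-public-ip-on-launch

# create the private subnet and associate it with the private route table
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.3.0/24 \
    --availability-zone us-east-1b
aws ec2 associate-route-table --route-table-id rtb-private-xxxx --subnet-id subnet-yyyyyyyy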









==========


// create ec2


- services > ec2 > instances > launch instances

select platform ( usually Amazon Linux 2 AMI ( HVM ) => for the public subnet ) 


- select t2.micro


- network = select the vpc network that was already created


- subnet = select the public subnet that was created


- create new IAM role 

to give the instance permission to access s3


^ select ec2 > in filter policies, filter "ssm"  ( AmazonEC2RoleforSSM )

^ in filter policies, filter "s3"  ( AmazonS3FullAccess )

^ give it a role name > create role 



- select new IAM role





// optional


in the advanced details section you can input a user-data script to build a web server




- storage => leave at default for now




// configure security group

- create new sec group > security group name = ec2-sec-group1


open port ssh

open port http


- edit the source that is allowed 






// create ec2 key pair

- used for remote access with a key, which is more secure.





// security group => deny by default

// we create rules to allow the services we need
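
// CLI sketch of the same launch flow -- a minimal sketch, assuming a placeholder AMI id, key pair name, and the vpc / subnet created above:

# security group: open ssh and http ( tighten the allowed source in practice )
aws ec2 create-security-group --group-name ec2-sec-group1 \
    --description "web and ssh" --vpc-id vpc-xxxxxxxx
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
    --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
    --protocol tcp --port 80 --cidr 0.0.0.0/0

# key pair for remote access
aws ec2 create-key-pair --key-name my-key --query 'KeyMaterial' --output text > my-key.pem

# launch the instance into the public subnet with the IAM instance profile
aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t2.micro \
    --subnet-id subnet-xxxxxxxx --security-group-ids sg-xxxxxxxx \
    --key-name my-key --iam-instance-profile Name=my-ec2-role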


==========


// create an ec2 instance for the private subnet


same steps as above, but use a different subnet => select the private one




=========


// edit nacl


- can be used to block a specific ip address

- nacl lives at the subnet level



Subnets > select the subnet > below there is a Network ACL section > edit


- select inbound rule > add rule           // lowest to highest



- add rule 10 > source : ip_private/32  > deny 
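
// CLI sketch of the same deny rule -- a minimal sketch, assuming a placeholder NACL id and an example ip to block:

# rule 10 is evaluated before higher-numbered allow rules and denies all traffic from one /32
aws ec2 create-network-acl-entry --network-acl-id acl-xxxxxxxx \
    --ingress --rule-number 10 --protocol -1 \
    --cidr-block 203.0.113.10/32 --rule-action deny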



========



// how to connect from outside to the private subnet

// can use a jumphost or bastion ( look in the marketplace ) 

^ for remote browser / ssh into the private instance. more secure

^ jumpbox = hardened instance 

^ can add google authenticator / mfa

^ can do screen recording 

^ has audit logs 


^ the alternative is Session Manager, but it has no screen recording






- instances > launch instances > aws marketplace > guacamole bastion host


- edit the network and subnet sections


- edit policies > filter guaws


- create role > ec2 > filter > ec2readonlyaccess






========

NAT Gateway Note

 // nat


1 used so private ip hosts can connect out to the internet

2 used when private network ips clash / overlap and you still want outbound connectivity




============



// nat instances vs nat gateway



nat instances = individual ec2 instances.


- a nat instance can go down

- you need to build more than one 






// nat gateways


- managed service which launches redundant instances within the selected AZ.


- managed by aws


- redundancy is handled behind the scenes. aws manages it.



** nat instances must be in a public subnet.



ec2 -> lives in the private subnet




^ all nat runs per AZ


=========




// nat instance and nat gateway notes++




// note nat instance


- when building a nat instance you must disable source and destination checks on the instance

- nat instances must be in a public subnet

- there must be a route out from the private subnet to the nat instance 

- the size of the nat instance determines how much traffic it can handle

- high availability can be done with auto scaling groups, multiple subnets in different AZs, and automated failover via scripts   =>  more hassle than a nat gateway




// note nat gateway


- redundant within an AZ.

- you can only have 1 nat gateway per AZ / it cannot span AZs

- starts at 5 Gbps and can scale up to 45 Gbps


- nat gateways are what enterprises use


- no need to patch the nat gateway. no need to disable source/destination checks 

- a nat gateway is automatically assigned a public ip

- route tables for the nat gateway must be updated

- resources in multiple AZs sharing a gateway will lose internet access if the gateway goes down, unless you create a gateway in each AZ and configure routes accordingly
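
// CLI sketch of creating a nat gateway and routing the private subnet through it -- a minimal sketch, assuming placeholder IDs and an existing Elastic IP allocation:

# the nat gateway lives in a PUBLIC subnet and needs an elastic ip
aws ec2 create-nat-gateway --subnet-id subnet-public-xxxx --allocation-id eipalloc-xxxxxxxx

# send the private subnet's internet-bound traffic to the nat gateway
aws ec2 create-route --route-table-id rtb-private-xxxx \
    --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-xxxxxxxxxxxxxxxxx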









========

Security Group Note

 // security group:


virtual firewall at instance level


========


- inbound rules

- outbound rules

- no deny rules. all traffic is blocked by default unless a rule specifically allows it

- multiple instances across multiple subnets can belong to the same security group



=========



- can specify /32 or a specific ip address


sg web app ->  db via ip




- can specify another security group as the source



sg web app ->  db via sec group





- an instance can have multiple security groups applied ( nested ). the rules become permissive:

initially deny, then a second security group with an allow is applied, so the result is allow 
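
// CLI sketch of the "db via sec group" pattern -- a minimal sketch, assuming placeholder group IDs: the db group allows MySQL only from members of the web app group

aws ec2 authorize-security-group-ingress --group-id sg-db-xxxxxxxx \
    --protocol tcp --port 3306 --source-group sg-webapp-xxxxxxxx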




=========



// sec group limit



can have up to 10k security groups per region // default 2,500


can have 60 inbound rules and 60 outbound rules per security group


16 security groups per elastic network interface ( default is 5 ) 



========


- firewall at the instance level

- security groups are stateful // if traffic is allowed inbound, it is also allowed outbound

- unless specifically allowed, all inbound traffic is blocked by default

- all outbound traffic from the instance is allowed by default

- the source can be an ip range, a single ip address, or another security group

- any change takes effect immediately

- ec2 instances can belong to multiple security groups

- a security group can contain multiple ec2 instances 



- you can't block a specific ip with a security group


Thursday, December 7, 2023

AWS NACL NOTE

NACL = Network Access Control List 

NACL: an optional layer of security that acts as a firewall for controlling traffic in and out of a subnet



- virtual firewall at subnet level


- a vpc automatically gets a default nacl that allows all outbound and inbound traffic


- each subnet can only be associated with 1 nacl. associating a new one replaces the previous association


- each nacl has rules that allow or deny traffic inbound ( into ) and outbound ( out of ) subnets



- a nacl has inbound and outbound rules 


- rule number #  => determines order of evaluation, from lowest to highest.   1 - 32766.  // recommended increments of 10 / 100


- can block a single ip address ( not possible with security groups )


- rules can allow / deny


- stateless


- a newly created custom nacl denies all traffic by default











===================


// nacl use case -- subnet level


- block single ip address from internet

- block all incoming ssh traffic


=============


Tuesday, December 5, 2023

AWS VPC Note

 VPC / Virtual Private Cloud




- VPC = personal datacenter


gives complete control over the virtual networking environment



region > vpc > AZ > 


public / private subnet ---- security group --- ec2 instance / rdsDB --- nat ---


NACL --- Route table --- router --- IGW --- internet



================



// VPC Key Features


- vpcs are region-specific // they do not span across regions


- can create 5 vpcs per region


- each region has 1 default vpc


- can create 200 subnets per VPC


- can use an ipv4 cidr block + an ipv6 cidr block 




features that cost nothing:

vpc

route table

NACL

internet gateway 

security groups and subnets

VPC Peering




features that cost money :

NAT Gateway

VPC endpoint

VPN gateway

Customer gateway

DNS hostnames ( if the instance needs dns )





============



// default vpc


- there is a default vpc in every region, so you can immediately deploy instances



1 create a vpc with a /16 cidr block.



2 create a /20 default subnet in each AZ



3 create an internet gateway and connect it to the default vpc


4 create a default security group and associate it with the default VPC



5 create a default NACL / network access control list and associate it with the default VPC



6 associate the default dhcp options set with the default vpc



7 when the vpc is created = a main route table is automatically created



===========



0.0.0.0/0 = all possible ip addresses.



if specified in the route table for the IGW = allows internet access 


if specified in security group inbound rules = allows all traffic from the internet to our public resources



0.0.0.0/0 => gives access from anywhere, i.e. the internet



==========



// VPC peering


- allows connecting one vpc with another over a direct route using private IP addresses




1 instances on peered vpcs behave like they are on the same network

2 can connect vpcs across the same or different aws accounts and regions


3 peering uses a star configuration:  1 central vpc - 4 other vpcs


4 no transitive peering ( peering must take place directly between vpcs )

- you need a one-to-one connection to the intermediary VPC



5 no overlapping CIDR blocks





VPC A = 10.0.0.0/16

VPC B = 172.31.0.0/16




   VPC PEERING CONNECTION

VPC A 10.0.0.4/32  ---------------  VPC B 172.31.0.8/32
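
// CLI sketch of peering VPC A and VPC B -- a minimal sketch, assuming placeholder IDs; after accepting, each side still needs a route to the other side's CIDR:

# request and accept the peering connection
aws ec2 create-vpc-peering-connection --vpc-id vpc-aaaaaaaa --peer-vpc-id vpc-bbbbbbbb
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-xxxxxxxx

# in VPC A's route table, send traffic for VPC B's CIDR through the peering connection ( and vice versa )
aws ec2 create-route --route-table-id rtb-vpc-a-xxxx \
    --destination-cidr-block 172.31.0.0/16 --vpc-peering-connection-id pcx-xxxxxxxx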


==========



// route table


- route table = determines where network traffic is directed



- every subnet in a vpc must have a route table.



- 1 route table can be associated with multiple subnets




destination      target

10.0.0.0/16      local

0.0.0.0/0        igw-19asda21312ifsd







public subnet --- route table ---- router --- igw --- internet 


===========



// internet gateway ( IGW )


- allows the vpc to access the internet




functions:

1 provides a target in the vpc route tables for routing to the internet

2 performs NAT for instances that have been assigned a public ipv4 address



TO ROUTE TO the internet you must add to the routing table: 


destination = 0.0.0.0/0

target = igw





(route table) ---- router --- IGW --- internet 



==========



// bastion / jumpbox



bastion = an intermediate ec2 instance that has been hardened. // used to jump remote traffic from the internet to a private ec2 ip

- helps gain access via SSH / RDP to ec2 instances that sit in a private subnet




** a bastion should not double as a NAT ( security purposes ) 



// nat gateways


- nat gateway : used so ec2 instances get outbound internet access, e.g. for security updates







** a bastion can be replaced with Session Manager ( part of Systems Manager )


==========



// direct connect


- aws direct connect : establishes a dedicated network connection from an on-premises location to AWS



- helps reduce network costs

- increases bandwidth throughput 

- provides a more consistent network experience than typical internet-based connections






++ very fast network  


there are 2 service tiers:

1 lower bandwidth 50 Mbps - 500 Mbps 

2 higher bandwidth 1 Gbps / 10 Gbps




on premises customer ---- customer/partner cage ( router ) ---- aws cage ( router ) ---- vpc / ec2 





aws direct connect = the routers in the middle ( the customer / partner cage and the aws cage )



=========



// vpc endpoint




- a "secret tunnel" inside the aws private network

- privately connects the vpc to other AWS services and VPC endpoint services

- eliminates the need for an internet gateway, NAT, VPN or AWS Direct Connect

- instances in the vpc do not need a public ip address to talk to those services


- traffic between the vpc and the other service never leaves the aws network

- horizontally scaled, redundant and highly available VPC component

- allows secure communication between instances and services without adding availability risks or bandwidth constraints on your traffic





// no need to route traffic via the internet to access certain services

VPC -- VPC endpoint --- s3 bucket 



2 types of vpc endpoint :

1 interface endpoint

2 gateway endpoint






// interface endpoint 


- called an ENI / elastic network interface with a private ip address.


entry point for traffic going to a supported service.



interface endpoints are powered by AWS PrivateLink

- access services hosted on AWS easily and securely, keeping the traffic private within the AWS network




// ENI Cost 


price per vpc endpoint per az $/hour = 0.01

price per GB data processed ($)  = 0.01


estimated $7.50 / month






interface endpoints ( ENI ) support the following services:


API GW

cloudformation

cloudwatch

Kinesis

sageMaker

Codebuild

AWS Config

EC2 API

ELB API

AWS KMS

Secret manager

security token service

service catalog

SNS

SQS

System Manager

Marketplace partner services

endpoint services in other AWS accounts




// vpc gateway endpoint


- a gateway that is a target for a specific route in your routing table

- used for traffic destined for a supported AWS Service




to create a gateway endpoint you must specify the vpc and the target service you want to establish the connection to




aws gateway endpoints only support 2 services:

1 S3

2 DynamoDB



** gateway endpoints are free ( interface endpoints are billed per hour and per GB, as above )
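
// CLI sketch of creating an S3 gateway endpoint -- a minimal sketch, assuming placeholder vpc / route table IDs and the us-east-1 region:

# the gateway endpoint adds a route for the S3 prefix list to the given route tables
aws ec2 create-vpc-endpoint --vpc-id vpc-xxxxxxxx \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-xxxxxxxx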


=========


Aws SnowFamily note

 //aws snowball


- petabyte-scale data transfer service

moves data onto aws via a physical briefcase-sized computer




============


low cost


- it costs thousands of dollars to transfer 100TB over high-speed internet.


- snowball can reduce that to about 1/5 of the cost



===========


// snowball feature 


- e-ink display

- tamper- and weather-proof

- 256-bit encryption

- uses a trusted platform module

- data transfer must be completed within 90 days for security purposes

- can import and export from s3



comes in 2 sizes:

50TB ( 42TB of usable space )

80TB ( 72TB per node ) 


===========



// snowball edge


- petabyte-scale data transfer service

moves data onto aws via a physical briefcase-sized computer


++ more storage 

++ more compute capabilities




// features:


- lcd display

- local processing and edge computing workloads

- can be used in a cluster of 5 - 10 devices



3 device options :


- storage optimized   /24 vcpu

- compute optimized  / 54 vcpu

- gpu optimized / 54 vcpu



2 size options:

100 tb / 83 tb usable space

100tb clustered / 45 tb per node





==========


// snowmobile


- 45-foot-long shipping container pulled by a semi-trailer truck

transfers up to 100PB per snowmobile


aws personnel will help connect your network to the snowmobile, and when

the data transfer is complete they drive it back and import

the data into s3 / glacier




security features:

- gps tracking

- alarm monitoring

- 24/7 video surveillance

- escort security vehicle while in transit ( optional )





=========

Monday, December 4, 2023

Amazon S3 note

// s3 simple storage service



object-based storage service 

serverless storage in the cloud


no need to worry about filesystems / disk space





==========



file system storage = manages data as files in a file hierarchy

block storage = manages data as blocks within sectors and tracks



s3 = unlimited storage. no need to think about the underlying infra

the s3 console provides an interface to upload and access data




s3 object = an object contains your data. like a file.


- can store data of size 0 bytes - 5 terabytes per object.




an object consists of:

1 key  : the name of the object

2 value : the data itself, as a sequence of bytes

3 version ID  : if versioning is enabled, tags the object version

4 Metadata   : additional information



s3 bucket:

- bucket = holds objects. can also contain folders, which in turn hold objects


bucket names must be unique.




=========


s3 storage class



1 standard

2 Intelligence tiering

3 standard-IA

4 one zone IA

5 Glacier

6 glacier deep archive



^ the further down the list, the cheaper



// 1 standard  ( the default )


fast, 99.99% availability, 11 9's durability, replicated across at least 3 AZs



// 2 intelligent tiering


uses ML to analyze your object usage and determine the appropriate storage class.

data is moved to the most cost-effective access tier without impact / added overhead



// 3 standard IA / infrequent access


cheaper, for files we access about once a month.

there is an additional fee for retrieval.

50% less than standard ( reduced availability )



// 4 one zone IA


20% cheaper than standard IA

objects exist in only 1 AZ. 99.5% availability

data could get destroyed

there is a retrieval fee




// 5 Glacier


long-term cold storage, but data retrieval time is fairly slow.

from minutes up to hours. but very cheap in terms of cost



// 6 glacier deep archive


the lowest cost storage class

data retrieval = 12 hours




** glacier looks like a separate service but is actually part of S3


** all data is replicated across 3 or more AZs, except one zone-IA where data exists in only 1 AZ


** the retrieval fee is charged per GB of data accessed



=========


// s3 security


- all new buckets are private by default


per-request logging can be turned on for a bucket.

logs are generated in a different bucket. // they can even be logged to a different aws account


access control is configured using:  1 BUCKET POLICIES and 2 ACL / Access control list





- Access control list 

a legacy feature for controlling access to buckets and objects.



- bucket policies

use a policy to define complex use cases



policy -> statement

e.g. bucket A may only be accessed via www.toro.com/*




example policy statement:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::www.toro.com/*"
    }
  ]
}



=========


// s3 encryption



traffic between your local pc and s3 is secured via SSL / TLS



- Server Side Encryption ( SSE ) - Encryption at Rest


Amazon helps you encrypt the object data


s3-managed keys ( amazon manages all the keys )



3 tipe SSE:


SSE-AES  = s3 handles the key, uses the aes-256 algorithm  ( 256-bit key length ) 

SSE-KMS  = envelope encryption, AWS KMS and you manage the keys

SSE-C    = customer-provided key ( you manage the key )



- Client side encryption

the customer encrypts their own files locally before uploading them to s3




** KMS = key is encrypted by another key





** security in transit = file uploads are done via ssl 


==========


// s3 data consistency



new objects / puts

1 read-after-write consistency

when uploading a new s3 object = you can read it immediately after writing





overwrite puts or delete object

2 eventual consistency

when you overwrite or delete an object, s3 takes time to replicate the data and version to each AZ


if read immediately = it usually returns the old copy of the data.

it takes a few seconds before the updated object can be read ( after replication completes ) 







========



// s3 cross region replication ( CRR )


an s3 feature which, when enabled, automatically replicates every object uploaded to s3 to a different region


provides higher durability and potential disaster recovery for objects.





** versioning must be enabled on both the source and destination bucket if you want this feature enabled

** customers can use CRR to replicate to another AWS Account


=========


// s3 versioning



- stores all versions of an object in s3

- once enabled it cannot be disabled, only suspended on the bucket

- fully integrates with s3 lifecycle rules

- the MFA Delete feature provides extra protection against deletion of your data




//versioning


- versions are tagged by id.


key = gambar1.png

id=1111

   1112



if you accidentally delete the key with id 1112, you can still retrieve the file back with id 1111




** once versioning is set up, if you delete the newer file, the older one is automatically recoverable.


========



// s3 lifecycle management



- automates the process of moving objects to a different storage class or deleting objects altogether


- can be used together with versioning


- can be applied to current and previous versions





example:


1 customer creates an object in s3


2 after 7 days it is moved to glacier


3 after 365 days it is permanently deleted





** there is an option to choose how many ( X ) days before moving the object to glacier 


=======



// s3 - Transfer Acceleration



fast and secure transfers over long distances between end users and an s3 bucket.


- uses cloudfront's distributed edge locations.


- instead of uploading your data to the bucket directly, users use a distinct URL for an edge location ( the nearest edge location / DC )



when the data arrives at the edge location, it is automatically routed to s3 over an optimized network path ( the amazon backbone network )




======


// s3 - Presigned url


used when you need temporary access / to allow a user to download a file from a password-protected part of your 

web app. your web app generates a presigned url which will expire after X seconds.



aws s3 presign s3://mybucket/object1 --expires-in 500




^ generated as a url containing an AccessKeyId, an Expires token and a signature. it can only be accessed for X amount of seconds



========


// s3 - multi factor auth delete 


ensures users can't delete objects from the bucket unless they have an mfa code.


** only the bucket owner logged in as the root user / with access to the MFA device can delete objects from the bucket.




aws s3api put-bucket-versioning \

--bucket XXX \

--versioning-configuration Status=Enabled,MFADelete=Enabled \

--mfa "mfa-sn mfa-code" 






1 the AWS CLI must be used to turn on MFA Delete

2 the bucket must have versioning turned on



==========



// public permission object configuration



1 amazon s3 > Permissions > untick block all public access


2 overview > make public 



object url -> access via browser
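
// CLI sketch of the same flow -- a minimal sketch, assuming a placeholder bucket and object key ( and that public ACLs are allowed on the account ):

# stop blocking public access on the bucket
aws s3api put-public-access-block --bucket my-bucket \
    --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false

# make a single object public, then open its object url in a browser
aws s3api put-object-acl --bucket my-bucket --key photo.jpg --acl public-read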


========= 


// versioning configuration



1 amazon s3 > properties > versioning


2 enable

- the only options are enable / suspend





3 check in s3 > overview > versions > show / hide

- the version id appears




4 test by uploading a file with the same name
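
// CLI sketch of the same versioning setup -- a minimal sketch, assuming a placeholder bucket name:

# enable versioning on the bucket and confirm the status
aws s3api put-bucket-versioning --bucket my-bucket \
    --versioning-configuration Status=Enabled
aws s3api get-bucket-versioning --bucket my-bucket

# list object versions to see the version ids
aws s3api list-object-versions --bucket my-bucket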



=======



// s3 encryption configuration ( server side )


1 amazon s3 > properties > default encryption


2 turn on aes-256 / aws-kms


3 check in s3 > overview > server-side encryption
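
// CLI sketch of turning on default encryption -- a minimal sketch, assuming a placeholder bucket name:

# default bucket encryption with S3-managed keys ( AES-256 )
aws s3api put-bucket-encryption --bucket my-bucket \
    --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'

# verify
aws s3api get-bucket-encryption --bucket my-bucket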


=========



// s3 cli



aws s3 ls     // list all current buckets


aws s3 ls toro    // list the contents of a single bucket 




// download a file from the s3 bucket to local 


aws s3 cp   s3://toro/1/abc.jpg   ~/desktop/toro





// upload file from local to s3 bucket 


aws s3 cp ~/desktop/toro  s3://toro/1/abc.jpg



// create presigned url  expires in 500s

// create temporary access


aws s3 presign s3://toro/1/abc.jpg  --expires-in 500






// change the s3 storage class to save money $ 




1 enter bucket > properties > storage class


2 change it to standard / intelligent-tiering / standard-ia / one zone-ia / glacier / glacier deep archive
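
// from the CLI the storage class of an existing object can be changed by copying it onto itself -- a minimal sketch, assuming the toro bucket and key from the examples above:

aws s3 cp s3://toro/1/abc.jpg s3://toro/1/abc.jpg --storage-class STANDARD_IA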





// add management lifecycle



1 enter s3 > management > lifecycle > + add lifecycle rule


2 add a rule named "45 day rule", add a tag



3 select current version

- select transition to standard-ia after 45 days   // minimum 30 days
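
// CLI sketch of an equivalent lifecycle rule -- a minimal sketch, assuming a placeholder bucket; it follows the lifecycle example earlier ( transition, then expire ):

# move current versions to STANDARD_IA after 45 days, permanently delete after 365 days
aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
    --lifecycle-configuration '{
      "Rules": [{
        "ID": "45-day-rule",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},
        "Transitions": [{"Days": 45, "StorageClass": "STANDARD_IA"}],
        "Expiration": {"Days": 365}
      }]
    }'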





========



// cross region replication


copies files from 1 bucket to another bucket / across regions / different aws accounts.



1 create another bucket for destination bucket


2 enable versioning on the source and destination buckets

- properties > versioning


3 set replication

- s3 > properties > replication

- set source = entire bucket 

- choose destination bucket

- optional: change storage class

- optional : change object ownership to another aws acct

- create role



wait until replication completes
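
// CLI sketch of step 3 -- a minimal sketch, assuming placeholder bucket names and an existing IAM role that S3 can assume for replication:

# replication needs versioning turned ON for both buckets first
aws s3api put-bucket-versioning --bucket source-bucket --versioning-configuration Status=Enabled
aws s3api put-bucket-versioning --bucket dest-bucket --versioning-configuration Status=Enabled

# replicate the entire source bucket to the destination bucket
aws s3api put-bucket-replication --bucket source-bucket \
    --replication-configuration '{
      "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
      "Rules": [{
        "ID": "replicate-all",
        "Status": "Enabled",
        "Prefix": "",
        "Destination": {"Bucket": "arn:aws:s3:::dest-bucket"}
      }]
    }'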








========



// setup bucket policies

// json document buat bikin complex control access




1 s3 > permissions > bucket policy

- create the policy as json

- it can be copied from the policy generator 

- paste it into the bucket policy

- save



** you can create a policy for who is allowed to upload to s3 -> action : PutObject
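
// the policy can also be applied from the CLI -- a minimal sketch, assuming the policy JSON ( like the statement example earlier ) is saved as policy.json:

# attach the policy document to the bucket
aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json

# verify
aws s3api get-bucket-policy --bucket my-bucket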