
Monday, December 4, 2023

Amazon S3 note

// s3 = simple storage service



object-based storage service

serverless storage in the cloud


no need to worry about filesystems / disk space





==========



file system storage = manages data as files in a file hierarchy

block storage = manages data as blocks within sectors and tracks



s3 = unlimited storage. no need to think about the underlying infrastructure

the s3 console provides an interface to upload and access data




s3 object = an object contains your data. like a file.


- can store data of size 0 - 5 terabytes per object.




an object consists of ( CLI sketch after this list ):

1 key  : the object's name

2 value : the data itself, as a sequence of bytes

3 version ID  : if versioning is enabled, tags the object's version

4 Metadata   : additional information
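
a quick way to inspect these fields is the s3api head-object call — a minimal sketch, assuming a bucket named toro and a key 1/abc.jpg ( both hypothetical, reused in the CLI section below ):

# prints ContentLength, ContentType, Metadata, and VersionId ( when versioning is enabled )
aws s3api head-object --bucket toro --key 1/abc.jpg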



s3 bucket:

- bucket = holds objects. can also contain folders, which in turn hold objects


bucket names must be globally unique.
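
a minimal sketch of creating a bucket from the CLI, assuming the ( hypothetical ) name toro-notes-bucket is still free and us-east-1 is the target region:

# bucket names share a global namespace, so this fails if the name is already taken
aws s3 mb s3://toro-notes-bucket --region us-east-1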





=========


s3 storage class



1 standard

2 intelligent-tiering

3 standard-IA

4 one zone IA

5 Glacier

6 glacier deep archive



^ the further down the list, the cheaper



// 1 standard  ( by default )


fast, 99.99% availability, 11 9's durability, replicated across at least 3 AZs



// 2 intelligent tiering


uses machine learning to analyze your object usage and determine the appropriate storage class.

data is moved to the most cost-effective access tier without performance impact / added overhead



// 3 standard IA / infrequent access


cheaper, for files we access about once a month.

there is an additional fee for retrieval.

50% less than standard ( reduced availability )



// 4 one zone IA


cheaper than standard IA by 20%

objects exist in only 1 AZ. 99.5% availability

data could get destroyed if that AZ is lost

a retrieval fee applies




// 5 Glacier


long-term cold storage, but data retrieval time is fairly slow.

can take minutes to hours. but very cheap in terms of cost



// 6 glacier deep archive


the lowest-cost storage class

data retrieval = 12 hours




** glacier = looks like a separate service, but it is actually part of S3


** all data is replicated across at least 3 AZs, except one-zone IA = data exists in only 1 AZ


** the retrieval fee is charged per GB of data accessed
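
the storage class can be chosen per object at upload time — a minimal sketch, assuming a local file abc.jpg and the toro bucket from above:

# upload straight into standard-IA instead of the default standard class
aws s3 cp abc.jpg s3://toro/abc.jpg --storage-class STANDARD_IA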



=========


// s3 security


- all new buckets are private by default


per-request logging can be turned on for a bucket.

logs are generated in a different bucket. // they can even be logged to a different aws account


access control is configured using:  1 BUCKET POLICIES and 2 ACLs / Access Control Lists





- Access control list

a legacy feature for controlling access to buckets and objects.



- bucket policies

use a policy to define complex access use cases



policy -> statement

e.g. bucket A may only be accessed via www.toro.com/*




example policy statement:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::www.toro.com/*"
    }
  ]
}
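
the statement above can also be attached from the CLI — a minimal sketch, assuming the JSON is saved as policy.json and the bucket is named www.toro.com as in the Resource ARN:

# attach the bucket policy ( overwrites any existing policy on the bucket )
aws s3api put-bucket-policy --bucket www.toro.com --policy file://policy.json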



=========


// s3 encryption



traffic between your local pc and s3 is encrypted via SSL / TLS



- Server Side Encryption ( SSE ) - Encryption at Rest


Amazon helps you encrypt the object data


s3-managed keys ( amazon manages all the keys )



3 types of SSE ( CLI sketch after this list ):


SSE-AES  = s3 handles the key, uses the aes-256 algorithm  ( 256-bit key length )

SSE-KMS  = envelope encryption, AWS KMS and you manage the keys

SSE-C    = customer-provided key ( you manage the key )
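
a minimal upload sketch showing the first two options, assuming the toro bucket and a placeholder KMS key id:

# SSE-AES / SSE-S3: s3 manages the key
aws s3 cp abc.jpg s3://toro/abc.jpg --sse AES256

# SSE-KMS: encrypt with a KMS key ( key id below is a placeholder )
aws s3 cp abc.jpg s3://toro/abc.jpg --sse aws:kms --sse-kms-key-id 1234abcd-12ab-34cd-56ef-1234567890ab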



- Client side encryption

the customer encrypts their own files locally before uploading to s3




** KMS = the data key is encrypted by another key ( envelope encryption )





** security in transit = file uploads are done via ssl


==========


// s3 data consistency



new objects / PUTs

1 read-after-write consistency

when you upload a new s3 object = you can read it immediately after writing





overwrite PUTs or delete object

2 eventual consistency

when you overwrite or delete an object, s3 takes time to replicate the data and versions to each AZ


if you read it immediately = it usually returns the old copy of the data.

it takes a few seconds before the updated object can be read ( after replication completes )







========



// s3 cross region replication ( CRR )


an s3 feature which, when enabled, automatically replicates every object uploaded to s3 to a different region


provides higher durability and potential disaster recovery for objects.





** versioning must be enabled on both the source and destination bucket if you want this feature enabled

** customers can use CRR to replicate to another AWS Account


=========


// s3 versioning



- stores all versions of an object in s3

- once enabled it cannot be disabled, only suspended on the bucket

- fully integrates with s3 lifecycle rules

- the MFA Delete feature provides extra protection against deletion of your data




//versioning


- each version is tagged by its id.


key = gambar1.png

id=1111

   1112



if you accidentally delete the key with id 1112, you can still retrieve the file back with id 1111




** once versioning is enabled, if you delete the newer file, the older version can automatically be recovered.
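
a minimal sketch of inspecting those versions, assuming the toro bucket and the gambar1.png key from the example:

# lists every version id plus delete markers for the key
aws s3api list-object-versions --bucket toro --prefix gambar1.png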


========



// s3 lifecycle management



- automates the process of moving objects to a different storage class or deleting objects altogether


- can be used together with versioning


- can be applied to current and previous versions





example:


1 customer creates an object in s3


2 after 7 days it is moved to glacier


3 after 365 days it is permanently deleted





** there is an option to set how many (X) days to wait before moving an object to glacier
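
the example above expressed as a lifecycle rule via the CLI — a minimal sketch, assuming the toro bucket; the rule id and file name are made up:

cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "glacier-after-7d-delete-after-365d",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [ { "Days": 7, "StorageClass": "GLACIER" } ],
      "Expiration": { "Days": 365 }
    }
  ]
}
EOF

# apply the rule to the whole bucket
aws s3api put-bucket-lifecycle-configuration --bucket toro --lifecycle-configuration file://lifecycle.json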


=======



// s3 - Transfer Acceleration



fast and secure transfers over long distances between end users and an s3 bucket.


- uses cloudfront's distributed edge locations.


- instead of uploading your data straight to the bucket, users use a distinct URL for an edge location ( the nearest edge location / data center )



when the data arrives at the edge location, it is automatically routed to s3 over an optimized network path ( the amazon backbone network )
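
a minimal sketch of turning this on and using it, assuming the toro bucket:

# enable transfer acceleration on the bucket
aws s3api put-bucket-accelerate-configuration --bucket toro --accelerate-configuration Status=Enabled

# upload through the accelerate endpoint instead of the regular one
aws s3 cp abc.jpg s3://toro/abc.jpg --endpoint-url https://s3-accelerate.amazonaws.com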




======


// s3 - Presigned url


used when you need temporary access / to allow a user to download a file from a password-protected part of your

web APP. your web app generates a presigned url which will expire after X seconds.



aws s3 presign s3://mybucket/object1 --expires-in 500




^ generated as a url containing an AccessKeyId, an Expires token, and a Signature. it can only be accessed for X amount of seconds
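
roughly what the generated url looks like ( the values below are made-up placeholders, and the exact query parameters depend on the signature version ):

https://toro.s3.amazonaws.com/1/abc.jpg?AWSAccessKeyId=AKIAEXAMPLE&Expires=1701688888&Signature=abc123EXAMPLE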



========


// s3 - multi factor auth delete 


ensures a user cannot delete objects from the bucket unless they have the MFA code.


** only the bucket owner logged in as the root user / with access to the MFA device can delete objects from the bucket.




aws s3api put-bucket-versioning \
  --bucket XXX \
  --versioning-configuration Status=Enabled,MFADelete=Enabled \
  --mfa "mfa-serial-number mfa-code"






1 AWS CLI must be used to turn on MFA

2 the bucket must have versioning turned on



==========



// public permission object configuration



1 amazon s3 > Permissions > untick block all public access


2 overview > make public 



object url -> access via browser
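
the same thing from the CLI — a minimal sketch, assuming the toro bucket and the 1/abc.jpg object; both commands overwrite the existing settings:

# step 1: stop blocking public access on the bucket
aws s3api put-public-access-block --bucket toro --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false

# step 2: make one object public-read
aws s3api put-object-acl --bucket toro --key 1/abc.jpg --acl public-read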


========= 


// versioning configuration



1 amazon s3 > properties > versioning


2 enable

- the only options are enable / suspend





3 check in s3 > overview > version > show / hide

- the version id appears




4 test by uploading a file with the same name
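
the CLI equivalent of steps 1-3 — a minimal sketch, assuming the toro bucket:

# steps 1-2: turn versioning on
aws s3api put-bucket-versioning --bucket toro --versioning-configuration Status=Enabled

# step 3: confirm the status ( Enabled / Suspended )
aws s3api get-bucket-versioning --bucket toro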



=======



// s3 encryption configuration ( server side )


1 amazon s3 > properties > default encryption


2 turn on aes-256 / aws-kms


3 check in s3 > overview > server-side encryption
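
the CLI equivalent — a minimal sketch, assuming the toro bucket and aes-256 as the default algorithm:

# set aes-256 ( SSE-S3 ) as the bucket's default encryption
aws s3api put-bucket-encryption --bucket toro --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'

# verify
aws s3api get-bucket-encryption --bucket toro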


=========



// s3 cli



aws s3 ls     // list all current buckets


aws s3 ls s3://toro    // list the contents of a single bucket




// download a file from the s3 bucket to local


aws s3 cp   s3://toro/1/abc.jpg   ~/desktop/toro





// upload file from local to s3 bucket 


aws s3 cp ~/desktop/toro  s3://toro/1/abc.jpg



// create a presigned url that expires in 500s

// create temporary access


aws s3 presign s3://toro/1/abc.jpg  --expires-in 500






// change s3 storage class to save money $




1 enter bucket > properties > storage class


2 change it to standard / intelligent-tiering / standard-ia / one zone-ia / glacier / glacier deep archive
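
this can also be done per object from the CLI by copying the object onto itself with a new class — a sketch, assuming the toro bucket and the 1/abc.jpg key:

# rewrite the object in place as standard-IA
aws s3 cp s3://toro/1/abc.jpg s3://toro/1/abc.jpg --storage-class STANDARD_IA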





// add management lifecycle



1 enter s3 > management > lifecycle > + add lifecycle rule


2 add a 45-day rule, add a tag



3 select current version

- select transition to standard-ia after 45d   // the minimum is 30d





========



// cross region replication


copy files from 1 bucket to another bucket / across regions / a diff aws acct.



1 create another bucket for destination bucket


2 enable versioning on the source and dest bucket

- properties > versioning


3 set replication

- s3 > properties > replication

- set source = entire bucket 

- choose destination bucket

- optional: change storage class

- optional : change object ownership to another aws acct

- create role



wait until replication completes
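
a rough CLI equivalent of the console steps above — a sketch only; the account id, role name, and destination bucket are placeholders, and the IAM role must already allow s3 replication:

cat > replication.json <<'EOF'
{
  "Role": "arn:aws:iam::111122223333:role/s3-crr-role",
  "Rules": [
    {
      "Status": "Enabled",
      "Prefix": "",
      "Destination": { "Bucket": "arn:aws:s3:::toro-destination" }
    }
  ]
}
EOF

# both buckets must already have versioning enabled
aws s3api put-bucket-replication --bucket toro --replication-configuration file://replication.json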








========



// setup bucket policies

// a json document for defining complex access control




1 s3 > permission > bucket policy

- create the policy as json

- can be copied from the policy generator

- paste it into the bucket policy box

- save



** you can make a policy for who is allowed to upload to s3 -> action: s3:PutObject







