
Thursday, November 30, 2023

AWS Storage note

 // storage access




block-level storage = a place to store files  // bytes stored on disk.



laptop / pc => uses block-level storage. ( hard drive )







// Instance Store Volumes



local instance store volume: a hard drive on the ec2 host


- attached to ec2 instances 

- temporary block level storage

- lifespan = lifespan of ec2 instance


if you stop or terminate the ec2 instance, all data written to the instance store volume will be deleted.  // the physical host may be reused for another ec2 instance, since ec2 is virtual.




good for:

temporary files

scratch data

data that can easily be recreated




- don't write important data to the drives that come with an ec2 instance.




you don't want an important database deleted every time you stop an ec2 instance.









//  Amazon Elastic Block Store  ( EBS )


virtual hard drive / ebs volume.

can be attached directly to an ec2 instance

a hard drive that is persistent



- can persist between stops and starts of an ec2 instance.



we define:

size 

type

config



volume that we need.
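


A minimal boto3 sketch of defining and attaching an EBS volume; the AZ, instance ID, and device name are placeholders.

import boto3

ec2 = boto3.client("ec2")

# create a 100 GiB gp3 volume in the instance's AZ (placeholder AZ)
volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100, VolumeType="gp3")

# wait until the volume is available, then attach it (placeholder instance ID / device)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",
    Device="/dev/sdf",
)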





^ ebs has snapshots => incremental backups of the data.

^ it's important to take regular snapshot backups

^ if the hard drive gets corrupted we don't lose data

^ data can be restored from a snapshot





// incremental backup


An EBS snapshot is an incremental backup. This means that the first backup taken of a volume copies all the data. For subsequent backups, only the blocks of data that have changed since the most recent snapshot are saved.
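

A small sketch of taking an EBS snapshot with boto3 (the volume ID is a placeholder); every snapshot after the first only stores changed blocks.

import boto3

ec2 = boto3.client("ec2")

# first snapshot copies all blocks; later snapshots only store blocks changed since the last one
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",   # placeholder volume ID
    Description="nightly backup",
)
print(snapshot["SnapshotId"])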




==================



// amazon simple storage service   

// amazon S3


- storing files

- a data store that allows you to store and retrieve an unlimited amount of data at any scale

- stores objects in buckets





good for data that needs to be saved elsewhere:



receipts

images

spreadsheets

videos

text files



maximum object size = 5 TB upload
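

A minimal upload sketch with boto3 (bucket and file names are placeholders); upload_file handles multipart uploads for large files, up to the 5 TB object limit.

import boto3

s3 = boto3.client("s3")

# upload a local file as an object (bucket and key are placeholders);
# upload_file switches to multipart uploads for large files automatically
s3.upload_file("receipt.pdf", "my-example-bucket", "receipts/2023/receipt.pdf")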





objects can be versioned to retain old versions / prevent accidental deletes  (see the sketch after this list)



you can create multiple buckets and store data in different classes or tiers



you can set permissions for who can see and access objects



you can stage data between different tiers
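

A sketch of enabling the versioning mentioned above with boto3, so old versions are retained and accidental deletes can be undone (the bucket name is a placeholder).

import boto3

s3 = boto3.client("s3")

# keep every version of each object so accidental deletes/overwrites can be undone
s3.put_bucket_versioning(
    Bucket="my-example-bucket",   # placeholder bucket name
    VersioningConfiguration={"Status": "Enabled"},
)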




tiers:


data that needs to be used frequently

audit data that needs to be retained for several years

===================



// amazon s3 standard = 99.999999999% durability


- 11 nines of durability


objects are designed to remain intact over a given year



data is stored in a way that aws can sustain the concurrent loss of data in 2 separate storage facilities.




> data is stored in at least 3 facilities  // multiple copies reside across locations.




==================


// s3 static website hosting


- a collection of html files, images, etc.



^ the bucket can serve them as an instant website
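

A sketch, assuming a bucket that already allows public reads, of turning it into a static website with boto3 (the bucket name is a placeholder).

import boto3

s3 = boto3.client("s3")

# serve index.html / error.html straight from the bucket
# (the bucket policy must also allow public reads for visitors to see it)
s3.put_bucket_website(
    Bucket="my-example-bucket",   # placeholder bucket name
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)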





==================


// s3 standard-infrequent Access  ( s3 standard-IA)


- data accessed less frequently, but needing rapid access when needed.


- perfect for storing backups, disaster recovery files, and any object that requires long-term storage
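

A sketch of uploading an object directly into S3 Standard-IA by passing a StorageClass (bucket, key, and file names are placeholders).

import boto3

s3 = boto3.client("s3")

# store a backup straight into the infrequent-access class
with open("db-dump.sql.gz", "rb") as f:
    s3.put_object(
        Bucket="my-example-bucket",     # placeholder names
        Key="backups/db-dump.sql.gz",
        Body=f,
        StorageClass="STANDARD_IA",
    )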


===============


// s3 glacier flexible retrieval


- retain data for several years for auditing


- doesn't need to be retrieved very rapidly



you can simply move data here

or you can create vaults and then populate them with archives



Low-cost storage designed for data archiving

Able to retrieve objects within a few minutes to hours


S3 Glacier Flexible Retrieval is a low-cost storage class that is ideal for data archiving. For example, you might use this storage class to store archived customer records or older photos and video files. You can retrieve your data from S3 Glacier Flexible Retrieval from 1 minute to 12 hours.








// s3 glacier vault lock policy


retain data for a specific period of time.  // lock your vault for a specific time




you can create a rule => a write once / read many ( WORM ) policy in s3 glacier


^ locks the policy against future edits



3 options for retrieval, ranging from minutes to hours

2 options for getting data in:

- upload directly to s3 glacier flexible retrieval

- use s3 lifecycle policies
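

A rough sketch of locking a Glacier vault with a WORM-style policy via boto3; the vault name, account ID in the ARN, and policy are placeholders, and the lock only becomes permanent after complete_vault_lock is called within the test window.

import json
import boto3

glacier = boto3.client("glacier")

glacier.create_vault(accountId="-", vaultName="audit-vault")   # placeholder vault name

# WORM-style policy: deny archive deletion until the archive is 365 days old
worm_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "deny-delete-for-365-days",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "glacier:DeleteArchive",
        "Resource": "arn:aws:glacier:us-east-1:111122223333:vaults/audit-vault",
        "Condition": {"NumericLessThan": {"glacier:ArchiveAgeInDays": "365"}},
    }],
}

# initiate the lock (there is a window to test it), then complete it to make the policy immutable
lock = glacier.initiate_vault_lock(
    accountId="-",
    vaultName="audit-vault",
    policy={"Policy": json.dumps(worm_policy)},
)
glacier.complete_vault_lock(accountId="-", vaultName="audit-vault", lockId=lock["lockId"])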



==============


// s3 lifecycle management / policies


- move data automatically between tiers 



1. keep objects in s3 standard for 90 days

2. move to s3 Standard-IA for the next 30 days

3. after 120 days total, automatically move to s3 glacier flexible retrieval




^ you configure this without changing application code

^ the moves are performed automatically
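

A sketch of the 90-day / 120-day example above as a boto3 lifecycle configuration (the bucket name is a placeholder); GLACIER is the API name for the Flexible Retrieval class.

import boto3

s3 = boto3.client("s3")

# 0-90 days: S3 Standard; day 90: move to Standard-IA; day 120: move to Glacier Flexible Retrieval
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",          # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "standard-to-ia-to-glacier",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},    # apply to every object in the bucket
            "Transitions": [
                {"Days": 90, "StorageClass": "STANDARD_IA"},
                {"Days": 120, "StorageClass": "GLACIER"},
            ],
        }]
    },
)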




============



// s3 one zone-infrequent access ( s3 one zone-IA )


Stores data in a single Availability Zone

Has a lower storage price than Amazon S3 Standard-IA

Compared to S3 Standard and S3 Standard-IA, which store data in a minimum of three Availability Zones, S3 One Zone-IA stores data in a single Availability Zone. This makes it a good storage class to consider if the following conditions apply:


You want to save costs on storage.

You can easily reproduce your data in the event of an Availability Zone failure.





// s3 glacier instant retrieval

Works well for archived data that requires immediate access


Can retrieve objects within a few milliseconds


When you decide between the options for archival storage, consider how quickly you must retrieve the archived objects. You can retrieve objects stored in the S3 Glacier Instant Retrieval storage class within milliseconds, with the same performance as S3 Standard.






// s3 glacier deep archive

Lowest-cost object storage class ideal for archiving

Able to retrieve objects within 12 hours

S3 Deep Archive supports long-term retention and digital preservation for data that might be accessed once or twice in a year. This storage class is the lowest-cost storage in the AWS Cloud, with data retrieval from 12 to 48 hours. All objects from this storage class are replicated and stored across at least three geographically dispersed Availability Zones.







// s3 intelligent-tiering


Ideal for data with unknown or changing access patterns

Requires a small monthly monitoring and automation fee per object

In the S3 Intelligent-Tiering storage class, Amazon S3 monitors objects’ access patterns. If you haven’t accessed an object for 30 consecutive days, Amazon S3 automatically moves it to the infrequent access tier, S3 Standard-IA. If you access an object in the infrequent access tier, Amazon S3 automatically moves it to the frequent access tier, S3 Standard.
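

A sketch of moving an existing object into S3 Intelligent-Tiering by copying it over itself with a new storage class (bucket and key are placeholders).

import boto3

s3 = boto3.client("s3")

# copy the object onto itself with a new storage class to move it into Intelligent-Tiering
s3.copy_object(
    Bucket="my-example-bucket",                                   # placeholder names
    Key="logs/app.log",
    CopySource={"Bucket": "my-example-bucket", "Key": "logs/app.log"},
    StorageClass="INTELLIGENT_TIERING",
    MetadataDirective="COPY",
)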







// s3 on outposts

Creates S3 buckets on Amazon S3 Outposts


Makes it easier to retrieve, store, and access data on AWS Outposts


Amazon S3 Outposts delivers object storage to your on-premises AWS Outposts environment. Amazon S3 Outposts is designed to store data durably and redundantly across multiple devices and servers on your Outposts. It works well for workloads with local data residency requirements that must satisfy demanding performance needs by keeping data close to on-premises applications.






============


// data metadata and key


In object storage, each object consists of data, metadata, and a key.

The data might be an image, video, text document, or any other type of file. Metadata contains information about what the data is, how it is used, the object size, and so on. An object’s key is its unique identifier.



when you modify a file in block storage, only the pieces that are changed are updated. When a file in object storage is modified, the entire object is updated.
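

A sketch that makes the data / metadata / key split concrete with boto3 (all names are placeholders): the Body is the data, the Metadata dict describes it, and the Key is the unique identifier.

import boto3

s3 = boto3.client("s3")

# Key = unique identifier, Body = the data itself, Metadata = information about the data
with open("cat.jpg", "rb") as f:
    s3.put_object(
        Bucket="my-example-bucket",      # placeholder names
        Key="photos/2023/cat.jpg",
        Body=f,
        Metadata={"camera": "pixel-7", "taken": "2023-11-30"},
    )

# read the metadata back without downloading the data
head = s3.head_object(Bucket="my-example-bucket", Key="photos/2023/cat.jpg")
print(head["Metadata"])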

==============





// EBS VS S3



ebs:

sizes up to 16 TiB

survives termination of the ec2 instance

ssd by default

hdd options available



s3:

unlimited storage

individual objects up to 5 TB

write once / read many

99.999999999% durability





s3:

web enabled

regionally distributed

offers cost savings

serverless




object storage: docs, images, files   // every time an object changes, the entire file must be re-uploaded



block storage: blocks.   e.g. editing an 80 GB video: edit, save, and the storage engine only updates the changed blocks




==============


// amazon Elastic File System / EFS


- managed file system

- shared file system across applications

- multiple instances can access the data in EFS at the same time

- automatically scales up and down





with ebs:

volumes attach to an ec2 instance

an AZ-level resource

needs to be in the same AZ as the ec2 instance it attaches to

volumes do not auto scale -> if you provision 5 TB, you have 5 TB



with efs:

multiple instances can read and write simultaneously

a true linux file system

a regional resource / ec2 instances across the same region can access it

automatically scales as you write data
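

A sketch of creating an EFS file system and a mount target with boto3 (subnet and security group IDs are placeholders); instances in the VPC can then mount the same file system over NFS.

import boto3

efs = boto3.client("efs")

# one shared file system that grows and shrinks automatically as data is written
fs = efs.create_file_system(CreationToken="shared-app-data", PerformanceMode="generalPurpose")

# a mount target in a subnet lets instances in that AZ mount the file system over NFS
# (placeholder subnet / security group IDs; the file system must be 'available' first)
efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0123456789abcdef0",
    SecurityGroups=["sg-0123456789abcdef0"],
)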



==============
