Aws 03 S3

Amazon S3 provides secure, durable, highly scalable object storage. It can store and retrieve any amount of data from anywhere via its web services interface. S3 is integrated with other AWS services and offers low-cost storage that is durable, highly available, scalable, and secure. Objects stored in S3 can be used for a variety of purposes including content storage and distribution, backup and archiving, static website hosting, and as storage for cloud-native applications.

Amazon S3

Secure, durable, highly scalable object storage

Accessible via a simple web services interface
Store & retrieve any amount of data
Use alone or together with other AWS services

Integrated · Low Cost · Durable · Available · Easy to Use · Secure · Scalable · High Performance

Use cases: Content Storage & Distribution · Backup & Archiving · Big Data Analytics · Disaster Recovery · Static Website Hosting · Cloud-native Application Data

Agenda

Amazon S3 Concepts & Fundamentals


Namespaces
Access Controls
Storage Classes
Encryption & Other Security Features
Versioning & Cross-Region Replication
Lifecycle Rules
Website Hosting
AMAZON S3 CONCEPTS
BUCKETS

Containers for objects stored in S3


Serve several purposes:
Organise the Amazon S3 namespace at the highest level
Identify the account responsible for charges
Play a role in access control
Serve as the unit of aggregation for usage reporting
OBJECTS

Fundamental entities stored in Amazon S3


Consist of data & metadata
Data portion is opaque to Amazon S3
Metadata is a set of name-value pairs that describe the object
Object is uniquely identified within a bucket by a key (name) and a version ID
Example: http://doc.s3.amazonaws.com/2006-03-01/AmazonS3.wsdl — bucket 'doc', key '2006-03-01/AmazonS3.wsdl'

KEYS

Unique identifier for an object within a bucket.


Every object in a bucket has exactly one key
The combination of bucket, key & version ID uniquely identifies each object
REGIONS

The geographical region where Amazon S3 will store the buckets that you create
Choose a region to optimise latency, minimise costs, or address regulatory requirements.

AMAZON S3 FUNDAMENTALS

Highly scalable data storage
A web store, not a file system
Fast · Economical
Highly available & durable
Access via APIs: http://aws.amazon.com/documentation/s3/
Access via SDKs: http://aws.amazon.com/tools/ (e.g. https://github.com/awslabs/aws-sdk-go)

AWS CLI

Access via AWS CLI:
List buckets
Copy a file to an object
List a bucket's contents
Stream the contents of an object to STDOUT
Delete an object
Sync a directory with a bucket
Delete a bucket's contents
AWS S3 CLI help
Detailed help on a specific command
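The commands themselves were shown as console screenshots that did not survive this export; the following is a minimal sketch of the equivalent commands (the bucket and file names are hypothetical):

$ aws s3 ls                               # list buckets
$ aws s3 cp localfile.txt s3://mybucket/  # copy a file to an object
$ aws s3 ls s3://mybucket/                # list a bucket's contents
$ aws s3 cp s3://mybucket/key.txt -       # stream an object's contents to STDOUT
$ aws s3 rm s3://mybucket/key.txt         # delete an object
$ aws s3 sync ./mydir s3://mybucket/      # sync a directory with a bucket
$ aws s3 rm s3://mybucket/ --recursive    # delete a bucket's contents
$ aws s3 help                             # AWS S3 CLI help
$ aws s3 cp help                          # detailed help on a specific command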


A web store, not a file system

Write once, read many
Eventually consistent

Behind the scenes, each region comprises multiple Availability Zones, each with its own load balancers, web servers, and indexing & storage layers; S3 replicates your data across them.
A web store, not a file system

New Objects
S3 synchronously stores your data across multiple facilities before returning SUCCESS
Read-after-write consistency, except the US-STANDARD region

Updates
Write then read: could report that the key does not exist
Write then list: might not include the key in the list
Overwrite then read: old data could be returned

Deletes
Delete then read: could still get old data
Delete then list: deleted key could be included in the list

Find out more here: docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html
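Because reads can lag writes in the eventually consistent cases above, client code sometimes retries; here is a minimal, hedged sketch using the boto library from the deck's later examples (the bucket and key names are hypothetical):

import time
import boto

conn = boto.connect_s3()
bucket = conn.get_bucket('mybucket')   # hypothetical bucket name

# Retry a read-after-write a few times with exponential backoff
# before concluding that the key really does not exist.
key = None
for attempt in range(5):
    key = bucket.get_key('new-object.txt')   # returns None if not (yet) visible
    if key is not None:
        break
    time.sleep(2 ** attempt)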


NAMESPACES

Globally Unique

Bucket Name + Object Name (key)

Amazon S3 namespace example:

Bucket: ianm-aws-docs — objects: s3-webinar.pptx, vid/s3-webinar.mp4
Bucket: ianm-aws-bootstrap — objects: wp/bootstrap.sh, wp/credentials.txt
Bucket: aws-exampl.es — objects: index.html, logo.png

Object key
Unique within a bucket
Max 1024 bytes UTF-8, including 'path' prefixes

assets/js/jquery/plugins/jtables.js

an example object key


Throughput Optimisation

S3 automatically partitions based upon key prefix.

With incrementing game IDs, all keys share a common leading prefix and land in a single partition:

Bucket: mynewgame
Object keys:
2134857/gamedata/start.png
2134857/gamedata/resource.rsrc
2134857/gamedata/results.txt
2134858/gamedata/start.png
2134858/gamedata/resource.rsrc
2134858/gamedata/results.txt
2134859/gamedata/start.png
2134859/gamedata/resource.rsrc
2134859/gamedata/results.txt

Partition: mynewgame/2

With the game IDs reversed, the keys spread across multiple partitions:

Bucket: mynewgame
Object keys:
7584312/gamedata/start.png
7584312/gamedata/resource.rsrc
7584312/gamedata/results.txt
8584312/gamedata/start.png
8584312/gamedata/resource.rsrc
8584312/gamedata/results.txt
9584312/gamedata/start.png
9584312/gamedata/resource.rsrc
9584312/gamedata/results.txt

Partitions: mynewgame/7, mynewgame/8, mynewgame/9
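A minimal sketch of the reversed-ID technique in Python (the helper name is ours, not from the deck):

def game_key(game_id, filename):
    # Reverse the numeric game ID so key prefixes are spread across
    # S3 partitions instead of clustering on a common leading digit.
    reversed_id = str(game_id)[::-1]
    return '%s/gamedata/%s' % (reversed_id, filename)

print(game_key(2134857, 'start.png'))   # -> 7584312/gamedata/start.png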
ACCESS CONTROLS

SECURE BY DEFAULT
You decide what to share
Apply policies to buckets and objects

Policies, ACLs & IAM
Use S3 bucket policies, ACLs or IAM to define rules
IAM Policies
Fine grained
Administer as part of role-based access
Apply policies to S3 at role, user & group level

Example: Allow Bob and Jane the PutObject action on resource arn:aws:s3:::mybucket/*

Find out more here: aws.amazon.com/iam
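Written out as a policy document, the slide's fragments correspond roughly to the following (a sketch, not taken verbatim from the deck):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::mybucket/*"
    }
  ]
}

Attached to Bob's and Jane's IAM users (or a group they share), this grants them upload rights to mybucket.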
Bucket Policies
Fine grained
Apply policies at the bucket level in S3
Incorporate user restrictions without using IAM

Example: on mybucket, allow Bob and Jane the PutObject action on resource arn:aws:s3:::mybucket/*

Bucket Policies — Granting Read-Only Permission to an Anonymous User

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::examplebucket/*"]
    }
  ]
}

"Effect" — the effect of the policy, allow or deny
"Principal" — who the policy applies to; * means everyone
"Action" — the actions allowed/denied by this policy
"Resource" — the AWS resource that this policy applies to; in this case all objects in examplebucket
Bucket Policies — Restricting Access to Specific IP Addresses

{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "IpAddress": {"aws:SourceIp": "54.240.143.0/24"},
        "NotIpAddress": {"aws:SourceIp": "54.240.143.188/32"}
      }
    }
  ]
}
Other Example Use-Cases for Bucket Policies
• Granting Permissions to Multiple Accounts with Added Conditions
• Restricting Access to a Specific HTTP Referrer
• Granting Permission to an Amazon CloudFront Origin Identity
• Adding a Policy to Require MFA Authentication
• Granting Cross-Account Permissions to Upload Objects While Ensuring the Bucket Owner Has Full Control

Find out more here: docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
ACLs
Coarse grained
Apply access control rules at the bucket and/or object level in S3

Example: on mybucket/myobject, allow Everyone, Bob & Jane the Read action

You can use ACLs to grant basic read/write permissions to other AWS accounts.
There are limits to managing permissions using ACLs. For example, you can grant permissions only to other AWS accounts; you cannot grant permissions to users in your own account.
DEMO: GETTING STARTED
STORAGE CLASSES

Standard
Designed to provide 99.999999999% durability and 99.99% availability of objects over a given year
Designed to sustain the concurrent loss of data in two facilities

Reduced Redundancy Storage
Reduces costs by storing data at lower levels of redundancy than Standard storage
Designed to provide 99.99% durability and 99.99% availability of objects over a given year

Glacier
Suitable for archiving data, where data access is infrequent and a retrieval time of several hours is acceptable
Uses the very low-cost Amazon Glacier storage service, but managed through Amazon S3
AWS CLI

Moving Objects between S3 storage classes

You can specify the storage class of an object when uploading or creating it:

$ aws s3 cp aws_uki.txt s3://aws-ianm-s3-masterclass/ --storage-class REDUCED_REDUNDANCY

You can change the storage class of an object that is already stored in Amazon S3 by copying it to the same key name in the same bucket:

$ aws s3 cp s3://aws-ianm-s3-masterclass/aws_uki.txt s3://aws-ianm-s3-masterclass/aws_uki.txt --storage-class STANDARD
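To verify an object's storage class after the copy, a hedged sketch using s3api (note that S3 omits the storage-class header for STANDARD objects):

$ aws s3api head-object --bucket aws-ianm-s3-masterclass --key aws_uki.txt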
AWS Console

Moving Objects between storage classes

Python

Moving Objects between storage classes:

$ python
>>> import boto
>>> conn = boto.connect_s3()
>>> mybucket = conn.get_bucket('aws-ianm-s3-masterclass')
>>> mybucket.copy_key('aws_uki.txt', 'aws-ianm-s3-masterclass', 'aws_uki.txt', storage_class='REDUCED_REDUNDANCY')
What about Amazon Glacier?

We will come to this when we talk about Lifecycle Management.
ENCRYPTION

Securing Data in Transit

Securely upload or download your data via SSL-encrypted endpoints using HTTPS.
Alternatively, use a client encryption library such as the Amazon S3 Encryption Client to encrypt your data before uploading to Amazon S3.

http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3EncryptionClient.html
SECURING DATA AT REST

Amazon S3 Server Side Encryption (SSE)

Amazon S3 will automatically encrypt your data on write and decrypt your data on retrieval.
Uses Advanced Encryption Standard (AES) 256-bit symmetric keys.
There are three different ways to manage encryption keys:

SSE with Amazon S3 Key Management (SSE-S3)

With SSE-S3, Amazon S3 will encrypt your data at rest and manage the encryption keys for you.
Each object in the data bucket is encrypted with a per-object key; that key is itself stored encrypted under a master key that S3's key management rotates monthly.
SSE with Customer-Provided Keys (SSE-C)

With SSE-C, Amazon S3 will encrypt your data at rest using the custom encryption keys that you provide.
Amazon S3 doesn't store your encryption key anywhere; the key is immediately discarded after Amazon S3 completes your requests.
SSE with AWS KMS (SSE-KMS)

With SSE-KMS, Amazon S3 will encrypt your data at rest using keys that you manage in the AWS Key Management Service (KMS).
AWS KMS provides an audit trail so you can see who used your key to access which object and when.
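A hedged CLI sketch of requesting each mode at upload time with recent versions of the AWS CLI (the bucket, key-id and file names are hypothetical):

$ aws s3 cp file.txt s3://mybucket/ --sse AES256                                  # SSE-S3
$ aws s3 cp file.txt s3://mybucket/ --sse aws:kms                                 # SSE-KMS, default key
$ aws s3 cp file.txt s3://mybucket/ --sse aws:kms --sse-kms-key-id <key-id>       # SSE-KMS, specific key
$ aws s3 cp file.txt s3://mybucket/ --sse-c AES256 --sse-c-key fileb://key.bin    # SSE-C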
ADDITIONAL SECURITY FEATURES

AUDIT LOGS
MULTI-FACTOR AUTHENTICATION DELETE
TIME-LIMITED ACCESS TO OBJECTS

Signed URLs
Provide time-limited access to specific objects; the link expires after a set period.
Use on objects in non-public buckets to prevent access once the signed URL has expired.

https://ianmas-aws.testbucket.s3.amazonaws.com/testfile.txt
?Signature=JHCa39GV1fKRKkEnAWzI88lH7f8%3D
&Expires=1391425438
&AWSAccessKeyId=AKIAIRBKBJ3ZAYAXFC2Q
Generating time-limited signed links — Python

>>> import boto
>>> conn = boto.connect_s3()
>>> # 1st parameter is the link lifetime in seconds
>>> conn.generate_url(3600, 'GET', bucket='aws-ianm-s3-masterclass', key='aws_uki.txt')
'https://aws-ianm-s3-masterclass.s3.amazonaws.com/aws_uki.txt?Signature=hEBUPczy8DXCyqTz1JHgEaihvMo%3D&Expires=1431697820&AWSAccessKeyId=AKIAI65L23YDGKGQTRFA'

>>> # force_http=True forces a non-SSL link
>>> conn.generate_url(30, 'GET', bucket='aws-ianm-s3-masterclass', key='aws_uki.txt', force_http=True)
'http://aws-ianm-s3-masterclass.s3.amazonaws.com/aws_uki.txt?Signature=yIYPyn0DMXk2cOcZkWPRuSHoKPA%3D&Expires=1431694649&AWSAccessKeyId=AKIAI65L23YDGKGQTRFA'

Requesting a signed link after it has expired returns an error response: link expired.
VERSIONING & CROSS REGION REPLICATION

VERSIONING
Bucket level: automatically preserves all copies of objects
Persistent: even deleted object history is held

AWS Console
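Versioning can also be enabled from the CLI; a minimal sketch using the bucket name from the deck's examples:

$ aws s3api put-bucket-versioning --bucket aws-ianm-s3-masterclass --versioning-configuration Status=Enabled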
AWS CLI

Working with versioned objects via the CLI uses the 'aws s3api' CLI command, which has additional functionality over 'aws s3':

$ aws s3api list-object-versions --bucket aws-ianm-s3-masterclass

http://docs.aws.amazon.com/cli/latest/reference/s3api/index.html#cli-aws-s3api
AWS CLI

$ aws s3api list-object-versions --bucket aws-ianm-s3-masterclass

VERSIONS "36bc67941830bb388c9bf201440683a4" True s3-masterclass-logo.txt 2015-05-18T15:47:38.000Z 337 STANDARD q2FcbXFAgU7P82Hh6y81hYkrbGYtOCCc
OWNER ianm 4ee381d180ee58aa815e7d4a3a5f739b20bb8980a568947384e59c8d0ff8379b
VERSIONS "e0253c9354f61097cbf6ce239afd0464" False s3-masterclass-logo.txt 2015-05-18T15:47:30.000Z 337 STANDARD p_4oF4eG7Be.0aNXqUYFlZL7Q9OuK9nU
OWNER ianm 4ee381d180ee58aa815e7d4a3a5f739b20bb8980a568947384e59c8d0ff8379b
VERSIONS "84defb05031845e8b0616a9b70b2ae93" False s3-masterclass-logo.txt 2015-05-18T15:47:10.000Z 328 STANDARD BykgMQ6bRY02Y6krxvMaMvwrL2Ep2e6X
OWNER ianm 4ee381d180ee58aa815e7d4a3a5f739b20bb8980a568947384e59c8d0ff8379b
AWS CLI

$ aws s3api get-object --bucket aws-ianm-s3-masterclass --key s3-masterclass-logo.txt --version-id q2FcbXFAgU7P82Hh6y81hYkrbGYtOCCc version.txt

bytes 337 text/plain "36bc67941830bb388c9bf201440683a4" Mon, 18 May 2015 15:47:38 GMT q2FcbXFAgU7P82Hh6y81hYkrbGYtOCCc

$ more version.txt
[ASCII-art "S3 masterclass" logo]
Version 3

http://docs.aws.amazon.com/cli/latest/reference/s3api/index.html#cli-aws-s3api
Listing object versions — Python

>>> import boto
>>> conn = boto.connect_s3()
>>> bucket = conn.get_bucket('aws-ianm-s3-masterclass')
>>> versions = bucket.list_versions()
>>> for version in versions:
...     print version.name + ' ' + version.version_id
...
s3-masterclass-logo.txt q2FcbXFAgU7P82Hh6y81hYkrbGYtOCCc
s3-masterclass-logo.txt p_4oF4eG7Be.0aNXqUYFlZL7Q9OuK9nU
s3-masterclass-logo.txt BykgMQ6bRY02Y6krxvMaMvwrL2Ep2e6X
Getting a specific object version — Python

>>> key = bucket.get_key('s3-masterclass-logo.txt', version_id='p_4oF4eG7Be.0aNXqUYFlZL7Q9OuK9nU')
>>> key.get_contents_as_string()
'[ASCII-art logo] ... Version 2'
>>> key.generate_url(300)
'https://aws-ianm-s3-masterclass.s3.amazonaws.com/s3-masterclass-logo.txt?Signature=c%2BjgGY5EZ4tDuI0xcKg572qL%2B9Y%3D&Expires=1431965853&AWSAccessKeyId=AKIAI65L23YDGKGQTRFA&versionId=p_4oF4eG7Be.0aNXqUYFlZL7Q9OuK9nU'
CROSS REGION REPLICATION

AWS Console
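The deck configures replication through the console; a hedged CLI sketch of the equivalent (the IAM role ARN and account ID are hypothetical, and versioning must already be enabled on both buckets):

$ aws s3api put-bucket-replication --bucket aws-ianm-s3-masterclass \
    --replication-configuration '{
      "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
      "Rules": [{
        "Status": "Enabled",
        "Prefix": "",
        "Destination": {"Bucket": "arn:aws:s3:::aws-ianm-s3-masterclass-fra"}
      }]
    }'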
AWS CLI

$ aws s3 cp s3-masterclass-logo.txt s3://aws-ianm-s3-masterclass
upload: ./s3-masterclass-logo.txt to s3://aws-ianm-s3-masterclass/s3-masterclass-logo.txt

$ aws s3 ls s3://aws-ianm-s3-masterclass-fra --region=eu-central-1
2015-05-18 17:27:47 337 s3-masterclass-logo.txt
LIFECYCLE RULES

Object Deletion
Permanently delete objects from S3

Object Archiving
Move objects from S3 to Glacier

Amazon Glacier
Durable: designed for 99.999999999% durability of archives
Cost Effective: write-once, read-never; cost effective for long-term storage; pay for accessing data
Example lifecycle for log objects over time:
1. Logs are written to S3 and remain accessible from S3.
2. A transition rule moves objects to Glacier after a set period.
3. When an archived object is needed, restoration is requested for x hrs; the restore takes 3-5 hrs.
4. The restored copy is held in S3 RRS for x hrs, accessible from S3 alongside the Glacier archive.
5. An expiry rule eventually deletes the objects permanently.
Configuring Lifecycle Rules — AWS Console
Configuring Lifecycle Rules — .Net

using (client = new AmazonS3Client())
{
    var lifeCycleConfiguration = new LifecycleConfiguration()
    {
        Rules = new List<LifecycleRule>
        {
            new LifecycleRule
            {
                Id = "Archive and delete rule",
                Prefix = "projectdocs/",
                Status = LifecycleRuleStatus.Enabled,
                // Transition to Glacier after 1 year
                Transition = new LifecycleTransition()
                {
                    Days = 365,
                    StorageClass = S3StorageClass.Glacier
                },
                // Delete object after 10 years
                Expiration = new LifecycleRuleExpiration()
                {
                    Days = 3650
                }
            }
        }
    };
    // The deck's snippet ends here; applying the configuration would use
    // the SDK's put-lifecycle call on the target bucket.
}
Restoring from Amazon Glacier — AWS Console
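The console flow can also be driven from the CLI; a hedged sketch (the bucket and key names are hypothetical):

$ aws s3api restore-object --bucket mybucket --key projectdocs/report.txt \
    --restore-request Days=7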
WEBSITE HOSTING

Static Website Hosting with Amazon S3

You can host your entire static website on Amazon S3 for a low-cost, highly available hosting solution that can scale automatically to meet traffic demands.

With Amazon S3, you can reliably serve your traffic and handle unexpected peaks without worrying about scaling your infrastructure.

docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html
Static Website Hosting — Bucket Properties
Setting default documents
Redirecting requests
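These properties are set in the console in the deck; a minimal CLI sketch of the same configuration (the bucket name is hypothetical):

$ aws s3 website s3://mybucket/ --index-document index.html --error-document error.html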
Bucket Policy

{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": {"AWS": "*"},
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::example-bucket/*"]
    }
  ]
}
Website Addressing
{bucket-name}.s3-website-{region}.amazonaws.com
e.g. mybucket.s3-website-eu-west-1.amazonaws.com

Normal Addressing
s3-{region}.amazonaws.com/{bucket-name}/{object-key}
e.g. s3-eu-west-1.amazonaws.com/mybucket/img.png

{bucket-name}.s3-{region}.amazonaws.com/{object-key}
e.g. mybucket.s3-eu-west-1.amazonaws.com/img.png
Custom Domain with Route 53

DNS record set for aws-exampl.es, managed in Route 53:
• Two website buckets, named after the hostnames they serve: www.aws-exampl.es and aws-exampl.es, each holding index.html and error.html.
• The www.aws-exampl.es bucket is configured as a website redirect to aws-exampl.es.
• An A record 'Alias' points aws-exampl.es @ to the S3 website endpoint: s3-website-eu-west-1.amazonaws.com.
• A CNAME points www. to: www.aws-exampl.es.s3-website-eu-west-1.amazonaws.com.
docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html
SUMMARY

1. S3 provides developers with secure, durable & highly scalable object storage.
2. S3 can be used alone or together with other AWS services and 3rd-party tools & services.
3. Cost effective for a wide variety of use-cases, from cloud applications, content distribution, backup, archiving & disaster recovery to analytics.
THINGS WE DIDN'T COVER

Amazon CloudFront
aws.amazon.com/cloudfront

S3 Event Notifications
http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html

AWS Lambda
aws.amazon.com/lambda
S3 event notifications can automatically trigger Lambda functions in the cloud.