Get to Know AWS Snowball Edge

This post certainly falls into the category of "what can't you do with AWS these days?" It is also an excerpt from the rough draft of my upcoming AWS SysOps Associate text from Pearson Publishing.

Before we dive into Snowball Edge, let’s quickly review AWS Snowball.

Snowball

Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of AWS. With Snowball, you do not need to write any code or purchase any hardware to transfer your data. You follow these steps:

  1. Create a job in the AWS Management Console.
  2. A Snowball appliance is automatically shipped to you.
  3. After it arrives, attach the appliance to your local network, download and run the Snowball client to establish a connection, and then use the client to select the file directories that you want to transfer to the appliance.
  4. The client encrypts and transfers the files to the appliance at high speed.
  5. Once the transfer is complete and the appliance is ready to be returned, the E Ink shipping label updates automatically. You can track the job status through Amazon Simple Notification Service (SNS) notifications, text messages, or directly in the console.
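Step 1 above, creating a job, can also be done programmatically through the Snowball `CreateJob` API. Here is a rough sketch of what that request looks like for an import job; all of the bucket, address, and role identifiers below are placeholders for illustration, not real resources:

```python
# Sketch of the parameters for the Snowball CreateJob API call.
# All identifiers below are placeholders for illustration only.
def build_import_job_request(bucket_arn, address_id, role_arn):
    """Assemble a CreateJob request for importing data into S3."""
    return {
        "JobType": "IMPORT",                     # move data into AWS
        "Resources": {
            "S3Resources": [{"BucketArn": bucket_arn}]
        },
        "AddressId": address_id,                 # where the appliance ships
        "RoleARN": role_arn,                     # IAM role Snowball assumes
        "SnowballType": "EDGE",                  # request a Snowball Edge
        "ShippingOption": "SECOND_DAY",
    }

request = build_import_job_request(
    "arn:aws:s3:::example-import-bucket",
    "ADID00000000-0000-0000-0000-000000000000",
    "arn:aws:iam::123456789012:role/example-snowball-role",
)
```

With boto3, a dictionary like this would be passed as keyword arguments to `boto3.client("snowball").create_job(**request)`.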

Snowball uses multiple layers of security designed to protect your data, including tamper-resistant enclosures, 256-bit encryption, and an industry-standard Trusted Platform Module (TPM) designed to ensure both security and the full chain of custody of your data. Once the data transfer job has been processed and verified, AWS performs a software erasure of the Snowball appliance using industry-standard secure erasure practices.

Snowball Edge

Snowball Edge is a type of Snowball device with onboard storage and compute power for select AWS capabilities. Snowball Edge can undertake local processing and edge-computing workloads in addition to transferring data between your local environment and the AWS Cloud.

Each Snowball Edge device can move data faster than is typically possible over the public Internet, because the transfer happens physically: the appliance is shipped through a regional carrier rather than sending the data over the wire.

Snowball Edge devices come in three configurations: storage optimized, compute optimized, and compute optimized with GPU.

Snowball Edge devices have the following features:

  • Large amounts of storage capacity or compute functionality for devices, depending on the options you choose when you create your job.
  • Network adapters with transfer speeds of up to 100 Gbit/second.
  • Encryption is enforced, protecting your data at rest and in physical transit.
  • You can import or export data between your local environments and S3, physically transporting the data with one or more devices, completely bypassing the public Internet.
  • Snowball Edge devices are their own rugged shipping containers, and the built-in E Ink display changes to show your shipping label when the device is ready to ship.
  • Snowball Edge devices come with an onboard LCD display that can be used to manage network connections and get service status information.
  • You can cluster Snowball Edge devices for local storage and compute jobs to achieve 99.999 percent data durability across 5–10 devices, and to locally grow and shrink storage on demand.
  • You can use the file interface to read and write data to a Snowball Edge device through a file share or NFS mount point.
  • You can write Python-language Lambda functions and associate them with S3 buckets when you create a Snowball Edge device job. Each function triggers whenever a local S3 PUT object action executes on the associated bucket on the appliance.
  • Snowball Edge devices have S3 and EC2 compatible endpoints available, enabling programmatic use cases.
  • Snowball Edge devices support the new sbe1, sbe-c, and sbe-g instance types, which you can use to run compute instances on the device using Amazon Machine Images (AMIs).
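The Lambda bullet above can be sketched as a minimal handler. This is an illustration rather than a tested deployment; the event shape below mirrors a standard S3 PUT notification:

```python
# Minimal sketch of a Python Lambda function that could be associated
# with a local S3 bucket on a Snowball Edge job. The event shape mirrors
# a standard S3 PUT notification; treat this as an illustration only.
def handler(event, context):
    """Record each object written to the associated bucket."""
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        processed.append(f"{bucket}/{key}")
    return {"processed": processed}
```

Invoked with an event containing one PUT record for `data.csv` in bucket `edge-bucket`, this returns `{"processed": ["edge-bucket/data.csv"]}`.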

As always, I hope this post was informative for you, and I would like to thank you for reading!

AWS S3 Gets Some Security Improvements 10/12/2017

Hello S3 lovers! This week, Amazon announced some nice security-related improvements for S3. Enjoy this brief recap:

  • Default Encryption – you can now configure an S3 bucket to require that all objects placed in it be encrypted. The big news is that you can now do this without a bucket policy.
  • Public icons – the list of S3 buckets in the AWS Management Console now shows a prominent yellow indicator when a bucket is publicly accessible based on its permissions.
  • Cross-region replication now supports rewriting the object ACL in the destination region if desired.
  • Cross-region replication now supports objects encrypted with AWS KMS.
  • The detailed inventory report now includes the encryption status of each object, and the report itself can be encrypted.
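The default-encryption item above maps to a bucket encryption configuration. As a sketch (the SSE-S3 algorithm choice here is illustrative; `aws:kms` is the other valid option), the payload for the S3 `PutBucketEncryption` API looks like this:

```python
# Sketch of the ServerSideEncryptionConfiguration payload used by the
# S3 PutBucketEncryption API to enforce default encryption on a bucket.
def default_encryption_config(algorithm="AES256"):
    """Build a default-encryption rule (SSE-S3 here; 'aws:kms' also valid)."""
    return {
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": algorithm
                }
            }
        ]
    }

config = default_encryption_config()
```

With the AWS CLI this corresponds to passing the same JSON to `aws s3api put-bucket-encryption --bucket <name> --server-side-encryption-configuration '<json>'`.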

Want more information on storage in AWS? Check out my AWS Solutions Architect – Storage Services course at CBT Nuggets.

AWS Solutions Architect – Storage Services is Complete!

My latest course will appear on the CBT Nuggets site early next week! Here is the final video list:

Simple Storage Service (S3)
Course Introduction
IT Storage Types
What is S3?
S3 Storage Classes
S3 Object Lifecycle Management
S3 Versioning
Working with S3 Buckets
S3 Metadata
S3 Server Access Logging
S3 ACLs
S3 Bucket Policies
S3 Encryption and Other Security Options
Scripting S3 – An Example
S3 Static Website Hosting

Glacier
What is Glacier?
Working with Glacier Vaults

Elastic Block Store (EBS)
Instance Stores vs EBS
Working with EBS Volumes
EBS Volume Types
Using EBS-Optimized Instances
Protecting EBS Data

Elastic File System (EFS)
Elastic File System Basics
Using EFS

Storage Gateway
Why Use Storage Gateway?
Storage Gateway Basics
Four Types of Storage Gateways

Transfer Services
AWS Import/Export
AWS Snowball