What Does “Cloud” Really Mean???

September 27, 2018 at 10:58 pm


I was inspired to write this post after watching the latest Network Chuck YouTube video where he interviewed me regarding AWS at the 2018 Cisco Live conference. What struck me was the excitement surrounding the video as evidenced by the comments on all the major social media channels. There are so many students excited to start these various certification tracks!

In that regard, I wanted to break down what cloud really is. For this definition, we turn to NIST, which identifies five common characteristics of cloud solutions. Here they are for you in plain English. Keep in mind that I turned to NIST because the specific characteristics it points out are frequently tested across all the various cloud vendors.

Questions? Please let me know in the comments below this post. I am VERY responsive to these comments.

  • On-demand self-service – this characteristic means that a customer of cloud technologies (even if you are a customer of your own company’s private cloud) can provision and manage resources without the intervention of cloud hosting administrative personnel. For example, you might decide that you need a new Web server to advertise a particular product or service. You can completely provision, configure, and deploy this Web server without contacting anyone responsible for hosting the cloud solution.
  • Broad network access – this aspect of cloud states that your cloud resources should be available over the network and accessed through standard mechanisms. These standard access approaches (such as HTTPS) promote the use of the cloud by thin or thick client platforms (for example, mobile phones, tablets, laptops, and workstations).
  • Resource pooling – the provider’s computing resources are pooled to serve multiple clients using a multi-tenant model. This model allows multiple customers to securely use the same physical hardware of the provider. At any time, the cloud provider can use different physical and virtual resources dynamically assigned and reassigned according to consumer demand. You should note that this approach provides a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources. If required, the customer is typically able to specify location at a higher level of abstraction (such as country, state, or datacenter). Examples of resources that are typically pooled include storage, processing, memory, and network bandwidth.
  • Rapid elasticity – capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward in accordance with demand from customers. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time.
  • Measured service – cloud systems automatically control and optimize resource use by leveraging a metering capability. This is done by the provider at some level of abstraction appropriate to the type of service. For example, the metering may be based on storage, processing, bandwidth, or active user accounts. Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service. This is where cloud services your IT department pays for are often compared to a utility bill. Like the electric bill, you can be billed monthly, for just those services you used.
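To make the measured service idea concrete, here is a minimal sketch of utility-style billing in Python. The rates and usage figures are invented for illustration; real cloud pricing varies by service, region, and tier.

```python
# Hypothetical metered-billing sketch: each resource is metered per unit
# and billed like a utility. All rates and usage numbers are made up.

RATES = {                       # illustrative price per unit, in dollars
    "storage_gb_month": 0.023,
    "compute_hours": 0.0116,
    "bandwidth_gb": 0.09,
}

def monthly_bill(usage):
    """Sum metered usage times the per-unit rate, utility-bill style."""
    return round(sum(RATES[item] * amount for item, amount in usage.items()), 2)

bill = monthly_bill({"storage_gb_month": 100, "compute_hours": 720, "bandwidth_gb": 50})
print(bill)
```

Just like the electric bill, you pay each month only for the units you actually consumed.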

A Sample AWS Implementation

September 20, 2018 at 7:34 pm

In this sample Nugget from the AWS Certified Cloud Practitioner course at CBT Nuggets, we examine an AWS solution and how it uses the various services.

Certified Cloud Practitioner

AWS Networking Components – Per the 2018 Sol Arch Exam

August 22, 2018 at 3:12 pm

Solutions Architect

Here are just some of the networking components you should be familiar with if you are interested in mastering AWS.

  • Network Interfaces – this logical network component serves to represent a physical network interface card (NIC); as such, this component can be configured with IPv4 and IPv6 addresses
  • Route Tables – just as would exist on a physical router, AWS route tables contain a set of rules, called routes, that are used to determine where network traffic is directed
  • Internet Gateways – an internet gateway serves two purposes: to provide a target in your VPC route tables for internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses
  • Egress-Only Internet Gateways – a VPC component that allows outbound communication over IPv6 from instances in your VPC to the Internet, and prevents the Internet from initiating an IPv6 connection with your instances
  • DHCP Options Sets – DHCP provides a standard for passing configuration information to hosts on a TCP/IP network; the options field of a DHCP message contains the configuration parameters; some of those parameters are the domain name, domain name server, and the netbios-node-type; the option sets allow you to configure such options for your virtual private clouds (VPC)
  • DNS – AWS provides you with a DNS server for your VPC, but it is important to realize that you can also use your own
  • Elastic IP Addresses – a static IPv4 address designed for dynamic cloud computing; an Elastic IP address is associated with your AWS account; with this address, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account
  • VPC Endpoints – enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection
  • NAT – you can use a NAT device to enable instances in a private subnet to connect to the Internet (for example, for software updates) or other AWS services, but prevent the Internet from initiating connections with the instances; AWS offers two kinds of NAT devices—a NAT gateway or a NAT instance, but strongly recommends the use of NAT gateways
  • VPC Peering – a networking connection between two VPCs that enables you to route traffic between them privately; you can create a VPC peering connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region
  • ClassicLink – allows you to link your EC2-Classic instance to a VPC in your account, within the same region; this allows you to associate the VPC security groups with the EC2-Classic instance, enabling communication between your EC2-Classic instance and instances in your VPC using private IPv4 addresses
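To see how a route table actually directs traffic, here is a short sketch of longest-prefix matching in Python. The routes and target names (local, igw-…, pcx-…) are illustrative placeholders, not real AWS identifiers.

```python
# A minimal sketch of how a route table picks a route: the most specific
# (longest) matching prefix wins, just as on a physical router.
import ipaddress

ROUTES = [
    ("10.0.0.0/16", "local"),        # traffic inside the VPC
    ("0.0.0.0/0",   "igw-12345"),    # default route to an internet gateway
    ("10.1.0.0/16", "pcx-67890"),    # a VPC peering connection
]

def lookup(dest_ip):
    """Return the target of the most specific route matching dest_ip."""
    ip = ipaddress.ip_address(dest_ip)
    matches = [(ipaddress.ip_network(cidr), target)
               for cidr, target in ROUTES
               if ip in ipaddress.ip_network(cidr)]
    # Longest prefix (largest prefixlen) wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.0.4.9"))   # stays inside the VPC
print(lookup("8.8.8.8"))    # falls through to the default route
```

Notice that 10.0.4.9 matches both the /16 and the 0.0.0.0/0 default route, and the more specific /16 wins.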

Building Performant Amazon (AWS) Aurora Solutions

August 16, 2018 at 12:03 pm

Here is another snippet from my upcoming AWS Solutions Architect Certification Guide. This text will be one of a kind, direct Certification prep!

Amazon Aurora

Remember, Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database built by Amazon specifically for AWS. Amazon's goal here was to combine the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Here are best practices you should keep handy when working with Aurora MySQL databases.

Which DB Instance are you Connected To?

Use the innodb_read_only global variable to determine which DB instance in an Aurora DB cluster you are connected to. Here is an example:

show global variables like 'innodb_read_only';

This variable is set to ON if you are connected to an Aurora replica or OFF if you are connected to the primary instance. This value is critical to ensure that any of your write operations are using the correct connection.
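Here is a hedged sketch of how an application might use that variable as a guard before writing. The `fetch` of the variable itself would run the SHOW statement above over a real MySQL connection (for example, with a driver such as PyMySQL); the helper below only shows the decision logic.

```python
# Illustrative guard built on innodb_read_only. In a real application,
# variable_value would come from running the SHOW statement over a live
# MySQL connection; here we just pass the value in directly.

def is_writable(variable_value):
    """innodb_read_only is OFF on the Aurora primary, ON on replicas."""
    return variable_value.upper() == "OFF"

def guard_write(variable_value):
    """Refuse to issue writes over a connection to a read-only replica."""
    if not is_writable(variable_value):
        raise RuntimeError("connected to an Aurora replica; refusing to write")
    return "ok to write"

print(guard_write("OFF"))
```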

Using T2 Instances

Aurora MySQL instances that use the db.t2.small or db.t2.medium DB instance classes are best suited for applications that do not support a high workload for an extended amount of time. Amazon recommends only using the db.t2.small and db.t2.medium DB instance classes for development and test servers, or other non-production servers.

The MySQL Performance Schema should not be enabled on Amazon Aurora MySQL T2 instances. If the Performance Schema is enabled, the T2 instance might run out of memory.

Amazon recommends the following when you use a T2 instance for the primary instance or Aurora Replicas in an Aurora MySQL DB cluster:

  • If you use a T2 instance as a DB instance class in your DB cluster, all instances in the DB cluster should use the same DB instance class
  • Monitor your CPU Credit Balance (CPUCreditBalance) to ensure that it is at a sustainable level
  • When you have exhausted the CPU credits for an instance, you see an immediate drop in the available CPU and an increase in the read and write latency for the instance; this results in a severe decrease in the overall performance of the instance
  • If your CPU credit balance is not at a sustainable level, modify your DB instance to use one of the supported R3 DB instance classes (scale compute)
  • Monitor the replica lag (AuroraReplicaLag) between the primary instance and the Aurora Replicas in the Aurora MySQL DB cluster
  • If an Aurora Replica runs out of CPU credits before the primary instance, the lag behind the primary instance results in the Aurora Replica frequently restarting
  • If you see a sustained increase in replica lag, make sure that your CPU credit balance for the Aurora Replicas in your DB cluster is not being exhausted
  • If your CPU credit balance is not at a sustainable level, modify your DB instance to use one of the supported R3 DB instance classes (scale compute)
  • Keep the number of inserts per transaction below 1 million for DB clusters that have binary logging enabled
  • If the DB cluster parameter group for your DB cluster has the binlog_format parameter set to a value other than OFF, then your DB cluster might experience out-of-memory conditions if the DB cluster receives transactions that contain over 1 million rows to insert
  • You can monitor the freeable memory (FreeableMemory) metric to determine if your DB cluster is running out of available memory
  • You can check the write operations (VolumeWriteIOPS) metric to see if your primary instance is receiving a heavy load of write operations; if this is the case, update your application to limit the number of inserts in a transaction to less than 1 million; alternatively, you can modify your instance to use one of the supported R3 DB instance classes (scale compute)
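The monitoring advice above can be sketched as a simple threshold check. The metric samples below are invented; in practice they would come from CloudWatch's CPUCreditBalance metric, and the low-water mark is an assumption you would tune for your workload.

```python
# Illustrative check: flag a T2 instance whose CPU credit balance is low
# and still draining - the point at which scaling compute is recommended.

def credit_trend(samples):
    """Average per-interval change in CPUCreditBalance (negative = draining)."""
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    return sum(deltas) / len(deltas)

def needs_scaling(samples, low_water_mark=20):
    """True when the balance is below the mark and still falling."""
    return samples[-1] < low_water_mark and credit_trend(samples) < 0

print(needs_scaling([80, 55, 31, 12]))   # draining and below the mark
print(needs_scaling([60, 70, 80, 90]))   # healthy
```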

Working with Asynchronous Key Prefetch

Aurora can use Asynchronous Key Prefetch (AKP) to improve the performance of queries that join tables across indexes. This feature improves performance by anticipating the rows needed to run queries in which a JOIN query requires use of the Batched Key Access (BKA) Join algorithm and Multi-Range Read (MRR) optimization features.

Avoid Multithreaded Replication

By default, Aurora uses single-threaded replication when an Aurora MySQL DB cluster is used as a replication slave. While Aurora does not prohibit multithreaded replication, Aurora MySQL has inherited several issues regarding multithreaded replication from MySQL. Amazon recommends against the use of multithreaded replication in production.

Scale Reads

You can use Aurora with your MySQL DB instance to take advantage of the read scaling capabilities of Aurora and expand the read workload for your MySQL DB instance. To use Aurora to read scale your MySQL DB instance, create an Amazon Aurora MySQL DB cluster and make it a replication slave of your MySQL DB instance. This applies to an Amazon RDS MySQL DB instance, or a MySQL database running external to Amazon RDS.

Consider Hash Joins

When you need to join a large amount of data by using an equijoin, a hash join can improve query performance. Fortunately, you can enable hash joins for Aurora MySQL. A hash join column can be any complex expression.

To find out whether a query can take advantage of a hash join, use the EXPLAIN statement to profile the query first. The EXPLAIN statement provides information about the execution plan to use for a specified query.
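If you are curious what a hash join actually does under the hood, here is a sketch of the build/probe idea in Python. The two tables are invented rows purely for illustration; Aurora's implementation is, of course, far more sophisticated.

```python
# Conceptual hash join for an equijoin: build a hash table on the smaller
# input, then probe it while streaming the larger one.

def hash_join(left, right, key):
    """Equijoin left and right on `key` using a build/probe hash table."""
    # Build phase: index the (smaller) left input by the join key.
    buckets = {}
    for row in left:
        buckets.setdefault(row[key], []).append(row)
    # Probe phase: stream the right input, emitting merged matches.
    joined = []
    for row in right:
        for match in buckets.get(row[key], []):
            joined.append({**match, **row})
    return joined

customers = [{"cust_id": 1, "name": "Ada"}, {"cust_id": 3, "name": "Bo"}]
orders = [{"cust_id": 1, "total": 30}, {"cust_id": 2, "total": 45}]
print(hash_join(customers, orders, "cust_id"))
```

Because each probe is a constant-time hash lookup rather than an index traversal per row, this strategy shines when joining large inputs on an equality condition.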

When using Aurora PostgreSQL database, keep the following best practices in mind.

Use TCP Keepalive Parameters

Enabling TCP keepalive parameters and setting them aggressively ensures that if your client is no longer able to connect to the database, then any active connections are quickly closed. This action allows the application to react appropriately, such as by picking a new host to connect to.

The following TCP keepalive parameters need to be set:

  • tcp_keepalive_time controls the time, in seconds, after which a keepalive packet is sent when no data has been sent by the socket (ACKs are not considered data); Amazon recommends:

tcp_keepalive_time = 1

  • tcp_keepalive_intvl controls the time, in seconds, between sending subsequent keepalive packets after the initial packet is sent (set using the tcp_keepalive_time parameter); Amazon recommends:

tcp_keepalive_intvl = 1

  • tcp_keepalive_probes is the number of unacknowledged keepalive probes that occur before the application is notified; Amazon recommends:

tcp_keepalive_probes = 5

These settings should notify the application within five seconds when the database stops responding. A higher tcp_keepalive_probes value can be set if keepalive packets are often dropped within the application’s network. This subsequently increases the time it takes to detect an actual failure but allows for more buffer in less reliable networks.
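As a quick sketch, here is how a client could apply those same keepalive values per socket from Python. The TCP_KEEPIDLE, TCP_KEEPINTVL, and TCP_KEEPCNT constants are Linux-specific (other platforms expose different names), hence the guard.

```python
# Apply the recommended keepalive values on a client socket. The per-socket
# TCP keepalive constants are Linux-specific, so we guard on their presence.
import socket

def keepalive_socket(idle=1, interval=1, probes=5):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)       # tcp_keepalive_time
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)  # tcp_keepalive_intvl
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)      # tcp_keepalive_probes
    return sock

sock = keepalive_socket()
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))
```

Setting these per socket is handy when you cannot (or prefer not to) change the system-wide sysctl values on the client host.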

CompTIA Cloud+ Security Groups vs Network ACLs

July 23, 2018 at 3:50 pm

This Nugget is a sample Nugget from the CompTIA Cloud+ CV0-002 course at CBT Nuggets. In this video, we examine the differences between Security Groups and Network ACLs in AWS.