
4.1.a MPLS VPN and Transit Labels

August 16, 2018 at 2:35 pm

Building Performant Amazon (AWS) Aurora Solutions

August 16, 2018 at 12:03 pm

Here is another snippet from my upcoming AWS Solutions Architect Certification Guide. This text will be one-of-a-kind, direct certification prep!

Amazon Aurora

Remember, Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database built by Amazon specifically for AWS. Amazon's goal was to combine the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Here are best practices you should keep handy when working with Aurora MySQL databases:

Which DB Instance are you Connected To?

Use the innodb_read_only global variable to determine which DB instance in an Aurora DB cluster you are connected to. Here is an example:

show global variables like 'innodb_read_only';

This variable is set to ON if you are connected to an Aurora replica or OFF if you are connected to the primary instance. This value is critical to ensure that any of your write operations are using the correct connection.
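
For example, when connected to an Aurora Replica, the output looks like this (illustrative):

mysql> show global variables like 'innodb_read_only';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| innodb_read_only | ON    |
+------------------+-------+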

Using T2 Instances

Aurora MySQL instances that use the db.t2.small or db.t2.medium DB instance classes are best suited for applications that do not sustain a high workload for an extended amount of time. Amazon recommends only using the db.t2.small and db.t2.medium DB instance classes for development and test servers, or other non-production servers.

The MySQL Performance Schema should not be enabled on Amazon Aurora MySQL T2 instances. If the Performance Schema is enabled, the T2 instance might run out of memory.
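
A quick way to confirm whether it is on for the instance you are connected to (the setting itself is controlled through the performance_schema parameter in the DB parameter group):

show global variables like 'performance_schema';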

Amazon recommends the following when you use a T2 instance for the primary instance or Aurora Replicas in an Aurora MySQL DB cluster:

  • If you use a T2 instance as a DB instance class in your DB cluster, all instances in the DB cluster should use the same DB instance class.
  • Monitor your CPU credit balance (CPUCreditBalance) to ensure that it is at a sustainable level (see the AWS CLI sketch after this list).
      ◦ When you have exhausted the CPU credits for an instance, you see an immediate drop in the available CPU and an increase in read and write latency for the instance, which results in a severe decrease in the instance's overall performance.
      ◦ If your CPU credit balance is not at a sustainable level, modify your DB instance to use one of the supported R3 DB instance classes (scale compute).
  • Monitor the replica lag (AuroraReplicaLag) between the primary instance and the Aurora Replicas in the Aurora MySQL DB cluster.
      ◦ If an Aurora Replica runs out of CPU credits before the primary instance, the lag behind the primary instance causes the Aurora Replica to restart frequently.
      ◦ If you see a sustained increase in replica lag, make sure that the CPU credit balance for the Aurora Replicas in your DB cluster is not being exhausted.
      ◦ If your CPU credit balance is not at a sustainable level, modify your DB instance to use one of the supported R3 DB instance classes (scale compute).
  • Keep the number of inserts per transaction below 1 million for DB clusters that have binary logging enabled.
      ◦ If the DB cluster parameter group for your DB cluster has the binlog_format parameter set to a value other than OFF, your DB cluster might experience out-of-memory conditions if it receives transactions that contain over 1 million rows to insert.
      ◦ Monitor the freeable memory (FreeableMemory) metric to determine whether your DB cluster is running out of available memory.
      ◦ Check the write operations (VolumeWriteIOPS) metric to see whether your primary instance is receiving a heavy load of write operations. If so, update your application to limit the number of inserts in a transaction to less than 1 million, or modify your instance to use one of the supported R3 DB instance classes (scale compute).
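
For example, to watch the CPU credit balance from the command line, a CloudWatch query along these lines works (the instance identifier and time window are placeholders):

aws cloudwatch get-metric-statistics \
    --namespace AWS/RDS \
    --metric-name CPUCreditBalance \
    --dimensions Name=DBInstanceIdentifier,Value=my-aurora-instance \
    --start-time 2018-08-16T00:00:00Z \
    --end-time 2018-08-16T01:00:00Z \
    --period 300 \
    --statistics Average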

Working with Asynchronous Key Prefetch

Aurora can use Asynchronous Key Prefetch (AKP) to improve the performance of queries that join tables across indexes. This feature improves performance by anticipating the rows needed to run queries in which a JOIN query requires use of the Batched Key Access (BKA) Join algorithm and Multi-Range Read (MRR) optimization features.
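
A minimal sketch of enabling AKP for a session, assuming the variable names AWS documents for Aurora MySQL (aurora_use_key_prefetch plus the BKA and MRR optimizer switches), looks like this:

set @@session.aurora_use_key_prefetch = on;
set @@session.optimizer_switch = 'batched_key_access=on,mrr_cost_based=off';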

Avoid Multithreaded Replication

By default, Aurora uses single-threaded replication when an Aurora MySQL DB cluster is used as a replication slave. While Aurora does not prohibit multithreaded replication, Aurora MySQL has inherited several issues regarding multithreaded replication from MySQL. Amazon recommends against the use of multithreaded replication in production.

Scale Reads

You can use Aurora with your MySQL DB instance to take advantage of the read scaling capabilities of Aurora and expand the read workload for your MySQL DB instance. To use Aurora to read scale your MySQL DB instance, create an Amazon Aurora MySQL DB cluster and make it a replication slave of your MySQL DB instance. This applies to an Amazon RDS MySQL DB instance, or a MySQL database running external to Amazon RDS.
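
As a rough sketch, pointing the Aurora cluster at an existing MySQL master is typically done on the Aurora primary instance with the RDS replication stored procedures; every value below (host, user, password, binlog file, and position) is a placeholder:

call mysql.rds_set_external_master ('mysql-master.example.com', 3306, 'repl_user', 'repl_password', 'mysql-bin.000031', 107, 0);
call mysql.rds_start_replication;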

Consider Hash Joins

When you need to join a large amount of data by using an equijoin, a hash join can improve query performance. Fortunately, you can enable hash joins for Aurora MySQL. A hash join column can be any complex expression.

To find out whether a query can take advantage of a hash join, use the EXPLAIN statement to profile the query first. The EXPLAIN statement provides information about the execution plan to use for a specified query.
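
For instance, to see whether a hypothetical equijoin is a hash join candidate, profile it first (table and column names are illustrative):

explain select c.id, sum(o.total)
from customers c
join orders o on o.customer_id = c.id
group by c.id;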

When using an Aurora PostgreSQL database, keep the following best practices in mind.

Use TCP Keepalive Parameters

Enabling TCP keepalive parameters and setting them aggressively ensures that if your client is no longer able to connect to the database, then any active connections are quickly closed. This action allows the application to react appropriately, such as by picking a new host to connect to.

The following TCP keepalive parameters need to be set:

  • tcp_keepalive_time controls the time, in seconds, after which a keepalive packet is sent when no data has been sent by the socket (ACKs are not considered data); Amazon recommends:

tcp_keepalive_time = 1

  • tcp_keepalive_intvl controls the time, in seconds, between sending subsequent keepalive packets after the initial packet is sent (set using the tcp_keepalive_time parameter); Amazon recommends:

tcp_keepalive_intvl = 1

  • tcp_keepalive_probes is the number of unacknowledged keepalive probes that occur before the application is notified; Amazon recommends:

tcp_keepalive_probes = 5

These settings should notify the application within five seconds when the database stops responding. A higher tcp_keepalive_probes value can be set if keepalive packets are often dropped within the application’s network. This subsequently increases the time it takes to detect an actual failure but allows for more buffer in less reliable networks.
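
On a Linux client, these settings map to kernel parameters that can be applied with sysctl (a sketch; add the same values to /etc/sysctl.conf to persist them across reboots):

sudo sysctl -w net.ipv4.tcp_keepalive_time=1
sudo sysctl -w net.ipv4.tcp_keepalive_intvl=1
sudo sysctl -w net.ipv4.tcp_keepalive_probes=5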

Graded Challenge – Port Channels 1

August 15, 2018 at 8:49 pm

Can I Do That With Cisco VIRL???

August 15, 2018 at 8:09 pm

VIRL Training

In this video, I show you how to quickly verify which features are available for emulation inside Cisco VIRL. This is very important because the last thing you want to do is spend time building a VIRL topology only to discover that you cannot practice a feature because it is not supported.

6.2.a Auto QoS

August 14, 2018 at 10:45 pm

Cisco ACI Introduction – Part 2 – The Architecture

August 14, 2018 at 10:13 pm

Cisco ACI

Cisco ACI attempts to reach beyond “traditional” SDN tasks and provide a new network architectural approach, one focused, of course, on programmability. This post quickly reviews the architectural components involved.

Rather impressively, the Application Centric Infrastructure (ACI) requires only three base components for operation:

Nexus 9500

This impressive device offers the following features:

  • Chassis models include 4-, 8-, and 16-slot options, each using the same line cards, chassis controllers, supervisor engines, and 80% efficient power supplies
  • Individualized parts, based on the particular chassis, are fan trays and fabric modules (each line card must attach to all fabric modules)
  • Line cards include physical ports based on twisted-pair copper for 1/10Gbps as well as optical Small Form-factor Pluggable (SFP) and Quad Small Form-factor Pluggable (QSFP) transceivers for 1/10/25/40/50/100Gbps port speeds
  • All ports are at line rate and have no feature dependencies by card type other than the software under which they will operate
  • Some are NX-OS only (94xx, 95xx, 96xx series), some are ACI spine only (96xx series), and still others (the 97xx-EX series, the latest as of this writing) will run both software operating systems
  • There are also three different models of fabric modules, based on scale: FM, FM-S, and FM-E
  • If your design requires 100Gbps support, the FM-E is the fabric module for your chassis

Nexus 9300

The 9300 series of leaf switches is responsible for the bulk of the network functionality: L2/L3 switching at line rate, VTEP operations for VXLAN, routing protocols such as BGP, OSPF, and EIGRP, multicast, anycast gateways, and much more.

They also support a wide range of speeds to accommodate both modern and not-so-modern workloads found in data centers: as low as 100Mbps for legacy components, and as high as 100Gbps for uplink connectivity to the rest of the network. Sizes vary from 1 to 3 rack units high, with selectable airflow intake and exhaust to match placement, cable terminations, and airflow within any data center.

Application Centric Infrastructure Controllers

These single rack-unit appliances are based on the UCS C-series x86 server. They are often considered the “brains” of the network operations.

The APIC offers a GUI for access, along with a fully exposed API set, giving consumers a rich set of tools with which to configure and operate an ACI fabric. The APIC is also how leaf and spine elements are added to and retired from the fabric, and how they receive firmware updates and patches. No more device-by-device operations or scripting; the APIC does all of that operational work for you via a few simple mouse clicks or via those exposed APIs.

Protocols

ACI is based entirely on a set of existing and evolving standards that enable its unique and powerful capabilities: a truly flexible, automated, scalable, and modern network to support applications.

Data Plane Protocols

Forwarding across the ACI fabric is entirely encapsulated in VXLAN. VXLAN is a protocol that allows for minimized fault domains, can stretch across an L3 boundary, and uses a direct-forwarding nonbroadcast control plane (BGP-EVPN). This can provide L3 separation as well as L2 adjacency of elements attached at the leaf that might reside across the fabric on another leaf.

The use of VXLAN is prevalent across the ACI fabric, within the spine and leaf switches, and even within various vSwitch elements attached to the fabric via various hypervisors. However, 802.1Q VLANs are still exposed in the ACI policy model because the actual vNIC of any “hypervised” workload, and those of bare-metal servers today, do not support native VXLAN encapsulation. Therefore, 802.1Q networks still appear in ACI policy and are valid forwarding methods at the workload NIC.

Control Plane Protocols

Several well-understood and well-tested protocols form the ACI control plane. Each new leaf or spine attached to the fabric uses a specific type-length-value in a Link Layer Discovery Protocol (LLDP) signaling flow to connect with the APIC and thus register itself as a potential new addition to the fabric. Admission is not allowed until a human or some automation point adds the new leaf or spine element. This guards against the registration of switches for nefarious purposes.

Forwarding across the fabric and reachability are achieved via a single-area link-state interior gateway protocol, more specifically Intermediate System to Intermediate System (IS-IS). This lends itself to massive scaling, with simplicity at the heart of the design.

Several routing protocols are supported for communicating with external routing devices at the edge of the fabric: iBGP, OSPF, and EIGRP, along with static routing, are options for achieving IP communication to and from the fabric itself. These protocols run only on the border leaf switches, which physically attach the adjacent networks to the fabric. Border leaf switches are not a special device configuration, only a designation for the edge of the ACI fabric connecting to adjacent networks.

Because the data plane of the ACI fabric uses VXLAN, the control plane protocol in use, as of version 3.0, is Multi-Protocol BGP with EVPN. This provides an enhancement over the prior use of multicast to deal with control-plane traffic needs around broadcast, unknown unicast, and multicast (BUM) traffic across the VXLAN fabric.

OpFlex is another new control-plane protocol used in ACI. Although it is pre-standard, Cisco and a consortium of ecosystem partners have submitted it for ratification. OpFlex is designed to communicate policy intent from the APIC, and compliance or noncompliance back from a policy-enforcement element attached to the ACI fabric. The OpFlex protocol is used, for example, to communicate policy between the APIC and the Application Virtual Switch (AVS). This not only demonstrates the use of OpFlex but also allows ACI policy to reach into the server virtualization hypervisor host to enforce policy defined on the APIC.

Graded Challenge – BGP 2 – SOLUTION

August 13, 2018 at 9:35 pm