6.2.a Auto QoS

August 14, 2018 at 10:45 pm

Cisco ACI Introduction – Part 2 – The Architecture

August 14, 2018 at 10:13 pm

Cisco ACI

Cisco ACI attempts to reach beyond “traditional” SDN tasks and provide a new network architectural approach, one focused on programmability. This post quickly reviews the architectural components involved.

Rather impressively, the Application Centric Infrastructure (ACI) requires only three base components for operation:

Nexus 9500

This impressive device offers the following features:

  • Chassis models include 4-, 8-, and 16-slot options, each using the same line cards, chassis controllers, supervisor engines, and 80% efficient power supplies
  • Individualized parts, based on the particular chassis, are fan trays and fabric modules (each line card must attach to all fabric modules)
  • Line cards include physical ports based on twisted-pair copper for 1/10Gbps, as well as optical Small Form-factor Pluggable (SFP) and Quad Small Form-factor Pluggable (QSFP) ports for 1/10/25/40/50/100Gbps speeds
  • All ports run at line rate and have no feature dependencies tied to card type other than the software under which they operate
  • Some line cards are NX-OS only (94xx, 95xx, 96xx series), some are ACI spine only (96xx series), and still others (the latest, as of this writing, the 97xx-EX series) run both software operating systems
  • There are also three different models of fabric modules, based on scale: FM, FM-S, and FM-E
  • If your design requires 100Gbps support, the FM-E is the fabric module for your chassis

Nexus 9300

The 9300 series of leaf switches are the devices responsible for the bulk of the network functionality: L2/L3 switching at line rate, VTEP operations for VXLAN, routing protocols such as BGP, OSPF, and EIGRP, multicast, anycast gateways, and much more.

They also support a wide range of speeds in order to accommodate both modern and not-so-modern workloads found in data centers: as low as 100Mbps for legacy components and as high as 100Gbps for uplink connectivity to the rest of the network. Sizes range from 1 to 3 rack units, with selectable airflow direction (intake and exhaust) to match placement, cable termination, and airflow requirements within any data center.

Application Policy Infrastructure Controller (APIC)

These single-rack-unit appliances are based on the Cisco UCS C-Series x86 server and are often considered the “brains” of network operations.

The APIC offers a GUI for access, along with a fully exposed API set, giving consumers a rich set of tools with which to configure and operate an ACI fabric. The APIC is also how leaf and spine elements are added to and retired from the fabric, and how they receive their firmware updates and patches. No more device-by-device operations or scripting: the APIC does that operational work for you via a few simple mouse clicks or via those exposed APIs.
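To give a feel for that exposed API set, here is a minimal sketch (my own illustration, not an official example) of logging in to the APIC REST API and listing registered fabric nodes with Python and the requests library. The controller address and credentials are placeholders you would replace with your own, and certificate verification is disabled only for brevity.

```python
import requests

# Placeholder values for illustration only
APIC = "https://apic.example.com"
USER = "admin"
PASSWORD = "password"

session = requests.Session()

# Authenticate: the APIC returns a token that rides in the session cookie
login_payload = {"aaaUser": {"attributes": {"name": USER, "pwd": PASSWORD}}}
resp = session.post(f"{APIC}/api/aaaLogin.json", json=login_payload, verify=False)
resp.raise_for_status()

# Class query: list every registered leaf, spine, and controller in the fabric
nodes = session.get(f"{APIC}/api/node/class/fabricNode.json", verify=False)
for item in nodes.json()["imdata"]:
    attrs = item["fabricNode"]["attributes"]
    print(attrs["name"], attrs["role"], attrs["serial"])
```

The same authenticated session can then drive configuration changes, which is exactly how orchestration tools consume the fabric.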

Protocols

ACI is built entirely on a set of existing and evolving standards that enable its unique and powerful capabilities: a truly flexible, automated, scalable, modern network to support applications.

Data Plane Protocols

Forwarding across the ACI fabric is entirely encapsulated in VXLAN. VXLAN minimizes fault domains, can stretch across an L3 boundary, and uses a direct-forwarding, nonbroadcast control plane (BGP EVPN). This provides L3 separation as well as L2 adjacency between endpoints attached to one leaf and endpoints that reside across the fabric on another leaf.

The use of VXLAN is prevalent across the ACI fabric, within the spine and leaf switches, and even within various vSwitch elements attached to the fabric via various hypervisors. However, 802.1Q VLANs are still exposed in the ACI policy model, because the vNIC of a hypervisor-hosted workload and the NICs of bare-metal servers today do not support native VXLAN encapsulation. Therefore, 802.1Q networks still appear in ACI policy and remain valid forwarding methods at the workload NIC.
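To make the encapsulation concrete, here is a small illustrative sketch (my own, following the header definition in RFC 7348) that packs the 8-byte VXLAN header. The 24-bit VNI is what gives VXLAN roughly 16 million segments, versus the 4,096 IDs available in a 12-bit 802.1Q tag.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Pack the 8-byte VXLAN header defined in RFC 7348.

    The flags byte 0x08 sets the I bit (VNI is valid); the 24-bit VNI
    sits in the upper bits of the second 32-bit word, and all other
    bits are reserved (zero).
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags_and_reserved = 0x08 << 24   # I flag set, reserved bits zero
    vni_and_reserved = vni << 8       # 24-bit VNI, low 8 reserved bits zero
    return struct.pack("!II", flags_and_reserved, vni_and_reserved)

# Example: segment 10010, carried inside a UDP datagram to port 4789
print(vxlan_header(10010).hex())   # 0800000000271a00
```

In ACI the leaf VTEPs perform this encapsulation in hardware automatically; the sketch is only meant to make the header layout, and the size of the VNI space, concrete.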

Control Plane Protocols

Several well-understood and well-tested protocols form the ACI control plane. Each new leaf or spine attached to the fabric uses a specific Type-Length-Value (TLV) in a Link Layer Discovery Protocol (LLDP) exchange to connect with the APIC and register itself as a potential new addition to the fabric. Admission is not granted until a human or some automation point explicitly adds the new leaf or spine element, which guards against the registration of switches for nefarious purposes.
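Registration itself is expressed as policy on the APIC. As a hedged sketch (the fabricNodeIdentP class and its serial/nodeId/name attributes follow the publicly documented object model, but verify the exact names against your APIC version), admitting a discovered switch is roughly one POST that binds a serial number to a node ID and name. This reuses the authenticated session and APIC variables from the earlier login sketch, and all values shown are placeholders.

```python
# Placeholder serial, node ID, and name for illustration only
payload = {
    "fabricNodeIdentP": {
        "attributes": {
            "serial": "SAL1234ABCD",   # serial number learned during LLDP discovery
            "nodeId": "101",           # fabric node ID you assign
            "name": "leaf-101",        # friendly switch name
        }
    }
}

# Post against the fabric membership policy; until an entry like this exists,
# the discovered switch stays outside the fabric, which is the safeguard
# against rogue switches described above.
resp = session.post(
    f"{APIC}/api/node/mo/uni/controller/nodeidentpol.json",
    json=payload,
    verify=False,
)
resp.raise_for_status()
```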

Forwarding across the fabric and reachability are achieved via a single-area link-state interior gateway protocol, more specifically Intermediate System to Intermediate System (IS-IS). This lends itself to massive scaling, with simplicity at the heart of the design.

Several routing protocols are supported for communicating with external routing devices at the edge of the fabric: iBGP, OSPF, and EIGRP, along with static routing, are the options for achieving IP communication to and from the fabric itself. These protocols run only on border leaves, which physically attach adjacent networks to the fabric. A border leaf is not a special device configuration, only a designation for the edge of the ACI fabric where it connects to adjacent networks.
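On the APIC, those external connections are modeled as external routed network (L3Out) policy objects rather than as per-device routing configuration. As a small hedged sketch (assuming the standard l3extOut class name and reusing the session from the login example above), you can list whatever L3Outs are already defined with a class query:

```python
# List the external routed connections (L3Outs) configured across all tenants
l3outs = session.get(f"{APIC}/api/node/class/l3extOut.json", verify=False)
for item in l3outs.json()["imdata"]:
    attrs = item["l3extOut"]["attributes"]
    print(attrs["dn"], attrs["name"])
```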

Because the data plane of the ACI fabric uses VXLAN, the control plane protocol in use, as of version 3.0, is Multiprotocol BGP (MP-BGP) with EVPN. This is an enhancement over the prior reliance on multicast to handle broadcast, unknown unicast, and multicast (BUM) traffic across the VXLAN fabric.

OpFlex is another new control-plane protocol used in ACI. Although it is pre-standard, Cisco and a consortium of ecosystem partners have submitted it for ratification. OpFlex is designed to communicate policy intent from the APIC, and compliance or noncompliance from a policy-enforcement element attached to the ACI fabric. For example, OpFlex carries policy between the APIC and the Application Virtual Switch (AVS), which not only demonstrates the use of OpFlex but also allows ACI policy to reach into the server virtualization hypervisor host to enforce policy defined on the APIC.

Graded Challenge – BGP 2 – SOLUTION

August 13, 2018 at 9:35 pm

What Am I Working On? 8-12-2018 Podcast Episode

August 12, 2018 at 6:47 pm


 

Podcast

Staging Configurations

August 11, 2018 at 1:37 pm

CCIE Data Center Written Exam Study Tracker

August 11, 2018 at 1:00 pm

ccie

Here is the latest tracker for this exciting exam. This is the one that is appropriate for those of us who will be testing August 30, 2018. As always – if you are viewing this on the home page – be sure to click the READ MORE button to see more than section 1.

1.0 Data Center Layer 2/Layer 3 Connectivity

1.1 Design, implement, and troubleshoot Layer 2 technologies
1.1.a Link aggregation
1.1.b Tagging/trunking
1.1.c Spanning Tree Protocol
1.2 Design, implement, and troubleshoot overlays
1.2.a VXLAN
1.2.b EVPN
1.2.c OTV
1.3 Design, implement, and troubleshoot routing protocols and features
1.3.a OSPF
1.3.b IS-IS
1.3.c BGP
1.3.d BFD
1.3.e FHRP
1.4 Design, implement, and troubleshoot multicast protocols
1.4.a PIM
1.4.b IGMP
1.5 Describe interfabric connectivity
1.5.a Multipod
1.5.b Multisite
1.6 Design, implement, and troubleshoot external fabric connectivity
1.6.a L2/L3Out
1.6.b VRF-Lite
1.7 Design, implement, and troubleshoot traffic management
1.7.a Queueing
1.7.b Policing
1.7.c Classification/marking
1.7.d RoCE

Cisco ACI Introduction – Part 1 – Industry Trends

August 10, 2018 at 5:25 pm

Cisco ACI

I have been working with ACI quite a bit as it is introduced in a class I am teaching for CBT Nuggets – 200-155 DCICT – Introducing Cisco Data Center Technologies. I should finish that class next, by the way.

In order to fully understand the Application-Centric Infrastructure from Cisco Systems, we need to understand how the IT industry is changing as far as the Data Center is concerned. We really have reached a point where enterprises are overhauling their designs and implementing true Private Clouds.

Why is this happening? It is being driven by many factors. Here are just some:

  • Application lifecycles are being broken up into much smaller windows
  • Applications are becoming less rigidly structured
  • Applications are being implemented through virtualization and hypervisors
  • Applications are being implemented through containers and microservices
  • Traffic flows in the data center are shifting from north-south to east-west, as application components communicate with one another across virtualized workloads on different server hosts
  • Network equipment must become more flexible to keep up with the fast pace of change and the required integration between new systems and legacy equipment

Automation and orchestration are more of a goal than ever, given the factors above. This is one of the areas where ACI shines: it allows us to integrate many components and automate processes like never before. It also directly addresses security, since keeping traditional firewall configurations up to date would be nearly impossible given the rate and scope of change these new Data Center technologies demand.

I hope this post has piqued your interest. I will be back with more great introductory material for you on ACI before we delve deep into this exciting concept.