Cisco ACI Introduction – Part 2 – The Architecture

Cisco ACI

Cisco ACI attempts to reach beyond “traditional” SDN tasks and provide a new approach to network architecture, one focused on programmability. This post quickly reviews the architectural components involved.

Rather impressively, the Application Centric Infrastructure (ACI) requires only three base components for operation:

Nexus 9500

This impressive device offers the following features:

  • Chassis models include 4-, 8-, and 16-slot options, each using the same line cards, chassis controllers, supervisor engines, and 80% efficient power supplies
  • Parts that vary by chassis are the fan trays and fabric modules (each line card must attach to all fabric modules)
  • Line cards include physical ports based on twisted-pair copper for 1/10Gbps, as well as optical Small Form-factor Pluggable (SFP) and Quad Small Form-factor Pluggable (QSFP) ports for 1/10/25/40/50/100Gbps speeds
  • All ports run at line rate, and the only feature dependency across card types is the software under which they operate
  • Some line cards are NX-OS only (94xx, 95xx, 96xx series), some are ACI spine only (96xx series), and others (the latest, as of this writing, being the 97xx-EX series) will run both software operating systems
  • There are also three different models of fabric modules, based on scale: FM, FM-S, and FM-E
  • If your design requires 100Gbps support, the FM-E is the fabric module for your chassis

Nexus 9300

The 9300 series of leaf switches are the devices responsible for the bulk of the network functionality: L2/L3 switching at line rate, VTEP operations for VXLAN, routing protocols such as BGP, OSPF, and EIGRP, multicast, anycast gateways, and much more.

They also support a wide range of speeds to accommodate both modern and not-so-modern workloads found in data centers: as low as 100Mbps for legacy components and as high as 100Gbps for uplink connectivity to the rest of the network. Sizes vary from 1 to 3 rack units, with selectable airflow intake and exhaust directions to match placement, cable terminations, and airflow within any data center.

Application Centric Infrastructure Controllers

These single rack-unit appliances are based on the UCS C-series x86 server. They are often considered the “brains” of network operations.

The APIC offers a GUI for access, along with a fully exposed API set, giving consumers a rich set of tools with which to configure and operate an ACI fabric. The APIC is also how leaf and spine elements are added to and retired from the fabric, and how they receive firmware updates and patches. No more device-by-device operations or scripting: the APIC does that operational work for you via a few simple mouse clicks or via those exposed APIs.
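
To give a feel for that exposed API set, here is a minimal Python sketch that authenticates to an APIC and lists the registered fabric nodes. The controller hostname and credentials are placeholders, and the script is illustrative rather than production-ready, but the aaaLogin call and the fabricNode class query are standard APIC REST operations.

    # Minimal sketch: log in to the APIC and list the fabric nodes it manages.
    # "apic.example.com", "admin", and "password" are placeholders.
    import requests

    APIC = "https://apic.example.com"
    session = requests.Session()

    # Authenticate; the APIC returns a token the session keeps as a cookie.
    # (verify=False is for lab use only: it skips TLS certificate checks.)
    login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
    session.post(f"{APIC}/api/aaaLogin.json", json=login, verify=False)

    # Query the fabricNode class: every registered leaf, spine, and controller.
    resp = session.get(f"{APIC}/api/node/class/fabricNode.json")
    for item in resp.json()["imdata"]:
        attrs = item["fabricNode"]["attributes"]
        print(attrs["name"], attrs["role"], attrs["serial"])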

Protocols

ACI is based entirely on a set of existing and evolving standards that enable its unique and powerful capabilities: a truly flexible, automated, scalable, and modern network to support applications.

Data Plane Protocols

Forwarding across the ACI fabric is entirely encapsulated in VXLAN. VXLAN is a protocol that minimizes fault domains, can stretch across an L3 boundary, and uses a direct-forwarding, nonbroadcast control plane (BGP EVPN). This provides L3 separation as well as L2 adjacency between elements attached at one leaf and elements that reside across the fabric on another leaf.
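
ACI builds and manages these VXLAN tunnels automatically, but the encapsulation itself is easy to picture. The short Python sketch below illustrates the RFC 7348 header format in general terms (it is not ACI-specific code): the original L2 frame is prepended with an 8-byte header carrying a 24-bit VXLAN network identifier, and the result is carried between VTEPs inside a UDP datagram on port 4789.

    # Illustration of the VXLAN header format (RFC 7348), not ACI-specific code.
    import struct

    def vxlan_encapsulate(inner_frame: bytes, vnid: int) -> bytes:
        """Wrap an L2 frame in a VXLAN header carrying the given 24-bit VNID."""
        flags = 0x08 << 24            # "I" bit set: a valid VNID is present
        vni = (vnid & 0xFFFFFF) << 8  # 24-bit VNID followed by a reserved byte
        header = struct.pack("!II", flags, vni)
        return header + inner_frame   # this payload then rides in UDP port 4789

    # Example: wrap a dummy 64-byte frame with an arbitrary VNID.
    print(vxlan_encapsulate(b"\x00" * 64, 16001).hex()[:24])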

The use of VXLAN is prevalent across the ACI fabric, within the spine and leaf switches, and even within various vSwitch elements attached to the fabric via various hypervisors. However, 802.1Q VLANs are still exposed in the ACI policy model because the vNICs of hypervisor-hosted workloads and the NICs of bare-metal servers today do not support native VXLAN encapsulation. Therefore, 802.1Q networks still appear in ACI policy and remain valid forwarding methods at the workload NIC.

Control Plane Protocols

Several well-understood and well-tested protocols form the ACI control plane. Each new leaf or spine attached to the fabric uses a specific type-length-value in a Link Layer Discovery Protocol (LLDP) signaling flow to connect with the APIC and thus register itself as a potential new addition to the fabric. Admission is not allowed until a human or some automation point adds the new leaf or spine element, which guards against the registration of switches for nefarious purposes.
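
That “human or some automation point” step is itself exposed through the APIC API: admitting a discovered switch amounts to creating a fabricNodeIdentP object that maps the switch serial number to a node ID and name. The sketch below reuses the authenticated session from the earlier APIC example; the serial number, node ID, and name are placeholders, and the object and attribute names follow the commonly documented APIC object model rather than anything verified against a particular release.

    # Sketch: register a discovered switch by mapping its serial to a node ID.
    # Reuses `session` and `APIC` from the earlier login sketch; the serial,
    # node ID, and name below are placeholders.
    registration = {
        "fabricNodeIdentP": {
            "attributes": {
                "dn": "uni/controller/nodeidentpol/nodep-FDO12345678",
                "serial": "FDO12345678",
                "nodeId": "101",
                "name": "leaf-101",
            }
        }
    }
    session.post(f"{APIC}/api/node/mo/uni/controller/nodeidentpol.json",
                 json=registration)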

Forwarding across the fabric and reachability are achieved via a single-area link-state interior gateway protocol, more specifically Intermediate System to Intermediate System (IS-IS). This lends itself to massive scaling, with simplicity at the heart of the design.

Several routing protocols are supported for communicating with external routing devices at the edge of the fabric: iBGP, OSPF, and EIGRP, along with static routing, are options for achieving IP communication to and from the fabric itself. These protocols run only on the border leaf switches, which physically attach the adjacent networks to the fabric. A border leaf is not a special device configuration, only a designation of the edge of the ACI fabric where it connects to adjacent networks.
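
ACI models each of these external routed connections as an “L3Out” policy object. As a hedged sketch (again reusing the authenticated session from the first example, and assuming the published l3extOut class name), listing the L3Outs configured on the fabric looks like this:

    # Sketch: list the external routed networks (L3Outs) defined on the fabric.
    # Reuses `session` and `APIC` from the earlier login sketch.
    resp = session.get(f"{APIC}/api/node/class/l3extOut.json")
    for item in resp.json()["imdata"]:
        attrs = item["l3extOut"]["attributes"]
        print(attrs["dn"], attrs["name"])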

Because the data plane of the ACI fabric uses VXLAN, the control plane protocol in use, as of version 3.0, is Multi-Protocol BGP with EVPN. This is an enhancement over the prior use of multicast for handling broadcast, unknown unicast, and multicast (BUM) traffic across the VXLAN fabric.

OpFlex is another new control-plane protocol used in ACI. Although it is pre-standard, Cisco and a consortium of ecosystem partners have submitted it for ratification. OpFlex is designed to communicate policy intent from the APIC and compliance or noncompliance from a policy-enforcement element attached to the ACI fabric. For example, OpFlex carries policy between the APIC and the Application Virtual Switch (AVS), which not only demonstrates the use of OpFlex but also allows ACI policy to reach into the server virtualization hypervisor host to enforce policy defined on the APIC.
