Cisco ACI Introduction – Part 3 – The Logical Components

August 18, 2018 at 6:19 pm

Cisco ACI

It is critical that you understand the physical components and protocols discussed in Part 2, but it is equally critical that you understand the logical constructs used within the ACI system. You might need to create some flash cards on these until they are second nature. A short REST API sketch after the list shows how several of these objects fit together in APIC policy.

  • Tenant: Contains policies that enable qualified users to have domain-based access control. Qualified users can access privileges such as tenant administration and networking administration.
  • Context: A context is a unique Layer 3 forwarding and application policy domain. A tenant can have multiple contexts. A context is equivalent to a VRF instance.
  • Bridge domain: A bridge domain represents a Layer 2 forwarding construct within the fabric. A bridge domain must link to a context and have at least one subnet associated with it. The bridge domain defines the unique Layer 2 MAC address space and a Layer 2 flood domain if such flooding is enabled.
  • EPG: The EPG is a managed object that contains a collection of endpoints (devices that are connected to the network directly or indirectly) that have common policy requirements such as security, virtual machine mobility, QoS, or Layer 4 to Layer 7 services. Endpoints have an address (identity), a location, attributes (such as version or patch level), and a physical or virtual status. Rather than configure and manage endpoints individually, they are placed in an EPG and are managed as a group. EPGs are fully decoupled from the physical and logical topology, and endpoint membership in an EPG can be dynamic or static.
  • Application network profile: An application profile models the application requirements, and it is a convenient logical container for grouping EPGs.
  • Contract: The contract governs the types of endpoint group traffic that can pass between EPGs, including the protocols and ports that are allowed. If there is no contract, inter-EPG communication is disabled by default. No contract is required for intra-EPG communication. EPGs can only communicate with other EPGs according to the contract rules.
  • Filter: A filter classifies traffic based on Layer 2 to Layer 4 fields: TCP/IP header fields such as the Layer 3 protocol type, Layer 4 ports, and so on.
  • Subject: Within a contract, subjects use filters to specify the type of traffic that can be communicated, and how it occurs. Subjects determine whether filters are unidirectional or bidirectional. Contract subjects contain associations to the filters (and their directions) that are applied between EPGs that produce and consume the contract.
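
As promised above, here is a minimal sketch of how several of these logical constructs (tenant, context/VRF, bridge domain, application profile, and EPG) fit together when pushed to the APIC through its REST API. The managed-object class names (fvTenant, fvCtx, fvBD, fvAp, fvAEPg) are standard ACI classes; the APIC address, token, object names, and subnet are made-up placeholders, and contracts, subjects, and filters (vzBrCP, vzSubj, vzFilter) would be added to the same tree in the same way.

```python
import json

import requests

APIC = "https://apic.example.com"        # hypothetical APIC address
TOKEN = "<APIC-cookie-from-aaaLogin>"    # obtained via the aaaLogin call (see Part 2)

# One tenant containing a context (VRF), a bridge domain linked to that
# context with a single subnet, and an application profile with one EPG
# bound to the bridge domain.
tenant = {
    "fvTenant": {
        "attributes": {"name": "Demo_Tenant"},
        "children": [
            {"fvCtx": {"attributes": {"name": "Demo_VRF"}}},
            {"fvBD": {
                "attributes": {"name": "Demo_BD"},
                "children": [
                    # A bridge domain must link to a context...
                    {"fvRsCtx": {"attributes": {"tnFvCtxName": "Demo_VRF"}}},
                    # ...and have at least one subnet associated with it
                    {"fvSubnet": {"attributes": {"ip": "10.1.1.1/24"}}},
                ],
            }},
            {"fvAp": {
                "attributes": {"name": "Demo_ANP"},
                "children": [
                    {"fvAEPg": {
                        "attributes": {"name": "Web_EPG"},
                        "children": [
                            # The EPG is placed into the bridge domain
                            {"fvRsBd": {"attributes": {"tnFvBDName": "Demo_BD"}}},
                        ],
                    }},
                ],
            }},
        ],
    }
}

# POST the whole policy tree under the policy universe (uni)
resp = requests.post(
    f"{APIC}/api/mo/uni.json",
    data=json.dumps(tenant),
    cookies={"APIC-cookie": TOKEN},
    verify=False,  # lab-only: skips certificate validation
)
print(resp.status_code, resp.text[:200])
```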

Wrapping Up 200-155 CCNA Data Center Today!

August 17, 2018 at 1:14 pm


I wanted to make this post because so many of you have asked me about a completion date for this exciting new CBT Nuggets content. I am indeed wrapping up the final videos today! This makes it very close to release on the CBT Nuggets website. Woohoo!

I will be sure to follow up with another post in the next couple of business days on the exact date of availability on the CBT Nuggets site. By the way, the final Nugget count looks like it is going to come in at about 55! They cover the following:

  • NX-OS
  • Orchestration
  • ACI
  • UCS
  • Virtualization

and many more topics critical for the modern, Cisco-centric data center. And of course, many topics are not JUST Cisco focused. Enjoy!

Cisco ACI Introduction – Part 2 – The Architecture

August 14, 2018 at 10:13 pm

Cisco ACI

Cisco ACI attempts to reach beyond “traditional” SDN tasks and provide a new network architectural approach, one focused on programmability. This post quickly reviews the architectural components involved.

Rather impressively, the Application Centric Infrastructure (ACI) requires only three base components for operation:

Nexus 9500

This impressive device offers the following features:

  • Chassis models include 4-, 8-, and 16-slot options, each using the same line cards, chassis controllers, supervisor engines, and 80 Plus Platinum-rated power supplies
  • Parts that vary with the particular chassis are the fan trays and fabric modules (each line card must attach to all fabric modules)
  • Line cards include physical ports based on twisted-pair copper for 1/10Gbps and optical Small Form-factor Pluggable (SFP) as well as Quad Small Form-factor Pluggable (QSFP) ports for 1/10/25/40/50/100Gbps speeds
  • All ports are at line rate and have no feature dependencies by card type other than the software under which they will operate
  • Some line cards are NX-OS only (94xx, 95xx, and 96xx series), some are ACI spine only (97xx series), and still others (the latest, as of this writing, being the 97xx-EX series) will run both software operating systems
  • There are also three different models of fabric modules, based on scale: FM, FM-S, and FM-E
  • If your design requires 100Gbps support, the FM-E is the fabric module for your chassis

Nexus 9300

The 9300 series of leaf switches are the devices responsible for the bulk of the network functionality: Layer 2/Layer 3 switching at line rate, VTEP operations for VXLAN, routing protocols such as BGP, OSPF, and EIGRP, multicast, anycast gateways, and much more.

They also support a wide range of speeds in order to accommodate both modern and not-so-modern workloads found in data centers: as low as 100Mbps for legacy components in your data center, and as high as 100Gbps for uplink connectivity to the rest of the network. Sizes vary from 1 to 3 rack units, with selectable airflow intake and exhaust to match the placement, cable terminations, and airflow within any data center.

Application Centric Infrastructure Controllers

These single-rack-unit appliances are based on the UCS C-Series x86 server. They are often considered the “brains” of network operations.

The APIC offers a GUI for access, along with a fully exposed API set, giving consumers a rich set of tools with which to configure and operate an ACI fabric. The APIC is also how the leaf and spine elements are added to and retired from the fabric, and how they receive their firmware updates and patches. No more device-by-device operations or scripting: the APIC does that operational work for you via a few simple mouse clicks or via those exposed APIs.
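
To make the “fully exposed API set” concrete, here is a minimal sketch of the two most common interactions with the APIC REST API: authenticating (aaaLogin) and reading back the registered fabric membership (class fabricNode). The APIC hostname and credentials are placeholders; treat this as an illustration rather than a production script.

```python
import requests

APIC = "https://apic.example.com"   # hypothetical APIC address

session = requests.Session()
session.verify = False              # lab-only: skip certificate validation

# 1. Authenticate; the APIC answers with a token that the session
#    stores automatically as the APIC-cookie.
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

# 2. Query every registered node (leaf, spine, and controller) in the fabric.
resp = session.get(f"{APIC}/api/node/class/fabricNode.json")
for obj in resp.json()["imdata"]:
    attrs = obj["fabricNode"]["attributes"]
    print(attrs["id"], attrs["name"], attrs["role"])
```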

Protocols

ACI is based entirely on a set of existing and evolving standards whose unique and powerful capabilities provide a truly flexible, automated, scalable, and modern network to support applications.

Data Plane Protocols

Forwarding across the ACI fabric is entirely encapsulated in VXLAN, a protocol that minimizes fault domains, can stretch across a Layer 3 boundary, and uses a direct-forwarding, nonbroadcast control plane (BGP EVPN). This provides Layer 3 separation as well as Layer 2 adjacency between elements attached at one leaf and elements that reside across the fabric on another leaf.

The use of VXLAN is prevalent across the ACI fabric, within the spine and leaf switches, and even within various vSwitch elements attached to the fabric via various hypervisors. However, 802.1Q VLANs are still exposed in the ACI policy model because the vNICs of hypervisor-hosted workloads and the NICs of bare-metal servers today do not natively support VXLAN encapsulation. Therefore, 802.1Q networks still appear in ACI policy and remain valid forwarding methods at the workload NIC.
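
For readers who have not looked inside VXLAN before, the sketch below builds the base 8-byte VXLAN header from RFC 7348 (a flags byte with the “I” bit set, reserved bits, and a 24-bit VNI) and prepends it to a dummy inner Ethernet frame. Note that the ACI fabric actually uses an extended iVXLAN encapsulation carrying additional policy information, so this is only an illustration of the underlying format; the VNI value is arbitrary.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned UDP destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags, reserved, 24-bit VNI, reserved."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08                 # 'I' bit set: the VNI field is valid
    word1 = flags << 24          # flags byte followed by 24 reserved bits
    word2 = vni << 8             # 24-bit VNI followed by 8 reserved bits
    return struct.pack("!II", word1, word2)

def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the VXLAN header; the VTEP would add the outer UDP/IP/Ethernet."""
    return vxlan_header(vni) + inner_frame

# Wrap a dummy 64-byte inner frame in VNI 10001
packet = encapsulate(b"\x00" * 64, vni=10001)
print(len(packet), packet[:8].hex())   # 72 0800000000271100
```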

Control Plane Protocols

Several well-understood and well-tested protocols form the ACI control plane. Each new leaf or spine attached to the fabric uses a specific type-length-value (TLV) in a Link Layer Discovery Protocol (LLDP) signaling flow to connect with the APIC and thus register itself as a potential new addition to the fabric. Admission is not allowed until a human or some automation point adds the new leaf or spine element. This guards against the registration of switches for nefarious purposes.
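
Since fabric discovery rides on LLDP TLVs, here is a minimal sketch of the generic TLV layout itself: a 16-bit header carrying a 7-bit type and a 9-bit length, followed by the value. The ACI-specific TLV contents (fabric name, node role, and so on) are not reproduced here; type 127 is simply the standard organizationally specific TLV type, and the payload is a made-up placeholder.

```python
import struct

def encode_tlv(tlv_type: int, value: bytes) -> bytes:
    """Pack one LLDP TLV: 7 bits of type, 9 bits of length, then the value."""
    if not 0 <= tlv_type < 128 or len(value) > 511:
        raise ValueError("type must fit in 7 bits, length in 9 bits")
    header = (tlv_type << 9) | len(value)
    return struct.pack("!H", header) + value

def decode_tlv(data: bytes) -> tuple[int, bytes, bytes]:
    """Unpack one TLV; return (type, value, remaining bytes)."""
    (header,) = struct.unpack("!H", data[:2])
    tlv_type, length = header >> 9, header & 0x01FF
    return tlv_type, data[2:2 + length], data[2 + length:]

# An organizationally specific TLV (type 127) carrying a fabricated payload
frame = encode_tlv(127, b"\x00\x11\x22fabric-discovery-demo")
print(decode_tlv(frame))
```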

Forwarding across the fabric and reachability are achieved via a single-area link-state interior gateway protocol, more specifically Intermediate System to Intermediate System (IS-IS). This lends itself to massive scaling, with simplicity at the heart of the design.

Several routing protocols are supported for communicating with external routing devices at the edge of the fabric: iBGP, OSPF, and EIGRP, along with static routing, are options for achieving IP communication to and from the fabric itself. These protocols run only on the border leaf switches, which physically attach the adjacent networks to the fabric. Border leaf switches are not a special device configuration, only a designation for the edge of the ACI fabric where it connects to adjacent networks.

Because the data plane of the ACI fabric uses VXLAN, the control-plane protocol in use, as of version 3.0, is Multiprotocol BGP (MP-BGP) with EVPN. This is an enhancement over the prior use of multicast to handle broadcast, unknown unicast, and multicast (BUM) traffic across the VXLAN fabric.

OpFlex is another new control-plane protocol used in ACI. Although it is pre-standard, Cisco and a consortium of ecosystem partners have submitted it for ratification. OpFlex is designed to communicate policy intent from the APIC, and compliance or noncompliance back from a policy-enforcement element attached to the ACI fabric. For example, OpFlex is used to communicate policy between the APIC and the Application Virtual Switch (AVS). This not only demonstrates the use of OpFlex but also allows ACI policy to reach into the server virtualization hypervisor host to enforce policy defined on the APIC.

Cisco ACI Introduction – Part 1 – Industry Trends

August 10, 2018 at 5:25 pm

Cisco ACI

I have been working with ACI quite a bit as it is introduced in a class I am teaching for CBT Nuggets – 200-155 DCICT – Introducing Cisco Data Center Technologies. I should finish that class next, by the way.

In order to fully understand the Application-Centric Infrastructure from Cisco Systems, we need to understand how the IT industry is changing as far as the Data Center is concerned. We really have reached a point where enterprises are overhauling their designs and implementing true Private Clouds.

Why is this happening? It is being driven by many factors. Here are just some:

  • Application lifecycles are being broken up into much smaller windows
  • Applications are becoming less rigidly structured
  • Applications are being implemented through virtualization and hypervisors
  • Applications are being implemented through containers and microservices
  • Data flows in the data center are shifting from north-south to east-west, as application components communicate with other application components across virtualized workloads on different server hosts
  • Network equipment must become more flexible to keep up with the fast pace of change, as well as the required integration between new systems and legacy equipment

Automation and orchestration are more of a goal than ever given the factors above. This is one of the areas where ACI shines: it allows us to integrate many components and automate processes like never before. It also directly addresses security, because keeping traditional firewall configurations current would be almost impossible given the rate and scope of change these new Data Center technologies demand.

I hope this post has piqued your interest. I will be back with more great introductory material for you on ACI before we delve deep into this exciting concept.

The NX-OS CLI – Part 1

August 9, 2018 at 2:48 pm

In this video, we get you familiar and comfortable with the NX-OS CLI and NX-OS in general. This is the first of many videos on NX-OS.

NX-OS

Cisco Nexus Functional Planes – 5000 Series

June 4, 2018 at 11:10 am

Cisco Nexus

This post provides some detailed architectural examples of the Cisco Nexus functional planes we initially discussed in the post – Cisco Nexus Functional Planes.

The control plane of the Nexus 5000 series contains many components you are already familiar with as a CCNA R&S:

  • The CPU
  • DRAM
  • Boot memory
  • BIOS Flash memory
  • Internal Gigabit Ethernet ports for connectivity to the data plane components

The data plane consists of:

  • Unified Port Controllers (UPCs) – manage all packet-processing operations within the switch; these components are Layer 2 multipath capable and support classic Ethernet, Fibre Channel, and Fibre Channel over Ethernet (FCoE)
  • UPC ASIC – handles the forwarding decisions and buffering for multiple 10-Gigabit Ethernet ports
  • Unified Crossbar Fabric (UCF) – responsible for coupling ingress UPCs to available egress UPCs; the UCF internally connects each 10-Gigabit Ethernet, FCoE-capable interface through fabric interfaces running at 12 Gbps

Remember, the control plane is responsible for managing all control traffic. Data frames bypass the control plane and are managed by the UCF and the UPCs. Layer 2 control packets (BPDUs, CDP, UDLD, etc.), Layer 3 control packets (OSPF, BGP, PIM, FHRPs, etc.), and storage control packets (FLOGI frames) are managed by the control-plane supervisor.

For management access, Cisco Nexus 5000 Series switches can be managed in-band, via a single serial console port, or through a single out-of-band 10/100/1000-Mbps Ethernet management port.

Keep in mind that architectures differ across Nexus devices. For example, the Cisco Nexus 7000 Series uses a distributed control-plane approach: it has a multicore CPU on each I/O module, as well as a multicore CPU for the switch control plane on the (dual) supervisor modules. The 7000 Series switch offloads intensive tasks such as ACL and FIB programming to the I/O module CPUs, scaling control-plane capacity with the number of line cards. This avoids the supervisor CPU bottleneck that can occur in a centralized control-plane architecture.