Configuring FabricPath

August 30, 2018 at 11:44 pm

Enjoy this sample Nugget from my latest CCNA Data Center course (200-155) at CBT Nuggets!


Cisco CCNA Data Center 200-155 DCICT Arrives at CBT Nuggets

August 29, 2018 at 6:07 pm


It is here! So many of you have asked about this course and it is now live on the CBT Nuggets site!

Cisco CCNA Data Center 200-155 DCICT

Jeremy Cioara is still hard at work on the other CCNA Data Center course (200-150 DCICN), but keep in mind that if you are a CCNA R&S, much of that course (80% or so) will be review.

This course was an incredible amount of fun to create as it covers the very latest technologies found in the modern data center. The Nuggets are as follows:

1. Introduction: The CCNA Data Center
2. Introduction: Getting Your Hands on Equipment
3. Network Virtualization: Module Introduction
4. Network Virtualization: Functional Planes
5. Network Virtualization: CoPP
6. Network Virtualization: Stateful Fault Recovery
7. Network Virtualization: Virtual Routing and Forwarding
8. Network Virtualization: Default and Mgmt VRFs
9. Network Virtualization: Virtual Device Contexts
10. Network Virtualization: VDC Resources
11. Network Virtualization: VDC Context Types
12. Network Virtualization: VDC Resource Allocation
13. Network Virtualization: Managing VDCs
14. Network Virtualization: A VDC STP Example
15. Network Virtualization: Introducing Overlay Networks
16. Network Virtualization: VXLAN
17. Network Virtualization: NVGRE
18. Network Virtualization: You Down with OTV?
19. Network Virtualization: OTV Basic Operations
20. Cisco DC Networking: FEX
21. Cisco DC Networking: FEX Options
22. Cisco DC Networking: vPC
23. Cisco DC Networking: Configuring a vPC
24. Cisco DC Networking: FabricPath
25. Cisco DC Networking: Configuring FabricPath
26. Cisco DC Networking: Unified Switch Ports
27. Cisco DC Networking: Unified Fabric
28. Cisco DC Networking: FCoE
29. Unified Computing: Virtual Machines
30. Unified Computing: Hypervisors
31. Unified Computing: Installing the ESXi Hypervisor
32. Unified Computing: Using Hyper-V
33. Unified Computing: Virtual Machine Manager
34. Unified Computing: Virtual Switches
35. Unified Computing: Creating a Standard vSwitch
36. Unified Computing: Cisco 1000V
37. Unified Computing: 1000V Operations
38. Unified Computing: Shared Storage
39. Unified Computing: Configuring Shared Storage
40. Unified Computing: vMotion and Migration
41. Unified Computing: Server Types
42. Unified Computing: UCS Components
43. Unified Computing: Hardware Abstraction
44. Unified Computing: RBAC
45. Unified Computing: Basic UCS Config
46. Unified Computing: Service Profiles
47. Orchestration: Cloud Concepts
48. Orchestration: APIs
49. Orchestration: UCS Director
50. Orchestration: UCS Director Workflows
51. ACI: Architecture
52. ACI: Fabric Discovery
53. ACI: Policy Driven Model
54. ACI: The Logical Model
55. ACI: Programmability
56. ACI: Orchestration Options

Cisco ACI Introduction – Part 3 – The Logical Components

August 18, 2018 at 6:19 pm


Understanding the physical components and protocols discussed in Part 2 is critical, but so is understanding the logical constructs used within the ACI system. You might need to create some Flash Cards on these until they are second nature. A short sketch of how these constructs nest together follows the list.

  • Tenant: A tenant contains policies that give qualified users domain-based access control. Such users can be granted privileges such as tenant administration and networking administration.
  • Context: A context is a unique Layer 3 forwarding and application policy domain, equivalent to a VRF instance. A tenant can have multiple contexts.
  • Bridge domain: A bridge domain represents a Layer 2 forwarding construct within the fabric. A bridge domain must link to a context and have at least one subnet associated with it. The bridge domain defines the unique Layer 2 MAC address space and a Layer 2 flood domain if such flooding is enabled.
  • EPG: The EPG is a managed object that contains a collection of endpoints (devices that are connected to the network directly or indirectly) that have common policy requirements such as security, virtual machine mobility, QoS, or Layer 4 to Layer 7 services. Endpoints have an address (identity), a location, attributes (such as version or patch level), and a physical or virtual status. Rather than configure and manage endpoints individually, they are placed in an EPG and are managed as a group. EPGs are fully decoupled from the physical and logical topology, and endpoint membership in an EPG can be dynamic or static.
  • Application network profile: An application profile models the application requirements, and it is a convenient logical container for grouping EPGs.
  • Contract: The contract governs the types of endpoint group traffic that can pass between EPGs, including the protocols and ports that are allowed. If there is no contract, inter-EPG communication is disabled by default. No contract is required for intra-EPG communication. EPGs can only communicate with other EPGs according to the contract rules.
  • Filter: A filter matches Layer 2 to Layer 4 fields of the TCP/IP header, such as the Layer 3 protocol type and Layer 4 ports. Contracts reference filters to classify the traffic they permit.
  • Subject: Within a contract, subjects use filters to specify the type of traffic that can be communicated, and how it occurs. Subjects determine whether filters are unidirectional or bidirectional. Contract subjects contain associations to the filters (and their directions) that are applied between EPGs that produce and consume the contract.
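To see how these constructs fit together, here is a minimal sketch of how they nest in the APIC object model. The class names (fvTenant, fvCtx, fvBD, fvAp, fvAEPg, vzFilter, vzBrCP) are the published APIC classes; the tenant, EPG, and contract names are invented for this example, and the resulting JSON is the kind of payload you would POST to the APIC REST API at /api/mo/uni.json.

```python
# Sketch of the ACI logical model as nested APIC objects. Class names are the
# published APIC classes; "Blog-Demo" and friends are made up for illustration.
import json

tenant = {
    "fvTenant": {
        "attributes": {"name": "Blog-Demo"},
        "children": [
            # Context (VRF): the unique Layer 3 forwarding domain
            {"fvCtx": {"attributes": {"name": "Demo-VRF"}}},
            # Bridge domain: L2 construct linked to the context, with one subnet
            {"fvBD": {
                "attributes": {"name": "Demo-BD"},
                "children": [
                    {"fvRsCtx": {"attributes": {"tnFvCtxName": "Demo-VRF"}}},
                    {"fvSubnet": {"attributes": {"ip": "10.1.1.1/24"}}},
                ],
            }},
            # Filter: L3/L4 match criteria referenced by the contract
            {"vzFilter": {
                "attributes": {"name": "https"},
                "children": [
                    {"vzEntry": {"attributes": {
                        "name": "tcp-443", "etherT": "ip", "prot": "tcp",
                        "dFromPort": "443", "dToPort": "443"}}},
                ],
            }},
            # Contract: a subject ties the filter to the traffic it governs
            {"vzBrCP": {
                "attributes": {"name": "web-access"},
                "children": [
                    {"vzSubj": {
                        "attributes": {"name": "https-subj"},
                        "children": [
                            {"vzRsSubjFiltAtt": {"attributes": {"tnVzFilterName": "https"}}},
                        ],
                    }},
                ],
            }},
            # Application network profile containing one EPG that provides the contract
            {"fvAp": {
                "attributes": {"name": "Demo-App"},
                "children": [
                    {"fvAEPg": {
                        "attributes": {"name": "Web-EPG"},
                        "children": [
                            {"fvRsBd": {"attributes": {"tnFvBDName": "Demo-BD"}}},
                            {"fvRsProv": {"attributes": {"tnVzBrCPName": "web-access"}}},
                        ],
                    }},
                ],
            }},
        ],
    }
}

print(json.dumps(tenant, indent=2))
```

The nesting mirrors the definitions above: the tenant owns the context and bridge domain, the EPG lives inside an application profile and binds to the bridge domain, and the contract's subject references the filter that defines what traffic other EPGs may exchange with Web-EPG.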

Wrapping Up 200-155 CCNA Data Center Today!

August 17, 2018 at 1:14 pm


I wanted to make this post because so many of you have asked me about a completion date for this exciting new CBT Nuggets content. I am indeed wrapping up the final videos today! This makes it very close to release on the CBT Nuggets website. Woohoo!

I will be sure to follow up with another post in the next couple of business days on the exact date of availability on the CBT Nuggets site. By the way, the final Nugget count looks like it is going to come in at about 55! They cover the following:

  • NX-OS
  • Orchestration
  • ACI
  • UCS
  • Virtualization

and many more topics critical for the modern, Cisco-centric data center. And of course, many topics are not JUST Cisco focused. Enjoy!

Cisco ACI Introduction – Part 2 – The Architecture

August 14, 2018 at 10:13 pm


Cisco ACI attempts to reach beyond "traditional" SDN tasks and provide a new network architectural approach, one focused on programmability. This post quickly reviews the architectural components involved.

Rather impressively, the Application Centric Infrastructure (ACI) requires only three base components for operation:

Nexus 9500

This impressive device offers the following features:

  • Chassis models include 4-, 8-, and 16-slot options, each using the same line cards, chassis controllers, supervisor engines, and 80% efficient power supplies
  • Individualized parts, based on the particular chassis, are fan trays and fabric modules (each line card must attach to all fabric modules)
  • Line cards include physical ports based on twisted-pair copper for 1/10Gbps and optical Small Form-factor Pluggable (SFP) as well as Quad SFP (QSFP) transceivers for 1/10/25/40/50/100Gbps port speeds
  • All ports run at line rate, and their features do not depend on the card type, only on the software under which the card operates
  • Some line cards are NX-OS only (94xx, 95xx, 96xx series), some are ACI spine only (96xx series), and still others (the newest, as of this writing, being the 97xx-EX series) can run both software operating systems
  • There are also three different models of fabric modules, based on scale: FM, FM-S, and FM-E
  • If your design requires 100Gbps support, the FM-E is the fabric module for your chassis

Nexus 9300

The 9300 series of leaf switches are the devices responsible for the bulk of the network functionality: switching L2/L3 at line rate, supporting VTEP operations for VXLAN, routing protocols such as BGP, OSPF, and EIGRP, multicast, anycast gateways, and much more.

They also support a wide range of speeds to accommodate both modern and not-so-modern workloads found in data centers: as low as 100Mbps for legacy components in your data center, and as high as 100Gbps for uplink connectivity to the rest of the network. Sizes vary from 1 to 3 rack units, with selectable airflow direction (intake and exhaust) to match placement, cable termination, and airflow design within any data center.

Application Centric Infrastructure Controllers

These single rack-unit appliances are based on the UCS C-Series x86 server and are often considered the "brains" of network operations.

The APIC offers a GUI for access, along with a fully exposed API set, giving consumers a rich set of tools with which to configure and operate an ACI fabric. The APIC is also how leaf and spine elements are added to and retired from the fabric, and how they receive firmware updates and patches. No more device-by-device operations or scripting: the APIC does all of that operational work for you via a few simple mouse clicks or via those exposed APIs.
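As a quick, hedged illustration of that exposed API set (my own minimal sketch, not course material): authenticate to the APIC, then list the fabric nodes it manages. The aaaLogin and fabricNode class endpoints are part of the published APIC REST API; the hostname and credentials are placeholders.

```python
# Minimal sketch: log in to an APIC and list its registered fabric nodes.
# apic.example.com and the credentials are placeholders.
import requests

APIC = "https://apic.example.com"

session = requests.Session()
session.verify = False  # lab-style shortcut; use real certificate validation in production

# Authenticate; the session keeps the APIC-cookie token returned by aaaLogin
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

# Read every registered node in the fabric (leaf, spine, controller)
resp = session.get(f"{APIC}/api/node/class/fabricNode.json")
resp.raise_for_status()

for item in resp.json()["imdata"]:
    node = item["fabricNode"]["attributes"]
    print(node["id"], node["role"], node["name"], node["serial"])
```

The same authenticate-then-GET/POST pattern of managed-object JSON is what orchestration tools build on when they drive the fabric through the API instead of the GUI.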

Protocols

ACI is built entirely on a set of existing and evolving standards, and it is this combination that enables the unique and powerful capabilities behind a truly flexible, automated, scalable, and modern network for supporting applications.

Data Plane Protocols

Forwarding across the ACI fabric is entirely encapsulated in VXLAN. VXLAN minimizes fault domains, can stretch across an L3 boundary, and uses a direct-forwarding, nonbroadcast control plane (BGP EVPN). This provides L3 separation as well as L2 adjacency between elements attached at one leaf and elements that reside across the fabric on another leaf.

The use of VXLAN is prevalent across the ACI fabric, within the spine and leaf switches, and even within various vSwitch elements attached to the fabric via various hypervisors. However, 802.1Q VLANs are still exposed in the ACI policy model because the vNIC of a hypervisor-hosted workload and the NICs of today's bare-metal servers do not natively support VXLAN encapsulation. Therefore, 802.1Q networks still appear in ACI policy and remain valid forwarding methods at the workload NIC.
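To make the encapsulation itself concrete, here is a small sketch (my own, based on RFC 7348, not course material) of the 8-byte VXLAN header: a flags byte, reserved bits, and the 24-bit VNI that gives VXLAN roughly 16 million segments versus 4094 usable 802.1Q VLANs, all carried over UDP port 4789.

```python
# Pack and unpack the 8-byte VXLAN header (RFC 7348) to show the 24-bit VNI
# that lets VXLAN scale far beyond the 12-bit 802.1Q VLAN ID space.
import struct

VXLAN_PORT = 4789            # IANA-assigned UDP port for VXLAN
VXLAN_FLAG_VNI_VALID = 0x08  # "I" flag: the VNI field is valid

def build_vxlan_header(vni: int) -> bytes:
    """Pack flags, 24 reserved bits, the 24-bit VNI, and 8 reserved bits."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", VXLAN_FLAG_VNI_VALID << 24, vni << 8)

def parse_vni(header: bytes) -> int:
    """Extract the 24-bit VNI from a VXLAN header."""
    _, second_word = struct.unpack("!II", header)
    return second_word >> 8

header = build_vxlan_header(10010)
print(len(header), parse_vni(header))   # 8 10010
print(f"{2**24 - 1:,} possible VNIs vs 4,094 usable VLAN IDs")
```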

Control Plane Protocols

Several well-understood and well-tested protocols form the ACI control plane. Each new leaf or spine attached to the fabric uses a specific type-length-value in a Link Layer Discovery Protocol (LLDP) exchange to connect with the APIC and register itself as a potential new addition to the fabric. Admission is not allowed until a human or some automation point adds the new leaf or spine element, which guards against the registration of switches for nefarious purposes.
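For that "human or some automation point" step, the sketch below shows what admission can look like through the API, assuming the fabricNodeIdentP registration class and reusing an authenticated session like the one in the APIC example above; the serial number, node ID, and switch name are placeholders.

```python
# Sketch of admitting a discovered switch by binding its serial number to a
# node ID and name. fabricNodeIdentP is the APIC node-registration class;
# `session` and `APIC` are assumed to come from the earlier login example.
registration = {
    "fabricNodeIdentP": {
        "attributes": {
            "serial": "FDO12345678",  # serial learned during LLDP discovery
            "nodeId": "101",          # operator-assigned fabric node ID
            "name": "leaf-101",
        }
    }
}

session.post(f"{APIC}/api/mo/uni/controller/nodeidentpol.json",
             json=registration).raise_for_status()
```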

Forwarding across the fabric and reachability are achieved via a single-area link-state interior gateway protocol, more specifically Intermediate System to Intermediate System (IS-IS). This lends itself to massive scaling, with simplicity at the heart of the design.

Several routing protocols are supported for communicating with external routing devices at the edge of the fabric: iBGP, OSPF, and EIGRP, along with static routing, are all options for achieving IP communication to and from the fabric itself. These protocols run only on the border leaf switches, which physically attach the adjacent networks to the fabric. A border leaf is not a special device configuration, only a designation for the edge of the ACI fabric where it connects to adjacent networks.

Because the data plane of the ACI fabric uses VXLAN, the control plane protocol in use, as of version 3.0, is Multi-Protocol BGP with EVPN. This provides an enhancement over the prior use of multicast to deal with control-plane traffic needs around broadcast, unknown unicast, and multicast (BUM) traffic across the VXLAN fabric.

OpFlex is another new control-plane protocol used in ACI. Although it is pre-standard, Cisco and a consortium of ecosystem partners have submitted it for ratification. OpFlex is designed to communicate policy intent from the APIC and compliance or noncompliance from a policy-enforcement element attached to the ACI fabric. For example, OpFlex is used to communicate policy between the APIC and the Application Virtual Switch (AVS), which not only demonstrates the use of OpFlex but also allows ACI policy to reach into the server virtualization hypervisor host to enforce policy defined on the APIC.