The NX-OS CLI – Part 1

August 9, 2018 at 2:48 pm

In this video, we get you familiar and comfortable with the NX-OS CLI and NX-OS in general. This is the first of many videos on NX-OS.

NX-OS

Your 100-105 ICND1 Study Tracker

July 11, 2018 at 10:30 am

ICND1

If you know anything about my approach to certifications, you will know that I am a huge proponent of building a tracker to make sure you are studying the right topics, to measure your progress, and to keep yourself accountable. Here is a tracker you can copy for ICND1, along with a small script sketch (after the topic list) for turning it into a spreadsheet.

NOTE: I only show Section 1 below – you need to click the READ MORE link in order to see the other sections 2 – 5.

1 Network Fundamentals
1.1 Compare and contrast OSI and TCP/IP models
1.2 Compare and contrast TCP and UDP protocols
1.3 Describe the impact of infrastructure components in an enterprise network
1.3.a Firewalls
1.3.b Access points
1.3.c Wireless controllers
1.4 Compare and contrast collapsed core and three-tier architectures
1.5 Compare and contrast network topologies
1.5.a Star
1.5.b Mesh
1.5.c Hybrid
1.6 Select the appropriate cabling type based on implementation requirements
1.7 Apply troubleshooting methodologies to resolve problems
1.7.a Perform fault isolation and document
1.7.b Resolve or escalate
1.7.c Verify and monitor resolution
1.8 Configure, verify, and troubleshoot IPv4 addressing and subnetting
1.9 Compare and contrast IPv4 address types
1.9.a Unicast
1.9.b Broadcast
1.9.c Multicast
1.10 Describe the need for private IPv4 addressing
1.11 Identify the appropriate IPv6 addressing scheme to satisfy addressing requirements in a LAN/WAN environment
1.12 Configure, verify, and troubleshoot IPv6 addressing
1.13 Configure and verify IPv6 Stateless Address Auto Configuration
1.14 Compare and contrast IPv6 address types
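If you would rather track this in a spreadsheet, here is a minimal Python sketch (the column names are just my suggestion, not part of the blueprint) that turns topic lines like the ones above into a CSV tracker you can open in Excel or Google Sheets:

```python
import csv

# A few Section 1 topics from the blueprint above; extend this with sections 2 - 5.
topics = [
    ("1.1", "Compare and contrast OSI and TCP/IP models"),
    ("1.2", "Compare and contrast TCP and UDP protocols"),
    ("1.8", "Configure, verify, and troubleshoot IPv4 addressing and subnetting"),
]

# Write a tracker with columns you fill in as you study.
with open("icnd1_tracker.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Topic", "Description", "Studied (Y/N)", "Confidence (1-5)", "Notes"])
    for number, description in topics:
        writer.writerow([number, description, "N", "", ""])
```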

Cisco Nexus Stateful Fault Recovery

May 28, 2018 at 10:06 pm

CCNA Data Center

As we discussed in the previous post in this category, Cisco NX-OS Software provides isolation between the control and data planes within a Nexus device. This isolation means that a failure within one plane does not disrupt the other plane. Great!

In this post, let’s elaborate on the fact that a process can not only be restarted if it fails but restarted statefully, meaning the process retains information that existed prior to the failure.

When a restartable service fails, it is restarted on the same supervisor. If the new instance of the service determines that the operating system abnormally terminated the previous instance, the service then determines whether a persistent context exists.

The initialization of the new instance attempts to read the persistent context to build a run-time context that makes the new instance appear like the previous one. After the initialization is complete, the service resumes the tasks that it was performing when it stopped. During the restart and initialization of the new instance, other services are unaware of the service failure. Any messages that are sent to the failed service by other services are available from the Message and Transaction Service (MTS) when the service resumes.

The success of the new instance in surviving the stateful initialization depends on the cause of the previous instance’s failure. If the service cannot survive a few subsequent restart attempts, the restart is considered failed.

In cases where the stateful restart fails, the System Manager performs the action specified by the service’s high-availability (HA) policy. This action forces one of the following:

  • Stateless restart
  • No restart
  • A supervisor switchover
  • A reset

During a successful stateful restart, there is no delay while the system reaches a consistent state. Stateful restarts reduce the system recovery time after a failure.
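To make this escalation concrete, here is a minimal Python sketch of the decision logic (the names and the three-attempt threshold are purely illustrative assumptions, not the actual NX-OS internals):

```python
from enum import Enum

class HAPolicy(Enum):
    """Illustrative stand-ins for the HA policy actions listed above."""
    STATELESS_RESTART = "stateless restart"
    NO_RESTART = "no restart"
    SUPERVISOR_SWITCHOVER = "supervisor switchover"
    RESET = "reset"

MAX_STATEFUL_ATTEMPTS = 3  # assumed threshold, for illustration only

def next_action(failed_stateful_attempts: int, policy: HAPolicy) -> str:
    # Keep attempting stateful restarts until the threshold is reached...
    if failed_stateful_attempts < MAX_STATEFUL_ATTEMPTS:
        return "stateful restart"
    # ...then fall back to whatever the service's HA policy specifies.
    return policy.value

print(next_action(1, HAPolicy.SUPERVISOR_SWITCHOVER))  # stateful restart
print(next_action(3, HAPolicy.SUPERVISOR_SWITCHOVER))  # supervisor switchover
```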

Let’s examine a step-by-step example of stateful restart in action (a small code sketch of the flow follows the list)!

  1. During normal operation, the running services make a checkpoint of their run-time state information to the Persistent Storage Service (PSS)
  2. During normal operation, the system manager monitors the health of the running services using heartbeats
  3. The service encounters a fatal error
  4. The system manager restarts the service instantly when it crashes or stops responding
  5. After restarting, the service recovers its state information from the PSS and resumes all pending transactions
  6. If the service does not resume a stable operation after multiple restarts, the system manager initiates a reset or switchover of the supervisor
  7. Cisco NX-OS collects the process stack and core for debugging purposes with an option to transfer core files to a remote location
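To tie steps 1, 2, and 5 together, here is a minimal Python sketch of the checkpoint-and-recover flow (everything here, including the dictionary-backed stand-in for the PSS, is a simplification for illustration):

```python
import time

class PersistentStorageService:
    """Stand-in for the PSS: outlives service instances, modeled here as a dict."""
    def __init__(self):
        self._store = {}

    def checkpoint(self, service: str, state: dict) -> None:
        self._store[service] = dict(state)  # copy so later mutations do not leak in

    def recover(self, service: str) -> dict:
        return dict(self._store.get(service, {}))

class Service:
    def __init__(self, name: str, pss: PersistentStorageService):
        self.name = name
        self.pss = pss
        # Step 5: on (re)start, rebuild the run-time context from the persistent context.
        self.state = pss.recover(name)
        self.last_heartbeat = time.monotonic()

    def do_work(self) -> None:
        self.state["transactions"] = self.state.get("transactions", 0) + 1
        # Step 1: checkpoint run-time state during normal operation.
        self.pss.checkpoint(self.name, self.state)
        # Step 2: heartbeat so the system manager can monitor health.
        self.last_heartbeat = time.monotonic()

pss = PersistentStorageService()
svc = Service("ospf", pss)
svc.do_work()
svc.do_work()
del svc                     # steps 3-4: the service fails and is restarted...
svc = Service("ospf", pss)  # ...and the new instance initializes from the PSS
print(svc.state)            # {'transactions': 2}
```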

I hope this has been informative for you, and I would like to thank you for reading!

Cisco Nexus Functional Planes

May 26, 2018 at 10:12 pm

Cisco Nexus

One of the key Cisco Nexus switch features for ensuring high availability and high performance is the separation of traffic, and the processing of traffic, into what are called planes. The three main planes are:

  • Data
  • Control
  • Management

Data refers to packets being transferred between systems – for example, the packets that make up a website that a client is accessing. Control traffic is the traffic that makes the infrastructure functional and intelligent – for example, Spanning Tree Protocol traffic at Layer 2 and OSPF traffic at Layer 3. Finally, management traffic might consist of SSH access and SNMP packets.

Notice the illustration above – it shows different traffic forms flowing through the device. From the bottom up, these flows are data, services, control, and management traffic. Notice how interface Access Control Lists can restrict all of these traffic forms on ingress. Control Plane Policing (CoPP) permits the limiting of control, services, and management traffic to ensure the CPU does not experience a Denial of Service (malicious or otherwise) during periods of heavy network activity.
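Conceptually, CoPP is a rate limiter standing between these traffic classes and the CPU. Here is a minimal token-bucket sketch in Python of that idea (the class names and rates are made up for illustration – real CoPP policies are configured per traffic class on the switch itself):

```python
import time

class TokenBucket:
    """Admit packets at up to `rate` per second, with bursts up to `burst`."""
    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def admit(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # packet is punted to the CPU
        return False      # packet is dropped before it can starve the CPU

# One policer per plane; packets-per-second figures are illustrative only.
policers = {
    "control": TokenBucket(rate=5000, burst=1000),
    "management": TokenBucket(rate=1000, burst=200),
}

def punt_to_cpu(traffic_class: str) -> bool:
    return policers[traffic_class].admit()
```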

Notice also from the graphic the intentional separation of the control plane traffic and the data traffic. By design, the data traffic is switched through the system while bypassing the control plane. This adds stability and performance to the system.

Something else to consider in the Nexus architecture is the ability for failed services to restart without (hopefully) affecting forwarding on the device. A System Manager watches over the processes running on the system and can restart them in a stateful manner (governed by a setting called the HA policy). The process can restart with state information thanks to a Persistent Storage Service (PSS) that holds the previous state information for the process.

This post represents a high-level overview of a subject covered in detail in the 200-155 course at CBT Nuggets, releasing in June of 2018.

CCNA Data Center DCICT (200-155) CBT Nuggets Outline

May 23, 2018 at 7:05 pm

200-155

By popular demand – here is the rough outline of the exciting new CCNA Data Center course I am working on at CBT Nuggets!

  1. Introduction: The CCNA Data Center
  2. Introduction: Getting Your Hands on Equipment
  3. Network Virtualization: Module Introduction
  4. Network Virtualization: Functional Planes
  5. Network Virtualization: Default and Management VRFs
  6. Network Virtualization: OTV
  7. Network Virtualization: NVGRE
  8. Network Virtualization: VXLAN
  9. Network Virtualization: Troubleshooting VDC STP
  10. Networking Technologies: Module Introduction
  11. Networking Technologies: Configuring FEX
  12. Networking Technologies: Configuring vPC
  13. Networking Technologies: Configuring FabricPath
  14. Networking Technologies: Configuring Unified Switch Ports
  15. Networking Technologies: Benefits of the Unified Fabric
  16. Networking Technologies: RBAC
  17. Unified Computing: Module Introduction
  18. Unified Computing: Server Types
  19. Unified Computing: Connectivity
  20. Unified Computing: Cisco UCS
  21. Unified Computing: Hardware Abstraction
  22. Unified Computing: Configuring High Availability
  23. Unified Computing: Configuring Port Roles
  24. Unified Computing: Configuring Hardware Discovery
  25. Unified Computing: Hypervisors
  26. Unified Computing: Virtual Switches
  27. Unified Computing: Shared Storage
  28. Unified Computing: VM Components
  29. Unified Computing: Virtual Machine Manager
  30. Automation and Orchestration: Module Introduction
  31. Automation and Orchestration: Using APIs
  32. Automation and Orchestration: Cloud Computing
  33. Automation and Orchestration: UCS Director
  34. Automation and Orchestration: Troubleshooting a UCS Director Workflow
  35. Application Centric Infrastructure: Module Introduction
  36. Application Centric Infrastructure: The ACI Environment
  37. Application Centric Infrastructure: ACI Fabric Discovery
  38. Application Centric Infrastructure: The ACI Deployment Model
  39. Application Centric Infrastructure: The ACI Logical Model

DCICT (200-155) Unified Computing Server Types and Connectivity

May 19, 2018 at 6:34 pm

NOTE: This post discusses just a fraction of the incredible content covered in my upcoming DCICT course for CBT Nuggets.

I can recall my shock in 2009 when Cisco Systems entered the server hardware market! I suppose it was similar to when I saw Amazon try (and sadly fail) at making smartphones. A Vice President at HP certainly was surprised and famously stated: “A year from now the difference will be (Cisco) UCS (Unified Compute System) is dead and we have had phenomenal market share growth in the networking space.” Fortunately for Cisco Systems, he could not have been more wrong. In the 4th quarter of 2016 alone, Cisco did nearly $1 billion in server sales!

Cisco not only entered this market but has already produced several variants, including rack-mount servers, blade servers, and the hyper-converged Unified Computing System (UCS), of which there have already been three generations of technology.

In this post, let’s take a high-level look at the main products and technologies that make up the Cisco UCS umbrella.

Management Software

  • Cisco UCS Manager – this is the software for managing a single UCS domain. Don’t assume this necessarily means a small environment, however, since a single management domain can contain up to 160 blade or rack-mount servers, and each of those servers could be running a huge number of virtual servers and/or containers. You have options when working with this software thanks to a GUI (Graphical User Interface), an XML Application Programming Interface (XML API), and a Command-Line Interface (CLI); a short API sketch follows this list.

The Cisco UCS Manager GUI

  • Cisco UCS Central Software – this software permits you to manage multiple domains located in the same campus, or even distributed worldwide. This provides the scalability required for very large enterprises.
  • Cisco UCS Director Software – since the Cisco UCS world includes many different integrated systems featuring equipment from the likes of EMC, Hitachi, and more, the UCS Director software helps you automate integrated infrastructure orchestration and management. Elements managed by this software include networking, compute hardware, operating systems, virtual machines, and storage.
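As a quick taste of that XML API, here is a minimal Python sketch (the host name and credentials are placeholders, and error handling is omitted) that logs in, lists the blade servers in a domain, and logs out:

```python
import requests
import xml.etree.ElementTree as ET

# /nuova is the UCS Manager XML API endpoint; the host is a placeholder.
UCSM = "https://ucsm.example.com/nuova"

# Log in and capture the session cookie (verify=False only because lab
# UCS Managers commonly use self-signed certificates).
resp = requests.post(UCSM, data='<aaaLogin inName="admin" inPassword="password"/>', verify=False)
cookie = ET.fromstring(resp.text).attrib["outCookie"]

# Query every object of class computeBlade in the domain.
query = f'<configResolveClass cookie="{cookie}" classId="computeBlade" inHierarchical="false"/>'
resp = requests.post(UCSM, data=query, verify=False)
for blade in ET.fromstring(resp.text).iter("computeBlade"):
    print(blade.get("dn"), blade.get("model"), blade.get("operState"))

# Clean up the session.
requests.post(UCSM, data=f'<aaaLogout inCookie="{cookie}"/>', verify=False)
```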

Connection Technologies

  • Cisco SingleConnect Technology – connect your LAN, SAN, and management networks using one physical connection. Remember, this includes the connectivity for both your physical and virtual servers.
  • Cisco Direct Connect Technology – this advancement permits you to connect various servers in your overall system directly to the Fabric Interconnects. This allows you to manage these servers using a single cable for both management and data traffic. If you are not familiar with the UCS Fabric Interconnects, these are described below.

UCS Series Hardware

  • Cisco UCS Blade Server Chassis – these chassis mount in industry-standard racks and use standard front-to-back cooling. They are flexible in that they accommodate full-width blade servers, half-width blades, or a mix of the two in the same chassis. Cisco’s goal with these UCS chassis was to feature fewer physical components, eliminate the need for independent management of systems, and increase energy efficiency.

The Cisco UCS 5108 Blade Server Chassis

  • Cisco UCS Fabric Extenders – the idea here is to scale the system without unnecessary complexity. Fabric Extenders bring the unified fabric into the blade server enclosure, providing multiple 10 Gigabit Ethernet connections between blade servers and the fabric interconnect, simplifying diagnostics, cabling, and management. As the name implies, this device extends the I/O fabric between the Fabric Interconnects (covered below) and the Cisco UCS Blade Server Chassis, enabling a lossless and deterministic Fibre Channel over Ethernet (FCoE) fabric to connect all blades and chassis together. Since the fabric extender is similar to a distributed line card, it does not perform any switching and is managed as an extension of the Fabric Interconnects. This approach removes switching from the chassis, reducing overall infrastructure complexity and enabling Cisco UCS to scale to many chassis without multiplying the number of switches needed, reducing TCO and allowing all chassis to be managed as a single, highly available management domain. The Cisco UCS Fabric Extenders also help manage the chassis environment (the power supplies and fans as well as the blades) in conjunction with the Fabric Interconnects, so separate chassis management modules are not required. The Fabric Extenders fit into the back of the Cisco UCS Blade Server Chassis, and each chassis can support up to two of them for increased capacity and redundancy.

The Rear of the Chassis with Fabric Extenders Installed

  • Cisco UCS Fabric Interconnects – these critical devices provide a single point of connectivity and management for the overall UCS system. Because they are such critical components, they are typically deployed in redundant pairs. As an example, consider the 6332 Fabric Interconnect, which provides:
    • LAN and SAN connectivity for all servers within their domains
    • Bandwidth up to 2.56 Tbps (32 ports × 40 Gbps = 1.28 Tbps per direction, doubled for full duplex)
    • 32 40-Gbps ports in one rack unit (1RU)
    • Support for 4×10-Gbps breakout cables
    • Ports capable of line-rate, low-latency, lossless 40 Gigabit Ethernet and Fibre Channel over Ethernet (FCoE)
    • Centralized unified management with Cisco UCS Manager
    • Efficient cooling and serviceability

The Cisco UCS 6332 Fabric Interconnect

  • Cisco Nexus Fabric Extenders – these optional components are third-generation devices that support LAN and SAN connectivity to the UCS system. They offer ultra-high and flexible bandwidth options. Thanks to the Nexus Fabric Extenders, you can take advantage of the latest data center technologies including:
    • Virtual Port Channels
    • Enhanced Virtual Port Channels
    • FabricPath
    • Unified Fabric
    • Application-Centric Infrastructure
    • Virtual Extensible LAN (VXLAN)-based topologies
    • Versatile TCAM
  • Cisco R-Series Racks – these racks are optimized for Cisco UCS, featuring a custom design for Cisco infrastructure, including computing, network, and power, while complying with EIA-310-D rack standards.
  • Cisco UCS B-Series Blade Servers – the approach here is a blade server for any purpose and any scale. Models are tailored for scale-out, enterprise-class, or mission-critical deployments. As an example, the enterprise-class Cisco UCS B480 M5 Blade Server delivers support for the Intel Xeon Scalable processors; up to 6 terabytes (TB) of memory; four SAS, SATA, and NVMe drives; M.2 storage; up to four GPUs; and 160 Gigabit Ethernet connectivity for I/O throughput.
  • Cisco UCS C-Series Rack Servers – again, an approach for various workloads and scale. Consider the Cisco UCS® C480 M5 Rack Server that delivers:
    • A 4RU form-factor
    • The latest Intel Xeon Scalable processors with up to 28 cores per socket and support for two- or four-processor configurations
    • 2666-MHz DDR4 memory and 48 DIMM slots for up to 6 terabytes (TB) of total memory
    • 12 PCI Express (PCIe) 3.0 slots
    • Six x8 full-height, full-length slots
    • Six x16 full-height, full-length slots
    • Flexible storage options with support for up to 32 Small-Form-Factor (SFF) 2.5-inch SAS, SATA, and PCIe NVMe disk drives
    • Cisco 12-Gbps SAS Modular RAID Controller in a dedicated slot
    • Internal Secure Digital (SD) and M.2 boot options
    • Dual embedded 10 Gigabit Ethernet LAN-On-Motherboard (LOM) ports
  • Cisco UCS Virtual Interface Cards (VICs) – as described above, these interface cards permit simplified computing connectivity thanks to Cisco SingleConnect support, which unifies LAN, SAN, and systems management into one simplified link for rack servers, blade servers, and virtual machines. Second- and third-generation cards even feature lower latency thanks to usNIC technology. usNIC (user-space NIC) is Cisco’s low-latency networking product for Message Passing Interface (MPI) over 10 Gigabit Ethernet in high-performance computing. It operates at the OSI model’s data link layer (Ethernet frames) or the network layer (UDP packets) to eliminate the overhead of TCP within a data center.
  • Cisco UCS Invicta Series – while officially End of Life from Cisco Systems, you still might find mention of these servers in certification materials, and of course you might find them installed in the field. The idea behind these servers is ultra-fast performance through the use of NAND flash memory for sustained high throughput, a high rate of I/O operations per second (IOPS), ultra-low latency, and fast write performance.
  • Cisco Integrated Infrastructure – through partnerships with other networking giants, Cisco has offered integrated systems including:
    • FlexPod – a pre-validated data center platform built on Cisco UCS, the Cisco Nexus family of switches, and NetApp data management solutions
    • Vxblock Systems – provide a wide range of solutions to meet requirements for size, performance, and scalability; built with compute and networking from Cisco, storage from Dell EMC, and virtualization technologies from VMware
    • Cisco Solutions for EMC VSPEX
    • Nimble Storage SmartStack
    • Cisco Solutions for Hitachi UCP Select

VIRL ASAv with ASDM

April 16, 2018 at 6:10 pm

VIRL

I had many requests to demonstrate how to use the ASDM GUI to manage an ASAv running inside of VIRL. Here is the video demonstration of how to do it. Enjoy!