Virtual Edge Deployment Guide (Switch Hosted)

 

This guide applies to Cloud Control Center (CCC) version 15.5.0 and newer; ensure your CCC is updated to at least version 15.5.0 before proceeding.

This article walks through the steps to onboard, configure, and delete Virtual Edges in the Elisity Platform. Elisity Virtual Edge (Switch Hosted) is a Docker container-based implementation of Elisity microsegmentation software that runs on a Cisco Catalyst 9000 series switch, leveraging the switch's integrated application hosting functionality.

Currently, all Cisco Catalyst 9300, 9300L, and 9400 models support hosting the Elisity Virtual Edge container using Cisco Application Hosting. Cisco StackWise™ switch stacking technology is also supported. Additional switch models will be supported in future releases; please see the switch compatibility matrix for more details.

  • Switches running Elisity Virtual Edge must be equipped with a supported storage device such as the SSD-120G or C9400-SSD-240GB (M.2) module. Front-panel USB and internal flash are not supported. Catalyst 9400 series switches require the installation of an M.2 SSD, which requires a switch reboot. See the document here for installation instructions and the document here for verification.
  • All Catalyst 9000 series switches require DNA Advantage licensing. This requirement is not unique to the Elisity Virtual Edge container; it is imposed by Cisco on the application hosting environment within IOS-XE.
  • The Elisity Virtual Edge has been developed using IOS-XE version 17.6.1. While it may work with earlier versions of IOS-XE, we cannot guarantee that it will operate correctly.
  • All switches running Elisity Virtual Edge must have their clocks synchronized with the Active Directory server so that attachment events are displayed accurately. You can use your own NTP server or a public one such as time.google.com.
  • If your switch is currently hosting another application such as ThousandEyes, please contact your Elisity account team for assistance in appropriately sizing switch compute resources.
  • When a switch hosting Virtual Edge is deployed using the recommended configuration, it can support up to 16 Virtual Edge Nodes (access switches).
CATALYST 9400 SPECIFIC NOTE:
  • Catalyst 9400 series switches must have application hosting verification disabled by issuing the app-hosting verification disable command.  
  • Catalyst 9410 series switch. When application hosting is in use, port 48 of a 48-port linecard in slot 4 must remain in its default shutdown state; if that port is active, application hosting is rejected. Once the port is disabled for application hosting, port 4/0/48 is marked inactive and will not come up even if it is populated. If slot 4 is empty or holds a 24-port linecard, no ports are disabled. See this document for more information.
  • Catalyst 9410 series switch. To enable the AppGigabitEthernet interface for application hosting, configure the enable command in interface configuration mode. See this document for more information.
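
Putting the clock requirement above into practice, a minimal NTP configuration and verification might look like the following (time.google.com is simply the public server suggested above; substitute your own NTP server if you have one):

switch(config)# ntp server time.google.com
switch(config)# end

switch# show ntp status

The show ntp status output should indicate that the clock is synchronized before you proceed with onboarding.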

The following chart describes the terminology used in this document:

  • Cloud Control Center: Elisity's cloud-native and cloud-delivered control, policy, and management plane.
  • Virtual Edge: The Elisity software running as a Docker container on an access or aggregation switch that supports Application Hosting functionality.
  • Virtual Edge Node: An access switch onboarded to a Virtual Edge to be leveraged as an enforcement point in the network.

 

Deploying Elisity Virtual Edge (Switch Hosted)

The Elisity Virtual Edge container has a single virtual interface used to communicate with Cloud Control Center as well as with Virtual Edge Nodes. In more detail, the Virtual Edge virtual interface is used to maintain a persistent control plane connection to Cloud Control Center in order to receive identity based policies as well as to send identity metadata and analytics to Cloud Control Center. This same interface is used to glean identity metadata, traffic analytics and other switch information from the Virtual Edge Nodes and to read the Catalyst configuration and configure security policies, traffic filters and other switch functions. 

Elisity Virtual Edge supports both a 1:1 and a 1:many model. In other words, you can deploy a Virtual Edge on every access switch that supports application hosting functionality and onboard that same switch as a Virtual Edge Node, or you can deploy a Virtual Edge on an aggregation switch that supports application hosting functionality and onboard many access switches as Virtual Edge Nodes. The 1:many model is beneficial when the access switches to be onboarded do not support application hosting (e.g., Catalyst 3850 or Catalyst 9200), but any supported switch can be onboarded this way. Both models are depicted below:

 

Initial Requirements Check

Step 1: To deploy Elisity Virtual Edge on a Catalyst 9000 series switch, first ensure that the switch is running a Network Advantage license with the DNA Advantage add-on. Check the current license level from exec mode, then set the boot level under global configuration mode if needed:

 

switch# show license summary

! check the license level first, then set it if needed

switch(config)# license boot level network-advantage addon dna-advantage

! note: a license boot-level change takes effect only after write memory and a reload
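
For reference, on a correctly licensed switch the summary should report both licenses as in use. The output below is illustrative only; entitlement tags vary by platform and IOS-XE release:

switch# show license summary
License Usage:
  License                 Entitlement Tag               Count Status
  --------------------------------------------------------------------
  network-advantage       (C9300-24 Network Advantage)  1     IN USE
  dna-advantage           (C9300-24 DNA Advantage)      1     IN USE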


Step 2:
If the switch hosting the Virtual Edge container is also going to be onboarded as a Virtual Edge Node, you should have either a user account with privilege 15 configured or TACACS/RADIUS login configured to provide privilege 15 level access. This is needed for the Virtual Edge to authenticate with the host switch. Execute the following command under global configuration mode if a local account is being used and is not already configured:

 

switch(config)# username <username> privilege 15 secret 0 <password>
Note: Special characters in your RADIUS/TACACS passwords can cause issues with Cisco RESTCONF or scripting for certain activities (such as troubleshooting or upgrade procedures).

 

Adding the Virtual Edge in Cloud Control Center


Step 3:
 Log into Cloud Control Center and navigate to Virtual Edges > Add Virtual Edge. The drop-down menu gives you the option to Add Single Virtual Edge or Add Multiple Virtual Edges. We will select "Add Single Virtual Edge." See our Virtual Edge Bulk Onboarding article to add multiple VEs.

 


Step 4:
 Fill out the required fields and select Add. Details about each field are provided in the chart below. These details can always be viewed and edited by selecting the more options icon to the right and selecting Edit/Download Virtual Edge Configuration. 

 





The following chart provides details about each field:

  • IP Address: The IP assigned to the Virtual Edge container. This IP must be routable and must be able to reach Cloud Control Center. It also needs reachability to the management interface of any Virtual Edge Node you plan to onboard. The network for this IP can be configured locally on the application hosting switch or on an upstream aggregation switch, and it can be a new or an existing network. This field is mandatory.
  • Gateway IP: The default gateway IP for the network described above. It can be configured locally on the application hosting switch or on an upstream aggregation switch, and it can be a default gateway IP from a new or an existing network. This field is mandatory.
  • Host Name: The host name assigned to the Virtual Edge container. Cloud Control Center uses it when automating the generation of the application hosting configuration for the switch. This field is optional.
  • Domain Name Server (DNS): The DNS server IP to be used by the Virtual Edge container. This can be either a public or private DNS server, and it is used by Cloud Control Center when generating the application hosting configuration. To specify more than one DNS server, separate them with commas. List your private DNS first in the comma-separated list so that hostname entries can be imported from it during device discovery.
  • Uplink VLAN: The VLAN info for the uplink interface, commonly the management VLAN of the switch.
  • Site Label: A pre-created Site Label to assign to your Virtual Edge (inherited by any associated Virtual Edge Node), or a new Site Label created on the spot. Site Labels allow you to filter and view assets and Virtual Edges, and to apply Policy Sets based on Site Label for selective policy distribution. See our VE/VEN management article for info on how to create and manage your Site Labels effectively.
  • Distribution Zone: A pre-created Distribution Zone label for selective distribution of device to Policy Group mappings, or a new DZ label created and assigned to the VE immediately. See our VE/VEN management article for info on how to create and manage your Distribution Zone labels effectively.

 

After clicking "Add" a text file will download to your local machine. Be sure to allow this file to download, as it contains configurations to onboard your switch as a Virtual Edge.


 

Configuring the Virtual Edge

This text file contains the instructions and configurations required to bring up the Virtual Edge container on the application hosting switch as well as the switch configurations required to onboard a Virtual Edge Node. Each Virtual Edge receives a unique identifier which is embedded in the file name.

 

Below is an example of the content in the text file you should have just generated in CCC.

iox
interface AppGigabitEthernet1/0/1
switchport mode trunk
app-hosting appid VE
app-vnic AppGigabitEthernet trunk
vlan 203 guest-interface 1
guest-ipaddress 10.203.1.76 netmask 255.255.255.0
app-default-gateway 10.203.1.1 guest-interface 1
app-resource docker
run-opts 1 "--entrypoint /etc/init.d/edge --cap-add=NET_ADMIN --hostname VE-9K --ulimit nofile=90000:90000 --env EDGE_TYPE=VE"
run-opts 2 "--env EDGE_TOKEN_L0='eyJ2ZV9yZWdfa2VjoiOGNmN2YyM2YwNDMwMWFjYSIsInZlX3VwbGlua19pcCI6IjEwLjIwMy4xLjc2IiwidmVfY2xvdWRfbWFuYWdlX3VybCI6IjE4LjExNy4zNy4xMjQiLCJ2ZV9kbnNfc2VydmVyIjpbIjguOC44LjgiLCIxLjEuMS4x'"
run-opts 3 "--env EDGE_TOKEN_L1='Il0sInZlX29wZW5G5fc2VydmVyIjoiMTguMTE3LjM3LjEyNCIsInZlX29wZW52cG5fY2EiOiItLS0tLUJFR0lOIENFUlRJRklDQVRFLS0tLS1cbk1JSUJXekNDQVFHZ0F3SUJBZ0lCQVRBS0JnZ3Foa2pPUFFRREFqQVZNUk13RVFZRFZR'"
run-opts 4 "--env EDGE_TOKEN_L2='UURFd3B2WkdJeWNXZcbmRFTkJNQjRYRFRJME1ERXlNekl6TURFek9Gb1hEVE0wTURFeU16SXpNREV6T0Zvd0ZURVRNQkVHQTFVRUF4TUtcbmIyUmlNbkp2YjNSRFFUQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJFZWw4'"
run-opts 5 "--env EDGE_TOKEN_L3='YmF4RzRCaTM5M21cV6a096UVQrd0VzaGIvUks4dEFsYnBmUWtlWjBZTGhQdXdUYkVTNWlmVTBmeGFFQm9yV3p5M2ZUeUJUUkMrTU5cbkF4MXlBVldqUWpCQU1BNEdBMVVkRHdFQi93UUVBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgv'"
run-opts 6 "--env EDGE_TOKEN_L4='TUIwR0ExVWRcbkUVdCQlI1aVhjRERpMWN1MUdDTzlOVlNrU1BReXlTL1RBS0JnZ3Foa2pPUFFRREFnTklBREJGQWlCSXFYYVJcbmZkU2JCamxmckQxVkdlc3NWWi90TnIranpkOHdrNm51RGVTRDdBSWhBTUJnYWQ2ai8rb3RmSm1zSUxY'"
run-opts 7 "--env EDGE_TOKEN_L5='S282YU1cbmhiQlMndlYkN6c25FSGJ1WU1QXG4tLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tXG4iLCJ2ZV9vcGVudnBuX3ByaXZhdGVfa2V5IjoiLS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tXG5NSUdIQWdFQU1CTUdCeXFHU000'"
run-opts 8 "--env EDGE_TOKEN_L6='OUFnRUdDQ3FHU0OUF3RUhCRzB3YXdJQkFRUWc4ZVdyWUw0b3NxV0pCRnVEXG4xb0dYTmZpb0QyNTQyanpmSG9TVXZCa3crbENoUkFOQ0FBUkFSYW1FRXlkaHVBYXR0ZzFqQmdEeWlpdXNnRmp4XG5pMjN6Nzk4VFgzdEhQc29seHJ2Qm1M'"
run-opts 9 "--env EDGE_TOKEN_L7='UVB0TEs1ZmQ1awV2d1dVRMU3BMcHI2aHJQRDkzdUNLS1xuLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLVxuIiwidmVfb3BlbnZwbl9zZXJ2ZXJfcG9ydCI6IjExOTQiLCJ2ZV9vcGVudnBuX3Byb3RvY29sIjoidWRwIiwidmVfb3Bl'"
run-opts 10 "--env EDGE_TOKEN_L8='bnZwbl9jZXJjoiLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tXG5NSUlCaFRDQ0FTcWdBd0lCQWdJUWNaTjJ5WXplNGViaWtYMG5CNnFzOXpBS0JnZ3Foa2pPUFFRREFqQVZNUk13XG5FUVlEVlFRREV3cHZaR0l5Y205dmRFTkJNQjRY'"
run-opts 11 "--env EDGE_TOKEN_L9='RFRJME1ETXREU0TVRReE5Wb1hEVEkxTURNd05ERTRNVFF4XG5OVm93R3pFWk1CY0dBMVVFQXhNUU9HTm1OMll5TTJZd05ETXdNV0ZqWVRCWk1CTUdCeXFHU000OUFnRUdDQ3FHXG5TTTQ5QXdFSEEwSUFCRUJGcVlRVEoyRzRCcTIyRFdN'"
run-opts 12 "--env EDGE_TOKEN_L10='R0FQS0tLlBV1BHTGJmUHYzeE5mZTBjK3lpWEd1OEdZXG50QSswc3JsOTNtR1dsYUM2NU10S2t1bXZxR3M4UDNlNElvcWpWakJVTUE0R0ExVWREd0VCL3dRRUF3SUZvREFUXG5CZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBakFNQmdOVkhS'"
run-opts 13 "--env EDGE_TOKEN_L11='TUJBZjhFQWpU1COEdBMVVkSXdRWU1CYUFGSG1KXG5kd01PTFZ5N1VZSTcwMVZLUkk5RExKTDlNQW9HQ0NxR1NNNDlCQU1DQTBrQU1FWUNJUURmS0tMamE1MzBsSXBMXG5wZTZkV3BGUlRHZ0tUdXJSK0hDY1hrS1dCNWlueXdJaEFQMTdt'"
run-opts 14 "--env EDGE_TOKEN_L12='TUZ6MVJUejV2U3a3QyWkRaNWtBTjR4aUhXXG5iQ3BkTFQzOXV3RzFcbi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS1cbiJ9'"
app-resource profile custom
cpu-percent 100
memory 2048
persist-disk 4096
vcpu 2
name-server0 8.8.8.8
start

Execute the following exec command on Catalyst9300/9400

c9300: app-hosting install appid VE package usbflash1:<tar file name>
c9400: app-hosting install appid VE package disk0:<tar file name>

Verify the Elisity Virtual Edge is RUNNING using the following exec command on Catalyst9000

show app-hosting list

 


Step 5:
 Copy the Elisity Virtual Edge .tar file provided by your Elisity SE to the application hosting switch's SSD, usually named usbflash1: on the Catalyst 9300 (disk0: on the Catalyst 9400). Make sure to confirm your switch's flash storage name so that the file is copied to the correct storage media. You can use any transfer method you wish, such as FTP, SCP, TFTP, or HTTPS. The file name should look something like this: docker_edge-release-x86_64-15.5.0.35.tar
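
As an illustration, copying the image over SCP from exec mode could look like this (the server address, username, and path are placeholders for your environment):

switch# copy scp://admin@192.0.2.10/docker_edge-release-x86_64-15.5.0.35.tar usbflash1:

switch# dir usbflash1:

The dir command confirms the file landed on the correct storage media before you install it.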


Step 6:
 Log into the application hosting switch, copy and paste the configuration provided by Cloud Control Center into the command line and don't forget to write mem. It may be necessary to copy and paste one section at a time.

 

Step 7: Run the provided command to install the Virtual Edge container on the application hosting switch. Replace <tar file name> with the name of the .tar file provided by your Elisity SE, for example docker_edge-release-x86_64-15.5.0.35.tar. On a Catalyst 9400, use disk0: instead of usbflash1:.

app-hosting install appid VE package usbflash1:<tar file name>


Step 8:
 Wait a minute or two until the application has finished installing, then run the following command to confirm it was correctly installed and is running.

 

Latest.Elisity.Core.ME#show app-hosting list
App id                                   State
---------------------------------------------------------
VE                                      RUNNING


Step 9: Check Cloud Control Center to ensure that the Virtual Edge registered successfully. If the Virtual Edge status never turns green, there is most likely an IP connectivity issue between the Virtual Edge and Cloud Control Center.
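
If registration stalls, two quick checks from the hosting switch can help narrow down the problem (the container IP below is taken from the sample configuration earlier in this article; substitute your own):

switch# show app-hosting detail appid VE

switch# ping 10.203.1.76

The detail output reports the container's state and network attachment, and a failed ping to the container IP points to a local VLAN or gateway issue rather than a cloud reachability problem.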

 

 

 

Enabling Virtual Edge Redundancy with Cisco StackWise™

If leveraging Cisco StackWise™ technology to stack switches together, Elisity supports Virtual Edge redundancy. One Virtual Edge instance is active on the stack primary switch and a second instance is a cold standby on the stack secondary switch. If the primary switch goes offline, the secondary switch automatically boots the cold-standby Virtual Edge instance, which takes over identity and policy functions for the switch stack.

Cisco StackWise 1:1 Redundancy is required for Virtual Edge redundancy on Catalyst 9000 series switches. Review this configuration guide for 1:1 redundancy.

Enabling Virtual Edge redundancy on a stack of switches is very simple and uses the normal deployment process detailed above. The only additional step is to ensure that the stack secondary switch's AppGig interface is configured as a trunk, the same way as the primary stack switch's AppGig interface.

The switch's stack member number determines the full name of the AppGig interface. For example, if switch #4 in the stack is configured as the secondary stack switch, the interface is named AppGigabitEthernet4/0/1:

 

interface AppGigabitEthernet4/0/1
       switchport mode trunk
NOTE:
  • StackWise™ 1:1 Redundancy Mode must be enabled
  • Both primary and secondary stack switches must have SSDs inserted.
  • Both primary and secondary stack switches must have the Virtual Edge .tar file on their respective SSD. 
  • Preemption will NOT occur. The secondary stack switch will continue to host the Active Virtual Edge until it goes offline. 
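
After a failover, it can be useful to confirm which stack member is now primary and that the Virtual Edge container is running there; the standard show commands cover this (illustrative, from exec mode):

switch# show switch

switch# show app-hosting list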

 

Upgrading a Virtual Edge (Switch Hosted)

Most Virtual Edge upgrades are performed through our Stateful Delta Update (SDU) console by an Elisity team member and are scheduled with the customer in advance to fulfill any change requirements. However, Virtual Edges can also be upgraded manually by following the process below.


Step 1:
 Copy the Elisity Virtual Edge .tar file provided by your Elisity SE to the application hosting switch's SSD, usually named usbflash1: on the Catalyst 9300 (disk0: on the Catalyst 9400). Make sure to confirm your switch's flash storage name so that the file is copied to the correct storage media. You can use any transfer method you wish, such as FTP, SCP, TFTP, or HTTPS. The file name should look something like this: docker_edge-release-x86_64-15.5.0.35.tar

Step 2: Log into the switch and run the following commands to remove the old Virtual Edge from the switch application hosting space before installing the new one. 

*** Stop the container app ***

switch# app-hosting stop appid VE
VE stopped successfully
Current state is: STOPPED

*** Deactivate the container app ***

switch# app-hosting deactivate appid VE
VE deactivated successfully
Current state is: DEPLOYED

*** Uninstall the container app ***

switch# app-hosting uninstall appid VE
Uninstalling 'VE'. Use 'show app-hosting list' for progress.
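
The uninstall completes in the background; before installing the new image, you can confirm that the old container no longer appears in the application list:

*** Confirm the old container has been removed ***

switch# show app-hosting list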


Step 3: Run the provided command to install the upgraded Virtual Edge container on the application hosting switch. Replace <tar file name> with the name of the .tar file provided by your Elisity SE, for example docker_edge-release-x86_64-15.5.0.35.tar. On a Catalyst 9400, use disk0: instead of usbflash1:.

app-hosting install appid VE package usbflash1:<tar file name>

 

Step 4: Wait a minute or two until the application has finished installing, then run the following command to confirm it was correctly installed and is running.

 

Latest.Elisity.Core.ME#show app-hosting list
App id                                   State
---------------------------------------------------------
VE                                      RUNNING


Step 5: Check Cloud Control Center to ensure that the new code version is displayed.

 

 

 

Changing Switch Hosted Virtual Edge Configuration

To change any switch hosted Virtual Edge configuration, such as its IP address, DNS, or hostname, follow the steps below.

 

Step 1: Log on to the switch hosting the Virtual Edge, then stop and deactivate the container:

app-hosting stop appid <VE Name>
app-hosting deactivate appid <VE Name>

 

Step 2: Edit the app-hosting configuration on the switch and don't forget to write mem.

app-hosting appid <VE Name>
app-vnic AppGigabitEthernet trunk
vlan 43 guest-interface 1
guest-ipaddress 4.2.1.1 netmask 255.255.255.0
app-default-gateway 4.2.1.3 guest-interface 1
app-resource docker
run-opts 1 "--entrypoint /etc/init.d/cat9k --cap-add=NET_ADMIN --hostname Example-VE --ulimit nofile=90000:90000 --env EDGE_TYPE=VE --env EDGE_REG_KEY=bcd0e224f183562f --env EDGE_UPLINK_IP=10.63.0.12"
run-opts 2 "--env EDGE_CLOUD_MANAGE_URL=3.131.136.253 --env EDGE_DNS_SERVER=8.8.8.8,4.2.2.2 --env EDGE_OPENVPN_SERVER=3.131.136.253 --env EDGE_OPENVPN_SERVER_PORT=1194 --env EDGE_OPENVPN_SERVER_PROTOCOL=udp"

 

Step 3: Re-activate and start the container.

app-hosting activate appid <VE Name>
app-hosting start appid <VE Name>

 

Deleting a Virtual Edge

Step 1: Select the more options icon to the right of the Virtual Edge and then select Delete Virtual Edge.

NOTE: Before you can delete a Virtual Edge, all Virtual Edge Nodes onboarded with that Virtual Edge must first be deleted.  Follow the guide here to first decommission Virtual Edge Nodes attached to the Virtual Edge you are trying to decommission.
 
The delete action for the Virtual Edge will appear in the Cloud Control Center audit logs.

Step 2:
After the Virtual Edge has been deleted in Cloud Control Center, stop, deactivate, and uninstall the container on the hosting switch, using the same commands shown in the upgrade section above.