Virtual Edge Deployment Guide (Hypervisor-hosted)

This article walks through the steps to onboard, configure, and manage Virtual Edges in the Elisity Platform, specifically Hypervisor-hosted Virtual Edges that run as a virtual machine. This article covers the workflow for Cloud Control Center version 15.5 and newer.

For information on how to use the Virtual Edge dashboard, see our VE/VEN management article.

 

You can currently onboard Cisco Catalyst 3850/3650, Catalyst 9000 series, and Catalyst IE3400 series switches as Virtual Edge Nodes for policy enforcement using the Elisity Virtual Edge VM. Cisco StackWise switch stacking technology is also supported. Additional switch models will be supported in future releases. Please see the switch compatibility matrix for more details.

 

NOTE:

Recommended requirements for running the Virtual Edge VM on a hypervisor:

  • VMware ESXi 7.x or later
  • 2 CPUs (4 vCPUs with hyper-threading)
  • 8 GB RAM
  • 40 GB storage
  • 1 x virtual network adapter (the underlying hypervisor vNIC should support 10 Gbps)

TIP:

Elisity Virtual Edge is based on a Docker container architecture, which means you can deploy it on virtually any host that supports Docker containers. For example, you could deploy it on your own private cloud Docker infrastructure!

The following example leverages an Elisity-provided, pre-packaged Ubuntu Linux OS host that runs the Docker container.

NOTE:

  • Catalyst series switches require a minimum of IP Base licensing to be onboarded as Virtual Edge Nodes.
  • Catalyst IE3400 switches require a Cisco SD card (P/N SD-IE-4GB).
  • The Elisity Virtual Edge VM has been developed to work with switches running these minimum IOS versions. While it may work with earlier versions of IOS-XE, we cannot guarantee correct operation.
  • All switches being onboarded must have their clocks synchronized with the Active Directory server so that attachment events are displayed accurately. You can use your own NTP server or a public one such as time.google.com.
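As a reference, clock synchronization on a Catalyst switch can be enabled with NTP from the IOS-XE CLI. This is a minimal sketch using the public server mentioned above; the switch must be able to resolve and reach the server you choose.

```
switch# configure terminal
switch(config)# ntp server time.google.com
switch(config)# end
switch# show ntp associations
```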

 

The following chart describes the terminology used in this document:

Cloud Control Center: Elisity's cloud-native, cloud-delivered control, policy, and management plane.
Virtual Edge VM: The Elisity software running as a Docker container on a hypervisor such as VMware ESXi.
Virtual Edge Node: An access switch onboarded to a Virtual Edge and leveraged as an enforcement point in the network.


Deploying Elisity Virtual Edge VM (Hypervisor Hosted)

The Elisity Virtual Edge VM container has a single virtual interface used to communicate with both Cloud Control Center and the Virtual Edge Nodes. This interface maintains a persistent control plane connection to Cloud Control Center, over which the Virtual Edge receives identity-based policies and sends identity metadata and analytics. The same interface is used to glean identity metadata, traffic analytics, and other switch information from the Virtual Edge Nodes, to read the Catalyst configuration, and to configure security policies, traffic filters, and other switch functions.

Elisity Virtual Edge VM allows you to onboard any type of switch on the compatibility matrix as Virtual Edge Nodes for policy enforcement. The Virtual Edge VM model is depicted below:

 

Deploying the OVA


Step 1:
To deploy Elisity Virtual Edge VM on a hypervisor, you will need to acquire the Virtual Edge VM OVA file from your Elisity SE. In this example we will be using VMware ESXi. Once you have the OVA, log into your ESXi instance and select Create / Register VM.



 

Step 2: Select Deploy a Virtual Machine from an OVF or OVA file and then select Next.




Step 3: Enter a name for the virtual machine, upload the OVA, and select Next.




Step 4: Select the VM Datastore you wish to use as persistent storage for the VM and select Next.




Step 5: Select the Uplink Port Group that provides the correct access for the Virtual Edge VM to reach the internet as well as the access switches to be onboarded as Virtual Edge Nodes for policy enforcement. Select the Disk Provisioning option of your choice and ensure Power on automatically is enabled. 
 




 

Step 6: If everything looks good select Finish and wait for the OVA to complete the deployment.
 





Make sure to enable Autostart so that the Virtual Edge VM starts up automatically after ESXi boots up.



Step 7: Once the deployment is complete, log into the Virtual Edge VM host system to configure the host IP address; the rest of the software is deployed in later steps. Review the following diagram to understand the IP address assignment requirements:



Select Console and then select Open Console in new window.




Configuring the VM


Step 8: Log into the Virtual Edge VM host system using the credentials provided to you by your Elisity SE. 




Step 9: By default, DHCP is enabled. It is recommended to configure a static IP address for the host VM. Run the following command to configure a static IP address, default gateway, and DNS settings. List your private DNS server first in the comma-separated list so that hostname entries can be imported from your private DNS during device discovery. Replace the example IPs with your own.

sudo docker-edgectl static ens192 10.100.102.71/24 10.100.102.1 "10.100.102.20,8.8.8.8"

 

NOTE:

This is NOT the IP address for the Virtual Edge. This is the IP address for the VM that hosts the Virtual Edge container. A second IP address in the same subnet will be required for the container within the host operating system. This is configured in Cloud Control Center in a later step.

  

Step 10: Verify that the new configuration was applied by running the following command:

 

elisity@docker-edge:~$ ifconfig ens192

ens192: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.100.102.71  netmask 255.255.255.0  broadcast 10.100.102.255
        inet6 fe80::20c:29ff:fe51:8028  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:51:80:28  txqueuelen 1000  (Ethernet)
        RX packets 440173  bytes 41856989 (41.8 MB)
        RX errors 0  dropped 3  overruns 0  frame 0
        TX packets 119033  bytes 6283432 (6.2 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0


Test to make sure you can ping the default gateway as well as the internet. 
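The reachability test can be scripted if you prefer. This is a minimal sketch; the gateway and public DNS addresses below are the example IPs used earlier in this article, so substitute your own.

```shell
#!/bin/sh
# Minimal connectivity check for the Virtual Edge VM host (example IPs).
check_reachable() {
  # Send one ICMP echo and wait up to 2 seconds for a reply.
  ping -c 1 -W 2 "$1" > /dev/null 2>&1
}

for target in 10.100.102.1 8.8.8.8; do
  if check_reachable "$target"; then
    echo "$target reachable"
  else
    echo "$target UNREACHABLE"
  fi
done
```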

 

Adding the Virtual Edge in Cloud Control Center


Step 11:
Log into Cloud Control Center and navigate to Virtual Edges > Add Virtual Edge. The drop-down menu gives you the option to Add Single Virtual Edge or Add Multiple Virtual Edges. In this example we select Add Single Virtual Edge; see our Virtual Edge Bulk Onboarding article to add multiple VEs.

 


Step 12:
Fill out the required fields and select Add. Details about each field are provided in the chart below. These details can always be viewed and edited by selecting the more options icon to the right and selecting Edit/Download Virtual Edge Configuration. 

 




The following chart provides details about each required field:

IP Address: The IP assigned to the Virtual Edge container. This IP must be routable and must be able to reach Cloud Control Center. It also needs reachability to the management interface of any Virtual Edge Node you plan to onboard. The network for this IP can be configured locally on the application hosting switch or on an upstream aggregation switch, and it can be a new or an existing network. This field is mandatory.

Gateway IP: The default gateway IP for the network described above. The gateway can be configured locally on the application hosting switch or on an upstream aggregation switch, and it can belong to a new or an existing network. This field is mandatory.

Host Name (optional): The host name assigned to the Virtual Edge container. Cloud Control Center uses it when automating the generation of the application hosting configuration for the application hosting switch.

Domain Name Server (DNS): The DNS server IP to be used by the Virtual Edge container. This can be either a public or private DNS server, and it is used by Cloud Control Center when automating the generation of the application hosting configuration. To specify more than one DNS server, separate them with commas, listing your private DNS first so that hostname entries can be imported from your private DNS during device discovery.

Site Label (optional): A pre-created Site Label you can assign to your Virtual Edge; it is inherited by any associated Virtual Edge Node, and you can also create a new Site Label on the spot. Site Labels let you filter and view assets and Virtual Edges, and apply Policy Sets based on Site Label for selective policy distribution. See our VE/VEN management article for info on creating and managing your Site Labels effectively.

Distribution Zone (optional): A pre-created Distribution Zone label you can assign to the Virtual Edge for selective distribution of device to Policy Group mappings; you can also create a new DZ label and assign it to the VE immediately. See our VE/VEN management article for info on creating and managing your Distribution Zone labels effectively.

 

Configuring the Virtual Edge Container


Step 14: After clicking Submit & Generate Configuration, the configuration file will be automatically downloaded to your workstation. 

  • VE_DOCKER_xxxxxxxxxxxxxxxx.yml

This YAML file contains all of the details the Virtual Edge VM needs to deploy the container on the host system. Each Virtual Edge VM receives a unique identifier which is embedded in the file name. Below is an example of the content in the YAML file generated by CCC. 

 

version: '2'
services:
  Home-VE2:
    networks:
      vlan1:
        ipv4_address: 10.100.102.154
    cap_add:
      - ALL
    environment:
      - EDGE_TYPE=VE
      - EDGE_TOKEN=eyJ2ZV9yZWdfa2V5IjoiYjdmYjFhNTQyMzUyNjFmYyIsInZlX3VwbGlua19pcCI6IjEwLjEwMC4xMDIuMTU0IiwidmVfY2xvdWRfbWFuYWdlX3VybCI6IjE4LjExNy4zNy4xMjQiLCJ2ZV9kbnNfc2VydmVyIjpbIjEwLjEwMC4xMDIuMjAiLCIxMC4xMDAuMTAyLjUiLCI4LjguOC44Il0sInZlX29wZW52cG5fc2VydmVyIjoiMTguMTE3LjM3LjEyNCIsInZlX29wZW52cG5fY2EiOiItLS0tLUJFR0lOIENFUlRJRklDQVRFLS0tLS1cbk1JSUJXekNDQVFHZ0F3SUJBZ0lCQVRBS0JnZ3Foa2pPUFFRREFqQVZNUk13RVFZRFZRUURFd3B2WkdJeWNtOXZcbmRFTkJNQjRYRFRJME1ERXlNekl6TURFek9Gb1hEVE0wTURFeU16SXpNREV6T0Zvd0ZURVRNQkVHQTFVRUF4TUtcbmIyUmlNbkp2YjNSRFFUQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJFZWw4YmF4RzRCaTM5M21cblV6a096UVQrd0VzaGIvUks4dEFsYnBmUWtlWjBZTGhQdXdUYkVTNWlmVTBmeGFFQm9yV3p5M2ZUeUJUUkMrTU5cbkF4MXlBVldqUWpCQU1BNEdBMVVkRHdFQi93UUVBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWRcbkRnUVdCQlI1aVhjRERpMWN1MUdDTzlOVlNrU1BReXlTL1RBS0JnZ3Foa2pPUFFRREFnTklBREJGQWlCSXFYYVJcbmZkU2JCamxmckQxVkdlc3NWWi90TnIranpkOHdrNm51RGVTRDdBSWhBTUJnYWQ2ai8rb3RmSm1zSUxYS282YU1cbmhiQlM0QndlYkN6c25FSGJ1WU1QXG4tLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tXG4iLCJ2ZV9vcGVudnBuX3ByaXZhdGVfa2V5IjoiLS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tXG5NSUdIQWdFQU1CTUdCeXFHU000OUFnRUdDQ3FHU000OUF3RUhCRzB3YXdJQkFRUWcrVldUU3ZIR2FoRklsVm5IXG5LUEF6dS9BcFhhWVNQaDI3bDVoRkJSN3lmMldoUkFOQ0FBUWg4UjZwRHRXV1h0cDFqSlRmb285VHVTL2lsMXpHXG4wNGYvRW1OWE9KMzlVUEJCS1FDQVdrcnBmbXFXb2IyVXc1c2NDN0lOOTdRUmg0UCtiMzVzSEExaFxuLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLVxuIiwidmVfb3BlbnZwbl9zZXJ2ZXJfcG9ydCI6IjExOTQiLCJ2ZV9vcGVudnBuX3Byb3RvY29sIjoidWRwIiwidmVfb3BlbnZwbl9jZXJ0IjoiLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tXG5NSUlCaFRDQ0FTdWdBd0lCQWdJUkFMRld0S3BwL09YSVYzSnh3MFNZTytVd0NnWUlLb1pJemowRUF3SXdGVEVUXG5NQkVHQTFVRUF4TUtiMlJpTW5KdmIzUkRRVEFlRncweU5EQXlNamt4TmpRM01EaGFGdzB5TlRBek1ERXhOalEzXG5NRGhhTUJzeEdUQVhCZ05WQkFNVEVHSTNabUl4WVRVME1qTTFNall4Wm1Nd1dUQVRCZ2NxaGtqT1BRSUJCZ2dxXG5oa2pPUFFNQkJ3TkNBQVFoOFI2cER0V1dYdHAxakpUZm9vOVR1Uy9pbDF6RzA0Zi9FbU5YT0ozOVVQQkJLUUNBXG5Xa3JwZm1xV29iMlV3NXNjQzdJTjk3UVJoNFArYjM1c0hBMWhvMVl3VkRBT0JnTlZIUThCQWY4RUJBTUNCYUF3XG5Fd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3REFZRFZSMFRBUUgvQkFJd0FEQWZCZ05WSFNNRUdEQVdnQlI1XG5pWGNERGkxY3UxR0NPOU5WU2tTUFF5eVMvVEFLQmdncWhrak9QUVFEQWdOSUFEQkZBaUVBcUMweGthNGN0ODAyXG5ZSUcrcWMvbzZQaXhLNVVwemx3bVl0ZTBwcFdDV0JJQ0lDVjJsUko2dlpvTS9uRXE4YWhCcGwwdW03aHlJK2JtXG5FUE40aW1rblF0c3pcbi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS1cbiJ9
    entrypoint: /etc/init.d/edge
    # Change the image tag version appropriately instead of 15.0.12
    image: elisity/docker_edge:15.0.12
    restart: always
    hostname: Home-VE2
    container_name: Home-VE2
    stdin_open: true
    tty: true
    privileged: true
    volumes:
      - type: bind
        source: /etc/elisity/VE/Home-VE2/data/
        target: /iox_data/
      - type: bind
        source: /var/run/docker.sock
        target: /var/run/docker.sock

networks:
  vlan1:
    driver: ipvlan
    driver_opts:
      parent: ens192
    ipam:
      config:
        - subnet: 10.100.102.0/24
          gateway: 10.100.102.1
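Before deploying, you can sanity-check the key values in the generated file from the host shell. This is an optional sketch; the file name below is the same placeholder used in this article, so substitute your own generated name.

```shell
#!/bin/sh
# Confirm the container IP, image tag, and parent interface in the
# generated YAML before deploying (run from /home/elisity).
YML=VE_DOCKER_xxxxxxxxxxxxxxxx.yml   # substitute your generated file name
if [ -f "$YML" ]; then
  grep -E 'ipv4_address|image:|parent:' "$YML"
else
  echo "file not found: $YML"
fi
```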

 

Step 15: Edit the line that reads image: elisity/docker_edge:15.0.12 to reflect the OVA release you are deploying. For example, if you are deploying a release named DOCKER_EDGE_ESXI-0.27-v15.0.12.ova, the string should read image: elisity/docker_edge:15.0.12

 

NOTE: The parent: ens192 line does not usually need to be changed. However, if the interface ID on your Virtual Edge VM host system is different, adjust it to reflect the correct name. You can verify the name by running the ifconfig -a command in a terminal.
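If ifconfig is unavailable on the host, interface names can also be read directly from sysfs. A minimal sketch:

```shell
#!/bin/sh
# List network interface names on the Virtual Edge VM host; use the
# uplink interface name (e.g. ens192) as the "parent:" value in the YAML.
ls /sys/class/net
```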


Step 16:
Transfer the YAML file to the /home/elisity directory on the Virtual Edge VM host system, and run the following command from that directory to deploy the container. Make sure to use the YAML file name generated by Cloud Control Center, not the example name below.


When prompted for a password, use the same password you used to log into the Virtual Edge VM host system. 

sudo upgrade-edge create VE_DOCKER_xxxxxxxxxxxxxxxx.yml

After a couple of seconds the container will be created and the following output will be displayed:

Creating VE ... done
VE successfully created !


Run the following command to make sure the container is running properly

docker ps

An output similar to the one below should be displayed:


Step 17: Check Cloud Control Center to ensure that the Virtual Edge registered successfully. If the Virtual Edge status never changes to green, then there is an IP connectivity issue between the Virtual Edge container and Cloud Control Center.



Now you can onboard your existing access switches as Elisity Virtual Edge Nodes for policy enforcement by following this guide. 

 

Upgrading A Virtual Edge (Hypervisor Hosted)

This is the process for manually upgrading a Virtual Edge. Most VE upgrades are now scheduled using SDU (Stateful Delta Update), which allows Elisity to schedule upgrades for large numbers of Virtual Edges and eliminates the need to upgrade each one manually. However, the manual upgrade process below can be used where necessary.

Step 1: Transfer the new Elisity Virtual Edge .tar file provided by your Elisity SE to the Virtual Edge VM host system /home/elisity directory. 

Step 2: Run the docker ps command on the Virtual Edge command prompt to collect the docker instance name. You will need this name when issuing the upgrade command. 



Step 3: Run the upgrade command below, replacing the docker instance name with the one you just collected with the docker ps command and updating the .tar file name to reflect the file you are upgrading with.

elisity@docker-edge:~$ sudo upgrade-edge upgrade <docker instance name> file:/home/elisity/docker_edge-x86_64-15.5.0.tar
[sudo] password for elisity:
Loading docker tar file...
Docker load successfully completed
Stopping VE ... done
Creating VE ... done
Upgrade successfully completed !

 

Step 4: After a couple of minutes, verify that the new code version is reflected in Cloud Control Center.


 

 

Changing Hypervisor Hosted Virtual Edge Configuration

To change any hypervisor-hosted Virtual Edge configuration, such as its IP address, DNS, or hostname, follow the steps below.

 

Step 1: Log on to the Virtual Edge VM operating system's Linux shell and deactivate the container.

sudo upgrade-edge delete <docker instance name>

 

Step 2: Using the text editor of your choice, edit the container's YAML configuration file that you previously uploaded to deploy the Virtual Edge. The file is located in /home/elisity. Don't forget to save your changes.

elisity@docker-edge:~$ dir /home/elisity
VE_DOCKER_9c61ec7be6320cfb.yml
elisity@docker-edge:~$ vi VE_DOCKER_9c61ec7be6320cfb.yml


version: '2'
services:
  Home-VE2:
    networks:
      vlan1:
        ipv4_address: 10.100.102.154
    cap_add:
      - ALL
    environment:
      - EDGE_TYPE=VE
      - EDGE_TOKEN=eyJ2ZV9yZWdfa2V5IjoiYjdmYjFhNTQyMzUyNjFmYyIsInZlX3VwbGlua19pcCI6IjEwLjEwMC4xMDIuMTU0IiwidmVfY2xvdWRfbWFuYWdlX3VybCI6IjE4LjExNy4zNy4xMjQiLCJ2ZV9kbnNfc2VydmVyIjpbIjEwLjEwMC4xMDIuMjAiLCIxMC4xMDAuMTAyLjUiLCI4LjguOC44Il0sInZlX29wZW52cG5fc2VydmVyIjoiMTguMTE3LjM3LjEyNCIsInZlX29wZW52cG5fY2EiOiItLS0tLUJFR0lOIENFUlRJRklDQVRFLS0tLS1cbk1JSUJXekNDQVFHZ0F3SUJBZ0lCQVRBS0JnZ3Foa2pPUFFRREFqQVZNUk13RVFZRFZRUURFd3B2WkdJeWNtOXZcbmRFTkJNQjRYRFRJME1ERXlNekl6TURFek9Gb1hEVE0wTURFeU16SXpNREV6T0Zvd0ZURVRNQkVHQTFVRUF4TUtcbmIyUmlNbkp2YjNSRFFUQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJFZWw4YmF4RzRCaTM5M21cblV6a096UVQrd0VzaGIvUks4dEFsYnBmUWtlWjBZTGhQdXdUYkVTNWlmVTBmeGFFQm9yV3p5M2ZUeUJUUkMrTU5cbkF4MXlBVldqUWpCQU1BNEdBMVVkRHdFQi93UUVBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWRcbkRnUVdCQlI1aVhjRERpMWN1MUdDTzlOVlNrU1BReXlTL1RBS0JnZ3Foa2pPUFFRREFnTklBREJGQWlCSXFYYVJcbmZkU2JCamxmckQxVkdlc3NWWi90TnIranpkOHdrNm51RGVTRDdBSWhBTUJnYWQ2ai8rb3RmSm1zSUxYS282YU1cbmhiQlM0QndlYkN6c25FSGJ1WU1QXG4tLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tXG4iLCJ2ZV9vcGVudnBuX3ByaXZhdGVfa2V5IjoiLS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tXG5NSUdIQWdFQU1CTUdCeXFHU000OUFnRUdDQ3FHU000OUF3RUhCRzB3YXdJQkFRUWcrVldUU3ZIR2FoRklsVm5IXG5LUEF6dS9BcFhhWVNQaDI3bDVoRkJSN3lmMldoUkFOQ0FBUWg4UjZwRHRXV1h0cDFqSlRmb285VHVTL2lsMXpHXG4wNGYvRW1OWE9KMzlVUEJCS1FDQVdrcnBmbXFXb2IyVXc1c2NDN0lOOTdRUmg0UCtiMzVzSEExaFxuLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLVxuIiwidmVfb3BlbnZwbl9zZXJ2ZXJfcG9ydCI6IjExOTQiLCJ2ZV9vcGVudnBuX3Byb3RvY29sIjoidWRwIiwidmVfb3BlbnZwbl9jZXJ0IjoiLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tXG5NSUlCaFRDQ0FTdWdBd0lCQWdJUkFMRld0S3BwL09YSVYzSnh3MFNZTytVd0NnWUlLb1pJemowRUF3SXdGVEVUXG5NQkVHQTFVRUF4TUtiMlJpTW5KdmIzUkRRVEFlRncweU5EQXlNamt4TmpRM01EaGFGdzB5TlRBek1ERXhOalEzXG5NRGhhTUJzeEdUQVhCZ05WQkFNVEVHSTNabUl4WVRVME1qTTFNall4Wm1Nd1dUQVRCZ2NxaGtqT1BRSUJCZ2dxXG5oa2pPUFFNQkJ3TkNBQVFoOFI2cER0V1dYdHAxakpUZm9vOVR1Uy9pbDF6RzA0Zi9FbU5YT0ozOVVQQkJLUUNBXG5Xa3JwZm1xV29iMlV3NXNjQzdJTjk3UVJoNFArYjM1c0hBMWhvMVl3VkRBT0JnTlZIUThCQWY4RUJBTUNCYUF3XG5Fd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3REFZRFZSMFRBUUgvQkFJd0FEQWZCZ05WSFNNRUdEQVdnQlI1XG5pWGNERGkxY3UxR0NPOU5WU2tTUFF5eVMvVEFLQmdncWhrak9QUVFEQWdOSUFEQkZBaUVBcUMweGthNGN0ODAyXG5ZSUcrcWMvbzZQaXhLNVVwemx3bVl0ZTBwcFdDV0JJQ0lDVjJsUko2dlpvTS9uRXE4YWhCcGwwdW03aHlJK2JtXG5FUE40aW1rblF0c3pcbi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS1cbiJ9
    entrypoint: /etc/init.d/edge
    # Change the image tag version appropriately instead of 15.0.12
    image: elisity/docker_edge:15.5.0
    restart: always
    hostname: Home-VE2
    container_name: Home-VE2
    stdin_open: true
    tty: true
    privileged: true
    volumes:
      - type: bind
        source: /etc/elisity/VE/Home-VE2/data/
        target: /iox_data/
      - type: bind
        source: /var/run/docker.sock
        target: /var/run/docker.sock

networks:
  vlan1:
    driver: ipvlan
    driver_opts:
      parent: ens192
    ipam:
      config:
        - subnet: 10.100.102.0/24
          gateway: 10.100.102.1

 

Step 3: Re-activate the Virtual Edge 

sudo upgrade-edge create VE_DOCKER_9c61ec7be6320cfb.yml

 

Deleting a Virtual Edge

Step 1: Select the more options icon to the right of the Virtual Edge and then select Delete Virtual Edge.

NOTE: Before you can delete a Virtual Edge, all Virtual Edge Nodes onboarded with that Virtual Edge must first be deleted.  Follow the guide here to first decommission Virtual Edge Nodes attached to the Virtual Edge you are trying to decommission.
 
The delete action for the Virtual Edge will appear in the Cloud Control Center audit logs.

Step 2:
After the Virtual Edge has been deleted in Cloud Control Center, you can delete the VM on your Hypervisor.

 

 
