Elisity Virtual Edge VM Deployment Guide (Hypervisor Hosted) 15.4+

Elisity Virtual Edge VM (Hypervisor Hosted) is a Docker container-based implementation of Elisity Cognitive Trust software that runs as a VM on your hypervisor of choice.

This document applies only to Cloud Control Center 15.4+ deployments. If you are deploying a switch-hosted Virtual Edge prior to 15.4, follow this article instead.

Currently, you can onboard Cisco Catalyst 3850/3650, Catalyst 9000 series, and Catalyst IE3400 series switches as Virtual Edge Nodes for policy enforcement using Elisity Virtual Edge VM. Cisco StackWise® switch stacking technology is also supported. Additional switch models will be supported in future releases. Please see the switch compatibility matrix for more details.

 

NOTE:

The recommended requirements to run Virtual Edge VM on a hypervisor are:

  • VMware ESXi 7.x or later
  • 8 vCPU @ 2 GHz
  • 8 GB RAM
  • 32 GB storage
  • 1 x virtual network adapter (the underlying hypervisor vNIC should support 10 Gbps)
  • Less than 100 ms RTT to Virtual Edge Nodes

 

TIP:

Elisity Virtual Edge is based on a Docker container architecture, which means you can deploy it on virtually any host that supports Docker containers. For example, you could deploy it on your own private cloud Docker infrastructure!

The following example leverages an Elisity-provided, pre-packaged Ubuntu Linux OS host that hosts the Docker container.

 

NOTE:

  • Catalyst series switches require a minimum of IP Base licensing to be onboarded as Virtual Edge Nodes.
  • Catalyst IE3400 switches require a Cisco SD card (P/N SD-IE-4GB).
  • The Elisity Virtual Edge VM has been developed to work with Catalyst 3850/3650 series switches running IOS-XE version 16.12.5b and Catalyst 9000 series switches running IOS-XE version 17.6.x. While it may work with earlier versions of IOS-XE, we cannot guarantee that it will operate correctly.
  • All switches being onboarded must have their clocks synchronized with the Active Directory server so that attachment events are displayed accurately. You can use your own NTP server or a public one such as time.google.com, as shown in the sketch below.
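
As a rough sketch, NTP can be configured and verified on a Catalyst switch with the following IOS-XE commands (the server address is just an example, and the switch must be able to resolve DNS names to use a hostname):

configure terminal
 ntp server time.google.com
 end
show ntp status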

 


The following chart describes the terminology used in this document:

Cloud Control Center: Elisity's cloud-native and cloud-delivered control, policy, and management plane.
Virtual Edge VM: The Elisity Cognitive Trust software running as a Docker container on a hypervisor such as VMware ESXi.
Virtual Edge Node: An access switch onboarded to a Virtual Edge to be leveraged as an enforcement point in the network.


Deploying Elisity Virtual Edge VM (Hypervisor Hosted)

The Elisity Virtual Edge VM container has a single virtual interface used to communicate with both Cloud Control Center and Virtual Edge Nodes. This interface maintains a persistent control plane connection to Cloud Control Center in order to receive identity-based policies and to send identity metadata and analytics. The same interface is used to glean identity metadata, traffic analytics, and other switch information from the Virtual Edge Nodes, as well as to read the Catalyst configuration and configure security policies, traffic filters, and other switch functions.

Elisity Virtual Edge VM allows you to onboard any switch type on the compatibility matrix as a Virtual Edge Node for policy enforcement. The Virtual Edge VM model is depicted below:

 

Deploying the OVA


Step 1:
To deploy Elisity Virtual Edge VM on a hypervisor, you will need to acquire the Virtual Edge VM OVA file from your Elisity SE. In this example we will be using VMware ESXi. Once you have the OVA, log into your ESXi instance and select Create / Register VM.



 

Step 2: Select Deploy a Virtual Machine from an OVF or OVA file and then select Next.




Step 3: Enter a name for the virtual machine, upload the OVA, and select Next.




Step 4: Select the VM Datastore you wish to use as persistent storage for the VM and select Next.




Step 5: Select the Uplink Port Group that provides the Virtual Edge VM with access to the internet as well as to the access switches to be onboarded as Virtual Edge Nodes for policy enforcement. Select the Disk Provisioning option of your choice and ensure Power on automatically is enabled.
 




 

Step 6: If everything looks good, select Finish and wait for the OVA deployment to complete.
 





Make sure to enable Autostart so that the Virtual Edge VM starts up automatically after ESXi boots up.



Step 7: Once the deployment is complete, we need to log into the Virtual Edge VM host system to configure the host IP address and, in later steps, deploy the rest of the software. Review the following diagram to understand the IP address assignment requirements:



Select Console and then select Open Console in new window.




Configuring the VM


Step 8: Log into the Virtual Edge VM host system using the credentials provided to you by your Elisity SE. 




Step 9: By default, DHCP is enabled. If static settings are required, run the following command to configure a static IP, default gateway, and DNS settings. List your private DNS server first in the comma-separated list to ensure that hostname entries can be imported from your private DNS during device discovery. Replace the example IPs with your own.

sudo docker-edgectl static ens192 10.60.1.11/24 10.60.1.1 "8.8.8.8,4.2.2.2"
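
For example, if your private DNS server were at 10.60.1.5 (a hypothetical address), you would list it ahead of any public resolvers:

sudo docker-edgectl static ens192 10.60.1.11/24 10.60.1.1 "10.60.1.5,8.8.8.8"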

 

NOTE:

A second IP address in the same subnet will be required for the container within the host operating system.

  

Step 10: Verify that the new configuration was applied by running the following command:

 

ifconfig ens192

<OUTPUT>

ens192: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.60.1.11  netmask 255.255.255.0  broadcast 10.60.1.255
        inet6 fe80::20c:29ff:fe5e:ff31  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:5e:ff:31  txqueuelen 1000  (Ethernet)
        RX packets 5681  bytes 916010 (916.0 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4600  bytes 1655465 (1.6 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0



Test to make sure you can ping the default gateway as well as the internet. 
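
For example, using the addresses from Step 9 (replace them with your own):

ping -c 4 10.60.1.1
ping -c 4 8.8.8.8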

 

Adding the Virtual Edge in Cloud Control Center


Step 11:
Log into Cloud Control Center and navigate to Policy Fabric > Elisity Edge > Add Edge.




Step 12: Select the Virtual Edge tile.



Step 13: The next step is to configure the Virtual Edge Docker container IP and network settings. Fill out the required fields and select Submit & Generate Configuration. Details about each field are provided in the chart below. These details can be viewed and edited at any time by selecting the more options icon to the right and selecting Edit/Download Virtual Edge Configuration.



As a reminder, review the following diagram to understand the IP address assignment requirements:

The following chart provides details about each required field:

Uplink IP Address: The IP assigned to the Virtual Edge VM container. This IP must be routable and must be able to reach Cloud Control Center, as well as the management interface of any Virtual Edge Node you plan to onboard. The network for this IP can be configured locally on the application hosting switch or on an upstream aggregation switch, and it can be a new or existing network. Note that this is NOT the same IP configured on the Virtual Edge VM host system in a previous step; however, it must be on the same network. This field is mandatory.
Uplink Gateway IP: The default gateway IP for the network described above. This field is mandatory.
Uplink VLAN: This field is not used for Virtual Edge VM deployments, but it is still mandatory. Use any VLAN you wish.
Host Name: The host name assigned to the Virtual Edge VM container. This field is optional.
Domain Name Server (DNS): The DNS server IP to be used by the Virtual Edge VM container. This can be either a public or private DNS server. To specify more than one DNS server, separate them with commas. This field is optional.
Site Label: A pre-created site label you can assign to your Virtual Edge; it is inherited by any attached asset, allowing you to filter and view assets by site label.
Virtual Edge Location Address: The location of the Virtual Edge VM container, so that Cloud Control Center reflects the location of the installed container. This field is optional.

 

 

Final Steps


Step 14: After clicking Submit & Generate Configuration, two files will be automatically downloaded to your workstation. 

  • VE_xxxxxxxxxxxxxxxx.txt

This text file contains information to bring up a Virtual Edge hosted by a switch using application hosting functionality. It is not relevant to the hypervisor hosted Virtual Edge VM model. More details on this file are provided in the Elisity Virtual Edge (switch hosted) deployment guide.

  • VE_DOCKER_xxxxxxxxxxxxxxxx.yml

The YAML file is the one we need to focus on. It contains all of the details the Virtual Edge VM needs to deploy the container on the host system. Each Virtual Edge VM receives a unique identifier, which is embedded in the file name. Below is an example of the content of the YAML file generated by Cloud Control Center.

 

 

version: '2'
services:
  ve:
    networks:
      vlan1:
        ipv4_address: 10.60.1.12
    cap_add:
      - ALL
    environment:
      - EDGE_TYPE=VE
      - EE_CFG_JSON={"ve_reg_key":"bcd0e224f183562f","ve_uplink_ip":"10.63.0.12","ve_cloud_manage_url":"3.131.136.253","ve_dns_server":["8.8.8.8","4.2.2.2"],"ve_openvpn_server":"3.131.136.253","ve_openvpn_ca":"-----BEGIN CERTIFICATE-----\nMIIBWzCCAQGgAwIBAgIBATAKBggqhkjOPQQDAjAVMRMwEQYDVQQDEwpvZGIycm9v\ndENBMB4XDTIzMTEwMTE2MjI1NVoXDTMzMTEwMTE2MjI1NVowFTETMBEGA1UEAxMK\nb2RiMnJvb3RDQTBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABMXigAswInuApy/W\n++Gg75naRlfcRfVPpygfXAR32sTIoP3IwfcFmF3Mn51VrkJvrDbakhKZPPNGgQ8M\n53T/Fi+jQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1Ud\nDgQWBBR8LIS5xYLK9OuKNld42qK7NJ9xwTAKBggqhkjOPQQDAgNIADBFAiEAyQ2K\nNYZxtHdBHu6sy9fXEXcp3ySf2FE4E0bhCn9EF0ICIDNg4p4+6R66zJOQAT4uzpgi\nZ+9N4gHXM0bwYma/t3Uh\n-----END CERTIFICATE-----\n","ve_openvpn_private_key":"-----BEGIN EC PRIVATE KEY-----\nMIGHAgEAMBMGByqGSM49AgEGCCqGSM49AwEHBG0wawIBAQQgA4KXWpTmvGonLnHQ\nHdUwOr+p7lXQJ2gMr8npKQGAYnyhRANCAAScf0r9Lme+odrvogRvr1Ypv0M4+0xW\nOOHpiNUkXpQqElmX+2O4JvAv3USGAenDyd9kKvpq2/tpoDHT2yBj+0gT\n-----END EC PRIVATE KEY-----\n","ve_openvpn_server_port":"1194","ve_openvpn_protocol":"udp","ve_openvpn_cert":"-----BEGIN CERTIFICATE-----\nMIIBhTCCASugAwIBAgIRAPd6gQU7BsNpwTBceeG3HkowCgYIKoZIzj0EAwIwFTET\nMBEGA1UEAxMKb2RiMnJvb3RDQTAeFw0yMzExMDMxOTE0MTJaFw0yNDExMDMxOTE0\nMTJaMBsxGTAXBgNVBAMTEGJjZDBlMjI0ZjE4MzU2MmYwWTATBgcqhkjOPQIBBggq\nhkjOPQMBBwNCAAScf0r9Lme+odrvogRvr1Ypv0M4+0xWOOHpiNUkXpQqElmX+2O4\nJvAv3USGAenDyd9kKvpq2/tpoDHT2yBj+0gTo1YwVDAOBgNVHQ8BAf8EBAMCBaAw\nEwYDVR0lBAwwCgYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBR8\nLIS5xYLK9OuKNld42qK7NJ9xwTAKBggqhkjOPQQDAgNIADBFAiAGbxnE+71D2GoS\nMNijojQXl/DkL7Uh5w/JJ0bNOgYAywIhAP8aQa0r2ohNL0y9Tx97mCleqBZmaBlH\n8jkANGR+bSyl\n-----END CERTIFICATE-----\n"}
    entrypoint: /etc/init.d/edge
    # Change the image tag version appropriately instead of 15.0.12
    image: elisity/docker_edge:15.4.0
    restart: always
    hostname: VE
    container_name: VE
    stdin_open: true
    tty: true
    privileged: true
    volumes:
      - type: bind
        source: /etc/elisity/VE/Example-VE/data/
        target: /iox_data/
      - type: bind
        source: /var/run/docker.sock
        target: /var/run/docker.sock

networks:
  vlan1:
    driver: ipvlan
    driver_opts:
      parent: ens192
    ipam:
      config:
        - subnet: 10.60.1.0/24
          gateway: 10.60.1.1


Step 15:
Edit the image: line (line 14 in the example above) to reflect the OVA release you are deploying. For example, if you are deploying a release named DOCKER_EDGE_ESXI-0.27-v15.0.12.ova, change the string to image: elisity/docker_edge:15.0.12

NOTE: The parent: ens192 line near the bottom of the file does not usually need to be changed. However, if the interface ID on your Virtual Edge VM host system is different, adjust this value to reflect the correct name. You can verify the interface name by running the ifconfig -a command in the terminal.
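
TIP: If docker-compose is installed on the Virtual Edge VM host system (an assumption; the upgrade-edge wrapper used in the next step is the supported deployment path), you can sanity-check your edits before deploying:

docker-compose -f VE_DOCKER_xxxxxxxxxxxxxxxx.yml config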


Step 16:
Transfer the YAML file to the Virtual Edge VM host system /home/elisity directory, and run the following command from the same directory to deploy the container. Make sure to use the appropriate YAML file name generated by Cloud Control Center, not the example one below.
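
One way to transfer the file is with scp from your workstation, using the host IP from Step 9 as an example:

scp VE_DOCKER_xxxxxxxxxxxxxxxx.yml elisity@10.60.1.11:/home/elisity/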


When prompted for a password, use the same password you used to log into the Virtual Edge VM host system. 

sudo upgrade-edge create VE_DOCKER_xxxxxxxxxxxxxxxx.yml

After a couple of seconds the container will be created and the following output will be displayed:

Creating VE ... done
VE successfully created !


Run the following command to make sure the container is running properly:

docker ps

An output similar to the one below should be displayed:


Step 17: Check Cloud Control Center to ensure that the Virtual Edge VM registered successfully. If the Virtual Edge VM status never changes to green, there is an IP connectivity issue between the Virtual Edge VM container and Cloud Control Center.
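
If registration does not complete, one place to look (assuming the container name VE from the example YAML above) is the container logs on the host system:

sudo docker logs VE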




Now you can onboard your existing access switches as Elisity Virtual Edge Nodes for policy enforcement by following this guide.

 

Upgrading A Virtual Edge (Hypervisor Hosted)


Step 1: Transfer the new Elisity Virtual Edge .tar file provided by your Elisity SE to the Virtual Edge VM host system /home/elisity directory. 

Step 2: Run the docker ps command on the Virtual Edge VM host system to collect the Docker instance name. You will need this name when issuing the upgrade command.
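
For example, to list just the container names using a standard Docker formatting option:

sudo docker ps --format '{{.Names}}'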



Step 3: Run the upgrade command below. Make sure to replace the Docker instance name with the one you just collected with the docker ps command, and update the .tar file name to reflect the file you are upgrading with.

elisity@docker-edge:~$ sudo upgrade-edge upgrade <docker instance name> file:/home/elisity/docker_edge-x86_64-14.4.0.tar

[sudo] password for elisity:

Loading docker tar file...

Docker load successfully completed

Stopping VE ... done

VE

Creating VE ... done

Upgrade successfully completed !

 

Step 4: After a couple of minutes, verify that the new code version is reflected in Cloud Control Center.


 

 

Changing Hypervisor Hosted Virtual Edge Configuration

To change the configuration of a hypervisor hosted Virtual Edge, such as its IP address, DNS, or hostname, follow the steps below.

 

Step 1: Log on to the Virtual Edge VM operating system Linux shell and deactivate the container.

sudo upgrade-edge delete <docker instance name>

 

Step 2: Using the text editor of your choice, edit the container's YAML configuration file that you previously uploaded to deploy the Virtual Edge. The file is located in /home/elisity. Don't forget to save your changes.

elisity@docker-edge:~$ dir /home/elisity
VE_DOCKER_9c61ec7be6320cfb.yml
vi VE_DOCKER_9c61ec7be6320cfb.yml


version: '2'
services:
  DEMO-VE-VM:
    networks:
      vlan1:
        ipv4_address: 10.203.1.17
    cap_add:
      - ALL
    environment:
      - EDGE_TYPE=VE
      - EE_CFG_JSON={"ve_reg_key":"bcd0e224f183562f","ve_uplink_ip":"10.63.0.12","ve_cloud_manage_url":"3.131.136.253","ve_dns_server":["8.8.8.8","4.2.2.2"],"ve_openvpn_server":"3.131.136.253","ve_openvpn_ca":"-----BEGIN CERTIFICATE-----\nMIIBWzCCAQGgAwIBAgIBATAKBggqhkjOPQQDAjAVMRMwEQYDVQQDEwpvZGIycm9v\ndENBMB4XDTIzMTEwMTE2MjI1NVoXDTMzMTEwMTE2MjI1NVowFTETMBEGA1UEAxMK\nb2RiMnJvb3RDQTBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABMXigAswInuApy/W\n++Gg75naRlfcRfVPpygfXAR32sTIoP3IwfcFmF3Mn51VrkJvrDbakhKZPPNGgQ8M\n53T/Fi+jQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1Ud\nDgQWBBR8LIS5xYLK9OuKNld42qK7NJ9xwTAKBggqhkjOPQQDAgNIADBFAiEAyQ2K\nNYZxtHdBHu6sy9fXEXcp3ySf2FE4E0bhCn9EF0ICIDNg4p4+6R66zJOQAT4uzpgi\nZ+9N4gHXM0bwYma/t3Uh\n-----END CERTIFICATE-----\n","ve_openvpn_private_key":"-----BEGIN EC PRIVATE KEY-----\nMIGHAgEAMBMGByqGSM49AgEGCCqGSM49AwEHBG0wawIBAQQgA4KXWpTmvGonLnHQ\nHdUwOr+p7lXQJ2gMr8npKQGAYnyhRANCAAScf0r9Lme+odrvogRvr1Ypv0M4+0xW\nOOHpiNUkXpQqElmX+2O4JvAv3USGAenDyd9kKvpq2/tpoDHT2yBj+0gT\n-----END EC PRIVATE KEY-----\n","ve_openvpn_server_port":"1194","ve_openvpn_protocol":"udp","ve_openvpn_cert":"-----BEGIN CERTIFICATE-----\nMIIBhTCCASugAwIBAgIRAPd6gQU7BsNpwTBceeG3HkowCgYIKoZIzj0EAwIwFTET\nMBEGA1UEAxMKb2RiMnJvb3RDQTAeFw0yMzExMDMxOTE0MTJaFw0yNDExMDMxOTE0\nMTJaMBsxGTAXBgNVBAMTEGJjZDBlMjI0ZjE4MzU2MmYwWTATBgcqhkjOPQIBBggq\nhkjOPQMBBwNCAAScf0r9Lme+odrvogRvr1Ypv0M4+0xWOOHpiNUkXpQqElmX+2O4\nJvAv3USGAenDyd9kKvpq2/tpoDHT2yBj+0gTo1YwVDAOBgNVHQ8BAf8EBAMCBaAw\nEwYDVR0lBAwwCgYIKwYBBQUHAwIwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBR8\nLIS5xYLK9OuKNld42qK7NJ9xwTAKBggqhkjOPQQDAgNIADBFAiAGbxnE+71D2GoS\nMNijojQXl/DkL7Uh5w/JJ0bNOgYAywIhAP8aQa0r2ohNL0y9Tx97mCleqBZmaBlH\n8jkANGR+bSyl\n-----END CERTIFICATE-----\n"}
    entrypoint: /etc/init.d/edge
    # Change the image tag version appropriately instead of 15.0.12
    image: elisity/docker_edge:15.4.0
    restart: always
    hostname: DEMO-VE-VM
    container_name: DEMO-VE-VM
    stdin_open: true
    tty: true
    privileged: true
    volumes:
      - type: bind
        source: /etc/elisity/VE/DEMO-VE-VM/data/
        target: /iox_data/
      - type: bind
        source: /var/run/docker.sock
        target: /var/run/docker.sock

networks:
  vlan1:
    driver: ipvlan
    driver_opts:
      parent: ens192
    ipam:
      config:
        - subnet: 10.203.1.0/24
          gateway: 10.203.1.1

 

Step 3: Re-activate the Virtual Edge:

sudo upgrade-edge create VE_DOCKER_9c61ec7be6320cfb.yml

 

 

Decommissioning and Deleting a Virtual Edge

NOTE: Please do not attempt to decommission VEs/VENs on your own. Please schedule time with your Elisity SE for assistance and confirmation of a successful decommission.

 

Step 1: Select the more options icon to the right of the Virtual Edge and then select Decommission Virtual Edge.

 

NOTE: Before you can decommission a Virtual Edge, all Virtual Edge Nodes onboarded with that Virtual Edge must first be decommissioned and deleted. Follow the guide here to first decommission the Virtual Edge Nodes attached to the Virtual Edge you are trying to decommission.

 

Step 2: Wait 60 seconds after decommissioning the Virtual Edge. Select the more options icon to the right of the Virtual Edge and then select Delete Virtual Edge. Refer to the previous image. 

 

Step 3: After the Virtual Edge has been decommissioned in Cloud Control Center, you can delete the VM on your hypervisor.

 

 
