VPLS installation and configuration

VPLS is an ONOS application, included by default in the platform (both source code and packages). It needs to be explicitly configured and activated.

The goal of VPLS is to connect multiple end-points in an OpenFlow network, creating isolated L2 broadcast overlay networks.

While legacy technologies require the manual configuration of multiple devices in the network, VPLS tries to make the process easier for network operators.

The current model expects that the hosts to be connected together (which can be any L3 device) send VLAN-tagged packets into the network.

The User Guide assumes a running ONOS instance (or cluster), an OpenFlow network connected to it, and hosts sending VLAN-tagged traffic into the network.

Starting VPLS

VPLS can be installed and configured either at ONOS startup or at run-time. The procedure to activate and configure VPLS is described below.

As for other ONOS applications, VPLS can be activated either at startup, by including vpls in the ONOS_APPS variable of your cell file (as in the example below), or at run-time, using the ONOS CLI command "app activate org.onosproject.vpls".

# Basic VPLS topology

export ONOS_NIC=192.168.56.*
export OCI="192.168.56.101"
export OC1="192.168.56.101"
export OC2="192.168.56.102"
export OCN="192.168.56.103"
export ONOS_APPS=drivers,openflow,vpls
export ONOS_GROUP=sdn
export ONOS_SCENARIOS=$ONOS/tools/test/scenarios
export ONOS_TOPO=vpls
export ONOS_USER=sdn
export ONOS_USE_SSH=true
export ONOS_WEB_PASS=rocks
export ONOS_WEB_USER=onos

Configuring VPLS

VPLS relies on the ONOS network configuration subsystem, which is by default distributed on all ONOS nodes and shared by all ONOS applications.

In order to configure VPLS, two things need to be done: the interfaces (attachment points) where the hosts are connected must be defined, and those interfaces must be associated to a VPLS.

The goal of the configuration process is to define the attachment points the hosts are connected to (that is, which DPID and which port), and to associate them under the same overlay network, a VPLS. This essentially determines which hosts are allowed to talk to each other (and which are not).

Both the interfaces and the VPLS configuration itself can be applied either through the network configuration subsystem (a JSON file) or at run-time, through the ONOS CLI.

Configuration file format and syntax

Let’s assume the following scenario:

[Figure: vpls_overview-port.png — overview of the example VPLS topology]

In this example, five hosts send tagged packets with different VLAN Ids into the network. Two networks will be created: the first, called VPLS-EXAMPLE-1 (VPLS1), in green; the second, called VPLS-EXAMPLE-2 (VPLS2), in red. Three hosts will be grouped and connected in VPLS1, while the other two will be associated to VPLS2.

Hosts are grouped by VPLS in the following table:

 

VPLS name | VLAN Id | Interface Name | OF Switch DPID   | OF Port Number
VPLS1     | 10      | vpls1h1        | 0000000000000001 | 1
VPLS1     | 10      | vpls1h2        | 0000000000000004 | 1
VPLS1     | 20      | vpls1h3        | 0000000000000003 | 1
VPLS2     | 30      | vpls2h1        | 0000000000000004 | 2
VPLS2     | 40      | vpls2h2        | 0000000000000002 | 1

 


In order to configure what has been described above, the following configuration should be pushed to ONOS (either before or after VPLS has been started). Note that the encapsulation field is optional: it can be set to VLAN or MPLS, or omitted / set to NONE for no encapsulation.

{ 
  "ports": {
    "of:0000000000000001/1": {
      "interfaces": [
        {
          "name": "vpls1h1",
          "vlan": "10"
        }
      ]
    },
    "of:0000000000000004/1": {
      "interfaces": [
        {
          "name": "vpls1h2",
          "vlan": "10"
        }
      ]
    },
    "of:0000000000000004/2": {
      "interfaces": [
        {
          "name": "vpls2h1",
          "vlan": "30"
        }
      ]
    },
    "of:0000000000000003/1": {
      "interfaces": [
        {
          "name": "vpls1h3",
          "vlan": "20"
        }
      ]
    },
    "of:0000000000000002/1": {
      "interfaces": [
        {
          "name": "vpls2h2",
          "vlan": "40"
        }
      ]
    }
  },
  "apps" : {
    "org.onosproject.vpls" : {
      "vpls" : {
        "vplsNetworks" : [
          {
            "name" : "VPLS1",
            "interfaces" : ["vpls1h1", "vpls1h2", "vpls1h3"],
            "encapsulation" : "VLAN" ###### Encapsulation can be set optionally to VLAN,MPLS or left to NONE
          },
          {
            "name" : "VPLS2",
            "interfaces" : ["vpls2h1", "vpls2h2"]
          }
        ]
      }
    }
  }
}
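
As a minimal sketch, the configuration can be pushed through the ONOS network configuration REST endpoint. The snippet below assumes the JSON above is saved as vpls-config.json, that ONOS is reachable at 192.168.56.101:8181, and that the onos/rocks credentials from the cell file are in use; adapt these to your setup.

#!/usr/bin/env python
# Sketch: push the VPLS configuration above to ONOS through the REST API.
# Assumptions: the JSON is saved as vpls-config.json, ONOS answers on
# 192.168.56.101:8181 and uses the onos/rocks credentials from the cell file.
import json
import requests   # third-party HTTP client (pip install requests)

ONOS_URL = 'http://192.168.56.101:8181/onos/v1/network/configuration'

with open('vpls-config.json') as f:
    config = json.load(f)

# POST the whole document; ONOS merges it into its network configuration store
resp = requests.post(ONOS_URL, json=config, auth=('onos', 'rocks'))
resp.raise_for_status()
print('Configuration pushed, HTTP %d' % resp.status_code)

The same URL can be queried with a GET request to verify that the configuration has been stored.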

 

The same result can be achieved at run-time, using the interface configuration and VPLS CLI commands as follows (see section "CLI syntax" for more details):

onos> interface-add -v 10 of:0000000000000001/1 vpls1h1
onos> interface-add -v 10 of:0000000000000004/1 vpls1h2
onos> interface-add -v 20 of:0000000000000003/1 vpls1h3
onos> interface-add -v 30 of:0000000000000004/2 vpls2h1
onos> interface-add -v 40 of:0000000000000002/1 vpls2h2
onos> vpls-add VPLS1
onos> vpls-add-iface VPLS1 vpls1h1
onos> vpls-add-iface VPLS1 vpls1h2
onos> vpls-add-iface VPLS1 vpls1h3
onos> vpls-set-encap VPLS1 VLAN
onos> vpls-add VPLS2
onos> vpls-add-iface VPLS2 vpls2h1
onos> vpls-add-iface VPLS2 vpls2h2


As soon as two or more interfaces are added to the same VPLS network, intents to manage broadcast traffic will be installed.

As soon as two or more hosts connected to the same VPLS get discovered by ONOS (and VPLS), intents to manage unicast traffic will be installed.

For more details on the VPLS architecture, internal workflow and intents used, please visit the VPLS Architecture Guide.

Mininet network file (topo-vpls.py)

Would you like to give VPLS a try, but bringing up an entire network with hosts sending packets on different VLANs seems too hard and time-consuming? The Mininet (Python) file attached gives you an example of how to simulate such a network. Just modify the file to create the topology you like, and point Mininet to your controller's IP address.
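
For illustration, here is a minimal sketch of what such a script could look like. The host names, VLANs, DPIDs and host-facing port numbers follow the example table above, while the inter-switch wiring, the IP addresses and the controller address are assumptions; the actual topo-vpls.py attached to this page may differ.

#!/usr/bin/env python
# Hypothetical sketch of a VPLS test topology for Mininet.
# Hosts, VLANs, DPIDs and host-facing ports follow the example table above;
# the inter-switch wiring and IP addresses are illustrative assumptions.
from mininet.net import Mininet
from mininet.node import Host, RemoteController
from mininet.topo import Topo
from mininet.cli import CLI
from mininet.log import setLogLevel

ONOS_IP = '192.168.56.101'   # adapt to your controller address


class VlanHost(Host):
    "Host that sends all of its traffic on a VLAN-tagged sub-interface."

    def config(self, vlan=10, **params):
        r = super(VlanHost, self).config(**params)
        intf = self.defaultIntf()
        # Move the IP address from the physical interface to a tagged sub-interface
        self.cmd('ip addr flush dev %s' % intf)
        self.cmd('ip link add link %s name %s.%d type vlan id %d'
                 % (intf, intf, vlan, vlan))
        self.cmd('ip addr add %s dev %s.%d' % (params['ip'], intf, vlan))
        self.cmd('ip link set %s.%d up' % (intf, vlan))
        return r


class VplsTopo(Topo):
    "Four switches; five VLAN-tagged hosts attached as in the table above."

    def build(self):
        s1 = self.addSwitch('s1', dpid='0000000000000001')
        s2 = self.addSwitch('s2', dpid='0000000000000002')
        s3 = self.addSwitch('s3', dpid='0000000000000003')
        s4 = self.addSwitch('s4', dpid='0000000000000004')

        h1 = self.addHost('vpls1h1', cls=VlanHost, vlan=10, ip='10.0.0.1/24')
        h2 = self.addHost('vpls1h2', cls=VlanHost, vlan=10, ip='10.0.0.2/24')
        h3 = self.addHost('vpls1h3', cls=VlanHost, vlan=20, ip='10.0.0.3/24')
        h4 = self.addHost('vpls2h1', cls=VlanHost, vlan=30, ip='10.0.0.4/24')
        h5 = self.addHost('vpls2h2', cls=VlanHost, vlan=40, ip='10.0.0.5/24')

        # Host links first, with the switch-side port numbers of the example table
        self.addLink(h1, s1, port2=1)
        self.addLink(h2, s4, port2=1)
        self.addLink(h3, s3, port2=1)
        self.addLink(h4, s4, port2=2)
        self.addLink(h5, s2, port2=1)

        # Illustrative inter-switch wiring (a simple chain)
        self.addLink(s1, s2)
        self.addLink(s2, s3)
        self.addLink(s3, s4)


if __name__ == '__main__':
    setLogLevel('info')
    net = Mininet(topo=VplsTopo(),
                  controller=lambda name: RemoteController(name, ip=ONOS_IP, port=6653),
                  autoSetMacs=True)
    net.start()
    CLI(net)
    net.stop()

Run the script with root privileges once Mininet is installed; the hosts will show up in ONOS only after they send some traffic (for example, after a ping).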

CLI syntax

VPLS allows networks to be defined and interfaces to be attached to or detached from them through the command line. There is also the option of cleaning all the application state for a fresh start. The CLI operations are detailed below:

# Adds a new network
onos> vpls-add $VPLS_NETWORK
# Removes an existing network
onos> vpls-del $VPLS_NETWORK
# Shows the list of networks
onos> vpls-list
     VPLS2
     VPLS1
# Shows the interfaces attached to a given network, or lists all networks and the interfaces in each of them (if no network is provided)
onos> vpls-show [$VPLS_NETWORK]
     VPLS2: interface=[vpls2h1, vpls2h2]
     VPLS1: interface=[vpls1h1, vpls1h2, vpls1h3]
# Encapsulation (optional)
onos> vpls-set-encap $VPLS_NETWORK {VLAN|MPLS|NONE}
# Adds an existing interface (in netcfg) to an existing network
onos> vpls-add-iface $VPLS_NETWORK $INTERFACE_NAME
# Removes an existing interface from an existing network
onos> vpls-del-iface $VPLS_NETWORK $INTERFACE_NAME
# Cleans the status of the VPLS application (i.e., removes networks, detaches interfaces and withdraws intents)
onos> vpls-clean

Issues and Troubleshooting

Things not working as expected? Time to troubleshoot! A good first step is to check that the hosts have been discovered by ONOS, using the hosts command:

onos> hosts
id=00:00:00:00:00:01/10, mac=00:00:00:00:00:01, location=of:0000000000000001/1, vlan=10, ip(s)=[10.0.0.1], configured=false
id=00:00:00:00:00:02/10, mac=00:00:00:00:00:02, location=of:0000000000000004/1, vlan=10, ip(s)=[10.0.0.2], configured=false
id=00:00:00:00:00:03/20, mac=00:00:00:00:00:03, location=of:0000000000000003/1, vlan=20, ip(s)=[10.0.0.3], configured=false
id=00:00:00:00:00:04/30, mac=00:00:00:00:00:04, location=of:0000000000000004/2, vlan=30, ip(s)=[10.0.0.4], configured=false
id=00:00:00:00:00:05/40, mac=00:00:00:00:00:05, location=of:0000000000000002/1, vlan=40, ip(s)=[10.0.0.5], configured=false

Please note that you should see results only for hosts that have already sent traffic into the network. This does not happen, for example, with Mininet, where hosts are just processes that run no application (and thus send no traffic) by default.

Also, as with any intent-based ONOS application, there are a few best practices to follow to see what is going on in the Intent and Flow subsystems. Type "intents" to see the detailed list of intents, or "intents -s" for the intents summary.

Below is an approximation of what you should see for the network "VPLS2" after pinging between interfaces "vpls2h1" and "vpls2h2". Notice that two broadcast intents are installed at the beginning, and two unicast intents are installed once the hosts start pinging each other.

onos> intents
id=0x0, state=INSTALLED, key=VPLS2-brc-of:0000000000000002-1-FF:FF:FF:FF:FF:FF, type=SinglePointToMultiPointIntent, appId=org.onosproject.vpls
selector=[ETH_DST:FF:FF:FF:FF:FF:FF]
treatment=[NOACTION]
ingress=of:0000000000000002/1, egress=[of:0000000000000004/2]
id=0x1, state=INSTALLED, key=VPLS2-brc-of:0000000000000004-2-FF:FF:FF:FF:FF:FF, type=SinglePointToMultiPointIntent, appId=org.onosproject.vpls
selector=[ETH_DST:FF:FF:FF:FF:FF:FF]
treatment=[NOACTION]
ingress=of:0000000000000004/2, egress=[of:0000000000000002/1]
id=0x10, state=INSTALLED, key=VPLS2-uni-of:0000000000000002-1-00:00:00:00:00:05, type=MultiPointToSinglePointIntent, appId=org.onosproject.vpls
selector=[ETH_DST:00:00:00:00:00:05]
treatment=[NOACTION]
ingress=[of:0000000000000004/2], egress=of:0000000000000002/1
id=0x11, state=INSTALLED, key=VPLS2-uni-of:0000000000000004-2-00:00:00:00:00:04, type=MultiPointToSinglePointIntent, appId=org.onosproject.vpls
selector=[ETH_DST:00:00:00:00:00:04]
treatment=[NOACTION]
ingress=[of:0000000000000002/1], egress=of:0000000000000004/2

 

Still having issues? Write to us on the ONOS Mailing Lists. We can help!