
Overview

Virtual Private LAN Service (VPLS) is an ONOS application that allows the creation of L2 broadcast overlay networks on-demand, on top of OpenFlow infrastructures.

The application interconnects, in a broadcast network, hosts that are attached to the data plane and share the same VLAN Id.


In order to let VPLS establish connectivity between two or more hosts, two things need to happen:

  1. At least two interfaces with the same VLAN Id need to be configured in the ONOS interfaces configuration;
  2. At least two hosts need to be connected to the OpenFlow network.

When condition 1 is satisfied, any host that is a) attached to one of the ports where the interfaces have been configured, and b) sending traffic tagged with the configured VLAN, will be able to send broadcast traffic (e.g. ARP requests) to the other hosts in the same overlay network. This is needed to make sure that all hosts get discovered properly before unicast communication is established.

When both conditions 1 and 2 are satisfied - meaning that ONOS has discovered, as hosts with a MAC address and a VLAN, at least two of the hosts configured - unicast communication is established between the hosts discovered on that specific overlay network.

General workflow

The VPLS workflow can be grouped in two main functions:

  • Collect information about the configuration and hosts attached

  • Install the flows needed to let the hosts communicate


Information collection

Information collection is performed by two main methods, called in sequence, that represent the main steps the application performs at each operational cycle:

  • getConfigCPoints(...): parses the ONOS interfaces configuration, looking for two or more attachment points whose interfaces are configured with the same VLAN Id and with no IP addresses. Looking for interfaces without IP addresses means looking for pure Layer 2 interfaces; this avoids conflicts with Layer 3 ONOS applications that use the same configuration mechanism. The interfaces found are grouped by VLAN Id (in a HashMap) and handed to the next method, pairAvailableHosts(...).

  • pairAvailableHosts(...): parses (if not null) the data structure received from getConfigCPoints(...) and, for each interface found, looks up hosts in the Host Service matching the configured interfaces. If hosts are found, the original data structure is updated: the MAC address of each host found is bound to the related interface discovered in the configuration. The final data structure is then returned.
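The two collection steps can be sketched as follows. This is a simplified, self-contained model: the class, field, and method names mirror the description above but are illustrative, not the actual ONOS VPLS code.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CollectionSketch {

    // An attachment point from the interface configuration. The host MAC
    // stays null until pairAvailableHosts binds a discovered host to it.
    public static final class Intf {
        final String connectPoint;
        final int vlanId;
        final boolean hasIp;
        String hostMac;

        public Intf(String connectPoint, int vlanId, boolean hasIp) {
            this.connectPoint = connectPoint;
            this.vlanId = vlanId;
            this.hasIp = hasIp;
        }
    }

    // Step 1: keep only pure-L2 interfaces (no IP configured) and group them
    // by VLAN Id; VLANs with fewer than two interfaces cannot form an overlay.
    public static Map<Integer, List<Intf>> getConfigCPoints(List<Intf> configured) {
        Map<Integer, List<Intf>> byVlan = new HashMap<>();
        for (Intf intf : configured) {
            if (!intf.hasIp) {
                byVlan.computeIfAbsent(intf.vlanId, v -> new ArrayList<>()).add(intf);
            }
        }
        byVlan.values().removeIf(list -> list.size() < 2);
        return byVlan;
    }

    // Step 2: bind the MAC address of each discovered host (keyed here by
    // connect point) to the interface configured on the same attachment point.
    public static Map<Integer, List<Intf>> pairAvailableHosts(
            Map<Integer, List<Intf>> byVlan,
            Map<String, String> discoveredHosts) {
        for (List<Intf> intfs : byVlan.values()) {
            for (Intf intf : intfs) {
                intf.hostMac = discoveredHosts.get(intf.connectPoint);
            }
        }
        return byVlan;
    }
}
```

Note that a MAC may remain unbound (null) when an interface is configured but no host has been discovered on it yet; the broadcast intents described below are installed regardless.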

Intent installation

VPLS has a standalone component called Intent Installer, which analyzes the data structure returned by pairAvailableHosts(...) and creates the intent installation requests. VPLS installs:

  • Single-Point to Multi-Point intents for broadcast traffic, from each ingress point to every other egress point configured in the same VLAN.

  • Multi-Point to Single-Point intents for unicast traffic, from every other edge port to each destination edge port in the same VLAN.

The functions above are grouped in a single method called setupConnectivity(). The method is called:

  • At the application startup;

  • Every time the interface configuration gets updated;

  • As soon as new hosts join the network and get discovered.

Configuration and Host listeners

VPLS has listeners for two event types:

  • Interface configuration updates: fired each time a node in the interface configuration is added, for example by pushing the network-cfg.json file or through the CLI. The interface configuration is filtered to consider only interfaces with VLANs configured, but no IPs. Moreover, the information is considered by VPLS only if two or more interfaces are correctly configured with the same VLAN Id.

  • Host added / Host updated: fired each time a host joins the network, meaning a new host gets physically connected to the OpenFlow data plane and starts sending ARP packets into the network. The Host Service will discover the host (with a MAC address, possibly a VLAN, and possibly one or more IP addresses). VPLS listens for Host added / updated events and keeps only the hosts that are attached to attachment points, and use VLANs, configured in the interfaces section.
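The host-event filter described above can be modeled in a few lines. This is an illustrative sketch, not the actual listener code: a host event is relevant to VPLS only if the host's attachment point and VLAN match one of the configured pure-L2 interfaces.

```java
import java.util.Set;

public class HostFilterSketch {

    // A discovered host and a configured pure-L2 interface, reduced to the
    // fields the filter cares about. Names are illustrative.
    public record Host(String connectPoint, int vlanId, String mac) {}
    public record ConfiguredIntf(String connectPoint, int vlanId) {}

    // A host event is relevant only if the host's attachment point and VLAN
    // match a configured interface; record equality compares both fields.
    public static boolean isRelevant(Host host, Set<ConfiguredIntf> configured) {
        return configured.contains(new ConfiguredIntf(host.connectPoint(), host.vlanId()));
    }
}
```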

Traffic provisioning and intent requests

The ONOS Intent Framework is used to provision both broadcast and unicast connectivity between the edge ports of the OpenFlow network, where hosts are attached. The Intent Framework abstraction masks the complexity of provisioning individual flows on each switch and of failing over when failures happen.

Broadcast traffic: single-point to multi-point intents

Broadcast connectivity is provisioned through single-point to multi-point intents. Within the same VLAN, for each source, a single-point to multi-point intent is instantiated.

Intents match on destination MAC address FF:FF:FF:FF:FF:FF (the Ethernet broadcast address) and on the VLAN Id shared by the overlay network. As treatment, the traffic is carried through the best path to all the other edge ports configured with the same VLAN Id.

The (single) ingress point of each intent is the edge port where the source host (for that intent) is connected. The egress points are all the other edge ports where destination hosts (for that intent) with the same VLAN Id are connected.

Assuming N edge ports have interfaces configured within the same VLAN Id, N single-point to multi-point intents for broadcast will be installed.
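The rule above can be sketched as a small self-contained model (illustrative names, not the actual ONOS code): for each of the N edge ports in a VLAN, one single-point to multi-point intent is created, with that port as ingress and the remaining N-1 ports as egress.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class BroadcastSketch {

    // One broadcast intent: a single ingress edge port, all other edge ports
    // in the VLAN as egress. The match (dst MAC FF:FF:FF:FF:FF:FF plus the
    // VLAN Id) is left implicit in this model.
    public record BrcIntent(String ingress, Set<String> egress) {}

    // For N edge ports configured in the same VLAN, build N single-point to
    // multi-point intents, one per source port.
    public static List<BrcIntent> broadcastIntents(Set<String> edgePorts) {
        List<BrcIntent> intents = new ArrayList<>();
        for (String src : edgePorts) {
            Set<String> egress = new HashSet<>(edgePorts);
            egress.remove(src); // every other edge port in the VLAN
            intents.add(new BrcIntent(src, egress));
        }
        return intents;
    }
}
```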

Intents for broadcast traffic get generated regardless of whether the hosts have been discovered. This is because hosts often won't be able to send traffic into the network (and thus get discovered by the Host Service) before ARP requests reach them and they answer with an ARP reply.

Unicast traffic: multi-point to single-point intents

Unicast connectivity is provisioned through multi-point to single-point intents. Within the same VLAN, for each destination, a multi-point to single-point intent is instantiated.

Intents match on the unicast destination MAC address of the destination host and on the VLAN Id shared by the overlay network. As treatment, the traffic is carried through the best path to the destination edge port.

The (multiple) ingress points of each intent are the edge ports where the source hosts (for that intent) are connected. The egress point is the edge port where the destination host (for that intent), within the same VLAN Id, is connected.

Assuming N edge ports have interfaces configured within the same VLAN Id and N related hosts have been discovered, N multi-point to single-point intents for unicast will be installed.
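The unicast rule can be sketched the same way (again an illustrative model, not the actual ONOS code): one multi-point to single-point intent per discovered host, matching on that host's MAC, with the host's edge port as the single egress and all other edge ports as ingress.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class UnicastSketch {

    // One unicast intent: matches on the destination host's MAC, with the
    // host's edge port as the single egress and all other edge ports as ingress.
    public record UniIntent(String dstMac, Set<String> ingress, String egress) {}

    // Unlike the broadcast case, unicast intents are built only for hosts the
    // Host Service has actually discovered (here, connect point -> MAC).
    public static List<UniIntent> unicastIntents(Map<String, String> discovered,
                                                 Set<String> edgePorts) {
        List<UniIntent> intents = new ArrayList<>();
        for (Map.Entry<String, String> host : discovered.entrySet()) {
            Set<String> ingress = new HashSet<>(edgePorts);
            ingress.remove(host.getKey()); // every port except the destination's
            intents.add(new UniIntent(host.getValue(), ingress, host.getKey()));
        }
        return intents;
    }
}
```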


Intents for Unicast traffic get generated only if:

  • Two or more interfaces sharing the same VLAN Id (and with no IPs) are configured;

  • Two or more hosts send packets into the network, sharing the same VLAN Id and through the physical ports configured above, and get discovered by the ONOS Host Service.

The reason for the second condition is that intents for unicast (see above) match on the MAC addresses of the connected hosts, which are not configured by the operator but provided by the Host Service.

Current Limitations

At the present stage:

  • Only end-points sharing the same VLAN Id can be connected together;

  • Only VLAN tags can be used, no MPLS;

  • Hosts need to send tagged traffic into the network. The application won't make the edge switches tag the traffic automatically (edge ports act as trunk ports, not as access ports);

  • Single-point to multi-point intents and multi-point to single-point intents still don't support encapsulation. Traffic cannot be encapsulated through the core in a single additional VLAN/MPLS header, which would save flow entries (and lookup time) in the core;

  • There's no application-specific CLI. Currently, broadcast networks can't be listed, created, or modified. The only way to see what's connected and to troubleshoot is to use the Intent Framework. If new networks need to be created, the interface configuration CLI can be used;

  • Run-time interface configuration doesn't support persistence. Unless the configuration is loaded through an external JSON file, it won't survive an ONOS reboot.
