Overview

This tutorial describes how ONOS manages resources, such as Central Processing Units (CPUs) and Network Interface Cards (NICs), on commodity servers.

Contributors

Georgios P. Katsikas <georgios.katsikas@ri.se>, <katsikas.gp@gmail.com>

Tom Barbette <tom.barbette@uliege.be>


Controller Side

The ONOS control plane is extended with the server device driver, which is part of the ONOS drivers sub-system.

The server device driver exploits ONOS's REST-based Southbound Controller (i.e., RestSBController) to register server devices as REST-based devices.

As such, every commodity server registered to ONOS (through the server device driver) is represented as a RestServerSBDevice, which extends ONOS's RestSBDevice.

The extensions of RestServerSBDevice are related to the CPU and NIC resources present in commodity servers (but potentially not present in other REST-based devices).

Server Side

On the server side, we need a process that can communicate with the server device driver over a REST-based channel (i.e., HTTP messages).

A complete implementation of the data plane side of a server device can be found here.

Communication Protocol

Server Device Registration

First, each server must "register" with the controller using the onos-netcfg facilities as follows:

onos-netcfg <ONOS-IP> device-description.json

An example JSON file "device-description.json" is provided below:

{
   "devices": {
       "rest:192.168.1.1:80": {
           "rest": {
               "username": "server",
               "password": "",
               "ip": "192.168.1.1",
               "port": 80,
               "protocol": "http",
               "url": "",
               "testUrl": "",
               "manufacturer": "GenuineIntel",
               "hwVersion": "Intel(R) Xeon(R) CPU E5-2667 v3 @ 3.20GHz",
               "swVersion": "Click 2.1"
           },
           "basic": {
               "driver": "restServer"
           }
       }
   }
}
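For reference, the sketch below shows roughly what onos-netcfg does under the hood: it pushes this JSON to ONOS's network configuration REST API. This is a minimal Python sketch, not the tool itself; the controller address, REST port (8181), and the default onos/rocks credentials are assumptions to adapt to your deployment.

# Upload the network configuration JSON to ONOS's REST API, roughly what
# onos-netcfg does. Controller address, port, and credentials are assumptions.
import json
import requests

ONOS_IP = "127.0.0.1"          # assumed controller address
ONOS_AUTH = ("onos", "rocks")  # assumed default ONOS REST credentials

with open("device-description.json") as f:
    netcfg = json.load(f)

resp = requests.post(
    f"http://{ONOS_IP}:8181/onos/v1/network/configuration/",
    json=netcfg,
    auth=ONOS_AUTH,
)
resp.raise_for_status()
print("Registration HTTP status:", resp.status_code)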

Note that the manufacturer, hwVersion, swVersion, and driver fields carry key pieces of information that the server device driver relies on.

If your server has different hardware characteristics (e.g., an AMD processor instead of an Intel processor), then you should extend the file server-drivers.xml accordingly.


Server Device Discovery

Upon a successful registration of your server device, the server device driver issues resource discovery commands to the server in order to discover:

  • the ports (i.e., NICs) available on this server as well as their statistics,
  • the CPUs available on this server as well as their statistics,
  • any flow entries installed in this server's NIC(s).

The resource discovery command issued by the device driver hits the following resource path on the server:

HTTP GET: http://serverIp/metron/resources

A server with 2 Intel CPU cores and 1 Mellanox 100 GbE NIC might provide the following response:

{
    "id":"metron:server:000001",
    "serial":"4Y6JZ42",
    "manufacturer":"GenuineIntel",
    "hwVersion":"Intel(R) Xeon(R) CPU E5-2667 v3 @ 3.20GHz",
    "swVersion":"Click 2.1",
    "cpus":
    [
        {
            "id":0,
            "vendor":"GenuineIntel",
            "frequency":3200
        },
        {
            "id":1,
            "vendor":"GenuineIntel",
            "frequency":3200
        }
    ],
    "nics":
    [
        {
            "name":"fd0",
            "index":0,
            "vendor":"Unknown",
            "driver":"net_mlx5",
            "speed":"100000",
            "status":"1",
            "portType":"fiber",
            "hwAddr":"50:6B:4B:43:88:CB",
            "rxFilter":["flow"]
        }
    ]
}

Note that the frequency field of each CPU is expressed in MHz, while the speed field of each NIC is expressed in Mbps.

To indicate that a NIC is active, one must set the status field to 1.

Also, each server must advertise a list of ways in which its NIC(s) can be programmed.

This is done by filling out a list of strings in the rxFilter field of each NIC.

Currently, the server device driver supports the following four programmability modes:

  • flow: Refers to the Flow Director (e.g., Intel's Flow Director) component of modern NICs. In this mode the server device driver can send explicit flow rules to a server's NIC(s) in order to perform traffic classification (e.g., match the values of a packet's header field), modification (e.g., drop a packet), and dispatching (e.g., place a packet into a specific hardware queue associated with a CPU core).
  • mac: Refers to the Medium Access Control (MAC)-based Virtual Machine Device queues (VMDq) (e.g., Intel's VMDq) component of modern NICs. In this mode, a server associates a set of (virtual) MAC addresses with a set of CPU cores. Input packets' destination MAC address field is matched against these values and upon a successful match, the packet is redirected to the respective CPU core.
  • vlan: Refers to the Virtual Local Area (VLAN) identifier (ID)-based VMDq (e.g., Intel's VMDq) component of modern NICs. In this mode, a server associates a set of VLAN IDs with a set of CPU cores. Input packets' VLAN ID field is matched against these values and upon a successful match, the packet is redirected to the respective CPU core.
  • rss: Refers to the Receive-Side Scaling (RSS) (e.g., Intel's RSS) component of modern NICs. In this mode, the server's NIC applies a hash function to certain header field values of incoming packets, in order to identify the CPU core that is going to undertake the processing of each packet.

In the example above, the server advertised Flow Director as a candidate mode for packet dispatching.
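To illustrate the discovery step end to end, the Python sketch below queries the same discovery endpoint directly (e.g., to test a server-side agent without ONOS) and prints the advertised CPUs and NICs, including their rxFilter modes. The server IP is a placeholder taken from the registration example above.

# Query the server's resource discovery endpoint and print its CPUs and NICs.
# SERVER_IP is a placeholder; reuse the address from the registration example.
import requests

SERVER_IP = "192.168.1.1"

resources = requests.get(f"http://{SERVER_IP}/metron/resources").json()

print("Device ID:", resources["id"])
for cpu in resources["cpus"]:
    print(f"CPU {cpu['id']}: {cpu['vendor']} at {cpu['frequency']} MHz")
for nic in resources["nics"]:
    print(f"NIC {nic['name']}: {nic['speed']} Mbps, rxFilter = {nic['rxFilter']}")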

Server Device Monitoring

Once a server's resources are properly discovered by the server device driver, periodic CPU and NIC monitoring of these resources takes place.

The CPU and NIC monitoring command issued by the device driver hits the following resource path on the server:

HTTP GET: http://serverIp/metron/stats

A server with 2 Intel CPU cores and 1 Mellanox 100 GbE NIC might provide the following response:

{
    "busyCpus":0,
    "freeCpus":2,
    "cpus":
    [
        {
            "id":0,
            "load":0,
            "busy":false
        },
        {
            "id":1,
            "load":0,
            "busy":false
        }
    ],
    "nics":
    [
        {
            "name":"fd0",
            "index":0,
            "rxCount":"0",
            "rxBytes":"0",
            "rxDropped":"0",
            "rxErrors":"0",
            "txCount":"0",
            "txBytes":"0",
            "txErrors":"0"
        }
    ]
}
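As an illustration of what this periodic monitoring looks like at the protocol level, the Python sketch below polls the statistics endpoint a few times. The server IP and the 5-second polling interval are assumptions for this sketch.

# Poll the server's statistics endpoint, mimicking the driver's periodic monitoring.
# SERVER_IP and the 5-second interval are assumptions for this sketch.
import time
import requests

SERVER_IP = "192.168.1.1"

for _ in range(3):  # a few iterations for demonstration
    stats = requests.get(f"http://{SERVER_IP}/metron/stats").json()
    print(f"Busy CPUs: {stats['busyCpus']}, free CPUs: {stats['freeCpus']}")
    for cpu in stats["cpus"]:
        print(f"  CPU {cpu['id']}: load = {cpu['load']}, busy = {cpu['busy']}")
    for nic in stats["nics"]:
        print(f"  NIC {nic['name']}: rx = {nic['rxCount']} pkts, tx = {nic['txCount']} pkts")
    time.sleep(5)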

NIC Rule Installation

A server with NICs in mode "flow" allows the server device driver to manage its rules.

To install rules on a server's NIC, the device driver POSTs the following JSON to the server's rule resource:

HTTP POST: http://serverIp/metron/rules

{
    "rules":
    [
        {
            "id": "5057dd63-93ea-42ca-bb14-8a5e37e214da",
            "rxFilter":{"method": "flow"},
            "nics":
            [
                {
                    "nicName": "fd0",
                    "cpus":
                    [
                        {
                            "cpuId":0,
                            "cpuRules":
                            [
                                {
                                    "ruleId": 54043196136729470,
                                    "ruleContent":"ingress pattern eth type is 2048 / src is 192.168.100.7 dst is 192.168.1.7 / udp src is 53 / end actions queue index 0 / end"
                                }
                            ]
                        }
                    ]
                }
            ]
        }
    ]
}

Note that the ruleContent field contains a rule (with unique ID 54043196136729470) that follows the DPDK Flow API, as the NIC on this server is bound to a DPDK driver.

The example rule matches packets with source IP address 192.168.100.7, destination IP address 192.168.1.7, and source UDP port 53.

The action of this rule redirects the matched packets to hardware queue with index 0.

This queue is associated with CPU core 0, as indicated by the cpuId field.
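A minimal Python sketch of this installation step is shown below. It assumes the JSON payload above has been saved to a local file (here called rule.json) and that the server IP matches the registration example; both are assumptions of this sketch.

# POST the rule payload shown above to the server's rule resource.
# SERVER_IP and the rule.json file name are assumptions for this sketch.
import json
import requests

SERVER_IP = "192.168.1.1"

with open("rule.json") as f:  # file containing the JSON payload shown above
    rules = json.load(f)

resp = requests.post(f"http://{SERVER_IP}/metron/rules", json=rules)
resp.raise_for_status()
print("Rule installation HTTP status:", resp.status_code)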

NIC Rule Monitoring

The server device driver also performs periodic NIC rule monitoring, for those NICs in mode "flow".

The NIC rule monitoring command issued by the device driver hits the following resource path on the server:

HTTP GET: http://serverIp/metron/rules

A server with one rule installed in the NIC named fd0 (associated with CPU core 0) might respond as follows:

{
    "rules":
    [
        {
            "id": "5057dd63-93ea-42ca-bb14-8a5e37e214da",
            "rxFilter":{"method": "flow"},
            "nics":
            [
                {
                    "nicName": "fd0",
                    "cpus":
                    [
                        {
                            "cpuId":0,
                            "cpuRules":
                            [
                                {
                                    "ruleId": 54043196136729470,
                                    "ruleContent":"ingress pattern eth type is 2048 / ipv4 src is 192.168.100.7 dst is 192.168.1.7 / udp src is 53 / end actions queue index 0 / end"
                                }
                            ]
                        }
                    ]
                }
            ]
        }
    ]
}
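The sketch below shows how such a response could be parsed to list the installed rules per NIC and CPU core; the server IP is again a placeholder.

# List the rules currently installed on the server, grouped by NIC and CPU core.
# SERVER_IP is a placeholder for this sketch.
import requests

SERVER_IP = "192.168.1.1"

rules = requests.get(f"http://{SERVER_IP}/metron/rules").json()
for entry in rules["rules"]:
    for nic in entry["nics"]:
        for cpu in nic["cpus"]:
            for rule in cpu["cpuRules"]:
                print(f"NIC {nic['nicName']} / CPU {cpu['cpuId']}: rule {rule['ruleId']}")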

NIC Rule Deletion

To delete the rule above with unique ID 54043196136729470 (once it has been successfully installed), the server device driver needs to hit the following path:

HTTP DELETE: http://serverIp/metron/rules/54043196136729470
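For completeness, a minimal Python sketch of this deletion follows; the server IP is a placeholder.

# Delete the rule above by its unique ID. SERVER_IP is a placeholder for this sketch.
import requests

SERVER_IP = "192.168.1.1"
RULE_ID = 54043196136729470

resp = requests.delete(f"http://{SERVER_IP}/metron/rules/{RULE_ID}")
print("Rule deletion HTTP status:", resp.status_code)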

Server Driver User Interface

The server driver will soon provide a User Interface (UI) that visualizes the CPU utilization of the commodity servers managed by this driver.

To access this UI, click on the Menu button (top-left corner of the ONOS UI), then click on the "Servers" tab at the bottom of the "Network" section.

Screenshots of this UI will soon follow.

Research Behind the Server Device Driver

The server device driver was implemented as part of the Metron research project, which was published at the 15th USENIX Symposium on Networked Systems Design and Implementation (NSDI '18).

Metron realizes NFV service chains at the speed of the underlying hardware.

Metron's control plane uses the server device driver to manage chained packet processing pipelines (also known as chained network functions or service chains) running on commodity servers.

Metron's data plane extends FastClick, which in turn uses DPDK as a high performance network I/O subsystem.

If you use the server device driver or Metron in your research, please cite our paper as follows:

@inproceedings{katsikas-metron.nsdi18,
	author       = {Katsikas, Georgios P. and Barbette, Tom and Kosti\'{c}, Dejan and Steinert, Rebecca and Maguire Jr., Gerald Q.},
	title        = {{Metron: NFV Service Chains at the True Speed of the Underlying Hardware}},
	booktitle    = {15th USENIX Conference on Networked Systems Design and Implementation (NSDI 18)},
	series       = {NSDI'18},
	year         = {2018},
	isbn         = {978-1-931971-43-0},
	pages        = {171--186},
	numpages     = {16},
	url          = {https://www.usenix.org/system/files/conference/nsdi18/nsdi18-katsikas.pdf},
	address      = {Renton, WA},
	acmid        = {},
	publisher    = {USENIX Association}
}

