To retrieve the (set of) controller(s) associated with a certain server, one could issue the following HTTP GET command:

curl -X GET --header "Content-Type: application/json" http://serverIp/metron/controllers

A similar result could be achieved using the ONOS CLI as follows:

onos:device-controllers rest:serverIp:serverPort

To set a (set of) controller(s) to a certain server, one could issue the following HTTP POST command:

curl -X POST --data '{"controllers":[{"ip":"192.168.125.7","port":80,"type":"tcp"},{"ip":"192.168.125.8","port":80,"type":"tcp"}]}' --header "Content-Type: application/json" http://serverIp/metron/controllers

For your convenience the same payload is visualized below in a more user-friendly JSON format:

{
    "controllers":
    [
        {
            "ip":"192.168.125.7",
            "port":80,
            "type":"tcp"
        },
        {
            "ip":"192.168.125.8",
            "port":80,
            "type":"tcp"
        }
    ]
}

If the command succeeds it returns: success

A similar result could be achieved using the ONOS CLI as follows:

onos:device-setcontrollers rest:serverIp:serverPort tcp:192.168.125.7:80 tcp:192.168.125.8:80


Finally, to delete a (set of) controller(s) from a server, one could issue an HTTP DELETE command to the resource 'delete_controllers' or an ONOS CLI command as follows:

onos:device-setcontrollers --remove rest:serverIp:serverPort tcp:192.168.125.7:80 tcp:192.168.125.8:80
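
The controller configuration can also be scripted. Below is a minimal Python sketch (using the third-party requests package; the server address and the helper name are placeholders of ours) that mirrors the HTTP POST command above:

import requests

SERVER = "http://serverIp"  # replace with the actual server IP/port

def set_controllers(controllers):
    """POST a list of {ip, port, type} dicts to the Metron agent."""
    payload = {"controllers": controllers}
    resp = requests.post(
        SERVER + "/metron/controllers",
        json=payload,
        headers={"Content-Type": "application/json"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.text  # expected to be 'success' on success

if __name__ == "__main__":
    print(set_controllers([
        {"ip": "192.168.125.7", "port": 80, "type": "tcp"},
        {"ip": "192.168.125.8", "port": 80, "type": "tcp"},
    ]))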

Server System Operations

The driver supports a system-related operation which returns the time of the server as follows:

curl -X GET --header "Content-Type: application/json" http://serverIp/metron/server_time

An example response follows:

{
    "idtime":1591261221344620712
}

A similar result could be achieved using the ONOS CLI as follows:

onos:device-time rest:serverIp:serverPort
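
The time value in the example above looks like a Unix timestamp expressed in nanoseconds; this is our reading of the example value (it maps to June 2020), not something the agent documents explicitly. A small Python sketch for converting it:

from datetime import datetime, timezone

server_time_ns = 1591261221344620712  # value taken from the example response

# Assumption: nanoseconds since the Unix epoch.
dt = datetime.fromtimestamp(server_time_ns / 1e9, tz=timezone.utc)
print(dt.isoformat())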

A reboot operation is also offered by this ONOS behavior, but the server driver does not implement it.

Server Device Discovery

Upon a successful registration of your server device, the server device driver issues resource discovery commands to the server in order to discover:

  • the CPUs available on this server as well as their statistics;
  • the CPU cache hierarchy of the server;
  • the main memory modules of the server along with relevant statistics;
  • the ports (i.e., NICs) available on this server as well as their statistics; and
  • any flow rule entries installed in this server's NIC(s).

The resource discovery command issued by the device driver hits the following resource path on the server:

curl -X GET --header "Content-Type: application/json" http://serverIp/metron/server_resources

A server with 2 Intel CPU cores in one socket, 16 GB of DDR4 DRAM, and a single-port Mellanox 100 GbE NIC might provide the following response:

{
    "id":"metron:server:000001",
    "serial":"4Y6JZ42",
    "manufacturer":"GenuineIntel",
    "hwVersion":"Intel(R) Xeon(R) CPU E5-2667 v3 @ 3.20GHz",
    "swVersion":"Click 2.1",
    "cpus":
    [
        {
            "physicalId":0,
            "logicalId":0,
            "socket":0,
            "vendor":"GenuineIntel",
            "frequency":3200
        },
        {
            "physicalId":1,
            "logicalId":1,
            "socket":0,
            "vendor":"GenuineIntel",
            "frequency":3200
        }
    ],
    "cpuCacheHierarchy":
    {
        "sockets": 1,
        "cores": 2,
        "levels": 3,
        "cpuCaches":
        [
            {
                "vendor": "GenuineIntel",
                "level": "L1",
                "type": "Data",
                "policy": "Set-Associative",
                "sets": 64,
                "ways": 8,
                "lineLength": 64,
                "capacity": 32768,
                "shared": 0
            },
            {
                "vendor": "GenuineIntel",
                "level": "L1",
                "type": "Instruction",
                "policy": "Set-Associative",
                "sets": 64,
                "ways": 8,
                "lineLength": 64,
                "capacity": 32768,
                "shared": 0
            },
            {
                "vendor": "GenuineIntel",
                "level": "L2",
                "type": "Data",
                "policy": "Set-Associative",
                "sets": 512,
                "ways": 8,
                "lineLength": 64,
                "capacity": 262144,
                "shared": 0
            },
            {
                "vendor": "GenuineIntel",
                "level": "L3",
                "type": "Data",
                "policy": "Set-Associative",
                "sets": 16384,
                "ways": 20,
                "lineLength": 64,
                "capacity": 20971520,
                "shared": 1
            }
        ]
    },
    "memoryHierarchy":
    {
        "modules":
        [
            {
                "type": "DDR4",
                "manufacturer": "00CE00B300CE",
                "serial": "40078A0C",
                "dataWidth": 64,
                "totalWidth": 72,
                "capacity": 16384,
                "speed": 2133,
                "speedConfigured": 2133
            }
        ]
    },
    "nics":
    [
        {
            "name":"fd0",
            "id":0,
            "vendor":"Unknown",
            "driver":"net_mlx5",
            "speed":"100000",
            "status":"1",
            "portType":"fiber",
            "hwAddr":"50:6B:4B:43:88:CB",
            "rxFilter":["flow"]
        }
    ]
}

Note that the frequency field of each CPU is expressed in MHz, while the speed field of each NIC is expressed in Mbps.

To indicate that a NIC is active, one must set the status field to 1.

Also, each server must advertise a list of ways with which its NIC(s) can be programmed.

This is done by filling out a list of strings in the rxFilter field of each NIC.

Currently, the server device driver supports 4 programmability modes as follows:

  • flow: Refers to the Flow Dispatcher (e.g., Intel's Flow Director) component of modern NICs following the format of DPDK's Flow API. In this mode, the server device driver can send explicit flow rules to a server's NIC(s) in order to perform traffic classification (e.g., match the values of a packet's header field), modification (e.g., drop a packet), and dispatching (e.g., place a packet into a specific hardware queue associated with a CPU core).
  • mac: Refers to the Medium Access Control (MAC)-based Virtual Machine Device queues (VMDq) (e.g., Intel's VMDq) component of modern NICs. In this mode, a server associates a set of (virtual) MAC addresses with a set of CPU cores. Input packets' destination MAC address field is matched against these values and upon a successful match, the packet is redirected to the respective CPU core.
  • vlan: Refers to the Virtual Local Area (VLAN) identifier (ID)-based VMDq (e.g., Intel's VMDq) component of modern NICs. In this mode, a server associates a set of VLAN IDs with a set of CPU cores. Input packets' VLAN ID field is matched against these values and upon a successful match, the packet is redirected to the respective CPU core.
  • rss: Refers to the Receive-Side Scaling (RSS) (e.g., Intel's RSS) component of modern NICs. In this mode, the server's NIC applies a hash function to certain header field values of incoming packets, in order to identify the CPU core that is going to undertake the processing of each packet.

In the example above the server advertised Flow Dispatcher as a candidate mode for packet dispatching.
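
A client can use the discovery response to decide how it may program each NIC. The sketch below (Python, third-party requests package; the server address is a placeholder) fetches server_resources and reports which NICs advertise the flow mode:

import requests

SERVER = "http://serverIp"  # replace with the actual server IP/port

resources = requests.get(
    SERVER + "/metron/server_resources",
    headers={"Content-Type": "application/json"},
    timeout=5,
).json()

for nic in resources.get("nics", []):
    modes = nic.get("rxFilter", [])
    # NICs advertising 'flow' can be programmed with explicit flow rules.
    flow_capable = "flow" in modes
    print(f"NIC {nic['name']}: rxFilter={modes} flow-capable={flow_capable}")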

Server Device Monitoring

Once a server's resources are properly discovered by the server device driver, periodic CPU, main memory, and NIC monitoring of these resources takes place.

The CPU, main memory, and NIC monitoring command issued by the device driver hits the following resource path on the server:

curl -X GET --header "Content-Type: application/json" http://serverIp/metron/server_stats



A server with 2 Intel CPU cores, 16 GB of DDR4 DRAM, and a single-port Mellanox 100 GbE NIC might provide the following response:

{
    "busyCpus":2,
    "freeCpus":0,
    "cpus":
    [
        {
            "id":0,
            "load":0,
            "queue":0,
            "busy":true,
            "throughput":
            {
                "average": 5000,
                "unit": "mbps"
            },
            "latency":
            {
                "min": 5000,
                "median": 6000,
                "max": 7000,
                "unit": "ns"
            }
        },
        {
            "id":1,
            "load":0,
            "queue":1,
            "busy":true,
            "throughput":
            {
                "average": 4000,
                "unit": "mbps"
            },
            "latency":
            {
                "min": 5500,
                "median": 6100,
                "max": 7400,
                "unit": "ns"
            }
        }
    ],
    "memory":
    {
        "unit": "kBytes",
        "used": 8388608,
        "free": 8388608,
        "total": 16777216
    },
    "nics":
    [
        {
            "name":"fd0",
            "id":0,
            "rxCount":"1000",
            "rxBytes":"64000",
            "rxDropped":"0",
            "rxErrors":"0",
            "txCount":"1000",
            "txBytes":"64000",
            "txErrors":"0"
        }
    ]
}

Note that throughput and latency statistics per core are optional fields.
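
Since throughput and latency are optional, a monitoring client should not assume their presence. A minimal polling sketch (Python, third-party requests package; the server address and polling period are placeholders):

import time
import requests

SERVER = "http://serverIp"  # replace with the actual server IP/port

def poll_once():
    stats = requests.get(
        SERVER + "/metron/server_stats",
        headers={"Content-Type": "application/json"},
        timeout=5,
    ).json()

    print(f"busy CPUs: {stats['busyCpus']}, free CPUs: {stats['freeCpus']}")
    for cpu in stats.get("cpus", []):
        line = f"  core {cpu['id']}: load={cpu['load']} busy={cpu['busy']}"
        # Throughput and latency are optional fields, so guard each access.
        if "throughput" in cpu:
            line += f" avg_throughput={cpu['throughput']['average']} {cpu['throughput']['unit']}"
        if "latency" in cpu:
            line += f" median_latency={cpu['latency']['median']} {cpu['latency']['unit']}"
        print(line)

if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(5)  # hypothetical polling period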

NIC Rule Installation

A server with NICs in mode "flow" allows the server device driver to manage its rules.

To install a NIC rule on a server's NIC with instance name fd0, associated with CPU core 0, the device driver issues the following HTTP POST command to the server:

curl -X POST --data '{"rules":[{"id":"5057dd63-93ea-42ca-bb14-8a5e37e214da", "rxFilter": {"method":"flow"}, "nics": [{"name":"fd0", "cpus": [{"id":0, "rules": [{ "id": 54324671113440126, "content":"ingress pattern eth type is 2048 \/ src is 192.168.100.7 dst is 192.168.1.7 \/ udp src is 53 \/ end actions queue index 0 \/ end"}]}]}]}]}' --header "Content-Type: application/json" http://serverIp/metron/rules



For your convenience the same rule is visualized below in a more user-friendly JSON format:
{
    "rules":
    [
        {
            "id": "5057dd63-93ea-42ca-bb14-8a5e37e214da",
            "rxFilter":
            {
                "method": "flow"
            },
            "nics":
            [
                {
                    "name": "fd0",
                    "cpus":
                    [
                        {
                            "id":0,
                            "rules":
                            [
                                {
                                    "id": 54043196136729470,
                                    "content":"ingress pattern eth type is 2048 / src is 192.168.100.7 dst is 192.168.1.7 / udp src is 53 / end actions queue index 0 / count / end"
                                }
                            ]
                        }
                    ]
                }
            ]
        }
    ]
}

Note that the "content" field contains a rule (with unique ID 54043196136729470) that follows the DPDK Flow API, as the NIC on this server is bound to a DPDK driver.

The example rule matches packets with source IP address 192.168.100.7, destination IP address 192.168.1.7, and source UDP port 53.

The action of this rule redirects the matched packets to hardware queue with index 0.

This queue is associated with CPU core 0, as indicated by the "id" field in the "cpus" attribute.
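
To summarize the installation format, the sketch below builds the same nested payload programmatically and POSTs it to the rules resource (Python, third-party requests package; the server address is a placeholder and the identifiers are copied from the example above):

import requests

SERVER = "http://serverIp"  # replace with the actual server IP/port

rule_content = ("ingress pattern eth type is 2048 / src is 192.168.100.7 "
                "dst is 192.168.1.7 / udp src is 53 / end actions queue index 0 / count / end")

payload = {
    "rules": [{
        "id": "5057dd63-93ea-42ca-bb14-8a5e37e214da",   # traffic class ID from the example
        "rxFilter": {"method": "flow"},
        "nics": [{
            "name": "fd0",
            "cpus": [{
                "id": 0,                                  # CPU core the hardware queue is mapped to
                "rules": [{
                    "id": 54043196136729470,              # unique NIC rule ID
                    "content": rule_content,              # DPDK Flow API rule
                }],
            }],
        }],
    }],
}

resp = requests.post(SERVER + "/metron/rules", json=payload,
                     headers={"Content-Type": "application/json"}, timeout=5)
print(resp.status_code, resp.text)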

NIC Rule Monitoring

The server device driver also performs periodic NIC rule monitoring, for those NICs in mode "flow".

The NIC rule monitoring command issued by the device driver hits the following resource path on the server:

curl -X GET --header "Content-Type: application/json" http://serverIp/metron/rules

"cpus":
                    [
                        {
                            "id":0,
                            "rules":
                            [
                                {
                   

A server with 1 NIC rule installed in the NIC named fd0 (associated with CPU core 0) might respond as follows:

{
    "rules":
    [
        {
            "id":"5057dd63-93ea-42ca-bb14-8a5e37e214da",
            "rxFilter":
            {
                "method":"flow"
            },
            "nics":
            [
                {
                    "name":"fd0",
                    "cpus":
                    [
                        {
                            "id":0,
                            "rules":
                            [
                                {
                                    "id":54043196136729470,
                                    "content":"ingress pattern eth type is 2048 / ipv4 src is 192.168.100.7 dst is 192.168.1.7 / udp src is 53 / end actions queue index 0 / end"
                                }
                            ]
                        }
                    ]
                }
            ]
        }
    ]
}

Note that the "content" field contains a rule (with unique ID 54043196136729470) that follows the DPDK Flow API, as the NIC on this server is bound to a DPDK driver.

The example rule matches packets with source IP address 192.168.100.7, destination IP address 192.168.1.7, and source UDP port 53.

The action of this rule redirects the matched packets to hardware queue with index 0.

This queue is associated with CPU core 0, as indicated by the "id" field in the "cpus" attribute.

NIC Rule Monitoring

The server device driver also performs periodic NIC rule monitoring, for those NICs in mode "flow".

The NIC rule monitoring command issued by the device driver hits the following resource path on the server:

curl -X GET --header "Content-Type: application/json" http://serverIp/metron/rules

A server with 1 NIC rule in NIC with name fd0 (associated with CPU core 0) might respond as follows:

{
"rules":
[
{
"id":"5057dd63-93ea-42ca-bb14-8a5e37e214da",
"rxFilter":
            {
"method":"flow"
},
"nics":
[
{
"name":"fd0",
"cpus":
[
{
"id":0,
"rules":
[
{
"id":54043196136729470,
                                    "content":"ingress pattern eth type is 2048 / ipv4 src is 192.168.100.7 dst is 192.168.1.7 / udp src is 53 / end actions queue index 0 / end"
}
]
}
]
}
]
}
]

}

NIC Rule Deletion

To delete the rule above (once it has been successfully installed), identified by its unique ID 54043196136729470, the server device driver needs to hit the following path:

HTTP DELETE: http://serverIp/metron/rules_delete/54043196136729470

To delete multiple rules at once, append a comma-separated list of rule IDs as follows:

HTTP DELETE: http://serverIp/metron/rules_delete/54043196136729470,54043196136729471
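
A small sketch of the delete call (Python, third-party requests package; the server address and helper name are placeholders), joining the rule IDs with commas as described above:

import requests

SERVER = "http://serverIp"  # replace with the actual server IP/port

def delete_rules(rule_ids):
    """Delete one or more NIC rules by ID via the rules_delete resource."""
    path = ",".join(str(r) for r in rule_ids)
    resp = requests.delete(SERVER + "/metron/rules_delete/" + path, timeout=5)
    return resp.status_code

print(delete_rules([54043196136729470, 54043196136729471]))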

NIC Table Statistics

To retrieve statistics related to a server's NIC tables, the server device driver needs to hit the following path:

curl -X GET --header "Content-Type: application/json" http://serverIp/metron/rules_table_stats

A server with 2 NICs and 3 rules installed in the first NIC might respond as follows:

{
    "nics":
    [
        {
            "id":0,
            "table":
            [
                {
                    "id":0,
                    "activeEntries":3,
                    "pktsLookedUp":0,
                    "pktsMatched":0
                }
            ]
        },
        {
            "id":1,
            "table":
            [
                {
                    "id":0,
                    "activeEntries":0,
                    "pktsLookedUp":0,
                    "pktsMatched":0
                }
            ]
        }
    ]
}
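
One use of these counters is to estimate per-table hit rates; a short sketch (Python, third-party requests package; the server address is a placeholder), guarding against tables that have not looked up any packets yet:

import requests

SERVER = "http://serverIp"  # replace with the actual server IP/port

stats = requests.get(
    SERVER + "/metron/rules_table_stats",
    headers={"Content-Type": "application/json"},
    timeout=5,
).json()

for nic in stats.get("nics", []):
    for table in nic.get("table", []):
        looked_up = table["pktsLookedUp"]
        # Avoid dividing by zero when the table has seen no traffic.
        hit_rate = table["pktsMatched"] / looked_up if looked_up else 0.0
        print(f"NIC {nic['id']} table {table['id']}: "
              f"{table['activeEntries']} entries, hit rate {hit_rate:.2%}")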

NIC Port Administration

To enable a NIC port, the server device driver needs to issue the following HTTP POST command to a server:

curl -X POST --data '{"port":0, "portStatus":"enable"}' --header "Content-Type: application/json" http://serverIp/metron/nic_ports

If the command succeeds it returns: success


A similar result could be achieved using the ONOS CLI as follows:

onos:portstate rest:serverIp:serverPort nicID enable


Similarly, to disable a NIC port, the server device driver needs to issue the following HTTP POST command to a server:

curl -X POST --data '{"port":0, "portStatus":"disable"}' --header "Content-Type: application/json" http://serverIp/metron/nic_ports

If the command succeeds it returns: success


A similar result could be achieved using the ONOS CLI as follows:

onos:portstate rest:serverIp:serverPort nicID disable
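
Both operations can be wrapped in a single helper; a minimal sketch (Python, third-party requests package; the server address and helper name are placeholders):

import requests

SERVER = "http://serverIp"  # replace with the actual server IP/port

def set_port_status(port, enable):
    """Enable or disable a NIC port through the nic_ports resource."""
    payload = {"port": port, "portStatus": "enable" if enable else "disable"}
    resp = requests.post(SERVER + "/metron/nic_ports", json=payload,
                         headers={"Content-Type": "application/json"}, timeout=5)
    return resp.text  # expected to be 'success' on success

print(set_port_status(0, enable=True))
print(set_port_status(0, enable=False))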

NIC Queue Configuration

The server device driver can also provide NIC queue configuration information through the following HTTP GET command:

curl -X GET --header "Content-Type: application/json" http://serverIp/metron/nic_queues

An example response of a server with 1 100GbE NIC and 2 queues follows:

{
    "nics":
    [
        {
            "id":0,
            "queues":
            [
                {
                    "id":0,
                    "type":"MAX",
                    "maxRate":"100000"
                },
                {
                    "id":1,
                    "type":"MAX",
                    "maxRate":"100000"
                }
            ]
        }
    ]
}
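
For completeness, the queue configuration can be read from a script in the same way as the other GET resources; a minimal sketch (Python, third-party requests package; the server address is a placeholder, and the Mbps reading of maxRate is our assumption based on the NIC speed convention above):

import requests

SERVER = "http://serverIp"  # replace with the actual server IP/port

queues = requests.get(
    SERVER + "/metron/nic_queues",
    headers={"Content-Type": "application/json"},
    timeout=5,
).json()

for nic in queues.get("nics", []):
    for q in nic.get("queues", []):
        # maxRate is reported as a string; it appears to follow the same Mbps
        # convention as the NIC speed field (assumption).
        print(f"NIC {nic['id']} queue {q['id']}: type={q['type']} maxRate={q['maxRate']}")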


This ONOS behavior also offers two additional methods, i.e., add/delete queue, but the server driver does not implement these methods.
