...

            {
                "driver": "rest-server"
            }
        }
    }
}

Server Device

...

Handshaker

Once the server agent is invoked, it automatically attempts to connect to the controller.

However, the driver also allows explicit connect and disconnect messages.

To connect to a server agent, issue the following HTTP POST request:

curl -X POST --data 'connect' --header "Content-Type: application/json" http://serverIp/metron/server_connect

To disconnect from a server agent, issue the following HTTP POST request:

curl -X POST --data 'disconnect' --header "Content-Type: application/json" http://serverIp/metron/server_disconnect

Controller Configuration

The server device driver can monitor and modify a server's controller information on the fly.

To get the associated controller of a certain server, one could issue the following HTTP GET command:

curl -X GET --header "Content-Type: application/json" http://serverIp/metron/controllers
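
The exact payload is defined by the server agent, but it would be expected to mirror the controller-list format used by the POST command below; the following response is for illustration only:

{
    "controllers":
    [
        {"ip":"192.168.125.7","port":80,"type":"tcp"}
    ]
}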

A similar result could be achieved using the ONOS CLI as follows:

onos:device-controllers rest:serverIp:serverPort

To set one or more controllers for a certain server, one could issue the following HTTP POST command:

curl -X POST --data '{"controllers":[{"ip":"192.168.125.7","port":80,"type":"tcp"},{"ip":"192.168.125.8","port":80,"type":"tcp"}]}' --header "Content-Type: application/json" http://serverIp/metron/controllers

A similar result could be achieved using the ONOS CLI as follows:

onos:device-controllers rest:serverIp:serverPort tcp:192.168.125.7:80 tcp:192.168.125.8:80

Finally, to delete one or more controllers from a server, one could issue the following HTTP DELETE command:

curl -X DELETE --data '{"controllers":[{"ip":"192.168.125.7","port":80,"type":"tcp"},{"ip":"192.168.125.8","port":80,"type":"tcp"}]}' --header "Content-Type: application/json" http://serverIp/metron/controllers

A similar result could be achieved using the ONOS CLI as follows:

onos:device-setcontrollers --remove rest:serverIp:serverPort tcp:192.168.125.7:80 tcp:192.168.125.8:80

Server System Operations

The driver supports a system-related operation that returns the server's current time, as follows:

curl -X GET --header "Content-Type: application/json" http://serverIp/metron/server_time

A similar result could be achieved using the ONOS CLI as follows:

onos:device-time rest:serverIp:serverPort

A reboot operation is also offered by this ONOS behavior, but the server driver does not implement it.

Server Device Discovery

Upon a successful registration of your server device, the server device driver issues resource discovery commands to the server in order to discover:

  • the CPUs available on this server as well as their statistics;
  • the CPU cache hierarchy of the server;
  • the main memory modules of the server along with relevant statistics;
  • the ports (i.e., NICs) available on this server as well as their statistics; and
  • any flow rule entries installed in this server's NIC(s).

The resource discovery command issued by the device driver hits the following resource path on the server:

HTTP GET: http://serverIp/metron/server_resources
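
For manual inspection, the same resource can be queried with curl, mirroring the other examples on this page:

curl -X GET --header "Content-Type: application/json" http://serverIp/metron/server_resources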

A server with 2 Intel CPU cores in one socket, 16 GB of DDR4 DRAM, and a single-port Mellanox 100 GbE NIC might provide the following response:

{
    "id":"metron:server:000001",
    "serial":"4Y6JZ42",
    "manufacturer":"GenuineIntel",
    "hwVersion":"Intel(R) Xeon(R) CPU E5-2667 v3 @ 3.20GHz",
    "swVersion":"Click 2.1",
    "cpus":
    [
        {
            "physicalId":0,
            "logicalId":0,
            "socket":0,
            "vendor":"GenuineIntel",
            "frequency":3200
        },
        {
            "physicalId":1,
            "logicalId":1,
            "socket":0,
            "vendor":"GenuineIntel",
            "frequency":3200
        }
    ],
    "cpuCacheHierarchy":
    {
        "sockets": 1,
        "cores": 2,
        "levels": 3,
        "cpuCaches":
        [
            {
                "vendor": "GenuineIntel",
                "level": "L1",
                "type": "Data",
                "policy": "Set-Associative",
                "sets": 64,
                "ways": 8,
                "lineLength": 64,
                "capacity": 32768,
                "shared": 0
            },
            {
                "vendor": "GenuineIntel",
                "level": "L1",
                "type": "Instruction",
                "policy": "Set-Associative",
                "sets": 64,
                "ways": 8,
                "lineLength": 64,
                "capacity": 32768,
                "shared": 0
            },
            {
                "vendor": "GenuineIntel",
                "level": "L2",
                "type": "Data",
                "policy": "Set-Associative",
                "sets": 512,
                "ways": 8,
                "lineLength": 64,
                "capacity": 262144,
                "shared": 0
            },
            {
                "vendor": "GenuineIntel",
                "level": "L3",
                "type": "Data",
                "policy": "Set-Associative",
                "sets": 16384,
                "ways": 20,
                "lineLength": 64,
                "capacity": 20971520,
                "shared": 1
            }
        ]
    },
    "memoryHierarchy":
    {
        "modules":
        [
            {
                "type": "DDR4",
                "manufacturer": "00CE00B300CE",
                "serial": "40078A0C",
                "dataWidth": 64,
                "totalWidth": 72,
                "capacity": 16384,
                "speed": 2133,
                "speedConfigured": 2133
            }
        ]
    },
    "nics":
    [
        {
            "name":"fd0",
            "id":0,
            "vendor":"Unknown",
            "driver":"net_mlx5",
            "speed":"100000",
            "status":"1",
            "portType":"fiber",
            "hwAddr":"50:6B:4B:43:88:CB",
            "rxFilter":["flow"]
        }
    ]
}

Note that the frequency field of each CPU is expressed in MHz, while the speed field of each NIC is expressed in Mbps.

To indicate that a NIC is active, one must set the status field to 1.

Also, each server must advertise a list of ways with which its NIC(s) can be programmed.

This is done by filling out a list of strings in the rxFilter field of each NIC.

Currently, the server device driver supports 4 programmability modes as follows:

  • flow: Refers to the Flow Dispatcher (e.g., Intel's Flow Director) component of modern NICs following the format of DPDK's Flow API. In this mode, the server device driver can send explicit flow rules to a server's NIC(s) in order to perform traffic classification (e.g., match the values of a packet's header fields), modification (e.g., drop a packet), and dispatching (e.g., place a packet into a specific hardware queue associated with a CPU core).
  • mac: Refers to the Medium Access Control (MAC)-based Virtual Machine Device queues (VMDq) (e.g., Intel's VMDq) component of modern NICs. In this mode, a server associates a set of (virtual) MAC addresses with a set of CPU cores. Input packets' destination MAC address field is matched against these values and upon a successful match, the packet is redirected to the respective CPU core.
  • vlan: Refers to the Virtual Local Area (VLAN) identifier (ID)-based VMDq (e.g., Intel's VMDq) component of modern NICs. In this mode, a server associates a set of VLAN IDs with a set of CPU cores. Input packets' VLAN ID field is matched against these values and upon a successful match, the packet is redirected to the respective CPU core.
  • rss: Refers to the Receive-Side Scaling (RSS) (e.g., Intel's RSS) component of modern NICs. In this mode, the server's NIC applies a hash function to certain header field values of incoming packets, in order to identify the CPU core that is going to undertake the processing of each packet.

In the example above the server advertised Flow Dispatcher as a candidate mode for packet dispatching.
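
For instance, a NIC that supported RSS in addition to hardware flow rules would advertise both modes in its rxFilter field (illustrative values, not part of the example above):

"rxFilter":["flow","rss"]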

Server Device Monitoring

Once a server's resources are properly discovered by the server device driver, periodic CPU, main memory, and NIC monitoring of these resources takes place.

The CPU, main memory, and NIC monitoring command issued by the device driver hits the following resource path on the server:

HTTP GET: http://serverIp/metron/server_stats
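
The same statistics can also be fetched manually with curl:

curl -X GET --header "Content-Type: application/json" http://serverIp/metron/server_stats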

A server with 2 Intel CPU cores, 16 GB of DDR4 DRAM, and a single-port Mellanox 100 GbE NIC might provide the following response:

{
    "busyCpus":2,
    "freeCpus":0,
    "cpus":
    [
        {
            "id":0,
            "load":0,
            "queue":0,
            "busy":true,
            "throughput":
            {
                "average": 5000,
                "unit": "mbps"
            },
            "latency":
            {
                "min": 5000,
                "median": 6000,
                "max": 7000,
                "unit": "ns"
            }
        },
        {
            "id":1,
            "load":0,
            "queue":1,
            "busy":true,
            "throughput":
            {
                "average": 4000,
                "unit": "mbps"
            },
            "latency":
            {
                "min": 5500,
                "median": 6100,
                "max": 7400,
                "unit": "ns"
            }
        }
    ],
    "memory":
    {
        "unit": "kBytes",
        "used": 8388608,
        "free": 8388608,
        "total": 16777216
    },
    "nics":
    [
        {
            "name":"fd0",
            "id":0,
            "rxCount":"1000",
            "rxBytes":"64000",
            "rxDropped":"0",
            "rxErrors":"0",
            "txCount":"1000",
            "txBytes":"64000",
            "txErrors":"0"
        }
    ]
}

Note that throughput and latency statistics per core are optional fields.
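
Since only throughput and latency are optional, a minimal per-CPU entry would look as follows (an illustrative sketch, not taken from the response above):

{
    "id":0,
    "load":0,
    "queue":0,
    "busy":true
}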

NIC Rule Installation

A server whose NICs are in "flow" mode allows the server device driver to manage their rules.

To install NIC rules on a server's NIC, the device driver POSTs the following JSON to the server's resource:

HTTP POST: http://serverIp/metron/rules

{
    "rules":
    [
        {
            "id": "5057dd63-93ea-42ca-bb14-8a5e37e214da",
            "rxFilter":
            {
                "method": "flow"
            },
            "nics":
            [
                {
                    "name": "fd0",
                    "cpus":
                    [
                        {
                            "id": 0,
                            "rules":
                            [
                                {
                                    "id": 54043196136729470,
                                    "content": "ingress pattern eth type is 2048 / ipv4 src is 192.168.100.7 dst is 192.168.1.7 / udp src is 53 / end actions queue index 0 / count / end"
                                }
                            ]
                        }
                    ]
                }
            ]
        }
    ]
}
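
For manual testing, the same payload can be sent with curl; here rules.json is a hypothetical file holding the JSON shown above:

curl -X POST --data @rules.json --header "Content-Type: application/json" http://serverIp/metron/rules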

Note that the "content" field contains a rule (with unique ID 54043196136729470) that follows the DPDK Flow API, as the NIC on this server is bound to a DPDK driver.

The example rule matches packets with source IP address 192.168.100.7, destination IP address 192.168.1.7, and source UDP port 53.

The action of this rule redirects the matched packets to hardware queue with index 0.

This queue is associated with CPU core 0, as indicated by the "id" field in the "cpus" attribute.
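
For reference, a semantically equivalent rule could be created by hand from DPDK's testpmd prompt (a sketch only, assuming the NIC is DPDK port 0; EtherType 2048 denotes IPv4):

flow create 0 ingress pattern eth type is 2048 / ipv4 src is 192.168.100.7 dst is 192.168.1.7 / udp src is 53 / end actions queue index 0 / count / end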

NIC Rule Monitoring

The server device driver also performs periodic NIC rule monitoring, for those NICs in mode "flow".

The NIC rule monitoring command issued by the device driver hits the following resource path on the server:

HTTP GET: http://serverIp/metron/rules
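
The installed rules can also be listed manually with curl:

curl -X GET --header "Content-Type: application/json" http://serverIp/metron/rules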

A server with one NIC rule installed in the NIC named fd0 (associated with CPU core 0) might respond as follows:

{
    "rules":
    [
        {
            "id":"5057dd63-93ea-42ca-bb14-8a5e37e214da",
            "rxFilter":
            {
                "method":"flow"
            },
            "nics":
            [
                {
                    "name":"fd0",
                    "cpus":
                    [
                        {
                            "id":0,
                            "rules":
                            [
                                {
                                    "id":54043196136729470,
                                    "content":"ingress pattern eth type is 2048 / ipv4 src is 192.168.100.7 dst is 192.168.1.7 / udp src is 53 / end actions queue index 0 / end"
                                }
                            ]
                        }
                    ]
                }
            ]
        }
    ]
}

NIC Rule Deletion

To delete the above rule (once it has successfully been installed) with unique ID 54043196136729470, the server device driver needs to hit the following path:

HTTP DELETE: http://serverIp/metron/rules_delete/54043196136729470

To delete multiple rules at once, you should append a comma-separated list of rule IDs as follows:

HTTP DELETE: http://serverIp/metron/rules_delete/54043196136729470,54043196136729471
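
For example, using curl:

curl -X DELETE --header "Content-Type: application/json" http://serverIp/metron/rules_delete/54043196136729470,54043196136729471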

NIC Table Statistics

To retrieve statistics related to a server's NIC tables, the server device driver needs to hit the following path:

HTTP GET: http://serverIp/metron/rules_table_stats
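
These statistics can be fetched manually as follows:

curl -X GET --header "Content-Type: application/json" http://serverIp/metron/rules_table_stats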

NIC Port Administration

To enable a NIC port, the server device driver needs to issue the following HTTP POST command to a server:

curl -X POST --data '{"port":0, "portStatus":"enable"}' --header "Content-Type: application/json" http://serverIp/metron/nic_ports

A similar result could be achieved using the ONOS CLI as follows:

onos:portstate rest:serverIp:serverPort nicID enable


Similarly, to disable a NIC port, the server device driver needs to issue the following HTTP POST command to a server:

curl -X POST --data '{"port":0, "portStatus":"disable"}' --header "Content-Type: application/json" http://serverIp/metron/nic_ports

A similar result could be achieved using the ONOS CLI as follows:

onos:portstate rest:serverIp:serverPort nicID disable

NIC Queue Configuration

The server device driver can also provide NIC queue configuration information through the following HTTP GET command:

curl -X GET --header "Content-Type: application/json" http://serverIp/metron/nic_queues

This ONOS behavior also offers two additional methods, i.e., add/delete queue, but the server driver does not implement these methods.


Server Driver User Interface

...