...

The performance evaluation points indicated in the diagram include:

Latencies:

  • A - Switch Connect/Disconnect
  • B - Link Up/Down
  • C - Intent Batch Install/Withdraw/Re-Route
  • I - Flow Batch Installation from REST API
  • L - Host Add
  • M - Mastership Failover

Throughput:

  • D - Intent Operations
  • F - Burst Flow Rule Installation
  • G - Flow-Mod (Cbench)

Capacities:

  • E - Topology Scaling Operation
  • H - Max Intent Installation

...

General Experiment Setup:

...

For all experiments except the switch- and port-related ones, which require Openflow interactions, we implemented in ONOS a set of Null Providers at the Adapter level to interact with the ONOS core. The Null Providers act as device, link, and host producers, as well as a sink of flow rules. By using the Null Providers, we bypass the Openflow adapters and eliminate potential performance limits from having to use real or emulated Openflow devices.

We also instrumented a number of load generators so that we can generate a high level of load from the application or the network interfaces to stretch ONOS's performance limits. These generators include:

  • an intent performance generator, “onos-app-intent-perf”, that interfaces with the intent API and generates self-adjusting intent install and withdraw operations at the highest rate ONOS can sustain;
  • a flow rule installer utility (a python script) that interfaces with the ONOS flow subsystem to install and remove flow rules;
  • a link event (flicker) generator in the Null Link providers that sends link up/down descriptions to the ONOS core at an elevated rate, up to that which ONOS can sustain.
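To illustrate the self-adjusting behavior described above, the sketch below implements a simple additive-increase/multiplicative-decrease rate controller: push the offered rate up while the system keeps pace, back off when it falls behind. The function names and tuning constants are hypothetical and are not taken from the actual onos-app-intent-perf implementation.

```python
# Hypothetical sketch of a self-adjusting load controller, in the spirit of
# onos-app-intent-perf. Not the actual ONOS implementation.

def adjust_rate(current_rate, achieved_rate, step=100, backoff=0.8, tolerance=0.95):
    """Return the next offered rate (ops/s).

    If the system sustained at least `tolerance` of the offered rate,
    probe higher by `step`; otherwise back off multiplicatively.
    """
    if achieved_rate >= tolerance * current_rate:
        return current_rate + step            # additive increase
    return max(step, current_rate * backoff)  # multiplicative decrease


def find_sustained_rate(measure, start_rate=1000, rounds=50):
    """Drive `measure(rate) -> achieved_rate` until the offered rate settles."""
    rate = start_rate
    for _ in range(rounds):
        rate = adjust_rate(rate, measure(rate))
    return rate
```

Against a system that saturates at some ceiling, the controller climbs toward the ceiling and then oscillates just around it, which is what lets the generator report the highest rate ONOS can sustain.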

...

We describe further details on utilizing the generator setup in the individual test cases.


General Test Environments:

...

  • Java HotSpot(TM) 64-Bit Server VM; version 1.8.0_31
  • JAVA_OPTS="${JAVA_OPTS:--Xms8G -Xmx8G}"
  • onos-1.1.0 snapshot: commit a31e13471ee626abce2bc43c413fab17586f4fc3
  • Additional case-specific ONOS parameters are described in each specific case.

The following child pages describe further setup details and discuss and analyze the results from each test:

Test Plan:

  • SCPFswitchLat: Measure latencies of switch connect/disconnect as the onos cluster scales from 1 to 3, 5, 7 nodes. (Experiment A&B Plan - Topology (Switch, Link) Event Latency)
  • SCPFportLat: Measure latencies of port connect/disconnect as the onos cluster scales from 1 to 3, 5, 7 nodes. (Experiment A&B Plan - Topology (Switch, Link) Event Latency)
  • SCPFintentInstallWithdrawLat: Measure latencies of installing and withdrawing intents in batch sizes of 1, 100, and 1000, as the onos cluster scales from 1 to 3, 5, 7 nodes. Both cases, with the FlowObjective intent compiler on and off, are tested. (Experiment C Plan - Intent Install/Remove/Re-route Latency)
  • SCPFintentRerouteLat: Measure latencies of installed intents being rerouted in the event of a network path change, as the onos cluster scales from 1 to 3, 5, 7 nodes. Both cases, with the FlowObjective intent compiler on and off, are tested. (Experiment C Plan - Intent Install/Remove/Re-route Latency)
  • SCPFintentEventTp: Measure onos intent operation throughput as onos scales from 1 to 3, 5, 7 nodes. Each scale is also tested with intent "neighboring" scenarios, i.e. when intents are installed only on the local nodes versus on all nodes in the cluster. (Experiment D Plan - Intents Operations Throughput)
  • SCPFscaleTopo: Measure the maximum size of topology that a 3-node onos cluster can discover and maintain. (Experiment E Plan - Topology Scaling Operation)
  • SCPFflowTp1g: Measure onos flow rule subsystem throughput as onos scales from 1 to 3, 5, 7 nodes. Each scale is also tested with flow rule "neighboring" scenarios, i.e. when flow rules are installed only on the local nodes versus on all nodes in the cluster. (Experiment F Plan - Flow Subsystem Burst Throughput)
  • SCPFcbench: Measure Cbench performance of single-instance onos with the fwd app. This test is mainly used for regression monitoring of the onos openflow layers. (Experiment G Plan - Single-node ONOS Cbench)
  • SCPFscalingMaxIntents: Measure the maximum number of intents and corresponding flows that onos can hold as onos scales from 1 to 3, 5, 7 nodes. Both cases, with the FlowObjective intent compiler on and off, are tested. (Experiment H Plan - Max Intent Installation and ReRoute)
  • SCPFbatchFlowResp: Measure the latencies of flow batch installation and deletion via the REST API on a single-instance onos. (Experiment I Plan - Single Bench Flow Latency Test)
  • SCPFhostLat: Measure latencies of host discovery as the onos cluster scales from 1 to 3, 5, 7 nodes. (Experiment L Plan - Host Add Latency)
  • SCPFmastershipFailoverLat: Measure the latencies of ONOS node recovery as the ONOS cluster scales from 3 to 5, 7 nodes. (Experiment M Plan - Mastership Failover Latency)
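For the REST-driven cases such as SCPFbatchFlowResp, a small client can generate the batch flow installations. The sketch below builds a batch payload and posts it to the ONOS REST API; the endpoint path and payload fields follow the ONOS REST documentation, but the device ID, appId, credentials, and flow contents are placeholders to be adapted to the actual test bed.

```python
# Sketch of a batch flow installer against the ONOS REST API.
# Endpoint and payload shape per the ONOS REST docs; the device ID, appId,
# and credentials below are placeholders.
import json
import urllib.request


def make_batch(device_id, count, priority=40000):
    """Build a payload of `count` permanent drop rules for one device."""
    flows = []
    for i in range(count):
        flows.append({
            "priority": priority,
            "timeout": 0,
            "isPermanent": True,
            "deviceId": device_id,
            "selector": {"criteria": [
                {"type": "ETH_TYPE", "ethType": "0x0800"},
                {"type": "IPV4_DST", "ip": f"10.0.{i // 256}.{i % 256}/32"},
            ]},
            "treatment": {"instructions": []},  # empty treatment drops packets
        })
    return {"flows": flows}


def post_batch(batch, base="http://127.0.0.1:8181/onos/v1",
               auth="a2FyYWY6a2FyYWY="):  # base64 of default karaf:karaf
    """POST the batch to the /flows endpoint; returns the HTTP response."""
    req = urllib.request.Request(
        base + "/flows?appId=org.example.perftest",  # appId is illustrative
        data=json.dumps(batch).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": "Basic " + auth},
        method="POST")
    return urllib.request.urlopen(req)
```

Timing the calls around installation, and symmetrically around per-flow DELETE requests, yields the batch install and delete response latencies that this test reports.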