Reference: 

Test Setup:

Test Procedure:

1) Run "push-test-intents" on ONOS1 with specified intent batch size;

2) Collect response time on Install and Withdraw;

3) scale cluster from 1 to 3, 5, 7 nodes, repeat step 1 &2 for each scale.
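
A minimal sketch of this loop, assuming the standard "onos" CLI client is reachable from the test machine; the node address, push-test-intents argument layout, and output parsing below are illustrative assumptions rather than the actual harness:

    #!/usr/bin/env python3
    """Sketch of the test loop in steps 1-3 (not the exact harness used for these results)."""
    import re
    import subprocess

    ONOS1 = "10.0.0.1"                      # assumed management address of ONOS1
    BATCH_SIZES = [1, 10, 100, 1000, 5000]  # batch sizes reported in the results below

    def push_test_intents(node_ip: str, batch_size: int) -> str:
        # Run "push-test-intents <batch_size>" on one node and return the raw CLI output.
        cmd = ["onos", node_ip, "push-test-intents", str(batch_size)]
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    def elapsed_ms(output: str) -> list[int]:
        # Pull any "<n> ms" figures out of the output; the exact output format of
        # push-test-intents is an assumption here, so adjust the regex as needed.
        return [int(n) for n in re.findall(r"(\d+)\s*ms", output)]

    if __name__ == "__main__":
        for batch in BATCH_SIZES:
            out = push_test_intents(ONOS1, batch)
            print(f"batch={batch}: times (ms) = {elapsed_ms(out)}")  # install/withdraw times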

For testing intent re-route (results TBD), we use a slightly different procedure, as follows (topology changes are simulated with Null Providers):

  1. use "push-test-intents -i" to install but not withdraw the intents with an initial "linkGraph.cfg" (which has a shortest path and a backup path);
  2. load a new link graph with shortest path cut; check onos log for the timestamp t0 when the new link graph is loaded, i.e. link event is triggered;
  3. after a wait time of a few seconds, check "intents-events-metrics" for a timestamp (t1) that records the last intent event changes;
  4. the different of t1 and t0 is the re-route time.
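
A minimal sketch of the t1 - t0 calculation in step 4, assuming karaf-style log timestamps (the format string and example values are assumptions, not taken from an actual run):

    #!/usr/bin/env python3
    """Sketch of step 4: re-route time = t1 - t0, where t0 is the log timestamp at
    which the new link graph is loaded (link event) and t1 is the last intent-event
    timestamp reported by "intents-events-metrics"."""
    from datetime import datetime

    FMT = "%Y-%m-%d %H:%M:%S,%f"   # assumed karaf.log-style timestamp format

    def reroute_time_ms(t0_str: str, t1_str: str) -> float:
        # Difference between the last intent event (t1) and the link event (t0), in ms.
        t0 = datetime.strptime(t0_str, FMT)
        t1 = datetime.strptime(t1_str, FMT)
        return (t1 - t0).total_seconds() * 1000.0

    # Example with made-up timestamps:
    print(reroute_time_ms("2015-05-01 12:00:00,100", "2015-05-01 12:00:00,153"))  # 53.0 ms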

 

Result:

1 Intent (mean, ms)      Install    Withdraw
1 node                   15.5       13.1
3 nodes                  43.5       39.3
5 nodes                  44.5       40.0
7 nodes                  43.1       40.8

10 Intents (mean, ms)    Install    Withdraw
1 node                   14.7       14.2
3 nodes                  42.7       38.7
5 nodes                  45.7       39.2
7 nodes                  45.0       41.2

100 Intents (mean, ms)   Install    Withdraw
1 node                   19.4       18.3
3 nodes                  60.1       52.3
5 nodes                  75.9       49.5
7 nodes                  58.7       51.5

1000 Intents (mean, ms)  Install    Withdraw
1 node                   48.0       47.4
3 nodes                  175.5      165.1
5 nodes                  177.3      150.4
7 nodes                  133.2      117.2

5000 Intents (mean, ms)  Install    Withdraw
1 node                   364.6      263.8
3 nodes                  930.3      751.6
5 nodes                  662.5      716.8
7 nodes                  606.4      477.6

 

Result Discussions:

  1. The increase in intent latencies from a standalone ONOS instance to a clustered deployment is expected, owing to the additional east-west (EW) communication among cluster nodes. Once ONOS runs as a multi-node cluster, growing the cluster from 3 to 7 nodes does not introduce additional latency, which is the desirable behavior for a distributed system.
  2. With large intent batch sizes (particularly 1000 and 5000), latencies decrease as the cluster grows from 3 to 7 nodes. This is because with a larger cluster the batch of intents is broken down into a smaller share for each node, so each node completes its operations more quickly; for example, a 5000-intent batch spread over 7 nodes leaves each node roughly 700 intents, versus 5000 on a standalone node. With small batch sizes, such as 100 and below, this benefit is not significant compared with the other processing overhead (see the illustration below).
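
A rough illustration of this per-node share argument (assuming, as above, that a batch is partitioned roughly evenly across cluster nodes):

    # Illustrative arithmetic only: approximate per-node share of an intent batch,
    # assuming the batch is split roughly evenly across the cluster nodes.
    for batch in (100, 1000, 5000):
        shares = {nodes: round(batch / nodes) for nodes in (1, 3, 5, 7)}
        print(batch, shares)   # e.g. 5000 -> {1: 5000, 3: 1667, 5: 1000, 7: 714}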