Reference:
- For the Test Plan, see: Test Plan - Perf & Scale-out;
- ONLabTest script - https://github.com/opennetworkinglab/ONLabTest:pushTestIntents
Test Setup:
- RC2 - commit f6a731a350138c06723d283a647262073dd0168b
- Using "push-test-intents" app; 7 Null Devices spread masterships across all active nodes;
- Bare-metal Servers: dual-Xeon E5-2670, 32GB DDR3 RAM, SSD, Cluster Network is 1Gbps
- JAVA_OPTS="${JAVA_OPTS:--Xms8G -Xmx8G}"
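The setup above can be reproduced roughly with the commands below. The app name, component name, and properties ("org.onosproject.null", "NullProviders", "deviceCount") are the ones used by the null provider in other ONOS releases and are assumptions here; they may differ for this RC2 build.

# Activate the null provider and create 7 null devices (names are assumptions, see note above)
onos> app activate org.onosproject.null
onos> cfg set org.onosproject.provider.nil.NullProviders deviceCount 7
onos> cfg set org.onosproject.provider.nil.NullProviders enabled true

# JVM heap for each ONOS instance, e.g. exported before starting the service
export JAVA_OPTS="${JAVA_OPTS:--Xms8G -Xmx8G}"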
Test Procedure:
1) Run "push-test-intents" on ONOS1 with specified intent batch size;
2) Collect response time on Install and Withdraw;
3) scale cluster from 1 to 3, 5, 7 nodes, repeat step 1 &2 for each scale.
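As a concrete sketch of steps 1 and 2, a run for one batch size could look like the following; the positional arguments (ingress point, egress point, number of intents) and the null device IDs are assumptions and may need adjusting for this build:

# Push a batch of 1000 point-to-point intents between two null devices,
# then read the reported install/withdraw times from the command output
onos> push-test-intents null:0000000000000001/1 null:0000000000000007/1 1000

# Intent event counters and rates can also be checked with
onos> intents-events-metrics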
For testing intent re-route (results TBD), we use a slightly different procedure, as follows (simulating a topology change with the Null Providers):
- use "push-test-intents -i" to install, but not withdraw, the intents with an initial "linkGraph.cfg" (which contains a shortest path and a backup path);
- load a new link graph with the shortest path cut; check the onos log for the timestamp t0 at which the new link graph is loaded, i.e. when the link event is triggered;
- after a wait time of a few seconds, check "intents-events-metrics" for the timestamp t1 that records the last intent event change;
- the difference between t1 and t0 is the re-route time (see the sketch below).
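A minimal sketch of the last step, with hypothetical timestamp values expressed in epoch milliseconds (t0 taken from the onos log, t1 from "intents-events-metrics"):

# Re-route time is simply t1 - t0; the values below are made-up examples
t0=1426000532120   # link event: new link graph loaded
t1=1426000532145   # last intent event recorded by intents-events-metrics
echo "re-route time: $((t1 - t0)) ms"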
Result:

Result Discussions:
- Intent latencies increasing from a standalone ONOS instance to a clustered deployment is expected, due to the additional latency of east-west (inter-node) communication. Once ONOS runs as a multi-node cluster, however, growing the cluster from 3 to 7 nodes does not introduce additional latency. This is the desired behavior for a distributed system.
- With large intent batch sizes (particularly 1000 and 5000), latencies decrease from 3 to 7 nodes. This is because with a larger cluster, the portion of the batch handled by each node is smaller, so each node completes its share of the work sooner (see the illustration below). With small batch sizes, such as under 100 intents, this benefit is not significant compared with the other processing overhead.
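As a rough illustration of the batch-splitting effect, assuming the intents in a batch are spread evenly across the nodes of the cluster:

# Approximate per-node share of a 5000-intent batch at each cluster size
batch=5000
for nodes in 1 3 5 7; do
  echo "$nodes node(s): ~$((batch / nodes)) intents per node"
done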