Due to a ransomware attack, the wiki was reverted to a July 2022 version. We apologize for the lack of a more recent valid backup.
...
commit 926bae6af02dc900e1155c2651ea653a863c5695 (HEAD, origin/master, origin/HEAD, master)
Author: Ray Milkey [ray@onlab.us]
AuthorDate: Tue May 30 14:34:02 2017 -0700
Commit: Ray Milkey [ray@onlab.us]
CommitDate: Tue May 30 14:34:02 2017 -0700

Update SHA signatures for yang tools binaries

commit a18e2a6a1fe05d7e1d281a0ce2fa802d99a03727 (HEAD, origin/master, origin/HEAD, master)
Author: Ray Milkey [ray@onlab.us]
AuthorDate: Tue May 30 15:36:09 2017 -0700
Commit: Yuta HIGUCHI [y-higuchi@onlab.us]
CommitDate: Wed May 31 21:55:17 2017 +0000

Remove deprecated KryoSerializer class
<iframe src="https://onos-jenkins.onlab.us/job/HAscaling/plot/Plot-HA/getPlot?index=1&width=500&height=300" noborder="0" width="500" height="300" scrolling="yes" seamless="seamless"></iframe>
...
- 6.1 Checking ONOS Logs for errors - PASS
- 6.2 Start new nodes - PASS
- 6.3 Checking if ONOS is up yet - PASS
- 6.4 Starting ONOS CLI sessions - PASS
- 6.5 Checking ONOS nodes - FAIL
- Failed to rerun for election
...
Case 8: Compare ONOS Topology view to Mininet topology - PASS
Compare topology objects between Mininet and ONOS
- 8.1 Comparing ONOS topology to MN topology - PASS
- 8.2 Hosts view is consistent across all ONOS nodes - PASS
- 8.3 Hosts information is correct - PASS
- 8.4 Host attachment points to the network - PASS
- 8.5 Clusters view is consistent across all ONOS nodes - PASS
- 8.6 There is only one SCC - PASS
- 8.7 Device information is correct - PASS
- 8.8 Links are correct - PASS
- 8.9 Hosts are correct - PASS
- 8.10 Checking ONOS nodes - FAIL
- Nodes check NOT successful
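The topology checks in Case 8 come down to comparing each ONOS node's view of devices, links, and hosts against the Mininet ground truth, and requiring every node to agree. A minimal sketch of that comparison (illustrative data and function names, not the actual TestON code):

```python
# Hypothetical sketch of the Case 8 comparison: every ONOS node's view of
# each object type must equal Mininet's view of the same type.
def topology_consistent(mininet_view, onos_views):
    """mininet_view: dict of object-type -> set; onos_views: list of such dicts.
    Returns a per-type True/False result."""
    return {kind: all(view.get(kind) == expected for view in onos_views)
            for kind, expected in mininet_view.items()}

mn = {"devices": {"s1", "s2"}, "links": {("s1", "s2")}, "hosts": {"h1", "h2"}}
node_a = {"devices": {"s1", "s2"}, "links": {("s1", "s2")}, "hosts": {"h1", "h2"}}
node_b = {"devices": {"s1", "s2"}, "links": {("s1", "s2")}, "hosts": {"h1"}}  # stale host view

print(topology_consistent(mn, [node_a, node_b]))
# -> {'devices': True, 'links': True, 'hosts': False}
```

A per-type result like this is why the report can pass the device and link checks (8.7, 8.8) while a single stale node view fails a hosts or nodes check.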
...
Case 7: Running ONOS Constant State Tests - FAIL
- 7.1 Check that each switch has a master - PASS
- 7.2 Read device roles from ONOS - PASS
- 7.3 Check for consistency in roles from each controller - PASS
- 7.4 Get the intents and compare across all nodes - FAIL
- Error in reading intents from ONOS
- 7.5 Check for consistency in Intents from each controller - PASS
- 7.6 Compare current intents with intents before the scaling - PASS
- 7.7 Get the OF Table entries and compare to before component scaling - PASS
- 7.8 Leadership Election is still functional - FAIL
- Something went wrong with Leadership election
Case 4: Verify connectivity by sending traffic across Intents - PASS
Ping across added host intents to check functionality and check the state of the intent
- 4.1 Check Intent state - PASS
- 4.2 Ping across added host intents - PASS
- 4.3 Check leadership of topics - PASS
- 4.4 Wait a minute then ping again - PASS
Case 15: Check that Leadership Election is still functional - FAIL
- 15.1 Run for election on each node - PASS
- 15.2 Check that each node shows the same leader and candidates - PASS
- 15.3 Find current leader and withdraw - PASS
- 15.4 Check that a new node was elected leader - FAIL
- Something went wrong with Leadership election
- 15.5 Check that the new leader was the candidate of the old leader - FAIL
- Incorrect Candidate Elected
- 15.6 Run for election on old leader (just so everyone is in the hat) - PASS
- 15.7 Check that oldLeader is a candidate, and leader if only 1 node - PASS
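The expectation behind steps 15.3-15.7 is that when the current leader withdraws, its first-in-line candidate takes over, and re-running for election puts the old leader back in the candidate list. A toy model of that contract (illustrative only, not ONOS's LeadershipService):

```python
# Toy election model: candidates are ordered, the head of the list is the
# leader, withdrawing the leader promotes the next candidate in line.
class Election:
    def __init__(self, candidates):
        self.candidates = list(candidates)  # head of the list is the leader

    def leader(self):
        return self.candidates[0] if self.candidates else None

    def withdraw(self, node):
        self.candidates.remove(node)

    def run(self, node):
        if node not in self.candidates:
            self.candidates.append(node)

e = Election(["node1", "node2", "node3"])
old_leader = e.leader()              # 15.3: find the current leader...
expected_next = e.candidates[1]      # ...and note its first candidate
e.withdraw(old_leader)               # 15.3: withdraw it
assert e.leader() == expected_next   # 15.4/15.5: next candidate takes over
e.run(old_leader)                    # 15.6: old leader re-enters the hat
assert old_leader in e.candidates    # 15.7: old leader is a candidate again
```

The FAILs at 15.4 and 15.5 mean this promotion did not happen as expected after the withdrawal, even though the withdrawal itself (15.3) succeeded.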
Case 17: Check for basic functionality with distributed primitives - FAIL
Test the methods of the distributed primitives (counters and sets) through the CLI
- 17.1 Increment then get a default counter on each node - FAIL
- Error incrementing default counter
- 17.2 Get then Increment a default counter on each node - FAIL
- Error incrementing default counter
- 17.3 Counters we added have the correct values - FAIL
- Added counters are incorrect
- 17.4 Add -8 to then get a default counter on each node - FAIL
- Error incrementing default counter
- 17.5 Add 5 to then get a default counter on each node - FAIL
- Error incrementing default counter
- 17.6 Get then add 5 to a default counter on each node - FAIL
- Error incrementing default counter
- 17.7 Counters we added have the correct values - FAIL
- Added counters are incorrect
- 17.8 Distributed Set get - FAIL
- Set elements are incorrect
- 17.9 Distributed Set size - FAIL
- Set sizes are incorrect
- 17.10 Distributed Set add() - FAIL
- Set add was incorrect
- 17.11 Distributed Set addAll() - FAIL
- Set addAll was incorrect
- 17.12 Distributed Set contains() - FAIL
- Set contains failed
- 17.13 Distributed Set containsAll() - PASS
- 17.14 Distributed Set remove() - FAIL
- Set remove was incorrect
- 17.15 Distributed Set removeAll() - FAIL
- Set removeAll was incorrect
- 17.16 Distributed Set addAll() - FAIL
- Set addAll was incorrect
- 17.17 Distributed Set clear() - FAIL
- Set clear was incorrect
- 17.18 Distributed Set addAll() - FAIL
- Set addAll was incorrect
- 17.19 Distributed Set retain() - FAIL
- Set retain was incorrect
- 17.20 Partitioned Transactional maps put - PASS
- 17.21 Partitioned Transactional maps get - FAIL
- Partitioned Transactional Map values incorrect
- 17.22 Get the value of a new value - FAIL
- Error getting atomic Value None, found: ['null', None, None]
- 17.23 Atomic Value set() - FAIL
- Error setting atomic Value [1, -1, -1]
- 17.24 Get the value after set() - FAIL
- Error getting atomic Value foo, found: ['foo', None, None]
- 17.25 Atomic Value compareAndSet() - PASS
- 17.26 Get the value after compareAndSet() - FAIL
- Error getting atomic Value bar, found: ['bar', None, None]
- 17.27 Atomic Value getAndSet() - PASS
- 17.28 Get the value after getAndSet() - FAIL
- Error getting atomic Value: expected baz, found: ['baz', None, None]
- 17.29 Atomic Value destroy() - PASS
- 17.30 Get the value after destroy() - FAIL
- Error getting atomic Value None, found: ['null', None, None]
- 17.31 Work Queue add() - PASS
- 17.32 Check the work queue stats - No Result
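Steps 17.1-17.7 exercise the two atomic-counter orderings (increment-then-get vs. get-then-increment) and arbitrary deltas, including the negative one in 17.4. A plain Python stand-in for the semantics being verified (not the distributed primitive itself, and not the TestON code):

```python
# Stand-in for the atomic counter semantics checked in 17.1-17.7:
# increment_and_get returns the NEW value, get_and_increment the OLD one,
# and the add variants accept arbitrary (including negative) deltas.
class Counter:
    def __init__(self, value=0):
        self.value = value

    def increment_and_get(self):       # 17.1
        self.value += 1
        return self.value

    def get_and_increment(self):       # 17.2
        old = self.value
        self.value += 1
        return old

    def add_and_get(self, delta):      # 17.4/17.5 (delta may be -8 or 5)
        self.value += delta
        return self.value

    def get_and_add(self, delta):      # 17.6
        old = self.value
        self.value += delta
        return old

c = Counter()
assert c.increment_and_get() == 1   # 17.1: returns the new value
assert c.get_and_increment() == 1   # 17.2: returns the old value (counter is now 2)
assert c.add_and_get(-8) == -6      # 17.4: add -8, return the new value
assert c.add_and_get(5) == -1       # 17.5
assert c.get_and_add(5) == -1       # 17.6: returns the old value (counter is now 4)
```

The 17.3/17.7 checks then compare the values read back from every node against this expected running total, which is why a single failed increment cascades into "Added counters are incorrect" on both.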