...
commit 5130129f56ea35730201fe82606d604917b2edf8 (HEAD, origin/master, origin/HEAD, master)
Author: Yi Tseng [a86487817@gmail.com]
AuthorDate: Fri Jul 28 13:02:59 2017 -0700
Commit: Charles Chan [charles@opennetworking.org]
CommitDate: Sat Aug 5 00:59:52 2017 +0000
[CORD-1614] Refactor DHCP relay app
HTML |
---|
<iframe src="https://onos-jenkins.onlab.us/job/HAscaling/plot/Plot-HA/getPlot?index=1&width=500&height=300" noborder="0" width="500" height="300" scrolling="yes" seamless="seamless"></iframe> |
...
- 1.1 Constructing test variables - PASS
- 1.2 Apply cell to environment - PASS
- 1.3 Uninstalling ONOS package - PASS
- 1.4 Setup server for cluster metadata file - PASS
- 1.5 Generate initial metadata file - PASS
- 1.6 Starting Mininet - PASS
- 1.7 Copying backup config files - PASS
- 1.8 Creating ONOS package - PASS
- 1.9 Installing ONOS package - PASS
- 1.10 Set up ONOS secure SSH - PASS
- 1.11 Checking ONOS service - PASS
- 1.12 Starting ONOS CLI sessions - PASS
- 1.13 Clean up ONOS service changes - No Result
- 1.14 Checking ONOS nodes - PASS
- 1.15 Activate apps defined in the params file - No Result
- 1.16 Set ONOS configurations - PASS
- 1.17 Check app ids - PASS
Case 2: Assigning devices to controllers - PASS
...
- 5.1 Check that each switch has a master - PASS
- 5.2 Get the Mastership of each switch from each controller - No Result
- 5.3 Read device roles from ONOS - PASS
- 5.4 Check for consistency in roles from each controller - PASS
- 5.5 Get the intents from each controller - PASS
- 5.6 Check for consistency in Intents from each controller - PASS
- 5.7 Get the flows from each controller - PASS
- 5.8 Check for consistency in Flows from each controller - PASS
- 5.9 Get the OF Table entries - No Result
- 5.10 Start continuous pings - No Result
- 5.11 Collecting topology information from ONOS - No Result
- 5.12 Host view is consistent across ONOS nodes - PASS
- 5.13 Each host has an IP address - PASS
- 5.14 Cluster view is consistent across ONOS nodes - PASS
- 5.15 Cluster view correct across ONOS nodes - PASS
- 5.16 Comparing ONOS topology to MN - PASS
- 5.17 Device information is correct - PASS
- 5.18 Links are correct - PASS
- 5.19 Hosts are correct - PASS
Case 14: Start Leadership Election app - PASS
...
- 6.1 Checking ONOS Logs for errors - PASS
- 6.2 Start new nodes - PASS
- 6.3 Checking if ONOS is up yet - PASS
- 6.4 Starting ONOS CLI sessions - PASS
- 6.5 Checking ONOS nodes - FAIL
- Failed to rerun for election
Case 8: Compare ONOS Topology view to Mininet topology - PASS
Compare topology objects between Mininet and ONOS
- 8.1 Comparing ONOS topology to MN topology - PASS
- 8.2 Hosts view is consistent across all ONOS nodes - PASS
- 8.3 Hosts information is correct - PASS
- 8.4 Host attachment points to the network - PASS
- 8.5 Clusters view is consistent across all ONOS nodes - PASS
- 8.6 There is only one SCC - PASS
- 8.7 Device information is correct - PASS
- 8.8 Links are correct - PASS
- 8.9 Hosts are correct - PASS
- 8.10 Checking ONOS nodes - PASS
Case 7: Running ONOS Constant State Tests - FAIL
- 7.1 Check that each switch has a master - PASS
- 7.2 Read device roles from ONOS - PASS
- 7.3 Check for consistency in roles from each controller - PASS
- 7.4 Get the intents and compare across all nodes - PASS
- 7.5 Check for consistency in Intents from each controller - PASS
- 7.6 Compare current intents with intents before the scaling - PASS
- 7.7 Get the OF Table entries and compare to before component scaling - PASS
- 7.8 Leadership Election is still functional - FAIL
- Something went wrong with Leadership election
Case 4: Verify connectivity by sending traffic across Intents - PASS
Ping across added host intents to check functionality and check the state of the intent
- 4.1 Check Intent state - PASS
- 4.2 Ping across added host intents - PASS
- 4.3 Check leadership of topics - PASS
- 4.4 Wait a minute then ping again - PASS
Case 15: Check that Leadership Election is still functional - FAIL
- 15.1 Run for election on each node - PASS
- 15.2 Check that each node shows the same leader and candidates - PASS
- 15.3 Find current leader and withdraw - PASS
- 15.4 Check that a new node was elected leader - FAIL
- Something went wrong with Leadership election
- 15.5 Check that the new leader was the candidate of the old leader - FAIL
- Incorrect Candidate Elected
- 15.6 Run for election on old leader (just so everyone is in the hat) - PASS
- 15.7 Check that oldLeader is a candidate, and leader if only 1 node - PASS
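Steps 15.3-15.5 above encode the expected hand-off: when the current leader withdraws from a topic, the next candidate in that topic's queue should be promoted. As an illustrative sketch of that expectation (not the actual TestON helper; node names are hypothetical):

```python
def expected_new_leader(candidates):
    """Given a topic's candidate list with the current leader first,
    return who should lead after the leader withdraws: the next
    candidate in line, or None if nobody is left."""
    leader, *rest = candidates
    return rest[0] if rest else None

# Example: with three nodes queued on the topic, withdrawing ONOS1
# should promote ONOS2; step 15.5 fails when the elected leader
# differs from this expectation.
print(expected_new_leader(['ONOS1', 'ONOS2', 'ONOS3']))  # ONOS2
```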
Case 17: Check for basic functionality with distributed primitives - FAIL
Test the methods of the distributed primitives (counters and sets) through the CLI
- 17.1 Increment then get a default counter on each node - FAIL
- Error incrementing default counter
- 17.2 Get then Increment a default counter on each node - FAIL
- Error incrementing default counter
- 17.3 Counters we added have the correct values - FAIL
- Added counters are incorrect
- 17.4 Add -8 to then get a default counter on each node - FAIL
- Error incrementing default counter
- 17.5 Add 5 to then get a default counter on each node - FAIL
- Error incrementing default counter
- 17.6 Get then add 5 to a default counter on each node - FAIL
- Error incrementing default counter
- 17.7 Counters we added have the correct values - FAIL
- Added counters are incorrect
- 17.8 Distributed Set get - FAIL
- Set elements are incorrect
- 17.9 Distributed Set size - FAIL
- Set sizes are incorrect
- 17.10 Distributed Set add() - FAIL
- Set add was incorrect
- 17.11 Distributed Set addAll() - FAIL
- Set addAll was incorrect
- 17.12 Distributed Set contains() - FAIL
- Set contains failed
- 17.13 Distributed Set containsAll() - PASS
- 17.14 Distributed Set remove() - FAIL
- Set remove was incorrect
- 17.15 Distributed Set removeAll() - FAIL
- Set removeAll was incorrect
- 17.16 Distributed Set addAll() - FAIL
- Set addAll was incorrect
- 17.17 Distributed Set clear() - FAIL
- Set clear was incorrect
- 17.18 Distributed Set addAll() - FAIL
- Set addAll was incorrect
- 17.19 Distributed Set retain() - FAIL
- Set retain was incorrect
- 17.20 Partitioned Transactional maps put - PASS
- 17.21 Partitioned Transactional maps get - FAIL
- Partitioned Transactional Map values incorrect
- 17.22 Get the value of a new value - FAIL
- Error getting atomic Value None, found: ['null', None, None]
- 17.23 Atomic Value set() - FAIL
- Error setting atomic Value: [1, -1, -1]
- 17.24 Get the value after set() - FAIL
- Error getting atomic Value foo, found: ['foo', None, None]
- 17.25 Atomic Value compareAndSet() - PASS
- 17.26 Get the value after compareAndSet() - FAIL
- Error getting atomic Value bar, found: ['bar', None, None]
- 17.27 Atomic Value getAndSet() - PASS
- 17.28 Get the value after getAndSet() - FAIL
- Error getting atomic Value: expected baz, found: ['baz', None, None]
- 17.29 Atomic Value destroy() - PASS
- 17.30 Get the value after destroy() - FAIL
- Error getting atomic Value None, found: ['null', None, None]
- 17.31 Work Queue add() - PASS
- 17.32 Check the work queue stats - No Result
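The atomic-value failures above report the raw per-node CLI responses (e.g. `found: ['foo', None, None]`), where `'null'` is the CLI's text for an unset value and `None` marks a node that gave no usable response. The consistency check behind those steps amounts to the following sketch (illustrative only, not the TestON code):

```python
def check_atomic_value(expected, responses):
    """Pass only if every node responded and agreed on the value.

    `responses` holds one raw CLI result per ONOS node, mirroring the
    report's "found: [...]" lists: the string 'null' means the value
    is unset, while None means the node failed to answer at all.
    """
    # Any missing response is a failure, even if other nodes agree.
    if any(r is None for r in responses):
        return False
    # Map the CLI's literal 'null' to Python None before comparing.
    return all((None if r == 'null' else r) == expected for r in responses)

# Example mirroring step 17.24: the value was set to "foo", but only
# one node answered, so the case is reported as FAIL.
print(check_atomic_value('foo', ['foo', None, None]))  # False
```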
Case 6: Scale the number of nodes in the ONOS cluster - FAIL
- 6.1 Checking ONOS Logs for errors - PASS
- 6.2 Start new nodes - PASS
- 6.3 Checking if ONOS is up yet - PASS
- 6.4 Starting ONOS CLI sessions - PASS
- 6.5 Checking ONOS nodes - FAIL
- Failed to rerun for election
Case 8: Compare ONOS Topology view to Mininet topology - PASS
Compare topology objects between Mininet and ONOS
- 8.1 Comparing ONOS topology to MN topology - PASS
- 8.2 Hosts view is consistent across all ONOS nodes - PASS
- 8.3 Hosts information is correct - PASS
- 8.4 Host attachment points to the network - PASS
- 8.5 Clusters view is consistent across all ONOS nodes - PASS
- 8.6 There is only one SCC - PASS
- 8.7 Device information is correct - PASS
- 8.8 Links are correct - PASS
- 8.9 Hosts are correct - PASS
- 8.10 Checking ONOS nodes - PASS
Case 7: Running ONOS Constant State Tests - FAIL
- 7.1 Check that each switch has a master - PASS
- 7.2 Read device roles from ONOS - PASS
- 7.3 Check for consistency in roles from each controller - PASS
- 7.4 Get the intents and compare across all nodes - PASS
- 7.5 Check for consistency in Intents from each controller - PASS
- 7.6 Compare current intents with intents before the scaling - PASS
- 7.7 Get the OF Table entries and compare to before component scaling - PASS
- 7.8 Leadership Election is still functional - FAIL
- Something went wrong with Leadership election
Case 4: Verify connectivity by sending traffic across Intents - PASS
Ping across added host intents to check functionality and check the state of the intent
- 4.1 Check Intent state - PASS
- 4.2 Ping across added host intents - PASS
- 4.3 Check leadership of topics - PASS
- 4.4 Wait a minute then ping again - PASS
Case 15: Check that Leadership Election is still functional - FAIL
- 15.1 Run for election on each node - PASS
- 15.2 Check that each node shows the same leader and candidates - PASS
- 15.3 Find current leader and withdraw - PASS
- 15.4 Check that a new node was elected leader - FAIL
- Something went wrong with Leadership election
- 15.5 Check that the new leader was the candidate of the old leader - FAIL
- Incorrect Candidate Elected
- 15.6 Run for election on old leader (just so everyone is in the hat) - PASS
- 15.7 Check that oldLeader is a candidate, and leader if only 1 node - PASS
Case 17: Check for basic functionality with distributed primitives - FAIL
Test the methods of the distributed primitives (counters and sets) through the CLI
- 17.1 Increment then get a default counter on each node - FAIL
- Error incrementing default counter
- 17.2 Get then Increment a default counter on each node - FAIL
- Error incrementing default counter
- 17.3 Counters we added have the correct values - FAIL
- Added counters are incorrect
- 17.4 Add -8 to then get a default counter on each node - FAIL
- Error incrementing default counter
- 17.5 Add 5 to then get a default counter on each node - FAIL
- Error incrementing default counter
- 17.6 Get then add 5 to a default counter on each node - FAIL
- Error incrementing default counter
- 17.7 Counters we added have the correct values - FAIL
- Added counters are incorrect
- 17.8 Distributed Set get - FAIL
- Set elements are incorrect
- 17.9 Distributed Set size - FAIL
- Set sizes are incorrect
- 17.10 Distributed Set add() - FAIL
- Set add was incorrect
- 17.11 Distributed Set addAll() - FAIL
- Set addAll was incorrect
- 17.12 Distributed Set contains() - FAIL
- Set contains failed
- 17.13 Distributed Set containsAll() - PASS
- 17.14 Distributed Set remove() - FAIL
- Set remove was incorrect
- 17.15 Distributed Set removeAll() - FAIL
- Set removeAll was incorrect
- 17.16 Distributed Set addAll() - FAIL
- Set addAll was incorrect
- 17.17 Distributed Set clear() - FAIL
- Set clear was incorrect
- 17.18 Distributed Set addAll() - FAIL
- Set addAll was incorrect
- 17.19 Distributed Set retain() - FAIL
- Set retain was incorrect
- 17.20 Partitioned Transactional maps put - PASS
- 17.21 Partitioned Transactional maps get - FAIL
- Partitioned Transactional Map values incorrect
- 17.22 Get the value of a new value - FAIL
- Error getting atomic Value None, found: ['null', None, None, None, None]
- 17.23 Atomic Value set() - FAIL
- Error setting atomic Value: [1, -1, -1, -1, -1]
- 17.24 Get the value after set() - FAIL
- Error getting atomic Value foo, found: ['foo', None, None, None, None]
- 17.25 Atomic Value compareAndSet() - PASS
- 17.26 Get the value after compareAndSet() - FAIL
- Error getting atomic Value bar, found: ['bar', None, None, None, None]
- 17.27 Atomic Value getAndSet() - PASS
- 17.28 Get the value after getAndSet() - FAIL
- Error getting atomic Value: expected baz, found: ['baz', None, None, None, None]
- 17.29 Atomic Value destroy() - PASS
- 17.30 Get the value after destroy() - FAIL
- Error getting atomic Value None, found: ['null', None, None, None, None]
- 17.31 Work Queue add() - PASS
- 17.32 Check the work queue stats - No Result
Case 6: Scale the number of nodes in the ONOS cluster - FAIL
- 6.1 Checking ONOS Logs for errors - PASS
- 6.2 Start new nodes - PASS
- 6.3 Checking if ONOS is up yet - PASS
- 6.4 Starting ONOS CLI sessions - PASS
- 6.5 Checking ONOS nodes - FAIL
- Failed to rerun for election
Case 8: Compare ONOS Topology view to Mininet topology - PASS
Compare topology objects between Mininet and ONOS
- 8.1 Comparing ONOS topology to MN topology - PASS
- 8.2 Hosts view is consistent across all ONOS nodes - PASS
- 8.3 Hosts information is correct - PASS
- 8.4 Host attachment points to the network - PASS
- 8.5 Clusters view is consistent across all ONOS nodes - PASS
- 8.6 There is only one SCC - PASS
- 8.7 Device information is correct - PASS
- 8.8 Links are correct - PASS
- 8.9 Hosts are correct - PASS
- 8.10 Checking ONOS nodes - PASS
Case 7: Running ONOS Constant State Tests - FAIL
- 7.1 Check that each switch has a master - PASS
- 7.2 Read device roles from ONOS - PASS
- 7.3 Check for consistency in roles from each controller - PASS
- 7.4 Get the intents and compare across all nodes - PASS
- 7.5 Check for consistency in Intents from each controller - PASS
- 7.6 Compare current intents with intents before the scaling - PASS
- 7.7 Get the OF Table entries and compare to before component scaling - PASS
- 7.8 Leadership Election is still functional - FAIL
- Something went wrong with Leadership election
Case 4: Verify connectivity by sending traffic across Intents - FAIL
Ping across added host intents to check functionality and check the state of the intent
- 4.1 Check Intent state - PASS
- 4.2 Ping across added host intents - FAIL
- Intents have not been installed correctly, pings failed.
- 4.3 Check leadership of topics - PASS
- 4.4 Wait a minute then ping again - FAIL
- Intents have not been installed correctly, pings failed.
Case 15: Check that Leadership Election is still functional - FAIL
- 15.1 Run for election on each node - PASS
- 15.2 Check that each node shows the same leader and candidates - PASS
- 15.3 Find current leader and withdraw - PASS
- 15.4 Check that a new node was elected leader - FAIL
- Something went wrong with Leadership election
- 15.5 Check that the new leader was the candidate of the old leader - FAIL
- Incorrect Candidate Elected
- 15.6 Run for election on old leader (just so everyone is in the hat) - PASS
- 15.7 Check that oldLeader is a candidate, and leader if only 1 node - PASS
Case 17: Check for basic functionality with distributed primitives - FAIL
Test the methods of the distributed primitives (counters and sets) through the CLI
- 17.1 Increment then get a default counter on each node - FAIL
- Error incrementing default counter
- 17.2 Get then Increment a default counter on each node - FAIL
- Error incrementing default counter
- 17.3 Counters we added have the correct values - FAIL
- Added counters are incorrect
- 17.4 Add -8 to then get a default counter on each node - FAIL
- Error incrementing default counter
- 17.5 Add 5 to then get a default counter on each node - FAIL
- Error incrementing default counter
- 17.6 Get then add 5 to a default counter on each node - FAIL
- Error incrementing default counter
- 17.7 Counters we added have the correct values - FAIL
- Added counters are incorrect
- 17.8 Distributed Set get - FAIL
- Set elements are incorrect
- 17.9 Distributed Set size - FAIL
- Set sizes are incorrect
- 17.10 Distributed Set add() - FAIL
- Set add was incorrect
- 17.11 Distributed Set addAll() - FAIL
- Set addAll was incorrect
- 17.12 Distributed Set contains() - FAIL
- Set contains failed
- 17.13 Distributed Set containsAll() - PASS
- 17.14 Distributed Set remove() - FAIL
- Set remove was incorrect
- 17.15 Distributed Set removeAll() - FAIL
- Set removeAll was incorrect
- 17.16 Distributed Set addAll() - FAIL
- Set addAll was incorrect
- 17.17 Distributed Set clear() - FAIL
- Set clear was incorrect
- 17.18 Distributed Set addAll() - FAIL
- Set addAll was incorrect
- 17.19 Distributed Set retain() - FAIL
- Set retain was incorrect
- 17.20 Partitioned Transactional maps put - PASS
- 17.21 Partitioned Transactional maps get - FAIL
- Partitioned Transactional Map values incorrect
- 17.22 Get the value of a new value - FAIL
- Error getting atomic Value None, found: ['null', None, None, None, None, None, None]
- 17.23 Atomic Value set() - FAIL
- Error setting atomic Value: [1, -1, -1, -1, -1, -1, -1]
- 17.24 Get the value after set() - FAIL
- Error getting atomic Value foo, found: ['foo', None, None, None, None, None, None]
- 17.25 Atomic Value compareAndSet() - PASS
- 17.26 Get the value after compareAndSet() - FAIL
- Error getting atomic Value bar, found: ['bar', None, None, None, None, None, None]
- 17.27 Atomic Value getAndSet() - PASS
- 17.28 Get the value after getAndSet() - FAIL
- Error getting atomic Value: expected baz, found: ['baz', None, None, None, None, None, None]
- 17.29 Atomic Value destroy() - PASS
- 17.30 Get the value after destroy() - FAIL
- Error getting atomic Value None, found: ['null', None, None, None, None, None, None]
- 17.31 Work Queue add() - PASS
- 17.32 Check the work queue stats - No Result
Case 6: Scale the number of nodes in the ONOS cluster - FAIL
- 6.1 Checking ONOS Logs for errors - PASS
- 6.2 Start new nodes - PASS
- 6.3 Checking if ONOS is up yet - PASS
- 6.4 Starting ONOS CLI sessions - PASS
- 6.5 Checking ONOS nodes - FAIL
- Failed to rerun for election
Case 8: Compare ONOS Topology view to Mininet topology - PASS
Compare topology objects between Mininet and ONOS
- 8.1 Comparing ONOS topology to MN topology - PASS
- 8.2 Hosts view is consistent across all ONOS nodes - PASS
- 8.3 Hosts information is correct - PASS
- 8.4 Host attachment points to the network - PASS
- 8.5 Clusters view is consistent across all ONOS nodes - PASS
- 8.6 There is only one SCC - PASS
- 8.7 Device information is correct - PASS
- 8.8 Links are correct - PASS
- 8.9 Hosts are correct - PASS
- 8.10 Checking ONOS nodes - PASS
Case 7: Running ONOS Constant State Tests - FAIL
- 7.1 Check that each switch has a master - PASS
- 7.2 Read device roles from ONOS - PASS
- 7.3 Check for consistency in roles from each controller - PASS
- 7.4 Get the intents and compare across all nodes - PASS
- 7.5 Check for consistency in Intents from each controller - PASS
- 7.6 Compare current intents with intents before the scaling - FAIL
- The Intents changed during scaling
- 7.7 Get the OF Table entries and compare to before component scaling - FAIL
- Changes were found in the flow tables
- 7.8 Leadership Election is still functional - FAIL
- Something went wrong with Leadership election
Case 4: Verify connectivity by sending traffic across Intents - FAIL
Ping across added host intents to check functionality and check the state of the intent
- 4.1 Check Intent state - FAIL
- Intents are not all in INSTALLED state
- 4.2 Ping across added host intents - FAIL
- Intents have not been installed correctly, pings failed.
- 4.3 Check leadership of topics - PASS
- 4.4 Wait a minute then ping again - FAIL
- Intents have not been installed correctly, pings failed.
Case 15: Check that Leadership Election is still functional - FAIL
- 15.1 Run for election on each node - PASS
- 15.2 Check that each node shows the same leader and candidates - PASS
- 15.3 Find current leader and withdraw - PASS
- 15.4 Check that a new node was elected leader - FAIL
- Something went wrong with Leadership election
- 15.5 Check that the new leader was the candidate of the old leader - FAIL
- Incorrect Candidate Elected
- 15.6 Run for election on old leader (just so everyone is in the hat) - PASS
- 15.7 Check that oldLeader is a candidate, and leader if only 1 node - PASS
Case 17: Check for basic functionality with distributed primitives - FAIL
Test the methods of the distributed primitives (counters and sets) through the CLI
- 17.1 Increment then get a default counter on each node - FAIL
- Error incrementing default counter
- 17.2 Get then Increment a default counter on each node - FAIL
- Error incrementing default counter
- 17.3 Counters we added have the correct values - FAIL
- Added counters are incorrect
- 17.4 Add -8 to then get a default counter on each node - FAIL
- Error incrementing default counter
- 17.5 Add 5 to then get a default counter on each node - FAIL
- Error incrementing default counter
- 17.6 Get then add 5 to a default counter on each node - FAIL
- Error incrementing default counter
- 17.7 Counters we added have the correct values - FAIL
- Added counters are incorrect
- 17.8 Distributed Set get - FAIL
- Set elements are incorrect
- 17.9 Distributed Set size - FAIL
- Set sizes are incorrect
- 17.10 Distributed Set add() - FAIL
- Set add was incorrect
- 17.11 Distributed Set addAll() - FAIL
- Set addAll was incorrect
- 17.12 Distributed Set contains() - FAIL
- Set contains failed
- 17.13 Distributed Set containsAll() - PASS
- 17.14 Distributed Set remove() - FAIL
- Set remove was incorrect
- 17.15 Distributed Set removeAll() - FAIL
- Set removeAll was incorrect
- 17.16 Distributed Set addAll() - FAIL
- Set addAll was incorrect
- 17.17 Distributed Set clear() - FAIL
- Set clear was incorrect
- 17.18 Distributed Set addAll() - FAIL
- Set addAll was incorrect
- 17.19 Distributed Set retain() - FAIL
- Set retain was incorrect
- 17.20 Partitioned Transactional maps put - PASS
- 17.21 Partitioned Transactional maps get - FAIL
- Partitioned Transactional Map values incorrect
- 17.22 Get the value of a new value - FAIL
- Error getting atomic Value None, found: ['null', None, None, None, None, None, None]
- 17.23 Atomic Value set() - FAIL
- Error setting atomic Value: [1, -1, -1, -1, -1, -1, -1]
- 17.24 Get the value after set() - FAIL
- Error getting atomic Value foo, found: ['foo', None, None, None, None, None, None]
- 17.25 Atomic Value compareAndSet() - PASS
- 17.26 Get the value after compareAndSet() - FAIL
- Error getting atomic Value bar, found: ['bar', None, None, None, None, None, None]
- 17.27 Atomic Value getAndSet() - PASS
- 17.28 Get the value after getAndSet() - FAIL
- Error getting atomic Value: expected baz, found: ['baz', None, None, None, None, None, None]
- 17.29 Atomic Value destroy() - PASS
- 17.30 Get the value after destroy() - FAIL
- Error getting atomic Value None, found: ['null', None, None, None, None, None, None]
- 17.31 Work Queue add() - PASS
- 17.32 Check the work queue stats - No Result
Case 6: Scale the number of nodes in the ONOS cluster - FAIL
- 6.1 Checking ONOS Logs for errors - PASS
- 6.2 Start new nodes - PASS
- 6.3 Checking if ONOS is up yet - PASS
- 6.4 Starting ONOS CLI sessions - PASS
- 6.5 Checking ONOS nodes - FAIL
- Failed to rerun for election
Case 8: Compare ONOS Topology view to Mininet topology - PASS
Compare topology objects between Mininet and ONOS
- 8.1 Comparing ONOS topology to MN topology - PASS
- 8.2 Hosts view is consistent across all ONOS nodes - PASS
- 8.3 Hosts information is correct - PASS
- 8.4 Host attachment points to the network - PASS
- 8.5 Clusters view is consistent across all ONOS nodes - PASS
- 8.6 There is only one SCC - PASS
- 8.7 Device information is correct - PASS
- 8.8 Links are correct - PASS
- 8.9 Hosts are correct - PASS
- 8.10 Checking ONOS nodes - PASS
Case 7: Running ONOS Constant State Tests - FAIL
- 7.1 Check that each switch has a master - PASS
- 7.2 Read device roles from ONOS - PASS
- 7.3 Check for consistency in roles from each controller - PASS
- 7.4 Get the intents and compare across all nodes - PASS
- 7.5 Check for consistency in Intents from each controller - PASS
- 7.6 Compare current intents with intents before the scaling - FAIL
- The Intents changed during scaling
- 7.7 Get the OF Table entries and compare to before component scaling - FAIL
- Changes were found in the flow tables
- 7.8 Leadership Election is still functional - FAIL
- Something went wrong with Leadership election
Case 4: Verify connectivity by sending traffic across Intents - FAIL
Ping across added host intents to check functionality and check the state of the intent
- 4.1 Check Intent state - FAIL
- Intents are not all in INSTALLED state
- 4.2 Ping across added host intents - FAIL
- Intents have not been installed correctly, pings failed.
- 4.3 Check leadership of topics - PASS
- 4.4 Wait a minute then ping again - FAIL
- Intents have not been installed correctly, pings failed.
Case 15: Check that Leadership Election is still functional - FAIL
- 15.1 Run for election on each node - PASS
- 15.2 Check that each node shows the same leader and candidates - PASS
- 15.3 Find current leader and withdraw - PASS
- 15.4 Check that a new node was elected leader - FAIL
- Something went wrong with Leadership election
- 15.5 Check that the new leader was the candidate of the old leader - FAIL
- Incorrect Candidate Elected
- 15.6 Run for election on old leader (just so everyone is in the hat) - PASS
- 15.7 Check that oldLeader is a candidate, and leader if only 1 node - PASS
Case 17: Check for basic functionality with distributed primitives - FAIL
Test the methods of the distributed primitives (counters and sets) through the CLI
- 17.1 Increment then get a default counter on each node - FAIL
- Error incrementing default counter
- 17.2 Get then Increment a default counter on each node - FAIL
- Error incrementing default counter
- 17.3 Counters we added have the correct values - FAIL
- Added counters are incorrect
- 17.4 Add -8 to then get a default counter on each node - FAIL
- Error incrementing default counter
- 17.5 Add 5 to then get a default counter on each node - FAIL
- Error incrementing default counter
- 17.6 Get then add 5 to a default counter on each node - FAIL
- Error incrementing default counter
- 17.7 Counters we added have the correct values - FAIL
- Added counters are incorrect
- 17.8 Distributed Set get - FAIL
- Set elements are incorrect
- 17.9 Distributed Set size - FAIL
- Set sizes are incorrect
- 17.10 Distributed Set add() - FAIL
- Set add was incorrect
- 17.11 Distributed Set addAll() - FAIL
- Set addAll was incorrect
- 17.12 Distributed Set contains() - FAIL
- Set contains failed
- 17.13 Distributed Set containsAll() - PASS
- 17.14 Distributed Set remove() - FAIL
- Set remove was incorrect
- 17.15 Distributed Set removeAll() - FAIL
- Set removeAll was incorrect
- 17.16 Distributed Set addAll() - FAIL
- Set addAll was incorrect
- 17.17 Distributed Set clear() - FAIL
- Set clear was incorrect
- 17.18 Distributed Set addAll() - FAIL
- Set addAll was incorrect
- 17.19 Distributed Set retain() - FAIL
- Set retain was incorrect
- 17.20 Partitioned Transactional maps put - PASS
- 17.21 Partitioned Transactional maps get - FAIL
- Partitioned Transactional Map values incorrect
- 17.22 Get the value of a new value - FAIL
- Error getting atomic Value None, found: ['null', None, None, None, None, None, None]
- 17.23 Atomic Value set() - FAIL
- Error setting atomic Value: [1, -1, -1, -1, -1, -1, -1]
- 17.24 Get the value after set() - FAIL
- Error getting atomic Value foo, found: ['foo', None, None, None, None, None, None]
- 17.25 Atomic Value compareAndSet() - PASS
- 17.26 Get the value after compareAndSet() - FAIL
- Error getting atomic Value bar, found: ['bar', None, None, None, None, None, None]
- 17.27 Atomic Value getAndSet() - PASS
- 17.28 Get the value after getAndSet() - FAIL
- Error getting atomic Value: expected baz, found: ['baz', None, None, None, None, None, None]
- 17.29 Atomic Value destroy() - PASS
- 17.30 Get the value after destroy() - FAIL
- Error getting atomic Value None, found: ['null', None, None, None, None, None, None]
- 17.31 Work Queue add() - PASS
- 17.32 Check the work queue stats - No Result
Case 6: Scale the number of nodes in the ONOS cluster - FAIL
- 6.1 Checking ONOS Logs for errors - PASS
- 6.2 Start new nodes - PASS
- 6.3 Checking if ONOS is up yet - PASS
- 6.4 Starting ONOS CLI sessions - PASS
- 6.5 Checking ONOS nodes - FAIL
- Failed to rerun for election
...