...
commit a9017217068ec45ce29b467f9fba349112895375
Author: Andrea Campanella [andrea@opennetworking.org]
AuthorDate: Mon Oct 23 2017
CommitDate: Wed Nov 8 2017
Fixing Netconf Subscription session reopen
commit c1f29a4fe850cf419daf8f723014bb11e2d92b89 (HEAD, origin/onos-1.10, onos-1.10)
Author: Sean Condon [sean.condon@microsemi.com]
AuthorDate: Thu Nov 2 2017
CommitDate: Fri Nov 17 2017
Additions to the L2 monitoring for CFM and SOAM
HTML |
---|
<iframe src="https://onos-jenkins.onlab.us/job/HAswapNodes/plot/Plot-HA/getPlot?index=2&width=500&height=300" noborder="0" width="500" height="300" scrolling="yes" seamless="seamless"></iframe> |
...
- 17.1 Increment then get a default counter on each node - PASS
- 17.2 Get then Increment a default counter on each node - PASS
- 17.3 Counters we added have the correct values - PASS
- 17.4 Add -8 to then get a default counter on each node - PASS
- 17.5 Add 5 to then get a default counter on each node - PASS
- 17.6 Get then add 5 to a default counter on each node - PASS
- 17.7 Counters we added have the correct values - PASS
- 17.8 Distributed Set get - PASS
- 17.9 Distributed Set size - PASS
- 17.10 Distributed Set add() - PASS
- 17.11 Distributed Set addAll() - PASS
- 17.12 Distributed Set contains() - PASS
- 17.13 Distributed Set containsAll() - PASS
- 17.14 Distributed Set remove() - PASS
- 17.15 Distributed Set removeAll() - PASS
- 17.16 Distributed Set addAll() - PASS
- 17.17 Distributed Set clear() - PASS
- 17.18 Distributed Set addAll() - PASS
- 17.19 Distributed Set retain() - PASS
- 17.20 Partitioned Transactional maps put - PASS
- 17.21 Partitioned Transactional maps get - PASS
- 17.22 Get the value of a new value - PASS
- 17.23 Atomic Value set() - PASS
- 17.24 Get the value after set() - PASS
- 17.25 Atomic Value compareAndSet() - PASS
- 17.26 Get the value after compareAndSet() - PASS
- 17.27 Atomic Value getAndSet() - PASS
- 17.28 Get the value after getAndSet() - PASS
- 17.29 Atomic Value destroy() - PASS
- 17.30 Get the value after destroy() - PASS
- 17.31 Work Queue add() - PASS
- 17.32 Check the work queue stats - PASS
- 17.33 Work Queue addMultiple() - PASS
- 17.34 Check the work queue stats - PASS
- 17.35 Work Queue takeAndComplete() 1 - PASS
- 17.36 Check the work queue stats - PASS
- 17.37 Work Queue takeAndComplete() 2 - PASS
- 17.38 Check the work queue stats - PASS
- 17.39 Work Queue destroy() - PASS
- 17.40 Check the work queue stats - PASS
Case 6: Swap some of the ONOS nodes - FAIL
- 6.1 Checking ONOS Logs for errors - No Result
- 6.2 Generate new metadata file - PASS
- 6.3 Start new nodes - PASS
- 6.4 Checking if ONOS is up yet - PASS
- 6.5 Starting ONOS CLI sessions - PASS
- 6.6 Checking ONOS nodes - FAIL
- Failed to rerun for election
- 6.7 Reapplying cell variable to environment - PASS
Case 8: Compare ONOS Topology view to Mininet topology - FAIL
Compare topology objects between Mininet and ONOS
- 8.1 Comparing ONOS topology to MN topology - PASS
- 8.2 Hosts view is consistent across all ONOS nodes - PASS
- 8.3 Hosts information is correct - PASS
- 8.4 Host attachment points to the network - PASS
- 8.5 Clusters view is consistent across all ONOS nodes - PASS
- 8.6 There is only one SCC - PASS
- 8.7 Device information is correct - PASS
- 8.8 Links are correct - PASS
- 8.9 Hosts are correct - PASS
- 8.10 Checking ONOS nodes - FAIL
- Nodes check NOT successful
Case 3: Adding host Intents - FAIL
Discover hosts by using pingall, then assign predetermined host-to-host intents. After installation, check that the intents are distributed to all nodes and their state is INSTALLED
- 3.1 Install reactive forwarding app - PASS
- 3.2 Check app ids - FAIL
- Something is wrong with app Ids
- 3.3 Discovering Hosts( Via pingall for now ) - FAIL
- Reactive Pingall failed, one or more ping pairs failed
- 3.4 Uninstall reactive forwarding app - PASS
- 3.5 Check app ids - PASS
- 3.6 Add host intents via cli - PASS
- 3.7 Intent Anti-Entropy dispersion - FAIL
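Case 3 installs host-to-host intents and then expects every controller to report each intent as INSTALLED, the same cross-node state check that fails in the later Case 4 runs. A minimal sketch of that kind of check, with hypothetical data shapes (the real logic lives in the TestON HAswapNodes test):

```python
def all_intents_installed(intents_per_node):
    """Return True only if every node reports every intent as INSTALLED.

    intents_per_node maps a controller name to a list of
    (intent_id, state) pairs parsed from that node's `intents` output.
    (Hypothetical data shape, for illustration only.)
    """
    return all(
        state == "INSTALLED"
        for intents in intents_per_node.values()
        for _intent_id, state in intents
    )

# One node still reports an intent as FAILED, so the whole check fails.
consistent = all_intents_installed({
    "ONOS1": [("0x1", "INSTALLED"), ("0x2", "INSTALLED")],
    "ONOS2": [("0x1", "INSTALLED"), ("0x2", "FAILED")],
})
```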
Case 7: Running ONOS Constant State Tests - FAIL
- 7.1 Check that each switch has a master - PASS
- 7.2 Read device roles from ONOS - PASS
- 7.3 Check for consistency in roles from each controller - PASS
- 7.4 Get the intents and compare across all nodes - PASS
- 7.5 Check for consistency in Intents from each controller - PASS
- 7.6 Compare current intents with intents before the scaling - FAIL
- The Intents changed during scaling
- 7.7 Get the OF Table entries and compare to before component scaling - FAIL
- Changes were found in the flow tables
- 7.8 Leadership Election is still functional - FAIL
- Something went wrong with Leadership election
Case 4: Verify connectivity by sending traffic across Intents - PASS
Ping across added host intents to check functionality and check the state of the intent
- 4.1 Check Intent state - PASS
- 4.2 Ping across added host intents - PASS
- 4.3 Check leadership of topics - PASS
- 4.4 Wait a minute then ping again - PASS
Case 15: Check that Leadership Election is still functional - PASS
- 15.1 Run for election on each node - PASS
- 15.2 Check that each node shows the same leader and candidates - PASS
- 15.3 Find current leader and withdraw - PASS
- 15.4 Check that a new node was elected leader - PASS
- 15.5 Check that that new leader was the candidate of old leader - PASS
- 15.6 Run for election on old leader( just so everyone is in the hat ) - PASS
- 15.7 Check that oldLeader is a candidate, and leader if only 1 node - PASS
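Steps 15.2 and 15.5 amount to comparing each node's (leader, candidates) view and predicting the successor after the leader withdraws. A sketch of both checks, with hypothetical node names and data shapes:

```python
def election_views_agree(views):
    """True if every node reports the same leader and candidate list.

    views maps node name -> (leader, [candidates]); hypothetical shape
    mirroring step 15.2's agreement check.
    """
    reference = next(iter(views.values()))
    return all(view == reference for view in views.values())

def expected_new_leader(old_view):
    """After the old leader withdraws (step 15.3), the new leader should
    be the next candidate in the old leader's view (step 15.5)."""
    leader, candidates = old_view
    remaining = [c for c in candidates if c != leader]
    return remaining[0] if remaining else None
```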
Case 17: Check for basic functionality with distributed primitives - FAIL
Test the methods of the distributed primitives (counters and sets) through the CLI
- 17.1 Increment then get a default counter on each node - FAIL
- Error incrementing default counter
- 17.2 Get then Increment a default counter on each node - FAIL
- Error incrementing default counter
- 17.3 Counters we added have the correct values - FAIL
- Added counters are incorrect
- 17.4 Add -8 to then get a default counter on each node - FAIL
- Error incrementing default counter
- 17.5 Add 5 to then get a default counter on each node - FAIL
- Error incrementing default counter
- 17.6 Get then add 5 to a default counter on each node - FAIL
- Error incrementing default counter
- 17.7 Counters we added have the correct values - FAIL
- Added counters are incorrect
- 17.8 Distributed Set get - FAIL
- Set elements are incorrect
- 17.9 Distributed Set size - FAIL
- Set sizes are incorrect
- 17.10 Distributed Set add() - FAIL
- Set add was incorrect
- 17.11 Distributed Set addAll() - FAIL
- Set addAll was incorrect
- 17.12 Distributed Set contains() - FAIL
- Set contains failed
- 17.13 Distributed Set containsAll() - PASS
- 17.14 Distributed Set remove() - FAIL
- Set remove was incorrect
- 17.15 Distributed Set removeAll() - FAIL
- Set removeAll was incorrect
- 17.16 Distributed Set addAll() - FAIL
- Set addAll was incorrect
- 17.17 Distributed Set clear() - FAIL
- Set clear was incorrect
- 17.18 Distributed Set addAll() - FAIL
- Set addAll was incorrect
- 17.19 Distributed Set retain() - FAIL
- Set retain was incorrect
- 17.20 Partitioned Transactional maps put - PASS
- 17.21 Partitioned Transactional maps get - FAIL
- Partitioned Transactional Map values incorrect
- 17.22 Get the value of a new value - FAIL
- Error getting atomic Value None, found: ['null', 'null', 'null', None, None]
- 17.23 Atomic Value set() - FAIL
- Error setting atomic Value [1, 1, 1, -1, -1]
- 17.24 Get the value after set() - FAIL
- Error getting atomic Value foo, found: ['foo', 'foo', 'foo', None, None]
- 17.25 Atomic Value compareAndSet() - PASS
- 17.26 Get the value after compareAndSet() - FAIL
- Error getting atomic Value bar, found: ['bar', 'bar', 'bar', None, None]
- 17.27 Atomic Value getAndSet() - PASS
- 17.28 Get the value after getAndSet() - FAIL
- Error getting atomic Value: expected baz, found: ['baz', 'baz', 'baz', None, None]
- 17.29 Atomic Value destroy() - PASS
- 17.30 Get the value after destroy() - FAIL
- Error getting atomic Value None, found: ['null', 'null', 'null', None, None]
- 17.31 Work Queue add() - PASS
- 17.32 Check the work queue stats - No Result
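The atomic-value failures above all have the same shape: three nodes return the value while two nodes return nothing (None), e.g. found: ['foo', 'foo', 'foo', None, None]. The comparison behind those messages can be sketched as follows (hypothetical helper; per the log, the CLI prints the string 'null' for an unset value):

```python
def atomic_value_consistent(expected, found):
    """Check that every node returned the expected AtomicValue contents.

    `found` holds one entry per node; None means that node's CLI gave no
    answer. An unset value is printed as the string 'null', so an
    expected value of None matches 'null'. (Hypothetical helper that
    mirrors the log messages above, not the actual TestON code.)
    """
    want = "null" if expected is None else str(expected)
    return all(value == want for value in found)

# Two unresponsive nodes make the check fail, as in step 17.24.
ok = atomic_value_consistent("foo", ["foo", "foo", "foo", None, None])
```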
Case 9: Turn off a link to ensure that Link Discovery is working properly - PASS
- 9.1 Kill Link between s3 and s28 - PASS
Case 8: Compare ONOS Topology view to Mininet topology - FAIL
Compare topology objects between Mininet and ONOS
- 8.1 Comparing ONOS topology to MN topology - PASS
- 8.2 Hosts view is consistent across all ONOS nodes - PASS
- 8.3 Hosts information is correct - PASS
- 8.4 Host attachment points to the network - PASS
- 8.5 Clusters view is consistent across all ONOS nodes - PASS
- 8.6 There is only one SCC - PASS
- 8.7 Device information is correct - PASS
- 8.8 Links are correct - PASS
- 8.9 Hosts are correct - PASS
- 8.10 Checking ONOS nodes - FAIL
- Nodes check NOT successful
Case 4: Verify connectivity by sending traffic across Intents - FAIL
Ping across added host intents to check functionality and check the state of the intent
- 4.1 Check Intent state - FAIL
- Intents are not all in INSTALLED state
- 4.2 Ping across added host intents - FAIL
- Intents have not been installed correctly, pings failed.
- 4.3 Check leadership of topics - PASS
- 4.4 Wait a minute then ping again - FAIL
- Intents have not been installed correctly, pings failed.
Case 10: Restore a link to ensure that Link Discovery is working properly - PASS
- 10.1 Bring link between s3 and s28 back up - PASS
Case 8: Compare ONOS Topology view to Mininet topology - FAIL
Compare topology objects between Mininet and ONOS
- 8.1 Comparing ONOS topology to MN topology - PASS
- 8.2 Hosts view is consistent across all ONOS nodes - PASS
- 8.3 Hosts information is correct - PASS
- 8.4 Host attachment points to the network - PASS
- 8.5 Clusters view is consistent across all ONOS nodes - PASS
- 8.6 There is only one SCC - PASS
- 8.7 Device information is correct - PASS
- 8.8 Links are correct - PASS
- 8.9 Hosts are correct - PASS
- 8.10 Checking ONOS nodes - FAIL
- Nodes check NOT successful
Case 4: Verify connectivity by sending traffic across Intents - FAIL
Ping across added host intents to check functionality and check the state of the intent
- 4.1 Check Intent state - PASS
- 4.2 Ping across added host intents - FAIL
- Intents have not been installed correctly, pings failed.
- 4.3 Check leadership of topics - PASS
- 4.4 Wait a minute then ping again - FAIL
- Intents have not been installed correctly, pings failed.
Case 11: Killing a switch to ensure it is discovered correctly - PASS
- 11.1 Kill s5 - PASS
Case 8: Compare ONOS Topology view to Mininet topology - FAIL
Compare topology objects between Mininet and ONOS
- 8.1 Comparing ONOS topology to MN topology - PASS
- 8.2 Hosts view is consistent across all ONOS nodes - PASS
- 8.3 Hosts information is correct - PASS
- 8.4 Host attachment points to the network - PASS
- 8.5 Clusters view is consistent across all ONOS nodes - PASS
- 8.6 There is only one SCC - PASS
- 8.7 Device information is correct - PASS
- 8.8 Links are correct - PASS
- 8.9 Hosts are correct - PASS
- 8.10 Checking ONOS nodes - FAIL
- Nodes check NOT successful
Case 4: Verify connectivity by sending traffic across Intents - FAIL
Ping across added host intents to check functionality and check the state of the intent
- 4.1 Check Intent state - FAIL
- Intents are not all in INSTALLED state
- 4.2 Ping across added host intents - FAIL
- Intents have not been installed correctly, pings failed.
- 4.3 Check leadership of topics - PASS
- 4.4 Wait a minute then ping again - FAIL
- Intents have not been installed correctly, pings failed.
Case 12: Adding a switch to ensure it is discovered correctly - PASS
- 12.1 Add back s5 - PASS
Case 8: Compare ONOS Topology view to Mininet topology - FAIL
Compare topology objects between Mininet and ONOS
- 8.1 Comparing ONOS topology to MN topology - PASS
- 8.2 Hosts view is consistent across all ONOS nodes - PASS
- 8.3 Hosts information is correct - PASS
- 8.4 Host attachment points to the network - PASS
- 8.5 Clusters view is consistent across all ONOS nodes - PASS
- 8.6 There is only one SCC - PASS
- 8.7 Device information is correct - PASS
- 8.8 Links are correct - PASS
- 8.9 Hosts are correct - PASS
- 8.10 Checking ONOS nodes - FAIL
- Nodes check NOT successful
Case 4: Verify connectivity by sending traffic across Intents - FAIL
Ping across added host intents to check functionality and check the state of the intent
- 4.1 Check Intent state - FAIL
- Intents are not all in INSTALLED state
- 4.2 Ping across added host intents - FAIL
- Intents have not been installed correctly, pings failed.
- 4.3 Check leadership of topics - PASS
- 4.4 Wait a minute then ping again - FAIL
- Intents have not been installed correctly, pings failed.
Case 13: Test Cleanup - PASS
- 13.1 Killing tcpdumps - No Result
- 13.2 Stopping Mininet - PASS
- 13.3 Checking ONOS Logs for errors - No Result
- 13.4 Stopping webserver - PASS