...
| HAfullNetPartition trend (last 20 builds) |
|---|
| <img src="https://jenkins.onosproject.org/view/QA/job/postjob-VM/lastSuccessfulBuild/artifact/HAfullNetPartition_master_20-builds_graph.jpg" alt="HAfullNetPartition" style="width:525px;height:350px;border:0"> |
commit dd54d5657f40fbd9b2835f9776bedcaf51ff89e9
Author: Andrea Campanella [andrea@opennetworking.org]
AuthorDate: Wed Oct 7 16:44:30 2020 +0200
Commit: Andrea Campanella [andrea@opennetworking.org]
CommitDate: Fri Oct 9 07:55:07 2020 +0000
[ONOS-8121][VOL-3259] Fixing null filter for felix fileinstall
(cherry picked from commit cf3284174edba18f921995fe01fb0e5f802caf7e)
--
commit a5212170cc0944e5d8af5e4d6d801217fec35431 (HEAD, origin/master, origin/HEAD, master)
Author: Jian Li [pyguni@gmail.com]
AuthorDate: Mon Oct 12 02:07:21 2020 +0900
Commit: Jian Li [pyguni@gmail.com]
CommitDate: Sun Oct 11 18:44:37 2020 +0000
Support SNATing POD traffic to internet at k8s passthrough mode
(cherry picked from commit 2622d5a002e7eba4c00271885845be787866358a)
Case 1: Constructing test variables and building ONOS package - PASS
...
Case 8: Compare ONOS Topology view to Mininet topology - FAIL
...
- 8.1 Comparing ONOS topology to MN topology - FAIL
- ONOS topology doesn't match Mininet
- 8.2 Hosts view is consistent across all ONOS nodes - PASS
- 8.3 Hosts information is correct - PASS
- 8.4 Host attachment points to the network - PASS
- 8.5 Clusters view is consistent across all ONOS nodes - FAIL
- ONOS nodes have different views of clusters
- 8.6 There is only one SCC - FAIL
- ONOS shows 12 SCCs
- 8.7 Device information is correct - FAIL
- Device information is incorrect
- 8.8 Links are correct - FAIL
- Links are incorrect
- 8.9 Hosts are correct - PASS
- 8.10 Checking ONOS nodes - PASS
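
Case 8's consistency checks reduce to asking each controller for its topology summary and confirming that the answers match and report a single SCC. Below is a minimal sketch of that check against the ONOS REST API (`/onos/v1/topology` reports device, link, and SCC counts); the node IPs and the default onos/rocks credentials are assumptions, and the real TestON suite drives the ONOS CLI rather than REST.

```python
import requests

ONOS_NODES = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical cluster IPs
AUTH = ("onos", "rocks")  # ONOS default REST credentials (assumption)

def topology_summary(ip):
    # GET /onos/v1/topology returns device, link, and SCC ("clusters") counts
    r = requests.get(f"http://{ip}:8181/onos/v1/topology", auth=AUTH, timeout=5)
    r.raise_for_status()
    body = r.json()
    return (body["devices"], body["links"], body["clusters"])

summaries = {ip: topology_summary(ip) for ip in ONOS_NODES}

# 8.5: all nodes must report the same topology summary
assert len(set(summaries.values())) == 1, f"Nodes disagree: {summaries}"

# 8.6: a healthy topology forms exactly one strongly connected component
clusters = next(iter(summaries.values()))[2]
assert clusters == 1, f"Expected 1 SCC, ONOS reports {clusters}"
```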
Case 7: Running ONOS Constant State Tests - FAIL
- 7.1 Check that each switch has a master - FAIL
- Some devices don't have a master assigned
- 7.2 Read device roles from ONOS - FAIL
- Error in reading roles from ONOS
- 7.3 Check for consistency in roles from each controller - FAIL
- ONOS nodes have different views of switch roles
- 7.4 Get the intents from each controller - PASS
- 7.5 Check for consistency in Intents from each controller - FAIL
- ONOS nodes have different views of intents
- 7.6 Compare current intents with intents before the failure - FAIL
- The Intents changed during failure
- 7.7 Get the OF Table entries and compare to before component failure - FAIL
- Changes were found in the flow tables
- 7.8 Leadership Election is still functional - FAIL
- Something went wrong with Leadership election
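
Case 7's mastership checks boil down to one invariant: every switch has exactly one master, and all controllers agree on it. The sketch below makes the same assumptions as the one above (hypothetical node IPs, default credentials) and reads the per-device role field, which, to my reading of the ONOS device codec, reflects the queried node's own role for that device.

```python
from collections import Counter

import requests

ONOS_NODES = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical, as above
AUTH = ("onos", "rocks")

def device_roles(ip):
    # GET /onos/v1/devices lists each device with this node's role for it
    r = requests.get(f"http://{ip}:8181/onos/v1/devices", auth=AUTH, timeout=5)
    r.raise_for_status()
    return {d["id"]: d.get("role") for d in r.json()["devices"]}

devices, masters = set(), Counter()
for ip in ONOS_NODES:
    for dev, role in device_roles(ip).items():
        devices.add(dev)
        if role == "MASTER":
            masters[dev] += 1

# 7.1/7.3: every switch needs exactly one master across the cluster
for dev in sorted(devices):
    assert masters[dev] == 1, f"{dev} has {masters[dev]} master(s)"
```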
Case 4: Verify connectivity by sending traffic across Intents - FAIL
Ping across added host intents to check functionality and verify the state of the intents (a sketch of the state check follows this case)
- 4.1 Check Intent state - FAIL
- Intents are not all in INSTALLED state
- 4.2 Ping across added host intents - FAIL
- Intents have not been installed correctly; pings failed.
- 4.3 Check leadership of topics - PASS
- 4.4 Wait a minute then ping again - FAIL
- Intents have not been installed correctly; pings failed.
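
Case 4 trusts the data-plane pings only after every intent reports INSTALLED, and 4.4 simply repeats the check after a grace period. Here is a REST-based sketch of that state check (the node IP and credentials are assumptions, and the Mininet-side pings are omitted):

```python
import time

import requests

ONOS = "10.0.0.1"         # hypothetical node; any cluster member can answer
AUTH = ("onos", "rocks")  # ONOS default REST credentials (assumption)

def not_installed():
    r = requests.get(f"http://{ONOS}:8181/onos/v1/intents", auth=AUTH, timeout=5)
    r.raise_for_status()
    # Anything not INSTALLED (e.g. FAILED or WITHDRAWN) fails the check
    return [i for i in r.json()["intents"] if i["state"] != "INSTALLED"]

bad = not_installed()
if bad:                   # mirror 4.4: wait a minute, then check once more
    time.sleep(60)
    bad = not_installed()
assert not bad, f"{len(bad)} intents are not in INSTALLED state"
```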
Case 15: Check that Leadership Election is still functional - FAIL
- 15.1 Run for election on each node - FAIL
- At least one node failed to run for leadership
- Skipping the rest of this case.
Case 17: Check for basic functionality with distributed primitives - FAIL
Test the methods of the distributed primitives (counters and sets) through the CLI (a toy model of the counter semantics follows this case)
- 17.1 Increment then get a default counter on each node - PASS
- 17.2 Get then Increment a default counter on each node - PASS
- 17.3 Counters we added have the correct values - PASS
- 17.4 Add -8 to then get a default counter on each node - PASS
- 17.5 Add 5 to then get a default counter on each node - PASS
- 17.6 Get then add 5 to a default counter on each node - PASS
- 17.7 Counters we added have the correct values - PASS
- 17.8 Distributed Set get - FAIL
- Set elements are incorrect
- 17.9 Distributed Set size - FAIL
- Set sizes are incorrect
- 17.10 Distributed Set add() - PASS
- 17.11 Distributed Set addAll() - PASS
- 17.12 Distributed Set contains() - PASS
- 17.13 Distributed Set containsAll() - PASS
- 17.14 Distributed Set remove() - PASS
- 17.15 Distributed Set removeAll() - PASS
- 17.16 Distributed Set addAll() - PASS
- 17.17 Distributed Set clear() - PASS
- 17.18 Distributed Set addAll() - PASS
- 17.19 Distributed Set retain() - FAIL
- Set retain was incorrect
- 17.20 Partitioned Transactional maps put - PASS
- 17.21 Partitioned Transactional maps get - FAIL
- Partitioned Transactional Map values incorrect
- 17.22 Get the value of a new value - FAIL
- Error getting atomic Value None, found: ['null', 'null', None]
- 17.23 Atomic Value set() - FAIL
- Error setting atomic Value: [1, 1, -1]
- 17.24 Get the value after set() - FAIL
- Error getting atomic Value foo, found: ['foo', 'foo', None]
- 17.25 Atomic Value compareAndSet() - FAIL
- Error setting atomic Value: -1
- 17.26 Get the value after compareAndSet() - FAIL
- Error getting atomic Value bar, found: ['bar', 'bar', None]
- 17.27 Atomic Value getAndSet() - PASS
- 17.28 Get the value after getAndSet() - FAIL
- Error getting atomic Value: expected baz, found: ['baz', 'baz', None]
- 17.29 Atomic Value destroy() - PASS
- 17.30 Get the value after destroy() - FAIL
- Error getting atomic Value None, found: ['null', 'null', None]
- 17.31 Work Queue add() - FAIL
- Error adding to Work Queue
- 17.32 Check the work queue stats - No Result
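
The counter and value steps in Case 17 exercise the standard atomic-primitive contracts: 17.1 is an increment-then-get (returns the new value), 17.2 a get-then-increment (returns the old one), and 17.25 a compareAndSet that writes only when the current value matches the expected one. The toy single-process model below illustrates just those contracts; it is not ONOS code, which implements these as distributed primitives.

```python
class ToyAtomic:
    """Single-process stand-in for a distributed counter/value (not ONOS code)."""

    def __init__(self, value=0):
        self.value = value

    def increment_and_get(self):   # 17.1: increment first, return the new value
        self.value += 1
        return self.value

    def get_and_increment(self):   # 17.2: return the old value, then increment
        old = self.value
        self.value += 1
        return old

    def compare_and_set(self, expected, new):  # 17.25: write only if unchanged
        if self.value == expected:
            self.value = new
            return True
        return False

c = ToyAtomic()
assert c.increment_and_get() == 1    # counter is now 1
assert c.get_and_increment() == 1    # returns the old value; counter is now 2
assert c.compare_and_set(2, 10)      # expectation matches, write succeeds
assert not c.compare_and_set(5, 0)   # stale expectation, write is rejected
```

Note the failure pattern in the results above: two nodes return the expected value while one returns None, which suggests the primitives themselves behave correctly but one node lost state during the partition.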
Case 62: Healing Partition - FAIL
...