HAswapNodes at 05 Jun 2016 05:09:12
commit 4fb3142fd5bc878e1ec28fdd4e4f393bda873ad1 (HEAD, origin/onos-1.6, onos-1.6)
Author: Madan Jampani <madan@onlab.us>
AuthorDate: Thu Jun 2 14:08:16 2016 -0700
Commit: Gerrit Code Review <gerrit@onlab.us>
CommitDate: Sat Jun 4 20:27:02 2016 +0000
Fix hashing logic for storage partitions to get good distribution
--
(cherry picked from commit 2843ec8914fa3900f634809ec5f1226df70f1576)
[Plot-HA graph: https://onos-jenkins.onlab.us/job/HAswapNodes/plot/Plot-HA/getPlot?index=0]
Case 1: Setting up test environment - PASS
Set up the test environment, including installing ONOS and starting the Mininet and ONOS CLI sessions. (A sample cell file and the key setup commands follow this case.)
- 1.1 Create cell file - No Result

- 1.2 Applying cell variable to environment - PASS

- 1.3 Verify connectivity to cell - PASS

- 1.4 Setup server for cluster metadata file - PASS

- 1.5 Generate initial metadata file - PASS

- 1.6 Starting Mininet - PASS

- 1.7 Git checkout and pull master - No Result

- 1.8 Using mvn clean install - PASS

- 1.9 Copying backup config files - PASS

- 1.10 Creating ONOS package - PASS

- 1.11 Installing ONOS package - PASS

- 1.12 Checking if ONOS is up yet - PASS

- 1.13 Checking ONOS nodes - PASS

- 1.14 Activate apps defined in the params file - No Result

- 1.15 Set ONOS configurations - PASS

- 1.16 App Ids check - PASS
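
For reference, step 1.1 generates an ONOS cell file that the later setup steps consume. A minimal sketch, assuming a hypothetical three-node cluster and test station (all names and addresses below are illustrative, not taken from this run):

    # ha_cell -- illustrative cell file; every address here is an assumption
    export ONOS_NIC="10.0.0.*"           # network the ONOS nodes bind to
    export OC1="10.0.0.11"               # ONOS node 1
    export OC2="10.0.0.12"               # ONOS node 2
    export OC3="10.0.0.13"               # ONOS node 3
    export OCN="10.0.0.5"                # Mininet / test station
    export ONOS_APPS="drivers,openflow"  # apps preloaded at startup

Steps 1.2, 1.3, 1.10, and 1.11 then correspond to the standard ONOS tool chain:

    $ cell ha_cell           # 1.2: apply the cell variables to the environment
    $ onos-verify-cell       # 1.3: verify connectivity to every node in the cell
    $ onos-package           # 1.10: build the ONOS package
    $ onos-install -f $OC1   # 1.11: (re)install on each node in turn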

Case 2: Assigning devices to controllers - PASS
Assign switches to ONOS using 'ovs-vsctl' and check that an ONOS node becomes the master of each device. (An illustrative command sequence follows this case.)
- 2.1 Assign switches to controllers - PASS
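
Step 2.1 is driven by plain Open vSwitch commands. A sketch, reusing the hypothetical controller addresses above and assuming the default ONOS OpenFlow port 6653:

    $ sudo ovs-vsctl set-controller s1 tcp:10.0.0.11:6653 tcp:10.0.0.12:6653 tcp:10.0.0.13:6653
    $ sudo ovs-vsctl get-controller s1   # confirm the controller list took effect
    onos> roles                          # exactly one node should report MASTER per device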

Case 8: Compare ONOS Topology view to Mininet topology - FAIL
Compare topology objects between Mininet and ONOS. (CLI commands to reproduce these checks follow this case.)
- 8.1 Comparing ONOS topology to MN topology - FAIL

- The ONOS topology does not match the Mininet topology
- 8.2 Hosts view is consistent across all ONOS nodes - PASS

- 8.3 Hosts information is correct - PASS

- 8.4 Host attachment points to the network - PASS

- 8.5 Clusters view is consistent across all ONOS nodes - FAIL

- ONOS nodes have different views of clusters
- 8.6 There is only one SCC - FAIL

- 8.7 Device information is correct - FAIL

- Device information is incorrect
- 8.8 Links are correct - FAIL

- 8.9 Hosts are correct - PASS

- 8.10 Checking ONOS nodes - FAIL

- Nodes check NOT successful
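
The checks in 8.5-8.10 can be reproduced by hand from the ONOS CLI. 'clusters' lists the topology's strongly connected components; on a healthy, fully linked topology it should return exactly one:

    onos> summary    # device/link/host and SCC counts at a glance
    onos> clusters   # 8.5/8.6: more than one SCC means links are missing or partitioned
    onos> devices    # 8.7: per-device details to compare against Mininet
    onos> links      # 8.8: link list to compare against Mininet's 'net' output
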
Case 21: Assigning Controller roles for switches - FAIL
Check that ONOS is connected to each device, then manually assign mastership to specific ONOS nodes using 'device-role'. (An example follows this case.)
- 21.1 Assign mastership of switches to specific controllers - PASS

- 21.2 Check mastership was correctly assigned - FAIL

- Switches were not successfully reassigned
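
Step 21.1 uses the 'device-role' command named in the description. A sketch with a hypothetical device ID and node ID:

    onos> device-role of:0000000000000001 10.0.0.11 master   # 21.1: assign mastership
    onos> roles                                              # 21.2: verify the reassignment stuck
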
Case 3: Adding host Intents - FAIL
Discover hosts using pingall, then add predetermined host-to-host intents. After installation, check that each intent is distributed to all nodes and its state is INSTALLED. (An example CLI sequence follows this case.)
- 3.1 Install reactive forwarding app - PASS

- 3.2 Check app ids - PASS

- 3.3 Discovering Hosts (via pingall for now) - FAIL

- Reactive pingall failed; one or more ping pairs failed
- 3.4 Uninstall reactive forwarding app - PASS

- 3.5 Check app ids - PASS

- 3.6 Add host intents via cli - FAIL

- Error looking up host ids
- 3.7 Intent Anti-Entropy dispersion - FAIL
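
This case maps to a short CLI sequence; the host IDs below use the hypothetical MAC/-1 form that 'hosts' prints once discovery succeeds:

    onos> app activate org.onosproject.fwd      # 3.1: reactive forwarding for discovery
    mininet> pingall                            # 3.3: generate traffic so hosts are learned
    onos> app deactivate org.onosproject.fwd    # 3.4: remove it before the intent tests
    onos> hosts                                 # 3.6: look up the host IDs
    onos> add-host-intent 00:00:00:00:00:01/-1 00:00:00:00:00:02/-1
    onos> intents                               # each intent should reach INSTALLED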

Case 8: Compare ONOS Topology view to Mininet topology - FAIL
Compare topology objects between Mininet and ONOS
- 8.1 Comparing ONOS topology to MN topology - FAIL

- The ONOS topology does not match the Mininet topology
- 8.2 Hosts view is consistent across all ONOS nodes - PASS

- 8.3 Hosts information is correct - PASS

- 8.4 Host attachment points to the network - PASS

- 8.5 Clusters view is consistent across all ONOS nodes - PASS

- 8.6 There is only one SCC - FAIL

- 8.7 Device information is correct - PASS

- 8.8 Links are correct - FAIL

- 8.9 Hosts are correct - PASS

- 8.10 Checking ONOS nodes - FAIL

- Nodes check NOT successful
Case 4: Verify connectivity by sending traffic across Intents - FAIL
Ping across the added host intents to check functionality and verify the state of the intents. (Example commands follow this case.)
- 4.1 Check Intent state - FAIL

- Intents are not all in INSTALLED state
- 4.2 Ping across added host intents - FAIL

- Intents have not been installed correctly; pings failed.
- 4.3 Check leadership of topics - PASS

- 4.4 Wait a minute then ping again - FAIL

- Intents have not been installed correctly; pings failed.
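
Steps 4.1 and 4.2 amount to checking intent state and then pinging the endpoint pairs, e.g.:

    onos> intents | grep INSTALLED   # 4.1: the Karaf shell supports grep-style piping
    mininet> h1 ping -c 1 h2         # 4.2: one endpoint pair of an added host intent
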
Case 5: Setting up and gathering data for current state - FAIL
- 5.1 Check that each switch has a master - PASS

- 5.2 Get the Mastership of each switch from each controller - PASS

- 5.3 Check for consistency in roles from each controller - PASS

- 5.4 Get the intents from each controller - PASS

- 5.5 Check for consistency in Intents from each controller - FAIL

- ONOS nodes have different views of intents
- 5.6 Get the flows from each controller - PASS

- 5.7 Check for consistency in Flows from each controller - PASS

- 5.8 Get the OF Table entries - No Result

- 5.9 Start continuous pings - No Result

- 5.10 Collecting topology information from ONOS - No Result

- 5.11 Host view is consistent across ONOS nodes - PASS

- 5.12 Each host has an IP address - PASS

- 5.13 Cluster view is consistent across ONOS nodes - FAIL

- ONOS nodes have different views of clusters
- 5.14 Cluster view is correct across ONOS nodes - FAIL

- 5.15 Comparing ONOS topology to MN - PASS

- 5.16 Device information is correct - PASS

- 5.17 Links are correct - FAIL

- 5.18 Hosts are correct - PASS
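
Steps 5.6 and 5.8 gather the same forwarding state from both sides of the control channel. A sketch, assuming OpenFlow 1.3 switches:

    onos> flows                                    # 5.6: flows as ONOS believes them to be
    $ sudo ovs-ofctl -O OpenFlow13 dump-flows s1   # 5.8: OF table entries on the switch itself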

Case 14: Start Leadership Election app - PASS
- 14.1 Install leadership election app - PASS

- 14.2 Run for election on each node - PASS

- 14.3 First node was elected leader - PASS
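
The leadership election app exposes test commands on the ONOS CLI; the names below match the election test app of this era but should be treated as illustrative:

    onos> app activate org.onosproject.election   # 14.1: install the election test app
    onos> election-test-run                       # 14.2: enter the election on this node
    onos> election-test-leader                    # 14.3: report the currently elected leader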

Case 16: Install Primitives app - PASS
- 16.1 Install Primitives app - PASS

Case 17: Check for basic functionality with distributed primitives - PASS
Test the methods of the distributed primitives (counters, sets, and transactional maps) through the CLI. (Illustrative commands follow this case.)
- 17.1 Increment then get a default counter on each node - PASS

- 17.2 Get then Increment a default counter on each node - PASS

- 17.3 Counters we added have the correct values - PASS

- 17.4 Add -8 to then get a default counter on each node - PASS

- 17.5 Add 5 to then get a default counter on each node - PASS

- 17.6 Get then add 5 to a default counter on each node - PASS

- 17.7 Counters we added have the correct values - PASS

- 17.8 Distributed Set get - PASS

- 17.9 Distributed Set size - PASS

- 17.10 Distributed Set add() - PASS

- 17.11 Distributed Set addAll() - PASS

- 17.12 Distributed Set contains() - PASS

- 17.13 Distributed Set containsAll() - PASS

- 17.14 Distributed Set remove() - PASS

- 17.15 Distributed Set removeAll() - PASS

- 17.16 Distributed Set addAll() - PASS

- 17.17 Distributed Set clear() - PASS

- 17.18 Distributed Set addAll() - PASS

- 17.19 Distributed Set retain() - PASS

- 17.20 Partitioned Transactional maps put - PASS

- 17.21 Partitioned Transactional maps get - PASS
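
Case 17 exercises the distributed primitives through the CLI of the test app installed in case 16. The command names below are best-effort recollections of that test app's commands, so treat the exact spellings and arguments as assumptions rather than a verified reference:

    onos> counter-test-increment default-counter   # 17.1: increment, then get the counter
    onos> set-test-add onos-set alpha beta         # 17.10/17.11: add elements to the set
    onos> set-test-get onos-set                    # 17.8: read the set contents back
    onos> set-test-remove onos-set alpha           # 17.14: remove a single element
    onos> transactional-map-test-put 3 someValue   # 17.20: put keys inside a transaction
    onos> transactional-map-test-get 3             # 17.21: read the same keys back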
