HAswapNodes at 10 Jul 2017 12:19:56

commit 52f5828004e9c59774a94542e74af7dcc18d9771 (HEAD, origin/onos-1.9, onos-1.9)
Author: Ray Milkey <ray@onlab.us>
AuthorDate: Fri Jun 23 12:46:32 2017 -0700
Commit: Ray Milkey <ray@onlab.us>
CommitDate: Fri Jun 23 12:46:32 2017 -0700

Starting snapshot 1.9.3-SNAPSHOT

Case 1: Setting up test environment - PASS

Set up the test environment, including installing ONOS and starting the Mininet and ONOS CLI sessions.

  • 1.1 Create cell file - No Result (warning)
  • 1.2 Applying cell variable to environment - PASS (tick)
  • 1.3 Verify connectivity to cell - PASS (tick)
  • 1.4 Setup server for cluster metadata file - PASS (tick)
  • 1.5 Generate initial metadata file - PASS (tick)
  • 1.6 Starting Mininet - PASS (tick)
  • 1.7 Git checkout and pull master - No Result (warning)
  • 1.8 Using mvn clean install - PASS (tick)
  • 1.9 Copying backup config files - PASS (tick)
  • 1.10 Creating ONOS package - PASS (tick)
  • 1.11 Installing ONOS package - PASS (tick)
  • 1.12 Set up ONOS secure SSH - PASS (tick)
  • 1.13 Checking if ONOS is up yet - PASS (tick)
  • 1.14 Starting ONOS CLI sessions - PASS (tick)
  • 1.15 Checking ONOS nodes - PASS (tick)
  • 1.16 Activate apps defined in the params file - No Result (warning)
  • 1.17 Set ONOS configurations - PASS (tick)
  • 1.18 App Ids check - PASS (tick)
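
The packaging and installation steps above (1.10 – 1.13) map onto the standard ONOS cell tooling. A minimal manual sketch, assuming a cell definition named "ha_cell" and the $OC1 node variable it exports (both names are illustrative):

    # select the cell and apply its variables to the environment (steps 1.1 – 1.2)
    cell ha_cell
    # build a deployable package from the checked-out tree (step 1.10)
    onos-package
    # push the package to a cluster node and wait until it is up (steps 1.11, 1.13)
    onos-install $OC1
    onos-wait-for-start $OC1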

Case 2: Assigning devices to controllers - PASS

Assign switches to ONOS using 'ovs-vsctl' and check that an ONOS node becomes the master of each device.

  • 2.1 Assign switches to controllers - PASS (tick)
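
Step 2.1 performs the assignment from the Mininet host with 'ovs-vsctl'. A minimal sketch for a single switch, with the controller IP addresses as placeholders:

    # point switch s1 at two of the ONOS instances (IPs are placeholders)
    sudo ovs-vsctl set-controller s1 tcp:10.128.10.11:6653 tcp:10.128.10.12:6653
    # check that the controllers show as connected
    sudo ovs-vsctl show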

Case 8: Compare ONOS Topology view to Mininet topology - PASS

Compare topology objects between Mininet and ONOS

  • 8.1 Comparing ONOS topology to MN topology - PASS (tick)
  • 8.2 Hosts view is consistent across all ONOS nodes - PASS (tick)
  • 8.3 Hosts information is correct - PASS (tick)
  • 8.4 Host attachment points to the network - PASS (tick)
  • 8.5 Clusters view is consistent across all ONOS nodes - PASS (tick)
  • 8.6 There is only one SCC - PASS (tick)
  • 8.7 Device information is correct - PASS (tick)
  • 8.8 Links are correct - PASS (tick)
  • 8.9 Hosts are correct - PASS (tick)
  • 8.10 Checking ONOS nodes - PASS (tick)
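
This case boils down to checking that the devices, links, and hosts reported by every ONOS node match what Mininet is actually running. A hedged sketch of the manual equivalent, using standard ONOS and Mininet CLI views:

    # ONOS side: object counts and per-object details
    onos> summary
    onos> devices
    onos> links
    onos> hosts
    # Mininet side: the ground truth to compare against
    mininet> net
    mininet> dump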

Case 21: Assigning Controller roles for switches - PASS

Check that ONOS is connected to each device. Then manually assign mastership to specific ONOS nodes using 'device-role'.

  • 21.1 Assign mastership of switches to specific controllers - PASS (tick)
  • 21.2 Check mastership was correctly assigned - PASS (tick)
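
Step 21.1 pins mastership with the ONOS 'device-role' command. A minimal sketch, with the device ID and node IP as placeholders:

    # make the node at 10.128.10.11 master of one switch
    onos> device-role of:0000000000000001 10.128.10.11 master
    # verify the resulting assignments
    onos> roles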

Case 3: Adding host Intents - PASS

Discover hosts using pingall, then assign predetermined host-to-host intents. After installation, check that the intents are distributed to all nodes and that their state is INSTALLED.

  • 3.1 Install reactive forwarding app - PASS (tick)
  • 3.2 Check app ids - PASS (tick)
  • 3.3 Discovering Hosts (via pingall for now) - PASS (tick)
  • 3.4 Uninstall reactive forwarding app - PASS (tick)
  • 3.5 Check app ids - PASS (tick)
  • 3.6 Add host intents via cli - PASS (tick)
  • 3.7 Intent Anti-Entropy dispersion - PASS (tick)
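
Steps 3.3 and 3.6 correspond to a Mininet pingall (so ONOS learns the hosts) followed by the ONOS 'add-host-intent' command. A minimal sketch, with the host IDs as placeholders:

    # let ONOS discover the hosts
    mininet> pingall
    # install a host-to-host intent between two learned hosts
    onos> add-host-intent 00:00:00:00:00:01/-1 00:00:00:00:00:09/-1
    # the intent should reach the INSTALLED state on every node
    onos> intents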

Case 8: Compare ONOS Topology view to Mininet topology - PASS

Compare topology objects between Mininet and ONOS

  • 8.1 Comparing ONOS topology to MN topology - PASS (tick)
  • 8.2 Hosts view is consistent across all ONOS nodes - PASS (tick)
  • 8.3 Hosts information is correct - PASS (tick)
  • 8.4 Host attachment points to the network - PASS (tick)
  • 8.5 Clusters view is consistent across all ONOS nodes - PASS (tick)
  • 8.6 There is only one SCC - PASS (tick)
  • 8.7 Device information is correct - PASS (tick)
  • 8.8 Links are correct - PASS (tick)
  • 8.9 Hosts are correct - PASS (tick)
  • 8.10 Checking ONOS nodes - PASS (tick)

Case 4: Verify connectivity by sending traffic across Intents - PASS

Ping across the added host intents to check functionality and check the state of the intents

  • 4.1 Check Intent state - PASS (tick)
  • 4.2 Ping across added host intents - PASS (tick)
  • 4.3 Check leadership of topics - PASS (tick)
  • 4.4 Wait a minute then ping again - PASS (tick)
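
The checks in this case amount to confirming that the intents report INSTALLED and that the corresponding host pairs can reach each other. A minimal manual equivalent (host names are placeholders):

    # every host intent should be INSTALLED
    onos> intents
    # ping between the endpoints of one of the installed intents
    mininet> h1 ping -c 1 h9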

Case 5: Setting up and gathering data for current state - PASS

  • 5.1 Check that each switch has a master - PASS (tick)
  • 5.2 Get the Mastership of each switch from each controller - PASS (tick)
  • 5.3 Check for consistency in roles from each controller - PASS (tick)
  • 5.4 Get the intents from each controller - PASS (tick)
  • 5.5 Check for consistency in Intents from each controller - PASS (tick)
  • 5.6 Get the flows from each controller - PASS (tick)
  • 5.7 Check for consistency in Flows from each controller - PASS (tick)
  • 5.8 Get the OF Table entries - No Result (warning)
  • 5.9 Start continuous pings - No Result (warning)
  • 5.10 Collecting topology information from ONOS - No Result (warning)
  • 5.11 Host view is consistent across ONOS nodes - PASS (tick)
  • 5.12 Each host has an IP address - PASS (tick)
  • 5.13 Cluster view is consistent across ONOS nodes - PASS (tick)
  • 5.14 Cluster view correct across ONOS nodes - PASS (tick)
  • 5.15 Comparing ONOS topology to MN - PASS (tick)
  • 5.16 Device information is correct - PASS (tick)
  • 5.17 Links are correct - PASS (tick)
  • 5.18 Hosts are correct - PASS (tick)
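
The state gathered here (mastership, intents, and flows from every controller) maps onto standard ONOS CLI views, run against each node in turn. A sketch:

    # per-device mastership as seen by this node (steps 5.1 – 5.3)
    onos> roles
    onos> masters
    # intents and flow rules snapshotted for later comparison (steps 5.4 – 5.7)
    onos> intents
    onos> flows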

Case 14: Start Leadership Election app - PASS

  • 14.1 Install leadership election app - PASS (tick)
  • 14.2 Run for election on each node - PASS (tick)
  • 14.3 First node was elected leader - PASS (tick)

Case 16: Install Primitives app - PASS

  • 16.1 Install Primitives app - PASS (tick)

Case 17: Check for basic functionality with distributed primitives - PASS

Test the methods of the distributed primitives (counters and sets) through the CLI

  • 17.1 Increment then get a default counter on each node - PASS (tick)
  • 17.2 Get then Increment a default counter on each node - PASS (tick)
  • 17.3 Counters we added have the correct values - PASS (tick)
  • 17.4 Add -8 to then get a default counter on each node - PASS (tick)
  • 17.5 Add 5 to then get a default counter on each node - PASS (tick)
  • 17.6 Get then add 5 to a default counter on each node - PASS (tick)
  • 17.7 Counters we added have the correct values - PASS (tick)
  • 17.8 Distributed Set get - PASS (tick)
  • 17.9 Distributed Set size - PASS (tick)
  • 17.10 Distributed Set add() - PASS (tick)
  • 17.11 Distributed Set addAll() - PASS (tick)
  • 17.12 Distributed Set contains() - PASS (tick)
  • 17.13 Distributed Set containsAll() - PASS (tick)
  • 17.14 Distributed Set remove() - PASS (tick)
  • 17.15 Distributed Set removeAll() - PASS (tick)
  • 17.16 Distributed Set addAll() - PASS (tick)
  • 17.17 Distributed Set clear() - PASS (tick)
  • 17.18 Distributed Set addAll() - PASS (tick)
  • 17.19 Distributed Set retain() - PASS (tick)
  • 17.20 Partitioned Transactional maps put - PASS (tick)
  • 17.21 Partitioned Transactional maps get - PASS (tick)
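
Case 17 exercises the counter, set, and transactional-map primitives through the ONOS CLI. A hedged sketch, assuming the distributed-primitives test commands implied by the step names (counter-test-increment, set-test-add, set-test-get) are available in this build; the primitive names are arbitrary:

    # increment-and-get a named counter (cf. step 17.1)
    onos> counter-test-increment test-counter
    # add elements to a distributed set and read it back (cf. steps 17.10 and 17.8)
    onos> set-test-add test-set a b c
    onos> set-test-get test-set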

Case 6: Swap some of the ONOS nodes - FAIL

  • 6.1 Checking ONOS Logs for errors - No Result (warning)
  • 6.2 Generate new metadata file - PASS (tick)
  • 6.3 Start new nodes - PASS (tick)
  • 6.4 Checking if ONOS is up yet - PASS (tick)
  • 6.5 Starting ONOS CLI sessions - PASS (tick)
  • 6.6 Checking ONOS nodes - FAIL (error)
    • Failed to rerun for election
  • 6.7 Reapplying cell variable to environment - PASS (tick)

Case 8: Compare ONOS Topology view to Mininet topology - FAIL

Compare topology objects between Mininet and ONOS

  • 8.1 Comparing ONOS topology to MN topology - PASS (tick)
  • 8.2 Hosts view is consistent across all ONOS nodes - PASS (tick)
  • 8.3 Hosts information is correct - PASS (tick)
  • 8.4 Host attachment points to the network - PASS (tick)
  • 8.5 Clusters view is consistent across all ONOS nodes - PASS (tick)
  • 8.6 There is only one SCC - PASS (tick)
  • 8.7 Device information is correct - PASS (tick)
  • 8.8 Links are correct - PASS (tick)
  • 8.9 Hosts are correct - PASS (tick)
  • 8.10 Checking ONOS nodes - FAIL (error)
    • Nodes check NOT successful

Case 3: Adding host Intents - FAIL

Discover hosts using pingall, then assign predetermined host-to-host intents. After installation, check that the intents are distributed to all nodes and that their state is INSTALLED.

  • 3.1 Install reactive forwarding app - PASS (tick)
  • 3.2 Check app ids - PASS (tick)
  • 3.3 Discovering Hosts (via pingall for now) - PASS (tick)
  • 3.4 Uninstall reactive forwarding app - PASS (tick)
  • 3.5 Check app ids - FAIL (error)
    • Something is wrong with app Ids
  • 3.6 Add host intents via cli - PASS (tick)
  • 3.7 Intent Anti-Entropy dispersion - FAIL (error)

Case 7: Running ONOS Constant State Tests - FAIL

  • 7.1 Check that each switch has a master - PASS (tick)
  • 7.2 Read device roles from ONOS - PASS (tick)
  • 7.3 Check for consistency in roles from each controller - PASS (tick)
  • 7.4 Get the intents and compare across all nodes - PASS (tick)
  • 7.5 Check for consistency in Intents from each controller - FAIL (error)
    • ONOS nodes have different views of intents
  • 7.6 Compare current intents with intents before the scaling - FAIL (error)
    • The Intents changed during scaling
  • 7.7 Get the OF Table entries and compare to before component scaling - FAIL (error)
    • Changes were found in the flow tables
  • 7.8 Leadership Election is still functional - FAIL (error)
    • Something went wrong with Leadership election

Case 4: Verify connectivity by sending traffic across Intents - FAIL

Ping across the added host intents to check functionality and check the state of the intents

  • 4.1 Check Intent state - FAIL (error)
    • Intents are not all in INSTALLED state
  • 4.2 Ping across added host intents - PASS (tick)
  • 4.3 Check leadership of topics - PASS (tick)
  • 4.4 Wait a minute then ping again - PASS (tick)

Case 15: Check that Leadership Election is still functional - PASS

  • 15.1 Run for election on each node - PASS (tick)
  • 15.2 Check that each node shows the same leader and candidates - PASS (tick)
  • 15.3 Find current leader and withdraw - PASS (tick)
  • 15.4 Check that a new node was elected leader - PASS (tick)
  • 15.5 Check that the new leader was a candidate of the old leader - PASS (tick)
  • 15.6 Run for election on old leader (just so everyone is in the hat) - PASS (tick)
  • 15.7 Check that oldLeader is a candidate, and leader if only 1 node - PASS (tick)

Case 17: Check for basic functionality with distributed primitives - FAIL

Test the methods of the distributed primitives (counters and sets) through the CLI

  • 17.1 Increment then get a default counter on each node - FAIL (error)
    • Error incrementing default counter
  • 17.2 Get then Increment a default counter on each node - FAIL (error)
    • Error incrementing default counter
  • 17.3 Counters we added have the correct values - FAIL (error)
    • Added counters are incorrect
  • 17.4 Add -8 to then get a default counter on each node - FAIL (error)
    • Error incrementing default counter
  • 17.5 Add 5 to then get a default counter on each node - FAIL (error)
    • Error incrementing default counter
  • 17.6 Get then add 5 to a default counter on each node - FAIL (error)
    • Error incrementing default counter
  • 17.7 Counters we added have the correct values - FAIL (error)
    • Added counters are incorrect
  • 17.8 Distributed Set get - FAIL (error)
    • Set elements are incorrect
  • 17.9 Distributed Set size - FAIL (error)
    • Set sizes are incorrect
  • 17.10 Distributed Set add() - FAIL (error)
    • Set add was incorrect
  • 17.11 Distributed Set addAll() - FAIL (error)
    • Set addAll was incorrect
  • 17.12 Distributed Set contains() - FAIL (error)
    • Set contains failed
  • 17.13 Distributed Set containsAll() - PASS (tick)
  • 17.14 Distributed Set remove() - FAIL (error)
    • Set remove was incorrect
  • 17.15 Distributed Set removeAll() - FAIL (error)
    • Set removeAll was incorrect
  • 17.16 Distributed Set addAll() - FAIL (error)
    • Set addAll was incorrect
  • 17.17 Distributed Set clear() - FAIL (error)
    • Set clear was incorrect
  • 17.18 Distributed Set addAll() - FAIL (error)
    • Set addAll was incorrect
  • 17.19 Distributed Set retain() - FAIL (error)
    • Set retain was incorrect
  • 17.20 Partitioned Transactional maps put - PASS (tick)
  • 17.21 Partitioned Transactional maps get - FAIL (error)
    • Partitioned Transactional Map values incorrect

Case 9: Turn off a link to ensure that Link Discovery is working properly - PASS

  • 9.1 Kill Link between s3 and s28 - PASS (tick)
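
Step 9.1 takes the link down from the Mininet CLI; Case 10 later restores it the same way. A minimal sketch:

    # disable both ends of the s3<->s28 link
    mininet> link s3 s28 down
    # ...and bring it back up afterwards (Case 10)
    mininet> link s3 s28 up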

Case 8: Compare ONOS Topology view to Mininet topology - FAIL

Compare topology objects between Mininet and ONOS

  • 8.1 Comparing ONOS topology to MN topology - PASS (tick)
  • 8.2 Hosts view is consistent across all ONOS nodes - PASS (tick)
  • 8.3 Hosts information is correct - PASS (tick)
  • 8.4 Host attachment points to the network - PASS (tick)
  • 8.5 Clusters view is consistent across all ONOS nodes - PASS (tick)
  • 8.6 There is only one SCC - PASS (tick)
  • 8.7 Device information is correct - PASS (tick)
  • 8.8 Links are correct - PASS (tick)
  • 8.9 Hosts are correct - PASS (tick)
  • 8.10 Checking ONOS nodes - FAIL (error)
    • Nodes check NOT successful

Case 4: Verify connectivity by sending traffic across Intents - FAIL

Ping across the added host intents to check functionality and check the state of the intents

  • 4.1 Check Intent state - FAIL (error)
    • Intents are not all in INSTALLED state
  • 4.2 Ping across added host intents - FAIL (error)
    • Intents have not been installed correctly, pings failed.
  • 4.3 Check leadership of topics - PASS (tick)
  • 4.4 Wait a minute then ping again - FAIL (error)
    • Intents have not been installed correctly, pings failed.

Case 10: Restore a link to ensure that Link Discovery is working properly - PASS

  • 10.1 Bring link between s3 and s28 back up - PASS (tick)

Case 8: Compare ONOS Topology view to Mininet topology - FAIL

Compare topology objects between Mininet and ONOS

  • 8.1 Comparing ONOS topology to MN topology - PASS (tick)
  • 8.2 Hosts view is consistent across all ONOS nodes - PASS (tick)
  • 8.3 Hosts information is correct - PASS (tick)
  • 8.4 Host attachment points to the network - PASS (tick)
  • 8.5 Clusters view is consistent across all ONOS nodes - PASS (tick)
  • 8.6 There is only one SCC - PASS (tick)
  • 8.7 Device information is correct - PASS (tick)
  • 8.8 Links are correct - PASS (tick)
  • 8.9 Hosts are correct - PASS (tick)
  • 8.10 Checking ONOS nodes - FAIL (error)
    • Nodes check NOT successful

Case 4: Verify connectivity by sending traffic across Intents - FAIL

Ping across the added host intents to check functionality and check the state of the intents

  • 4.1 Check Intent state - FAIL (error)
    • Intents are not all in INSTALLED state
  • 4.2 Ping across added host intents - PASS (tick)
  • 4.3 Check leadership of topics - PASS (tick)
  • 4.4 Wait a minute then ping again - FAIL (error)
    • Intents have not been installed correctly, pings failed.

Case 11: Killing a switch to ensure it is discovered correctly - PASS

  • 11.1 Kill s5 - PASS (tick)

Case 8: Compare ONOS Topology view to Mininet topology - FAIL

Compare topology objects between Mininet and ONOS

  • 8.1 Comparing ONOS topology to MN topology - PASS (tick)
  • 8.2 Hosts view is consistent across all ONOS nodes - PASS (tick)
  • 8.3 Hosts information is correct - PASS (tick)
  • 8.4 Host attachment points to the network - PASS (tick)
  • 8.5 Clusters view is consistent across all ONOS nodes - PASS (tick)
  • 8.6 There is only one SCC - PASS (tick)
  • 8.7 Device information is correct - PASS (tick)
  • 8.8 Links are correct - PASS (tick)
  • 8.9 Hosts are correct - PASS (tick)
  • 8.10 Checking ONOS nodes - FAIL (error)
    • Nodes check NOT successful

Case 4: Verify connectivity by sending traffic across Intents - FAIL

Ping across the added host intents to check functionality and check the state of the intents

  • 4.1 Check Intent state - FAIL (error)
    • Intents are not all in INSTALLED state
  • 4.2 Ping across added host intents - FAIL (error)
    • Intents have not been installed correctly, pings failed.
  • 4.3 Check leadership of topics - PASS (tick)
  • 4.4 Wait a minute then ping again - FAIL (error)
    • Intents have not been installed correctly, pings failed.

Case 12: Adding a switch to ensure it is discovered correctly - PASS

  • 12.1 Add back s5 - PASS (tick)

Case 8: Compare ONOS Topology view to Mininet topology - FAIL

Compare topology objects between Mininet and ONOS

  • 8.1 Comparing ONOS topology to MN topology - PASS (tick)
  • 8.2 Hosts view is consistent across all ONOS nodes - PASS (tick)
  • 8.3 Hosts information is correct - PASS (tick)
  • 8.4 Host attachment points to the network - PASS (tick)
  • 8.5 Clusters view is consistent across all ONOS nodes - PASS (tick)
  • 8.6 There is only one SCC - PASS (tick)
  • 8.7 Device information is correct - PASS (tick)
  • 8.8 Links are correct - PASS (tick)
  • 8.9 Hosts are correct - PASS (tick)
  • 8.10 Checking ONOS nodes - FAIL (error)
    • Nodes check NOT successful

Case 4: Verify connectivity by sending traffic across Intents - FAIL

Ping across the added host intents to check functionality and check the state of the intents

  • 4.1 Check Intent state - FAIL (error)
    • Intents are not all in INSTALLED state
  • 4.2 Ping across added host intents - FAIL (error)
    • Intents have not been installed correctly, pings failed.
  • 4.3 Check leadership of topics - PASS (tick)
  • 4.4 Wait a minute then ping again - FAIL (error)
    • Intents have not been installed correctly, pings failed.

Case 13: Test Cleanup - PASS

  • 13.1 Killing tcpdumps - No Result (warning)
  • 13.2 Stopping Mininet - PASS (tick)
  • 13.3 Checking ONOS Logs for errors - No Result (warning)
  • 13.4 Stopping webserver - PASS (tick)