Published from Jenkins build: https://jenkins.onosproject.org/job/SRHA-pipeline-master/299/

...

HTML
<img src="https://jenkins.onosproject.org/view/QA/job/postjob-Fabric/lastSuccessfulBuild/artifact/HAsanity_master_20-builds_graph.jpg", alt="HAsanity", style="width:525px;height:350px;border:0">

commit 44628d6c294871b57e4262597b100f1517cffa1c (HEAD -> master, origin/master, origin/HEAD)
Author: jaegonkim [jaegon77.kim@samsung.com]
AuthorDate: Sun Apr 7 10:30:32 2019 +0900
Commit: Jaegon Kim [jaegon77.kim@samsung.com]
CommitDate: Fri Apr 12 22:34:01 2019 +0000

[ONOS-7732] Automating switch workflow - checking workflow definitition

Case 1: Constructing test variables and building ONOS package - PASS

...

Case 17: Check for basic functionality with distributed primitives - FAIL

Test the methods of the distributed primitives (counters and sets) through the CLI (a sketch of the counter check follows this list).

  • 17.1 Increment then get a default counter on each node - FAIL (error)
    • Error incrementing default counter
  • 17.2 Get then Increment a default counter on each node - FAIL (error)
    • Error incrementing default counter
  • 17.3 Counters we added have the correct values - FAIL (error)
    • Added counters are incorrect
  • 17.4 Add -8 to then get a default counter on each node - FAIL (error)
    • Error incrementing default counter
  • 17.5 Add 5 to then get a default counter on each node - FAIL (error)
    • Error incrementing default counter
  • 17.6 Get then add 5 to a default counter on each node - FAIL (error)
    • Error incrementing default counter
  • 17.7 Counters we added have the correct values - FAIL (error)
    • Added counters are incorrect
  • 17.8 Distributed Set get - PASS (tick)
  • 17.9 Distributed Set size - PASS (tick)
  • 17.10 Distributed Set add() - PASS (tick)
  • 17.11 Distributed Set addAll() - PASS (tick)
  • 17.12 Distributed Set contains() - PASS (tick)
  • 17.13 Distributed Set containsAll() - PASS (tick)
  • 17.14 Distributed Set remove() - PASS (tick)
  • 17.15 Distributed Set removeAll() - PASS (tick)
  • 17.16 Distributed Set addAll() - PASS (tick)
  • 17.17 Distributed Set clear() - PASS (tick)
  • 17.18 Distributed Set addAll() - PASS (tick)
  • 17.19 Distributed Set retain() - PASS (tick)
  • 17.20 Partitioned Transactional maps put - PASS (tick)
  • 17.21 Partitioned Transactional maps get - PASS (tick)
  • 17.22 Get the value of a new Atomic Value - PASS (tick)
  • 17.23 Atomic Value set() - PASS (tick)
  • 17.24 Get the value after set() - PASS (tick)
  • 17.25 Atomic Value compareAndSet() - PASS (tick)
  • 17.26 Get the value after compareAndSet() - PASS (tick)
  • 17.27 Atomic Value getAndSet() - PASS (tick)
  • 17.28 Get the value after getAndSet() - PASS (tick)
  • 17.29 Atomic Value destroy() - PASS (tick)
  • 17.30 Get the value after destroy() - PASS (tick)
  • 17.31 Work Queue add() - PASS (tick)
  • 17.32 Check the work queue stats - PASS (tick)
  • 17.33 Work Queue addMultiple() - PASS (tick)
  • 17.34 Check the work queue stats - PASS (tick)
  • 17.35 Work Queue takeAndComplete() 1 - PASS (tick)
  • 17.36 Check the work queue stats - PASS (tick)
  • 17.37 Work Queue takeAndComplete() 2 - PASS (tick)
  • 17.38 Check the work queue stats - PASS (tick)
  • 17.39 Work Queue destroy() - PASS (tick)
  • 17.40 Check the work queue stats - PASS (tick)
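
All of the counter steps above fail with "Error incrementing default counter". The check behind them boils down to: every node applies its increments/additions to the same default counter, and every node must then read back the same expected total. Below is a minimal Python sketch of that consistency check; the node names and values are hypothetical, and the real test drives the counters through the ONOS CLI.

Python
# Sketch of a distributed-counter consistency check (hypothetical data).
# Each node adds a delta to the same default counter; afterwards every node
# must report the same final value, i.e. start + sum of all deltas.

def check_counter(start, deltas, readings):
    """start: initial counter value; deltas: additions applied across nodes;
    readings: final value reported by each node."""
    expected = start + sum(deltas)
    ok = all(value == expected for value in readings.values())
    return ok, expected

if __name__ == "__main__":
    # Hypothetical run: three nodes increment by 1, then add -8 and 5.
    deltas = [1, 1, 1, -8, 5]
    readings = {"ONOS1": 0, "ONOS2": -8, "ONOS3": 0}   # one node missed updates
    ok, expected = check_counter(0, deltas, readings)
    print("PASS" if ok else "FAIL (expected %d)" % expected)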

Case 6: Wait 60 seconds instead of inducing a failure - PASS

    Case 8: Compare ONOS Topology view to Mininet topology - FAIL

    Compare topology objects between Mininet and ONOS

    • 8.1 Comparing ONOS topology to MN topology - FAIL (error)
      • ONOS topology does not match Mininet
    • 8.2 Hosts view is consistent across all ONOS nodes - PASS (tick)
    • 8.3 Hosts information is correct - FAIL (error)
      • Host information is incorrect
    • 8.4 Host attachment points to the network - PASS (tick)
    • 8.5 Clusters view is consistent across all ONOS nodes - PASS (tick)
    • 8.6 There is only one SCC - FAIL (error)
      • ONOS shows 0 SCCs
    • 8.7 Device information is correct - PASS (tick)
    • 8.8 Links are correct - PASS (tick)
    • 8.9 Hosts are correct - FAIL (error)
      • Hosts are incorrect
    • 8.10 Checking ONOS nodes - PASS (tick)
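
Case 8 compares topology objects (devices, links, hosts) reported by ONOS against what Mininet actually built; ONOS reports each link in both directions, so its link count should be twice Mininet's. Below is a minimal Python sketch of that comparison over counts assumed to have been collected already; all numbers are hypothetical.

Python
# Sketch of the ONOS-vs-Mininet topology comparison (hypothetical counts).
def compare_topology(onos, mininet):
    """onos / mininet: dicts of object counts. ONOS links are unidirectional,
    so each Mininet link should appear twice in the ONOS view."""
    return {
        "devices": onos["devices"] == mininet["switches"],
        "links": onos["links"] == 2 * mininet["links"],
        "hosts": onos["hosts"] == mininet["hosts"],
    }

if __name__ == "__main__":
    onos_view = {"devices": 6, "links": 16, "hosts": 8}    # from one ONOS node
    mn_view = {"switches": 6, "links": 8, "hosts": 10}     # from Mininet
    for item, ok in compare_topology(onos_view, mn_view).items():
        print(item, "PASS" if ok else "FAIL")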

    Case 7: Running ONOS Constant State Tests - PASS

    • 7.1 Check that each switch has a master - PASS (tick)
    • 7.2 Read device roles from ONOS - PASS (tick)
    • 7.3 Check for consistency in roles from each controller - PASS (tick)
    • 7.4 Compare switch roles from before failure - PASS (tick)
    • 7.5 Get the intents from each controller - PASS (tick)
    • 7.6 Check for consistency in Intents from each controller - PASS (tick)
    • 7.7 Compare current intents with intents before the failure - PASS (tick)
    • 7.8 Get the OF Table entries and compare to before component failure - PASS (tick)
    • 7.9 Leadership Election is still functional - PASS (tick)
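
Case 7 verifies that every switch still has a master and that all controllers agree on the mastership view after the event. Below is a minimal Python sketch of that check over role tables assumed to have been read from each node; node names and device IDs are hypothetical.

Python
# Sketch of the mastership-consistency check (hypothetical role tables).
def check_roles(role_views):
    """role_views: {controller: {device_id: master}} as read from each node."""
    views = list(role_views.values())
    consistent = all(view == views[0] for view in views)   # same view everywhere
    all_mastered = all(master is not None for master in views[0].values())
    return consistent, all_mastered

if __name__ == "__main__":
    roles = {
        "ONOS1": {"of:0000000000000001": "ONOS1", "of:0000000000000002": "ONOS2"},
        "ONOS2": {"of:0000000000000001": "ONOS1", "of:0000000000000002": "ONOS2"},
        "ONOS3": {"of:0000000000000001": "ONOS1", "of:0000000000000002": "ONOS2"},
    }
    consistent, all_mastered = check_roles(roles)
    print("consistent:", consistent, "every switch has a master:", all_mastered)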

    Case 104: Check connectivity - FAIL

    • 104.1 Ping between all hosts - FAIL (error)
      • Failed to ping between all hosts
    • 104.2 Ping between all hosts - FAIL (error)
      • Failed to ping between all hosts
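
Case 104 performs a full-mesh ping and fails if any host pair is unreachable. Below is a minimal Python sketch of reducing such a result matrix to a verdict and a list of broken pairs; the host names and results are hypothetical.

Python
# Sketch of reducing full-mesh ping results to a verdict (hypothetical data).
from itertools import combinations

def failed_pairs(ping_results):
    """ping_results: {(src, dst): bool} for every host pair."""
    return [pair for pair, reachable in ping_results.items() if not reachable]

if __name__ == "__main__":
    hosts = ["h1", "h2", "h3", "h4"]
    # Pretend h3 lost connectivity to everyone else.
    results = {(a, b): "h3" not in (a, b) for a, b in combinations(hosts, 2)}
    broken = failed_pairs(results)
    print("PASS" if not broken else "FAIL: %s" % broken)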

    Case 15: Check that Leadership Election is still functional - PASS

    • 15.1 Run for election on each node - PASS (tick)
    • 15.2 Check that each node shows the same leader and candidates - PASS (tick)
    • 15.3 Find current leader and withdraw - PASS (tick)
    • 15.4 Check that a new node was elected leader - PASS (tick)
    • 15.5 Check that the new leader was a candidate of the old leader - PASS (tick)
    • 15.6 Run for election on old leader (just so everyone is in the hat) - PASS (tick)
    • 15.7 Check that oldLeader is a candidate, and leader if only 1 node - No Result (warning)
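
Case 15 exercises the leadership-election service: all nodes run for the same topic, the current leader withdraws, and a remaining candidate must take over with every node agreeing on who that is. Below is a minimal Python sketch of those checks over election views assumed to have been collected from the CLI; the values are hypothetical.

Python
# Sketch of the leadership-election checks (hypothetical election views).
def check_election(views, old_leader):
    """views: {node: {'leader': str, 'candidates': [str]}} after withdrawal."""
    leaders = {v["leader"] for v in views.values()}
    same_leader = len(leaders) == 1                 # all nodes agree on one leader
    new_leader = leaders.pop() if same_leader else None
    changed = same_leader and new_leader != old_leader
    was_candidate = same_leader and all(
        new_leader in v["candidates"] for v in views.values())
    return same_leader, changed, was_candidate

if __name__ == "__main__":
    after = {
        "ONOS1": {"leader": "ONOS2", "candidates": ["ONOS2", "ONOS3"]},
        "ONOS2": {"leader": "ONOS2", "candidates": ["ONOS2", "ONOS3"]},
        "ONOS3": {"leader": "ONOS2", "candidates": ["ONOS2", "ONOS3"]},
    }
    print(check_election(after, old_leader="ONOS1"))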

    Case 17: Check for basic functionality with distributed primitives - FAIL

    Test the methods of the distributed primitives (counters and sets) through the CLI

    • 17.1 Increment then get a default counter on each node - FAIL (error)
      • Error incrementing default counter
    • 17.2 Get then Increment a default counter on each node - FAIL (error)
      • Error incrementing default counter
    • 17.3 Counters we added have the correct values - FAIL (error)
      • Added counters are incorrect
    • 17.4 Add -8 to then get a default counter on each node - FAIL (error)
      • Error incrementing default counter
    • 17.5 Add 5 to then get a default counter on each node - FAIL (error)
      • Error incrementing default counter
    • 17.6 Get then add 5 to a default counter on each node - FAIL (error)
      • Error incrementing default counter
    • 17.7 Counters we added have the correct values - FAIL (error)
      • Added counters are incorrect
    • 17.8 Distributed Set get - PASS (tick)
    • 17.9 Distributed Set size - PASS (tick)
    • 17.10 Distributed Set add() - PASS (tick)
    • 17.11 Distributed Set addAll() - PASS (tick)
    • 17.12 Distributed Set contains() - PASS (tick)
    • 17.13 Distributed Set containsAll() - PASS (tick)
    • 17.14 Distributed Set remove() - PASS (tick)
    • 17.15 Distributed Set removeAll() - PASS (tick)
    • 17.16 Distributed Set addAll() - PASS (tick)
    • 17.17 Distributed Set clear() - PASS (tick)
    • 17.18 Distributed Set addAll() - PASS (tick)
    • 17.19 Distributed Set retain() - PASS (tick)
    • 17.20 Partitioned Transactional maps put - PASS (tick)
    • 17.21 Partitioned Transactional maps get - PASS (tick)
    • 17.22 Get the value of a new Atomic Value - PASS (tick)
    • 17.23 Atomic Value set() - PASS (tick)
    • 17.24 Get the value after set() - PASS (tick)
    • 17.25 Atomic Value compareAndSet() - PASS (tick)
    • 17.26 Get the value after compareAndSet() - PASS (tick)
    • 17.27 Atomic Value getAndSet() - PASS (tick)
    • 17.28 Get the value after getAndSet() - PASS (tick)
    • 17.29 Atomic Value destroy() - PASS (tick)
    • 17.30 Get the value after destroy() - PASS (tick)
    • 17.31 Work Queue add() - PASS (tick)
    • 17.32 Check the work queue stats - PASS (tick)
    • 17.33 Work Queue addMultiple() - PASS (tick)
    • 17.34 Check the work queue stats - PASS (tick)
    • 17.35 Work Queue takeAndComplete() 1 - PASS (tick)
    • 17.36 Check the work queue stats - PASS (tick)
    • 17.37 Work Queue takeAndComplete() 2 - PASS (tick)
    • 17.38 Check the work queue stats - PASS (tick)
    • 17.39 Work Queue destroy() - PASS (tick)
    • 17.40 Check the work queue stats - PASS (tick)

    Case 9: Turn off a link to ensure that Link Discovery is working properly - PASS

    • 9.1 Kill Link between spine102 and leaf1 - PASS (tick)
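
Cases 9 and 10 flap the spine102-leaf1 link from the Mininet side so that ONOS link discovery can be observed reacting. Below is a minimal Python sketch of how that can be driven, assuming a running mininet.net.Mininet instance `net` whose topology contains those two nodes; the hold time is arbitrary.

Python
# Sketch of Cases 9/10 from the Mininet side: take the spine102-leaf1 link
# down, wait for ONOS link discovery to converge, then bring it back up.
# Assumes `net` is an already-started mininet.net.Mininet instance.
import time

def flap_link(net, a="spine102", b="leaf1", hold=30):
    net.configLinkStatus(a, b, "down")   # Case 9: kill the link
    time.sleep(hold)                     # let ONOS converge before restoring
    net.configLinkStatus(a, b, "up")     # Case 10: restore the link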

    Case 8: Compare ONOS Topology view to Mininet topology - FAIL

    Compare topology objects between Mininet and ONOS

    • 8.1 Comparing ONOS topology to MN topology - FAIL (error)
      • ONOS topology does not match Mininet
    • 8.2 Hosts view is consistent across all ONOS nodes - PASS (tick)
    • 8.3 Hosts information is correct - FAIL (error)
      • Host information is incorrect
    • 8.4 Host attachment points to the network - PASS (tick)
    • 8.5 Clusters view is consistent across all ONOS nodes - PASS (tick)
    • 8.6 There is only one SCC - FAIL (error)
      • ONOS shows 0 SCCs
    • 8.7 Device information is correct - PASS (tick)
    • 8.8 Links are correct - PASS (tick)
    • 8.9 Hosts are correct - FAIL (error)
      • Hosts are incorrect
    • 8.10 Checking ONOS nodes - PASS (tick)

    Case 104: Check connectivity - FAIL

    • 104.1 Ping between all hosts - FAIL (error)
      • Failed to ping between all hosts
    • 104.2 Ping between all hosts - FAIL (error)
      • Failed to ping between all hosts

    Case 10: Restore a link to ensure that Link Discovery is working properly - PASS

    • 10.1 Bring link between spine102 and leaf1 back up - PASS (tick)

    Case 8: Compare ONOS Topology view to Mininet topology - FAIL

    Compare topology objects between Mininet and ONOS

    • 8.1 Comparing ONOS topology to MN topology - FAIL (error)
      • ONOS topology does not match Mininet
    • 8.2 Hosts view is consistent across all ONOS nodes - PASS (tick)
    • 8.3 Hosts information is correct - FAIL (error)
      • Host information is incorrect
    • 8.4 Host attachment points to the network - PASS (tick)
    • 8.5 Clusters view is consistent across all ONOS nodes - PASS (tick)
    • 8.6 There is only one SCC - FAIL (error)
      • ONOS shows 0 SCCs
    • 8.7 Device information is correct - PASS (tick)
    • 8.8 Links are correct - PASS (tick)
    • 8.9 Hosts are correct - FAIL (error)
      • Hosts are incorrect
    • 8.10 Checking ONOS nodes - PASS (tick)

    Case 104: Check connectivity - FAIL

    • 104.1 Ping between all hosts - FAIL (error)
      • Failed to ping between all hosts
    • 104.2 Ping between all hosts - FAIL (error)
      • Failed to ping between all hosts

    Case 13: Test Cleanup - PASS

    • 13.1 Checking raft log size - PASS (tick)
    • 13.2 Killing tcpdumps - No Result (warning)
    • 13.3 Checking ONOS Logs for errors - No Result (warning)
    • 13.4 Stopping Mininet - PASS (tick)
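
Step 13.3 scans the ONOS logs for errors but produced no result in this run. Below is a minimal Python sketch of such a scan over a karaf log file; the log path and the severity keywords are assumptions, not taken from the test.

Python
# Sketch of step 13.3: scan the ONOS (karaf) log for suspicious lines.
# The log path below is an assumption; adjust to the deployment being tested.
LOG = "/opt/onos/log/karaf.log"
KEYWORDS = ("ERROR", "Except")

def scan_log(path=LOG, keywords=KEYWORDS):
    hits = []
    with open(path, errors="replace") as log:
        for line in log:
            if any(keyword in line for keyword in keywords):
                hits.append(line.rstrip())
    return hits

if __name__ == "__main__":
    problems = scan_log()
    print("clean" if not problems else "%d suspicious lines" % len(problems))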