
----------------- Under construction -----------------

Summary

The CHO (Continuous Hours of Operation) test runs on an experimental framework called CHOTestMonkey inside TestON. Instead of running a predefined sequence of test cases, CHOTestMonkey breaks test cases into atomic units of test logic called events and provides a highly customizable way to organize and execute these events. With CHOTestMonkey, it becomes much easier and more flexible to maintain various pieces of test logic and to assemble them in different ways for different test purposes.

For instance, one can start CHOTestMonkey without any predefined test logic and then trigger various events from an external Python script. In pseudo-code, this looks like:

# Add a host intent between h1 and h2
triggerEvent('APP_INTENT_HOST_ADD', 'h1', 'h2')
# Randomly bring down a link
triggerEvent('NETWORK_LINK_DOWN', 'random')
# Verify network topology
triggerEvent('CHECK_TOPO')
# Pause the test
triggerEvent('TEST_PAUSE')
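As an illustrative sketch of how such an external script might talk to CHOTestMonkey: the page describes a connection between CHO and a third-party Python script, but the wire protocol, host, and port are not documented here, so everything below (the address, the message format, the helper names) is an assumption. One plausible shape is a thin TCP client:

```python
import socket

# Hypothetical sketch only: the real CHOTestMonkey listener protocol is not
# shown on this page. Address, port, and message format are assumptions.
CHO_HOST = "localhost"   # assumed address of the CHOTestMonkey listener
CHO_PORT = 6000          # assumed port; set to match your test configuration

def buildEventMessage(eventName, *args):
    """Serialize an event name and its arguments into one command string."""
    return " ".join([eventName] + list(args))

def triggerEvent(eventName, *args):
    """Send one event command to the CHOTestMonkey listener over TCP."""
    message = buildEventMessage(eventName, *args)
    with socket.create_connection((CHO_HOST, CHO_PORT), timeout=5) as conn:
        conn.sendall(message.encode("utf-8"))
```

With a helper like this, the pseudo-code above becomes directly runnable from any script that can reach the listener.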

In this way, one can easily customize the test logic with much less effort. CHOTestMonkey also provides a CLI, which is especially useful for debugging:

CHO> add-host-intent h1 h2
CHO> link-down random
CHO> check-topo
CHO> pause-test
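A minimal sketch of such a CLI loop, assuming a simple table that maps the commands above onto the event names from the earlier pseudo-code (the real CLI parser is not shown on this page, so this dispatcher is illustrative only):

```python
# Hypothetical sketch: maps the CLI commands shown above onto the event
# names used in the earlier pseudo-code. Not the real CHOTestMonkey parser.
COMMANDS = {
    "add-host-intent": "APP_INTENT_HOST_ADD",
    "link-down": "NETWORK_LINK_DOWN",
    "check-topo": "CHECK_TOPO",
    "pause-test": "TEST_PAUSE",
}

def parseCommand(line):
    """Translate one CLI line into an (eventName, args) pair."""
    parts = line.strip().split()
    if not parts or parts[0] not in COMMANDS:
        raise ValueError("unknown command: %r" % line)
    return COMMANDS[parts[0]], parts[1:]

def repl(triggerEvent):
    """Read CLI lines until EOF and forward each one as an event."""
    while True:
        try:
            line = input("CHO> ")
        except EOFError:
            break
        eventName, args = parseCommand(line)
        triggerEvent(eventName, *args)
```

For example, `parseCommand("add-host-intent h1 h2")` yields `("APP_INTENT_HOST_ADD", ["h1", "h2"])`, which can then be forwarded to the event trigger.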

Please continue reading if you are interested in our new test framework.

Background

The CHO test focuses on testing ONOS longevity. In previous versions of the CHO test, we looped over a predefined sequence of test cases (e.g. intent installation/withdrawal, link down/up, network topology verification, etc.), which fully followed the existing TestON structure and logic. However, as the existing CHO test matured, we came to realize its limitations and considered a redesign of the CHO test with two main goals:

  1. Simulating a long time running of ONOS in practical networks;
  2. Improving debuggability of CHO test. 

Goal 1 requires at least two changes. First, we need a new way to organize and execute test cases (or the logic inside test cases): a predefined sequence of test logic is not a good simulation of user and network behavior in practical networks. Second, we should allow multiple test cases to run in parallel, e.g. installing intents while a network failure happens. As for Goal 2, since the CHO test is expected to run for several days or longer, debugging becomes much more difficult, not only because of large log files but also because we cannot interact with the test while it is running (e.g. to change test configurations or even test logic in real time). Besides, reproducing failures in the CHO test is always costly.

To address these issues, we propose to build a new experimental test framework inside TestON for CHO, which we call CHOTestMonkey. The suffix "Monkey" alludes both to Chaos Monkey style testing and to 2016, the year of the Monkey. CHOTestMonkey is built on the following core ideas:

  • First, we break test cases into smaller blocks of test logic which we call events. Each event is an atomic operation on the SDN network, e.g. installing one intent, bringing down one link, or verifying ONOS status. Test cases can be built by assembling different events, which keeps CHOTestMonkey backwards compatible. 
  • Second, we introduce an event generator which accepts a list of event generation rules as input and outputs the generated events. For instance, it can be called to generate a random link-down event, or a host-to-host intent event according to some network model. 
  • Third, an event scheduler is designed to execute events flexibly according to different strategies. Normally, events can be executed in parallel, while some events must be executed after others. For example, intent installations can run in different threads, but a topology check event should wait until all pending topology events have finished. In short, the event scheduler ensures that all generated events run efficiently without conflicting with each other. 
  • Finally, besides generating events automatically inside the CHO test, we also allow CHO to accept event-generation commands from outside by setting up a connection between CHO and any third-party Python script. Based on this feature, we implement a CLI for CHOTestMonkey which accepts user-friendly commands such as "add-host-intent h1 h2" and "check-topo" and then triggers event generation inside the CHO test. With the help of the CHO CLI, testers can pause/resume the CHO test at any time, check network status, or change test logic by inserting events into the test in real time.
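The scheduling idea above can be sketched as follows. This is a minimal illustration under the stated constraints, not the real CHOTestMonkey scheduler: ordinary events run in parallel threads, while an "exclusive" event (such as a topology check) waits until every running event has finished before it executes. All class and field names are assumptions.

```python
import threading

# Illustrative sketch of the scheduling strategy described above; the names
# (EventScheduler, schedule, exclusive) are assumptions, not the real API.
class EventScheduler:
    def __init__(self):
        self.lock = threading.Lock()
        self.idle = threading.Condition(self.lock)
        self.running = 0            # number of events currently executing

    def schedule(self, func, exclusive=False):
        if exclusive:
            # An exclusive event (e.g. a topology check) waits until all
            # running events have finished, then runs alone.
            with self.idle:
                while self.running > 0:
                    self.idle.wait()
                func()
        else:
            # A regular event runs in its own thread, in parallel.
            with self.lock:
                self.running += 1
            threading.Thread(target=self._run, args=(func,)).start()

    def _run(self, func):
        try:
            func()
        finally:
            with self.idle:
                self.running -= 1
                self.idle.notify_all()
```

For example, two intent-installation events scheduled normally run concurrently, and a subsequent `schedule(checkTopo, exclusive=True)` call blocks until both have completed.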

By realizing the above ideas, we greatly improve the flexibility and debuggability of the CHO test.

CHOTestMonkey Framework


Overview

The figure above shows the CHOTestMonkey framework. We abstract all the test logic in the old CHO test into different types of events. Each event stands for an atomic piece of test logic, such as installing an intent, bringing down a link, or checking ONOS status. Basically, we have four event families, including …

There are several ways to inject events into the test: we can still specify a list of events to run in the params file, or we can inject arbitrary events from external scripts or the CLI at any time during the test. Under the hood, a listener receives event triggers from outside and then triggers event generation in the eventGenerator. All generated events go to the eventScheduler, where each transitions from a pending event to a running event. Different scheduling methods can be implemented: we may want to run some events in parallel, or block some events until others finish. For example, we may want to finish all checks before injecting the next failure event, and we can also reschedule events when they fail.
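The trigger-to-generator-to-scheduler flow described above can be sketched as a toy event generator. Everything here (the `EventGenerator` class, the `onTrigger` method, the link-list network model) is an assumption for illustration, not the real CHOTestMonkey API:

```python
import random

# Illustrative only: a trigger names an event, the generator resolves
# arguments such as 'random' against a network model, and the concrete
# event is handed to the scheduler (modeled here as a pending-event list).
class EventGenerator:
    def __init__(self, pendingEvents, links):
        self.pendingEvents = pendingEvents   # stand-in for the eventScheduler queue
        self.links = links                   # toy network model: list of link names

    def onTrigger(self, eventName, *args):
        """Resolve trigger arguments into a concrete event and queue it."""
        if eventName == "NETWORK_LINK_DOWN" and args == ("random",):
            # Resolve 'random' against the network model.
            args = (random.choice(self.links),)
        self.pendingEvents.append((eventName, args))
```

A trigger from the params file, an external script, or the CLI would all end up in the same `onTrigger` path, which is what makes the three injection methods interchangeable.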

Events

 

Event Scheduler

 

Event Generator

 

How to Run CHOTestMonkey

 

.Params

 

Python Script

 

CLI

 

How to Contribute to CHOTestMonkey

Add more events

 
