

Under construction

Background

The CHO (Continuous Hours of Operation) test focuses on testing ONOS longevity. In previous versions of the CHO test, we looped through a predefined sequence of test cases (e.g. intent installation/withdrawal, link down/up, network topology verification) that fully followed the existing TestON structure and logic. However, as the existing CHO test has matured, we have come to realize its limitations, and we are considering a redesign of the CHO test with two main goals:

  1. Simulate long-running operation of ONOS in practical networks;
  2. Improve the debuggability of the CHO test.

Goal 1 requires at least two changes. First, we need a new way to execute test cases (or test logic): a predefined sequence of test logic is not a good simulation of user and network behavior in practical networks. Second, we should allow running multiple test cases in parallel, e.g. installing intents while a network failure happens. For Goal 2, since the CHO test is expected to run for several days or longer, debugging becomes much more difficult, not only because of large log files but also because we cannot interact with the test while it is running (e.g. change test configuration or even test logic in real time). Besides, reproducing failures in the CHO test is always costly.

To address these issues, we propose a new experimental test framework inside TestON for CHO, which we call CHOTestMonkey. The suffix "Monkey" refers both to Chaos Monkey style testing and to 2016, the year of the Monkey. CHOTestMonkey is built on three core ideas:

  1. We break test cases into smaller blocks of test logic called events. Each event is an atomic operation on the SDN network, e.g. installing one intent, bringing down one link, or verifying ONOS status. Test cases can be built by assembling different events, which keeps CHOTestMonkey backwards compatible.
  2. We introduce an event generator, which accepts a list of event generation rules as input and outputs generated events. For instance, it can be called to generate a random link-down event, or a host-to-host intent event according to some network model.
  3. We design an event scheduler to flexibly execute events according to different strategies. Normally events can run in parallel, but some events may need to run after others finish. For example, intent installations can run in different threads, while a topology check event should wait until all pending topology events end. The event scheduler ensures that all generated events run efficiently without conflicting with each other.

By realizing these ideas, we greatly improve the flexibility and debuggability of CHO.
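To illustrate the first idea, here is a minimal Python sketch of events as atomic operations and a test case as an assembly of events. The class names (`Event`, `IntentInstallEvent`, `LinkDownEvent`) and the `run_case` helper are hypothetical and do not reflect CHOTestMonkey's actual code; real events would drive ONOS and Mininet rather than return strings.

```python
# Illustrative sketch only -- not CHOTestMonkey's actual API.
class Event(object):
    """An atomic operation on the SDN network."""
    def run(self):
        raise NotImplementedError

class IntentInstallEvent(Event):
    """Install a single host-to-host intent."""
    def __init__(self, host_a, host_b):
        self.host_a, self.host_b = host_a, host_b
    def run(self):
        # A real event would call the ONOS CLI/REST driver here.
        return "install-intent %s<->%s" % (self.host_a, self.host_b)

class LinkDownEvent(Event):
    """Bring down one link in the data plane."""
    def __init__(self, src, dst):
        self.src, self.dst = src, dst
    def run(self):
        return "link-down %s-%s" % (self.src, self.dst)

def run_case(events):
    """A 'test case' is just an ordered assembly of events, which keeps
    the event-based design backwards compatible with fixed sequences."""
    return [event.run() for event in events]

results = run_case([IntentInstallEvent("h1", "h2"),
                    LinkDownEvent("s1", "s2")])
```

Because each event is self-contained, the same building blocks can be sequenced by a predefined list (old CHO style) or emitted on the fly by an event generator.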

Before going into the details of the framework, here is a list of CHOTestMonkey features:

 

Framework Overview


This is the framework of CHOTestMonkey. We abstracted all the test logic in the old CHO test into different types of events. Each event stands for an atomic piece of test logic, such as installing an intent, bringing down a link, or checking ONOS status. We have four event families, including … There are several ways to inject events into the test: we can still specify a list of events to run in the params file, or we can inject arbitrary events from external scripts or the CLI at any time during the test. Under the hood, a listener receives event triggers from outside and asks the eventGenerator to generate the corresponding events. All generated events go to the eventScheduler, where they transition from pending events to running events. Different scheduling methods can be implemented: we may want to run some events in parallel, or block some events until others finish. For example, we may want to finish all checks before injecting the next failure event, and we can also reschedule events when they fail.
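The scheduling behavior described above can be sketched as follows. This is a simplified, hypothetical `EventScheduler` (not the real eventScheduler): non-blocking events each run in their own thread, while a blocking event (e.g. a check) waits for all previously scheduled events to finish before it runs.

```python
# Simplified sketch of the scheduling idea -- not the actual eventScheduler.
import threading

class EventScheduler(object):
    def __init__(self):
        self.pending = []               # threads of events still running
        self.lock = threading.Lock()    # protects the execution log
        self.log = []                   # order in which events ran

    def schedule(self, name, action, blocking=False):
        if blocking:
            # A blocking event (e.g. a topology check) must wait until
            # all pending events have finished before it runs.
            for thread in list(self.pending):
                thread.join()
            with self.lock:
                self.log.append(name)
            action()
        else:
            # Non-blocking events (e.g. intent installations) run in
            # parallel, each in its own thread.
            def worker():
                with self.lock:
                    self.log.append(name)
                action()
            thread = threading.Thread(target=worker)
            self.pending.append(thread)
            thread.start()

sched = EventScheduler()
sched.schedule("intent-1", lambda: None)
sched.schedule("intent-2", lambda: None)
sched.schedule("topology-check", lambda: None, blocking=True)
```

The two intent events may finish in either order, but the blocking check is guaranteed to run last, mirroring the "finish all checks before the next failure" policy described above.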

