...

3. The Leader/Primary will publish the event stream to a distributed local store. A shared counter will indicate the last event published to the Kafka server; after publishing an event to the Kafka server, the Leader will update the counter accordingly. The Backup will have read-only access to the events posted to the distributed store. All other members will update their local buffers to remove events that have already been published, based on the shared counter value (see the publishing sketch after this list).

4. In case of Leader failure, the Backup will get read/write access to the distributed store. It will try to post the events that were not yet sent to the Kafka server, i.e. the events still present in its local store; the shared counter tells it where to resume (this path is covered by the sketch after this list).

5. As a mechanism for detecting duplicate events, every posted event could carry a sequence number (seq_id), allowing the external app to detect duplicates. The sequence number could be as simple as a timestamp carried by every event (see the duplicate-filter sketch after this list).
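
The following Java sketch illustrates steps 3 and 4 under stated assumptions: the SharedCounter interface is a stand-in for whatever distributed counter primitive the store exposes, and the broker address, topic, and serialization choices are placeholders rather than part of this design. The Leader advances the counter only after Kafka acknowledges a send, so on failover the Backup can run the same publish loop over its own local buffer and the counter-based skip will resume at the first unacknowledged event.

    import java.util.List;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class EventPublisher {

        /** Stand-in for a distributed counter primitive; tracks the
         *  sequence number of the last event acknowledged by Kafka. */
        interface SharedCounter {
            long get();
            void set(long value);
        }

        private final Producer<String, byte[]> producer;
        private final SharedCounter lastPublished;

        EventPublisher(SharedCounter lastPublished) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka:9092");   // placeholder address
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.ByteArraySerializer");
            this.producer = new KafkaProducer<>(props);
            this.lastPublished = lastPublished;
        }

        /** Publish buffered events in order, advancing the shared counter
         *  only after Kafka acknowledges each send. Run by the Leader in
         *  normal operation and by the Backup after a failover. */
        void publish(String topic, List<byte[]> buffer, long firstSeq)
                throws ExecutionException, InterruptedException {
            long seq = firstSeq;
            for (byte[] event : buffer) {
                if (seq > lastPublished.get()) {   // skip events already published
                    producer.send(new ProducerRecord<>(topic, Long.toString(seq), event))
                            .get();                // block until acknowledged
                    lastPublished.set(seq);        // record progress in the shared store
                }
                seq++;
            }
        }
    }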
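
Because the Backup may re-send an event whose acknowledgement the Leader never recorded, consumers should be prepared for occasional duplicates. A minimal consumer-side filter, assuming seq_id values are strictly increasing per topic, might look like this; note that if a raw timestamp is used as the seq_id, two distinct events sharing a timestamp would be indistinguishable, so a per-publisher counter is the safer choice.

    /** Consumer-side duplicate filter; assumes events on a topic carry
     *  strictly increasing seq_id values assigned by the publisher. */
    public class DuplicateFilter {

        private long lastSeen = -1;   // highest seq_id processed so far

        /** Returns true if the event is new; false if it is a duplicate
         *  replayed after a Leader failover. */
        public boolean accept(long seqId) {
            if (seqId <= lastSeen) {
                return false;         // already processed: drop it
            }
            lastSeen = seqId;
            return true;
        }
    }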

DESIGN DECISIONS

  1. There will be one topic per event type. Each external app will be given a unique consumer groupId (the consumer sketch after this list shows the app side of this).
  2. Event subscription by external apps is a two-step process: they must first register, then subscribe to specific event types.
  3. As a first step we will only export Device Events and Link Events to consumers; support for Packet-In and Packet-Out events will be added later.
  4. Once the framework is in place, it should be relatively easy to add support for other event types.
  5. In the scenario where the external app loses connectivity with the Kafka server and does not come back up within the retention period (the time duration for which Kafka will retain messages), the onus is on the non-native app to rebuild its current view of the network state via the existing ONOS REST APIs.
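
From the external app's point of view, decisions 1 and 2 reduce to a plain Kafka consumer once registration has completed. The sketch below assumes illustrative topic names (onos.events.device, onos.events.link), a placeholder broker address, and a made-up group.id; in the real system the unique groupId would be handed out at registration time.

    import java.time.Duration;
    import java.util.Arrays;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ExternalAppConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka:9092");   // placeholder address
            props.put("group.id", "external-app-42");       // unique per registered app
            props.put("key.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                      "org.apache.kafka.common.serialization.ByteArrayDeserializer");

            try (KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(props)) {
                // One topic per exported event type (illustrative names).
                consumer.subscribe(Arrays.asList("onos.events.device", "onos.events.link"));
                while (true) {
                    ConsumerRecords<String, byte[]> records =
                            consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, byte[]> rec : records) {
                        // Decode and process the serialized event here.
                    }
                }
            }
        }
    }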

...