
The application loads a JSON configuration file at startup. This file contains the Kafka configuration.
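A minimal sketch of what such a configuration file might contain is shown below; the property names and values are illustrative assumptions, since this document does not fix a schema.

{
  "bootstrapServers": "localhost:9092",
  "numOfPartitions": 1,
  "numOfReplicas": 1,
  "retentionMs": 86400000
}

Here bootstrapServers would point at the Kafka broker(s), and the remaining properties are examples of per-topic settings the application might apply when creating topics.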

CLUSTERING SUPPORT

  1. We will use a Distributed Work Queue (DWQ) as the underlying primitive.
  2. There will be leadership elections for two separate topics: WORK_QUEUE_PUBLISHER and WORK_QUEUE_CONSUMER (sketches of both roles follow this list).
  3. The leader for WORK_QUEUE_PUBLISHER will publish events to the DWQ.
  4. The leader for WORK_QUEUE_CONSUMER will consume events from the DWQ and send them to the Kafka server.
  5. If a leader fails before it has marked a task consumed from the DWQ as complete, the task (in our case an event) will be returned to the DWQ, and the newly elected leader will pick it up.
  6. There is a rare possibility that the publisher leader fails before it can queue an event to the DWQ; this will result in the loss of that event.
  7. Since there is only one consumer of the DWQ, FIFO ordering of events is guaranteed.
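To make the flow above concrete, the following is a rough Java sketch of the publish side. It assumes ONOS's LeadershipService and a distributed WorkQueue obtained from StorageService; the exact signatures, the queue name, and the class structure are assumptions for illustration, not the final implementation.

import org.onosproject.cluster.ClusterService;
import org.onosproject.cluster.LeadershipService;
import org.onosproject.cluster.NodeId;
import org.onosproject.store.serializers.KryoNamespaces;
import org.onosproject.store.service.Serializer;
import org.onosproject.store.service.StorageService;
import org.onosproject.store.service.WorkQueue;

public class WorkQueuePublisher {

    // Election topic from point 2 above; the queue name is an assumed value.
    private static final String LEADER_TOPIC = "WORK_QUEUE_PUBLISHER";
    private static final String QUEUE_NAME = "onos-kafka-event-queue";

    private final LeadershipService leadershipService;
    private final ClusterService clusterService;
    private final WorkQueue<byte[]> queue;

    public WorkQueuePublisher(LeadershipService leadershipService,
                              ClusterService clusterService,
                              StorageService storageService) {
        this.leadershipService = leadershipService;
        this.clusterService = clusterService;
        // Every instance joins the election; only one node becomes leader.
        leadershipService.runForLeadership(LEADER_TOPIC);
        // Obtain the distributed work queue shared by all controller instances.
        this.queue = storageService.getWorkQueue(QUEUE_NAME,
                Serializer.using(KryoNamespaces.API));
    }

    /** Called for every ONOS event the application wants to export. */
    public void publish(byte[] serializedEvent) {
        NodeId self = clusterService.getLocalNode().id();
        // Only the current WORK_QUEUE_PUBLISHER leader enqueues the event;
        // other instances ignore it, since every instance sees the same events.
        if (self.equals(leadershipService.getLeader(LEADER_TOPIC))) {
            queue.addOne(serializedEvent);
        }
    }
}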
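A similar sketch of the consume side shows the take/complete handshake that underlies the failure-recovery behaviour in point 5: a task is acknowledged only after Kafka has confirmed the write. Again, the ONOS and Kafka calls are used under assumed signatures, and the Kafka topic name is hypothetical.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.onosproject.cluster.ClusterService;
import org.onosproject.cluster.LeadershipService;
import org.onosproject.store.service.Task;
import org.onosproject.store.service.WorkQueue;

public class WorkQueueConsumer {

    private static final String LEADER_TOPIC = "WORK_QUEUE_CONSUMER";

    private final LeadershipService leadershipService;
    private final ClusterService clusterService;
    private final WorkQueue<byte[]> queue;
    private final KafkaProducer<String, byte[]> producer;

    public WorkQueueConsumer(LeadershipService leadershipService,
                             ClusterService clusterService,
                             WorkQueue<byte[]> queue,
                             Properties kafkaProps) {
        this.leadershipService = leadershipService;
        this.clusterService = clusterService;
        this.queue = queue;
        this.producer = new KafkaProducer<>(kafkaProps);
        leadershipService.runForLeadership(LEADER_TOPIC);
    }

    /** One iteration of the consume loop; a no-op on non-leader instances. */
    public void drainOne() {
        if (!clusterService.getLocalNode().id()
                .equals(leadershipService.getLeader(LEADER_TOPIC))) {
            return;
        }
        // take() only checks the task out of the DWQ; if this node dies before
        // complete() is called, the task is returned to the queue and the next
        // leader picks it up (point 5 above).
        queue.take().thenAccept(this::forwardToKafka);
    }

    private void forwardToKafka(Task<byte[]> task) {
        if (task == null) {
            return; // queue was empty
        }
        // The Kafka topic would be chosen per event type (see Design Decisions);
        // "onos-device-events" is an assumed name used here for illustration.
        producer.send(new ProducerRecord<>("onos-device-events", task.payload()),
                (metadata, error) -> {
                    if (error == null) {
                        // Acknowledge only after Kafka confirms the write, giving
                        // at-least-once delivery from the DWQ into Kafka.
                        queue.complete(task.taskId());
                    }
                });
    }
}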

DESIGN DECISIONS

  1. There will be one topic per event type. Each external app will be given a unique consumer groupId (a sketch of such a consumer follows this list).
  2. Event subscription by external apps is a two-step process: they must first register and then subscribe to a specific event type.
  3. As a first step we will only export Device Events and Link Events to consumers; support for Packet-In and Packet-Out events will be added later.
  4. Once the framework is in place it should be relatively easy to add support for other event types.
  5. If an external app loses connectivity with the Kafka server and does not come back within the retention period (the duration for which Kafka retains messages), the onus is on the non-native app to rebuild its current view of the network state via the existing ONOS REST APIs.
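As an illustration of decision 1, a non-native app might consume device events roughly as follows; the broker address, the groupId, and the topic name onos-device-events are assumptions, since this document does not fix them.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ExternalAppConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // Kafka broker (assumed address)
        props.put("group.id", "external-app-1");           // unique groupId assigned to this app
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(props)) {
            // One topic per event type; "onos-device-events" is an assumed name.
            consumer.subscribe(Collections.singletonList("onos-device-events"));
            while (true) {
                ConsumerRecords<String, byte[]> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, byte[]> record : records) {
                    // Deserialize and process the exported ONOS event here.
                    System.out.println("Received event, " + record.value().length + " bytes");
                }
            }
        }
    }
}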
