...
indicates that you are in mininet.
Start up multiple docker instances
Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications. In this tutorial we will use docker to provide encapsulated runtime environments for running instances of ONOS. For Docker CLI help, visit: https://docs.docker.com/reference/commandline/cli/
We will be using docker to spawn multiple ONOS instances, so before we dive into the code, let's provision some docker instances that will run ONOS. First, verify that the ONOS distributed tutorial image is already present on your system:
Code Block |
---|
distributed@mininet-vm:~/onos$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
onos/tutorial-dist latest 666b3c862984 13 hours ago 666.1 MB
ubuntu-upstart 14.10 e2b2af39309a 7 days ago 264.2 MB |
At this point you are ready to spawn your ONOS instances. To do this, we will spawn three docker instances, detached and running in the background; later on we will run an instance of ONOS in each of them. Let's spawn our docker instances:
Code Block |
---|
distributed@mininet-vm:~/onos$ sudo docker run -t -P -i -d --name onos-1 onos/tutorial-dist
<docker-instance-id>
distributed@mininet-vm:~/onos$ sudo docker run -t -P -i -d --name onos-2 onos/tutorial-dist
<docker-instance-id>
distributed@mininet-vm:~/onos$ sudo docker run -t -P -i -d --name onos-3 onos/tutorial-dist
<docker-instance-id> |
If you get the following error message:
Code Block |
---|
distributed@mininet-vm:~$ sudo docker run -t -P -i -d --name onos-1 onos/tutorial-dist
2014/12/11 10:55:53 Error response from daemon: Conflict, The name onos-1 is already assigned to 26d8c84f8a50. You have to delete (or rename) that container to be able to assign onos-1 to a container again.
distributed@mininet-vm:~$ |
it should only happen if you have already created the docker instance; in that case, you only need to start it:
Code Block |
---|
distributed@mininet-vm:~$ sudo docker start onos-1 |
Now you should have three docker instances up and running:
Code Block |
---|
distributed@mininet-vm:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fc08370eb3d0 onos/tutorial-dist:latest "/sbin/init" About a minute ago Up About a minute 0.0.0.0:49168->22/tcp, 0.0.0.0:49169->6633/tcp, 0.0.0.0:49170->8181/tcp onos-3
bc725b09deed onos/tutorial-dist:latest "/sbin/init" About a minute ago Up About a minute 0.0.0.0:49165->22/tcp, 0.0.0.0:49166->6633/tcp, 0.0.0.0:49167->8181/tcp onos-2
26d8c84f8a50 onos/tutorial-dist:latest "/sbin/init" About a minute ago Up About a minute 0.0.0.0:49162->22/tcp, 0.0.0.0:49163->6633/tcp, 0.0.0.0:49164->8181/tcp onos-1
distributed@mininet-vm:~$
|
OK, now that we have all our docker instances up and running, we simply need to set them up with ONOS. To do this we will use the standard ONOS toolset; these are the same commands you would use to deploy ONOS on a VM or a bare-metal machine.
Setting up ONOS in the spawned docker instances (also known as docking the docker)
First, let's start by making sure our environment is correctly setup.
Code Block |
---|
distributed@mininet-vm:~$ cell docker
ONOS_CELL=docker
OCI=172.17.0.2
OC1=172.17.0.2
OC2=172.17.0.3
OC3=172.17.0.4
OCN=localhost
ONOS_FEATURES=webconsole,onos-api,onos-core,onos-cli,onos-rest,onos-gui,onos-openflow,onos-app-fwd,onos-app-proxyarp,onos-app-mobility
ONOS_USER=root
ONOS_NIC=172.17.0.* |
Setting up a 3-node ONOS cluster
We're now going to install and start ONOS on all three docker instances to give us a cluster to work with.
The first thing we need to do is set up passwordless access to our instances; this will save you a ton of time, especially when developing and pushing your component frequently. ONOS provides a script that will push your local key to the instance:
Code Block |
---|
distributed@mininet-vm:~$ onos-push-keys $OC1
root@172.17.0.2's password: onosrocks
distributed@mininet-vm:~$ onos-push-keys $OC2
root@172.17.0.3's password: onosrocks
distributed@mininet-vm:~$ onos-push-keys $OC3
root@172.17.0.4's password: onosrocks |
The password for your instance is onosrocks. You will need to do this for each instance. Now we just need to package ONOS by running:
...
Code Block |
---|
distributed@mininet-vm:~$ onos-install -f $OC1
onos start/running, process 308
distributed@mininet-vm:~$ onos-install -f $OC2
onos start/running, process 302
distributed@mininet-vm:~$ onos-install -f $OC3
onos start/running, process 300
distributed@mininet-vm:~$ |
This has now installed ONOS on your docker instances.
Verifying that ONOS is deployed
...
Code Block |
---|
onos> devices
id=of:0000000100000001, available=true, role=MASTER, type=SWITCH, mfr=Nicira, Inc., hw=Open vSwitch, sw=2.1.3, serial=None, protocol=OF_10
id=of:0000000100000002, available=true, role=STANDBY, type=SWITCH, mfr=Nicira, Inc., hw=Open vSwitch, sw=2.1.3, serial=None, protocol=OF_10
id=of:0000000200000001, available=true, role=MASTER, type=SWITCH, mfr=Nicira, Inc., hw=Open vSwitch, sw=2.1.3, serial=None, protocol=OF_10
id=of:0000000200000002, available=true, role=STANDBY, type=SWITCH, mfr=Nicira, Inc., hw=Open vSwitch, sw=2.1.3, serial=None, protocol=OF_10
id=of:0000000300000001, available=true, role=MASTER, type=SWITCH, mfr=Nicira, Inc., hw=Open vSwitch, sw=2.1.3, serial=None, protocol=OF_10
id=of:0000000300000002, available=true, role=STANDBY, type=SWITCH, mfr=Nicira, Inc., hw=Open vSwitch, sw=2.1.3, serial=None, protocol=OF_10
id=of:0000010100000000, available=true, role=STANDBY, type=SWITCH, mfr=Nicira, Inc., hw=Open vSwitch, sw=2.1.3, serial=None, protocol=OF_10
id=of:0000010200000000, available=true, role=STANDBY, type=SWITCH, mfr=Nicira, Inc., hw=Open vSwitch, sw=2.1.3, serial=None, protocol=OF_10
id=of:0000020100000000, available=true, role=MASTER, type=SWITCH, mfr=Nicira, Inc., hw=Open vSwitch, sw=2.1.3, serial=None, protocol=OF_10
id=of:0000020200000000, available=true, role=STANDBY, type=SWITCH, mfr=Nicira, Inc., hw=Open vSwitch, sw=2.1.3, serial=None, protocol=OF_10
id=of:0000030100000000, available=true, role=STANDBY, type=SWITCH, mfr=Nicira, Inc., hw=Open vSwitch, sw=2.1.3, serial=None, protocol=OF_10
id=of:0000030200000000, available=true, role=MASTER, type=SWITCH, mfr=Nicira, Inc., hw=Open vSwitch, sw=2.1.3, serial=None, protocol=OF_10
id=of:1111000000000000, available=true, role=STANDBY, type=SWITCH, mfr=Nicira, Inc., hw=Open vSwitch, sw=2.1.3, serial=None, protocol=OF_10
id=of:2222000000000000, available=true, role=STANDBY, type=SWITCH, mfr=Nicira, Inc., hw=Open vSwitch, sw=2.1.3, serial=None, protocol=OF_10 |
Let's install reactive forwarding and see if we can forward traffic:
Code Block |
---|
onos> feature:install onos-app-fwd |
In the mininet console:
Code Block |
---|
mininet> pingall
*** Ping: testing ping reachability
h111 -> h112 h121 h122 h211 h212 h221 h222 h311 h312 h321 h322
h112 -> h111 h121 h122 h211 h212 h221 h222 h311 h312 h321 h322
h121 -> h111 h112 h122 h211 h212 h221 h222 h311 h312 h321 h322
h122 -> h111 h112 h121 h211 h212 h221 h222 h311 h312 h321 h322
h211 -> h111 h112 h121 h122 h212 h221 h222 h311 h312 h321 h322
h212 -> h111 h112 h121 h122 h211 h221 h222 h311 h312 h321 h322
h221 -> h111 h112 h121 h122 h211 h212 h222 h311 h312 h321 h322
h222 -> h111 h112 h121 h122 h211 h212 h221 h311 h312 h321 h322
h311 -> h111 h112 h121 h122 h211 h212 h221 h222 h312 h321 h322
h312 -> h111 h112 h121 h122 h211 h212 h221 h222 h311 h321 h322
h321 -> h111 h112 h121 h122 h211 h212 h221 h222 h311 h312 h322
h322 -> h111 h112 h121 h122 h211 h212 h221 h222 h311 h312 h321
*** Results: 0% dropped (132/132 received) |
...
Code Block |
---|
onos> masters
172.17.0.2: 5 devices
of:0000000100000001
of:0000000200000001
of:0000000300000001
of:0000020100000000
of:0000030200000000
172.17.0.3: 4 devices
of:0000000100000002
of:0000000300000002
of:0000020200000000
of:1111000000000000
172.17.0.4: 5 devices
of:0000000200000002
of:0000010100000000
of:0000010200000000
of:0000030100000000
of:2222000000000000 |
Now we should uninstall reactive forwarding so it doesn't get in the way of the next part of the tutorial.
Code Block |
---|
onos> feature:uninstall onos-app-fwd |
At this point, your multi-instance ONOS deployment is functional. Let's move on to writing some code.
...
Code Block | ||
---|---|---|
| ||
public interface NetworkStore {

    /**
     * Creates a named network.
     *
     * @param network network name
     */
    void putNetwork(String network);

    /**
     * Removes a named network.
     *
     * @param network network name
     */
    void removeNetwork(String network);

    /**
     * Returns a set of network names.
     *
     * @return a set of network names
     */
    Set<String> getNetworks();

    /**
     * Adds a host to the given network.
     *
     * @param network network name
     * @param hostId  host id
     * @return updated set of hosts in the network (or an empty set if the host
     *         has already been added to the network)
     */
    Set<HostId> addHost(String network, HostId hostId);

    /**
     * Removes a host from the given network.
     *
     * @param network network name
     * @param hostId  host id
     */
    void removeHost(String network, HostId hostId);

    /**
     * Returns all the hosts in a network.
     *
     * @param network network name
     * @return set of host ids
     */
    Set<HostId> getHosts(String network);

    /**
     * Adds a set of intents to a network.
     *
     * @param network network name
     * @param intents set of intents
     */
    void addIntents(String network, Set<Intent> intents);

    /**
     * Returns a set of intents given a network and a host.
     *
     * @param network network name
     * @param hostId  host id
     * @return set of intents
     */
    Set<Intent> removeIntents(String network, HostId hostId);

    /**
     * Returns a set of intents given a network.
     *
     * @param network network name
     * @return set of intents
     */
    Set<Intent> removeIntents(String network);
} |
Alright so now you have an interface for the NetworkStore, that makes IntelliJ happy. But someone should implement that interface right? Let's create a new class which implements the NetworkStore interface.
Code Block | ||
---|---|---|
| ||
@Component(immediate = true, enabled = true)
@Service
public class SimpleNetworkStore implements NetworkStore {

    private static Logger log = LoggerFactory.getLogger(SimpleNetworkStore.class);

    private final Map<String, Set<HostId>> networks = Maps.newHashMap();
    private final Map<String, Set<Intent>> intentsPerNet = Maps.newHashMap();

    @Activate
    protected void activate() {
        log.info("Started");
    }

    @Deactivate
    protected void deactivate() {
        log.info("Stopped");
    }
} |
Now as an exercise you must implement the methods of SimpleNetworkStore. Don't hesitate to ask questions here!
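If you get stuck, here is one possible shape for the in-memory bookkeeping, sketched with plain JDK types so it runs standalone. The class name InMemoryNetworkStore is hypothetical, String stands in for HostId, and the ONOS annotations and intent methods are omitted; your real implementation should follow the NetworkStore interface exactly.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Simplified stand-in for SimpleNetworkStore: String replaces HostId and
// the @Component/@Service machinery is omitted so the sketch is runnable.
class InMemoryNetworkStore {

    private final Map<String, Set<String>> networks = new HashMap<>();

    // Creates a named network; an existing network is left untouched.
    void putNetwork(String network) {
        networks.putIfAbsent(network, new HashSet<>());
    }

    // Removes a named network and all of its host associations.
    void removeNetwork(String network) {
        networks.remove(network);
    }

    Set<String> getNetworks() {
        return networks.keySet();
    }

    // Adds a host; returns the updated host set, or an empty set if the
    // host was already in the network (mirroring the NetworkStore javadoc).
    Set<String> addHost(String network, String hostId) {
        Set<String> hosts = networks.get(network);
        if (hosts == null || !hosts.add(hostId)) {
            return new HashSet<>();
        }
        return hosts;
    }

    Set<String> getHosts(String network) {
        return networks.getOrDefault(network, new HashSet<>());
    }

    public static void main(String[] args) {
        InMemoryNetworkStore store = new InMemoryNetworkStore();
        store.putNetwork("net1");
        store.addHost("net1", "h1");
        store.addHost("net1", "h2");
        System.out.println(store.getHosts("net1").size()); // 2 hosts tracked
    }
}
```

The same pattern (a map from network name to a set, guarded by putIfAbsent) carries over to the intentsPerNet map for the intent methods.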
Add some Intents
Now that we have a simple store implementation, let's have byon program the network when hosts are added. For this we are going to need the intent framework, so let's grab a reference to it in the network manager.
Code Block | ||
---|---|---|
| ||
@Reference(cardinality = ReferenceCardinality.MANDATORY_UNARY)
protected IntentService intentService; |
And we will need the following code to implement the mesh of the hosts in each virtual network.
Code Block | ||
---|---|---|
| ||
private Set<Intent> addToMesh(HostId src, Set<HostId> existing) {
    if (existing.isEmpty()) {
        return Collections.emptySet();
    }
    Set<Intent> submitted = new HashSet<>();
    existing.forEach(dst -> {
        if (!src.equals(dst)) {
            Intent intent = new HostToHostIntent(appId, src, dst);
            submitted.add(intent);
            intentService.submit(intent);
        }
    });
    return submitted;
}

private void removeFromMesh(Set<Intent> intents) {
    intents.forEach(i -> intentService.withdraw(i));
} |
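The mesh logic above pairs each newly added host with every host already in the network, so a network of n hosts accumulates n(n-1)/2 host-to-host intents in total. A self-contained sketch of just the pairing (MeshDemo is a hypothetical name; plain strings stand in for HostId and HostToHostIntent):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

class MeshDemo {

    // Pairs the new host with every existing host, skipping self-pairs;
    // each pair stands in for one HostToHostIntent submission.
    static List<String> addToMesh(String src, Set<String> existing) {
        List<String> pairs = new ArrayList<>();
        for (String dst : existing) {
            if (!src.equals(dst)) {
                pairs.add(src + "<->" + dst);
            }
        }
        return pairs;
    }

    public static void main(String[] args) {
        Set<String> hosts = new LinkedHashSet<>();
        int totalIntents = 0;
        for (String h : new String[] {"h1", "h2", "h3", "h4"}) {
            totalIntents += addToMesh(h, hosts).size();
            hosts.add(h);
        }
        // 4 hosts -> 4*3/2 = 6 pairs in the full mesh
        System.out.println(totalIntents);
    }
}
```

This is why removeFromMesh takes the set of intents returned by addToMesh: withdrawing exactly those intents undoes the mesh contribution of one host without disturbing the rest.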
Verify that everything works
...