Meetings related to this use case happen every two weeks; all of the slides from past meetings are available here.
Below are some notes and direct links.
Jan 20 2016
notes
- ON.LAB
- A lot of work with Lumentum; no major blockers.
- Multi-layer control plane - link discovery, bandwidth discovery, etc. are progressing
- The REST API (Swagger) for the MEF SCA app is progressing, but compatibility with the ONOS REST API is the challenge.
- device availability
- OF13 meter functions: neither Accton+OF-DPA nor OVS supports them, but CPqD does.
- power budget
- Lumentum will have a design later today, to be shared with e.g. Calient
- Many connector types, so converters are needed - how many, and which types?
- What are the distances between the ROADMs, transponders, and the optical switch?
- request from Ciena: if orchestrating marketing/messaging, work with community manager.
Dec 16 2015
notes
Started integrating the different components
based on simulators - Lumentum's at the moment - working with Ciena and Fujitsu for theirs
Delegation of hierarchy layers in ctrl plane
MEF services - how to leverage XOS to add VNF/cloud-type services in datapath
Power budget calculations w/ Lumentum - sanity check of physical testbed
Question: what's the accepted range of power levels?
We're waiting on Fujitsu for transponder datasheet.
We should have it for Ciena waveserver.
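The power-budget sanity check mentioned above can be sketched as a simple loss calculation. All figures below are placeholder assumptions for illustration, not vendor data; the real numbers come from the transponder datasheets we are waiting on.

```python
# Illustrative optical power-budget check (placeholder figures, not vendor data).

def power_margin_db(launch_dbm, span_km, fiber_loss_db_per_km,
                    connector_losses_db, roadm_insertion_db, rx_sensitivity_dbm):
    """Return the margin (dB) between received power and receiver sensitivity."""
    total_loss = (span_km * fiber_loss_db_per_km
                  + sum(connector_losses_db)
                  + sum(roadm_insertion_db))
    received_dbm = launch_dbm - total_loss
    return received_dbm - rx_sensitivity_dbm

# Example: 0 dBm launch, 40 km of fiber at 0.25 dB/km, four 0.5 dB connectors,
# two ROADM passes at 6 dB each, receiver sensitivity of -26 dBm.
margin = power_margin_db(0.0, 40, 0.25, [0.5] * 4, [6.0, 6.0], -26.0)
print(f"link margin: {margin:.1f} dB")  # positive margin => link should close
```

A positive margin answers the "accepted range of power levels" question for one link; a negative one means attenuators, amplifiers, or a shorter span are needed.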
Bill Rembert to identify requirements for a minimum set of capabilities for carrier Ethernet services
Pretty confident in terms of progress
NETCONF simulator - Fujitsu has one, but is figuring out how to get it signed off
testbed - rack and power available, but no fiber for interconnect, etc. (Jimmy Jin: Oplink can help.)
equipment/tools to test connectivity, throughput, gather stats, etc. still needed
Demo at both ONF and OFC - likely a booth for ON.Lab, with equipment on the show floor
Dec 8 2015
Discussion about which portions of the MEF specs relate to the use case.
Dec 3 2015
- (no slides)
notes
- A few responses to the previous meeting's questionnaire are missing, notably Fujitsu's
- Company shutdown starts the 3rd week, but we (ON.Lab use case members) will probably still be responsive
- MEF - On OpenCE
- Plan to work alongside (E)CORD
- Aims:
- Create MEF service components; work with this group on what's needed for realization
- Show services (connectivity, OEM, orchestration...), including ones amenable to cloud setting
- We should pick and choose a subset of topics to cover within MEF spec.
- Questions for MEF work: Is it focused on LSO, or is it something else?
- Initial work will be 'basic lower layer specs' (see above)
- Extend to include orchestration and CORD-like services
Nov 19 2015
notes
Questionnaire for all participants with devices to be part of the use case
These are part of the slides; please answer and let us know about anything missing
Need to understand physical parameters of different components (power, signal qualities, etc)
Basically we ask: can we make it work?
a preliminary list
link design/engineering when we put everything together?
testbed (details)
Can anyone lend us fiber? What test gear might we need (traffic generators, etc.)?
it's possible to emulate CORD in Mininet, but it's better if we have real equipment.
how many U's is our test rack?
What are the functions of this PoC?
L2 VPN, video delivery (E2E view of idea)
Two main challenges - the ROADM platform, and multi-controller coordination
CORD components - requirements? There's still some internal debate on how much of CORD we want to leverage.
What's needed for E2E provisioning? CORD in this respect is for connectivity
Do we want to introduce services?
Nov 4 2015
notes
- Control plane topology - we'll stick with one logical controller that sees all components (transponders, OCS/cross-connects, ROADMs, etc.)
- Connectivity matrix model (port-to-port connectivity constraints in DWDM devices) for correct routing
- Sample model based on YANG (a generic device connectivity model), but it could be PCEP or anything else
- For passing info to ONOS
- i.e. "vendor X's transponder can't be reached from transponder of vendor Y"
- Perhaps a tentative API on the northbound, especially since we don't support YANG on the northbound
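The connectivity matrix above can be sketched as a set of allowed port pairs that a routing computation consults. This is an illustrative sketch only - the class, port numbers, and helper API are made up, not part of the actual YANG model or ONOS code.

```python
# Sketch of a port-to-port connectivity matrix for a ROADM, the kind of
# constraint the model would carry northbound for correct routing.
# Class name, API, and port numbers are hypothetical.

class ConnectivityMatrix:
    def __init__(self):
        self._allowed = set()  # (ingress_port, egress_port) pairs

    def allow(self, ingress, egress):
        self._allowed.add((ingress, egress))

    def can_connect(self, ingress, egress):
        return (ingress, egress) in self._allowed

# e.g. ports 10/11 are add/drop ports for different vendors' transponders,
# ports 1/2 are line ports
m = ConnectivityMatrix()
m.allow(10, 1)   # vendor X transponder -> line east
m.allow(11, 2)   # vendor Y transponder -> line west
print(m.can_connect(10, 1))  # True
print(m.can_connect(10, 2))  # False: routing must avoid this path
```

A path computation would then reject any hop whose (ingress, egress) pair is absent from the matrix, e.g. "vendor X's transponder can't be reached from vendor Y's".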
- PoC storyboard.
- Services that convey SLA + connectivity (like E-Line), and traditionally multicast services shifting to on-demand (like video)
- CORD controllers (segmentrouting, vRouter)
- our PoC doesn't focus on access technologies (G.Fast/GPON) connecting to CE
- Metro cluster
- uses aforementioned connectivity matrix between vRouter and ROADM
- has no insight into CORD section of network
- What is our inter-controller communication mechanism?
- An RPC provider (bidirectional, using gRPC as a starter) on the metro controller side + a topology abstraction app on the CORD controller
- ROADMs w/ two transponders each from different vendors, connected to CORD-type deployment
- All parts fit into a single rack (ROADMs/transponders, CORD HW + ctrls, Metro controllers)
- Assume just one CORD rack as both a source and sink of service traffic (this is mostly for frugality).
- We should be careful to show the traffic hair-pin route.
- Also, hairpins might not be possible between two different vendors' transponders.
- What are we showing?
- Provided the transponders behave, we'll be able to demo interoperability at the ROADM level.
- We might be able to demo data-plane interoperability at the transponder level if we turn off FEC and the modulation schemes match,
- but it's a different problem that we may or may not be able to solve - if we can't, we need matching vendor endpoints.
- Thoughts/comments?
- GUI and observability? We need to think about it soon.
- Supervisory channel/topology discovery amongst ROADMs - leave out for now and use network config system, though we will need it eventually.
- Need to have vendors sanity check timeline/APIs
Oct 21 2015
notes
- Metro architecture: What are the levels of interaction between CORD and metro controllers?
- One idea is to have the metro controller see a simplified topology for each CORD domain (i.e., a big switch).
- How to see resources outside of the CO? How to look into available resources for the full path, e.g. for QoS?
- What is the response/fallback in case of failure? Who has what visibility into resource information?
- It would be nice to have an interface (managerial/visualization/etc) to peer into everything.
- The displayed data doesn't have to be literally available everywhere, and the visualization can be based on aggregated info
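The "big switch" idea above amounts to collapsing a CORD domain's internal topology into one logical device that exposes only its boundary links. A minimal sketch, with made-up device/port names:

```python
# Sketch of the "big switch" abstraction: the metro controller sees a CORD
# domain collapsed into a single logical switch exposing only border ports.
# Device names, port numbers, and the 'bigswitch' label are hypothetical.

def abstract_domain(domain_links, internal_devices):
    """Collapse (src_dev, src_port, dst_dev, dst_port) links: drop purely
    internal links and remap internal endpoints onto one logical device."""
    exposed = []
    for src_dev, src_port, dst_dev, dst_port in domain_links:
        src_in = src_dev in internal_devices
        dst_in = dst_dev in internal_devices
        if src_in and dst_in:
            continue  # purely internal link: hidden from the metro controller
        if src_in:
            src_dev = "bigswitch"
        if dst_in:
            dst_dev = "bigswitch"
        exposed.append((src_dev, src_port, dst_dev, dst_port))
    return exposed

links = [
    ("leaf1", 1, "spine1", 1),        # internal fabric link
    ("spine1", 2, "metro-roadm", 3),  # boundary link to the metro network
]
print(abstract_domain(links, {"leaf1", "spine1"}))
# -> [('bigswitch', 2, 'metro-roadm', 3)]
```

The metro controller then routes against the abstracted view, while detailed resource and failure information stays inside the CORD controller - which is exactly the visibility question raised above.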
- CORD sites (Huawei): E-Line across metro using CORD architecture
- clarifications - VNFs instantiated by OpenStack (XOS), and ONOS learns about them via XOS's calculation results. XOS in turn knows about the services/chaining requirements since it creates the VMs/instantiates services.
- E-Line storyboard: XOS and interaction between local/global controllers? Open question (current capabilities of XOS, etc.)
- VNF - a VM/Docker container or a bundle in ONOS? The former; ONOS just sets up flows to chain them together
- There are two types of services, per-subscriber or multi-tenant.
- ONOS just sees hosts, or maybe even just switches, and acts on the traffic
- Disaggregated ROADM: Is the device controller per device, or per rack/racks?
- The mapping should be tunable depending on need of visibility to one versus all devices
- Why not provide ONOS as a VNF? We probably don't want 20 ONOS instances for 20 racks, or per-domain configs, and that would fit into the model of global orchestration (which we need)
Sep 24 2015
Sep 9 2015