Next week’s HeavyReading event “OSS in the Era of SDN and NFV” should be interesting. It’s certainly timely: judging by last month’s SDN Congress in The Hague, there’s a real lack of clarity over how established OSS will fit with the emerging virtualized functions.
As highlighted elsewhere, published reference architectures for NFV all include a reference to an OSS function: a casual box at the edge of a diagram, representing hundreds of millions of dollars’ worth of accumulated wisdom and years – decades – of development cost.
OSS-heads could be forgiven for feeling left off the virtualization party’s invite list.
But the industry is now starting to ask questions about how that OSS function will work with the new virtualized functions and management (let’s shorthand this with “orchestration” for now) systems. More specifically, the discussion is about moving on from the technology of interoperating to the practicalities of interoperating.
At the SDN Congress last month, several people referred to NFV efforts reaching a plateau, a hiatus, a pause, while the wider implications of NFV came into view on the horizon. That doesn’t represent a failure, quite the reverse – it suggests that NFV is important enough to require assimilation into the full family of business and operational support systems, with all their quirks.
We already know that there is a general concern about the readiness of today’s OSS to unlock the potential of virtualization. The question is how telcos are actually responding to that. Just last week, LightReading reported that Telecom Italia suggested that the alternative to transformation isn’t after all (as we were supposed to believe) “death” but in fact “don’t”. [To be more precise, their conclusion was a sort of overlay, but that’s hardly been part of the industry lexicon in recent years.]
There’s no question that the ideas and implications of SDN and NFV represent a very significant challenge to conventional wisdom in telecom, at all levels. How many presenters have recently acknowledged that the biggest challenges are not technical but cultural? That this requires a complete rethink of the vendor/buyer relationship?
The current comparison of rapid, collaborative, open-source-based development against formal standardisation via groups like TMForum and ETSI actually reflects the tension between the new and the established. And that is right and good – competition in ideas is healthy, and if virtualization doesn’t necessitate creative thinking, what does?
Right now, virtualization and OSS seem like two tectonic plates beginning to collide. The result will be either some new peaks of capability or one or the other being subducted back into the depths.
So here are some questions I’m hoping will get some airtime next week.
- How are OSS leaders adjusting their OSS plans to support the arrival of virtualized functions?
- Do OSS leaders see NFV plateauing as merely a box-replacement enabler?
- What’s the point of a service catalogue in a world where new types of services can be created on the fly?
- Which OSS-bound processes will be most impacted? In what order?
- Will self-service portals replace (or simply bypass) Order Management systems?
- How will operators decide whether and when to build out physical vs virtual resources? Will one single process manage that? Or will we have two (or more!) stacks? Silo, anyone?
- Will rapid, dynamic changes to resources result in fragmentation, like some colossal, cloud-based hard disk?
- Who (or what) orchestrates the orchestrators?
- What proportion of spending on OSS is being diverted to support virtualization?
- What is the most significant change made so far to OSS fundamental drivers and architectures?
As a vendor with a foot in both camps, of course, we have a view. But it starts with asking “what’s best for the business?” and how that can be determined – whatever services consist of.
We look forward to seeing you there!