While “optimisation” is self-evidently A Good Thing in principle, it isn’t always clear what exactly it means in practice – with frustrating consequences. Witness telecom analyst Dean Bubley’s recent comment:
Does that mean we should disregard every claim of optimisation? Or can we actually solve the problem?
Part of the problem is that “optimisation” always comes with a frame of reference, whether that is explicitly stated or not. As Dean’s observation implies, what’s optimal from a technology perspective may be anything but when viewed from a financial or business-strategy one.
Anyone looking to truly optimise a process or a design should have two requirements in mind:
- A way to factor in as many objectives and constraints as possible – more is better (if harder)
- An objective measure of outcome desirability (i.e. “what makes option A better than option B?”)
Objectives and Constraints
In a typical telco or OTT provider, there are multiple objectives and constraints in play at the same time. For example:
| Domain | Objective | Constraint |
|---|---|---|
| Technical | Route a request for service over the shortest path possible… | …but meet requirements for resilience |
| Commercial | Maximize profit on services delivered… | …but keep within capex spend limits |
| Regulatory | Minimize carbon footprint… | …but don’t route traffic through politically sensitive territories |
All of these are perfectly reasonable and valid objectives and constraints. And it’s easy to see how optimising in one domain alone (say, shortest-path routing) can lead to a sub-optimal outcome in another (providing service at a loss).
Clearly, the challenge of “optimizing” becomes more complex as you consider more objectives and constraints. However, the value also increases: less internal to-and-fro, and a faster arrival at a better outcome overall.
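As a toy illustration of the point above, here is a short Python sketch of how hard constraints reshape a “shortest path” decision. Everything in it is invented for illustration: the routes, the numbers, the capex limit and the sensitive-territory list are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    hops: int            # technical objective: fewer is better
    redundant: bool      # technical constraint: must be resilient
    capex: float         # commercial constraint: spend limit
    profit: float        # commercial objective
    territories: set     # regulatory constraint

SENSITIVE = {"territory-X"}   # hypothetical politically sensitive territories
CAPEX_LIMIT = 100.0           # hypothetical capex cap

def feasible(r: Route) -> bool:
    """Hard constraints: resilience, capex cap, no sensitive territories."""
    return r.redundant and r.capex <= CAPEX_LIMIT and not (r.territories & SENSITIVE)

routes = [
    Route("A", hops=3, redundant=False, capex=40.0,  profit=25.0, territories=set()),
    Route("B", hops=5, redundant=True,  capex=80.0,  profit=18.0, territories=set()),
    Route("C", hops=4, redundant=True,  capex=120.0, profit=30.0, territories=set()),
]

# Optimising on hop count alone picks A; filtering by the full set of
# constraints first, then maximising profit, picks a different answer.
shortest = min(routes, key=lambda r: r.hops)                   # route A
best = max(filter(feasible, routes), key=lambda r: r.profit)   # route B
print(shortest.name, best.name)
```

The structure matters more than the numbers: constraints act as filters on the candidate set, and only then does an objective rank what remains.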
Time adds an extra dimension to the data set. That is, understanding how current trends or future events might affect the immediate decision. At a simple level, knowing that a network node is due to be decommissioned might influence a right-now decision, in order to avoid additional work in the near future.
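The decommissioning example can be sketched the same way: penalise candidates that touch a node due to be retired within a look-ahead window. The node names, dates and horizon below are all hypothetical.

```python
from datetime import date

DECOMMISSION = {"node-7": date(2025, 6, 1)}   # hypothetical planned shutdowns
HORIZON_DAYS = 180                            # hypothetical look-ahead window

def rework_risk(path: list, today: date) -> int:
    """Count nodes on the path due to be retired within the horizon."""
    return sum(
        1 for node in path
        if node in DECOMMISSION
        and (DECOMMISSION[node] - today).days <= HORIZON_DAYS
    )

# A right-now decision informed by a future event: prefer the path
# that avoids near-term rework.
paths = [["node-1", "node-7", "node-3"], ["node-1", "node-5", "node-3"]]
best = min(paths, key=lambda p: rework_risk(p, date(2025, 3, 1)))
print(best)
```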
The more objectives and constraints the optimisation process can handle simultaneously, the better (more optimal) the outcome has a chance of being.
An Objective Measure of Desirability

Even with a way to model many orthogonal or competing constraints, it’s also vital to have an objective way to compare Outcome A with Outcome B.
In most cases, this is surprisingly easy.
Let’s imagine an Enterprise Account Manager for a global service provider talking to her pre-sales engineer:
“Sunil, how’s that 50-site VPN design for GlobalCorp coming?”
“All good – I’ve narrowed it down to two options. I can satisfy all the requirements. Option A is technically neater, but option B is more profitable.”
Can you guess which option is more likely to be preferred?
By and large, effect on profit is the clearest, simplest and most compelling reason to prefer any design or decision over any other. If we’re going to talk optimisation, surely the ultimate measure of that is impact on the bottom line?
The key part is that the optimisation processing must already have factored in as many constraints as possible.
Optimising is an Attitude, Not a Technology
With this perspective, we can see why so-called “optimisation” can lead to sub-optimal results: the use of a narrow frame of reference, or a measure of “good” that is either subjective or parochial.
Optimising certainly requires technology, just as any information-rich problem set does. Artificial Intelligence is particularly well-suited to solving the multiple constraint problems that telecom and OTT providers routinely face.
But just as important is the attitude that seeks to understand what is ultimately good for the business, and to actively encapsulate the multiple constraints and perspectives required to arrive at truly optimal decisions.
For more on approaches to optimisation, see here.