Software Defined Networking is dead, long live Software Defined Networking.
Software Defined Networking (SDN) never had a clear role in the traditional enterprise. It arrived in the guise of OpenFlow as a solution looking for a problem, and the only niche it managed to settle into was network monitoring.
Then it resurfaced as network virtualization, looking to power the next generation of data center migrations from traditional virtual environments into private and hybrid cloud. The initial hype had SDN transforming enterprise data centers and delivering huge savings in infrastructure operations by enabling workloads to move around seamlessly.
The reality is much more prosaic. What happened to SDN is what happens to the vast majority of enterprise technology. It took years to mature to a state of operational readiness, found the right set of use cases that drive implementation, and became another (integral) component of the enterprise infrastructure stack.
And it is as a component of the overall stack that SDN has real impact. Virtualizing the network has enabled the creation of a fully automated, fully orchestrated software defined data center that can be deployed on commodity hardware. SDN is no longer seen in isolation but through the prism of the automation and orchestration it enables, which is where the real value lies.
From where we currently stand, if you look at this software defined data center minus the commodity hardware, you see basically a mainframe: a better, faster, cheaper mainframe running on commodity hardware. This mainframe effect has most significantly transformed two parts of the enterprise ecosystem: enterprise operations and the enterprise supply chain.
The enterprise supply chain in the data center has become a zero-sum game where the winner takes all. Enterprises no longer source servers from one supplier, networks from another, and storage from a third, and the same goes for the virtual layer. Instead there is a single supplier providing the infrastructure, the software to virtualize it, the software to automate and orchestrate it, the software to operate and maintain it, and the software to provision workloads on top of it.
Since a single vendor is traditionally unlikely to have the breadth of products to provide all of the above unless it sells a complete mainframe, vendors become critically dependent on the strength of their ecosystem of partners in order to be successful. Or, as in the case of Dell/EMC, they try to assemble a consortium of companies that can fulfill all the infrastructure needs of an enterprise. And no, this is not the same as hyperconverged infrastructure, since it goes much broader and much deeper than what hyperconverged infrastructure vendors can provide on their own.
The second major transformation is within enterprise operations. Going back to the mainframe days, an operator had to understand all components of the system, how they were integrated, and how they interacted with each other. There was much less specialization, and much of the complexity was abstracted and hidden under the hood. Only after the mainframe started being broken up into its integral components with the rise of client/server computing did we start specializing our skills into network, storage, and server admins and engineers.
And now we have come full circle with the emergence of the Full Stack Engineer (or DevOps Engineer, or Site Reliability Engineer). This is a mythical creature that understands all components of the stack and can operate the enterprise private or hybrid cloud. All enterprises going through the software defined data center transformation are leaving no stone unturned trying to discover said mythical creatures. However, as most discover sooner or later, rather than looking externally they will need to look within their own ranks to find them.
Enterprises need to invest in training and continuous improvement and push their existing infrastructure specialists to start picking up new skills. Every operator within a modern software defined data center needs to understand automation, orchestration, and continuous integration/continuous delivery (CI/CD). Every operator needs to understand the basic computer science principles of object-oriented programming, source control, and versioning, and have at least some ability to read and write simple scripts.
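To make "simple scripts" concrete, here is a purely illustrative sketch of the kind of script such an operator should be comfortable reading and writing: a declarative-style check that compares a desired network state (as it might be kept in version control) against the actual state read from a device, and reports the changes needed to converge. All names and values are hypothetical, not taken from any real SDN controller or vendor API.

```python
def plan_changes(desired, actual):
    """Return (to_add, to_remove): what must change so actual matches desired."""
    to_add = sorted(set(desired) - set(actual))      # present in desired, missing in actual
    to_remove = sorted(set(actual) - set(desired))   # present in actual, not wanted
    return to_add, to_remove

if __name__ == "__main__":
    desired_vlans = {10, 20, 30}  # hypothetical desired state from source control
    actual_vlans = {10, 40}       # hypothetical state read from a device
    add, remove = plan_changes(desired_vlans, actual_vlans)
    print(f"add VLANs {add}, remove VLANs {remove}")
```

The point is not the ten lines of Python themselves but the habit they represent: expressing intended state declaratively, diffing it against reality, and automating the reconciliation rather than applying changes by hand.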
Large enterprises operating at scale will always need specialists with deep knowledge of each component of the stack. The above is just a broad foundational base from which each specialist has to build their skill set. And we all need to better understand the dependencies of the adjacent components and of the system as a whole: the network specialist needs some knowledge of compute, storage, virtualization, and the private cloud platform.
Complexity within the software defined data center is increasing, not decreasing. However, it is abstracted and hidden from sight by an extra layer of software, which makes the system simpler to operate until something breaks down. To draw an analogy with cars: a car with an automatic transmission is simpler to drive, but there is more complexity within the drivetrain. Not everyone who operates a car needs to be a mechanic, but every mechanic needs to know how to operate a car.