Unless you have been living under a rock, you have probably heard of the new and innovative networking movement sweeping datacenters worldwide. This new movement is called Software Defined Networking or “SDN” and it has grown to pose quite a threat to traditional networking ideas.

What is SDN?

By definition, SDN is an approach to networking in which network logic is decoupled from hardware and given to a software application called a controller. In simpler terms, SDN removes the network operating system from each individual device and moves it to a server. Switches are then given rules or “flows” by the controller which describe how to forward specific traffic across the network. The result is a network that is faster, more efficient, and far easier to innovate on.
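To make the split concrete, here is a toy sketch of the idea in Java. The names (`flowTable`, `forward`, `controllerPacketIn`) are invented for illustration and are not any real controller's API: the "switch" only does table lookups, and on a miss it asks the "controller", which installs a rule so later packets take the fast path.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy illustration of the SDN split: the switch only looks up installed
// flows, while the controller decides which rule to install on a miss.
public class FlowTableDemo {
    // A flow rule maps a match (here, just a destination IP) to an action.
    static final Map<String, String> flowTable = new LinkedHashMap<>();

    // Switch data path: forward by flow-table lookup only.
    static String forward(String dstIp) {
        String action = flowTable.get(dstIp);
        if (action != null) return action;   // fast path: rule already installed
        return controllerPacketIn(dstIp);    // table miss: ask the controller
    }

    // Controller logic: compute a rule once, then push it to the switch.
    static String controllerPacketIn(String dstIp) {
        String action = dstIp.startsWith("10.") ? "output:port1" : "output:port2";
        flowTable.put(dstIp, action);        // install the flow on the switch
        return action;
    }

    public static void main(String[] args) {
        System.out.println(forward("10.0.0.5"));     // miss -> controller installs rule
        System.out.println(forward("10.0.0.5"));     // hit  -> pure table lookup
        System.out.println(forward("192.168.1.9"));  // different match, different action
    }
}
```

The first packet of a flow costs a round trip to the controller; every subsequent packet is handled by the switch alone, which is why forwarding stays at line speed.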

Traditional network vs SDN network

An overview of a Software Defined Network

Why is SDN better?

Network efficiency and speed are improved with SDN in several ways…

  • Network processing is handled by a much more able-bodied server 
  • Switches forward packets at line speed, since they only make forwarding decisions 
  • Centralized network control 
  • Custom defined network behavior 
  • Virtualized and sliced networks 
  • Multi-vendor device interoperability 
  • The ability to pay for hardware alone, without unnecessary added features 

The need for innovation

Networks today are not evolving quickly enough to satisfy the growing needs of consumers. Networks are built using switches, routers, and other devices from various vendors that have become exceedingly complex because they implement protocols standardized by the IETF and use proprietary interfaces. Many network administrators need features tailored to their needs, a request that is very difficult to fulfill with standardized protocols and proprietary interfaces. SDN solves this problem by allowing network administrators to define network behaviour that suits their needs.

Although SDN is in its early stages, research and development are already under way. One very popular and developer-friendly controller is provided by the OpenFlow-based startup Big Switch Networks. Big Switch actively supports “Floodlight”, an entirely open source OpenFlow controller written in Java and licensed under the Apache license. You can clone the project from Github and start programming your own custom network behaviour in a matter of minutes. Bringing an open source option to networking not only gives administrators choice, it gives anyone a platform to innovate with. Meanwhile, large companies like Google have already deployed their own private implementations of OpenFlow in their datacenters.
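To give a flavor of the kind of behaviour such a controller module implements, here is a simplified sketch of MAC-learning (learning switch) logic in plain Java. This is the classic example used by OpenFlow controllers, but the class and method names below are invented for illustration and are not Floodlight's actual module API:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of the MAC-learning logic a typical OpenFlow
// controller module implements (plain Java, not Floodlight's API).
public class LearningSwitch {
    public static final int FLOOD = -1;               // send out all ports
    private final Map<String, Integer> macToPort = new HashMap<>();

    // Called for each packet a switch sends up to the controller.
    public int packetIn(String srcMac, String dstMac, int inPort) {
        macToPort.put(srcMac, inPort);                // learn where srcMac lives
        return macToPort.getOrDefault(dstMac, FLOOD); // known port, or flood
    }

    public static void main(String[] args) {
        LearningSwitch sw = new LearningSwitch();
        System.out.println(sw.packetIn("aa:aa", "bb:bb", 1)); // -1: dst unknown, flood
        System.out.println(sw.packetIn("bb:bb", "aa:aa", 2)); // 1: learned earlier
    }
}
```

In a real Floodlight module this decision would be translated into a flow rule pushed down to the switch, so the controller only sees the first packet of each flow.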

Marist, IBM, and OpenFlow

Here at the IBM/Marist Joint Study we are developing an OpenFlow testbed to evaluate the effectiveness of different OpenFlow devices and controllers. Our goal is to contribute our findings back to the community in hopes of advancing the adoption of OpenFlow in modern networks.

So where does the Mainframe come into play?

As an intern at the IBM/Marist Joint Study I have the privilege of working with other interns involved in various mainframe related projects. Consequently we share rack space in the same datacenter that holds our very own mainframe, used to support the research done by our fellow interns. After joking around about running an OpenFlow controller on the Z, we realized that we may have a very rare opportunity on our hands. After all, how many OpenFlow researchers have a z114 dedicated solely for research in the same datacenter as their own rack?

As a team we are now taking steps toward researching the benefits of combining these two robust enterprise solutions. We imagine that the mainframe could be a favorable foundation for a robust, scalable, and efficient enterprise software defined network: it would be able to handle a high volume of network processes, scale in the event of a traffic spike, and maintain high availability, all while running as efficiently as possible.

We have already begun evaluating the benefits of running OpenFlow controllers on the Z ensemble. We teamed up with our fellow intern Doug Rohde and started by configuring and running “CBench”, a controller benchmarking tool. Running it on a zBX blade, we averaged nearly 3 million flows per second. Although there is no thoroughly documented research on using CBench with Floodlight, other researchers have reported similar results on comparable systems.

Since the mainframe is not our team’s primary focus, we can only give the concept so much attention. For this reason progress is slow, but we have some more ideas that we think will yield valuable results. At this point our next step is to test and benchmark the controller on zLinux. The rest of our high level ideas will be presented next week in detail by my partner, Ryan Flaherty.

About the Author:

Jason Parraga
B.S. Computer Science, Marist College (In Progress)

Jason is currently a Junior doing OpenFlow research for the IBM/Marist College Joint Study. As an OpenFlow researcher Jason is a hybrid student with a passion for Computer Science as well as Information Technology. This allows him to apply his knowledge of networking concepts by programming modules and functions for open source OpenFlow controllers such as Floodlight.
Connect with Jason on LinkedIn
