Unless you have been living under a rock, you have probably heard of the new and innovative networking movement sweeping datacenters worldwide. This new movement is called Software Defined Networking or “SDN” and it has grown to pose quite a threat to traditional networking ideas.

What is SDN?

By definition, SDN is an approach to networking in which network logic is decoupled from hardware and given to a software application called a controller. In simpler terms, SDN removes the network operating system from each individual device and moves it to a server. Switches are then given rules, or "flows," by the controller that describe how to forward specific traffic across the network. This allows for greater network speed, efficiency, and innovation.
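To make the idea of a "flow" concrete, here is a minimal sketch in Python. The field names are illustrative, loosely modeled on OpenFlow 1.0 match fields; they are not the API of any particular controller.

```python
# A flow rule pairs a traffic "match" with forwarding actions.
# Field names are illustrative, loosely based on OpenFlow 1.0.
flow = {
    "match": {                      # which packets this rule applies to
        "in_port": 1,
        "eth_type": 0x0800,         # IPv4
        "ipv4_dst": "10.0.0.2",
    },
    "actions": [{"type": "OUTPUT", "port": 2}],  # forward out port 2
    "priority": 100,                # higher priority wins on overlap
    "idle_timeout": 30,             # evict after 30 s of inactivity
}

def applies(flow, packet):
    """Return True if every match field agrees with the packet."""
    return all(packet.get(k) == v for k, v in flow["match"].items())

pkt = {"in_port": 1, "eth_type": 0x0800, "ipv4_dst": "10.0.0.2"}
print(applies(flow, pkt))  # True
```

A switch holding a table of entries like this never has to compute a route itself; it just matches incoming packets against the table and applies the actions, asking the controller only when no entry matches.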

Traditional network vs SDN network

An overview of a Software Defined Network

Why is SDN better?

Network efficiency and speed are improved with SDN in several ways…

  • Network processing is handled by a much more capable server 
  • Switches route packets at line speed, since they only perform forwarding decisions 
  • Centralized network control 
  • Custom-defined network behavior 
  • Virtualized and sliced networks 
  • Multi-vendor device interoperability 
  • The ability to pay for hardware alone, without unnecessary added features 

The need for innovation

Networks today are not evolving quickly enough to satisfy the growing needs of consumers. Networks are built using switches, routers, and other devices from various vendors, and they have become exceedingly complex because these devices implement protocols standardized by the IETF alongside proprietary interfaces. Many network administrators need features tailored to their needs, a request that is very difficult to fulfill using standardized protocols and proprietary features. SDN solves this problem by allowing network administrators to define network behavior that suits their needs.

Although SDN is in its early stages, research and development are already under way. One very popular and developer-friendly controller is provided by the OpenFlow-based startup Big Switch Networks. Big Switch actively supports "Floodlight," an entirely open source OpenFlow controller written in Java and licensed under the Apache license. You can clone the project from GitHub and start programming your own custom network behavior in a matter of minutes. Bringing an open source option to networking not only gives administrators choice, it gives anyone a platform to innovate with. Meanwhile, large companies like Google have already implemented their own private versions of OpenFlow in their own datacenters.
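Beyond writing Java modules, Floodlight also exposes a REST interface (its Static Flow Pusher) for installing flows from outside the controller. The sketch below only builds a candidate JSON entry and prints it; the endpoint URL and attribute names are assumptions from my reading of the Floodlight docs, so verify them against your controller version before actually POSTing anything.

```python
import json

# Hypothetical endpoint and field names; consult the Floodlight docs for
# the exact Static Flow Pusher URL and attributes on your version.
CONTROLLER = "http://127.0.0.1:8080/wm/staticflowentrypusher/json"

flow_entry = {
    "switch": "00:00:00:00:00:00:00:01",  # DPID of the target switch
    "name": "flow-mod-1",                 # unique name for this entry
    "priority": "32768",
    "ingress-port": "1",                  # match packets arriving on port 1
    "active": "true",
    "actions": "output=2",                # forward matching packets out port 2
}

payload = json.dumps(flow_entry)
print(payload)
# To push it, POST `payload` to CONTROLLER, e.g. with urllib or curl.
```

The appeal is that a short script like this can reprogram switch behavior network-wide, with no per-device CLI sessions involved.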

Marist, IBM, and OpenFlow

Here at the IBM/Marist Joint Study we are developing an OpenFlow testbed to evaluate the effectiveness of different OpenFlow devices and controllers. Our goal is to contribute our findings back to the community in the hope of advancing the adoption of OpenFlow in modern networks.

So where does the Mainframe come into play?

As an intern at the IBM/Marist Joint Study I have the privilege of working with other interns involved in various mainframe related projects. Consequently we share rack space in the same datacenter that holds our very own mainframe, used to support the research done by our fellow interns. After joking around about running an OpenFlow controller on the Z, we realized that we may have a very rare opportunity on our hands. After all, how many OpenFlow researchers have a z114 dedicated solely for research in the same datacenter as their own rack?

As a team, we are now taking steps toward researching the benefits of combining these two robust enterprise technologies. We imagine that the mainframe could be a favorable platform for a robust, scalable, and efficient enterprise software defined network. The mainframe would be able to handle a high volume of network processes, scale in the event of a network traffic spike, and maintain high availability, all while running as efficiently as possible.

We have already begun evaluating the benefits of running OpenFlow controllers on the Z ensemble. We teamed up with our fellow intern Doug Rohde and started by configuring and running a controller benchmarking tool called "CBench." Running it on a zBX blade, we averaged a performance rating of nearly 3 million flows per second. Although there is no thoroughly documented research on using CBench with Floodlight, other researchers have reported similar results on comparable systems.
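For context on that figure: CBench emulates a set of switches, floods the controller with packet-in requests, and counts the flow-mod responses it gets back in each test loop. Averaging those counts over the loop duration gives a flows-per-second rating, roughly as sketched below (the per-loop numbers here are invented for illustration, not our measured data).

```python
# CBench reports per-iteration response counts; this sketch averages them
# into a flows-per-second figure. The sample numbers are invented.
iteration_ms = 1000                       # duration of each test loop, in ms
responses_per_loop = [2_950_000, 3_020_000, 2_980_000, 3_050_000]

avg_flows_per_sec = (sum(responses_per_loop) / len(responses_per_loop)
                     * (1000 / iteration_ms))
print(f"{avg_flows_per_sec:,.0f} flows/sec")  # 3,000,000 flows/sec
```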

Since our team's primary goal is not really affiliated with the mainframe, we can only give the concept so much attention. For this reason, progress is slow, but we have more ideas that we think will yield valuable results. Our next step is to test and benchmark the controller on zLinux. The rest of our high-level ideas will be presented in detail next week by my partner, Ryan Flaherty.

About the Author:

Jason Parraga
B.S. Computer Science, Marist College (In Progress)

Jason is currently a Junior doing OpenFlow research for the IBM/Marist College Joint Study. As an OpenFlow researcher Jason is a hybrid student with a passion for Computer Science as well as Information Technology. This allows him to apply his knowledge of networking concepts by programming modules and functions for open source OpenFlow controllers such as Floodlight.
Connect with Jason on LinkedIn

Traditionally, IBM mainframe system administrators must be very proficient at navigating and operating z/VM through its command line: knowing all the commands, understanding what operations and files are on every disk, and learning how to create a new guest and allocate resources. All of this is intricate and difficult to learn with no prior experience. However, when I started working with our research z114 and zBladeCenter at the Marist/IBM Joint Study, one thing took me by surprise: much of the operation of z/VM is now automated thanks to IBM's new Unified Resource Manager.

The Unified Resource Manager (URM), also known as zManager, is a new interface supported on the z196 and z114 mainframes. It provides a single graphical view of every guest on the z/VM system through the Hardware Management Console. From this single screen an administrator can see the number of processors allocated to each guest, the amount of memory, and its current operating state.

(Above) The graphical view of z/VM guests provided to the HMC by zManager.

zManager also provides complete graphical automation when a new guest server is spawned. Creating a new virtual server is as easy as going through a few screens that prompt you for information such as the name, the disk to be used, the workload, resources, and network card/VLAN settings. Once you finish the steps, URM automatically creates the virtual server, and it appears alongside the other guest operating systems. From this point, the server can be activated through the Hardware Management Console, and you operate it from z/VM by logging in and IPLing as usual. This replaces the complicated process of creating a new directory entry for the userid or going through DirMaint to create the server.

(Above) The Graphical confirmation screen for creating a new Guest Server with URM

zManager does not stop at managing z/VM; it also excels at managing zBladeCenter blades. Similar to z/VM, URM provides an overall view of the virtual servers on every blade and automates creating and managing the virtual machines.

The Unified Resource Manager is a great step for IBM. The simplification of creating and managing guests on z/VM or blades makes it much easier for administrators who are new to System z to operate the system effectively. This is extremely valuable today because very few millennials graduate with any kind of mainframe experience, let alone z/VM experience. The system must be friendly to new users; otherwise, when the current z/VM system administrators are gone, there will be no one with the skills necessary to replace them.

Although complete automation is valuable when first starting off, it is very important for a new System z administrator to learn the inner workings of z/VM and understand exactly what is happening during the automated process. It is important not to rely on the easy automation, because you will soon learn (just as I did) that there are many operations you cannot simply automate; you must log in to MAINT and do them yourself.

About the Author

Douglas Rohde
B.S. Information Technology, Marist College (In Progress)

Doug is a National Science Foundation funded student intern systems administrator for Marist College's research mainframe. He has a passion for working in the System z environment, which he found not long after he began working with big iron on a daily basis. As a full-time student he hopes his coursework and work experience will ensure an easy transition into a career of mainframe administration in a production environment when he graduates in December. Connect with Doug on LinkedIn!

With the widespread use of mainframes today, it is absolutely necessary that they have excellent security. They are widely used by banks, credit card companies, government entities, government contractors, and nearly all other large organizations. Every day, millions of transactions pass through mainframes; with poor security, this could lead to the loss of massive amounts of money and data. Mainframe security is a must for business continuity and has continuously evolved over the years to where it is today. When the mainframe became more networked with other devices and connected to end users on computers other than the original "dumb terminal," its security broadened considerably, as traditional physical security was no longer enough.

This article will introduce several security concepts in use on the mainframe. These topics will continue to be expanded upon along with new topics in future posts to come.

“Security Through Obscurity”

Wikipedia defines the phrase "security through obscurity" as "a pejorative referring to a principle in security engineering, which attempts to use secrecy of design or implementation to provide security." While the functionality and design of a mainframe are not kept secret, not many people are well versed in them. There is enough of a knowledge gap between typical Windows or Linux workstations/servers and System z to consider this a minor form of security in the world of mainframes. One must first learn an entirely new set of terminology (even a file is called a "dataset"), then become familiar with the architecture and operating system(s) unique to the mainframe, to finally begin searching for vulnerabilities. And all of this typically must be done without much access to an actual mainframe, unless of course you happen to have a few hundred thousand dollars to spend on one.

Secure by Design

However, System z certainly does not rely on obscurity as its main source of security. In fact, System z holds the highest security rating of any commercially available server: Common Criteria Evaluation Assurance Level 5 (EAL5) certification (the Common Criteria are standardized internationally as ISO/IEC 15408). Check out the Wikipedia page to read more on Evaluation Assurance Levels. As you will read in the rest of this article, and in future articles, System z is certainly not lacking in security and has yet to suffer a major security breach.

Security Software – IBM Security Server and RACF

RACF, or Resource Access Control Facility, is one of several software add-on security products available for the mainframe’s base IBM Security Server component. The two other main competitors are ACF2 and Top Secret; these are both produced by CA Technologies, Inc. This article will focus on the base IBM Security Server, commonly referred to by its most well-known component, RACF.

The following is a list and brief description of the components that make up the IBM Security Server:

  • DCE Security Server: Provides a fully functional OSF Distributed Computing Environment (DCE) that runs on z/OS. 
  • Lightweight Directory Access Protocol (LDAP) Server: This server is based on a client/server model that provides client access to an LDAP server. This provides an easy way to maintain directory information in a central location for storage, update, retrieval, and exchange. 
  • z/OS Firewall Technologies: This is an IPv4/IPv6 network security firewall program for z/OS. This allows the mainframe to be safely connected to the internet without any intervening hardware.
  • Network Authentication Service for z/OS: This provides Kerberos security service without the need to purchase or use a middleware product such as a DCE. 
  • Enterprise Identity Mapping (EIM): Provides a method to manage multiple user registries and user identities in an enterprise setting. 
  • PKI Services: This service provides the ability to set up a public key infrastructure as well as serve as a certificate authority for both internal and external users. Using this, you can issue digital certificates using your company’s security policies. 

We will go into more detail on the final part of the security server, RACF.

RACF is an add-on software product that provides access control on the mainframe. It controls nearly everything security-related and decides what users can use, change, or view. RACF stores all information about users, resources, and access authorities in structures called profiles within its database. Additionally, RACF holds information on login times, password hashes, and password expiration dates.

RACF allows you to do the following in order to accomplish access control:

  • Identify and authenticate users 
  • Authorize users to access protected resources 
  • Log and report various attempts of unauthorized access to protected resources 
  • Control the means of access to resources 
  • Allow applications to use the RACF macros 

Using RACF, administrators are able to set very specific password policy rules. These include minimum length, lack of repeating characters, lack of adjacent characters on the keyboard, and the use of numeric or capital letters. Rules can even get as detailed as requiring, for example, the 5th character to be an uppercase letter. Mainframe passwords may contain upper/lowercase letters, numeric values, and the symbols #, $, and @. One thing you may find surprising about mainframe passwords is that they are typically limited to just eight characters. However, what made this secure on the mainframe system I was learning on last spring was that RACF locked out the account after 3 failed login attempts; I learned this the hard way.
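The rules above are the kind of thing a RACF password syntax policy expresses. As a hedged illustration, here is a Python sketch of such a checker; the specific rules (length cap, charset, the positional uppercase requirement) are example choices for this sketch, not any real installation's policy.

```python
# A sketch of RACF-style password syntax rules. The specific rules here
# (length, charset, positional requirement) are illustrative examples,
# not an actual installation's policy.
ALLOWED = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ"
              "abcdefghijklmnopqrstuvwxyz"
              "0123456789#$@")

def check_password(pw):
    if not (1 <= len(pw) <= 8):
        return False                 # classic RACF passwords max out at 8 chars
    if any(c not in ALLOWED for c in pw):
        return False                 # only letters, digits, and # $ @
    if len(pw) >= 5 and not pw[4].isupper():
        return False                 # example positional rule: 5th char uppercase
    return True

print(check_password("racfAdm$"))  # True  (5th character 'A' is uppercase)
print(check_password("racfadm$"))  # False (5th character is lowercase)
```

The small search space this implies is exactly why the lockout-after-3-failures rule matters so much in practice.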

The following picture shows a RACF logon screen for a RACF defined user.

And here is an error message you would receive after entering an incorrect password.

Additionally, RACF will tell you if the user name you entered is not a valid RACF user. In the picture below I tried logging in with the user “ALEXB”, which does not exist. The security implication, as demonstrated in the video at the end of this post, is that this allows you to enumerate all existing RACF users.

So where is all this RACF information stored? Well, it can be located anywhere on the mainframe. To find it, simply use the RVARY command.

On this system, the primary RACF database is stored in the SYS1.RACFPRM1 dataset and a backup in SYS1.RACFBCK1. Changes to the RACF database are written to both datasets simultaneously, so the backup protects against hardware failure, not user error. Given the contents of these datasets, even read permission needs to be very limited.

If you are interested in how RACF progressed over the years since September 1976 when it first came out, check out the link to the history of RACF located in the resources section of this article.

Additionally, I would highly recommend checking out this video of a presentation by Phil Young given at BSides Las Vegas 2012. It includes much of the RACF information found in this article and points out that mainframe skills are very valuable in the security consulting industry. He also identifies several potential security vulnerabilities present on System z. I talked with Phil briefly, and one thing he pointed out is that not many people are really looking for these kinds of vulnerabilities on mainframes. Typically, a company's security staff is not poking around on the mainframe asking "can I do this?" or "what will happen if I do this?" Usually, only the dedicated mainframe security staff is even allowed near the mainframe, and the rest of the IT security staff have nothing to do with it.

One major reason for this lack of poking around is simply that the mainframe is usually the heart of the company's IT infrastructure, and no one can afford for accidents to occur on the system. As more people begin to research mainframe security and more tools add support for mainframe systems, time will tell whether the security or the obscurity is the larger player in the scarcity of mainframe vulnerabilities. But for the time being, it does not get much more secure than System z.


About the Author

Alex Belcher

Alex is currently a student at the Rochester Institute of Technology pursuing a B.S. in Information Security and Forensics with a minor in Computer Science. He works with the school’s ITS Department with Network Communications as a Communications Specialist Assistant and with Distributed Support Services as a Lab Assistant. Additionally, he assists with his high school marching band during summers. During his first year at RIT, he took a Large Scale Computing Seminar; this was his first exposure to mainframe computing and he is interested in further research on the topic.