Mention the mainframe to your average computer user, and you’ll often elicit a nostalgic discussion about huge machines that occupied entire rooms, accepted input in the form of punch cards, and relied on gigantic drives that couldn’t hold much data. Many people will even tell you that the mainframe is dead. We would have thought the exact same thing before our freshman year as undergraduates at the Rochester Institute of Technology (RIT). However, as we quickly learned, the idea that the mainframe is somehow irrelevant in the modern computing world couldn’t be further from the truth.

Although we were only freshmen at RIT, straight out of high school, we were afforded the opportunity to take a large-scale computing seminar class during our spring quarter. The class, which was part of the IBM Academic Initiative, followed the Introduction to the New Mainframe: z/OS Basics IBM Redbook and provided a solid introduction to the IBM mainframe environment. Beyond the fantastic technical information, the class also gave us a chance to see the incredible computing power of the mainframe and its relevance to modern businesses. These weren’t the mainframes of nostalgic memory, unable to compete with the processor in your cellphone. Instead, the modern mainframe presents an entirely new computing paradigm, one that focuses on the exceptional power, customizability, redundancy, and support that the modern enterprise computing environment needs to remain competitive.

The mainframe is optimized for two types of mission-critical enterprise workloads: batch jobs and online transaction processing. Batch jobs require no human interaction; tasks such as generating business reports are usually run as batch jobs. The incredible power of the mainframe allows hundreds of terabytes of input, all stored on redundant disks and accessed via high-speed I/O channels, to be processed seamlessly at impressive speeds. Although completing batch jobs is essential for any business, most workloads require interaction with an end user, often through a specific application or web interface. The mainframe supports these online transaction processing workloads as well and can serve thousands of simultaneous users. Nor is the mainframe limited to one type of job: it can be set up to handle both batch jobs and transactions, depending on the workload requirements at any given time. Mainframes provide an unparalleled way to handle the workload needs of the fast-paced business world.

The mainframe meets these workload demands largely because it can be completely customized. As we learned during a trip to IBM in Poughkeepsie, each mainframe is made to order within 24 to 48 hours to meet the specific requirements of the customer. Everything, from the type of processor to the ISPF user interface on z/OS, can be customized to meet the needs of the end user. Specialty processors exist to meet specific needs and can be activated on an as-needed basis by IBM. Furthermore, the mainframe is able to support the legacy applications that many businesses rely on. The fast-paced world of modern business computing demands a system that can be customized and scaled to meet real-time needs, and the modern mainframe completely satisfies this requirement.

The computing power of the mainframe would be irrelevant to modern businesses without its exceptional redundancy. Virtually everything, from power supplies to central processors, has some form of monitoring, healing, and backup capability. There are always at least two (and often more) of every essential component for failover purposes. Systems can further be arranged in Sysplex configurations to increase redundancy across greater distances. Overall, the redundancy of the mainframe ties together its other core features and ensures that they are available to all users at all times.

So all of these technical facts about mainframes are cool, but why should millennials care about studying the mainframe? For one thing, there is certainly no shortage of jobs: mainframe expertise is in high demand among employers, and plenty of positions are available for those who become proficient with the platform. The following list provides a solid overview of typical job positions in a mainframe setting.

The major roles on the mainframe system include the following:

System Programmer – The system programmer installs, customizes, and maintains the operating system. They plan hardware and system upgrades and perform system performance tuning to meet required levels of operation. They also install and upgrade products used on the mainframe. They must be skilled at debugging software problems on the system and are responsible for maintaining middleware such as DB2 or CICS.

Common tasks performed by the System Programmer include:

  • Planning hardware and software system upgrades and changes in configuration
  • Training system operators and application programmers
  • Automating operations
  • Capacity planning
  • Running installation jobs and scripts
  • Performing installation-specific customization tasks
  • Integration-testing the new products with existing applications and user procedures
  • System-wide performance tuning to meet required levels of service

System Administrator – The system administrator performs the day-to-day tasks that maintain the critical business data residing on the mainframe. They often work directly with application programmers and end users to ensure that the administrative needs of the applications are met.

Other tasks performed by the system administrator may include:

  • Installing software
  • Adding and deleting users and maintaining user profiles
  • Maintaining security resource access lists
  • Managing storage devices and printers
  • Managing networks and connectivity
  • Monitoring system performance



Application Developer – The application developer designs, builds, tests, and delivers mainframe applications to the company’s users and customers. This role may be further broken down into application designer and application programmer. Based on business and end-user requirements, the application designer creates a design specification, which the application programmer then codes. In addition to writing new application code, the programmer must thoroughly test it and is also responsible for maintaining the mainframe’s existing applications.

System Operator – The system operator monitors and controls the operation of the mainframe hardware and software. The operator starts and stops system tasks, monitors for unusual conditions, and ensures the health and normal operation of the system. Operators are responsible for ensuring that new applications from the system programmers run smoothly on the system. New applications are typically handed to the operations staff along with a run book of instructions that identifies the specific operational requirements of the application, which operators need to be aware of during job execution. The operator is also responsible for starting and stopping the major subsystems, such as transaction processing systems, database systems, and the operating system itself.

Production Control Analyst – The production control analyst is responsible for making sure that batch workloads run to completion without error or delay. The production control analyst also ensures that changes follow the proper procedures; this control of change is necessary to ensure mainframe availability and performance.

Now that you know more about the responsibilities and tasks that correspond with each role, check out the System z Job Board located at http://systemzjobs.com/. In case there was any doubt, the site clearly shows just how many jobs are available, and it is a great resource for searching and applying. As you will see, there are many more positions than just those listed above. For example, you could work on the mainframe as a database administrator, Java programmer, COBOL programmer, or J2EE developer.

Anthony Critelli and Alex Belcher are students at the Rochester Institute of Technology and members of the Millennial Mainframer team.  They have recently completed their freshman year, and they are looking forward to using their well-earned summer break to study COBOL and prepare for the zMastery certification.  For more information on Anthony, Alex, and the rest of the Millennial Mainframer team, please visit the Staff section.

I was chatting with a friend of mine the other day, and he related how he was dealing with a lot of outages in some key production I/T systems. His opinion was that the majority of this particular company’s issues were due to people problems, not software or hardware defects. They were issues with lack of testing, configuration errors, or just plain sloppy work. Interesting, but not surprising.

My first job out of college was as a mainframe systems programmer. I installed ISV products, IBM software subsystems, and eventually worked my way up to be an MVS systems programmer. I installed the OS, I installed and implemented maintenance, I debugged problems, shot dumps, trained operators, and did all the stuff that a sysprog does. And when I implemented changes, I had to package up those changes into a batch/scripted job that could be run after hours and install the changes with minimal human interaction. I also had to provide a back-out script that would allow the changes to be reversed if necessary. There was a team of individuals who did this FOR me – “Change Control”. The Change Control group scheduled, coordinated and implemented the changes. That way they could keep track of what changed and in most cases could isolate problems to a particular change.
So after I heard the horror stories from my friend, I reflected back on that first job and thought about how different things were then from what we generally see now, where change is often much less controlled and there is a lack of rigor in systems management processes and practices. Many of the issues we have with I/T systems are with the people and how they administer systems, not with the quality of the hardware and software.
Like Soylent Green, “IT is People!”
That’s where I come to The Mainframe Mentality. Around 2002, I put together a presentation called “Introduction to the Mainframe”, where I would spend about 2 hours introducing the mainframe to folks who had never worked with the platform. The last slide in the presentation was titled “The Mainframe Mentality”, and it was intended to help my distributed systems colleagues understand what mainframe people were like, and why. It was a bit of a psychology lesson.
IT is People.
While IBM System z hardware and software is indeed well-engineered for reliability and high availability, technology can only go so far. For these components to deliver the quality of service (QoS) they are capable of, the people managing, monitoring, and administering the system must do so in a way that lets the technology do its thing. If we expect System z – or any other information technology platform – to deliver, the people and processes must be an integral part of it. Over the years the IT industry has come to recognize how critical this is. ITIL (the IT Infrastructure Library) defines standards for processes and practices along the lines of what those of us with The Mainframe Mentality have long attempted to do with systems and service management.
In my last blog post, I made a crack about “the good ol’ days”. I’m not really a “millennial,” per se. I have sons that are. 🙂 And I work with a lot of them. And I like to think that I’m mentally a lot younger than my body indicates. But I have a lot of years of experience under my belt working with this stuff, and I’ve seen many, many mainframe customers and how they do business. Almost universally, mainframe customers have well-defined systems management practices and processes that help them maintain excellent QoS. They have change management, capacity and performance planning, problem management, and other processes. They lock down system resources with security systems like RACF. They have good monitoring tools to watch what’s happening on the systems. The people aspects of system management are refined and well-defined. That’s part of “the mainframe mentality.”
But there’s a flip side to this. There are a lot of mainframe folks who seem averse to change. They like the old stuff, like 3270 interfaces. They like to program in Assembler because it’s fast and it does exactly what they want it to do. They like online systems like CICS and IMS and look down their noses at WebSphere and more current technology. But why is this? Is it because it’s a threat? I don’t think so. I believe it goes back to The Mentality. These folks are concerned about keeping the lights on. They want to maintain that high level of availability and reliability and to be sure that the business stays running. This thinking can sometimes be a problem and a hindrance, but in the end these folks are pros at ensuring that mission-critical systems stay running. Those traditional technologies just plain work.
But so do the new technologies! A lot of folks forget that once upon a time, DB2 was a beta product with a lot of bugs and flaws. So were IMS and CICS. We have to dedicate the same kind of effort to hardening and refining implementations of some of the new tools and products on System z. Many of these “newfangled” products like WebSphere for z/OS are now on their fifth or sixth versions and have been shaken out over more than ten years of deployments now. They’re there. They work. But they still require industrial-strength processes to ensure that they have the same kind of bullet-proofness (is that a word?) as the traditional systems. And they’re sitting on an operating system and hardware base that can deliver higher levels of performance, scale and availability than any other.
But I/T is people. Those people still need “The Mainframe Mentality” to keep the transactions and data flowing…to make sure that I can get money out of my ATM or make reservations for my flight to Minneapolis…as long as it’s not in the winter.

I can pull up by the curb,
I can make it on the road,
Goin’ mobile
I can stop in any street
And talk with people that we meet
Goin’ mobile
Keep me moving…

Apologies to The Who, but I just couldn’t resist. Computing horsepower is at our fingertips these days – whether we’re at the curb, on the road, at home, while talking to “people that we meet”, etc. Last week I finally joined the ranks of the iPhone users. Mine has 32GB of storage – more space than the entire DASD farm of my first IBM customer. It’s rather staggering to see the horsepower available to mobile users in comparison to the computing capability that some of us worked with back in “the good ol’ days”.

But what does this have to do with mainframes, other than to trigger the “Back in my day…” reminiscing? A lot, actually. All of this circles around computing architectures, including thin vs. fat clients, centralized vs. decentralized computing, and ultimately into the cloud.

First, some background on where we are and where we’ve come from.

According to a recent Gartner Group survey, mobile computing is the second-highest tech priority for CIOs. “BYOD” is the watchword in corporate IT now, and it is garnering much concern from IT management, who must protect and secure the assets of the enterprise. It’s a very similar picture to the early ’90s, when “client/server” technology was pushed from the ground level by PC users who began downloading corporate data to their desktops and building innovative new applications that used technology in new ways.

After the client/server boom exposed the flaws in pulling all this data down to workstations and servers, another paradigm shift began, toward “thin clients”. Smartphones, tablets and other “pervasive” devices are computers with little or no local storage but with fairly significant compute capacity. IBM dabbled with the thin-client concept in the mid-to-late ’90s with the “Network Station”, a diskless workstation that required an external monitor, had no hard disk, and ran only a JVM. Applications were to be downloaded from the server side and executed on the workstation, though some apps could be pre-loaded. It was a very early cloud computing implementation, but it didn’t go too far because of limitations in JVM technology and applets and a huge lack of network bandwidth. Today’s thin clients are mobile devices, but the capability now built into the average smartphone or tablet dwarfs that of the old Network Station…hardly “thin.”

Now users are finding new ways to use existing data and applications on their mobile phones – the New Thin Client – and are demanding that corporations support these new mobile devices. Always on, everywhere on…it’s being referred to as “Generation C” (for “Connected”). It’s a technology boom that, once again, is driven by end users empowered to shape the technology/IT experience, rather than by the IT department pushing it downhill.

But even though users are empowered with this new technology, we still have the problem of applications and data that reside on “the mother ship” and must be accessed from this New Thin Client. And in our case, that “server side” is the mainframe.

In the mobile/mainframe context, I’m tempted to immediately mention support for 3270 terminal emulation on mobile devices, but that’s really not where the action is. The more intriguing question is: how can we leverage the mobile platform and its interfaces not just to expose applications (3270 or otherwise) and data, but to modernize and simplify? There are a number of approaches to integrating mobile computing platforms with mainframe data and applications, and much of this technology is based on what IBM has been building into the System z software portfolio for years. It revolves largely around integration with z/OS subsystems such as CICS, IMS and DB2 and around access to virtualized servers running on Linux under z/VM – all technologies that have been in place for a number of years.

So how might a company use a mobile platform to take advantage of the resources on the mainframe? Let’s start by looking at the user interface – the mobile device itself.

The “app marketplace” has become a cottage industry for software development. Apps on iOS, Android, and even (coming soon) Microsoft’s Metro interface have re-ignited the world of fat client applications (if you consider any mobile device to be “fat”!). Developer tools such as Apple’s Xcode IDE, Eclipse and Android’s plethora of open source tools have made it far easier to build these interfaces, and with the pending acquisition of Worklight, IBM now moves into a much stronger position as a “platform agnostic” provider of mobile development tools. And beyond the single-purpose, business-aligned apps there is a new breed of general-purpose business apps, such as Cognos Mobile, which can be used to access Cognos data and reports that might be generated from data residing on System z.

If the mobile device application is designed as a true “thin client”, with little or no logic residing on the device, then there is little difference between a browser app on a PC and a browser app on a smartphone. And for years we’ve been writing apps that run on back-end web application servers and, in turn, invoke transactions or pull data from mainframe systems. But there are a couple of twists here with respect to the user interface. Until recently, mobile browsers were pretty low-functioning, could not support modern Javascript and related APIs, and the screen real estate was too small to use effectively. With the advent of mobile Safari, Chrome and others, rendering a standard web page on a mobile device is pretty easy. However, there are still some significant usability and viewing considerations for a browser app that runs on a 3″ handheld screen vs. a 10″ tablet vs. a 27″ iMac. So UI design is critical, even for browsers. A key emerging technology for building thin-client browser apps for mobile is HTML 5. Some companies have already moved from the app model to HTML 5 implementations – the Kindle Cloud Reader and the Financial Times apps have moved into the browser, and their function is basically indiscernible from a “real” app.
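To make the screen-real-estate point concrete, here is a minimal sketch, in TypeScript against the standard browser DOM APIs, of how a thin-client page might adapt itself to a phone, a tablet, or a desktop display. The breakpoints, class names, and stylesheet hooks are illustrative assumptions, not a recommended design:

```typescript
// Minimal sketch: let one thin-client page adapt to whatever screen it lands on.
// Breakpoints and the "layout-*" class names are hypothetical.
type Layout = "phone" | "tablet" | "desktop";

function currentLayout(): Layout {
  if (window.matchMedia("(max-width: 480px)").matches) return "phone";
  if (window.matchMedia("(max-width: 1024px)").matches) return "tablet";
  return "desktop";
}

function applyLayout(): void {
  // Toggle a class on <body>; the stylesheet decides how each layout
  // collapses navigation, resizes tables, and so on.
  document.body.className = `layout-${currentLayout()}`;
}

window.addEventListener("resize", applyLayout);
applyLayout();
```

The idea is simply that the page logic stays the same on every device and the presentation does the adapting, which is the same discipline CSS media queries give you declaratively.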

On the mainframe/server side, there are a couple of key technologies for enabling data and transaction access by mobile devices. Existing applications can be “mobile-enabled” by exposing their functionality as services; the data and transactions must be exposed through standardized APIs. Web Services and REST technologies can be used to access IMS, CICS and DB2 on the mainframe. For example, there is a great sample showing how to build an iPhone browser application that accesses CICS transaction data and integrates with Google Maps on the mobile device. CICS Web Services is used to call an existing CICS application from a “Rich UI” application developed using Rational and WebSphere tools. A similar application model could be used to access IMS transactions or even DB2 data on existing mainframe systems. A very nice Redbook on the topic of using z/OS-based technologies with mobile devices is “System z On The Go – Accessing z/OS from Smartphones“. It contains a lot of detail on how to use several z/OS-centric technologies with mobile devices.
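As a rough sketch of the browser side of that pattern, the snippet below (TypeScript, using the standard fetch API) calls a REST endpoint assumed to front an existing CICS transaction and hands the returned coordinates to a mapping routine. The /api/branches URL, the JSON shape, and the plotOnMap helper are hypothetical placeholders, not the actual interfaces from the sample or the Redbook:

```typescript
// Hypothetical shape of the JSON the service layer returns for each branch.
interface BranchLocation {
  branchId: string;
  latitude: number;
  longitude: number;
}

async function loadBranches(): Promise<BranchLocation[]> {
  // Assumed endpoint; in the CICS Web Services sample the call would be
  // routed through the service layer to the existing CICS transaction.
  const response = await fetch("/api/branches");
  if (!response.ok) {
    throw new Error(`Branch lookup failed: ${response.status}`);
  }
  return (await response.json()) as BranchLocation[];
}

// Placeholder so the sketch stands alone; a real page would call the Maps API here.
function plotOnMap(lat: number, lng: number, label: string): void {
  console.log(`Would drop a pin for ${label} at ${lat}, ${lng}`);
}

loadBranches()
  .then((branches) => {
    branches.forEach((b) => plotOnMap(b.latitude, b.longitude, b.branchId));
  })
  .catch((err) => console.error(err));
```

The point is that the mobile page never knows or cares that a CICS transaction sits behind the URL; the service interface is the whole contract.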

New and rewritten applications on System z should be designed with service usability in mind. Service-Oriented Architecture design principles are key to providing interfaces that a modern UI developer can consume easily, and data design is key to easy access via relational queries or new business intelligence and analytics tools. Again, I’m particularly intrigued by the possibilities of using Cognos Mobile to access mainframe-hosted data. Other technologies, such as IMS, Adabas and IDMS, provide relational access paths that can also open those repositories to mobile data tools.

You’ll notice that I haven’t spent a lot of time talking about specific mainframe integration technologies. That’s because little has changed between mobile and the “traditional” mainframe integration and enablement we’ve been doing for years now. Mobile is a UI-layer technology that does little at a technical level to impact System z. Like the browser enablement of the last decade or so, mobile UI enablement will further boost the demand for integration at the back end and will drive even more demand for System z transaction and data resources as users discover how easy it is to access information. If you need information on how to approach mainframe integration, check out an IBM Redbook that I co-authored a few years ago: SOA Transition Scenarios for the z/OS Platform. In that book, we describe many of the service integration and enablement technologies needed to expose mainframe applications and data for access by a mobile device.

Oh, and if you insist on going mobile with 3270, there is a way.

But the key point is: mobile technology is NOT a matter of shifting FROM mainframe applications to mobile applications – it’s about shifting the user interface of mainframe applications to one that is friendly to mobile or browsers or whatever the Next Big Thing is in user interface design.
