I was chatting with a friend of mine the other day, and he related how he was dealing with a lot of outages in some key production I/T systems. His opinion was that the majority of this particular company’s issues were due to people problems, not software or hardware defects: lack of testing, configuration errors, or just plain sloppy work. Interesting, but not surprising.

My first job out of college was as a mainframe systems programmer. I installed ISV products and IBM software subsystems, and eventually worked my way up to be an MVS systems programmer. I installed the OS, installed and applied maintenance, debugged problems, shot dumps, trained operators, and did all the stuff that a sysprog does. And when I implemented changes, I had to package them up into a batch/scripted job that could be run after hours and install the changes with minimal human interaction. I also had to provide a back-out script that would allow the changes to be reversed if necessary. There was a team of individuals who did this FOR me: “Change Control”. The Change Control group scheduled, coordinated, and implemented the changes. That way they could keep track of what changed and, in most cases, could isolate problems to a particular change.
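To make the install/back-out pattern concrete: here is a minimal shell sketch of the idea. To be clear, the real change jobs described above would have been batch/JCL on MVS, not shell, and every path and file name below is invented for illustration. The point is only the shape of the pattern: save what you're about to change, apply the change, and keep a scripted way to put things back.

```shell
#!/bin/sh
# Illustrative sketch only -- hypothetical paths, not an actual MVS change job.
TARGET=/tmp/demo_syslib/module.cfg
BACKUP=/tmp/demo_syslib/module.cfg.bak

# Pretend this is the module as it exists before the change.
mkdir -p /tmp/demo_syslib
echo "version=1" > "$TARGET"

# Install step: save the current copy FIRST, then apply the change.
cp "$TARGET" "$BACKUP"
echo "version=2" > "$TARGET"

# Back-out step: if the change misbehaves, restore the saved copy.
cp "$BACKUP" "$TARGET"

cat "$TARGET"    # prints "version=1" -- the change has been backed out
```

The discipline is in the ordering: the back-out path is prepared before the change is made, so reversal is a scripted, repeatable step rather than an improvised 3 a.m. scramble.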
So after I heard the horror stories from my friend, I reflected on that first job experience and thought about how different things were then from what we generally see now, where change is often much less controlled and systems management processes and practices lack rigor. Many of the issues we have with I/T systems are with the people administering them, not with the quality of the hardware and software.
Like Soylent Green, “IT is People!”
That’s where I come to The Mainframe Mentality. Around 2002, I put together a presentation called “Introduction to the Mainframe”, where I would spend about 2 hours introducing the mainframe to folks who had never worked with the platform. The last slide in the presentation was titled “The Mainframe Mentality”, and it was intended to help my distributed systems colleagues understand what mainframe people were like, and why. It was a bit of a psychology lesson.
IT is People.
While IBM System z hardware and software is indeed well-engineered for reliability and high availability, technology can only go so far. For these components to deliver the quality of service (QoS) they are capable of, the people managing, monitoring, and administering the system must do it in a way that enables the technology to do its thing. If we are to expect System z – or any other information technology platform – to deliver, the people and processes must be an integral part of it. Over the years the IT industry has come to recognize how critical this is. ITIL (the IT Infrastructure Library) defines standards for processes and practices along the lines of what those of us with The Mainframe Mentality have attempted to do with systems/service management.
In my last blog post, I made a crack about “the good ol’ days”. I’m not really a “millennial,” per se. I have sons that are. 🙂 And I work with a lot of them. And I like to think that I’m mentally a lot younger than my body indicates. But I have a lot of years of experience under my belt working with this stuff, and I’ve seen many, many mainframe customers and how they do business. Almost universally, mainframe customers have well-defined systems management practices and processes that help them maintain excellent QoS. They have change management, capacity and performance planning, problem management, and other processes. They lock down system resources with security systems like RACF. They have good monitoring tools to watch what’s happening on the systems. The people aspects of system management are mature and refined. That’s part of “the mainframe mentality.”
But there’s a flip side to this. There are a lot of mainframe folks who seem averse to change. They like the old stuff, like 3270 interfaces. They like to program in Assembler because it’s fast and it does exactly what they want it to do. They like online systems like CICS and IMS and look down their noses at WebSphere and more current technology. But why is this? Is it because it’s a threat? I don’t think so. I believe it goes back to The Mentality. These folks are concerned about keeping the lights on. They want to maintain that high level of availability and reliability and to be sure that the business stays running. This thinking can sometimes be a problem and a hindrance, but in the end these folks are pros at ensuring that mission-critical systems stay running. Those traditional technologies just plain work.
But so do the new technologies! A lot of folks forget that once upon a time, DB2 was a beta product with a lot of bugs and flaws. So were IMS and CICS. We have to dedicate the same kind of effort to hardening and refining implementations of some of the new tools and products on System z. Many of these “newfangled” products like WebSphere for z/OS are now on their fifth or sixth versions and have been shaken out over more than ten years of deployments now. They’re there. They work. But they still require industrial-strength processes to ensure that they have the same kind of bullet-proofness (is that a word?) as the traditional systems. And they’re sitting on an operating system and hardware base that can deliver higher levels of performance, scale and availability than any other.
But I/T is people. Those people still need “The Mainframe Mentality” to keep the transactions and data flowing…to make sure that I can get money out of my ATM or make reservations for my flight to Minneapolis….as long as it’s not in the winter.