I was chatting with a friend of mine the other day, and he related to me how he was dealing with a lot of outages in some key production I/T systems. His opinion was that the majority of this particular company’s issues were due to people problems and not to software or hardware defects. They were issues with lack of testing, or configuration errors, or just plain sloppy work. Interesting, but not surprising.

My first job out of college was as a mainframe systems programmer. I installed ISV products, IBM software subsystems, and eventually worked my way up to be an MVS systems programmer. I installed the OS, I installed and implemented maintenance, I debugged problems, shot dumps, trained operators, and did all the stuff that a sysprog does. And when I implemented changes, I had to package up those changes into a batch/scripted job that could be run after hours and install the changes with minimal human interaction. I also had to provide a back-out script that would allow the changes to be reversed if necessary. There was a team of individuals who did this FOR me – “Change Control”. The Change Control group scheduled, coordinated and implemented the changes. That way they could keep track of what changed and in most cases could isolate problems to a particular change.
So after I heard the horror stories from my friend, I reflected back on this first job experience and thought about how different it was then from what we generally see now where change is often much less controlled, and there is a lack of rigor in systems management processes and practices. Many of the issues we have with I/T systems are with the people and how they are administering systems, and not with the quality of the hardware and software.
Like Soylent Green, “IT is People!”
That brings me to The Mainframe Mentality. Around 2002, I put together a presentation called “Introduction to the Mainframe”, where I would spend about 2 hours introducing the mainframe to folks who had never worked with the platform. The last slide in the presentation was titled “The Mainframe Mentality”, and it was intended to help my distributed systems colleagues understand what mainframe people were like, and why. It was a bit of a psychology lesson.
IT is People.
While IBM System z hardware and software is indeed well-engineered for reliability and high availability, technology can only go so far. In order for these components to deliver the quality of service (QoS) they are capable of, the people who are managing, monitoring and administering the system must do it in a way that enables the technology to do its thing. If we are to expect System z – or any other information technology platform – to deliver, the people and processes must be an integral part of it. Over the years, the IT industry has realized the criticality of this thinking. ITIL (IT Infrastructure Library) defines standards for processes and practices along the lines of what those of us with The Mainframe Mentality have attempted to do with systems/service management.
In my last blog post, I made a crack about “the good ol’ days”. I’m not really a “millennial,” per se. I have sons that are. 🙂 And I work with a lot of them. And I like to think that I’m mentally a lot younger than my body indicates. But I have a lot of years of experience under my belt working with this stuff, and I’ve seen many, many mainframe customers and how they do business. Almost universally, mainframe customers have well-defined systems management practices and processes that help them maintain excellent QoS. They have change management, capacity and performance planning, problem management, and other processes. They lock down system resources with security systems like RACF. They have good monitoring tools to watch what’s happening on the systems. The people aspects of system management are refined and well-defined. That’s part of “the mainframe mentality.”
But there’s a flip side to this. There are a lot of mainframe folks who seem averse to change. They like the old stuff, like 3270 interfaces. They like to program in Assembler because it’s fast and it does exactly what they want it to do. They like online systems like CICS and IMS and look down their noses at WebSphere and more current technology. But why is this? Is it because it’s a threat? I don’t think so. I believe it goes back to The Mentality. These folks are concerned about keeping the lights on. They want to maintain that high level of availability and reliability and to be sure that the business stays running. This thinking can sometimes be a problem and a hindrance, but in the end these folks are pros at ensuring that mission-critical systems stay running. Those traditional technologies just plain work.
But so do the new technologies! A lot of folks forget that once upon a time, DB2 was a beta product with a lot of bugs and flaws. So were IMS and CICS. We have to dedicate the same kind of effort to hardening and refining implementations of some of the new tools and products on System z. Many of these “newfangled” products like WebSphere for z/OS are now on their fifth or sixth versions and have been shaken out over more than ten years of deployments now. They’re there. They work. But they still require industrial-strength processes to ensure that they have the same kind of bullet-proofness (is that a word?) as the traditional systems. And they’re sitting on an operating system and hardware base that can deliver higher levels of performance, scale and availability than any other.
But I/T is people. Those people still need “The Mainframe Mentality” to keep the transactions and data flowing…to make sure that I can get money out of my ATM or make reservations for my flight to Minneapolis…as long as it’s not in the winter.

I can pull up by the curb,
I can make it on the road,
Goin’ mobile
I can stop in any street
And talk with people that we meet
Goin’ mobile
Keep me moving…

Apologies to The Who, but I just couldn’t resist. Computing horsepower is at our fingertips these days – whether we’re at the curb, on the road, at home, while talking to “people that we meet”, etc. Last week I finally joined the ranks of the iPhone users. Mine has 32GB of storage – more space than the entire DASD farm of my first IBM customer. It’s rather staggering to see the horsepower available to mobile users in comparison to the computing capability that some of us worked with back in “the good ol’ days”.

But what does this have to do with mainframes, other than to trigger the “Back in my day…” reminiscing? A lot, actually. All of this circles around computing architectures, including thin vs. fat clients, centralized vs. decentralized computing, and ultimately into the cloud.

First, some background on where we are and where we’ve come from.

According to a recent Gartner Group survey, mobile computing is the second highest tech priority for CIOs. “BYOD” is the watchword in corporate IT now…and is garnering much concern from IT management that must protect and secure the assets in the enterprise. It’s a very similar picture to the early ’90s, when “client/server” technology was pushed from the ground level by PC users who began downloading corporate data to their desktops and building innovative applications that used the technology in new ways.

After the client/server boom exposed the flaws in pulling all this data down to workstations and servers, another paradigm shift began – to “thin clients”. Smartphones, tablets and other “pervasive” devices are computers with little or no local storage but with fairly significant compute capacity. IBM dabbled with the thin client concept in the mid-to-late ’90s with the “Network Station”…a diskless workstation that required an external monitor and ran only a JVM. Applications were to be downloaded from the server side and executed on the workstation, though some apps could be pre-loaded. It was a very early cloud computing implementation, but it didn’t go far because of limitations in JVM technology and applets, and a huge lack of network bandwidth. Today’s thin clients are mobile devices, but the capability now built into the average smartphone or tablet dwarfs that of the old Network Station…hardly “thin.”

Now users are finding new ways to use existing data and applications on their mobile phones – the New Thin Client – and are demanding that corporations support these new mobile devices. Always on, everywhere on…it’s being referred to as “Generation C” (for “Connected”). It’s a technology boom that, once again, is driven by empowered end users shaping the technology/IT experience, rather than by the IT department pushing it downhill.

But even though the users are empowered with this new technology, we still have this problem of applications and data that reside on “the mother ship” and must be accessed from this New Thin Client. And in our case, that “server side” is the mainframe.

In the mobile/mainframe context, I’m tempted to immediately mention support for 3270 terminal emulation on mobile devices, but that’s really not where the action is. The more intriguing question is: how can we leverage the mobile platform and its interfaces not just to expose applications (3270 or otherwise) and data, but to modernize and simplify? There are a number of approaches to integrating mobile computing platforms with mainframe data and applications, and much of this technology is based on what IBM has been building into the System z software portfolio for years. It revolves largely around support for integration with z/OS subsystems such as CICS, IMS and DB2, and on access to virtualized servers running Linux under z/VM – all technologies that have been in place for a number of years.

So how might a company use a mobile platform to take advantage of the resources on the mainframe? Let’s start by looking at the user interface – the mobile device itself.

The “app marketplace” has become a cottage industry for software development. Apps on iOS, Android, and even (coming soon) Microsoft’s Metro interface have re-ignited the world of fat client applications (if you consider any mobile device to be “fat”!). Developer tools such as Apple’s Xcode IDE, Eclipse and Android’s plethora of open source tools have made it far easier to build these interfaces, and with the pending acquisition of Worklight, IBM now moves into a much stronger position as a “platform agnostic” provider of mobile development tools. And beyond the single-purpose, business-aligned apps, there is a new breed of general-purpose business apps such as Cognos Mobile, which can be used to access Cognos reports generated from data residing on System z.

If the mobile device application is designed as a true “thin client”, with little or no logic residing on the device, then there is little difference between a browser app on a PC and a browser app on the smartphone. And for years we’ve been writing apps that run on back-end web application servers that, in turn, invoke transactions or pull data from mainframe systems. But there are a couple of twists here with respect to the user interface. Until recently, mobile browsers were pretty low-functioning, lacked support for modern JavaScript and related APIs, and offered too little screen real estate to use effectively. With the advent of mobile Safari, Chrome and others, rendering a standard web page on a mobile device is pretty easy. However, there are still some pretty significant usability and viewing considerations for a browser app that runs on a 3″ handheld screen vs. a 10″ tablet vs. a 27″ iMac. So UI design is critical, even for browsers. A key emerging technology for building thin-client browser apps for mobile is HTML 5. Some companies have moved from the app model to HTML 5 implementations – the Kindle Cloud Reader and the Financial Times apps have now moved into the browser, and their function is basically indiscernible from a “real” app.

On the mainframe/server side, there are a couple of key technologies to enable data/transaction access by mobile devices. Existing applications can be “mobile-enabled” by exposing their functionality as services. The data and transactions must be exposed for access by standardized APIs. Web Services and REST technologies can be used to access IMS, CICS and DB2 on the mainframe. For example, there is a great sample of how to build an iPhone browser application that accesses CICS transaction data and integrates with Google Maps on the mobile device. CICS Web Services is used to call an existing CICS application from a “Rich UI” application developed using Rational and WebSphere tools. A similar application model could be used to access IMS transactions or even DB2 data on existing mainframe systems. A very nice Redbook on the topic of using z/OS-based technologies with mobile devices is “System z On The Go – Accessing z/OS from Smartphones”. It contains a lot of details on how to use several z/OS-centric technologies with mobile devices.
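To make that a little more concrete, here is a rough sketch in Java of the kind of client-side call involved. To be clear, the host name, path, query parameter and response payload below are purely hypothetical placeholders; a real implementation would depend entirely on how the CICS Web Services layer is configured.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

/**
 * Minimal sketch of a client calling a REST endpoint that fronts an
 * existing CICS transaction. The host, path, query parameter, and
 * response format are hypothetical placeholders, not a real service.
 */
public class CicsRestClientSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint exposed by a CICS Web Services layer
        URL url = new URL("https://zos.example.com:9443/branch/locations?zip=55401");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json");

        // Dump the JSON payload; a real app would parse it and feed
        // it to the UI (e.g., markers on the Google Maps view).
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
        conn.disconnect();
    }
}
```

Whether that call is made from a middle-tier web application or directly from the device, the pattern is the same: the mainframe transaction becomes just another service behind an HTTP interface.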

New and rewritten applications on System z should be designed with service usability in mind. Service-Oriented Architecture design principles are key to providing interfaces that a modern UI developer can access easily, and data design is just as important for easy access via relational queries or new business intelligence and analytics tools. Again, I’m particularly intrigued by the possibilities in using Cognos Mobile to access mainframe-hosted data. Other repositories, such as IMS, Adabas and IDMS, provide relational access paths that can also open them to mobile data tools.
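As an illustration of that relational access path, here is a minimal JDBC sketch. It assumes the IBM Data Server Driver for JDBC and SQLJ (the type 4 “JCC” driver) is on the classpath, and the host, port, location name, table and credentials are all hypothetical placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

/**
 * Minimal sketch of relational access to DB2 for z/OS over JDBC.
 * Assumes the IBM JCC type 4 driver is on the classpath; the
 * connection details and table are hypothetical placeholders.
 */
public class Db2zQuerySketch {
    public static void main(String[] args) throws Exception {
        // Type 4 URL form: jdbc:db2://<host>:<port>/<location>
        String url = "jdbc:db2://zos.example.com:446/DB2LOC1";

        try (Connection conn = DriverManager.getConnection(url, "USERID", "PASSWORD");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT ACCT_ID, BALANCE FROM BANK.ACCOUNTS WHERE BRANCH = ?")) {
            ps.setString(1, "MSP01");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("%s : %s%n",
                            rs.getString("ACCT_ID"), rs.getBigDecimal("BALANCE"));
                }
            }
        }
    }
}
```

The same query could just as easily come from a BI tool like Cognos as from hand-written code; the point is that the data is reachable through a standard relational interface.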

You’ll notice that I haven’t spent a lot of time talking about specific mainframe integration technologies. That’s because little has changed between mobile and the “traditional” mainframe integration/enablement that we’ve been doing for years now. Mobile is a UI-layer technology that has little technical impact on System z itself. Like the browser enablement of the last decade or so, mobile UI enablement will further boost the demand for integration at the back end and will drive even more demand for System z transaction and data resources as users discover how easy it is to access information. If you need information on how to approach mainframe integration, check out an IBM Redbook that I co-authored a few years ago: SOA Transition Scenarios for the z/OS Platform. In that book, we describe many of the service integration and enablement technologies needed to expose mainframe applications and data for access by a mobile device.

Oh, and if you insist on going mobile with 3270, there is a way.

But the key point is: mobile technology is NOT a matter of shifting FROM mainframe applications to mobile applications – it’s about shifting the user interface of mainframe applications to one that is friendly to mobile or browsers or whatever the Next Big Thing is in user interface design.


In honor of our 1,000th hit, behold System z’s answer to Ruby on Rails: COBOL on Cogs!

In all seriousness, the modern mainframe has come a long way towards embracing modern web technologies. Due to the integration of UNIX into z/OS and the popularity of Linux on the z/VM hypervisor, TCP/IP has become a foundational technology of the zEcosystem. This is demonstrated by companies such as Marriott making the zEnterprise the heart of their IT infrastructure by adopting a service-oriented architecture tied to XML, web technologies, and custom APIs. Although unimaginable during the era of the S/370 and Systems Network Architecture (SNA), companies are adopting APIs as a means to simplify and accelerate the integration of their mainframe and zEnterprise systems into web and mobile apps. This has the potential to promote the use of the zEnterprise as an Infrastructure/Platform/Software as a Service solution accessible to developers through a standard API.

Even more interesting, it is possible that a private cloud on zEnterprise could follow the lead of Eucalyptus (an open source private cloud platform) and run an API that matches the syntax of an established stack such as Amazon Web Services (AWS) or IBM’s SmartCloud. Such a move would allow the instant portability of ubiquitous cloud-based front ends to a private mainframe cloud, potentially following in the footsteps of industry-standard technologies (such as TCP/IP, UNIX, Linux, and Java) to further open up and promote the mainframe as the centralized “system of systems” of a complex heterogeneous IT environment. In the web development world, developers have benefited for quite some time from Google’s and Amazon’s simple yet powerful APIs. I can’t help but wonder how similar tools could affect the deployment and utilization of the zEnterprise environment in the future.
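Just to make the idea tangible, here is a bare-bones sketch of what a REST-style status resource might look like if hosted under z/OS UNIX System Services, using nothing but the JDK’s built-in com.sun.net.httpserver package. The path, port, and payload are hypothetical, and a real deployment would obviously need security (think RACF and SSL) and a real bridge into the back-end subsystems rather than a hard-coded response.

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

/**
 * Minimal sketch of a REST-style API front end that could run under
 * z/OS UNIX System Services. The resource path, port, and JSON
 * payload are hypothetical; a real handler would call into CICS,
 * IMS, or DB2 rather than returning a canned answer.
 */
public class MainframeApiSketch {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // Hypothetical "system status" resource
        server.createContext("/api/v1/status", (HttpExchange ex) -> {
            byte[] body = "{\"system\":\"ZOS1\",\"status\":\"up\"}".getBytes("UTF-8");
            ex.getResponseHeaders().set("Content-Type", "application/json");
            ex.sendResponseHeaders(200, body.length);
            try (OutputStream os = ex.getResponseBody()) {
                os.write(body);
            }
        });

        server.start();
        System.out.println("Listening on http://localhost:8080/api/v1/status");
    }
}
```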

I challenge you, dear readers, to consider how one could build and deploy a mainframe API that would provide flexibility, interconnectivity, and ease of use without compromising traditional strengths in security and efficiency. Have you worked with APIs in the past? Do you think there is a role for such tools on the mainframe? What sort of impact would the use of such tools have on the mainframe? Let’s hear your thoughts in the comments.

Here are some interesting resources related to this idea:
Info on the IBM HTTP Server
Toys and Tools for z/OS UNIX System Services
Guide for Porting POSIX compliant Apps to z/OS UNIX System Services
PHP for z/OS Guide
IBM HTTP Server Cookbook
Porting Apache to z/OS
Coding AJAX Apps on z/OS
System z APIs
tcACCESS

Happy COBOLing!
