As yet more evidence that mainframes are popular, at the DEF CON 22 “Hacker Conference” Philip Young presented techniques for breaking into z/OS systems (mostly systems carelessly managed, or unmanaged to the point of IT malpractice). He describes many of his findings on his blog.

I agree with his central point: many, many organizations are performing horribly when it comes to IT security, including mainframe-related security. And he’s got some solid data to back up his critique. For example, in his survey of Internet-connected, TN3270-accessible mainframes, 53% don’t even bother to TLS/SSL-encrypt connections. Yes, that’s right: RACF (or ACF2, or TopSecret) user IDs and passwords, along with everything else, fly across the public Internet in clear text to/from over half the mainframes he surveyed. (And those are just the mainframes that are Internet-reachable. I would strongly argue that it’s IT security malpractice not to TLS/SSL-encrypt TN3270 connections across “internal” WANs, too.) Moreover, among the 47% that do TLS/SSL-encrypt TN3270 connections, only 49% have proper server certificates signed by a valid, well-known CA.

I’ll just go ahead and say it: that’s pathetic performance.

It is true that a fraction of the z/OS systems identified in the survey are truly public. For example, there are a few public and university library catalogs on mainframes that are Internet-reachable. There are also some non-mainframes: individuals running usually obsolete and unlicensed z/OS releases on machine emulators without permission. That said, Young’s survey found several major corporations and government organizations with real mainframes that flunked this oh-so-basic security check. Shame on them, really.

I recall helping a particular government get their mainframe services Internet-reachable securely 15 years ago. It’s 2014! WTF? You know when IBM added SSL encryption to TN3270 (and FTP), at no additional charge? Nearly 20 years ago, way back in the OS/390 releases, way before even z/OS 1.1. IBM even added TLS/SSL client certificate authentication to RACF back in the late 1990s. RACF passphrases? z/OS 1.8 (with TSO exploitation of passphrases soon after).

So why are over 75% of Internet-reachable z/OS systems (the 53% with no encryption at all, plus roughly another 24% that encrypt but present improper certificates) not properly using what IBM has provided in z/OS (and its predecessors) for nearly two decades?

Pathetic. Sorry, but that’s the demonstrated reality. On this point (at least) I agree with Young — or at least heavily sympathize. There’s just no valid excuse here.

So, as a friendly reminder, here are two basic steps you must take, immediately, if you have not done so already:

1. Survey all the connections to your enterprise servers (including mainframes) that are directly or indirectly (via a proxy, for example) Internet- or WAN-reachable, especially (but not exclusively) via TN3270/TN3270E and FTP. Turn on TLS, and turn off the unencrypted ports. (Make sure you have CPACF enabled on your machine at the very least, and consider getting a Crypto Express adapter.) Use signed certificates from well-known certificate authorities, and put operational procedures in place for rotating and revoking those certificates. Yes, that includes development, test, “demo,” and training LPARs. (Users are unfortunately likely to reuse passwords across systems, so you must take at least basic steps to protect the “softer” systems, too.) A quick connectivity-check sketch follows this list.

2. Turn on passphrases in RACF (or ACF2, or TopSecret, as applicable), and manage them well.
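
As a quick aid for step 1, here’s a minimal sketch of the kind of check Young’s survey effectively performed: attempt a TLS handshake against a TN3270 port and see whether it succeeds and whether the server’s certificate chain validates against well-known CAs. It’s plain Java, nothing z/OS-specific; the host name and port are placeholders, not anyone’s real system.

```java
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

// Minimal TLS reachability/certificate check for a TN3270 port.
// Host and port are placeholders -- substitute your own systems.
public class Tn3270TlsCheck {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "mainframe.example.com"; // placeholder
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 992;      // conventional TN3270E TLS port

        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket(host, port)) {
            socket.setSoTimeout(10_000);
            // startHandshake() throws an SSLHandshakeException if the server's
            // certificate chain does not validate against the JVM's default
            // trust store (i.e., certificates signed by well-known CAs).
            socket.startHandshake();
            System.out.println("TLS handshake OK: "
                    + socket.getSession().getProtocol() + ", "
                    + socket.getSession().getCipherSuite());
        }
    }
}
```

A handshake failure naming an untrusted certificate is the “improper certificate” case from the survey; a port that only answers in the clear is the worse case.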

There are other steps to take, but considering there are so many mainframe operators that haven’t implemented these two very basic security features, I’m addressing a large audience right now.

Now for a change of topic. To be clear and upfront, I’m not thrilled with the term “batch modernization” — or at least I’m concerned the term is easily misinterpreted. So I’ll stand on a soapbox and offer some hopefully insightful views on batch computing and what businesses and governments ought to be doing — and not doing.

First of all, there’s a school of thought that believes that “batch modernization” means you write lots of code in Programming Language J to recreate, usually badly, functional code already written in Programming Language C or Programming Language P. Then you get to do the same thing all over again when Programming Language Q debuts. You might be forgiven for having that misunderstanding. IBM’s “batch modernization” (BM?) publications, for example, aren’t always careful in explaining that what I just described isn’t BM. (Or maybe it is a very different type of BM!)

No, that isn’t batch modernization. I can’t think of anything modern — or particularly smart — about reimplementing something that already works. There’s nothing modern in making busywork.

That said, if you want to write batch programs in Programming Language J (or C, or P, or XYZ) — or if you want to extend and enhance the batch programs you already have using a second or third or fifth programming language — go for it! Java, for example, is a programming language and runtime environment provided with z/OS at no additional charge. It’s there, it works extremely well, and it’s available. Write (or borrow, legally) as much or as little Java code as you want, and take advantage of it, now. There’s nothing stopping you except perhaps your own inertia.
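
To make that concrete, here’s a minimal sketch of a Java batch step reading records from a DD allocated in the JCL, assuming the JZOS Batch Toolkit’s ZFile API. The DD name and the (trivial) per-record processing are illustrative only; consult the JZOS documentation for your z/OS release.

```java
import com.ibm.jzos.ZFile;

// Minimal JZOS-style batch step: read fixed-length records from a DD
// named INPUT. The DD name is an illustrative placeholder.
public class RecordCountStep {
    public static void main(String[] args) throws Exception {
        ZFile in = new ZFile("//DD:INPUT", "rb,type=record,noseek");
        try {
            byte[] record = new byte[in.getLrecl()]; // buffer sized from the dataset's LRECL
            int count = 0;
            while (in.read(record) >= 0) {
                count++; // real work -- transform, load, call existing code -- goes here
            }
            System.out.println("Processed " + count + " records");
        } finally {
            in.close();
        }
    }
}
```

Run it from a job step with the JZOS batch launcher (or BPXBATCH), with DD INPUT pointing at the dataset of your choice.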

What is batch modernization then? There are whole books on the subject, but I’ll offer part of a definition by focusing on some key elements that I see as critical to businesses and governments. Here’s my list, not necessarily in order:

1. Batch programs concurrently executed with online transactional programs, round the clock — “concurrent batch and online” in fewer words. z/OS and its associated middleware subsystems (CICS Transaction Server, IMS Transaction Manager, WebSphere Application Server, DB2, VSAM RLS, and many others) have supported concurrent batch and online fabulously for many, many years. Many business use cases require concurrent batch and online, so please focus on closing this gap if you still have it. (The first sketch after this list shows the basic coding discipline involved.)

2. As mentioned, the ability to freely intermix technologies (such as programming languages, object libraries, etc.) as you maintain and functionally enhance your batch programs. IBM Language Environment, the z/OS Batch Runtime, the JZOS Batch Toolkit, and z/OS Connect are among the many technologies available to help you leverage any/every best-for-the-mission technology as your batch programs and environment naturally evolve and grow.

3. Reduction or elimination of needless data movement. Or, in other words, take steps to address the “ETL problem.” It’s often less expensive, more secure, and more reliable to perform your data analysis, reporting, and other information-related processing directly on your operational databases — or at least much closer to your operational databases. Data volumes are exploding, and if you cannot keep up with data movement requirements now using old fashioned (non-modern) deployment architectures, then the problems are only going to get worse. The IBM DB2 Analytics Accelerator (IDAA) is one fine example of many technologies that can help reduce or eliminate needless data movement. “Omit needless FTP,” basically.

4. Closely related: many of today’s batch programs are performing the role of application integration, linking two or more online/real-time transactional systems together. That’s not what batch programs ought to be doing. As you examine your inventory of batch programs — as you look at your file transfers, in particular — ask yourself (and your organization) whether those file transfer linkages still make sense (or ever made sense) given the business processes they support. Do your customers want to wait overnight for a loan approval or a new insurance policy, just because you thought you were saving 2 MIPS by shuffling data off your mainframe to do a bit of processing elsewhere? (And no, you probably aren’t saving anything. Quite the opposite. How much of your mainframe processing is occupied by moving data? A quarter or more is not uncommon, sadly.) Do these excess file transfers make your business competitive? All too often, no. Look for the critical batch programs that serve to link together two or more important line-of-business applications, and spend some effort modernizing those, in some cases by converting batch interactions into real-time synchronous or near real-time asynchronous application interactions (the second sketch after this list illustrates the idea). Again, IBM is throwing great technologies at you that’ll help, often without your having to buy anything extra. Examples include z/OS Connect, CICS Web Services, IMS Web Services, WebSphere MQ, and JSON support for CICS and IMS, among others.

5. Should you be programming at all? Programming itself isn’t particularly modern; Ada Lovelace was programming in the 1840s. Take a serious look at rules engines (e.g. IBM Operational Decision Manager for z/OS), scoring engines (e.g. IBM SPSS Real-Time Scoring for DB2 for z/OS), data format/transformation engines (e.g. IBM WebSphere Transformation Extender for z/OS), mathematical optimization engines for business (e.g. IBM CPLEX Optimizer for z/OS), IBM’s Scalable Architecture for Financial Reporting (SAFR), and the numerous other ways to effectively and dramatically cut down on code maintenance and development burden.

6. Scheduling, job management, etc. Is your idea of batch scheduling adding another cron job, or do you have a modern workload management tool that addresses increasingly sophisticated enterprise-level requirements? IBM Tivoli Workload Scheduler is an excellent example. Concurrent batch and online processing requires flawless choreography to meet service levels, so do not underinvest in this area, either in people or in tools.

7. A lot of batch processing simply generates paper reports, directly or indirectly. That batch processing may still be required, but the paper typically isn’t. Move the paper into electronic formats. IBM’s Content Manager solutions for z/OS are great ways to do that, and they can pay for themselves quickly through printing-related savings.

8. Technical solutions. If you have I/O-bound batch programs, have you looked at I/O acceleration techniques such as zHPF, MIDAW, HyperPAV, SSD, and even (with some careful thought) switching from synchronous cross-site storage replication to asynchronous? If you’re having problems — or could have problems — with long batch jobs that are interrupted, have you looked at checkpointing? Would BatchPipes improve batch efficiency? (It’s easy to check.) Are your DB2 instances properly indexed? Are you retrieving DB2 data one record at a time, or are you taking advantage of DB2’s newer, more bulk-oriented query features? (The third sketch after this list shows the JDBC version of that fix.) Are your batch programs taking inefficient code paths on every record that ought to be exceptional code paths? (Do you have the right set of application development-related tools to even know?) Though it may seem like a strange question, are you running your CPU-bound batch jobs too quickly, and thus could you set a defined capacity (“softcap”) across one or more LPARs to get more value from your batch processing environment? Or do you have extraordinary (or ordinarily extraordinary) annual or quarterly batch processing peaks that would merit implementing On/Off Capacity on Demand (OOCoD)? Have you recompiled at least your most performance-sensitive batch programs with the latest compilers, such as Enterprise COBOL Version 5, to target the latest possible processor instruction sets? Do you have enough memory allocated to your LPARs, and have you taken steps to consume more memory when and where merited in your batch programs to improve efficiency, performance, and throughput? Should you be running compiled rather than interpreted REXX?
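
On point 1, the classic coding discipline that lets batch coexist with online work is frequent, bounded units of work: commit every N records so the batch job never holds locks long enough to stall online transactions, and treat each commit as a natural restart point. Here’s the first sketch, a minimal JDBC example; the JDBC URL, credentials, and table and column names are invented for illustration.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Batch update that commits every COMMIT_INTERVAL rows, so locks are held
// only briefly and online transactions can interleave. The JDBC URL,
// credentials, and table/column names are illustrative placeholders.
public class ConcurrentFriendlyBatch {
    private static final int COMMIT_INTERVAL = 1000;

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:db2://host:446/LOCDB", "user", "pass")) { // placeholder
            conn.setAutoCommit(false);
            try (PreparedStatement upd = conn.prepareStatement(
                    "UPDATE ACCOUNTS SET BALANCE = BALANCE + ? WHERE ACCT_ID = ?")) {
                int pending = 0;
                for (long acctId = 1; acctId <= 1_000_000; acctId++) { // stand-in for real input
                    upd.setBigDecimal(1, java.math.BigDecimal.ONE);
                    upd.setLong(2, acctId);
                    upd.executeUpdate();
                    if (++pending >= COMMIT_INTERVAL) {
                        conn.commit(); // release locks; also a checkpoint/restart boundary
                        pending = 0;
                    }
                }
                conn.commit();
            }
        }
    }
}
```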
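
On point 4, the modernized version of an extract-and-FTP-overnight interface is often a synchronous call into the system of record, for example through a REST/JSON interface you might expose with z/OS Connect in front of CICS or IMS. Here’s the second sketch, using only the Java standard library; the URL and JSON payload are invented for illustration.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Synchronous REST call replacing an overnight file-transfer hop.
// The URL and JSON body are illustrative placeholders.
public class LoanDecisionClient {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://mainframe.example.com/loans/decision"); // placeholder
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        String request = "{\"applicantId\":\"A12345\",\"amount\":250000}"; // placeholder
        try (OutputStream out = conn.getOutputStream()) {
            out.write(request.getBytes(StandardCharsets.UTF_8));
        }

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            StringBuilder response = new StringBuilder();
            for (String line; (line = in.readLine()) != null; ) {
                response.append(line);
            }
            // The customer gets an answer in seconds, not after tonight's batch window.
            System.out.println("Decision: " + response);
        }
    }
}
```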
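
And on point 8’s record-at-a-time question: in native SQL programs the classic answer is DB2’s multi-row FETCH; in JDBC, the analogous fix is to let the driver block-fetch rows rather than paying a round trip per row. Here’s the third sketch; again, the JDBC URL and table and column names are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Block-fetching rows instead of retrieving one record per round trip.
// The JDBC URL and table/column names are illustrative placeholders.
public class BulkFetchExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:db2://host:446/LOCDB", "user", "pass")) { // placeholder
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT ACCT_ID, BALANCE FROM ACCOUNTS")) {
                ps.setFetchSize(500); // hint: fetch rows in blocks of 500
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // per-row work goes here; rows arrive in bulk under the covers
                    }
                }
            }
        }
    }
}
```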

I could certainly recommend other aspects of batch modernization, but these areas are among those I see as common themes that most businesses and governments ought to focus on.