To be clear and upfront, I’m not thrilled with the term “batch modernization” — or at least I’m concerned the term is misinterpreted. I’ll now stand on a soapbox and offer some hopefully insightful views on batch computing and what businesses and governments ought to be doing — and not doing.

First of all, there’s a school of thought that believes that “batch modernization” means you write lots of code in Programming Language J to recreate, usually badly, functional code already written in Programming Language C or Programming Language P. Then you get to do the same thing all over again when Programming Language Q debuts. You might be forgiven for having that misunderstanding. IBM’s “batch modernization” (BM?) publications, for example, aren’t always careful in explaining that what I just described isn’t BM. (Or maybe it is a very different type of BM!)

No, that isn’t batch modernization. I can’t think of anything modern — or particularly smart — in reimplementing something that already works. There’s nothing modern in making busy work.

That said, if you want to write batch programs in Programming Language J (or C, or P, or XYZ) — or if you want to extend and enhance the batch programs you already have using a second or third or fifth programming language — go for it! Java, for example, is a programming language and runtime environment provided with z/OS at no additional charge. It’s there, it works extremely well, and it’s available. Write (or borrow, legally) as much or as little Java code as you want, and take advantage of it, now. There’s nothing stopping you except perhaps your own inertia.

What is batch modernization then? There are whole books on the subject, but I’ll offer part of a definition by focusing on some key elements that I see as critical to businesses and governments. Here’s my list, not necessarily in order:

1. Batch programs concurrently executed with online transactional programs, round the clock — “concurrent batch and online” in fewer words. z/OS and its associated middleware subsystems (CICS Transaction Server, IMS Transaction Manager, WebSphere Application Server, DB2, VSAM RLS, and many others) support concurrent batch and online fabulously and have for many, many years. There are so many business use cases requiring concurrent batch and online, so please focus on addressing this gap if you still have it.

2. As mentioned, the ability to freely intermix technologies (such as programming languages, object libraries, etc.) as you maintain and functionally enhance your batch programs. IBM Language Environment, the z/OS Batch Runtime, the JZOS Batch Toolkit, and z/OS Connect are among the many technologies available to help you leverage any/every best-for-the-mission technology as your batch programs and environment naturally evolve and grow.

3. Reduction or elimination of needless data movement. Or, in other words, take steps to address the “ETL problem.” It’s often less expensive, more secure, and more reliable to perform your data analysis, reporting, and other information-related processing directly on your operational databases — or at least much closer to your operational databases. Data volumes are exploding, and if you cannot keep up with data movement requirements now using old fashioned (non-modern) deployment architectures, then the problems are only going to get worse. The IBM DB2 Analytics Accelerator (IDAA) is one fine example of many technologies that can help reduce or eliminate needless data movement. “Omit needless FTP,” basically.

4. Extremely relatedly, many of today’s batch programs are performing the role of application integration, linking two or more online/real-time transactional systems together. That’s not what batch programs ought to be doing. As you examine your inventory of batch programs — as you look at your file transfers, in particular — ask yourself (and your organization) whether those file transfer linkages still make sense (or ever made sense) given the business processes they support. Do your customers want to wait overnight for a loan approval or a new insurance policy, put in place because you thought you were saving 2 MIPS shuffling data off your mainframe to do a bit of processing elsewhere? (And no, you probably aren’t saving anything. Quite the opposite. How much of your mainframe processing is occupied by moving data? A quarter or more is not uncommon, sadly.) Do these excess file transfers make your business competitive? All too often, no. Look for these critical batch programs that are serving to link together two or more important line-of-business applications and spend some effort modernizing those, in some cases by converting those batch interactions to real-time synchronous or near real-time asynchronous application interactions. Again, IBM is literally throwing great technologies at you that’ll help, often without your having to buy anything extra. Examples include z/OS Connect, CICS Web Services, IMS Web Services, WebSphere MQ, and JSON support for CICS and IMS, among others.
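To make that last point concrete, here’s a small sketch (in Python, purely for illustration — the record layout, field names, and widths are invented, not from any real system) of the kind of conversion involved: instead of FTPing a file of fixed-width records overnight, each record becomes a JSON payload that could be sent synchronously to a service exposed through something like z/OS Connect.

```python
import json

# Hypothetical fixed-width layout for a loan-application record
# (columns invented for illustration): customer id (8), amount (9), branch (4).
def record_to_request(record: str) -> str:
    """Turn one batch-file record into a JSON payload suitable for a
    synchronous REST call (e.g. to a z/OS Connect-exposed service),
    instead of parking it in a file for tonight's transfer."""
    payload = {
        "customerId": record[0:8].strip(),
        "amount": int(record[8:17]),
        "branch": record[17:21].strip(),
    }
    return json.dumps(payload)

print(record_to_request("PAULY123000250000BR01"))
# → {"customerId": "PAULY123", "amount": 250000, "branch": "BR01"}
```

The interesting part isn’t the parsing — it’s that once the record is a request, the loan approval can happen while the customer is still on the phone, not tomorrow morning.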

5. Should you be programming at all? Programming itself isn’t particularly modern. Ada Lovelace was programming in the 1840s. Take a serious look at rules engines (e.g. IBM Operational Decision Manager for z/OS), scoring engines (IBM SPSS Real-Time Scoring for DB2 for z/OS), data format/transformation engines (e.g. IBM WebSphere Transformation Extender for z/OS), mathematical optimization engines for business (e.g. IBM CPLEX Optimizer for z/OS), IBM’s Scalable Architecture for Financial Reporting (SAFR), and the numerous other ways to effectively and dramatically cut down on code maintenance and development burden.

6. Scheduling, job management, etc. Is your idea of batch scheduling adding another cron job, or do you have a modern workload management tool that addresses increasingly sophisticated enterprise-level requirements? IBM Tivoli Workload Scheduler is an excellent example. Concurrent batch and online processing requires superb, flawless choreography to meet service levels. Do not underinvest in this area, either in people or in tools.

7. A lot of batch processing simply generates paper reports, directly or indirectly. That batch processing may still be required, but the paper typically isn’t. Move the paper into electronic formats. IBM’s Content Manager solutions for z/OS are great ways to do that, and you can pay for them quickly with the printer-related savings.

8. Technical solutions. If you have I/O-bound batch programs, have you looked at I/O acceleration techniques such as zHPF, MIDAW, HyperPAV, SSD, and even (with some careful thought) switching from synchronous cross-site storage replication to asynchronous? If you’re having problems — or could have problems — with long batch jobs that are interrupted, have you looked at checkpointing? Would BatchPipes do anything to help improve batch efficiency? (It’s easy to check.) Are your DB2 instances properly indexed? Are you retrieving DB2 data one record at a time, or are you taking advantage of DB2’s newer, more bulk-oriented query features? Are your batch programs taking inefficient code paths on every record that ought to be exceptional code paths? (Do you have the right set of application development-related tools to even know?) Though it may seem like a strange question, are you running your CPU-bound batch jobs too quickly, and thus could you set a defined capacity (“softcap”) across one or more LPARs to get more value from your batch processing environment? Or do you have extraordinary (or ordinarily extraordinary) annual or quarterly batch processing peaks that would merit implementation of On-Off Capacity on Demand (OOCoD)? Have you recompiled at least your most performance-sensitive batch programs with the latest compilers, such as Enterprise COBOL Version 5, to target the latest possible processor instruction sets? Do you have enough memory allocated to your LPARs, and have you taken steps to consume more memory when and where merited in your batch programs in order to improve efficiency, performance, and throughput? Should you be running compiled instead of interpreted REXX?
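On the checkpointing question above: the core idea is simply to record progress as you go, so an interrupted long-running job restarts from the last committed point rather than from record one. Here’s a minimal sketch in Python — the checkpoint file name, interval, and restart mechanics are all my own illustrative choices, not how any particular z/OS checkpoint/restart facility works.

```python
import json
import os

CHECKPOINT = "job.checkpoint"  # hypothetical checkpoint file name

def run_batch(records, process, checkpoint_every=1000):
    """Process records sequentially, periodically recording how far we got,
    so a restarted job resumes from the last checkpoint."""
    start = 0
    if os.path.exists(CHECKPOINT):
        # A checkpoint from an interrupted run exists: resume from it.
        with open(CHECKPOINT) as f:
            start = json.load(f)["next_record"]
    for i in range(start, len(records)):
        process(records[i])
        if (i + 1) % checkpoint_every == 0:
            with open(CHECKPOINT, "w") as f:
                json.dump({"next_record": i + 1}, f)
    if os.path.exists(CHECKPOINT):
        os.remove(CHECKPOINT)  # clean finish: nothing to restart
```

The design point that matters for real batch work is that the checkpoint must be written atomically with (or after) the committed work it describes — otherwise a restart either redoes or, worse, skips records.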

I could certainly recommend other aspects of batch modernization, but these areas are among those I see as common themes that most businesses and governments ought to focus on.

I’m keeping it light today (I’ll likely get in trouble for my tongue-in-cheek humor), but it was a fantastic opportunity to create a lesson for someone and an article for MM.

Today I received an email from a user.


TDSz is not working, keep getting this error.



This is a pet peeve of mine, but it’s common knowledge at our shop that the default HLQ (high-level qualifier) for all of our datasets is our USERID.

For example, if my USERID were PAULY123, then the HLQ would default to PAULY123 in several places in ISPF and TSO, unless I indicate otherwise with single quotes (') around the dataset name.
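The convention is simple enough to sketch. Here it is in Python, purely as illustration of the rule (TSO itself obviously doesn’t work this way internally, and the function name is mine):

```python
def resolve_dataset_name(name: str, prefix: str = "PAULY123") -> str:
    """Mimic how TSO/ISPF treat dataset names: a name in single quotes is
    taken as fully qualified; an unquoted name gets the user prefix
    (by default the USERID) prepended as the high-level qualifier."""
    if name.startswith("'") and name.endswith("'"):
        return name.strip("'")       # quoted: fully qualified, use as-is
    return f"{prefix}.{name}"        # unquoted: prefix becomes the HLQ

print(resolve_dataset_name("MEMO.TEXT"))        # → PAULY123.MEMO.TEXT
print(resolve_dataset_name("'SYS1.PARMLIB'"))   # → SYS1.PARMLIB
```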





Documentation from several ISV products, and from IBM as well, indicates this is the default.


From IBM’s “DFSMShsm Managing Your Own Data” manual:


Specifying Data Set Names

When you specify a data set name with a DFSMShsm user command, the data set name must conform to TSO data set naming conventions. The qualified name consists of the following fields:

  • Your user prefix (required); defaults to your user ID and can be redefined by using the TSO PROFILE command
  • A user-supplied name (required)
  • A descriptive qualifier (optional)

The following example shows all three fields:

  PAULY123.MEMO.TEXT
Therefore, when installing new products, receiving files, and so on, you often have to be conscious of this fact.

In addition to our USERID being the default for our HLQ, our Storage Administrators have defined datasets that begin with our USERID — magically, using SMS with their LOTR Elf magic storage ways — as a temporary storage class. Meaning at the end of the day they’re often deleted (unless specified otherwise).

As a workaround for running Tivoli Decision Support for z/OS with a local profile dataset that by default uses the USERID as the HLQ, I created a small REXX EXEC to copy the profile when needed, as described here in my post to developerWorks.

I’ve received some great solutions in the comments on that post, which I haven’t implemented yet because we’re still testing.

Hence why this user sent me the email with that error.

Their PROFILE dataset, which used their USERID as the HLQ, was being deleted EVERY day, and they were confused as to why they kept getting this message.

This portion of the message:

Initial startup for the day…
Creating temporary profile…

That “error message,” produced by a simple REXX EXEC, was my contribution. However, it obviously didn’t make clear to users what was happening.

TDSz was trying to ALLOCATE USER.PROF, which was not there… and my REXX EXEC was copying over “SOME.DATASET.PROF” to fix the problem.
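The logic of the EXEC is easy to sketch. Here it is in Python for illustration — the real thing is a REXX EXEC allocating MVS datasets, and the file names below just stand in for the placeholder dataset names used above:

```python
import os
import shutil

def ensure_profile(profile="USER.PROF", template="SOME.DATASET.PROF"):
    """If the user's profile dataset is missing (e.g. swept away overnight
    by the temporary storage class), copy the shared template into place
    and say clearly that this is expected behavior."""
    if os.path.exists(profile):
        return "Profile found -- nothing to do."
    shutil.copyfile(template, profile)
    return ("Initial startup for the day... "
            "Creating temporary profile... everything is working.")
```

The lesson of the story, of course, is the wording of that return string: the copy itself always worked, but the original terse messages read like an error, so nobody kept reading.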

HOWEVER, as soon as someone saw that MISSING DATASET error at the beginning of the screen, they would stop reading the rest of the information that followed, and I’d receive an email or phone call.

I could have… maybe should have… just accepted the fact that I would have to explain the issue each and every time, as that WOULD BE the mature and professional thing to do.

But on this Friday I thought I would have some fun with them!


Dear User,


That’s pretty serious!!!!

Looks like the BLT might be corrupted and went AWOL.

Here’s how to fix it….

Go in and DELETE….yes DELETE any datasets that have your USERID.DRL*

That [USERID].DSQPRINT can cause issues as well, so better delete that one too.

And try again…



This would make sure the copied PROF dataset was deleted, so they would receive the error again.

Before I sent this email I changed the REXX exec:

My REXX EXEC example is available here.

Now they will be greeted with this message next time they start TDSz:


Now they’ll STOP, READ, and REALIZE…everything is working.

Thank you for making my Friday!