To be clear and upfront, I’m not thrilled with the term “batch modernization” — or at least I’m concerned the term is misinterpreted. I’ll now stand on a soapbox and offer some hopefully insightful views on batch computing and what businesses and governments ought to be doing — and not doing.

First of all, there’s a school of thought that believes that “batch modernization” means you write lots of code in Programming Language J to recreate, usually badly, functional code already written in Programming Language C or Programming Language P. Then you get to do the same thing all over again when Programming Language Q debuts. You might be forgiven for having that misunderstanding. IBM’s “batch modernization” (BM?) publications, for example, aren’t always careful in explaining that what I just described isn’t BM. (Or maybe it is a very different type of BM!)

No, that isn’t batch modernization. I can’t think of anything modern — or particularly smart — in reimplementing something that already works. There’s nothing modern about making busy work.

That said, if you want to write batch programs in Programming Language J (or C, or P, or XYZ) — or if you want to extend and enhance the batch programs you already have using a second or third or fifth programming language — go for it! Java, for example, is a programming language and runtime environment provided with z/OS at no additional charge. It’s there, it works extremely well, and it’s available. Write (or borrow, legally) as much or as little Java code as you want, and take advantage of it, now. There’s nothing stopping you except perhaps your own inertia.

What is batch modernization, then? There are whole books on the subject, but I’ll offer part of a definition by focusing on some key elements that I see as critical to businesses and governments. Here’s my list, not necessarily in order:

1. Batch programs concurrently executed with online transactional programs, round the clock — “concurrent batch and online” in fewer words. z/OS and its associated middleware subsystems (CICS Transaction Server, IMS Transaction Manager, WebSphere Application Server, DB2, VSAM RLS, and many others) have supported concurrent batch and online fabulously for many, many years. So many business use cases require concurrent batch and online that, if you still have this gap, closing it should be a priority.

2. As mentioned, the ability to freely intermix technologies (such as programming languages, object libraries, etc.) as you maintain and functionally enhance your batch programs. IBM Language Environment, the z/OS Batch Runtime, the JZOS Batch Toolkit, and z/OS Connect are among the many technologies available to help you leverage any/every best-for-the-mission technology as your batch programs and environment naturally evolve and grow.

3. Reduction or elimination of needless data movement. Or, in other words, take steps to address the “ETL problem.” It’s often less expensive, more secure, and more reliable to perform your data analysis, reporting, and other information-related processing directly on your operational databases — or at least much closer to your operational databases. Data volumes are exploding, and if you cannot keep up with data movement requirements now using old-fashioned (non-modern) deployment architectures, then the problems are only going to get worse. The IBM DB2 Analytics Accelerator (IDAA) is one fine example of many technologies that can help reduce or eliminate needless data movement. “Omit needless FTP,” basically.

4. Extremely relatedly, many of today’s batch programs are performing the role of application integration, linking two or more online/real-time transactional systems together. That’s not what batch programs ought to be doing. As you examine your inventory of batch programs — your file transfers, in particular — ask yourself (and your organization) whether those file transfer linkages still make sense (or ever made sense) given the business processes they support. Do your customers want to wait overnight for a loan approval or a new insurance policy, put in place because you thought you were saving 2 MIPS by shuffling data off your mainframe to do a bit of processing elsewhere? (And no, you probably aren’t saving anything. Quite the opposite. How much of your mainframe processing is occupied by moving data? A quarter or more is not uncommon, sadly.) Do these excess file transfers make your business competitive? All too often, no.

Look for the critical batch programs that serve to link together two or more important line-of-business applications, and spend some effort modernizing those, in some cases by converting batch interactions into real-time synchronous or near-real-time asynchronous application interactions. Again, IBM is literally throwing great technologies at you that’ll help, often without your having to buy anything extra. Examples include z/OS Connect, CICS Web Services, IMS Web Services, WebSphere MQ, and JSON support for CICS and IMS, among others.

5. Should you be programming at all? Programming itself isn’t particularly modern; Ada Lovelace was programming in the 1840s. Take a serious look at rules engines (e.g. IBM Operational Decision Manager for z/OS), scoring engines (IBM SPSS Real-Time Scoring for DB2 for z/OS), data format/transformation engines (e.g. IBM WebSphere Transformation Extender for z/OS), mathematical optimization engines for business (e.g. IBM CPLEX Optimizer for z/OS), IBM’s Scalable Architecture for Financial Reporting (SAFR), and the numerous other ways to effectively and dramatically cut down on code maintenance and development burden.

6. Scheduling, job management, etc. Is your idea of batch scheduling adding another cron job, or do you have a modern workload management tool that addresses increasingly sophisticated enterprise-level requirements? IBM Tivoli Workload Scheduler is an excellent example. Concurrent batch and online processing requires superb, flawless choreography to meet service levels. Do not underinvest in this area, either in people or in tools.

7. A lot of batch processing simply generates paper reports, directly or indirectly. That batch processing may still be required, but the paper typically isn’t. Move the paper into electronic formats. IBM’s Content Manager solutions for z/OS are great ways to do that, and you can pay for them quickly with the printer-related savings.

8. Technical solutions. If you have I/O-bound batch programs, have you looked at I/O acceleration techniques such as zHPF, MIDAW, HyperPAV, SSD, and even (with some careful thought) switching from synchronous cross-site storage replication to asynchronous? If you’re having problems — or could have problems — with long batch jobs that are interrupted, have you looked at checkpointing? Would BatchPipes do anything to help improve batch efficiency? (It’s easy to check.)

Are your DB2 instances properly indexed? Are you retrieving DB2 data one record at a time, or are you taking advantage of DB2’s newer, more bulk-oriented query features? Are your batch programs taking inefficient code paths on every record that ought to be exceptional code paths? (Do you have the right set of application development-related tools to even know?)

Though it may seem like a strange question, are you running your CPU-bound batch jobs too quickly, and thus could you set a defined capacity (“softcap”) across one or more LPARs to get more value from your batch processing environment? Or do you have extraordinary (or ordinarily extraordinary) annual or quarterly batch processing peaks that would merit implementation of On-Off Capacity on Demand (OOCoD)?

Have you recompiled at least your most performance-sensitive batch programs with the latest compilers, such as Enterprise COBOL Version 5, to target the latest possible processor instruction sets? Do you have enough memory allocated to your LPARs, and have you taken steps to consume more memory when and where merited in your batch programs in order to improve efficiency, performance, and throughput? Should you be running compiled instead of interpreted REXX?

I could certainly recommend other aspects of batch modernization, but these areas are among those I see as common themes that most businesses and governments ought to focus on.

JCL (Job Control Language)


Whenever new mainframers reference JCL on the “interweb” or attend an in-depth, overpriced JCL course, it seems that Job Control Language is presented as something more difficult than it really is.

I blame this difficulty on not being fluent in what I call ‘IBM’rish’ — a word I just made up for the cryptic language IBMers and IBM manuals may use. Signs of those fluent in IBM’rish include robot dance moves and entire conversations conducted using only acronyms.

(Yes, I’ve been accused of speaking IBM’rish.)

(The great thing about JCL is how little it changes. I use an old reference book, "System 370 Job Control Language" by Gary DeWard Brown, that's been passed around at work over the years. It was published in 1977 and it's still relevant. For a more up-to-date reference, try googling "z/os mvs jcl reference".)

The JCL Structure

In its rawest form, JCL can be broken down into these 3 basic parts:

  1. JOB STATEMENT (also known as the “JOB CARD”)
  2. EXEC (Execute) STATEMENT
  3. DD (Data Definition) STATEMENT

I know there are veterans out there who will protest in rage that JCL is not as simple as that, but let’s examine some code and decide for ourselves.

Here is a simple JCL job that uses TRSMAIN, a program that will ‘TERSE’ files using IBM’s own compression format.  Think of this as ‘.ZIP’ files, but for the mainframe.

Here’s the layout, with the annotations that matter:

  1. On the JOB card, REGION= sets the region storage size for the steps; lines 000004 and 000005 are comments.
  2. PARM=PACK means compress the files.
  3. The SYSPRINT Data Definition defines where all messages from the program are sent.
  4. Line 000013, ‘/*’, is the delimiter: the “end-of-file” statement.
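Putting those annotations together, a minimal TRSMAIN PACK job can be sketched roughly as follows. Only the program name, PARM=PACK, the dataset names, and the annotated lines above are taken from the original; the JOB card parameters, the step name, the INFILE/OUTFILE DD names (TRSMAIN’s conventional input and output DDs), and the space allocation are assumptions:

```jcl
000001 //USER123T JOB (ACCT),'TERSE PACK',CLASS=A,MSGCLASS=X,
000002 //             REGION=0M
000003 //*
000004 //* COMMENT
000005 //* COMMENT
000006 //PACK     EXEC PGM=TRSMAIN,PARM=PACK
000007 //SYSPRINT DD SYSOUT=*
000008 //INFILE   DD DISP=SHR,DSN=BFCU.TOMZOS.ICAT.INSTLIB
000009 //OUTFILE  DD DSN=USER123.BFCU.TOMZOS.ICAT.INSTLIB.PACKED,
000010 //            DISP=(NEW,CATLG,DELETE),
000011 //            UNIT=SYSDA,
000012 //            SPACE=(CYL,(10,10),RLSE)
000013 /*
```

All three basic parts are visible: one JOB card, one EXEC invoking TRSMAIN with PARM=PACK, and DD statements naming the message destination, the input library, and the output file.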

Nothing too complicated.  This job will compress the ‘BFCU.TOMZOS.ICAT.INSTLIB’ library into a single file called ‘USER123.BFCU.TOMZOS.ICAT.INSTLIB.PACKED’.

Then this single file could be FTP’d to IBM or somewhere else.

After a ‘Submit’ (or ‘SUB’ on the command line), the job is sent to JES2 (Job Entry Subsystem), and if there are no errors or syntax mistakes it will run successfully with an RC (Return Code) = CC (Condition Code) of 0000.

Here’s the unreadable output resulting from this job:

[listing of the job’s output]
A similar job to ‘UNPACK’ this TERSE file would be necessary.

[listing of the UNPACK job’s JCL]

Whoa!!!  Something is different in this job!

We’ve stepped it up a notch: we still have the same basics (JOB CARD, EXEC, DD), but now we’ve added some JCL symbols and a procedure (PROC).

Lines of interest:

000006 // SET HLQ='USER123'
000008 //UNPKSS PROC DIR=',400'
000013 //          UNIT=SYSDA,SPACE=(TRK,(300,300&DIR),
000015 // PEND

Basically, the flow of this job is:

  1. start the JOB at the JOBCARD on line 000001
  2. EXECute the procedure (PROC) UNPKSS on line 000017
  3. PROC UNPKSS begins on line 000008 and ends on line 000015
  4. PROC UNPKSS uses the &HLQ. value set on line 000006, which is ‘USER123’
  5. the PROC also references the parameter passed on line 000017, NSMPIN='BFCU.TOMZOS.ICAT.INSTLIB'
  6. line 000011 is translated to: USER123.BFCU.TOMZOS.ICAT.INSTLIB.PACKED
  7. line 000012 is translated to: USER123.BFCU.TOMZOS.ICAT.INSTLIB.UNPACKED

You’ll notice lines 000018 to 000025 were commented out (‘//*’). If there were more files that needed to be UNPACKED under different names, the comment markers (‘//*’) would be replaced with ‘//’, and each statement would run with the PARMs passed, as reflected in the UNPKSS PROC.
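Pieced together from the lines of interest and the flow described above, the whole job plausibly looks something like this. Only lines 000006, 000008, 000013, 000015, and 000017, plus the dataset names, come from the original; the JOB card, the comment lines, the DD names inside the PROC, the PARM=UNPACK spelling, and the contents of the commented-out steps (including the ‘ANOTHER.INPUT.LIBRARY’ name) are assumptions:

```jcl
000001 //USER123U JOB (ACCT),'TERSE UNPACK',CLASS=A,MSGCLASS=X
000002 //*
000003 //* COMMENT
000004 //* COMMENT
000005 //*
000006 // SET HLQ='USER123'
000007 //*
000008 //UNPKSS PROC DIR=',400'
000009 //UNPACK   EXEC PGM=TRSMAIN,PARM=UNPACK
000010 //SYSPRINT DD SYSOUT=*
000011 //INFILE   DD DISP=SHR,DSN=&HLQ..&NSMPIN..PACKED
000012 //OUTFILE  DD DSN=&HLQ..&NSMPIN..UNPACKED,
000013 //          UNIT=SYSDA,SPACE=(TRK,(300,300&DIR),
000014 //          RLSE),DISP=(NEW,CATLG,DELETE)
000015 // PEND
000016 //*
000017 //STEP1    EXEC UNPKSS,NSMPIN='BFCU.TOMZOS.ICAT.INSTLIB'
000018 //*STEP2   EXEC UNPKSS,NSMPIN='ANOTHER.INPUT.LIBRARY'
000019 //* ...further commented-out steps through line 000025...
```

Note how the symbols resolve on line 000011: &HLQ. becomes USER123 and &NSMPIN. becomes BFCU.TOMZOS.ICAT.INSTLIB, yielding USER123.BFCU.TOMZOS.ICAT.INSTLIB.PACKED, while the DIR=',400' default turns line 000013's SPACE request into (TRK,(300,300,400)) — 400 directory blocks for the output library.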

Hopefully this gives you a taste of the power and simplicity of JCL.

Happy coding and job submitting!