JCL (Job Control Language)


Whenever new mainframers reference JCL on the “interweb” or attend an in-depth, overpriced JCL course, it seems that Job Control Language is presented as something more difficult than it really is.

I blame this difficulty on not being fluent in what I call ‘IBM’rish’ — a word I just made up for the cryptic language IBMers and IBM manuals may use. Signs of fluency in IBM’rish include robot dance moves and entire conversations held using only acronyms.

(yes, I’ve been accused of IBM’rish)

(The great thing about JCL is that I still use an old reference book, "System 370 Job Control Language" by Gary DeWard Brown, that's been passed around at work over the years. It was published in 1977 and it's still relevant. For a more up-to-date reference, try googling "z/os mvs jcl reference".)

The JCL Structure

In its rawest form, JCL can be broken down into these three basic parts:

  1. JOB CARD / PROC NAME
  2. EXEC STATEMENT
  3. DD (Data Definition) STATEMENT

I know there are several veterans out there who will protest in rage that JCL is not as simple as that, but let’s examine some code and decide for ourselves.

Here is a simple JCL job that uses TRSMAIN – this program will ‘TERSE’ files using IBM’s own compression format. Think of it as ‘.ZIP’ files, but for the mainframe.

Here’s the layout:

Lines:

000001 //JOBNAME  JOB ACCOUNTING_INFO,NOTIFY_USER,        
000002 //            CPU_TIME_LIMIT,MESSAGE_OUTPUT_CLASS, 
...                  REGION_STORAGE_SIZE_FOR_STEPS
000003 //LABEL    OUTPUT JES_OUTPUT_LOCATIONS 
000004 //* COMMENT
000005 //* COMMENT

000006 //TERSE    EXEC PGM=TRSMAIN,PARM=PACK 

...           'PARM=PACK means compress the files'

000007 //SYSPRINT DD LOCATION

...           'This Data Definition defines where all 
               messages from the program are sent.'    

000008 //INFILE   DD THE_FILE_LOCATION_AND_SHARING_DETAILS
000009 //OUTFILE  DD THE_NEW_OUTPUT_FILE_LOC_AND_DETAILS
...
000013 /*     'The delimiter.  The "end-of-file" statement'

Nothing too complicated.  This job will compress the ‘BFCU.TOMZOS.ICAT.INSTLIB‘ library to a single file called ‘USER123.BFCU.TOMZOS.ICAT.INSTLIB.PACKED‘.

Then this single file could be FTP’d to IBM or somewhere else.

After ‘Submit‘, or ‘SUB‘ on the command line above, the job is sent to JES2 (Job Entry Subsystem), and if there are no errors or syntax mistakes it will run successfully with an RC (Return Code) = CC (Condition Code) 0000.

Here’s the unreadable output resulting from this job: browsing the new packed dataset shows only binary data, which is exactly what you’d expect from compressed output.

JCL PROCS and SYMBOLS

A similar job is necessary to ‘UNPACK‘ this TERSE file.

Whoa!!!  Something is different in this job!

We’ve stepped it up a notch: we still have the same basics (JOB CARD, EXEC, DD), but we’ve added some JCL symbols and a procedure (PROC).

Lines of interest:

000006 // SET HLQ='USER123'
...
000008 //UNPKSS PROC DIR=',400'
...
000011 //INFILE   DD DISP=SHR,DSN=&HLQ..&NSMPIN..PACKED
000012 //OUTFILE  DD DISP=(NEW,CATLG),DSN=&HLQ..&NSMPIN..UNPACKED,
000013 //            UNIT=SYSDA,SPACE=(TRK,(300,300&DIR),RLSE)
...
000015 // PEND
...
000017 //F01 EXEC UNPKSS,NSMPIN='BFCU.TOMZOS.ICAT.INSTLIB'

Basically, the flow of this job is:

  1. start JOB at the JOBCARD on line 000001
  2. EXECute procedure (PROC) UNPKSS on line 000017
  3. PROC UNPKSS begins on line 000008 and ends on line 000015
  4. PROC UNPKSS resolves &HLQ. to the value set on line 000006, which is ‘USER123’
  5. the PROC also references the PARM passed on line 000017, NSMPIN='BFCU.TOMZOS.ICAT.INSTLIB'
  6. line 000011 is translated to:  USER123.BFCU.TOMZOS.ICAT.INSTLIB.PACKED
  7. line 000012 is translated to: USER123.BFCU.TOMZOS.ICAT.INSTLIB.UNPACKED
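The z/OS interpreter does that substitution for you, but the rule behind the double dots trips people up: a trailing ‘.’ ends a symbol and is consumed along with it. Here’s a rough Python sketch of that rule — a toy model for illustration, not how JES2 actually implements it:

```python
import re

def resolve_symbols(statement, symbols):
    """Resolve JCL symbolic parameters like &HLQ. in a statement.

    A symbol is &NAME, optionally terminated by a single '.', which
    is consumed during substitution -- which is why
    DSN=&HLQ..&NSMPIN..PACKED needs the double dots.
    """
    return re.sub(
        r"&([A-Z@#$][A-Z0-9@#$]*)\.?",
        lambda m: symbols[m.group(1)],
        statement,
    )

symbols = {"HLQ": "USER123", "NSMPIN": "BFCU.TOMZOS.ICAT.INSTLIB"}
print(resolve_symbols("&HLQ..&NSMPIN..PACKED", symbols))
# USER123.BFCU.TOMZOS.ICAT.INSTLIB.PACKED
```

The same rule explains the DIR=',400' trick in the PROC: substituting it into (300,300&DIR) yields (300,300,400).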

You’ll notice lines 000018 to 000025 were commented out (‘//*‘).  If there were a list of files that needed to be UNPACKED but with different names, the comments (‘//*’) would be replaced with ‘//’ and the statements would run with the PARMs passed, as reflected in the UNPKSS PROC.

Hopefully this gives you a taste of the power and simplicity of JCL.

Happy coding and job submitting!

Transferring files via FTP to z/OS seems to cause quite a bit of trouble for new Millennial Mainframers. This is partly because sending and receiving files to and from the mainframe is not very intuitive. Fortunately, there are a few ways to FTP.

One of the easiest ways is using IBM’s Personal Communications (or PCOMM). PCOMM is a 3270 terminal emulator, a client-side application that many mainframers use to communicate with the host. In some ways, PCOMM is a software implementation of the Ninjutsu (忍術) technique Hensōjutsu (変装術), which involves disguise, impersonation, and infiltration. From the perspective of the mainframe, z/OS thinks that it’s communicating with a traditional 3270 “green screen” terminal. In reality, the mainframe is communicating with a MacBook Air or Microsoft Surface in disguise.

One of the great things about PCOMM is that it has FTP functionality built right into it. That’s right! PCOMM’s got additional Ninjutsu (忍術) powers!

In this case, PCOMM uses a host file-transfer program called IND$FILE to practice a technique of file transfer resembling Shurikenjutsu (手裏剣術). However, instead of throwing deadly weapons at the heart of your enemies, IND$FILE throws your choice of files into the heart of the z/OS filesystem!

If you’d like to learn more about the secretive art of IND$FILE, check out this great document from GSF Software.

For more general information about PCOMM, go here.

Send File to Host Example

Let’s go ahead and send a file to the mainframe. In this case, we’re going to send up a snapshot of the Earth taken by an Astronaut on the Apollo 17 mission. Since the System/360 mainframe was a key technology used by NASA to send men to the moon, this is pretty appropriate!

To follow along, grab this photo (“The_Earth_seen_from_Apollo_17.jpg”) from Wikipedia.

To start the transfer,  go to the “ISPF Command Shell” (Option 6 from the ISPF Primary Option Menu). Once on the panel, make sure that the cursor is on the beginning of the command line.

Then click on Actions -> Send File to Host…

This will open up Send Files to Host dialog.

Click on the ‘Browse…’ to select the Earth photo from your PC.

You’ll notice by default that the dialog automatically generates a Host File Name based on the file name appended with “bin” to signify that this is a binary file.  However, this will not work, as “The_Earth_seen_from_Apollo_17.jpgbin” does not follow z/OS dataset naming conventions.

According to convention, a z/OS dataset (file) name is a series of strings, each between one and eight characters, separated by dots (“.”).  In the mainframe world, these strings are called “qualifiers.” The leftmost or beginning string is the dataset’s “high-level qualifier.” The subsequent middle strings are “middle-level” qualifiers. The rightmost or last string is the dataset’s “low-level qualifier.”

For example, in the z/OS dataset PAULY01.WORK.CLIBS:

  • PAULY01 is the High-Level Qualifier
  • WORK is a Middle-Level Qualifier
  • CLIBS is the Low-Level Qualifier

Oftentimes, the high-level qualifier is shared between numerous datasets to act as a form of organization similar to a directory.  The low-level qualifier is often used to show the type of data stored in the dataset, similar to a file extension.
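These rules are easy to sanity-check in code. Here’s a simplified Python check — it skips a few edge cases (e.g. hyphens and PDS member names), so treat it as a sketch of the convention rather than the full rule set:

```python
import re

# One qualifier: 1-8 chars, starting with a letter or national
# character (@, #, $), followed by alphanumerics or nationals.
QUALIFIER = re.compile(r"[A-Z@#$][A-Z0-9@#$]{0,7}$")

def is_valid_dataset_name(name):
    """Check a z/OS dataset name: dot-separated qualifiers, 44 chars max."""
    if len(name) > 44:
        return False
    parts = name.upper().split(".")
    return all(QUALIFIER.match(p) for p in parts)

print(is_valid_dataset_name("PAULY01.WORK.CLIBS"))                    # True
print(is_valid_dataset_name("The_Earth_seen_from_Apollo_17.jpgbin"))  # False
```

This is exactly why the auto-generated “…jpgbin” name in the dialog gets rejected: the first qualifier is far longer than eight characters and contains underscores.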

To correct this naming issue, rename the photo to ‘[USERID].EARTH’, where USERID is your current z/OS user ID. Also make sure that “binary” is selected in the Transfer Type drop-down box, since this is a binary file (.jpg) and not a text file.

It is critical to select “binary” because z/OS uses EBCDIC for character encoding, which is different from the ASCII scheme used by other platforms. Selecting “text” in the Transfer Type drop-down box causes the file to be translated between the two schemes to ensure that the text is readable by mainframe tools such as the ISPF Editor. If you select “text” for your image, the binary data in the .jpg photo of the Earth will become scrambled and unreadable.

Once the dialog is filled out, click “Update in List” to update the transfer file changes.

Finally, click “Send” to initiate the transfer.  The pop-up window will disappear when the transfer is complete.

Let’s check to make sure that our photo actually made it into z/OS. Go back to the “ISPF Primary Option Menu” and select option “3 Utilities” and “4 Dslist” to enter the DSList utility.  Once in DSList, enter [USERID].EARTH in the panel and hit Enter. We can now see our file in z/OS.

We can go ahead and browse the dataset, but we’ll only see the JPG binary data associated with the Apollo 17 snapshot of Earth. Since z/OS doesn’t have a GUI like Windows, Linux, or Mac OS X, there is no way to view the photo directly.

FTP with Windows’ FTP.exe

Now that we’ve uploaded the binary data of “Earth.jpg” to the z/OS dataset named [USERID].EARTH, let’s go ahead and download the file back to our PC using the Microsoft FTP utility so that we can make sure the picture is intact.

To do this, open up a DOS prompt (Start > Run > cmd) and enter ftp to start ftp.exe.

Then enter the ftp command “open” followed by the IP address or DNS name of your mainframe.

If you don’t know this, then you’ll have to talk with your network admin to find out the IP or DNS name (or if it’s even activated).

In this example, we connected to the host at “server.ca”

After connecting to the mainframe, enter your login userid (BBBBBPG) and password.

Notice that upon connecting, your “working directory” is set to BBBBBPG. This “working directory” is really the High-Level Qualifier of the datasets in the filesystem. Since a common z/OS convention is to set the high level qualifier of datasets to the USERID that owns the dataset, this is logical.

Similar to when we uploaded the photo, change the transfer type to “binary” by entering the FTP command ‘bin’

Finally, receive the file from the host by issuing ‘get earth’

Notice the message ‘Sending data set BBBBBPG.EARTH.’

At this point, the transfer is successful. The file is now on our local hard drive. However, the High-Level Qualifier of the file has been stripped off, so the file is called “earth” without a file extension.

To help your operating system understand that this file contains picture data, rename the file to add the “.jpg” file extension in the DOS prompt with the command “rename earth earth.jpg”

Now we can open the file in a photo viewer.  Success!  The entire Earth is there.
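The whole manual session above — open, login, bin, get — can also be scripted with Python’s standard-library ftplib. This is a sketch: the host name and credentials are placeholders, and you’d substitute your own.

```python
from ftplib import FTP

def fetch_earth(host, user, password, local_path="earth.jpg"):
    """Mirror the manual ftp.exe session: connect, log in,
    switch to binary, and download the EARTH dataset.
    host/user/password here are placeholders, not real values."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        # Equivalent of typing 'bin': image (binary) transfer type.
        ftp.voidcmd("TYPE I")
        # Equivalent of 'get earth' -- the working directory is the
        # user's HLQ, so 'earth' resolves to USERID.EARTH.
        with open(local_path, "wb") as f:
            ftp.retrbinary("RETR earth", f.write)
```

Handy once you find yourself shuttling more than one dataset back and forth.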

From coast to coast and continent to continent, you are now the Mainframe Ninja!

Paul Gamble
Graduate of Georgian College, Computer Programmer Analyst program

After doing a four-month internship with the Canadian Government, Paul was offered the rare opportunity to work in their on-site Data Centre. He started out as an Operator for four years before becoming a Systems Programmer, rolling out products from different independent software vendors (ISVs) and various IBM Tivoli products for monitoring and automation on the z/OS platform. Paul enjoys the constant learning the z/OS operating system offers in comparison to other platforms. On weekends you’ll find Paul getting his adrenalin fix by instructing and coaching at the local skydiving drop-zone.
Connect with Paul on LinkedIn

“You do not really understand something unless you can explain it to your grandmother.” 
-Albert Einstein

Here’s my attempt to explain DB2 to my grandma!

With the world heading towards yottabytes of data, I assume we all understand that data is any raw, unorganized facts that need further processing. We obtain data from numerous sources such as websites, retail stores and social media (this is a big one), to name a few. There is so much user-generated content of late that companies are struggling to:

  1. Manage the amount of data 
  2. Organize the data to gain intelligence 
  3. Make use of the information to gain business value.

How do we manage & organize data?

With all the data stored in databases, we need a Database Management System in place to create, maintain and control the databases on a regular basis. DB2 is a Database Management System and, more specifically, a Relational Database Management System (RDBMS).

For those of you who are new to databases, here’s a list of top 5 reasons to use a Database Management System (DBMS):

  • It creates, accesses & manages data efficiently
  • It provides a greater level of security by granting different levels of security access based on the user
  • It allows multiple programs to access the data simultaneously while still keeping the data consistent (Data Integrity)
  • It permits the hiding of confidential data in separate views to prevent unauthorized access
  • It comes with a number of tools to perform activities like performance tuning, backup & recovery of data, implement tighter security controls and so on 
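Two of those points — data integrity and views that hide confidential data — are easy to see in action with any small RDBMS. Here’s a toy illustration using SQLite from Python’s standard library; the SQL is generic, not DB2-specific, and the table and column names are made up for the example:

```python
import sqlite3

# An in-memory database with an integrity constraint (NOT NULL)
# and a view that exposes only the non-confidential columns.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE customers (
    id      INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    card_no TEXT NOT NULL)""")
conn.execute("INSERT INTO customers VALUES (1, 'ADA', '4111-1111-1111-1111')")

# Users granted access only to this view never see card numbers.
conn.execute("CREATE VIEW customers_public AS SELECT id, name FROM customers")

print(conn.execute("SELECT * FROM customers_public").fetchall())
# [(1, 'ADA')]
```

In a real DBMS like DB2, the same idea scales up: the administrator grants users access to views rather than base tables, and the engine enforces the constraints no matter which program does the writing.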

Fun facts about DB2

  • Top 59 banks in the world use DB2
  • 23 out of top 25 retailers use DB2
  • 9 out of top 10 insurance providers use DB2
  • IBM has over 1600 core developers and researchers for DB2 across the world
  • DB2 runs on many operating systems, such as z/OS, IBM i, Linux, UNIX, Windows, and Solaris 

You might be wondering why some of these large organizations use DB2 for z/OS to run their business. The answer is simple. These companies need a reliable database server which provides superior availability & scalability. These companies process huge (and I mean HUGE) volumes of information that would require millions of concurrent updates on a daily basis. DB2 with the power & functionalities of z/OS has undoubtedly met/exceeded those needs.

11 Things you need to know about DB2

  1. DB2 Data Sharing – DB2 data sharing runs on an IBM mainframe cluster configuration called a parallel sysplex and enables applications that run on more than one DB2 for z/OS subsystem to read from and write to the same set of data concurrently. Data Sharing gives businesses the ability to add new DB2 subsystems into a data sharing group, or cluster, as the need arises and without disruption which makes it the most highly available, highly scalable data-serving platform in the market. 
  2. IDAA – DB2 for z/OS offers the IBM DB2 Analytics Accelerator (IDAA), which provides an excellent query-performance solution for Data Warehousing and Analytics (remember we talked about analyzing data to make sense of it). 
  3. Application Portability – You can develop applications with SQL that are common across DB2 data servers and port them from one DB2 operating system to another with minimal modification. (Porting means moving an application from one operating system to another.) 
  4. Temporal Data – Many IT systems need to keep some form of historical information for their data, and this is now possible with the latest version of DB2, which supports temporal data (business time & system time). With the use of a history table, DB2 can now provide the value of data at a specific time in history. 
  5. Built-in Compression – DB2 has built-in compression technology which replaces frequently occurring strings in your data with shorter ones for efficient storage & retrieval. The mapping between the short strings and the actual data is stored in a compression dictionary, which is used for decompressing the data. 
  6. Built-in Encryption – DB2 provides built-in data encryption and decryption functions that you can use to encrypt sensitive data, such as credit card numbers and medical record numbers. You can encrypt data at the column or value level. 
  7. Clone Tables – you can create clone tables in DB2 that are exact replicas of your base table, and you can switch between the clones easily. This is critical for businesses that cannot tolerate any downtime, since clone tables ensure superior availability. 
  8. Large Objects – DB2 can store Large Objects (LOBs), which means storing a picture or a text document as a single object in the database. 
  9. Storing XML – You can also store XML in your DB2 database. It has built-in functions to convert the stored XML and return it as a DB2 table to your application. 
  10. DB2 Express-C is a community edition available from IBM at no charge 
  11. DB2 Tools – DB2 comes with a set of tools that are broadly classified in the following six categories. You will be hearing more about some of the DB2 Tools in our future posts. 
    • Database Administration 
    • Utility Management 
    • Performance Management 
    • Recovery Management 
    • Replication Management 
    • Application Management 
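The dictionary compression from point 5 is easier to picture with a toy sketch. The snippet below is only an illustration of the idea — frequent strings swapped for short tokens, with the dictionary kept alongside for decompression — and bears no resemblance to DB2’s actual, far more sophisticated algorithm:

```python
# A hand-built dictionary mapping frequent strings to 1-byte tokens.
# In DB2 this dictionary is built automatically from the data itself.
dictionary = {"INTERNATIONAL BUSINESS MACHINES": "\x01", "TORONTO": "\x02"}

def compress(row):
    for phrase, token in dictionary.items():
        row = row.replace(phrase, token)
    return row

def decompress(row):
    for phrase, token in dictionary.items():
        row = row.replace(token, phrase)
    return row

row = "INTERNATIONAL BUSINESS MACHINES, TORONTO"
packed = compress(row)
print(len(packed) < len(row))        # True: shorter on disk
print(decompress(packed) == row)     # True: lossless
```

The win comes from the fact that business data is highly repetitive — the same company names, cities and codes appear in row after row.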

Phew!! That’s enough DB2 for today. I enjoyed writing about DB2 on Z as much as you did reading. So the feeling is mutual here. One last thought! We at Millennial Mainframers are fairly new to the Z world and we are learning Z just as you are. You call the shots if you would like to hear about something in specific and we would be more than happy to step up to the plate. Come back and see us for more cool information on how Z rules the world always & forever! ~ Riya ~

About the Author

Sri (Riya) Shanmugam
MBA. Entrepreneurship, McCoy School of Business, Texas
B.Tech. I.T., Amrita School of Engineering, India

Riya is a Product Specialist at IBM by profession and a nerd by choice. She closely follows the start-up world, believes that Rome was not built in a day, loves technology, fashion, food, interior decor, Jazzercise, long weekends and oh, did we say Fashion?? Her new crush is her Canon SLR and she is extremely psyched about Z. She is a Sun Certified Java Programmer and has been involved in all parts of Software Development Life Cycle (SDLC) at Infosys, AMD, IBM and an Austin based start-up. Here’s how you find her on twitter. @RiyaKSri