It’s July 17, 2017. Happy New Mainframe Day, everyone!

Like clockwork, at about 12:01 a.m. New York time, IBM issued this press release to kick off New Mainframe Day. I’ll have more details and comments to offer in due course. In the meantime, it’s clear what IBM is emphasizing: security. The new IBM z14 is designed to encrypt everything, all the time, in multi-layered fashion. The world is far more dangerous than ever, with businesses (and arguably even some governments) literally dying because of security breaches. This new machine hugely helps.

If you have a mainframe, fantastic! Please keep it current. Use it more effectively and more fully to protect your business (or government) from current and emerging threats. Put your mainframe in full charge of protecting your organization’s vital information and associated information services. And get on board the IBM z14 as soon as you can.

If you don’t have a mainframe, now’s the time to take a fresh look if you want to survive. That might be a mainframe in the cloud (such as IBM’s High Security Business Network for Blockchain) or in a hybrid cloud deployment. Either way, IBM Z is a big part of the answer to the world’s information security calamities.

There are a couple of speed bumps on the computing horizon: the years 2038 and 2042. Specifically, January 19, 2038, and September 17, 2042 (UTC dates). How dangerous these speed bumps will be depends on you, dear Millennial Mainframers. Maybe your parents had fun (or “fun”) getting ready for Y2K. Now it’s your turn to help keep the modern world modern.

The 2038 problem is rooted in the classic UNIX time epoch. UNIX and UNIX-like operating systems, and more than a few UNIX-influenced programmers, decided to represent time in the form of a 32-bit signed integer. This integer (time_t) expresses the number of seconds since 00:00:00 UTC on January 1, 1970. (Bell Labs gave birth to UNIX close to that date, and 32 bits seemed like enough back then.) Negative integers are allowed and represent times and dates before 1970. That 32-bit signed integer can represent time up through and including 03:14:07 UTC on January 19, 2038. One second later, the 32-bit time_t wraps around to a negative value representing the very early part of the 20th century (December 13, 1901, to be precise). Which won’t be so much fun in 2038, of course.
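If you’d like to see the wrap for yourself, here’s a minimal sketch in C. It assumes a build where time_t is 64 bits and the C library can format pre-1970 dates (as glibc on 64-bit Linux can), so both sides of the wrap can be printed safely.

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void)
{
    /* The largest and smallest values a 32-bit signed time_t can hold. */
    int32_t last_second = INT32_MAX;   /* 2,147,483,647 seconds after the epoch */
    int32_t wrapped     = INT32_MIN;   /* what "one second later" becomes after overflow */

    /* Widen to the native (presumed 64-bit) time_t so gmtime() can render both. */
    time_t t_last = (time_t)last_second;
    time_t t_wrap = (time_t)wrapped;

    char buf[64];
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&t_last));
    printf("last representable 32-bit time: %s\n", buf);   /* 2038-01-19 03:14:07 UTC */

    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&t_wrap));
    printf("one second later it reads as:   %s\n", buf);   /* 1901-12-13 20:45:52 UTC */

    return 0;
}
```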

The 2042 problem is conceptually similar but is, as far as I know, unique to IBM mainframes. In that case a 64-bit unsigned integer counting time in units of 2⁻¹² microseconds since January 1, 1900, will wrap.
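If you like checking the arithmetic, here’s a quick back-of-the-envelope sketch in C: 2⁶⁴ ticks of 2⁻¹² microseconds each works out to 2⁵² microseconds, or roughly 143 years of capacity, and 1900 plus about 143 years lands in September 2042.

```c
#include <stdio.h>

int main(void)
{
    /* 2^64 ticks of 2^-12 microseconds each = 2^52 microseconds total. */
    double total_seconds = (double)(1ULL << 52) / 1e6;
    double total_years   = total_seconds / (365.2425 * 86400.0);

    printf("64-bit TOD clock capacity: about %.0f seconds (~%.1f years)\n",
           total_seconds, total_years);
    printf("Counting from January 1, 1900, that capacity runs out in September 2042.\n");
    return 0;
}
```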

These two problems are quite close together chronologically, so you ought to view them as two parts of the same basic problem. Fortunately, the 2042 problem should be easier to avoid. IBM expanded the time-of-day (TOD) clock in mainframe hardware starting with the Generation 6 ESA/390 processors first introduced nearly 20 years ago. That means every 64-bit IBM mainframe also includes the expanded TOD clock at the system level. There is still some software, including operating system software, that only “sees” the 64-bit fragment of the expanded TOD integer. That’s now changing, so all you should need to do for operating systems and middleware is to stay at least reasonably current with their release levels. Then you simply check other code, such as tools and applications, to make sure it also “sees” beyond the 64-bit TOD value. IBM has some tools that can help you identify TOD-challenged code, such as this one.
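To put the expanded clock in perspective, here’s a rough sketch of the extra headroom. It assumes (and this is my simplification, not a format specification) that the expanded format effectively prepends an 8-bit epoch index to the classic 64-bit value, multiplying the representable range by 256.

```c
#include <stdio.h>

int main(void)
{
    /* Classic 64-bit TOD clock: 2^52 microseconds of capacity (~143 years). */
    double classic_years = ((double)(1ULL << 52) / 1e6) / (365.2425 * 86400.0);

    /* Simplifying assumption: an 8-bit epoch index multiplies the range by 256. */
    double expanded_years = classic_years * 256.0;

    printf("classic 64-bit TOD range:  ~%.0f years from 1900\n", classic_years);
    printf("expanded TOD range:        ~%.0f years from 1900\n", expanded_years);
    return 0;
}
```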

The “bigger fish” is the 2038 problem. UNIX and the classic UNIX time format have spread practically everywhere, and they keep spreading, mostly in the form of 32-bit Linux kernels on the processor architectures popular in embedded devices. In 2013 the OpenBSD community decided to break strict compatibility and expanded time_t to 64 bits in that operating system. NetBSD took a slightly different approach, preserving compatibility with existing binaries that expect a 32-bit time_t while requiring newly compiled binaries to use a 64-bit time_t. The Linux kernel is still a work in progress but will likely adopt an approach similar to NetBSD’s. And that’s just at the operating system level. There is undoubtedly a lot of middleware, tool, and application code inspired by or derived from UNIX (and UNIX-like projects) that adopted the heritage UNIX time format.
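If you’re auditing your own UNIX-heritage code, one simple starting point is to check how wide time_t actually is on each build you ship. Here’s a minimal sketch, with the caveat that it only covers the C library’s view, not any 32-bit timestamps your code may have baked into file formats, databases, or wire protocols.

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* First-pass sanity check only: a wide time_t says nothing about
     * 32-bit timestamps stored on disk or sent over the network. */
    if (sizeof(time_t) >= 8)
        printf("time_t is %zu bytes here; the OS-level 2038 wrap does not apply.\n",
               sizeof(time_t));
    else
        printf("time_t is %zu bytes here; this build wraps in January 2038.\n",
               sizeof(time_t));
    return 0;
}
```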

The 64-bit Linux kernel uses a 64-bit time_t value, so 64-bit Linux distributions should be safe as far as the operating system goes. The Linux community discontinued support for 31-bit Linux kernels on IBM z Systems and LinuxONE servers a couple of years ago, and now only the 64-bit kernels are available. As long as you retire any residual 31-bit Linux instances before 2038, preferably well before, you should be OK from an operating system point of view.

z/OS is a fully certified UNIX™ operating system, but again, as long as you stay at least reasonably current you should have no operating system-level problems. You’ll still have due diligence to perform on other code (middleware, applications, tools, etc.). It’s the same with other operating systems (z/VM, z/TPF, z/VSE, etc.).

Unlike the Year 2000 problem, the 2038 and 2042 problems probably will not cause many difficulties before the wraps actually happen. That’s both good news and bad news. The good news is that there shouldn’t be much breakage until then. There’s a lot of bad news: the risk of complacency, the near-simultaneous onset of any breakages, and the world’s progressively greater dependence on computing technologies that might break.

I wish you all the best of luck in keeping the world running smoothly.

The European Court of Justice has ruled that “Safe Harbor” provisions as they’ve existed for about 15 years are not adequate to protect Europeans’ data privacy interests. The BBC has posted a fairly extensive story on the ruling, and IBM has an official reaction.

If I understand IBM’s official reaction correctly (and the reactions of other technology companies), there’s great concern about regulatory uncertainties and, in particular, inconsistencies. That’s perfectly understandable and sensible. Nobody wants to deal with 28 or more unique data protection rulesets and legal regimes. According to the BBC’s report, the European Commission seems at least aware of that potential problem, which is encouraging.

In the wake of the ruling, businesses and other organizations must have “model contract clauses” in place (and obey those clauses!) in order to transfer personal data from Europe to the United States (and, I assume, to any other countries outside the EU/EEA/Switzerland). Those model clauses require the parties to take due care in how they use and secure Europeans’ personal data — the “rules of the road” for protecting privacy. For about a decade and a half, between Europe and the U.S. specifically, businesses could rely on a single “master” set of rules called “Safe Harbor,” but no more. Fifteen years ago European regulators feared that commercial entities would abuse personal data, inspiring “Safe Harbor.” Now the ECJ recognizes that governments are potentially or actually infringing individuals’ privacy rights, so the Court ruled that “Safe Harbor” isn’t enough.

So what does all this regulatory turmoil have to do with mainframes? As I’ve written before in various ways, businesses and other organizations handling personal data simply need to become much better stewards and protectors of those data. That was true before the ECJ ruling, and it’s even more true now. Mainframes and their middleware (e.g. DB2 for z/OS) are extraordinarily powerful, effective tools to help protect personal data and to authorize access only in strict accordance with complex, evolving rulesets. Mainframes uniquely minimize data movement and data duplication since they facilitate complex, concurrent information and application processing across a single instance of data. They are also excellent “cloud outposts” if/when they need to be. A single mainframe, even the smallest zBC12 model, is a whole “data center in a box.” The mainframe uniquely offers strict (and certified) security “zones” to preserve personal data separations within a single footprint. So if you build at least the privacy-protecting “System of Record” parts of your cloud infrastructure on IBM z Systems, you can much more easily and cost-effectively roll with evolving regulatory punches.

That’s not to say anyone likes having to worry about regulatory turmoil, especially if you haven’t already been adequately protecting personal data. (The IT industry has a lot to answer for in this respect, and so do regulators. There’s much work ahead, though only some of that work is a result of this ruling.) Fortunately there are some powerful tools available, mainframes included. Regulators (and courts) get concerned and act when industry fails, so, first and foremost, let’s not fail. Hopefully everybody can agree that privacy and protection of personal data are really, really important. Consistently important, too, we hope.