Specialty Engines


Many of my colleagues know that over the last several years I have become a big Apple-holic. Back in 2007, I bought my first MacBook. I have now ejected all Microsoft Windows-based hardware from my household, and we have (or have had) a couple of MacBooks, a couple of iPads, iPhones, an Apple TV and a few other miscellaneous items. I’m pretty impressed by Apple’s hardware and how they integrate the hardware and software to create a system that performs well and provides high value to the user.

This should sound familiar, as it’s the same proposition we have with System z. Over the last 8-10 years, IBM has made great strides in improving System z’s usability, performance, and cost of ownership. One key way we have done that is by introducing the “Specialty Processor,” or “Specialty Engine.”

Now, for years System z has offloaded a lot of different functions from the main central processing “engines” to improve system performance and reliability. For example, the System Assist Processor (SAP) is an engine in System z that is dedicated to I/O processing. It offloads I/O functions and frees the general-purpose processors to execute user programs. We also introduced other specialty processors such as the zAAP (z Application Assist Processor), which offloads Java workloads; the zIIP (z Integrated Information Processor), which offloads eligible work such as certain DB2 processing; and a few others such as the on-chip crypto coprocessor.

But what does this have to do with the new iPhone?

When I was reading the various press pieces on the iPhone 5s announcement, I ran across this article from Computerworld. It pointed out a new function that I had completely missed in the announcement – the M7 “Motion Coprocessor.”

Now “co-processing” is not exactly new in the small-systems world – PCs have been using discrete graphics processors for a long time now, and those, like IBM’s System z Specialty Engines, offload functionality to free up resources on the main processor. On the iPhone and iPad, Apple uses ARM-architecture chips – most recently the A6 on the iPhone 5 and the new 5c, and now the A7 on the 5s – for “general processing” of applications. The latest iPads use an A6X processor that offloads graphics to a four-core graphics processor. And now the M7 chip will offload functions such as accelerometer, gyroscope and compass processing.
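
For app developers, the M7’s data shows up through Apple’s Core Motion framework rather than through a new low-level interface. Here’s a minimal sketch (my own illustration in today’s Swift, not anything from the announcement) of how an app might ask the motion coprocessor’s activity data whether the user has been stationary, walking or running:

    import CoreMotion

    // Minimal sketch: read the motion-activity classifications that the
    // M7 (and its successors) collect at low power, even while the main
    // application cores are idle. Assumes M7-class hardware and that the
    // app has permission to access motion data.
    let activityManager = CMMotionActivityManager()

    if CMMotionActivityManager.isActivityAvailable() {
        activityManager.startActivityUpdates(to: OperationQueue.main) { activity in
            guard let activity = activity else { return }
            if activity.walking {
                print("User is walking")
            } else if activity.running {
                print("User is running")
            } else if activity.stationary {
                print("User is stationary")
            }
        }
    }

The point isn’t the few lines of code; it’s that the coprocessor keeps capturing this data while the A7 sleeps, which is exactly the offload argument above.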

So why is this a big deal?

Mobile computing is often thought of as the applications and functions we carry around on our phones and tablet computers. But it’s much more than that. It’s wearable devices…Google Glass is but one example…and the monitoring devices that track your heart rate, your temperature and other readings while you exercise. It’s medical devices and equipment that can measure your vital signs. It’s on-board computers in cars. It’s tiny computing devices in shipping containers or cartons.

We often refer to just about any kind of computing device that moves or transmits data as part of “The Internet of Things.” I’ll note that IBM actually envisioned all of this many years ago; most recently we referred to it as “Pervasive Computing.” The kind of information the iPhone’s M7 processor can handle is but one category of “pervasive” technology and data that will increase the demand for data processing and continue to drive the “Big Data” strategies of companies around the world.

So Apple has now discovered what IBM learned with System z – specialty engines help a computing platform deliver higher performance and better value by moving specialized functions to offload processors, so the main computing cores can focus on the important work of running the user’s applications. I think that’s pretty cool.

On occasion, the IBM mainframe will make an appearance in an IBM ad campaign.   A couple of great ones produced in the mid-2000s focused on mainframe virtualization:

Now I’m not quite sure if IBM produced this one, or if a clever mainframe fan put it together…but it’s pretty funny!

Great humor here, but…think about the messages within these – they convey two key themes:

  1. Extreme virtualization – System z can run hundreds, if not thousands, of virtualized guest OS instances. As the ads illustrate, this really does enable System z customers to consolidate huge server farms into a single System z frame.
  2. Security – While System z isn’t a very good babysitter for your kids, it does a pretty nice job babysitting your valuable information resources. System z security is unparalleled in the software and server marketplace, with built-in functionality for encryption, access control & resource authorization, and some of the best logging & auditing technology in the industry.

There’s a wealth of great System z material on YouTube. We’ll be posting more in the future!


I was chatting with a friend of mine the other day, and he told me how he was dealing with a lot of outages in some key production I/T systems. His opinion was that the majority of this particular company’s issues were due to people problems and not to software or hardware defects. They were issues with lack of testing, configuration errors, or just plain sloppy work. Interesting, but not surprising.

My first job out of college was as a mainframe systems programmer. I installed ISV products, IBM software subsystems, and eventually worked my way up to be an MVS systems programmer. I installed the OS, I installed and implemented maintenance, I debugged problems, shot dumps, trained operators, and did all the stuff that a sysprog does. And when I implemented changes, I had to package up those changes into a batch/scripted job that could be run after hours and install the changes with minimal human interaction. I also had to provide a back-out script that would allow the changes to be reversed if necessary. There was a team of individuals who did this FOR me – “Change Control”. The Change Control group scheduled, coordinated and implemented the changes. That way they could keep track of what changed and in most cases could isolate problems to a particular change.
So after I heard the horror stories from my friend, I reflected back on that first job experience and thought about how different things were then from what we generally see now, where change is often much less controlled and there is a lack of rigor in systems management processes and practices. Many of the issues we have with I/T systems are with the people and how they administer systems, not with the quality of the hardware and software.
Like Soylent Green, “IT is People!”
That brings me to The Mainframe Mentality. Around 2002, I put together a presentation called “Introduction to the Mainframe,” where I would spend about two hours introducing the mainframe to folks who had never worked with the platform. The last slide in the presentation was titled “The Mainframe Mentality,” and it was intended to help my distributed-systems colleagues understand what mainframe people were like, and why. It was a bit of a psychology lesson.
IT is People.
While IBM System z hardware and software are indeed well engineered for reliability and high availability, technology can only go so far. In order for these components to deliver the quality of service (QoS) they are capable of, the people who are managing, monitoring and administering the system must do it in a way that enables the technology to do its thing. If we are to expect System z – or any other information technology platform – to deliver, the people and processes must be an integral part of it. Over the years the IT industry has come to realize how critical this thinking is. ITIL (the IT Infrastructure Library) defines standards for processes and practices along the lines of what those of us with The Mainframe Mentality have attempted to do with systems and service management.
In my last blog post, I made a crack about “the good ol’ days.” I’m not really a “millennial,” per se. I have sons who are. 🙂 And I work with a lot of them. And I like to think that I’m mentally a lot younger than my body indicates. But I have a lot of years of experience under my belt working with this stuff, and I’ve seen many, many mainframe customers and how they do business. Almost universally, mainframe customers have well-defined systems management practices and processes that help them maintain excellent QoS. They have change management, capacity and performance planning, problem management, and other processes. They lock down system resources with security systems like RACF. They have good monitoring tools to watch what’s happening on the systems. The people aspects of systems management are refined and well defined. That’s part of “the mainframe mentality.”
But there’s a flip side to this. There are a lot of mainframe folks who seem averse to change. They like the old stuff, like 3270 interfaces. They like to program in Assembler because it’s fast and does exactly what they want it to do. They like online systems like CICS and IMS and look down their noses at WebSphere and more current technology. But why is this? Is it because the new technology is a threat? I don’t think so. I believe it goes back to The Mentality. These folks are concerned about keeping the lights on. They want to maintain that high level of availability and reliability and to be sure that the business stays running. This thinking can sometimes be a problem and a hindrance, but in the end these folks are pros at ensuring that mission-critical systems stay running. Those traditional technologies just plain work.
But so do the new technologies! A lot of folks forget that once upon a time, DB2 was a beta product with a lot of bugs and flaws. So were IMS and CICS. We have to dedicate the same kind of effort to hardening and refining implementations of some of the new tools and products on System z. Many of these “newfangled” products like WebSphere for z/OS are now on their fifth or sixth versions and have been shaken out over more than ten years of deployments. They’re there. They work. But they still require industrial-strength processes to ensure that they have the same kind of bullet-proofness (is that a word?) as the traditional systems. And they’re sitting on an operating system and hardware base that can deliver higher levels of performance, scale and availability than any other.
But IT is people. Those people still need “The Mainframe Mentality” to keep the transactions and data flowing…to make sure that I can get money out of my ATM or make reservations for my flight to Minneapolis…as long as it’s not in the winter.