In discussions of efforts to combat the impending mainframe skills shortage, programs such as the IBM Academic Initiative and the Master the Mainframe Contest typically take center stage.  The Millennial Mainframer team has already written about the positives and negatives in the design and execution of these programs, but as with most articles on these topics, we have largely neglected one of the most critical determinants of the success of training and hiring new Millennial Mainframers: corporate partnerships.

Some quarters seem to have a “Field of Dreams” attitude towards mainframe education.  The thought is that if IBM creates new and fresh (and preferably free) mainframe training materials presented in a millennial-friendly way, then bright young minds will become Millennial Mainframers to take the place of retiring mainframers.  In other words, “if you build it, [they] will come.”

However, based on my experience designing an IBM Academic Initiative course, I’ve learned that this is a fallacious line of reasoning, because universities are very sensitive to the utility of the new courses they introduce, particularly those tied to specialized proprietary technologies like the IBM mainframe.  For a new course to be approved by a department chair, a certain number of students must sign up to generate enough tuition revenue to cover the expense of offering the course.  If too few students sign up, the course is cancelled.  Because universities run on student tuition, they are incentivized to offer the sorts of courses that students want to take.  The central problem with solving the mainframe skills shortage via the IBM Academic Initiative therefore comes down to the following question:

Why would an eighteen- to twenty-one-year-old student want to pursue coursework in mainframe technologies rather than courses in web design, mobile app development, and the like?

Based on the experiences of successful IBM Academic Initiative universities, the resounding answer to this question is that a certain type of college student chooses to study mainframe technologies when they perceive that graduates who have taken mainframe courses achieve higher job placement rates and satisfaction than students who have pursued other specializations.  This means that the success of any mainframe training program is entirely tied to its ability to train and place students in high-paid technical positions at a rate that exceeds the career prospects of other technical specializations.  Absent external support, this is a tall order for an educator of any kind, particularly given the general ignorance about mainframe computing among millennials and Computer Science academics.

I therefore propose that the only way to ensure this sort of success is through deep partnership between training programs and the Fortune 1000 corporations that will ultimately need to hire Millennial Mainframers to keep their critical IT infrastructure running.  By demonstrating a high-level commitment to the mainframe for the foreseeable future and directly supporting mainframe courses and training programs through joint marketing, internships/co-ops, and a commitment to hire graduates who complete these courses, the leaders of the Fortune 1000 companies that depend on mainframes can shape the thinking of millennials and encourage them to consider becoming Millennial Mainframers.

I recently learned that MetLife has done precisely that through its collaboration with the IBM Master the Mainframe Contest.  Much of this work has been spearheaded by a forward-thinking MetLife Vice President named David Ditillo.  Recognizing that MetLife’s decision to build a technology center in Raleigh would require a pipeline of technology talent, David reached out to the IBM Academic Initiative to learn how MetLife could partner with IBM to help ensure a consistent pipeline of Millennial Mainframers.

One of the results of this collaboration is the following video:

There are a number of remarkable aspects to this video that I think clearly demonstrate the fantastic job opportunities for new Millennial Mainframers at MetLife.  If you listen closely to the video, you will learn the following:

  • The “Mainframe will continue to be the bedrock of technology at MetLife; the foundation on which innovation, technology, and business grows.”
  • “The Mainframe is a Powerhouse,” which suggests that “if you’re into power computing and IT, then the mainframe is the place that you need to be!”
  • MetLife envisions that technological excellence will be driven by “bringing together Mainframe and Emerging Technology and taking those solutions to places never thought possible.”
  • “The mainframe is a dynamic platform… [that] IBM is evolving to be mainstream, for instance… [through] Java.”

Most importantly, David Ditillo expressed the thought that, as “the future of technology,” the training and recruitment of Millennial Mainframers is critical to developing the next generation of innovators, capable of synthesizing mainframe and emerging technologies into solutions that let MetLife leapfrog its competitors.  For this reason, David and the rest of the MetLife team “would love the opportunity to have you part of our MetLife family.”  Indeed, MetLife has already begun hiring Millennial Mainframers.  One of these is Natalie Chalco, a recent hire who explained her decision to join MetLife as follows:

“Right now MetLife Technology is undergoing a huge change.  They are… becom[ing] leading edge.  I have the opportunity to get my hands in.  Being new [to] the field, it was a great opportunity to me.”

If you are interested in learning more about mainframe opportunities at MetLife, I would highly suggest checking out their Synapse website.  The design of the site is fantastic, and I highly approve of the way they portray their mainframer:

[Image: MetLife mainframer]

[Image credit: Kevin Dooley - http://www.flickr.com/photos/pagedooley/]

Beat the Bashers!

Being the Mainframe Junkie that I am, I’ve been in a lot of situations where my platform has been under severe attack.

No, I’m not talking about DDoS attacks, but rather an ideological attack rooted in the (false) belief that the mainframe sucks, is old, and will not be among us for very much longer.

I have been working on The Mainframe (yes, double capitals) since 1998 and have always heard people say my precious Mainframe will be made redundant within the next five years.

As of today, The Mainframe is still here, and it’s getting bigger and bigger, not smaller and smaller. The Mainframe is still the dominant architecture in most of the Fortune 500 and a lot of government institutions, and lately it has been growing into the number one platform for large Linux infrastructures.

Seeing as I’ve been invited to join the (ever-growing) ranks of Millennial Mainframer, I deemed it fitting to share some of my experiences with the unknowing bashing The Mainframe, and how I have ‘retorted’ to these blasphemies.

For my first post I will rant about Cost and Legacy…

Cost: “It’s too expensive”

Expensive is a relative term to start with. Yes, you have to dish out large amounts of money to acquire a machine packed to the brim with all the awesomeness that makes up a Mainframe. But when you take a step back and realize this is one machine hosting a multitude of workloads, capable of being managed by a small number of FTEs (Full-Time Equivalents), it’s a whole different ball game all of a sudden!

There’s no ‘let’s buy another server for this application’ approach. There’s no linear growth in the workloads-to-manpower graph, and because utilization rates on Mainframes are a multiple of the rates on platforms of a lesser order, there’s a much more economically viable picture to be drawn.
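To make that picture a bit more concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it (capacities, utilization rates, admin ratios) is a made-up assumption purely for illustration, not a real vendor figure:

```python
# Back-of-the-envelope consolidation math.
# Every number below is an illustrative assumption, not a quoted figure.
import math

def servers_needed(workload_units, capacity_per_server, utilization):
    """How many boxes you need when each one only runs at `utilization`."""
    return math.ceil(workload_units / (capacity_per_server * utilization))

workload = 1000  # arbitrary "units of work" to be hosted

# Distributed farm: many small boxes, low average utilization.
x86_boxes = servers_needed(workload, capacity_per_server=10, utilization=0.15)
x86_admins = x86_boxes / 30        # assume one admin per ~30 boxes

# Consolidated Mainframe: one big box driven hard.
z_boxes = servers_needed(workload, capacity_per_server=1500, utilization=0.90)
z_admins = 2                       # assume a small, fixed sysprog team

print(f"x86 farm : {x86_boxes} servers, ~{x86_admins:.0f} admins")
print(f"Mainframe: {z_boxes} machine(s), ~{z_admins} admins")
```

The exact figures don’t matter; what matters is the shape: once utilization climbs and the admin headcount stops scaling with the number of boxes, the workloads-to-manpower graph flattens out.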

Aside from plain acquisition and personnel costs, there is the software licensing cost. My advice: get down to the bottom of the figures when it comes to cost.  For one, I am pretty sure the PVU (Processor Value Unit) pricing model for WebSphere will turn out to be cheaper on a Mainframe platform.

  [EDITOR ~ Paul Gamble:  If any readers could comment and walk me through this I would be eternally grateful…and smarter!!!]
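For what it’s worth, the shape of the arithmetic looks roughly like the sketch below. The PVU ratings, core counts, and per-PVU price are placeholders invented for illustration; the real values live in IBM’s PVU tables and your own contract:

```python
# Illustrative PVU (Processor Value Unit) licensing arithmetic.
# All ratings, core counts, and prices are made-up placeholders,
# not IBM's actual PVU tables or list prices.

def cost_per_utilized_core(cores, pvu_per_core, price_per_pvu, utilization):
    """Total license cost divided by the capacity you actually use."""
    license_cost = cores * pvu_per_core * price_per_pvu
    return license_cost / (cores * utilization)

PRICE = 50.0  # placeholder price per PVU

# A sprawling, lightly loaded x86 WebSphere farm.
x86 = cost_per_utilized_core(cores=64, pvu_per_core=70,
                             price_per_pvu=PRICE, utilization=0.20)

# A handful of Mainframe engines driven hard.
zos = cost_per_utilized_core(cores=4, pvu_per_core=120,
                             price_per_pvu=PRICE, utilization=0.90)

print(f"x86 farm : ${x86:,.0f} of license per utilized core")
print(f"Mainframe: ${zos:,.0f} of license per utilized core")
```

License cost scales with rated cores, while useful work scales with utilized cores, which is why utilization tends to dominate this comparison.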

Because The Mainframe can host a variety of different workloads, it’s a dead giveaway that it will be cheaper to bring extra workload to The Mainframe than to offload workload elsewhere…

Legacy: “It’s Dinosaur stuff”

This must be the biggest frustration on my end. The similarity between dinos and The Mainframe only holds when you look at their awesomeness. For the rest, the comparison goes completely bust.

I shall be the last to disagree with the fact that Mainframes date back to ‘ancient times’ (much like dinosaurs). But where at one stage T-Rex and his friends became extinct (possibly due to changing environments), The Mainframe has shown us that it is capable of continuing its evolution and staying sustainable.

The Mainframe has been virtualizing since, like, forever. From before I was born it was capable of running multiple workloads, in multiple ‘ADDRESS SPACES’ (images, they say), on hardware less powerful than the CPUs of today.

But my …….. (insert any X86 argument) has a gazillion virtualization instructions.

Yeah I know, sorry for ya, we’ve got one on The Mainframe, it’s called SIE (Start Interpretive Execution) and we don’t pronounce it ‘sigh’ for nothing 🙂

[Follow Henri Kuiper @ http://zdevops.tumblr.com/]

About Henri:

I’m a level 76 Fire Mage.
Oh noes, born in 1976, computer junkie since I was like 7 (BBC Micro), and moved up from Commodore to MSX, then Amiga, then Linux. Been a Mainframe Junkie since 1998 and consider myself a full-fledged systems programmer. I understand (and work on) the mainframe from the I/O drawer all the way up to the end-user application.

Proud member of the Millennial Mainframer 🙂

What is Virtualization?

Virtualization is the process of consolidating multiple physical machines or servers onto one physical machine, creating “virtual machines” that share the central server’s resources (processor, memory, storage, etc.).  Virtualization allows the creation of multiple systems: physical systems already in use can have their data and applications moved to the mainframe and used as if they were still running on a “physical” box.  This also allows those systems to be used from virtually anywhere access is permitted.  By being moved to the mainframe, these existing systems also become more isolated and secure than they were as individual machines.

[Image: virtualization consolidation diagram]

As a Virtual Machine, or VM, a system has more controlled and secure access to its data. With the right settings, a VM can be completely isolated from, and unaffected by, other systems on the mainframe, whereas in its physical state the machine could be intruded upon, accidentally affected, or crashed by other systems on the network.  Virtual machines are strongly isolated within a mainframe environment. Virtualization ultimately allows multiple, different operating systems to run on the mainframe at the same time.
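For intuition, here is a deliberately simplified toy model, sketched in Python, of what a hypervisor does conceptually: carving one physical machine’s resources into isolated guest definitions. This is not how z/VM actually implements virtualization; the class names and numbers are invented purely for illustration.

```python
# A deliberately simplified toy model of a hypervisor carving one physical
# machine into isolated guests.  Conceptual only -- real z/VM does far more
# (paging, shared segments, overcommit management, live relocation, ...).
from dataclasses import dataclass, field

@dataclass
class Guest:
    name: str
    vcpus: int
    memory_gb: int

@dataclass
class Hypervisor:
    physical_cpus: int
    physical_memory_gb: int
    guests: list = field(default_factory=list)

    def define_guest(self, guest: Guest) -> None:
        # In this toy model we hard-check memory only; CPU is time-sliced
        # and can be overcommitted.
        used_mem = sum(g.memory_gb for g in self.guests)
        if used_mem + guest.memory_gb > self.physical_memory_gb:
            raise RuntimeError(f"not enough memory to define {guest.name}")
        self.guests.append(guest)

    def cpu_overcommit(self) -> float:
        return sum(g.vcpus for g in self.guests) / self.physical_cpus

hv = Hypervisor(physical_cpus=16, physical_memory_gb=512)
hv.define_guest(Guest("LINUX01", vcpus=4, memory_gb=64))
hv.define_guest(Guest("LINUX02", vcpus=8, memory_gb=128))
print(f"guests defined : {[g.name for g in hv.guests]}")
print(f"CPU overcommit : {hv.cpu_overcommit():.2f}x")
```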

[Image: hypervisor diagram]

Advantages of virtualization

Virtualization

The virtual machine environment is highly flexible and adaptable.  New Linux guests can be added to a VM system quickly and easily without requiring dedicated resources.  This is useful for replicating servers, in addition to giving users a highly flexible test environment.

Sharing resources

Resources can be shared among multiple Linux images running on the same VM system. These resources include: CPU cycles, memory, storage devices, and network adapters.

Server hardware consolidation

Running tens or hundreds of Linux instances on a single System z server offers customers savings in space and personnel required to manage real hardware.

System z advantages

Running Linux on VM means the Linux guest(s) can transparently take advantage of VM support for System z hardware architecture and features.

z/VM Connectivity

z/VM provides high-performance communication among virtual machines running Linux and other operating systems on the same processor.  Simplification of the network by using HiperSockets may provide savings and reduce cabling, hubs, switches, and routers, as well as help to reduce maintenance effort.

Horizontal growth

An effective way to grow your Linux workload capacity is to add more Linux guests to a VM system.  z/VM V5 supports Integrated Facility for Linux (IFL) processors, the attractively priced hardware feature for Linux workloads available for System z.  An IFL is a dedicated processor for running Linux on the mainframe, designed specifically to reduce software costs.

IBM introduced engine-based Value Unit pricing for z/VM V5, replacing the per-engine pricing model available with z/VM V4.  Engine-based Value Unit pricing is designed to provide a decreasing price curve as hardware capacities and workloads grow, which may help improve price/performance.
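To see what a “decreasing price curve” means in practice, here is a small Python sketch of tiered, value-unit-style pricing. The tier boundaries and prices are invented for illustration; they are not IBM’s actual value-unit tables.

```python
# Illustrative tiered "value unit" pricing: each additional engine falls into
# a cheaper band, so the average price per engine drops as capacity grows.
# Tier boundaries and prices are invented, not IBM's actual tables.

TIERS = [  # (engines covered by this band, price per engine in this band)
    (3, 10_000),
    (7, 7_000),
    (15, 5_000),
    (float("inf"), 3_000),
]

def total_price(engines: int) -> int:
    remaining, total = engines, 0
    for band_size, band_price in TIERS:
        in_band = min(remaining, band_size)
        total += in_band * band_price
        remaining -= in_band
        if remaining <= 0:
            break
    return total

for n in (2, 5, 10, 30):
    print(f"{n:>2} engines: total ${total_price(n):>8,}, "
          f"average ${total_price(n) / n:>8,.0f}/engine")
```

The total always goes up, but the average price per engine falls as the configuration grows, which is the decreasing price curve described above.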

So, why use Linux on the mainframe?

Linux is more secure and more stable than most operating systems.  Since its creation, Linux has largely remained clear of mass-spread viruses and spyware.  This eliminates the need for an organization to purchase antivirus software and subscriptions as it would on Windows.  Also, because Linux is open source, the vast majority of Linux-based software is completely free and available for download at any time.

Mainframe customers almost always buy support.  Annual subscriptions for Red Hat run 15 to 18 thousand dollars per IFL; SUSE is similar.  Linux on z is typically used to run licensed enterprise software rather than free open-source software, so there are costs, as with most servers.  However, there are still cost savings for many workloads due to consolidation.
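The Red Hat range above is the only real number in the following sketch; the guest density and the distributed per-server subscription price are assumptions plugged in purely to show how a per-IFL subscription amortizes across consolidated guests:

```python
# Amortizing one per-IFL subscription across consolidated guests.
# The 15-18k/IFL range comes from the text above; the rest are assumptions.

ifl_subscription = 16_500        # midpoint of the quoted 15-18k range
guests_per_ifl = 30              # assumed consolidation density
per_guest_on_z = ifl_subscription / guests_per_ifl

assumed_per_server_sub = 1_300   # assumed distributed per-server subscription

print(f"Linux on z : ~${per_guest_on_z:,.0f} of subscription per guest")
print(f"distributed: ~${assumed_per_server_sub:,.0f} per server")
```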

Some real examples of the benefits of virtualization, specifically with Linux, include Nationwide Insurance.  Nationwide.com runs on WebSphere, one of IBM’s leading web application server products, on Linux for System z.  They expect to save $16M over the next 3 years from their initial deployment of the Linux application.  That initial deployment consolidated over 250 production, development, and test servers down to only 6 IFLs (Integrated Facility for Linux processors).

Bank of New Zealand engaged in a consolidation of its own.  The bank reduced over 200 Sun servers down to one System z10 running Red Hat Enterprise Linux.  They also reduced their datacenter footprint by 30%, heat output by 33%, and power consumption by nearly 40%.  In addition, they saved on human resources by having only one assigned administrator per 200 virtual servers.
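Running the arithmetic on the figures quoted above makes the consolidation ratios concrete (the server counts, savings, and percentages come straight from the two examples; the rest is simple division):

```python
# Consolidation ratios from the two examples above.

# Nationwide: 250+ production/dev/test servers onto 6 IFLs.
nationwide_ratio = 250 / 6
print(f"Nationwide: ~{nationwide_ratio:.0f} servers per IFL, "
      f"$16M expected savings over 3 years (~${16 / 3:.1f}M/year)")

# Bank of New Zealand: 200+ Sun servers onto one System z10,
# with one administrator per 200 virtual servers.
print("Bank of New Zealand: 200+ servers -> 1 machine, "
      "1 admin per 200 virtual servers")
print("Footprint -30%, heat -33%, power ~-40%")
```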

As both of these examples show, benefits from virtualization are indeed REAL!


(*** Please follow me at mainframemindedaggie ****)