Sunday 27 January 2013

Guest blog - IT made simple? Automation lessons from the mainframe

This week, for a change, I’m publishing a blog entry from Marcel den Hartog, Principal Product Marketing for CA Technologies Mainframe Solutions.

Many moons ago, some smart people invented machines to do things (that were previously done manually) faster, better, and more consistently. Good examples are the car industry, the way we make (or made) light bulbs, and how we produce clothes or other fabrics. Not much later, people invented machines to automate administrative tasks, so we could calculate faster and have better insight into our financials, with fewer people than ever before. Programmable typewriters are an early example, but the first simple computers did exactly that: they added and subtracted numbers faster and with fewer errors than an employee could (and ideally with no errors at all!).

The main reason for people inventing “machines” like this was to do things more efficiently, which in turn would make them more competitive. It was as simple as that. True, the equipment was a bit more complex, but the advantages far outweighed the problems of complexity.

This is where Information Technology was born. We soon needed people to drive these “machines”, maintain them, and program them to perform new tasks. And before we knew it, we had simplified a lot of manual tasks, invented some that we didn’t even know existed, and, probably even more important, we were seen as just another part of the business. A part that was able to help the company to run more efficiently and become more competitive.

Automation, in the most general sense, also helped our civilization to evolve faster and faster. Automating more meant we had more time to study, more time to invent even better technology that could automate even more...

However, somehow, at some point in the recent past, something went wrong. We kept implementing more technology that helped us to automate more bits and pieces, but we forgot the next step of automation – automating our automation. Sounds weird? Well, look at how many manual interventions today’s IT systems need to keep them healthy. Ask your IT peers why it still takes 10 days to implement a new server (or 3 days if you do it on a virtualized server farm). Go and ask your Help Desk staff how long it takes for a performance issue to bubble up and get fixed. Ask your operations staff how many of their events (across platforms) have automated actions attached to them that could replace manual interventions...

Now, please don’t get me wrong, I know that there are many IT departments that have implemented automation in their IT environment. But in all fairness, it’s not really enough to cope with the complexity of today’s environments. There is another reason, however, for bringing this up now. In the past four years, many IT departments have implemented virtualization in some way. Some have been more successful than others, but there is one thing most people seem to agree on: the current mix of servers – Enterprise Servers like the IBM mainframe, standalone Unix, Linux, and Windows servers, and many virtualized systems running different kinds of OSs – is already hard to manage, and automation is already quite difficult. If the signs are right, we will add a lot of new stuff, making it even harder to monitor, manage, and control everything we have.

With the addition of support for mobile devices, Cloud initiatives, and Big Data, we will be confronted with new and unpredictable behaviours arising from our systems. So if there were ever a time to give some extra attention to automating the things we already have, it’s now – at least then we will be better prepared for the unknowns that these new initiatives will bring us. I really think that this is also one of the ways to demonstrate to “the business” that IT really is ready to bring in the new IT services that the business requires. In the past, IT has often been accused of spending too much money to “keep the lights on”. Part of this, as we all know, is the fact that we still spend too much money and time on manual work (and interventions) to keep these same lights on.

Now, please go back to my first paragraph and tell me if this sounds familiar. The business is now ready for new initiatives like Cloud, Big Data, and more and better support for mobile devices, because these will help it to work in a more cost-effective way. IT is also ready for the next step, where we will be required to automate more of what is now running the business – both to make sure that it runs as an efficient engine that needs a lot less attention, and to free up the resources needed to run the new services that the business requires us to run...

I think we can all agree that what I just wrote makes sense. So why not go back to history once more for a final lesson?

Many Fortune 500 companies still run the majority of their mission-critical business services on a mainframe. And for good reason: it is a reliable, cost-effective environment, and a platform that doesn’t confront managers with a lot of surprises. Some people might call this “boring”; others would say that it’s the way things are supposed to be run. Because of the nature of the mainframe (it was once the ONLY platform to run IT services on), automation has been perfected on this platform over the past decades. Some companies have tens of thousands of “rules” that kick in when unexpected things happen. Looking for proof? Look at how few people the average mainframe is managed with, compare that with the number of mission-critical transactions running on that same mainframe, and everybody will agree that you need fewer people to manage a mainframe than you do to manage other environments. With mainframes, you really can have IT made simple.
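
To make that last idea a little more concrete, here is a minimal, purely illustrative sketch in Python. It is not any real automation product’s rule language, and the event types, resources, and actions are invented, but it shows what an event-to-action “rule” boils down to: a condition on an incoming event paired with an automated response, so routine incidents never need a human.

# Hypothetical event-driven automation rules (illustration only)

def restart_task(event):
    print(f"restarting {event['resource']}")          # placeholder automated action

def page_oncall(event):
    print(f"paging on-call for {event['resource']}")  # placeholder escalation

# Each rule is a (predicate, action) pair checked against every incoming event.
RULES = [
    (lambda e: e["type"] == "TASK_ABEND", restart_task),
    (lambda e: e["type"] == "CPU_HIGH" and e["value"] > 90, page_oncall),
]

def handle(event):
    for matches, action in RULES:
        if matches(event):
            action(event)
            return True   # automated action taken, no manual intervention
    return False          # nothing matched, falls back to a human

handle({"type": "TASK_ABEND", "resource": "CICSPROD"})

The value is in the scale: once tens of thousands of rules like these are in place, the routine events are dealt with before anyone has to pick up the phone.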

People learn from the past. Experience is what has brought mankind to where it is today. But for some strange reason, we tend to think of history as something that goes back 50+ years. In IT, going back in history to learn something simply means going back 6-10 years. Talk to your mainframe peers, learn from their experience, and find out how you can benefit from their automation expertise. It will not only make your current infrastructure run better, it will also help you to demonstrate to the business that you are ready for the next wave of new and innovative IT projects – and you will save some money at the same time. And who doesn’t want to do that these days?

Sunday 20 January 2013

The Arcati Mainframe Yearbook 2013 has been published

Every year, about this time, mainframe users are excited to get their hands on the latest edition of the Arcati Mainframe Yearbook. What makes the Yearbook stand out is that it’s an excellent reference work for all IBM mainframe professionals – no matter how many years of experience they have.

What makes this annual publication so important? The answer is that it provides a one-stop shop for everything a mainframer needs to know. For example, the technical specification section includes model numbers, MIPS, and MSUs for zEnterprise processors (zEC12s, z196s and z114s). There’s also a hardware timeline, and a display of mainframe operating system evolution.

In addition, there’s the glossary of terminology section, explaining what all those acronyms stand for in a way that you can actually understand.

One section provides a media guide for IBM mainframers. This includes information on newsletters, magazines, user groups, blogs, and social networking information resources for the z/OS environment. Amongst the things it highlights are Enterprise Tech Journal, IBM Listservs, SHARE’s Five Minute Briefing on the Data Center, Facebook (fan) pages, and LinkedIn discussions, as well as user groups such as SHARE and IDUG.

The vendor directory section contains an up-to-date list of vendors, consultants, and service providers working in the z/OS environment. There’s a summary of the products they supply and contact information. As usual, there are a number of new organizations in the list this year, and, sadly, a few familiar names have ceased trading.

The mainframe strategy section contains articles by industry gurus and vendors on topics such as: Where is the COBOL in your SOA?; Lost without a trace?; and Challenging times lead to new tools for z/OS control and network management.

For many people the highlight each year is the mainframe user survey. This illustrates just what’s been happening at users’ sites. It’s a good way for mainframers to compare what they are planning to do with what other sites have done. I will be looking at some of the survey highlights in a future blog.

The other great thing about the Yearbook – as far as many of the 20,000 people who download it are concerned – is that it is completely FREE.

It can only be free because some organizations have been prepared to sponsor it or advertise in it. This year’s sponsors were: Software AG, Software Diversified Services (SDS), and William Data Systems.

To see this year’s Arcati Mainframe Yearbook, click on www.arcati.com/newyearbook13. Again this year, if you don’t want to download everything at once, each section is available as a separate PDF file.

Don’t miss out on this excellent publication.

Monday 14 January 2013

How much data is enough?

In the past, the only way of storing data was in your head. So, in less developed societies, older people were revered because their heads contained information about how to deal with problems. They remembered how the tribe or village survived the last great flood or the last major drought. They knew the high spots, and the springs that didn’t dry up.

Many societies have tried to make this information available to others by creating songs or poems that recite the lore – so that future generations will not forget the deeds of great heroes, but, more importantly, the list of tips and tricks for surviving in difficult situations.

Then came writing. There was now the ability to list the kings, so it was obvious who should be the next ruler. Or to list the goods that were to be bartered for some other set of goods. Or to record just how many sheep and goats someone owned at the end of each year. And writing in stone, which could last for many years, was later replaced by writing on papyrus, and eventually on paper. With the advent of printing, it became possible to create definitive lists of laws or stories that everyone had access to (well, provided they could read!).

And in our headlong rush through time, we get to the early computers. People could store information on them – like the milk yields of their cows and how much food the herd consumed, or how many coffees were sold in each coffee shop in a chain, or what you wanted for Christmas. And not only could you store data, you could calculate answers to questions. But it was all pretty much driven by humans for humans. And generally you knew where it was stored – even if it was on a mainframe in a different country, you could find the address of the site.

But now two of those factors are changing, and changing in a very big way. Now, a lot of the data isn’t created by humans, it’s coming from devices. Let me expand on that. Every time you use your credit card, information about your purchases and where you purchased them is very likely stored somewhere, so the retailer can target adverts at you, and so they have a better idea of what types of product sell well in different parts of the country. Requiring even less human intervention is all the footage that gets stored from security cameras. No humans are involved in the recording and storage. And there are many other examples of devices that just store information. As time goes on, and more and more information is stored, there’s going to be a requirement for more and more storage. I’m sure someone somewhere has worked out the numbers, but if 500GB of storage occupies half a square foot, and the amount of data stored doubles every year, then in 20 years’ time the whole planet will be covered in storage devices to a depth of 18 inches!!
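
Purely for fun, here’s a back-of-the-envelope sketch of that claim in Python. Every starting figure in it is my own assumption (a notional billion 500GB drives today, each taking up half a square foot), so treat the output as illustration rather than prediction:

# Rough check: how long until doubling-every-year storage covers the Earth?
EARTH_SURFACE_SQ_FT = 5.1e14 * 10.7639   # ~510 million square km, in square feet
FOOTPRINT_PER_DRIVE = 0.5                # square feet per 500GB drive (assumed)
drives = 1e9                             # assumed number of drives today

for year in range(1, 41):
    drives *= 2                          # data (and so drives) double every year
    if drives * FOOTPRINT_PER_DRIVE >= EARTH_SURFACE_SQ_FT:
        print(f"Earth's surface covered after about {year} years")
        break

With those made-up starting figures it comes out at roughly 24 years; change the starting amount by a factor of ten and the answer only shifts by three or four years, which is really the point about exponential growth.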

But where is your data? Originally it was on your hard drive or on DASD in your organization’s buildings. But nowadays, I have most of my customer-facing files on Google Drive, and, so long as I’m in a wifi area, I use the CloudOn app on my tablet to show customers PowerPoint presentations as well as Word documents and Excel spreadsheets. I also use Dropbox for sharing files with friends. The point I’m making is that I have no idea where those files are physically stored. And, for business files, that brings a huge security issue. For example, European organizations are pretty much prevented from using US-based storage, or storage in an indeterminate location, because it’s outside the EU.

It seems at this rate that there will always be more data to store (somewhere) – and hopefully there will always be enough storage, without covering the whole planet. But will there be any more knowledge in the world?

Sunday 6 January 2013

2012 at iTech-Ed Ltd

As it’s the start of the New Year, I thought I’d review 2012 through the lens of events at iTech-Ed Ltd.

In January, The Arcati Mainframe Yearbook 2012 was published. It had around 20,000 downloads during the course of the year, and I’m Editorial Director for this highly-respected annual source of mainframe information. My blog, “Gazing into the IT Crystal Ball...”, was published on the Destination z Web site. And the Virtual CICS user group meeting on 17 January had a presentation from IBM Hursley’s Andrew Smithson entitled, “CICS Transaction Gateway V8.1”.

In February, my blog “Staying in Touch With the Mainframe Community” was published on the Destination z Web site, and an earlier blog was quoted from on the Dancing Dinosaur site. The Virtual IMS user group had a presentation from Neil Price, a senior DBA and systems programmer with TNT Express ICS, on 7 February. His talk was entitled, “Memoirs of a HALDBA”.

My article entitled “The z114: Delivering Game-Changing Opportunities” was published in the March/April issue of z/Journal. My blog “Saving Money With Mainframes” can be found on the Destination z Web site. And on 6 March, the Virtual CICS user group meeting had a presentation from Fundi Software’s Jim Martin entitled, “Analyzing CICS function shipping applications using Transaction Analysis Workbench”.

3 April saw a presentation to the Virtual IMS user group by Fundi Software’s James Martin, entitled, “Using IMS Performance Solution Pack for z/OS to analyze IMS performance problems”. And my blog “The Sincerest Form of Flattery” was published on the Destination z Web site.

In May, my blog “The Mainframe’s Potential to Get Social” was published on the Destination z Web site. And on 8 May, at the Virtual CICS user group meeting, Stephen Mitchell, Managing Director of Matter of Fact Software Limited, gave a presentation entitled, “Utilizing the Dojo Toolkit for Web browser-driven applications from CICS”.

The 12 June saw a presentation by GT Software’s Dusty Rivers to the Virtual IMS user group entitled, “Modernizing IMS”. And the Destination z Web site published my blog “Atomic Weapons and the Dawn of the Computer Age”.

On 10 July, Ezriel Gross, CEO of Circle Software Inc, gave a presentation to the Virtual CICS user group entitled, “CICS introduction to Web services for the system programmer”. Also in July, my blog “IBM and Augmented Reality” was published on the Destination z Web site.

The Virtual IMS user group enjoyed a presentation by SQData’s CEO, Scott Quillicy, on 7 August entitled, “Best practices: IMS to relational data movement”. My blog “BYOD and Network Security” was published on the Destination z Web site in August.

In September, my blog “IMS: There’s Still Life in the Old Dog!” appeared on the Destination z Web site. Also in September, the Virtual CICS user group enjoyed a presentation entitled, “Success paths for integrating CICS with new technologies” from Don Spoerke, Principal Solutions Engineer with GT Software.

October saw my blog “What’s New With CICS?” published on the Destination z Web site. In the same month, IBM’s “Think BIG” Information On Demand conference published my “IOD from Afar” and “IOD News” posts on its official IOD blog site.

In November, Colin Pearce gave a very enjoyable and detailed presentation to the Virtual CICS user group about “CICS security”. The IBM ‘Proud to be part of z’ poster came out that month. Look out for the person in the bottom row, third from the left! And the Destination z Web site published my blog “New Poster Captures the Mainframer Mosaic”.

On 4 December, the Virtual IMS user group had a presentation by BMC’s Bill Chapin about “Nearing 24x7 availability with structure changes too!” Also in December, my blog “Whatever Happened to the Network Guy?” was published on the Destination z Web site.

Also last year, I was made a 2012 IBM Champion – that’s four years in a row I’ve been an IBM Champion. I continued to blog at mainframeupdate.blogspot.com and it.toolbox.com/blogs/mainframe-world/. And I continued to tweet at twitter.com/t_eddolls, twitter.com/virtualims, and twitter.com/virtualcics. And on Facebook you can find me at fb.com/itech-ed, fb.com/VirtualIMS, and fb.com/VirtualCICS. And I have groups on LinkedIn for Virtual IMS (www.linkedin.com/groups?gid=379256) and Virtual CICS (www.linkedin.com/groups?gid=3847862) – come on LinkedIn, do short names! And I’m on Google plus (if you like, you can find me at gplus.to/teddolls).

And don’t forget, the Arcati Mainframe Yearbook 2013 will be published very shortly. And look out for this year’s meetings of the Virtual IMS and Virtual CICS user groups.

Trevor Eddolls