Sunday, 18 December 2011

2011 at iTech-Ed Ltd

Well, as another year comes partying to an end, and everyone stops checking their e-mails on their smartphones or tablets and finally starts to let their hair down and enjoy a glass of something alcoholic, I thought I’d review the year through the lens of my company – iTech-Ed Ltd.

January started the year, as most Januaries do, with the publication of the Arcati Mainframe Yearbook. The 2011 edition is still available for download, and the 2012 edition will be available in a couple of weeks. As always, the Arcati Mainframe Yearbook includes its annual user survey, an up-to-date directory of vendors and consultants, a media guide, a strategy section with papers on mainframe trends and directions, a glossary of terminology, and a technical specification section. And each year, it gets downloaded by around 15,000 mainframe professionals.

February saw the launch of the new series of Virtual IMS user group meetings. The user group is now sponsored by Fundi Software and hosted on its own Web site. The first speaker was Jim Martin from Fundi Software, whose presentation was called, “Solving the problem when IMS isn't the cause”.

In March, everyone seemed to be talking about cloud computing.

April’s meeting of the Virtual IMS user group included a presentation from Ron Haupert, a Senior Technologist with Rocket Software. His talk was called, “Simplify and improve database administration by leveraging your storage system”.

In May, Mark Lillycrop, Director of Arcati Ltd, and I took part in a ‘Scheduled Chat’ in the ‘House of Mainframe’ section of CA’s May Mainframe Madness month. May also witnessed the launch of the new Virtual CICS user group – again sponsored by Fundi – with its own Web site. Our opening presentation was from Fundi’s Jim Martin talking about, “Solving the problem when CICS isn't the cause”.

In June, I was asked by ITToolbox to lead a discussion in the Data Center Infrastructure section of their Web site. At the Virtual IMS user group meeting, Gary Weinhold, a Systems Engineer, and Verna Bartlett, Head of Marketing, with Data Kinetics talked about, “MSU reduction due to in-memory table management with (any) IMS applications”.

In July, I was selected for the Destination z member spotlight. The Virtual CICS user group saw a presentation from Jeff Geminder, Principal Consultant with CA, called, “Cross-enterprise application performance monitoring and CICS-specific drill-down: approaches to finding the performance problem needle in the heterogeneous haystack”. I was also a guest blogger on the Destination z Web site.

In August, my article CICS Top Performance and Tuning Issues was published in z/Journal. I had a guest blog published on Destination z. The Virtual IMS user group had a presentation from Scott Quillicy, CEO and Founder of SQData. His talk was called, “IMS replication for high-availability”.

For the September meeting, Charles Jones, from the Product Management group at Rocket Software, gave a talk to the Virtual CICS user group called, “CICS TS 4.2: Leveraging event processing and high-performance Java”. I wrote a guest blog for the Destination z Web site.

October saw a presentation from Rosemary Galvan, Principal Software Consultant – IMS, with BMC. Her talk to the Virtual IMS user group was called, “Database Performance – Could Have, Should Have, Would Have”. I had a guest blog on the Destination z Web site.

In November, my Mainframe Update blog was a finalist in the Computer Weekly Social Media Awards 2011. Also in November, the Arcati Mainframe Yearbook user survey was launched. And Eugene S Hudders, president of C\TREK Corp, gave a presentation to the Virtual CICS user group called, “CICS TS Performance – Tuning LSR Pools”. I also had a guest blog on the Destination z Web site.

And finally, in December, I had an article entitled, Ways to Save Money and Improve IT Services published in z/Journal. The final speaker for the year at the Virtual IMS user group was Suzie Wendler, a Consulting IT Specialist in the IBM IMS Advanced Technical Skills organization, who talked about, “IMS V12”. I chaired a webinar for SQData entitled, “How Important is Continuous Availability of Critical Applications to Your Company?” And there was a guest blog on the Destination z Web site.

What else? Well, apart from a full year of writing and consultancy work, I was made an IBM Champion for the third year running.

Looking forward to 2012, we have the launch of the Arcati Mainframe Yearbook in January, and a presentation from Andrew Smithson of IBM Hursley on CICS Transaction Gateway V8.1 for the Virtual CICS user group.

If you do celebrate it, Merry Christmas and a happy New Year. I’ll be back blogging in January.
Trevor Eddolls

Sunday, 11 December 2011

Sunk without trace

There was a time when using the trace facility was really the final strategy. You’d perhaps have tried everything else to find what was going wrong first. And when nothing seemed to have worked, you’d equip yourself with all the necessary manuals – and that could be quite a few – and run the trace and start the hard job of interpreting the results. And then try to fix the problem. Those days are long gone thanks to more modern software tools, but, to many people, the memories linger on!

I recently bumped into William Data Systems’ Tony Amies, who took the time to show me some of the things he was working on. And one of those things was making trace much, much more user-friendly.

Tony showed me WDS’s ZEN product, which, as you may know, allows lots of network monitoring information to be collated and viewed from anywhere using a browser. Information can appear as coloured boxes which, once clicked, display more and more information in a clever drill-down manner. Fairly quickly, you can identify the component that has exceeded some predetermined threshold.

WDS has a number of products in the ZEN family, and you can use buttons on the browser to switch between them – giving you information about different aspects of performance. ZIM (the ZEN IP Monitor) can detect error conditions, then ZEN TRACE and SOLVE (ZTS – which used to be called EXIGENCE) can be used to start, stop, and view traces. Now that has got to be so much easier than in the Old Days!

Tony showed how a TCP trace could be carried out in seconds, explaining that there were lots of commands embedded in it. He also explained why network tracing can be so difficult. For example, using Enterprise Extender, which allows SNA applications to run over TCP networks, results in encapsulated messages. Tony demonstrated software that was able to look inside the message to see what was there – in terms of different types of header. He then explained how this works with FMH5, UDP, IP, APPN, HPR, and more. Sites using the Cisco load-balancing GRE tunnelling protocol can also have messages opened up to see their true headers. All very clever stuff – and no manuals in sight.

In fact, on a number of occasions a right mouse click on some information in the display would produce a pop-up box explaining exactly what some term or other actually meant. So there was no need for any manuals. The display could show delays, highlight response-time problems, and display the TCP window size.

Tony also showed me a piece of software that drew a diagram of a Sysplex Distributor – which shows the IP addresses and links on a mainframe system. The software also highlighted where there were issues. And, like the rest of the software we looked at, you could drill down to find exactly where any problems were. In fact, Tony was sure that this would allow customers to identify potential issues before their users did. Behind the scenes, information from netstat and other commands was being used to drive the display.

We talked about customers being able to build business service views of what was going on in their systems and how useful that would be for each of their customers. That kind of bespoke requirement wasn’t something that Tony could necessarily build into the software, but all it requires is a knowledge of REXX to make it happen. And most z/OS sites have at least one person who can code in REXX.

Lastly, we talked about problem resolution when you have two or more systems that don’t seem to be talking to each other. Currently, you need to log into each system and run traces to find out which of the systems has the problem. Tony plans to implement a ‘grouptrace’ feature that allows the user to tell the software to run a trace on these two (or more) systems. The results will come back from both systems and be visible from the browser. The results will be displayed in timestamp order, and it will be possible to see on which of the systems the problem is. As easy as that.
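The mechanics of that timestamp merge are simple enough to sketch. Here’s a toy illustration in Python – the records, timestamps, and field layout are entirely invented, since ZTS’s actual trace format wasn’t shown:

```python
import heapq

# Invented trace records: (timestamp, system, event). Each list is already
# in time order, as each system's trace facility would return it.
sysa = [("10:00:01.100", "SYSA", "TCP SEND seq=1"),
        ("10:00:03.250", "SYSA", "TCP RETRANSMIT seq=1")]
sysb = [("10:00:01.150", "SYSB", "TCP RECV seq=1"),
        ("10:00:02.900", "SYSB", "ACK DELAYED")]

def merged_trace(*traces):
    """Interleave per-system trace records into one timestamp-ordered view."""
    return list(heapq.merge(*traces, key=lambda rec: rec[0]))

for ts, system, event in merged_trace(sysa, sysb):
    print(ts, system, event)
```

Reading the interleaved output, it becomes obvious which system introduced the delay – which is exactly the point of viewing both traces together.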

Too often we’d be sunk without a trace facility. Now we have an example of a way to be able to use trace across multiple systems and simply click to drill down to identify the problem.

Sunday, 4 December 2011

The future - gamification and augmented reality

I remember many years ago saying to my children that one day, when they walked around London or any capital city, they’d be able to hold up their phone in front of a statue or building and information would appear on screen explaining what the statue commemorated, etc.
But how about if you could hold up your phone in front of the mainframe or some x86 server, and on screen would appear statistics about usage and performance? You could then take appropriate action to resolve hot spots and capacity issues. All just a dream? Apparently not.
Beverley Head’s blog at IT Wire from last week suggests that BMC is exploring how it can harness gamification and augmented reality techniques in the next generation of its systems management tools. Beverley reports Suhas Kelkar, a chief technology officer for BMC, describing the server example I gave above. Suhas adds: “If someone comes across an intelligent solution they should add it to the knowledge base. But hardly anyone does it. But what if you gamify the system and reward people for doing that?”
So there we have it... Augmented reality is the appearance on your phone of information about server capacity. And it could be about anything else. Wouldn’t it be great to hold your phone over a cable and read off the upstream and downstream broadband speeds?
Gamification – a new word, so try to drop it into conversations if you want to sound up-to-date – is the fun part of using software. The part that is all too often missing!
Interestingly, I found an article about gamification from back in May this year, in which Daniel Nitsikopoulos talks about “Gamification: Making fun of the web”. He asserts that: “Gamification is one of the newest and I believe one of the biggest movements in the creative world today. It is the concept that you can apply game mechanics (elements that make games fun, engaging, and in some cases competitive) to things that aren’t typically considered a game, or even fun! From work, to health, to socialising, to cooking, to just about anything!”
So if BMC is looking at gamification and augmented reality, you can bet CA Technologies is as well. And that other big software supplier, IBM! But I would bet that the really exciting stuff is going to come from smaller companies. And I would also predict that these smaller companies will one-by-one be swallowed up by the existing software giants.
It definitely gets my vote as a direction I’d like technology to move in. Imagine some equivalent to Google Goggles that not only identifies what you’re looking at (the Web server, or the z/Linux LPAR, or whatever) but also provides current performance information. And then makes it fun to resolve any problems that might have been identified. Maybe when you look at the x86 server, it appears in red if there are issues. Then the length of time you take to resolve the problem is entered onto a leader board. And at the end of the week you can see who is the fastest techie in your team! Or perhaps the only green screen you’ll see will mean ‘game over’!

Sunday, 27 November 2011

Managing expectations

Have you ever been out for a few drinks with friends? Maybe you’ve had more to drink than usual. What happens next? Well, the answer seems to depend on which country you and the people you’re drinking with come from.

It seems that in some countries, people take the view that alcohol is so strong and people are so weak that anything is permissible. You can stand up in court and explain your actions – whatever they may be – by saying that you’d drunk too much. In other countries – like Italy – alcohol is grouped with food in the minds of people. You drink when you eat. You eat and drink with your friends and family. Using the defence of excessive alcohol would seem as absurd as using the defence of having eaten too many burgers to explain antisocial behaviour.

And it’s exactly the same with users. If they expect nanosecond response times to a CICS transaction they will be miffed when a response takes a second or two. Whereas, if they are used to a response taking a few seconds, they will be pleased when it takes less than two seconds for their screen to refresh.

Managing expectations can be the difference between happy users and unhappy users. In the same way it can be the difference between alcoholic destruction of everything on the way home and a great night out.

Banks seem to use the opposite technique. They pretend that they offer great service, but as every customer knows, they don’t. The news is always full of demands that the banks should lend more – particularly to small businesses. Speaking as the owner of a small business, I think this is not the real problem. I think the problem for most small businesses is the fact that banks charge too much for their services.

Now I don’t mind banks charging for the work they do – that’s the same model I use to stay in business! What I object to is the amount they charge. And I think this is part of the problem most small businesses face. For example, here in the UK, I get a lot of dollar cheques from the USA. I get an exchange rate that’s clearly in the bank’s favour and then I get charged for paying the money into my account. I get charged for paying in UK cheques. And I get charged even more for paying in cash!

So I guess my expectations are that banks are going to rip me off. They do nothing to manage that and make things better. And they really are the reason that a lot of small businesses are having a hard time during this recession – or whatever we’re calling it.

Just revisiting the psychology again. There are experiments where two groups of students were given free drinks all evening. Both groups got equally drunk. Then the experimenters explained that one group had drunk alcohol and the other group hadn’t. Once this second group were told they hadn’t had any alcohol, they immediately sobered up. Their expectations changed completely and they now behaved in a different way.

So, while IT strives to offer the best service to its users, it’s important that conversations take place between the two groups so that users can describe their expectations of the service they want to receive, and IT can explain how the service is being delivered and give a realistic idea of what an end user should expect. Most sites have SLAs (Service Level Agreements), but these tend to be gathering dust somewhere rather than being constantly referred to. The importance of the conversation is to manage expectations and make sure both groups can continue to work, happy in the knowledge that they are getting or delivering the level of service that everyone expects.

Don’t forget that on Thursday 1 December there’s a webinar entitled: “How Important is the Continuous Availability of Your Critical Applications?” at 2pm GMT. You can register for the event online.

And this is the last week that you can complete the Arcati Mainframe Yearbook user survey. We need all the completed surveys by Friday evening.

Saturday, 19 November 2011

Continuous availability – no longer a dream?

Zero downtime is a goal that many companies are striving for. It sounds so straightforward, and yet it’s not that simple to achieve – especially when it involves the continuous availability of large, high-volume databases. One of the inherent problems is that data replication for high-availability is filled with many nuances that need to be addressed for a successful deployment, including maintaining sub-second latency, active/active considerations, scalability options, conflict detection/resolution, recovery, exception processing, and verifying that the source/target are synchronized properly.

One of the problems that organizations face is the need to address lots of different business issues using what often turns out to be multiple software packages. Integrating these different pieces of software – perhaps even from different vendors – can add an extra level of complexity to the job in hand. What those organizations really need is a single piece of software that’s flexible enough to provide a comprehensive solution for changed data capture, replication, enhancing existing ETL (Extract, Transform, and Load) processes, and data migrations/conversions. Quite a big ask.

Wouldn’t you be interested in software that offers industrial-strength, near-real-time data integration solutions that include high-performance Changed Data Capture (CDC), data replication, data synchronization, enhanced ETL and business event publishing? And what if it was equally simple to experience the high-speed delivery of mainframe data (IMS, DB2, VSAM, etc) into data warehouses and downstream applications? Too good to be true?

If you’re like me, you carry around a list of capabilities in your head, and tick them off – or more often don’t tick them off – when you give software the once-over. So here are the kinds of thing I’d have on my list for an integration engine. In general I’d expect:
  • Concurrent operation across multiple operating system platforms
  • Multi-step processes within a single script (UNION)
  • Simultaneous multi-record type file handling
  • Multi-level array handling (repeating groups) of source data store records/rows
  • Data filtering and cleansing
  • Dynamic look-up table processing
  • Support for data transfer and communication using TCP/IP and MQSeries
  • Preservation of referential integrity (RI) rules on target updates
  • Joins/Merges of heterogeneous databases/files.
In terms of data transformation I’d like to see:
  • Case (If/Else) logic
  • Extensive date cleansing and formatting
  • Arithmetic functions (add, subtract, multiply, etc)
  • Aggregation functions (sum, min, max, avg, etc)
  • Data type conversions
  • String functions
  • Data filtering
  • XML data formatting
  • Delimited data formatting.
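To make that transformation wish-list concrete, here’s a minimal sketch in Python of the sort of row-level work such an engine does – case logic, a data-type conversion, date formatting, and filtering. The field names and rules are invented for illustration:

```python
from datetime import datetime

def transform(row):
    """Apply case logic, type conversion, and date formatting to one record.
    Field names (AMOUNT, TRAN_DATE) are invented for illustration."""
    amount = float(row["AMOUNT"])                 # data-type conversion
    band = "HIGH" if amount >= 1000 else "LOW"    # Case (If/Else) logic
    # Date cleansing/formatting: yyyymmdd -> ISO format
    when = datetime.strptime(row["TRAN_DATE"], "%Y%m%d").date().isoformat()
    return {"AMOUNT": amount, "BAND": band, "TRAN_DATE": when}

def filter_rows(rows):
    """Data filtering: drop zero-value records, transform the rest."""
    return [transform(r) for r in rows if float(r["AMOUNT"]) != 0]

rows = [{"AMOUNT": "1500.00", "TRAN_DATE": "20111204"},
        {"AMOUNT": "0", "TRAN_DATE": "20111205"}]
```

A real engine would, of course, drive this kind of logic from a script or metadata rather than hand-written code – but the operations on each row are essentially these.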
When it comes to datastore processing I’d want:
  • High performance bulk data transfer
  • Concurrent processing of multiple data store types
  • Creation of target data stores from source data store format
  • Insert/append to existing target data stores
  • Update/replace existing target data stores
  • Delete from existing target data stores
  • New column/field creation.
And for Data Movement, my list includes MQSeries, TCP/IP, and FTP.

If there was also some kind of Integration Center that had an easy-to-use Graphical User Interface (GUI) enabling users to quickly develop data integration interfaces from a single control point – that would be good. Additionally, some way to develop, deploy, and maintain data interfaces, create relational DDL (Data Definition Language), XML (Extensible Mark-up Language), and C/C++ structures from COBOL Copybooks, monitor the status of integration engines, and contain an integrated metadata repository – that would be a real plus.

I’d definitely want to find out more about a single piece of software that provided high-performance Changed Data Capture (CDC) and Apply, data replication, event publishing, Extract, Transformation, and Load (ETL), and data conversions/migrations.
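As a rough illustration of what capture-and-apply means in practice, here’s a hedged sketch (an invented record layout, not any vendor’s actual design): captured changes carry sequence numbers, and the apply side processes them in order, skipping anything at or below the last applied sequence so that a restart doesn’t double-apply:

```python
def apply_changes(target, changes, applied_seq):
    """Apply captured changes in sequence order, idempotently.
    target: dict keyed by record key; applied_seq: last sequence applied."""
    for change in sorted(changes, key=lambda c: c["seq"]):
        if change["seq"] <= applied_seq:
            continue                      # already applied before a restart
        op, key = change["op"], change["key"]
        if op in ("INSERT", "UPDATE"):
            target[key] = change["data"]
        elif op == "DELETE":
            target.pop(key, None)
        applied_seq = change["seq"]
    return applied_seq

target = {}
changes = [
    {"seq": 1, "op": "INSERT", "key": "A", "data": {"bal": 100}},
    {"seq": 2, "op": "UPDATE", "key": "A", "data": {"bal": 250}},
    {"seq": 3, "op": "DELETE", "key": "A"},
]
last = apply_changes(target, changes, applied_seq=0)
```

Tracking the last applied sequence number is also what makes it possible to verify that source and target are properly synchronized after a failure – one of the nuances mentioned above.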

So, if you’re like me and want to know more, there’s a webinar from SQData’s Scott Quillicy on 1 December at 2pm GMT (8am CST). To join the webinar from your PC, you need to register before the event. I’ll see you there.

Sunday, 13 November 2011

Guest blog – Mainframe security: who needs it?

This week, for a change, I’m publishing a blog entry from Peter Goldberg, a senior solution architect at Liaison Technologies, a global provider of cloud-based integration and data management services and solutions based in Atlanta. He works directly with customers to identify their unique data security and integration challenges and helps to design solutions to suit their organizations’ requirements. He is a frequent speaker at industry conferences on eBusiness security issues and solutions.

I’ve been helping companies on both sides of the pond solve their data security problems for many years now. If I’ve learned one thing, it’s this: when I go into an organization that runs Windows, there’s little question of the need for data security. The organization knows it and so do I. When I visit a company whose IT infrastructure revolves around a mainframe, however, the mindset is often quite the opposite. In fact, the biggest data security misconception I encounter is the belief that the mainframe environment is inherently secure. Most IT staff view the mainframe as just another network node. Why? Because it’s universally perceived as a closed environment and, therefore, invulnerable to hackers.

In some cases, it’s the mainframe IT pros who hold this conviction. In other instances, it’s the executive management team. Lack of management attention allows “bad practices” to continue. I can tell you this without reserve: data stored in mainframes needs protection just as much as sensitive information stored on a Windows server or anywhere else. And, as systems continue to support more data, users, applications, and services, effective security management in the mainframe environment becomes significantly more difficult.

News flash: mainframes can be hacked!

For that simple reason, mainframe security should not be taken for granted.

Even though the mainframe is a mature platform, there is a real shortage of mainframe-specific security skills in the market. And, the few mainframe security practitioners who are out there spend a lot of time implementing configuration and controls within their environments as well as putting into place security systems like RACF, which provide access control and auditing functionality. As for other security measures, in my experience, the mainframe people know about encryption, but they’re not terribly aware of newer data security techniques like tokenization as it relates to protecting data within the mainframe environment and beyond.

Tokenization is a data security model that substitutes surrogate values for sensitive information in business systems. A rapidly rising method for reducing corporate risk and supporting compliance with data security standards and data privacy laws, it can be used to protect cardholder information as well as Personally Identifiable Information (PII) and Protected Health Information (PHI).
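Peter doesn’t describe an implementation, but the core idea of tokenization can be sketched in a few lines. This is a toy illustration only – a real vault would be a hardened, audited service with persistent, access-controlled storage:

```python
import secrets

class TokenVault:
    """Toy token vault: swaps sensitive values for random surrogates.
    Illustrative only - not production-grade security."""
    def __init__(self):
        self._vault = {}      # token -> real value
        self._reverse = {}    # real value -> token (so values reuse tokens)

    def tokenize(self, value):
        if value in self._reverse:
            return self._reverse[value]
        token = "TOK-" + secrets.token_hex(8)   # random surrogate
        self._vault[token] = value
        self._reverse[value] = token
        return token

    def detokenize(self, token):
        return self._vault[token]

vault = TokenVault()
pan = "4111111111111111"
token = vault.tokenize(pan)   # business systems store the token, not the PAN
```

The key point is that the token bears no mathematical relationship to the original value – unlike encryption, there is no key that recovers it – which is why systems that only ever see tokens can be taken out of scope for assessments.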

In fact, for companies that need to comply with the Payment Card Industry’s Data Security Standard (PCI DSS), tokenization has been lauded for its ability to reduce the cost of compliance by taking entire systems out of scope for PCI assessments. And, even in companies that do not deal with PCI DSS or other mandates, tokenization has proven effective for managing the duplication of data across LPARs and for facilitating the usage of potentially sensitive data for development purposes.

Too often, compliance audits skim over mainframe control weaknesses and there are also fewer mainframe-specific security guidelines. But this does not mean that significant risk is not there. You can apply a risk-based, defence-in-depth approach within the mainframe environment by using stronger mainframe host security controls and by using tokenization to protect the data itself.

To beef up data security on a mainframe, here’s my advice:
  1. Bring in mainframe security experts to identify and remediate risks, and to develop and enforce security policies and procedures.
  2. Develop in-house capabilities and skilled professionals across the mainframe platform to support security initiatives.
  3. Evaluate available security configuration and administration tools – there are some really good ones out there.
  4. Apply an in-depth security strategy that includes secure access and authentication controls, and use them appropriately.
  5. Adopt encryption and tokenization to protect sensitive information. Through their proper implementation, it’s really not that hard to achieve a true high level of protection within the mainframe environment.

Protecting sensitive and/or business-critical data is essential to a company’s reputation, profitability, and business objectives. In today’s global market, where business and personal information know no boundaries, traditional point solutions that protect certain devices or applications against specific risks are insufficient to provide cross-enterprise data security. Combining encryption and tokenization, along with centralized key management, as part of a corporate data protection programme works well – including in mainframe-centric environments – for protecting information while reducing corporate risk and the cost of compliance with data security mandates and data privacy laws.

Don’t be fooled: your mainframe isn’t inherently secure. Doing nothing is no longer an option!

Thanks Peter for your guest blog.
And remember, there's still time to complete the mainframe user survey or place a vendor entry in the Arcati Mainframe Yearbook 2012.

Saturday, 5 November 2011

Guide Share Europe – an impression

I could only make Day 1 of this year’s Guide Share Europe conference on the 1st and 2nd of November – which was a huge disappointment. For those of you who weren’t there, I thought I’d give you a flavour of my experience.

Firstly, it was at Whittlebury Hall again – which is a magnificent location just over the border from Buckinghamshire into Northamptonshire. The location is stunning and the facilities are excellent. It is in the countryside, so if you’re travelling by train, there’s a long taxi ride to get there. If you travel by car, there’s a huge car park.

The exhibition hall is big, but not so big you get lost in it. By having lunch and coffee in the hall, there were plenty of opportunities to engage with vendors and chat to other attendees. I always find it’s a great opportunity to catch up with old colleagues and make new friends. The quality of the coffee and food was good – which translates as excellent when compared to some venues!

But the point of GSE is not the food, it’s the presentations. I chair the Virtual IMS user group and the Virtual CICS user group, so I was torn between the CICS and IMS streams. In the end, I split my time between them. I watched Circle’s Ezriel Gross present on Using CICS to Deploy Microsoft .Net Winforms with Smart Client Technology – which was really fascinating. I’m sure we’re going to see more sites integrating their Windows technology with the power of mainframe subsystems. Ezriel made quite a complicated integration seem straightforward and obvious.

Next I watched IBM’s Alison Coughtrie talk about IMS 12 Overview. Another knowledgeable speaker with a lot of information to get over in the time. I certainly think I have a clearer idea of what’s new, and perhaps a small insight into where IBM is taking the product.

After lunch it was Neil Price, who works for TNT Express and chairs the IMS group for GSE, with a presentation entitled Memoirs of a HALDBA. I was so impressed with Neil’s real-life descriptions that I’ve asked him to speak to the Virtual IMS user group. Neil could have gone on for much longer than the time allowed. And I could happily have gone on listening.

Next up in the IMS stream was IBM’s Dougie Lawson. Dougie is another fantastically knowledgeable IBMer, who you may have come across when you’ve had an IMS problem. He talked about The Why and How of CSL. A real bits and bytes expert, who could have talked much longer.

I felt it was time to sit in on the CICS stream and the session I chose was IBM’s Ian Burnett talking about CICS Scalability. Yet again, a fact-filled presentation that would be hard to criticize. I felt my knowledge about CICS (and I used to edit CICS Update) making more sense and falling more into place.

But all work and no play makes Jack a dull boy – as they say. And the evening presentation was How To Cope With Pressure & Panics Without Going Into Headless Chicken Mode from Resli Costabell. A mixture of psychology, NLP, and audience participation made this a memorable session. If you get a chance to see her anywhere – don’t miss it!

After that there were drinks in the exhibition hall sponsored by Attachmate/Suse and Computacenter, followed by dinner sponsored by EMC and Computacenter. Both were very enjoyable in their own way, and they were an opportunity to chat more informally with vendors and real mainframe users. Obviously, I was telling vendors about sponsorship opportunities with the Arcati Mainframe Yearbook, and asking users to complete the user survey.

In conversation, I asked a few of the vendors how business was going. No-one admitted that the double-dip recession was taking them out of business, but most suggested that they were keeping their heads above water and business generally was flat – but there was some business being done.

An IBMer suggested that over 30 z196s had been sold in the UK and eight of the new z114s. So, that’s good news for them.

My overall impression of the conference was that it was excellent. I bumped into Mark Wilson (the GSE technical coordinator) during the day as he rushed around making sure everything was going smoothly. And that’s why the conference works so well, because people like Mark work so hard to ensure it does.

Well done everyone who organized it and spoke at it. And if you missed it, go next year.

Sunday, 30 October 2011

Two things you thought would never happen at IBM

I guess any two pundits sitting in a room together 10 years ago and talking about IBM’s future would have been more likely to predict Star Trek-like beaming technology and computers you could talk to than a mainframe that integrated Windows servers and a woman landing the top job at IBM.

And here we are. It’s almost November 2011, and both are about to come to pass.

The zEnterprise 196 and the Business Class version, the zEnterprise 114, come with the zEnterprise BladeCenter Extension. Initially this supported AIX on Power blades and Linux on x86 blades. This fitted nicely with IBM’s model of the universe because it owns AIX, and Linux is, of course, open source – ie it doesn’t belong to anybody. The Unified Resource Manager (URM) controls the operating systems and hypervisors on the mainframe and the blades. But now – the previously unthinkable – IBM promises that it will have Windows running on its HX5 Xeon-based blade servers for the zBX chassis before the end of this year.

Microsoft Windows Server 2008 R2 Datacenter Edition will run on the PS701 blade servers in the zBX enclosures. The zBX extension can have 112 PS701 blades or 28 HX5 blades.

This is clearly important for those sites that use mainframes, or are ready to upgrade to mainframes, and still have a big Windows-using population. It’s interesting that so many people consider Windows to be the de facto computing platform. I recently had a conversation where Windows laptops were given the metaphor of rats or beetles – they just turn up everywhere – and Linux was given the metaphor of a stealth operating system or a hidden shadow – it was everywhere, but you didn’t see it. Why stealth? Well, because Linux turns up behind the scenes on routers, on TiVo boxes, on supercomputers, as the precursor to Android on smartphones, making movies at Pixar and Dreamworks, in the military, in governments – everywhere!

After Windows on IBM hardware, the next thing we hear is that Virginia M Rometty, a senior vice president at IBM, is going to be the company’s next CEO – starting in January. “Ginni”, aged 54 (as all the releases inform us), succeeds Samuel J Palmisano, who is 60, and will remain as chairman.

Ms Rometty graduated from Northwestern University with a degree in computer science and joined IBM in 1981 as a systems engineer. She moved through different management jobs, working with clients in a variety of industries. Her big coup came in 2002, when she played a major part in the purchase of the very big consulting firm PricewaterhouseCoopers Consulting. PwC staff were used to working in a different way from IBM’s, and managing that culture shift was down to Ms Rometty.

In 2009, Ginni became senior vice president and group executive for sales, marketing, and strategy.

You’ll recall that Sam Palmisano took over as CEO in 2002 from Louis V Gerstner Jr, who’d joined IBM from RJR Nabisco in 1993 and helped turn round an ailing IBM. The previous incumbent had been the lacklustre John Akers.

I suppose with Siri on iPhones and the much less serious about itself Iris on Android, we’ve moved some way towards being able to talk to a computer – even if it is a smartphone. Still no sign of Scotty being beamed up, though!

Saturday, 22 October 2011

Guide Share Europe annual conference

The Guide Share Europe (GSE) UK Annual Conference is taking place on 1-2 November at Whittlebury Hall, Whittlebury, Near Towcester, Northamptonshire NN12 8QH, UK.

Sponsors this year include IBM, Computacenter, EMC, Attachmate, SUSE, CA, Novell, Compuware, IntelliMagic, RSM Partners, Velocity Software, and Zephyr. And there will be 30 vendors in the associated exhibition.

There’s the usual amazing range of streams – and, to be honest, there are a number of occasions when I would like to be in two or more places at once over the two days. The streams are: CICS, IMS, DB2, Enterprise Security, Large Systems Working Group, Network Management Working Group, Software Asset Management, Tivoli User Group TWS, Tivoli User Group Automation, MQ, New Technologies, zLinux, and the single-session Training & Certification.

That means that at this year’s conference there will be 126 hours of education covering most aspects of mainframe technology. This is slightly less than last year because two of the Tivoli streams that were included last year have been dropped, because they were so poorly attended. This year, there will be 12 streams of ten sessions over the two days, plus five keynotes and that one Training & Certification WG meeting. In all, 85 speakers are going to deliver this training.

There is still time to register, and the organisers are expecting the daily total of delegates to exceed 300 – as it did last year. 

There are also 16 students attending this year, who are taking mainframe courses at UK universities. The majority of students are from the University of the West of Scotland (UWS), but there will also be some from Liverpool John Moores University and possibly more from other UK universities. The organisers have prepared a series of ‘101’ introductory sessions on mainframe architecture and infrastructure that will give these students – as well as trainees and anyone unfamiliar with parts of the infrastructure – a basic understanding of the mainframe and how it works.

Many GSE member companies are taking advantage of the five free places they get to send their staff to the conference. This would cost non-members £1000 in early-bird prices, and more than compensates member companies for the recent rise in the GSE membership fee to EUR 840.

You can find out more details about the conference at

If you’re still debating whether to go, let me recommend it to you. The quality of presentations is always excellent. And the networking opportunities are brilliant. If you are going, I look forward to seeing you there.

Sunday, 16 October 2011

The Arcati Mainframe Yearbook 2012

The Arcati Mainframe Yearbook has been the de facto reference work for IT professionals working with z/OS (and its forerunners) systems since 2005. It includes an annual user survey, an up-to-date directory of vendors and consultants, a media guide, a strategy section with papers on mainframe trends and directions, a glossary of terminology, and a technical specification section. Each year, the Yearbook is downloaded by around 15,000 mainframe professionals. The current issue is still available at

Very shortly, many of you will receive an e-mail informing you that Mark Lillycrop and I have started work on the 2012 edition of the Arcati Mainframe Yearbook. If you don’t get an e-mail from me about it, then e-mail and I will add you to our mailing list.

As usual, we’re hoping that mainframe professionals will be willing to complete the annual user survey, which will shortly be up and running at The more users who fill it in, the more accurate and therefore useful the survey report will be. All respondents before Friday 2 December will receive a free PDF copy of the survey results on publication. The identity and company information of all respondents is treated in confidence and will never be divulged to third parties. Any comments made by respondents will also be anonymized before publication. If you go to user group meetings, or just hang out with mainframers from other sites, please pass on the word about this survey. We’re hoping that this year’s user survey will be the most comprehensive ever. Current estimates suggest that there are somewhere between 6,000 and 8,000 companies using mainframes, spread over 10,000 sites worldwide.

Anyone reading this who works for a vendor, consultant, or service provider, can ensure their company gets a free entry in the vendor directory section by completing the form at This form can also be used to amend last year’s entry.

As in previous years, there is the opportunity for organizations to sponsor the Yearbook or take out a half page advertisement. Half-page adverts (5.5in x 8in max landscape) cost $700 (UK£420). Sponsors get a full-page advert (11in x 8in) in the Yearbook; inclusion of a corporate paper in the Mainframe Strategy section of the Yearbook; a logo/link on the Yearbook download page on the Arcati Web site; and a brief text ad in the Yearbook publicity e-mails sent to users. Price $2100 (UK£1200).

To put that cost into perspective, for every dollar you spend on an advert you reach around 22 mainframe professionals.

The Arcati Mainframe Yearbook 2012 will be freely available for download early in January next year.

Sunday, 9 October 2011

World’s smallest mainframe!

Mainframes are so amazingly powerful and versatile, wouldn’t you like to have one in your pocket? Maybe that’s not possible (yet), but there have been many attempts over the years to shrink down the mainframe to a more manageable size.

I’m not talking about some sci-fi shrink ray wielded by a fearsome purple-coloured alien; I’m talking about the use of emulation software to make one lot of hardware successfully interpret instructions designed to be used on completely different hardware. The mainframe programs think they are running on a mainframe and continue quite happily – totally unaware of the work being performed by the emulation software.

Fundamental Software Inc (FSI) gave us FLEX-ES, which ran on Intel chips and allowed developers to test their mainframe applications on their PCs. The PC itself ran Linux and FLEX ran under that – emulating a range of mainframe hardware devices including terminals and tape drives. Fundamental also sold hardware allowing real mainframe peripherals to connect to PCs.

In 2000 a company called T3 launched the tServer based on FLEX-ES.

UMX Technologies also offered Intel server emulation – using UMX’s Virtual Mainframe software. The company offered Windows compatibility as well.

Then there was Hercules, an open source software implementation of mainframe architectures. Hercules runs under Linux, Windows, Solaris, FreeBSD, and Mac OS X. Hercules was created by Roger Bowler and was maintained by Jay Maynard. Jan Jaeger designed and implemented many of the advanced features of Hercules, including dynamic reconfiguration, integrated console, interpretive execution, and z/Architecture support – according to their Web site. IBM stopped licensing its operating systems for Hercules systems, so users were left running older public domain versions of IBM operating systems or illegally running newer versions.
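By way of illustration (this isn’t from the Hercules site, and the device numbers and file names are made up), Hercules is driven by a plain-text configuration file that describes the processor to be emulated and the devices attached to it:

```
# hercules.cnf - a minimal sketch; all values here are hypothetical
CPUSERIAL 002623       # processor serial number
CPUMODEL  3090         # machine model reported to the guest OS
MAINSIZE  64           # main storage, in megabytes
ARCHMODE  ESA/390      # or z/Arch for z/Architecture guests
NUMCPU    1

# device statements: device-number device-type file
000C 3505 jobs.txt     # card reader fed from a text file
000E 1403 print.txt    # line printer writing to a text file
0120 3390 dasd.120     # an emulated 3390 disk volume
```

The charm of the approach is that mainframe peripherals become ordinary files: punch a job into jobs.txt, and the guest operating system reads it from its ‘card reader’.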

Platform Solutions Inc (PSI) developed Open Mainframe servers, Open Systems servers, and NEC D-Series storage arrays. The company’s System64 product line consolidated z/OS, Windows, and Linux operating systems in one secure operating environment based on Intel Itanium 2 processor technology. At the time, Platform Solutions had a strategic partnership with T3 Technologies. In 2008, IBM took the company over.

Sim390 was an application that ran under Windows and emulated a subset of the ESA/390 mainframe architecture. The emulator supported most TCP/IP operations (via socket calls using an emulated IUCV interface), and contained a Telnet 3270 (tn3270) server for remote log-in (with IP address filtering), as well as local 3270 sessions. It was possible to run it on a very small machine, such as a Pentium 75MHz with 16MB of memory. So says the Sim390 Mainframe Emulator home page.

But now you don’t need to worry about litigation, old Web sites (and older emulators), or potentially dodgy bits of software. You can have the IBM System z Personal Development Tool (zPDT), which enables a virtual System z architecture environment on x86 and x86-compatible platforms.

The IBM zPDT consists of software that is authenticated and enabled by a USB hardware key, loaded onto an Intel or Intel-compatible platform running Linux. The zPDT comes with one, two, or three virtual engines, which can be defined as System z general-purpose processors, System z Integrated Information Processors (zIIPs), System z Application Assist Processors (zAAPs), Integrated Facilities for Linux (IFLs), and Integrated Coupling Facilities (ICFs).

As well as the current IBM operating systems and software, it also supports a variety of real and emulated hardware devices such as disks, tapes, printers, card readers, etc. System z customers, service providers, business partners, and ISVs can get the simpler version as part of the Rational Developer for System z Unit Test (RDz-UT) offering.

So now you can get your hands on a very small mainframe.

Saturday, 1 October 2011

Lumbering sluggers come out ducking and weaving

OK – that’s as far as I intend to go with sport metaphors. I’m talking about IBM and Oracle and where their long-term war is taking them next.

You’ll remember that Oracle bought Sun Microsystems early last year for $7.4 billion. Since then, IBM has been hoovering up customers. In August, market researchers IDC were saying that IBM had grown its Unix revenues by 15 percent in the second quarter and its market share by 6 percent, adding that Oracle had lost share.

IBM claims that in the second quarter, its Power Systems unit acquired 334 customers from competitors, with 210 of those coming from Oracle. And, just to show that they are on a war footing and it’s not just friendly rivalry, IBM says that its formal migration program, which entices customers to move to IBM systems, has gained 7,210 server and storage customers from rivals since its inception in 2006.

There is a third player on the pitch – HP – which has been experiencing pretty dire times itself recently. IBM says it’s acquired 110 users from HP. HP recently announced that Meg Whitman, the former CEO at eBay, will take over from Leo Apotheker, who’s only been there a year. Why dump Apotheker? No other reason than the company losing half its market value in the time he’s been in charge!

There were even rumours (and, who knows, it might still happen) that Oracle would scoop up HP and add it to its own portfolio. Others suggest that the problems Oracle experienced with Sun’s SPARC hardware business may convince it to keep away from HP’s Itanium. Perhaps IBM might buy HP? That last sentence should come enclosed in tags!

But after a longish period of haemorrhaging its Sun SPARC users and having to put up with IBM’s suitably smug grins, Oracle has now announced its high-end SuperCluster system powered by its new T4 SPARC chip. With an estimated 50,000 SPARC customers, it’s a business well worth hanging on to.

The SuperCluster T4-4 is a general-purpose system offering a claimed 33 percent more price/performance than IBM’s largest Power servers and (again claimed) more than 50 percent more price/performance than an Itanium-based Integrity server from HP.

The SuperCluster is powered by Oracle’s eight-core T4 chip, which Oracle claims offers five times the performance of the current 16-core T3. The SuperCluster also includes the capabilities of Oracle’s existing Exadata database system and Exalogic cloud-in-a-box offering, both of which are powered by x86 chips from Intel.

The SuperCluster runs the current Solaris 10 operating system or the new Solaris 11, and will run any applications that its SPARC customers might run.

We can only wait and see what IBM will produce when it comes out of its corner. It certainly knows that the fight is back on.

Sunday, 25 September 2011

Web 3.0 and Facebook

Keen as I am on selling my company’s services to help other organizations make the best use of social media, I never thought that I would be focusing a blog on our old friend Facebook. And yet, this week’s announcements at the F8 developer conference seem to have taken Facebook out of the ‘me-too’ duel with Google+ and Twitter and, in a quantum leap, put it way ahead of the game – bringing Tim Berners-Lee’s vision of a Semantic Web closer.

Facebook recently allowed us to separate our real friends from our ‘Facebook friends’, in a similar way to Google+’s Circles. Then they gave us the ‘Subscribe’ button, allowing us to filter what we read. We can subscribe to ‘all’ updates, ‘most’ updates, or ‘only important’ updates rather than get news of all the goings-on of our friends. But then – like Twitter – you can subscribe to people you don’t even know, following their status and profile updates. Interesting, but, in many ways, underwhelming. Then they announced Timeline, which is a replacement for the current profile page. And then the big one – Open Graph.

Open Graph (a new class of app) will apparently enable Facebook users to share experiences in realtime. Users will be able to instantly share activities with their friends without being required to grant apps permission each time. The more business-oriented amongst you will realize that Facebook users will be sharing more data with friends, so with Graph Targeting the marketing people will be able to deliver specific marketing messages to the ideal target market.

But, apart from Mark Zuckerberg getting even more shedloads of money, this announcement moves us a step closer to the Semantic Web – or Web 3.0 as it’s sometimes called. Way back in 1999 Tim Berners-Lee said: “I have a dream for the Web [in which computers] become capable of analysing all the data on the Web – the content, links, and transactions between people and computers. A ‘Semantic Web’, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy, and our daily lives will be handled by machines talking to machines. The ‘intelligent agents’ people have touted for ages will finally materialize.”

The Semantic Web will, in many ways, be computer driven rather than human driven, and will integrate information from many different sources. This is dependent on metadata being available – and, in many ways, this is what will happen with Facebook’s new approach.

We can turn our Web pages into graph objects using the Open Graph protocol tags and the familiar Facebook Like button on those Web pages. The tags look like:
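The example tags themselves didn’t survive the journey to this page, but – going by the Open Graph protocol documentation, whose own film example this is, so treat the values as illustrative – they look something like this:

```html
<html xmlns:og="http://ogp.me/ns#">
<head>
  <title>The Rock (1996)</title>
  <!-- the four required Open Graph properties -->
  <meta property="og:title" content="The Rock"/>
  <meta property="og:type" content="movie"/>
  <meta property="og:url" content="http://www.imdb.com/title/tt0117500/"/>
  <meta property="og:image" content="http://ia.media-imdb.com/images/rock.jpg"/>
</head>
</html>
```

Each tag is a simple property/content pair in the page’s head, which is what lets Facebook treat the page as a typed object in its graph.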

In addition to the Open Graph protocol’s four required properties:
  • og:title – the title of the object as it should appear within the graph, eg a film title.
  • og:type – the type of your object, eg "movie".
  • og:image – an image URL, which should represent the object within the graph.
  • og:url – the canonical URL of the object that will be used as its permanent ID in the graph, eg

Facebook has added:
  • fb:app_id – a Facebook Platform application ID that administers this page.

And recommends using: 

  • og:site_name – a human-readable name for your site.
  • og:description – a one to two sentence description of your page.

When a user ‘likes’ a Web page using a Like button, a News Feed story is published to Facebook.

Wikipedia suggests that: “There are some who claim that Web 3.0 will be more application-based and centre its efforts towards more graphically-capable environments.” This is what Facebook’s Open Graph appears to be.

It also seems that some companies, such as those providing music streaming services, video streaming services, and newspapers will be able to customize the ‘Like’ button to say ‘listen’, ‘watch’, or ‘read’, as appropriate. Then, once someone has shared the content using these new buttons, other Facebook users will be able to access the content within Facebook provided the content supplier has created a compatible Facebook app.

Mainframers probably already know that IMS has a page at, and CICS has a page at You might not know that the Virtual IMS user group has a page at, and the Virtual CICS user group has a page at

Interesting times for Facebook and definitely putting some distance between it and its nearest rivals – for a while.

Sunday, 18 September 2011

Mainframe maintenance – a new paradigm with new challenges

For many organizations, we’re beginning to see a new model of how IT customer support can be organized – and the model is coming from management who are completely platform-agnostic. To them, IT is IT – it doesn’t matter whether something runs on a mainframe or a distributed platform. And this new way of working brings with it new challenges.

This whole change in staff structure is also being encouraged by the advent of the zEnterprise hybrid machines with their zBX blades running everything from AIX to, potentially, Windows. A consequence is that a mainframe specialist could be dealing with a Linux error message, or a Windows SharePoint guru might be trying to understand what’s going on inside CICS. What can you do to help them?

Or let’s suppose in a more traditional mainframe environment, for whatever reason, you lost some of your top technical people. Perhaps they got jobs elsewhere or perhaps they retired early, but suddenly you find yourself with a huge knowledge gap. Maybe you can transfer someone across from the distributed team. Or maybe you can recruit one of the new generation of youngsters who are learning the benefits of mainframe computing. But whatever you do, there will be a fairly long period of time during which anything out of the ordinary occurring is going to leave everyone scratching their heads and searching Google – whereas, previously, your in-house expert knew exactly what to do. So, in this situation, what are you going to do?

Let’s not worry too much at this stage about Service Level Agreements (SLAs) and performance targets. Let’s simply focus on the problem. How can any organization, irrespective of how its IT team is constructed, ensure that appropriate expertise is available at all times to whichever staff members are available?

Obviously you can have the manuals, and some could be on the IBM mainframe portal, but that doesn’t give you speedy access to the necessary information. A Google search will reveal hundreds of pages of results, but it takes a degree of expertise to sift through those and find the correct one quickly. And someone without any expertise could spend a very long time reading solutions to completely different problems before ever finding the right one. Not a satisfactory way to provide IT services to customers – whether internal or external to the organization.

So what would be a good solution? How can these issues of staff working outside their comfort zone be dealt with in a way that is good for the business? And what kind of solution will still be able to ensure those business-critical mainframes are being supported in a year’s time, in five years’ time, or even further into the future?

This is where a new breed of solutions that can address this coming challenge is positioning itself. One of these, Softlib with its iSolve product, allows an organization to combine all its IT-related information into a single virtual library. That means users – your harassed staff – have to search in only one place, not, as previously, in many places, to find a solution to any problem. And once you have a single location for information available, you can give product champions and other IT-literate staff access to it – which should result in more empowered and satisfied users and fewer calls to the Help Desk.

It makes sense to organize the information in this single virtual library using themes, so CICS information might be one theme, IMS another, Linux a third, etc. The information in the library starts from IBM and third-party software vendors’ manuals, and can be supplemented with information from newsgroups and other online resources. Plus, you can add your own technical expertise.

Access to the information can be from a Web browser or a terminal server. It can be hosted locally, or as a cloud-based resource. The advantage of the cloud route is that the information is looked after by Softlib and they already have access to a huge number of the resources you’ll need. So you can start using the facility almost immediately. Plus the online documentation is automatically updated when new information becomes available. Other benefits include knowledge usage analytics that can help address missing or outdated knowledge, and seamless integration with CRM, bug-tracking, Service Desk, content-management applications, etc.

All in all, Softlib’s iSolve product has a lot to offer most mainframe sites, and certainly provides an answer to the question of what to do if you restructure your IT customer support and need to extend the working expertise of your staff onto other platforms such as AIX and Windows. It also offers a solution to the problem of losing key mainframe experts in a mainframe-only environment.

Sunday, 11 September 2011

Get the size of all site collections for a Web application


This week, Darren Pritchard, our SharePoint guru, lays out clearly exactly what needs to be done in order to find out the size of all site collections for a Web application in SharePoint 2007.

The first step is to create a batch file containing:

SET STSADM="c:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\bin\STSADM.EXE"
%STSADM% -o enumsites -url http://<WebApplication> > SiteStats.txt

Pretty obviously, where it says <WebApplication>, you need to change that to the name of the Web application that you need to produce the statistics for.

Next you need to copy the batch file to your Web front-end server and run it.

The output will be in a file called SiteStats.txt. Open the file and for each site collection you will see ‘StorageUsedMB=’ and a value. It’s the value that you’re interested in.
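If you want a grand total rather than eyeballing each value, a few lines of script will do it. This is just a sketch (not from Darren’s original write-up), assuming each site collection’s entry in SiteStats.txt carries a StorageUsedMB="nnn" attribute:

```python
import re

def total_storage_mb(text):
    """Sum every StorageUsedMB value found in the enumsites output."""
    values = re.findall(r'StorageUsedMB="?([\d.]+)', text)
    return sum(float(v) for v in values)

# Hypothetical sample of two site-collection entries:
sample = ('<Site Url="http://portal/sites/a" StorageUsedMB="120.5" />\n'
          '<Site Url="http://portal/sites/b" StorageUsedMB="79.5" />')
print(total_storage_mb(sample))  # 200.0
```

Feed it the real file – total_storage_mb(open('SiteStats.txt').read()) – to get the combined size of all the site collections in the Web application.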

On a completely different topic: on Tuesday 13 September the Virtual CICS user group meets. Rocket Software’s Charles Jones will be discussing “CICS TS 4.2: Leveraging event processing and high-performance Java”. More details about how to register can be found on the user group Web site at

Friday, 2 September 2011

Create custom permissions – for SharePoint

It’s been a while since we’ve published one of iTech-Ed Associate Darren Pritchard’s SharePoint 2007 beginners’ guides. This time he’s explaining custom permissions and how to create them.

Let’s start off by defining what we’re talking about. Specifying custom permission levels gives you more control over the degree of access users can have to SharePoint sites, site collections, or site content. In effect, you create a new security group.

So, let’s run through the steps:
  1. From the site collection click ‘Site Actions’ 
  2. Click ‘Site Settings’
  3. Under ‘Users and Permissions’ click ‘Advanced Permissions’ 
  4. You will then see a list for permission level group
  5. Select the ‘Settings’ drop down
  6. Click ‘Permission Levels’
  7. Click ‘Add a Permission Level’
  8. You will then be able to create your own security group.

It’s worth remembering that only this site and all its sub-sites will have access to your new group.

Below is a list of permissions that can be set. Please note that selecting one may also result in others being selected because they are required as part of your selection.

List Permissions:
  • Manage Lists – create and delete lists, add or remove columns in a list, and add or remove public views of a list.
  • Override Check Out – discard or check in a document that is checked out to another user.
  • Add Items – add items to lists, add documents to document libraries, and add Web discussion comments.
  • Edit Items – edit items in lists, edit documents in document libraries, edit Web discussion comments in documents, and customize Web Part Pages in document libraries.
  • Delete Items – delete items from a list, documents from a document library, and Web discussion comments in documents.
  • View Items – view items in lists, documents in document libraries, and view Web discussion comments.
  • Approve Items – approve a minor version of a list item or document.
  • Open Items – view the source of documents with server-side file handlers.
  • View Versions – view past versions of a list item or document.
  • Delete Versions – delete past versions of a list item or document.
  • Create Alerts – create e-mail alerts.
  • View Application Pages – view forms, views, and application pages. Enumerate lists.

Site Permissions:
  • Manage Permissions – create and change permission levels on the Web site and assign permissions to users and groups.
  • View Usage Data – view reports on Web site usage.
  • Create Subsites – create subsites such as team sites, Meeting Workspace sites, and Document Workspace sites.
  • Manage Web Site – grants the ability to perform all administration tasks for the Web site as well as manage content.
  • Add and Customize Pages – add, change, or delete HTML pages or Web Part Pages, and edit the Web site using a Windows SharePoint Services-compatible editor.
  • Apply Themes and Borders – apply a theme or borders to the entire Web site.
  • Apply Style Sheets – apply a style sheet (.css file) to the Web site.
  • Create Groups – create a group of users that can be used anywhere within the site collection.
  • Browse Directories – enumerate files and folders in a Web site using SharePoint Designer and Web DAV (Distributed Authoring and Versioning) interfaces.
  • View Pages – view pages in a Web site.
  • Enumerate Permissions – enumerate permissions on the Web site, list, folder, document, or list item.
  • Browse User Information – view information about users of the Web site.
  • Manage Alerts – manage alerts for all users of the Web site.
  • Use Remote Interfaces – use SOAP (Simple Object Access Protocol), Web DAV, or SharePoint Designer interfaces to access the Web site.
  • Use Client Integration Features – use features that launch client applications. Without this permission, users will have to work on documents locally and upload their changes.
  • Open – allows users to open a Web site, list, or folder in order to access items inside that container.
  • Edit Personal User Information – allows a user to change his or her own user information, such as adding a picture.

Personal Permissions:
  • Manage Personal Views – create, change, and delete personal views of lists.
  • Add/Remove Personal Web Parts – add or remove personal Web Parts on a Web Part Page.
  • Update Personal Web Parts – update Web Parts to display personalized information.

Armed with that information, you’re now in a position to try to create a new security group and give a person or a group of people a different level of access from what they had previously.

Monday, 29 August 2011

IMS systems and costs - analysis

I blogged about IBM’s IMS (Information Management System) at the end of July, saying that it has been around since 1968 and originated as a bill-of-materials program for NASA’s Apollo programme. I said that IMS effectively comes in two parts – there’s the Transaction Manager (TM) part and the Data Base (DB) part. I talked about different types of database, and I mentioned the Virtual IMS user group at

Today I want to pose the questions: how much does an IMS development/test system cost? And how many development test systems does a site typically have installed?

It’s a bit like asking: how long is a piece of string? Obviously every piece of string has a length, but until you measure it no quantitative answer can be given. And, by implication, whatever else is being discussed will contain a degree of indeterminate uncertainty!

Our experience at iTech-Ed (where we administer the Virtual IMS user group) is that a single IMS development test system can cost an organisation between US$1,000,000 per year and $2,000,000 per year (and possibly more in some cases).

There are some sites that run their development systems on dedicated machines that can be larger than many average-sized organizations’ production systems.

However, there is an additional complication. We believe that, although IMS is a huge revenue earner for IBM, they will waive their software fees for organisations that are development shops and don’t use it for production.

We also estimate that the personnel costs for installing and maintaining IMS development systems can amount to about half a million US dollars per year.

And the number of IMS development/test systems can vary hugely, from one or two true development systems (plus test, QA, etc) in smaller shops, to anywhere from around ten to perhaps 30+ at larger customers. We know of some users with 300+ test IMS regions, but the bulk of the bell-shaped curve is skewed towards much lower values. The reason we believe the average is ten or slightly above is the amount of administrative effort these test systems take to maintain.

The waters can be muddied further by the fact that organizations can negotiate deals on price with IBM, but are then discouraged from sharing information about those prices with others.

Our conclusion is that the cost to the organisation of running a development system depends on the size of the installation. US$1-2M is a good estimate of the cost for each IMS development/test system, with 10 being a reasonable estimate of, on average, how many development/test systems exist.

And, of course, if you have any further information on this, we would be really interested to hear from you.

Sunday, 21 August 2011

The PC at 30

In the future, IBM will be known as the PC maker of choice for most people, and those PCs will be running a GUI (Graphical User Interface) based on CP/M. Well, that was the view of some people 30 years ago when IBM gave birth to its first PC.

It was on 12 August in a ballroom at the Waldorf Astoria, New York that the IBM 5150 made its first appearance. And because IBM was known for making mainframes, this device was called a ‘personal computer’.

IBM didn’t invent the idea of small personal computers; they just wanted a part of a new and growing marketplace. In those days you could buy small computers like the Sinclair ZX81, and slightly bigger boxes from Apple, Atari, Commodore, Osborne, and Tandy. The big mind-shift for the IBM engineers in Boca Raton, Florida, was to construct their PC from parts that were available off-the-shelf. Up until then, IBM had always designed and made what it needed. However, time was precious and development was faster by shopping for the parts. It was a decision that allowed clone makers a foot in the door.

IBM wrote the BIOS (Basic Input/Output System), the part that’s loaded when the machine boots up. Next they needed an operating system. In the same way they were buying hardware components, they thought they’d buy the OS. The best one around was CP/M (Control Program for Microcomputers) from Gary Kildall of Digital Research, Inc. The story goes that IBM’s representatives waited to see him but he didn’t want to deal with men in suits. Remember back then how ‘cool’ computing was. As a consequence, IBM looked for another source for the operating system. They found Bill Gates. He provided PC-DOS, which was a rewrite of Seattle Computer Products’ (SCP) 86-DOS. The rest, as they say, is history.

Because the 5150 was made from these off-the-shelf components, other companies were quick to copy it. These clone makers badged their machines as ‘IBM compatible’. It seems a long time since anyone put one of those stickers on a PC! However, because they couldn’t copy the IBM BIOS, the clones were never 100% compatible in those days. Now, of course, it’s not an issue. And many companies have come on the scene, made a lot of money making PCs, and disappeared again.

The first PC came standard with 16 kilobytes of memory, upgradeable to 64K, two 5.25-inch floppy drives, an Intel 8088 processor running at 4.77MHz, a display/printer adapter card, and a 12-inch green CRT monitor. You could then buy IBM’s dot-matrix printer and the necessary cable. This meant you’d be looking at over $3000 for the whole lot!

And now, IBM doesn’t have a PC business. It sold it to Lenovo in 2004. In 1996, Caldera acquired the assets of Digital Research from Novell, and later changed its own name to The SCO Group, and more recently the TSG Group.

It’s always hard predicting the future, even if you invented it!

Sunday, 14 August 2011

We’re all friends now

There used to be a time when selling software was a cut-throat game. A salesman would turn up saying how good their product was and quietly poison the prospective client’s mind against alternative products from other vendors – listing their weaknesses and down-playing their strengths. In fact, I’ve even been paid to write documents for sales teams to use doing exactly that!

But now there is a much more grown-up approach to business. It seems that nowadays sales people are working together to move products. And where their own product has gaps, they are recommending the software of an erstwhile competitor. The benefit of this cooperative approach is that the customer gets a better service from vendors and a better understanding of the strengths and weaknesses of the products. And it also means that those companies are able to sell more products – which, after all, is how you stay in business!

So what prompted these thoughts? At this week’s SHARE in Orlando, Florida, CA Technologies started off by announcing a new release of the CA VM:Manager Suite for Linux on System z and a new capability for CA Solve Operations Automation. There have been lots of anecdotes appearing on the Internet about organisations benefitting hugely from virtualizing their Linux servers on System z and eliminating the server sprawl that preceded it. And, clearly, Linux continues (after its slow start) to be one of the fastest growing segments of the mainframe market. So anything that helps to make zLinux users’ lives easier has got to be a good thing.

According to CA’s press release: “The new release of the CA VM:Manager Suite includes enhancements across product areas, which extend integrated management capabilities designed to control costs, improve performance, increase user productivity, and more efficiently manage and secure z/VM systems that support Linux on System z. It also adds tape management capabilities for Linux on System z, along with improvements that help CA Technologies customers install, deploy, and service their CA z/VM products quickly and more effectively.”

The new capability in CA Solve Operations Automation means Linux applications can be managed as if they were System z applications, which reduces the need for mainframe Linux operations expertise.

The synergy comes with the announcement of partnerships between CA and both INNOVATION Data Processing and Velocity Software, which, they claim, are designed to help customers increase cost savings by optimizing Linux management. CA will distribute INNOVATION Data Processing’s UPSTREAM for Linux on Z and UPSTREAM for UNIX on Z, and Velocity Software’s zVPS Performance Suite.

UPSTREAM for Linux on Z is, they say, an intuitive, easy-to-use, data protection solution for what was once distributed data that is now the foundation for Linux applications being consolidated on the mainframe. UPSTREAM for Linux on Z can help reduce backup, restore, and disaster recovery costs, while increasing business resiliency by enabling customers to leverage existing mainframe resources to meet their enterprise data protection needs. The UPSTREAM for Linux on Z solution is designed so that users can easily schedule timely backups and still meet the need for immediate, reliable recovery of a file, a disk volume, or an entire data centre with confidence.

zVPS offers, again according to their press release, easy-to-use graphical and Web-based tools to help analyse capacity requirements, establish operational alerts, and determine chargeback usage. Its detailed information helps IT staff optimize performance and effectively manage the cost of their Linux on System z environment. By gathering data from Linux on System z and distributed environments, such as VMWare, Microsoft, Linux, and Unix servers, zVPS supports server consolidation projects and facilitates decisions on the most cost-effective placement of workloads.

Cynical observers, who are slightly longer in the tooth, will remember a time when Computer Associates would simply have bought the company (in that Victor Kiam, Remington sort of way!). Clearly, CA Technologies is now all about ‘working with’ other vendors.

Sunday, 7 August 2011

IBM’s lawyers can take the day off!

IBM’s dark-suited legal team can relax a little following the news that three organisations have agreed to drop antitrust complaints filed against IBM in Europe and the USA. The companies involved are T3 Technologies, NEON Enterprise Software, and TurboHercules.

Back in October 2009, the US Department of Justice (DoJ) started investigating IBM’s mainframe monopoly following complaints from T3, which in 2000 had launched its tServer low-end mainframe based on the FLEX-ES technology from Fundamental Software Inc (FSI). IBM says that T3 withdrew its appeal for a ruling in the US courts in May this year, and that T3 has also withdrawn its European Commission complaint alleging antitrust behaviour by IBM.

IBM also says that NEON has dropped its European Commission complaint. This makes sense because in June NEON agreed (in the sense that a person with their arm twisted agrees) to stop reselling and distributing its zPrime product and asked customers to remove and destroy their copies. zPrime had been controversial since it first appeared in July 2009 because it allowed mainframe users to run workloads on specialty processors rather than on the main processor – and IBM’s revenue stream is based on main processor workloads. So you can see why users would welcome such a product (and the consequent savings they would make) and why IBM would not. As a consequence, claims and counter-claims flew back and forth between the two companies until the resolution in early June. Since then, NEON’s IMS products have been acquired by BMC.

Finally, TurboHercules has dropped its complaints about IBM with the EU. TurboHercules, a French company, was set up in 2009 by Roger Bowler, who created the open source Hercules mainframe hardware emulator. TurboHercules allows mainframe operating systems and applications to run on x64 and Itanium processors running Windows, Linux, Mac OS, or Solaris as the host environment for Hercules. The organisation was part-funded by Microsoft (obviously, no lover of mainframe technology).

But it’s not all good news for IBM. It’s still the subject of antitrust probes by the US DoJ and the European Commission. So, those lawyers can’t take off too many days!

And on a completely different topic: don’t forget it’s the Virtual IMS user group meeting on Tuesday with Scott Quillicy, CEO and Founder of SQData talking about IMS replication for high-availability. There are more details at

Sunday, 31 July 2011

All change!

It’s been a funny old week. IBM offering apps for mobile phones instead of sticking strictly to big iron, and Google buying a slew of IBM patents. When the British surrendered to the rebel American army at the end of the War of Independence, they played a tune called ‘The World Turned Upside Down’. That’s what this week feels like.

It seems that Windows 7 smartphones aren’t really up there with the top three yet, because IBM has only made its app available in the app stores for iTunes, Android, and the teenagers’ favourite, BlackBerry. For IBM, the idea is to make its social networking platform, IBM Connections, available on smartphones – like Facebook and Twitter (and other social media). To be fair, you could already access IBM Connections through a browser on these phones, but now there’s a proper app. Obviously there are different processes for making the app available through the different organizations’ stores, which will affect how quickly you can download the app on your device. The good news is that the app is free.

So what is IBM Connections? According to IBM’s Web site: “IBM Connections is social software for business that lets you access everyone in your professional network, including your colleagues, customers, and partners.

“The latest capabilities in IBM Connections, such as Moderation, Ideation Blogs, and the Media Gallery, enable you to embrace networks of people who are engaged and to work in transparent and nimble ways to create business value.”

It seems that the ideation and media gallery modules are natively available in the mobile apps. This allows users to vote on ideas, comment on ideas, and manage the ideas from their phones. In addition, users can take photos and upload them – so they can be shared immediately.

Built in to IBM Connections 3 are ‘moderation’, ‘ideation blogs’, and the ‘media gallery widget’. Moderation allows users to review content in blogs, forums, and files before publication, and to approve, reject, or delete it as appropriate. There’s a template available for each community to generate ideas, gather feedback, and come to a consensus on the best ideas – this is the ideation blog. The media gallery widget is, obviously, somewhere to upload and share photos and videos.

Meanwhile, Google has confirmed that it bought 1,029 patents from IBM. These cover areas including SEO, servers, routers, relational databases, object-oriented programming, and the fabrication and architecture of memory and microprocessing chips. It seems that no-one is revealing how much was paid.

Why would they buy so many patents? Perhaps to avoid litigation because they are using someone else’s idea. Or perhaps it’s to stop a rival company using someone else’s idea. It may be little more than synchronicity that Google has recently launched its Facebook-like Google+. The more cynical among you may suggest they are looking for a way to stop Facebook doing something as yet undisclosed that will affect their business! Or it could be to do with the Android versus iPhone smartphone war. Or maybe it’s because Oracle is seeking billions of dollars in damages and royalties because of Google’s use of Java in Android phones. Or maybe, late in the day, Google has realized how important patents are in the modern business world.

Interestingly, Google was after 6,000 patents from Nortel Networks, but lost out to a consortium including Apple and Microsoft, which paid $4.5 billion for the patents. This could be the year of patent sales.

So there you have it. A week when the king of big iron turns up on the smallest of smart devices, and when Google gets itself a stash of patents. What will next week bring?

Sunday, 24 July 2011

IMS – getting better all the time

IBM’s IMS (Information Management System) has been around since 1968 and originated as a bill-of-materials program for NASA’s Apollo programme. So why are so many Fortune 500 companies still using it today? Isn’t it “your dad’s technology” and completely inadequate for today’s tasks? Well, the answer is a resounding NO!

IMS effectively comes in two parts – there’s the Transaction Manager (TM) part and the Data Base (DB) part. The transaction manager is like CICS in that users sit at screens (which could be connecting using browsers on laptops) and access and modify data in the database. Under the bonnet, a message queueing system ensures that transactions don’t get lost and can be backed out in the case of an error. All pretty much standard stuff. The more interesting part is the database. This is the reason that IMS is in use at banks and insurance companies (and many other organizations). The database structure allows data to be retrieved speedily from what are often very large databases. It’s this incredible speed that organizations value. In addition, they know that the information retrieved will be correct and up-to-date.

So let’s have a look at the database component – and this is where you realize that you’re not using technology that was invented in the 1960s! The databases available and their structure have been updated over the years to ensure that users are still able to get to their data faster than using other technologies. IMS databases store data hierarchically. This is like a pyramid design where higher layers give access to lower layers by using data stored in fields. This is quite different from DB2 and other databases that connect data in a relational manner. Going back to our pyramid, we have segments of data stored at each level and each segment contains these fields I mentioned above.
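
To make that pyramid concrete, here is a minimal Python sketch of the idea. This is not real IMS code, and the segment and field names (CUSTOMER, ORDER, and so on) are invented purely for illustration: each segment holds fields, and a field value in a higher layer is used to locate the segments hanging beneath it.

```python
# Conceptual model of an IMS hierarchical database: each segment has
# fields, and child segments hang beneath their parent, pyramid-style.
class Segment:
    def __init__(self, name, fields):
        self.name = name          # segment type, e.g. "CUSTOMER"
        self.fields = fields      # dict of field name -> value
        self.children = []        # segments on the level below

    def add_child(self, child):
        self.children.append(child)
        return child

# A root segment with two levels beneath it (names are illustrative).
root = Segment("CUSTOMER", {"CUSTNO": "0001", "NAME": "A N Other"})
order = root.add_child(Segment("ORDER", {"ORDERNO": "A17"}))
order.add_child(Segment("ORDERLINE", {"ITEM": "WIDGET", "QTY": 3}))

# Navigating top-down: data in the higher layers gives access to the
# lower layers, so we walk from the root down through named segments.
def find_path(seg, path):
    """Follow a list of segment names from this segment downwards."""
    if not path:
        return seg
    for child in seg.children:
        if child.name == path[0]:
            return find_path(child, path[1:])
    return None

line = find_path(root, ["ORDER", "ORDERLINE"])
print(line.fields["ITEM"])  # -> WIDGET
```

The point of the sketch is only the shape of the thing: retrieval starts at the top of the pyramid and descends level by level, which is quite different from joining rows across relational tables.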

There are four types of database that can be used with IMS, although two of them are very similar and often grouped together. The original database type available was (and still is) the “full function” database. This uses DL/I calls to access the data and makes use of both primary and secondary indexes. The access methods used to get to the data can be – and there’s quite a long list here – HDAM (Hierarchical Direct Access Method), HIDAM (Hierarchical Indexed Direct), SHISAM (Simple Hierarchical Indexed Sequential), HSAM (Hierarchical Sequential), and HISAM (Hierarchical Indexed Sequential). Typically, sites tend to use HDAM and HIDAM. The data is actually stored using VSAM (Virtual Storage Access Method) or OSAM (Overflow Sequential), which exists only for IMS files. OSAM improves performance by optimizing the I/O channel program for IMS.
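
A program typically navigates a full-function database with DL/I calls such as GU (Get Unique) and GN (Get Next); repeated GN calls return segments in “hierarchical sequence” – top to bottom, left to right – which for a tree amounts to a preorder walk. The sketch below is plain Python with made-up segment names, not real DL/I, and is only there to show what that sequence looks like:

```python
# A tiny stand-in database: each "segment" is a (name, children) pair,
# mirroring an IMS hierarchy with a root and dependent segments below.
db = ("CUSTOMER", [
    ("ORDER", [("ORDERLINE", []), ("ORDERLINE", [])]),
    ("ADDRESS", []),
])

def get_next(segment):
    """Yield segment names in hierarchical (preorder) sequence,
    imitating what repeated DL/I GN calls would return."""
    name, children = segment
    yield name
    for child in children:
        yield from get_next(child)

print(list(get_next(db)))
# -> ['CUSTOMER', 'ORDER', 'ORDERLINE', 'ORDERLINE', 'ADDRESS']
```

In real IMS the calls are issued against a PCB and the retrieval can be qualified by segment search arguments, but the ordering idea is the same.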

The next two types of database are the “fast path” databases. These can be used in situations where transaction rates are very high – and that’s why IMS is so successful in larger organizations. The two database types are DEDBs (Data Entry DataBases), which are stored in VSAM data sets, and MSDBs (Main Storage DataBases), which, as the name suggests, are held in main storage. What distinguishes them from full function databases is that there is no indexing. Many sites have replaced their MSDBs with VSO (Virtual Storage Option) DEDBs.

The most recent type of IMS database is the HALDB (High-Availability Large DataBase). HALDBs were first introduced with IMS V7 in order to handle very large amounts of data. With IMS V9 came the ability to reorganize the data online, so there’s no longer any need to take a database offline to reorganize (optimize) it – which, of course, increased the availability of the data.

Many separate databases can be grouped together to produce a single logical database that will be used by the transactions running on the system.

As you can see, since those days of moon rockets, IBM has beefed up IMS databases so that they can handle extremely high transaction rates. Then it increased the amount of data that can be stored in the database itself. And finally IBM increased the availability of that database to produce a product that is trusted and relied upon by organizations that need to be able to ensure the integrity and availability of their data.

If you’re interested in IMS, you’ll be interested in the Virtual IMS user group. This is a free-to-join vendor-independent user group that holds virtual meetings every other month and always includes a guest speaker talking about an IMS-related technical topic. You can find out more at

Sunday, 17 July 2011

Command economies, decentralization, and the z114

It comes and goes. It’s like a pendulum swinging in one direction, running out of steam, and then swinging in the completely opposite direction. And it applies to countries, economies, and the way people view computing. Let me explain...

During the 1970s, computing, where it existed, was very much a centralized affair. The gods of the mainframe pretty much controlled what anyone was able to do. It was like Stalinist Russia. Everything came out of the centre. You didn’t get it, unless someone at the hub of things deemed it necessary for you to have it.

Currently in the UK, we have the opposite approach in terms of our model of how things should work. Quite logically, you might think, if you live in a rural area with rolling fields full of wheat or livestock, your concerns are completely different from those of someone living in a post-industrial run-down urban area. Of course, this localism easily lends itself to the criticism of being a ‘postcode lottery’. Anyway, we have little islands of individuality separate from each other. Unfortunately, the reality is that political areas tend to include more than a monoculture of just rural or just urban populations. Plus you have different needs for different age groups – you can see where this idea falls down when applied to the real world, but hang on to the little islands metaphor.

Now let’s turn time back to 1989. We find the Berlin wall coming down and the whole centralized power base of the USSR and its Warsaw Pact allies crumbling. In the world of computing, we find the balance of opinion has moved right away from mainframes. In the early 90s, their death was confidently predicted. In its place we had millions of underpowered PCs running DOS-based operating systems. And as the 90s progressed we saw the triumph of Windows and Microsoft. We also saw that antithesis of centralization, Open Source software. Unix started life in 1969, and Linus Torvalds’ Linux arrived in 1991. Even IBM, which had developed and standardized the PC in the early 1980s, was working on the development of other platforms. 1988 gave us the AS/400 – now the IBM System i, which runs on the POWER platform. The RS/6000, running a Unix variant called AIX, arrived in the 1990s and also now runs on POWER hardware.

So, having been empowered to make their own decisions and choices of hardware and software, what have users done since then? Well, in the PC world, they go for big servers that are virtualized in order to benefit from the control that gives them. It makes back-ups and business continuity easier.

And now, here we are in 2011 and IBM announces a Business Class (basically not a top-end machine, more one for the everyman mainframe user) zEnterprise – the z114. It’s gone back to being a centralized piece of hardware because not only is it a mainframe, it’s a POWER7 box, and it has x86 blades. So that gives users a smaller footprint, less power consumption, and control of everything using the IBM zEnterprise Unified Resource Manager and the IBM zEnterprise BladeCenter Extension (zBX). The POWER7 blades mean that AS/400 and RS/6000-heritage users have a home. And the x86 blades not only run Linux x86 applications unchanged, but, by the end of the year, are expected to run Windows applications too.

The culture shock at many sites will come when the distributed applications teams (and they may have many different names) discover that all the things they’ve been planning to achieve (virtualized desktops, virtualized servers, etc) are just part of the techniques that the mainframe people take for granted. And the mainframers are going to have to understand that for many of the people in the other teams, it’s in many ways still about 1992 in terms of business recovery times, etc. But when the teams do come together, the synergy is going to be very beneficial for the organizations that allow it to happen.

This new mainframe, unusually, comes with a published price tag – $75,000. As part of the package you get the IBM Smart Analytics Optimizer to analyse data faster at a lower cost per transaction, and the IBM WebSphere DataPower XI50 for integrating Web-based workloads. The new hardware runs the latest version of z/OS – 1.13. You get 3.8GHz processors (the zEnterprise 196 uses 5.2GHz processors), and you can configure up to 14 of them with 10 specialty processors – zIIP, zAAP, and IFL.

The pendulum has now swung completely back. We have a single box capable of providing all the different computing needs of an organization.