Sunday 20 December 2009

Seeing is believing

Well it’s nearly Christmas and everyone is winding down for the festive break – whether they celebrate Christian holidays or not – so I thought I’d look away from mainframe things and turn my attention to video eyewear!

I must admit that I watch less and less television in the lounge with others while the programmes are transmitted live, and more and more using iPlayer or one of the other second-chance-to-see services offered by TV channels and others. Now, rather than be solitary in my viewing habits, I thought it would be a good idea to invest in some video glasses with built-in speakers, so that I could at least sit in the same room as other members of my family while catching up with those missed programmes.

In case you’re unfamiliar with the concept, what I wanted was something small – not too much bigger than a normal pair of glasses (and certainly not something that started life as 1990s virtual reality headgear). Instead of lenses there would be two flat screens, which would be so close to my eyes that it would look like I was watching a cinema-screen-sized TV picture. I also wanted to have earphones attached, so I could hear what was being said, etc.

I also wanted to test them out – to see whether they worked with my laptop. In addition, I wanted to see how long I could wear them before they started to feel too heavy or make my eyes go blurry. And I wanted to see what quality the sound was and whether there was much sound leakage – which would have irritated anyone sitting next to me.

Sadly, I have as yet been unable to get review units of any of these. So, I can’t include any recommendations in this blog. Also, it seems that there are only a few makes and models available in the UK (where I’m based) – certainly fewer than in the USA.

Amazon.co.uk offers MyVu’s Solo Plus for £49.99, Vuzix’s AV920 for £195.00, CTA’s Digital Mobile Home Theater for £104.98, QL’s Theatre for £149.99, and ezGear’s ezVision Video Eye Wear for £140.00, amongst other models.

Amazon.com adds UNKN’s iDesign Digital Video Glasses $199.95 and Syba’s RCG RC-VIS62005 for $222.53, KJB’s C6000CL color video glasses with clear lenses for $364.12, eDimensional’s Wired eDimensional 3D gaming glasses for $69.95, i-Theater XY Video Glasses for $145.00, I-O Display’s i-glasses VIDEO for $779.00, and TheaView’s LV-QB02 Video Glasses for $259.00, amongst others.

It’s always nice to have choice, but it would be good to be able to compare these and see how well they work for my requirements. I don’t want to connect them to an iPhone, I don’t want to play games wearing them – on a Wii or anywhere else.

Having said that, they do seem to be an ideal solution to the problem of an antisocial TV viewer (me) being able to rejoin the rest of the family. They also seem like they could be useful in many other situations. So, with Christmas coming up, if you’re stuck for what others can give you, try video eyewear – and let me know how well they work!

It’s not too late to become a fan of my company, iTech-Ed Ltd, on Facebook – click here. And you can follow my tweets here.

Merry Christmas

Sunday 13 December 2009

IBM launches London Analytics Solution Centre – part 2

Last week I was talking about the launch of IBM's London-based Analytics Solution Centre and the fact that it joined six others across the globe. This week I want to tell you a little more.

Part of the documentation given to guests at the launch included the results of a survey of 225 business leaders worldwide. The survey found that enterprises are operating with bigger blind spots, and that they are making important decisions without access to the right information. Respondents recognize that new analytics, coupled with advanced business process management capabilities, signal a major opportunity to close gaps and create new business advantage. The documentation goes on to assure us that those who have the vision to apply new approaches are building intelligent enterprises and will be ready to outperform their peers.

Elsewhere it suggests that: “By embracing advanced analytics across the enterprise, intelligent enterprises will optimize three interdependent business dimensions:

  • Intelligent profitable growth: intelligent enterprises have more opportunities for growing customers, improving relationships, identifying new markets, and developing new products and services.
  • Cost take-out and efficiency: intelligent enterprises optimize the allocation and deployment of resources and capital to create more efficiency and manage costs in a way that aligns to their business strategies and objectives.
  • Proactive risk management: intelligent enterprises have less vulnerability and greater certainty in outcomes as a result of their enhanced ability to predict and identify risk events, coupled with their ability to prepare and respond to them.
“Each of these dimensions is a critical part of optimization – the impact of a decision or action along any one of them will have repercussions for the others.”

Those, in a nutshell, are the compelling reasons that IBM sees for organizations to embrace the advanced analytics it is providing.

They also suggested some quick questions an enterprise could ask itself to determine whether advanced analytics could help. The questions were:

  1. Is your view of customer data and customer profitability limited?
  2. Are you unaware of how your reputation is being shaped by social networks and consumer blogs?
  3. Are you incurring losses due to unmanaged risk or high rates of fraud?
  4. Do you have blind spots related to customer and partner credit risk?
  5. Are you operating in an environment of increased regulatory oversight and require better transparency to reduce risk?
  6. Do you have duplicated or siloed data with multiple versions of the truth?
  7. Are you unable to use information as a platform for growth and cost reduction?
Not surprisingly, IBM concludes that an answer of “yes” to any of these questions means it is time for an enterprise to think hard about business analytics and optimization.

So, let's just conclude by recapping what IBM says Business Analytics and Optimization can do. IBM asserts that it helps your organization run smarter by bringing together foundational business intelligence, performance management, advanced analytics, and predictive modelling, with proven models that accelerate your time-to-value. It concludes by suggesting that you get all this by utilizing IBM's “unparalleled research capability”.

I was convinced that analytics has an important future. What about you?

Don’t forget that if you’re on Facebook, you can become a fan of my company, iTech-Ed Ltd, by clicking here and then on the fan button. And if you want to follow my tweets, it’s twitter.com/t_eddolls.

Sunday 6 December 2009

IBM launches London Analytics Solution Centre

I was invited to Southbank for IBM’s launch of its London-based Analytics Solution Centre – joining six other similar centres across the globe. The morning session included an introduction by Carolyn Taylor, and presentations by Brendan Riley (IBM’s head honcho in the UK), Lord Peter Mandelson (who spoke very well), and Cambridge University’s Professor Andy Neely. After a short Q&A we were shown some very interesting demos.

So what are these Analytics Solution Centres? We were told that the centres will help IBM apply its advanced analytics expertise to the complex business challenges facing its clients. You’re perhaps no clearer after I’ve just given the official answer. So let me try to explain in another way...

They were suggesting that there are lots of little bits of information floating around that could be tied together, and that the resulting bigger picture would help organizations make better decisions. An example we saw on video was traffic flow: it would theoretically be possible to measure the movement of people’s mobile phones to gauge the actual flow of traffic along busy streets. Using this information, faster alternative routing information could then be sent to, perhaps, the sat navs in the cars. That way, everyone would arrive at their destinations faster.

Another example that was demo’d was about flooding in Galway Bay. Previously, the local harbour master might have decided that a particular combination of rainy weather, winds, and high tides put the town at risk of flooding. Now, sensors in buoys are linked to other information to come up with a much better picture of when flooding might occur. It also links into other systems measuring water quality. And a further system links local fish restaurants with local fishermen, who can say what fish they are catching, so the restaurants can put those fish on their menus for that evening!

We also saw the ORX (Operational Riskdata eXchange Association) system – which is used to help members quantify risk exposure. It was something banks needed following the Basel II accord. Risk information from member banks in 18 different countries is securely and anonymously exchanged. The consequence of this initiative is that banks now have a much better understanding of their exposure to potentially damaging operational risk.

Andy Neely stressed the need for getting the organization architecture right. He suggested that getting IT right saw an 8% improvement, and getting the organization right also saw an 8% improvement. However, getting them both right at the same time led to a 34% improvement!

The motto of the presentation, if I might call it that, was “predict and act”. The suggestion was that currently organizations tended to sense something was going on and then respond.

The Analytics Solution Centre in London will comprise around 400 consultants, software specialists, and mathematicians, and will include recent graduates. This number will increase as business increases.

Other Analytics Solution Centres can be found in Berlin, Beijing, Dallas, Tokyo, New York, and Washington DC.

The whole thing looks very exciting, and it’s an area destined to grow.

On a completely unrelated matter... If you’re on Facebook, why not become a fan of my company, iTech-Ed Ltd. Click on the link - http://tiny.cc/Q09yi - and click the fan button. And if you want to follow my tweets, it’s twitter.com/t_eddolls.

Sunday 29 November 2009

Clouding your thoughts


Cloud computing has received a number of boosts quite recently, and I thought I’d just run them down for you.

IBM made absolutely sure you knew their latest announcement was from them by calling it IBM Smart Business Development and Test on IBM Cloud (I usually put the acronym for a new product in brackets just after I give its full name, but this time I’ll just leave it to you to work it out). This gives customers a free public cloud beta for software development. Get in early, because the beta will be open and free until general availability (sometime early in 2010).


IBM also announced the IBM Rational Software Delivery Services for Cloud Computing. This includes a set of ready-to-use application life-cycle management tools for developing and testing in the IBM Cloud.


Microsoft also has its head in the clouds and has announced Azure, as a way to bridge the gap between desktop and cloud-based computing. I guess the allure of Azure is that applications that were developed using a common Windows programming model can now be run as a cloud service.


Also clouding the distinction between what's a desktop application and what isn't, we've got the new Chrome OS. It just assumes that all applications are JavaScript-based Web applications that you use from a Web browser. So, effectively, every application is running in the cloud. And once a connection to the Internet is established, Chrome OS automatically synchronizes data using cloud storage.


Now I've read people commenting on how the Chrome OS will hit low-end Windows machines, but my guess is that it will actually hit Linux netbooks. Don't get me wrong, I'm a big fan of Linux and all things Open Source, but the real reason for running Linux on a netbook is that Vista is too memory-hungry. OK, I know XP and Windows 7 are much better than Vista (or should I say, much much better!), but the idea of running Chrome OS on a netbook takes away the need to run Linux, and would appeal more to those people keen to experiment. Just a thought.


Going back to IBM: it has developed the world’s largest private smart analytics cloud-computing platform – codenamed Blue Insight – which combines the resources of more than 100 separate systems to create one large, centralized repository of business analytics. According to IBM, “cloud computing represents a paradigm shift in the consumption and delivery of IT services”. Blue Insight has allowed IBM to eliminate multiple Business Intelligence systems that were performing more-or-less the same ETL (Extract-Transform-Load) processes for different user groups.


Gartner Research are big fans of cloud computing, telling us that: “The use of cloud computing in general has been spreading rapidly over the past few years, as companies look for ways to increase operating efficiency and data security. The cloud industry, which is in its infancy, will generate $3.4 billion in sales this year.”


Merrill Lynch reckons that by 2011 the cloud computing market will reach $160 billion, including $95 billion in business and productivity applications. With that kind of money around, it’s no wonder that IBM and Microsoft are keen to get some of it.


And finally on this topic, IBM has announced a program designed to help educators and students pursue cloud-computing initiatives and better take advantage of collaboration technology in their studies. They’re calling it the IBM Cloud Academy. IBM provides the cloud-based infrastructure for the program, with some simple collaboration tools.


This is where I do the “every cloud has a silver lining” joke – or not!

Sunday 22 November 2009

Guest blog – Shadow ROI

This week, for a change, I’m publishing a blog entry from DataDirect’s Jeff Overton, product marketing manager for Shadow. Jeff looks at the return on investment for Shadow users through the Mainline/DataDirect TCO calculator.

For horizontal technologies, like integration, it is difficult to quantify Return On Investment (ROI) because it underpins many business systems. What can be quantified is the Total Cost of Ownership (TCO). Nowhere is TCO more important than on mainframes, where a single IBM System z10 can have the capacity of 1500 or more Intel servers. Licensing hardware and software on such an enterprise scale can be costly, so there is, and always has been, a need to manage this capacity and understand how its resources, such as processor time, are being allocated.

IBM’s line of specialty engines, including the System z Integrated Information Processor (zIIP), is designed to help lower mainframe TCO by processing qualified workloads rather than having this work processed on the General Purpose Processor (GPP). These engines are just like a GPP except:
  • Their capacity is typically not used in calculating software licensing fees based on mainframe capacity.
  • Their processing speed is not governed – that is, they run at full speed all the time.
  • Their processing capacity is enormous – a single zIIP engine for an IBM System z10 machine has a capacity of 920 MIPS.
At Progress DataDirect (www.datadirect.com/products/mainframe-integration/shadow-rte/index.ssp) we recognized the potential TCO savings from these engines, and four years ago re-architected DataDirect Shadow, our single unified platform for mainframe integration, to exploit them. In 2007 we introduced the first generation of that effort, and earlier this year introduced the second generation. Today we can offload up to 99% of the integration processing performed by the product. It is important to note that our implementation is in strict accordance with ISV use of the zIIP and does not cause IBM or any other third-party code to become zIIP-enabled. The market reception to leveraging zIIP specialty engines to legally reduce integration costs has been extremely positive.

However, IT decision makers requested we provide estimates of the potential savings based on THEIR workloads. In response we partnered with Wintergreen Research (www.wintergreenroi.com), a well-respected analyst firm specializing in TCO/ROI analysis, to deliver a Web-based calculator (www.datadirect.com/products/mainframe-integration/shadow-rte/shadow-tco-home/tco-calculator/index.ssp). The Calculator models the potential capacity savings, measured in MIPS, as well as the monetary savings. It uses what is called the Willhoit constant, named after Gregg Willhoit, our Chief Software Architect, who developed the algorithm that estimates the offload of DataDirect Shadow processing. Today the calculator covers two processor-intensive types of integration processing: Web Services and SQL. In as little as an hour, our field-engineering team can quantify the savings using your workload profile for:
  • Number and type of Web services (requester or provider) or SQL statement type (join, aggregate, etc.)
  • Size of SOAP payload or estimated result set size for SQL
  • Invocations over a modelled timeframe – such as per-day or per-peak period to help model peak capacity requirements
  • Cost per MIPS. Because cost can be calculated differently, the model offers the option to use the comprehensive Wintergreen mainframe-costing model, which includes hardware, software, data centre, and labour costs, or a way to use your own numbers.
Using these core metrics, the capacity and monetary savings are presented immediately. The Calculator goes further by modelling this workload out over an additional five years and provides parameters to account for changes in workload, mainframe capacity, and MIPS costs. This is a great, low-investment process to quickly get a clear picture of the costs over the typical five-year planning horizon that many organizations rely on.
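To make the arithmetic concrete, here is a purely illustrative example with hypothetical numbers (not a real customer profile, and glossing over the finer points of the Willhoit constant): suppose the profiled integration workload consumes 300 MIPS on the GPP, the model estimates that 99% of that processing can be offloaded to the zIIP, and the all-in cost is $1,500 per MIPS per year. The capacity saving would be 300 × 0.99 = 297 MIPS, and the monetary saving roughly 297 × $1,500 = $445,500 per year – before projecting changes in workload, capacity, and MIPS costs over the five-year horizon.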

In as little as an hour, IT can be in a much stronger position to provide detailed and accurate information to support ROI analysis – covering not only mainframe integration investments but also the potentially large MIPS dividend available to the entire mainframe from utilizing zIIP specialty engines to process up to 99% of the integration processing performed by Progress DataDirect Shadow.

Thanks Jeff for being our first guest blogger. And remember, there's still time to complete the mainframe user survey or a vendor entry for the Arcati Mainframe Yearbook 2010.

Sunday 15 November 2009

GSE conference


I was lucky enough to attend the Guide Share Europe National Conference on 4th and 5th November at Whittlebury Hall. This pulled together lots of mainframers, who were very interesting to talk to – including three young lads who are mainframe apprentices! – plus numerous excellent speakers. There were also a number of vendors there in the exhibition area who were keen to chat and pass on information about their new products – which was also very informative.

I managed to have a long chat with NEON’s Tony Lubrano who gave a presentation in the New technologies stream on zPrime. He explained how zPrime 1.2 now includes an Enablement Console, making it easier for users to select the applications they want to move from the central processor to the zIIPs or zAAPs. There’s also an LE (Language Environment) Initialization Exit feature that automates the task of enabling LE-compliant applications to migrate to the specialty engines. Tony explained how these requirements had come from users and had been delivered in the new release.

The people from Innovation Data Processing were keen to talk about their core FDR products, plus the newer FDRERASE and FDRERASE/OPEN, FDRVIEWS, FDRMOVE, and FDRPAS.

I had an enjoyable catch-up with the team from Compute (Bridgend) who demonstrated their new SELCOPY/i, which is part of SELCOPY or CBLVCAT and provides multiple windows for user action and produces what they call a “mainframe desktop”. It’s worth checking the huge number of facilities on the Web site (www.cbl.com).

I was surprised to find mainframe companies I didn’t know. There was Thesaurus (www.i-tcs.com), which offers products, consultancy, and managed services, and has expertise with mainframe Linux. There was EZLegacy (www.ezlegacy.com), who had EZSource, their application-oriented configuration management database. There were two EPV (www.epvtech.com) products: EPV for z/OS and EPV for DB2. Olga Henning represented Blue Sea Technology (www.blueseasoft.com). Stephen Golliker represented Higobi (www.higobi.com).

There were many other exhibitors who were friendly and helpful in discussing their products.

But I didn’t really go for the exhibitors; I wanted to see some of the presentations. There were streams for CICS, IMS, DB2, Enterprise security, zLinux, Large systems working group, Network management working group, Software asset management, and New technologies.

I was particularly interested in the IMS stream – because of my work with the Virtual IMS Connection user group (www.virtualims.com), and managed to see an excellent presentation by IBM’s Alan Cooper on “Rock solid security in the post-SMU era”. I also sat in on the “Birds-of-a-feather” session to see how real IMS users are finding the product and particularly what difficulties they have to overcome in their environments.

It was an excellent event. It was well-organized and run. It was in a lovely location. And everyone I spoke to was friendly and helpful, and keen to talk mainframe technical talk. Many thanks to the organizers for setting up such an excellent event, and to Mark Wilson, this year’s conference manager.

BTW: if you like this blog, go to http://www.computerweekly.com/Articles/2009/11/03/238190/vote-in-the-computer-weekly-it-blog-awards-2009.htm. Look for Individual IT professional male, then use the drop-down menu to find Mainframe update and select it. Then go down the page and press "Done" - and you will have voted for my blog. Tell all your friends!

Sunday 8 November 2009

The big daddy of virtualization just got better

While all those Windows-warriors are talking about Windows 7 and virtualization strategies, the king of virtualization – IBM’s VM software – has seen the release of z/VM Version 6.1.

Microsoft has its desktop virtualization technology, and is up to Version 2 of the Microsoft Desktop Optimization Pack 2009 (MDOP) – the add-on you need for most of the Windows 7 virtualization capabilities – assuming you have the right chip in the first place. The big thing about Windows 7 is that it lets users run their software in XP emulation mode! The App-V (Application Virtualization) client, which is built into MDOP, provides the client side for virtual application launches. Users can click desktop icons to launch a server-based application, which they can use as if it had launched on their own machine. Microsoft Enterprise Desktop Virtualization (MED-V) allows Virtual PC to launch on top of Windows 7 and adds a management capability by linking to Microsoft’s management server and providing the client-side support for policy-based usage controls, provisioning, and delivery of a virtual-desktop image. But enough about that!

Anyway, the new release of z/VM is available only on the IBM System z10 Enterprise Class server and System z10 Business Class server, and future System z servers (z11 and whatever comes next).

According to IBM, z/VM V6.1 offers:
  • Guest LAN and Virtual Switch (VSWITCH) exploitation of the Prefetch Data instruction to use new IBM System z10 server cache prefetch capabilities to help improve the performance of guest-to-guest streaming network workloads

  • Closer integration with IBM Systems Director by shipping the Manageability Access Point Agent for z/VM to help simplify installation of the agent

  • Inclusion of post-z/VM V5.4 enhancements delivered in the IBM service stream.
IBM adds that this release provides the basis for some major future enhancements as indicated by the announced Statements of Direction that include:
  • z/VM Single System Image: IBM intends to provide capabilities that permit multiple z/VM systems to collaborate in order to provide a single system image. This is planned to allow all z/VM member systems to be managed, serviced, and administered as one system across which workloads can be deployed. The single system image is intended to share resources among all member systems.

  • z/VM Live Guest Relocation: IBM intends to further strengthen single system image support by providing live guest relocation. This is planned to provide the capability to move a running Linux virtual machine from one single system image member system to another. This is intended to further enhance workload management across a set of z/VM systems and to help clients avoid planned outages for virtual servers.
CA was quick on the scene, offering Day One support for its many z/VM solutions.

I’m always interested in VM developments – I wrote two books about VM many years ago, and still have a soft spot for it. It seems that the big daddy of virtualization is still well ahead of any competitors out there and just keeps getting better.

Sunday 1 November 2009

A couple of HTML tips

This time, just a couple of Web coding tips for valid HTML.

Have you ever wanted to embed a YouTube video on a page AND have it validate? YouTube allows you to specify the size you want, etc, and then gives you the code using the embed tag – so it looks like this:

<object width="480" height="295">
<param name="movie" value="http://www.youtube.com/v/n69yPQGcSJY&hl=en&fs=1&">
</param>
<param name="allowFullScreen" value="true">
</param>
<param name="allowscriptaccess" value="always">
</param>
<embed src="http://www.youtube.com/v/n69yPQGcSJY&hl=en&fs=1&" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="480" height="295"></embed></object>


If you’re curious, it’s Gavin Bate talking about climbing Everest.

Anyway, the code won’t validate because you can’t use the embed tag. What works is the following:

<object type="application/x-shockwave-flash" style="width:480px; height:295px;" data="http://www.youtube.com/v/n69yPQGcSJY&amp;hl=en&amp;fs=1&amp;rel=0">
<param name="movie" value="http://www.youtube.com/v/n69yPQGcSJY&amp;hl=en&amp;fs=1&amp;rel=0" />
</object>

Notice that I also added “amp;” after each & – in valid HTML, a bare & in an attribute value starts an entity reference, so query-string separators have to be written as &amp;.

My second tip is to do with blob lists inside blob lists, also referred to as nesting bullet points.

You might think that the correct way to code the following:

  • Adam

  • Eve

    • Cain

    • Abel

was like this:


<ul>
<li>Adam</li>
<li>Eve</li>
<ul>
<li>Cain</li>
<li>Abel</li>
</ul>
</ul>



But that is invalid. It is correctly written as:


<ul>
<li>Adam</li>
<li>Eve
<ul>
<li>Cain</li>
<li>Abel</li>
</ul></li>
</ul>


And while we’re talking about blob lists... you do know that you can control what type of blob you get? For example, code:
<ul type="square">

and you’ll get a square blob. You can also use “circle”. For ordered lists try lower or upper case Roman (“i” or “I”), and lower or upper case letters (“a” or “A”).
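One caveat, though: strictly speaking, the type attribute is deprecated in the Strict HTML and XHTML doctypes, so if full validation is your aim, the safer route is CSS. Here’s a minimal sketch (my own example):

<ul style="list-style-type: square;">
<li>Adam</li>
<li>Eve</li>
</ul>

For ordered lists, the CSS equivalents are list-style-type values such as lower-roman, upper-roman, lower-alpha, and upper-alpha.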

And if you need a new Web site designed and coded, or if you need your tired old one revamped, please contact me at trevor@itech-ed.com.

And don't forget to complete the Arcati Yearbook user survey at www.arcati.com/usersurvey10.

Sunday 25 October 2009

Back by popular demand - the Arcati Mainframe Yearbook 2010

Many of you will have received an e-mail informing you that Mark Lillycrop and I have started work on the 2010 edition of the Arcati Mainframe Yearbook. And if you haven't had an e-mail from me about it, then e-mail trevor@itech-ed.com and I will add you to our mailing list.

The Arcati Mainframe Yearbook has been the de facto reference work for IT professionals working with z/OS (and its forerunner) systems since 2005. It includes an annual user survey, an up-to-date directory of vendors and consultants, a media guide, a strategy section with papers on mainframe trends and directions, a glossary of terminology, and a technical specification section. Each year, the Yearbook is downloaded by around 15,000 mainframe professionals. The current issue is still available at www.arcati.com/newyearbook09.

At the moment, we are hoping that mainframers will be willing to complete the annual user survey, which is at www.arcati.com/usersurvey10. The more users who fill it in, the more accurate and therefore useful the survey report will be. Everyone who responds before 4th December will receive a PDF copy of the survey results on publication. The identity and company information of all respondents is treated in confidence and will not be divulged to third parties. If you go to user group meetings, or just hang out with mainframers from other sites, please pass on the word about the survey. We're hoping that this year's user survey will be the most comprehensive ever. Current estimates suggest that there are somewhere between 6,000 and 8,000 companies using mainframes, spread over 10,000 sites.

Anyone reading this who works for a vendor, consultant, or service provider, can ensure their company gets a free entry in the vendor directory section by completing the form at www.arcati.com/vendorentry. This form can also be used to amend last year's entry.

As in previous years, there is the opportunity for organizations to sponsor the Yearbook or take out a half page advertisement. Half-page adverts (5.5in x 8in max landscape) cost $600 (UK£350). Sponsors get a full-page advert (11in x 8in) in the Yearbook; inclusion of a corporate paper in the Mainframe Strategy section of the Yearbook; a logo/link on the Yearbook download page on the Arcati Web site; and a brief text ad in the Yearbook publicity e-mails sent to users. Price $1800 (UK£950).

The Arcati Mainframe Yearbook 2010 will be freely available for download early in January next year.

Sunday 18 October 2009

Mainframe futures

Whenever I start a piece on mainframe futures, I’m always reminded of poor old Stewart Alsop when he was editor-in-chief of InfoWorld. He was the man who famously announced in 1991 that the last mainframe in the world would be unplugged in 1996. Sorry Stewart, not even close!

I’m going to divide this look at mainframe futures into five areas – hardware, software, training, role, and attitude. And, of course, underlying this whole view is the assumption that mainframes will be with us for a number of years yet.

Looking at hardware, we can see that there is continual improvement in the speed and size of what’s available, alongside a reduction in footprint and environmental impact. We’ve had the introduction of the specialty engines – IFL for Linux, zAAP for Java (WebSphere), and zIIP for DB2 – and we’re seeing a growing take-up of these specialty engines. We’ve also heard about the z11 processor, which is anticipated to be with us in September 2011. Interestingly, in a back-to-the-future sort of way, at least some of the machines will be water-cooled. Improvements are coming all the time.

In terms of software, there have been a huge number of enhancements. CA, as part of its Web 2.0 strategy, enhanced most of its mainframe software line this year. And other companies are continuing to upgrade theirs. NEON Enterprise Software launched its controversial zPrime software. DataDirect has Version 7.2.1 of its Shadow suite. In terms of making the mainframe easier to use, particularly in the light of an ageing population of experts, many vendors, including IBM, are building autonomics into their software. This means that the software will try to identify potential problems and fix them. The other strategy used by vendors is to make using the mainframe more like using a Windows environment, which makes it more easily accessible to young programmers. And attracting young programmers is important for the organizations using mainframes as well as for the mainframe software vendors. Many people will now be familiar with using Eclipse. And remember that it’s estimated that over 60% of company data is held on a mainframe, and much of that is being accessed using COBOL programs. So software is continually evolving and getting better.

Both IBM and CA are taking steps to ensure that training is available at universities for youngsters. IBM has its Academic Initiative, which was introduced in 2004. This runs at universities in the USA, UK, and Europe. Similarly, CA is working with universities, starting in the Czech Republic, to provide mainframes they can use for specific training modules. These and other initiatives will ensure an on-going supply of qualified COBOL and Assembler programmers. Having young well-trained programmers ensures the future of mainframes.

So what is the role of the mainframe? Before you rush to answer that question, let me suggest that there is no simple answer. The mainframe has any number of roles in most organizations. It is still satisfying roles it acquired 20 or 30 years ago, and it is also gaining new ones. For example, SOA (Service-Oriented Architecture) is still growing in importance, allowing the mainframe to be a Web service consumer as well as a Web service provider to Internet-based users. There is also much talk about mainframes and their role in cloud computing. We’ve also recently seen a growth in the use of mainframes in Business Intelligence solutions – particularly with IBM’s recent acquisition of SPSS. So the mainframe’s role is constantly evolving and changing, but it’s always vitally important to the success of businesses that make use of mainframes, and could also be a useful tool for organizations that don’t use mainframes.

The last area I want to touch on is the public’s attitude towards mainframes. It is important that IBM and everyone else who believes in the mainframe helps convince the “Windows generation” that there are other choices – some of which, like the mainframe, are better alternatives. There’s a whole generation of IT guys who’ve never worked on a mainframe and who think it’s old-fashioned and not fit for today’s environment – probably the same people who rush out to buy Citrix to emulate some of the best characteristics of a mainframe, or the people who virtualize their servers thinking it is something new. We all need to get out there and raise people’s awareness. I’m not saying that a mainframe is the right environment for everyone, but I’m sure many mid-sized organizations are missing out on an opportunity because of the blinkered thinking of some of their IT people. Let’s help change that.

All-in-all, the mainframe still has a great future ahead of it. So much is going on to make it so. Long may it continue.

Monday 12 October 2009

IMS Version 11

Last week I mentioned IMS Version 11, which has been around in a pre-release version for nearly a year. Well, the good news is that it becomes generally available on 30 October 2009.

Here are some of the highlights.

Database Manager enhancements:
  • IMS Open Database support offers direct distributed TCP/IP access to IMS data, providing cost efficiency, enabling application growth, and improving resilience.
  • Broadened Java and XML support and tools can ease IMS development and access to IMS data.
  • IMS Fast Path Buffer Manager, Application Control Block library, and Local System Queue Area storage reduction utilize 64-bit storage to improve availability and overall system performance.
  • Enhanced commands and user exits simplify operations and improve availability.
Transaction Manager enhancements:
  • IMS Connect (the TCP/IP gateway to IMS transactions, operations, and now data) enhancements offer improved IMS flexibility, availability, resilience, and security.
  • Broadened Java and XML tooling eases IMS application development and connectivity, and enhances IMS Web services to assist developers with business transformation.
  • Enhanced commands and user exits simplify operations and improve availability.
  • IMS Application Control Block library and Local System Queue Area reduction utilize 64-bit storage to improve availability and system performance.
Also, last week saw the latest webinar from Virtual IMS Connection (www.virtualims.com), which was entitled “Mainframe integration is not a strategy – get your MIPS back while delivering value today”, and was presented by Rob Morris, Chief Strategy Officer with GT Software.

Rob suggested that the goals for integration were: fast, agile, flexible, adaptable, consistent, and justifiable. And Rob went on to pose the question of whether the goals for mainframe integration should be any different. He then suggested that the mainframe was different, saying:
  • Platform:
    – Cost of mainframe operations
    – Proprietary sub-systems and APIs
    – Limited resources.
  • It goes beyond combining the words… Mainframe Web Services.
  • “Free Tools” are not free:
    – MIPS costs
    – Simplistic design requires additional tools.

Dusty Rivers, who is described on the GT Software blog as an IMS SOA Evangelist, was also on hand to give the user group a rapid demonstration of GT Software’s Ivory product.

Some of the advantages they listed for Ivory are that:
  • Services are deployed instantly
  • Can be deployed to mainframe (CICS, started task, z/Linux) or off-mainframe (Windows or Linux)
  • Leverages zLinux and specialty engines to slash costs.
Rob also explained how users could save money because the work is moved from the GPP (General Purpose Processor) to the IFL specialty processor – which, as we’ve seen in these blogs, is the route being taken by a number of software vendors.

So, as I said last week, IMS is an interesting technology.

Sunday 4 October 2009

IMS – what’s new?


IMS – that’s Information Management System, IBM’s combined database-management system and transaction-processing system, not IP Multimedia Subsystem or anything else with the same three-letter acronym – is quite an exciting technology at the moment.

Apart from IBM releasing Version 11 into the wild and the useful upgrades incorporated into that, there have been lots of enhancements to IMS-related software recently.

For example, Mainstar announced a new product called Database Backup and Recovery for IMS (DBR for IMS) on z/OS, which maximizes investment in large system databases and storage systems. DBR for IMS is a storage-aware backup and recovery solution that integrates storage processor fast-replication facilities with IMS backup and recovery operations to allow instantaneous backups with no downtime, reduce recovery time, and simplify disaster recovery procedures, all while using less CPU, I/O, and storage resources. DBR for IMS provides backup and recovery techniques to address the high availability and integrity needed by organizations.

CA announced CA Database Management r12 for IMS, its integrated solution that eases the management of IMS databases. This solution provides database administration, performance management, and backup and recovery capabilities for IMS Full Function, Fast Path, and High Availability Large Database (HALDB) structures. Key enhancements to CA Database Management r12 for IMS include support for IMS 11, performance improvements to both the CA products and IMS itself, and increased data availability during backups.

Most recently, Progress DataDirect has announced Release 7.2.1 of its Progress DataDirect Shadow mainframe integration platform with an enhanced ANSI SQL-92 engine for relational to non-relational data processing utilizing the IBM System z Integrated Information Processor (zIIP). The latest release allows ANSI SQL-92 workloads for IMS DB databases and CICS VSAM files to be diverted from the mainframe's General Purpose Processor (GPP) to the zIIP specialty engine, which does the work without using any of the mainframe’s licensed MIPS capacity.

And if you think no-one is really interested in IMS, then you’re in for a bit of a shock! There are more IMS user groups around today than there were two years ago. One of those is the Virtual IMS Connection user group at www.virtualims.com. This group holds virtual meetings, allowing members to share their ideas and listen to presentations without leaving the office – and so save on travel time and the expense of travelling to a meeting.

The next meeting is on Tuesday 6 October at 10:30 Central Time, when Rob Morris, Chief Strategy Officer with GT Software, will give a presentation entitled “Mainframe integration is not a strategy – get your MIPS back while delivering value today”. The talk will discuss how you can integrate with the mainframe, project by project, without major licensing requirements or MIPS concerns.

And if IMS were outside mainstream computing, how come this talk has been covered in so many publications? You can find the story at: http://www.gtsoftware.com/events/virtual-ims-connection-featured-presentation-gt-software
http://www.businesswire.com/portal/site/statenewslines/?ndmViewId=news_view&newsId=20090922005338&newsLang=en
http://www.thefreelibrary.com/GT+Software%27s+Rob+Morris+to+Address+IMS+User+Group,+Discussing...-a0208207837
http://apache.sys-con.com/node/1116458
http://news.websitegear.com/view/138241
http://www.forbes.com/feeds/businesswire/2009/09/22/businesswire129325237.html
http://newsblaze.com/story/2009092207584400002.bw/topstory.html
http://www.pr-inside.com/gt-software-s-rob-morris-to-address-r1493341.htm
http://websphere.sys-con.com/node/1116458
http://linux.sys-con.com/node/1116458

So, IMS is an exciting technology. If you’d like to join the meeting, go to the Virtual IMS Connection (www.virtualims.com) Web site and sign up. Details of how to join the meeting will be e-mailed to you.

Sunday 27 September 2009

Is an iPhone a me-phone?

I find it strange that people can have discussions about politics and even religion without getting upset and almost coming to blows, but mention whether an upright vacuum cleaner is better than a cylinder vacuum, or whether a Mac is better than a PC, and not only do people come out of the woodwork to express their opinion, they hold those opinions as fundamental values about how the world works and their place in it! So I find I’m experiencing a certain trepidation about writing about the iPhone because I know everyone will want to share their views – including members of the flat-earth society, people who have been abducted by aliens, and (this is the largest group) people who’ve never used an iPhone. But anyway, here goes…

Do I want to buy an iPhone? What are the reasons for and against making this one purchase? And this is more than a mobile (cell) phone purchase – this is a lifestyle purchase. This is not a phone to ring home on and say you’ll be there in 20 minutes; this is a phone to pull out of your pocket at meetings and watch people’s reactions to the fact you have one. There’s the me-too types, who immediately expose their iPhones to public gaze. There’s the me-never types who have made a choice not to have an iPhone and want you to know that’s what they’ve done. They could have had an iPhone if they’d wanted – they say. And there’s the group that looks longingly, knowing that you are a special kind of person because you have bought an iPhone – or certainly, that’s what the Apple marketing team would like us to think!

So what can an iPhone do that makes it so special? And I feel safe asking this question in a blog rather than a room full of people because I’m not going to be stuck in the corner with someone who acts like this is a religious revival meeting. I have only one sentiment for these over-enthusiastic proselytizers – you didn’t invent the phone, you only bought it!

Anyway, to answer my own question, it makes phone calls, connects to the Internet, takes photos and, more recently, videos, and makes it very easy to put those photos on the Internet – YouTube, etc – and it plays music. Plus – and this is an incredibly important plus – it has gadgets, lots of gadgets. These are what make the iPhone so special. There’s a gadget that’s a compass (now how many times in a day have I needed to know which way is north). There’s a gadget that’s a spirit level (same comment), a gadget that’s a four-inch ruler (ditto). There are also gadgets for connecting to Facebook, for getting Sky news, for a map of the London underground system, for seeing how many people have swine flu near you, for seeing where you are flying over in an aeroplane, for identifying any tune that you can hear, for crushing bubblewrap, for recipes, etc etc. I love seeing a map with a dot showing exactly where I am. The gadgets are the things that turn the iPhone into something that you can use in every situation, whatever you’re doing.

Or does it? Well, no. For a long time, I have used my phone as a way of taking documents to meetings. PDFs, Word documents, and Excel spreadsheets are the lifeblood of meetings. I could take PowerPoint presentations, but they usually go on a memory stick that I plug into a laptop at the far end. But for minutes of meetings and letters, etc, I have gone paperless. I read the appropriate section of the document on my phone. I can’t do that with an iPhone. There are some Office apps in the App Store, but there isn’t a Microsoft Office app. And before you start talking about the war between Apple and Microsoft, let me remind you that Office first appeared on a Mac. The other big real-life problem is that iPhones only play MP4s. There’s a good reason for it, but my collection of TV programmes and films is stored as AVIs, which won’t play on the iPhone, whereas they do play on my current phone. So how can I amuse myself on long train journeys? I could jailbreak it and install something that would play AVI files, but I don’t feel I should have to do that.

And the other reason I don’t like it is iTunes! I thought we’d got away from nanny-knows-best software. I have to copy everything into iTunes before I can use it on my phone. Are we living in a fascist state? Can you imagine the anti-just-about-everything response you’d get if Microsoft tried to get away with that! I’m told that you grow to love it. Mmmh!

So, my primitive brain is sold on the fun part of owning an iPhone, but my intellectual brain knows that it’s not up to the job of being a business phone. I will wait until Version 4 comes out with all these gaps plugged, and then I will probably get one. As long as I can have it using the mobile network I want to use, and there’s a user-friendly alternative to iTunes.

Sunday 20 September 2009

Who said it could never happen?!

Our story starts back in March this year when Novell released SUSE Linux Enterprise (SLE) 11. Because of Novell’s alliance with Microsoft, this version supported the Mono runtime, which allows applications coded in C# and using the .Net Framework to run on non-Windows platforms without recompilation. Novell got into the open source Mono project by acquiring Ximian. What makes it particularly interesting is that SLE 11 runs on IBM mainframes.

Now let’s change the picture. Let’s pan across to Redmond where we see a rain-soaked figure scuttling out of the nocturnal storm into a brightly-lit building. Elsewhere in the same building, the much-heralded Windows 7 is being promoted ahead of its forthcoming release. Microsoft’s current version of Windows, Vista, was less than stellar in its success in terms of take-up by large organizations. So, we watch the marketing people deciding that a named version of Windows is not going to sell well, and an acronymed version – remember XP and ME – seems like a retrograde step, so they put all their marketing expertise together and decide to call it “7”. Remember that “7” is very lucky in Chinese culture. They smile.

The scene changes again. A slow dissolve to an IBM presentation, where much is being made of System z’s virtualization capabilities. How it has a long and proud history and is just head-and-shoulders above any other virtualization software on any other platform. A tracking shot shows heads nodding in agreement amongst the well-informed audience.

But now, trying hard to ignore the man behind the curtain, we find an unlikely group of friends who want to consolidate their hardware assets. They know the world and his wife use laptops for their daily computing needs, and they want the same virtualization benefits mainframers enjoy to be available to them. Can the wizard help?

An out-of-focus close-up zooms out to reveal Mantissa’s z/VOS. I mentioned this product about six months ago when it was announced. The software runs in z/VM and allows mainframers to run other operating systems under it, including Windows.

Let’s cut away to our eager smiling heroes, who now realise that z/VOS, once properly available, offers them a way to run Windows on a mainframe, and SLE 11’s Mono Extension gives them a way to run Windows .Net applications on a mainframe – although how easy that will be I’m not quite sure. It seems the big and little ends of computing have finally come together.

Fade to black.

And just to change the tone of things, here’s a haiku:

I Googled myself
and worryingly found that
again I’m not there.

Sunday 13 September 2009

Young mainframers

“Young mainframers” – now there’s two words you probably didn’t expect to see in the same sentence, unless you were reading something written more than twenty years ago! But this week, I met some of the new breed of young mainframe enthusiasts who are in their twenties.

CA – a company that needs no introduction, I’m sure – took a mixed bag of journalists and analysts to Prague this week to talk about a recent survey it had conducted and to introduce us to a scheme it has set up with universities to explain to youngsters what a mainframe is and why it is so important.

Let’s take a quick look at the survey first. It was conducted by Vanson Bourne, and surveyed organizations in six European countries. If you want to read the whole thing, it’s called “The Mainframe: Surviving and Thriving in a Turbulent World” and can be found at http://www.ca.com/Files/SupportingPieces/ca_mainframe_survey_report_208226.pdf.

They came to four conclusions:
1 Organisations using the mainframe as a fully connected resource within the distributed Web-enabled enterprise experience significantly greater benefits than those with a disconnected, comparatively isolated mainframe environment.
2 Where the mainframe is a fully-connected resource, 65% of all respondents reported it to be an ‘incredibly secure environment’; 63% stated that performance levels are ‘excellent’; and 52% said that ‘the system never goes down’.
3 The more the mainframe is part of an enterprise-wide technology strategy, the greater the role it plays and the greater its level of utilization: the average amount of business-critical data administered by the mainframe among all ‘connected’ respondents is 64%.
4 66% agreed that the mainframe user will soon start to suffer from a shrinking workforce if the relevant skills are not available. However, 52% agreed that a Web-enabled GUI that less-experienced users could easily master would make the mainframe more attractive and help to close the skills gap.

So clearly, Web-connected mainframes are a positive business strategy for organizations. The big problem that many face is a skills issue. All those youngsters who got into mainframe computing in the seventies and eighties are getting on a bit. They may have vast amounts of experience, but many are more concerned about their retirement than anything else! IBM has an academic initiative to ensure youngsters realize that there’s more to computing than Java. It, along with other software vendors, has introduced autonomic – self-repairing – software, and has made the interface to its software much easier to use. Excitingly, CA has also thrown its great weight into the battle for the hearts and minds of youngsters.

CA now has links with universities, giving them access to software and hardware, which the students can use for parts of their degree, masters, or PhD studies. There are then jobs available for suitable students. And suitability doesn’t mean any great knowledge of mainframes, but a willingness to learn how they work. CA then runs internal training to get these youngsters – who come from all over the world, not just the Czech Republic – up to speed. They also use a mentoring system where, shall we say, more mature mainframe software experts can pass on their knowledge to the youngsters. CA has also simplified the user interface to its software. I have spoken to the next generation of mainframers, and it’s clear that they are determined, enthusiastic, and very bright. Sites running mainframes can feel more relaxed about where their future software is coming from.

With IBM and CA working so successfully with younger people, it would be interesting to see what other large software houses, perhaps BMC and Progress Software, are doing.

Sunday 6 September 2009

Exploitation – good or bad?

If I read about the exploits of James Bond or Batman, or some other fictional hero, then I am usually amused and entertained by what I read. If I hear about the exploits of a politician, I am, perhaps, less enthralled – wondering what devious deeds have occurred. So the noun “exploits” carries a mixed message. But what about the word “exploitation”?

Wikipedia (http://en.wikipedia.org/wiki/Exploitation) informs us that the term “exploitation” has two distinct meanings:
1 The act of using something for any purpose. In this case, exploit is a synonym for use.
2 The act of using something in an unjust or cruel manner.

So if the word has two meanings, what sort of problems are we, the mainframing public, going to have when we read two different opinions about a piece of software that exploits a piece of hardware? Is this a good thing or a bad thing?

Yet again, I’m talking about that software bombshell called zPrime from NEON Enterprise Software. Those of you who know exactly what the software does, look away now! For everyone else, here’s a very brief overview. IBM builds mainframes and charges users by how much processing they do using the General Purpose Processor (GPP). In addition, IBM has specialty processors, which can be used for Linux, DB2, and Java. These are paid for, but then users save each month because they are not processing these applications using the GPPs. So, IBM gets its regular income from CICS, IMS, batch, TSO/ISPF, etc, which do use GPPs. But what if you could run CICS, etc, on a specialty processor? Wouldn’t that save lots of dosh each month? That’s what NEON must have thought, because that’s exactly what their zPrime software allows to happen. Ordinary mainframers save money – even after paying NEON for the software – but IBM loses anticipated revenue. Mainframes become more affordable, but still IBM loses revenue. What happens next?
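To put some entirely made-up numbers on it: if a site’s monthly software charges are driven by, say, 1,000 MIPS of GPP usage, and zPrime lets 400 MIPS of CICS and IMS work run on a specialty processor instead, then the charges that track GPP capacity are calculated on something nearer 600 MIPS. The actual saving depends on each site’s contracts and workload mix, of course, but that’s the shape of the argument.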

I think that IBM is in a very difficult position with this. Obviously it can do some sabre rattling about breaking licence agreements with current customers, and try to maintain the flow of revenue each month, but, if the price of running a mainframe were significantly lower, wouldn’t that attract more people to buy mainframes? Those mid-sized companies that are wrestling with virtualizing their servers and solving problems that mainframers take for granted as standard practice might well view a mainframe as a very competitively-priced opportunity. IBM must look to the future and see this as an opportunity. And I wonder whether this is contributing to its kind of half-hearted response.

Admittedly, IBM has other considerations. For example, the more it blusters against zPrime, the more oxygen of publicity it gives it – and, as a consequence, the more sites are likely to try it. Also, if IBM were to somehow ban the use of zPrime on its computers, that could lead to a lengthy court case. So, at the moment, IBM is simply telling customers to check their contracts to ensure they’re not breaking them by using zPrime. This seems little more than a non-specific scare tactic. After all, no existing contracts were written with anything like zPrime in mind. People signing new contracts might want their legal teams to check whether there’s anything in them about zPrime, but I haven’t heard of such clauses being included.

If you want my advice, and you didn’t ask, I’d get on the phone to NEON and get them to try it at my site. For NEON, exploiting specialty processors is a good thing. For IBM it isn’t. That’s their two different views of exploitation of specialty processors.

As a tailpiece of advice – while I’m in the mood – if I ran IBM, I’d be looking to buy NEON about now.

Monday 31 August 2009

Trust, bad debts, and the economy as a whole

My whole business strategy is based on trust. Software and hardware vendors trust that when I sign a non-disclosure agreement, I won’t reveal their secrets ahead of the scheduled launch day. Companies that send me review units of hardware and software trust that I will actually review the unit and publish the review. And if organizations ask me to write something for them, I trust that they will pay me for it.

And this usually works very well. I’ve never published an article about a product that hasn’t been formally announced. And I’ve always written and published (somewhere) reviews of hardware and software I’ve been sent.

It’s the other side of the trust equation that has, from time to time, been a problem in varying degrees. For example, one company that advertised on the Virtual IMS Connection Web site (www.virtualims.com), which my company looks after, was sometimes a bit slow to pay its monthly invoice.

Technical Support magazine got further and further behind with payments for articles a few years ago, until it eventually stopped publishing. The good news – for me and the other contributors – is that it later paid all its outstanding debts. So well done to them: our trust was eventually rewarded.

On the whole, organizations that have asked me to write articles or internal documentation for them have been very good at paying at the agreed time. Similarly with the Arcati Mainframe Yearbook (www.arcati.com), sponsors and advertisers have been generally good at paying on time. Our trust relationship worked. But being a smaller company, iTech-Ed (www.itech-ed.com) needs to receive payments when promised because it has commitments to other companies that have got to be met. If company A withholds a payment to company B, then company B either has to withhold payments to company C or incur bank charges for a temporary overdraft. And finally, somewhere along the line in these days of credit crunch, someone is going to owe the bank so much that they’ll cease trading altogether.

I wrote a short article for a company a few years ago. Shortly after I sent the article, the company disappeared! Its Web presence was gone and no e-mails were ever replied to. It was the first (and last) time that I ever wrote for that organization and I lost an afternoon’s work. The trust relationship was completely broken.

More worrying for me is when an article I have written at an agreed price has been published on the Internet, but the commissioning organization doesn’t pay at the agreed time and then stops replying to e-mails! I’m perfectly happy for my blogs or other articles to be quoted by other bloggers or article writers. That’s great – and it happens all the time. What irritates me is companies that agree a price, commission an article, publish it, and then don’t pay. No matter how small the amount, it’s cash flow that keeps smaller companies – and the economy – going, along with that thing called trust.

I wrote an article entitled “Which browser is best for me” for Sift Media back in April. It’s now September and they still haven’t paid up. Perhaps you’re thinking that they didn’t publish it. Well, you can find it at http://www.accountingweb.co.uk/item/196884 and you can find a reference to it at http://www.simplyraydeen.com/faq/96-browsers/131-which-is-the-best-browser-for-me. It’s also apparently mentioned at http://www.infotechaccountants.com/forums/showthread.php?t=18036 and http://britanniaradio.blogspot.com/2009/04/editors-note-looking-at-budget.html and http://phentermineonline.to.pl/news/Which-is-the-best-browser-for-me,128756.html.

All of which suggests that Accounting Web, which is owned by Sift Media, got good mileage out of my work.

The point I want to make – to large and small companies alike – is: pay your bills on time. Putting someone on 90 days or more before you pay puts the whole economy at risk, simply because small amounts of money moving rapidly from one organization to another lead to larger amounts changing hands, and soon the economy is back on its feet and we are all benefiting. One slow payer or one bad debt puts a spanner in the works for everyone! And, of course, it breaks any trust that exists between the two companies.

Monday 24 August 2009

CA Eclipsed

According to Wikipedia (http://en.wikipedia.org/wiki/Eclipse_(software)), Eclipse is a multi-language software development environment comprising an IDE and a plug-in system to extend it. It is written primarily in Java and can be used to develop applications in Java and, by means of the various plug-ins, in other languages as well, including C, C++, COBOL, Python, Perl, and PHP.

Eclipse started life as an IBM Canada project. It was developed by Object Technology International (OTI) as a Java-based replacement for the Smalltalk-based VisualAge family of IDE products. The Eclipse Foundation was created in January 2004. Lee Nackman, Chief Technology Officer of IBM’s Rational division, has said that the name “Eclipse” was chosen to target Microsoft’s Visual Studio product.

Eclipse was originally meant for Java developers, but through the use of plug-ins to the small run-time kernel, other languages can be supported. Eclipse also comes with its own widget toolkit, SWT (the Standard Widget Toolkit).
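
To give a flavour of how that plug-in mechanism works, here’s a minimal sketch – in Java – of a view contributed to the Eclipse workbench. It uses the standard org.eclipse.ui and SWT APIs, but the package and class names are my own invention, and a real plug-in would also need a manifest declaring this class against the org.eclipse.ui.views extension point.

   // Minimal Eclipse view plug-in sketch (illustrative names only).
   // A real plug-in also declares this class against the
   // org.eclipse.ui.views extension point in its plugin.xml.
   package com.example.helloview;

   import org.eclipse.swt.SWT;
   import org.eclipse.swt.widgets.Composite;
   import org.eclipse.swt.widgets.Label;
   import org.eclipse.ui.part.ViewPart;

   public class HelloView extends ViewPart {
       public void createPartControl(Composite parent) {
           // The workbench calls this when it builds the view
           new Label(parent, SWT.NONE).setText("Hello from a plug-in");
       }

       public void setFocus() {
           // Nothing in this trivial view needs the focus
       }
   }

The point is that the kernel itself knows nothing about views, editors, or languages – everything, including the Java tooling, arrives as a plug-in of this sort.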

And now CA has got in on the act. CA InterTest Batch and CA InterTest for CICS now feature a Graphical User Interface based on the Eclipse Platform, which, CA claims, makes it easier for new and experienced mainframers to execute core testing and debugging tasks. The press release adds that these tasks have historically been time-consuming phases of the mainframe application development and deployment life-cycle.

What CA is claiming is that the new CA InterTest GUI helps developers re-use and re-purpose existing mainframe application code in order to further improve productivity and support Service Oriented Architecture implementations. It says that by plugging CA InterTest tools into their larger Eclipse-based integrated development environments, customers can more easily and seamlessly debug end-to-end composite applications that include mainframe, distributed, Web, and/or mobile components.

IBM has been making a big thing of Eclipse for a long time – well, it would, I suppose, as it had a hand in Eclipse’s development. Its Rational mainframe tools integrate with Eclipse.

Also, Compuware announced a new version of its analysis and debugging tool last week: Xpediter/Eclipse 2.0. The company said that Xpediter helps the next generation of developers analyse applications and quickly understand the business processes and data flows in those applications, avoiding an unnecessarily steep learning curve. Xpediter/Eclipse 2.0 also helps these new developers become productive more quickly by moving away from the traditional “green screen” interface and providing the modernized point-and-click environment to which they are accustomed.

This announcement more clearly points to the thinking behind these product updates – and that is that mainframers are getting old, so in order to keep the machines functioning there needs to be a way for younger people to become productive very quickly without learning the arcane ways of the machine – and Eclipse provides such an environment for them to work in. Watch out for more Eclipse-related announcements.

Sunday 16 August 2009

IMS Open Database

The latest webinar from Virtual IMS CONNECTION (www.virtualims.com) was entitled “IMS Open DB functionality in IMS V11”, and was presented by Kevin Hite, an IMS lead tester with IBM. Kevin is a software engineer, originally from Rochester, NY, who has worked at IBM on both WebSphere for z/OS and IMS. He is the test lead for IMS V11 Open Database and is the team lead for a new test area, IMS Solution Test, which is responsible for integrating new IMS function into customer-like applications in a customer-like environment.

IMS Open Database is new in V11, and Kevin informed the user group that it offers scalable, distributed, and high-speed local access to IMS database resources. It supports business growth, giving sites more flexibility in accessing IMS data to meet growth challenges, while at the same time allowing IMS databases to be processed as a standards-based data server.

What makes IMS Open Database different is its standards-based approach, using Java Connector Architecture 1.5 (Java EE), JDBC, SQL, and DRDA. It enables new application design frameworks and patterns.

One particular highlight Kevin identified in the new solution was its three universal drivers:

  • Universal DB resource adapter
    – JCA 1.5, which provides: XA transaction support and local transaction support; connection pooling; connection sharing; and the availability of multiple programming models (JDBC, CCI with SQL interactions, and CCI with DLI interactions).
  • Universal JDBC driver
  • Universal DLI driver.
For distributed access:
  • All Universal drivers support type 4 connectivity to IMS databases from TCP/IP-enabled platforms and runtimes, including:
    – Windows
    – zLinux
    – z/OS
    – WebSphere Application Server
    – Stand-alone Java SE
  • Resource Recovery Services (RRS) is not required if applications do not require distributed two-phase commit.
For local connectivity, Kevin informed us that the Universal driver support for type 2 connectivity to IMS databases from z/OS runtimes includes WebSphere Application Server for z/OS, IMS Java dependent regions, CICS z/OS, and DB2 z/OS stored procedures.

The two Universal drivers for JDBC – the IMS Universal DB Resource Adapter and the IMS Universal JDBC Driver – offer a greatly-enhanced JDBC implementation (there’s a small connection sketch after the list below), including:

  • JDBC 3.0
  • Local commit/rollback support
  • Standard SQL implementation for the SQL subset supported
    – Keys of parent segments are included in the table as foreign keys, which allows a standard SQL implementation
  • Updatable result sets
  • Metadata discovery API implementation
    – Uses metadata generated by DLIModel Utility as “catalog data”
    – Enables JDBC tooling to work with IMS DBs just as it does with DB2 DBs.
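
To make that a little more concrete, here’s a hypothetical sketch of what type 4 (distributed) access through the Universal JDBC driver might look like. Be warned: the driver class name, URL syntax, host, port, metadata class, and table/column names are all my own illustrative assumptions rather than details from Kevin’s presentation, so check the IMS V11 documentation before trying anything like this at your site.

   // Hypothetical type 4 (distributed) access via the IMS Universal
   // JDBC driver. Driver class, URL, credentials, and names are all
   // placeholders for illustration.
   import java.sql.Connection;
   import java.sql.DriverManager;
   import java.sql.ResultSet;
   import java.sql.Statement;

   public class ImsOpenDbSketch {
       public static void main(String[] args) throws Exception {
           Class.forName("com.ibm.ims.jdbc.IMSDriver"); // assumed class name
           // The DLIModel-generated metadata class acts as the "catalog"
           String url = "jdbc:ims://imshost.example.com:5555/"
                      + "class://com.example.metadata.PhoneBookDatabaseView";
           Connection conn = DriverManager.getConnection(url, "user", "password");
           try {
               Statement stmt = conn.createStatement();
               // Because parent-segment keys surface as foreign keys, the
               // supported SQL subset behaves like ordinary relational SQL
               ResultSet rs = stmt.executeQuery(
                       "SELECT LASTNAME, EXTENSION FROM PHONEBOOK");
               while (rs.next()) {
                   System.out.println(rs.getString(1) + " " + rs.getString(2));
               }
           } finally {
               conn.close();
           }
       }
   }

The attraction, of course, is that this is plain JDBC – a Java programmer needs to know nothing about DL/I calls to read IMS data this way.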
This is just a small part of a very interesting presentation and gives little more than a flavour of what IMS professionals can expect from IMS Open Database in V11 of IMS.

Sunday 9 August 2009

How old is old?

Picking up on my blog of a couple of weeks ago about COBOL reaching 50 this year, I thought it might be interesting to see just how old some of the technology we know and love actually is.

For example, CICS – the Customer Information Control System – has been around since 1969. Although we tend to associate CICS with IBM Hursley these days, it was originally developed at Des Plaines, Illinois, in the USA, and was called PU-CICS, with the PU bit standing for Public Utility. In the early 1970s, development was at Palo Alto, but it moved to Hursley in 1974.

IMS – Information Management System – is even older, having first appeared in 1968. IMS was developed for the space race and contributed to the success of the Apollo program. It’s said – though no-one outside IBM knows for sure – that IMS is IBM’s highest-revenue software product. If you’re not already aware, I organize the Virtual IMS Connection user group at www.virtualims.com. It’s free to join and you get six free Webinars a year and six free user group newsletters. But I digress.

Batch processing goes right back to the very early days of computing in the 1950s.

TSO, or Time Sharing Option, first appeared in 1971. Originally – and there’s a clue in its name – it was an optional extra on OS/MVT (Operating System/Multiprogramming with a Variable number of Tasks), a precursor to MVS. TSO became a standard feature with the release of MVS in 1974. ISPF (Interactive System Productivity Facility), which is associated with TSO, didn’t appear under that name until the 1980s.

DB2 – database 2 – first appeared in 1983. DB2 is a relational database, and as well as on mainframes, turns up on PCs and other IBM platforms. It competes with Oracle and Microsoft’s SQL Server products on these other platforms. Oracle appeared in 1979.

Mainframes themselves were developed during the 1950s.

The World Wide Web is generally taken to have come into existence in 1991 – thanks to the work of Tim Berners-Lee.

IBM came into existence in 1924, when the Computing-Tabulating-Recording Company (CTR) changed its name to IBM. It had been trading as IBM in Canada since 1917.

Microsoft was founded in 1975 – but enough about them.

Citrix was founded in 1989 by Ed Iacobucci and others who’d worked on the ill-fated OS/2 project at IBM.

COBOL’s 50. Java was first released by Sun Microsystems in 1995; based on the work of James Gosling, it was initially called Oak. The international standard for C++ arrived in 1998.

It’s interesting looking back and realizing just what a significant effect these golden oldie technologies have had, and how they will continue to thrive into the foreseeable future.

Sunday 2 August 2009

zPrime rattles a few cages

I seem to be spending the summer talking about zIIP and zAAP (the System z Integrated Information Processor and the System z Application Assist Processor). And a couple of weeks ago I was enthusing about NEON Enterprise Software’s new zPrime product and how users should get it and save money before IBM changed the rules.

And I’m still inclined to think that way; it’s just that IBM has responded to the announcement much faster than I imagined.

For people who’ve been living off-planet: IBM charges users by the amount of General Purpose Processor (GPP) capacity they use, while also making specialty processors available for things like Linux and DB2. Doing your processing on a specialty processor saves money because you’re not using the chargeable GPPs – and, in real life, it can also put off the need for an expensive upgrade. Into this situation comes the zPrime bombshell. Using its new software, NEON reckons that 50% of workloads can run on specialty processors – not just DB2, but IMS, CICS, TSO/ISPF, batch, whatever.

Not surprisingly, at the thought of seeing its potential revenue cut in half, IBM has taken a dim view of the announcement. In a recent customer letter, IBM’s Mark S Anzani, VP and Chief Technology Officer for System z, cautions customers about the zPrime product. Apparently, customers with questions about IBM’s position on zPrime can contact Mark at anzani@us.ibm.com.

The customer letter contains the following paragraph:
“In general, any product which is designed to cause additional workloads, not designated by IBM or other SW providers as eligible to run on the Specialty Engines, to nevertheless to be routed to a Specialty Engine should be evaluated to determine whether installation and use of such a product would violate, among other things, the IBM Customer Agreement (for instance, Section 4 regarding authorized use of IBM program products such as z/OS) and/or the license governing use of the IBM “Licensed Internal Code” (frequently referred to as “LIC”) running on IBM System z servers, or license agreements with any third party software providers.”

NEON sent out a press release on 16 July saying: “NEON Enterprise Software is responding to a massive wave of interest over a newly-released software product called NEON zPrime that saves mainframe users millions of dollars in IT costs by realizing the full potential of IBM System z specialty processors.”

And how do other software vendors feel about this? CA – probably the biggest apart from IBM – has ignored the announcement; the latest e-mail I have says only that Chris O’Malley, executive vice president and general manager of CA’s Mainframe Business Unit, will deliver a keynote address to SHARE in Denver. As usual, there’s no word from BMC. In fact, it was a PR company working for DataDirect that drew my attention to Gregg Willhoit’s blog (http://blogs.datadirect.com/2009/07/ibm-cautions-customers-about-neon-enterprise-softwares-zprime-product.html) and the IBM customer letter (http://blogs.datadirect.com/media/IBM%20position%20document.pdf) on the DataDirect site.

It will be interesting to see how many customers install zPrime and what happens next.

Sunday 26 July 2009

COBOL on the mainframe

It was 50 years ago today... Sgt Pepper taught a band to play – so might go the lyrics to Sgt Pepper’s Lonely Hearts Club Band, the title track for the Beatles’ eighth album and first concept album, which was released on 1 June 1967. And it was 50 years ago (although not to the day) that Grace Hopper gave the world COBOL – COmmon Business-Oriented Language.

It seems that a committee comprising William Selden and Gertrude Tierney from IBM, Howard Bromberg and Howard Discount from RCA, and Vernon Reeves and Jean E Sammet from Sylvania Electric Products completed the specifications for COBOL in December 1959. So where does Grace Hopper fit in? Well, the specifications were greatly inspired by the FLOW-MATIC language invented by Grace Hopper, along with a couple of other languages. The name COBOL was agreed by the committee on 18 September 1959.

COBOL programs couldn’t run until compilers had been built, so it wasn’t until December 1960 that what was essentially the same COBOL program ran on two different makes of computer – an RCA machine and a Remington-Rand Univac machine.

Not surprisingly, there have been a number of developments in COBOL over the 50 years. The American National Standards Institute (ANSI) developed a standard form of the language in 1968 (known as American National Standard (ANS) COBOL). There were revised versions in 1974 and 1985, and the most recent revision came in 2002.

Still, after 50 years, there can’t be many people using it – after all, the computer industry is high tech, not old tech! This view, which I heard quite forcibly expressed the other day, is simply not true. Figures quoted – and I’m never sure how anyone could accurately know this, but it seems about right – suggest that there are more than 220 billion lines of COBOL in use – arguably 80% of the world’s actively used code. It’s also been suggested that there are 200 times as many COBOL transactions each day as there are Google searches!

Now, a lot of this code is on mainframes, and companies like Micro Focus are keen to get mainframers onto other platforms. One difficulty with this for mainframers is what to do with their COBOL programs. To help, Micro Focus last week announced Reuze, a tool for migrating business processes from the mainframe to Windows, without having to rewrite applications.

In most cases, Reuze allows the COBOL programs to remain unchanged, eliminating the need to rewrite the source code for SQL Server or to remove mainframe syntax, says the company. It supports 64-bit Windows architectures and the .NET Framework.

The product has two components: Developer, a client-based graphical tool for migrating applications to Windows; and Server, the deployment environment for the migrated applications. Developer includes an integrated development environment based on Microsoft Visual Studio, and allows for cross-team collaboration.

I’m sure smaller mainframe sites will find this of interest; larger ones, perhaps less so. But whatever size machine you run your COBOL on, say happy birthday to it!