Sunday 30 March 2014

Tell me about this Yammer thing

I’ve been to a few companies recently that have been using Yammer as a business tool. If you’ve got offices that are spread out, or if your workforce aren’t usually in the office, it provides an easy way for people to share things – comments, documents, images, and so on. And you can form groups, so discussions that are only relevant to a small group of people stay within that group or team.

Yammer started life in 2008 and was bought by Microsoft in 2012. It’s described as an enterprise social network. That means it’s not a public social network like Facebook, it’s for internal communication between members of an organization or group.

It’s free, it’s very easy to use (if you’ve ever used Facebook), and it provides a private and secure place for discussion. The simplest way to use Yammer is from your browser (Explorer, Firefox, Chrome, etc), and you can download the app for your smartphone or tablet.

It’s easy to set up and use, but I thought I’d put together some instructions for new users, so they know how to get on and start using it.

To sign up, go to www.yammer.com. You’ll see a large sign-up box in the middle of the page.

Type in your company e-mail address – you can’t use a personal e-mail address, because it’s your company’s e-mail domain that connects you to your organization’s Yammer network.

Complete your Yammer profile and add a photo. New people in your organization may not be familiar with who you are and your particular skill set.

You can join groups and follow topics that are relevant to you. If Yammer gets very busy with people posting, you won’t want to be informed every time there’s a new post. To change how often you’re notified:
  • Click on the three dots in the upper right-hand corner and select ‘Edit Profile’ from the drop-down menu.
  • Select ‘Notifications’ from the list on the left.
  • Choose how often you want to receive notifications, and ‘Save’ your choice.
  • Use the ‘Back Home’ box at the top-left to get back to your feed.

You can also follow other people – that way you get to see what they’re posting.

When you come to use Yammer on subsequent occasions, you simply click on ‘Log In’ on the right of the top menu bar.

Now you can start to use Yammer.

You can post messages – these can be comments, questions, or updates. You can post links to articles or blogs elsewhere on the Web.

You can follow people, which means that you want to view messages from them in ‘My Feed’. It’s not like a friend request. They don’t have to agree. They don’t have to follow you back.

You can read what other people are posting and get a feel of what’s going on across the organization.

You can ‘Like’ other people’s posts.

You can find out more about people in your organization by reading their profile.

You can start your own group or join existing groups.

You can upload pictures. You can organize events and meetings. And you can create polls to find out what people think about things.

You can use topics to gather posts around a specific subject. To add a topic to a post, click ‘Add Topic’ while writing the message, or use a hashtag. You can also add topics to a message that’s already been published by clicking ‘More’. Hashtags (#) identify what posts are about and make information easier to find.

You can search for information using the search box near the top of the page. This will show you whether anyone else has already posted about a particular topic.

And you can send a direct message in three ways. The first is to use the @ sign followed by the user’s name – as you start to type the name, a drop-down menu will give you suggestions. The second is to send a private message:
  • Click ‘Inbox’ in the left column.
  • Click ‘Create Message’ on the right sidebar.
  • Select ‘Send Private Message’.
  • In the ‘Add Participants’ field, start to type the person’s user name. A drop-down list of matching user names appears.
  • Select the name of the person you want to send the message to.
  • Write your message, and then ‘Send’.
And the third is to send a message through ‘Online Now’:
  • Click ‘Online Now’ in the bottom-right corner.
  • Start writing the person’s name. A drop-down list of matching user names appears.
  • Use the up and down arrows, and ‘Enter’ to select a name. A message box opens.
  • Write your message, and then ‘Send’.
Recipients are notified that they have a message.

Unbelievably, Yammer refers to all communications inside Yammer as “Yams”. Yams are sorted into various feeds. A feed, if you’re new to social media, is a way of keeping you up-to-date with content that other people are posting.

I think many organizations would benefit from an internal social media tool. There are alternatives to Yammer available, but I think it can be very useful within an organization to help with communication. And it can be fun!

Sunday 23 March 2014

What is Software Defined Anything?

If you’ve sat through a training seminar recently, you’ve probably seen a slide talking about software-defined anything or software-defined everything. Or you may have seen the acronym SDx and wondered what it is and where it’s come from. So let’s have a look at what they’re talking about.

Basically, what we’re looking at is using software to control different kinds of hardware, and then making that software able to control multi-component hardware systems. With the growth of the Internet of Things (IoT), it makes sense to start thinking about rules, implemented in software, that can be used to control a myriad of different types of devices.

At the moment there are a number of areas using software-defined technology. For example, there’s Software-Defined Storage (SDS), which seems to apply to all sorts of storage software, particularly virtualization software – different vendors use the term loosely for different things. Software-Defined Networking (SDN) is where network devices are programmable, so networks themselves become more dynamic. Again, it’s a term that’s used by different vendors for different things.

Software-Defined Storage Networks (SDSN) is an attempt to virtualize storage networks by separating the actual physical network from its controlling software. A Software-Defined Hypervisor (SDH) seems to refer to virtualizing the hypervisor layer and separating it from its management console. And finally, there’s Software-Defined Infrastructure (SDI) aka Software-Defined Data Centre (SDDC), which is an aspirational concept where data centre services are controlled by policy-driven software.

Two things probably leap to mind about now. Firstly, this seems a lot like marketecture! We’ve seen this before, where vendors are really selling us an idea of something rather than a tangible reality. We are very much in the early days of this sort of thing. Secondly, this is not directly linked to mainframes. These are VMware’s ideas – and those of a huge number of other companies.

Having said that, of course, the newer hybrid mainframes from IBM will be able to make use of this technology as it becomes available in reality. Gartner reckons that SDx is one of the major disruptive technologies to watch. It makes it easier to scale up an existing architecture, and even to try out different architectures, and it makes it possible to tune networks, matching network performance to workloads. And, of course, the main selling points are flexibility, agility, security, and price.

IBM’s Smarter Computing blog has an interesting post by Shamin Hossain called “Software defined everything: When a data center becomes soft”, which can be found at http://www.smartercomputingblog.com/software-defined-environment-2/software-defined-everything/.

Clearly the prefix ‘software-defined’ is one that we’re going to hear a lot more about this year.

Sunday 16 March 2014

Happy birthday WWW

The World Wide Web celebrated 25 years on 12 March – although that’s really 25 years since conception rather than since birth. It was on 12 March 1989 that Sir Tim Berners-Lee first put forward his proposal for what became the World Wide Web.

The 34-year-old software engineer at the CERN physics lab in Geneva wrote a paper called “Information Management: A Proposal”. The driving force was the need not only to communicate with colleagues, but also to keep in contact with the many scientists who had worked at CERN and were now working elsewhere.

It soon became clear that the idea could be extended beyond CERN, and in 1990, working with Robert Cailliau, Berners-Lee put forward a more formal proposal based on hypertext. The first Web site was created that year. Their thinking at the time was that there would be a web (the WorldWideWeb) of hypertext documents, and people would view them using a browser.

Steve Jobs is strangely linked to this story, and that’s because the first server was a NeXT Computer. These workstations were built by Jobs and his team, which included other ex-Macintosh staff. There was actually a note on the computer telling people not to turn it off. By late 1991, people outside CERN could access the Web as a service available on the Internet. By 1992, there was a server outside of Europe. It was set up in Palo Alto at the Stanford Linear Accelerator Center.

What Berners-Lee did that was special was to combine hypertext and the Internet. He also developed three technologies that we take for granted nowadays. They are: Hypertext Transfer Protocol (HTTP); Hypertext Mark-up Language (HTML); and unique Web addresses – URLs (Uniform Resource Locators).
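
To make those three concrete, here’s a sketch of what happens when you follow a link (the address and page are invented for illustration). A URL such as http://example.com/index.html tells the browser which server to contact and which document to ask for. The browser then sends an HTTP request along these lines:

    GET /index.html HTTP/1.1
    Host: example.com

and the server replies with an HTML document – ordinary text with mark-up that the browser knows how to display, including links to yet more pages:

    <html>
      <head><title>A minimal page</title></head>
      <body>
        <p>Hello. Here is a <a href="http://example.com/next.html">hypertext link</a>.</p>
      </body>
    </html>

That combination – an address, a request, and a marked-up document – is essentially what still happens every time you click a link.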

In 1993 the Web browser called Mosaic was released. This had a graphical user interface and made browsing the Web easy and quick. The World Wide Web Consortium (W3C) was founded in 1994. The rest, as they say, is history.

Using the 25th birthday as a springboard, Sir Tim Berners-Lee has called for a bill of rights to protect freedom of speech on the Internet and users’ rights following leaks about government surveillance of online activity. Berners-Lee has said that there is a need for a charter like the Magna Carta to help guarantee fundamental principles.

Edward Snowden’s leaking of so many documents revealing (or confirming) that governments all over the world are monitoring Internet activity (as well as phones) has brought Web privacy to the attention of the public. It seems that the NSA has been collecting personal data about Google, Facebook, and Skype users.

And now we can use the Web on our tablets and smartphones. We can buy just about anything from Amazon. We can look up everything on Google, find out the details on Wikipedia, keep up with our friends on Facebook, watch videos on YouTube, shop, bank, and choose the best deals for insurance. We can send e-mails, and tweet about what we’re doing and who we’re with. We can also apply for a job, listen to music, read a book, and browse photos. In fact, we can do just about anything it’s possible to do.

It’s true that there is a dark side to the Web. People can find out a lot of information about you from a quick online search, and organizations, many of them government agencies, can find out even more about your browsing habits and so on. But on the whole, for most people, getting online is a natural part of the day – and a very enjoyable part of it. So well done TimBL, great idea. And happy birthday to the World Wide Web.

Sunday 9 March 2014

REXX - the wonder program!

REXX (also written Rexx) has been around since the early 80s. I first came across it in the mid-80s, when sites were beginning to use it with VM and CMS as a powerful replacement for EXECs. I thought it would be interesting to take a look at this venerable, but still powerful, tool in the system programmer’s toolkit.

Amazingly, REXX wasn’t written as an IBM project; it was written by IBMer Mike Cowlishaw, in his own time, in Assembler. It was designed as a macro or scripting language and was based on PL/I. It first saw the light of day at SHARE in 1981, and became a full IBM product in 1982.
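
To give a flavour of the language, here’s a minimal example (entirely made up, just to show the style) – no declarations, no data types, no compilation step, just readable script:

    /* REXX - greet someone a few times */
    say 'Who shall I greet?'
    pull name                    /* read a line of input (REXX uppercases it) */
    do i = 1 to 3
       say 'Hello' name', this is line' i
    end

The comment on the first line matters on some systems – TSO, for example, looks for the word REXX there to tell a Rexx exec apart from an older CLIST – and the absence of declarations and data types is very much in the readable, PL/I-influenced spirit that Cowlishaw was aiming for.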

IBM has implemented REXX on VM, MVS, VSE, CICS, AIX, and AS/400s, and on subsequent products. There are even versions available for Java, Windows, and Linux. A compiled version for CMS became available in 1987 and was written by Lundin and Woodruff.

REXX was so popular that versions were developed for other platforms, including Unix, Solaris, Apple Mac OS X, OpenVMS, and lots of others. There’s also NetRexx, which compiles to Java byte code, and ObjectREXX, which is an object-oriented version of REXX.

In 2005, Open Object Rexx (ooRexx) was announced. It includes a Windows Script Host (WSH) scripting engine, so code can be written for Windows, and it also has a command-line Rexx interpreter.

Those of you who use the ZEN family of products from William Data Systems will be interested to know that ZEN Rexx, formerly available with ZEN’s AUTOMATION component, has now been made available to all WDS customers through the base ZEN component.

ZEN’s Rexx support comprises: a ZEN Rexx Interface; a ZEN Rexx Function Pack; and a command interface. The ZEN Rexx Interface provides support for running Rexx programs under ZEN in a similar way to running Rexx programs under TSO or Netview. You can also run ZEN Rexx programs using a Modify (F) command from the z/OS console. Under ZEN, Rexx programs can be run from the ZEN Profile at initialization time, from the Command Facility and System Log panels, or from a user-defined ZEN menu item.

The ZEN Rexx Function Pack provides extensions to standard REXX through which users can communicate directly with ZEN. The command interface enables users to issue commands from their ZEN Rexx programs and get any responses back. This means that commands provided by their WDS ZEN components can be issued from a ZEN Rexx program, as well as any z/OS, VTAM, or other product command.

REXX may have had humble beginnings, with Mike Cowlishaw working in his own time, but it has gone on to conquer the world (metaphorically) and is now in regular use, in different incarnations, on just about every platform imaginable, including your smartphone and tablet.

Saturday 1 March 2014

Big Data 2.0

We were only just beginning to get our heads around Hadoop and Big Data in general when we find everyone is starting to talk about Big Data 2.0 – and it’s bigger, faster, and cleverer!

Hadoop, as I’m sure you know, is an open source project, and it’s available from companies like IBM, Hortonworks, Cloudera, and MapR. It provides a storage and retrieval method (HDFS – the Hadoop Distributed File System) that can knock the socks off older, more expensive options such as databases sitting on SAN or NAS storage. It also means that more data can be stored – and not just human-keyed data, but data from the Internet of Things (point-of-sale machines, sensors, cameras, etc) as well as social media. It’s an OCD sufferer’s dream come true: no need to delete (throw away) anything. But with all that data, it becomes important to find some way to ‘mine’ it – to derive information from the data that can be commercially useful. And that’s what’s happening: deeper and richer sets of results are being derived from the data, to the benefit of organizations.
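
As a rough illustration of the storage and retrieval side, once a Hadoop cluster is running, files are pushed into and pulled out of HDFS with simple shell commands (the directory and file names here are invented for the example):

    hadoop fs -mkdir sensor-data
    hadoop fs -put sensor-readings.csv sensor-data/
    hadoop fs -ls sensor-data
    hadoop fs -cat sensor-data/sensor-readings.csv

Behind the scenes, HDFS splits each file into large blocks and replicates those blocks across the cheap commodity disks in the cluster – which is where the cost advantage over SAN- or NAS-based storage comes from.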

With Version 2 of Hadoop, everything is faster. Data is processed at amazing speeds in-memory, and the analysis takes place at speed on terabytes of data. It also allows decisions to be made at speeds unavailable to humans. Research shows that algorithms with as few as six variables out-perform human experts in most situations – this has been tested against experts predicting the price of wine in future years and against stock market traders. So now, Big Data 2.0 means better decisions can be made at incredible speed.

It’s also possible for machines to learn using these techniques – such as the Google classic of software that can identify the presence of a cat in video footage, with no-one being quite sure how it does it.

For mainframe sites, Hadoop isn’t just some distant dream. You don’t need a room full of Linux servers to make it work – in fact that’s the clue to the solution. Much of this works very nicely on Linux on System z (or zLinux as many people still think of it). And once the data is on a mainframe, it becomes very easy to copy parts of it to a z/OS partition for more work to be done on the data. Cognos BI runs on the zLinux partition, so the first level of information extraction can be performed using that Business Intelligence tool. Software vendors are coming to market with products that run on the mainframe. BMC has extended its Control-M automated mainframe job scheduler with Control-M for Hadoop. Syncsort has Hadoop Connectivity. Compuware has extended its Application Performance Management (APM) software with Compuware APM for Big Data. And Informatica PowerExchange for Hadoop provides connectivity to the Hadoop Distributed File System (HDFS).

So what’s it like on the ground and away from the PowerPoint slides? At the moment, my experience is that the really big companies – Google, Amazon, Facebook, and the like – are pushing the envelope with Big Data. But it seems that many large organizations aren’t strongly embracing the new technology. Do banks, insurance companies, and airlines – the main users of mainframes – see a need for Big Data? Seemingly not – or not yet. Perhaps they are waiting for money to be spent and mistakes to be made elsewhere before they adopt best practice and reap the benefits. Perhaps they are waiting for Big Data V3?

Big Data is definitely here to stay and those companies that could benefit from its adoption will gain a huge commercial advantage when they do.