I often find it amusing when people pull out a very significant-sounding, obviously committee-written definition of “interoperability.” If I didn’t know better, I would think the definition had been written down and then delivered on a stone tablet. Is this necessary, or is interoperability one of those things that you know when you see it?
Three words that are often tossed around when talking about how one piece of software can talk to another are “compatibility,” “interoperability,” and “interchangeability.” Let me talk about them in order.
In my opinion, the phrase “software compatibility” is very fuzzy and therefore useless. Does it mean that both pieces of software can run on the same operating system without having one or the other explode? (In olden days it might have meant that they didn’t overwrite each other’s DLLs.) I do think it connotes some sense of their co-existing peacefully, but not really doing more than that.
More dangerously, I think it’s a term used by some who don’t want to sign up to some idea of strong interoperability, in the sense that open standards are used to specify how the systems should talk. “I don’t want you to hold me to being interoperable, so let’s just say we’re compatible. That way I can just keep building my systems the way I want to do so.”
So don’t use “compatibility,” or else understand that you are probably using someone else’s key word with a tainted meaning.
“Interoperability” is the real term here, meaning that two different pieces of software can exchange information and the meaning of that information within a larger document or process is well understood by both. Of course, ultimately the information may move among more than two pieces of software. This software might all be on one device or one computer, one or more may be a desktop or server application or a remote service, and they may be geographically close or distant.
Let me use the phrase “weak interoperability” to mean that software systems can work together because they share proprietary or non-publicly controlled data formats, and the exchange of that data is similarly done using proprietary or non-publicly controlled protocols. A typical case is “You want interoperability? Buy all your software from me. Don’t worry, I’ll guarantee that it will all just work.”
For discussions of interoperability, the software systems we are talking about should be considered black boxes. That is, data goes in and data goes out, but we should not really care about how the applications or services are implemented. You might be worried about aspects of software such as open source, patents, or trade secrets, but for interoperability it is the flow of information back and forth with which we concern ourselves.
First, then, we care about the format of the information as it is externalized from the software and made available to others. In this area, standards like XML, HTML, the OpenDocument Format, and many XML-based standards for particular industries are good examples. This interchange format might be static, that is, pre-generated, or it might be dynamically generated from user input, a database, or some computation. The technical quality and implementability of the formats are very important. Having a “standard” that only one party can fully implement will not lead to general interoperability.
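To make the format idea concrete, here is a minimal sketch using Python’s standard library: one program externalizes data in an agreed-upon XML format, and a completely separate program reads it back. The `<invoice>` schema here is invented for illustration, standing in for a real industry standard.

```python
# Two independent programs can exchange data as long as both agree on the
# externalized format. The <invoice> element names below are hypothetical.
import xml.etree.ElementTree as ET

def produce_invoice(customer: str, total: str) -> str:
    """Writer side: serialize data into the agreed-upon XML format."""
    root = ET.Element("invoice")
    ET.SubElement(root, "customer").text = customer
    ET.SubElement(root, "total").text = total
    return ET.tostring(root, encoding="unicode")

def consume_invoice(document: str) -> dict:
    """Reader side: a different program parses that same format."""
    root = ET.fromstring(document)
    return {child.tag: child.text for child in root}

doc = produce_invoice("Acme Corp", "199.00")
print(consume_invoice(doc))  # {'customer': 'Acme Corp', 'total': '199.00'}
```

The point is that neither side needs to know how the other is implemented; only the format is shared.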
Protocols are the second area of interoperability that you need to know and care about. It’s one thing to have information, but how is it wrapped up and sent securely and reliably from one place to another? The protocols might be very simple or they might be very complicated, involving encryption, transactions, guaranteed delivery techniques, and other aspects that we might attribute to the qualities of service between the software.
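As a hedged sketch of what “wrapping up” information can mean, the fragment below puts a payload inside an envelope carrying the kind of metadata a protocol might need, here just a content type and an integrity checksum. The field names are invented for illustration and do not come from any particular standard.

```python
# Sender wraps the payload with metadata; receiver verifies it on arrival.
# This stands in for the qualities of service a real protocol provides.
import hashlib
import json

def wrap(payload: str) -> str:
    """Sender: add an integrity check so the receiver can detect corruption."""
    envelope = {
        "content_type": "text/plain",
        "sha256": hashlib.sha256(payload.encode()).hexdigest(),
        "body": payload,
    }
    return json.dumps(envelope)

def unwrap(message: str) -> str:
    """Receiver: verify the checksum before trusting the payload."""
    envelope = json.loads(message)
    digest = hashlib.sha256(envelope["body"].encode()).hexdigest()
    if digest != envelope["sha256"]:
        raise ValueError("payload corrupted in transit")
    return envelope["body"]

print(unwrap(wrap("Account number: 745690555121200099")))
```

Real protocols layer on much more, of course: encryption, retries, transactions, and so on, but the envelope-around-payload shape is the same.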
So now we have our information, we have it nicely wrapped up for delivery, but how do we know how to get it into the other application? As a somewhat contrived example, if I were to send
Account number: 745690555121200099
to a bank, I might want to know if I’m talking about a deposit or a withdrawal. There are various ways of doing this, but the essential idea is that you need to understand the interfaces into an application or service. The common term is “API,” for “Application Programming Interface.” That bank might have just one interface and I would put the “withdrawal” instruction in with the data, or it might have separate “deposit” and “withdrawal” APIs and I invoke the one I want for a specific action. As I said, this is oversimplifying, but you get the idea.
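The two API styles just described can be sketched side by side. Neither is a real bank’s interface; every name below is invented for illustration.

```python
# Style 1: one entry point; the action travels inside the data itself.
class SingleEndpointBank:
    def __init__(self):
        self.balances = {}

    def transact(self, request: dict) -> int:
        account = request["account"]
        delta = request["amount"] if request["action"] == "deposit" else -request["amount"]
        self.balances[account] = self.balances.get(account, 0) + delta
        return self.balances[account]

# Style 2: separate deposit and withdrawal APIs; the caller picks one.
class TwoEndpointBank:
    def __init__(self):
        self.balances = {}

    def deposit(self, account: str, amount: int) -> int:
        self.balances[account] = self.balances.get(account, 0) + amount
        return self.balances[account]

    def withdraw(self, account: str, amount: int) -> int:
        self.balances[account] = self.balances.get(account, 0) - amount
        return self.balances[account]

bank = SingleEndpointBank()
bank.transact({"account": "745690555121200099", "action": "deposit", "amount": 100})
print(bank.transact({"account": "745690555121200099", "action": "withdrawal", "amount": 30}))  # 70
```

Either style can interoperate; what matters is that the interface, whichever shape it takes, is documented well enough for others to call it.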
When we have open standards used for information formats, protocols, and APIs, we have what I’ll now officially term “strong interoperability.” The specifications used in this way are “software interoperability standards.”
There are those who do not feel that software APIs need to be open for interoperability, and there are others who feel that strong interoperability means that open standards must also be used to describe the business processes under which information flows from one application or service to another. I expect the debate to continue for several more years; all I’ll add now is that the recent trend in the IT industry is toward more and more openness.
The final term I want to discuss is “interchangeability.” To what degree can I take out one software application or service and replace it with another? Here the concern might extend beyond formats, protocols, and APIs into the areas of user interface and support on multiple operating systems.
To pick a specific application area, many people are asking how interchangeable office suites like Microsoft Office, OpenOffice.org, IBM Lotus Symphony, Corel Office, and KOffice really are. Given Microsoft Office’s past and current dominant market position, there has been significant debate around standards for document formats.
To avoid retraining costs, many are concerned about how similar the user interfaces are. Of course, that has also been a concern between versions of the same product, as when Microsoft Office 2007 introduced the new ribbon interface. I would argue that innovation suffers if you always restrict yourself to copying the user interface of the current market leader, no matter the software category.
The office category is interesting because it also includes things like spreadsheet macros and formulas. Can one spreadsheet application run the macros and understand the formulas in a document created by another application? Is this true across operating systems, even for software from the same vendor?
So software interchangeability goes beyond formats, protocols, APIs, user interface, and operating system support, and also includes in-application programmability. Note that these issues apply not just to standalone desktop applications but also to software delivered through a web browser.
For software services that are accessed only programmatically, pure interchangeability is often more achievable. Banks and financial institutions, for example, could use fully standardized formats, protocols, and APIs to deliver the same services, such as credit card approvals.
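Programmatic interchangeability can be sketched as follows: if two providers implement the same standardized interface, a caller can swap one for the other without changing its own code. The interface and both providers here are hypothetical.

```python
# Interchangeability via a shared interface: the caller depends only on the
# agreed-upon API, never on a particular provider. All names are invented.
from typing import Protocol

class CardApprover(Protocol):
    def approve(self, card_number: str, amount: int) -> bool: ...

class FirstBank:
    def approve(self, card_number: str, amount: int) -> bool:
        return amount <= 5000  # stand-in for this provider's real logic

class SecondBank:
    def approve(self, card_number: str, amount: int) -> bool:
        return amount <= 10000  # a different provider, same interface

def checkout(approver: CardApprover, card_number: str, amount: int) -> str:
    # Swapping FirstBank for SecondBank requires no change here.
    return "approved" if approver.approve(card_number, amount) else "declined"

print(checkout(FirstBank(), "4111111111111111", 2500))   # approved
print(checkout(SecondBank(), "4111111111111111", 7500))  # approved
```

The design point is that interchangeability is a property of the caller’s dependencies: when those dependencies are confined to an open, shared interface, replacing one service with another becomes a deployment decision rather than a rewrite.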
With cloud computing becoming more and more important, people are correctly asking questions about standards. My sense is that virtually none of the cloud environments are interchangeable and that interoperability among them is sketchy, at best. Unless one provider ends up being overwhelmingly dominant, interoperability will need to be improved.