Open Standards vs. Open Source, Part 4: The SOA Connection


In the beginning, there was one computer and it was big and slow and it filled an entire room. Eventually, there were many computers and they were smaller and they could talk to each other.

All was not good, however, because they did not speak to each other in the same way. Strangely enough, this was often true even when a particular collection of machines all tried to work together in a hospital, an automobile manufacturer, an insurance company, or a government agency.

In the meantime, the software that ran on the computers got bigger, more powerful, and sometimes needlessly complicated. If two computers did communicate with each other to get some job done, it was very difficult to substitute in similar software created by someone else on one of the machines. This was true even if that new software might have been significantly better in some way or less expensive.

It became very hard to substitute in different hardware that was much faster or otherwise better suited to accomplishing the intended task more efficiently. We also discovered that machines started to know too much about each other. This was not nosiness on their part, of course; it was just that people started to depend on particular special features of the software or the hardware when putting everything together so the machines could do their jobs.

There were times when we wanted to use knowledge of special features in very high performance situations, but for most situations with application software, building in these kinds of dependencies eventually caused more problems than they were worth. What we really needed was a way for the computers, really the software running on these networked machines, to be able to ask each other for information or to do certain jobs in ways that did not give away the underlying details of the software, the operating systems, or the hardware.

If you could do this (computer scientists refer to such systems as loosely coupled), then it would be much easier to completely hide the underlying details of how the systems were built. This would allow us to make changes or improvements to the systems while still allowing the software to communicate in the same way.

We could make the overall job run faster by putting in speedier hardware running a different operating system, yet the systems could still communicate in the same way. We could move one computer closer or farther away, speed up the communication technology, and yet everything would still keep working.

If a car manufacturer needed to order parts, then it could use exactly the same language and communication style to talk to two or more suppliers. If a new supplier offered better quality or a lower price, then that supplier could be added into the system and the same kinds of interchanges could take place with it as had been happening with the older suppliers.

What we’re describing here is interoperability: software and hardware systems made by different people that can nevertheless communicate in a high-level way that does not depend on the underlying implementation details. This means we don’t all have to buy our computer hardware from the same vendor and we don’t all have to use the same operating system and applications.

It means that we have the choice to buy or build or otherwise obtain what is right for us and it gives us control. This means that we don’t have to rely on proprietary software communication schemes from vendors and we don’t have to get software interoperability via one vendor’s trade secrets.

A vendor or a software provider gets our business if they offer the best product, code, or service at the right price. They know we can substitute in something made by someone else. Hence we get more competition and ongoing improvements, both technical and economic.

The world doesn’t quite work this perfectly now, but it could come close. We can build open, interoperable software systems useful for businesses, governments, schools, hospitals and anything else that could benefit from the advantages discussed above. Standards can make it all work together while open source and traditional proprietary software will give us a range of choices in how we build the systems.

Let’s stop and think about the World Wide Web. When you use the Web, do you ever worry about what software is being used to deliver the pages you view? Do you think about the hardware?

Websites often run both proprietary and open source software and they use hardware from many different vendors. From the perspective of a consumer of web pages and someone who sends personal information on the Web, you want the transmissions to be fast, reliable and secure. You want to be able to fully interact with the pages within your browser, no matter which browser you choose.

That is, you care about the quality of service and you care about the standards being used to encode the pages and the way they are sent back and forth. This is possible today and has been for many years.

Recent releases from open source browser software providers and smaller proprietary browser software vendors have focused the spotlight on the importance of good, current support for standards and consistent attention to security issues. As a result, the market leader has been forced to update its browser in order to try to remain competitive and stop losing market share. Standards are like that: they force vendors to support interoperability in the way customers demand.

The Web illustrates the success of standards in hiding how sites are actually implemented. This allows a website owner to use any software or hardware that suits his or her purpose. That owner also wants good quality of service. He or she wants happy customers who are pleased with using whatever services are offered on the site.

Users of the sites don’t have to think about the technology being used but rather can think about whatever they are trying to accomplish. This may be buying music, finding the local movie listings, ordering presents for an upcoming holiday, or reading blogs. There are many such examples, but in all of them the standards that make up the web allow you to think about what you are trying to do versus technically how you are doing it.

It also enables geographic independence. To give you an example, I have a personal website and I don’t have the vaguest idea where the machine running it is physically located. As long as interactions with it are fast enough, the location makes no difference.

This allows a lot of flexibility for website owners, especially the ones who may be running hundreds if not thousands of hardware servers. As long as the quality of service (performance, reliability, and security) is sufficiently high, it makes no difference where on the planet the hardware is. The Web is loosely coupled and its success is evident to hundreds of millions of people every day.

If we can accomplish all this with the Web and what it has done for e-commerce and making information available globally, can we do something similar but a bit more sophisticated and more general for interactions between arbitrary pieces of software? Can we have more fine-grained security where we can allow doctors to digitally sign the different parts of medical records for which they are responsible? Can we encrypt different parts of purchase orders so that only authorized people can see information relevant to them in a business process?

Can we easily substitute in new supply chain partners without disrupting our ongoing business and workflow? Can we transparently link multiple hospitals together so that all the electronic services we need to treat patients are available? Can a government provide the necessary infrastructure to take care of its citizens’ needs while being able to use open source or proprietary software … or both?

Many people, including myself, think the answer is yes, and the way to do it is via something called Service Oriented Architecture, or SOA. Open standards are what make it work.

A service is something that does a particular set of activities and has a consistent interface. Think about an ATM, an automatic teller machine.

I have used these all over the world and, but for language and currency, they all pretty much do the same thing. You put in your card, type in your personal identification number, and then you can interact with your accounts. You can transfer money from one account to another, withdraw money in the local currency, and ask how much money you have. There are also other activities, but we would summarize all of them as being ‘banking services.’

Because of years of experience, there is a good and common understanding of what the standard banking services are. In fact, if you use the Web to do online banking, you will also be using some of the services. (I don’t know of any that allow you to withdraw or print money from your computer!)

Whether you are using an ATM or using online banking, the steps are similar. You authenticate yourself (that is, you provide enough evidence that you are who you say you are) and then you invoke one or more services. You may use the balance inquiry service before you use the money transfer service. Eventually you do everything you want to do and you end your session. The next person who uses the ATM or your computer does not continue using your identity to access your accounts.
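
To make this session flow concrete, here is a minimal sketch in Python. Everything in it (the BankingService class, its methods, the card number and PIN) is invented for illustration; a real bank would expose these operations behind open, standard interfaces.

```python
# A minimal sketch of the session flow described above. All names here
# (BankingService, authenticate, balance, transfer) are hypothetical;
# a real bank would expose these operations behind standard interfaces.

class BankingService:
    def __init__(self):
        self._accounts = {"checking": 500.00, "savings": 1200.00}
        self._authenticated = False

    def authenticate(self, card_id: str, pin: str) -> bool:
        # Stand-in for real authentication: evidence that you are
        # who you say you are.
        self._authenticated = (card_id == "card-123" and pin == "9999")
        return self._authenticated

    def balance(self, account: str) -> float:
        # Balance inquiry service.
        assert self._authenticated, "must authenticate first"
        return self._accounts[account]

    def transfer(self, source: str, target: str, amount: float) -> None:
        # Money transfer service.
        assert self._authenticated, "must authenticate first"
        self._accounts[source] -= amount
        self._accounts[target] += amount

    def end_session(self) -> None:
        # The next person at the ATM does not inherit your identity.
        self._authenticated = False

svc = BankingService()
svc.authenticate("card-123", "9999")        # authenticate yourself
print(svc.balance("checking"))              # inquire before transferring
svc.transfer("checking", "savings", 100.0)  # then invoke the transfer
svc.end_session()                           # end your session
```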

When you use an ATM, do you have any idea how the back-end banking systems are built? Do you care?

I maintain that you care about how quickly and robustly the ATM responds to you and that your privacy is maintained. This is quality of service again. You also care about successfully using any ATM you may encounter. This is standardization reappearing.

We can now translate these two banking ideas (ATMs and online banking) to computer-to-computer interactions. Using services, we could build software that automatically paid bills once a month without any human interaction once it was set up. We could automatically transfer money from a parent’s checking account to her college student child’s savings account in a different bank on the other side of the country. We could do all this without knowing the exact details of the service implementations.

To be clear, we would have to have the right authorizations to do any of this, but I’m talking about technical feasibility. For the sake of brevity of discussion, I’ll assume from now on that the appropriate security is being used with any service.
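
Here is an equally hypothetical sketch of that kind of unattended, computer-to-computer use of services: a monthly bill payment routine with no human in the loop. The payees and the invoke_transfer function are made up, and we assume the appropriate authorization has already been handled.

```python
# Sketch: paying bills once a month with no human interaction.
# The payees and invoke_transfer are invented; assume the appropriate
# authorization and security are already in place, as discussed above.

BILLS = [("electric-co", 80.00), ("water-co", 25.50)]

def invoke_transfer(payee: str, amount: float) -> None:
    # In a real SOA this would call a standard money-transfer service
    # over an open protocol; here we just record the intent.
    print(f"transfer {amount:.2f} to {payee}")

def run_monthly_billing() -> None:
    # A scheduler would call this once a month, with nobody watching.
    for payee, amount in BILLS:
        invoke_transfer(payee, amount)

run_monthly_billing()
```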

If we think about health care, we can come up with other services that might be useful: ordering drugs from one or more pharmacies, requesting lab tests or their results, retrieving medical records including subsets such as lists of allergies, and so on. For government, we might retrieve traffic violation or arrest records, request local tax records to compare with federal ones, or provide real-time epidemic status information.

For the travel industry, there might be services that query hotel availability, make airline reservations, book restaurants, and reserve theater tickets. These individual services could all be combined into a compound service that might be called ‘book my next vacation.’ I encourage you to imagine other software services that might be useful in various industries, your business, and your life.
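
As an illustration, here is a sketch of how such a compound service might compose the simpler ones. Each function stands in for an independently implemented service; all the names and return values are invented.

```python
# Sketch of a compound service built from simpler ones. Each function
# stands in for a separately implemented service; names and return
# values are invented for illustration.

def make_airline_reservation(city: str, dates: str) -> str:
    return f"flight to {city}, {dates}"

def query_hotel_availability(city: str, dates: str) -> str:
    return f"hotel in {city}, {dates}"

def book_restaurant(city: str) -> str:
    return f"dinner reservation in {city}"

def reserve_theater_tickets(city: str) -> str:
    return f"theater tickets in {city}"

def book_my_next_vacation(city: str, dates: str) -> list[str]:
    # The compound service composes the individual ones; its callers
    # never see how, or where, each is implemented.
    return [
        make_airline_reservation(city, dates),
        query_hotel_availability(city, dates),
        book_restaurant(city),
        reserve_theater_tickets(city),
    ]

for confirmation in book_my_next_vacation("Lisbon", "May 4-8"):
    print(confirmation)
```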

The “orientation” part of SOA means that you try to use services for everything possible and practical. The “architecture” part of SOA means that you have some discipline and governance in how you design, create, run and maintain the services and how they all fit together. Services will increasingly be the way we implement the components of our business processes. They help us once again separate the “what” is being done from the “how.”

The standards that we use to make this work fall into three categories: data formats, protocols, and interfaces. When we talk about software interoperability, this is what we mean.

A data format is how we represent the information we send back and forth. A protocol wraps up that data with the necessary transmission and security information so it can be moved reliably from one computer to another. The interface is the exact specification of how you tell a service to do something, whether it is a query or an action to be performed. Taken together, these three things describe how you talk to a service and how services talk to each other.

Data formats can be highly structured information such as the details of a banking transaction expressed in XML or something less structured like a doctor’s notes contained in an OpenDocument Format memo. Protocols and interfaces are typically very structured. The standards being developed in the W3C and OASIS for web services are an important way of implementing SOA for many people.
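
To ground those three categories, here is a small sketch using only Python’s standard library: an XML document plays the data format, a SOAP-style envelope stands in for the protocol wrapper, and a named operation represents the interface. The element names and structure are illustrative and not taken from any actual web services specification.

```python
# Sketch of the three layers. A banking transaction in XML is the data
# format; a SOAP-style envelope stands in for the protocol wrapper; the
# named operation is the interface. All element names are illustrative.
import xml.etree.ElementTree as ET

# 1. Data format: the information itself.
txn = ET.Element("transaction")
ET.SubElement(txn, "source").text = "checking"
ET.SubElement(txn, "target").text = "savings"
ET.SubElement(txn, "amount").text = "100.00"

# 2. Protocol: wrap the data for transmission. A real envelope would
# also carry security and addressing information.
envelope = ET.Element("Envelope")
body = ET.SubElement(envelope, "Body")

# 3. Interface: the exact operation we are asking the service to perform.
operation = ET.SubElement(body, "TransferFunds")
operation.append(txn)

print(ET.tostring(envelope, encoding="unicode"))
```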

If one vendor, individual, or group owns any one of these, there is a potential problem. We want the freedom to call any service we wish if we have the right authorization to invoke it. If a vendor prevents others from using a particular format unless they pay a fee, then there is effectively a tax on the communication between software services.

This was a major fear when the Web was maturing, but luckily it came to naught, though not without some significant challenges. If a vendor requires software interfaces to be licensed, then it is effectively trying to lock users into its way of doing things. When protocols are proprietary, we limit customers’ ability to link together software systems and services in the ways they choose.

In short, we need truly open standards and not vendor controlled or dictated specifications in order for SOA to reach its full potential as a solution for customers.

If I am insisting on open standards for SOA, is there any room for doing anything proprietary here? Yes, and that is in how the services themselves might be built. Since we are using open standards to communicate to and from a service, we have the freedom to implement the service using any hardware or software that we choose. The implementation will not affect what the user sees or does, other than the quality of service it delivers.

If we want to use open source software and it gets the job done in terms of features, cost, and maintenance, great! If proprietary software gives you the security, performance, scalability and ability to run on multiple hardware platforms, use it! If you use a combination of both, that’s just fine as well. That’s your choice and it is under your control.
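
A sketch of what that freedom looks like in code: two interchangeable implementations sit behind one interface, and the caller never knows or cares which one it got. The RecordStore interface and both backends are hypothetical.

```python
# Sketch: one interface, two interchangeable implementations. The caller
# depends only on RecordStore, so either backend (open source or
# proprietary) can be swapped in. All names are hypothetical.
from abc import ABC, abstractmethod

class RecordStore(ABC):
    @abstractmethod
    def fetch(self, record_id: str) -> str: ...

class OpenSourceStore(RecordStore):
    def fetch(self, record_id: str) -> str:
        return f"record {record_id} from an open source backend"

class ProprietaryStore(RecordStore):
    def fetch(self, record_id: str) -> str:
        return f"record {record_id} from a proprietary backend"

def show_record(store: RecordStore, record_id: str) -> None:
    # The caller sees only the interface, never the implementation.
    print(store.fetch(record_id))

show_record(OpenSourceStore(), "A-17")
show_record(ProprietaryStore(), "A-17")  # same call, different backend
```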

Service Oriented Architecture is now a major driving force in the IT world. As we redesign our older software to operate in this new SOA environment, the value of truly open standards is becoming more and more clear.

Collectively we’re getting a better understanding of how open standards provide the freedom we need to factor our systems in the right way. This will allow us to openly interoperate with the software systems of our customers, partners, suppliers and, in the case of governments, citizens.

Open source is playing a role here because it is often how standards are first made available. SOA presents new business opportunities and better ways for industries to communicate within and between themselves.

There’s a healthy future for software development, be it open source or proprietary, and I believe open standards will be at the core of our success in the days to come.

