In 1999, when I joined Epinions.com, we were using a proprietary database/app server and a custom apache module to talk to it. The apache module parsed some very simple html templates and found "entities", which looked like standard html entities but actually contained function calls to the backend server. We had a proprietary protocol, RAD (Random Ass Data), written by Lou Montulli of Netscape fame, that transferred these requests back and forth. Typically, the returned data would simply be blobs of html that would be inserted directly into the document. This meant that the backend server, which was C-based, had to understand a great deal about html, which was bad.
When we decided to integrate php into our system, one of the main goals was to separate the display-oriented processing from the logic/data-oriented processing. This required a way of sending back much richer result sets corresponding to native php types: ints, doubles, lists, hashed arrays, etc. To this end, I devised a simple xml vocabulary that was almost a 1-to-1 representation of php's native data types. I then created an API in the app server code for generating this vocab, and a small php extension which used expat to parse the xml and decode it. In the final days of that project, I moved on to introspection and some other fun stuff. For example, I created a C API for describing methods and their arguments, and then a php function for formatting the returned data prettily. Thus, php coders finally had some decent documentation as to what they could expect from a given method call in the server.
The day came when we had to re-architect the site, circa May 2000. We liked the current xml-based response mechanism, but the new architecture called for more complicated queries and a more robust request/response solution. By this time, I had read about XML-RPC and had noted its great similarity to what I'd been doing. I proposed that we switch to standard XML-RPC for the new project, and it was agreed to. The only problem was that I could not find any great C implementations. I found one, expat-ensor, that generally worked, but we had performance issues and found the API non-intuitive. Worse, it was no longer supported. One day, after battling with ensor for a long time, I got to looking at my old code and decided that the API was actually more sensible and could work just fine for XML-RPC. I sat down, and within 2 days had a working prototype that was many times faster, architecturally cleaner, and could be plugged into either the php extension or the backend server. A few days later it could read/write either the xml-rpc vocabulary or the simpler vocab I had previously devised.
In the early days, while working out the bugs, it was quite neat that we could use the pure-php xml-rpc implementation (by Edd Dumbill) interchangeably with my C implementation. I wrote a pair of simple xmlrpc_encode/decode functions for Edd's code that mirrored my own API, and thus switching between the implementations became a one-line change. I think this was when people within our organization really started to see the benefit of using a standard protocol.
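To give a feel for how small that surface is, here is a minimal client-side sketch written against the xmlrpc-epi-php extension's xmlrpc_encode_request() and xmlrpc_decode_request() functions; the method name, parameters, and the fallback file name are made up for illustration, and the 2000-era wrapper around Edd's library obviously looked different.

<?php
// Sketch only: the real compatibility wrapper lived in Epinions' code base.
// xmlrpc_encode_request() and xmlrpc_decode_request() come from the C
// extension; a pure-php shim exposing the same two functions makes switching
// implementations a one-line change.
if (!extension_loaded('xmlrpc')) {
    require_once 'xmlrpc_compat.php';   // hypothetical shim over the pure-php library
}

// Application code is identical whichever implementation got loaded.
$xml = xmlrpc_encode_request('sample.sum', array(1, 2, 3));

$method = null;
$params = xmlrpc_decode_request($xml, $method);   // round-trip back to native php types
printf("%s(%s)\n", $method, implode(', ', $params));
?>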
(on WSDL) Nevertheless, it strikes me as a bit strange that a protocol capable of describing data/objects of any type (SOAP) requires yet another xml vocabulary to provide introspection capabilities. That said, I'm sure it is great, and I will probably support it at some point. I'd like to see xml-rpc actually mentioned in the wsdl spec(s), given that it is not supposed to be soap-specific.
My philosophy with xml-rpc has always been that the vocabulary should be as simple as possible, providing a small set of building blocks on which more complex things can be built. So when I needed introspection capabilities, I simply defined them in terms of xml-rpc's native methods and data structures. Since the early days I have provided some form of introspection in my code, and a few months ago I posted a detailed, yet still small, spec [1] that describes how this works, so that other implementors may also choose to support it.
My introspection support is meant to work in a fashion similar to javadoc, robodoc, and other auto documentation systems. Basically, the developer leaves markup in the code and the documentation system makes sense of it. The developer may leave as much or as little information as desired. The cool thing about this is that it can be queried at run time by anyone with access to the system, and formatted by them in whatever manner they desire. Further, both the parameters and return values from methods may be nested arbitrarily deep and have names/descriptions associated with them. I have used this data to provide server help page(s) and interactive web interfaces to the xml-rpc server. Given that the introspection data provides both method names and parameter types and descriptions, it then becomes possible for the client to present the user with a form wherein s/he may fill in the parameters by hand and execute the method call.
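As a concrete (and hedged) illustration, the sketch below queries a server's introspection data at run time using the system.describeMethods call from the spec above. The endpoint URL is made up, and since the exact shape of the returned structure is defined by that spec, the result is simply dumped rather than assumed.

<?php
// Ask a (hypothetical) xmlrpc-epi server to describe its methods.
$request = xmlrpc_encode_request('system.describeMethods', array());

$context = stream_context_create(array('http' => array(
    'method'  => 'POST',
    'header'  => 'Content-Type: text/xml',
    'content' => $request,
)));
$xml = file_get_contents('http://localhost/RPC2', false, $context);

// The decoded result carries method names, purposes, and typed/named
// parameter descriptions -- enough to generate a help page or build an
// input form for calling a method by hand.
print_r(xmlrpc_decode($xml));
?>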
A few Introspection examples:
Terraseek's GPS Web Service uses introspection to provide a browser interface:
http://216.101.160.38/xmlrpc.html
A generic introspection client that can be pointed at any xmlrpc-epi server:
http://xmlrpc-epi.sourceforge.net/xmlrpc_php/introspection_client.php
Pretty formatting of introspection data via a php function:
http://xmlrpc-epi.sourceforge.net/xmlrpc_php/introspection.php
(Having built the system at ePinions, what was the story behind open sourcing it?)
I wanted to open source it from the start, and told my manager immediately, who was amenable. I got it to the point where it passed the XML-RPC.com validation test suite just prior to thanksgiving of 2000, and requested official approval. Long story short, it took until March 2001 before the legal department was satisfied and I was finally able to open source it. In the meantime, Eric Kidd came out with his C/C++ library, thus making mine a bit redundant. The positive news is that this effort sort of cleared the barriers, and Epinions.com has now open-sourced two more pieces of software: mvserver [2], which uses xml-rpc, and yats [3], which is a fast template engine that I wrote a while back. I have received a lot of positive feedback -- enough to keep me working on the library from time to time, though not as much as some would like. The big news is that I've just received access to the php CVS repository, and am planning to make xmlrpc-epi-php a standard php extension. woo hoo!
[1] http://xmlrpc-epi.sourceforge.net/specs/rfc.system.describeMethods.php
[2] http://mvserver.sourceforge.net/
[3] http://yats.sourceforge.net/
(Where did your XML library come from?)
I wrote it in order to facilitate integrating php into our system. Specifically, it was two pieces of code: an API on the backend server that would spit out xml representing data structures, and a separate piece on the php side that used expat to parse the xml and convert it into native php data structures. The XML vocabulary was similar to that of XML-RPC, but more human-readable, and with support for mixed arrays, which php supports but XML-RPC (and some languages) do not. (mixed array = some values have keys, some do not.) This vocabulary was something I came up with on the whiteboard one day in about 20 minutes; I asked Lou if he liked it, and sat down to crank out the code.
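To make the mixed-array point concrete, here is a tiny php example using the later xmlrpc-epi-php extension's xmlrpc_encode(); the exact serialization is indicative rather than byte-exact.

<?php
// php happily mixes keyed and unkeyed values in a single array; standard
// XML-RPC only has <array> (no keys) and <struct> (all keys), so an encoder
// must pick one -- typically a <struct> with member names "0", "1", "label".
$mixed = array('first', 'second', 'label' => 'third');

echo xmlrpc_encode($mixed);
?>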
The vocabulary really only has two type elements: "scalar" and "vector". Vectors may be arbitrarily nested within each other, as with XML-RPC. Scalars [now] support the same types as XML-RPC, although the original vocab did not include base64 or datetime. I also "borrowed" methodName, methodCall, and methodResponse from XML-RPC when working on the 2nd-generation library; the first-generation vocab had not required them because it was only ever used for responses.
More examples of this vocab are available on the SourceForge Page at:
http://xmlrpc-epi.sourceforge.net/main.php?t=samples
My current xmlrpc-epi distribution still supports this original vocabulary, called "simpleRPC", and it can actually read/write either it or XML-RPC. This is possible because I wrote the library in a modular fashion, such that there is a parsing layer (expat), a DOM layer (custom), serialization layer(s), and finally the data structure and API layer. I am currently toying with the idea of plugging in a serialization layer for SOAP (or a subset thereof), thereby having a single C library that can read/write XML-RPC or SOAP interchangeably with a single application-level API.
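From the php side, that modularity surfaces as a simple output-options array, so the same call can be serialized to either vocabulary. A minimal sketch follows; treat the exact option names and values as indicative rather than gospel.

<?php
// Same method, same data -- only the wire vocabulary changes.
$options = array(
    'output_type' => 'xml',
    'verbosity'   => 'pretty',
    'version'     => 'simple',    // the original simpleRPC vocabulary
);
echo xmlrpc_encode_request('sample.sum', array(1, 2, 3), $options);

$options['version'] = 'xmlrpc';   // switch the serialization layer to XML-RPC
echo xmlrpc_encode_request('sample.sum', array(1, 2, 3), $options);
?>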
(Why did you use XML-RPC?)
I think it was just a matter of a common need and similar solutions. I had needed a way to represent arbitrarily nested, typed data. XML seemed the easiest way to do it, because the parser was already written for me. Userland had needed something similar and thus had come up with a solution that was very close to mine, and so matched up well with my existing API.
(Tell me about what it took to get folks to use the standard protocol)
I think there was quite a bit of the "Not Invented Here" syndrome. Heck, we were even using our own database! Also, remember that this code was originally written as a means of connecting our web server to the backend system(s). It was not really important that it be able to interoperate with other applications. It was important that it be very fast. XML is quite verbose, which translates into a lot of network traffic and additional parsing overhead, and so there was originally some concern that perhaps we should not use XML at all. Consider that at the time we were serving over a million page views a day, and that some of those pages contained > 20 requests to the backend servers. When you start dealing with those kinds of numbers, performance becomes very important, and the existing XML-RPC implementations, most of them for scripting languages, were simply not up to the task. Thus, we could see the advantages of using an open protocol, but it soon became clear to me that I'd have to roll my own implementation. Ultimately, I felt this was one of the more valuable things I did for my career while employed at Epinions, because it got me involved with technology and people outside the company, and that type of knowledge and experience is much more transferable than the arcana of how a particular web site operates.
(How did you decide where the complexity would sit, in the wrapper or a process that sits on the end of a pipe? i.e. would you embed the protocol in a more complicated protocol, or would you ask programs getting the data over the pipes to do complicated things with it?)
Either/both. I think that as applications are developed to use XML-RPC, people realize they are doing common things over and over, and begin to form higher-level protocols to address them. This is what I did with introspection, and also with standardized fault codes, which are not part of the XML-RPC spec (http://xmlrpc-epi.sourceforge.net/specs/rfc.fault_codes.php).
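For example, a fault under that convention is still just the XML-RPC spec's ordinary faultCode/faultString struct; only the numeric values are standardized. A minimal sketch using the php extension (the method name, parameters, and the particular code shown are illustrative -- the reserved values live in the spec linked above):

<?php
// Server side: a method that signals an error by returning a fault struct.
function lookup_user($method, $params, $app_data)
{
    if (count($params) < 1) {
        return array(
            'faultCode'   => -32602,   // "invalid method parameters" range; see the fault-codes spec
            'faultString' => 'expected a user id',
        );
    }
    return array('id' => $params[0], 'name' => 'example');
}

$server = xmlrpc_server_create();
xmlrpc_server_register_method($server, 'user.lookup', 'lookup_user');

// Dispatch a deliberately bad request and inspect the decoded response.
$response = xmlrpc_server_call_method($server, xmlrpc_encode_request('user.lookup', array()), null);
$decoded  = xmlrpc_decode($response);
if (is_array($decoded) && xmlrpc_is_fault($decoded)) {
    echo "fault {$decoded['faultCode']}: {$decoded['faultString']}\n";
}
?>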
I think that, in general, this is the way the internet and the web have developed. Relatively simple protocols have been piled one upon another to ultimately enable something that is very complex. tcp/ip is composed of many layers; http sits on top of them. Now xml-rpc and soap are sitting on top of http, and eventually other things will sit on top of them. In SOAP's case, there are already wsdl and uddi, for instance.
(Tell me about management pushback when open sourcing the code)
The pushback had to do with the license and with liability. Surprisingly, the GPL was deemed "too restrictive", so we ultimately settled on a BSD-like license. Then we had to prove that there was no code in the library owned by anyone else. The lawyer(s), of course, had much more important things to be doing, and so all of this took much longer than would be expected.
I wanted to open source it because I love open source. I can't stand the thought of having to write something that I know someone else has already written. It annoys me. It feels wrong. So anytime I can spare someone else that pain, it makes me happy.
I think the company agreed to do it because it was non-essential software. It was not something that really gave us a competitive edge or was otherwise deemed "strategic". Further, since it was written around an open protocol, it just made common sense that it would be useful for others, and that it might even be useful for our partners, etc.
Thanks for writing in, Dan!