Multipurpose Transaction Protocol: A New Data Transport Model

It is no secret that high performance and the Internet are often seen as contradictory terms. Even private IP networks face serious performance challenges once they extend beyond the local building and around the globe. As devices and their users become more mobile, it has become critical to design with high-speed wide area networks (WANs) in mind. Yet software and hardware designers are traditionally given few choices and little control when it comes to ensuring high network performance.

At the application level, network communication is viewed through the interfaces to TCP/IP, the protocol that manages nearly all data transfer across IP networks. Its thirty-four-year-old data model is very simple and general: a heavyweight, full-duplex, internally buffered byte pipe. But this general-purpose model can a) be difficult to program against and b) carry a devastating performance cost.
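
To see why it can be difficult to program against, consider message framing. TCP preserves no message boundaries, so an application exchanging discrete records over its byte pipe must invent its own framing and loop over partial reads. The Python sketch below is a generic illustration of that chore, not any particular product's code.

    import socket
    import struct

    def recv_exact(sock: socket.socket, count: int) -> bytes:
        """Read exactly `count` bytes from a TCP stream, looping over partial reads."""
        chunks = []
        while count > 0:
            data = sock.recv(count)
            if not data:
                raise ConnectionError("peer closed the stream mid-record")
            chunks.append(data)
            count -= len(data)
        return b"".join(chunks)

    def recv_record(sock: socket.socket) -> bytes:
        """Records must be length-prefixed because TCP itself has no message boundaries."""
        (length,) = struct.unpack("!I", recv_exact(sock, 4))
        return recv_exact(sock, length)

Every application that exchanges records over TCP ends up writing some version of this loop.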

Figure 1. Throughput testing of TCP versus MTP across 72 live Internet paths of varying capacities.

Designers looking for ways to work around TCP's well-known performance and scaling limitations have had few options. Compression and caching only take you so far. When high congestion, loss, and latency enter the picture, TCP's fundamental inefficiencies become intractable. The traditional bare-bones alternative, UDP/IP, offers only fire-and-hope packet service and is thus impractical for any serious data transfer.

Enter the Multipurpose Transaction Protocol (MTP/IP). Developed in recent years to address the growing problems with TCP performance on WANs, MTP approaches the concept of a transport protocol with an entirely new data model. Taking cues from the high-volume, request-response pattern of modern network applications, Core MTP follows a simplified transaction data model. Each core network operation consists of small request datagrams exchanged for a potentially huge collection of response datagrams. Combined with a more robust packet design, this simpler model eliminates overhead like three-way handshakes and time-wait states. These core transactions can then be modularly combined to create more sophisticated data models, with only the minimum overhead appropriate to the task.
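
The article does not detail MTP's wire format, but the shape of a transaction is easy to picture. The toy Python sketch below uses plain UDP and made-up message formats to show the pattern: one small request datagram triggers a numbered stream of response datagrams, with no connection setup or teardown on either side.

    import socket

    CHUNK = 1200          # payload bytes per response datagram (assumed size)
    PORT = 9000           # hypothetical port for this toy example

    def serve(data: bytes) -> None:
        """Toy responder: one small request datagram yields many response datagrams."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", PORT))
        while True:
            _req, peer = sock.recvfrom(64)       # tiny request, no handshake
            chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
            for seq, chunk in enumerate(chunks):
                # sequence numbers let the receiver detect gaps
                sock.sendto(seq.to_bytes(4, "big") + chunk, peer)
            sock.sendto(b"\xff\xff\xff\xff", peer)   # end-of-transaction marker

    def request(host: str) -> dict:
        """Toy requester: fire one datagram, gather the numbered responses."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(2.0)                     # a lost packet surfaces as a timeout here
        sock.sendto(b"GET", (host, PORT))
        pieces = {}
        while True:
            packet, _ = sock.recvfrom(CHUNK + 4)
            if packet == b"\xff\xff\xff\xff":
                return pieces
            pieces[int.from_bytes(packet[:4], "big")] = packet[4:]

A real transaction protocol must of course also detect and retransmit lost datagrams and pace its sending; the sketch omits all of that and shows only the request-for-many-responses shape of the exchange.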

The real advantages, however, come when the bits hit the wire. The flow control and error recovery algorithms of MTP explicitly assume that the network may be congested and lossy, and that bandwidth and latency may vary wildly from one moment to the next.

MTP takes advantage of the half-duplex nature of most data transfers by positioning its core flow-control algorithms at the receiving end rather than the sending end. This gives it a much more immediate and realistic view of what is going on in the network. Better input means faster adaptation to the network conditions of the moment, both at startup and throughout the transaction. As a result, MTP is able to keep the data pipe full, but not overflowing.
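
MTP's actual flow-control algorithms are not published in this article, but the receiver-side idea can be sketched. The receiver sees every arrival and every gap directly, so it can report measured goodput and loss, and the sender can pace itself to that feedback. The Python fragment below is only an illustration of that general shape, not MTP's algorithm.

    import time

    class ReceiverView:
        """Illustrative receiver-side measurement (not MTP's published algorithm)."""

        def __init__(self) -> None:
            self._reset()

        def _reset(self) -> None:
            self.start = time.monotonic()
            self.bytes_seen = 0
            self.seqs = set()

        def on_packet(self, seq: int, size: int) -> None:
            # the receiver observes every arrival and every gap directly
            self.bytes_seen += size
            self.seqs.add(seq)

        def report(self) -> dict:
            """Summarize what actually arrived; the sender paces itself to this."""
            elapsed = max(time.monotonic() - self.start, 1e-6)
            expected = (max(self.seqs) - min(self.seqs) + 1) if self.seqs else 0
            lost = expected - len(self.seqs)
            feedback = {
                "goodput_bps": 8 * self.bytes_seen / elapsed,
                "loss_rate": lost / expected if expected else 0.0,
            }
            self._reset()          # start a fresh measurement window
            return feedback

A sender using such reports would shorten its pacing interval while loss stays near zero and lengthen it when loss rises, adapting as soon as the next report arrives rather than waiting to infer trouble from missing acknowledgments.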

Figure 2. Illustration of TCP data flow across a congested WAN. Flow oscillation, scaling issues, and congestion result in wasted bandwidth.

For many MTP users, this translates into data transfers seven times faster over high-speed WANs, and ten times more reliable on any network, compared with even modern TCP implementations. This holds true even for data that is already compressed and completely unique, where compression and caching break down. Most importantly, the results are scalable: the adaptive nature of the protocol means no setup tuning or special equipment is needed.

A case in point was Motorola's adoption of ExpeDat, an MTP/IP-based file transfer product also produced by Data Expedition, Inc. (DEI). Motorola has offices and engineers all over the world who perform daily engineering simulations, each involving gigabytes of inputs and outputs. Building and maintaining a high-performance computing cluster in every office would be prohibitively expensive, so engineers had to transfer jobs across the corporate WAN, a task that could take hours. Once the data was loaded, they often had to wait in a queue for their local cluster to become available, because transferring the data to idle but more distant clusters took even longer.

When engineers at the Advanced Computing Engineering (ACE) group began testing ExpeDat over MTP/IP, they saw their transfer rates jump from around 6 megabits per second to over 42 megabits per second. Transfers that had been taking 90 minutes were now done in just 12. They had always known their WAN had a theoretical capacity of 45 megabits per second, but they had been unable to achieve it using TCP-based technology. ACE engineers immediately saw the potential: simulation jobs could be load-balanced on a global scale. Managers saw the potential to increase service levels with the same or even fewer clusters.

Figure 3. Illustration of MTP data flow across the same WAN as Figure 2. Ability to scale and adapt to third-party traffic raises utilization to near 100-percent.

Initial deployment required two steps: distributing the server software to the clusters and the client software to the engineers. But first they had to get corporate IT to sign off on a new (and, back then, unknown) technology. MTP/IP borrows the UDP/IP packet format, so it works on any standard IP network. Still, network managers wanted to be sure that the performance gains were not coming at the expense of other users. Several weeks of testing proved there were no disruptions. Even so, managers only really hit their comfort zone when they realized that MTP/IP has built-in bandwidth controls, allowing them to set explicit limits on things like maximum bit-rate and latency.
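
The article does not say how ExpeDat exposes those controls, but the standard technique for holding a sender under a configured bit-rate is a token-bucket pacer. The sketch below is a generic example of that technique, not DEI's implementation.

    import time

    class TokenBucket:
        """Generic token-bucket pacer: keep the average rate at or below max_bps.

        burst_bytes must be at least as large as the biggest single send.
        """

        def __init__(self, max_bps: float, burst_bytes: int = 64 * 1024) -> None:
            self.rate = max_bps / 8.0          # bytes per second
            self.capacity = burst_bytes
            self.tokens = float(burst_bytes)
            self.last = time.monotonic()

        def wait_for(self, nbytes: int) -> None:
            """Block until sending nbytes stays within the configured cap."""
            while True:
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= nbytes:
                    self.tokens -= nbytes
                    return
                time.sleep((nbytes - self.tokens) / self.rate)

Calling wait_for() before every datagram keeps the long-run sending rate at or below the configured maximum, no matter how quickly the application tries to push data.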

Server deployment ran into only one hitch: firewalls. As with any new application, a port had to be opened. But this had to be done on all of the firewalls, and it had to be the right port. Early on, there were frequent mix-ups between TCP ports and UDP ports (they are different) and administrators who did not realize how many firewalls they actually had. Improvements to the software documentation, and a new diagnostic tool that can trace the location of an offending firewall, were needed to resolve those issues.
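
A first-pass check for that kind of problem is simply to send a datagram at the server's UDP port and see whether anything comes back within a timeout. The probe below is a generic sketch of that idea; it is not DEI's diagnostic tool and cannot locate which firewall is dropping the traffic.

    import socket

    def udp_port_responds(host: str, port: int, timeout: float = 2.0) -> bool:
        """Send one datagram and report whether anything comes back before the timeout.

        Silence can mean a dropped packet or a firewall; a reply proves the UDP
        path is open end-to-end. This is a coarse check, not a firewall tracer.
        """
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        try:
            sock.sendto(b"probe", (host, port))
            sock.recvfrom(2048)
            return True
        except socket.timeout:
            return False
        except OSError:
            # an ICMP "port unreachable" surfaces here: packets got through,
            # but nothing is listening on that port
            return False
        finally:
            sock.close()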

Client deployment was surprisingly smooth. Since the end-users were already familiar with the FTP and SCP interfaces upon which ExpeDat had been modeled, they adapted readily to the new software. The biggest problem early on was keeping up with the flood of requests for new features.

The ability to deploy simulations globally led to a new goal — provide the engineers with an automated system for finding idle clusters, monitoring their jobs, and transferring the data for them. This led to the development of a custom embedded MTP/IP application that works beneath a web-browser front end to provide direct command and control functionality to both end-users and a central queue management server. DEI developed the underlying communication components so Motorola could concentrate on the user interface and system control components. The new "GlobalSim" system has since become the backbone of ACE's services to Motorola's engineers.

Now ACE engineers are looking beyond the ExpeDat products to create their own source-level applications directly on top of the MTP/IP development kits. For those used to programming against the TCP socket model, MTP presents a cornucopia of choices and tools. These range from the Core SDK, which provides direct access to simple transactions, up to the highly abstracted ExpeDat Client SDK, which provides call-and-forget document transfer to ExpeDat servers.

The advantage of this flexibility is substantial. Not only do software developers have an alternative to TCP, they have a robust variety of interfaces that can be adapted to their needs. Small conveniences, like being able to check how much data will fit into a buffer before committing to writing it, become big development-time savers. Having the option to eliminate buffers and data copying entirely is even better.
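
For contrast, the standard TCP socket API offers no way to ask how much room is left: a non-blocking send() simply accepts some prefix of the data, and the application must track and retry the remainder itself, as in this sketch.

    import select
    import socket

    def send_all_nonblocking(sock: socket.socket, data: bytes) -> None:
        """Send data over a TCP socket that has been put in non-blocking mode.

        There is no way to ask up front how much buffer space is free; you
        attempt a write, learn how much was taken, and loop on the rest.
        """
        view = memoryview(data)
        while view:
            # wait until the socket reports it can accept *something*
            select.select([], [sock], [])
            try:
                sent = sock.send(view)      # may accept only part of the data
            except BlockingIOError:
                continue                    # buffer filled up between checks
            view = view[sent:]

An interface that reports available space before the write lets the application skip this bookkeeping altogether.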

Engineers at Motorola, and elsewhere, are just beginning to design the next generation of applications that will take advantage of this new data transport model and the flexibility it offers. There will surely be challenges, not least because it requires a new way of thinking about what networks can do. As they progress, I expect to be able to report on previously unimagined applications of a truly high-performance Internet.

This article was written by Seth Noble, Ph.D., founder of Data Expedition, Inc. (Quincy, MA). For more information, contact Dr. Noble at Data Expedition, Inc.