Dramatic Increases in Data Transfer Speed
Despite years of innovations in WAN performance, the throughput of applications over WANs has historically been inadequate. Some approaches focus on bandwidth expansion; some optimize specific protocols; some attack the problem through caching; some prioritize traffic. Compression, caching, QoS, WAFS, and other technologies have their place, but rarely has a vendor delivered an integrated solution that addresses the multiple root causes of poor application performance.
"Within the first week we had reduced WAN traffic by a factor of seven."
Riverbed tackles the whole picture. Their Steelhead appliance confronts all the key factors that slow application performance over WANs: high latency, limited bandwidth, “chatty” transport protocols, and even “chattier” applications.
As you know, WAN connections typically have lower bandwidth and higher latency than LAN links, and both hurt application performance. There are four distinct bottlenecks. One is bandwidth: no application can send data faster than the link allows. The other three are forms of latency bottleneck that can throttle applications even when bandwidth appears to be plentiful.
TCP’s end-to-end acknowledgement behavior creates the first latency bottleneck. At any moment, TCP limits the sender to a window of unacknowledged packets in flight between client and server. When the window is full, the sender can’t transmit additional packets until the receiver acknowledges at least some of what has already been sent. If this maximum window is too small for the link, throughput is limited by the rate at which each full window can be sent to the other side and acknowledged.
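The window-limited ceiling described above is easy to quantify: throughput can never exceed one window per round trip. The short Python sketch below works through the arithmetic with illustrative numbers (a classic 64 KB window, a 45 Mbps link, 100 ms RTT); these figures are assumptions for the example, not measurements from any particular deployment.

```python
# Illustrative numbers only: a 64 KB TCP window over a 45 Mbps
# link with 100 ms round-trip time.
WINDOW_BYTES = 64 * 1024   # classic maximum TCP window (no window scaling)
LINK_MBPS = 45.0           # available bandwidth (e.g. a T3 link)
RTT_SECONDS = 0.100        # round-trip time

# Window-limited throughput: at most one full window per round trip.
max_throughput_bps = WINDOW_BYTES * 8 / RTT_SECONDS
link_bps = LINK_MBPS * 1_000_000

print(f"Link capacity:       {link_bps / 1e6:.1f} Mbps")
print(f"Window-limited rate: {max_throughput_bps / 1e6:.2f} Mbps")
print(f"Utilization:         {max_throughput_bps / link_bps:.0%}")
```

With these numbers the sender tops out near 5.2 Mbps, leaving almost 90% of the 45 Mbps link idle, which is why adding bandwidth alone does not help a window-limited connection.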
Not only is there a (probably inadequate) maximum window size, but TCP doesn’t even consistently run at that maximum. TCP’s slow-start and congestion-control behaviors cause the second bottleneck.
TCP gradually ramps up its window size when transmission appears to be successful and sharply cuts it back when transmission appears to be unsuccessful. On high-latency, high-bandwidth links, this leaves some available bandwidth unused for extended periods. (Note: High-Speed TCP is now available on the Steelhead 5010, and supports up to 750 Mbps per connection for blazing fast data replication and backup.)
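The ramp-up-then-cut-back dynamic can be sketched with a toy simulation. The Python below is a deliberately simplified additive-increase/multiplicative-decrease model (the periodic loss pattern, the link capacity of 50 packets, and the omission of slow-start thresholds and timeouts are all assumptions for illustration), but it shows why average utilization sits well below the link’s capacity.

```python
# Toy AIMD model: grow the window by 1 packet per RTT, halve it on loss.
# Assumption: a loss every 20 RTTs; real loss is bursty and irregular.
def simulate_window(rtts, capacity_pkts, loss_every=20):
    """Return the effective window (in packets) observed at each RTT."""
    window, history = 1.0, []
    for t in range(rtts):
        history.append(min(window, capacity_pkts))
        if (t + 1) % loss_every == 0:
            window = max(1.0, window / 2)  # multiplicative decrease on loss
        else:
            window += 1                    # additive increase per RTT
    return history

history = simulate_window(rtts=100, capacity_pkts=50)
avg = sum(history) / len(history)
print(f"Average window: {avg:.1f} of 50 packets ({avg / 50:.0%} utilization)")
```

Even with this gentle loss schedule, the average window stays far below the 50-packet capacity, i.e. a large fraction of the pipe is never filled.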
The application protocols that run on top of TCP cause the third latency bottleneck. Just as plentiful bandwidth goes unused when TCP is limited by its window and acknowledgement behavior, fixing the TCP layer is irrelevant if the application itself must stop and wait for each large message to be acknowledged.
Application protocols like HTTP and FTP, which were originally designed for wide-area environments, are largely immune to this third latency bottleneck. Application protocols originally designed for use on LANs (like Microsoft Windows file sharing via CIFS) can be severely affected by it.
Steelhead Monitors Data Patterns to Reduce Repetition
Historically, the various points of attack in WAN improvement have addressed only specific bottlenecks or a narrow set of protocols. WAFS technology, while excellent for alleviating bandwidth issues and application latency, does not address TCP latency. TCP optimization software addresses the TCP latency challenge, but doesn’t cover bandwidth and application issues.
Riverbed was born of a new approach that merges these historically disparate solutions into one: the company pioneered the category of Wide Area Data Services (WDS).
Riverbed’s Steelhead appliance architecture has several key elements that differentiate it from other approaches. They were the first to use a disk-based architecture to store network traffic. The disk makes it possible to go back in time to find old repeated data patterns, even when that data last traversed the network days or months earlier. Devices using RAM alone can be easily overrun by traffic levels and file sizes, killing performance.
The Steelhead appliance’s hard disk stores all WAN traffic, and proprietary algorithms remove repetition from it. Two key application-independent technologies do the work: Scalable Data Referencing (SDR), which removes redundant TCP traffic, and Virtual Window Expansion (VWE), which reduces round trips.
SDR represents and stores data within and across the network in a unique, protocol-independent format, so that subsequent transmissions of the same data can be replaced with compact references. Rather than attempting to replicate data blocks from a disk volume, files from a file system, e-mail messages, or Web content from application servers, Steelhead appliances store data in a format independent of protocol and application.
Riverbed’s SDR algorithms work across all TCP applications, including Microsoft Office, Lotus Notes, CAD, ERP, NFS, FTP, and HTTP, to ensure the same data is never sent more than once over the WAN. SDR typically reduces bandwidth consumption by 60% to 95%, and for some applications by more than 99%, leaving traffic at less than 1% of previous levels.
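The general idea behind reference-based deduplication can be sketched in a few lines of Python. This is not Riverbed’s actual SDR algorithm (the fixed 256-byte chunk size, the SHA-256 digests, and the token format are all simplifying assumptions); it only illustrates the send-data-once, send-references-afterward pattern the text describes.

```python
import hashlib

CHUNK = 256  # fixed-size chunks for simplicity; production systems
             # typically use content-defined boundaries instead

def dedupe(stream: bytes, store: dict) -> list:
    """Split a byte stream into chunks; emit ('data', ...) for chunks the
    far side has never seen and ('ref', ...) for chunks it already holds."""
    tokens = []
    for i in range(0, len(stream), CHUNK):
        chunk = stream[i:i + CHUNK]
        digest = hashlib.sha256(chunk).digest()
        if digest in store:
            tokens.append(("ref", digest))       # seen before: send a reference
        else:
            store[digest] = chunk                # first sight: send the data
            tokens.append(("data", digest, chunk))
    return tokens

store = {}
payload = b"A" * 1024 + b"B" * 1024              # highly repetitive payload
first = dedupe(payload, store)
second = dedupe(payload, store)                  # a retransmission of the same data
sent = sum(len(t[2]) for t in first if t[0] == "data")
print(f"First pass sent {sent} raw bytes of {len(payload)}; "
      f"second pass sent only references")
```

Because the payload is internally repetitive, even the first pass ships only two unique chunks (512 bytes of 2048), and a retransmission ships no raw data at all, just short references.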
Steelhead also includes application protocol-specific optimizations, including elements specialized for HTTP and FTP. The most important of these is a set of algorithms known as Transaction Prediction, which minimizes the number of round trips taken across the WAN without interfering with client-server semantics.
With specific knowledge of CIFS, the Windows file sharing protocol, Steelhead appliances can predict upcoming client requests, issue requests on behalf of the client, and then “bundle” many transactions into a few. In addition, appliance data stores can be automatically and transparently pre-populated with new file system or email data to accelerate initial access by the client.
Each round trip avoided saves a discrete amount of time, independent of how much bandwidth is available. When thousands of round trips are avoided, the time saved can be measured in minutes or even hours.
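The back-of-envelope arithmetic behind that claim is simple: every avoided round trip saves one RTT regardless of bandwidth. The figures below (80 ms RTT, 4,000 round trips for a chatty file-sharing exchange) are illustrative assumptions, not measurements.

```python
# Illustrative numbers: each round trip costs one RTT no matter how
# much bandwidth is available.
RTT_MS = 80           # assumed WAN round-trip time
round_trips = 4000    # e.g. a chatty CIFS open/read sequence

wasted_seconds = round_trips * RTT_MS / 1000
print(f"{round_trips} round trips at {RTT_MS} ms RTT = "
      f"{wasted_seconds:.0f} s ({wasted_seconds / 60:.1f} minutes)")
```

At these assumed values, protocol chatter alone accounts for over five minutes of pure waiting, which is why collapsing thousands of round trips into a handful matters far more than raw link speed.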
Request more Riverbed information or a demo
Arrange an onsite evaluation of Riverbed products