Thursday, April 29, 2004
When we consider the problem of delivering data from a few servers to a large number of hosts spread all over the world, there are a number of independent reasons why routing through intermediate locations gains us security, reliability, and performance.
Our tests show that even without considering the effect of additional
(orthogonal) optimizations, such as taking advantage of open connections
or compression in its various forms, we can get a twofold speedup on
16KB downloads for a quarter of all pairs of our North American data centers,
just by picking the appropriate intermediate locations. The downloads
that benefit the most happen between hosts that are
The system we designed continuously collects ICMP data, and for each pair of hosts that might wish to communicate, it finds a small set of candidate hosts that, based on the ICMP data, are suitable as intermediates for the data transfer. When an actual transfer is to take place, the host initiating it relies on data about recent transfers to compare the direct and indirect alternatives it has, and uses the best of them.
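The selection step above can be sketched as follows. This is a minimal, hypothetical illustration (the function names and the latency-matrix representation are assumptions, not the talk's actual system): given measured latencies between hosts, it ranks candidate intermediates by the relayed latency src → h → dst and then compares the best relay against the direct path.

```python
# Hypothetical sketch of one-hop overlay route selection, assuming we hold
# a matrix of pairwise latencies (e.g. derived from ICMP probes).

def candidate_intermediates(lat, src, dst, k=3):
    """Return up to k hosts whose relayed latency src -> h -> dst is lowest."""
    hosts = [h for h in range(len(lat)) if h not in (src, dst)]
    hosts.sort(key=lambda h: lat[src][h] + lat[h][dst])
    return hosts[:k]

def pick_route(lat, src, dst, k=3):
    """Compare the direct path against the best one-hop relay.

    Returns (latency, intermediate); intermediate is None for the direct path.
    """
    best = (lat[src][dst], None)
    for h in candidate_intermediates(lat, src, dst, k):
        relayed = lat[src][h] + lat[h][dst]
        if relayed < best[0]:
            best = (relayed, h)
    return best

# Example: host 2 violates the triangle inequality for the pair (0, 1),
# so relaying through it beats the direct path.
lat = [
    [0, 100, 20],
    [100, 0, 30],
    [20, 30, 0],
]
print(pick_route(lat, 0, 1))  # -> (50, 2): relay via host 2
print(pick_route(lat, 0, 2))  # -> (20, None): direct path wins
```

In the system described, a host would additionally fold in data about recent actual transfers rather than relying on probe latencies alone.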
This represents joint work with Claudson Bornstein, Tim Canfield, and Satish Rao.
His research interests include algorithm design, parallel computing, computational geometry, VLSI design, and scientific and numerical computing. He is well known for his work in primality testing, graph isomorphism, parallel tree contraction, and graph separators.