
    Re: a vote for asymmetric connections in a session



    At 08:13 AM 9/6/00 -0700, Matt Wakeley wrote:
    >Mike Kazar wrote:
    >
    > > Folks,
    > >
    > > Here are my reasons for preferring the asymmetric connection model to the
    > > symmetric connection model, in decreasing order of importance.
    > >
    > > 1.  Implementing a sliding window protocol for seqRn processing is likely
    > > to be as hard as, or harder than, implementing TCP.  Getting something that
    > > works will be easy, but getting something that works well, even when
    > > various iSCSI TCP connections are running at different rates is likely to
    > > be an "interesting" research problem.  Yet connections running at different
    > > rates are likely to occur in real life quite frequently, sometimes due to
    > > different connections seeing different packet drop events in a somewhat
    > > congested network, sometimes due to different connections taking different
    > > paths (the whole *goal* of multiple connections in a session, after all).
    >
    >Well, I disagree.  Implementing sliding windows is not that hard.  If sliding
    >windows were the toughest thing to implement in iSCSI, this would be a cake
    >walk.
    
    My point isn't that sliding window functionality is hard to implement; it 
    isn't.  But getting good performance in the event of network congestion is 
    the last 10% of the problem that takes 90% of the work.  Remember, you'll 
    be dealing with N different connections, all potentially running at 
    different speeds, at differing times, and you have to choose a window size 
    that makes sense based upon the dynamic behavior of all of these TCP 
    connections.
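    To make the shape of the problem concrete, here is a minimal sketch (entirely
    hypothetical; the class name and window policy are my own, not from any iSCSI
    draft) of a session-wide seqRn sliding-window receiver.  The point is simply
    that one state machine must absorb arrivals from all N connections, buffering
    whatever arrives out of order:

```python
# Hypothetical sketch of a session-wide sliding-window receiver for
# iSCSI seqRn numbers.  PDUs arrive interleaved from N TCP connections,
# so out-of-order arrival across the session is the norm.

class SeqRnWindow:
    def __init__(self, window_size):
        self.window_size = window_size   # max outstanding seqRn values
        self.expected = 0                # next in-order seqRn
        self.buffered = {}               # out-of-order PDUs, keyed by seqRn

    def receive(self, seqrn, pdu):
        """Accept a PDU if it falls inside the window; deliver in order."""
        if not (self.expected <= seqrn < self.expected + self.window_size):
            return []                    # outside the window: drop (or NAK)
        self.buffered[seqrn] = pdu
        delivered = []
        # Drain the contiguous run starting at the expected seqRn.
        while self.expected in self.buffered:
            delivered.append(self.buffered.pop(self.expected))
            self.expected += 1
        return delivered
```

    Note that everything hard lives outside this sketch: the code is trivial, but
    choosing `window_size` sensibly as the N connections speed up and slow down
    is the part I am calling a research problem.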
    
    I think it might be illuminating to try to figure out some algorithms to 
    keep the throughput high even when one of the connections drops a packet 
    and cuts its TCP window size down significantly.  All the other connections 
    are still pumping in data at full speed, most of which is now stuck behind 
    data from the slow connection.  Do you just write off half your performance 
    until the slow connection picks up speed again?  Does the sender just stop 
    using the slow connection for "a while"?  Do you crank up the iSCSI window 
    size so you can buffer data from the fast connections until the slow 
    connection increases its window again?  And what if instead of running 
    slowly because of a dropped packet, the slow connection is slow because it 
    has been routed over a slower physical link?  At that point, all you can do 
    (assuming you can figure out this is what's happening) is have the sender 
    send less data through the slow link.  This whole problem sounds 
    non-trivial to me.
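    A toy simulation makes the buffering cost visible (the round-robin seqRn
    assignment and all names here are my own assumptions, chosen only for
    illustration): the fast connections' PDUs pile up behind the slow
    connection's missing seqRns, and nothing can be delivered in order until the
    slow connection catches up.

```python
# Toy model: seqRn i is carried by connection i % num_conns (round-robin
# assignment, assumed for illustration).  The "slow" connection's PDUs
# arrive only after all of the fast connections' PDUs have.
def head_of_line_stall(num_conns, pdus_per_conn, slow_conn=0):
    arrivals = []
    for conn in range(num_conns):
        if conn != slow_conn:
            arrivals += [i * num_conns + conn for i in range(pdus_per_conn)]
    arrivals += [i * num_conns + slow_conn for i in range(pdus_per_conn)]

    expected = 0        # next in-order seqRn
    buffered = set()    # PDUs held waiting for the slow connection
    peak = 0            # worst-case buffering the receiver needed
    for seqrn in arrivals:
        buffered.add(seqrn)
        peak = max(peak, len(buffered))
        while expected in buffered:   # drain the contiguous run
            buffered.discard(expected)
            expected += 1
    return peak

# With 4 connections and 100 PDUs each, the receiver ends up holding
# (4-1)*100 + 1 = 301 PDUs before anything can be delivered in order.
```

    That worst case is exactly the "crank up the iSCSI window size" option above,
    priced out in buffer space; the other options trade it for idle bandwidth
    instead.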
    
    
    > >
    > >
    > > Furthermore, the sliding window state machine for a given session's seqRn
    > > processing is running at N times the event rate of an individual TCP
    > > connection's state machine (N is the # of connections in a session), making
    > > it even more challenging to implement.
    >
    >I disagree again.  The TCP sliding window SM runs once per TCP
    >segment.  If just iSCSI command messages were crammed down the connection,
    >that means the session's seqRn processing will be done once every 48 bytes.
    >Every time an iSCSI message is received, there is a lot of processing invoked.
    >Adding to the list a simple sliding window algorithm is not a challenge.
    
    I think we just disagree on terms here.  If you have N connections in a 
    particular session, you have N independent TCP state machines that do not 
    communicate with each other, *each one* getting an event every 48*N bytes 
    of session traffic.  There is only one iSCSI session instance state 
    machine, and it is seeing a new event for its seqRn processing state 
    machine every 48 bytes of session traffic.  That was my point.
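    The two rates can be checked with trivial arithmetic (the 48-byte PDU size is
    from Matt's message; the 1 MB of session traffic is just an illustrative
    figure of mine):

```python
# Illustrative arithmetic only: assume 48-byte iSCSI command PDUs spread
# evenly over N connections in one session.
def event_rates(n_connections, session_bytes=1_000_000, pdu_size=48):
    session_events = session_bytes // pdu_size        # one per PDU, one session SM
    per_tcp_events = session_events // n_connections  # each of the N TCP SMs
    return session_events, per_tcp_events
```

    For N = 4, the single session state machine sees roughly four times the event
    rate of any one TCP state machine, which is the asymmetry I was pointing at.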
    
    
    > >
    > >
    > > 2.  It is easier to implement target-specific load sharing mechanisms if
    > > the target gets to choose, perhaps with input from the host, the connection
    > > on which a particular transfer should be performed.  In the symmetric
    > > proposal, the host chooses the connection to use completely on its own.
    >
    >And it's good that the host chooses the connection.  It is the one that 
    >has set
    >up DMA structures on a particular connection to get the data in/out with
    >minimal CPU intervention.
    
    I wouldn't expect it to be significantly harder for the host to set up a 
    DMA transfer for a connection whose final selection was done by the target 
    than for one the host selects completely on its own.
    
             Mike
    
                     (kazar@spinnakernet.com)
    
    > >
    > >
    > >         Mike
    > >                 (kazar@spinnakernet.com)
    >
    >In my opinion, these are not "good" reasons to vote for one or the other.
    >
    >Matt Wakeley
    >Agilent Technologies
    
    

