
    Re: iSCSI: Flow Control



    David Robinson wrote:
    
    > > The latter case could overwhelm the
    > > SCSI command queue, but still be within the TCP window of
    > > the receiving node.
    >
    > This I don't understand. The act of TCP receiving data into
    > its window (and thus causing the window to shrink) is
    > completely independent of the application (aka iSCSI) grabbing
    > the data (causing the window to grow). As an extreme example,
    > if a naughty initiator sends 1,000 commands down to a target
    > that has a command queue depth of 2, will the target be
    > overwhelmed?  I think not.
    
    And let's also stipulate that there is only one TCP connection, and that the
    commands are all write commands...
    
    >  A sane implementation of the target will detect that there
    > is data on the connection, read a chunk large enough to grab
    > the first command and stuff it in the command queue, read the
    > next command and stuff it in the command queue, then simply
    > wait for one of the commands to finish, leaving the other 998
    > commands in the TCP receive buffers.  When one of the commands
    > completes, it sucks in the next command and puts it in the queue.
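    The read loop described above can be sketched in a few lines. The fixed
    command size, queue depth, and function names below are illustrative
    assumptions for the sketch, not anything taken from the iSCSI drafts:

```python
import socket
from collections import deque

CMD_SIZE = 48      # hypothetical fixed command PDU size
QUEUE_DEPTH = 2    # target's SCSI command queue depth

def recv_exact(conn: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from the connection."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf

def target_loop(conn: socket.socket, execute) -> None:
    queue = deque()
    while True:
        # Fill the command queue up to its depth; everything beyond
        # that stays in the kernel's TCP receive buffer, shrinking the
        # advertised window until we read again.
        while len(queue) < QUEUE_DEPTH:
            queue.append(recv_exact(conn, CMD_SIZE))
        # Finish one command, then pull the next one off the stream.
        execute(queue.popleft())
```

    The point of the sketch: the 998 unread commands simply sit in the kernel's
    receive buffer, and TCP's window shrinks on its own; the application never
    touches the window.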
    
    Ok, so now the SCSI layer processes the first command and sends an R2T
    (XFER_RDY in FC terms) to the initiator.  Now the initiator sends the data
    down the same TCP connection, and it gets stuck behind all those 998
    commands in the TCP receive buffers.  The command can't complete because it
    can't get the data, and the data can't be delivered because there's no room
    for the commands in front of it.  Deadlock.  Do you see the issue now?
    (This is a good example of why the single TCP connection model, be it
    synchronous or asynchronous, is bad.)
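    This head-of-line blocking is easy to reproduce in a toy model of the
    single ordered byte stream.  The PDU tuples and queue depth below are
    illustrative assumptions, not the iSCSI wire format:

```python
from collections import deque

QUEUE_DEPTH = 2

# Model the single TCP connection as an ordered stream of PDUs:
# 1,000 WRITE commands, and then -- after the target's R2T for
# command 0 -- the data for command 0, queued behind the rest.
stream = deque([("cmd", i) for i in range(1000)] + [("data", 0)])

cmd_queue = []   # target's SCSI command queue (depth 2)
completed = []

progress = True
while progress:
    progress = False
    # The target reads from the head of the stream only while it has
    # room in its command queue; commands beyond that depth stay
    # buffered, sitting ahead of the data PDU.
    while stream and stream[0][0] == "cmd" and len(cmd_queue) < QUEUE_DEPTH:
        cmd_queue.append(stream.popleft()[1])
        progress = True
    # A queued command completes only when its data reaches the head
    # of the stream -- which it never does here.
    if stream and stream[0][0] == "data" and stream[0][1] in cmd_queue:
        tag = stream.popleft()[1]
        cmd_queue.remove(tag)
        completed.append(tag)
        progress = True

print("completed:", completed)        # -> completed: []
print("stuck PDUs:", len(stream))     # -> stuck PDUs: 999  (deadlock)
```

    No step can make progress: the queue is full of commands waiting on data,
    and the data is behind 998 commands the target has no room to accept.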
    
    -Matt
    
    >
    > > That is, unless you want iSCSI to talk
    > > to TCP to tell it to close the window.  But that would
    > > violate layering principles, I believe.
    >
    > The above example does not violate layering; the TCP window closes
    > because the application does not read all of the data, because its
    > command queue is too small.  Now, a clever implementation can do
    > interesting tricks in hardware, where it copies the data stream
    > straight into the command queue based on its interpretation of the
    > data stream.  The overly clever (and thus dangerous) implementation
    > could also play with the window and not open it up until *after* the
    > command is executed.  But again, this is an implementation hack that
    > saves a copy at the cost of violating layering; the proposed protocol
    > itself does not violate layering.
    >
    > Still don't see the issue.
    >
    >         -David
    >
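    In userland sockets terms, the closest portable analogue of the "don't
    open the window until *after* the command is executed" trick quoted above
    is simply to leave the bytes in the kernel's receive buffer (peek without
    draining) until the command completes.  The fixed size and names below
    are assumptions for the sketch:

```python
import socket

CMD_SIZE = 48   # hypothetical fixed command PDU size

def execute_then_drain(conn: socket.socket, execute) -> None:
    # Peek at the command without draining the receive buffer; while
    # the bytes sit there, TCP keeps the advertised window shrunk.
    header = conn.recv(CMD_SIZE, socket.MSG_PEEK)
    execute(header)
    # Only now consume the bytes, letting the kernel reopen the window.
    conn.recv(CMD_SIZE)
```

    This gets the flow-control effect without touching TCP internals, though it
    shares the same drawback: the connection is stalled for everything queued
    behind that one command.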
    
    


Last updated: Tue Sep 04 01:07:09 2001
6315 messages in chronological order