    RE: Status summary on multiple connections



I thought we had hashed this thing to death in a previous thread.
    Now we have to go through all the issues all over again.
    
    -----Original Message-----
    From: John Hufferd/San Jose/IBM [mailto:hufferd@us.ibm.com]
    Sent: Wednesday, September 27, 2000 3:53 PM
    To: ips@ece.cmu.edu
    Subject: RE: Status summary on multiple connections
    
    
    
    David Black,
Let me vote that there is merit in having two Asymmetric Conversations per
    Session.  Even though I do appreciate the request for a minimum of two,
    I am troubled by the fact that I think many folks have already built
    iSCSI Initiators and Targets that use just one connection per session.
    
Now in Robert Snively's note below, he talks about the value of the
    Wedge Drivers, and some of us also buy into that.  However, his
    arguments were based on spreading the traffic across NICs, etc.  I do
    not know if that is required, but having one TCP/IP connection that
    only carries Commands, and one that only carries Data, seemed to solve
    almost all of the complaining and pacing/credit/sliding window
    conversations we have been having.  This is true even when the second
    TCP/IP connection (of the session) is on the SAME NIC.  (Robert, I
    think that all your wedge statements still hold true with 2 Asymmetric
    Connections per session, if both connections are on the same NIC.)
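    As a rough sketch of what such an Asymmetric Dual Connection session
    could look like (the port number, opcodes, and PDU framing below are
    invented for illustration, not taken from any draft), the session just
    opens two TCP connections to the same target and dedicates one to
    commands and one to data:

        import socket
        import struct

        ISCSI_PORT = 5003        # placeholder port, not a real assignment
        OP_COMMAND = 0x01        # placeholder opcodes for this sketch
        OP_DATA    = 0x05

        def open_adcs_session(target_ip):
            """One session, two TCP connections: one carries only commands,
            the other only data.  Both may land on the same NIC; the split
            is purely logical."""
            cmd_conn  = socket.create_connection((target_ip, ISCSI_PORT))
            data_conn = socket.create_connection((target_ip, ISCSI_PORT))
            return cmd_conn, data_conn

        def send_pdu(conn, opcode, payload):
            """Toy PDU framing: 1-byte opcode, 4-byte length, then payload."""
            conn.sendall(struct.pack("!BI", opcode, len(payload)) + payload)

        def issue_write(cmd_conn, data_conn, cdb, data):
            """The command goes on the command connection and the data on
            the data connection, so a stalled data stream (closed TCP
            window) never blocks the flow of commands."""
            send_pdu(cmd_conn, OP_COMMAND, cdb)
            send_pdu(data_conn, OP_DATA, data)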
    
Except for that, much of the rest of Robert's points are "right on".
    So if we can, just for a while, talk about whether the Dual
    Conversation Asymmetric approach is acceptable, I think you may very
    well find that a lot of issues go away.  And I think that was Matt's
    point.
    
If this is correct, then it is important to have a minimum of two
    conversations per Session (maybe two is all that is really needed).
    But what about the implementations that are currently in flight?  I
    think we may have to permit a single connection per Session, but
    strongly discourage it, since over longer distances, with
    implementations that have less memory, or under some kind of network
    stress, the recovery approaches are much more draconian (dropping the
    connection, etc.).  That means that all the implementations that are
    working today will continue to work, but will not be nearly as good as
    the Asymmetric Double Connection per Session (ADCS?) versions.
    
Now Matt's point is that, for folks just starting, having to code both
    the Double and the Single (with all the extra recovery/pacing stuff
    required in the Single Conversation per Session) seems wrong.  And I
    think I have heard him state that updating the Driver code from a
    Single to a Double (especially if within the same NIC) was a
    non-problem.
    
If my statements about ADCS are correct, the testing effort should be
    a lot more straightforward, since all the other various "boundary
    conditions" will not have to be tested for; they will not exist.
    
And if Matt's statements are correct, that makes me wonder why we
    wouldn't go that way as a requirement.  Does anyone that currently has
    iSCSI drivers have a problem with moving to 2 Asymmetric Connections
    per Session as a requirement?  If so, please state why.
    
    .
    .
    .
    John L. Hufferd
    
    
    
    Robert Snively <rsnively@Brocade.COM>@ece.cmu.edu on 09/27/2000 08:46:26 AM
    
    Sent by:  owner-ips@ece.cmu.edu
    
    
    To:   Julian Satran/Haifa/IBM@IBMIL, ips@ece.cmu.edu
    cc:
    Subject:  RE: Status summary on multiple connections
    
    
    
    Julo,
    
    A third alternative does exist and may be preferable.  That is
    to reflect the present broad usage of SCSI that uses wedge drivers
    to achieve parallelization, bandwidth aggregation, and high availability.
    By that definition, the proper number of connections per I_T nexus
    is one and the proper number of connections per session is also one.
    
    This recognizes the reality that there is really little to gain
    in virtualizing multiple connections into a single session image
    when you can perform even more flexible virtualization through a
    wedge driver at a higher level.
    
The single connection alternative allows a simple ordering structure
    and a simple recovery mechanism, and does not require state sharing
    among multiple NICs.  It allows bandwidth aggregation across any set
    of boundaries that is required.  Because command queuing is the rule
    in high-performance SCSI environments, latency appears only as an
    increment in host buffer requirements, except during writes that
    perform a commit function.  Those have traditionally been taken out of
    the performance path by using local non-volatile RAM to perform the
    commit functions, and then using slower, high-latency writes with less
    strict ordering requirements (relative to reads) to actually perform
    the write to media.
    
    The overheads associated with handling a multiple connection
    session of any type are basically indistinguishable from the
    overheads associated with device virtualization through wedge drivers.
    If you consider software instruction path lengths for the total
    functionality, you can conceive of the multiple connection session
    as simply a TCP/IP wedge driver.
    
    Bob
    
    >  -----Original Message-----
    >  From: julian_satran@il.ibm.com [mailto:julian_satran@il.ibm.com]
    >  Sent: Tuesday, September 26, 2000 9:48 AM
    >  To: ips@ece.cmu.edu
    >  Subject: Status summary on multiple connections
    >
    >
    >
    >
    >  Dear colleagues,
    >
>  I am attempting to summarize where we stand with regard to the
    >  multiple connection issue and to the two possible models -
    >  Symmetric (S) and Asymmetric (A).
    >
    >  Many of us feel strongly that the multiple connection issue is
    >  central to the whole design and cannot be added as an afterthought.
    >  Moreover, designing the hooks to allow it later will certainly
    >  already force us to make a decision.  And both the hardware and the
    >  software designers will be ill-served if we hand them a half-baked
    >  solution.
    >
>  However, this is not an invitation to reiterate positions that were
    >  already stated.  If you feel that I have grossly misstated anything
    >  in this note, please write me, and use the mailing list only if my
    >  answer is not satisfying.
    >
>  And yes - like the chairman - I would like to make progress, but I
    >  don't see any way to do it without satisfactorily closing this
    >  issue.
    >
>  The reasons for multiple connections were discussed at some length
    >  and were very nicely summarized in a series of notes by Michael
    >  Krause (beginning of August).
    >
    >  The core reasons for having multiple connections were the need for
    >  more bandwidth and availability than a single link can supply, with
    >  a level of complexity affordable for simple installations, and with
    >  traffic engineering and management clearly separated from the
    >  transport users (SCSI).
    >
    >  The session embodies this requirement.
    >
>  The only major objections I have heard against this were those
    >  requiring that we go all the way to having one TCP connection per
    >  LU - and after a short debate this objection was practically
    >  removed.
    >
    >  The other objection we heard was that this is basically a transport
    >  issue and should be solved at the transport level.  That might be
    >  true - but since many, if not most, TCP applications do not have
    >  this requirement, it is highly unlikely that TCP is going to do
    >  connection trunking in the foreseeable future.
    >
>  iSCSI can be designed to use multiple TCP connections in one of two
    >  ways:
    >
    >  - Asymmetric - one TCP flow carries only commands; the others carry
    >    only data
    >
    >  - Symmetric - every flow carries both commands and their associated
    >    data
    >
>  The S version is the one designed in the I-D version 01.
    >  The S version requires a command ordering scheme, and that is
    >  provided by a command counter and a sliding window scheme.  It was
    >  argued that ordering needs might be more prevalent than usually
    >  thought and that a good conservative design should preserve
    >  ordering.  Ordering-per-LU (as it is designed in FCP-2 version 4)
    >  was considered impractical, as it required initiators to maintain
    >  state for each LU, while the rest of the design required initiators
    >  to maintain state only for outstanding commands.  Several comments
    >  on this list suggested that this windowing mechanism could also be
    >  used as a command-flow-control mechanism, and that is a "bonus" of
    >  the scheme.
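    As a rough sketch of the command counter and sliding window idea (the
    names cmd_sn, exp_cmd_sn, and max_cmd_sn below are invented for this
    illustration, not taken from the I-D), the target accepts only commands
    whose counter falls inside the current window and hands them to SCSI in
    counter order; sliding the upper edge forward is what doubles as
    command flow control:

        class CommandWindow:
            """Target-side sketch: accept commands whose counter lies in
            [exp_cmd_sn, max_cmd_sn] and deliver them in counter order."""

            def __init__(self, window_size):
                self.exp_cmd_sn = 0                # next counter value expected
                self.max_cmd_sn = window_size - 1  # highest value currently allowed
                self.pending = {}                  # commands received out of order

            def receive(self, cmd_sn, command):
                if not (self.exp_cmd_sn <= cmd_sn <= self.max_cmd_sn):
                    return []                      # outside the window: not accepted
                self.pending[cmd_sn] = command
                deliverable = []
                while self.exp_cmd_sn in self.pending:
                    deliverable.append(self.pending.pop(self.exp_cmd_sn))
                    self.exp_cmd_sn += 1
                    self.max_cmd_sn += 1           # slide the window open again
                return deliverable                 # hand these to SCSI, in order

    Presumably the initiator learns the current window edges from
    responses, which is the flow-control "bonus" mentioned above.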
    >
>  The A version comes in two flavors:
    >
    >  - pure A (PA), in which ONLY commands flow on one TCP connection
    >    while data flow on DIFFERENT connections (with only one data
    >    connection being selected for a command); this scheme requires a
    >    minimum of 2 TCP connections, although not necessarily on
    >    different physical links.
    >
    >  - collapsed A (CA), in which commands flow on a single TCP
    >    connection while data flow on ANY connection; this scheme
    >    requires a minimum of 1 TCP connection.
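    A toy way to see the PA/CA difference (the round-robin policy and the
    names below are invented for illustration): connection 0 is the command
    connection, and the only question is which connections are eligible to
    carry a given command's data.

        def pick_data_connection(connections, flavor, command_number):
            """Choose the single connection that will carry this command's
            data.  connections[0] is the command connection."""
            if flavor == "pure":       # PA: data never rides the command connection
                candidates = connections[1:]
                if not candidates:
                    raise ValueError("PA needs at least 2 TCP connections")
            else:                          # CA: any connection may carry data,
                candidates = connections   # so a single connection is enough
            return candidates[command_number % len(candidates)]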
    >
>  Here is a first attempt to list the benefits and drawbacks of all
    >  of them:
    >
    >  - S - benefits
    >          - well understood
    >          - simple hardware setup and/or TCP API activation
    >          - window mechanism can also be used for flow control
    >          - the minimum required is a single TCP connection
    >
    >       - drawbacks
    >          - need to maintain a window mechanism
    >          - a multiplexing mechanism has to be carefully crafted so
    >            that closing TCP windows does not severely affect command
    >            flow and performance
    >
    >
>     - PA - benefits
    >            - TCP will both order the commands and provide for flow
    >              control through the TCP window mechanism
    >            - the multiplexing mechanism is simpler, as closing TCP
    >              windows will never affect command flow
    >            - data flow can use a streamlined header (and processing)
    >
    >          - drawbacks
    >             - more complex hardware and/or software API activation
    >             - need for a minimum of two TCP connections (not links)
    >
>      - CA - benefits
    >              - TCP will order the commands
    >              - needs a single TCP connection as a minimum
    >
    >         - drawbacks
    >             - more complex hardware and/or software API activation
    >             - if command flow control is required, counters and the
    >               sliding window mechanism are required
    >             - a multiplexing mechanism has to be carefully crafted so
    >               that closing TCP windows does not severely affect
    >               command flow and performance
    >
    >
>  From the above it should be apparent - as was already pointed out
    >  by Matt Wakeley - that the CA inherits the drawbacks of S and PA,
    >  and as such the choice is really between S and PA.
    >
    >  Julo
    >
    >
    >
    
    

