
    The third alternative



    
    
    Bob,
    
    This thread went on for a long time, and the arguments against leaving it
    all to the wedge drivers were:

    - it makes the same feature available to all environments (even those not
      using wedge drivers today)
    - it is more in line with traffic engineering in networks, in that you
      don't have to work on the application to change the underlying network
      traffic characteristics, and this can even be done dynamically with
      existing networking tools
    - it better reflects layering
    - it will simplify all applications, mainly exception handling
    - it will be an interoperable solution (are wedge drivers interoperable?)
    
    
    The only argument for SCSI wedge drivers was that they EXIST ALREADY.
    That is a pretty weak argument for those building new equipment and for
    interoperability.
    
    I think that if we keep ourselves honest we have to either:

    - provide for multiple connections at the iSCSI level, as it is a
      transport problem that other TCP applications are not compelled to
      handle (I can already hear: BUT SCTP handles it!), and hope that one
      day the session concept will drift into pure transport

    - go to T10 and ask them to standardize wedge drivers!
    
    What would you consider as a "better alternative"?
    
    Julo
    
    
    
    Robert Snively <rsnively@Brocade.COM> on 27/09/2000 18:46:26
    
    Please respond to Robert Snively <rsnively@Brocade.COM>
    
    To:   Julian Satran/Haifa/IBM@IBMIL, ips@ece.cmu.edu
    cc:
    Subject:  RE: Status summary on multiple connections
    
    
    
    
    Julo,
    
    A third alternative does exist and may be preferable.  That is
    to reflect the present broad usage of SCSI that uses wedge drivers
    to achieve parallelization, bandwidth aggregation, and high availability.
    By that definition, the proper number of connections per I_T nexus
    is one and the proper number of connections per session is also one.
    
    This recognizes the reality that there is really little to gain
    in virtualizing multiple connections into a single session image
    when you can perform even more flexible virtualization through a
    wedge driver at a higher level.
    
    The single connection alternative allows a simplistic ordering
    structure, a simple recovery mechanism, and does not require
    state sharing among multiple NICs.  It allows bandwidth aggregation
    across any set of boundaries that is required.  Because command
    queuing is the rule among high performance SCSI environments,
    latency appears only as an increment in host buffer requirements
    except during writes that perform a commit function.  Those have
    traditionally been taken out of the performance path by using local
    non-volatile RAM to perform the commit, with the actual write to media
    done later via slower, higher-latency writes that carry less strict
    ordering requirements relative to reads.
    
    The overheads associated with handling a multiple connection
    session of any type are basically indistinguishable from the
    overheads associated with device virtualization through wedge drivers.
    If you consider software instruction path lengths for the total
    functionality, you can conceive of the multiple connection session
    as simply a TCP/IP wedge driver.
    
    Bob
    
    >  -----Original Message-----
    >  From: julian_satran@il.ibm.com [mailto:julian_satran@il.ibm.com]
    >  Sent: Tuesday, September 26, 2000 9:48 AM
    >  To: ips@ece.cmu.edu
    >  Subject: Status summary on multiple connections
    >
    >
    >
    >
    >  Dear colleagues,
    >
    >  I am attempting to summarize where we stand with regard to the
    >  multiple connection issue and to the two possible models -
    >  Symmetric (S) and Asymmetric (A).
    >
    >  Many of us feel strongly that the multiple connection issue is
    >  central to the whole design and cannot be added as an afterthought.
    >  Moreover, designing the hooks to allow it later will already force
    >  us to make a decision. And both the hardware and the software
    >  designers will be ill-served if we hand them a half-baked solution.
    >
    >  However, this is not an invitation to reiterate positions that were
    >  already stated. If you feel that I have grossly misstated anything
    >  in this note, please write me, and use the mailing list only if my
    >  answer is not satisfying.
    >
    >  And yes - like the chairman - I would like to make progress, but I
    >  don't see any way to do it without satisfactorily closing this
    >  issue.
    >
    >  The reasons for multiple connections were discussed at some length
    >  and were very nicely summarized in a series of notes by Michael
    >  Krause (beginning of August).
    >
    >  The core reasons for having multiple connections were the need for
    >  more bandwidth and availability than a single link can supply, with
    >  a level of complexity affordable for simple installations, and with
    >  traffic engineering and management clearly separated from the
    >  transport users (SCSI).
    >
    >  The session embodies this requirement.
    >
    >  The only major objections I have heard against this were those
    >  requiring going all the way to one TCP connection per LU - and
    >  after a short debate this objection was practically withdrawn.
    >
    >  The other objection we heard was that this is basically a transport
    >  issue and should be solved at the transport level. That might be
    >  true - but since many, if not most, TCP applications do not have
    >  this requirement, it is highly unlikely that TCP is going to do
    >  connection trunking in the foreseeable future.
    >
    >  iSCSI can be designed to use multiple TCP connections in one
    >  of two ways:
    >
    >  - Asymmetric - one TCP flow carries only commands; the others carry
    >    only data
    >
    >  - Symmetric - every flow carries both commands and their associated
    >    data
    >
    >  The S version is the one designed in I-D version 01.
    >  The S version requires a command ordering scheme, and that is
    >  provided by a command counter and a sliding window. It was argued
    >  that ordering needs might be more prevalent than usually thought,
    >  and that a good conservative design should preserve ordering.
    >  Ordering-per-LU (as designed in FCP-2 version 4) was considered
    >  impractical, as it required initiators to maintain state for each
    >  LU - while the rest of the design required initiators to maintain
    >  state only for outstanding commands.
    >  Several comments on this list suggested that this windowing
    >  mechanism could also be used as a command-flow-control mechanism -
    >  a "bonus" of the scheme.
    >
    >  The A version comes in two flavors:
    >
    >  - pure A (PA), in which ONLY commands flow on one TCP connection
    >    while data flow on DIFFERENT connections (with only one data
    >    connection being selected for a command); this scheme requires a
    >    minimum of 2 TCP connections, although not necessarily on
    >    different physical links.
    >
    >  - collapsed A (CA), in which commands flow on a single TCP
    >    connection while data flow on ANY connection; this scheme
    >    requires a minimum of 1 TCP connection.
    >
    >  Here is a first attempt to list the benefits and drawbacks of all
    >  of them:
    >
    >
    >  - S - benefits
    >          - well understood
    >          - simple hardware setup and/or TCP API activation
    >          - window mechanism can be also used for flow control
    >          - the minimum required is a single TCP connection
    >
    >       - drawbacks
    >         - need to maintain a window mechanism
    >         - a multiplexing mechanism has to be carefully crafted so
    >           that closing TCP windows does not severely affect command
    >           flow and performance
    >
    >
    >     - PA - benefits
    >            - TCP will both order the commands and provide for flow
    >              control through the TCP window mechanism
    >            - the multiplexing mechanism is simpler, as closing TCP
    >              windows will never affect command flow
    >            - data flow can use a streamlined header (and processing)
    >
    >          - drawbacks
    >             - more complex hardware and/or software API activation
    >             - need for a minimum of two TCP connections (not links)
    >
    >      - CA - benefits
    >              - TCP will order the commands
    >              - needs a single TCP connection as a minimum
    >
    >         - drawbacks
    >             - more complex hardware and/or software API activation
    >             - if command flow control is required, counters and the
    >               sliding window mechanism are required
    >             - a multiplexing mechanism has to be carefully crafted so
    >               that closing TCP windows does not severely affect
    >               command flow and performance
    >
    >
    >  From the above it should be apparent - as was already pointed out
    >  by Matt Wakeley - that CA inherits the drawbacks of both S and PA,
    >  and as such the choice is really between S and PA.
    >
    >  Julo
    >
    >
    >
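
    [Editor's note: the command counter and sliding window scheme that the
    S model relies on can be illustrated with a short sketch. This is not
    taken from the I-D; names such as `exp_cmd_sn` and `max_cmd_sn`, and
    the target-side structure, are illustrative assumptions only.]

    ```python
    # Hypothetical sketch of the command counter + sliding window of the
    # Symmetric (S) model: every connection carries commands, so the
    # target reorders them by a session-wide sequence number.

    class CommandWindow:
        """Target-side reordering of commands arriving on any connection."""

        def __init__(self, window_size):
            self.exp_cmd_sn = 0          # next sequence number to execute
            self.window_size = window_size
            self.pending = {}            # out-of-order commands, keyed by sn

        def max_cmd_sn(self):
            # Highest sequence number the initiator may send right now;
            # advertising this doubles as command flow control (the "bonus"
            # noted on the list).
            return self.exp_cmd_sn + self.window_size - 1

        def receive(self, cmd_sn, command):
            """Accept a command from any connection; return the commands
            that are now deliverable in order."""
            if not (self.exp_cmd_sn <= cmd_sn <= self.max_cmd_sn()):
                raise ValueError("command outside the advertised window")
            self.pending[cmd_sn] = command
            deliverable = []
            while self.exp_cmd_sn in self.pending:
                deliverable.append(self.pending.pop(self.exp_cmd_sn))
                self.exp_cmd_sn += 1     # window slides forward
            return deliverable


    win = CommandWindow(window_size=4)
    win.receive(1, "READ")               # out of order: held back
    ready = win.receive(0, "WRITE")      # fills the gap; both deliverable
    ```

    Note how the initiator only keeps state for outstanding commands, not
    per LU, which is exactly the objection raised against ordering-per-LU.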
    
    
    
    
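    [Editor's note: the pure Asymmetric (PA) split can likewise be
    sketched as an initiator-side dispatcher. The connection objects and
    the round-robin data-connection selection below are illustrative
    assumptions, not part of any draft; the point is only that commands
    travel on one dedicated connection (so TCP itself orders them), while
    each command's data is bound to exactly one other connection.]

    ```python
    # Hypothetical initiator-side sketch of the pure Asymmetric (PA)
    # model: commands on a single dedicated connection (TCP provides
    # ordering and flow control), data on a connection chosen per command.

    from itertools import cycle

    class PASession:
        def __init__(self, command_conn, data_conns):
            # PA needs at least two connections (not necessarily two links).
            assert data_conns, "PA requires at least one data connection"
            self.command_conn = command_conn
            self._next_data = cycle(data_conns)   # assumed policy: round robin
            self.data_conn_for = {}               # task tag -> data connection

        def send_command(self, task_tag, cdb):
            # One data connection is selected for the whole command.
            conn = next(self._next_data)
            self.data_conn_for[task_tag] = conn
            self.command_conn.append(("CMD", task_tag, cdb))

        def send_data(self, task_tag, payload):
            # Data may use a streamlined header, since it never competes
            # with command ordering on the command connection.
            self.data_conn_for[task_tag].append(("DATA", task_tag, payload))


    cmd, d0, d1 = [], [], []
    sess = PASession(cmd, [d0, d1])
    sess.send_command(0x10, "WRITE(10)")
    sess.send_command(0x11, "READ(10)")
    sess.send_data(0x10, b"block")
    ```

    With this split, a closed TCP window on a data connection stalls only
    that command's data, never the command stream - the multiplexing
    benefit claimed for PA above.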

