
    RE: Requirements specification



    
    
    
    Two points:

    1.  The iSCSI presenters seem to imply that having 'n' connections through
    the IP fabric will automatically give you 'n' (or close to 'n') times the
    bandwidth if you round-robin your packets across the connections. This is
    probably true only in the very ideal case; in the real world the situation
    is far less rosy, and the assertion cannot be made so confidently (see the
    sketch after point 2).
    
    2.  Fault-tolerance currently is (and should be) layered above the SCSI
    transport layer. There are enough solutions from several vendors in the
    market which deal with this, so there is no use in reinventing the wheel.
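
    As a rough illustration of point 1, here is a minimal sketch in Python of
    why in-order delivery lets the slowest connection pace the aggregate when
    packets are striped round-robin. The per-connection rates are made-up
    numbers, purely for illustration; nothing here comes from the iSCSI drafts.

        # Why round-robin striping over n connections rarely yields n times
        # the bandwidth: hypothetical per-connection throughputs.
        rates_mbps = [100, 100, 95, 60]
        n = len(rates_mbps)

        ideal = sum(rates_mbps)        # the "n times" claim
        # With in-order delivery, each round-robin cycle completes only when
        # the slowest connection has delivered its share, so the aggregate is
        # paced by the slowest member of the group.
        in_order = n * min(rates_mbps)

        print(f"ideal aggregate: {ideal} Mb/s")
        print(f"in-order aggregate (paced by slowest link): {in_order} Mb/s")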
    
    Based on the "Keep It Simple, Stupid" and "Optimize for the Common Case"
    principles, I would appeal to the iSCSI "design team" to forgo multiple
    connections per session and to use specialized solutions for remote
    mirroring and remote tape backup (the applications that require multiple
    connections).
    
    Sincerely,
    Prasenjit
    
       Prasenjit Sarkar
       Research Staff Member
       IBM Almaden Research
       San Jose
    
    
    "Douglas Otis" <dotis@sanlight.net>@ece.cmu.edu on 08/04/2000 09:20:40 AM
    
    Sent by:  owner-ips@ece.cmu.edu
    
    
    To:   Julian Satran/Haifa/IBM@IBMIL, <ips@ece.cmu.edu>
    cc:
    Subject:  RE: Requirements specification
    
    
    
    Julo,
    
    Your comments are based on several assumptions reflecting your present
    architecture.  Your implementation is done at the controller rather than at
    the device.  You also assume authentication is done at the controller.  Each
    LUN could belong to a different authority and be an independent (virtual)
    device managed through LDAP.  If you bring the interface to the device, you
    can obtain the required scaling that is otherwise difficult to achieve at
    the controller, as in your architecture.  By combining everything into a
    single connection, you do not improve reliability, scalability, availability
    or fault tolerance.
    
    Doug
    -----Original Message-----
    From: owner-ips@ece.cmu.edu [mailto:owner-ips@ece.cmu.edu]On Behalf Of
    julian_satran@il.ibm.com
    Sent: Thursday, August 03, 2000 7:37 PM
    To: ips@ece.cmu.edu
    Subject: Re: Requirements specification
    
    
    
    
    David,
    
    The one additional requirement is availability/fault-tolerance.
    
    Your arguments about performance are valid. However, I doubt that there will
    be enough incentive - beyond price - to develop things for high-end
    controllers and servers.
    
    Enabling multiple connections brings those applications the required
    performance without any serious implications for the rest of the "family"
    (as I outlined in Pittsburgh, controllers and servers that don't need
    multiple connections per session don't have to implement them).
    
    Storage traffic requirements will always exceed those of many other
    applications.
    
    As for the "one-connection-per-LU" approach, we covered this solution in
    long discussions and even in several full-fledged implementations, as it is
    compellingly simple. However, the resource consumption is unjustifiably
    high, and the security problems (the LUs "viewed" by an initiator depend on
    who he says he is) are even worse than in the current draft.
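
    For a rough sense of the scale behind the resource-consumption point, here
    is a back-of-the-envelope sketch in Python. The per-connection buffer and
    state sizes are assumptions chosen for illustration, not figures from the
    draft; only the 10,000-LUN count comes from the discussion below.

        # Back-of-the-envelope cost of one TCP connection per LU.
        luns = 10_000                      # LU count discussed in the thread
        sock_buf_bytes = 2 * 64 * 1024     # assumed 64 KB send + 64 KB receive buffers
        conn_state_bytes = 1 * 1024        # assumed per-connection TCP/iSCSI state

        total = luns * (sock_buf_bytes + conn_state_bytes)
        print(f"~{total / 2**20:.0f} MiB of buffers and state for {luns} connections")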
    
    Regards,
    Julo
    
    
    
    David Robinson <David.Robinson@EBay.Sun.COM> on 04/08/2000 02:43:11
    
    Please respond to David Robinson <David.Robinson@EBay.Sun.COM>
    
    To:   ips@ece.cmu.edu
    cc:    (bcc: Julian Satran/Haifa/IBM)
    Subject:  Requirements specification
    
    
    
    
    To further elaborate on my comments in Pittsburgh on multiple
    connections per link and connections per LUN vs per target.
    
    The current requirements specify that the protocol must support
    multiple connections per session.  So far the only justification
    for this that I have clearly heard is performance: current and future
    systems will demand bandwidth that will require aggregation. Is there
    any other reason for multiple connections?
    
    My challenge to this requirement is that it is fundamentally a link
    and transport layer issue that is being exposed to the session layer
    due to a perception that current link/transport implementations are not
    adequate to meet perceived demand.  The key question here is whether this
    is a "physics" issue that can't be solved with better implementations,
    or just a matter of bad implementations. I am leaning towards the latter.
    I expect that if this protocol is a success, a number of highly tuned
    adapters using tricks such as hardware assist will be developed.  Those
    doing the development will have direct control over the quality of the
    implementation.  Furthermore, the performance-critical environments
    are likely to be local in nature, so pressure to create the necessary
    switches and routers will also exist.
    
    The advantage of limiting a session to a single connection should be
    a simplification of connection management and error handling.  From
    the earliest drafts we have already seen restrictions of individual
    command/data/status sequences to a single connection to better handle
    ordering issues. I foresee further restrictions possibly being
    required to cover the handling of lost connections when sequences are
    received out of order across multiple connections. Similarly, Steve's
    comments on the security management of multiple connections are of concern.
    
    The second area that I brought up was the requirement of one session
    per initiator/target pair instead of one per LUN (i.e. SEP). I am willing
    to accept the design constraint that a single target must address
    10,000 LUNs, which can be done with a connection per LUN. However,
    statements about scaling much higher, into the range where the 64K port
    limitation appears, do not seem reasonable to me.  Given that the bandwidth
    available on today's and near-future drives will easily exceed 100 MB/s,
    I can't imagine designing and deploying storage systems with over 10,000
    LUNs but only one network adapter.  Even with 10+ Gbps networks this will
    be a horrible throughput bottleneck that will get worse, as storage
    adapters appear to be gaining bandwidth faster than networks. Therefore
    requiring support for more than 10,000 LUNs doesn't seem necessary.
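
    As a quick back-of-the-envelope check of that mismatch (the 100 MB/s per-LUN
    drive rate is the figure above; the single 10 Gb/s adapter is an assumption
    for illustration), a short Python sketch:

        # Rough arithmetic behind the bottleneck argument.
        luns = 10_000
        drive_mb_per_s = 100               # per-LUN drive bandwidth cited above
        link_gb_per_s = 10                 # assumed single 10 Gb/s network adapter

        aggregate_drive_mb = luns * drive_mb_per_s      # 1,000,000 MB/s
        link_mb = link_gb_per_s * 1000 / 8              # ~1,250 MB/s

        print(f"drives can source ~{aggregate_drive_mb:,} MB/s; one adapter "
              f"carries ~{link_mb:,.0f} MB/s (ratio ~{aggregate_drive_mb / link_mb:.0f}:1)")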
    
    From the performance perspective, a connection per LUN also makes sense.
    SCSI command flows are already being constrained to a single connection
    in the current proposal for ordering reasons, so the number of
    concurrent outstanding requests per LUN is a manageable number. The
    concurrency desired by multiple connections per session in the
    existing draft will naturally occur with a connection per LUN.  As
    each TCP connection is a unique flow, existing link-layer hardware
    that tries to preserve ordering based on a "flow" (likely IP/port pairs)
    will give the desired performance properties. Both my objections and
    the requirements for multiple connections I question above become moot.
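
    A minimal sketch of the connection-per-LUN idea, assuming a hypothetical
    listening port and a plain {LUN: socket} table (neither comes from any
    draft); the point is only that each LUN's traffic becomes its own TCP flow:

        import socket

        TARGET_PORT = 5003   # hypothetical port for this sketch, not a registered number

        def open_lun_sessions(target_ip, lun_count):
            """Open one TCP connection per LUN and return {lun: socket}.

            Every connection shares the remote (ip, port) but gets its own
            local ephemeral port, so each LUN's command stream is a distinct
            flow that flow-hashing hardware can keep ordered end to end.
            """
            return {lun: socket.create_connection((target_ip, TARGET_PORT))
                    for lun in range(lun_count)}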
    
    From a connection management, command ordering, and error recovery
    perspective, things should also get simpler.  Ordering is obviously
    maintained, and the sender can now recover from connection errors
    based on a smaller context and possibly use TCP layer information
    to determine what responses were received (ACK windows?).
    
    To summarize, I would like to see the requirements changed to reflect
    a maximum of 64K LUNs per IP node, to require only one transport layer
    connection per session, and to define a session to be an initiator/LUN
    pair.
    
         -David
    
    
    
    
    
    
    
    
    

