
    Re: Connection Consensus Progress



    Chris,
    Even when the TCP/IP processing moves into the NIC, the ability to
    manage multiple NICs will remain in an iSCSI Device Driver.  Therefore
    it is still possible to perform iSCSI alternate-path balancing and
    error recovery.  If Wedge Drivers will be able to do it, and they can,
    then it can also be done in the iSCSI device driver.
    
    The only important question is: do we want to prevent the iSCSI Device
    Drivers from being able to load balance and to perform alternate-path
    recovery?  If we want to PREVENT the iSCSI Device Driver from doing
    this, then we should NOT have multiple connections per Session.  If we
    would like the iSCSI Device Drivers (at some point) to do the load
    balancing and alternate-path recovery, then we should leave the Session
    definition as it is.  The current default is a single connection per
    session, which will always permit vendors to use their own Wedge Device
    Drivers; therefore, we do not need to eliminate the possibility of a
    smarter iSCSI Device Driver sometime in the future.
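As a hedged sketch of what such a "smarter" iSCSI device driver could do with multiple connections per session: round-robin commands across the connections (one per NIC) and fail over to a surviving connection when one drops. All class and method names here are illustrative, not from any iSCSI specification or product.

```python
# Illustrative sketch only: load balancing and alternate-path recovery
# inside one iSCSI session that holds multiple TCP connections.

class Connection:
    """One TCP connection, notionally bound to one NIC."""
    def __init__(self, nic):
        self.nic = nic
        self.up = True

    def send(self, command):
        if not self.up:
            raise ConnectionError(f"connection on {self.nic} is down")
        return f"{command} sent via {self.nic}"

class Session:
    """One iSCSI session spreading commands over its connections."""
    def __init__(self, connections):
        self.connections = connections
        self._next = 0

    def send(self, command):
        # Round-robin across connections; skip failed ones, which is
        # the alternate-path recovery John describes.
        for _ in range(len(self.connections)):
            conn = self.connections[self._next]
            self._next = (self._next + 1) % len(self.connections)
            if conn.up:
                return conn.send(command)
        raise ConnectionError("all paths in the session have failed")
```

For example, a session over `nic0` and `nic1` alternates between them; if `nic0` fails, subsequent commands transparently go out over `nic1` with no layer above the iSCSI driver involved.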
    
    .
    .
    .
    John L. Hufferd
    
    
    
    Christopher Stein - Network Storage <Christopher.Stein@east.sun.com>
    @ece.cmu.edu on 08/26/2000 01:53:13 PM
    
    Please respond to Christopher Stein - Network Storage
          <Christopher.Stein@east.sun.com>
    
    Sent by:  owner-ips@ece.cmu.edu
    
    
    To:   ips@ece.cmu.edu
    cc:
    Subject:  Re: Connection Consensus Progress
    
    
    
    
    Julian,
    
    If one of the reasons for supporting multiple TCP connections is
    to allow load balancing across HBAs for fault tolerance, this
    reason will be lost as the TCP processing moves down into the HBA.
    
    My understanding was that TCP on a chip technology is desired for
    high-performance iSCSI. If this is the case, then multiple
    connections for tolerance of HBA faults is a misfired bullet.
    
    -Chris
    
    >X-Authentication-Warning: ece.cmu.edu: majordom set sender to
    >owner-ips@ece.cmu.edu using -f
    >From: julian_satran@il.ibm.com
    >X-Lotus-FromDomain: IBMIL@IBMDE
    >To: ips@ece.cmu.edu
    >Subject: Re: Connection Consensus Progress
    >Mime-Version: 1.0
    >Content-Disposition: inline
    >
    >
    >
    >David,
    >
    >I understand and share your concerns about how well we understand the
    >requirements for recovery and balancing.
    >
    >But, as I have stated repeatedly, we can't wait for somebody else to
    >solve our problem, and the field requirement is there, as witnessed by
    >the products that attempt to solve it in a proprietary fashion (and BTW
    >a TCP connection failure could also be repaired simply by TCP, but TCP
    >does not do it).
    >
    >However, if multiple links are there, both sets of requirements have to
    >be solved at the SCSI level, since several links, if not handled
    >properly, increase the failure probability.
    >
    >However, your last point about multiple HBAs is lost on me.
    >We attempted to make iSCSI work with several HBAs and went to some
    >length to keep the requirements to the HBA hardware as if the HBAs act
    >independently (counters can be shared). Is there something we missed?
    >
    >Julo
    >
    >David Robinson <David.Robinson@EBay.Sun.COM> on 25/08/2000 23:32:23
    >
    >Please respond to David Robinson <David.Robinson@EBay.Sun.COM>
    >
    >To:   ips@ece.cmu.edu
    >cc:    (bcc: Julian Satran/Haifa/IBM)
    >Subject:  Re: Connection Consensus Progress
    >
    >
    >
    >
    >> I agree with you up to a point.  I know of customers that always need
    >> multiple physical paths to the Storage Controller.  Regardless of how
    >> fast the link is, they need a faster link, and these hosts need to be
    >> able to spread the load across several different HBAs.  (Some are on
    >> one PCI bus, and some on another, etc.)  When this happens, as it
    >> does today with Fibre Channel, we are required, as are a number of
    >> other vendors, to come up with a multi-HBA balancer.  We call our
    >> Fibre Channel version "DPO" (Dynamic Path Optimizer); EMC has another
    >> version (I do not know what they call theirs).  This code sits as a
    >> "Wedge" Driver above the FC Device Drivers and balances the work
    >> across the different FC HBAs.  I think this same thing will be
    >> required in the iSCSI situation.  Note: I think the FC versions only
    >> work with IBM's or EMC's etc. Controllers.  (SUN probably has a
    >> similar one also.)
    >
    >I understand this scenario; it is often used as a high-availability
    >feature.  The key question is whether this should be handled above, at
    >the SCSI layer, as it is most often done now, or in the iSCSI
    >transport.  While I like the goal of unifying this into one
    >architecture, I have serious doubts that we have the understanding of
    >the requirements and needs necessary to get it right.  Thus we will
    >ultimately end up in the same situation we are in today with a
    >SCSI-layer solution.  In addition, if the promise of a hardware iSCSI
    >NIC/HBA is achieved, allowing multiple paths using different NIC/HBAs
    >will still require "wedge" software.
    >
    >     -David
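The "wedge" alternative David and John describe can be sketched the same way: the balancing sits in a layer *above* several independent single-path device drivers, one per HBA, rather than inside one multi-connection session. Again, all names here are illustrative; this is not code from DPO or any vendor product.

```python
# Illustrative sketch only: a SCSI-layer "wedge" driver spreading and
# retrying commands across independent per-HBA device drivers.

class HbaDriver:
    """One lower-level device driver, bound to one HBA/path."""
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def submit(self, command):
        if not self.healthy:
            raise IOError(f"path through {self.name} failed")
        return f"{command} via {self.name}"

class WedgeDriver:
    """Sits above the per-HBA drivers; the HBAs stay independent."""
    def __init__(self, drivers):
        self.drivers = drivers
        self._next = 0

    def submit(self, command):
        # Round-robin over healthy HBAs; retry on an alternate path
        # when the chosen one has failed.
        tried = 0
        while tried < len(self.drivers):
            drv = self.drivers[self._next]
            self._next = (self._next + 1) % len(self.drivers)
            tried += 1
            if drv.healthy:
                return drv.submit(command)
        raise IOError("no usable HBA path")
```

The structural difference from the in-session approach is where the path state lives: here each lower driver knows nothing about its siblings, which is why this layer must exist even when the TCP processing moves into the HBA.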

