
    Re: Performance of iSCSI, FCIP and iFCP



    
    Steph,
    
    thanks for raising this point, which is clearly central to the fairness
    discussion in a more general context, not restricted to IP Storage.
    
    There are two cases to consider: whether the storage traffic is isolated
    or is multiplexed with other traffic in the IP network.
    
    My original message on performance assumed the first case, where all
    available bandwidth (10 Gb/s in my example) was available for storage
    traffic, and showed that in some scenarios about 1/4 of the bandwidth
    was lost unnecessarily with the one-TCP solution.  Definitely, no
    fairness issue was involved.
    
    Now let's consider the second case, where storage traffic traverses a
    non-exclusive IP network, such as "the Internet", where fairness is an
    issue.
    
    Unfortunately, as far as I know, despite the TCP-friendly mandate
    expressed or implied in the IETF community, there is no clear
    definition of what is considered fair (TCP-friendly) use of network
    bandwidth and what is unfair.  RFC 2914, citing RFC 2309, says:
    
      ".. the issue of the appropriate granularity of a "flow",
       where we define a `flow' as the level of granularity appropriate for
       the application of both fairness and congestion control.  From RFC
       2309:  "There are a few `natural' answers: 1) a TCP or UDP connection
       (source address/port, destination address/port); 2) a
       source/destination host pair; 3) a given source host or a given
       destination host.  We would guess that the source/destination host
       pair gives the most appropriate granularity in many circumstances.
       The granularity of flows for congestion management is, at least in
       part, a policy question that needs to be addressed in the wider IETF
       community."
    
    In the context of IP storage, the IPS gateways differ from the source
    and destination hosts of a TCP flow in a regular IP network.
    
    For example, we had a discussion on this subject in the ECM (Endpoint
    Congestion Management) WG.  The ECM proposal attempts to limit the
    sending rate for a pair of endpoints to roughly that of one TCP
    connection between those two hosts, given the current network
    conditions (loss probability, round-trip delay).  I raised a few
    questions.  For example, if an endpoint is a multi-user system, and N
    users on this system simultaneously connect to the same web server,
    under ECM they get 1/N of the bandwidth they would otherwise get using
    N PCs, which is not fair.  Some of us agreed that the fair unit of
    bandwidth should be per user and not per system, but this was never
    put on paper.
    
    This opinion is underscored by the following from RFC 2616, also cited
    by RFC 2914:
      "A single-user client SHOULD NOT maintain
       more than 2 connections with any server or proxy."
    
    In the same spirit, I would argue that storage traffic is entitled to
    one share of (TCP-friendly) bandwidth per storage connection, and not
    per pair of gateways, since one storage connection more closely
    emulates one user's activity (or maybe even more than that).  In other
    words, mandating one TCP connection per pair of IP storage gateways
    would be like mandating one TCP connection per pair of IP routers.
    
    Of course, this is my subjective opinion with a bit of common sense, in
    the absence of a clear definition of fair network usage.
    
    The paper I referred to in my other message gives no consideration to
    fairness, only an analytical model for TCP throughput as a function of
    delay and packet loss.  A side implication is that different TCP
    implementations are not fair (friendly) to each other, but that has
    long been known.
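    For readers without the paper at hand, the widely cited "square-root"
    approximation (Mathis et al.) captures the same dependence on delay
    and loss.  This sketch assumes that model, not necessarily the exact
    one in the paper, and the segment size, RTT and loss rate are only
    illustrative:

```python
from math import sqrt

def tcp_throughput_bps(mss_bytes, rtt_s, loss_prob):
    """Steady-state throughput of one TCP connection under the
    'square-root' model: rate ~ (MSS / RTT) * sqrt(3/2) / sqrt(p)."""
    return (mss_bytes * 8 / rtt_s) * sqrt(1.5) / sqrt(loss_prob)

# Illustrative numbers: 1460-byte segments, 50 ms RTT, 0.1% loss.
one = tcp_throughput_bps(1460, 0.050, 0.001)
print(f"one connection: {one / 1e6:.1f} Mb/s")
# N connections scale the aggregate roughly N-fold, which is why a
# single TCP can strand most of a 10 Gb/s storage path.
print(f"8 connections : {8 * one / 1e6:.1f} Mb/s")
```

    Note how the per-connection rate collapses with the square root of
    the loss probability, regardless of how fat the pipe is.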
    
    Surely, you can modify the TCP congestion control algorithm to get as
    much bandwidth as N other TCPs, but that creates a new "super-TCP"
    protocol that has to be designed, verified and approved.  (Observe
    that super-TCP would recover from losses differently than N TCPs
    would.)  Moreover, even if super-TCP were used, the IP storage
    gateways would still need to be aware of the identity and number of
    storage connections.
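    A back-of-the-envelope illustration of that parenthetical, with
    made-up numbers and the standard halve-on-loss assumption:

```python
def aggregate_after_one_loss(aggregate, n_flows):
    """Aggregate rate after a single packet loss, assuming n equal
    flows and that only the flow which took the hit halves."""
    return aggregate - (aggregate / n_flows) / 2

# One super-TCP halves its entire rate on a single loss; with 8
# plain TCPs, the same loss costs only 1/16 of the aggregate.
print(aggregate_after_one_loss(10_000, 1))  # -> 5000.0
print(aggregate_after_one_loss(10_000, 8))  # -> 9375.0
```

    So even if super-TCP matched N connections in steady state, its
    reaction to an individual loss would be far more drastic.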
    
    Your comments, as always, are highly appreciated.	
    
    Victor
    
    
    
    Stephen Bailey wrote:
    > 
    > Victor,
    > 
    > > I have to admit that we did not have the model to support
    > > numerically our contention that multiple links will behave better in
    > > the presence of errors and performance will degrade more gracefully
    > > even on congestion.
    > 
    > The last time we had this discussion (multiple links between the same
    > endpoints enabling higher performance) on this list, it was my opinion
    > that doing this is not `TCP friendly', and so should not be allowed in
    > the general, Internet context, which is one of the design points for
    > iSCSI.
    > 
    > Clearly, when slow starting, and recovering from congestion, n
    > connections can effectively open the window n times as fast as 1
    > connection, but doing so will potentially be at the expense of other
    > competing flows which don't use multiple connections.  If ALL flows
    > use multiple connections, you simply end up hitting the congestion
    > wall harder, which, in an abstract sense, makes the system more likely
    > to oscillate (or collapse?).
    > 
    > I don't know if it WILL create oscillation or collapse (I'm just a
    > hammer mechanic), but it seems to me that if hitting the congestion
    > wall harder in this way were acceptable to overall network behavior,
    > the single connection TCP congestion avoidance could be adjusted to
    > create this behavior (and capture this benefit) without requiring
    > multiple TCP connections.
    > 
    > Your refereed, published work on this seems to suggest that I am holding
    > on to `myths' about the need to play fair with TCP.  If so, I would
    > greatly appreciate your debunking my myths here on this list, in the
    > clearest way.  Particularly, we need to know if the IETF (or IESG or
    > IAB, or whomever it is....), will accept this type of behavior out of
    > iSCSI.
    > 
    > While I am opposed to iSCSI's multiconnection session, I do admit its
    > benefits.  My objections are that the feature will not be widely used,
    > is difficult to implement correctly, AND the same benefits are already
    > available through long-standing upper layer mechanisms.  That aside, I
    > accept the WG consensus on multiconnection sessions, but what the
    > iSCSI spec still needs to do, in no uncertain terms, is indicate
    > whether multiple connections per endpoint pair is something which
    > implementations SHOULD or SHOULD NOT allow for use in a general,
    > Internet context, or at all.
    > 
    > Even if iSCSI specifies that multiple connections per endpoint pair
    > SHOULD NOT be used in a general context, we know that implementations
    > are not going to prohibit it (it's primarily a configuration
    > decision), and that still allows the feature to be used in
    > specifically engineered applications, such as those built with
    > dedicated storage fabrics.
    > 
    > Thanks,
    >   Steph
    
    

