
    RE: iscsi : DataPDULength can differ in each direction.



    Julian,
     
    I couldn't quite understand what you are saying. Could you
    indicate which of the following you are proposing?
     
    1. Each receiver specifies the max DataPDULength it is willing
    to receive.
     
    2. There is no negotiation, and the values are (possibly) different in each
    direction?
     
    3. The values are negotiated by the leading connection only?
     
    4. Framing mechanisms, if used, are not an issue because the length
    used will be the lower of the length indicated and what is allowed by
    the framing protocol?
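    Taken together, options 1 and 4 would reduce, on the sender's side, to
    taking a minimum. A rough sketch in Python, with invented names (nothing
    here is from the iSCSI draft), assuming any framing protocol simply
    imposes a second cap:

```python
# Rough sketch of options 1 + 4 above: the receiver declares a maximum
# data-PDU length, and a framing protocol (if any) imposes its own cap.
# The sender simply uses the lower of the two. Names are invented for
# illustration; they are not from the iSCSI draft.

def effective_data_pdu_length(receiver_declared, framing_limit=None):
    """Largest data PDU the sender may emit toward this receiver."""
    if framing_limit is None:          # no framing mechanism in use
        return receiver_declared
    return min(receiver_declared, framing_limit)

print(effective_data_pdu_length(65536))         # 65536
print(effective_data_pdu_length(65536, 8960))   # 8960 (e.g. a jumbo-frame cap)
```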
     
    Somesh
    -----Original Message-----
    From: owner-ips@ece.cmu.edu [mailto:owner-ips@ece.cmu.edu] On Behalf Of Julian Satran
    Sent: Friday, October 05, 2001 8:30 AM
    To: ips@ece.cmu.edu
    Subject: RE: iscsi : DataPDULength can differ in each direction.
    Importance: High


    I think the set of arguments we have heard is solid enough to justify DataPDULength becoming a parameter that characterizes the receiver only.

    However, this parameter should also be connection specific, so that the sender and receiver can align (some framing mechanisms may require adaptation to the path MTU), and because having a different DataPDULength on different paths can affect recovery.

    If we leave it session specific, the logic of iSCSI can easily incorporate different PDU lengths, and the changes to the draft are very limited.

    Julo


    Dave Sheehy <dbs@acropora.rose.agilent.com>
    Sent by: owner-ips@ece.cmu.edu

    05-10-01 01:13
    Please respond to Dave Sheehy

           
            To:        ips@ece.cmu.edu (IETF IP SAN Reflector)
            cc:        
            Subject:        RE: iscsi : DataPDULength can differ in each direction.

           



    Comments in text.

    >   >  > Can someone give a tangible benefit to this that can
    >   >  > outweigh the spec and implementation churn at this late
    >   >  > stage of the game?
    >   >
    >   >  It would allow iSCSI HBAs to interact more efficiently with
    >   >  SW iSCSI implementations and vice versa.
    >   >
    >
    > I don't believe it would in practice. Consider the following. The max
    > PDU size sent during login is more than just that; it is in fact the
    > sender's maximum supportable PDU size. If one side sends 64k and
    > the other side 8k, although it is technically indicating it can't
    > receive more than 8k in a single PDU, for all practical purposes it is
    > also indicating it can't handle, and therefore can't send, PDUs bigger
    > than 8k.

    I think that's a faulty interpretation of what's being proposed. As in both
    Fibre Channel and TCP, the offered PDU size is the size you're willing to
    accept. It can be, and often is, completely decoupled from the size you're
    capable of sending. This is commonly implemented in Fibre Channel and it's
    not exactly rocket science.
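    The decoupling described here can be pictured as two independent values
    per endpoint. A minimal sketch with made-up names and numbers (none of
    them come from the draft):

```python
# Minimal sketch of "the offered size is what you accept": each endpoint
# keeps its advertised receive limit separate from its send capability.
# Names and numbers are made up for illustration.

class Endpoint:
    def __init__(self, recv_max, send_capability):
        self.recv_max = recv_max                # advertised to the peer
        self.send_capability = send_capability  # never advertised

def pdu_size(sender, receiver):
    # The sender honours the receiver's advertisement, nothing more;
    # the sender's own receive limit plays no role in what it sends.
    return min(sender.send_capability, receiver.recv_max)

hba = Endpoint(recv_max=8192, send_capability=65536)
sw = Endpoint(recv_max=65536, send_capability=65536)
print(pdu_size(sw, hba), pdu_size(hba, sw))  # 8192 65536
```

    The two directions end up with different PDU sizes, each chosen by the
    receiving side alone.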

    > I believe if we go this route we'll simply see the side with the lower
    > DataPDULength sending its "natural" size PDUs and never sending the
    > larger size wanted by the other side. More on this below ...
    >
    >   >  > From my point of view the benefit of asymmetric PDU sizes
    >   >  > would have to be very large to make it worth the extra
    >   >  > complexity in buffer management code alone.
    >   >
    >   >  From the vantage point of an iSCSI HBA it doesn't seem all
    >   >  that hard.
    >   >
    >
    > Well, it seems to me that, faced with a peer with a different max PDU
    > size, there are relatively few ways to proceed.

    Again, I believe it's fairly straightforward, and there are real-life
    examples that work exactly this way. The only unfortunate thing is that
    it's been proposed at this late date. In hindsight, it's a feature with an
    obvious benefit.

    > If the peer has a lower max PDU size there are 2 choices. Use 2 buffer
    > pools, one for receive set to the local Max PDU size, and one for send
    > set to the peer Max PDU size. This is where the extra buffer
    > management complexity comes in. Or, use one buffer pool and simply
    > part fill the buffers for sending. This is the easy case.

    I don't see anything about the relationship between buffer size and PDU
    size that isn't implementation dependent. The PDU content is going to be
    fragmented anyway (i.e. network header, iSCSI header, data, digest, ...),
    so the problem you describe exists regardless.
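    The single-pool alternative mentioned in the quoted text (one buffer
    pool, part-filled for send) is simple to picture. A sketch with
    hypothetical sizes:

```python
# Sketch of the "one buffer pool, part-fill the buffers for sending"
# strategy quoted above. Buffer sizes are hypothetical.

BUFFER_SIZE = 64 * 1024    # pool sized to the local receive maximum
PEER_RECV_MAX = 8 * 1024   # smaller limit declared by the peer

def fill_for_send(payload):
    """Carve a payload into PDUs no larger than the peer's declared
    limit, even though each pool buffer could hold far more."""
    return [payload[i:i + PEER_RECV_MAX]
            for i in range(0, len(payload), PEER_RECV_MAX)]

pdus = fill_for_send(bytes(20 * 1024))
print([len(p) for p in pdus])   # [8192, 8192, 4096]
```

    No second pool is needed; the cost is merely that send buffers go out
    only partly filled.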

    > If the peer has a larger max PDU size then either only send up to the
    > local PDU size, as I mentioned above, or chain buffers together to
    > build larger than the local max PDU size. Again, this is where the
    > extra buffer management complexity comes in. Remember that by
    > definition these chains will need to be bigger than the largest chain
    > size the implementation can handle. Unless for some reason the
    > DataPDULength sent was chosen at some arbitrary size smaller than the
    > implementation's maximum.

    I think SW iSCSI stacks are going to want to use big PDUs and, in general,
    iSCSI HBAs are going to use small ones. I think the SW stacks are going to
    set up their buffer structures to use large PDUs. I don't think (although
    it's certainly possible) that they are going to dynamically adjust their
    buffer sizes for each iSCSI session. So, in the scenario where a SW iSCSI
    stack is talking to an iSCSI HBA, its buffering is going to be used
    inefficiently in both directions. If the Data PDU Length is negotiated
    separately for each direction then the buffering on the SW side is used
    inefficiently in only one direction. Both sides win with this behavior.
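    The efficiency argument can be made concrete with hypothetical numbers
    (a SW stack with 64 KiB buffers talking to an HBA that wants 8 KiB
    PDUs; the figures are illustrative only):

```python
# Hypothetical buffer-utilization comparison for the SW-stack/HBA case
# described above. All numbers are illustrative.

SW_BUFFER = 64 * 1024   # SW stack sizes its buffers for big PDUs
HBA_PDU = 8 * 1024      # HBA prefers small PDUs

# Symmetric negotiation: both directions drop to the lower value, so the
# SW stack's buffers are poorly used for send AND receive.
symmetric_util = HBA_PDU / SW_BUFFER    # 0.125 in both directions

# Per-receiver (asymmetric) sizes: the SW stack still receives full
# 64 KiB PDUs; only its sends toward the HBA are capped.
recv_util = SW_BUFFER / SW_BUFFER       # 1.0
send_util = HBA_PDU / SW_BUFFER         # 0.125

print(symmetric_util, recv_util, send_util)  # 0.125 1.0 0.125
```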

    Dave





