
    RE: iscsi : DataPDULength can differ in each direction.



    Having looked at the discussions and given it some more thought, I agree
    that the max PDU length should be provided by both the initiator and the
    target, and can be called MaxRecvPDULength. Whether it is connection
    specific or session specific adds little complexity. Implementing
    different PDU lengths for the initiator and target will not add any
    complexity.
    
    >Since a multi-cxn session can [ideally] span multiple iscsi hardware
    >vendor implementations + optionally, software iscsi implementation[s],
    >the DataPDULength should be expressed on a per-cxn basis.
    
    I also agree with the reasoning that this parameter should be
    connection based instead of session specific.
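
    As a rough sketch of what independent per-direction limits would mean
    for a sender (the key name MaxRecvPDULength and the helper below are
    illustrative assumptions, not text from the draft):

```python
def split_into_pdus(payload_len, peer_max_recv_pdu_len):
    """Split an outgoing data stream into PDU payload sizes.

    The sender is bound only by the *peer's* declared receive limit;
    its own MaxRecvPDULength constrains traffic in the other direction.
    """
    sizes = []
    remaining = payload_len
    while remaining > 0:
        chunk = min(remaining, peer_max_recv_pdu_len)
        sizes.append(chunk)
        remaining -= chunk
    return sizes

# An initiator that declared 64k may still have to send 8k PDUs
# to a target that declared MaxRecvPDULength=8192.
print(split_into_pdus(20000, 8192))   # three PDUs: 8192, 8192, 3616
```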
    
    Deva
    Adaptec
    
    
    
    
    -----Original Message-----
    From: owner-ips@ece.cmu.edu [mailto:owner-ips@ece.cmu.edu]On Behalf Of
    Santosh Rao
    Sent: Friday, October 05, 2001 11:31 AM
    To: ips
    Subject: Re: iscsi : DataPDULength can differ in each direction.
    
    
    
    Having now accepted that DataPDULength is a "maximum receive pdu length"
    limitation that each endpoint is allowed to express, this key needs to
    be on a per-connection basis, and not a per-session basis.
    
    The reason for this is that limitations on DataPDULength (more aptly,
    max receive PDU length) stem from implementation-specific quirks, which
    is one of the reasons for allowing this feature in the first place.
    
    Since a multi-cxn session can [ideally] span multiple iscsi hardware
    vendor implementations + optionally, software iscsi implementation[s],
    the DataPDULength should be expressed on a per-cxn basis.
    
    As a side note, with the change in the semantics of this field, would it
    be appropriate to consider re-naming DataPDULength to MaxRecvPDULength
    ("Maximum Receive PDU Length") ?
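
    To make the per-connection idea concrete, here is a minimal sketch (the
    class and field names are invented for illustration; only the idea of
    one negotiated value per connection, per direction, comes from the
    discussion above):

```python
class Connection:
    """One TCP connection within an iSCSI session.

    Each connection carries its own pair of limits, since the two
    endpoints of each connection may be different implementations.
    """
    def __init__(self, cid, local_max_recv, peer_max_recv):
        self.cid = cid                        # connection ID
        self.local_max_recv = local_max_recv  # what we told the peer
        self.peer_max_recv = peer_max_recv    # what the peer told us

    def max_send_pdu(self):
        # Outgoing PDUs are bounded by the peer's receive limit only.
        return self.peer_max_recv

# A session spanning a software stack (64k) and an HBA (8k):
session = {c.cid: c for c in (
    Connection(cid=0, local_max_recv=65536, peer_max_recv=8192),
    Connection(cid=1, local_max_recv=65536, peer_max_recv=65536),
)}
print(session[0].max_send_pdu())  # 8192
print(session[1].max_send_pdu())  # 65536
```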
    
    Thanks,
    Santosh
    
    
    
    "Mallikarjun C." wrote:
    >
    > Julian,
    >
    > I don't really have a strong opinion on connection vs session
    > applicability of DataPDULength, but one comment.
    >
    > If we specify the current ExpDataSN in Task Management function
    > request (reassign) to be instead "Expected Buffer Offset", this
    > appears to enable per-connection DataPDULength.  Specifying a
    > buffer offset may also be helpful to targets in general, not
    > requiring them to bring the DataSN->buffer_offset translation
    > from a (potentially failed) different NIC (though perhaps it is
    > not all that serious in practice since implementations will
    > send DataPDULength-sized PDUs except for the last in an I/O).
    >
    > Do you see any other issues that complicate a connection-specific
    > value?
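
    The DataSN -> buffer_offset translation mentioned above can be shown
    with a small sketch (assuming, as the parenthetical notes, that every
    PDU except possibly the last is full DataPDULength-sized):

```python
def buffer_offset_from_datasn(data_sn, data_pdu_length):
    """Recover the buffer offset of a data PDU from its sequence number.

    This only works if every preceding PDU was exactly data_pdu_length
    bytes, which is why a per-connection DataPDULength complicates
    reassignment: the new connection may not know the failed
    connection's PDU size. Carrying an explicit "Expected Buffer
    Offset" in the reassign request avoids the translation entirely.
    """
    return data_sn * data_pdu_length

print(buffer_offset_from_datasn(3, 8192))  # 24576
```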
    >
    > Regards.
    > --
    > Mallikarjun
    >
    > Mallikarjun Chadalapaka
    > Networked Storage Architecture
    > Network Storage Solutions Organization
    > MS 5668 Hewlett-Packard, Roseville.
    > cbm@rose.hp.com
    >
    > Julian Satran wrote:
    > >
    > > I think that the set of arguments we have heard are solid enough to
    > > justify the DataPDULength becoming a parameter that characterizes
    > > the receiver only.
    > >
    > > However, this parameter should also be connection specific so that
    > > the sender and receiver can align (some framing mechanisms may
    > > require adaptation to path MTU), and having a different
    > > DataPDULength on different paths can affect recovery.
    > >
    > > If we leave it session specific, the logic of iSCSI can easily
    > > incorporate different PDU lengths and the draft changes are very
    > > limited.
    > >
    > > Julo
    > >
    > >  Dave Sheehy <dbs@acropora.rose.agilent.com>
    > >  Sent by: owner-ips@ece.cmu.edu
    > >  05-10-01 01:13
    > >  Please respond to Dave Sheehy
    > >  To: ips@ece.cmu.edu (IETF IP SAN Reflector)
    > >  cc:
    > >  Subject: RE: iscsi : DataPDULength can differ in each direction.
    > >
    > >
    > >
    > > Comments in text.
    > >
    > > >   >  > Can someone give a tangible benefit to this that can
    > > >   >  > outweigh the spec and implementation churn at this late
    > > >   >  > stage of the game?
    > > >   >
    > > >   >  It would allow iSCSI HBAs to interact more efficiently
    > > >   >  with SW iSCSI implementations and vice versa.
    > > >   >
    > > >
    > > > I don't believe it would in practice. Consider the following. The
    > > > max PDU size sent during login is more than just that; it is in
    > > > fact the sender's maximum supportable max PDU size. If one side
    > > > sends 64k and the other side 8k, although it is technically
    > > > indicating it can't receive more than 8k in a single PDU, for all
    > > > practical purposes it is also indicating it can't handle, and
    > > > therefore can't send, PDUs bigger than 8k.
    > >
    > > I think that's a faulty interpretation of what's being proposed. As
    > > in both Fibre Channel and TCP, the offered PDU size is the size
    > > you're willing to accept. It can be, and often is, completely
    > > decoupled from the size you're capable of sending. This is commonly
    > > implemented in Fibre Channel and it's not exactly rocket science.
    > >
    > > > I believe if we go this route we'll simply see the side with the
    > > > lower DataPDULength sending its "natural" size PDUs and never
    > > > sending the larger size wanted by the other side. More on this
    > > > below ...
    > > >
    > > >   >  > From my point of view the benefit of asymmetric PDU sizes
    > > >   >  > would have to be very large to make it worth the extra
    > > >   >  > complexity in buffer management code alone.
    > > >   >
    > > >   >  From the vantage point of an iSCSI HBA it doesn't seem
    > > >   >  all that hard.
    > > >   >
    > > >
    > > > Well, it seems to me that, faced with a peer with a different max
    > > > PDU size, there are relatively few ways to proceed.
    > >
    > > Again, I believe it's fairly straightforward, and there are
    > > real-life examples that work exactly this way. The only unfortunate
    > > thing is that it's been proposed at this late date. In hindsight,
    > > it's a feature with an obvious benefit.
    > >
    > > > If the peer has a lower max PDU size there are two choices. Use
    > > > two buffer pools, one for receive set to the local max PDU size,
    > > > and one for send set to the peer's max PDU size. This is where the
    > > > extra buffer management complexity comes in. Or, use one buffer
    > > > pool and simply partially fill the buffers for sending. This is
    > > > the easy case.
    > >
    > > I don't see what buffer size has to do with PDU size that's not
    > > implementation dependent. The PDU content is going to be fragmented
    > > anyway (i.e. network header, iSCSI header, data, digest, ...) so
    > > the problem you describe exists regardless.
    > >
    > > > If the peer has a larger max PDU size then either only send up to
    > > > the local PDU size, as I mentioned above, or chain buffers
    > > > together to build PDUs larger than the local max PDU size. Again,
    > > > this is where the extra buffer management complexity comes in.
    > > > Remember that by definition these chains will need to be bigger
    > > > than the largest chain size the implementation can handle, unless
    > > > for some reason the DataPDULength sent was chosen at some
    > > > arbitrary size smaller than the implementation's maximum.
    > >
    > > I think SW iSCSI stacks are going to want to use big PDUs and, in
    > > general, iSCSI HBAs are going to use small ones. I think the SW
    > > stacks are going to set up their buffer structures to use large
    > > PDUs. I don't think (although it's certainly possible) that they
    > > are going to dynamically adjust their buffer sizes for each iSCSI
    > > session. So, in the scenario where a SW iSCSI stack is talking to
    > > an iSCSI HBA, its buffering is going to be used inefficiently in
    > > both directions. If the data PDU length is negotiated separately
    > > for each direction, then the buffering on the SW side is being used
    > > inefficiently in only one direction. Both sides win with this
    > > behavior.
    > >
    > > Dave
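
    The buffering argument above reduces to simple arithmetic (the 64k/8k
    figures reuse the example earlier in the thread; the utilization
    metric itself is only an illustration):

```python
def buffer_utilization(buffer_size, pdu_size):
    """Fraction of each fixed-size buffer actually carrying PDU data."""
    return pdu_size / buffer_size

SW_BUF, HBA_LIMIT, SW_LIMIT = 65536, 8192, 65536

# Symmetric negotiation: both directions collapse to min(64k, 8k) = 8k,
# so the software stack's 64k buffers run at 12.5% in both directions.
symmetric = buffer_utilization(SW_BUF, min(SW_LIMIT, HBA_LIMIT))

# Per-direction limits: the software stack still *receives* 64k PDUs
# at full utilization, and only the send direction drops to 8k.
recv_util = buffer_utilization(SW_BUF, SW_LIMIT)
send_util = buffer_utilization(SW_BUF, HBA_LIMIT)
print(symmetric, recv_util, send_util)  # 0.125 1.0 0.125
```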
    
    --
    ##################################
    Santosh Rao
    Software Design Engineer,
    HP-UX iSCSI Driver Team,
    Hewlett Packard, Cupertino.
    email : santoshr@cup.hp.com
    Phone : 408-447-3751
    ##################################
    
    


Last updated: Fri Oct 05 20:17:22 2001