
    Re: Some Thoughts on Digests



    
    ----- Original Message -----
    From: <julian_satran@il.ibm.com>
    To: <ips@ece.cmu.edu>
    Sent: Tuesday, December 05, 2000 1:24 PM
    Subject: Re: Some Thoughts on Digests
    
    
    >
    >
    > Jim,
    >
    > We will consider again the selection of the polynomial.
    > As for your arguments for the length - the object of the digest is to get
    > the probability of an undetected error down to a number acceptable for
    > today's networks.
    > The probability of an error going undetected is 2**-32 *
    > BER*length-of-block.
    > Under the generally accepted practice of considering all errors as
    > independent events,
    > the block length protected by a single CRC is limited to a specific length
    > (2 to 8k is the usual
    > figure given for fiber today).
    
    This sounds a little garbled to me.  I question whether the
    probability of an undetected error is linearly proportional to
    block length, but assuming it is, that argues AGAINST segmenting
    the block: the probability of an undetected error will of course
    scale linearly with the number of segments, so you are no better off.
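    To make that concrete, here is a quick sketch in Python of the linear
    model from the quoted message.  The BER and block size are my own
    illustrative assumptions, not figures from this thread.

```python
# Under the linear model P = 2**-32 * BER * L (from the quoted message),
# segmenting a block does not change the total undetected-error probability.
# BER and L below are illustrative assumptions, not figures from the thread.

BER = 1e-12            # assumed raw bit error rate
L = 8 * 1024 * 8       # assumed 8 KB block, in bits

p_whole = 2**-32 * BER * L

k = 4                  # split into 4 segments, each with its own CRC-32
p_segmented = k * (2**-32 * BER * (L / k))

# The k and 1/k cancel: no improvement from segmenting under this model.
assert p_segmented == p_whole
```

    The k segments each contribute 1/k of the original probability, so the
    sum is unchanged -- which is the point being made above.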
    
    The argument for using 2 to 8K blocks on fiber is a bit different.
    If the number of bits in a block is less than 2**32, then it
    requires at least 3 bits in error for an undetected error to occur,
    because a CRC-32 will catch ALL 1- and 2-bit errors.  Therefore, if
    an assumption is made that bit errors are independent events
    (i.e. the probability of any bit being in error is independent
    of what other bits are in error), the probability of an
    undetected error looks like:
    
            ProbUndectErr = 2**-32 * (BER * blockLength) ** 3.
    
    The cube relationship DOES in fact argue for segmenting to
    get the absolute minimum undetected error rate.  But no
    matter how big the block is, the probability of an undetected
    error has an upper bound of 2**-32.
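    Both claims are easy to check numerically.  The sketch below (Python;
    the block size and BER are my own illustrative assumptions) spot-checks
    that zlib's CRC-32 detects sampled 1- and 2-bit flips -- the guarantee
    above is that it detects ALL of them for blocks under 2**32 bits -- and
    shows the factor-of-4 gain the cube model predicts for splitting a
    message in two.

```python
import math
import os
import random
import zlib

# Spot-check (random samples, not a proof): flipping one or two bits in a
# block always changes the CRC-32.  The 2 KB block size is an assumption.
block = os.urandom(2048)
good_crc = zlib.crc32(block)

def flip(buf, bit):
    out = bytearray(buf)
    out[bit // 8] ^= 1 << (bit % 8)     # flip a single bit
    return bytes(out)

nbits = len(block) * 8
for _ in range(1000):
    b1, b2 = random.sample(range(nbits), 2)
    assert zlib.crc32(flip(block, b1)) != good_crc            # 1-bit error
    assert zlib.crc32(flip(flip(block, b1), b2)) != good_crc  # 2-bit error

# Cube model: P = 2**-32 * (BER * L)**3.  Splitting into two segments gives
# 2 * 2**-32 * (BER * L/2)**3 = P / 4 -- an improvement, but the floor of
# 2**-32 per check never moves.
BER = 1e-12                             # assumed bit error rate
L = 64 * 1024 * 8                       # assumed 64 KB message, in bits
p_whole = 2**-32 * (BER * L) ** 3
p_split = 2 * 2**-32 * (BER * L / 2) ** 3
assert math.isclose(p_split, p_whole / 4, rel_tol=1e-9)
```

    Each doubling of the segment count buys another factor of 4 under this
    model, which is the "cube relationship" argument for segmenting.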
    
    In the case of iSCSI, though, these assumptions do not hold.
    
    1.    Many links use an encoding such as 8b/10b, meaning that a
          single error event can corrupt up to 8 data bits.
    
    2.    The only errors that the iSCSI CRC will see are those
          that escaped both the link-level CRC and the TCP checksum.
          These errors will have a very different distribution than
          raw fiber errors.
    
    3.    An undetected error rate of 2**-32 of only those errors
          that escape both the link CRC and the TCP checksum
          is acceptable.
    
          I recently saw a figure that 1 in 30000 TCP segments
          has a checksum error.   Assume this is about 1 error
          in every 50MB of data transferred.  Multiply by 2**16,
          and assume that an undetected (by TCP) error will occur
          once in about 3.3TB of data.  Multiply again by
          2**32 to account for a CRC-32 and the undetected error
          rate is once in 1.4E22 bytes.  On a 10Gb/s link
          running continuously, this corresponds to an MTBF of
          over 300,000 years.  There are certainly failure modes
          that will cause undetected errors far more frequently than
          this, and it does not seem justified to segment the
          message for CRC computation in a questionable attempt
          to improve the reliability of the CRC.
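    Redoing that back-of-envelope arithmetic (the 1500-byte TCP segment
    size is my assumption; the 1-in-30000 figure is from the text):

```python
# Back-of-envelope from the paragraph above.  The 1500-byte TCP segment
# size is an assumption; 1-in-30000 is the figure quoted in the text.
seg_size = 1500
detected_every = 30_000 * seg_size           # ~45 MB per detected error
tcp_missed_every = detected_every * 2**16    # 16-bit checksum: ~3 TB
crc_missed_every = tcp_missed_every * 2**32  # add a CRC-32: ~1.3E22 bytes

link_bytes_per_s = 10e9 / 8                  # 10 Gb/s, running continuously
mtbf_years = crc_missed_every / link_bytes_per_s / (365 * 24 * 3600)
print(f"undetected once per {crc_missed_every:.1e} bytes, "
      f"MTBF ~ {mtbf_years:,.0f} years")
```

    With these assumptions the MTBF comes out in the low hundreds of
    thousands of years -- comfortably past the point where other failure
    modes dominate.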
    
    
    > And yes we are probably overdesigning for
    > large bursts but we have no idea at this level if there will be any
    > mitigating interleaving coding underneath and long error bursts are the
    > most common form of error in networks.
    
    If long error bursts are the most common, this argues AGAINST segmenting.
    The benefit of segmenting is in the case of independent bit errors,
    NOT burst errors.
    
    >
    > Julo
    >
    
    
    

