
    RE: iSCSI: leading zeros and decimal-encoded binary strings



    Bill,
    
    There is an additional thing I don't like about decimal-encoded binary strings: we will have to put in support for a feature that is unlikely to be used. The only items defined as binary-values are the CHAP and SRP binary strings, which will usually be too long for decimal encoding anyway.
    
    Take CHAP, for instance. Its C and R are defined as binary-values. R is 16 bytes, so it can't be decimal encoded, and a C shorter than 8 bytes seems unlikely.
    
    My preference would be to replace the definition of binary-value with the current definition of large-binary-value, replace large-binary-value with binary-value (in 10.2 and 10.3), and get rid of regular-binary-value and large-binary-value.
    
    Regards,
    Pat
    
    -----Original Message-----
    From: Bill Studenmund [mailto:wrstuden@wasabisystems.com]
    Sent: Tuesday, July 02, 2002 2:56 PM
    To: ips@ece.cmu.edu
    Subject: iSCSI: leading zeros and decimal-encoded binary strings
    
    
    Here's the heart of what I dislike about decimal-encoded binary strings:
    how do you convey the length of the binary string when you have leading
    zero bits?
    
    -14 says:
    
    "When used to encode binary strings decimal constants have an implicit
    byte-length that is the minimum number of bytes needed to represent the
    base2 encoding of the decimal number."
    
    How do leading zeros impact the "base2 encoding .."?
    
    Consider "012". Obviously the number is 12. The question is how many bytes
    are in the binary string? Is it one, since a number of three decimal digit
    numbers fit in one byte (up to "255"), or is it two bytes in the string
    since other three-decimal-digit numbers won't fit in one byte ("256"
    through "999")?
    
    The spec isn't clear, at least not to me.
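
    To make that concrete, here's a minimal C sketch (the helper name is
    mine, not draft text) of the -14 "minimum number of bytes" rule. It
    only ever sees the numeric value, so the leading zero of "012" is
    simply lost before the length is computed:

        #include <stdio.h>
        #include <stdlib.h>

        /* Minimum bytes needed for the base-2 encoding of a value,
         * per the -14 wording.  Leading zeros in the decimal text
         * never reach this computation. */
        static int min_bytes(unsigned long long v)
        {
            int n = 1;                  /* zero still occupies one byte */
            while (v > 0xFF) {
                v >>= 8;
                n++;
            }
            return n;
        }

        int main(void)
        {
            printf("%d\n", min_bytes(strtoull("012", NULL, 10)));   /* 1 */
            printf("%d\n", min_bytes(strtoull("0012", NULL, 10)));  /* 1 */
            return 0;
        }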
    
    Ok, so one- or two-byte strings aren't that likely. The problem, though,
    is that byte sizes don't line up nicely with decimal digit counts (since
    2^x = 10^y only holds for x=0, y=0, AFAIK).
    
    Since we can't force binary strings not to start with a zero byte (or
    two), we need to support some way of communicating how long such a
    string is.
    
    Two suggestions:
    
    1) Rip out the decimal binary strings bit
    
    2) Come up with a table saying that if you have so many digits (counting
    leading zeros), you have a binary string of so many bytes.
    
    Examples:
    
    String:		Bytes in binary string:
    
    00000000000000000012	  8
    00072057594037927935	  8
    
       00000000000000012	  7
       00281474976710655	  7
    
         000000000000012	  6
         001099511627775	  6
    
           0000000000012	  5
           0004294967295	  5
    
              0000000012	  4
              0016777215	  4
    
                00000012	  3
                00065535	  3
    
                   00012	  2
                   00255	  2
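
    For what it's worth, here's a rough C sketch of what that table boils
    down to (the code is mine, not proposed spec text, and it assumes a
    sender always pads to the full width shown; the 3-digit/1-byte row is
    implied by the same pattern). The digit counts are just the decimal
    widths of the all-ones value for each byte length, so the lookup is a
    small array:

        #include <string.h>

        /* Decimal digits in 2^(8n) - 1 for n = 1..8 bytes:
         * 255, 65535, 16777215, 4294967295, 1099511627775, ... */
        static const int digits_for_bytes[8] =
            { 3, 5, 8, 10, 13, 15, 17, 20 };

        /* Byte length implied by the digit count (leading zeros
         * included), or -1 if the count matches no byte length
         * (e.g. 4 digits would be ambiguous, so reject it). */
        static int bytes_from_digits(const char *s)
        {
            size_t len = strlen(s);

            for (int b = 0; b < 8; b++)
                if (len == (size_t)digits_for_bytes[b])
                    return b + 1;
            return -1;
        }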
    
    The advantage of such a method is that you can easily force the right
    number of leading zeros with printf format strings. For instance,
    printf("%010u", a_uint32_value) will do the right thing for a
    four-byte string.
    
    But we need to communicate that in the spec, and we don't.
    
    Thoughts?
    
    Take care,
    
    Bill
    
    P.S. I will be on vacation starting tomorrow through the weekend.
    

