
    Re: iSCSI authentication requirements



    On Thu, Mar 28, 2002 at 08:13:42PM -0500, David Jablon wrote:
    > Ok.  On the one hand, you sure do make that attack sound hard.
    > But stated in other words, an attacker can inject just a couple
    > bogus packets, a mere hiccup from the user's view, and then he
    > gets unlimited offline guessing to crack the password.  Some
    > might consider that a *bad thing* indeed.
    
    Nope, not that easy.  As I said, the attacker has to be on the network
    communications path between the client and the server.  Also, the
    attacker has to prevent the original packets from the client to the
    server from getting through.  This is not necessarily trivial --- it's
    *not* just a matter of "injecting a few bogus packets".
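
To make the threat under discussion concrete: once an attacker has
captured a single challenge/response pair, password guessing becomes a
purely local computation.  Here's a minimal sketch using a simplified
CHAP-style hash (the scheme and names are illustrative, not the exact
iSCSI exchange):

```python
import hashlib
import hmac

def chap_response(password: str, challenge: bytes) -> bytes:
    # Simplified CHAP-style response: hash of the shared secret and the
    # challenge.  (Real CHAP also mixes in an identifier byte; omitted here.)
    return hashlib.md5(password.encode() + challenge).digest()

def offline_guess(challenge: bytes, captured: bytes, wordlist):
    # Every guess is a single local hash computation -- no further network
    # traffic, no rate limiting, and no audit trail on the server.
    for guess in wordlist:
        if hmac.compare_digest(chap_response(guess, challenge), captured):
            return guess
    return None
```

The dispute is not over whether offline guessing is cheap (it is), but
over how hard it is to get into the position to capture a usable
exchange in the first place.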
    
    > Furthermore, limiting the attack window to *just* the first time may
    > not significantly decrease the problem.  It depends on other
    > assumptions.  In some cases it may be like locking all but one door
    > of your house.  A good burglar will try them all, or wait for the
    > appropriate time, etc.  In contrast, a strong password method closes
    > all these doors to offline cracking of network messages, forcing the
    > burglar to try something different.
    
    Again, nope.  It's like saying that while the house is being built,
    there is a small window when the deadbolt hasn't been installed yet.
    But once the deadbolt has been installed, the burglar can't get in.
    
    Given the usage patterns of clients talking to disks, most of the time
    you're talking to the same disk where you had previously stored data.
    So most of the time, it's *not* the first time a client is talking to
    a disk, and by definition, that vulnerability only happens once.  For
    that reason, limiting the vulnerability to an active attack during the
    initial attack window is quite significant.
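
This "only the first time" property is the same trust-on-first-use
model that ssh popularized: pin the server's key on first contact, and
reject any later change.  A minimal sketch (the cache format and
function names are assumptions for illustration):

```python
import hashlib
import json
import os

def key_fingerprint(pubkey: bytes) -> str:
    return hashlib.sha256(pubkey).hexdigest()

def check_target_key(target: str, pubkey: bytes, cache_path: str) -> bool:
    """Trust-on-first-use: pin the key seen on first contact with a
    target; any later mismatch signals a possible man-in-the-middle."""
    cache = {}
    if os.path.exists(cache_path):
        with open(cache_path) as f:
            cache = json.load(f)
    fp = key_fingerprint(pubkey)
    if target not in cache:
        # First contact: this is the one-time attack window described
        # above -- an active attacker here could pin his own key.
        cache[target] = fp
        with open(cache_path, "w") as f:
            json.dump(cache, f)
        return True
    # Every later contact: the pinned fingerprint must match exactly.
    return cache[target] == fp
```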
    
    Also, note that if the public key of the server is (optionally)
    certified by some well-known CA, then even the initial attack window
    (if people think it's important) can be avoided.  And if you think
    that's impractical, note that a significant amount of e-commerce is
    done using precisely this level of protection.
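
As a concrete illustration of that CA-based variant: the client
refuses to talk to a server whose key isn't certified, so even the
first contact is protected.  The sketch below uses TLS and the system
CA bundle purely as an analogy for the mechanism, not as the actual
iSCSI negotiation:

```python
import socket
import ssl

def fetch_certified_key(host: str, port: int = 443) -> bytes:
    # create_default_context() loads the system's well-known CA bundle and
    # enables both certificate verification and hostname checking, so an
    # active attacker presenting an uncertified key fails the handshake
    # even on the very first connection.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert(binary_form=True)  # server cert, DER-encoded
```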
    
    > Again, this is false.  Even that last requirement can be met.
    > While it is true that free use and distribution products like Linux
    > can pose a different problem for a patent holder, it is surmountable.
    > And finally, as you almost point out, all these interoperability
    > problems go away when a patented technique is standardized as
    > a SHOULD implement method.
    
    Nope.  Security mechanisms have to be a MUST implement.  If we have a
    non-encumbered MUST implement mechanism, then whether or not we have
    an encumbered SHOULD mechanism at some level is less important.  Most
    people will probably depend on the MUST implement for
    interoperability.  So then, people will only pay whatever costs (in
    money, legal uncertainty, etc.) come with using said encumbered
    technology if it really provides enough value that it's worth the
    cost.
    
    In some sense, this might be the best result.  As I said, it really is
    all about engineering tradeoffs.  Are the benefits worth the costs?
    If there is a MUST implement which is unencumbered, then instead of
    arguing about the relative costs and benefits, vendors will be able to
    decide for themselves whether or not it's worth it to trade in the
    encumbered technology.
    
    > * I disagree with the concept of making *uninformed* business/engineering
    > tradeoffs.
    
    Ultimately, we always have to make decisions based on partial
    information.  There have been cases where a patent holder has
    publicly stated during the standardization process that the patent
    will be free for people implementing the specific protocol; in that
    case, the working group can make decisions based on that knowledge.
    
    But if the statement which is made is only the stock "reasonable and
    non-discriminatory", that can mean many things.  It can mean a no-cost
    license, or it could mean "everyone gets to pay $10 million" --- terms
    which, while non-discriminatory, could lock out all but the largest
    companies, which many would say is a Bad Thing.
    
    But just because we don't know doesn't mean that we should freeze and
    not make any decision.  At the same time, we can't assume that
    companies will be utterly benign with their patent licensing policies.
    All we can do is make the best possible decision we can based on the
    information we have.  And that's something that we always have to do,
    no matter what the field of endeavor.
    
    > * I think it's up to the working group members to decide what technology is
    > truly equivalent to an optimal case for its purposes, without undue pressure
    > or interference from "outsiders" (including me too of course).
    
    The security area of the IETF, and the security ADs of the IESG, are
    not "outsiders".  They are part of the IETF community.  And the IETF
    community *has* made some global decisions for the entire IETF.  For
    example, in the past, the IETF as a whole decided that strong
    cryptography would be required despite export control regulations,
    over the objections of some working groups which were dominated by
    vendors motivated by a desire to ship products that could be exported
    outside of the U.S.  The general policy that strong cryptography
    would be required for all working groups, regardless of any nation's
    export control policy, was termed the "Danvers Doctrine", after the
    IETF plenary meeting in Danvers, Massachusetts, where it was clear
    this consensus had been reached.
    
    More recently, the IETF decided that standardizing wiretap-friendly
    provisions into protocols was bad from an engineering standpoint, and
    so this was something that the IETF would not engage in.
    
    So there is in fact a rich history of the IETF community as a whole
    (and not just an individual working group) making statements which
    take into account the entire Internet community, sometimes to the
    frustration of parties whose priorities are solely based on monetary
    motivations, or by people who only care about law enforcement's
    concerns.  Personally, I believe this is a Good Thing.
    
    						- Ted
    
    P.S.  I will note that in the case of public key technology, it seems
    pretty clear that the patent issue very much did inhibit its
    deployment.  Most recently at this Minneapolis IETF meeting, we
    enjoyed the integration of dynamic DNS updates and DHCP, which
    involved the use of public key technology.  The standards involved
    were developed while the RSA patent was in force, but in fact,
    widespread deployment of RSA technology in IETF protocols really
    didn't happen before the RSA patent expired.  And at this Minneapolis
    IETF meeting, the dynamic DNS update demonstration was spearheaded by
    open source implementations: the ISC dhcp client, the ISC dhcp
    server, and the ISC BIND implementation were all involved.  None of
    these open source implementations could have used the RSA technology
    while the patent was still in force.
    
    Make no mistake.  Patents have their costs.  If you want your protocol
    to be widely deployed across the Internet, use of patented technology
    (unless that patent has been freely licensed) has historically been a
    strike against your protocol.
    



Last updated: Sat Mar 30 12:18:16 2002