    RE: Requirements specification



    At 04:13 PM 8/9/00 -0700, Douglas Otis wrote:
    > > -----Original Message-----
    > > From: owner-ips@ece.cmu.edu [mailto:owner-ips@ece.cmu.edu]On Behalf Of
    > > Stephen Bailey
    > > Sent: Wednesday, August 09, 2000 2:52 PM
    > > To: ips@ece.cmu.edu
    > > Subject: Re: Requirements specification
    > >
    > >
    > > > I don't think network channel speeds are increasing briskly.
    > > > Ethernet was 10 Mb/s in 1980, 100 Mb/s in 1990, and 1 Gb/s in
    > > > 2000. These are slow infrastructural step functions.
    >
    >You're comparing a times two rate to a times 10 rate.  You should also add
    >10 Gb/s in 2001, and 40 Gb/s in 2002.
    
    The 10 GbE standard is expected to be finalized in March 2002.  100 Gb/s
    optics have been demonstrated, and one would expect a jump to that rate as
    the next wide-area / metro solution, and possibly as the data center
    backbone via 100 GbE, within 5-7 years after 10 GbE is finalized.  These
    are fabrics that one attaches to at a speed determined by the adapter /
    chipset used within the server or storage device.
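
    As a rough back-of-envelope illustration of the rate comparison above
    (a sketch using only the figures quoted in this thread), the implied
    compound annual growth rates can be computed directly:

        #include <math.h>
        #include <stdio.h>

        /* Compound annual growth rate: (end/start)^(1/years) - 1.
         * Speeds (Mb/s) and years are the figures quoted in this thread. */
        static double cagr(double start, double end, double years)
        {
            return pow(end / start, 1.0 / years) - 1.0;
        }

        int main(void)
        {
            /* 10 Mb/s (1980) -> 1 Gb/s (2000): roughly 10x per decade */
            printf("1980-2000: %.0f%% per year\n",
                   100.0 * cagr(10.0, 1000.0, 20.0));

            /* 1 Gb/s (2000) -> 40 Gb/s (2002), the rate suggested above */
            printf("2000-2002: %.0f%% per year\n",
                   100.0 * cagr(1000.0, 40000.0, 2.0));
            return 0;
        }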
    
    >iSCSI does not provide any solutions if you consider the bottleneck in this
    >interface is the memory.  iSCSI requires ordered delivery, multiple memory
    >copies, split header aggregation, etc.   All these things prevent iSCSI from
    >being successful even with dedicated hardware.
    
    All external secondary fabrics, independent of the protocol, are limited
    by the memory bandwidth, which has many components: coherency overheads,
    D-cache miss rates, actual data manipulation, etc.  Use of iSCSI is not
    the issue per se; the issue is the impact of the protocol and its
    processing on that memory bandwidth.  Even technologies like InfiniBand
    or VI are similarly limited by the memory bandwidth, have similar levels
    of complexity and resource requirements, and do not provide any solutions
    per se.  The key is to off-load as much of the protocol processing as
    possible so that the application is the main consumer of the memory
    bandwidth.  iSCSI does provide the ability to implement a protocol
    off-load solution.
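
    To make the memory-bandwidth point concrete, here is a back-of-envelope
    sketch; the copy counts and the bandwidth figure are assumptions chosen
    for illustration, not measurements:

        #include <stdio.h>

        /* Model: count memory-bus crossings per byte of wire data.
         *   - the adapter's DMA write into host memory   : 1 crossing
         *   - each software copy (read + write)          : 2 crossings
         *   - the application's read of the final buffer : 1 crossing
         * The rate the application can sustain is then roughly the usable
         * memory bandwidth divided by the crossings per byte. */
        static double app_rate(double mem_bw, int sw_copies)
        {
            double crossings = 1.0 + 2.0 * sw_copies + 1.0;
            return mem_bw / crossings;
        }

        int main(void)
        {
            const double mem_bw = 1000.0; /* assumed usable memory bandwidth, MB/s */

            /* e.g. a host-resident stack with two protocol-level copies versus
             * an off-load adapter that places data directly in its final buffer. */
            printf("2-copy host stack: ~%.0f MB/s left for application data\n",
                   app_rate(mem_bw, 2));
            printf("full off-load    : ~%.0f MB/s left for application data\n",
                   app_rate(mem_bw, 0));
            return 0;
        }

    Under these assumptions, a fully off-loaded path leaves roughly three times
    as much memory bandwidth for the application as a two-copy host stack does.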
    
    >Odd, memory is the bottleneck.  A memory effective solution will be found,
    >but do not expect iSCSI to lead.  If you have a high speed channel, there is
    >no reason not to aggregate slower devices.  An interim cache will not
    >improve reliability, scalability, or availability.  If this network is
    >feeding a high end server, the server backplane will always offer higher
    >performance and hence the best location for any additional memory and state
    >information.
    
    Whether one integrates the server point of attachment with the memory 
    controller is independent of this protocol discussion.  The issue is how 
    much protocol-specific memory must be accessed, and whether that state is 
    best kept in off-chip memory at the server point of attachment or in the 
    server's main memory.  There are pros and cons for each, given that not 
    all memory solutions provide the needed bandwidth or latency (latency is 
    critical for protocol state tables, etc.).  From a spec point of view, 
    all of the above is implementation-specific, but implementation impacts 
    should be evaluated to determine whether a given feature within the 
    specification is practical to implement with the technology expected in 
    the product deployment time frame.
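
    As a rough illustration of why latency matters for protocol state tables,
    consider the per-frame time budget; the link speed, frame size, and memory
    latency below are assumptions chosen for illustration:

        #include <stdio.h>

        int main(void)
        {
            const double link_bps   = 10e9;      /* 10 Gb/s link */
            const double frame_bits = 1500 * 8;  /* full-size Ethernet frame */
            const double mem_ns     = 100.0;     /* assumed off-chip state access, ns */

            double frames_per_sec = link_bps / frame_bits;  /* ~833 k frames/s */
            double budget_ns      = 1e9 / frames_per_sec;   /* ~1200 ns per frame */

            printf("frames per second : %.0f\n", frames_per_sec);
            printf("time per frame    : %.0f ns\n", budget_ns);
            printf("state accesses fitting in the budget: %.0f\n",
                   budget_ns / mem_ns);
            return 0;
        }

    Only a handful of off-chip state-table accesses fit in each frame's budget,
    which is why the bandwidth and latency of whichever memory holds the
    protocol state need to be weighed when judging implementation practicality.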
    
    Mike
    
    

