
    RE: Requirements specification



    David,
    
    Although read channels and data densities have improved at a steady pace,
    as they have for the past quarter century, the mechanics of the drives
    have not kept up.  Today's drives can deliver 320 Mbit/second of data on
    the outside cylinders.  Improving the mechanics comes at a high price in
    power and cost.  Many trade-offs are made in this struggle, including the
    physical size of the drive and the number of heads and disks, all with
    substantial impact in a highly competitive market.  The cost/volume trend
    takes us toward single-disk designs, which increases access time even as
    the read channel data rate increases.
    
    Would it be more logical to design a system where everything is aimed at
    exploiting the high momentary data rate offered by the read channel, or
    one that offers the same throughput using more drives, each with an
    interface bandwidth that is restricted relative to these read channel
    rates?  The advantage of the latter approach shows up with small random
    traffic.  With more devices, redundancy is easily achieved, and parallel
    access improves performance by spreading activity over more spindles.
    Remember that bandwidth aggregation is provided by the switch, not by the
    individual device.  Each device would see only its own traffic, while the
    client could see the traffic of hundreds of these devices (a sketch of
    this aggregation point follows below).  Regardless of the nature of the
    traffic, performance would be more uniform.
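
    As a rough sketch of that aggregation point (the 100 Mbit and 1 Gbit
    figures are the Fast Ethernet drives and Gigabit client mentioned later
    in this note; the device count is an arbitrary illustration, not a
    number from this thread):

        # Python sketch: the switch, not any single device, aggregates bandwidth.
        device_link_mbit = 100     # each 'restricted' drive interface (Fast Ethernet)
        client_link_mbit = 1000    # the client's 1 Gbit Ethernet port
        n_devices = 10             # assumed count, for illustration only

        # Each device sees only its own share of the traffic ...
        per_device = min(device_link_mbit, client_link_mbit / n_devices)
        # ... while the client can see the combined traffic of all devices,
        # capped only by its own link.
        aggregate = min(client_link_mbit, n_devices * device_link_mbit)

        print(f"per device: {per_device:.0f} Mbit/s, client: {aggregate:.0f} Mbit/s")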
    
    An 8 ms access-plus-latency figure in the high-cost drives restricts the
    number of 'independent' operations averaging 64 KB to about 100 per
    second, or roughly 52 Mbit per second (the arithmetic is sketched below).
    Such an architecture of 'restricted' drives would scale, whereas the
    controller-based approach does not and is vulnerable.  An independent
    nexus at the LUN is the only design that offers the required scaling and
    configuration flexibility.  Other than your comment about keeping up with
    the read channel, I would tend to agree.  Several Fast Ethernet disks
    combined at a 1 Gbit Ethernet client make sense in cost, performance,
    capacity, reliability, and scalability.  Protocol overhead should be
    addressed in the protocol itself; there are substantial improvements to
    be made in that area.
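
    For what it's worth, the arithmetic behind those figures is roughly the
    following (a sketch only; the 8 ms and 64 KB values are the ones used
    above):

        # Python sketch of the seek-limited throughput figure quoted above.
        access_s = 0.008            # 8 ms average access + latency per random I/O
        ops_ceiling = 1 / access_s  # ~125 ops/s ceiling; ~100 ops/s once transfer
                                    # and command overhead are included
        xfer_bits = 64 * 1024 * 8   # 64 KB average transfer, in bits

        mbit_per_s = 100 * xfer_bits / 1e6   # using the ~100 ops/s figure
        print(f"ceiling: {ops_ceiling:.0f} ops/s; "
              f"at ~100 ops/s: ~{mbit_per_s:.0f} Mbit/s per drive")   # ~52 Mbit/s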
    
    If the desire is to maintain the existing architecture, then Fibre Channel
    encapsulation provides that function as well and solves many of the
    protocol issues plaguing the iSCSI and SEP attempts.
    
    Doug
    
    -----Original Message-----
    From: owner-ips@ece.cmu.edu [mailto:owner-ips@ece.cmu.edu] On Behalf Of
    David Robinson
    Sent: Thursday, August 03, 2000 4:43 PM
    To: ips@ece.cmu.edu
    Subject: Requirements specification
    
    <snip>
    The second area that I brought up was the requirement of one session
    per initiator-target pair instead of one per LUN (i.e. SEP).  I am willing
    to accept the design constraint that a single target must address
    10,000 LUNs, which can be done with a connection per LUN.  However,
    statements about scaling much higher, into the range where the 64K port
    limitation appears, do not seem reasonable to me.  Given the bandwidth
    available on today's and near-future drives, which will easily
    exceed 100 MBps, I can't imagine designing and deploying storage systems
    with over 10,000 LUNs but only one network adapter.  Even with 10+ Gbps
    networks this will be a horrible throughput bottleneck, and it will
    get worse as storage adapters appear to be gaining bandwidth faster than
    networks.  Therefore requiring more than 10,000 doesn't seem necessary.
    [The scale involved is sketched in the note below the quote.]
    
    <snip>
    	-David
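
    [For scale, the figures quoted above work out roughly as follows.  This
    is a sketch only, taking every LUN to be a drive capable of the 100 MBps
    mentioned, all behind a single 10 Gbps adapter:]

        # Python sketch of the bottleneck described in the quoted text.
        luns = 10_000
        per_lun_MBps = 100          # "drives that will easily exceed 100 MBps"
        adapter_gbps = 10           # "10+ Gbps networks"

        offered_gbps = luns * per_lun_MBps * 8 / 1000   # aggregate drive bandwidth
        print(f"drives offer ~{offered_gbps:.0f} Gbps vs a {adapter_gbps} Gbps adapter")
        # => ~8000 Gbps offered vs 10 Gbps available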
    
    
    

