
ACARD/SSD vs. 1Gb/10Gb Ethernet

Posted: Wed Dec 06, 2017 10:41 pm
by jwhat
Hi Performance Testers,

Having done the SSD tests, I was curious to see how the Onyx350/4 would do on an NFS share over Gigabit Ethernet (1GbE).

The NFS server is an 8-core Apple Xserve with 10GbE into a 10GbE switch, which then connects via a 2x1GbE LAG group to a 1GbE switch.

So the topology is:

<XServe + 10Gbe> <-----> <Netgear 10Gbe Switch> <--2x1Gbe LAG--> <Netgear 1Gbe Switch> <----> <Apple 1Gbe> <----> <Onyx350/4+1Gbe-tg0>
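
For reference, the test file lives on a plain NFS mount on the IRIX side; something like the following would reproduce the setup (server name, export path and mount point are placeholders, not my actual configuration, and the rsize/wsize values are just sensible starting points):

Code:

# on the Onyx350 (IRIX client); xserve:/export/test and /mnt/xserve are placeholders
mkdir -p /mnt/xserve
mount -o vers=3,proto=tcp,rsize=32768,wsize=32768 xserve:/export/test /mnt/xserve
cd /mnt/xserve
diskperf -W -D -n "1Gb Ethernet NFS" -c4m test4m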

Here is the initial test result:

Code:

dubious 81# diskperf -W -D -n "1Gb Ethernet NFS" -c4m test4m
#---------------------------------------------------------
# Disk Performance Test Results Generated By Diskperf V1.2
#
# Test name     : 1Gb Ethernet NFS
# Test date     : Thu Dec  7 17:17:17 2017
# Test machine  : IRIX64 dubious 6.5 01090133 IP35
# Test type     : XFS data subvolume
# Test path     : test4m
# Request sizes : min=16384 max=4194304
# Parameters    : direct=1 time=10 scale=1.000 delay=0.000
# XFS file size : 4194304 bytes
#---------------------------------------------------------
# req_size  fwd_wt  fwd_rd  bwd_wt  bwd_rd  rnd_wt  rnd_rd
#  (bytes)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)
#---------------------------------------------------------
      16384   21.81   28.80   23.39   26.95   22.14   27.29
      32768   32.65   35.96   34.55   35.19   30.80   36.07
      65536   32.69   41.47   32.33   41.09   31.24   41.37
     131072   32.08   36.90   32.24   38.03   29.54   36.17
     262144   31.53   35.87   32.42   37.56   32.64   36.56
     524288   30.97   35.60   32.68   39.09   32.62   36.80
    1048576   30.91   35.95   31.32   36.85   30.93   36.07
    2097152   29.18   36.60   32.08   41.71   29.70   36.64
    4194304   30.18   36.14    0.00    0.00   30.07   36.36


The same test again, now with one switch hop removed:

<XServe + 10Gbe> <-----> <Netgear 10Gbe Switch> <--2x1Gbe LAG--> <Netgear 1Gbe Switch> <----> <Onyx350/4+1Gbe-tg0>

No real difference, and still well below the ~100MB/sec the link can deliver:

Code:

dubious 83# diskperf -W -D -n "1Gb Ethernet NFS" -c4m test4m
#---------------------------------------------------------
# Disk Performance Test Results Generated By Diskperf V1.2
#
# Test name     : 1Gb Ethernet NFS
# Test date     : Thu Dec  7 17:33:46 2017
# Test machine  : IRIX64 dubious 6.5 01090133 IP35
# Test type     : XFS data subvolume
# Test path     : test4m
# Request sizes : min=16384 max=4194304
# Parameters    : direct=1 time=10 scale=1.000 delay=0.000
# XFS file size : 4194304 bytes
#---------------------------------------------------------
# req_size  fwd_wt  fwd_rd  bwd_wt  bwd_rd  rnd_wt  rnd_rd
#  (bytes)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)
#---------------------------------------------------------
      16384   22.46   29.77   23.18   26.85   23.78   28.28
      32768   31.20   36.11   33.88   35.57   29.75   35.83
      65536   29.44   39.95   34.00   41.27   30.58   39.95
     131072   32.36   36.38   32.19   37.92   32.80   36.66
     262144   31.63   36.76   32.64   38.22   30.19   36.51
     524288   31.57   36.05   34.20   38.74   31.48   36.54
    1048576   30.87   36.60   34.82   41.28   30.29   35.94
    2097152   28.74   35.40   32.98   38.58   29.49   35.71
    4194304   30.00   35.68    0.00    0.00   29.87   35.44


But compared with UW SCSI (about 40MB/sec available on the Octane2) this looks alright, so I will also run the test on the Octane2 to see what results its 1GbE gets.

I am going to see if there is any difference between the IO9-based 1GbE (tg0) and a standalone PCI adaptor...
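
One quick sanity check (nothing exotic, just standard IRIX tools) is to snapshot the per-interface packet counters before and after a diskperf run, to confirm the NFS traffic really leaves via the interface under test:

Code:

# interfaces, addresses and cumulative packet counters
netstat -in
# check again after the run; the ipkts/opkts deltas show whether the
# traffic went over the PCI card (eg1) or the IO9 port (tg0)
netstat -in | grep eg1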

Here are the results with the eg1 interface:

Code:

dubious 14# diskperf -W -D -n "1Gb Ethernet NFS" -c4m test4m
#---------------------------------------------------------
# Disk Performance Test Results Generated By Diskperf V1.2
#
# Test name     : 1Gb Ethernet NFS
# Test date     : Thu Dec  7 20:10:30 2017
# Test machine  : IRIX64 dubious 6.5 01090133 IP35
# Test type     : XFS data subvolume
# Test path     : test4m
# Request sizes : min=16384 max=4194304
# Parameters    : direct=1 time=10 scale=1.000 delay=0.000
# XFS file size : 4194304 bytes
#---------------------------------------------------------
# req_size  fwd_wt  fwd_rd  bwd_wt  bwd_rd  rnd_wt  rnd_rd
#  (bytes)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)
#---------------------------------------------------------
      16384   14.05   18.47   16.47   18.48   16.78   18.46
      32768   15.44   24.70   18.42   24.63   18.36   24.79
      65536   18.47   36.77   18.49   36.66   18.49   36.76
     131072   18.49   48.87   18.78   48.71   18.49   48.87
     262144   18.48   48.77   18.49   48.78   18.52   48.82
     524288   18.50   48.88   18.49   48.73   18.48   48.84
    1048576   18.51   48.75   18.50   48.75   18.49   48.87
    2097152   18.48   48.87   18.49   48.70   18.48   48.80
    4194304   18.50   48.61    0.00    0.00   18.57   48.62


This is curious, as peak throughput is better than with tg0, but the results have anomalies (writes are pinned at ~18.5MB/sec while reads climb to ~49MB/sec)...

To check, I ran "* PCI" on the L1 controller, and this shows the eg1 PCI card operating at 66MHz, which is the same as the IO9 board; but as it was dedicated purely to the network I thought it might perform better. I will try the tg1 card in the IO9 in the other chassis...
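
For anyone wanting to repeat that check, it is done from the L1 system controller rather than from IRIX; roughly like this (the prompt is just an example, and brick numbering and output format vary per system):

Code:

# on the L1/L2 console (serial port, or via the L2):
001c01-L1> pci          # per the check above, the listing includes the speed each PCI card is running at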



Cheers from Oz,

jwhat.

Re: ACARD/SSD vs. 1Gb Ethernet

Posted: Thu Dec 07, 2017 10:28 pm
by tomvos
Here are the diskperf numbers for the QL12160 SCSI controller, the Acard ARS-2320 adapter and the Plextor M6Pro SSD on my Fuel 600 MHz:

Code:

Excelsior %1 diskperf -W -D -n Fuel600-M6Pro-QL12160 -t10 -c100m testfile
#---------------------------------------------------------
# Disk Performance Test Results Generated By Diskperf V1.2
#
# Test name     : Fuel600-M6Pro-QL12160
# Test date     : Sat Mar 19 16:24:36 2016
# Test machine  : IRIX64 Excelsior 6.5 07202013 IP35
# Test type     : XFS data subvolume
# Test path     : testfile
# Request sizes : min=16384 max=4194304
# Parameters    : direct=1 time=10 scale=1.000 delay=0.000
# XFS file size : 104857600 bytes
#---------------------------------------------------------
# req_size  fwd_wt  fwd_rd  bwd_wt  bwd_rd  rnd_wt  rnd_rd
#  (bytes)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)
#---------------------------------------------------------
      16384   34.18   36.00   38.77   29.25   38.68   29.30
      32768   52.71   47.60   55.40   40.33   55.16   40.64
      65536   68.55   63.80   70.10   54.48   69.93   55.04
     131072   72.88   69.93   73.14   59.60   73.07   59.89
     262144   74.82   74.69   75.02   68.07   74.93   68.32
     524288   75.25   76.82   75.33   73.15   75.34   73.45
    1048576   74.86   77.45   74.87   76.01   74.81   75.02
    2097152   73.33   78.46   73.35   76.07   73.35   76.27
    4194304   70.14   78.75   70.25   77.50   70.20   77.60

Re: ACARD/SSD vs. 1Gb Ethernet

Posted: Fri Dec 08, 2017 1:46 pm
by jwhat
Hi tomvos,

I posted my Acard ARS-2320S numbers in the other thread.

Both the Plextor M6 and the Samsung 850 are rated at around 500MB/sec, so well above what U160 SCSI can handle.

It seems like the simplest way to get good performance from a single drive.

Have you enabled the read/write cache and command tag queueing (via fx's label parameters)? If not, you can get some extra performance out of your configuration.
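
In case it helps, those settings live in fx's expert mode under the label/set/parameters menu; from memory the flow is roughly the following (the dksc(0,1,0) device is a placeholder, and the exact prompt wording varies with the drive):

Code:

# expert mode is needed to change drive parameters; back up the label first
fx -x "dksc(0,1,0)"
# then navigate: [l]abel -> [s]et -> [p]arameters
# answer "yes" to the prompts for write buffering, read-ahead caching and
# command tag queueing (CTQ), accept the suggested CTQ depth, then exit
# so the new label is written back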

I had lots of problems trying to get tg1 on the second IO9 in my setup going, so I will not do further NFS testing until the 10GbE card arrives. (I have the switch, but it is in a different location, so I will move the boxes next to it to get access to an optical port.)


Cheers,

jwhat.

Re: ACARD/SSD vs. 1Gb Ethernet

Posted: Sat Dec 09, 2017 2:26 am
by tomvos
Hi,

Thanks for the tip about the read/write cache and command tagging. I have to admit I do not know how this is set on my Fuel, but I'll check soon - perhaps it will improve the numbers a little.

Cheers,
Thomas

Re: ACARD/SSD vs. 1Gb/10Gb Ethernet

Posted: Fri Dec 15, 2017 2:20 am
by jwhat
Hi Nekochaners,

I have updated the thread title to 1Gb/10Gb Ethernet, as I got a Neterion Xframe 10GbE PCI card via eBay.

I was expecting a BIG improvement and was hoping to get better performance from 10GbE than from FC...

So here is the reality, based on a first test:

Code:

dubious 9# diskperf -W -D -n "10Gb Ethernet NFS" -c4m test4m
#---------------------------------------------------------
# Disk Performance Test Results Generated By Diskperf V1.2
#
# Test name     : 10Gb Ethernet NFS
# Test date     : Fri Dec 15 20:56:50 2017
# Test machine  : IRIX64 dubious 6.5 01090133 IP35
# Test type     : XFS data subvolume
# Test path     : test4m
# Request sizes : min=16384 max=4194304
# Parameters    : direct=1 time=10 scale=1.000 delay=0.000
# XFS file size : 4194304 bytes
#---------------------------------------------------------
# req_size  fwd_wt  fwd_rd  bwd_wt  bwd_rd  rnd_wt  rnd_rd
#  (bytes)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)
#---------------------------------------------------------
      16384   24.69   24.05   24.75   24.09   24.76   24.17
      32768   49.28   17.90   49.96   13.05   49.43   18.44
      65536   49.44   24.53   49.45   13.39   49.46   23.98
     131072   49.40   35.98   51.17   19.28   49.39   37.99
     262144   49.47   37.19   49.42   20.20   49.44   35.36
     524288   49.37   38.03   50.53   21.71   49.40   37.90
    1048576   48.76   37.55   28.01   25.03   48.96   37.30
    2097152   49.09   36.74   49.08   13.87   49.10   36.69
    4194304   49.28   35.13    0.00    0.00   49.29   36.08


50MB/sec... not bad, mind you, and now faster than the internal UW SCSI disk, but I was hoping for at least 100MB/sec.

Mind you, this is without any substantial tuning, and I am not sure about the optical cabling (I have just reused some existing FC cabling I had, and am not sure it gives a good signal at 10Gb).
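
Before blaming the cabling, it would be worth measuring raw TCP throughput with NFS and the filesystem out of the picture: if a plain memory-to-memory transfer also tops out around 50MB/sec, the limit is the network path or the card rather than NFS. A sketch with the old ttcp tool (assuming it is installed on both ends; it is common freeware, not part of IRIX or OS X by default):

Code:

# on the Xserve (receiver): memory sink only, no disk involved
ttcp -r -s
# on the Onyx350 (transmitter); "xserve" is a placeholder hostname
ttcp -t -s -n 65536 xserve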

The connection topology is now simpler, as I had to move the SGI machines next to the switch to reach an optical interface:

<XServe + 10Gbe> <-----> <Netgear 10Gbe Switch> <----> <Onyx350/4 + 10Gbe-xg1>

I also pulled out the GBIC interface and it looks like a standard item; I will look up the details and post them in the hinv thread.

Cheers from Oz.

jwhat.

Re: ACARD/SSD vs. 1Gb/10Gb Ethernet

Posted: Fri Dec 15, 2017 2:31 am
by jan-jaap
Did you at least set up a network segment with jumbo frames? The standard frame size of ~ 1500 bytes will quickly overwhelm the system with interrupt load.

Also, with NFS there are many parameters to tweak. On both ends.
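
For example, on the client side something like this (interface name and values are only illustrative, both ends plus the switch must agree on jumbo frames, and you should check that the xg driver accepts an MTU change via ifconfig on your IRIX build):

Code:

# raise the MTU on the 10GbE interface to enable jumbo frames
ifconfig xg1 mtu 9000
# remount with larger NFS transfer sizes over TCP (server:/export and /mnt are placeholders)
mount -o vers=3,proto=tcp,rsize=32768,wsize=32768 server:/export /mnt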

Re: ACARD/SSD vs. 1Gb/10Gb Ethernet

Posted: Fri Dec 15, 2017 3:47 am
by jwhat
Hi Jan-Jaap,

No, I have not done any tuning yet.

I just wanted to get a set of baseline numbers first.

I played around with jumbo frames when I first got 10GbE going between the Macs and found they had only a small impact on performance, 5-10% (using AFP rather than NFS); sometimes jumbo frames actually made performance worse, and they cause compatibility issues across cards and switches.

As part of that testing I transferred a very large media library, and I have kept testing as the setup has evolved from FC to DAS and from GbE to 10GbE. I am now getting good performance without jumbo frames, playing mezzanine-format (ProRes) 4K video across the 10GbE LAN with no frame loss.

I read a good article on how the introduction of TCP offload processing (which is pretty much standard in all 10GbE cards) significantly reduces the benefit of jumbo frames: the offload engine cuts down on interrupts and kernel interaction, as I understand it.

I was hoping that I could get away with minimum effort ;-).


Cheers,


jwhat.