ACARD/SSD vs. 1Gb Ethernet

jwhat
Posts: 316
Joined: Sat Aug 09, 2003 6:25 pm
Location: Australia

ACARD/SSD vs. 1Gb Ethernet

Post by jwhat » Wed Dec 06, 2017 10:41 pm

Hi Performance Testers,

Having done the SSD tests, I was curious to see how the Onyx 350/4 would do on an NFS share over gigabit Ethernet (1GbE).

The NFS server is an 8-core Apple Xserve, connected via 10GbE to a 10GbE switch and then via a 2x1GbE LAG down to a 1GbE switch.

So the path is:

<XServe + 10Gbe> <-----> <Netgear 10Gbe Switch> <--2x1Gbe LAG--> <Netgear 1Gbe Switch> <----> <Apple 1Gbe> <----> <Onyx350/4+1Gbe-tg0>
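
For reference, the share is mounted on the IRIX side as a plain NFS mount, along these lines (the export path, mount point, and rsize/wsize values here are illustrative, not necessarily what I used):

Code: Select all

# export/mount point names are hypothetical; rsize/wsize are the usual
# throughput-tuning knobs for NFS tests
mount -o rw,bg,intr,vers=3,rsize=32768,wsize=32768 xserve:/export/test /mnt/nfs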

Here is the initial test result:

Code: Select all

dubious 81# diskperf -W -D -n "1Gb Ethernet NFS" -c4m test4m
#---------------------------------------------------------
# Disk Performance Test Results Generated By Diskperf V1.2
#
# Test name     : 1Gb Ethernet NFS
# Test date     : Thu Dec  7 17:17:17 2017
# Test machine  : IRIX64 dubious 6.5 01090133 IP35
# Test type     : XFS data subvolume
# Test path     : test4m
# Request sizes : min=16384 max=4194304
# Parameters    : direct=1 time=10 scale=1.000 delay=0.000
# XFS file size : 4194304 bytes
#---------------------------------------------------------
# req_size  fwd_wt  fwd_rd  bwd_wt  bwd_rd  rnd_wt  rnd_rd
#  (bytes)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)
#---------------------------------------------------------
      16384   21.81   28.80   23.39   26.95   22.14   27.29
      32768   32.65   35.96   34.55   35.19   30.80   36.07
      65536   32.69   41.47   32.33   41.09   31.24   41.37
     131072   32.08   36.90   32.24   38.03   29.54   36.17
     262144   31.53   35.87   32.42   37.56   32.64   36.56
     524288   30.97   35.60   32.68   39.09   32.62   36.80
    1048576   30.91   35.95   31.32   36.85   30.93   36.07
    2097152   29.18   36.60   32.08   41.71   29.70   36.64
    4194304   30.18   36.14    0.00    0.00   30.07   36.36
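
For context, those figures are well under half of wire speed. Back-of-envelope (the overhead percentages are approximate):

Code: Select all

1 Gbit/s / 8 bits per byte           = 125 MB/s raw line rate
less ~6% Ethernet/IP/TCP framing     ~ 117 MB/s
less NFS/RPC overhead                ~ 100-110 MB/s practical ceiling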


Now testing again with one switch hop removed:

<XServe + 10Gbe> <-----> <Netgear 10Gbe Switch> <--2x1Gbe LAG--> <Netgear 1Gbe Switch> <----> <Onyx350/4+1Gbe-tg0>

No real difference, and still well below the 100+ MB/s of bandwidth available:

Code: Select all

dubious 83# diskperf -W -D -n "1Gb Ethernet NFS" -c4m test4m
#---------------------------------------------------------
# Disk Performance Test Results Generated By Diskperf V1.2
#
# Test name     : 1Gb Ethernet NFS
# Test date     : Thu Dec  7 17:33:46 2017
# Test machine  : IRIX64 dubious 6.5 01090133 IP35
# Test type     : XFS data subvolume
# Test path     : test4m
# Request sizes : min=16384 max=4194304
# Parameters    : direct=1 time=10 scale=1.000 delay=0.000
# XFS file size : 4194304 bytes
#---------------------------------------------------------
# req_size  fwd_wt  fwd_rd  bwd_wt  bwd_rd  rnd_wt  rnd_rd
#  (bytes)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)
#---------------------------------------------------------
      16384   22.46   29.77   23.18   26.85   23.78   28.28
      32768   31.20   36.11   33.88   35.57   29.75   35.83
      65536   29.44   39.95   34.00   41.27   30.58   39.95
     131072   32.36   36.38   32.19   37.92   32.80   36.66
     262144   31.63   36.76   32.64   38.22   30.19   36.51
     524288   31.57   36.05   34.20   38.74   31.48   36.54
    1048576   30.87   36.60   34.82   41.28   30.29   35.94
    2097152   28.74   35.40   32.98   38.58   29.49   35.71
    4194304   30.00   35.68    0.00    0.00   29.87   35.44


But compared with the UW SCSI results (40 MB/s available on the Octane2), this looks alright, so I will also run the test on the Octane2 to see what results its 1GbE gets.

I am going to see if there is any difference between the IO9-based 1GbE and a standalone PCI adapter...
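
(To tell the two interfaces apart, the usual IRIX commands are something like:)

Code: Select all

hinv -c network   # list network controllers (tg0 = IO9 gigabit, eg1 = PCI card)
netstat -i        # interface names, MTU, and packet/error counters
ifconfig eg1      # confirm address and flags on the PCI gigabit card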

Here are the results with the eg1 interface:

Code: Select all

dubious 14# diskperf -W -D -n "1Gb Ethernet NFS" -c4m test4m
#---------------------------------------------------------
# Disk Performance Test Results Generated By Diskperf V1.2
#
# Test name     : 1Gb Ethernet NFS
# Test date     : Thu Dec  7 20:10:30 2017
# Test machine  : IRIX64 dubious 6.5 01090133 IP35
# Test type     : XFS data subvolume
# Test path     : test4m
# Request sizes : min=16384 max=4194304
# Parameters    : direct=1 time=10 scale=1.000 delay=0.000
# XFS file size : 4194304 bytes
#---------------------------------------------------------
# req_size  fwd_wt  fwd_rd  bwd_wt  bwd_rd  rnd_wt  rnd_rd
#  (bytes)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)
#---------------------------------------------------------
      16384   14.05   18.47   16.47   18.48   16.78   18.46
      32768   15.44   24.70   18.42   24.63   18.36   24.79
      65536   18.47   36.77   18.49   36.66   18.49   36.76
     131072   18.49   48.87   18.78   48.71   18.49   48.87
     262144   18.48   48.77   18.49   48.78   18.52   48.82
     524288   18.50   48.88   18.49   48.73   18.48   48.84
    1048576   18.51   48.75   18.50   48.75   18.49   48.87
    2097152   18.48   48.87   18.49   48.70   18.48   48.80
    4194304   18.50   48.61    0.00    0.00   18.57   48.62


This is odd, as peak read throughput is better than tg0, but the results have anomalies (writes sit pinned at ~18.5 MB/s regardless of request size)...

To check, I ran l1 "* PCI", which shows the eg1 PCI card operating at 66 MHz, the same as the IO9 board; but as it is dedicated to just the network, I thought it might perform better. I will try the tg1 card in the IO9 in the other chassis...



Cheers from Oz,

jwhat.
jwhat - ask questions, provide answers

tomvos
Donor
Posts: 136
Joined: Fri Jul 04, 2008 1:08 pm
Location: Aachen, Germany, Europe

Re: ACARD/SSD vs. 1Gb Ethernet

Post by tomvos » Thu Dec 07, 2017 10:28 pm

Here are the diskperf numbers for the QL12160 SCSI controller, the Acard ARS-2320 adapter, and the Plextor M6Pro SSD on my 600 MHz Fuel:

Code: Select all

Excelsior %1 diskperf -W -D -n Fuel600-M6Pro-QL12160 -t10 -c100m testfile
#---------------------------------------------------------
# Disk Performance Test Results Generated By Diskperf V1.2
#
# Test name     : Fuel600-M6Pro-QL12160
# Test date     : Sat Mar 19 16:24:36 2016
# Test machine  : IRIX64 Excelsior 6.5 07202013 IP35
# Test type     : XFS data subvolume
# Test path     : testfile
# Request sizes : min=16384 max=4194304
# Parameters    : direct=1 time=10 scale=1.000 delay=0.000
# XFS file size : 104857600 bytes
#---------------------------------------------------------
# req_size  fwd_wt  fwd_rd  bwd_wt  bwd_rd  rnd_wt  rnd_rd
#  (bytes)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)  (MB/s)
#---------------------------------------------------------
      16384   34.18   36.00   38.77   29.25   38.68   29.30
      32768   52.71   47.60   55.40   40.33   55.16   40.64
      65536   68.55   63.80   70.10   54.48   69.93   55.04
     131072   72.88   69.93   73.14   59.60   73.07   59.89
     262144   74.82   74.69   75.02   68.07   74.93   68.32
     524288   75.25   76.82   75.33   73.15   75.34   73.45
    1048576   74.86   77.45   74.87   76.01   74.81   75.02
    2097152   73.33   78.46   73.35   76.07   73.35   76.27
    4194304   70.14   78.75   70.25   77.50   70.20   77.60
Where subtlety fails us we must simply make do with cream pies.

jwhat
Posts: 316
Joined: Sat Aug 09, 2003 6:25 pm
Location: Australia

Re: ACARD/SSD vs. 1Gb Ethernet

Post by jwhat » Fri Dec 08, 2017 1:46 pm

Hi tomvos,

I posted my Acard ARS-2320S numbers in the other thread.

Both the Plextor M6 and the Samsung 850 are rated around 500 MB/s, well above the 160 MB/s that U160 SCSI can handle.

It seems like the simplest way to get good performance from a single drive.

Have you enabled the read/write cache and command tag queueing (via the fx label parameters)? If not, you can get some extra performance out of your configuration.
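
For anyone who has not done this before, it is set from fx expert mode, roughly like this (prompts paraphrased from memory and will vary by fx version and drive, so treat this as a sketch; be careful in expert mode):

Code: Select all

# fx -x
fx: "device-name" = (dksc)    <Enter>
fx: ctlr# = (0)               <Enter>
fx: drive# = (1)              <Enter>
...opening dksc(0,1,0)
fx> label/set/parameters
fx/label/set/parameters: Enable CTQ = (disabled)          enable
fx/label/set/parameters: Read ahead caching = (disabled)  enable
fx/label/set/parameters: Write buffering = (disabled)     enable
label info has changed for disk dksc(0,1,0). write out changes? yes
fx> /exit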

I had lots of problems trying to get tg1 on the second IO9 in my setup going, so I will not do further NFS testing until the 10GbE card arrives (I have the switch, but it is in a different location, so I will move the boxes near it to get access to the optical port).


Cheers,

jwhat.
jwhat - ask questions, provide answers

tomvos
Donor
Posts: 136
Joined: Fri Jul 04, 2008 1:08 pm
Location: Aachen, Germany, Europe

Re: ACARD/SSD vs. 1Gb Ethernet

Post by tomvos » Sat Dec 09, 2017 2:26 am

Hi,

Thanks for the tip about the read/write cache and command tagging. I have to admit I do not know how these are set on my Fuel, but I will check soon; perhaps it will improve the numbers a little.

Cheers,
Thomas
Where subtlety fails us we must simply make do with cream pies.

