Dell R730XD: SAS JBOD vs. RAID0 disk performance comparison

Disclaimer: This is an original post by the blogger and may not be reproduced without permission. https://blog.csdn.net/kwame211/article/details/91041038

Server configuration

Dell R730XD, 2 × E5-2620 v4, 4 × 16 GB DDR4, 2 × 300 GB SAS + 12 × 6 TB SAS

Oracle Linux 6.9

 

Basics

RAID0

How RAID0 works: data is striped across one or more disks; disk a gets part of the data, disk b another part, and so on. Splitting the data across disks raises throughput, so the speed is roughly n times that of a single disk (n = number of disks).

Pros: performance is roughly n times that of a single disk.

Cons: no redundancy; if one disk fails, all data on the array is lost.

 

Commands to configure single-disk RAID0:


/opt/MegaRAID/MegaCli/MegaCli64 -PDlist -aALL | grep "ID"  | uniq | awk -F: '{print $2}' | awk '{print $1}'
Enclosure Device ID: 32
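## The command above extracts the Enclosure Device ID (32 here). A related check
## (not in the original article) lists the slot numbers behind that enclosure, so
## the [32:slot] addresses used below can be verified:
/opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL | grep -E "Enclosure Device ID|Slot Number"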
 
 
 
## Create a single-disk RAID0 for each disk. Parameter notes:
## cfgLdAdd can create RAID 0, 1, 5, and 6
## [Enclosure Device ID:disk slot]
## [WT|WB] write policy: write through (no write cache) vs. write back (use write cache); SAS disks have poor random write performance, so WB is used here
## [NORA|RA|ADRA] read policy: no read ahead (default) | read ahead | adaptive read ahead
## [Direct|Cached] read cache policy: Direct is the default; a read cache is generally unnecessary
## [CachedBadBBU|NoCachedBadBBU] how the write cache behaves when the BBU is bad: NoCachedBadBBU falls back from write back to write through, CachedBadBBU keeps write back
## -a0: RAID adapter 0
 
 
/opt/MegaRAID/MegaCli/MegaCli64 -cfgLdAdd -r0 [32:0] WB Direct -a0
/opt/MegaRAID/MegaCli/MegaCli64 -cfgLdAdd -r0 [32:1] WB Direct -a0
/opt/MegaRAID/MegaCli/MegaCli64 -cfgLdAdd -r0 [32:2] WB Direct -a0
/opt/MegaRAID/MegaCli/MegaCli64 -cfgLdAdd -r0 [32:3] WB Direct -a0
/opt/MegaRAID/MegaCli/MegaCli64 -cfgLdAdd -r0 [32:4] WB Direct -a0
/opt/MegaRAID/MegaCli/MegaCli64 -cfgLdAdd -r0 [32:5] WB Direct -a0
/opt/MegaRAID/MegaCli/MegaCli64 -cfgLdAdd -r0 [32:6] WB Direct -a0
/opt/MegaRAID/MegaCli/MegaCli64 -cfgLdAdd -r0 [32:7] WB Direct -a0
/opt/MegaRAID/MegaCli/MegaCli64 -cfgLdAdd -r0 [32:8] WB Direct -a0
/opt/MegaRAID/MegaCli/MegaCli64 -cfgLdAdd -r0 [32:9] WB Direct -a0
/opt/MegaRAID/MegaCli/MegaCli64 -cfgLdAdd -r0 [32:10] WB Direct -a0
/opt/MegaRAID/MegaCli/MegaCli64 -cfgLdAdd -r0 [32:11] WB Direct -a0
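## The twelve commands above can equivalently be issued in a loop; a minimal
## sketch, assuming slots 0-11 on enclosure 32 and adapter 0 as listed:
for slot in $(seq 0 11); do
    /opt/MegaRAID/MegaCli/MegaCli64 -cfgLdAdd -r0 [32:${slot}] WB Direct -a0
done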
 
## View the configured disk groups
/opt/MegaRAID/MegaCli/MegaCli64 -cfgdsply -aALL  | grep -E "DISK\ GROUP|Slot\ Number"
 
[@s26.txyz.db.d ~]# fdisk -l | grep '\/dev\/sd'
Disk /dev/sda: 299.4 GB, 299439751168 bytes
/dev/sda1   *           1        2611    20971520   83  Linux
/dev/sda2            2611        5222    20971520   83  Linux
/dev/sda3            5222        7311    16777216   82  Linux swap / Solaris
/dev/sda4            7311       36405   233700352    5  Extended
/dev/sda5            7311        9922    20971520   83  Linux
/dev/sda6            9922       36405   212726784   83  Linux
Disk /dev/sdb: 6000.6 GB, 6000606183424 bytes
Disk /dev/sdc: 6000.6 GB, 6000606183424 bytes
Disk /dev/sdd: 6000.6 GB, 6000606183424 bytes
Disk /dev/sde: 6000.6 GB, 6000606183424 bytes
Disk /dev/sdf: 6000.6 GB, 6000606183424 bytes
Disk /dev/sdg: 6000.6 GB, 6000606183424 bytes
Disk /dev/sdh: 6000.6 GB, 6000606183424 bytes
Disk /dev/sdi: 6000.6 GB, 6000606183424 bytes
Disk /dev/sdj: 6000.6 GB, 6000606183424 bytes
Disk /dev/sdk: 6000.6 GB, 6000606183424 bytes
Disk /dev/sdl: 6000.6 GB, 6000606183424 bytes
Disk /dev/sdm: 6000.6 GB, 6000606183424 bytes


 

 

JBOD

JBOD: where RAID0 combines disks horizontally (striping), JBOD combines them vertically: even with n disks, the second disk is written only after the first is full. It is a logical concatenation, so if one disk is damaged, only the data on that disk is lost.

Pros: losing one disk loses only part of the data.

Cons: write performance is equivalent to a single disk.

Configuration steps:

 


 
## enable JBOD mode on adapter 0
/opt/MegaRAID/MegaCli/MegaCli64 -AdpSetProp EnableJBOD 1 -a0
 
## configure disk 32:0 as JBOD; unfortunately, the RAID card parameters (write/read policy) cannot be set for JBOD drives
/opt/MegaRAID/MegaCli/MegaCli64 -PDMakeJBOD -physdrv[32:0]  -a0
 
## repeat -PDMakeJBOD for the other slots as needed (see the sketch below)
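## A sketch of switching additional disks to JBOD in one loop; the slot list is an
## assumption (a disk that already belongs to a RAID0 LD must have that LD deleted first):
for slot in 1 2 3 4; do
    /opt/MegaRAID/MegaCli/MegaCli64 -PDMakeJBOD -physdrv[32:${slot}] -a0
done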


 

 

Performance Testing

Based on the characteristics above, performance with multiple disks is not directly comparable. However, this machine is being set up for an MFS distributed storage cluster, where each disk needs to be a separate volume, so the benchmark scenarios are as follows:

  • Single-disk RAID0
  • 5-disk RAID0
  • Single-disk JBOD
  • 5-disk JBOD

Each is tested in two write scenarios, random write and sequential write (read results are also recorded below); fio is used for the load testing.
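The original post does not include the fio job definitions. Below is a minimal sketch of invocations that would cover the four workloads in the tables; the target device, runtime, queue depth, and I/O engine are assumptions, while the 4 KB block size is inferred from the bandwidth-to-IOPS ratios in the results.

# hypothetical fio runs against one of the 6 TB data disks (device, runtime, iodepth assumed)
fio --name=seq-write  --filename=/dev/sdb --ioengine=libaio --direct=1 --rw=write     --bs=4k --iodepth=32 --runtime=60 --time_based
fio --name=rand-write --filename=/dev/sdb --ioengine=libaio --direct=1 --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based
fio --name=seq-read   --filename=/dev/sdb --ioengine=libaio --direct=1 --rw=read      --bs=4k --iodepth=32 --runtime=60 --time_based
fio --name=rand-read  --filename=/dev/sdb --ioengine=libaio --direct=1 --rw=randread  --bs=4k --iodepth=32 --runtime=60 --time_based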

IOPS

            seq-write   rand-write   seq-read   rand-read
RAID0       95611       7098         56266      3463
JBOD        463         971          55593      630

Bandwidth (KB/s)

            seq-write   rand-write   seq-read   rand-read
RAID0       382448      28393        225065     13852
JBOD        1853.2      3886.8       222374     2521.7

Looking at these results, it did not seem necessary to go on and benchmark the multi-disk arrays.

Conclusion

Sequential write performance of single-disk RAID0 is roughly 200 times that of JBOD, and random write performance roughly 7 times (per the tables above). The likely reason is that a JBOD drive bypasses the RAID card's cache, behaving like a software/pass-through disk, while the RAID0 volumes were created with write back (WB).

Sequential read performance is similar for the two, but RAID0 random read performance is about 5 times that of JBOD.
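To double-check that the write-back policy mentioned above is actually in effect on the RAID0 logical drives, the cache properties can be queried (a verification step, not part of the original test):

## show cache/write policy of all logical drives on adapter 0
/opt/MegaRAID/MegaCli/MegaCli64 -LDGetProp -Cache -LAll -a0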
