RAID controller driver for Linux

Supplying RAID drivers during installation

Hello! There was a thread on this before, but the question never got answered. I have an Intel R1304GL4DS9 and have downloaded the Intel® Embedded Server RAID Technology 2 RAID Driver for Linux. I need to install CentOS 7. How do I feed the drivers to the installer?

You don't: those are modules (drivers) built for Red Hat 7, 7.1, 7.2, 7.3 and for SUSE Linux Enterprise Server 12 and its updates.

There is no source code for the module (driver), so you cannot build it against another kernel (another distribution's).

So install CentOS on a software RAID built with mdadm.

Well, no. I just installed onto a mirror, without mdadm, and had no problems at all. The only difference is that the server was newer and made by Supermicro. I figure it's purely a driver issue.

Is it the same RAID on the Supermicro?

The chipset there is probably different, but the RAID is the same, in the sense that it's also a mirror.

I'm not asking about mirror vs. no mirror, I'm asking about the RAID controller.

That is not a real hardware RAID, so install the system on a RAID built with mdadm.

Even if you create the RAID in the controller's own setup utility, you will still see the RAID arrays in the system through mdadm anyway.

"install the system on a RAID built with mdadm"

This part I don't quite get. My understanding is that you first install CentOS on one drive, then install mdadm and create the RAID. No?

You can create the array right away and install onto it.

Create the RAID array during installation and install onto it.
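If you do it after installation instead, a two-disk mirror with mdadm looks roughly like this. This is only a sketch: /dev/sdX1 and /dev/sdY1 are placeholders for your actual partitions, and the installer's RAID option does the equivalent for you.

```shell
# Sketch: build a two-disk RAID1 mirror with mdadm (run as root).
# /dev/sdX1 and /dev/sdY1 are placeholder partition names.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX1 /dev/sdY1

# Watch the initial sync and confirm the array state.
cat /proc/mdstat
mdadm --detail /dev/md0

# Persist the array definition so it assembles on boot.
mdadm --detail --scan >> /etc/mdadm.conf
```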

Help me out with the BIOS settings: the installer doesn't see the disks, only the USB stick it is installing from. "the Redhat recommendation for such things is to disable RAID mode in the controller BIOS and set it to AHCI mode", but "Intel® Embedded Server RAID Technology II (Intel® ESRT2) uses Ctrl+E to enter the RAID BIOS console" has no such option as selecting AHCI mode, and the server BIOS is set to: AHCI capable SATA Controller=[AHCI], SAS/SATA Capable Controller=Intel(R) ESRT2 (LSI).

No idea. Read the board's documentation, boot the CentOS installer or some Linux LiveCD and check whether it sees the disks attached to the controller.

Maybe that setting is under "Advanced options" in the BIOS (UEFI).

The settings I listed are exactly what's in Advanced.

"Flip the switches and check in a LiveCD."

For the HP B120i, drivers are supplied via inst.dd and a USB stick with the drivers on it.

There's plenty written about HP, nothing about this server. However, changing the controller mode revealed the disks: I switched ESRT2 to RSTe!

My point is that the B120i is some Intel RAID too. Most likely the drivers are supplied the same way. Inside the image there is a package; it gets installed before the installer starts.
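For reference, a sketch of such a boot line: the volume label DRIVERS and the dd.img path are made-up examples, not values from this thread, and the stage2 label depends on your install media.

```shell
# Appended to the installer's kernel command line (press Tab or e in the boot menu).
# inst.dd tells anaconda to load a driver update disk before the installer starts.
linux inst.dd=hd:LABEL=DRIVERS:/dd.img inst.stage2=hd:LABEL=CentOS-7-x86_64 quiet
```

Anaconda also picks up a driver disk automatically from a device labeled OEMDRV, without any boot parameter.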


Chapter 21. Managing RAID

This chapter describes Redundant Array of Independent Disks (RAID). You can use RAID to store data across multiple drives. It also helps to avoid data loss if a drive fails.

21.1. Redundant array of independent disks (RAID)

The basic idea behind RAID is to combine multiple devices, such as HDDs, SSDs, or NVMe drives, into an array to accomplish performance or redundancy goals not attainable with one large and expensive drive. This array of devices appears to the computer as a single logical storage unit or drive.

RAID allows information to be spread across several devices. RAID uses techniques such as disk striping (RAID Level 0), disk mirroring (RAID Level 1), and disk striping with parity (RAID Levels 4, 5 and 6) to achieve redundancy, lower latency, increased bandwidth, and maximized ability to recover from hard disk crashes.

RAID distributes data across each device in the array by breaking it down into consistently sized chunks (commonly 256K or 512K, although other values are acceptable). Each chunk is then written to a hard drive in the RAID array according to the RAID level employed. When the data is read, the process is reversed, giving the illusion that the multiple devices in the array are actually one large drive.
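As an illustration of that chunking, here is a simplified model of RAID level 0 placement: logical chunk n lands on member disk n mod N, in stripe n div N. Real layouts vary by level and implementation; the numbers below are example values.

```shell
# Simplified RAID0 chunk placement: which member disk and which stripe (row)
# a given logical chunk lands on, for a 4-disk array.
ndisks=4       # number of member disks (example value)
chunk=10       # logical chunk number (example value)

disk=$(( chunk % ndisks ))    # member disk that stores the chunk
stripe=$(( chunk / ndisks ))  # stripe (row) within that disk

echo "logical chunk $chunk -> disk $disk, stripe $stripe"
```

So with four members, logical chunks 0–3 form the first stripe, 4–7 the second, and so on.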


System Administrators and others who manage large amounts of data would benefit from using RAID technology. Primary reasons to deploy RAID include:

  • Enhanced speed
  • Increased storage capacity using a single virtual disk
  • Minimized data loss from disk failure
  • Online conversion of RAID layout and level

21.2. RAID types

There are three possible RAID approaches: Firmware RAID, Hardware RAID, and Software RAID.

Firmware RAID

Firmware RAID , also known as ATARAID, is a type of software RAID where the RAID sets can be configured using a firmware-based menu. The firmware used by this type of RAID also hooks into the BIOS, allowing you to boot from its RAID sets. Different vendors use different on-disk metadata formats to mark the RAID set members. The Intel Matrix RAID is a good example of a firmware RAID system.

Hardware RAID

The hardware-based array manages the RAID subsystem independently from the host. It may present multiple devices per RAID array to the host.

Hardware RAID devices may be internal or external to the system. Internal devices commonly consist of a specialized controller card that handles the RAID tasks transparently to the operating system. External devices commonly connect to the system via SCSI, Fibre Channel, iSCSI, InfiniBand, or another high-speed network interconnect and present volumes such as logical units to the system.

RAID controller cards function like a SCSI controller to the operating system, and handle all the actual drive communications. The user plugs the drives into the RAID controller (just like a normal SCSI controller) and then adds them to the RAID controller’s configuration. The operating system will not be able to tell the difference.

Software RAID

Software RAID implements the various RAID levels in the kernel block device code. It offers the cheapest possible solution, as expensive disk controller cards or hot-swap chassis are not required. Software RAID also works with any block storage supported by the Linux kernel, such as SATA, SCSI, and NVMe. With today's faster CPUs, Software RAID also generally outperforms Hardware RAID, unless you use high-end storage devices.

The Linux kernel contains a multiple device (MD) driver that allows the RAID solution to be completely hardware independent. The performance of a software-based array depends on the server CPU performance and load.
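A minimal sketch of inspecting what the MD driver sees. These commands assume a system that already has a software RAID array; /dev/md0 and /dev/sdX1 are placeholder names.

```shell
# Inspect the kernel MD driver's view of software RAID (run as root).
cat /proc/mdstat            # summary of all active MD arrays and sync status
mdadm --detail /dev/md0     # per-array level, state, and member devices
mdadm --examine /dev/sdX1   # on-disk MD superblock of one member device
```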

Key features of the Linux software RAID stack:

  • Multithreaded design
  • Portability of arrays between Linux machines without reconstruction
  • Backgrounded array reconstruction using idle system resources
  • Hot-swappable drive support
  • Automatic CPU detection to take advantage of certain CPU features such as streaming Single Instruction Multiple Data (SIMD) support
  • Automatic correction of bad sectors on disks in an array
  • Regular consistency checks of RAID data to ensure the health of the array
  • Proactive monitoring of arrays with email alerts sent to a designated email address on important events

  • Write-intent bitmaps, which drastically increase the speed of resync events by allowing the kernel to know precisely which portions of a disk need to be resynced instead of having to resync the entire array after a system crash

Note that resync is a process to synchronize the data across the devices in the existing RAID to achieve redundancy.
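On an existing array, a write-intent bitmap can be turned on or off live. A sketch, with /dev/md0 as a placeholder array name:

```shell
# Add an internal write-intent bitmap to an existing array (run as root).
mdadm --grow /dev/md0 --bitmap=internal
# Verify: a "bitmap:" line appears in the array summary.
cat /proc/mdstat
# Remove it again if the small write overhead is not worth it.
mdadm --grow /dev/md0 --bitmap=none
```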

  • Resync checkpointing so that if you reboot your computer during a resync, at startup the resync will pick up where it left off and not start all over again
  • The ability to change parameters of the array after installation, which is called reshaping. For example, you can grow a 4-disk RAID5 array to a 5-disk RAID5 array when you have a new device to add. This grow operation is done live and does not require you to reinstall on the new array
  • Reshaping supports changing the number of devices, the RAID algorithm or size of the RAID array type, such as RAID4, RAID5, RAID6 or RAID10
  • Takeover supports RAID level converting, such as RAID0 to RAID6
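The grow operation described above can be sketched with mdadm. Device names are placeholders, and the array keeps serving I/O while it reshapes:

```shell
# Grow a 4-disk RAID5 to 5 disks by adding a spare and reshaping (run as root).
mdadm --add /dev/md0 /dev/sdZ1           # new disk joins the array as a spare
mdadm --grow /dev/md0 --raid-devices=5   # live reshape onto 5 members
cat /proc/mdstat                         # watch reshape progress
# Afterwards, enlarge the filesystem to use the new space, e.g. for ext4:
resize2fs /dev/md0
```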

21.3. RAID levels and linear support

RAID supports various configurations, including levels 0, 1, 4, 5, 6, 10, and linear. These RAID types are defined as follows:

Level 0

RAID level 0, often called striping, is a performance-oriented striped data mapping technique. This means the data being written to the array is broken down into stripes and written across the member disks of the array, allowing high I/O performance at low inherent cost, but it provides no redundancy.

Many RAID level 0 implementations only stripe the data across the member devices up to the size of the smallest device in the array. This means that if you have multiple devices with slightly different sizes, each device is treated as though it were the same size as the smallest drive. Therefore, the common storage capacity of a level 0 array is equal to the capacity of the smallest member disk (in a Hardware RAID) or the capacity of the smallest member partition (in a Software RAID) multiplied by the number of disks or partitions in the array.
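As a quick sanity check of that capacity rule, a sketch with three example member sizes:

```shell
# RAID0 usable capacity = smallest member size * number of members.
sizes="500 480 512"   # member sizes in GiB (example values)

smallest=""
n=0
for s in $sizes; do
  if [ -z "$smallest" ] || [ "$s" -lt "$smallest" ]; then
    smallest=$s         # track the smallest member
  fi
  n=$(( n + 1 ))        # count the members
done

capacity=$(( smallest * n ))
echo "RAID0 capacity: $capacity GiB"   # 480 GiB * 3 members = 1440 GiB
```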

Level 1

RAID level 1, or mirroring, provides redundancy by writing identical data to each member disk of the array, leaving a "mirrored" copy on each disk. Mirroring remains popular due to its simplicity and high level of data availability. Level 1 operates with two or more disks and provides very good data reliability; it also improves performance for read-intensive applications, but at a relatively high cost.

RAID level 1 comes at a high cost because the same information is written to all of the disks in the array: it provides data reliability, but in a much less space-efficient manner than parity-based RAID levels such as level 5. However, this space inefficiency comes with a performance benefit: parity-based RAID levels consume considerably more CPU power to generate the parity, while RAID level 1 simply writes the same data more than once to the multiple RAID members with very little CPU overhead. As such, RAID level 1 can outperform the parity-based RAID levels on machines where software RAID is employed and CPU resources are consistently taxed with operations other than RAID activities.

The storage capacity of the level 1 array is equal to the capacity of the smallest mirrored hard disk in a Hardware RAID or the smallest mirrored partition in a Software RAID. Level 1 redundancy is the highest possible among all RAID types, with the array being able to operate with only a single disk present.

Level 4

RAID level 4 uses parity concentrated on a single disk drive to protect data. Parity information is calculated based on the content of the rest of the member disks in the array. This information can then be used to reconstruct data when one disk in the array fails. The reconstructed data can then be used to satisfy I/O requests to the failed disk before it is replaced and to repopulate the failed disk after it has been replaced.

Because the dedicated parity disk represents an inherent bottleneck on all write transactions to the RAID array, level 4 is seldom used without accompanying technologies such as write-back caching, or in specific circumstances where the system administrator is intentionally designing the software RAID device with this bottleneck in mind (such as an array that will have little to no write transactions once the array is populated with data). RAID level 4 is so rarely used that it is not available as an option in Anaconda. However, it can be created manually by the user if truly needed.


The storage capacity of Hardware RAID level 4 is equal to the capacity of the smallest member partition multiplied by the number of partitions minus one. Performance of a RAID level 4 array is always asymmetrical, meaning reads outperform writes. This is because writes consume extra CPU and main memory bandwidth when generating parity, and then also consume extra bus bandwidth when writing the actual data to disks, because you are writing not only the data but also the parity. Reads need only read the data and not the parity unless the array is in a degraded state. As a result, reads generate less traffic to the drives and across the buses of the computer for the same amount of data transfer under normal operating conditions.

Level 5

This is the most common type of RAID. By distributing parity across all the member disk drives of an array, RAID level 5 eliminates the write bottleneck inherent in level 4. The only performance bottleneck is the parity calculation process itself. With modern CPUs and Software RAID, that is usually not a bottleneck at all, since modern CPUs can generate parity very fast. However, if you have a sufficiently large number of member devices in a software RAID5 array such that the combined aggregate data transfer speed across all devices is high enough, then this bottleneck can start to come into play.

As with level 4, level 5 has asymmetrical performance, with reads substantially outperforming writes. The storage capacity of RAID level 5 is calculated the same way as with level 4.

Level 6

This is a common level of RAID when data redundancy and preservation, and not performance, are the paramount concerns, but where the space inefficiency of level 1 is not acceptable. Level 6 uses a complex parity scheme to be able to recover from the loss of any two drives in the array. This complex parity scheme creates a significantly higher CPU burden on software RAID devices and also imposes an increased burden during write transactions. As such, level 6 is considerably more asymmetrical in performance than levels 4 and 5.

The total capacity of a RAID level 6 array is calculated similarly to RAID levels 5 and 4, except that you must subtract two devices (instead of one) from the device count for the extra parity storage space.
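The parity-level capacity formulas can be checked with a little arithmetic, assuming six equal members of 4 TiB each (illustrative numbers):

```shell
# Usable capacity with n equal-size member disks:
#   RAID4/RAID5: (n - 1) * size   (one disk's worth of parity)
#   RAID6:       (n - 2) * size   (two disks' worth of parity)
n=6      # number of members (example value)
size=4   # size of each member in TiB (example value)

raid5=$(( (n - 1) * size ))
raid6=$(( (n - 2) * size ))

echo "RAID5: ${raid5} TiB usable, RAID6: ${raid6} TiB usable"
```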

Level 10

This RAID level attempts to combine the performance advantages of level 0 with the redundancy of level 1. It also helps to alleviate some of the space wasted in level 1 arrays with more than two devices. With level 10, it is possible, for instance, to create a 3-drive array configured to store only two copies of each piece of data, which then allows the overall array size to be 1.5 times the size of the smallest device instead of only equal to the smallest device (as it would be with a 3-device level 1 array). This avoids the CPU usage of calculating parity, as with RAID level 6, but it is less space efficient.

The creation of RAID level 10 is not supported during installation. It is possible to create one manually after installation.

Linear RAID

Linear RAID is a grouping of drives to create a larger virtual drive.

In linear RAID, the chunks are allocated sequentially from one member drive, going to the next drive only when the first is completely filled. This grouping provides no performance benefit, as it is unlikely that any I/O operations will be split between member drives. Linear RAID also offers no redundancy and decreases reliability: if any one member drive fails, the entire array cannot be used. The capacity is the total of all member disks.
