The growing demands of professional media workflows and data-intensive applications require storage solutions that deliver both exceptional performance and reliability. This technical analysis explores the integration of HighPoint's Rocket 1628A NVMe Switch Adapter with the xiRAID RAID engine in a professional workstation with limited compute resources. By combining these technologies, we aim to demonstrate how organizations can achieve enterprise-level storage performance using commercially available components, without specialized hardware or extensive system tuning. The tests focus on real-world performance metrics across a range of workloads.
Test Environment
In this study, we selected configurations to evaluate performance across different use cases: two four-drive RAID 5 arrays, one built on Micron 7450 PRO drives and one on Micron 7400 PRO drives (storage set 1), and an eight-drive RAID 6 array built on KIOXIA KCD8DPUG30T7 drives (storage set 2).
Hardware configuration:
- Motherboard: ProArt X670E-CREATOR WIFI
- CPU: AMD Ryzen 9 7950X (16 cores, 32 threads)
- RAM: 64 GiB system memory (4 x 16 GiB)
- Switch Adapter: HighPoint Rocket 1628A
Storage set 1:
- 4 x Micron 7450 PRO NVMe SSD (7680 GB each) - Linux NVMe devices 0-3
- 4 x Micron 7400 PRO NVMe SSD (7680 GB each) - Linux NVMe devices 4-7
Storage set 2:
- 8 x KIOXIA KCD8DPUG30T7 NVMe SSD (30,720 GB each)
Software configuration:
- Operating System: Red Hat Enterprise Linux 8.10
- RAID Software: xiRAID Classic v4.1.0
System Optimizations
BIOS settings:
- Global C-state Control: Disabled
- Core Performance Boost: Enabled
OS tuning:
- System profile: tuned-adm profile throughput-performance (applied as shown below)
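A minimal sketch of applying and verifying the tuned profile on the host; the verification step is standard tuned-adm usage rather than something recorded in the original test notes:
# Apply the throughput-oriented profile used for these tests
sudo tuned-adm profile throughput-performance
# Confirm which profile is active
tuned-adm active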
Raw Drive Performance Testing
Prior to testing, all drives underwent a two-pass overwrite with a 128k block size (workload-independent preconditioning), followed by sequential read and write tests. The drives were then overwritten with a 4k block size before the random read and write tests were run. The corresponding fio job files are listed in the appendix; a minimal invocation is sketched below.
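Assuming the appendix job files are saved as precondition_seq.fio and precondition_rand.fio (illustrative filenames, not taken from the original setup), the preconditioning can be driven directly with fio:
# Two full sequential overwrites with 128k blocks, run before the sequential tests
fio precondition_seq.fio
# 4k random overwrite, run before the random tests
fio precondition_rand.fio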
The performance results closely aligned with the manufacturer's specifications, as evidenced by the tables below.
Raw drive performance for Micron 7450/7400 PRO drives
Test Scenario | 8 x drives | 4 x Micron 7450 | 4 x Micron 7400 |
---|---|---|---|
Raw sequential read (numjobs=1, queue depth=64) | 50.1 GB/s | 25.1 GB/s | 25.0 GB/s |
Raw sequential write (numjobs=1, queue depth=64) | 41.2 GB/s | 22.9 GB/s | 18.9 GB/s |
Raw random read (numjobs=8, queue depth=64) | 7,345k IOPS (CPU limitation) | 4,010k IOPS | 4,024k IOPS |
Mixed read/write 50% | read: 1,639k IOPS, write: 1,639k IOPS | read: 839k IOPS, write: 839k IOPS | read: 803k IOPS, write: 803k IOPS |
During concurrent random read operations across all drives, we encountered a platform limitation due to CPU constraints, achieving 7,345k IOPS instead of the expected 8,000k IOPS.
NVMe drives with high random read IOPS generate frequent interrupts or require increased polling, which raises CPU utilization. As the drive approaches its maximum IOPS capacity, CPU usage can rise disproportionately due to context switching, interrupt handling, or contention in the IO subsystem.
As a result, in this hardware configuration, we encountered CPU limitations even at the raw drives level. Consequently, we decided to skip this test for another NVMe drive set (Kioxia KCD8DPUG30T7), as the same CPU bottleneck would likely occur.
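One way to observe this pressure directly, assuming the sysstat package is installed on the host, is to watch the interrupt and softirq share of CPU time while the random read job is running:
# Per-core utilization refreshed every second; the %irq and %soft columns
# reflect interrupt and softirq handling driven by NVMe completions
mpstat -P ALL 1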

CPU consumption during random reads to all NVMe drives
CPU utilization of FIO threads was monitored during the most resource-intensive operations: random read and random write tests.

CPU consumption of FIO threads for 8 drives
Raw drive performance for KIOXIA KCD8DPUG30T7 drives
Test Scenario | All drives |
---|---|
Raw sequential read (numjobs=1, queue depth=64) | 52.7 GB/s (PCIe limit) |
Raw sequential write (numjobs=1, queue depth=64) | 40.5 GB/s |
Mixed read/write 50% (numjobs=1, queue depth=64) | read: 1,290k IOPS, write: 1,294k IOPS |
xiRAID Performance Testing
RAID5
Given the platform's configuration of 4 Micron 7450 and 4 Micron 7400 drives, we established two RAID 5 arrays designated as media1 (for Micron 7450) and media2 (for Micron 7400), and conducted tests on each array individually and simultaneously.
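For illustration only, a four-drive RAID 5 such as media1 would be created with xiRAID Classic's xicli utility. The sketch below assumes the usual xicli raid create option names and the device numbering listed earlier (Micron 7450 on nvme0-3); the exact flags, strip size, and defaults should be checked against the xiRAID Classic documentation rather than taken from here:
# Assumed syntax: create a 4-drive RAID 5 named media1 on the Micron 7450 devices
xicli raid create -n media1 -l 5 -d /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
# List arrays and their state (command assumed from typical xicli usage)
xicli raid show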
Throughout the testing process, we collected CPU utilization statistics for all xiRAID threads using a custom bash script. This script leverages the top command to filter and aggregate CPU usage for processes with a COMMAND beginning with xi, then normalizes the sum by the number of CPU cores. The FIO configuration files and statistics collection script listings are included in the appendix.

xiRAID configurations
Combined RAID5 Performance (media1 + media2)
Baseline performance calculation for 2 arrays:
- Sequential Read: 50.1 GB/s (100% of raw performance)
- Sequential Write: 30.9 GB/s (75% of raw performance: 41.2 GB/s x 75% = 30.9 GB/s)
- Random Read: 7,345K IOPS (100% of raw performance)
- Random Write: 819K IOPS (the 50/50 mixed-load test on all 8 NVMe drives delivered 3,278K IOPS in total (read: 1,639K IOPS, write: 1,639K IOPS); with a RAID 5 write penalty factor of 4, the expected performance for the two RAID 5 arrays is 3,278 / 4 = 819K IOPS; see the sketch below)
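The random-write baselines in this section all follow the same write-penalty arithmetic (factor 4 for RAID 5, factor 6 for RAID 6, as stated in the text). A quick check with bc:
# RAID 5 random-write baseline for the two combined arrays
echo "3278 / 4" | bc    # 819 (K IOPS)
# RAID 6 random-write baseline used later for the KIOXIA array
echo "2584 / 6" | bc    # 430 (K IOPS)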
Test Results:
Test Scenario | 2 x RAID 5 with 4 drives | Baseline performance | Efficiency |
---|---|---|---|
xiRAID, sequential read (numjobs=3, queue depth=32) | 47.4 GB/s | 50.1 GB/s | 95% |
xiRAID, sequential write (numjobs=3, queue depth=32) | 29.4 GB/s | 30.9 GB/s | 95% |
xiRAID, random read (numjobs=32, queue depth=32) | 5,421K IOPS | 7,345K IOPS (CPU limitation) | 74%* |
xiRAID, random write (numjobs=32, queue depth=32) | 626K IOPS | 819K IOPS | 76% |
* requires further investigation due to CPU constraints encountered on the test platform

Individual RAID5 Performance on 4 Micron 7450 Drives (media1)
Baseline performance calculation for the media1 RAID array:
- Sequential Read: 25.1 GB/s (100% of raw performance)
- Sequential Write: 17.1 GB/s (75% of raw performance: 22.9 GB/s x 75% = 17.1 GB/s)
- Random Read: 4,010K IOPS (100% of raw performance)
- Random Write: 419K IOPS (the 50/50 mixed-load test on these 4 drives delivered 1,678K IOPS in total (read: 839K IOPS, write: 839K IOPS); with a RAID 5 write penalty factor of 4, the expected performance for this array is 1,678 / 4 = 419K IOPS)
Test Results:
Test Scenario | Media 1 with 4 x Micron 7450 | Baseline performance | Efficiency |
---|---|---|---|
xiRAID, sequential read (numjobs=3, queue depth=32) | 24.8 GB/s | 25.1 GB/s | 98% |
xiRAID, sequential write (numjobs=3, queue depth=32) | 16.2 GB/s | 17.1 GB/s | 94% |
xiRAID, random read (numjobs=32, queue depth=32) | 3,928K IOPS | 4,010K IOPS | 98% |
xiRAID, random write (numjobs=32, queue depth=32) | 343K IOPS | 419K IOPS | 81% |

Individual RAID5 Performance on 4 Micron 7400 Drives (media2)
Baseline performance calculation for the media2 RAID array:
- Sequential Read: 25.0 GB/s (100% of raw performance)
- Sequential Write: 14.1 GB/s (75% of raw performance: 18.9 GB/s x 75% = 14.1 GB/s)
- Random Read: 4,024K IOPS (100% of raw performance)
- Random Write: 401K IOPS (the 50/50 mixed-load test on these 4 drives delivered 1,606K IOPS in total (read: 803K IOPS, write: 803K IOPS); with a RAID 5 write penalty factor of 4, the expected performance for this array is 1,606 / 4 = 401K IOPS)
Test Results:
Test Scenario | Media 2 with 4 x Micron 7400 | Baseline performance | Efficiency |
---|---|---|---|
xiRAID, sequential read (numjobs=3, queue depth=32) | 24.6 GB/s | 25.0 GB/s | 98% |
xiRAID, sequential write (numjobs=3, queue depth=32) | 13.5 GB/s | 14.1 GB/s | 95% |
xiRAID, random read (numjobs=32, queue depth=32) | 3,954K IOPS | 4,024K IOPS | 98% |
xiRAID, random write (numjobs=32, queue depth=32) | 302K IOPS | 401K IOPS | 75% |

RAID6
Using 8 KIOXIA NVMe SSD drives, we created one RAID6 array (media6).
Baseline performance calculation for the media6 RAID array:
- Sequential Read: 52.7 GB/s (100% of raw performance)
- Sequential Write: 30.4 GB/s (75% of raw performance: 40.5 GB/s x 75% = 30.4 GB/s)
- Random Write: 430K IOPS (the 50/50 mixed-load test on the 8 NVMe drives delivered 2,584K IOPS in total (read: 1,290K IOPS, write: 1,294K IOPS); with a RAID 6 write penalty factor of 6, the expected performance for this array is 2,584 / 6 = 430K IOPS)
Test Results:
Test Scenario | Media 6 with 8 x KIOXIA CD8P | Baseline performance | Efficiency |
---|---|---|---|
xiRAID, sequential read (numjobs=16, queue depth=32) | 52.7 GB/s | 52.7 GB/s | 100% |
xiRAID, sequential write (numjobs=8, queue depth=32) | 28.7 GB/s | 30.4 GB/s | 94% |
xiRAID, random write (numjobs=32, queue depth=64) | 384K IOPS | 430K IOPS | 89% |

Conclusion
The comprehensive performance analysis demonstrates that the combination of xiRAID's RAID engine and HighPoint's Rocket 1628A NVMe Switch Adapter delivers exceptional storage performance in a professional workstation environment. With sequential read efficiencies reaching 98-100% of raw backend performance and write operations achieving up to 95% efficiency, the solution proves highly capable of handling demanding workloads across diverse industries.
The RAID6 configuration achieved near-optimal sequential performance, reinforcing the solution's suitability for bandwidth-heavy applications. This configuration also demonstrated robust handling of write-intensive tasks, further expanding its applicability to professional media workflows. Similarly, the RAID5 configurations provided strong random and sequential performance, demonstrating the system's ability to support high-throughput operations.
The absence of bottlenecks in the HighPoint PCIe switch operation, coupled with consistent performance across both Micron and KIOXIA drive configurations, validates the robustness and scalability of this architecture. These results highlight that cost-effective NVMe-based systems can deliver enterprise-grade performance without requiring specialized infrastructure or significant resource overhead. This solution provides a practical and efficient approach to meeting the growing storage demands of professional workflows, offering an exceptional price-to-performance ratio for organizations in data-intensive markets.
Appendix
CPU Usage Script
A Bash script that uses the top command to filter and summarize CPU usage for processes whose COMMAND starts with xi (the same approach, with the prefix changed to fio, was used for the FIO-thread measurements). Since top itself does not support filtering directly, it is run in batch mode and the output is parsed with awk.
#!/bin/bash

# Define the output file
output_file="xi_cpu_usage.log"

# Set the number of physical CPU cores manually
core_count=32  # Replace with the actual number of cores on your system

# Check if core_count is set correctly
if [[ -z "$core_count" || "$core_count" -le 0 ]]; then
    echo "Error: Invalid core count. Please set a positive integer for core_count."
    exit 1
fi

# Loop to run every 10 seconds
while true; do
    # Capture the current date and time
    timestamp=$(date "+%Y-%m-%d %H:%M:%S")

    # Use 'top' to get the CPU load of processes and filter based on COMMAND starting with 'xi'
    total_cpu=$(top -b -n 1 | awk '$1 != "PID" && $12 ~ /^xi/ {sum += $9} END {print sum}')

    # Check if total_cpu was calculated correctly
    if [[ -z "$total_cpu" ]]; then
        echo "Error: Failed to retrieve CPU usage from 'top' command."
        total_cpu=0
    fi

    # Divide total CPU usage by the number of physical cores
    avg_cpu_per_core=$(echo "scale=2; $total_cpu / $core_count" | bc)

    # Format the output with timestamp
    output="[$timestamp] Average CPU usage per core for processes with COMMAND starting with 'xi': ${avg_cpu_per_core}%"

    # Append output to the file and display it on the screen
    echo "$output" | tee -a "$output_file"

    # Wait 10 seconds before repeating
    sleep 10
done
Explanation
- output_file="xi_cpu_usage.log": Defines the log file where results will be saved.
- while true; do ... done: Runs the CPU load check indefinitely every 10 seconds.
- timestamp=$(date "+%Y-%m-%d %H:%M:%S"): Captures the current date and time for logging.
- total_cpu=$(top -b -n 1 | awk ...): Collects the total CPU usage for processes whose COMMAND starts with xi.
- output="[$timestamp] Average CPU usage per core ...": Formats the output message with a timestamp.
- echo "$output" | tee -a "$output_file": Writes output to both the terminal and the specified file (xi_cpu_usage.log).
- sleep 10: Pauses the script for 10 seconds before repeating.
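A possible way to run the collector alongside the benchmarks (the script filename and the job-control commands are illustrative):
chmod +x xi_cpu_usage.sh    # save the script above as xi_cpu_usage.sh
./xi_cpu_usage.sh &         # start logging in the background
# ... run the fio workloads ...
kill %1                     # stop the collector when the test run is finished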
Precondition config for sequential tests
[global]
rw=write
bs=128K
iodepth=64
direct=1
ioengine=libaio
group_reporting
loops=2
[job 1]
filename=/dev/nvme0n1
[job 2]
filename=/dev/nvme1n1
[job 3]
filename=/dev/nvme4n1
[job 4]
filename=/dev/nvme5n1
[job 5]
filename=/dev/nvme2n1
[job 6]
filename=/dev/nvme3n1
[job 7]
filename=/dev/nvme6n1
[job 8]
filename=/dev/nvme7n1
Precondition config for random tests
[global]
rw=randwrite
bs=4K
iodepth=32
numjobs=4
direct=1
ioengine=libaio
group_reporting
loops=2
[job 1]
filename=/dev/nvme0n1
[job 2]
filename=/dev/nvme1n1
[job 3]
filename=/dev/nvme4n1
[job 4]
filename=/dev/nvme5n1
[job 5]
filename=/dev/nvme2n1
[job 6]
filename=/dev/nvme3n1
[job 7]
filename=/dev/nvme6n1
[job 8]
filename=/dev/nvme7n1
FIO RAW sequential tests
[global]
direct=1
bs=128k
ioengine=libaio
rw=write
group_reporting
iodepth=64
runtime=300
[job 1]
filename=/dev/nvme0n1
[job 2]
filename=/dev/nvme1n1
[job 3]
filename=/dev/nvme2n1
[job 4]
filename=/dev/nvme3n1
[job 5]
filename=/dev/nvme4n1
[job 6]
filename=/dev/nvme5n1
[job 7]
filename=/dev/nvme6n1
[job 8]
filename=/dev/nvme7n1
FIO RAW random tests
[global]
direct=1
bs=4k
ioengine=libaio
rw=randread
group_reporting
iodepth=128
numjobs=4
random_generator=tausworthe64
runtime=3000
norandommap
randrepeat=0
gtod_reduce=1
buffered=0
size=100%
time_based
refill_buffers
[job 1]
filename=/dev/nvme0n1
[job 2]
filename=/dev/nvme1n1
[job 3]
filename=/dev/nvme4n1
[job 4]
filename=/dev/nvme5n1
[job 5]
filename=/dev/nvme2n1
[job 6]
filename=/dev/nvme3n1
[job 7]
filename=/dev/nvme6n1
[job 8]
filename=/dev/nvme7n1
FIO Raw mixed tests
[global]
direct=1
bs=4k
ioengine=libaio
rwmixwrite=50
rw=randrw
group_reporting
iodepth=64
numjobs=8
random_generator=tausworthe64
runtime=3000
norandommap
randrepeat=0
gtod_reduce=1
buffered=0
size=100%
time_based
refill_buffers
[job 1]
filename=/dev/nvme0n1
[job 2]
filename=/dev/nvme1n1
[job 3]
filename=/dev/nvme2n1
[job 4]
filename=/dev/nvme3n1
[job 5]
filename=/dev/nvme4n1
[job 6]
filename=/dev/nvme5n1
[job 7]
filename=/dev/nvme6n1
[job 8]
filename=/dev/nvme7n1
FIO RAID sequential tests
[global]
rw=write
#rw=read
bs=192K
iodepth=32
direct=1
ioengine=libaio
runtime=3000
numjobs=3
offset_increment=33%
group_reporting
#[job 1]
#filename=/dev/xi_media1
[job 2]
filename=/dev/xi_media2
FIO RAID Random tests
[global]
rw=randwrite
#rw=randread
bs=4K
iodepth=32
direct=1
ioengine=libaio
runtime=6000
numjobs=32
group_reporting
random_generator=tausworthe64
norandommap
randrepeat=0
gtod_reduce=1
buffered=0
size=100%
time_based
refill_buffers
[job 1]
filename=/dev/xi_media1
[job 2]
filename=/dev/xi_media2