AX300 Series

AX300 Series Edge Server

1- Product Overview

1.1- Introduction

The Axial AX300 Series of powerful, high-density edge servers is engineered for complex computing workloads. Equipped with 4th and 5th Gen Intel® Xeon® Scalable processors and a wide range of expansion and storage options, the AX300 Series delivers exceptional performance in a shallow-depth 3U form factor. Its industrial design and versatile installation options make it well suited to challenging edge computing environments. With extensive configuration flexibility, the AX300 Series is ideal for AI training and inferencing, virtualization, advanced automation, or any application that demands scalable, low-latency computing at the edge.

Axial AX300 Series with lid removed
Axial AX300 Series with Security Bezel and Slide Rails
Axial AX300 Series with Tower Stand, Security Bezel, and Rear Cable Security Bezel

For more information on accessories and additional features, visit the AX300 product page.

1.2- Safety

Safe Use and Installation Instructions
Do not open or modify the device. The device uses components that comply with FCC and EC regulations. Modifying the device may void these certifications.

  1. Install the device securely. Handle the device with care to prevent injury and do not drop it.

  2. Equipment is intended for installation in a Restricted Access Area.

  3. Elevated Operating Ambient - If installed in a closed or multi-unit rack assembly, the operating ambient temperature of the rack environment may be greater than room ambient. Therefore, consideration should be given to installing the equipment in an environment compatible with the maximum ambient temperature (Tma) specified by the manufacturer.

  4. Reduced Air Flow - Installation of the equipment in a rack should be such that the amount of air flow required for safe operation of the equipment is not compromised.

  5. Mechanical Loading - Mounting of the equipment in the rack should be such that a hazardous condition is not achieved due to uneven mechanical loading.

  6. Circuit Overloading - Consideration should be given to the connection of the equipment to the supply circuit and the effect that overloading of the circuits might have on overcurrent protection and supply wiring. Appropriate consideration of equipment nameplate ratings should be used when addressing this concern.

  7. Reliable Earthing - Reliable earthing of rack-mounted equipment should be maintained. Particular attention should be given to supply connections other than direct connections to the branch circuit (e.g. use of power strips).

  8. Ambient operating temperature must be between 5 °C and 40 °C with a non-condensing relative humidity of 8-85%.

  9. The device can be stored at temperatures between -40 °C and 70 °C.

  10. Keep the device away from liquids and flammable materials.

  11. Do not clean the device with liquids. The chassis can be cleaned with a cloth.

  12. Allow at least 2 inches of space around all sides of the device for proper cooling. If the device is mounted to a vertical surface, orient it so that the heatsink fins allow air to rise unobstructed. Alternative orientations may result in a reduced operational temperature range.

  13. This device is intended for indoor operation only.

  14. Install the device only with shielded network cables.

  15. Service and repair of the device must be done by qualified service personnel. This includes, but is not limited to, replacement of the CMOS battery. The replacement CMOS battery must be of the same type as the original.

  16. Disposal of the CMOS battery must comply with local regulations.

  17. The product must only be connected to a certified router, switch, or similar network equipment.

  18. The product is intended for indoor use only.

  19. The product cannot be connected to the public network.

WARNING: There is danger of explosion if the CMOS battery is replaced incorrectly. Disposal of battery into fire or a hot oven, or mechanically crushing or cutting of a battery can result in an explosion.

Précautions et guide d’installation

Ne pas ouvrir ou modifier l'appareil. L'appareil utilise des composants conformes aux réglementations FCC et EC. La modification de l'appareil peut annuler ces certifications.

  1. Installez l'appareil en toute sécurité. Manipulez l'appareil avec précaution pour éviter de vous blesser et ne le laissez pas tomber.

  2. L'équipement est destiné à être installé dans une zone à accès restreint.

  3. Température ambiante de fonctionnement élevée - En cas d'installation dans un rack fermé ou à plusieurs unités, la température ambiante de fonctionnement de l'environnement du rack peut être supérieure à la température ambiante de la pièce. Par conséquent, il convient de veiller à installer l'équipement dans un environnement compatible avec la température ambiante maximale (Tma) spécifiée par le fabricant.

  4. Débit d'air réduit - L'installation de l'équipement dans un rack doit être telle que la quantité de débit d'air requise pour un fonctionnement sûr de l'équipement ne soit pas compromise.

  5. Chargement mécanique - Le montage de l'équipement dans le rack doit être tel qu'une condition dangereuse ne soit pas atteinte en raison d'une charge mécanique inégale.

  6. Surcharge de circuit - Il convient de tenir compte de la connexion de l'équipement au circuit d'alimentation et de l'effet que la surcharge des circuits pourrait avoir sur la protection contre les surintensités et le câblage d'alimentation. Une prise en compte appropriée des valeurs nominales de la plaque signalétique de l'équipement doit être utilisée pour répondre à cette préoccupation.

  7. Mise à la terre fiable - Une mise à la terre fiable de l'équipement monté en rack doit être maintenue. Une attention particulière doit être accordée aux raccordements d'alimentation autres que les raccordements directs au circuit de dérivation (par exemple, utilisation de multiprises).

  8. La température ambiante de fonctionnement doit être comprise entre 5 °C et 40 °C avec une humidité relative sans condensation de 8 à 85 %.

  9. L'appareil peut être stocké à des températures comprises entre -40 °C et 70 °C.

  10. Gardez l'appareil à l'écart des liquides et des matériaux inflammables.

  11. Ne nettoyez pas l'appareil avec des liquides. Le châssis peut être nettoyé avec un chiffon.

  12. Laissez au moins 2 pouces d'espace autour de tous les côtés de l'appareil pour un refroidissement correct. Si l'appareil est monté sur une surface verticale, l'orientation recommandée de l'appareil est de sorte que les ailettes du dissipateur thermique permettent à l'air de monter sans obstruction. Des orientations alternatives peuvent entraîner une plage de températures de fonctionnement réduite.

  13. Cet appareil est destiné à une utilisation en intérieur uniquement.

  14. Installez l'appareil uniquement avec des câbles réseau blindés.

  15. L'entretien et la réparation de l'appareil doivent être effectués par un personnel qualifié. Cela inclut, mais sans s'y limiter, le remplacement de la batterie CMOS. La batterie CMOS de remplacement doit être du même type que celle d'origine.

  16. L'élimination appropriée de la batterie CMOS doit être conforme à la gouvernance locale.

  17. Le produit doit uniquement être connecté à un routeur, un commutateur ou un équipement réseau similaire certifié.

  18. Le produit est destiné à une utilisation en intérieur uniquement.

  19. Utilisez uniquement des connecteurs répertoriés UL pour la connexion aux panneaux de fusibles automobiles.

  20. Le produit ne peut pas être connecté au réseau public.

ATTENTION: Il existe un risque d'explosion si la pile CMOS n'est pas remplacée correctement. L'élimination de la batterie dans le feu ou dans un four chaud, ou l'écrasement ou le découpage mécanique d'une batterie peut entraîner une explosion.

1.3- Box Contents & Accessories

The following accessories are included with every system:

  • Cable management ties (2RALXX220400)

  • 3.5” bay mounting bracket (Pre-Installed with needed screws kit)

  • 5.25” bay mounting brackets (2RALXX6751A1)

  • Spare M4 screws for slide rail (2RALXX6755A0)

  • Chassis keys

If additional items were purchased, such as rail mounting kits/brackets, they will be included in the system packaging.

1.4- Product Specifications

1.5- System Identification & Labels

System Label

The system label is located on the right side of the chassis as depicted in the image below. The system label will contain the following information:

  • System Model

  • OnLogic Serial Number

  • Regulatory & Compliance Certification Logos

Front Service Label

On the front of the chassis, there is a retractable product information label containing pertinent product information such as:

  • System Model

  • OnLogic Serial Number

  • BMC MAC addresses

2- Technical Specifications

2.1- External Features

Front I/O

Front LEDs & Buttons

The ID LED/Button is provided to assist with locating the system. The ID LED may be turned on or off by physically pressing the ID button. It may also be turned on, off, or set to blink from the Baseboard Management Controller (BMC) Web UI.
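Beyond the physical button and the BMC Web UI, BMCs that implement standard IPMI typically allow the ID LED to be driven remotely as well. A minimal sketch using the common ipmitool CLI; the host address and credentials shown are placeholder assumptions:

```python
import subprocess

def identify_command(host: str, user: str, password: str, seconds: int = 15) -> list:
    """Build an ipmitool command that lights the chassis ID LED for `seconds`
    seconds (0 turns it off). Host and credentials are placeholders."""
    return ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
            "chassis", "identify", str(seconds)]

# To run against a live BMC (not executed here):
# subprocess.run(identify_command("192.168.1.100", "admin", "password"), check=True)
```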

Rear I/O

2.2- I/O Definitions

Front I/O Ports

Rear I/O Ports

VGA Port

There is one VGA Video port located on the back of the Axial AX300 Series Server. This is connected to the AST2600 BMC Chip.

USB Ports

There are two USB 3.1 Gen 1 (equivalent to USB 3.2 Gen 1) Type-A ports on the front and two USB 2.0 Type-A ports on the rear of the Axial AX300 Series Server.

Network Ports

The AX300 Series features the following onboard Ethernet ports:

1GbE Dedicated BMC Port LEDs

Management Ports

10GbE Networking Port LEDs

The two onboard ports on the AX300 Series are 10GBASE-T ports.

Expansion Slot Configuration

PCIe Gen 5.0 Slots (Six x16 & One x8)

The AX300 Series features seven PCIe 5.0 slots on the motherboard (note: PCIE1 is electrically x8). The PCIe lanes are routed to the listed CPU and support various types of devices.

Recommended PCIe lane population order

To maximize performance and compatibility of PCIe adapters, the following PCIe lane population order should be followed when adding PCIe adapters to the system.

PCIe lanes are numbered below for reference.

2.3- Internal Connectivity

Motherboard Layout & Component Population

This diagram provides a comprehensive map of the AX300 motherboard, detailing various headers and illustrating the optimal installation of RAM and PCIe cards based on CPU and RAM module configurations.

Storage & Drive Bays

M.2 2280/22110 M-key

The M.2 Socket (M2_1, Key M) supports type 2280/22110 M.2 PCI Express modules up to Gen5 x4 (32GT/s x4).

SATA Headers

There is one SATA header on the AX300 Series Server motherboard. The data port supports SATA III 6 Gb/s storage devices.

MCIO Header

There are two MCIO headers on the Axial AX300 Series motherboard, each supporting up to PCIe 5.0 x8. By default, these connections are used for the 4x Front Drive Bay option via the CBDT119 cable (4x OCuLink to 2x MCIO).

OCuLink Headers

There are three OCuLink headers on the Axial AX300 Series motherboard; each supports PCIe 3.0 x4 or four SATA 6 Gb/s ports.

When configured for SATA:

  • OCU 1 provides ports SATA0_0, SATA0_1, SATA0_2, and SATA0_3

  • OCU 2 provides ports SATA0_4, SATA0_5, SATA0_6, and SATA0_7

  • OCU 3 provides ports SATA1_4, SATA1_5, SATA1_6, and SATA1_7

Note that when configured as SATA, OCU 1 and OCU 2 occupy a single VROC RAID domain separate from OCU 3; therefore, a RAID volume spanning all three OCuLink ports cannot be created.
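This grouping constraint amounts to a simple membership check. A hypothetical helper (the mapping and function are illustrative, not a vendor tool) for validating a planned RAID volume:

```python
# VROC RAID domain membership when the OCuLink headers are configured for SATA:
# OCU 1 and OCU 2 share one domain, while OCU 3 sits in its own.
VROC_DOMAIN = {"OCU1": 0, "OCU2": 0, "OCU3": 1}

def can_span_volume(headers) -> bool:
    """Return True if all member headers belong to a single VROC domain,
    i.e. a RAID volume across them is possible."""
    return len({VROC_DOMAIN[h] for h in headers}) == 1

# can_span_volume(["OCU1", "OCU2"]) -> True
# can_span_volume(["OCU1", "OCU2", "OCU3"]) -> False
```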

OCuLink connector labeling:

Expansion Storage Physical Location

All expansion storage options are located in either the 3.5” or 5.25” expansion bays:

5.25” Front Bay Options:

  • 8x SATA SSD Bay - 8x 2.5", 5mm to 7mm, SATA Drives (8x SATA)

  • 4x NVMe SSD Bay - 4x 2.5" U.2/U.3 (SFF-8643) NVMe SSD Drives (4x OCuLink)

  • 12x M.2 Bay - 12x M.2 SATA Drives (3x OCuLink, 4 drives per connection)

3.5” Rear Bay Options:

  • 1x SATA HDD/SSD Bay - 1x 2.5”, 5mm to 15mm, SATA HDD & SSD Drive (1x SATA)

Front 5.25” Drive Bay Population

The following drive population recommendations are provided to ensure consistency of connectivity, operation, and OS drive enumeration aligned to physical drive bay locations.

Rear 3.5” Drive Bay Population

Drive Bay Locks

The front 8x SATA SSD Bay and rear 1x SATA HDD/SSD Bay option both feature locking mechanisms to prevent unauthorized drive removal.

Memory Configuration

DDR5 DIMM Slots

The AX300 Series has sixteen (eight per CPU) 288-pin DDR5 DIMM slots arranged in two groups, and supports Single Channel Memory Technology.

5th Gen Intel Xeon Scalable Processors support transfer speeds between 4800MT/s and 5600MT/s depending on the processor SKU.

Supported Memory Modes

The IMC (Integrated Memory Controller) of the processors supports two mirror modes. Support for these memory modes depends on the DIMM population. There are two IMCs per CPU.

Mirror Modes

Full Mirror Mode will set the entire memory in the system to be mirrored, consequently reducing the memory capacity by half. One half will remain active while the other half will be left in reserve.

Partial Mirror Mode mirrors only the required amount of memory. If rank sparing is enabled, partial mirroring of RAM will not take effect.

Enabling any type of Mirror Mode will disable XPT Prefetch.
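The capacity impact of the two mirror modes can be sketched as a quick calculation (the function and its name are illustrative, not a BIOS interface):

```python
def usable_memory_gb(installed_gb: float, full_mirror: bool = False,
                     partial_mirror_gb: float = 0.0) -> float:
    """OS-visible memory under the mirror modes described above.

    Full Mirror Mode halves capacity: one half active, one half in reserve.
    Partial Mirror Mode reserves a copy only for the mirrored region.
    """
    if full_mirror:
        return installed_gb / 2
    return installed_gb - partial_mirror_gb

# usable_memory_gb(512, full_mirror=True) -> 256.0
# usable_memory_gb(512, partial_mirror_gb=64) -> 448.0
```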

Memory R.A.S. Features

ADDDC Sparing

ADDDC Sparing enables Memory Rank Sparing to reserve memory ranks to replace failed memory ranks when an error is detected. This will reduce the total memory available to the OS. This feature is disabled by default in the BIOS.

Patrol Scrub

Patrol Scrub is a background activity initiated by the processor to seek out and fix memory errors. This feature is disabled by default in the BIOS.

DIMM Population Requirements

The following rules apply when populating DIMMs in the AX300 Series Server:

  1. Only DDR5 DIMMs may be installed into the system.

  2. The maximum frequency of the system memory will never exceed that of the lowest frequency DIMM(s) installed in the system.

When populating DIMMs within the system, the following population is recommended for each CPU configuration in order to maximize overall system performance (V indicates a populated slot):

Single CPU configurations:

Dual CPU configurations:

Memory recommendations when using GPUs

When GPUs are installed in the system, it is recommended that the minimum system memory should be at least 1.5 times the total GPU memory. For superior performance and to accommodate future workloads, it is strongly recommended that the system memory be 2.0 times the total GPU memory. Failure to meet these memory requirements may result in degraded performance or unexpected application behavior.
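The sizing rule above reduces to a simple calculation; a small sketch (the function name is illustrative):

```python
def system_memory_targets_gb(total_gpu_memory_gb: float):
    """Return (minimum, recommended) system memory in GB per the guidance
    above: at least 1.5x total GPU memory, ideally 2.0x."""
    return 1.5 * total_gpu_memory_gb, 2.0 * total_gpu_memory_gb

# Example: four 48 GB GPUs -> 192 GB total GPU memory
minimum, recommended = system_memory_targets_gb(4 * 48)
# minimum -> 288.0, recommended -> 384.0
```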

Motherboard Headers & Connections

TPM Header

The Axial AX300 Series supports an optional discrete TPM 2.0 module. The TPM module included with the AX300 Series Server is the Infineon SLB9670 TPM2.0 (13-pin SPI module).

CR2032 CMOS Battery Socket

A socket is provided for a CR2032 battery. Regulatory requirements dictate the installed battery shall be rated for operation to at least 85°C.

PMBus

A connector is provided for a cable connection to the power supply PDB.

RS232 COM Header

A 10-pin (9 electrical) header is provided for cabled serial port, for diagnostics or legacy remote management.

Dual Rotor Fan Header

Seven fan headers are included. Hotswap is supported at the motherboard level.

Front Panel Header

The front panel header provides power button and LED, reset button and hard disk activity connections. AX300 exposes the power button and LED on the chassis and leaves the others disconnected.

AUX Front Panel Header

The AUX front panel header provides connections for some auxiliary features as indicated. The Locator LED and System Fault LED are exposed on the AX301 chassis front, and CASEOPEN detection is used to report the lid status.

ATX Power

System voltages are provided to the motherboard through a standard 24-pin ATX power connector.

ATX 12V Power

Four (4) 12V auxiliary power inputs are present. All four inputs are used in units built by OnLogic and must be connected with the CBPW166 cable.

VROC

A VROC key header is included to enable Intel Virtual RAID on CPU and NVMe/AHCI RAID on CPU PCIe lanes.

BMC SMBus

An SMBus connection to the BMC is provided.

USB 3.2 Gen1

A USB 3.2 Gen 1 connection is cabled to the front of the chassis to enable the front USB ports.

Onboard Diagnostics (Dr. Debug)

Dr. Debug displays diagnostic code information, making debugging easier. Please see the charts below for Dr. Debug code information.

2.4- Motherboard

Layout & Component Overview

For DIMM installation and configuration instructions, please see DIMM Population Requirements.

Onboard LED Indicators

Motherboard Manual

Click here for the most up-to-date manual directly from the motherboard supplier.

2.5- Power Management

The sections below focus on the power features and capabilities of the AX300 Series Server.

Supported Power Supplies

The system supports two redundant power supplies, which may be 1000W/2400W depending on the selected mode and input voltage (110V/240V respectively). These power supplies are hot-swappable, meaning they can be replaced while the system is running without interrupting its operation.

The power input to both supplies must be the same, as the supplies automatically adjust their power mode based on the input voltage: 110V input yields a maximum output of 1000W, and 240V input yields a maximum output of 2400W.
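The voltage-dependent output mode can be sketched as follows; the low-line/high-line cutoff values used here are illustrative assumptions, since the manual specifies only the 110V and 240V operating points:

```python
def psu_max_output_watts(input_voltage: float) -> int:
    """Maximum output of one AX300 power supply, which switches mode based
    on input voltage: 110V line -> 1000W, 240V line -> 2400W.
    The 100V/200V boundaries below are assumptions for illustration."""
    if input_voltage >= 200:
        return 2400
    if input_voltage >= 100:
        return 1000
    raise ValueError("input voltage outside supported range")

# psu_max_output_watts(110) -> 1000
# psu_max_output_watts(240) -> 2400
```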

If you need to replace a failed power supply, simply remove the failed unit and insert a new one of the same wattage. The system will automatically recognize the replacement power supply and bring it online to restore redundancy.

IMPORTANT: When utilizing multiple 300W-350W or the maximum of seven 150W PCIe adapters (such as GPUs), the 2400W power supply mode is recommended on both power supplies due to momentary power spikes (exceeding the rated wattage of the adapters) that may occur. When these power spikes occur, the power consumption of the PCIe adapters combined with power draw of other system components may exceed the available power of the supplies.

Power Redundancy

The power supplies in this system are fully redundant in a primary/backup mode. This means that the two power supplies work in parallel, with one power supply acting as the primary source of power and the other as a backup.

In normal operation, the primary power supply is responsible for supplying power to the system, while the backup power supply remains idle. If the primary power supply fails, the backup power supply automatically takes over, ensuring that the system continues to receive power without interruption.

The power supplies are designed to work seamlessly together, with the primary power supply handling the majority of the load and the backup power supply providing additional power as needed. This redundancy ensures that the system can continue to operate even if one power supply fails, providing a high level of reliability for critical systems.

If a power supply failure occurs, the alerts will be presented via the Baseboard Management Controller (BMC), an audible alarm may occur, and the error LED will assert. If this happens, the failing supply can be serviced while the system remains operational on the backup power supply. Once the replacement power supply is installed, the system will automatically detect it and bring it online, restoring full redundancy.

Power Consumption

This table represents the max power draw of a max configuration AX300 Series Server with two 6548Y+ CPUs, a selection of GPUs, and 1 of the 3 options for storage docks in the front.

Input Voltage and Line Cords

Based on the total power consumption of the system, the following guidelines should be followed relative to the power supply input voltages, quantities, and line cords:

Auto Power On Configuration

The Axial AX300 can be configured to turn on automatically when power is connected. This is useful for power outage recovery or if the unit is mounted in a hard to reach location. You can adjust Auto Power On settings by following the steps listed below.

  1. Power on the system and press F2 a few times to access the BIOS

  2. Navigate to Server Mgmt > BMC Tools

  3. Locate Restore AC Power Loss setting

  4. This can be changed to any of the following states:

    • Power Off: The system will remain off when power is restored

    • Last State: The system will recover to the state it was in before the power loss event (i.e., if the unit was off, it stays off; if it was powered on, it powers back on)

    • Power On: The system will power on after any power loss event

  5. Press F10 to Save & Exit
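The three Restore AC Power Loss behaviors reduce to the following logic (a sketch of the policy, not BMC firmware):

```python
def powers_on_after_restore(setting: str, was_on_before_loss: bool) -> bool:
    """Whether the system powers on when AC is restored, for the three
    'Restore AC Power Loss' settings described above."""
    if setting == "Power On":
        return True
    if setting == "Power Off":
        return False
    if setting == "Last State":
        return was_on_before_loss
    raise ValueError(f"unknown setting: {setting!r}")

# powers_on_after_restore("Last State", was_on_before_loss=True) -> True
```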

2.6- Thermals & Cooling

The Axial AX300 Series Edge Server is designed to operate and function across a wide temperature (5 to 40°C) and humidity range (8 to 85% RH non-condensing). The following sections describe the thermal and cooling capabilities of the system and its expected behavior in various conditions.

System Fans & Airflow Direction

The Axial AX300 Series Edge Server has five 80 x 80 x 38mm system fans, which can be independently controlled and configured via the Baseboard Management Controller (BMC) relative to the supported system temperature sensors. The default fan duty and configuration settings have been validated to operate in accordance with the supported temperature range (up to 40°C). If the ambient operating temperature is tightly controlled, additional fan configuration optimizations may be manually adjusted to optimize acoustics and reduce power consumption. For additional information pertaining to manual fan configuration settings, please consult the Axial Edge Server BMC Manual.

The power supply fans operate independently and have their own closed-loop cooling algorithm.

By default, the fans are divided into three cooling zones: PCIe Expansion, CPU and Memory, and Power Supply and Storage. Internal baffling ensures CPU airflow is directed through the CPU heatsinks. A partial diverting duct directs some of the exhaust air from CPU0 around CPU1 and delivers bypass air from the inlet to CPU1.

Note: In a configuration with only one CPU installed, no diverting duct is installed in the chassis.

In addition to the system fans, some PCIe and 5.25” bay devices will have their own independently controlled fans.

Temperature Sensors

Temperature sensor data is available for many onboard temperature sensors.

Default Fan Settings

The system is configured to operate in accordance with a closed loop thermal algorithm which accounts for components’ temperature maximums, reduced acoustics, lower power consumption, and optimal performance.

Fan Zone Assignments

The fan zone assignments, default closed loop tables, and associated temperature sensors are outlined in this section.

Fan Zone 1 - CPU Area

Assigned Temperature Sensor: CPU1 Temp, CPU2 Temp

Assigned Fans: FAN5, FAN6

As per the default configuration settings, the system fans will increase duty cycle at 1% increments every second when the CPU temperature is at or above 80°C. When the temperature drops below 65°C, the system fan duty cycle will reduce by 1% every second.

Fan Zone 2 - PCIe / GPU Area

Assigned Temperature Sensor: TEMP_GPUx, M.2 Temp

Assigned Fans: FAN3, FAN4

As per the default configuration settings, the system fans will increase duty cycle at 1% increments every second when the GPU temperature is at or above 70°C. When the temperature drops below 60°C, the system fan duty cycle will reduce by 1% every second.

Note: GPU temperature sensing is only supported with Nvidia professional grade or Intel Datacenter GPUs

Fan Zone 3 - PSU / Storage Area

Assigned Temperature Sensor: TRMB1 (Ambient Temperature Sensor)

Assigned Fans: FAN7

As per the default configuration settings, the system fan will increase duty cycle at 1% increments every second when the ambient temperature is at or above 60°C. When the temperature drops below 50°C, the system fan duty cycle will reduce by 1% every second.

Additional Fan Defaults

The default system idle duty cycle is 15% and the default maximum duty is 95%.

Due to the chassis air inlet and outlet sizing, the fan performance does not marginally improve above 95% duty cycle, while fan noise increases.
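The per-zone closed-loop behavior described above, together with the 15% idle and 95% maximum defaults, can be sketched as a one-second update step (the threshold arguments correspond to each zone's upper and lower trigger temperatures):

```python
def next_fan_duty(duty: int, temp_c: float, upper_c: float, lower_c: float,
                  idle: int = 15, maximum: int = 95) -> int:
    """One-second step of the fan control loop: duty rises 1% per second at
    or above the upper threshold, falls 1% per second below the lower
    threshold, and is clamped to the 15% idle / 95% maximum defaults."""
    if temp_c >= upper_c:
        duty += 1
    elif temp_c < lower_c:
        duty -= 1
    return max(idle, min(maximum, duty))

# CPU zone (80 °C up / 65 °C down): ten seconds at 85 °C starting from idle
duty = 15
for _ in range(10):
    duty = next_fan_duty(duty, 85.0, 80.0, 65.0)
# duty -> 25
```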

Upon System Fan Failure or BMC Firmware Update, System Fans will ramp to maximum speed.

Fan Default Acoustics

Acoustical performance of the default fan settings is provided in terms of three configurations: Entry, Typical, and Maximum. Configuration details are provided in Table 4.3.3.1 below. Each configuration has been tested according to ISO7779 & ISO9296 @ 23°C.

System Configurations

Acoustic Workload Applications/Test

Acoustic Results

Thermal Performance and Validation

The default fan duty and configuration settings have been validated to operate in accordance with the supported temperature range (up to 40°C) as per the following test scenario and results.

Test Conditions

System Configurations

Workload Applications/Test

Temperature Range

The system performance was tested along a ramp with the following levels at two hours per level. During the test sequence, numerous points throughout the system were monitored to ensure adequate cooling was provided to components in the system. The system was also tested 5 °C and 10 °C above its rated temperature range to help characterize performance outside of the rated temperature range.

Test Results

Maximum Temperature Recorded for Critical Components

Configuration           CPU TMAX   GPU TMAX   RAM TMAX
Entry Configuration     84 °C      89 °C      56 °C
Typical Configuration   79 °C      92 °C      56 °C
Max Configuration       87 °C      94 °C      60 °C

Entry Configuration Thermal Performance Test

Summary: The Axial AX300 Series system configured with entry-level hardware performed at full capacity under the workload across the full thermal range (0 °C to 40 °C) without CPU or GPU throttling.

Typical Configuration Thermal Performance Test

Summary: The Axial AX300 Series system built with a typical hardware configuration performs at full capacity under the workload up to 40°C. CPUs maintain the full workload across the full temperature sweep. The Nvidia RTX 4500 ADA remains below critical temperatures for the operational range and begins throttling above 40°C as critical temperatures are reached.

Maximum Configuration Thermal Performance Test

Summary: The Axial AX300 Series system built with a maximized hardware configuration performs at full capacity under the workload up to 40°C. CPUs maintain the full workload across the full temperature sweep. The four Nvidia RTX 6000 ADA cards remain below critical temperatures for the operational range and begin throttling above 40°C as critical temperatures are reached.

CPU Performance Relative to High Ambient Temperatures

CPUs with high Thermal Design Power (TDP) ratings may experience performance throttling under sustained, maximum workloads, particularly in high ambient temperature environments.

To optimize performance in high ambient temperature environments, it is recommended to characterize workloads accordingly and adjust system settings as needed.

2.7- Block Diagram

3- Installation & Mechanical

3.1- Dimensions

Axial AX300 Series without Security Bezel
Axial AX300 Series with Front Security Bezel
Axial AX300 Series with Tower Stand, Front Security Bezel, and Rear Cable Security Bezel.

3.2- Mounting

Mounting Hardware

The Axial AX300 Series Edge Server has been designed with flexibility in mind and can be mounted in different ways. As the system is designed to fit industry standard 19” Electronic Industries Alliance (EIA) racks, there are multiple rack mounting rail kits available. Additionally, a 19” EIA two-post rack mounting option is available.

Two options are available for mounting outside of a traditional rack. The system may be wall mounted using the OnLogic wall mount kit. Alternatively, the system supports feet for a vertical desktop configuration.

In addition to the mounting options, the Axial AX300 Series Edge Server supports locking front and rear security bezels.

Rack Mounting

The Axial AX300 Series Edge Server has been designed to support standard 19" EIA rack mounting, which is a common form factor used in data centers and server rooms. To accommodate different rack depths, 4 different rail options support 16.5" - 47" rack depths. These rail kits can be used to securely mount the server in the rack, are easy to install, and include all the necessary hardware for attachment into the rack.

All rail kit options are also supported in a “reverse” orientation, with I/O and power supplies facing the front of the rack.

Rackmount Ball Bearing Slide Rails

The Ball Bearing Slide Rails are an optional accessory designed to enhance the functionality and ease of use of the Edge Server. These slide rails are designed to be used with standard 19" EIA racks and allow for easy installation and removal of the server from the rack. The ball bearing design ensures smooth and effortless motion, while the sturdy construction provides a secure and stable platform for the server. With these slide rails, you can easily access the server for maintenance or upgrades without the need for complex disassembly or cumbersome lifting.

The desired length Ball Bearing Slide rail kit can be chosen at time of configuration based on the rack depth requirements.

Mounting Hole: Square, Rack Depth Range (front to back flange): 420mm (16.5in) to 940mm (37in)

Forward Orientation: Install the six M4x0.7 L=4mm Low Profile Cheesehead screws provided with the rail kit. Align the first hole in the rail with the hole label specified in the table below.

Reverse Orientation: Install the six M4x0.7 L=4mm Low Profile Cheesehead screws provided with the rail kit. Align the first hole in the rail with the hole label specified in the table below.

Rail SKU       Minimum Span      Maximum Span      Forward Hole Label   Reverse Hole Label   Cable Management Arm?
3RAMISBBG002   430.4mm (16.9”)   600mm (23.6”)     F2                   B2                   No
3RAMIS205200   564mm (22.2”)     850mm (33.4”)     F1                   B1                   Yes
3RAMISBBG003   626.4mm (24.7”)   939.9mm (37.0”)   F1                   B1                   Yes

Cable Management Arm Slide Rail Kit

The Cable Management Arm Slide Rail Kit (Part Number: 3RAMIS205300) is an optional accessory that enhances the standard ball bearing slide rails by providing a cable management arm to neatly organize and secure cable connections to the Edge Server system while still supporting easy removal of the server from the rack for maintenance and upgrades.

The Cable Management Arm is not compatible with the 420mm (16.5”) to 600mm (23.5”) slide rail.

For slide rail options 23.5” and up, the Cable Management Arm can be chosen at time of configuration based on the rack depth requirements.

Mounting: Integrated with slide rail

Clip the cable management arm into the brackets on the slide rail

Two-Post Rack Bracket Kit

The Two-Post Rack Bracket Kit (Part Number: MTR-2POST-AX301) is an optional accessory designed to enable flexible mounting of the Edge Server. These brackets are designed to be used with standard 19" EIA Two-Post racks and allow for stable and secure mounting of the server in a rack.

The Two-Post Rack Bracket Kit supports both forward and reverse mounting. To convert to a reverse mounting orientation, remove the handle.

The Kit includes all required hardware for square or M5 threaded racks. Additional hardware may be required for racks with other mounting holes.

Mounting Hole: Square or M5 threaded with included hardware. #10-32, #12-24 or round hole with appropriate additional hardware.

Rack Post Range (front to back flange): 50mm (2in) to 150mm (6 in)

Step 1: Install twelve M5 cage nuts in the rack as shown. Secure the four brackets to the rack posts using twelve M5x0.8 L=15mm Pan Head Screws. Install two M5 cage nuts in the front brackets.

Step 2: Install the edge server in the brackets. Secure the server using two M5x0.8 L=15mm Pan Head Screws.

Reverse Rail Mounting

The Axial AX300 Series Edge Server is designed to accommodate a reversed, front I/O rackmount orientation, with the I/O and power supplies facing the front of the rack.

All rail kit options with supported rear I/O are also supported in a “reverse” orientation, with I/O and power supplies facing the front of the rack. To convert to a reverse orientation, remove the rack handles and reattach them at the rear of the chassis.

Reverse Orientation: Install rack handles at the rear of the chassis using eight #6-32 L=1/4” Pan Head Screws.

Airflow Implications: Most racks have a front-to-back airflow expectation, where cooling air is supplied at the front and exhaust air is removed at the rear of the rack. Mounting a standard AX300 Series server in the reverse configuration, without accommodations for reversing airflow, will violate this convention and may cause cooling problems for either the AX300 or other devices in the rack.

Options to enable front-to-back airflow with front I/O orientations are available. Consult OnLogic for additional information.

Wall Mount Kit

The Axial AX300 Series Edge Server wall mount kit (Part Number: MTW116) is made of sturdy metal and designed to securely hold the server in place against a wall. This optional accessory includes the necessary wall mounting brackets and hardware to flexibly mount the Axial AX300 Series Edge Server system where a rack is not available or practical. It is strongly recommended to include the front security bezel and rear cable security bezel when wall mounting the edge server. If installed, the front security bezel can also support an optional dust filter.

Install the eight M4x0.7 L=4mm Flathead screws provided with the wall mount kit. Use holes “F1” and “B1” for the outermost holes on the bracket. Install four plugs in holes marked with “T” or “∆” (right side only).

Tower Feet

The optional tower feet (Part Number: MTT-AX301) allow the Axial AX300 Series Edge Server to be used vertically in a desktop tower configuration. Using the front and rear security bezels in conjunction with the tower feet is strongly recommended.

Install four tower feet using one M4x0.7 L=6mm Pan Head screw each, using the holes marked “T” or “∆” on the right side of the chassis. Install nine plugs in slide rail holes “F2” through “B2” (right side only).

Stand the system onto the four tower feet.

Bezels

Front Security Bezel

The Axial AX300 Series Edge Server offers an optional front security bezel (Part Number: F1-AX301) that prevents access to the front USB ports, buttons and hot-swap chassis fans. For rack mounted configurations, the front bezel also prevents access to the rack screws to prevent unauthorized removal from the rack. In addition to the security features, the front bezel also contains a replaceable dust filter.

Replacement dust filters are available:

| SKU | Description | Filtration |
| --- | --- | --- |
| F1-AX301-FILTER | AX300 Front Dust Filter | Debris and large fibers |
| F1-AX301-FILTER-MERV4 | AX300 MERV 4 Front Dust Filter | 80% Dust Arrestance (MERV 4) |

The rack-out protection prevents removal of the system via the tabs on the bottom corners, offering an additional layer of security.

If the rack-out protection is not desired, the screw covers can be removed. Press firmly upward on the screw cover until the pegs pop out of the keyhole slots.

Front Bezel Installation

Install the bezel bracket with three M3 flat head screws.

Angle the bezel and align the hooks on the right side of the bezel with the bezel bracket. Using one hand to keep the right side of the bezel in place, press the bottom left corner inward and upward until the hook pops into place. Use the key to lock the bezel.

Rear Cable Security Bezel

The rear cable security bezel (Part Number: B2-AX301) for the Axial AX300 Series Edge Server prevents unauthorized removal or installation of cables or devices from the rear I/O ports. The bezel has openings on the left and right sides to allow cables to pass through. These openings are protected with dust brushes, and the rear bezel supports an optional dust filter for use in reverse airflow configurations.

Use the included five #6-32 Flat Head T10 Screws to attach the base of the rear security bezel.

Cable Requirements for Earthquake Rating

The Axial AX300 Series is certified as a NEBS GR-63-CORE Earthquake Zone 4 shelf-level component. To achieve this rating, the use of locking cables for all connections is required. Many I/O connections have a locking mechanism as part of the connector specification. For these connections, use of a specification-compliant cable is sufficient to meet the requirement.

Locking power cables can be purchased from OnLogic using the SKUs below:

3.3- System Servicing

System Access Overview

The Axial AX300 Series Edge Server is designed to be compact while maintaining easy serviceability. Please follow the instructions below to service the specified component or device.

Important Note: Except for hot-swapping the chassis fans/power supplies, the system should be powered off and the power disconnected before performing any service.

Opening the AX300

To open the AX300, loosen the single retention screw on the system’s cover and unlock the chassis lock. Then pull the chassis lid back to access the unit’s internal components.

Hot-Swap Fan Replacement

The 5 chassis fans on the front of the system are hot-swappable.

Step 1: Grip the handle and the release latch on the front of the fan cage.

Step 2: Depress the release latch and pull straight out to remove the fan.

Step 3: Insert the replacement fan into the socket. Press the fan until the latch clicks. Check that the replacement fan is secure by gently pulling the handle.

Replacing Front Bezel Dust Filter

Step 1: Unlock and remove the Front Security Bezel.

Step 2: Remove the dust filter from the bezel. The filter frame is flexible and can be removed by gently lifting one corner until the retaining tabs disengage.

Step 3: To install a new filter, press the filter into the bezel. Ensure all 8 retaining tabs are engaged:

Accessing & Servicing CPU & Memory Devices

Step 1: Unlock and open the chassis lid.

Step 2: Disconnect the data and power cables on the cable routing bracket at the PSU side. The other end of these cables can be left connected to the motherboard or PCIe devices as appropriate.

Step 3: Remove (or fold to one side) the cable routing bracket.

Step 4: Remove the 3.5” bay bracket and 5.25” bay device (if populated).

Step 5: Remove the #6-32 Flange Head Screw securing the CPU air duct.

Step 6: Bend the CPU air duct material to unlatch the opposing hooks at the back left corner.

Step 7: Remove the right side (L-shaped piece) of the CPU air duct.

Step 8: Remove the air diverter from between the CPU sockets.

Step 9: Service the CPU or Memory devices. Tip: Access to the memory release levers can be improved by removing chassis FAN 5 or FAN 6.

Remove/Replace CPU

To remove the CPU:

Step 1: Using a T30 bit, loosen the 4 corner fasteners on the cooler in a star pattern to ensure even pressure release.

Step 2: Pull the wire clips up toward the CPU to unlatch the CPU cooler.

Step 3: Gently pull the CPU cooler up by the edges; the CPU is connected to the cooler.

Step 4: Once the cooler is removed, gently unlatch the CPU carrier from the cooler.

Step 5: Remove the CPU from the CPU carrier by lifting the latch release handle found on the bottom of the carrier.

To replace the CPU:

Step 1: Line the CPU up on the CPU carrier using the golden triangle found on the corner of the CPU, along with the notches on the CPU and CPU carrier.

Step 2: Push down the metal handle to lock the CPU into the carrier.

Step 3: Flip the CPU carrier over and apply thermal paste to the top of the CPU.

Step 4: Clip the CPU carrier onto the CPU cooler with the TOP of the CPU making contact with the BOTTOM of the cooler. Keep in mind the corner of the CPU that contains the alignment triangle.

Step 5: Line up the cooler and CPU carrier with the socket on the motherboard, making sure the screws align with the standoffs on the motherboard and the triangle lines up with the triangle in the corner of the socket.

Step 6: Flip the clips on the cooler fasteners down, away from the CPU, to latch the cooler to the board.

Step 7: Using a T30 bit, tighten the CPU cooler fasteners to the motherboard in a star pattern to ensure even distribution of pressure.

Reassembly after CPU or Memory Service

Step 1: Replace the left wall of the air duct immediately to the left of the outermost RAM slot as shown below.

Step 2: Replace the air diverter between the CPU sockets. Connect the hook into the corresponding notch in the left wall of the air duct. Note the orientation label on the air diverter.

Step 3: Replace the air duct right wall in the chassis. Each wall of the air duct should be inserted just outside the outermost RAM slot, such that all RAM slots are inside the duct. See below:

Step 4: Align the hooks on the left side of the air duct, then bend the duct material to latch the rearmost opposing hook.

Step 5: Install the #6-32 Flange Head Screw to secure the CPU air duct.

Step 6: Reinstall the cable routing bracket and connect the cables.

Step 7: Reinstall the 3.5” and 5.25” devices.

Step 8: Close and lock the lid.

Remove/Replace DIMMS

To remove DIMMS:

Step 1: Press down on the latches on the DIMM slots on the motherboard until they are tilted away from the DIMM.

Step 2: Gently pull the DIMM out of the slot.

To replace DIMMS:

Step 1: Press down on the latches on each side of the DIMM slot on the motherboard until they are tilted away from the slot.

Step 2: Line up the notch in the DIMM with the notch in the slot to ensure the DIMM goes in correctly.

Step 3: Gently press the DIMM into the slot until an audible click indicates the latches have engaged and are holding the DIMM in place.

Servicing Drive Bays

Servicing 3.5” Bay Device

Step 1: Unlock and open the chassis lid.

Step 2: If a 3.5” device is already installed, disconnect the power and data cables.

Step 3: Remove the 3.5” bay bracket from the chassis. Loosen the thumbscrew, then pull the bracket backward into the chassis before lifting upward.

Step 4: If no device is installed in the bay, turn the bracket upside down and remove the two screws holding the blank plate onto the bracket. (Figure 2 below)

Step 5: Insert the new 3.5” device at a slight angle onto the 2 pins on the right side of the bracket. Secure the device using M3 screws. (Figure 3 above)

Step 6: To install the bracket into the chassis, align the slot pins on the underside of the bracket with the corresponding holes in the chassis. Then slide the bracket forward and secure the thumbscrew. The front of the device should be flush with the rear of the chassis.

Step 7: Connect the power and data cables to the device.

Servicing 5.25” Bay Device

Step 1: Unlock and open the chassis lid.

Step 2: If a 5.25” device is already installed, disconnect the power and data cables, then proceed with Step 3a. If the 5.25” blank cover is installed, skip to Step 3b.

Step 3a: Remove the 5.25” bay bracket from the chassis. Loosen the thumbscrew and pull the tab on the release pin, then pull the bracket backward into the chassis before lifting upward.

Step 3b: If no device is installed in the bay, remove the empty bracket by pulling the two levers on the release pins:

Step 4: Locate the 5.25” brackets in the accessory box, then secure them to the sides of the device using M3 screws as shown below:

Step 5: To install the bracket into the chassis, align the slot pin on the underside of the bracket with the corresponding hole in the chassis. Then slide the bracket forward and secure the latch pin and thumbscrew. The front of the device should be flush with the front of the chassis.

Step 6: Connect the power and data cables to the device. Set the fan speed switch to the desired “LOW” or “HIGH” speed, indicated by “L” and “H” at the bottom right of the bay.

Servicing PCIe & GPU

PCIe Devices & GPUs

To help protect large PCIe devices (such as GPUs) in high vibration environments, the server supports the addition of bracket(s) to support the back end of the device. Depending on configuration, the system may have a bracket for each card, or a single bracket supporting all installed full-height/full-length devices.

Note: Half-length/low-profile PCIe devices do not have a bracket.

Individual card bracket

Step 1: Unlock and open the chassis lid.

Step 2: If installing a new device, start by removing the PCIe blank plate.

Step 3: Attach the bracket to the GPU using M3 screws. Bracket dimensions and screw location will vary depending on the GPU. An example installation is shown below. Consult OnLogic for supported GPU brackets.

Step 4: Install the GPU and bracket assembly into the chassis. Secure with #6-32 flange head screws.

Single Full Height/Full Length Bracket

Step 1: Unlock and open the chassis lid.

Step 2: If installing a new device, start by removing the PCIe blank plate.

Step 3: If the PCIe Support Bracket is already installed, remove it by removing the four #6-32 flange head screws.

Step 4: Install the GPU into the motherboard, and secure the front of the card with #6-32 flange head screw(s).

Step 5: Prepare the bracket: for dual-slot cards, twist out the appropriate dividers using a screwdriver. Note: Chassis/bracket PCIe slot numbering/order may not align with the motherboard.

Step 6: Install the bracket into the chassis. If the dividers get stuck, gently rock the GPUs until the divider fits into the gap between the cards. Secure with four #6-32 flange head screws.

4- Software & Firmware

4.1- BIOS/UEFI

For complete details on BIOS/UEFI configuration, refer to the official BIOS/UEFI User Manual:

Axial AX300 Series BIOS/UEFI Manual

4.2- Remote Management (IPMI/BMC)

For complete details on the Baseboard Management Controller (BMC) functionality, refer to the official BMC Manual:

Axial Edge Server BMC Manual

4.3- Drivers & Downloads

Drivers

Click here for the most up-to-date drivers directly from the motherboard supplier.

BIOS Updates

| BIOS Version | Release Date | Link |
| --- | --- | --- |
| 10.02 | 07/11/2025 | |
| 1.03 | 10/3/2023 | |

Update the BIOS with the downloaded file(s) above. Refer to the AX300 BIOS Manual (linked above) and use Instant Flash for the update procedure.

BMC Updates

| BMC Version | Release Date | Link |
| --- | --- | --- |
| 1.12.00 | 04/01/2025 | |

4.4- Operating System Compatibility & Installation

Supported Operating Systems

The AX300 Series Server supports the following operating systems:

  • Microsoft Windows Server 2022 Essentials

  • Microsoft Windows Server 2022 Standard

  • Ubuntu 22.04 - Server

  • Ubuntu 24.04 - Server

  • Red Hat Enterprise Linux 8.10

  • Red Hat Enterprise Linux 9.x

  • VMware vSphere and ESXi 8

4.5- RAID Configuration

RAID (Redundant Array of Independent Disks) is a technology that allows multiple hard drives to work together as a single logical drive, providing increased performance and data redundancy. The idea behind RAID is to combine the storage capacity of multiple drives to create a larger virtual drive that appears to the operating system as a single disk.

RAID can improve system performance by distributing data across multiple drives, allowing for faster read and write speeds. Additionally, RAID can provide data redundancy by using multiple drives to store the same data, so that if one drive fails, data can still be accessed from the other drives. There are several RAID levels with different configurations and benefits, each offering varying levels of performance and data redundancy.

The Axial AX300 Series Server supports onboard RAID via Intel Virtual RAID on CPU (Intel VROC) supported by the Intel Xeon Scalable processors.

Intel VROC is an enterprise RAID solution that unleashes the performance of NVMe SSDs, enabled by a feature in Intel Xeon Scalable processors called Intel Volume Management Device (Intel VMD), an integrated controller inside the CPU PCIe root complex.

Prior to configuration of RAID, users are advised to back up their data as the process may erase all data on the hard drives.

Note: SATA RAID is limited to volumes from disks on the same controller. SATA0_0 through SATA0_8 (OCuLink ports 1 and 2) may be used to create up to an 8-disk RAID volume, and SATA1_4 through SATA1_7 (OCuLink port 3) may be used to create a separate, up to 4-disk RAID volume. This limitation does not apply to NVMe RAIDs created using the OCuLink ports.

Supported RAID Types

The following sections will discuss the various RAID types that are supported on the AX300 Series Server and their respective advantages/disadvantages.

VROC Options

Intel VROC allows for RAID volumes to be created and controlled by the Intel Volume Management Device (Intel VMD) controller. There are 2 VROC options that can be purchased with the system:

VROC Standard - Allows for RAID 0/1/10 volumes to be created

VROC Premium - Allows for RAID 0/1/5/10 volumes to be created

RAID 0: Striping

RAID 0 (Redundant Array of Inexpensive Disks level 0), also known as striping, is a method of combining multiple physical hard drives into a single logical volume for improved performance.

In RAID 0, data is divided into blocks and spread across two or more physical drives simultaneously. The blocks are written to the drives in a way that balances the load and optimizes performance. When data is read, the blocks are retrieved from multiple drives at the same time, increasing the read and write speed of the overall system.

An advantage of RAID 0 is its improved performance due to the parallel access to multiple drives. However, RAID 0 does not provide any fault tolerance or redundancy. If one drive fails, the entire RAID 0 volume will be lost, along with all data stored on it. Therefore, it is recommended to use RAID 0 only for non-critical data or as part of a larger backup and disaster recovery strategy.

RAID 0 requires a minimum of two drives.

For RAID 0, it is recommended to use disks of the same interface, speed, and capacity. If the disks in a RAID 0 array have different sizes, performance may be limited and the capacity of the array will be limited by the size of the smallest disk.
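The round-robin block distribution described above can be sketched in a few lines of Python. This is a toy illustration of the striping concept only; the `stripe` function is a hypothetical helper, not part of Intel VROC or any OnLogic tool:

```python
def stripe(data: bytes, n_disks: int, strip_size: int) -> list[list[bytes]]:
    """Distribute fixed-size strips round-robin across member disks (RAID 0)."""
    disks: list[list[bytes]] = [[] for _ in range(n_disks)]
    strips = [data[i:i + strip_size] for i in range(0, len(data), strip_size)]
    for n, s in enumerate(strips):
        disks[n % n_disks].append(s)  # strip n lands on disk n mod n_disks
    return disks

# 12 bytes striped across two disks in 2-byte strips:
striped = stripe(b"ABCDEFGHIJKL", n_disks=2, strip_size=2)
# Disk 0 holds strips 0, 2, 4; disk 1 holds strips 1, 3, 5.
```

Because consecutive strips sit on different disks, sequential reads and writes can be serviced by all members in parallel, which is the source of RAID 0's performance gain.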

RAID 1: Mirroring

RAID 1 (Redundant Array of Inexpensive Disks level 1) is a type of data storage technology that provides data redundancy and fault tolerance by creating an exact copy, or mirror, of data on two or more physical drives.

In RAID 1, when data is written to one drive, it is simultaneously written to the other drive(s), creating an exact duplicate of the data on each drive. This ensures that if one drive fails, the data can still be accessed from the remaining drive(s). The read performance of RAID 1 can be faster than that of a single drive because data can be read from multiple drives at the same time. However, write performance is typically slower because data must be written to multiple drives.

An advantage of RAID 1 is its data redundancy and fault tolerance. If one drive fails, the data is still available on the other drive(s). Additionally, RAID 1 can be hot-swappable, meaning that if a drive fails, it can be replaced without having to shut down the system.

However, RAID 1 has some disadvantages, including lower storage capacity compared to other RAID configurations and higher cost due to the need for multiple drives. RAID 1 is recommended for applications that require high data availability and reliability, such as mission-critical systems, servers, and database applications.

RAID 1 requires a minimum of two drives.

For RAID 1, it is recommended to use disks of the same interface, speed, and capacity. If the disks in a RAID 1 array have different sizes, performance may be limited and the capacity of the array will be limited by the size of the smallest disk.
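The mirroring behavior described above, where every write is duplicated and reads survive a single drive failure, can be modeled with a minimal Python sketch. The `Mirror` class is purely illustrative and not how VROC is implemented:

```python
class Mirror:
    """Toy RAID 1 array: writes go to every member; reads survive one failure."""

    def __init__(self, n_disks: int = 2):
        self.disks = [dict() for _ in range(n_disks)]  # lba -> block, per member
        self.failed: set[int] = set()

    def write(self, lba: int, block: bytes) -> None:
        for disk in self.disks:
            disk[lba] = block  # identical copy written to every member

    def read(self, lba: int) -> bytes:
        for n, disk in enumerate(self.disks):
            if n not in self.failed:
                return disk[lba]  # any healthy member can serve the read
        raise IOError("all mirror members have failed")


mirror = Mirror()
mirror.write(0, b"critical data")
mirror.failed.add(0)    # simulate failure of the first drive
block = mirror.read(0)  # the surviving member still serves the data
```

Note that the write loop touches every member, which is why RAID 1 write performance is typically slower than a single drive, while reads can be balanced across healthy members.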

RAID 5: Striping with Parity

RAID 5 (Redundant Array of Inexpensive Disks level 5) is a type of data storage technology that uses striping with distributed parity.

In a RAID 5 configuration, data is striped across multiple disks, with parity information distributed across all the disks. This provides fault tolerance and redundancy, allowing data to be reconstructed in the event of a single drive failure.

RAID 5 offers good performance and fault tolerance for small to medium-sized businesses, but it has a higher overhead and is more complex than some other RAID configurations. Additionally, in the event of a second drive failure, data loss can occur. RAID 5 is often used in applications that require a balance between performance, fault tolerance, and cost.

RAID 5 requires a minimum of three disks, and the capacity of one disk is used for parity information.

For RAID 5, it is recommended to use disks of the same interface, speed, and capacity. If the disks in a RAID 5 array have different sizes, performance may be limited and the capacity of the array will be limited by the size of the smallest disk.
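The distributed parity that makes single-drive recovery possible is a byte-wise XOR across the data strips in each stripe: XORing the surviving strips with the parity rebuilds the lost strip. A brief Python sketch demonstrates the idea (illustrative only; `xor_parity` is a hypothetical helper name):

```python
def xor_parity(strips: list[bytes]) -> bytes:
    """Byte-wise XOR of equal-length strips; computes parity and recovery alike."""
    out = bytearray(len(strips[0]))
    for strip in strips:
        for i, b in enumerate(strip):
            out[i] ^= b
    return bytes(out)


data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]  # three data strips in one stripe
parity = xor_parity(data)                        # parity strip stored on a fourth disk

# If the middle strip is lost, XOR the survivors with the parity to rebuild it:
recovered = xor_parity([data[0], data[2], parity])
```

Because XOR is its own inverse, the same routine serves both parity generation and reconstruction; this also shows why a second concurrent failure is unrecoverable, as two unknowns cannot be solved from one parity equation.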

RAID 10: Mirrored Striped

RAID 10 (Redundant Array of Inexpensive Disks level 10), also known as RAID 1+0 or mirrored striped volumes, is a combination of RAID 1 and RAID 0. It provides both data redundancy and improved performance.

In a RAID 10 configuration, multiple pairs of disks are configured as RAID 1 arrays, where data is mirrored between each pair of disks for redundancy. The resulting RAID 1 arrays are then striped together in a RAID 0 array, where data is striped across all of the mirrored pairs for increased performance.

Data is striped across the mirrored pairs, so the capacity of the RAID 10 array is equal to half of the total capacity of the disks. For example, in a four-disk RAID 10 array with 1TB disks, the total capacity of the array would be 2TB.

RAID 10 provides both performance and redundancy benefits, as it offers the performance benefits of RAID 0 while also providing the redundancy of RAID 1. In the event of a single disk failure, the mirrored pair can continue to provide access to the data. However, if both disks in a mirrored pair fail, data may be lost.

RAID 10 requires a minimum of four disks, and must have an even number of disks.

For RAID 10, it is recommended to use disks of the same interface, speed, and capacity. If the disks in a RAID 10 array have different sizes, performance may be limited and the capacity of the array will be limited by the size of the smallest disk.
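The capacity rules from the sections above (the smallest disk bounds every member's contribution, RAID 5 gives up one disk's worth to parity, RAID 10 halves the total) can be summarized in a small Python helper. This is an illustrative sketch, not an OnLogic or Intel utility:

```python
def usable_capacity_tb(level: int, disk_sizes_tb: list[float]) -> float:
    """Usable array capacity; the smallest member bounds every disk's share."""
    n, smallest = len(disk_sizes_tb), min(disk_sizes_tb)
    if level == 0:
        return n * smallest        # striping: no redundancy overhead
    if level == 1:
        if n < 2:
            raise ValueError("RAID 1 needs at least two disks")
        return smallest            # mirroring: one copy's worth of space
    if level == 5:
        if n < 3:
            raise ValueError("RAID 5 needs at least three disks")
        return (n - 1) * smallest  # one disk's worth used for parity
    if level == 10:
        if n < 4 or n % 2:
            raise ValueError("RAID 10 needs an even number of disks (minimum four)")
        return (n // 2) * smallest # half the capacity is the mirror copy
    raise ValueError(f"unsupported RAID level: {level}")


# Four 1TB disks in RAID 10 yield 2.0 TB usable, matching the example above.
four_disk_raid10 = usable_capacity_tb(10, [1.0, 1.0, 1.0, 1.0])
```

The guard clauses mirror the minimum-disk requirements stated in the sections above, and using `min()` for every level reflects the repeated note that mixed-size disks are limited by the smallest member.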

RAID Configuration via BIOS

RAID volumes can be configured and created via the BIOS or from an operating system (OS).

If an operating system is to be installed on to a RAID volume, the processes outlined in this section must be followed in order to appropriately enable RAID and create the RAID volume where the OS will be deployed.

This section will outline the process for creating RAID volumes outside of the OS via the BIOS.

Enabling VMD Configuration

Prior to configuring or creating any RAID volumes using Intel Virtual RAID on CPU, Intel Volume Management Device (VMD) must be appropriately configured/enabled.

  1. From UEFI System Setup, navigate to Advanced → Intel VMD Technology → VMD Config for PCH ports → set to Enabled. New options will appear

  2. Next, configure the VMD Enabled devices to be enabled:

    1. Enable/Disable VMD → Enabled

    2. Go to PCH Root Port (OCU1/2/3) and enable the Intel VMD on the Specific Root Port as needed

    3. Enable Hot Pluggable if desired

  3. Go into Intel VMD for Volume Management Device on Socket 0/1

    1. Enable VMD on PCIE 0/1/2/3/4/5/6 and MCIO1/2 as needed

  4. Press F10 to Save and Exit. The system will then reboot.

Creating a RAID Volume in BIOS

After VMD has been enabled, a RAID volume can be created. The following procedure outlines the process to create a RAID volume using the RAID configuration utility:

  1. Press F2 or Del to enter UEFI System Setup and navigate to Advanced → Intel(R) VROC SATA Controller.

  2. Select "Create RAID Volume" to create a new RAID volume.

  3. Assign a Name, select the RAID Level you want to create (e.g. RAID 0, RAID 1, RAID 5, etc.), and specify the settings for the RAID volume (e.g. strip size, capacity, etc.).

  4. Choose the hard drives you want to include in the RAID array and add them to the volume by marking them with an X.

  5. Select Create Volume.

  6. Reboot the system and verify that the RAID array has been detected by the operating system or OS installation media.

Deleting a RAID Volume via RAID Option ROM

It's important to note that deleting a RAID volume will erase all data on the hard drives in the array, so be sure to back up any important data before proceeding. The specific steps to delete a RAID volume may vary depending on the RAID configuration utility used and the RAID level in use.

To delete a RAID volume, follow these steps:

  1. During the system boot-up process, press "Ctrl+I" to enter the RAID configuration utility.

  2. Select the RAID volume you want to delete and choose the "Delete RAID Volume" option.

  3. Confirm that you want to delete the RAID volume.

  4. Save the changes and exit the RAID configuration utility.

  5. Reboot the system and verify that the RAID volume has been deleted.

Windows RAID Setup

RAID volumes can be created, configured and managed from within Windows. This section will outline the requirements and processes for doing so.

Installing Windows on to a RAID volume (F6 install method)

Note: Enabling VMD Configuration and Creating a RAID Volume in BIOS steps are prerequisites for installing an OS to a RAID volume.

To install an OS on to a created RAID volume, perform the following steps to install the Intel Virtual RAID on CPU driver during operating system setup:

  1. Download the latest Intel Virtual RAID on CPU Driver package from the OnLogic Support Site or Intel and extract the contents to a USB drive.

  2. Connect the USB drive to the computer where you want to install Windows.

  3. Power off the system.

  4. Connect or remotely mount (via BMC) the Windows installation media and power on the system.

  5. When the system starts, press F11 to bring up the boot menu and select the option to boot from the Windows installation media.

  6. When the Windows Setup screen appears, press the "F6" key to install third-party RAID drivers or use the “Load Driver” option to load the F6 drivers.

  7. Windows Setup will prompt you to insert the driver disk for the RAID controller. Insert the USB drive containing the RAID driver package and click "OK".

  8. Windows Setup will scan the USB drive and display a list of compatible RAID drivers. Select the appropriate driver for the RAID controller (e.g. Intel RAID on Chip) and click "Next".

  9. Windows Setup should now detect the created RAID volume(s) and allow for installation of Windows onto them as if they were a singular physical disk.

  10. Continue with the Windows installation as usual.

These steps and more are provided by Intel here: Intel® Virtual RAID on CPU (VROC) for Windows*

Configuring RAID from within Windows

Installing Intel® Virtual RAID on CPU Software

Prior to configuring a RAID volume within the Windows OS environment, it is necessary to download the required drivers. The following procedure will outline the steps to ensure the proper drivers are downloaded and installed:

  1. Download the Intel Virtual RAID on CPU software from OnLogic or Intel’s website.

  2. Save the file to a known location on your computer's hard drive.

  3. Extract the files, locate SetupVROC.exe in the download, and double-click it.

  4. Click Continue (if needed) to launch the installation program.

  5. Click Next at the Welcome screen.

  6. After reading and reviewing the warnings, Click Next.

  7. Read the license agreement. To agree and proceed, click Yes to accept the terms and continue.

  8. From the Readme file information, Click Next. The application files will now be installed.

  9. When the appropriate installation files have been installed, you will be prompted to Click Next to continue.

  10. Click Yes to the restart option and then click Finish to restart the system.

  11. After restarting the system, the Intel Virtual RAID on CPU application will be installed and can be used to manage RAID volumes on the system.

  12. The user guide provided by Intel on the download site provides steps for setting up different RAID profiles and adjusting settings as needed.

Creating a RAID Volume via Intel Virtual RAID on CPU

The following document outlines the procedure for creating a new RAID volume within the Intel Virtual RAID on CPU application from the operating system: Intel® Virtual RAID on CPU (VROC) for Windows*

  1. Open the Intel Virtual RAID on CPU application.

  2. Click the “Create” icon to create a RAID array.

  3. In “Select Volume Type”, click the desired RAID configuration. Click “Next”.

  4. In “Configure Volume”, select the RAID disks then click “Next”.

  5. In “Configure Volume Name and Size” select the volume name, volume size and strip size for your configuration then click “Next”.

  6. In “Confirm Volume Creation”, you may review the selected configuration, then click “Create Volume”.

After creation of the volume, to make the RAID volume usable from within the OS, it will need to be initialized, partitioned, and formatted (similar to a standard physical disk). To do so, follow the procedure below:

  1. From the Windows Disk Management application, initialize the disk (the newly created RAID volume) such that Logical Disk Management can access it.

  2. Right-click on the Disk associated with the RAID Volume and select “New Simple Volume”

  3. Follow the instructions on the New Simple Volume Wizard.

After the volume wizard process is completed, the RAID volume should now be operational and the RAID volume will appear as if it were a single storage drive.

Deleting a RAID Volume via Intel Virtual RAID on CPU

The following process outlines the procedure for deleting a RAID volume within the Intel Virtual RAID application from the operating system.

  1. Open the Intel Virtual RAID on CPU application.

  2. Click the “Manage” icon.

  3. Select the RAID volume that is to be deleted.

  4. Select “Delete Volume”

Warning: Deleting a RAID volume will destroy all contents held within the RAID array.

Linux RAID Setup

Intel VROC for Linux is mostly delivered through the open-source operating system kernel and user-space tools, with no additional software download required for specific Linux* distribution releases. It is up to the specific operating system vendor to pull in Intel VROC features and patches. The distributions below have Intel VROC support, with newer releases being more complete.

Intel Virtual Raid on CPU (Intel VROC) in Linux Support Page

https://www.intel.com/content/www/us/en/support/articles/000094694/memory-and-storage/datacenter-storage-solutions.html

Additionally, as the configuration and implementation details for Intel VROC RAID in Linux may vary between distributions, please refer to the additional documentation below:

Red Hat Enterprise Linux 8 - Managing RAID

https://www.intel.com/content/www/us/en/support/articles/000096169/memory-and-storage/datacenter-storage-solutions.html

Red Hat Enterprise Linux 9 - Managing RAID

https://www.intel.com/content/www/us/en/support/articles/000096169/memory-and-storage/datacenter-storage-solutions.html

OS Support List:

https://www.intel.com/content/www/us/en/support/articles/000099710/memory-and-storage/datacenter-storage-solutions.html

5- Support & Compliance

5.1- Troubleshooting & FAQ

Frequently Asked Questions

What is BMC, and what is it for?

General information about the BMC, or Baseboard Management Controller, is discussed on our blog post here.

Where are the storage drives shown in the BIOS?

Storage drives are shown in a few different places in the BIOS depending on the drive type (SATA vs. NVMe) and where it is connected (Oculink vs. M.2 PCIe):

  SATA: Advanced -> Storage Configuration -> SATA_4 – SATA_7 visible

  Oculink: Advanced -> Storage Configuration -> Oculink1_SATA_0 – Oculink1_SATA_3

  NVMe: Advanced -> NVME Configuration -> shows a list of available drives. Select a specific drive to view additional information about it.

  RAID: Advanced -> Intel® Rapid Storage Technology -> shows any configured RAID arrays; selecting one will display the Selected Disks in that RAID volume.

Clear CMOS

If the system fails to power on or is unresponsive, clearing the CMOS may help. It will also restore the BIOS to factory defaults.

  1. Disconnect the system from all cables/connections (i.e. power, video, etc.).

  2. Follow the Opening the System instructions above to gain access to the motherboard.

  3. If a PCIe card is installed, you may need to remove it. Follow the Adding/Removing PCIe Card instructions above, if needed.

  4. Locate the CLRCMOS1 pads, indicated by the orange circle.

  5. Use a screwdriver or other conductive tool to short the pads together for at least 30 seconds.

After at least 30 seconds, the CMOS has been cleared. Reassemble the system and power it back up. The unit may restart several times while the motherboard reinitializes.

5.2- Security

Cyber Security Advisories

For the latest security advisories concerning OnLogic products, including vulnerability disclosures and necessary updates, please refer to our official Security Advisories page. It is recommended to regularly check this resource for critical security information. Access Security Advisories

Physical Security Features

Front Security Bezel with Intrusion Detection

The Axial AX300 Series Edge Server supports an optional front security bezel. The security bezel helps prevent unauthorized access and tampering with the front ports and buttons of the system.

A barrel lock is used to secure the security bezel in place. A key for the barrel lock is included in the accessory package. The key is shared with the top lid lock and the rear cable bezel lock.

If the bezel is removed while power is present to the system, the bezel intrusion switch will detect the event and the Front Bezel Intrusion sensor will be asserted. The event will also be logged in the Baseboard Management Controller event log.

Two Point Locking Lid with Intrusion Detection

The Axial AX300 Series Edge Server chassis lid has a two point locking mechanism with intrusion detection built into the system chassis.

For the two point locking mechanisms, the first locking point is the top barrel lock. A key for the top barrel lock is included in the accessory box, and is shared with the front security bezel and the cable bezel.

The second lid locking point is a thumb screw located in the rear of the system.

If the system lid is removed while power is present to the system, the intrusion switch will detect the event and the Chassis Intrusion sensor will be asserted. The event will also be logged in the Baseboard Management Controller event log.
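Intrusion assertions can be reviewed remotely by reading the BMC's System Event Log, for example with `ipmitool sel list`. The sketch below filters SEL lines for intrusion events; the sample log lines are illustrative only, not actual AX300 log text, and the field layout assumes the pipe-separated format `ipmitool` produces:

```python
def intrusion_events(sel_lines):
    """Filter BMC System Event Log lines for intrusion assertions.

    Expects pipe-separated records of the form produced by `ipmitool sel list`:
    id | date | time | sensor | event description | direction
    """
    events = []
    for line in sel_lines:
        fields = [f.strip() for f in line.split("|")]
        if len(fields) >= 5 and "intrusion" in fields[4].lower():
            events.append(fields)
    return events

# Illustrative SEL output (values are examples, not actual AX300 log entries):
sample = [
    "1 | 12/01/2024 | 09:14:02 | Physical Security #0x73 | General Chassis intrusion | Asserted",
    "2 | 12/01/2024 | 09:20:45 | Temperature #0x30 | Upper Non-critical going high | Asserted",
]
print(len(intrusion_events(sample)))  # → 1
```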

Optional Rear Cable Security Bezel

The Axial AX300 Series Edge Server supports an optional rear cable bezel (Part Number: B2-AX301). The cable bezel helps prevent unauthorized access and tampering with the rear ports, power supplies, buttons and cables of the system. The lid has brush pass-throughs on either side to allow routing of connected cables.

The rear cable bezel is composed of two pieces: a base that is screwed to the main chassis and a lid that is secured with a 3 point latch. A key for the latch is included in the accessory package. The key is shared with the top lid lock and the front security bezel lock.

5.3- Regulatory

Compliance Information

Do not open or modify the device. The device uses components that comply with FCC and CE regulations. Modification of the device may void these certifications.

The use of shielded cables for connection of a monitor to the GPU is required to assure compliance with FCC and CE regulations.

CE

The computer system was evaluated for IT equipment EMC standards as a class A device. The computer complies with the relevant IT equipment directives for the CE mark. Modification of the system may void the certifications. Testing includes: EN 55032, EN 55035, EN 60601-1, EN 62368-1, EN 60950-1.

FCC Statement

This device complies with part 15 of the FCC rules as a Class A device. Operation is subject to the following two conditions: (1) this device may not cause harmful interference and (2) this device must accept any interference received, including interference that may cause undesired operation.

ISED

This device complies with Industry Canada license-exempt RSS standard(s). Operation is subject to the following two conditions: (1) this device may not cause interference, and (2) this device must accept any interference, including interference that may cause undesired operation of the device.

Le présent appareil est conforme aux CNR d'Industrie Canada applicables aux appareils radio exempts de licence. L'exploitation est autorisée aux deux conditions suivantes: (1) l'appareil ne doit pas produire de brouillage, et (2) l'utilisateur de l'appareil doit accepter tout brouillage radioélectrique subi, même si le brouillage est susceptible d'en compromettre le fonctionnement.

CAN ICES-003(A) / NMB-003(A)

UKCA

The computer system was evaluated for medical, IT equipment, automotive, maritime and railway EMC standards as a class A device. The computer complies with the relevant IT equipment directives for the UKCA mark.

VCCI

This is a Class A product based on the standard of the Voluntary Control Council for Interference (VCCI). If this equipment is used in a domestic environment, radio interference may occur, in which case the user may be required to take corrective actions.

5.4- Appendices

Revision History

Date          Revision History

01 Dec 2024   First release of Axial AX300 Series Server manual

09 June 2025  Added acoustic & earthquake testing results. Added 28” slide rail.
