# AC101

## <mark style="color:blue;">1- Product Overview</mark>

### <mark style="color:blue;">1.1- Introduction</mark>

The Axial AC101 is a high-performance 1U edge server designed for robust and demanding applications. It features Intel 13th Gen (Raptor Lake-S) Core processors, supports up to 128GB of DDR5 memory, and offers extensive connectivity options, including 1GbE and 10GbE networking ports. The system supports one full-height, full-length PCIe Gen 4.0 x16 expansion card up to 150W, making it suitable for GPU-intensive workloads. With integrated remote management via a dedicated BMC/IPMI port and a durable chassis, the AC101 is engineered for reliability and performance at the edge.

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FzOt7JSMntCKYZKYLeqMP%2Fimage.png?alt=media&#x26;token=27d79796-f109-4bdf-af0d-b8089cb9e9d6" alt=""><figcaption><p><em>Axial AC101 without Security Bezel</em></p></figcaption></figure>

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2F6Xxvy6aL8kJLGrNp8fu8%2Fimage.png?alt=media&#x26;token=a4bc563c-960e-4f22-839b-d2876cac8bfd" alt=""><figcaption><p><em>Axial AC101 with Security Bezel</em></p></figcaption></figure>

### <mark style="color:blue;">1.2- Safety</mark>

<details>

<summary><mark style="color:blue;">Safe use and installation instructions</mark></summary>

1. Install the device securely. Handle the device carefully to prevent injury, and do not drop it.
2. Equipment is intended for installation in a Restricted Access Area.
3. Elevated Operating Ambient - If installed in a closed or multi-unit rack assembly, the operating ambient temperature of the rack environment may be greater than room ambient. Therefore, consideration should be given to installing the equipment in an environment compatible with the maximum ambient temperature (Tma) specified by the manufacturer.
4. Reduced Air Flow - Installation of the equipment in a rack should be such that the amount of air flow required for safe operation of the equipment is not compromised.
5. Mechanical Loading - Mounting of the equipment in the rack should be such that a hazardous condition is not achieved due to uneven mechanical loading.

6. Circuit Overloading - Consideration should be given to the connection of the equipment to the supply circuit and the effect that overloading of the circuits might have on overcurrent protection and supply wiring. Appropriate consideration of equipment nameplate ratings should be used when addressing this concern.
7. Reliable Earthing - Reliable earthing of rack-mounted equipment should be maintained. Particular attention should be given to supply connections other than direct connections to the branch circuit (e.g. use of power strips).
8. Ambient operating temperature must be between 5 °C and 40 °C with a non-condensing relative humidity of 8-85%.
9. The device can be stored at temperatures between -40 °C and 70 °C.
10. Keep the device away from liquids and flammable materials.
11. Do not clean the device with liquids. The chassis can be cleaned with a cloth.
12. Allow at least 2 inches of space around all sides of the device for proper cooling. If the device is mounted on a vertical surface, orient it so that the heatsink fins allow air to rise unobstructed. Alternative orientations may reduce the operational temperature range.
13. This device is intended for indoor operation only.
14. Install the device only with shielded network cables.
15. Service and repair of the device must be done by qualified service personnel. This includes, but is not limited to, replacement of the CMOS battery. Replacement CMOS battery must be of the same type as the original.
16. Proper disposal of CMOS battery must comply with local governance.
17. Product must only be connected to a certified router, switch or similar network equipment.
18. Product is intended for indoor use only.
19. Product cannot be connected to the public network.

WARNING: There is danger of explosion if the CMOS battery is replaced incorrectly. Disposal of battery into fire or a hot oven, or mechanically crushing or cutting of a battery can result in an explosion.

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fgit-blob-d82f8442280ad089f9a7319e888d216a12e84cad%2F1e9ea7b05256e95c5e9a9041ce88c657aa766629cc55ed4c8fcb339009544b56.png?alt=media" alt=""><figcaption></figcaption></figure>

</details>

<details>

<summary><mark style="color:blue;">Précautions et guide d’installation</mark></summary>

Ne pas ouvrir ou modifier l'appareil. L'appareil utilise des composants conformes aux réglementations FCC et EC. La modification de l'appareil peut annuler ces certifications.

1. Installez l'appareil en toute sécurité. Manipulez l'appareil avec précaution pour éviter de vous blesser et ne le laissez pas tomber.
2. L'équipement est destiné à être installé dans une zone à accès restreint.
3. Température ambiante de fonctionnement élevée - En cas d'installation dans un rack fermé ou à plusieurs unités, la température ambiante de fonctionnement de l'environnement du rack peut être supérieure à la température ambiante de la pièce. Par conséquent, il convient de veiller à installer l'équipement dans un environnement compatible avec la température ambiante maximale (Tma) spécifiée par le fabricant.
4. Débit d'air réduit - L'installation de l'équipement dans un rack doit être telle que la quantité de débit d'air requise pour un fonctionnement sûr de l'équipement ne soit pas compromise.
5. Chargement mécanique - Le montage de l'équipement dans le rack doit être tel qu'une condition dangereuse ne soit pas atteinte en raison d'une charge mécanique inégale.
6. Surcharge de circuit - Il convient de tenir compte de la connexion de l'équipement au circuit d'alimentation et de l'effet que la surcharge des circuits pourrait avoir sur la protection contre les surintensités et le câblage d'alimentation. Une prise en compte appropriée des valeurs nominales de la plaque signalétique de l'équipement doit être utilisée pour répondre à cette préoccupation.
7. Mise à la terre fiable - Une mise à la terre fiable de l'équipement monté en rack doit être maintenue. Une attention particulière doit être accordée aux raccordements d'alimentation autres que les raccordements directs au circuit de dérivation (par exemple, utilisation de multiprises).
8. La température ambiante de fonctionnement doit être comprise entre 5 °C et 40 °C avec une humidité relative sans condensation de 8 à 85 %.
9. L'appareil peut être stocké à des températures comprises entre -40 °C et 70 °C.
10. Gardez l'appareil à l'écart des liquides et des matériaux inflammables.
11. Ne nettoyez pas l'appareil avec des liquides. Le châssis peut être nettoyé avec un chiffon.
12. Laissez au moins 2 pouces d'espace autour de tous les côtés de l'appareil pour un refroidissement correct. Si l'appareil est monté sur une surface verticale, l'orientation recommandée de l'appareil est de sorte que les ailettes du dissipateur thermique permettent à l'air de monter sans obstruction. Des orientations alternatives peuvent entraîner une plage de températures de fonctionnement réduite.
13. Cet appareil est destiné à une utilisation en intérieur uniquement.
14. Installez l'appareil uniquement avec des câbles réseau blindés.
15. L'entretien et la réparation de l'appareil doivent être effectués par un personnel qualifié. Cela inclut, mais sans s'y limiter, le remplacement de la batterie CMOS. La batterie CMOS de remplacement doit être du même type que celle d'origine.
16. L'élimination appropriée de la batterie CMOS doit être conforme à la gouvernance locale.
17. Le produit doit uniquement être connecté à un routeur, un commutateur ou un équipement réseau similaire certifié.
18. Le produit est destiné à une utilisation en intérieur uniquement.
19. Utilisez uniquement des connecteurs répertoriés UL pour la connexion aux panneaux de fusibles automobiles.
20. Le produit ne peut pas être connecté au réseau public.

ATTENTION: Il existe un risque d'explosion si la pile CMOS n'est pas remplacée correctement. L'élimination de la batterie dans le feu ou dans un four chaud, ou l'écrasement ou le découpage mécanique d'une batterie peut entraîner une explosion.

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fgit-blob-d82f8442280ad089f9a7319e888d216a12e84cad%2F1e9ea7b05256e95c5e9a9041ce88c657aa766629cc55ed4c8fcb339009544b56.png?alt=media" alt=""><figcaption></figcaption></figure>

</details>

### <mark style="color:blue;">1.3- Box Contents & Accessories</mark>

The following accessories are included with every system:

* PSU filler (2RALXX5862A1)
* Nvidia 9.5” GPU mounting kit w/ screws (2RALXX5861A1)
* SSD cable brackets (2RALXX5859A1)
* Spare motherboard standoffs (2RALXX282300)
* Spare PCIe riser screws (2RALXX585800)
* Cable management ties
* Security bezel key

If additional items were purchased, such as rail mounting kits/brackets, they will be boxed separately.

### <mark style="color:blue;">1.4- Product Specifications</mark>

<table><thead><tr><th width="214.7999267578125">Feature</th><th>Details</th></tr></thead><tbody><tr><td>Variants</td><td>AC101 - High-Performance 1U with 150W PCIe 4.0 x16 Expansion</td></tr><tr><td>Processor</td><td><p>Intel 13th Gen Raptor Lake-S (LGA1700)<br>Core i3, i5, i7 &#x26; i9 up to 24-core 32-thread<br>i3-13100E or TE, i5-13500E or TE, i7-13700E or TE, i9-13900E or TE</p><p>125W PL2 (Power Limit 2)</p></td></tr><tr><td>Memory</td><td>Supports up to 4x DDR5-4800 UDIMMs (non-ECC or ECC)<br>Up to 128GB total memory<br>Maximum operational speed: 4400 MT/s</td></tr><tr><td>Chipset</td><td>Intel W680</td></tr><tr><td>Integrated Graphics</td><td>Intel UHD Graphics 730 (i3) or 770 (i5, i7, i9)</td></tr><tr><td>Front I/O</td><td>2x USB 3.2 Gen 1 Type-A<br>1x Power Button / LED (White)<br>1x ID button / LED (Blue)</td></tr><tr><td>Rear I/O</td><td>1x 1GbE Dedicated Management (BMC/IPMI)<br>2x 1GbE LAN Intel i210<br>2x 10GbE LAN Intel X710<br>2x USB 3.2 Gen 1 Type-A<br>1x DisplayPort<br>1x HDMI<br>1x VGA<br>1x DB9 (COM)<br>1x ID button / LED (Blue)</td></tr><tr><td>Expansion &#x26; Storage</td><td>1x M.2 2280/2260/2242/2230 M-key (PCIe Gen 3 x4)<br>1x PCIe Gen 4 x16 Full Height, Full Length slot (up to 150W)<br>Up to 4x 2.5” Drives (NVMe or SATA)</td></tr><tr><td>Special Features</td><td>ASPEED AST2600: Full Web UI, iKVM, vMedia support<br>1/10 Network Controller Sideband Interface (NC-SI)<br>Optional TPM 2.0 module (Infineon SLB9670) or Intel PTT (Native)<br>Chassis Intrusion Detection<br>Security Bezel<br>Secure Boot</td></tr><tr><td>Operating Systems</td><td>Microsoft Windows 10 IoT Enterprise 2021 LTSC (Value/High End) 64-bit<br>Microsoft Windows 11 Professional 64-bit<br>Red Hat Enterprise Linux 8.8 - 8.x<br>Red Hat Enterprise Linux 9.2 - 9.x<br>Ubuntu Desktop 22.04 Intel IoT for 13th Gen Intel Core processors<br>Ubuntu Server 22.04 Intel IoT for 13th Gen Intel Core processors</td></tr><tr><td>LAN Controllers</td><td>2x Intel i210 Controllers (2x 1GbE ports)<br>1x Intel X710 Controller (2x 10GbE ports)</td></tr><tr><td>Power Supplies</td><td>Up to 2 PSUs with PMBUS monitoring, 100~240 VAC, 5A, 50-60Hz input<br>450W Gold<br>750W Platinum</td></tr><tr><td>Dimensions (WxHxD)</td><td>430 x 43.5 x 515mm (16.9 x 1.7 x 19.7”) without Security Bezel<br>483 x 43.5 x 534mm (19.0 x 1.7 x 21.0”) with Security Bezel</td></tr><tr><td>Weight</td><td>System Maximum: 10.02 kg (22.1 lbs)<br>Shipping Maximum: 12.88 kg (28.4 lbs)</td></tr><tr><td>Operating Temp.</td><td>5°C ~ 40°C (ASHRAE A3 Operating Temperature)<br>Maximum ambient temperature decreases by 1°C for every 175m (574 ft) increase in altitude above 900m (2,953 ft)</td></tr><tr><td>Storage Temp.</td><td>-40°C ~ 70°C</td></tr><tr><td>Operating Humidity</td><td>8~85% Relative, non-condensing<br>Maximum dew point 24°C</td></tr><tr><td>Storage Humidity</td><td>0~95% Relative, non-condensing<br>Maximum dew point 24°C</td></tr><tr><td>Shock &#x26; Vibration</td><td>ISTA 6-FEDEX-A</td></tr></tbody></table>
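For example, the altitude derating rule in the table above can be expressed as a small helper (a hypothetical function for illustration, not vendor tooling):

```python
def max_ambient_c(altitude_m: float) -> float:
    """Maximum ambient operating temperature per the derating rule:
    40 °C baseline, minus 1 °C per 175 m of altitude above 900 m."""
    derate = max(0.0, altitude_m - 900.0) / 175.0
    return 40.0 - derate

# At or below 900 m the full 40 °C limit applies;
# at 1,600 m the limit drops by 4 °C, to 36 °C.
```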

| <img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2F5ApMNKELH2vTOs5dYXu3%2Fimage.png?alt=media&#x26;token=e468cd5d-c726-4da2-8c14-da5dba96c23b" alt="" data-size="original"> | <p>FCC 47 CFR Part 15 Subpart B (Class A)<br>CAN ICES-003(A) / NMB-003(A) (Class A) (Class B upon request)</p>                        |
| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- |
| <img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FTQtGAXDOFuW1vn8mFyGN%2Fimage.png?alt=media&#x26;token=4d862cc2-c24b-4c20-bf19-e06e9533db00" alt="" data-size="original"> | <p>EN 62368-1<br>CISPR 32/EN 55032 (Class A; Class B upon request)<br>CISPR 35/EN 55035<br>Radio Equipment Directive (2014/53/EU)</p> |
| <img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FiNXomrJMQd6Gw80qT9yx%2Fimage.png?alt=media&#x26;token=2a3dde1f-7294-42f3-9822-3fb75d216467" alt="" data-size="original"> | RoHS 3 (2015/863/EU)                                                                                                                  |
| <img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FYU1TCl7Tp0V4nFbwrsEs%2Fimage.png?alt=media&#x26;token=d8d4cbac-738b-4f24-9e9b-116f5ce87427" alt="" data-size="original"> | WEEE Directive (2012/19/EU)                                                                                                           |
| <img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FSXHFVbBnBld527AnNVdX%2Fimage.png?alt=media&#x26;token=8ffd5c3b-069a-42cc-be80-73b7cc5a9837" alt="" data-size="original"> | IEC/EN/UL 62368-1 (UL File No. E490677)                                                                                               |

| Region   | Available Countries                                                                                                                                                                                                                                                                                           |
| -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Americas | Canada, United States                                                                                                                                                                                                                                                                                         |
| Europe   | Austria, Belgium, Bulgaria, Croatia, Czech Republic, Cyprus, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy, Latvia, Lithuania, Liechtenstein, Luxembourg, Malta, Norway, The Netherlands, United Kingdom, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Sweden |
| Asia     | Available countries upon request                                                                                                                                                                                                                                                                              |

Other countries may be available; contact us to learn more.

### <mark style="color:blue;">1.5- System Identification & Labels</mark>

#### <mark style="color:blue;">System Label</mark>

The system label is located on the bottom of the chassis. It contains the following information:

* System Model
* OnLogic Serial Number
* Regulatory & Compliance Certification Logos

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FIo6o8eeWcS9m18qBc2Cr%2Fimage.png?alt=media&#x26;token=5e70be3b-74d1-4e39-b213-f5a431da1eb5" alt=""><figcaption></figcaption></figure>

#### <mark style="color:blue;">Front Service Label</mark>

On the front of the chassis, there is a retractable product information label containing pertinent product information such as:

* System Model
* OnLogic Serial Number
* BMC MAC addresses

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FIMlszg79UnNK8TDbwMYB%2Fimage.png?alt=media&#x26;token=9e5902fb-53c7-437a-b9f1-bd48a4d9e067" alt=""><figcaption></figcaption></figure>

## <mark style="color:blue;">2- Technical Specifications</mark>

### <mark style="color:blue;">2.1- External Features</mark>

#### <mark style="color:blue;">Front I/O</mark>

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fhc3m9FEX5TjdKoQLoUE0%2Fimage.png?alt=media&#x26;token=7e472d9c-418f-4d48-ada5-a11f5d11f8e3" alt=""><figcaption></figcaption></figure>

#### <mark style="color:blue;">Front LEDs & Buttons</mark>

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FZt69kiexZO6pbsL3fEAI%2Fimage.png?alt=media&#x26;token=05f490d8-3a32-40eb-99e0-ef85da0fab8f" alt=""><figcaption></figcaption></figure>

| LED / Button            | Color | On                    | Off                        | Blink                    |
| ----------------------- | ----- | --------------------- | -------------------------- | ------------------------ |
| **Power**               | White | Device is on          | Device is off              | -                        |
| **ID** (Identification) | Blue  | ID indicator asserted | ID indicator is deasserted | ID indicator is blinking |
| **RST** (Reset)         | -     | -                     | -                          | -                        |

The ID LED/button assists with locating the system. The ID indicator can be turned on or off by pressing the ID button, or turned on, off, or set to blink from the Baseboard Management Controller (BMC) Web UI.

The RST button resets the system.
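Because the BMC implements standard IPMI, the ID indicator can typically also be toggled remotely with `ipmitool chassis identify`. A hedged sketch follows; the host and credentials are placeholders, and the command should be verified against your BMC configuration:

```python
import subprocess

def identify_cmd(host: str, user: str, password: str, state: str) -> list[str]:
    """Build an ipmitool command controlling the chassis ID indicator.
    state: 'force' (on until cleared), '0' (off), or a duration in seconds."""
    return ["ipmitool", "-I", "lanplus", "-H", host,
            "-U", user, "-P", password, "chassis", "identify", state]

# Example (requires network access to the BMC; placeholders shown):
# subprocess.run(identify_cmd("10.0.0.42", "admin", "secret", "force"), check=True)
```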

#### <mark style="color:blue;">Rear I/O</mark>

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fv5JASKIzoi3PrfxWoRxw%2Fimage.png?alt=media&#x26;token=919295e1-ecac-4739-9bc7-86b3466fb75b" alt=""><figcaption></figcaption></figure>

### <mark style="color:blue;">2.2- I/O Definitions</mark>

#### <mark style="color:blue;">Network Ports</mark>

The Axial AC101 features the following onboard Ethernet ports:

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fm5dVa3cFUWPjUkM9kmep%2Fimage.png?alt=media&#x26;token=a99ad56c-042d-40cf-b630-c5a237ca5149" alt=""><figcaption></figcaption></figure>

#### <mark style="color:blue;">1GbE Dedicated BMC Port LEDs</mark>

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fe4oGh5dTS4mcvcdzTXNt%2Fimage.png?alt=media&#x26;token=21d609b6-9d24-40e3-ae77-d21e8703b349" alt=""><figcaption></figcaption></figure>

#### <mark style="color:blue;">1GbE Networking Port LEDs</mark>

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2F2oR4uoqNI10Q5yECcRvn%2Fimage.png?alt=media&#x26;token=41523fd4-dfcc-41b6-ac7d-d4b4ad0240bf" alt=""><figcaption></figcaption></figure>

#### <mark style="color:blue;">10GbE Networking Port LEDs</mark>

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2F0GcmEb10UnEi4esWVDQe%2Fimage.png?alt=media&#x26;token=5ee9189f-da66-44db-8679-45f584c25bb1" alt=""><figcaption></figcaption></figure>

#### <mark style="color:blue;">USB Ports</mark>

There are 4 USB 3.2 Gen 1 Type A ports on the Axial AC101 Edge Server.

* Two ports are on the front of the system.
* Two ports are on the rear of the system.

All USB ports are backward compatible with USB 2.0 devices.

#### <mark style="color:blue;">DisplayPort Video</mark>

There is one full-size DisplayPort (1.4a) located on the back of the Axial AC101 Edge Server.

#### <mark style="color:blue;">HDMI Video</mark>

There is one full-size HDMI (2.0b) port located on the back of the system.

#### <mark style="color:blue;">VGA Video</mark>

There is one VGA port located on the back of the system. The HDMI, DisplayPort, VGA, COM, and USB ports are intended for setup use only.

### <mark style="color:blue;">2.3- Internal Connectivity</mark>

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FB2lWnJhhtOuefzfi8PlB%2Fimage.png?alt=media&#x26;token=98ab8cb7-5e8a-4038-96f4-b17a73126aa0" alt=""><figcaption></figcaption></figure>

{% hint style="info" %}
Note: SATA Ports are labeled in accordance with how they are enumerated in BIOS. See **SATA Headers** section for additional detail.
{% endhint %}

#### <mark style="color:blue;">M.2 2280/2260/2242/2230 M-key</mark>

This expansion slot is capable of supporting PCIe Gen 3 x4 and is routed directly to the W680 PCH. This slot is designed to support NVMe storage drives.

#### <mark style="color:blue;">TPM Header</mark>

The Axial AC101 supports an optional discrete TPM 2.0 module.

#### <mark style="color:blue;">Drive Headers, Labeling, and Recommended Population</mark>

#### **SATA Headers**

There are four SATA data headers on the motherboard. The data ports support SATA III 6Gbps storage devices.

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FAD0Jr942lhfFCcnUqn7W%2FScreenshot%202025-06-26%20160902.png?alt=media&#x26;token=babf1b97-3565-4eb3-a3e2-ba8dfff75c8d" alt=""><figcaption><p>SATA connector labeling</p></figcaption></figure>

In BIOS, the SATA ports are enumerated starting with SATA\_4 (e.g. sSATA0 = SATA\_4, sSATA1 = SATA\_5, sSATA2 = SATA\_6, sSATA3 = SATA\_7).

When in an operating system, drive enumeration will start with the lowest connected SATA port number.

{% hint style="info" %}
**Note:** sSATA (or SSATA) stands for secondary Serial Advanced Technology Attachment and refers to the drive's connectivity path to the system chipset.
{% endhint %}
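The label mapping above is a fixed offset and can be captured in a small lookup (illustrative only; the dictionary name is hypothetical):

```python
# Mapping of BIOS sSATA names to motherboard silkscreen labels,
# per the enumeration described above (sSATA0 = SATA_4, ..., sSATA3 = SATA_7).
BIOS_TO_SILKSCREEN = {f"sSATA{i}": f"SATA_{i + 4}" for i in range(4)}
```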

**OCuLink Headers**

There are four OCuLink headers on the motherboard that support PCIe 4.0 x4 connections to enable NVMe drives.

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FWpWotvNQP81VNPTbtHbR%2FScreenshot%202025-06-26%20161142.png?alt=media&#x26;token=06511a1b-1088-42dc-959e-e7a854f787ec" alt=""><figcaption><p>OCuLink connector labeling</p></figcaption></figure>

When in an operating system, based on the PCIe topology, drive enumeration will be inverted from the OCuLink silkscreen labeling as per the following table:

| OCuLink Header | Drive Enumeration within Operating System |
| -------------- | ----------------------------------------- |
| OCU4           | Drive 0                                   |
| OCU3           | Drive 1                                   |
| OCU2           | Drive 2                                   |
| OCU1           | Drive 3                                   |
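The inversion in the table above can be computed directly (the helper name is hypothetical, for illustration only):

```python
def oculink_drive_index(header: str) -> int:
    """Return the OS drive index for an OCuLink header label (OCU1..OCU4).
    Enumeration is inverted relative to the silkscreen, per the table above."""
    n = int(header.removeprefix("OCU"))
    if not 1 <= n <= 4:
        raise ValueError(f"unknown header: {header}")
    return 4 - n

# oculink_drive_index("OCU4") -> 0; oculink_drive_index("OCU1") -> 3
```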

**SSD Physical Location**

The SSD Drive Bays for this system are labeled as follows:

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fp457eq1QwILmkfF1llYY%2FScreenshot%202025-06-26%20161242.png?alt=media&#x26;token=d629b79e-0094-4eec-864a-da1110648099" alt=""><figcaption></figcaption></figure>

**Drive Population**

The following drive population recommendations are provided to ensure consistency of connectivity, operation, and OS drive enumeration aligned to physical drive bay locations.

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FO0lvFtvznPj9pFsKK9IK%2Fimage.png?alt=media&#x26;token=21657b76-80c6-40c3-8e3a-a505dfead06f" alt=""><figcaption></figcaption></figure>

#### PCIe Gen 4.0 x16 Slot

The Axial AC101 features one PCIe Gen 4.0 x16 connector accessible via a right-angle riser card. The slot delivers up to 75W through the card edge; adapters up to 150W are supported using the optional PCIe 6-pin/8-pin auxiliary power header.

#### DDR5 UDIMM Slots

The system provides four DDR5 UDIMM slots and supports memory speeds up to 4400 MT/s, depending on DIMM population:

* 4400MT/s @ 2DPC-1DIMM
* 4000MT/s @ 2DPC-2DIMM 1R
* 3600MT/s @ 2DPC-2DIMM 2R

The system will support both ECC and non-ECC memory with all supported CPU options.
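The population-dependent speeds above can be expressed as a small helper (an illustrative sketch, not vendor tooling; 2DPC means two DIMM slots per channel):

```python
def ddr5_speed_mts(dimms_per_channel: int, ranks_per_dimm: int) -> int:
    """Expected memory speed (MT/s) for this board's 2DPC topology,
    using the values listed above."""
    if dimms_per_channel == 1:
        return 4400                       # 2DPC-1DIMM
    if dimms_per_channel == 2:
        return 4000 if ranks_per_dimm == 1 else 3600   # 2DPC-2DIMM 1R / 2R
    raise ValueError("the board supports at most 2 DIMMs per channel")
```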

#### **Supported Memory Modes**

The Integrated Memory Controller (IMC) supports single-channel and dual-channel modes, depending on DIMM population.

* **Single-Channel Mode:** Used when DIMMs are installed in either Channel A or Channel B, but not both.
* **Dual-Channel Mode – Intel® Flex Memory Technology Mode:** In this mode, memory is divided into a symmetric and asymmetric zone. As per Intel documentation:
  * “The symmetric zone starts at the lowest address in each channel and is contiguous until the asymmetric zone begins or until the top address of the channel with the smaller capacity is reached. In this mode, the system runs with one zone of dual-channel mode and one zone of single-channel mode, simultaneously, across the whole memory array.”

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fevs8vK3ZkRmB5bieOlDq%2Fimage.png?alt=media&#x26;token=1e0189d1-ce0a-40dc-8d1e-581ac7d79c32" alt=""><figcaption><p>Source: <a href="https://edc.intel.com/content/www/us/en/design/ipla/software-development-platforms/client/platforms/alder-lake-desktop/12th-generation-intel-core-processors-datasheet-volume-1-of-2/system-memory-controller-organization-mode-ddr4-5-only/">https://edc.intel.com/content/www/us/en/design/ipla/software-development-platforms/client/platforms/alder-lake-desktop/12th-generation-intel-core-processors-datasheet-volume-1-of-2/system-memory-controller-organization-mode-ddr4-5-only/</a></p></figcaption></figure>

* **Dual-Channel Symmetric Mode (Interleaved Mode):** Dual-Channel Symmetric mode is fully interleaved and provides the maximum performance.\
  \
  The Axial AC101 will default to Dual-Channel Symmetric mode when both Channel A and Channel B DIMM connectors are populated in any order, with the total amount of memory in each channel being the same.\
  \
  When both channels are populated with the same memory capacity and the boundary between the dual channel zone and the single channel zone is the top of memory, IMC operates completely in Dual-Channel Symmetric mode.
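Under Flex Memory, the zone sizes follow directly from the channel capacities: the symmetric (dual-channel) zone spans twice the smaller channel, and the remainder runs single-channel. A brief sketch of that arithmetic (function name is hypothetical):

```python
def flex_memory_zones(channel_a_gb: int, channel_b_gb: int) -> tuple[int, int]:
    """Return (dual_channel_gb, single_channel_gb) under Intel Flex Memory.
    The symmetric zone covers 2x the smaller channel; the rest is single-channel."""
    sym = 2 * min(channel_a_gb, channel_b_gb)
    asym = abs(channel_a_gb - channel_b_gb)
    return sym, asym

# Equal channels (e.g. 32 GB + 32 GB) run fully dual-channel: (64, 0).
# 32 GB + 16 GB -> (32, 16): 32 GB interleaved, 16 GB single-channel.
```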

**DIMM Population Requirements**

1. Only DDR5 DIMMs may be installed.
2. Memory frequency will not exceed that of the lowest frequency DIMM installed.
3. Dual Channel Memory Mode is only supported with 2 or 4 DIMMs installed (split equally between channels as indicated in the DIMM Population table).

The following population order is recommended to maximize performance:

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2F1rzLMehdIECYWnl9BvYx%2Fimage.png?alt=media&#x26;token=6d3fb744-6612-45fc-b4a3-b0ad1aaf173f" alt=""><figcaption></figcaption></figure>

### <mark style="color:blue;">2.4- Motherboard</mark>

#### <mark style="color:blue;">Layout & Component Overview</mark>

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FrH8BvOnSefX0g0lxKMkV%2Fimage.png?alt=media&#x26;token=60fdb5ca-428e-40ec-8af3-198d5d94cdca" alt=""><figcaption></figcaption></figure>

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Ftz2UJKhXG6YHH5UJY7ag%2Fimage.png?alt=media&#x26;token=417dbabf-8cf7-4c8d-884e-06fa7b31f545" alt=""><figcaption></figcaption></figure>

### <mark style="color:blue;">2.5- Power Management</mark>

#### <mark style="color:blue;">Supported Power Supplies</mark>

The system supports two redundant power supplies, which may either be 450W or 750W. These power supplies are hot-swappable, meaning they can be replaced while the system is running without interrupting its operation.

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2F6YXNgSSo5ET26ztTF0kK%2Fimage.png?alt=media&#x26;token=2b666403-1553-4561-9638-60abefd7d863" alt=""><figcaption></figcaption></figure>

Both power supplies must be of the same wattage; mixing power supplies of different wattages is not supported. Verify that both units match before installing them in the system.

To replace a failed power supply, remove the failed unit and insert a new one of the same wattage. The system will automatically recognize the replacement power supply and bring it online to restore redundancy.

**IMPORTANT:** When utilizing 150W PCIe adapters (such as GPUs), a 750W power supply is recommended due to momentary power spikes (exceeding 150W) that may occur. When these power spikes occur, the power consumption of the PCIe adapter combined with power draw of other system components may exceed the available power of a 450W supply.
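The sizing concern can be illustrated with a toy budget check; all wattage figures below are hypothetical placeholders, not measured values for any specific configuration:

```python
def psu_headroom_w(psu_w: int, base_system_w: int, pcie_spike_w: int) -> int:
    """Remaining headroom when a transient PCIe power spike coincides with
    base system draw. All inputs are illustrative, not measured values."""
    return psu_w - (base_system_w + pcie_spike_w)

# With a hypothetical 300 W base load and a 200 W GPU spike, a 450 W PSU is
# oversubscribed (-50 W of headroom) while a 750 W PSU retains 250 W.
```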

#### <mark style="color:blue;">Power Redundancy</mark>

The power supplies in this system are fully redundant in a primary/backup arrangement: the two supplies run in parallel, with one acting as the primary power source and the other as a backup.

In normal operation, the primary supply carries most of the load while the backup remains ready, providing additional power as needed. If the primary supply fails, the backup takes over automatically, so the system continues to receive power without interruption. This redundancy allows the system to keep operating through a single power supply failure, providing a high level of reliability for critical systems.

If a power supply fails, alerts are presented via the Baseboard Management Controller (BMC), and an audible alarm may sound. The failing supply can be serviced while the system remains operational on the backup supply. Once the replacement power supply is installed, the system automatically detects it and brings it online, restoring full redundancy.

#### <mark style="color:blue;">Wake-Up Events</mark>

The Axial AC101 supports multiple power states and wake-up events.

| Wake-Up Event             | From ACPI State  | Comments                |
| ------------------------- | ---------------- | ----------------------- |
| Power Button              | Deep S5, S5, S4  |                         |
| PCIE/LAN                  | S5\*, S4, S3     | Must be enabled in BIOS |
| USB Keyboard/Mouse/Remote | S3               | Must be enabled in BIOS |
| RTC Alarm                 | S5               | Must be enabled in BIOS |

\* Onboard Intel® X710 Network controller only supports wake from S5

{% hint style="info" %}
**Note:** The Power LED is off when the system is in S4 sleep state or powered off (S5).
{% endhint %}
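For the PCIe/LAN wake event, a standard Wake-on-LAN magic packet can trigger the wake once the option is enabled in the BIOS. A minimal sketch (the MAC address shown is a placeholder, not a real system value):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a WoL magic packet: six 0xFF bytes followed by the MAC 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the local network."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# Example (placeholder MAC): send_wol("aa:bb:cc:dd:ee:ff")
```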

#### <mark style="color:blue;">Auto Power On Configuration</mark>

The Axial AC101 can be configured to power on automatically when power is connected. This is useful for recovering from power outages or when the unit is mounted in a hard-to-reach location. You can adjust the Auto Power On settings by following the steps below.

1. Power on the system and press F2 a few times to access the BIOS
2. Navigate to ***Server Mgmt*** > ***BMC Tools***
3. Locate ***Restore AC Power Loss*** setting
4. This can be changed to any of the following states:
   * ***Power Off***: The system will remain off when power is restored
   * ***Last State***: The system will recover to the state it was in before the power loss event (i.e., if the unit was off, it stays off; if it was powered on, it powers back on)
   * ***Power On***: The system will power on after any power loss event
5. Press F10 to Save & Exit
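The same behavior can also be set out-of-band with standard IPMI chassis commands. The helper below maps the BIOS option names to `ipmitool chassis policy` arguments; it is a convenience sketch, not an OnLogic-provided tool:

```python
# Mapping from the BIOS "Restore AC Power Loss" options to the standard
# IPMI power-restore policies accepted by `ipmitool chassis policy`.
POLICY_MAP = {
    "Power Off": "always-off",
    "Last State": "previous",
    "Power On": "always-on",
}

def chassis_policy_cmd(host, user, password, bios_setting):
    """Build the ipmitool argument list to apply the chosen policy."""
    return ["ipmitool", "-H", host, "-I", "lanplus",
            "-U", user, "-P", password,
            "chassis", "policy", POLICY_MAP[bios_setting]]

# e.g. subprocess.run(chassis_policy_cmd("10.0.0.5", "admin", "secret", "Power On"))
```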

### <mark style="color:blue;">2.6- Thermals & Cooling</mark>

The Axial AC101 Edge Server is designed to operate across a wide temperature range (5 to 40°C) and humidity range (8 to 85% RH, non-condensing). The following sections describe the thermal and cooling capabilities and behavior of the system.

#### <mark style="color:blue;">System Fans and Airflow Direction</mark>

The Axial AC101 Edge Server has five 40x40x56mm counter rotating system fans, which can be independently controlled and configured via the Baseboard Management Controller (BMC) relative to the supported system temperature sensors. The default fan duty and configuration settings have been validated to operate in accordance with the supported temperature range (up to 40°C). If the ambient operating temperature is tightly controlled, additional fan configuration optimizations may be manually adjusted to optimize acoustics and reduce power consumption. For additional information pertaining to manual fan configuration settings, please consult the [Axial Edge Server BMC Manual.](https://support.onlogic.com/product-documentation/server-products/axial-edge-server-bmc-manual)

The power supply fans operate independently and have their own closed-loop cooling algorithm.

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FW0rL0Aiu9itcrAcNR9Sa%2Fimage.png?alt=media&#x26;token=78d4d05d-e622-45d2-80f1-8cd4fff3575b" alt=""><figcaption></figcaption></figure>

#### <mark style="color:blue;">Temperature Sensors</mark>

Sensor data is available for several onboard components.

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2F7slqnIeXthenMhAYPIpm%2Fimage.png?alt=media&#x26;token=68e7f22f-a64e-406a-a3ed-3edfe8f28150" alt=""><figcaption></figcaption></figure>

| Sensor Name      | Upper Non Critical Temperature °C | Upper Critical Temperature °C |
| ---------------- | --------------------------------- | ----------------------------- |
| TEMP\_MB         | 54                                | 55                            |
| TEMP\_CPU        | TjMax - 1                         | TjMax                         |
| TEMP\_VR         | 99                                | 100                           |
| TEMP\_CARD\_SIDE | 69                                | 70                            |
| TEMP\_X710       | 99                                | 100                           |
| TEMP\_TR1        | 65                                |                               |
| TEMP\_M.2        | 70                                |                               |
| TEMP\_GPU        | 92                                | 93                            |
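The thresholds in the table above can be turned into a simple severity check for monitoring scripts. TEMP\_CPU is omitted because its thresholds depend on the installed processor's TjMax, and sensors without a published critical value report non-critical at most:

```python
# Upper thresholds in °C from the sensor table: (non-critical, critical).
# None means no critical threshold is published for that sensor.
THRESHOLDS = {
    "TEMP_MB": (54, 55),
    "TEMP_VR": (99, 100),
    "TEMP_CARD_SIDE": (69, 70),
    "TEMP_X710": (99, 100),
    "TEMP_TR1": (65, None),
    "TEMP_M.2": (70, None),
    "TEMP_GPU": (92, 93),
}

def classify(sensor: str, temp_c: float) -> str:
    """Classify a reading as 'ok', 'non-critical', or 'critical'."""
    upper_nc, upper_crit = THRESHOLDS[sensor]
    if upper_crit is not None and temp_c >= upper_crit:
        return "critical"
    if temp_c >= upper_nc:
        return "non-critical"
    return "ok"
```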

#### <mark style="color:blue;">Default Fan Settings</mark>

The system uses a closed-loop thermal algorithm to balance performance, acoustics, and power consumption.

**Fan Zone Assignments**

**Fan Zone 1 - CPU Area**

**Assigned Sensor:** TEMP\_CPU

**Assigned Fans:** FAN3, FAN4, FAN5

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FYdUDIJhCW1mjzPMLGsb2%2Fimage.png?alt=media&#x26;token=6aa806e4-86ed-4373-8d2a-ad708af95b58" alt=""><figcaption></figcaption></figure>

**Behavior**: Under the default configuration (Closed Loop Table 1 below), the system fans increase duty cycle in 1% increments every 3 seconds when the CPU temperature is at or above 80°C. When the temperature drops below 75°C, the fan duty cycle decreases by 3% every 3 seconds.

| Closed Loop Table 1      | Value |
| ------------------------ | ----- |
| Ramp Up Temp (°C)        | 80    |
| Ramp Up Interval (sec)   | 3     |
| Ramp Up Duty (%)         | 1     |
| Ramp Down Temp (°C)      | 75    |
| Ramp Down Interval (sec) | 3     |
| Ramp Down Duty (%)       | 3     |
| Ramp Threshold (°C)      | 0     |

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FcJGutc546kUinjuyRpWf%2Fimage.png?alt=media&#x26;token=6e2f5b4e-fbbd-46c4-82df-98992f4e31de" alt=""><figcaption></figcaption></figure>
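The ramp behavior defined by Closed Loop Table 1 can be sketched as one step of a control loop. This is a simplified model of the BMC's algorithm, ignoring the interval timing and the 0°C ramp threshold:

```python
def step_duty(duty: int, temp_c: float,
              ramp_up_temp: int = 80, ramp_up_duty: int = 1,
              ramp_down_temp: int = 75, ramp_down_duty: int = 3,
              min_duty: int = 5, max_duty: int = 100) -> int:
    """One control interval: raise duty at or above the ramp-up temperature,
    lower it below the ramp-down temperature, hold steady in between.
    Defaults come from Closed Loop Table 1; 5% is the system idle duty."""
    if temp_c >= ramp_up_temp:
        return min(max_duty, duty + ramp_up_duty)
    if temp_c < ramp_down_temp:
        return max(min_duty, duty - ramp_down_duty)
    return duty
```

Note the 5°C hysteresis band between 75°C and 80°C, where the duty cycle holds steady to avoid oscillating fan speeds.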

**Fan Zone 2 - PCIe / GPU Area**

**Assigned Sensor:** TEMP\_GPU

**Assigned Fans:** FAN1, FAN2

**Behavior:** As per the default configuration settings, the system fans will increase duty cycle at 3% increments every 2 seconds when the GPU temperature is at or above 86°C. When the temperature drops below 76°C, the system fan duty cycle will reduce 3% every 1 second.

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FS2nKhA9pfr7CjEP55AbT%2Fimage.png?alt=media&#x26;token=10776a19-eaba-444d-9f25-06cc088335a8" alt=""><figcaption></figcaption></figure>

{% hint style="info" %}
**Note:** GPU temperature sensing is only supported with Nvidia professional grade GPUs.
{% endhint %}

**Additional Fan Defaults**

* The default system idle duty cycle is 5%.
* Upon System Fan Failure or BMC Firmware Update, System Fans will ramp to maximum speed.

#### Thermal Performance and Validation

As previously noted, the default fan duty and configuration settings have been validated to operate in accordance with the supported temperature range (up to 40°C) as per the following test scenario and results.

**Test Conditions**

* Temperature Range: 5°C to 40°C (also tested ±5°C beyond the rated range)
* System Configuration:
  * i9-13900TE Processor (125W PL2)
    * Performance-core Max Turbo Frequency: 5.00 GHz
    * Efficient-core Max Turbo Frequency: 3.90 GHz
    * Performance-core Base Frequency: 1.00 GHz
    * Efficient-core Base Frequency: 800 MHz
  * 2 TB PCIe Gen4 x4 M.2 Storage
  * 4 PCIe 4.0 2.5” Storage Drives
  * 128GB DDR5 Memory
  * Nvidia T1000 GPU
    * Max Boost Frequency: 2100 MHz
    * Base Frequency: 1065 MHz
* Workload Applications/Test:
  * Memory 80% workload with PassMark BurnInTest
  * Storage 80% workload with PassMark BurnInTest
  * 3D Graphics 80% workload with PassMark BurnInTest
  * Processor loaded 100% with Intel XTU
  * Discrete GPU loaded with Nvidia Nbody

**Test Results**

The Axial AC101 system sustained a full processor workload and 80% workloads on memory, storage, and 3D graphics, along with an Nbody simulation, through its full rated temperature range without throttling, while maintaining above-base clocks on all processor and GPU cores. During the test sequence, numerous points throughout the system were monitored to ensure adequate cooling was provided to all components. The system was also tested 5°C above and below its rated temperature range to help characterize performance outside that range.

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FPdDGRfDRvPFlQfdvMe8S%2Fimage.png?alt=media&#x26;token=a1d92090-be6a-4b56-8323-823cc6ab8e57" alt=""><figcaption></figcaption></figure>

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fs12a2KvX5mUBuN7uJECA%2Fimage.png?alt=media&#x26;token=9311d427-3aff-4331-bb10-870326ad4095" alt=""><figcaption></figcaption></figure>

### <mark style="color:blue;">2.7- Block Diagram</mark>

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FrDNo7CWNxJDkJrzZFYgQ%2Fimage.png?alt=media&#x26;token=7d309a08-492c-4a8a-b351-718fbbb04d69" alt=""><figcaption></figcaption></figure>

## <mark style="color:blue;">3- Installation & Mechanical</mark>

### <mark style="color:blue;">3.1- Dimensions</mark>

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2F2R1jptsjCjR610KnP3l9%2Fimage.png?alt=media&#x26;token=b085e593-7d19-48a6-ac0d-4a99ceea4a66" alt=""><figcaption><p><em>Axial AC101 without Security Bezel Dimensions</em></p></figcaption></figure>

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FQgxHvoIxhluvy35iOoy1%2Fimage.png?alt=media&#x26;token=1b857af5-0135-4ae8-9ec1-baf324e08a48" alt=""><figcaption><p><em>Axial AC101 with Security Bezel Dimensions</em></p></figcaption></figure>

### <mark style="color:blue;">3.2- Mounting</mark>

#### <mark style="color:blue;">Mounting Hardware</mark>

The Axial AC101 Edge Server has been designed with flexibility in mind and can be mounted in different ways. As the system is designed to meet industry standard 19” Electronic Industries Alliance (EIA) racks, there are multiple rack mounting rail kits available. Additionally, the system may also be wall mounted using the OnLogic wall mount kit.

#### <mark style="color:blue;">Rack Mounting</mark>

The Axial AC101 Edge Server has been designed to support standard 19" EIA rack mounting, which is a common form factor used in data centers and server rooms. To accommodate different rack depths, the system supports 23" and 28" rail kits that can be used to securely mount the server in the rack. These rail kits are easy to install and include all the necessary hardware for attachment into the rack.

#### <mark style="color:blue;">Rackmount 23" Ball Bearing Slide Rails</mark>

The 23" Ball Bearing Slide Rails are an optional accessory designed to enhance the functionality and ease of use of the Edge Server. These slide rails are designed to be used with standard 19" EIA racks and allow for easy installation and removal of the server from the rack. The ball bearing design ensures smooth and effortless sliding motion, while the sturdy construction provides a secure and stable platform for the server. With these slide rails, you can easily access the server for maintenance or upgrades without the need for complex disassembly or cumbersome lifting.

The 23” Ball Bearing Slide rail kit can be chosen at time of configuration based on the rack depth requirements.

**Mounting Hole:** Square, **Rack Depth Range (front to back flange):** 597mm (23.5in) to 927mm (36.5in)


<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fk43fGSNT9PX1J3WcIl05%2Fimage.png?alt=media&#x26;token=923c98b9-af8f-4323-82cd-b7163f88f81e" alt=""><figcaption><p>Install the six M4x0.7 L=4mm Low Profile Cheesehead screws provided with the rail kit</p></figcaption></figure>

#### <mark style="color:blue;">Rackmount 23" Ball Bearing Cable Management Arm Slide Rail Kit</mark>

The 23" Ball Bearing Cable Management Arm Slide Rail Kit is an optional accessory that enhances the standard ball bearing slide rail options by providing a cable management arm to neatly organize and secure cable connections to the Edge Server system while still supporting easy removal of the server from the rack for maintenance and upgrades.

The 23” Ball Bearing Slide rail kit can be chosen at time of configuration based on the rack depth requirements.

**Mounting Hole:** Square, **Rack Depth Range (front to back flange):** 597mm (23.5in) to 927mm (36.5in)


<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FwS5UPt35BNSoQxUXkUpl%2Fimage.png?alt=media&#x26;token=ac141b56-49f8-4247-9f12-9f6f3d88715d" alt=""><figcaption><p>Install the six M4x0.7 L=4mm Low Profile Cheesehead screws provided with the rail kit</p></figcaption></figure>

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fnmqt2QKKoPDb1mFUPPjG%2Fimage.png?alt=media&#x26;token=74ba1133-f4c7-4162-8d39-d7ec83e2265b" alt=""><figcaption></figcaption></figure>

#### <mark style="color:blue;">Rackmount 28" Simple Locking Ball Bearing Slide Rails</mark>

The 28” Simple Lock Ball Bearing Slide Rails are an optional accessory designed to enhance the functionality and ease of use of the Edge Server. These slide rails are designed to be used with standard 19" EIA racks and allow for easy installation and removal of the server from the rack. The ball bearing design ensures smooth and effortless sliding motion, while the sturdy construction provides a secure and stable platform for the server.

With these slide rails, you can easily access the server for maintenance or upgrades without the need for complex disassembly or cumbersome lifting.

The simple locking mechanism allows for quick mounting into a rack without the use of any tools.

The 28” Simple Lock Ball Bearing Slide rail kit can be chosen at time of configuration based on the rack depth requirements.

**Mounting Hole:** Square, **Rack Depth Range (front to back flange):** 609mm (24in) to 921mm (36.2 in)


<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FZjLdLOq1vDVMv9sYvgic%2Fimage.png?alt=media&#x26;token=080d499a-ef06-4b8b-bfa2-58c707aa1a6e" alt=""><figcaption><p>Install the six M4x0.7 L=4mm Low Profile Cheesehead screws provided with the rail kit</p></figcaption></figure>

#### <mark style="color:blue;">Wall Mounting</mark>

#### **Wall mount kit**

The Axial AC101 Edge Server wall mount kit is made of sturdy metal and designed to securely hold the server in place against a wall. This optional accessory includes the necessary wall mounting brackets and hardware to flexibly mount the Axial AC101 Edge Server system where a rack is not available or practical.

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FrKierquOjUtnNy0Cb5DP%2Fimage.png?alt=media&#x26;token=1e6b66db-3c81-434c-a709-b6e766d23068" alt=""><figcaption><p>Install the eight M3x0.5 L=4mm Flathead screws provided with the wall mount kit</p></figcaption></figure>

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FajObQmlOFWezTnIKvD1W%2Fimage.png?alt=media&#x26;token=76fbea80-2545-4062-8381-cd870a2b0192" alt=""><figcaption></figcaption></figure>

### <mark style="color:blue;">3.3- System Servicing</mark>

#### <mark style="color:blue;">System Access</mark>

The AC101 can be opened by the user. Doing so does not void the warranty; however, any damage caused while the system is open is not covered.

This section provides guidance for accessing and replacing internal components. Before performing any service, ensure the system is powered down and disconnected from its power source unless performing a hot-swap operation as described below.

#### Front panel access & Serial label <a href="#front-panel-access-serial-label" id="front-panel-access-serial-label"></a>

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fgit-blob-6997473a63d88d02ffde77974c763fbe6b949e2d%2F193fcfc0a17f6477850d29a94398ecec28f35c1893026f28adce0400ba4e9249.png?alt=media" alt="Unlock front panel w/ key" width="375"><figcaption></figcaption></figure>

Unlock the front panel using the included keys in the accessory box

The front panel can now be removed. Pull from the left side (the side with the lock) first.

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fgit-blob-8e532c42f8bf0d832e0200786c899a3b3b3eb580%2F6124101486088b785bf6e8a844a4842a082f50efec8a65b67295eff57e2bb0ff.png?alt=media" alt="" width="375"><figcaption></figcaption></figure>

You now have access to the power button, USB ports, and serial label tag. Pull the tab for easy access to your unit’s serial number and BMC MAC information. A second label can be found on the bottom of the unit as well.

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fgit-blob-ebfee2a0a550ae34a0ed8d108ed1b5cba0882e0b%2Fbd004626fbffdb506578d16a0777a6a733290c40c5d3bd85aabce87b127ea83d.png?alt=media" alt="" width="375"><figcaption></figcaption></figure>

#### <mark style="color:blue;">Opening the System</mark>

The chassis lid features a two-point locking mechanism. The first is a top latch with a tamper-resistant screw, and the second is a thumbscrew at the rear of the system. Both must be unlocked to remove the lid.

1. Make sure the system is disconnected from power, monitor, and all peripheral connections before proceeding.
2. Loosen the black retaining screw on the back of the system.

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fgit-blob-fb1e4742e5ce0e990b1fc24744d8fd9eb714c0e6%2Fe5bd8c1b0a763f2b62f63e0ba73fb7eb338b824cbd56c3245f77fceb5b3f93bc.png?alt=media" alt="Lid back screw" width="375"><figcaption></figcaption></figure>

3. Unlock the lid latch and press the blue button to release it. Pull back on the latch arm to loosen the lid.

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fgit-blob-5cfd3e004a46376e8d14f0966451f277fe8f8759%2F3387fbf3a0605af220a6fde6a856fd0b7eba9d2874d4ba6ff83a9b58fdbef20a.png?alt=media" alt="lid latch release" width="375"><figcaption></figcaption></figure>

4. The lid can now be removed. The internals of the system can then be accessed for maintenance and troubleshooting.

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fgit-blob-f71d97973984ee91081f71d3541f01a5c24428e7%2Fb09b602a0fafdd81a61ea0f6ca64d1a6734f7a74cb286a068b973b9a150bd52b.png?alt=media" alt="" width="375"><figcaption><p>System shown with optional configuration options</p></figcaption></figure>

#### <mark style="color:blue;">Hot-Swappable Components</mark>

* **Power Supplies:** The redundant 450W or 750W power supplies are hot-swappable. A failed unit can be replaced while the system is running without interrupting operation. Ensure the replacement PSU is the same wattage as the remaining unit.

#### <mark style="color:blue;">Other Replaceable Components</mark>

* Memory (DIMMs): See section 2.3 for DIMM population rules and physical locations.
* Storage Drives: See section 2.3 for SATA/NVMe drive locations and population guidelines.
* PCIe Cards: See section 2.3 for PCIe slot details.

#### <mark style="color:blue;">Servicing PCIe & GPU</mark>

#### **Adding/Removing PCIe card** <a href="#adding-removing-pcie-card" id="adding-removing-pcie-card"></a>

Additional motherboard ports and troubleshooting may require access under the PCIe card (if installed). Follow these steps to safely remove the PCIe card and support bracket.

1. Remove the retention screw on the back of the system (circled in **Orange**). If you have a longer PCIe card, such as some GPUs, it may have an extra supporting bracket. Remove the two screws located near the back of the PCIe card (circled in **Blue**).
2. Remove the PCIe card by lifting straight up. Be careful of any cables running around the support bracket or connected to the card. You can pull up using the hole in the metal bracket.

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fgit-blob-76a7baa75e9a7fb32236746ef0d0bfea748f62ed%2Fd10344a6cd947db08a97c7f3dff2742a6ba004ce33aa5442735f6267d5eac758.png?alt=media" alt="" width="375"><figcaption></figcaption></figure>

### <mark style="color:blue;">3.4- CAD & Drawings</mark>

[AC101 Dimensional Drawings](https://media.onlogic.com/248f8472-1b41-4a43-9f88-4aee598a9ac5/3519504a-87cf-4368-9e20-01f69e6d62f1/721pHkAl6oQNPQD1A2d2mMEeq/HvN3cYiFGhpu15FVW3xBK8zBp.pdf?targetFileName=OnLogic-AC101-Spec-Sheet-V5.pdf)

{% file src="<https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FVhcZEsnjqOdMpMgMmWr4%2FAC101%20-%20Simplified%20Model.zip?alt=media&token=50ce9cd1-594f-4071-a6f5-62d1ccfe326f>" %}

## <mark style="color:blue;">4- Software & Firmware</mark>

### <mark style="color:blue;">4.1- BIOS/UEFI</mark>

The BIOS/UEFI provides critical low-level system configuration.

For complete details on BIOS/UEFI configuration, refer to the official User Manual:

{% content-ref url="ac101/axial-ac100-series-bios-uefi-manual" %}
[axial-ac100-series-bios-uefi-manual](https://support.onlogic.com/product-documentation/server-products/axial-ac100-series/ac101/axial-ac100-series-bios-uefi-manual)
{% endcontent-ref %}

### <mark style="color:blue;">4.2- Remote Management (IPMI/BMC)</mark>

The Axial AC101 includes a dedicated Baseboard Management Controller (ASPEED AST2600) for comprehensive remote management. This allows for out-of-band control of the server, including power cycling, health monitoring, virtual media access, and KVM functionality, all accessible through a web UI via the dedicated 1GbE Management port. The BMC also controls fan curves and logs system events like chassis intrusion. For detailed instructions on configuration and usage, please consult the separate BMC Manual:

{% content-ref url="../axial-edge-server-bmc-manual" %}
[axial-edge-server-bmc-manual](https://support.onlogic.com/product-documentation/server-products/axial-edge-server-bmc-manual)
{% endcontent-ref %}

#### <mark style="color:blue;">Changing the BMC Chassis ID (Flex GPU Support)</mark>

The following outlines how to configure the Axial AC101 BMC firmware to support Intel Flex dGPUs. The BMC's default chassis ID is set for Nvidia GPUs; changing it enables support for Intel dGPUs and their respective sensors and communication protocols.

#### Via BMC Web UI (Recommended)

1. Change Chassis ID:
   * Log in to the BMC Web UI.
   * Navigate to `Settings` > `Chassis ID Select`.
   * From the dropdown, select "Intel\_Flex\_GPU" and click `Save`.
   * Confirm the BMC reset when prompted.
2. Verify Change:
   * After the BMC reboots, log back into the Web UI.
   * Confirm "Intel\_Flex\_GPU" is displayed in the dropdown menu.
   * On the `Sensor` page, verify that the "TEMP\_GPU" and "PWR\_GPU" sensors report values when an Intel Flex GPU is installed.
3. Revert to Default (NVIDIA dGPU Support):
   * From the Home screen, select `Settings`, then `Chassis ID Select`.
   * Toggle back to "Default" and click `Save`. Click `OK` when prompted.
   * *Note: Resetting the BMC to default does not change the Chassis ID.*

#### Via IPMITOOL (CLI - Debug)

* *Commands can be executed remotely (local host OS not required). Substitute variables (`$IP`, `$username`, `$password`) with appropriate values. Refer to Axial AC101 BMC Manual, Section 7 for additional information.*

1. Change Chassis ID Value:
   * Execute: `ipmitool -H $IP -I lanplus -U $username -P $password raw 0x3a 0xaa 0x49 0x6e 0x74 0x65 0x6c 0x5f 0x46 0x6c 0x65 0x78 0x5f 0x47 0x50 0x55`
   * Apply Chassis ID (Reboot BMC): `ipmitool -H $IP -I lanplus -U $username -P $password raw 0x6 0x2`
2. Verify Change:
   * After approximately 3 minutes (BMC reboot), run the sensor listing command: `ipmitool -H $IP -I lanplus -U $username -P $password sensor list`
   * Confirm "PWR\_GPU" appears in the list.
3. Check Current Chassis ID:
   * To retrieve the current chassis ID, execute: `ipmitool -H $IP -I lanplus -U $username -P $password raw 0x3a 0xab`
4. Revert Chassis ID to Defaults (NVIDIA dGPU Support):
   * Execute: `ipmitool -H $IP -I lanplus -U $username -P $password raw 0x3a 0xaa 0xff`
   * Apply Chassis ID (Reboot BMC): `ipmitool -H $IP -I lanplus -U $username -P $password raw 0x6 0x2`
   * After approximately 3 minutes, verify "PWR\_GPU" shows a value when an Nvidia GPU is installed.
   * *Note: Changing the Chassis ID is persistent across BMC reboots and firmware updates.*
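The raw payload in the commands above is simply the chassis ID string encoded as ASCII bytes, which can be verified in a couple of lines:

```python
# Bytes from the `raw 0x3a 0xaa ...` command above.
payload = bytes([0x49, 0x6e, 0x74, 0x65, 0x6c, 0x5f, 0x46,
                 0x6c, 0x65, 0x78, 0x5f, 0x47, 0x50, 0x55])
print(payload.decode("ascii"))  # Intel_Flex_GPU
```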

#### **Intel Flex GPU Driver Installation**

For Ubuntu:

1. Follow Intel's official driver installation guide: <https://dgpu-docs.intel.com/driver/installation.html>
2. Add Required Grub Kernel Argument:
   * Modify `/etc/default/grub` using a text editor (e.g., `vi`). Use `sudo` and enter your password when prompted.
   * Edit the line beginning with "GRUB\_CMDLINE\_LINUX\_DEFAULT", adding `pci=realloc=off` inside the double-quotes, typically after "quiet splash". (If "quiet splash" is absent, that's acceptable.)
   * Save the file and exit the text editor.
   * Update Grub: `sudo update-grub`
   * Restart the system.
   * *Note: This argument ensures proper Intel Flex GPU enumeration within the Intel Core CPU architecture. This function is typically enabled by default to accommodate PCI bridge resource reallocation if BIOS allocations are insufficient for child devices.*
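The manual edit in step 2 can also be scripted. The helper below rewrites a `GRUB_CMDLINE_LINUX_DEFAULT` line; it is a sketch only, so back up `/etc/default/grub` before modifying it:

```python
def add_kernel_arg(grub_line: str, arg: str = "pci=realloc=off") -> str:
    """Append a kernel argument inside the quoted value of a grub default
    line, leaving the line unchanged if the argument is already present."""
    key, _, value = grub_line.partition("=")
    current = value.strip().strip('"')
    if arg in current.split():
        return grub_line
    return f'{key}="{(current + " " + arg).strip()}"'

print(add_kernel_arg('GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"'))
# GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=realloc=off"
```

Run `sudo update-grub` and reboot after applying the change, as described above.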

**For Windows**:

* Follow Intel's official instructions and download drivers from: <https://www.intel.com/content/www/us/en/download/780185/intel-data-center-gpu-flex-series-windows.html>

#### Fan Settings

The following fan settings are deviations from the Nvidia defaults that will automatically be set with BMC firmware version 1.17 when the Chassis ID is set to Intel GPU.

* Closed Loop Control Table 2
* Base Fan Speed:
  * From the BMC Web UI –> Settings –> Fan Settings –> Fan Mode page:
    * Set FAN1 and FAN2 to `Customized`.
    * Set the Minimum Duty to `25`.
    * Click `Save`.
* Adjusting Fan Locations:
  * The AC101 1U chassis accommodates two PCIe Expansion fan mount points.

#### <mark style="color:blue;">Configuring the Video Settings to Enable Remote Display</mark>

The current default display settings are as follows; refer to the screenshot below, where onboard VGA and Intel IGFX are disabled. When a GPU is installed, the GPU ports are the primary display output by default.

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2F3AMqJ8PnREHhD26oxZ9d%2Funknown.png?alt=media&#x26;token=a5602a63-dc2f-4706-b167-68847fdfb05b" alt=""><figcaption></figcaption></figure>

This default configuration can lead to the following two problems:

1. The display signal from the GPU port is slow to appear. This tends to result in no video signal until the BMC is fully initialized, and it can take additional time after that before a video signal is available.
2. When trying to connect to the device remotely via the BMC, there is no video output via KVM access.\
   Three BIOS settings need to be modified to enable remote video output; refer to the screenshot below for the required changes.

   <figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fj8T6ZqJLG9dKqnBEUgtv%2Funknown.png?alt=media&#x26;token=91edbab9-f8cb-4004-be76-bcf48afb2047" alt=""><figcaption></figcaption></figure>

   \
   Even if there are no issues, consider enabling these settings in the BIOS so the defaults remain BMC-access friendly.

### <mark style="color:blue;">4.3- Drivers & Downloads</mark>

#### <mark style="color:blue;">Drivers</mark> <a href="#drivers" id="drivers"></a>

[AC101 Windows 10 & 11 Drivers](https://static.onlogic.com/resources/drivers/AC101_Drivers_WIN10_11.zip)

#### <mark style="color:blue;">BIOS Updates</mark> <a href="#bios-updates" id="bios-updates"></a>

Refer to the AC101 BIOS Manual **Section 5.16 – Instant Flash** for update procedure.

| Bios Version | Release Date        | Link                                                                                                      |
| ------------ | ------------------- | --------------------------------------------------------------------------------------------------------- |
| 21.12.OL11   | November 14th, 2024 | [Download](https://static.onlogic.com/resources/bios/Axial-AC-Series/21.01.OL11-20250128T214640Z-001.zip) |
| 21.01.OL09   | December 18th, 2023 | [Download](https://static.onlogic.com/resources/bios/Axial-AC-Series/21.01.OL11-20250128T214640Z-001.zip) |

#### <mark style="color:blue;">BMC Updates</mark> <a href="#bmc-updates" id="bmc-updates"></a>

| **BMC Version** | **Release Date**   | **Link**                                                                                       |
| --------------- | ------------------ | ---------------------------------------------------------------------------------------------- |
| 1.17.00         | November 6th, 2024 | [Download](https://drive.google.com/file/d/14EWwSr2xmqzX2hTUVwF2ubHzucwd4oCz/view?usp=sharing) |

### <mark style="color:blue;">4.4- Operating System Compatibility & Installation</mark>

#### <mark style="color:blue;">Supported Operating Systems</mark>

* Microsoft Windows 10 IoT Enterprise 2021 LTSC Value (Celeron/i3/i5) - 64 Bit
* Microsoft Windows 10 IoT Enterprise 2021 LTSC High End (i7/i9/Xeon) - 64 Bit
* Microsoft Windows 11 Professional 64-bit
* Red Hat Enterprise Linux 8.8 - 8.x
* Red Hat Enterprise Linux 9.2 - 9.x

#### <mark style="color:blue;">Windows 10 IoT Enterprise 2021 LTSC Licensing</mark>

Windows 10 IoT LTSC (Long-Term Servicing Channel) is a version of the Windows 10 operating system designed for use in embedded and IoT (Internet of Things) devices.

For information pertaining to the benefits of Windows 10 IoT, please refer to the following: [Windows 10 IoT and its Benefits for Businesses](https://www.onlogic.com/company/io-hub/windows-10-iot-and-its-benefits-for-businesses/).

The 2021 version of Windows 10 IoT LTSC comes in two licensing editions that are supported and may be preloaded on to the Axial AC101 Edge Server:

* Microsoft Windows 10 IoT Enterprise 2021 LTSC Value
  * This version of Windows 10 IoT is suitable for systems with Intel Core i3 and Core i5 processors.
* Microsoft Windows 10 IoT Enterprise 2021 LTSC High End
  * This version of Windows 10 IoT is suitable for systems with Intel Core i7 and Core i9 processors.

Both versions support Azure IoT Edge for Linux on Windows (EFLOW), allowing for containerized Linux workloads alongside Windows applications in Windows deployments. For additional information, see [What is Azure IoT Edge for Linux on Windows](https://learn.microsoft.com/en-us/azure/iot-edge/iot-edge-for-linux-on-windows?view=iotedge-1.4) from Microsoft.

### <mark style="color:blue;">4.5- RAID Configuration</mark>

This section provides information on the RAID capabilities of the Axial AC101 Edge Server and guides users through configuring RAID.

RAID (Redundant Array of Independent Disks) is a technology that allows multiple hard drives to work together as a single logical drive, providing increased performance and data redundancy. The idea behind RAID is to combine the storage capacity of multiple drives to create a larger virtual drive that appears to the operating system as a single disk.

RAID can improve system performance by distributing data across multiple drives, allowing for faster read and write speeds. Additionally, RAID can provide data redundancy by using multiple drives to store the same data, so that if one drive fails, data can still be accessed from the other drives. There are several RAID levels with different configurations and benefits, each offering varying levels of performance and data redundancy.

The Axial AC101 Edge Server supports onboard RAID via Intel® Rapid Storage Technology as supported by the Intel® W680 chipset.

Intel® RST (Intel® Rapid Storage Technology) is a software solution developed by Intel® Corporation that provides advanced storage management capabilities for Intel® chipset-based motherboards.

Before configuring RAID, back up any important data; the configuration process may erase all data on the hard drives.

#### <mark style="color:blue;">Supported SATA RAID Types</mark>

The following sections will discuss the various SATA RAID types that are supported on the Axial AC101 Edge Server and their respective advantages/disadvantages.

#### RAID 0: Striping

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fgit-blob-6fb9eec285fd4ef5d74ca58d05c1a1be06fc0d48%2Fb22e1d5a09823a4e3ee3dd6103cadedc1fdaba9e752f8d897db38b53629ed07f.png?alt=media" alt="" width="563"><figcaption></figcaption></figure>

RAID 0 (Redundant Array of Independent Disks level 0), also known as striping, is a method of combining multiple physical hard drives into a single logical volume for improved performance.

In RAID 0, data is divided into blocks and spread across two or more physical drives simultaneously. The blocks are written to the drives in a way that balances the load and optimizes performance. When data is read, the blocks are retrieved from multiple drives at the same time, increasing the read and write speed of the overall system.

An advantage of RAID 0 is its improved performance due to the parallel access to multiple drives. However, RAID 0 does not provide any fault tolerance or redundancy. If one drive fails, the entire RAID 0 volume will be lost, along with all data stored on it. Therefore, it is recommended to use RAID 0 only for non-critical data or as part of a larger backup and disaster recovery strategy.

RAID 0 requires a minimum of two drives.

For RAID 0, it is recommended to use disks of the same interface, speed, and capacity, but if the disks in a RAID 0 array have different sizes, performance may be limited and the capacity of the array will be limited by the size of the smallest disk.

#### RAID 1: Mirroring

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fgit-blob-6c3521eb9749b0edd42b5a5ad3b69dcbebb99973%2F9083bd14e0dbbbfb5c2abbf3482ac488c731fe44936c8a354a376538f9bc27f7.png?alt=media" alt="" width="563"><figcaption></figcaption></figure>

RAID 1 (Redundant Array of Independent Disks level 1) is a type of data storage technology that provides data redundancy and fault tolerance by creating an exact copy, or mirror, of data on two or more physical drives.

In RAID 1, when data is written to one drive, it is simultaneously written to the other drive(s), creating an exact duplicate of the data on each drive. This ensures that if one drive fails, the data can still be accessed from the remaining drive(s). The read performance of RAID 1 can be faster than that of a single drive because data can be read from multiple drives at the same time. However, the write performance is generally slower because data must be written to multiple drives.

An advantage of RAID 1 is its data redundancy and fault tolerance. If one drive fails, the data is still available on the other drive(s). Additionally, RAID 1 can be hot-swappable, meaning that if a drive fails, it can be replaced without having to shut down the system.

However, RAID 1 has some disadvantages, including lower storage capacity compared to other RAID configurations and higher cost due to the need for multiple drives. RAID 1 is recommended for applications that require high data availability and reliability, such as mission-critical systems, servers, and database applications.

RAID 1 requires a minimum of two drives.

For RAID 1, it is recommended to use disks of the same interface, speed, and capacity, but if the disks in a RAID 1 array have different sizes, performance may be limited and the capacity of the array will be limited by the size of the smallest disk.

#### RAID 5: Striping with Parity

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fgit-blob-15c139962b338ce949e20800f5ccc566ef6b2fa5%2F431ff89cb1e058c58d8541c9694826dc85d8331d6a3a440d12bd5bc914812c4b.png?alt=media" alt="" width="563"><figcaption></figcaption></figure>

RAID 5 (Redundant Array of Independent Disks level 5) is a type of data storage technology that uses striping with distributed parity.

In a RAID 5 configuration, data is striped across multiple disks, with parity information distributed across all the disks. This provides fault tolerance and redundancy, allowing data to be reconstructed in the event of a single drive failure.

RAID 5 offers good performance and fault tolerance for small to medium-sized businesses, but it has a higher overhead and is more complex than some other RAID configurations. Additionally, in the event of a second drive failure, data loss can occur. RAID 5 is often used in applications that require a balance between performance, fault tolerance, and cost.

RAID 5 requires a minimum of three disks, and the capacity of one disk is used for parity information.

For RAID 5, it is recommended to use disks of the same interface, speed, and capacity, but if the disks in a RAID 5 array have different sizes, performance may be limited and the capacity of the array will be limited by the size of the smallest disk.
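The striping-with-parity scheme described above can be illustrated with a short Python sketch (an illustration of the concept, not Intel® RST code): the parity block in each stripe is the byte-wise XOR of the data blocks, so any single lost block can be rebuilt by XOR-ing the surviving blocks with the parity.

```python
# RAID 5 parity sketch: parity block = XOR of the data blocks in a stripe.
# Illustrative only -- real RAID 5 rotates parity across disks and operates
# on stripe units far larger than these 4-byte example blocks.

def xor_blocks(*blocks: bytes) -> bytes:
    """Byte-wise XOR of equally sized blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# One stripe across a three-disk array: two data blocks plus one parity block.
d1, d2 = b"\x01\x02\x03\x04", b"\x10\x20\x30\x40"
parity = xor_blocks(d1, d2)

# If the disk holding d1 fails, its block is rebuilt from d2 and the parity.
rebuilt = xor_blocks(d2, parity)
assert rebuilt == d1
```

This is also why a second drive failure causes data loss: with two blocks missing from a stripe, the XOR relationship no longer has enough information to reconstruct either one.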

#### RAID 10: Mirrored Striped

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fgit-blob-44762bfe5d069ad52f7a79e4b7edf73760b3c213%2Fa37ed29b0adeb1ab5411a4fdb12c22456c525b80be24715e59437b40429308ce.png?alt=media" alt="" width="563"><figcaption></figcaption></figure>

RAID 10 (Redundant Array of Independent Disks level 10), also known as RAID 1+0 or mirrored striped volumes, is a combination of RAID 1 and RAID 0. It provides both data redundancy and improved performance.

In a RAID 10 configuration, multiple pairs of disks are configured as RAID 1 arrays, where data is mirrored between each pair of disks for redundancy. The resulting RAID 1 arrays are then striped together in a RAID 0 array, where data is striped across all of the mirrored pairs for increased performance.

Data is striped across the mirrored pairs, so the capacity of the RAID 10 array is equal to half of the total capacity of the disks. For example, in a four-disk RAID 10 array with 1TB disks, the total capacity of the array would be 2TB.

RAID 10 provides both performance and redundancy benefits, as it offers the performance benefits of RAID 0 while also providing the redundancy of RAID 1. In the event of a single disk failure, the mirrored pair can continue to provide access to the data. However, if both disks in a mirrored pair fail, data may be lost.

RAID 10 requires a minimum of four disks, and must have an even number of disks.

For RAID 10, it is recommended to use disks of the same interface, speed, and capacity, but if the disks in a RAID 10 array have different sizes, performance may be limited and the capacity of the array will be limited by the size of the smallest disk.
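The capacity and minimum-disk rules stated for each supported level above can be summarized in a small Python sketch (a hypothetical planning helper, not part of Intel® RST): usable capacity is always computed from the smallest disk, matching the note that mixed-size arrays are limited by their smallest member.

```python
# Usable-capacity rules for the SATA RAID levels described above.
# Hypothetical planning helper; sizes are in any consistent unit (e.g., TB).

def usable_capacity(level: int, disk_sizes: list[float]) -> float:
    n = len(disk_sizes)
    smallest = min(disk_sizes)  # mixed-size arrays are limited by the smallest disk
    if level == 0:
        if n < 2:
            raise ValueError("RAID 0 requires a minimum of two drives")
        return n * smallest        # striping: full capacity, no redundancy
    if level == 1:
        if n < 2:
            raise ValueError("RAID 1 requires a minimum of two drives")
        return smallest            # mirroring: one drive's worth of capacity
    if level == 5:
        if n < 3:
            raise ValueError("RAID 5 requires a minimum of three disks")
        return (n - 1) * smallest  # one disk's capacity holds parity
    if level == 10:
        if n < 4 or n % 2:
            raise ValueError("RAID 10 requires an even number of disks, minimum four")
        return (n // 2) * smallest # half the disks hold mirrors
    raise ValueError(f"unsupported RAID level: {level}")

# Example from the RAID 10 section: four 1TB disks -> 2TB usable.
print(usable_capacity(10, [1.0, 1.0, 1.0, 1.0]))  # 2.0
```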

#### <mark style="color:blue;">RAID Configuration via BIOS</mark>

RAID volumes can be configured and created via the BIOS or from within an operating system (OS).

If an operating system is to be installed onto a RAID volume, the processes outlined in this section must be followed to enable RAID and create the RAID volume where the OS will be deployed.

This section outlines the process for creating RAID volumes outside of the OS via the BIOS.

#### **Enabling VMD Configuration**

Prior to configuring or creating any RAID volumes using Intel® Rapid Storage Technology, Intel® Volume Management Device (VMD) must be appropriately configured/enabled.

1. From UEFI System Setup, navigate to Advanced → VMD Configuration → and set **Enable VMD Controller** to **Enabled.**
2. Next, enable the desired VMD-managed devices:
   * For SATA RAID: Above the “Root Port BDF Details value” of **SATA Controller**, set the “Map this Root Port under VMD” to **Enabled**

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fgit-blob-1d34a7db647eac1a610e0bfef2930c55a2dc5e35%2F1a3a0c377262036d260db93e67a98de400c7746846dadda7fd5aa6677fab6953.png?alt=media" alt=""><figcaption></figcaption></figure>

* For NVMe RAID: Above the “Root Port BDF Details value” of **XX/YY/ZZ**, set the “Map this Root Port under VMD” to **Enabled** (each root port corresponds to an NVMe drive)
* Alternatively, “Enable VMD Global Mapping” can be set to Enabled for all attached storage devices.

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fgit-blob-87d9df919d091617fcc94b2986425a9ac2831ab8%2F82c7a44b0bf1c74e7315917af5d9ace790caf77cfa5a88ad4206b776f75c8b97.png?alt=media" alt=""><figcaption></figcaption></figure>

3. Press **F10** to Save and Exit. The system will then reboot.

<mark style="color:blue;">**Creating a RAID Volume in BIOS**</mark>

After enabling VMD, reboot and enter UEFI System Setup (**F2** or **DEL**).

1. Navigate to **Advanced → Intel(R) Rapid Storage Technology**.

   1. The available Physical Disks should be listed under Non-RAID Physical Disks:

   <figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fa8tSiIVvJFhhnKY05hyH%2Fimage.png?alt=media&#x26;token=20d654fb-c132-4386-8226-7817fd1f8f41" alt=""><figcaption></figcaption></figure>
2. Select **Create RAID Volume**.

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FTgFl5gRGWDJUyDfi7JzH%2Fimage.png?alt=media&#x26;token=58db551c-14f0-4d9c-b7e1-e6838f0ba53b" alt=""><figcaption></figcaption></figure>

3. Assign a **Name** and select the **RAID Level** (e.g., RAID 0, 1, 5, 10).
4. Select the disks to include in the volume by marking them with an **X**.

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FK7v3lEX4RT7TpNZ1Ntdx%2Fimage.png?alt=media&#x26;token=44bff284-efed-4dbb-a7ae-cf894869a537" alt=""><figcaption></figcaption></figure>

5. Select **Create Volume**.
6. Reboot the system and verify that the RAID array has been detected by the operating system or OS installation media.

<mark style="color:blue;">**Deleting a RAID Volume via RAID Option ROM**</mark>

Remember that configuring or deleting RAID volumes will erase all data on the hard drives involved, so be sure to back up any important data before proceeding.

To delete a RAID volume, follow these steps:

1. During the system boot-up process, press **Ctrl+I** to enter the RAID configuration utility.
2. Select the RAID volume you want to delete and choose the "Delete RAID Volume" option.
3. Confirm that you want to delete the RAID volume.
4. Save the changes and exit the RAID configuration utility.
5. Reboot the system and verify that the RAID volume has been deleted.

It's important to note that deleting a RAID volume erases all data on the hard drives in the array, so be sure to back up any important data before proceeding. The specific steps may vary depending on the RAID configuration utility and the RAID level in use.

#### <mark style="color:blue;">Windows RAID Installation & Configuration</mark>

RAID volumes can be created, configured and managed from within Windows. This section will outline the requirements and processes for doing so.

#### <mark style="color:blue;">Installing Windows onto a RAID volume (F6 install method)</mark>

{% hint style="info" %}
**Note:** **Enabling VMD Configuration** and **Creating a RAID Volume in BIOS** are prerequisites.
{% endhint %}

To install an OS onto a created RAID volume, perform the following steps to install the Intel® Rapid Storage Technology driver during operating system setup:

1. Download the latest Intel® Rapid Storage Technology driver package and extract the contents to a USB drive.
2. Connect the USB drive to the computer where you want to install Windows.
3. Power off the system.
4. Connect or remotely mount (via BMC) the Windows installation media, then power on the system.
5. When the system starts, press **F11** to bring up the boot menu and select the option to boot from the Windows installation media.
6. When the Windows Setup screen appears, press **F6** to install third-party RAID drivers.
7. Windows Setup will prompt you to insert the driver disk for the RAID controller. Insert the USB drive containing the RAID driver package and click "OK".
8. Windows Setup will scan the USB drive and display a list of compatible RAID drivers. Select the appropriate driver for the RAID controller (e.g., Intel® Rapid Storage Technology) and click "Next".
9. Windows Setup should now detect the created RAID volume(s) and allow Windows to be installed onto them as if they were a single physical disk.
10. Continue with the Windows installation as usual.

#### <mark style="color:blue;">Configuring RAID from within Windows</mark>

**Installing Intel® Rapid Storage Technology Drivers**

Prior to configuring a RAID volume within the Windows OS environment, the required drivers must be downloaded and installed. The following steps outline the process:

1. Download the Intel® Rapid Storage Technology software from the OnLogic website.
2. Save the file to a known location on your computer's hard drive.
3. Locate the file on your hard drive and double-click it.
4. Click Continue (if needed) to launch the installation program.
5. Click Next at the Welcome screen.
6. After reading and reviewing the warnings, click Next.
7. Read the license agreement. To agree and proceed, click Yes to accept the terms and continue.
8. From the Readme file information, click Next. The application files will now be installed.
9. When the installation files have been installed, click Next to continue.
10. Click Yes to the restart option and then click Finish to restart the system.
11. After restarting, an Intel® Rapid Storage Technology icon will appear in the Windows system tray, allowing the Intel® Rapid Storage Technology application to be quickly accessed.

#### Creating a RAID Volume via Intel® Rapid Storage Technology

The following process outlines the procedure for creating a new RAID volume within the Intel® Rapid Storage Technology application from the operating system.

1. Open the Intel® Rapid Storage Technology application.
2. Click the “Create” icon to create a RAID array.
3. In “Select Volume Type”, click “Real-time data protection (RAID 1)”. Click “Next”.
4. In “Configure Volume”, enter a Volume Name of 1-16 characters, select the RAID disks, specify the volume size, and then click “Next”.
5. In “Confirm Volume Creation”, review the selected configuration, then click “Create Volume”.

After the volume is created, it must be initialized, partitioned, and formatted (like a standard physical disk) before it is usable from within the OS. To do so, follow the procedure below:

1. From the Windows Disk Management application, initialize the disk (the newly created RAID volume) so that Logical Disk Management can access it.
2. Right-click the Disk associated with the RAID volume and select “New Simple Volume”.
3. Follow the instructions in the New Simple Volume Wizard.

After the wizard completes, the RAID volume is operational and appears as if it were a single storage drive.

#### **Deleting a RAID Volume via Intel® Rapid Storage Technology**

The following process outlines the procedure for deleting a RAID volume within the Intel® Rapid Storage Technology application from the operating system.

1. Open the Intel® Rapid Storage Technology application.
2. Click the “Manage” icon.
3. Select the RAID volume that is to be deleted.
4. Select “Delete Volume”.

**Warning!** - Deleting a RAID volume will destroy all contents held within the RAID array.

#### <mark style="color:blue;">Linux RAID Installation & Configuration</mark>

For additional information pertaining to utilizing Intel® Rapid Storage Technology with Linux operating systems, please refer to the following whitepaper:

**Intel® Rapid Storage Technology (Intel® RST) in Linux\* whitepaper**

<https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/rst-linux-paper.pdf>

Additionally, as the configuration and implementation details for Intel® RST RAID in Linux may vary between distributions, please refer to the additional documentation below:

**Red Hat Enterprise Linux 8 - Managing RAID**

<https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_storage_devices/managing-raid_managing-storage-devices>

**Red Hat Enterprise Linux 9 - Managing RAID**

<https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/managing_storage_devices/managing-raid_managing-storage-devices>

**Ubuntu Linux - Intel RST**

<https://help.ubuntu.com/rst/>
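Under Linux, Intel® RST arrays are typically managed through mdadm and appear in the kernel's `/proc/mdstat` summary. As an illustrative aid (the sample text and the `parse_mdstat` helper below are hypothetical examples, not taken from the whitepaper), the following Python sketch extracts array name, state, RAID level, and member devices from that format:

```python
# Minimal parser for /proc/mdstat output, the kernel's summary of software
# RAID arrays (including Intel RST/IMSM arrays managed via mdadm).
# Hypothetical helper; SAMPLE mirrors the standard /proc/mdstat layout.
import re

SAMPLE = """\
Personalities : [raid1]
md126 : active raid1 sdb[1] sda[0]
      976759808 blocks super external:/md127/0 [2/2] [UU]

unused devices: <none>
"""

def parse_mdstat(text: str) -> list[dict]:
    arrays = []
    for line in text.splitlines():
        # Array lines look like: "md126 : active raid1 sdb[1] sda[0]"
        m = re.match(r"^(md\d+) : (\S+)(?: (raid\d+))? (.*)$", line)
        if m:
            name, state, level, members = m.groups()
            devices = re.findall(r"(\w+)\[\d+\]", members)
            arrays.append({"name": name, "state": state,
                           "level": level, "devices": devices})
    return arrays

for arr in parse_mdstat(SAMPLE):
    print(f"{arr['name']}: {arr['state']} {arr['level']} on {arr['devices']}")
```

On a live system, the same helper could be applied to the contents of `open("/proc/mdstat").read()` to check that an array is in the expected state.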

## <mark style="color:blue;">5- Support & Compliance</mark>

### <mark style="color:blue;">5.1- Troubleshooting & FAQs</mark>

<details>

<summary><strong>What is BMC, and what is it for?</strong></summary>

General information about the BMC, or Baseboard Management Controller, is discussed in [our blog post here](https://www.onlogic.com/blog/baseboard-management-controller/).

</details>

<details>

<summary><strong>Where are the storage drives shown in the BIOS?</strong></summary>

Storage drives are shown in a few different places in the BIOS depending on the drive type (SATA vs. NVMe) and where it's connected (Oculink vs. M.2 PCIe).\
**SATA**: Advanced -> Storage Configuration -> SATA\_4 – SATA\_7 visible\
**Oculink**: Advanced -> Storage Configuration -> Oculink1\_SATA\_0 – Oculink1\_SATA\_3\
**NVMe**: Advanced -> NVME Configuration -> Shows a list of available drives. Select a specific drive to view additional information about it.\
**RAID**: Advanced -> Intel® Rapid Storage Technology -> Shows any configured RAID arrays; selecting one will display the Selected Disks in that RAID volume.

</details>

<details>

<summary>Reset CMOS</summary>

If the system fails to power on or is unresponsive, clearing the CMOS may help. It will also restore the BIOS to factory defaults.

1. Disconnect all cables and connections (i.e., power, video, etc.) from the system. Follow the [Opening the System](#id-3.3-system-servicing) instructions above to gain access to the motherboard. If a PCIe card is installed, you may need to remove it; follow the [Adding/Removing PCIe card](#servicing-pcie-and-gpu) instructions above, if needed.

<figure><img src="https://web.archive.org/web/20250326175234im_/https://support.onlogic.com/wp-content/uploads/2023/05/image-39-725x1024.png" alt="" height="1024" width="725"><figcaption></figcaption></figure>

2. Locate the CLRCMOS1 pads, indicated by the orange circle.

<figure><img src="https://web.archive.org/web/20250326175234im_/https://support.onlogic.com/wp-content/uploads/2023/05/image-37-675x1024.png" alt="" height="1024" width="675"><figcaption></figcaption></figure>

3. Once you’ve located the CLRCMOS1 pads, use a screwdriver or other conductive tool to short the pads together for at least 30 seconds.

After at least 30 seconds, the CMOS has been cleared. Reassemble the system and power it back up. The unit may restart several times while the motherboard reinitializes.

</details>

<details>

<summary>Reset BMC</summary>

In the event the BMC is non-functional, or the CMOS reset does not restore proper functionality to the system, the BMC can be rebooted manually following these steps.

1. To reboot the BMC, locate the “ID” button on the back of the system.

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fgit-blob-004d14b30bc6cbeaf78a3a50e43d6d97e4b6726e%2F62dccf161a7c9864758f5d1de3f062bfa953b83b48abce44fd143dac2961b9ee.png?alt=media" alt="" width="375"><figcaption></figcaption></figure>

2. Press and hold the button for at least 5 seconds. This will force a reboot of the BMC chip on the motherboard.

</details>

### <mark style="color:blue;">5.2- Security</mark>

#### <mark style="color:blue;">Cyber Security Advisories</mark>

For the latest security advisories concerning OnLogic products, including vulnerability disclosures and necessary updates, please refer to our official Security Advisories page. It is recommended to regularly check this resource for critical security information.\
[**Access Security Advisories**](https://support.onlogic.com/support-articles/security-advisories)

#### <mark style="color:blue;">Physical Security Features</mark>

#### Security Bezel

The Axial AC101 comes with a security bezel to prevent unauthorized access to front ports and buttons. It is secured by a barrel lock, and a key is included in the accessory package.

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FW8UON9Z3E7WQZT7HFflk%2Fimage.png?alt=media&#x26;token=82534e86-8488-457b-96b0-98d18408d64f" alt=""><figcaption></figcaption></figure>

#### Two Point Locking Lid with Intrusion Detection

The chassis lid has a two-point locking mechanism and a built-in intrusion switch.

* **Locking Points:** The first point is a top latch with a tamper-resistant screw, and the second is a thumbscrew at the rear.

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FSb6oMCNPaFa3rwXuXsBU%2Fimage.png?alt=media&#x26;token=8e0e0c7c-5013-4c17-a1c7-e200825ccc9f" alt=""><figcaption><p>Top latch with a tamper-resistant screw</p></figcaption></figure>

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FKgWqnoNCHKcuE0v9mxo9%2Fimage.png?alt=media&#x26;token=8530c5fa-b951-488d-9a3a-444880522237" alt=""><figcaption><p>Thumbscrew at the rear</p></figcaption></figure>

* **Intrusion Detection:** If the lid is removed while the system has power, the intrusion switch will detect the event, and the Chassis Intrusion sensor will be asserted and logged in the BMC event log.

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FFSY0dUOEXRVoLGd5ySKy%2Fimage.png?alt=media&#x26;token=49aecbe1-5c30-4193-92ac-fc43e4c4a23f" alt=""><figcaption></figcaption></figure>

### <mark style="color:blue;">5.3- Regulatory</mark>

#### Compliance Information

Do not open or modify the device. The device uses components that comply with FCC and CE regulations. Modification of the device may void these certifications.\
\
The use of shielded cables for connecting a monitor to the GPU is required to ensure compliance with FCC and CE regulations.

#### CE

The computer system was evaluated for IT equipment EMC standards as a class A device.

The computer complies with the relevant IT equipment directives for the CE mark.

Modification of the system may void the certifications. Testing includes: EN 55032, EN 55035, EN 60601-1, EN 62368-1, EN 60950-1.

#### FCC Statement

This device complies with part 15 of the FCC rules as a Class A device. Operation is subject to the following two conditions: (1) this device may not cause harmful interference and (2) this device must accept any interference received, including interference that may cause undesired operation.

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FGB5rTgxlAatJX2yZjjQH%2Fimage.png?alt=media&#x26;token=d92c15bf-62a4-4b8e-8bdc-5e9cec417014" alt=""><figcaption></figcaption></figure>

#### ISED

This device complies with Industry Canada license-exempt RSS standard(s). Operation is subject to the following two conditions: (1) this device may not cause interference, and (2) this device must accept any interference, including interference that may cause undesired operation of the device.

Le présent appareil est conforme aux CNR d'Industrie Canada applicables aux appareils radio exempts de licence. L'exploitation est autorisée aux deux conditions suivantes: (1) l'appareil ne doit pas produire de brouillage, et (2) l'utilisateur de l'appareil doit accepter tout brouillage radioélectrique subi, même si le brouillage est susceptible d'en compromettre le fonctionnement.

**CAN ICES-003(A) / NMB-003(A)**

#### UKCA

The computer system was evaluated for medical, IT equipment, automotive, maritime and railway EMC standards as a class A device. The computer complies with the relevant IT equipment directives for the UKCA mark.

#### RoHS

<figure><img src="https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FfHZpMBaaUEy3IZTHA96T%2Fimage.png?alt=media&#x26;token=a4c7a967-f574-473d-b1b6-7b7fa1c25a13" alt=""><figcaption></figcaption></figure>

#### Download Documents

{% file src="<https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FNjijcA1T7rrWp56zVoXZ%2FCE%20DoC.pdf?alt=media&token=550cc086-c270-445f-876e-19e1b065a74b>" %}

{% file src="<https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FeDHUCoEpwwGsHSD3Zazm%2FFCC%20%26%20Canada%20ISED%20DoC.pdf?alt=media&token=7fa43153-2fc0-46e1-9234-5930319d83d1>" %}

{% file src="<https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FMq7AgA00ckvcseOkuMP4%2FMTBF%20Summary.pdf?alt=media&token=7c7ac147-8b61-4780-bb3b-915539058e5c>" %}

{% file src="<https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2F0RK8wtW5DRubVPwVFha9%2FAC101%20BIS%20Certification.pdf?alt=media&token=16cad1f2-1f86-4441-a5ac-4bfd35d9f741>" %}

{% file src="<https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2Fq7z8zVQU4Ndn3UmRYDev%2FCalifornia%20Proposition%2065%20Declaration.pdf?alt=media&token=62f08fa3-3c18-4a4d-8085-53ccd14b22a8>" %}

{% file src="<https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FHEtAE2DcFpFdEFPIvHB9%2FUL%20Listing%20Card.pdf?alt=media&token=84c12a53-ebd5-48ff-91a1-0c4cf25d1c1a>" %}

{% file src="<https://3062424488-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlLHqs7kbNoKOFTwGOfH6%2Fuploads%2FEs0CxA9Aj3JzxeAmGWSH%2FTAA%20Compliance.pdf?alt=media&token=13e92203-b5af-4bdf-aba3-290ed071727f>" %}

### <mark style="color:blue;">5.4- Appendices</mark>

#### Revision History

| Date      | Revision History |
| --------- | ---------------- |
| 5/24/2023 | First release of Axial AC101 manual |
| 6/06/2023 | Renamed Section 3 to Internal Connectivity; SSD Header Updates, Drive Placement & Population; added 750W PSU recommendation note (when using a 150W GPU); added RAID Configuration (new Section 7) |
| 4/30/2024 | Updated guidance for FCC and CE regulations when using a GPU; updated FCC statement (added Taiwan and South Korea) |
| 8/12/2024 | Updated Section 1.3 - Product Specifications (power supply input specs); added Section 9.5 - RoHS; updated Section 2.8 - VGA Video |
