ROCm 7.1 on Fedora 43

ROCm is AMD's GPU compute stack, and you need it to run GPU-accelerated PyTorch code on Team Red's hardware.

AMD ROCm 7.1 doesn't officially support Fedora, and Fedora hasn't yet updated its own ROCm packages beyond 6.4. Since 7.1 brings a big performance bump on older hardware and support for new hardware (like Strix Halo), I decided to try installing the Rocky Linux (EL9) build of ROCm 7.1 on Fedora 43, and it worked more or less without issue.

Below are the steps I used to set up ROCm 7.1 on my system, should you want to do the same.

I verified these steps on a system with a discrete GPU (Radeon RX 7900 XTX) and integrated graphics (Radeon RX 8060S); both worked fine. There were still some crashes and memory-allocation errors, but that sadly just seems to be the state of amdgpu & ROCm today.

Preparations

These steps are listed in AMD's installation instructions:

sudo dnf install dnf-plugin-config-manager python3-setuptools python3-wheel
sudo crb enable

If you have Fedora's own rocm packages installed, it's best to uninstall them first, since their dependencies are incompatible with the AMD packages and dnf won't simply upgrade them in place:

sudo dnf remove rocm rocm-*
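
To double-check that nothing was left behind, a query along these lines (the grep pattern is only an illustration) should come back empty before you continue:

dnf list --installed | grep -i rocm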

Add ROCm repositories to your yum config

sudo tee /etc/yum.repos.d/rocm.repo <<EOF
[ROCm]
name=rocm
baseurl=https://repo.radeon.com/rocm/el9/latest/main/
enabled=1
priority=50
gpgcheck=1
gpgkey=https://repo.radeon.com/rocm/rocm.gpg.key

[AMDGraphics]
name=rocmgraphics
baseurl=https://repo.radeon.com/graphics/latest/el/9.6/main/x86_64/
enabled=1
priority=50
gpgcheck=1
gpgkey=https://repo.radeon.com/rocm/rocm.gpg.key
EOF
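
With the repo file in place, it doesn't hurt to refresh the package metadata and confirm that both repositories are visible; something like the following should do it (the grep is only there to filter the output):

sudo dnf clean all
sudo dnf repolist --enabled | grep -i -E 'rocm|amdgraphics'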

Install ROCm and dependencies

You could just install the rocm package at this point, but I typically install a few optional development packages as well, since they are required to build llama.cpp's ROCm backend, among other things.

sudo dnf install rocm rocm-developer-tools hipblas-devel hip-devel rocwmma-devel rocm-opencl-devel --allowerasing

The --allowerasing option shouldn't be required, but if you have any other OpenCL or older ROCm packages installed, it lets dnf replace them with the newer versions provided in the ROCm repository.
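
Once the install finishes, it's worth confirming that the runtime actually sees your GPU before moving on. The rocm meta-package should pull in the rocminfo and rocm-smi utilities, so a quick check like the following ought to list your device's gfx name and show it in the SMI table (if it complains about permissions, your user likely needs to be in the render and video groups):

rocminfo | grep -i gfx
rocm-smi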

PyTorch setup

If you want to run machine learning / AI workloads using PyTorch, consider setting the following environment variable:

export TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1

This enables AMD's experimental ahead-of-time Triton math library (AOTriton), which is required for using FlashAttention on ROCm.
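
To confirm that a ROCm-enabled PyTorch build picks up both the GPU and the variable, a sanity check along these lines should work; note that ROCm builds of PyTorch still expose the device through the torch.cuda API, and torch.version.hip reports the HIP version the wheel was built against:

export TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1
python3 -c 'import torch; print(torch.version.hip); print(torch.cuda.is_available()); print(torch.cuda.get_device_name(0))'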