Limitations and recommended settings#

This section provides information on software and configuration limitations.

Note
For ROCm on Instinct known issues, refer to the AMD ROCm Documentation.

For OpenMPI limitations, see ROCm UCX OpenMPI on GitHub.

6.4.1 release known issues#

  • Intermittent script failure may be observed while running Stable Diffusion workloads with ONNX Runtime and MIGraphX. Users experiencing this issue are recommended to use MXR files instead of rebuilding the model as a temporary workaround (see the MIGraphX sketch after this list).

  • Intermittent system or application crash may be observed while running Luxmark in conjunction with other compute workloads on Radeon™ RX 9000 series graphics products. Users experiencing this issue are recommended to shut down other applications and workloads while running Luxmark.

  • Intermittent script failure (out of memory) may be observed while running high-memory LLM workloads with multiple GPUs on Radeon RX 9060 series graphics products.

  • Intermittent script failure may be observed while running BERT training workloads with JAX.

  • Intermittent script failure may be observed while running Stable Diffusion 2.1 FP16 workloads with JAX.

  • Intermittent script failure (out of memory) may be observed while running Llama2 FP16 workloads with ONNX Runtime and MIGraphX.

  • Intermittent system or application crash may be observed while running TensorFlow ResNet50 training workloads.

  • Increased memory consumption may be observed while running TensorFlow ResNet50 training workloads.

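As noted in the Stable Diffusion with ONNX Runtime and MIGraphX item above, compiling a model once and reusing the saved MXR file avoids rebuilding it on every run. The following is a minimal sketch using the MIGraphX Python API; the ONNX model path and MXR file name are placeholders, not values from this guide:

import migraphx

# Compile the ONNX model once and serialize the compiled program to an MXR file.
prog = migraphx.parse_onnx("model.onnx")        # placeholder model path
prog.compile(migraphx.get_target("gpu"))
migraphx.save(prog, "model.mxr")                # placeholder output file

# On subsequent runs, load the precompiled program instead of rebuilding it.
prog = migraphx.load("model.mxr")
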
WSL specific issues#

  • Intermittent script failure may be observed while running Llama 3 inference workloads with vLLM in WSL2. Users experiencing this issue are recommended to follow the vLLM setup instructions.

  • Intermittent script failure or driver timeout may be observed while running Stable Diffusion 3 inference workloads with JAX.

  • Intermittent script failure or driver timeout may be observed while running Llama3 or ChatGLM2 inference workloads with vLLM.

  • Lower than expected performance may be observed while running inference workloads with JAX in WSL2.

  • Intermittent script failure may be observed while running ResNet50, BERT, or InceptionV3 training workloads with ONNX Runtime.

  • Output error message (resource leak) may be observed while running Llama 3.2 workloads with vLLM.

  • Output error message (VaMgr) may be observed while running PyTorch workloads in WSL2.

  • Intermittent script failure or driver timeout may be observed while running Stable Diffusion inference workloads with TensorFlow.

  • Intermittent application crash may be observed while running Stable Diffusion workloads with ComfyUI and MIGraphX on Radeon™ RX 9060 series graphics products.

  • Intermittent script failure may occur while running Stable Diffusion 2 workloads with PyTorch and MIGraphX.

  • Intermittent script failure may occur while running LLM workloads with PyTorch on Radeon™ PRO W7700 graphics products.

  • Lower than expected performance (compared to native Linux) may be observed while running inference workloads (e.g., Llama2, BERT) in WSL2.

Important!
Radeon™ PRO Series graphics cards are not designed nor recommended for datacenter usage. Use in a datacenter setting may adversely affect manageability, efficiency, reliability, and/or performance. GD-239.

Important!
ROCm is not officially supported on any mobile SKUs.

Multi-GPU configuration#

AMD has identified common errors that can occur when running ROCm™ on Radeon™ multi-GPU configurations, along with applicable recommendations.

See mGPU known issues and limitations for the complete list.

Windows Subsystem for Linux (WSL)#

This section covers recommended settings and known limitations for running ROCm workloads under WSL.

WSL recommended settings#

Optimizing GPU utilization
WSL overhead is a known bottleneck for GPU utilization. Increasing the batch size of operations loads the GPU more fully and reduces the time required for AI workloads. Optimal batch sizes vary by model and hyperparameters.
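
As a rough illustration of the batch-size recommendation, the following PyTorch sketch uses a placeholder dataset and model (none of these names come from this guide); the batch_size value passed to the DataLoader is the knob to increase:

import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset: 2,048 random 3x64x64 samples with 10 classes.
data = TensorDataset(torch.randn(2048, 3, 64, 64),
                     torch.randint(0, 10, (2048,)))

# Under WSL, a small batch size (e.g. 8) leaves the GPU underutilized because
# per-submission overhead dominates; a larger batch (e.g. 64 or 128) keeps the
# GPU busier for each submission.
loader = DataLoader(data, batch_size=128, shuffle=True, num_workers=4)

device = "cuda" if torch.cuda.is_available() else "cpu"  # ROCm PyTorch exposes the GPU via the torch.cuda API
model = torch.nn.Conv2d(3, 16, kernel_size=3).to(device)

for images, labels in loader:
    out = model(images.to(device))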

ROCm support in WSL environments#

Due to WSL architectural limitations for the native Linux User Kernel Interface (UKI), rocm-smi is not supported.

Issue: UKI does not currently support rocm-smi

Limitations: No current support for:
  • Active compute processes
  • GPU utilization
  • Modifiable state features

Running PyTorch in virtual environments#

Running PyTorch in virtual environments requires a manual libhsa-runtime64.so update.

When using the WSL usecase and the hsa-runtime-rocr4wsl-amdgpu package (installed with PyTorch wheels), users are required to update to a WSL-compatible runtime library.

Solution:

Enter the following commands:

# Locate the installed torch package
location=$(pip show torch | grep Location | awk -F ": " '{print $2}')
cd ${location}/torch/lib/
# Remove the runtime shipped with the wheel and replace it with the
# WSL-compatible copy provided by the ROCm installation
rm libhsa-runtime64.so*
cp /opt/rocm/lib/libhsa-runtime64.so.1.2 libhsa-runtime64.so
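
To confirm that PyTorch still initializes correctly after the copy, a quick check such as the following can be run inside the same virtual environment (a minimal sketch; it only verifies that the GPU is visible, not the specific runtime version):

import torch

# ROCm builds of PyTorch expose the GPU through the torch.cuda namespace.
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))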
