Published November 18, 2025 | Version v2.0.0-rc.4

FLAME GPU

Description

FLAME GPU 2.0.0-rc.4 is the fifth release-candidate for FLAME GPU 2.0.0.

As a release candidate, the API should be stable unless issues found during the release-candidate phase require a breaking change to resolve.

Release candidate 4 is a relatively small release, with several important fixes for issues identified after the release of 2.0.0-rc.3:

  • Fixed an unintended breaking change to agent variable and environment property logging that was introduced in v2.0.0-rc.3: in JSON agent and environment output files, individual scalar values were output as single-element lists rather than as scalars. The behaviour now matches releases prior to v2.0.0-rc.3 (#1340)
  • Fixed an incorrect InvalidCUDAComputeCapability exception when compiling with all-major (the default) or all (#1339, #1342)
  • Fixed custom handling of CMAKE_CUDA_ARCHITECTURES to support compute capabilities with a or f suffixes, such as 90a (#1284, #1342)

For users upgrading from 2.0.0-rc.2 or older, be aware of the following breaking changes:

[!IMPORTANT] FLAME GPU 2.0.0-rc.3 and newer changed licensing terms from MIT to a dual-license model of AGPL 3.0 and commercial. Contributors are now required to sign our CLA via our CLA bot, which prompts new contributors when they open a pull request.

  • Added nlohmann::json (replacing RapidJSON), with some breaking changes for nan/inf values (#1277). Special limit values (e.g. +/- nan/inf) are written to JSON as null and read from JSON as NaN.
  • Removed CUDA 11 support, which drops support for Kepler (sm_35) hardware. Supported CUDA versions are now 12.x to 13.x (Windows requires >= 12.4) (#1302)
  • Switched from C++17 to C++20; CMake >= 3.25.2 is now required (#1302)
  • Removed Windows Visual Studio 2019 support (#1302)
  • Removed support for Python < 3.10 and added support for 3.13 and 3.14. Supported Python versions are now 3.10-3.14 (#1318, #1320)
  • Python wheels are now produced on CI using ManyLinux_2_28 instead of ManyLinux2014, so they require glibc >= 2.28 unless built from source (#1228)
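The nan/inf round-trip described above can be illustrated with a minimal Python sketch. This is illustrative only: FLAME GPU implements this in C++ via nlohmann::json, and the helper names here are hypothetical.

```python
import json
import math

def to_json_value(v):
    # Non-finite floats (nan, +inf, -inf) are serialised as JSON null,
    # mirroring the nlohmann::json behaviour described above.
    if isinstance(v, float) and not math.isfinite(v):
        return None
    return v

def from_json_value(v):
    # On read, null is mapped back to NaN. Note the asymmetry: +/-inf
    # cannot be recovered, as all special values collapse to null.
    return float("nan") if v is None else v

record = {"energy": float("inf"), "mass": 1.5}
text = json.dumps({k: to_json_value(v) for k, v in record.items()})
print(text)  # {"energy": null, "mass": 1.5}

restored = {k: from_json_value(v) for k, v in json.loads(text).items()}
print(math.isnan(restored["energy"]))  # True
```

The practical consequence for users is that the sign and infinity/NaN distinction of special values is lost on a write/read round trip.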

See the 2.0.0-rc.4 changelog & 2.0.0-rc.3 changelog for more detail.

This release candidate requires:

  • CMake >= 3.25.2
  • CUDA >= 12.0 (or >= 12.4 on Windows) and a Compute Capability >= 5.0 NVIDIA GPU.
  • C++20-capable host C++ compiler, compatible with the installed CUDA version (e.g. VS2022+, or GCC >= 10)
  • git
  • Python >= 3.10 (optional)
  • MPI >= 3 (optional)

For full version requirements, please see the Requirements section of the README.

Documentation and Support

Installing Pre-compiled Python Binary Wheels

Python binary wheels for pyflamegpu are not currently distributed via pip. However, they can be installed from the pyflamegpu wheelhouse, whl.flamegpu.com, or by downloading the wheel artifacts from this release and installing the local file via pip.

To install pyflamegpu 2.0.0rc4 from whl.flamegpu.com, install via pip with --extra-index-url or --find-links and the appropriate URI from whl.flamegpu.com. E.g. to install the latest pyflamegpu build for CUDA 13.x without visualisation:

python3 -m pip install --extra-index-url https://whl.flamegpu.com/whl/cuda130/ pyflamegpu

To install pyflamegpu 2.0.0rc4 manually, download the appropriate .whl file for your platform and install it into your Python environment using pip. E.g. for CUDA 13 under Linux with Python 3.10:

python3 -m pip install pyflamegpu-2.0.0rc4+cuda130-cp310-cp310-linux_x86_64.whl

CUDA 12.x (>= 12.4 on Windows) or CUDA 13.x, including nvrtc, must be installed on your system, which must contain a Compute Capability 5.0 or newer NVIDIA GPU.

Python binary wheels are available for x86_64 systems with:

  • Linux with glibc >= 2.28 (e.g. Ubuntu >= 18.10, RHEL/CentOS >= 8, Debian >= 10)
  • Windows 10+
  • Python 3.10 - 3.14
  • CUDA 12.x (>= 12.4 on Windows)
  • CUDA 13.x
  • Wheels with visualisation enabled or disabled.
    • Note that Linux wheels do not package shared object dependencies at this time, so are not strictly ManyLinux compliant

Wheel filenames are of the format pyflamegpu-2.0.0rc4+cuda<CUDA>[.vis]-cp<PYTHON>-cp<PYTHON>-<platform>.whl, where:

  • cuda<CUDA> encodes the CUDA version used
  • .vis indicates visualisation support is included
  • cp<PYTHON> identifies the python version
  • <platform> identifies the OS/CPU Architecture

For Example:

  • pyflamegpu-2.0.0rc4+cuda120-cp310-cp310-linux_x86_64.whl is a CUDA 12.x compatible wheel (built with CUDA 12.0), without visualisation support, for Python 3.10 on Linux x86_64.
  • pyflamegpu-2.0.0rc4+cuda130.vis-cp314-cp314-win_amd64.whl is a CUDA 13.x compatible wheel, with visualisation support, for Python 3.14 on 64-bit Windows.
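The filename scheme above can be unpacked mechanically. The following Python sketch is a hypothetical helper of my own, with a regex inferred from the format described here; it is not part of pyflamegpu:

```python
import re

# Pattern inferred from the documented wheel filename format:
# pyflamegpu-<version>+cuda<CUDA>[.vis]-cp<PYTHON>-cp<PYTHON>-<platform>.whl
WHEEL_RE = re.compile(
    r"pyflamegpu-(?P<version>[^+]+)\+cuda(?P<cuda>\d+)(?P<vis>\.vis)?"
    r"-cp(?P<py>\d+)-cp\d+-(?P<platform>.+)\.whl"
)

def describe_wheel(name):
    """Split a pyflamegpu wheel filename into its documented components."""
    m = WHEEL_RE.match(name)
    if not m:
        raise ValueError(f"unrecognised wheel name: {name}")
    return {
        "version": m.group("version"),
        "cuda": m.group("cuda"),
        "vis": m.group("vis") is not None,
        "python": m.group("py"),
        "platform": m.group("platform"),
    }

info = describe_wheel("pyflamegpu-2.0.0rc4+cuda130.vis-cp314-cp314-win_amd64.whl")
print(info["cuda"], info["vis"])  # 130 True
```

This can help scripted downloads pick the right wheel for a given CUDA version, Python version, and platform.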

Building FLAME GPU from Source

For instructions on building FLAME GPU from source, please see the Building FLAME GPU section of the README.

<a id="2.0.0-rc.4-known-issues"></a>

Known Issues

  • Warnings and a loss of performance due to hash collisions in device code (#356)
  • Multiple known areas where performance can be improved (e.g. #449, #402)
  • Windows Driver 460 may encounter invalid argument errors when embedded PTX is used to execute on a higher compute capability device. Upgrading to driver 461.09 (CUDA 11.2 Update 1) or newer, or ensuring you compile with the correct CMAKE_CUDA_ARCHITECTURES, appears to resolve this issue. See #1253 for more information.

Notes

If you use this software, please cite both the article from preferred-citation and the software itself.

Files

FLAMEGPU/FLAMEGPU2-v2.0.0-rc.4.zip (1.4 MB)
md5:b492d4253a9cca05e1d053d0cb8cbc6e
