Published September 3, 2025 | Version LuxTestUtils-v2.0.1
Software · Open Access

Lux: Explicit Parameterization of Deep Neural Networks in Julia

Description

LuxTestUtils v2.0.1

Diff since LuxTestUtils-v1.7.2
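
The record title highlights Lux's defining design choice: parameters and states are explicit, living outside the model object and passed in at every call. A minimal sketch of that pattern, assuming the documented `Lux.setup` / `Chain` / `Dense` interface (layer sizes here are illustrative, not from this release):

```julia
using Lux, Random

# The model definition itself carries no parameters.
model = Chain(Dense(2 => 8, relu), Dense(8 => 1))

rng = Random.default_rng()
ps, st = Lux.setup(rng, model)   # explicit parameters and states

x = rand(rng, Float32, 2, 4)     # features × batch
y, st_new = model(x, ps, st)     # every call takes ps and st explicitly
```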

Merged pull requests:

  • perf: benchmarking our models against Jax (Flax) (#1000) (@avik-pal)
  • fix: remove Optimisers.jl patch (#1247) (@avik-pal)
  • test: fix tests (#1278) (@avik-pal)
  • fix: try fixing broken tests (#1279) (@avik-pal)
  • show debug instead of warning if cant cuBLAS mult (#1280) (@ExpandingMan)
  • chore: bump crate-ci/typos from 1.30.2 to 1.31.0 (#1281) (@dependabot[bot])
  • chore: remove debug functionalities of reactant (#1285) (@avik-pal)
  • fix: add a ReactantOptimisers wrapper (#1288) (@avik-pal)
  • chore: bump crate-ci/typos from 1.31.0 to 1.31.1 (#1297) (@dependabot[bot])
  • ci: use JuliaFormatter v1 (#1299) (@avik-pal)
  • ci: multiple ci fixes (#1301) (@avik-pal)
  • CompatHelper: bump compat for JET to 0.10 for package LuxTestUtils, (keep existing compat) (#1302) (@github-actions[bot])
  • fix: new reactant version (#1303) (@avik-pal)
  • fix: update Reactant training (#1304) (@avik-pal)
  • fix: chain rules for recurrence tuple inputs (#1306) (@avik-pal)
  • feat: fix return (#1307) (@avik-pal)
  • fix: try increasing the samples in CI (#1309) (@avik-pal)
  • fix: restrict dispatch types for cublaslt (#1311) (@avik-pal)
  • chore: bump julia-actions/julia-format from 3 to 4 (#1313) (@dependabot[bot])
  • feat: use 3rd order derivatives using Reactant (#1315) (@avik-pal)
  • allow SelectDim to take arbitrary views (#1318) (@ExpandingMan)
  • chore: bump crate-ci/typos from 1.31.1 to 1.32.0 (#1320) (@dependabot[bot])
  • docs: fix wrong function names in RNG admonition in interface.md (#1325) (@KristianHolme)
  • CompatHelper: bump compat for DocumenterVitepress to 0.2 for package docs, (keep existing compat) (#1328) (@github-actions[bot])
  • CompatHelper: bump compat for Interpolations to 0.16 for package CIFAR10, (keep existing compat) (#1329) (@github-actions[bot])
  • docs: lstm encoder decoder using Reactant (#1331) (@avik-pal)
  • feat: lower embedding to direct indexing (#1332) (@avik-pal)
  • fix: indexing (#1333) (@avik-pal)
  • fix: reactant gradients + precision config (#1334) (@avik-pal)
  • feat: emit batchnorm ops (#1336) (@avik-pal)
  • fix: run more under with_config (#1340) (@avik-pal)
  • fix: update to use the new RNG from Reactant (#1341) (@avik-pal)
  • fix: use ignore derivatives for Reactant (#1342) (@avik-pal)
  • ci: taming down ci timings (#1343) (@avik-pal)
  • fix: remove onehotarrays patch (#1344) (@avik-pal)
  • fix: bump reactant min version (#1345) (@avik-pal)
  • CompatHelper: bump compat for MKL in [weakdeps] to 0.9 for package LuxLib, (keep existing compat) (#1346) (@github-actions[bot])
  • CompatHelper: bump compat for MKL to 0.9 for package test, (keep existing compat) (#1347) (@github-actions[bot])
  • chore: bump crate-ci/typos from 1.32.0 to 1.33.1 (#1349) (@dependabot[bot])
  • chore: use uv for python (#1350) (@avik-pal)
  • CompatHelper: bump compat for CairoMakie to 0.14 for package DDIM, (keep existing compat) (#1351) (@github-actions[bot])
  • CompatHelper: bump compat for CairoMakie to 0.14 for package GravitationalWaveForm, (keep existing compat) (#1352) (@github-actions[bot])
  • CompatHelper: bump compat for CairoMakie to 0.14 for package LSTMEncoderDecoder, (keep existing compat) (#1353) (@github-actions[bot])
  • CompatHelper: bump compat for CairoMakie to 0.14 for package OptimizationIntegration, (keep existing compat) (#1354) (@github-actions[bot])
  • CompatHelper: bump compat for CairoMakie to 0.14 for package PINN2DPDE, (keep existing compat) (#1355) (@github-actions[bot])
  • CompatHelper: bump compat for CairoMakie to 0.14 for package PolynomialFitting, (keep existing compat) (#1356) (@github-actions[bot])
  • CompatHelper: bump compat for CairoMakie to 0.14 for package RealNVP, (keep existing compat) (#1357) (@github-actions[bot])
  • fix: remove type piracy (#1360) (@avik-pal)
  • CompatHelper: bump compat for CairoMakie to 0.15 for package DDIM, (keep existing compat) (#1362) (@github-actions[bot])
  • CompatHelper: bump compat for CairoMakie to 0.15 for package GravitationalWaveForm, (keep existing compat) (#1363) (@github-actions[bot])
  • CompatHelper: bump compat for CairoMakie to 0.15 for package LSTMEncoderDecoder, (keep existing compat) (#1364) (@github-actions[bot])
  • CompatHelper: bump compat for CairoMakie to 0.15 for package OptimizationIntegration, (keep existing compat) (#1365) (@github-actions[bot])
  • CompatHelper: bump compat for CairoMakie to 0.15 for package PINN2DPDE, (keep existing compat) (#1366) (@github-actions[bot])
  • CompatHelper: bump compat for CairoMakie to 0.15 for package PolynomialFitting, (keep existing compat) (#1367) (@github-actions[bot])
  • CompatHelper: bump compat for CairoMakie to 0.15 for package RealNVP, (keep existing compat) (#1368) (@github-actions[bot])
  • fix: use Int32 for GCN cora example (#1369) (@avik-pal)
  • Fix backticks in examples/Basics (#1370) (@abhro)
  • Fix up minor docs/docstrings formatting (#1371) (@abhro)
  • feat: forwarddiff support for gather/scatter (#1373) (@avik-pal)
  • fix: handle multi-device reactant (#1374) (@avik-pal)
  • feat: serialization to tensorflow saved model (#1375) (@avik-pal)
  • chore: update version for release (#1376) (@avik-pal)
  • CompatHelper: add new compat entry for PythonCall at version 0.9 for package test, (keep existing compat) (#1377) (@github-actions[bot])
  • fix: missing variable (#1379) (@avik-pal)
  • chore: bump crate-ci/typos from 1.33.1 to 1.34.0 (#1382) (@dependabot[bot])
  • fix: State returned by MultiHeadAttention is incompatible with the initialized state (#1384) (@yeruoforever)
  • feat: annotate important parts of training loop (#1385) (@avik-pal)
  • fix: bypass fused kernels for mooncake (for now) (#1387) (@avik-pal)
  • feat: AutoMooncake for training lux models (#1388) (@avik-pal)
  • CompatHelper: add new compat entry for Mooncake at version 0.4 for package test, (keep existing compat) (#1389) (@github-actions[bot])
  • docs: cleanup CIFAR 10 example dependencies (#1391) (@avik-pal)
  • CompatHelper: bump compat for AMDGPU in [weakdeps] to 2 for package LuxLib, (keep existing compat) (#1394) (@github-actions[bot])
  • CompatHelper: bump compat for AMDGPU in [weakdeps] to 2 for package MLDataDevices, (keep existing compat) (#1395) (@github-actions[bot])
  • CompatHelper: bump compat for AMDGPU in [weakdeps] to 2 for package WeightInitializers, (keep existing compat) (#1396) (@github-actions[bot])
  • De-fragment markdown list in distributed_utils.md (#1397) (@abhro)
  • Precompile environment before running tutorials (#1398) (@abhro)
  • Add meaning of tuple elements in docs/tutorials.jl (#1399) (@abhro)
  • Split function into two for dataset/dataloader concept separation (#1400) (@abhro)
  • feat: add preserves_state_type to the interface (#1401) (@avik-pal)
  • refactor: move stateful layer into LuxCore (#1402) (@avik-pal)
  • Add and update links to external packages/resources in docs (#1403) (@abhro)
  • Change doc title in docs/src/introduction/index.md (#1404) (@abhro)
  • chore: bump julia-actions/julia-downgrade-compat from 1 to 2 (#1405) (@dependabot[bot])
  • fix a typo in index.md (#1407) (@rzyu45)
  • Suppress output in docs and examples (#1408) (@abhro)
  • Add more explanatory text in tutorials' data generation (#1409) (@abhro)
  • Fix typos and fix up minor docs formatting (#1410) (@abhro)
  • Use write(filename, obj) for file I/O in docs (#1411) (@abhro)
  • Split up steps in PINN tutorial (#1412) (@abhro)
  • Use parenthesized version of @printf for better code formatting (#1413) (@abhro)
  • ci: streamline ci testing (#1415) (@avik-pal)
  • Improve logic for printing updates on an epoch (#1417) (@abhro)
  • chore: bump crate-ci/typos from 1.34.0 to 1.35.3 (#1420) (@dependabot[bot])
  • chore: fix mooncake circular dep (#1421) (@avik-pal)
  • chore: bump actions/checkout from 4 to 5 (#1423) (@dependabot[bot])
  • chore: bump crate-ci/typos from 1.35.3 to 1.35.4 (#1424) (@dependabot[bot])
  • Allow connection in Parallel and fusion in BranchLayer to be layers (#1425) (@Copilot)
  • Add comprehensive GitHub Copilot instructions with JuliaFormatter v1 and temporary environments (#1427) (@Copilot)
  • test: Enzyme now works for upsample in 1.10 (#1428) (@avik-pal)
  • docs: Fix wrong function name (#1429) (@agdestein)
  • ci: enable gh actions telemetry (#1430) (@avik-pal)
  • feat: better precision control for Reactant training API (#1431) (@avik-pal)
  • fix: support CompactLayer in freeze (#1432) (@avik-pal)
  • docs: improvements to tutorials (#1433) (@avik-pal)
  • docs: add comprehensive documentation for supporting both Flux and Lux frameworks (#1434) (@Copilot)
  • docs: fix tutorial links (#1436) (@avik-pal)
  • feat: compact printing (#1437) (@avik-pal)
  • feat: Qwen3 model with weight loading from huggingface (#1438) (@avik-pal)
  • CompatHelper: bump compat for JLD2 to 0.6 for package DDIM, (keep existing compat) (#1439) (@github-actions[bot])
  • CompatHelper: bump compat for JLD2 to 0.6 for package ImageNet, (keep existing compat) (#1440) (@github-actions[bot])
  • CompatHelper: bump compat for JLD2 to 0.6 for package SimpleRNN, (keep existing compat) (#1441) (@github-actions[bot])
  • chore: bump crate-ci/typos from 1.35.4 to 1.35.5 (#1442) (@dependabot[bot])
  • feat: add RMSNorm Layer (#1443) (@avik-pal)
  • feat: expose direct functions for computing RoPE (#1444) (@avik-pal)
  • refactor: cleanup WeightInitializers to reduce extensions (#1447) (@avik-pal)
  • fix: propagate runtime activity from AutoEnzyme (#1448) (@avik-pal)
  • fix: more streamlined testing (#1455) (@avik-pal)
  • ci: run more CUDA tests in parallel (#1459) (@avik-pal)
  • chore: bump crate-ci/typos from 1.35.5 to 1.35.7 (#1460) (@dependabot[bot])
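
Several of the merged PRs above touch the training API (#1385 annotating the training loop, #1388 AutoMooncake support, #1431 precision control for Reactant training). A minimal sketch of one training step, assuming Lux's documented `Training.TrainState` / `Training.single_train_step!` interface with a Zygote backend; the model and data here are illustrative:

```julia
using Lux, Optimisers, Random, Zygote

model = Dense(2 => 1)
ps, st = Lux.setup(Random.default_rng(), model)

# Bundle model, parameters, states, and optimiser into a TrainState.
tstate = Training.TrainState(model, ps, st, Adam(0.01f0))

x = rand(Float32, 2, 32)
y = rand(Float32, 1, 32)

# One optimisation step; other supported backends (e.g. AutoEnzyme,
# AutoMooncake) can be swapped in for AutoZygote().
_, loss, _, tstate = Training.single_train_step!(
    AutoZygote(), MSELoss(), (x, y), tstate)
```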

Closed issues:

  • Let connection in Parallel be a layer (#377)
  • Export trained model for Tensorflow/PyTorch/C++? (#453)
  • Externalize gradient computations to DifferentiationInterface.jl? (#544)
  • Per-Layer Profiling (#864)
  • Integration of oneDNN for CPU operations (#1013)
  • Optimisers.jl patch for Reactant Support (#1146)
  • Emit Batchnorm Op for Training (#1208)
  • Add a AutoMooncake dispatch for training Lux models (#1238)
  • How to maintain a package that supports both Flux & Lux? (#1243)
  • Convolutional VAE for MNIST using Reactant failed to produce right results (#1274)
  • GPU Inefficiency in Gradient Computation with Custom Recurrence (#1284)
  • PINN2DPDE broke in the latest Optimisers Patch removal (#1286)
  • Duplicating Scalars for Optimisers prevents CSE (#1289)
  • Reactant 0.2.61 produces incorrect gradients (#1292)
  • missing or incorrect ProjectTo method breaks Recurrence with Zygote (#1305)
  • dense operations fail on views on nvidia due to missing method (#1308)
  • Gradient of while loop with Reactant seems broken (#1316)
  • getting concatenating and splitting working with Reactant/Enzyme (#1317)
  • Allow freeze for @compact defined model layers (#1319)
  • LuxLib doing LinearAlgebra.mul! on non-arrays can cause fallback to errors (#1322)
  • Don't materialize OneHotArrays with ReactantDevice (#1326)
  • gpu_device(device_id) attaches to first/zeroth device regardless of device_id (#1330)
  • Preference to control precision config (#1335)
  • LSTMEncoderDecoder example broken (#1337)
  • could not load library Reactant.TracedLinearAlgebra on Windows (#1339)
  • Error when freezing part of a model + Reactant (#1348)
  • TrainState with mutliple Reactant Devices (#1358)
  • ComponentArrays.jl type piracy? (#1359)
  • Reactant.jl pass pipeline broke GCN Cora (#1361)
  • jacobian_vector_product for Embedding (#1372)
  • UndefVarError (:fname, LuxReactantExt) (#1378)
  • Unable to evaluate a Lux model at a specific parameter set. (#1380)
  • State Tuple returned by MultiHeadAttention is incompatible with the initialized tuple (#1383)
  • MLIR Operand #1 doesn't dominate usage: Reactant + Cifar10 Example (#1386)
  • Conversion of Arrays of StaticArray to Device (#1406)
  • Move LuxLib Mooncake Ext to Mooncake (#1416)
  • Question: If I have a Lux model with a single input, how I I create one with two inputs (#1419)
  • ✨ Set up Copilot instructions (#1426)
  • Complex kaiming_uniform initializations, only positive imaginary weights (#1445)
  • Constant memory is stored (or returned) to a differentiable variable. when broadcasting vector (#1446)
  • Bump LuxCUDA compat for CUDA 13 (#1449)
  • Reactant testing rework (#1454)

Notes

If you use this software, please cite it.

Files

LuxDL/Lux.jl-LuxTestUtils-v2.0.1.zip (14.1 MB)
md5:e931a697fea53f6bafe12c001251591c