Published February 17, 2026 | Version v5
Technical note | Open Access

Expectation-Maximization Style Algorithm for Task-Driven Differentiable Renderer Optimization

Authors/Creators

Yan Yang

Description

Author contact: yan.yang.research [at] proton.me

The author is developing minimal code (backpropagation that tunes Gray-Scott reaction-diffusion parameters to match a target class of patterns) here: https://github.com/Yan-Yang-bot/bp2renderer.
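
Below is a minimal, hedged sketch of that idea, assuming a standard PyTorch setup: an unrolled Gray-Scott simulation whose feed/kill rates are tuned by gradient descent against a fixed target pattern. It is an illustration only, not the repository's actual code; the grid size, step count, rate values, and the pixelwise target (a simplification of matching a pattern type) are arbitrary choices.

    import torch
    import torch.nn.functional as Fn

    # 5-point Laplacian stencil for the diffusion terms.
    LAP = torch.tensor([[[[0., 1., 0.],
                          [1., -4., 1.],
                          [0., 1., 0.]]]])

    def laplacian(x):
        # Periodic boundary conditions via circular padding.
        x = Fn.pad(x, (1, 1, 1, 1), mode="circular")
        return Fn.conv2d(x, LAP)

    def simulate(feed, kill, steps=200, n=64, du=0.16, dv=0.08, dt=1.0):
        u = torch.ones(1, 1, n, n)
        v = torch.zeros(1, 1, n, n)
        v[..., n//2-4:n//2+4, n//2-4:n//2+4] = 0.5  # seed activator square
        for _ in range(steps):  # every step stays on the autograd tape
            uvv = u * v * v
            u = u + dt * (du * laplacian(u) - uvv + feed * (1 - u))
            v = v + dt * (dv * laplacian(v) + uvv - (feed + kill) * v)
        return v

    # Hypothetical target: a pattern produced by known parameter values.
    target_v = simulate(torch.tensor(0.035), torch.tensor(0.065)).detach()

    feed = torch.tensor(0.030, requires_grad=True)
    kill = torch.tensor(0.060, requires_grad=True)
    opt = torch.optim.Adam([feed, kill], lr=1e-3)
    for it in range(100):
        opt.zero_grad()
        loss = Fn.mse_loss(simulate(feed, kill), target_v)
        loss.backward()  # backprop through all unrolled time steps
        opt.step()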


This technical note outlines a conceptual EM-style alternating optimization framework for task-driven differentiable renderer tuning. It proposes a theoretically plausible algorithm and discusses open issues such as training stability and long-range gradient propagation, without formal proof or experimental validation. The document serves as an archival record of a partially developed idea from an unfinished PhD-era project.
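
Since the algorithm itself lives in the attached PDF rather than this record, the following is only a schematic sketch, assuming one plausible reading of "EM-style alternating optimization": alternate between fitting latent renderer inputs with the renderer parameters frozen (an E-like step) and updating the renderer parameters with the latents frozen (an M-like step). The stand-in renderer, task loss, and all names are hypothetical, not taken from the note.

    import torch

    # Hypothetical differentiable "renderer" and downstream task loss.
    def renderer(z, theta):
        return torch.tanh(z @ theta)

    def task_loss(img):
        return ((img - 1.0) ** 2).mean()

    z = torch.randn(8, 4, requires_grad=True)       # latent inputs (E-like variables)
    theta = torch.randn(4, 16, requires_grad=True)  # renderer parameters (M-like variables)
    z_opt = torch.optim.Adam([z], lr=1e-2)
    theta_opt = torch.optim.Adam([theta], lr=1e-2)

    for outer in range(50):
        for _ in range(5):  # E-like step: fit latents to the current renderer
            z_opt.zero_grad()
            task_loss(renderer(z, theta.detach())).backward()
            z_opt.step()
        for _ in range(5):  # M-like step: update the renderer under fixed latents
            theta_opt.zero_grad()
            task_loss(renderer(z.detach(), theta)).backward()
            theta_opt.step()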

This technical note presents substantial original ideas and formulations. If you intend to use or extend these ideas in academic publications, please reach out to discuss appropriate citation or collaboration arrangements.

Notes

Version 2: added details on intermediate supervision and truncated BPTT.
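
A minimal sketch of truncated BPTT with intermediate supervision, assuming toy one-step dynamics (the step function, target, and segment length below are illustrative stand-ins, not the note's setup): the state is detached between segments so gradients never cross a segment boundary, and each segment end receives its own supervision signal.

    import torch

    def step(u, theta):
        # Stand-in dynamics relaxing toward theta (steady state u = theta).
        return u + 0.1 * (theta - u)

    theta = torch.tensor(0.5, requires_grad=True)
    opt = torch.optim.SGD([theta], lr=1e-2)
    u = torch.full((32,), 0.1)
    intermediate_target = torch.ones(32)

    total_steps, segment = 100, 10
    for t in range(0, total_steps, segment):
        opt.zero_grad()
        for _ in range(segment):
            u = step(u, theta)
        # Intermediate supervision at the end of each segment.
        loss = ((u - intermediate_target) ** 2).mean()
        loss.backward()
        opt.step()
        u = u.detach()  # truncation: cut the graph before the next segment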

Version 3:

  • added a discussion of robustness against collapsing to shortcut solutions;
  • discussed an alternative solution: implicitly solving the steady state for dU/dθ (a sketch follows this list).
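
For the second bullet, one standard reading is the implicit function theorem: at a steady state with residual f(U*, θ) = 0, dU*/dθ = -(∂f/∂U)^(-1) (∂f/∂θ), so the simulation only needs to be run, never differentiated through. A toy sketch, where the residual f and all sizes are hypothetical stand-ins:

    import torch

    def f(U, theta):
        # Stand-in fixed-point residual; f(U*, theta) = 0 at the steady state.
        return torch.tanh(theta * U) - U + 0.1

    theta = torch.tensor(0.3)

    # Reach the steady state by plain (non-differentiated) iteration.
    U = torch.zeros(4)
    for _ in range(500):
        U = U + 0.5 * f(U, theta)

    J_U = torch.autograd.functional.jacobian(lambda u: f(u, theta), U)   # df/dU
    J_th = torch.autograd.functional.jacobian(lambda t: f(U, t), theta)  # df/dtheta
    dU_dtheta = torch.linalg.solve(J_U, -J_th)  # implicit gradient dU*/dtheta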

Version 4:

  • Conducted experiments to verify the viability of backpropagation through unrolled dynamical-system time steps (zooming in on the top-left corner of the full design graph), with a detailed diagnostic report in an appendix, including post-experiment probes of the loss landscape in parameter space.
  • Discussed three possible directions for the next steps, including PINNs as a possible option for the newly proposed direction (c); a hedged sketch follows this list.
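
As an illustration only of what a PINN in direction (c) might look like (the 1-D reaction-diffusion stand-in PDE, the network, and the collocation scheme below are assumptions, not the note's proposal): a small network u(x) is trained so that a steady-state residual vanishes at random collocation points.

    import torch

    u_net = torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
    opt = torch.optim.Adam(u_net.parameters(), lr=1e-3)

    for it in range(1000):
        x = torch.rand(128, 1, requires_grad=True)  # collocation points
        u = u_net(x)
        du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
        d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
        residual = 0.1 * d2u - u * (1 - u)      # stand-in steady-state PDE
        bc = u_net(torch.zeros(1, 1)) - 1.0     # stand-in boundary condition
        loss = residual.pow(2).mean() + bc.pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()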

Files (1.2 MB)

Name                                 Size       Checksum
backprop2renderer-v4.pdf             376.6 kB   md5:590fc1ed01398a995981791ef685cf21
(second file; name not recovered)    843.0 kB   md5:59309feac64fbc87370ce780ac1cfacf