Published March 9, 2026 | Version v1
Preprint (Open Access)

Addressing Exploration Challenges in Sparse Reward Reinforcement Learning Environments via Intrinsic Curiosity Modules and Reward Shaping

Authors/Creators

  • UC Berkeley

Description

Reinforcement learning (RL) algorithms often struggle in environments with sparse rewards, leading to inefficient exploration and prolonged training times. This paper investigates the integration of intrinsic curiosity modules (ICM) with reward shaping techniques to mitigate these challenges. We propose a novel approach that combines ICM-generated intrinsic rewards with carefully designed extrinsic reward shaping functions to guide the agent's exploration and accelerate learning. Our experiments demonstrate the effectiveness of this combined approach in benchmark sparse reward environments, achieving significant improvements in sample efficiency and overall performance compared to traditional RL methods and ICM-only implementations.
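The combination described above can be illustrated with a minimal sketch. This is a hypothetical toy example, not the paper's implementation: the linear forward model stands in for the ICM's learned dynamics model, `potential` is an assumed goal-distance shaping potential, and the weighting `BETA` and discount `GAMMA` are placeholder values.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM = 4, 2
BETA, GAMMA = 0.2, 0.99  # assumed curiosity weight and discount factor

# Toy stand-in for the ICM forward model: predicts next-state features
# from the concatenated (state, action) vector.
W = rng.normal(scale=0.1, size=(STATE_DIM + ACTION_DIM, STATE_DIM))

def curiosity_bonus(s, a, s_next):
    """Intrinsic reward = forward-model prediction error (ICM-style)."""
    pred = np.concatenate([s, a]) @ W
    return 0.5 * float(np.sum((pred - s_next) ** 2))

def potential(s):
    """Hypothetical shaping potential: negative distance to a fixed goal."""
    goal = np.ones(STATE_DIM)
    return -float(np.linalg.norm(s - goal))

def combined_reward(r_ext, s, a, s_next):
    """Extrinsic reward + potential-based shaping + scaled curiosity bonus."""
    shaping = GAMMA * potential(s_next) - potential(s)
    return r_ext + shaping + BETA * curiosity_bonus(s, a, s_next)

# Example transition with a sparse (zero) extrinsic reward.
s = rng.normal(size=STATE_DIM)
a = rng.normal(size=ACTION_DIM)
s_next = s + 0.1 * rng.normal(size=STATE_DIM)
print(combined_reward(0.0, s, a, s_next))
```

Potential-based shaping of this form is used here because it is known to preserve the optimal policy, while the curiosity bonus supplies a learning signal on transitions where the extrinsic reward is zero.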

Files (6.1 kB)

preprint_elena_rossi_20260309_003704.pdf (6.1 kB)
md5:d70d3fde73389f7878832963f3443d24
