Published March 16, 2026 | Version v1
Dataset (Open Access)

Computationally Efficient Parameter Retrievals for Active, Time-resolved, Neutron Albedo Measurements on Mars (Paper Data)

  • 1. Arizona State University
  • 2. Los Alamos National Laboratory

Description

DAN Retrieval Data Release User Guide

Overview

This archive accompanies the paper Computationally Efficient Parameter Retrievals for Active, Time-resolved, Neutron Albedo Measurements on Mars. It contains processed retrieval products used for comparison studies based on active measurements from the Dynamic Albedo of Neutrons (DAN) instrument on the Mars Science Laboratory (MSL) rover. The release is organized around two analysis sets:

first_1900m_comparison/: products used for comparison against previously published results from the first 1900 m of the Curiosity traverse.
marias_pass_comparison/: products used for comparison against prior Marias Pass analyses, including one-layer and layered retrieval cases.

These files are primarily intended as a reproducibility companion to the paper. They include processed observation inputs, retrieval intermediates, posterior summaries, and diagnostic plots generated by the retrieval pipeline. The package is not a raw telemetry release.

What the data represent

The retrieval workflow compares processed DAN active measurements against a library of MCNP forward-model outputs, then uses Markov Chain Monte Carlo (MCMC) parameter estimation to infer subsurface properties. Depending on the retrieval configuration, the fitted parameters can include:

- water-equivalent hydrogen (WEH)
- macroscopic neutron absorption cross section ($\Sigma_{abs}$)
- top-layer and bottom-layer values for layered models
- layer interface depth

The supplied code shows that the retrieval uses selected DAN time bins from background-subtracted observation data, constructs a simulation grid, estimates priors from the best-fit grid model, runs MCMC, and then saves posterior diagnostics and Gaussian-mixture summaries.
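The grid-comparison and prior-estimation steps described above can be sketched as follows; the function names, array shapes, and the fractional prior width are illustrative assumptions, not the release's actual API:

```python
import numpy as np

def chi2_grid_search(obs, obs_err, model_grid, model_grid_err):
    """Chi-squared comparison of one observation against every grid model.

    obs, obs_err      : (n_bins,) measurement vector and uncertainties
    model_grid (+err) : (n_models, n_bins) forward-model responses
    Returns the per-model chi-squared array and the best-fit index.
    """
    var = obs_err**2 + model_grid_err**2        # combined variance per time bin
    chi2 = np.sum((obs - model_grid)**2 / var, axis=1)
    return chi2, int(np.argmin(chi2))

def priors_from_best_fit(parameters, best_idx, frac_width=0.25):
    """Gaussian prior means/widths centered on the best-fit grid model.

    `frac_width` (fraction of the mean used as the prior sigma) is an
    illustrative choice, not the value used in the release.
    """
    means = parameters[best_idx].astype(float)
    stds = np.abs(means) * frac_width + 1e-6    # guard against zero-width priors
    return means, stds
```

The chi-squared statistic here folds observation and model uncertainties into a single per-bin variance, which is one common convention; the paper's exact weighting should be taken from the supplied code.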

Directory layout

1. first_1900m_comparison/

Top-level contents:

first_1900m_comparison/
├── MCMC_ref_sims.csv
├── my_grid.csv
├── mitrofanov_results.csv
└── parameter_estimation/
    ├── sol00011/
    ├── sol00012/
    ├── ...
    └── sol00361/

Within parameter_estimation/, each solXXXXX/ directory contains one or more individual DAN observations. Each observation is stored in a folder named with its DAN product identifier, for example:

parameter_estimation/sol00012/DNA_398555462EAC00120030004_______M1/

This level typically contains the observation-specific retrieval inputs and outputs.
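Walking the sol/observation hierarchy is straightforward with pathlib; `iter_observations` is a hypothetical helper that assumes only the layout shown above:

```python
from pathlib import Path

def iter_observations(root):
    """Yield (sol, product_id, path) for every DAN observation directory.

    Assumes the layout described above:
    <root>/parameter_estimation/solXXXXX/<DAN product identifier>/
    """
    pe = Path(root) / "parameter_estimation"
    for sol_dir in sorted(pe.glob("sol*")):
        for obs_dir in sorted(p for p in sol_dir.iterdir() if p.is_dir()):
            yield sol_dir.name, obs_dir.name, obs_dir
```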

2. `marias_pass_comparison/`

Top-level contents:

marias_pass_comparison/
├── site_1/
├── site_2/
├── ...
└── site_16/

Within each site, observations are grouped into a coadded regional product, for example:

site_10/
└── sol_1053-1056_Murray/
    ├── bg_dat.npy
    ├── coadded_raw.txt
    ├── coadded_bg_subtracted.txt
    ├── label.txt
    ├── raw_dat.npy
    ├── times.npy
    └── results/
        ├── one_layer/
        ├── two_layer_4_param/
        └── two_layer_5_param/

The results/ subdirectories separate retrieval products by model type.

Retrieval model folders

Where present, retrieval outputs are grouped by model type:

one_layer/: homogeneous-material retrieval.
two_layer_4_param/: layered retrieval with four free physical parameters.
two_layer_5_param/: layered retrieval with five free physical parameters.

The supplied retrieval code defines representative parameter sets for one-layer, full two-layer, and reduced layered configurations, including WEH, $\Sigma_{abs}$, and depth. 
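One plausible, entirely hypothetical mapping from model folder to free parameters, consistent with the parameter counts above; the authoritative names and ordering live in parameters.npy and the supplied retrieval code:

```python
# Hypothetical free-parameter sets matching the folder names and
# parameter counts above -- NOT the release's documented ordering.
MODEL_PARAMS = {
    "one_layer": ["WEH", "Sigma_abs"],
    "two_layer_4_param": ["WEH_top", "WEH_bottom", "Sigma_abs", "depth"],
    "two_layer_5_param": ["WEH_top", "Sigma_abs_top",
                          "WEH_bottom", "Sigma_abs_bottom", "depth"],
}
```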

Typical analysis path

A typical path in this release follows the pattern:

[comparison set] / [sol or site] / [observation or coadd] / [results] / [model type] / [files]

Examples:

first_1900m_comparison/parameter_estimation/sol00012/DNA_.../
marias_pass_comparison/site_10/sol_1053-1056_Murray/results/two_layer_5_param/
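The path pattern above can be captured in a small helper; the function name and default model are illustrative:

```python
from pathlib import Path

def results_path(root, site, coadd, model="two_layer_5_param"):
    """Path to a model-specific results folder in the marias_pass layout.

    Mirrors the pattern:
    [comparison set] / [site] / [coadd] / results / [model type]
    """
    return Path(root) / "marias_pass_comparison" / site / coadd / "results" / model
```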

File summary

Observation Level Data Files

File Summary
raw_dat.npy Processed observation data array before background subtraction.
bg_dat.npy Background-subtracted observation array used by the retrieval. The code loads this file directly and uses selected bins from it to build the measurement vector and uncertainty vector.
times.npy Time-bin array associated with the DAN die-away measurement. It is passed into the comparison and chi-squared routines together with the processed observation data.
coadded_raw.txt Text export of coadded raw observation data.
coadded_bg_subtracted.txt Text export of coadded background-subtracted observation data.
label.txt Label summarizing observations in the coadded product.
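The observation-level arrays can be loaded with NumPy; this sketch assumes only the file names documented above:

```python
import numpy as np

def load_observation(obs_dir):
    """Load the per-observation arrays named in the table above.

    Array shapes are not documented here, so nothing is assumed beyond
    the file names; selecting the retrieval's DAN time bins is left to
    the caller.
    """
    return {
        "times": np.load(f"{obs_dir}/times.npy"),
        "bg": np.load(f"{obs_dir}/bg_dat.npy"),
        "raw": np.load(f"{obs_dir}/raw_dat.npy"),
    }
```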

Source Files

File Summary
*.DAT Raw observation file from the mission PDS.
*.LBL Raw observation label file from the mission PDS.
*.txt Summary of raw observation created by processing pipeline.

Retrieval Setup and Intermediate Files

File Summary
apriori.npy Prior means and standard deviations used to initialize the MCMC retrieval. If absent, the pipeline estimates these from the best-fit grid model and saves them here.
parameters.npy Array of parameter values associated with each retained simulation in the model grid. Saved during model-grid construction.
model_grid.npy Forward-model response grid used during retrieval. Saved during model-grid construction.
model_grid_err.npy Uncertainty array corresponding to model_grid.npy. Saved during model-grid construction.
utilized_sims.npy List of simulation files used to build the model grid. The code saves the full set of `.o` files used from the simulation directory.
simulation_data.csv MCNP forward-model simulations used in the analysis (geochemistry configuration and die-away curve response).
densities.npy Density values of the fitted Gaussian mixture model evaluated at the component means. Saved after GMM estimation.
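Assuming apriori.npy stacks prior means (row 0) and standard deviations (row 1) — an unverified assumption about the array layout — walker starting positions could be drawn like this:

```python
import numpy as np

def init_walkers(apriori, n_walkers=32, rng=None):
    """Draw MCMC walker starting positions from the stored priors.

    Assumes apriori.npy stacks prior means (row 0) and standard
    deviations (row 1); check the actual array shape before relying
    on this.
    """
    rng = np.random.default_rng(rng)
    means, stds = np.asarray(apriori, dtype=float)
    return means + stds * rng.standard_normal((n_walkers, means.size))
```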

MCMC output files

File Summary
MCMC.h5.zip Compressed HDF5 backend containing the MCMC chain. The code periodically writes `MCMC.h5`, zips it, and later reopens it for plotting and posterior analysis.
results.npy Numerical summary of posterior results.
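MCMC.h5.zip can be unpacked with the standard library before opening the chain; `extract_chain` is an illustrative helper, and the choice of HDF5 reader (e.g. emcee's HDFBackend or h5py) is an assumption about the backend format:

```python
import zipfile

def extract_chain(zip_path, out_dir="."):
    """Unpack MCMC.h5.zip so the HDF5 chain file can be opened.

    Returns the archive's member names; opening the extracted MCMC.h5
    (e.g. with emcee.backends.HDFBackend or h5py) is left to the caller.
    """
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(out_dir)
        return zf.namelist()
```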

Diagnostic Plots

File Summary
corner.png Corner plot of the posterior distributions produced from the MCMC chain after burn-in removal.
all_walker_histories.png Trace plot showing walker evolution during MCMC sampling.
gmm_selection.png Diagnostic plot used to select the number of Gaussian-mixture components. The workflow evaluates up to 10 components.
gmm_mix.png Plot of the Gaussian-mixture decomposition of the posterior distribution.
chi2_grid.png Plot of the chi-squared fit between the observation and each .o simulation file used in the analysis.

Gaussian-Mixture Summary Files

File Summary
gmm_mean_results.npy Mean values of the fitted Gaussian-mixture components.
gmm_std_results.npy Standard deviations of the fitted Gaussian-mixture components.
gmm_selection.png See Diagnostic plots above.
gmm_mix.png See Diagnostic plots above.
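The three summary arrays (gmm_mean_results.npy, gmm_std_results.npy, densities.npy) can be reduced to a single posterior mode; the selection rule below is illustrative, not the paper's:

```python
import numpy as np

def dominant_component(means, stds, densities):
    """Report the mixture component with the highest density at its mean.

    A simple single-mode summary built from the Gaussian-mixture
    summary files; the highest-density-at-mean rule is an illustrative
    choice.
    """
    k = int(np.argmax(densities))
    return means[k], stds[k]
```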

Top-Level Comparison Files

File Summary
MCMC_ref_sims.csv Reference table of MCMC retrieval simulations used in the first-1900 m comparison workflow.
my_grid.csv Model-grid table associated with the first-1900 m comparison workflow.
mitrofanov_results.csv Comparison table associated with previously published first-1900 m DAN results. The paper explicitly frames this dataset as a comparison against prior published retrievals for the first 1900 m of traverse.

Notes for users

- This release is organized around processed retrieval products, not mission raw data.
- For most reuse cases, the most immediately useful files are bg_dat.npy, times.npy, apriori.npy, parameters.npy, results.npy, MCMC.h5.zip, corner.png, and all_walker_histories.png.
- Folder names encode the analysis context: either individual observations grouped by sol, or coadded regional products grouped by site.
- Where a description above is brief, the attached paper and supplied code did not provide enough information to document the file more fully.

Suggested citation

Please cite the associated paper when using this dataset.


Files

Computationally Efficient Parameter Retrievals for Active_Time_Resolved_Neutron_Albedo_Measurements_on_Mars.zip

Additional details

Software

Repository URL
https://github.com/litehouse43/DANOPS
Programming language
Python