# Hyperheight Data Cube Denoising and Super-Resolution
## Dataset Summary

- Generation code and pipeline: https://github.com/Anfera/HHDC-Creator (HHDC-Creator repo).
- 3-D photon-count waveforms (Hyperheight data cubes) built from NEON discrete-return LiDAR using the HHDC pipeline (`hhdc/cube_generator.py`).
- Each cube stores a high-resolution canopy volume (default: 0.5 m vertical bins over 64 m of height, footprints every 2 m) across a 96 m × 96 m tile. In the HHDC-Creator pipeline the exact settings are recorded per sample in `metadata`, but this HF dataset exposes only the processed cubes and filenames.
- Inputs for learning are simulated observations from the physics-based forward imaging model (`hhdc/forward_model.py`), which emulates the Concurrent Artificially-intelligent Spectrometry and Adaptive Lidar System (CASALS) by applying Gaussian beam aggregation, distance-based photon loss, and mixed Poisson + Gaussian noise to downsample and perturb the cube; a toy sketch of this corruption chain is given below.
- Targets are the clean, high-resolution cubes. The pairing supports denoising and spatial super-resolution, with recommended settings of 10 m diameter footprints sampled on a 3 m × 6 m grid (along/across swath); users can adjust these parameters as needed.
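For intuition, the corruption can be thought of as footprint aggregation → photon loss → shot/readout noise. Below is a minimal toy sketch of that chain, not the actual `LidarForwardImagingModel`; the function name, parameters, and the pooling stand-in for Gaussian beam aggregation are all assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def toy_forward_model(cube, out_hw=(32, 16), loss_frac=0.5, read_sigma=1.0):
    """Toy stand-in for the HHDC forward model (illustration only).

    cube:       [bins, H, W] clean photon counts.
    out_hw:     low-res spatial grid, e.g. 96 m / (3 m, 6 m) = (32, 16).
    loss_frac:  fraction of photons surviving the link budget (photon loss).
    read_sigma: std of additive Gaussian (readout) noise.
    """
    x = cube.unsqueeze(0).float()                    # [1, bins, H, W]
    # Footprint aggregation: the real model uses a Gaussian beam profile;
    # adaptive average pooling stands in for that spatial integration here.
    x = F.adaptive_avg_pool2d(x, output_size=out_hw)
    # Distance-based photon loss scales the expected photon rate
    rate = loss_frac * x.clamp(min=0)
    # Mixed Poisson (shot) + Gaussian (readout) noise
    noisy = torch.poisson(rate) + read_sigma * torch.randn_like(rate)
    return noisy.squeeze(0)                          # [bins, *out_hw]
```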
## Supported Tasks

- Denoising of LiDAR photon-count Hyperheight data cubes.
- Super-resolution / resolution enhancement of LiDAR volumes.
- Robust reconstruction under realistic sensor noise simulated by the forward model.
## Dataset Structure

### Storage and splits

- Format on the Hub: Apache Arrow / Parquet, managed by 🤗 Datasets.
- Access: via `load_dataset("anfera236/HHDC", split=...)`.
- Splits: `train`, `validation`, `test` (see `dataset_info` for exact sizes; a quick way to list them is sketched below).
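A small sketch using the 🤗 Datasets API to inspect the published splits and their sizes:

```python
from datasets import get_dataset_split_names, load_dataset

# List the splits published on the Hub
print(get_dataset_split_names("anfera236/HHDC"))

# Load everything and report per-split sizes
ds = load_dataset("anfera236/HHDC")
print({split: len(d) for split, d in ds.items()})
```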
### Per-sample fields

Each sample in this Hugging Face dataset contains:

- `cube` — float32, shape `[128, 48, 48]`. High-resolution Hyperheight data cube (channel-first: `[bins, H, W]`), derived from NEON discrete-return LiDAR using the HHDC-Creator pipeline.
- `filename` — string. Identifier for the source tile / sample (matches the tile-level naming used in HHDC-Creator).

Additional fields produced by the HHDC-Creator pipeline (e.g. `x_centers`, `y_centers`, `bin_edges`, `footprint_counts`, `metadata`) are not stored in this HF dataset. They can be regenerated from NEON AOP LiDAR using the code in the HHDC-Creator repository.
### Typical shapes and forward model

With the default cube configuration (e.g. `cube_config_sample.json`, `cube_length = 96` m, `footprint_separation = 2` m):

- Clean high-res cube (`cube`): `[128, 48, 48]`
  - 64 m vertical extent / 0.5 m bins → 128 height bins
  - 96 m × 96 m tile / 2 m grid → 48 × 48 footprints

Low-resolution, noisy measurements are generated on the fly using the physics-based forward model (`LidarForwardImagingModel` in HHDC-Creator). For example, with `output_res_m=(3.0, 6.0)`:

- Noisy cube (model output, not stored in the dataset): `[128, 32, 16]` (the arithmetic below verifies these shapes)
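These numbers follow directly from the configuration; a quick sanity check of the arithmetic, using the default values stated above:

```python
# Shape arithmetic for the default configuration
height_m, bin_m = 64.0, 0.5     # vertical extent and bin size
tile_m, sep_m = 96.0, 2.0       # tile width and footprint separation

bins = int(height_m / bin_m)        # 64 / 0.5 = 128 height bins
footprints = int(tile_m / sep_m)    # 96 / 2 = 48 footprints per axis
print([bins, footprints, footprints])                 # [128, 48, 48]

# Noisy cube on the recommended output grid, output_res_m=(3.0, 6.0)
print([bins, int(tile_m / 3.0), int(tile_m / 6.0)])   # [128, 32, 16]
```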
Users are expected to:

- Load `cube` from this dataset as the clean target.
- Apply the forward model to obtain noisy / low-res inputs for denoising and super-resolution experiments.
If you want to replicate our exact results, you can use the reference cube provided at `SampleCube/gt2.npz`.
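A quick way to load and inspect that reference cube, without assuming the array names stored inside the archive:

```python
import numpy as np

npz = np.load("SampleCube/gt2.npz")
print(npz.files)            # names of the arrays stored in the archive
gt = npz[npz.files[0]]      # first stored array; expected shape [128, 48, 48]
print(gt.shape, gt.dtype)
```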
## Usage

```python
from datasets import load_dataset
import torch
from hhdc.forward_model import LidarForwardImagingModel  # or your actual import path

# Load dataset
ds = load_dataset("anfera236/HHDC", split="train")
ds.set_format(type="torch", columns=["cube"])

# Instantiate the LiDAR forward model (use your actual parameters)
forward_model = LidarForwardImagingModel(
    input_res_m=(2.0, 2.0),
    output_res_m=(3.0, 6.0),
    footprint_diameter_m=10.0,
    b=0.1,
    eta=0.5,
    ref_altitude=500.0,
    ref_photon_count=20.0,
)

sample = ds[0]

# High-res "clean" HHDC: [bins, H, W]
clean = sample["cube"]

# Low-res noisy measurement generated by the forward model: [bins, H_low, W_low]
noisy = forward_model(clean)

# Example: train a denoising/super-res model (my_model: noisy -> clean)
pred = my_model(noisy.unsqueeze(0))       # [1, bins, H, W] ideally
loss = loss_fn(pred, clean.unsqueeze(0))  # shapes must match
loss.backward()
```
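For batched training, the same pieces plug into a standard PyTorch `DataLoader`. The batch size and worker count below are arbitrary, and whether `LidarForwardImagingModel` accepts a batch dimension is an assumption; if it does not, apply it per sample:

```python
from torch.utils.data import DataLoader

loader = DataLoader(ds, batch_size=8, shuffle=True, num_workers=2)

for batch in loader:
    clean = batch["cube"]          # [B, 128, 48, 48]
    noisy = forward_model(clean)   # assumes the model handles a batch dimension
    pred = my_model(noisy)
    loss = loss_fn(pred, clean)
    loss.backward()
    break  # single training step shown for illustration
```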
## Evaluation

- Recommended metrics: PSNR and SSIM on the canopy height model (CHM), digital terrain model (DTM), and 50th-percentile height maps (all derivable via `hhdc.canopy_plots.create_chm` in the HHDC-Creator repo); a minimal PSNR sketch on an approximate CHM follows.
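As a minimal illustration, PSNR on a derived height map might be computed as below. The CHM here is approximated as the height of the highest bin whose photon count exceeds a threshold; the repo's `hhdc.canopy_plots.create_chm` is the reference implementation, so treat this stand-in (and the threshold value) as assumptions:

```python
import torch

def approx_chm(cube, bin_m=0.5, thresh=1.0):
    """Approximate CHM: height (m) of the highest bin with count > thresh.

    cube: [bins, H, W] photon counts; returns [H, W] heights in meters.
    """
    occupied = cube > thresh                      # [bins, H, W] boolean
    idx = torch.arange(cube.shape[0]).view(-1, 1, 1)
    top_bin = (occupied * idx).amax(dim=0)        # highest occupied bin index
    return top_bin.float() * bin_m

def psnr(pred, target):
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(target.max() ** 2 / mse)

# e.g., comparing a reconstruction against the clean cube from Usage above
print(psnr(approx_chm(pred.squeeze(0)), approx_chm(clean)))
```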
## Limitations and Risks
- Forward model parameters (beam diameter, noise levels, output resolution, altitude) control task difficulty; we recommend documenting the values you use per experiment (e.g., in your own metadata/config). In the original HHDC-Creator pipeline these are stored per-sample in metadata, but this HF dataset does not include that field.
- Outputs are simulated; real sensor artifacts (boresight errors, occlusions, calibration drift) are not modeled.
- NEON LiDAR is collected over North America; models may not generalize to other biomes or sensor geometries without adaptation.
## Licensing
- Derived from NEON AOP discrete-return LiDAR (DP1.30003.001). Follow the NEON Data Usage and Citation Policy and cite the original survey months/sites used.
- Include the citation for the Hyperheight paper when publishing results that use this dataset.
## Citation

```bibtex
@article{ramirez2024hyperheight,
  title={Hyperheight lidar compressive sampling and machine learning reconstruction of forested landscapes},
  author={Ramirez-Jaime, Andres and Pena-Pena, Karelia and Arce, Gonzalo R and Harding, David and Stephen, Mark and MacKinnon, James},
  journal={IEEE Transactions on Geoscience and Remote Sensing},
  volume={62},
  pages={1--16},
  year={2024},
  publisher={IEEE}
}

@article{ramirez2025super,
  title={Super-Resolved 3D Satellite Lidar Imaging of Earth Via Generative Diffusion Models},
  author={Ramirez-Jaime, Andres and Porras-Diaz, Nestor and Arce, Gonzalo R and Stephen, Mark},
  journal={IEEE Transactions on Geoscience and Remote Sensing},
  year={2025},
  publisher={IEEE}
}

@inproceedings{ramirez2025denoising,
  title={Denoising and Super-Resolution of Satellite Lidars Using Diffusion Generative Models},
  author={Ramirez-Jaime, Andres and Porras-Diaz, Nestor and Arce, Gonzalo R and Stephen, Mark},
  booktitle={2025 IEEE Statistical Signal Processing Workshop (SSP)},
  pages={1--5},
  year={2025},
  organization={IEEE}
}
```
## Maintainers
- Andres Ramirez-Jaime — [email protected]