Hi Everyone,
I was recently working on writing a regression test suite for the i.eb.hsebal01
module, and I have encountered persistent failures with "Delta T Convergence failed"
errors, even when providing seemingly valid and diverse synthetic input maps.
The purpose of this test is to simulate minimal but physically consistent input conditions that can:
- trigger both manual and automatic pixel detection modes,
- provide stable heat flux results for future validation,
- and help ensure stability in CI environments.
Inputs Used
I am working in a small test region (10x10, LL projection). Inputs include:
- netradiation: higher over vegetated pixels
- soilheatflux: computed as 30% of Rn
- temperaturemeansealevel: cooler in wet zones, hotter in dry
- vapourpressureactual: higher for wet zones
- aerodynresistance: inversely correlated with NDVI
- frictionvelocitystar: ~0.25 (nominal)
Example NDVI generation:
r.mapcalc expression="ndvi = if(row()==5 && col()==5, 0.8, if(row()==9 && col()==9, 0.1, 0.5 + 0.05 * rand(0.0, 1.0)))" seed=42
(Note: rand() in r.mapcalc requires a seed, and the float arguments make rand() return floats rather than integers.)
I have explicitly declared (5,5) as the wet pixel and (9,9) as the dry pixel for manual mode.
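For completeness, the remaining inputs are derived from NDVI along the same lines. This is only a sketch; the coefficients are arbitrary synthetic choices that reproduce the gradients described above, not SEBAL calibration values:

r.mapcalc expression="netradiation = 400 + 200 * ndvi"              # higher over vegetated pixels [W/m^2]
r.mapcalc expression="soilheatflux = 0.3 * netradiation"            # 30% of Rn
r.mapcalc expression="temperaturemeansealevel = 310 - 15 * ndvi"    # cooler in wet zones, hotter in dry [K]
r.mapcalc expression="vapourpressureactual = 0.5 + 2.0 * ndvi"      # higher for wet zones [kPa]
r.mapcalc expression="aerodynresistance = 10 + 100 * (1.0 - ndvi)"  # inversely correlated with NDVI [s/m]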
Problem
Despite a stable region and reasonable gradients between wet and dry zones, the module still fails with:
ERROR: Delta T Convergence failed, exiting prematurely, please check output
This happens both with -a (automatic detection) and with manually specified wet/dry coordinates.
What I have Tried
- Aligning g.region to the raster extents (e.g., g.region raster=ndvi)
- Using float values vs. integer
- Adjusting synthetic values to stay within realistic SEBAL bounds
- Setting row_wet_pixel and column_wet_pixel manually
- Testing against both stable and dev builds of GRASS 8.4
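For reference, here is roughly how I am invoking the module in both modes. A sketch: the option spellings follow my input list above, the dry-pixel options are assumed to mirror the wet ones, and the output names are illustrative:

# Manual mode: wet pixel at (5,5), dry pixel at (9,9)
i.eb.hsebal01 netradiation=netradiation soilheatflux=soilheatflux \
    aerodynresistance=aerodynresistance temperaturemeansealevel=temperaturemeansealevel \
    vapourpressureactual=vapourpressureactual frictionvelocitystar=0.25 \
    row_wet_pixel=5 column_wet_pixel=5 row_dry_pixel=9 column_dry_pixel=9 \
    output=sensibleheat

# Automatic mode: let the module pick the wet/dry pixels itself
i.eb.hsebal01 -a netradiation=netradiation soilheatflux=soilheatflux \
    aerodynresistance=aerodynresistance temperaturemeansealevel=temperaturemeansealevel \
    vapourpressureactual=vapourpressureactual frictionvelocitystar=0.25 \
    output=sensibleheat_auto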
If anyone has:
- suggestions on stabilizing Delta T convergence with synthetic data,
- or knowledge of thresholds/tolerances in the internal logic,
I would greatly appreciate any guidance. Ultimately, I would like to contribute this as a full regression test under imagery/i.eb.hsebal01/testsuite.
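In case it helps frame the discussion, the skeleton I have in mind follows the usual grass.gunittest pattern. This is only a sketch: the synthetic maps are assumed to have been created beforehand (e.g., with the r.mapcalc commands above), and the dry-pixel option names are my assumption:

from grass.gunittest.case import TestCase
from grass.gunittest.main import test


class TestHSebal01(TestCase):
    """Regression test for i.eb.hsebal01 on a 10x10 synthetic region."""

    @classmethod
    def setUpClass(cls):
        # Run in a temporary region aligned to the synthetic NDVI map
        cls.use_temp_region()
        cls.runModule("g.region", raster="ndvi")

    @classmethod
    def tearDownClass(cls):
        cls.del_temp_region()

    def test_manual_wet_dry_pixels(self):
        # Manual mode with the explicitly declared wet (5,5) and dry (9,9) pixels
        self.assertModule(
            "i.eb.hsebal01",
            netradiation="netradiation",
            soilheatflux="soilheatflux",
            aerodynresistance="aerodynresistance",
            temperaturemeansealevel="temperaturemeansealevel",
            vapourpressureactual="vapourpressureactual",
            frictionvelocitystar=0.25,
            row_wet_pixel=5,
            column_wet_pixel=5,
            row_dry_pixel=9,
            column_dry_pixel=9,
            output="sensibleheat",
        )


if __name__ == "__main__":
    test()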
Thanks in advance!