[GRASS-dev] [GRASS GIS] #2074: r3.mapcalc neighborhood modifier hash table and tile errors

#2074: r3.mapcalc neighborhood modifier hash table and tile errors
-------------------------+--------------------------------------------------
Reporter: wenzeslaus | Owner: grass-dev@…
     Type: defect | Status: new
Priority: normal | Milestone: 7.0.0
Component: Raster3D | Version: svn-trunk
Keywords: r3.mapcalc | Platform: All
      Cpu: Unspecified |
-------------------------+--------------------------------------------------
I'm getting errors when using the neighborhood modifier in
[http://grass.osgeo.org/grass70/manuals/r3.mapcalc.html r3.mapcalc] in
GRASS 7 (original post on the mailing list: [http://lists.osgeo.org/pipermail
/grass-dev/2013-September/065614.html r3.mapcalc neighborhood modifier
error], [http://osgeo-org.1560.x6.nabble.com/r3-mapcalc-neighborhood-
modifier-error-td5076982.html nabble]).

One error is about an invalid tile or value and the other is about the
hash table. I don't know whether both have the same cause, but they are
probably close enough to belong in the same ticket.

To generate test data use:
{{{
r3.mapcalc "test_map = rand(0, 500)"
}}}

First command:
{{{
r3.mapcalc "new_map = (test_map[0, 0, 0] + test_map[1, 1, 0]) / 2"
}}}

Its output:
{{{
ERROR: Rast3d_get_double_region: error in Rast3d_get_tile_ptr.Region
        coordinates x 0 y 1 z 0 tile index 0 offset 64
}}}
(Note the missing space after the period in the error message.)

Second command:
{{{
r3.mapcalc "new_map = (test_map + test_map[1, 1, 0]) / 2" --o
}}}
(Note that `--overwrite` is necessary because the previous command
already created the (invalid) map.)

Its output:
{{{
ERROR: Rast3d_cache_hash_load_name: name already in hashtable
}}}

--
Ticket URL: <https://trac.osgeo.org/grass/ticket/2074>
GRASS GIS <http://grass.osgeo.org>


Comment(by huhabla):

I can confirm this issue when grass7 is compiled with pthreads support.
The errors appear randomly at different indices, which looks like a race
condition to me, presumably triggered by the pthreads parallelism in
r3.mapcalc. The underlying problem may be the static and global
variables used in the raster3d library.

When grass7 is compiled without pthreads support, the errors disappear
and everything works as expected.

Can anyone suggest a nice toolset to easily detect race conditions on
Linux?
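To illustrate the suspected class of bug: if several r3.mapcalc worker
threads touch the same static library state, each access has to be
serialized. The following is a minimal generic sketch, not GRASS code;
the names `shared_state` and `run_workers` are made up for illustration.
It shows the mutex-guarded pattern that keeps such shared state
deterministic under pthreads:
{{{
#include <pthread.h>
#include <stdio.h>

/* Hypothetical stand-in for static/global library state that worker
 * threads would otherwise mutate without synchronization. */
static long shared_state = 0;
static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        /* Without this lock, concurrent increments are lost and the
         * final value varies from run to run - a data race helgrind
         * would flag, similar to the reports above. */
        pthread_mutex_lock(&state_lock);
        shared_state++;
        pthread_mutex_unlock(&state_lock);
    }
    return NULL;
}

/* Runs two workers to completion and returns the final counter. */
static long run_workers(void)
{
    pthread_t t1, t2;

    shared_state = 0;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return shared_state;
}

int main(void)
{
    /* Deterministically 200000 only because of the lock. */
    printf("%ld\n", run_workers());
    return 0;
}
}}}
Dropping the lock/unlock pair reproduces the nondeterminism described
above: the result changes between runs, just as the tile/hash errors
here appear at random indices.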

--
Ticket URL: <https://trac.osgeo.org/grass/ticket/2074#comment:1>
GRASS GIS <http://grass.osgeo.org>


Comment(by huhabla):

Race condition confirmed with valgrind's helgrind tool.

Using:
{{{
valgrind --tool=helgrind r3.mapcalc "new_map = (test_map + test_map[1, 1,
0]) / 2" --o
}}}

Produces:

{{{
GRASS 7.0.svn (TestLL):~/src/grass7.0/grass_trunk > valgrind
--tool=helgrind r3.mapcalc "new_map = (test_map + test_map[1, 1, 0]) / 2"
--o
==24618== Helgrind, a thread error detector
==24618== Copyright (C) 2007-2011, and GNU GPL'd, by OpenWorks LLP et al.
==24618== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright
info
==24618== Command: r3.mapcalc new_map\ =\ (test_map\ +\ test_map[1,\ 1,\
0])\ /\ 2 --o
==24618==
==24618== ---Thread-Announcement------------------------------------------
==24618==
==24618== Thread #3 was created
==24618== at 0x5CD0C8E: clone (clone.S:77)
==24618== by 0x56CAF6F: do_clone.constprop.4 (createthread.c:75)
==24618== by 0x56CC57F: pthread_create@@GLIBC_2.2.5
(createthread.c:256)
==24618== by 0x4C2DAAD: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x508429E: G_init_workers (worker.c:124)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Thread #3: pthread_cond_{signal,broadcast}: dubious: associated
lock is not held by any thread
==24618== at 0x4C2CC23: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x5084022: worker (worker.c:47)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== ---Thread-Announcement------------------------------------------
==24618==
==24618== Thread #2 was created
==24618== at 0x5CD0C8E: clone (clone.S:77)
==24618== by 0x56CAF6F: do_clone.constprop.4 (createthread.c:75)
==24618== by 0x56CC57F: pthread_create@@GLIBC_2.2.5
(createthread.c:256)
==24618== by 0x4C2DAAD: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x508429E: G_init_workers (worker.c:124)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Thread #2: lock order "0x52934C0 before 0x6984810" violated
==24618==
==24618== Observed (incorrect) order is: acquisition of lock at 0x6984810
==24618== at 0x4C2D1BE: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x5083FEA: worker (worker.c:39)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== followed by a later acquisition of lock at 0x52934C0
==24618== at 0x4C2E0ED: pthread_mutex_lock (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5084069: G_begin_execute (worker.c:74)
==24618== by 0x40539D: evaluate_function (evaluate.c:107)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== Required order was established by acquisition of lock at
0x52934C0
==24618== at 0x4C2E0ED: pthread_mutex_lock (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5084069: G_begin_execute (worker.c:74)
==24618== by 0x40539D: evaluate_function (evaluate.c:107)
==24618== by 0x405A20: execute (evaluate.c:210)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== followed by a later acquisition of lock at 0x6984810
==24618== at 0x4C2E0ED: pthread_mutex_lock (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5084109: G_begin_execute (worker.c:86)
==24618== by 0x40539D: evaluate_function (evaluate.c:107)
==24618== by 0x405A20: execute (evaluate.c:210)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x52934C0 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5084200: G_init_workers (worker.c:114)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984810 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984890 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during read of size 8 at 0x6984840 by thread
#2
==24618== Locks held: 2, at addresses 0x52934C0 0x6984810
==24618== at 0x50840B2: G_begin_execute (worker.c:60)
==24618== by 0x40539D: evaluate_function (evaluate.c:107)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== This conflicts with a previous write of size 8 by thread #3
==24618== Locks held: 1, at address 0x6984890
==24618== at 0x5083FFD: worker (worker.c:43)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== Address 0x6984840 is 128 bytes inside a block of size 1024
alloc'd
==24618== at 0x4C29F64: calloc (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5063076: G__calloc (alloc.c:81)
==24618== by 0x5084245: G_init_workers (worker.c:118)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== ---Thread-Announcement------------------------------------------
==24618==
==24618== Thread #4 was created
==24618== at 0x5CD0C8E: clone (clone.S:77)
==24618== by 0x56CAF6F: do_clone.constprop.4 (createthread.c:75)
==24618== by 0x56CC57F: pthread_create@@GLIBC_2.2.5
(createthread.c:256)
==24618== by 0x4C2DAAD: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x508429E: G_init_workers (worker.c:124)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Thread #4: pthread_cond_{signal,broadcast}: dubious: associated
lock is not held by any thread
==24618== at 0x4C2CC23: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x5084022: worker (worker.c:47)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984890 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984910 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during read of size 4 at 0x6982690 by thread
#3
==24618== Locks held: 1, at address 0x6984890
==24618== at 0x4E3DC80: Rast3d_cache_hash_name2index (cachehash.c:110)
==24618== by 0x4E3C99B: Rast3d_cache_elt_ptr (cache1.c:469)
==24618== by 0x4E4C1BB: Rast3d_get_tile_ptr (tileio.c:88)
==24618== by 0x4E44C82: Rast3d_get_double_region (getvalue.c:224)
==24618== by 0x4E44DCD: Rast3d_get_value_region (getvalue.c:263)
==24618== by 0x4E4ACF2: Rast3d_nearest_neighbor (resample.c:39)
==24618== by 0x407B3B: read_map.isra.1 (map3.c:99)
==24618== by 0x40564C: do_evaluate (evaluate.c:151)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== This conflicts with a previous write of size 4 by thread #4
==24618== Locks held: 1, at address 0x6984910
==24618== at 0x4E3DCA8: Rast3d_cache_hash_name2index (cachehash.c:121)
==24618== by 0x4E3C631: Rast3d_cache_unlock (cache1.c:306)
==24618== by 0x4E3CAD1: Rast3d_cache_elt_ptr (cache1.c:500)
==24618== by 0x4E4C1BB: Rast3d_get_tile_ptr (tileio.c:88)
==24618== by 0x4E44C82: Rast3d_get_double_region (getvalue.c:224)
==24618== by 0x4E44DCD: Rast3d_get_value_region (getvalue.c:263)
==24618== by 0x4E4ACF2: Rast3d_nearest_neighbor (resample.c:39)
==24618== by 0x407B3B: read_map.isra.1 (map3.c:99)
==24618==
==24618== Address 0x6982690 is 32 bytes inside a block of size 40 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x4E3BED6: Rast3d_malloc (alloc.c:28)
==24618== by 0x4E3DB51: Rast3d_cache_hash_new (cachehash.c:55)
==24618== by 0x4E3C495: Rast3d_cache_new (cache1.c:127)
==24618== by 0x4E3C559: Rast3d_cache_new_read (cache1.c:164)
==24618== by 0x4E3D715: Rast3d_init_cache (cache.c:25)
==24618== by 0x4E46032: Rast3d_fill_header (header.c:446)
==24618== by 0x4E49629: Rast3d_open_cell_old (open.c:164)
==24618== by 0x407F59: open_map (map3.c:510)
==24618== by 0x4051B4: initialize_function (evaluate.c:50)
==24618== by 0x40513F: initialize_function (evaluate.c:88)
==24618== by 0x405920: execute (evaluate.c:71)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984890 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984910 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during read of size 4 at 0x6982688 by thread
#3
==24618== Locks held: 1, at address 0x6984890
==24618== at 0x4E3DC87: Rast3d_cache_hash_name2index (cachehash.c:111)
==24618== by 0x4E3C99B: Rast3d_cache_elt_ptr (cache1.c:469)
==24618== by 0x4E4C1BB: Rast3d_get_tile_ptr (tileio.c:88)
==24618== by 0x4E44C82: Rast3d_get_double_region (getvalue.c:224)
==24618== by 0x4E44DCD: Rast3d_get_value_region (getvalue.c:263)
==24618== by 0x4E4ACF2: Rast3d_nearest_neighbor (resample.c:39)
==24618== by 0x407B3B: read_map.isra.1 (map3.c:99)
==24618== by 0x40564C: do_evaluate (evaluate.c:151)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== This conflicts with a previous write of size 4 by thread #4
==24618== Locks held: 1, at address 0x6984910
==24618== at 0x4E3DCA5: Rast3d_cache_hash_name2index (cachehash.c:119)
==24618== by 0x4E3C631: Rast3d_cache_unlock (cache1.c:306)
==24618== by 0x4E3CAD1: Rast3d_cache_elt_ptr (cache1.c:500)
==24618== by 0x4E4C1BB: Rast3d_get_tile_ptr (tileio.c:88)
==24618== by 0x4E44C82: Rast3d_get_double_region (getvalue.c:224)
==24618== by 0x4E44DCD: Rast3d_get_value_region (getvalue.c:263)
==24618== by 0x4E4ACF2: Rast3d_nearest_neighbor (resample.c:39)
==24618== by 0x407B3B: read_map.isra.1 (map3.c:99)
==24618==
==24618== Address 0x6982688 is 24 bytes inside a block of size 40 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x4E3BED6: Rast3d_malloc (alloc.c:28)
==24618== by 0x4E3DB51: Rast3d_cache_hash_new (cachehash.c:55)
==24618== by 0x4E3C495: Rast3d_cache_new (cache1.c:127)
==24618== by 0x4E3C559: Rast3d_cache_new_read (cache1.c:164)
==24618== by 0x4E3D715: Rast3d_init_cache (cache.c:25)
==24618== by 0x4E46032: Rast3d_fill_header (header.c:446)
==24618== by 0x4E49629: Rast3d_open_cell_old (open.c:164)
==24618== by 0x407F59: open_map (map3.c:510)
==24618== by 0x4051B4: initialize_function (evaluate.c:50)
==24618== by 0x40513F: initialize_function (evaluate.c:88)
==24618== by 0x405920: execute (evaluate.c:71)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984890 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984910 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during read of size 1 at 0x6982920 by thread
#3
==24618== Locks held: 1, at address 0x6984890
==24618== at 0x4E3DC98: Rast3d_cache_hash_name2index (cachehash.c:114)
==24618== by 0x4E3C99B: Rast3d_cache_elt_ptr (cache1.c:469)
==24618== by 0x4E4C1BB: Rast3d_get_tile_ptr (tileio.c:88)
==24618== by 0x4E44C82: Rast3d_get_double_region (getvalue.c:224)
==24618== by 0x4E44DCD: Rast3d_get_value_region (getvalue.c:263)
==24618== by 0x4E4ACF2: Rast3d_nearest_neighbor (resample.c:39)
==24618== by 0x407B3B: read_map.isra.1 (map3.c:99)
==24618== by 0x40564C: do_evaluate (evaluate.c:151)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== This conflicts with a previous write of size 1 by thread #4
==24618== Locks held: 1, at address 0x6984910
==24618== at 0x4E3DC3E: Rast3d_cache_hash_load_name (cachehash.c:101)
==24618== by 0x4E3CA5A: Rast3d_cache_elt_ptr (cache1.c:491)
==24618== by 0x4E4C1BB: Rast3d_get_tile_ptr (tileio.c:88)
==24618== by 0x4E44C82: Rast3d_get_double_region (getvalue.c:224)
==24618== by 0x4E44DCD: Rast3d_get_value_region (getvalue.c:263)
==24618== by 0x4E4ACF2: Rast3d_nearest_neighbor (resample.c:39)
==24618== by 0x407B3B: read_map.isra.1 (map3.c:99)
==24618== by 0x40564C: do_evaluate (evaluate.c:151)
==24618==
==24618== Address 0x6982920 is 0 bytes inside a block of size 125 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x4E3BED6: Rast3d_malloc (alloc.c:28)
==24618== by 0x4E3DB72: Rast3d_cache_hash_new (cachehash.c:63)
==24618== by 0x4E3C495: Rast3d_cache_new (cache1.c:127)
==24618== by 0x4E3C559: Rast3d_cache_new_read (cache1.c:164)
==24618== by 0x4E3D715: Rast3d_init_cache (cache.c:25)
==24618== by 0x4E46032: Rast3d_fill_header (header.c:446)
==24618== by 0x4E49629: Rast3d_open_cell_old (open.c:164)
==24618== by 0x407F59: open_map (map3.c:510)
==24618== by 0x4051B4: initialize_function (evaluate.c:50)
==24618== by 0x40513F: initialize_function (evaluate.c:88)
==24618== by 0x405920: execute (evaluate.c:71)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984890 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984910 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during read of size 4 at 0x69826E0 by thread
#3
==24618== Locks held: 1, at address 0x6984890
==24618== at 0x4E3DCA2: Rast3d_cache_hash_name2index (cachehash.c:117)
==24618== by 0x4E3C99B: Rast3d_cache_elt_ptr (cache1.c:469)
==24618== by 0x4E4C1BB: Rast3d_get_tile_ptr (tileio.c:88)
==24618== by 0x4E44C82: Rast3d_get_double_region (getvalue.c:224)
==24618== by 0x4E44DCD: Rast3d_get_value_region (getvalue.c:263)
==24618== by 0x4E4ACF2: Rast3d_nearest_neighbor (resample.c:39)
==24618== by 0x407B3B: read_map.isra.1 (map3.c:99)
==24618== by 0x40564C: do_evaluate (evaluate.c:151)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== This conflicts with a previous write of size 4 by thread #4
==24618== Locks held: 1, at address 0x6984910
==24618== at 0x4E3DC3B: Rast3d_cache_hash_load_name (cachehash.c:100)
==24618== by 0x4E3CA5A: Rast3d_cache_elt_ptr (cache1.c:491)
==24618== by 0x4E4C1BB: Rast3d_get_tile_ptr (tileio.c:88)
==24618== by 0x4E44C82: Rast3d_get_double_region (getvalue.c:224)
==24618== by 0x4E44DCD: Rast3d_get_value_region (getvalue.c:263)
==24618== by 0x4E4ACF2: Rast3d_nearest_neighbor (resample.c:39)
==24618== by 0x407B3B: read_map.isra.1 (map3.c:99)
==24618== by 0x40564C: do_evaluate (evaluate.c:151)
==24618==
==24618== Address 0x69826E0 is 0 bytes inside a block of size 500 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x4E3BED6: Rast3d_malloc (alloc.c:28)
==24618== by 0x4E3DB67: Rast3d_cache_hash_new (cachehash.c:62)
==24618== by 0x4E3C495: Rast3d_cache_new (cache1.c:127)
==24618== by 0x4E3C559: Rast3d_cache_new_read (cache1.c:164)
==24618== by 0x4E3D715: Rast3d_init_cache (cache.c:25)
==24618== by 0x4E46032: Rast3d_fill_header (header.c:446)
==24618== by 0x4E49629: Rast3d_open_cell_old (open.c:164)
==24618== by 0x407F59: open_map (map3.c:510)
==24618== by 0x4051B4: initialize_function (evaluate.c:50)
==24618== by 0x40513F: initialize_function (evaluate.c:88)
==24618== by 0x405920: execute (evaluate.c:71)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984890 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984910 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during write of size 4 at 0x6982688 by thread
#3
==24618== Locks held: 1, at address 0x6984890
==24618== at 0x4E3DCA5: Rast3d_cache_hash_name2index (cachehash.c:119)
==24618== by 0x4E3C99B: Rast3d_cache_elt_ptr (cache1.c:469)
==24618== by 0x4E4C1BB: Rast3d_get_tile_ptr (tileio.c:88)
==24618== by 0x4E44C82: Rast3d_get_double_region (getvalue.c:224)
==24618== by 0x4E44DCD: Rast3d_get_value_region (getvalue.c:263)
==24618== by 0x4E4ACF2: Rast3d_nearest_neighbor (resample.c:39)
==24618== by 0x407B3B: read_map.isra.1 (map3.c:99)
==24618== by 0x40564C: do_evaluate (evaluate.c:151)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== This conflicts with a previous write of size 4 by thread #4
==24618== Locks held: 1, at address 0x6984910
==24618== at 0x4E3DCA5: Rast3d_cache_hash_name2index (cachehash.c:119)
==24618== by 0x4E3C631: Rast3d_cache_unlock (cache1.c:306)
==24618== by 0x4E3CAD1: Rast3d_cache_elt_ptr (cache1.c:500)
==24618== by 0x4E4C1BB: Rast3d_get_tile_ptr (tileio.c:88)
==24618== by 0x4E44C82: Rast3d_get_double_region (getvalue.c:224)
==24618== by 0x4E44DCD: Rast3d_get_value_region (getvalue.c:263)
==24618== by 0x4E4ACF2: Rast3d_nearest_neighbor (resample.c:39)
==24618== by 0x407B3B: read_map.isra.1 (map3.c:99)
==24618==
==24618== Address 0x6982688 is 24 bytes inside a block of size 40 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x4E3BED6: Rast3d_malloc (alloc.c:28)
==24618== by 0x4E3DB51: Rast3d_cache_hash_new (cachehash.c:55)
==24618== by 0x4E3C495: Rast3d_cache_new (cache1.c:127)
==24618== by 0x4E3C559: Rast3d_cache_new_read (cache1.c:164)
==24618== by 0x4E3D715: Rast3d_init_cache (cache.c:25)
==24618== by 0x4E46032: Rast3d_fill_header (header.c:446)
==24618== by 0x4E49629: Rast3d_open_cell_old (open.c:164)
==24618== by 0x407F59: open_map (map3.c:510)
==24618== by 0x4051B4: initialize_function (evaluate.c:50)
==24618== by 0x40513F: initialize_function (evaluate.c:88)
==24618== by 0x405920: execute (evaluate.c:71)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984890 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984910 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during write of size 4 at 0x6982690 by thread
#3
==24618== Locks held: 1, at address 0x6984890
==24618== at 0x4E3DCA8: Rast3d_cache_hash_name2index (cachehash.c:121)
==24618== by 0x4E3C99B: Rast3d_cache_elt_ptr (cache1.c:469)
==24618== by 0x4E4C1BB: Rast3d_get_tile_ptr (tileio.c:88)
==24618== by 0x4E44C82: Rast3d_get_double_region (getvalue.c:224)
==24618== by 0x4E44DCD: Rast3d_get_value_region (getvalue.c:263)
==24618== by 0x4E4ACF2: Rast3d_nearest_neighbor (resample.c:39)
==24618== by 0x407B3B: read_map.isra.1 (map3.c:99)
==24618== by 0x40564C: do_evaluate (evaluate.c:151)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== This conflicts with a previous write of size 4 by thread #4
==24618== Locks held: 1, at address 0x6984910
==24618== at 0x4E3DCA8: Rast3d_cache_hash_name2index (cachehash.c:121)
==24618== by 0x4E3C631: Rast3d_cache_unlock (cache1.c:306)
==24618== by 0x4E3CAD1: Rast3d_cache_elt_ptr (cache1.c:500)
==24618== by 0x4E4C1BB: Rast3d_get_tile_ptr (tileio.c:88)
==24618== by 0x4E44C82: Rast3d_get_double_region (getvalue.c:224)
==24618== by 0x4E44DCD: Rast3d_get_value_region (getvalue.c:263)
==24618== by 0x4E4ACF2: Rast3d_nearest_neighbor (resample.c:39)
==24618== by 0x407B3B: read_map.isra.1 (map3.c:99)
==24618==
==24618== Address 0x6982690 is 32 bytes inside a block of size 40 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x4E3BED6: Rast3d_malloc (alloc.c:28)
==24618== by 0x4E3DB51: Rast3d_cache_hash_new (cachehash.c:55)
==24618== by 0x4E3C495: Rast3d_cache_new (cache1.c:127)
==24618== by 0x4E3C559: Rast3d_cache_new_read (cache1.c:164)
==24618== by 0x4E3D715: Rast3d_init_cache (cache.c:25)
==24618== by 0x4E46032: Rast3d_fill_header (header.c:446)
==24618== by 0x4E49629: Rast3d_open_cell_old (open.c:164)
==24618== by 0x407F59: open_map (map3.c:510)
==24618== by 0x4051B4: initialize_function (evaluate.c:50)
==24618== by 0x40513F: initialize_function (evaluate.c:88)
==24618== by 0x405920: execute (evaluate.c:71)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984890 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984910 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during write of size 4 at 0x698268C by thread
#3
==24618== Locks held: 1, at address 0x6984890
==24618== at 0x4E3DCAF: Rast3d_cache_hash_name2index (cachehash.c:120)
==24618== by 0x4E3C99B: Rast3d_cache_elt_ptr (cache1.c:469)
==24618== by 0x4E4C1BB: Rast3d_get_tile_ptr (tileio.c:88)
==24618== by 0x4E44C82: Rast3d_get_double_region (getvalue.c:224)
==24618== by 0x4E44DCD: Rast3d_get_value_region (getvalue.c:263)
==24618== by 0x4E4ACF2: Rast3d_nearest_neighbor (resample.c:39)
==24618== by 0x407B3B: read_map.isra.1 (map3.c:99)
==24618== by 0x40564C: do_evaluate (evaluate.c:151)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== This conflicts with a previous write of size 4 by thread #4
==24618== Locks held: 1, at address 0x6984910
==24618== at 0x4E3DCAF: Rast3d_cache_hash_name2index (cachehash.c:120)
==24618== by 0x4E3C631: Rast3d_cache_unlock (cache1.c:306)
==24618== by 0x4E3CAD1: Rast3d_cache_elt_ptr (cache1.c:500)
==24618== by 0x4E4C1BB: Rast3d_get_tile_ptr (tileio.c:88)
==24618== by 0x4E44C82: Rast3d_get_double_region (getvalue.c:224)
==24618== by 0x4E44DCD: Rast3d_get_value_region (getvalue.c:263)
==24618== by 0x4E4ACF2: Rast3d_nearest_neighbor (resample.c:39)
==24618== by 0x407B3B: read_map.isra.1 (map3.c:99)
==24618==
==24618== Address 0x698268C is 28 bytes inside a block of size 40 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x4E3BED6: Rast3d_malloc (alloc.c:28)
==24618== by 0x4E3DB51: Rast3d_cache_hash_new (cachehash.c:55)
==24618== by 0x4E3C495: Rast3d_cache_new (cache1.c:127)
==24618== by 0x4E3C559: Rast3d_cache_new_read (cache1.c:164)
==24618== by 0x4E3D715: Rast3d_init_cache (cache.c:25)
==24618== by 0x4E46032: Rast3d_fill_header (header.c:446)
==24618== by 0x4E49629: Rast3d_open_cell_old (open.c:164)
==24618== by 0x407F59: open_map (map3.c:510)
==24618== by 0x4051B4: initialize_function (evaluate.c:50)
==24618== by 0x40513F: initialize_function (evaluate.c:88)
==24618== by 0x405920: execute (evaluate.c:71)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984890 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984910 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during read of size 1 at 0x65D86B0 by thread
#3
==24618== Locks held: 1, at address 0x6984890
==24618== at 0x4E44C8D: Rast3d_get_double_region (getvalue.c:231)
==24618== by 0x4E44DCD: Rast3d_get_value_region (getvalue.c:263)
==24618== by 0x4E4ACF2: Rast3d_nearest_neighbor (resample.c:39)
==24618== by 0x407B3B: read_map.isra.1 (map3.c:99)
==24618== by 0x40564C: do_evaluate (evaluate.c:151)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== This conflicts with a previous write of size 1 by thread #4
==24618== Locks held: 1, at address 0x6984910
==24618== at 0x4C2FDD6: memcpy (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x4E443C3: Rast3d_copy_from_xdr (fpxdr.c:230)
==24618== by 0x4E4CC2C: Rast3d_read_tile (tileread.c:22)
==24618== by 0x4E3D4B8: cacheRead_readFun (cache.c:14)
==24618== by 0x4E3CABA: Rast3d_cache_elt_ptr (cache1.c:505)
==24618== by 0x4E4C1BB: Rast3d_get_tile_ptr (tileio.c:88)
==24618== by 0x4E44C82: Rast3d_get_double_region (getvalue.c:224)
==24618== by 0x4E44DCD: Rast3d_get_value_region (getvalue.c:263)
==24618==
==24618== Address 0x65D86B0 is 0 bytes inside a block of size 3840000 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x4E3BED6: Rast3d_malloc (alloc.c:28)
==24618== by 0x4E3C3ED: Rast3d_cache_new (cache1.c:105)
==24618== by 0x4E3C559: Rast3d_cache_new_read (cache1.c:164)
==24618== by 0x4E3D715: Rast3d_init_cache (cache.c:25)
==24618== by 0x4E46032: Rast3d_fill_header (header.c:446)
==24618== by 0x4E49629: Rast3d_open_cell_old (open.c:164)
==24618== by 0x407F59: open_map (map3.c:510)
==24618== by 0x4051B4: initialize_function (evaluate.c:50)
==24618== by 0x40513F: initialize_function (evaluate.c:88)
==24618== by 0x405920: execute (evaluate.c:71)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984810 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984910 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during read of size 8 at 0x65D5670 by thread #2
==24618== Locks held: 1, at address 0x6984810
==24618== at 0x5084184: G_end_execute (worker.c:98)
==24618== by 0x4053D6: evaluate_function (evaluate.c:112)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== This conflicts with a previous write of size 8 by thread #4
==24618== Locks held: 1, at address 0x6984910
==24618== at 0x508400F: worker (worker.c:45)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== Address 0x65D5670 is 80 bytes inside a block of size 88 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5062FE4: G__malloc (alloc.c:39)
==24618== by 0x406757: mapname (expression.c:41)
==24618== by 0x4138E0: yyparse (mapcalc.y:128)
==24618== by 0x414077: parse_string (mapcalc.y:253)
==24618== by 0x40432C: main (main.c:148)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984810 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984910 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during read of size 8 at 0x6983050 by thread #2
==24618== Locks held: 1, at address 0x6984810
==24618== at 0x409C44: f_add (xadd.c:66)
==24618== by 0x4053ED: evaluate_function (evaluate.c:174)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== This conflicts with a previous write of size 8 by thread #4
==24618== Locks held: 1, at address 0x6984910
==24618== at 0x407B0A: read_map.isra.1 (map3.c:103)
==24618== by 0x40564C: do_evaluate (evaluate.c:151)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== Address 0x6983050 is 0 bytes inside a block of size 960 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5062FE4: G__malloc (alloc.c:39)
==24618== by 0x405196: initialize_function (evaluate.c:27)
==24618== by 0x40513F: initialize_function (evaluate.c:88)
==24618== by 0x405920: execute (evaluate.c:71)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Thread #2: pthread_cond_{signal,broadcast}: dubious: associated lock is not held by any thread
==24618== at 0x4C2CC23: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5084022: worker (worker.c:47)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== ---Thread-Announcement------------------------------------------
==24618==
==24618== Thread #1 is the program's root thread
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984890 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during write of size 4 at 0x61A9E0 by thread #1
==24618== Locks held: none
==24618== at 0x405984: execute (evaluate.c:330)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== This conflicts with a previous read of size 4 by thread #3
==24618== Locks held: 1, at address 0x6984890
==24618== at 0x405634: do_evaluate (evaluate.c:151)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Thread #1: lock order "0x6984810 before 0x52934C0" violated
==24618==
==24618== Observed (incorrect) order is: acquisition of lock at 0x52934C0
==24618== at 0x4C2E0ED: pthread_mutex_lock (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5084069: G_begin_execute (worker.c:74)
==24618== by 0x40539D: evaluate_function (evaluate.c:107)
==24618== by 0x405A20: execute (evaluate.c:210)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== followed by a later acquisition of lock at 0x6984810
==24618== at 0x4C2E0ED: pthread_mutex_lock (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5084109: G_begin_execute (worker.c:86)
==24618== by 0x40539D: evaluate_function (evaluate.c:107)
==24618== by 0x405A20: execute (evaluate.c:210)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Required order was established by acquisition of lock at 0x6984810
==24618== at 0x4C2D1BE: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5083FEA: worker (worker.c:39)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== followed by a later acquisition of lock at 0x52934C0
==24618== at 0x4C2E0ED: pthread_mutex_lock (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5084069: G_begin_execute (worker.c:74)
==24618== by 0x40539D: evaluate_function (evaluate.c:107)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== ---Thread-Announcement------------------------------------------
==24618==
==24618== Thread #5 was created
==24618== at 0x5CD0C8E: clone (clone.S:77)
==24618== by 0x56CAF6F: do_clone.constprop.4 (createthread.c:75)
==24618== by 0x56CC57F: pthread_create@@GLIBC_2.2.5 (createthread.c:256)
==24618== by 0x4C2DAAD: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508429E: G_init_workers (worker.c:124)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Thread #5: pthread_cond_{signal,broadcast}: dubious: associated lock is not held by any thread
==24618== at 0x4C2CC23: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5084022: worker (worker.c:47)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984890 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x52934C0 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5084200: G_init_workers (worker.c:114)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984810 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during write of size 8 at 0x6984840 by thread #3
==24618== Locks held: 1, at address 0x6984890
==24618== at 0x5083FFD: worker (worker.c:43)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== This conflicts with a previous read of size 8 by thread #2
==24618== Locks held: 2, at addresses 0x52934C0 0x6984810
==24618== at 0x50840B2: G_begin_execute (worker.c:60)
==24618== by 0x40539D: evaluate_function (evaluate.c:107)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== Address 0x6984840 is 128 bytes inside a block of size 1024 alloc'd
==24618== at 0x4C29F64: calloc (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5063076: G__calloc (alloc.c:81)
==24618== by 0x5084245: G_init_workers (worker.c:118)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984890 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during read of size 8 at 0x65D5B70 by thread #1
==24618== Locks held: none
==24618== at 0x5084184: G_end_execute (worker.c:98)
==24618== by 0x4053D6: evaluate_function (evaluate.c:112)
==24618== by 0x405A20: execute (evaluate.c:210)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== This conflicts with a previous write of size 8 by thread #3
==24618== Locks held: 1, at address 0x6984890
==24618== at 0x508400F: worker (worker.c:45)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== Address 0x65D5B70 is 80 bytes inside a block of size 88 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5062FE4: G__malloc (alloc.c:39)
==24618== by 0x406CD8: operator (expression.c:41)
==24618== by 0x4136BA: yyparse (mapcalc.y:165)
==24618== by 0x414077: parse_string (mapcalc.y:253)
==24618== by 0x40432C: main (main.c:148)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984890 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during read of size 8 at 0x69834F0 by thread #1
==24618== Locks held: none
==24618== at 0x40AB82: f_div (xdiv.c:66)
==24618== by 0x4053ED: evaluate_function (evaluate.c:174)
==24618== by 0x405A20: execute (evaluate.c:210)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== This conflicts with a previous write of size 8 by thread #3
==24618== Locks held: 1, at address 0x6984890
==24618== at 0x40AC0E: f_double (xdouble.c:37)
==24618== by 0x4053ED: evaluate_function (evaluate.c:174)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== Address 0x69834F0 is 0 bytes inside a block of size 960 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5062FE4: G__malloc (alloc.c:39)
==24618== by 0x4050CF: initialize_function (evaluate.c:27)
==24618== by 0x40513F: initialize_function (evaluate.c:88)
==24618== by 0x405920: execute (evaluate.c:71)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984890 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during read of size 8 at 0x65D5B70 by thread #1
==24618== Locks held: none
==24618== at 0x5084051: G_begin_execute (worker.c:71)
==24618== by 0x40539D: evaluate_function (evaluate.c:107)
==24618== by 0x405A20: execute (evaluate.c:210)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== This conflicts with a previous write of size 8 by thread #3
==24618== Locks held: 1, at address 0x6984890
==24618== at 0x508400F: worker (worker.c:45)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== Address 0x65D5B70 is 80 bytes inside a block of size 88 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5062FE4: G__malloc (alloc.c:39)
==24618== by 0x406CD8: operator (expression.c:41)
==24618== by 0x4136BA: yyparse (mapcalc.y:165)
==24618== by 0x414077: parse_string (mapcalc.y:253)
==24618== by 0x40432C: main (main.c:148)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x52934C0 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5084200: G_init_workers (worker.c:114)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984890 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during read of size 8 at 0x6984840 by thread #1
==24618== Locks held: 1, at address 0x52934C0
==24618== at 0x50840B2: G_begin_execute (worker.c:60)
==24618== by 0x40539D: evaluate_function (evaluate.c:107)
==24618== by 0x405A20: execute (evaluate.c:210)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== This conflicts with a previous write of size 8 by thread #3
==24618== Locks held: 1, at address 0x6984890
==24618== at 0x5083FFD: worker (worker.c:43)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== Address 0x6984840 is 128 bytes inside a block of size 1024 alloc'd
==24618== at 0x4C29F64: calloc (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5063076: G__calloc (alloc.c:81)
==24618== by 0x5084245: G_init_workers (worker.c:118)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x52934C0 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5084200: G_init_workers (worker.c:114)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984890 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during write of size 8 at 0x65D5B70 by thread #1
==24618== Locks held: 1, at address 0x52934C0
==24618== at 0x50840F8: G_begin_execute (worker.c:78)
==24618== by 0x40539D: evaluate_function (evaluate.c:107)
==24618== by 0x405A20: execute (evaluate.c:210)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== This conflicts with a previous write of size 8 by thread #3
==24618== Locks held: 1, at address 0x6984890
==24618== at 0x508400F: worker (worker.c:45)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== Address 0x65D5B70 is 80 bytes inside a block of size 88 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5062FE4: G__malloc (alloc.c:39)
==24618== by 0x406CD8: operator (expression.c:41)
==24618== by 0x4136BA: yyparse (mapcalc.y:165)
==24618== by 0x414077: parse_string (mapcalc.y:253)
==24618== by 0x40432C: main (main.c:148)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984810 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984990 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during read of size 8 at 0x65D5670 by thread #2
==24618== Locks held: 1, at address 0x6984810
==24618== at 0x5084051: G_begin_execute (worker.c:71)
==24618== by 0x40539D: evaluate_function (evaluate.c:107)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== This conflicts with a previous write of size 8 by thread #5
==24618== Locks held: 1, at address 0x6984990
==24618== at 0x508400F: worker (worker.c:45)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== Address 0x65D5670 is 80 bytes inside a block of size 88 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5062FE4: G__malloc (alloc.c:39)
==24618== by 0x406757: mapname (expression.c:41)
==24618== by 0x4138E0: yyparse (mapcalc.y:128)
==24618== by 0x414077: parse_string (mapcalc.y:253)
==24618== by 0x40432C: main (main.c:148)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x52934C0 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5084200: G_init_workers (worker.c:114)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984810 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984990 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during write of size 8 at 0x65D5670 by thread #2
==24618== Locks held: 2, at addresses 0x52934C0 0x6984810
==24618== at 0x50840F8: G_begin_execute (worker.c:78)
==24618== by 0x40539D: evaluate_function (evaluate.c:107)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== This conflicts with a previous write of size 8 by thread #5
==24618== Locks held: 1, at address 0x6984990
==24618== at 0x508400F: worker (worker.c:45)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== Address 0x65D5670 is 80 bytes inside a block of size 88 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5062FE4: G__malloc (alloc.c:39)
==24618== by 0x406757: mapname (expression.c:41)
==24618== by 0x4138E0: yyparse (mapcalc.y:128)
==24618== by 0x414077: parse_string (mapcalc.y:253)
==24618== by 0x40432C: main (main.c:148)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984810 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984910 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984990 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during read of size 8 at 0x65D5670 by thread #2
==24618== Locks held: 2, at addresses 0x6984810 0x6984910
==24618== at 0x508419C: G_end_execute (worker.c:104)
==24618== by 0x4053D6: evaluate_function (evaluate.c:112)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== This conflicts with a previous write of size 8 by thread #5
==24618== Locks held: 1, at address 0x6984990
==24618== at 0x508400F: worker (worker.c:45)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== Address 0x65D5670 is 80 bytes inside a block of size 88 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5062FE4: G__malloc (alloc.c:39)
==24618== by 0x406757: mapname (expression.c:41)
==24618== by 0x4138E0: yyparse (mapcalc.y:128)
==24618== by 0x414077: parse_string (mapcalc.y:253)
==24618== by 0x40432C: main (main.c:148)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984910 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984810 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during write of size 8 at 0x6983050 by thread #4
==24618== Locks held: 1, at address 0x6984910
==24618== at 0x407B0A: read_map.isra.1 (map3.c:103)
==24618== by 0x40564C: do_evaluate (evaluate.c:151)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== This conflicts with a previous read of size 8 by thread #2
==24618== Locks held: 1, at address 0x6984810
==24618== at 0x409C44: f_add (xadd.c:66)
==24618== by 0x4053ED: evaluate_function (evaluate.c:174)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== Address 0x6983050 is 0 bytes inside a block of size 960 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5062FE4: G__malloc (alloc.c:39)
==24618== by 0x405196: initialize_function (evaluate.c:27)
==24618== by 0x40513F: initialize_function (evaluate.c:88)
==24618== by 0x405920: execute (evaluate.c:71)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984910 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984810 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during write of size 8 at 0x65D5670 by thread #4
==24618== Locks held: 1, at address 0x6984910
==24618== at 0x508400F: worker (worker.c:45)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== This conflicts with a previous read of size 8 by thread #2
==24618== Locks held: 1, at address 0x6984810
==24618== at 0x5084184: G_end_execute (worker.c:98)
==24618== by 0x4053D6: evaluate_function (evaluate.c:112)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== Address 0x65D5670 is 80 bytes inside a block of size 88 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5062FE4: G__malloc (alloc.c:39)
==24618== by 0x406757: mapname (expression.c:41)
==24618== by 0x4138E0: yyparse (mapcalc.y:128)
==24618== by 0x414077: parse_string (mapcalc.y:253)
==24618== by 0x40432C: main (main.c:148)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984810 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984910 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during read of size 8 at 0x65D5670 by thread #2
==24618== Locks held: 2, at addresses 0x6984810 0x6984910
==24618== at 0x50841B3: G_end_execute (worker.c:104)
==24618== by 0x4053D6: evaluate_function (evaluate.c:112)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== This conflicts with a previous write of size 8 by thread #4
==24618== Locks held: 1, at address 0x6984910
==24618== at 0x508400F: worker (worker.c:45)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== Address 0x65D5670 is 80 bytes inside a block of size 88 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5062FE4: G__malloc (alloc.c:39)
==24618== by 0x406757: mapname (expression.c:41)
==24618== by 0x4138E0: yyparse (mapcalc.y:128)
==24618== by 0x414077: parse_string (mapcalc.y:253)
==24618== by 0x40432C: main (main.c:148)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984910 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984890 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during write of size 4 at 0x6982688 by thread
#4
==24618== Locks held: 1, at address 0x6984910
==24618== at 0x4E3DCA5: Rast3d_cache_hash_name2index (cachehash.c:119)
==24618== by 0x4E3C631: Rast3d_cache_unlock (cache1.c:306)
==24618== by 0x4E3CAD1: Rast3d_cache_elt_ptr (cache1.c:500)
==24618== by 0x4E4C1BB: Rast3d_get_tile_ptr (tileio.c:88)
==24618== by 0x4E44C82: Rast3d_get_double_region (getvalue.c:224)
==24618== by 0x4E44DCD: Rast3d_get_value_region (getvalue.c:263)
==24618== by 0x4E4ACF2: Rast3d_nearest_neighbor (resample.c:39)
==24618== by 0x407B3B: read_map.isra.1 (map3.c:99)
==24618== by 0x40564C: do_evaluate (evaluate.c:151)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== This conflicts with a previous write of size 4 by thread #3
==24618== Locks held: 1, at address 0x6984890
==24618== at 0x4E3DCA5: Rast3d_cache_hash_name2index (cachehash.c:119)
==24618== by 0x4E3C99B: Rast3d_cache_elt_ptr (cache1.c:469)
==24618== by 0x4E4C1BB: Rast3d_get_tile_ptr (tileio.c:88)
==24618== by 0x4E44C82: Rast3d_get_double_region (getvalue.c:224)
==24618== by 0x4E44DCD: Rast3d_get_value_region (getvalue.c:263)
==24618== by 0x4E4ACF2: Rast3d_nearest_neighbor (resample.c:39)
==24618== by 0x407B3B: read_map.isra.1 (map3.c:99)
==24618== by 0x40564C: do_evaluate (evaluate.c:151)
==24618==
==24618== Address 0x6982688 is 24 bytes inside a block of size 40 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x4E3BED6: Rast3d_malloc (alloc.c:28)
==24618== by 0x4E3DB51: Rast3d_cache_hash_new (cachehash.c:55)
==24618== by 0x4E3C495: Rast3d_cache_new (cache1.c:127)
==24618== by 0x4E3C559: Rast3d_cache_new_read (cache1.c:164)
==24618== by 0x4E3D715: Rast3d_init_cache (cache.c:25)
==24618== by 0x4E46032: Rast3d_fill_header (header.c:446)
==24618== by 0x4E49629: Rast3d_open_cell_old (open.c:164)
==24618== by 0x407F59: open_map (map3.c:510)
==24618== by 0x4051B4: initialize_function (evaluate.c:50)
==24618== by 0x40513F: initialize_function (evaluate.c:88)
==24618== by 0x405920: execute (evaluate.c:71)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984910 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984890 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during write of size 4 at 0x6982690 by thread
#4
==24618== Locks held: 1, at address 0x6984910
==24618== at 0x4E3DCA8: Rast3d_cache_hash_name2index (cachehash.c:121)
==24618== by 0x4E3C631: Rast3d_cache_unlock (cache1.c:306)
==24618== by 0x4E3CAD1: Rast3d_cache_elt_ptr (cache1.c:500)
==24618== by 0x4E4C1BB: Rast3d_get_tile_ptr (tileio.c:88)
==24618== by 0x4E44C82: Rast3d_get_double_region (getvalue.c:224)
==24618== by 0x4E44DCD: Rast3d_get_value_region (getvalue.c:263)
==24618== by 0x4E4ACF2: Rast3d_nearest_neighbor (resample.c:39)
==24618== by 0x407B3B: read_map.isra.1 (map3.c:99)
==24618== by 0x40564C: do_evaluate (evaluate.c:151)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== This conflicts with a previous write of size 4 by thread #3
==24618== Locks held: 1, at address 0x6984890
==24618== at 0x4E3DCA8: Rast3d_cache_hash_name2index (cachehash.c:121)
==24618== by 0x4E3C99B: Rast3d_cache_elt_ptr (cache1.c:469)
==24618== by 0x4E4C1BB: Rast3d_get_tile_ptr (tileio.c:88)
==24618== by 0x4E44C82: Rast3d_get_double_region (getvalue.c:224)
==24618== by 0x4E44DCD: Rast3d_get_value_region (getvalue.c:263)
==24618== by 0x4E4ACF2: Rast3d_nearest_neighbor (resample.c:39)
==24618== by 0x407B3B: read_map.isra.1 (map3.c:99)
==24618== by 0x40564C: do_evaluate (evaluate.c:151)
==24618==
==24618== Address 0x6982690 is 32 bytes inside a block of size 40 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x4E3BED6: Rast3d_malloc (alloc.c:28)
==24618== by 0x4E3DB51: Rast3d_cache_hash_new (cachehash.c:55)
==24618== by 0x4E3C495: Rast3d_cache_new (cache1.c:127)
==24618== by 0x4E3C559: Rast3d_cache_new_read (cache1.c:164)
==24618== by 0x4E3D715: Rast3d_init_cache (cache.c:25)
==24618== by 0x4E46032: Rast3d_fill_header (header.c:446)
==24618== by 0x4E49629: Rast3d_open_cell_old (open.c:164)
==24618== by 0x407F59: open_map (map3.c:510)
==24618== by 0x4051B4: initialize_function (evaluate.c:50)
==24618== by 0x40513F: initialize_function (evaluate.c:88)
==24618== by 0x405920: execute (evaluate.c:71)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984910 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984890 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during write of size 4 at 0x698268C by thread
#4
==24618== Locks held: 1, at address 0x6984910
==24618== at 0x4E3DCAF: Rast3d_cache_hash_name2index (cachehash.c:120)
==24618== by 0x4E3C631: Rast3d_cache_unlock (cache1.c:306)
==24618== by 0x4E3CAD1: Rast3d_cache_elt_ptr (cache1.c:500)
==24618== by 0x4E4C1BB: Rast3d_get_tile_ptr (tileio.c:88)
==24618== by 0x4E44C82: Rast3d_get_double_region (getvalue.c:224)
==24618== by 0x4E44DCD: Rast3d_get_value_region (getvalue.c:263)
==24618== by 0x4E4ACF2: Rast3d_nearest_neighbor (resample.c:39)
==24618== by 0x407B3B: read_map.isra.1 (map3.c:99)
==24618== by 0x40564C: do_evaluate (evaluate.c:151)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== This conflicts with a previous write of size 4 by thread #3
==24618== Locks held: 1, at address 0x6984890
==24618== at 0x4E3DCAF: Rast3d_cache_hash_name2index (cachehash.c:120)
==24618== by 0x4E3C99B: Rast3d_cache_elt_ptr (cache1.c:469)
==24618== by 0x4E4C1BB: Rast3d_get_tile_ptr (tileio.c:88)
==24618== by 0x4E44C82: Rast3d_get_double_region (getvalue.c:224)
==24618== by 0x4E44DCD: Rast3d_get_value_region (getvalue.c:263)
==24618== by 0x4E4ACF2: Rast3d_nearest_neighbor (resample.c:39)
==24618== by 0x407B3B: read_map.isra.1 (map3.c:99)
==24618== by 0x40564C: do_evaluate (evaluate.c:151)
==24618==
==24618== Address 0x698268C is 28 bytes inside a block of size 40 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x4E3BED6: Rast3d_malloc (alloc.c:28)
==24618== by 0x4E3DB51: Rast3d_cache_hash_new (cachehash.c:55)
==24618== by 0x4E3C495: Rast3d_cache_new (cache1.c:127)
==24618== by 0x4E3C559: Rast3d_cache_new_read (cache1.c:164)
==24618== by 0x4E3D715: Rast3d_init_cache (cache.c:25)
==24618== by 0x4E46032: Rast3d_fill_header (header.c:446)
==24618== by 0x4E49629: Rast3d_open_cell_old (open.c:164)
==24618== by 0x407F59: open_map (map3.c:510)
==24618== by 0x4051B4: initialize_function (evaluate.c:50)
==24618== by 0x40513F: initialize_function (evaluate.c:88)
==24618== by 0x405920: execute (evaluate.c:71)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984910 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984890 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during read of size 4 at 0x698268C by thread
#4
==24618== Locks held: 1, at address 0x6984910
==24618== at 0x4E3DCB8: Rast3d_cache_hash_name2index (cachehash.c:112)
==24618== by 0x4E3C99B: Rast3d_cache_elt_ptr (cache1.c:469)
==24618== by 0x4E4C1BB: Rast3d_get_tile_ptr (tileio.c:88)
==24618== by 0x4E44C82: Rast3d_get_double_region (getvalue.c:224)
==24618== by 0x4E44DCD: Rast3d_get_value_region (getvalue.c:263)
==24618== by 0x4E4ACF2: Rast3d_nearest_neighbor (resample.c:39)
==24618== by 0x407B3B: read_map.isra.1 (map3.c:99)
==24618== by 0x40564C: do_evaluate (evaluate.c:151)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== This conflicts with a previous write of size 4 by thread #3
==24618== Locks held: 1, at address 0x6984890
==24618== at 0x4E3DCAF: Rast3d_cache_hash_name2index (cachehash.c:120)
==24618== by 0x4E3C99B: Rast3d_cache_elt_ptr (cache1.c:469)
==24618== by 0x4E4C1BB: Rast3d_get_tile_ptr (tileio.c:88)
==24618== by 0x4E44C82: Rast3d_get_double_region (getvalue.c:224)
==24618== by 0x4E44DCD: Rast3d_get_value_region (getvalue.c:263)
==24618== by 0x4E4ACF2: Rast3d_nearest_neighbor (resample.c:39)
==24618== by 0x407B3B: read_map.isra.1 (map3.c:99)
==24618== by 0x40564C: do_evaluate (evaluate.c:151)
==24618==
==24618== Address 0x698268C is 28 bytes inside a block of size 40 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x4E3BED6: Rast3d_malloc (alloc.c:28)
==24618== by 0x4E3DB51: Rast3d_cache_hash_new (cachehash.c:55)
==24618== by 0x4E3C495: Rast3d_cache_new (cache1.c:127)
==24618== by 0x4E3C559: Rast3d_cache_new_read (cache1.c:164)
==24618== by 0x4E3D715: Rast3d_init_cache (cache.c:25)
==24618== by 0x4E46032: Rast3d_fill_header (header.c:446)
==24618== by 0x4E49629: Rast3d_open_cell_old (open.c:164)
==24618== by 0x407F59: open_map (map3.c:510)
==24618== by 0x4051B4: initialize_function (evaluate.c:50)
==24618== by 0x40513F: initialize_function (evaluate.c:88)
==24618== by 0x405920: execute (evaluate.c:71)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984910 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984990 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during read of size 8 at 0x6983058 by thread
#4
==24618== Locks held: 1, at address 0x6984910
==24618== at 0x404A2D: column_shift (column_shift.c:44)
==24618== by 0x407AD5: read_map.isra.1 (map3.c:361)
==24618== by 0x40564C: do_evaluate (evaluate.c:151)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== This conflicts with a previous write of size 8 by thread #5
==24618== Locks held: 1, at address 0x6984990
==24618== at 0x407B0A: read_map.isra.1 (map3.c:103)
==24618== by 0x40564C: do_evaluate (evaluate.c:151)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== Address 0x6983058 is 8 bytes inside a block of size 960 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5062FE4: G__malloc (alloc.c:39)
==24618== by 0x405196: initialize_function (evaluate.c:27)
==24618== by 0x40513F: initialize_function (evaluate.c:88)
==24618== by 0x405920: execute (evaluate.c:71)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984910 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984810 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during write of size 8 at 0x6983050 by thread
#4
==24618== Locks held: 1, at address 0x6984910
==24618== at 0x404A1C: column_shift (column_shift.c:47)
==24618== by 0x407AD5: read_map.isra.1 (map3.c:361)
==24618== by 0x40564C: do_evaluate (evaluate.c:151)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== This conflicts with a previous read of size 8 by thread #2
==24618== Locks held: 1, at address 0x6984810
==24618== at 0x409C44: f_add (xadd.c:66)
==24618== by 0x4053ED: evaluate_function (evaluate.c:174)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== Address 0x6983050 is 0 bytes inside a block of size 960 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5062FE4: G__malloc (alloc.c:39)
==24618== by 0x405196: initialize_function (evaluate.c:27)
==24618== by 0x40513F: initialize_function (evaluate.c:88)
==24618== by 0x405920: execute (evaluate.c:71)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984910 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984810 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during write of size 8 at 0x6983050 by thread
#4
==24618== Locks held: 1, at address 0x6984910
==24618== at 0x52AE640: Rast_set_d_null_value (string3.h:52)
==24618== by 0x407A36: read_map.isra.1 (map3.c:348)
==24618== by 0x40564C: do_evaluate (evaluate.c:151)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== This conflicts with a previous read of size 8 by thread #2
==24618== Locks held: 1, at address 0x6984810
==24618== at 0x409C44: f_add (xadd.c:66)
==24618== by 0x4053ED: evaluate_function (evaluate.c:174)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== Address 0x6983050 is 0 bytes inside a block of size 960 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5062FE4: G__malloc (alloc.c:39)
==24618== by 0x405196: initialize_function (evaluate.c:27)
==24618== by 0x40513F: initialize_function (evaluate.c:88)
==24618== by 0x405920: execute (evaluate.c:71)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984910 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during write of size 4 at 0x61A9F0 by thread
#1
==24618== Locks held: none
==24618== at 0x4059A0: execute (evaluate.c:329)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== This conflicts with a previous read of size 4 by thread #4
==24618== Locks held: 1, at address 0x6984910
==24618== at 0x40563D: do_evaluate (evaluate.c:151)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x6984810 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during read of size 8 at 0x65D6380 by thread
#1
==24618== Locks held: none
==24618== at 0x40AB76: f_div (xdiv.c:66)
==24618== by 0x4053ED: evaluate_function (evaluate.c:174)
==24618== by 0x405A20: execute (evaluate.c:210)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== This conflicts with a previous write of size 8 by thread #2
==24618== Locks held: 1, at address 0x6984810
==24618== at 0x409C20: f_add (xadd.c:64)
==24618== by 0x4053ED: evaluate_function (evaluate.c:174)
==24618== by 0x5083FF8: worker (worker.c:41)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== Address 0x65D6380 is 0 bytes inside a block of size 960 alloc'd
==24618== at 0x4C2B87D: malloc (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5062FE4: G__malloc (alloc.c:39)
==24618== by 0x4050CF: initialize_function (evaluate.c:27)
==24618== by 0x40513F: initialize_function (evaluate.c:88)
==24618== by 0x405920: execute (evaluate.c:71)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Lock at 0x52934C0 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5084200: G_init_workers (worker.c:114)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Lock at 0x6984810 was first observed
==24618== at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x508427C: G_init_workers (worker.c:122)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== Possible data race during read of size 8 at 0x69847C0 by thread
#1
==24618== Locks held: 1, at address 0x52934C0
==24618== at 0x5084091: G_begin_execute (worker.c:60)
==24618== by 0x40539D: evaluate_function (evaluate.c:107)
==24618== by 0x405A20: execute (evaluate.c:210)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== This conflicts with a previous write of size 8 by thread #2
==24618== Locks held: 1, at address 0x6984810
==24618== at 0x5083FFD: worker (worker.c:43)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== Address 0x69847C0 is 0 bytes inside a block of size 1024 alloc'd
==24618== at 0x4C29F64: calloc (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5063076: G__calloc (alloc.c:81)
==24618== by 0x5084245: G_init_workers (worker.c:118)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Possible data race during write of size 4 at 0x6984938 by thread
#1
==24618== Locks held: none
==24618== at 0x50842F4: G_finish_workers (worker.c:134)
==24618== by 0x4059B5: execute (evaluate.c:350)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== This conflicts with a previous read of size 4 by thread #4
==24618== Locks held: none
==24618== at 0x508402F: worker (worker.c:36)
==24618== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==24618== by 0x56CBE99: start_thread (pthread_create.c:308)
==24618==
==24618== Address 0x6984938 is 376 bytes inside a block of size 1024
alloc'd
==24618== at 0x4C29F64: calloc (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5063076: G__calloc (alloc.c:81)
==24618== by 0x5084245: G_init_workers (worker.c:118)
==24618== by 0x4057E6: execute (evaluate.c:327)
==24618== by 0x404387: main (main.c:158)
==24618==
==24618== ----------------------------------------------------------------
==24618==
==24618== Thread #1's call to pthread_mutex_destroy failed
==24618== with error code 16 (EBUSY: Device or resource busy)
==24618== at 0x4C2DF2F: pthread_mutex_destroy (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==24618== by 0x5084343: G_finish_workers (worker.c:141)
==24618== by 0x4059B5: execute (evaluate.c:350)
==24618== by 0x404387: main (main.c:158)
==24618==
  100%
==24618==
==24618== For counts of detected and suppressed errors, rerun with: -v
==24618== Use --history-level=approx or =none to gain increased speed, at
==24618== the cost of reduced accuracy of conflicting-access information
==24618== ERROR SUMMARY: 1029629 errors from 42 contexts (suppressed:
655560 from 190)
}}}

--
Ticket URL: <http://trac.osgeo.org/grass/ticket/2074#comment:2&gt;
GRASS GIS <http://grass.osgeo.org>

#2074: r3.mapcalc neighborhood modifier hash table and tile errors
-------------------------+--------------------------------------------------
Reporter: wenzeslaus | Owner: grass-dev@…
     Type: defect | Status: new
Priority: normal | Milestone: 7.0.0
Component: Raster3D | Version: svn-trunk
Keywords: r3.mapcalc | Platform: All
      Cpu: Unspecified |
-------------------------+--------------------------------------------------

Comment(by glynn):

Replying to [comment:1 huhabla]:
> I can confirm this issue when grass7 is compiled with pthreads
support. The errors appear randomly at different indices. It looks like a
race condition to me, so this issue may be related to the pthreads
parallelism in r3.mapcalc? The problem may stem from the static and
global variables used in the raster3d library.
>
> When grass7 is compiled without pthreads support, the errors disappear
and everything works as expected.

Does calling putenv("WORKERS=1") from setup_maps() in map3.c avoid the
problem? Failing that, we'd need to add a mutex around most of the code in
map3.c (see cats_mutex in map.c for an example).

--
Ticket URL: <https://trac.osgeo.org/grass/ticket/2074#comment:3&gt;
GRASS GIS <http://grass.osgeo.org>

#2074: r3.mapcalc neighborhood modifier hash table and tile errors
-------------------------+--------------------------------------------------
Reporter: wenzeslaus | Owner: grass-dev@…
     Type: defect | Status: new
Priority: normal | Milestone: 7.0.0
Component: Raster3D | Version: svn-trunk
Keywords: r3.mapcalc | Platform: All
      Cpu: Unspecified |
-------------------------+--------------------------------------------------

Comment(by huhabla):

It seems that putting putenv("WORKERS=1") into setup_maps() in map3.c
does the job. Many thanks, Glynn!
Here is the helgrind log:

{{{
GRASS 7.0.svn (nc_spm_08_grass7):~/src/grass7.0/grass_trunk > valgrind
--tool=helgrind r3.mapcalc "new_map = (test_map[0, 0, 0] + test_map[1, 1,
0]) / 2" --o
==30587== Helgrind, a thread error detector
==30587== Copyright (C) 2007-2011, and GNU GPL'd, by OpenWorks LLP et al.
==30587== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright
info
==30587== Command: r3.mapcalc new_map\ =\ (test_map[0,\ 0,\ 0]\ +\
test_map[1,\ 1,\ 0])\ /\ 2 --o
==30587==

==30587== ---Thread-Announcement------------------------------------------
==30587==
==30587== Thread #2 was created
==30587== at 0x5CE2C8E: clone (clone.S:77)
==30587== by 0x56DCF6F: do_clone.constprop.4 (createthread.c:75)
==30587== by 0x56DE57F: pthread_create@@GLIBC_2.2.5
(createthread.c:256)
==30587== by 0x4C2DAAD: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==30587== by 0x508F5EC: G_init_workers (worker.c:124)
==30587== by 0x405506: execute (evaluate.c:327)
==30587== by 0x407201: main (main.c:158)
==30587==
==30587== ----------------------------------------------------------------
==30587==
==30587== Thread #2: lock order "0x52A0A20 before 0x65D3B90" violated
==30587==
==30587== Observed (incorrect) order is: acquisition of lock at 0x65D3B90
==30587== at 0x4C2D1BE: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==30587== by 0x508F2B5: worker (worker.c:39)
==30587== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==30587== by 0x56DDE99: start_thread (pthread_create.c:308)
==30587==
==30587== followed by a later acquisition of lock at 0x52A0A20
==30587== at 0x4C2E0ED: pthread_mutex_lock (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==30587== by 0x508F3CA: G_begin_execute (worker.c:74)
==30587== by 0x404EB7: begin_evaluate (evaluate.c:107)
==30587== by 0x405098: evaluate_function (evaluate.c:165)
==30587== by 0x4052B6: evaluate (evaluate.c:228)
==30587== by 0x404E8B: do_evaluate (evaluate.c:102)
==30587== by 0x508F2D5: worker (worker.c:41)
==30587== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==30587== by 0x56DDE99: start_thread (pthread_create.c:308)
==30587==
==30587== Required order was established by acquisition of lock at
0x52A0A20
==30587== at 0x4C2E0ED: pthread_mutex_lock (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==30587== by 0x508F3CA: G_begin_execute (worker.c:74)
==30587== by 0x404EB7: begin_evaluate (evaluate.c:107)
==30587== by 0x405098: evaluate_function (evaluate.c:165)
==30587== by 0x4052B6: evaluate (evaluate.c:228)
==30587== by 0x40525B: evaluate_binding (evaluate.c:210)
==30587== by 0x4052C4: evaluate (evaluate.c:231)
==30587== by 0x405566: execute (evaluate.c:338)
==30587== by 0x407201: main (main.c:158)
==30587==
==30587== followed by a later acquisition of lock at 0x65D3B90
==30587== at 0x4C2E0ED: pthread_mutex_lock (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==30587== by 0x508F442: G_begin_execute (worker.c:86)
==30587== by 0x404EB7: begin_evaluate (evaluate.c:107)
==30587== by 0x405098: evaluate_function (evaluate.c:165)
==30587== by 0x4052B6: evaluate (evaluate.c:228)
==30587== by 0x40525B: evaluate_binding (evaluate.c:210)
==30587== by 0x4052C4: evaluate (evaluate.c:231)
==30587== by 0x405566: execute (evaluate.c:338)
==30587== by 0x407201: main (main.c:158)
==30587==
==30587== ----------------------------------------------------------------
==30587==
==30587== Thread #2: pthread_cond_{signal,broadcast}: dubious: associated
lock is not held by any thread
==30587== at 0x4C2CC23: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==30587== by 0x508F31B: worker (worker.c:47)
==30587== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==30587== by 0x56DDE99: start_thread (pthread_create.c:308)
==30587==
==30587== ---Thread-Announcement------------------------------------------
==30587==
==30587== Thread #1 is the program's root thread
==30587==
==30587== ----------------------------------------------------------------
==30587==
==30587== Thread #1: lock order "0x65D3B90 before 0x52A0A20" violated
==30587==
==30587== Observed (incorrect) order is: acquisition of lock at 0x52A0A20
==30587== at 0x4C2E0ED: pthread_mutex_lock (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==30587== by 0x508F3CA: G_begin_execute (worker.c:74)
==30587== by 0x404EB7: begin_evaluate (evaluate.c:107)
==30587== by 0x405098: evaluate_function (evaluate.c:165)
==30587== by 0x4052B6: evaluate (evaluate.c:228)
==30587== by 0x40525B: evaluate_binding (evaluate.c:210)
==30587== by 0x4052C4: evaluate (evaluate.c:231)
==30587== by 0x405566: execute (evaluate.c:338)
==30587== by 0x407201: main (main.c:158)
==30587==
==30587== followed by a later acquisition of lock at 0x65D3B90
==30587== at 0x4C2E0ED: pthread_mutex_lock (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==30587== by 0x508F442: G_begin_execute (worker.c:86)
==30587== by 0x404EB7: begin_evaluate (evaluate.c:107)
==30587== by 0x405098: evaluate_function (evaluate.c:165)
==30587== by 0x4052B6: evaluate (evaluate.c:228)
==30587== by 0x40525B: evaluate_binding (evaluate.c:210)
==30587== by 0x4052C4: evaluate (evaluate.c:231)
==30587== by 0x405566: execute (evaluate.c:338)
==30587== by 0x407201: main (main.c:158)
==30587==
==30587== Required order was established by acquisition of lock at
0x65D3B90
==30587== at 0x4C2D1BE: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==30587== by 0x508F2B5: worker (worker.c:39)
==30587== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==30587== by 0x56DDE99: start_thread (pthread_create.c:308)
==30587==
==30587== followed by a later acquisition of lock at 0x52A0A20
==30587== at 0x4C2E0ED: pthread_mutex_lock (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==30587== by 0x508F3CA: G_begin_execute (worker.c:74)
==30587== by 0x404EB7: begin_evaluate (evaluate.c:107)
==30587== by 0x405098: evaluate_function (evaluate.c:165)
==30587== by 0x4052B6: evaluate (evaluate.c:228)
==30587== by 0x404E8B: do_evaluate (evaluate.c:102)
==30587== by 0x508F2D5: worker (worker.c:41)
==30587== by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-
amd64-linux.so)
==30587== by 0x56DDE99: start_thread (pthread_create.c:308)
==30587==
==30587== ----------------------------------------------------------------
==30587==
==30587== Thread #1's call to pthread_mutex_destroy failed
==30587== with error code 16 (EBUSY: Device or resource busy)
==30587== at 0x4C2DF2F: pthread_mutex_destroy (in /usr/lib/valgrind
/vgpreload_helgrind-amd64-linux.so)
==30587== by 0x508F696: G_finish_workers (worker.c:141)
==30587== by 0x4055FC: execute (evaluate.c:350)
==30587== by 0x407201: main (main.c:158)
==30587==
  100%
==30587==
==30587== For counts of detected and suppressed errors, rerun with: -v
==30587== Use --history-level=approx or =none to gain increased speed, at
==30587== the cost of reduced accuracy of conflicting-access information
==30587== ERROR SUMMARY: 12800 errors from 4 contexts (suppressed: 182683
from 129)
}}}

I would like to commit the patch if there are no objections to it.

--
Ticket URL: <https://trac.osgeo.org/grass/ticket/2074#comment:4>
GRASS GIS <http://grass.osgeo.org>

#2074: r3.mapcalc neighborhood modifier hash table and tile errors

Comment(by wenzeslaus):

I would like to understand. Does the
{{{
putenv("WORKERS=1")
}}}
mean that there will be no parallel computation in `r3.mapcalc`? Or what
does the patch actually do?

--
Ticket URL: <https://trac.osgeo.org/grass/ticket/2074#comment:5>
GRASS GIS <http://grass.osgeo.org>

#2074: r3.mapcalc neighborhood modifier hash table and tile errors

Comment(by huhabla):

I will try an answer; Glynn, please correct me if I am wrong.

As far as I understand, `WORKERS` sets the number of worker threads that
have parallel read access to 3D raster maps (reading values) and perform
mapcalc sub-expression evaluation, so {{{putenv("WORKERS=1")}}} limits
that number to one. Since the 3D raster library was not designed to allow
parallel access, a race condition occurs when several threads try to read
values in parallel. In that case the map-specific tile index gets
corrupted.

The environment variable WORKERS is analysed in lib/gis/worker.c, which
implements the creation and execution of pthread-based worker threads.
The default number of workers is 8, but it can be overridden using the
WORKERS environment variable.

--
Ticket URL: <https://trac.osgeo.org/grass/ticket/2074#comment:6>
GRASS GIS <http://grass.osgeo.org>

#2074: r3.mapcalc neighborhood modifier hash table and tile errors

Comment(by glynn):

Replying to [comment:5 wenzeslaus]:
> I would like to understand. Does the
{{{
putenv("WORKERS=1")
}}}
> mean that there will be no parallel computation in `r3.mapcalc`?

Yes. When I added pthread support to r.mapcalc, I largely overlooked the
fact that it shares most of its code with !r3.mapcalc, and the raster3d
library doesn't appear to be thread-safe.

I had to make a few changes to libgis and libraster to allow for multiple
threads (e.g. r34444), but (as with so many other invasive, project-wide
changes) libraster3d was deemed too much trouble and was left alone.

One alternative would be to use a single mutex for all of the functions in
map3.c. This would allow calculations to be parallelised, but not I/O
(unfortunately, I/O accounts for most of the execution time).

The more complex option is to make libraster3d thread-safe, at least for
accessing different maps from different threads, and have a mutex for each
map to prevent concurrent access from multiple threads (this is what
r.mapcalc does). That would require eliminating some or all of its static
data, or protecting it with mutexes (the functions in lib/gis/counter.c
can be used to safely perform one-shot initialisation).

--
Ticket URL: <https://trac.osgeo.org/grass/ticket/2074#comment:7>
GRASS GIS <http://grass.osgeo.org>

#2074: r3.mapcalc neighborhood modifier hash table and tile errors

Comment(by annakrat):

I am getting the same error again:

{{{
ERROR: Rast3d_get_float_region: error in Rast3d_get_tile_ptr.Region
coordinates x 108 y 0 z 0 tile index 6 offset 0
Segmentation fault (core dumped)
}}}

for command

{{{
r3.mapcalc expr="if ( not(isnull(masking_3d@anna)), salinity_rst_20@anna,
null() )" --o
}}}

I have the newest version of GRASS 7 and it happens even without pthread
support. In GRASS 6.4, it creates the volume without complaining, but it
contains only NULL values. Both input maps look correct and have the same
3D region.

--
Ticket URL: <https://trac.osgeo.org/grass/ticket/2074#comment:8>
GRASS GIS <http://grass.osgeo.org>

#2074: r3.mapcalc neighborhood modifier hash table and tile errors

Comment(by stopkovaE):

I have had the same error using r3.mapcalc ("ERROR:
Rast3d_get_double_region: error in Rast3d_get_tile_ptr.Region coordinates
x 0 y 0 z 0 tile index 0 offset 0"). I tried to disable threading and it
works now...

{{{
#!diff
Index: raster/r.mapcalc/evaluate.c
--- raster/r.mapcalc/evaluate.c (revision 59096)
+++ raster/r.mapcalc/evaluate.c (working copy)
@@ -324,7 +324,7 @@
     count = rows * depths;
     n = 0;
 
-    G_init_workers();
+    //G_init_workers();
 
     for (current_depth = 0; current_depth < depths; current_depth++)
         for (current_row = 0; current_row < rows; current_row++) {
@@ -347,7 +347,7 @@
             n++;
         }
 
-    G_finish_workers();
+    //G_finish_workers();
 
     if (verbose)
         G_percent(n, count, 2);
}}}

--
Ticket URL: <http://trac.osgeo.org/grass/ticket/2074#comment:9>
GRASS GIS <http://grass.osgeo.org>

#2074: r3.mapcalc neighborhood modifier hash table and tile errors

Comment(by glynn):

Replying to [comment:9 stopkovaE]:
> I have had the same error using r3.mapcalc ("ERROR:
Rast3d_get_double_region: error in Rast3d_get_tile_ptr.Region coordinates
x 0 y 0 z 0 tile index 0 offset 0"). I tried to disable threading and it
works now...

If pthread support is enabled, the supplied patch isn't safe, as e.g.
G_begin_execute() will try to lock a mutex which was never initialised.

If pthread support is disabled, the patch is unnecessary, as
G_init_workers() and G_finish_workers() are empty in that case.

--
Ticket URL: <http://trac.osgeo.org/grass/ticket/2074#comment:10>
GRASS GIS <http://grass.osgeo.org>

#2074: r3.mapcalc neighborhood modifier hash table and tile errors

Comment(by wenzeslaus):

There are a lot of these errors in the tests:

  *
http://fatra.cnr.ncsu.edu/grassgistests/reports_for_date-2014-09-30-07-00/report_for_nc_spm_08_grass7_nc/lib/python/gunittest/test_assertions_rast3d/index.html

  *
http://fatra.cnr.ncsu.edu/grassgistests/reports_for_date-2014-09-30-07-00/report_for_nc_spm_08_grass7_nc/lib/python/temporal/test_temporal_raster3d_algebra/index.html

{{{
ERROR: Rast3d_get_double_region: error in Rast3d_get_tile_ptr.Region
        coordinates x 0 y 0 z 34 tile index 1 offset 0
}}}

I thought that `putenv("WORKERS=1")` solved it and that I don't have to
set anything; is that right?

--
Ticket URL: <http://trac.osgeo.org/grass/ticket/2074#comment:11>
GRASS GIS <http://grass.osgeo.org>

#2074: r3.mapcalc neighborhood modifier hash table and tile errors

Comment(by glynn):

Replying to [comment:11 wenzeslaus]:

> I thought that `putenv("WORKERS=1")` solved it and that I don't have to
set anything; is that right?

Actually it probably needs to be `putenv("WORKERS=0")`. WORKERS=1 will
create 1 additional thread, but will also execute code in the main thread.

--
Ticket URL: <http://trac.osgeo.org/grass/ticket/2074#comment:12>
GRASS GIS <http://grass.osgeo.org>

#2074: r3.mapcalc neighborhood modifier hash table and tile errors

Comment(by wenzeslaus):

Replying to [comment:12 glynn]:
> Replying to [comment:11 wenzeslaus]:
>
> > I thought that `putenv("WORKERS=1")` solved it and that I don't have
to set anything; is that right?
>
> Actually it probably needs to be `putenv("WORKERS=0")`. WORKERS=1 will
create 1 additional thread, but will also execute code in the main thread.

This makes sense, although it is not consistent with other uses of
`WORKERS`:

`r3.in.xyz`:
{{{
To enable parallel processing support, set the <b>workers=</b> option
to match the number of CPUs or CPU-cores available on your system.
Alternatively, the <tt>WORKERS</tt> environment variable can be set
to the number of concurrent processes desired.
}}}

`i.oif`:
{{{
By default the module will calculate standard deviations for all bands in
parallel. To run serially use the <b>-s</b> flag. If the <tt>WORKERS</tt>
environment variable is set, the number of concurrent processes will be
limited to that number of jobs.
}}}

Anyway, I made the change in r62147. The issue is that I wanted to create
a test which would show the problem with `WORKERS=1` (and run with
`WORKERS=0`), but I was able to get the error only sometimes. More often
I got a different error. It was an error in the computation and I don't
understand it:

{{{
FAIL: test_difference_of_the_same_map_double
(__main__.TestBasicOperations)
Test zero difference of map with itself
----------------------------------------------------------------------
Traceback (most recent call last):
   File "test_r3_mapcalc.py", line 35, in
test_difference_of_the_same_map_double
     self.assertRaster3dMinMax('diff_a_a', refmin=0, refmax=0)
   File "/home/vasek/dev/grass/gcc_trunk/dist.x86_64-unknown-linux-
gnu/etc/python/grass/gunittest/case.py", line 449, in assertRaster3dMinMax
     self.fail(self._formatMessage(msg, stdmsg))
AssertionError: The actual minimum (-194.646443) is smaller than the
reference one (0) for 3D raster map diff_a_a (with maximum 0.0)

======================================================================
FAIL: test_difference_of_the_same_map_float (__main__.TestBasicOperations)
Test zero difference of map with itself
----------------------------------------------------------------------
Traceback (most recent call last):
   File "test_r3_mapcalc.py", line 45, in
test_difference_of_the_same_map_float
     self.assertRaster3dMinMax('diff_af_af', refmin=0, refmax=0)
   File "/home/vasek/dev/grass/gcc_trunk/dist.x86_64-unknown-linux-
gnu/etc/python/grass/gunittest/case.py", line 449, in assertRaster3dMinMax
     self.fail(self._formatMessage(msg, stdmsg))
AssertionError: The actual minimum (-193.378128) is smaller than the
reference one (0) for 3D raster map diff_af_af (with maximum 0.0)
}}}

--
Ticket URL: <http://trac.osgeo.org/grass/ticket/2074#comment:13>
GRASS GIS <http://grass.osgeo.org>

#2074: r3.mapcalc neighborhood modifier hash table and tile errors

Comment(by neteler):

Replying to [comment:13 wenzeslaus]:
> Replying to [comment:12 glynn]:
> > Replying to [comment:11 wenzeslaus]:
> >
> > > I thought that `putenv("WORKERS=1")` solved it and that I don't have
to set anything; is that right?
> >
> > Actually it probably needs to be `putenv("WORKERS=0")`. WORKERS=1 will
create 1 additional thread, but will also execute code in the main thread.
>
> This makes sense although it is not consistent with other usages of
`WORKERS`:

...

Just a (user) comment:

In general the notion of WORKERS=0 is a bit strange from a user's point
of view. I know that it is a C notion, but it is not quite human-readable.
Most users will put WORKERS=1 and then, surprise, find two parallel
threads (?)...

--
Ticket URL: <http://trac.osgeo.org/grass/ticket/2074#comment:14>
GRASS GIS <http://grass.osgeo.org>

#2074: r3.mapcalc neighborhood modifier hash table and tile errors
--------------------------------------------------------------------+-------
Reporter: wenzeslaus | Owner: grass-dev@…
     Type: defect | Status: new
Priority: normal | Milestone: 7.0.0
Component: Raster3D | Version: svn-trunk
Keywords: r3.mapcalc, parallelization, pthreads, workers, nprocs | Platform: All
      Cpu: Unspecified |
--------------------------------------------------------------------+-------
Changes (by wenzeslaus):

  * keywords: r3.mapcalc => r3.mapcalc, parallelization, pthreads,
               workers, nprocs

Comment:

Replying to [comment:14 neteler]:
> Replying to [comment:13 wenzeslaus]:
> > Replying to [comment:12 glynn]:
> > > Replying to [comment:11 wenzeslaus]:
> > >
> > > > I thought that `putenv("WORKERS=1")` solved it and that I don't
have to set anything; is that right?
> > >
> > > Actually it probably needs to be `putenv("WORKERS=0")`. WORKERS=1
will create 1 additional thread, but will also execute code in the main
thread.
> >
> > This makes sense although it is not consistent with other usages of
`WORKERS`:
>
> ...
>
> Just a (user) comment:
>
> In general the notion of WORKERS=0 is a bit strange from a user's point
of view.
> I know that it is C notion but not quite human readable. Most users will
put WORKERS=1 and then, suprise, find two parallel threads (?)...

If this is true, this wouldn't even be C. In C, 1 still means one and 2
means two. For ordinal numbers, 0 means first and 1 means second. So 0
for one item and 2 for two items would still be wrong.

The problem is that in [source:grass/trunk/lib/gis/worker.c
lib/gis/worker.c], `WORKERS` is the number of ''new/additional
workers/threads created''. But this means that there is always also the
main thread. So in the end you have `WORKERS` plus one threads (the main
thread plus the additional threads given by `WORKERS`). And this is the
inconsistency with the other usages of "workers" or "nprocs".

However, I was not able to understand more about how the work is
executed/distributed between the main thread and the additional threads,
and whether it would be better to have `WORKERS=1` mean only one (main)
thread, `WORKERS=2` mean one additional thread, etc. (with `WORKERS=0`
invalid or treated the same as `WORKERS=1`). From the user's point of
view probably yes, but it depends on what is actually happening in the
implementation.

--
Ticket URL: <http://trac.osgeo.org/grass/ticket/2074#comment:15>
GRASS GIS <http://grass.osgeo.org>

#2074: r3.mapcalc neighborhood modifier hash table and tile errors

Comment(by glynn):

Replying to [comment:15 wenzeslaus]:

> The problem is that in [source:grass/trunk/lib/gis/worker.c
lib/gis/worker.c], the number of ''new/additional workers/threads
created'' is `WORKERS`. But this means that there is always also the main
thread too.

Sort of; if the "force" argument of G_begin_execute() is non-zero, it
won't use the main thread (so WORKERS=0 will cause it to block forever).
Currently, nothing does that (r.mapcalc is the only user of that code).

> However, I was not able to understand more about how the code is
executed/distributed between main thread and additional threads

G_begin_execute() attempts to obtain a worker thread. If "force" is non-
zero, it will keep trying until it gets one. Once this is done, it will
either instruct the worker (if it got one) to execute the function, or (if
it didn't get a worker) execute it directly.

The way this is used by r.mapcalc is that when evaluating a function call,
if the function only has one argument or is the eval() function, it will
evaluate the argument(s) sequentially in the main thread, otherwise it
will attempt to evaluate the arguments concurrently.

> and if it would be better to have `WORKERS=1` for only one (main)
thread, `WORKERS=2` for one additional thread etc. (and `WORKERS=0` would
be invalid or considered the same as `WORKERS=1`). From user point of view
probably yes but it depends on what is actually happening in the
implementation.

I don't have a strong opinion; it would be simple enough to modify the
code to subtract 1 from the value of WORKERS in determining the number of
additional threads to create. Regardless, the case of WORKERS=0 and
force=1 should be fixed (i.e. the force argument should be ignored if
there is only the main thread).

--
Ticket URL: <http://trac.osgeo.org/grass/ticket/2074#comment:16>
GRASS GIS <http://grass.osgeo.org>