Opened 8 years ago
Closed 8 years ago
#3210 closed defect (fixed)
r.texture: bug with non-continuous series of values
Reported by: | mlennert | Owned by: | |
---|---|---|---|
Priority: | normal | Milestone: | 7.2.1 |
Component: | Raster | Version: | svn-trunk |
Keywords: | r.texture | Cc: | |
CPU: | Unspecified | Platform: | Unspecified |
Description
This is ok:
r.in.ascii in=- out=test_matrix <<EOF
north: 3
south: 0
east: 3
west: 0
rows: 3
cols: 3
1 1 1
3 0 2
0 0 3
EOF
r.texture -s test_matrix out=text method=idm
r.stats -1n text_IDM_90
0.46666667
But this isn't:
r.in.ascii --o in=- out=test_matrix <<EOF
north: 3
south: 0
east: 3
west: 0
rows: 3
cols: 3
1 1 1
3 0 0
0 0 3
EOF
r.texture -s test_matrix out=text method=idm --o
r.stats -1n text_IDM_90
0.48333332
The result should be 0.4.
Testing shows that this always seems to happen when the series of values is discontinuous, as in the second case where we have 0, 1, 3, but no 2.
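For reference, the IDM ("inverse difference moment", often called homogeneity) feature of Haralick et al. (1973) is the sum over the normalized grey-level co-occurrence matrix of P(i,j) / (1 + (i-j)^2). A minimal C sketch, purely illustrative and not the r.texture implementation:

/* IDM over a normalized Ng x Ng co-occurrence matrix stored row-major in P
 * (all entries sum to 1) */
static double idm(const double *P, int Ng)
{
    double sum = 0.0;
    int i, j;

    for (i = 0; i < Ng; i++)
        for (j = 0; j < Ng; j++)
            sum += P[i * Ng + j] / (1.0 + (double)((i - j) * (i - j)));
    return sum;
}

How the grey levels i and j are indexed, over the full 0-255 range or only over the values that actually occur, is exactly what the discrepancy reported here is about.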
I really have no time right now to look into this, but if someone with some knowledge of that module could take a look, that would be great, especially if we could fix this before 7.2...
And, yes, the above is a nice start for implementing tests...
I can provide a spreadsheet where a colleague of mine has done the IDM calculations by hand for a series of matrices, comparing GRASS (almost all correct, yeah !), PCI Geomatica (really weird results), and eCognition by-object texture variables (fairly similar to GRASS 3x3 texture means by object when the objects are above a certain size).
Attachments (2)
Change History (39)
by , 8 years ago
Attachment: | texture.patch added |
---|
comment:2 by , 8 years ago
Replying to mmetz:
Please try the attached patch for r.texture.
Oh look, it's Lucky "Markus" Luke, the coder who codes faster than his shadow ;-)
Thanks. This solves the issue for all my test cases.
And the solution seems quite straightforward and logical, so I think this can be applied and backported, including to the 7.2 branch, or ?
follow-up: 4 comment:3 by , 8 years ago
Just one question:
The manual says:
r.texture assumes grey levels ranging from 0 to 255 as input. The input is automatically rescaled to 0 to 255 if the input map range is outside of this range.
But when I rescale the above (second) matrix to 0,255 I get a different result:
r.in.ascii --o in=- out=test_matrix <<EOF
north: 3
south: 0
east: 3
west: 0
rows: 3
cols: 3
1 1 1
3 0 0
0 0 3
EOF
r.texture -s test_matrix out=text method=idm --o
r.stats -1n text_IDM_90
0.40000007

r.rescale test_matrix out=test_matrix_rescaled to=0,255
r.texture -s test_matrix_rescaled out=text_rescaled method=idm --o
r.stats -1n text_rescaled_IDM_90
0.16672371
I would have thought that the result should be the same ?
Moritz
follow-up: 5 comment:4 by , 8 years ago
Replying to mlennert:
[...] But when I rescale the above (second) matrix to 0,255 I get a different result [...] I would have thought that the result should be the same ?
I see this comes from lines 277ff in main.c:
inscale = 0;
if (min < 0 || max > 255) {
    inscale = 255. / (max - min);
}
/* input has 0 - 1 range */
else if (max <= 1.) {
    inscale = 255. / (max - min);
}
So, as long as the data does not fall outside the 0-255 range and the max is above 1, we just assume that it is already scaled to 0,255.
So, data with 0, 1, 2, 3 is just considered as already being scaled to 0,255...
Don't know yet what to think of this.
follow-up: 6 comment:5 by , 8 years ago
Replying to mlennert:
[...] I would have thought that the result should be the same ?
No, because the differences become larger, therefore IDM becomes smaller.
There are other measurements that are also possibly affected: Contrast, Correlation, Variance, Sum Average, Sum Variance.
IMHO, there is also a bug in all the entropy and information measures because p * log(p) is zero for p = 0 while the code uses p * log(p + EPSILON) as a workaround which provides slightly different results for p != 0.
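For illustration, the usual way to handle p = 0 is to skip the term, following the convention 0 * log(0) = 0, rather than adding an EPSILON. A minimal sketch, not the actual r.texture code:

#include <math.h>

/* entropy of a normalized distribution p[0..n-1]; zero-probability
 * entries are skipped (0 * log(0) is taken as 0) */
static double entropy(const double *p, int n)
{
    double h = 0.0;
    int i;

    for (i = 0; i < n; i++) {
        if (p[i] > 0.0)
            h -= p[i] * log(p[i]);
    }
    return h;
}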
follow-up: 7 comment:6 by , 8 years ago
Replying to mmetz:
[...] No, because the differences become larger, therefore IDM becomes smaller.
According to Haralick et al. (1973), the gray tones are quantized which "guarantees that images which are monotonic transformations of each other produce the same results". Therefore IDM should indeed stay the same. That also means that the manually calculated result of 0.4 for the second test matrix is wrong and the correct result is indeed 0.48333332.
follow-up: 8 comment:7 by , 8 years ago
Replying to mmetz:
[...] According to Haralick et al. (1973), the gray tones are quantized which "guarantees that images which are monotonic transformations of each other produce the same results". Therefore IDM should indeed stay the same. That also means that the manually calculated result of 0.4 for the second test matrix is wrong and the correct result is indeed 0.48333332.
Well, to be complete, the sentence says "Image normalization by equal-probability quantizing guarantees..." And an algorithm to perform such quantizing is given in the appendix. I don't think this algorithm is implemented in r.texture. So, I'm not sure that r.texture in its current state _should_ give equal results, unless one image is the output of equal-probability quantizing of the other. But maybe I misunderstand the whole concept.
follow-up: 9 comment:8 by , 8 years ago
Replying to mlennert:
Well, to be complete, the sentence says "Image normalization by equal-probability quantizing guarantees..." And an algorithm to perform such quantizing is given in the appendix. I don't think this algorithm is implemented in r.texture.
Right. So it seems 0.4 is the correct result for the second test matrix. This will be a somewhat larger patch.
Equal-probability quantizing could be done with r.quantile + r.recode. According to literature and other resources, the number of gray levels should not be too large, at least 16 but apparently less than 256. Maybe a note in the manual would be helpful.
follow-up: 10 comment:9 by , 8 years ago
Replying to mmetz:
Right. So it seems 0.4 is the correct result for the second test matrix. This will be a somewhat larger patch.
You mean if you implement the quantizing within r.texture ? I don't think this is necessary. Informing the user should be enough, including about how to do it with the existing tools.
Looking over Haralick et al. (1973), I see that i and j are summed over Ng, where Ng are the grey tone levels that the map is quantized to. The set Ng contains all grey levels, and so your patch does seem the right one.
So I would plead for applying this patch...
Equal-probability quantizing could be done with r.quantile + r.recode. According to literature and other resources, the number of gray levels should not be too large, at least 16 but apparently less than 256. Maybe a note in the manual would be helpful.
Yes. If you have a small example of how to do the quantizing it would be great to add that to the manual.
If you're still planning on looking at r.texture some more, could you also look at #2325 ? And I also have some doubt about the memory management in r.texture. In main.c, one can find this:
for (j = 0; j < nrows; j++) {
    Rast_get_row(infd, dcell_row, j, DCELL_TYPE);
    for (i = 0; i < ncols; i++) {
        if (Rast_is_d_null_value(&(dcell_row[i])))
            data[j][i] = -1;
        else if (inscale) {
            data[j][i] = (CELL)((dcell_row[i] - min) * inscale);
        }
        else
            data[j][i] = (CELL)dcell_row[i];
    }
}
i.e. IIUC the entire map is read into memory. Seems a bit dangerous in light of some of the images we are dealing with now, and quite unnecessary. Shouldn't this be dealt with using the segment library ?
But that's a different and bigger issue. For this specific issue, let's just apply the small patch before getting 7.2 out, or ?
And then, the next step, one day, will be to implement #2111 ;-)
comment:10 by , 8 years ago
Replying to mlennert:
Replying to mmetz:
[...] So it seems 0.4 is the correct result for the second test matrix. This will be a somewhat larger patch.
You mean if you implement the quantizing within r.texture ?
No, some other measurements need the same fix as IDM.
I don't think this is necessary. Informing the user should be enough, including about how to do it with the existing tools.
Looking over Haralick et al. (1973), I see that i and j are summed over Ng, where Ng are the grey tone levels that the map is quantized to. The set Ng contains all grey levels, and so your patch does seem the right one.
So I would plead for applying this patch...
Equal-probability quantizing could be done with r.quantile + r.recode. According to literature and other resources, the number of gray levels should not be too large, at least 16 but apparently less than 256. Maybe a note in the manual would be helpful.
Yes. If you have a small example of how to do the quantizing it would be great to add that to the manual.
g.region -p rast=ortho_2001_t792_1m
r.quantile in=ortho_2001_t792_1m quantiles=16 -r | r.recode in=ortho_2001_t792_1m out=ortho_2001_t792_1m_q16 rules=-
The frequencies of occurrence are not equal but similar.
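For illustration, the same idea in C: a quantile-based sketch with made-up names, roughly what the r.quantile + r.recode pipeline above does, and not the exact equal-probability algorithm from the appendix of Haralick et al. (1973):

#include <stdlib.h>
#include <string.h>

static int cmp_dbl(const void *a, const void *b)
{
    const double da = *(const double *)a, db = *(const double *)b;
    return (da > db) - (da < db);
}

/* quantize n values into nlevels bins of roughly equal frequency:
 * each value's bin is derived from its rank in the sorted data */
static void quantize_equal_prob(const double *values, int *out, int n, int nlevels)
{
    double *sorted = malloc(n * sizeof(double));
    int i;

    if (!sorted)
        return;
    memcpy(sorted, values, n * sizeof(double));
    qsort(sorted, n, sizeof(double), cmp_dbl);

    for (i = 0; i < n; i++) {
        /* rank = index of the first sorted entry >= values[i] (binary search) */
        int lo = 0, hi = n, level;

        while (lo < hi) {
            int mid = (lo + hi) / 2;

            if (sorted[mid] < values[i])
                lo = mid + 1;
            else
                hi = mid;
        }
        level = (int)((long long)lo * nlevels / n);
        out[i] = level < nlevels ? level : nlevels - 1;
    }
    free(sorted);
}

Identical input values always end up in the same bin, so the bin frequencies are only approximately equal, just as noted above.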
If you're still planning on looking at r.texture some more, could you also look at #2325 ?
OK.
And I also have some doubt about the memory management in r.texture. In main.c, one can find this:
[...] i.e. IIUC the entire map is read into memory. Seems a bit dangerous in light of some of the images we are dealing with now, and quite unnecessary. Shouldn't this be dealt with using the segment library ?
IMHO, the segment library would be overkill here. Instead, a cache like the one used by r.proj or a mechanism like the one used by r.neighbors would be appropriate.
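For illustration, the r.neighbors-style alternative amounts to keeping only "size" row buffers in memory and recycling the oldest one as the window moves down the map. A minimal sketch using the GRASS raster API (function and variable names are illustrative):

#include <grass/gis.h>
#include <grass/raster.h>

static void process_by_rows(int infd, int nrows, int size)
{
    CELL **rows = G_malloc(size * sizeof(CELL *));
    int i, row;

    for (i = 0; i < size; i++)
        rows[i] = Rast_allocate_c_buf();

    for (row = 0; row < nrows; row++) {
        /* overwrite the oldest buffer with the next row */
        Rast_get_c_row(infd, rows[row % size], row);

        if (row >= size - 1) {
            /* rows row-size+1 .. row are now in memory: compute the
             * texture windows whose centre row is row - size/2 here */
        }
    }

    for (i = 0; i < size; i++)
        G_free(rows[i]);
    G_free(rows);
}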
But that's a different and bigger issue. For this specific issue, let's just apply the small patch before getting 7.2 out, or ?
I would rather fix all measurements that are affected by collapsing the array, taking out zero values.
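For illustration, "collapsing the array" essentially means indexing the co-occurrence matrix by the grey tones that actually occur instead of by the full 0-255 range; a sketch of the idea (not the actual patch):

#define NTONES 256

/* map each raw grey tone (0..255) to a compact index 0..Ng-1, or -1 if the
 * tone does not occur; hist[t] is the count of cells with tone t in the
 * current window */
static int collapse_tones(const int *hist, int *tone_index)
{
    int t, Ng = 0;

    for (t = 0; t < NTONES; t++)
        tone_index[t] = hist[t] > 0 ? Ng++ : -1;
    return Ng;   /* number of distinct grey tones actually present */
}

With this mapping, a missing value such as 2 in the test matrix above no longer adds an empty row and column that would distort the measures.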
And then, the next step, one day, will be to implement #2111 ;-)
Technically possible...
Related, an option to use a circular neighborhood would be nice.
Before that, NULL values need to be handled. Currently, NULL cells are allowed.
comment:14 by , 8 years ago
Replying to mmetz:
In 69838:
r.texture is a big mess. Five years ago I spent a lot of time fixing bugs and optimizing code, submitted with r47112, r47113, r47160, r47698, r47700. Apparently, many bugs were not resolved. I made a new attempt with https://trac.osgeo.org/grass/changeset/69838/grass/trunk/raster/r.texture.
The cause of this mess is that the texture calculations have been taken from netpbm, and that code is itself full of bugs. For GRASS, a wrapper was created in 2003, using the netpbm code for an analysis based on moving windows. IMHO, that made things worse, not better.
According to the original authors of texture features (Haralick et al.), texture features can be used for image classification, which is probably of wider interest. Therefore it might be worth spending more effort on r.texture. The features are interesting; the code is still dubious.
follow-up: 16 comment:15 by , 8 years ago
Resolution: | fixed → |
---|---|
Status: | closed → reopened |
Reopening this ticket as the modifications seem to include an error in the Contrast measure calculations. IIUC, the f2_contrast() function has to return the sum and not the bigsum as bigsum is never calculated (commented out) and the result is thus always 0:
--- h_measure.c (révision 69847)
+++ h_measure.c (copie de travail)
@@ -434,7 +434,7 @@
         }
     }
-    return bigsum;
+    return sum;
MarkusM, can you confirm this ?
With this change, the results are much more useful than the Contrast measurement output of <= g7.0.5 !
More generally, I think the differences in some of the measures are so important that this is probably a candidate for RFC5 style treatment.
comment:16 by , 8 years ago
Replying to mlennert:
Reopening this ticket as the modifications seem to include an error in the Contrast measure calculations. IIUC, the f2_contrast() function has to return the sum and not the bigsum as bigsum is never calculated (commented out) and the result is thus always 0:
[...] MarkusM, can you confirm this ?
Yes, thanks for testing! Fixed in all branches with r69866-8. The new changes also include a small optimization for f2_contrast, reducing the number of iterations by half.
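For illustration, the halving presumably exploits the symmetry of the co-occurrence matrix: P(i,j) = P(j,i), and the diagonal contributes nothing to contrast since it is weighted by (i-j)^2 = 0, so summing the upper triangle and doubling is enough. A sketch, not the actual f2_contrast code:

/* Haralick's contrast (f2) over a symmetric, normalized Ng x Ng matrix P
 * stored row-major; only the upper triangle is visited and then doubled */
static double contrast(const double *P, int Ng)
{
    double sum = 0.0;
    int i, j;

    for (i = 0; i < Ng; i++)
        for (j = i + 1; j < Ng; j++)
            sum += 2.0 * (i - j) * (i - j) * P[i * Ng + j];
    return sum;
}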
With this change, the results are much more useful than the Contrast measurement output of <= g7.0.5 !
More generally, I think the differences in some of the measures are so important that this is probably a candidate for RFC5 style treatment.
Probably yes.
follow-up: 19 comment:17 by , 8 years ago
There also seems to be something wrong with the Difference Variance calculation in the new version.
r.texture lsat7_2002_80 out=lsat8_text method=dv
r.info lsat8_text_DV -r
min=-inf
max=0.1154514
By checking whether tmp is 0 in the function, I get an output that again looks much more interesting than the <= g7.0.5 output, but not sure this is the right way to approach this:
-    var = ((tmp * sum_sqr) - (sum * sum)) / (tmp * tmp);
+    if (tmp > 0) {
+        var = ((tmp * sum_sqr) - (sum * sum)) / (tmp * tmp);
+    } else {
+        var = 0;
+    }
comment:18 by , 8 years ago
And while we're at it: r.texture does not check for existing files and so overwrites any files of the same name as basename+method. Not sure if this should be changed in the grass7 line though as it could be considered an API change...
comment:19 by , 8 years ago
Replying to mlennert:
There also seems to be something wrong with the Difference Variance calculation in the new version.
[...] By checking whether tmp is 0 in the function, I get an output that again looks much more interesting than the <= g7.0.5 output, but not sure this is the right way to approach this.
Division by zero results in inf. In this case the variance should not be inf but zero, therefore the fix is IMHO correct. A slightly different patch, also testing for division by zero in other texture measures, has been applied in r69872-4 for all G7 branches.
follow-ups: 21 22 comment:20 by , 8 years ago
Thanks for all these fixes !
r.texture now runs much faster than before and results seem more meaningful. I'll attach an image with results of running
r.texture lsat7_2002_80 -a
using 7.0.5 (left) and current trunk (right) for each of the texture indicators. On 7.0.5 this takes over 2 minutes, whereas in trunk it takes less than 30 seconds.
Several results now seem more relevant (SV, SA, DV, Contrast). Others do not change. I don't know whether this is because they were already ok or not. Variance seems to be less clear after the change. And MOC-2 doesn't give any result (0 everywhere) in both versions.
I think r.texture is already much better than before, but still needs some serious revision to make sure that all the indicators are actually calculated correctly.
I don't know if we just leave this bug open, or whether we should open a separate bug.
comment:21 by , 8 years ago
Replying to mlennert:
And MOC-2 doesn't give any result (0 everywhere) in both versions.
Trying to debug this, I am a bit stumped. Using the following debug patch:
Index: h_measure.c
===================================================================
--- h_measure.c (révision 69876)
+++ h_measure.c (copie de travail)
@@ -669,7 +669,9 @@
         }
     }
-    /* fprintf(stderr,"hx=%f\thxy2=%f\n",hx,hxy2); */
+    fprintf(stderr, "1-exp() = %f\n", (1 - exp(-2.0 * (hxy2 - hxy))));
+    fprintf(stderr, "abs() = %f\n", abs(1 - exp(-2.0 * (hxy2 - hxy))));
+    fprintf(stderr, "sqrt() = %f\n", sqrt(abs(1 - exp(-2.0 * (hxy2 - hxy)))));
     return (sqrt(abs(1 - exp(-2.0 * (hxy2 - hxy)))));
 }
I get results such as:
...
1-exp() = 0.994315
abs() = 0.994315
sqrt() = 0.000000
1-exp() = 0.993262
abs() = 0.993262
sqrt() = 0.000000
1-exp() = 0.994315
abs() = 0.994315
sqrt() = 0.000000
1-exp() = 0.993262
abs() = 0.993262
sqrt() = 0.000000
....
So something seems to be wrong in the sqrt() calculations. However, when I comment out the first fprintf() in the above patch, I get this:
abs() = 0.000000
sqrt() = 0.000000
abs() = 0.000000
sqrt() = 0.000000
abs() = 0.000000
sqrt() = 0.000000
abs() = 0.000000
sqrt() = 0.000000
abs() = 0.000000
sqrt() = 0.000000
No idea how to interpret this behaviour.
follow-up: 23 comment:22 by , 8 years ago
Replying to mlennert:
Several results now seem more relevant (SV, SA, DV, Contrast). Others do not change. I don't know whether this is because they were already ok or not. Variance seems to be less clear after the change.
Fixed in r69877-9.
And MOC-2 doesn't give any result (0 everywhere) in both versions.
That bug was introduced 14 years ago with r11641 when r.texture was added to GRASS. An f got lost.
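The lost f matters because abs() in C takes an int: a double argument is truncated before the call, so sqrt(abs(x)) is 0 for any |x| < 1, whereas fabs() keeps the fractional value. Passing the int result of abs() to a %f conversion, as in the debug fprintf above, is also undefined behaviour, which would explain the confusing output in comment:21. A small standalone example (modern compilers warn about this misuse):

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double x = 1 - exp(-2.0 * 0.5);       /* about 0.632, i.e. inside (0, 1) */

    printf("abs(x)  = %d\n", abs(x));     /* int abs(int): x is truncated to 0, prints 0 */
    printf("fabs(x) = %f\n", fabs(x));    /* double fabs(double): prints 0.632121 */
    printf("sqrt(abs(x))  = %f\n", sqrt(abs(x)));    /* 0.000000 */
    printf("sqrt(fabs(x)) = %f\n", sqrt(fabs(x)));   /* 0.795060 */
    return 0;
}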
by , 8 years ago
Attachment: | r_texture_tests.png added |
---|
results of tests as described in comment:20 with old on the left and new on the right
follow-ups: 24 25 comment:23 by , 8 years ago
Replying to mmetz:
That bug was introduced 14 years ago with r11641 when r.texture was added to GRASS. An f got lost.
Wow. Really sounds like no one has actually ever seriously used r.texture before...
Thanks for the quick fixes !
I've updated the summary image.
And now, I'll really stop working on this for now... ;-)
comment:24 by , 8 years ago
Replying to mlennert:
Wow. Really sounds like no one has actually ever seriously used r.texture before...
Well, I guess that the complexity of texture calculation exceeds the knowledge of most users.
Thanks for the quick fixes !
Yes, this is great, thanks!
The test cases here should be saved as a test script for future investigations.
follow-up: 26 comment:25 by , 8 years ago
Replying to mlennert:
Wow. Really sounds like no one has actually ever seriously used r.texture before...
Maybe because no one knows what this is good for? According to Haralick's publications, the texture measurements can be used for image classification. Could you update the manual if you can suggest applications for these texture measurements? Thanks!
Thanks for the quick fixes !
I've updated the summary image.
And now, I'll really stop working on this for now... ;-)
The manual... r.texture should also be mentioned in the manuals of other modules where the output of r.texture might be useful.
Also, there is potential for more speed optimizations. r.texture is (was) essentially a wrapper for the netpbm code, using the netpbm code not only once but for each cell separately. The netpbm code was not designed to be used like this; instead, the purpose was to calculate texture measurements over the whole image, providing a single value for each texture measurement.
It does not help that the netpbm code has bugs and that the initial version of r.texture introduced more bugs.
follow-up: 27 comment:26 by , 8 years ago
Replying to mmetz:
Maybe because no one knows what this is good for? According to Haralick's publications, the texture measurements can be used for image classification. Could you update the manual if you can suggest applications for these texture measurements? Thanks!
I just committed what I consider some improvements, including a very simple, but thus hopefully pedagogical, example. I allowed myself to rearrange some of the existing text, including some of your recent additions.
Please have a look and if you think this is ok, we can backport to the other branches.
follow-up: 28 comment:27 by , 8 years ago
Replying to mlennert:
Replying to mmetz:
[...] Could you update the manual if you can suggest applications for these texture measurements? Thanks!
I just committed what I consider some improvements, including a very simple, but thus hopefully pedagogical, example. I allowed myself to rearrange some of the existing text, including some of your recent additions.
Thanks for your update on the manual! I am trying to make r.texture produce correct results in a reasonable time, but you are actually using it, so you know best what should go into the manual. IMHO, the description of a manual should be as short as possible (what does the module do, what is its output good for). Technical details should go into Notes. In order to keep the description short, I suggest to move the second last paragraph of the description to notes.
Please have a look and if you think this is ok, we can backport to the other branches.
You decide.
About the current defaults of r.texture:
The default size of the moving window is 3, and I suspect that the reason for this small size is processing time. From what I read, common sizes are in the range of 20 - 50 (Haralick et al. (1973): 64), the reason probably being that texture measurements were calculated for a few sample regions. In a moving window approach as in r.texture, a smaller window size seems reasonable, but a size of 3 might be too small to capture texture in the current area.
We could remove cropping at margins, the results would not be perfect, but better than nothing.
follow-ups: 29 31 comment:28 by , 8 years ago
Replying to mmetz:
Thanks for your update on the manual! I am trying to make r.texture produce correct results in a reasonable time, but you are actually using it, so you know best what should go into the manual. IMHO, the description of a manual should be as short as possible (what does the module do, what is its output good for). Technical details should go into Notes. In order to keep the description short, I suggest to move the second last paragraph of the description to notes.
Ok, I've moved things around quite a lot and added some more notes. Now the description is short and focuses on the explanation of the actual parameters of the module and more detailed content discussion is in the notes.
Please have a look and if you think this is ok, we can backport to the other branches.
You decide.
IMHO, this is ready to go into the other branches, especially 7.2 before its release. However, should I merge all of the different commits we did in the last days, or can I just copy over the r.texture.html file ? Don't know what consequences that has on svn history...
About the current defaults of r.texture:
The default size of the moving window is 3, and I suspect that the reason for this small size is processing time. From what I read, common sizes are in the range of 20 - 50 (Haralick et al. (1973): 64), the reason probably being that texture measurements were calculated for a few sample regions. In a moving window approach as in r.texture, a smaller window size seems reasonable, but a size of 3 might be too small to capture texture in the current area.
This really depends both on the phenomenon you wish to detect and, obviously, the resolution of your imagery. 3x3 in Landsat is not the same as 3x3 in Pleiades... And detecting sill lines in a field with a 1m resolution image should be possible with quite small windows, but for detecting tree plantations with the same imagery, the windows would have to be much larger...
I think it is best to leave the default at 3x3, because I wouldn't be able to determine another "best" default, and in that case it's probably better to leave the default at the lowest processing time.
We could remove cropping at margins, the results would not be perfect, but better than nothing.
That's debatable. In my colleague's tests we saw that PCI Geomatica "solves" this problem by just replicating the margin row/col as many times as necessary depending on window size. This leads to weird textures at the margins.
Another (IMHO preferable) option would be to calculate the texture with just the available pixels, but this would mean that for a 3x3 window you would only have one neighbor in each direction (except for the n-s or e-w). Don't know if this is an issue ?
Moritz
follow-up: 30 comment:29 by , 8 years ago
Replying to mlennert: ...
Ok, I've moved things around quite a lot and added some more notes. Now the description is short and focuses on the explanation of the actual parameters of the module and more detailed content discussion is in the notes.
...
IMHO, this is ready to go into the other branches, especially 7.2 before its release. However, should I merge all of the different commits we did in the last days, or can I just copy over the r.texture.html file ? Don't know what consequences that has on svn history...
Current best practice in case of HTML pages:
copy over the entire file if you want everything to be taken over (or use e.g. "meld" otherwise). Then a commit message like this:
svn ci -m"r.texture manual: sync to trunk (rxxx, ryyy, ...)" raster/r.texture/r.texture.html
Like this anyone can look up the individual modifications. We stopped copying the svn history across branches some time ago since it caused more trouble than it gave advantages.
Note: in case of C or Python source code backports this script is better:
grass-addons/tools/svn-merge.sh
comment:30 by , 8 years ago
Replying to neteler:
Current best practice in case of HTML pages:
copy over the entire file if you want everything to be taken over (or use e.g. "meld" otherwise). Then a commit message like this:
svn ci -m"r.texture manual: sync to trunk (rxxx, ryyy, ...)" raster/r.texture/r.texture.html
Like this anyone can look up the individual modifications. We stopped copying the svn history across branches some time ago since it caused more trouble than it gave advantages.
Thanks for the guidance.
follow-up: 32 comment:31 by , 8 years ago
Replying to mlennert:
Replying to mmetz:
We could remove cropping at margins, the results would not be perfect, but better than nothing.
That's debatable. In my colleague's tests we saw that PCI Geomatica "solves" this problem by just replicating the margin row/col as many times as necessary depending on window size. This leads to weird textures at the margins.
Inventing values is a bad idea.
Another (IMHO preferable) option would be to calculate the texture with just the available pixels, but this would mean that for a 3x3 window you would only have one neighbor in each direction (except for the n-s or e-w). Don't know if this is an issue ?
I would use just the available pixels, the question is then more general, how to handle NULL cells, also inside the current region. If NULL cells are allowed, there is no reason to crop at the margins. If NULL cells are not allowed, any moving window that contains NULL cells would be skipped, also very large moving windows with only a single NULL cell. Therefore I would prefer to allow NULL cells.
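For illustration, allowing NULL cells mostly means skipping every pair that touches a NULL cell when filling the co-occurrence matrix and normalizing by the number of pairs actually counted; a sketch with illustrative names, assuming the window values are already quantized to 0..255:

#include <grass/gis.h>
#include <grass/raster.h>

/* count horizontal (east) neighbour pairs in a size x size window "win",
 * skipping NULL cells; returns the number of pairs added to P */
static long count_pairs_east(DCELL **win, int size, long P[256][256])
{
    long npairs = 0;
    int r, c, a, b;

    for (r = 0; r < size; r++) {
        for (c = 0; c + 1 < size; c++) {
            if (Rast_is_d_null_value(&win[r][c]) ||
                Rast_is_d_null_value(&win[r][c + 1]))
                continue;           /* the pair touches a NULL cell: ignore it */
            a = (int)win[r][c];
            b = (int)win[r][c + 1];
            P[a][b]++;
            P[b][a]++;              /* keep the matrix symmetric */
            npairs += 2;
        }
    }
    return npairs;                  /* divide P by npairs to normalize */
}

A window at the map margin then simply contributes fewer pairs instead of being cropped.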
follow-up: 33 comment:32 by , 8 years ago
Replying to mmetz:
I would use just the available pixels, the question is then more general, how to handle NULL cells, also inside the current region. If NULL cells are allowed, there is no reason to crop at the margins. If NULL cells are not allowed, any moving window that contains NULL cells would be skipped, also very large moving windows with only a single NULL cell. Therefore I would prefer to allow NULL cells.
+1
comment:33 by , 8 years ago
Replying to mlennert:
I would use just the available pixels, the question is then more general, how to handle NULL cells, also inside the current region. If NULL cells are allowed, there is no reason to crop at the margins. If NULL cells are not allowed, any moving window that contains NULL cells would be skipped, also very large moving windows with only a single NULL cell. Therefore I would prefer to allow NULL cells.
+1
I have added a flag to allow NULL cells which also avoids cropping along edges of the computational region in trunk r69965.
follow-up: 36 comment:35 by , 8 years ago
I think we can close this bug. AFAIK, all defects have been solved.
There still remains a performance issue when size is large (running r.texture -a on the NC orthophoto with size=27, I'm at 32% after over 2 hours), but this is another issue and should thus be another ticket.
Any objections to closing ?
comment:36 by , 8 years ago
Replying to mlennert:
I think we can close this bug. AFAIK, all defects have been solved.
There still remains a performance issue when size is large (running r.texture -a on the NC orthophoto with size=27, I'm at 32% after over 2 hours),
340 minutes in total !
but this is another issue and should thus be another ticket.
See #3293
comment:37 by , 8 years ago
Resolution: | → fixed |
---|---|
Status: | reopened → closed |
patch to fix IDM