Lower Accuracy for Product with Longer Duration

Hi,

I’ve recently been compiling my crop type map products from version 1.8.1, and here are the results:

I’m just wondering, because it doesn’t make sense that the product with the longer acquisition period (second row) had a lower accuracy than the first row. Shouldn’t it be the other way around? What could be the reason for this?

Note: both products used the same vector polygons.

Best.

Hi,

Using more data can introduce noise into the model and lead to lower accuracy. I know basically nothing about your area. Does the longer period span two growing seasons, or something like that?

Hello,

My longer period covers 10 months, while the shorter one covers about 5 months.
In any case, I had another L4B product that reached an accuracy of 0.88 with the random forest parameters left at their defaults (Sample Ratio = 0.75, Random Seed = 0, Number of Trees = 100, Max Depth = 25, Min Samples = 25). It seems the first product may have been over-fitting.
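For anyone wanting to reproduce this kind of experiment outside the processing chain, here is a minimal sketch of those default parameters mapped onto scikit-learn. This mapping is an assumption on my part (Sen2-Agri uses its own classifier internally, and "Min Samples" may correspond to a different scikit-learn parameter); the data here is synthetic, just to make the snippet runnable.

```python
# Sketch: approximating the default L4B random-forest settings in
# scikit-learn. Parameter correspondence is assumed, not confirmed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))      # stand-in for per-pixel time-series features
y = rng.integers(0, 3, size=1000)    # stand-in for crop-class labels

# Sample Ratio = 0.75 -> 75% of the reference samples go to training
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.75, random_state=0)

clf = RandomForestClassifier(
    n_estimators=100,        # Number of Trees = 100
    max_depth=25,            # Max Depth = 25
    min_samples_split=25,    # Min Samples = 25 (assumed meaning)
    random_state=0,          # Random Seed = 0
)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"overall accuracy on held-out split: {acc:.3f}")
```

Comparing the accuracy on the training split against the held-out split is a quick way to spot over-fitting: a large gap (e.g. near-perfect training accuracy but much lower test accuracy) suggests the trees are memorizing the training polygons.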

Thank you.


Hi @brentf, @lnicola ,

Could you please tell me how to see results like the ones in the screenshots you attached in the posts above?

I am interested in seeing the “Overall Accuracy” of my L4B products.

Regards

Hello,

Sorry for the late reply. You can find the accuracy in: /mnt/archive/{site_name}/l4b/S2AGRI_L4B_PRD_S5_20180629T014849_V20180102_20180313/AUX_DATA

All the ancillary data are there (in .xml files).
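If you want to pull the metrics out of those files programmatically, here is a small sketch using only the standard library. The tag names (`QualityMetrics`, `Accuracy`, `Kappa`, `F-score`) are taken from the snippet posted later in this thread; the XML is embedded inline here so the example is self-contained, but you would normally call `ET.parse()` on your own file in AUX_DATA.

```python
# Sketch: reading Overall Accuracy and per-class F-scores out of a
# QualityMetrics XML file. Tag names assumed from the excerpt in this thread.
import xml.etree.ElementTree as ET

xml_text = """<CropType><QualityMetrics>
<Precision class="12"> 0.927076</Precision>
<Recall class="12"> 0.826614</Recall>
<F-score class="12"> 0.873968</F-score>
<Kappa> 0.590477</Kappa>
<Accuracy> 0.801862</Accuracy>
</QualityMetrics></CropType>"""

root = ET.fromstring(xml_text)          # for a real file: ET.parse(path).getroot()
metrics = root.find("QualityMetrics")
accuracy = float(metrics.findtext("Accuracy"))
kappa = float(metrics.findtext("Kappa"))
print("overall accuracy:", accuracy)
print("kappa:", kappa)
for f in metrics.iter("F-score"):
    print("class", f.get("class"), "F-score:", float(f.text))
```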

The run time can be seen in the monitoring tab of the web interface: just click the “Output” link of your L4B job.

For the naming convention of the products, see the documentation.

Thanks for the help @brentf

Here is what I get in the .xml file

<QualityMetrics>
<Precision class="12"> 0.927076</Precision>
<Recall class="12"> 0.826614</Recall>
<F-score class="12"> 0.873968</F-score>
<Precision class="41"> 0.248498</Precision>
<Recall class="41"> 0.271063</Recall>
<F-score class="41"> 0.259291</F-score>
<Precision class="42"> 0.646994</Precision>
<Recall class="42"> 0.789887</Recall>
<F-score class="42"> 0.711335</F-score>
<Precision class="74"> 0.891748</Precision>
<Recall class="74"> 0.997739</Recall>
<F-score class="74"> 0.941771</F-score>
<Precision class="201"> 0.774979</Precision>
<Recall class="201"> 0.998918</Recall>
<F-score class="201"> 0.872813</F-score>
<Precision class="205"> 0.312303</Precision>
<Recall class="205"> 0.612363</Recall>
<F-score class="205"> 0.413647</F-score>
<Kappa> 0.590477</Kappa>
<Accuracy> 0.801862</Accuracy>
</QualityMetrics>
</CropType>

From this, am I right in assuming my L4B product has a 0.8 accuracy?
And how much trust can one put in this accuracy estimate?

Yes, that is right. I’m not entirely sure, but you can also judge the “correctness” of your output not only from the Overall Accuracy, but from the per-class F-score, precision, and recall values as well.

In addition, it depends greatly on the quality and quantity of your in situ/training data (if you’ve used any).

This link could be of great help in understanding the said measures.
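To make those measures concrete, here is a toy sketch showing how per-class precision, recall, and F-score relate to the overall accuracy and kappa reported in the XML. The confusion matrix values are made up purely for illustration (rows = reference classes, columns = predicted classes).

```python
# Sketch: precision/recall/F-score per class, plus overall accuracy and
# Cohen's kappa, from a toy 2-class confusion matrix. Numbers are invented.
import numpy as np

cm = np.array([[50, 10],    # class A: 50 correct, 10 confused with B
               [ 5, 35]])   # class B: 35 correct, 5 confused with A

overall_accuracy = np.trace(cm) / cm.sum()
for k in range(cm.shape[0]):
    precision = cm[k, k] / cm[:, k].sum()  # of pixels labeled k, how many truly are k
    recall    = cm[k, k] / cm[k, :].sum()  # of true class-k pixels, how many were found
    f_score   = 2 * precision * recall / (precision + recall)
    print(f"class {k}: precision={precision:.3f} recall={recall:.3f} F={f_score:.3f}")

# kappa corrects overall accuracy for agreement expected by chance
pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / cm.sum() ** 2
kappa = (overall_accuracy - pe) / (1 - pe)
print(f"overall accuracy: {overall_accuracy:.3f}, kappa: {kappa:.3f}")
```

Note how a single rare class with poor precision (like class 41 in the XML above, F-score 0.26) can hide behind a respectable overall accuracy, which is why checking the per-class values matters.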

Cheers,
Brent
