Hi,
I’ve got a couple of questions regarding Sen2Agri’s resource use and performance:
We’re currently running Sen2Agri on a VM with 8 vCPUs and 32 GB RAM, which is below the recommended prerequisites stated in the SUM (additional resources are available if needed).
Our area covers 14 S2 tiles with a season from March to December 2017, so a year’s worth of data (including Landsat-8).
However, during the whole processing chain we never came close to maxing out any of the available resources: RAM usage typically stays below 5 GB and CPU usage around 20%.
Here’s a screenshot of our system overview tab (currently running one L4B process in automated mode and one L4A process through the command line):
This looks considerably different from the example in the SUM (p. 48).
I’ve also compared our L2 processing times with the performance example given in Appendix C of the SUM:
On a more powerful machine, the MACCS processor apparently takes ~8.5 minutes per S2 tile to complete processing; ours currently takes around 36 minutes per tile.
This all leads me to believe that some performance tuning is needed, but so far all my attempts have been unsuccessful.
So the questions are:
For automatic mode, is it as simple as installing Sen2Agri on a more powerful machine and letting SLURM do the rest, or do some parameters need to be changed as well?
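In case it’s relevant, this is how I checked what SLURM itself reports for our node (these are standard SLURM commands, nothing Sen2Agri-specific; the node name is just a placeholder):

    # CPUs/memory SLURM has registered for each node
    sinfo -N -l
    # detailed view of a single node (replace <nodename> with the actual node name)
    scontrol show node <nodename>

As far as I understand, SLURM only schedules jobs against the CPUs and memory configured for the node, so if those values don’t match the VM’s actual resources that might already explain part of what we’re seeing.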
For command-line mode, there are flags for controlling the number of threads/processes or the number of tiles processed in parallel. For instance, demmaccs.py has the flags --processes-number-dem and --processes-number-maccs. I’ve tried adjusting both, but didn’t see any change in processing time.
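For reference, my calls looked roughly like this (only the two flags are the point here; the paths and remaining arguments are placeholders, not the exact invocation):

    # attempted run with both parallelism flags raised (values chosen arbitrarily)
    demmaccs.py <usual input/output arguments> \
        --processes-number-dem 4 \
        --processes-number-maccs 4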
Is the only way to run MACCS in parallel to create multiple sub-sites and merge them afterwards?
For CropMaskFused.py and CropTypeFused.py there are the flags max-parallelism and tile-threads-hint, and these do seem to have an effect on performance. Is there a way to change their default values so they don’t have to be passed explicitly on the command line every time? I’ve looked through the config database tables, but haven’t found a corresponding entry.
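For completeness, this is roughly how I searched the config database for a matching parameter (assuming the database is called sen2agri and the settings live in a config table with key/value columns; adjust if the schema differs):

    # look for any parallelism/thread-related keys in the config table
    sudo -u postgres psql sen2agri -c \
        "SELECT key, value FROM config WHERE key ILIKE '%parallel%' OR key ILIKE '%thread%';"

Nothing matching max-parallelism or tile-threads-hint turned up, hence the question.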
Many thanks,
Valentin