Custom processing orders stay as "submitted"

Hi all,

To test the manual processing features, I configured a local database with already downloaded L1C products and a season with all of the processors deactivated via their corresponding checkboxes, except the L2A processor, which is mandatory. The L2A processor ran fine, with all products stored in the database. Once it finished, I submitted custom job orders from the website for the L3A, L3B, L3E and L4A processors, but all of the orders have stayed at “submitted” in the System Overview for several hours, without performing any task, as shown in the screenshot below. What is happening?

Nils
[screenshot: instantánea5]

Hi,

Can you check the status of SLURM?

systemctl status slurm{dbd,ctld,d}

If it’s running, you could try looking at the sen2agri-orchestrator logs:

journalctl -u sen2agri-orchestrator --since today # the time when you've submitted the job
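
If the orchestrator looks fine, generic SLURM checks (not specific to Sen2Agri) might also show whether the jobs are reaching the queue at all, for example:

sinfo
squeue -l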

Hi Inicola,

It seems that SLURM isn't working.

Nils

[screenshot: instantánea6]

What’s in /etc/slurm?


The /etc/slurm folder doesn't exist :anguished:. Should I reinstall the entire system?

It certainly shouldn't be missing. You could try reinstalling the system; it should be quicker this time and make no changes otherwise.
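
Before reinstalling, it might also be worth checking whether the SLURM packages were ever installed, for example:

rpm -qa | grep -i slurm

If that prints nothing, the packages themselves are missing, not just the configuration.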


Inicola,

A complete reinstall (MACCS COTS, MACCS and the Sen2Agri distribution) ran OK, but it didn't create the /etc/slurm folder. Is there another way?

Do you still have the console output? It might show some errors.


It seems like the install script was unable to find the SLURM RPM packages. Here is the output:

[nils.kaiser@localhost ~]$ su
Contraseña:
[root@localhost nils.kaiser]# /Sen2AgriDistribution/install_script/sen2agriPlatformInstallAndConfig.sh
Checking paths…
Creating /mnt/archive/reference_data
Copying reference data
Disabling SELinux
The Sen2Agri system is not inherently incompatible with SELinux, but relabelling the file system paths is not implemented yet in the installer.
Disabling the firewall
Warning: ZONE_ALREADY_SET: trusted
success
success
Complementos cargados:fastestmirror, langpacks
Loading mirror speeds from cached hostfile

  • base: mirror.us.leaseweb.net
  • epel: mirror.globo.com
  • extras: mirror.centos.org
  • updates: mirror.centos.org
    updates/7/x86_64/primary_db FAILED
    http://mirrors.coreix.net/centos/7/updates/x86_64/repodata/679c94759958781e4268930ad88373f085c7e9d7e576764a664fd343c0ecc050-primary.sqlite.bz2: [Errno 14] curl#7 - "Failed to connect to 2a01:c0:2:3d::2: Network is unreachable"
    Intentando con otro espejo.
    epel/x86_64/primary_db FAILED ============ ] 43 kB/s | 2.2 MB 00:03:37 ETA
    http://mirror.globo.com/epel/7/x86_64/repodata/6715c1a11d00b3b4f3f069abbf343614a9bf92c8305a1723b5ad75709131ee60-primary.sqlite.bz2: [Errno 12] Timeout on http://mirror.globo.com/epel/7/x86_64/repodata/6715c1a11d00b3b4f3f069abbf343614a9bf92c8305a1723b5ad75709131ee60-primary.sqlite.bz2: (28, ‘Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds’)
    Intentando con otro espejo.
    (1/2): updates/7/x86_64/primary_db | 5.3 MB 00:00:27
    (2/2): epel/x86_64/primary_db | 6.2 MB 00:00:14
    El paquete epel-release-7-11.noarch ya se encuentra instalado con su versión más reciente
    Nada para hacer
    Complementos cargados:fastestmirror, langpacks
    pgdg-centos94-9.4-3.noarch.rpm | 5.4 kB 00:00:00
    Examinando /var/tmp/yum-root-CMm6Wp/pgdg-centos94-9.4-3.noarch.rpm: pgdg-centos94-9.4-3.noarch
    /var/tmp/yum-root-CMm6Wp/pgdg-centos94-9.4-3.noarch.rpm: no actualiza el paquete instalado.
    Nada para hacer
    Complementos cargados:fastestmirror, langpacks
    Loading mirror speeds from cached hostfile
  • base: mirror.us.leaseweb.net
  • epel: mirror.globo.com
  • extras: mirror.centos.org
  • updates: mirror.centos.org
    El paquete libxslt-1.1.28-5.el7.x86_64 ya se encuentra instalado con su versión más reciente
    El paquete gd-2.0.35-26.el7.x86_64 ya se encuentra instalado con su versión más reciente
    Nada para hacer
    Looking for MACCS…
    MACCS found at /opt/maccs/core/5.1/bin/maccs
    Complementos cargados:fastestmirror, langpacks
    Loading mirror speeds from cached hostfile
  • base: mirror.us.leaseweb.net
  • epel: mirror.globo.com
  • extras: mirror.centos.org
  • updates: mirror.centos.org
    El paquete gdal-python-1.11.4-10.rhel7.x86_64 ya se encuentra instalado con su versión más reciente
    El paquete python-psycopg2-2.7.3.2-1.rhel7.x86_64 ya se encuentra instalado con su versión más reciente
    El paquete gd-2.0.35-26.el7.x86_64 ya se encuentra instalado con su versión más reciente
    Nada para hacer
    Complementos cargados:fastestmirror, langpacks
    Loading mirror speeds from cached hostfile
  • base: mirror.us.leaseweb.net
  • epel: mirror.globo.com
  • extras: mirror.centos.org
  • updates: mirror.centos.org
    No existe disponible ningún paquete …/rpm_binaries/otb-*.rpm.
    Error: Nada para hacer
    Complementos cargados:fastestmirror, langpacks
    Loading mirror speeds from cached hostfile
  • base: mirror.us.leaseweb.net
  • epel: mirror.globo.com
  • extras: mirror.centos.org
  • updates: mirror.centos.org
    No existe disponible ningún paquete …/rpm_binaries/sen2agri-processors-*.centos7.x86_64.rpm.
    Error: Nada para hacer
    ln: fallo al crear el enlace simbólico «/usr/lib64/libproj.so»: El fichero ya existe
    Complementos cargados:fastestmirror, langpacks
    Loading mirror speeds from cached hostfile
  • base: mirror.us.leaseweb.net
  • epel: mirror.globo.com
  • extras: mirror.centos.org
  • updates: mirror.centos.org
    No existe disponible ningún paquete …/rpm_binaries/sen2agri-app-*.centos7.x86_64.rpm.
    Error: Nada para hacer
    Complementos cargados:fastestmirror, langpacks
    Loading mirror speeds from cached hostfile
  • base: mirror.us.leaseweb.net
  • epel: mirror.globo.com
  • extras: mirror.centos.org
  • updates: mirror.centos.org
    No existe disponible ningún paquete …/rpm_binaries/slurm/slurm-15.08.7-1.el7.centos.x86_64.rpm.
    No existe disponible ningún paquete …/rpm_binaries/slurm/slurm-devel-15.08.7-1.el7.centos.x86_64.rpm.
    No existe disponible ningún paquete …/rpm_binaries/slurm/slurm-munge-15.08.7-1.el7.centos.x86_64.rpm.
    No existe disponible ningún paquete …/rpm_binaries/slurm/slurm-perlapi-15.08.7-1.el7.centos.x86_64.rpm.
    No existe disponible ningún paquete …/rpm_binaries/slurm/slurm-pam_slurm-15.08.7-1.el7.centos.x86_64.rpm.
    No existe disponible ningún paquete …/rpm_binaries/slurm/slurm-plugins-15.08.7-1.el7.centos.x86_64.rpm.
    No existe disponible ningún paquete …/rpm_binaries/slurm/slurm-sjobexit-15.08.7-1.el7.centos.x86_64.rpm.
    No existe disponible ningún paquete …/rpm_binaries/slurm/slurm-sjstat-15.08.7-1.el7.centos.x86_64.rpm.
    No existe disponible ningún paquete …/rpm_binaries/slurm/slurm-slurmdbd-15.08.7-1.el7.centos.x86_64.rpm.
    No existe disponible ningún paquete …/rpm_binaries/slurm/slurm-slurmdb-direct-15.08.7-1.el7.centos.x86_64.rpm.
    No existe disponible ningún paquete …/rpm_binaries/slurm/slurm-sql-15.08.7-1.el7.centos.x86_64.rpm.
    No existe disponible ningún paquete …/rpm_binaries/slurm/slurm-torque-15.08.7-1.el7.centos.x86_64.rpm.
    Error: Nada para hacer
    adduser: el usuario «sen2agri-service» ya existe
    adduser: el usuario «munge» ya existe
    1024+0 registros leídos
    1024+0 registros escritos
    1024 bytes (1,0 kB) copiados, 0,0017912 s, 572 kB/s
    MUNGE SERVICE: Active: active (running) since jue 2018-01-25 08:00:57 -03; 4min 56s ago
    adduser: el usuario «slurm» ya existe
    cp: falta el fichero de destino después de «/etc/slurm»
    Pruebe ‘cp --help’ para más información.
    cp: falta el fichero de destino después de «/etc/slurm»
    Pruebe ‘cp --help’ para más información.
    sed: no se puede leer /etc/slurm/slurm.conf: No es un directorio
    Complementos cargados:fastestmirror, langpacks
    Loading mirror speeds from cached hostfile
  • base: mirror.us.leaseweb.net
  • epel: mirror.globo.com
  • extras: mirror.centos.org
  • updates: mirror.centos.org
    El paquete expect-5.45-14.el7_1.x86_64 ya se encuentra instalado con su versión más reciente
    El paquete expectk-5.45-14.el7_1.x86_64 ya se encuentra instalado con su versión más reciente
    Nada para hacer
    Complementos cargados:fastestmirror, langpacks
    Loading mirror speeds from cached hostfile
  • base: mirror.us.leaseweb.net
  • epel: mirror.globo.com
  • extras: mirror.centos.org
  • updates: mirror.centos.org
    El paquete 1:mariadb-server-5.5.56-2.el7.x86_64 ya se encuentra instalado con su versión más reciente
    El paquete 1:mariadb-5.5.56-2.el7.x86_64 ya se encuentra instalado con su versión más reciente
    Nada para hacer
    MYSQL SERVICE: Active: active (running) since jue 2018-01-25 08:01:02 -03; 4min 52s ago
    spawn mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we’ll need the current
password for the root user. If you’ve just installed MariaDB, and
you haven’t set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on…

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] n
… skipping.

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
… Success!

Normally, root should only be allowed to connect from ‘localhost’. This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] y
… Success!

By default, MariaDB comes with a database named ‘test’ that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y

  • Dropping test database…
    … Success!
  • Removing privileges on test database…
    … Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
… Success!

Cleaning up…

All done! If you’ve completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!
spawn mysql -u root -p -e create database slurm_acct_db;create user slurm@localhost;
set password for slurm@localhost = password('sen2agri');grant usage on *.* to slurm;grant all privileges on slurm_acct_db.* to slurm;flush privileges;
Enter password:
ERROR 1007 (HY000) at line 1: Can’t create database ‘slurm_acct_db’; database exists
Job for slurmdbd.service failed because the control process exited with error code. See “systemctl status slurmdbd.service” and “journalctl -xe” for details.
slurmdbd.service is not a native service, redirecting to /sbin/chkconfig.
Executing /sbin/chkconfig slurmdbd on
SLURM DB SERVICE: Active: failed (Result: exit-code) since jue 2018-01-25 08:06:04 -03; 288ms ago
couldn’t execute “sacctmgr”: no such file or directory
while executing
"spawn sacctmgr add cluster “sen2agri”"
spawn sacctmgr add cluster sen2agri
mkdir: no se puede crear el directorio «/var/spool/slurm»: El fichero ya existe
mkdir: no se puede crear el directorio «/var/log/slurm»: El fichero ya existe
Failed to start slurmctld.service: Unit not found.
Failed to execute operation: No such file or directory
Unit slurmctld.service could not be found.
SLURM CTL SERVICE:
Failed to start slurmd.service: Unit not found.
Failed to execute operation: No such file or directory
Unit slurmd.service could not be found.
SLURM NODE SERVICE:
Failed to start slurm.service: Unit not found.
Failed to execute operation: No such file or directory
Unit slurm.service could not be found.
SLURM SERVICE:
couldn’t execute “sacctmgr”: no such file or directory
while executing
"spawn sacctmgr add account “sen2agri-service”"
spawn sacctmgr add account sen2agri-service
couldn’t execute “sacctmgr”: no such file or directory
while executing
"spawn sacctmgr add user “sen2agri-service” Account=“sen2agri-service”"
spawn sacctmgr add user sen2agri-service Account="sen2agri-service"
couldn’t execute “sacctmgr”: no such file or directory
while executing
"spawn sacctmgr modify user “sen2agri-service” set adminlevel=Admin"
spawn sacctmgr modify user sen2agri-service set adminlevel=Admin
couldn’t execute “sacctmgr”: no such file or directory
while executing
"spawn sacctmgr add qos “qosMaccs”"
spawn sacctmgr add qos qosMaccs
couldn’t execute “sacctmgr”: no such file or directory
while executing
"spawn sacctmgr modify qos qosMaccs set GrpJobs=1"
spawn sacctmgr modify qos qosMaccs set GrpJobs=1
couldn’t execute “sacctmgr”: no such file or directory
while executing
"spawn sacctmgr modify user sen2agri-service set qos+=qosMaccs"
spawn sacctmgr modify user sen2agri-service set qos+=qosMaccs
couldn’t execute “sacctmgr”: no such file or directory
while executing
"spawn sacctmgr add qos “qosComposite”"
spawn sacctmgr add qos qosComposite
couldn’t execute “sacctmgr”: no such file or directory
while executing
"spawn sacctmgr modify qos qosComposite set GrpJobs=1"
spawn sacctmgr modify qos qosComposite set GrpJobs=1
couldn’t execute “sacctmgr”: no such file or directory
while executing
"spawn sacctmgr modify user sen2agri-service set qos+=qosComposite"
spawn sacctmgr modify user sen2agri-service set qos+=qosComposite
couldn’t execute “sacctmgr”: no such file or directory
while executing
"spawn sacctmgr add qos “qosCropMask”"
spawn sacctmgr add qos qosCropMask
couldn’t execute “sacctmgr”: no such file or directory
while executing
"spawn sacctmgr modify qos qosCropMask set GrpJobs=1"
spawn sacctmgr modify qos qosCropMask set GrpJobs=1
couldn’t execute “sacctmgr”: no such file or directory
while executing
"spawn sacctmgr modify user sen2agri-service set qos+=qosCropMask"
spawn sacctmgr modify user sen2agri-service set qos+=qosCropMask
couldn’t execute “sacctmgr”: no such file or directory
while executing
"spawn sacctmgr add qos “qosCropType”"
spawn sacctmgr add qos qosCropType
couldn’t execute “sacctmgr”: no such file or directory
while executing
"spawn sacctmgr modify qos qosCropType set GrpJobs=1"
spawn sacctmgr modify qos qosCropType set GrpJobs=1
couldn’t execute “sacctmgr”: no such file or directory
while executing
"spawn sacctmgr modify user sen2agri-service set qos+=qosCropType"
spawn sacctmgr modify user sen2agri-service set qos+=qosCropType
couldn’t execute “sacctmgr”: no such file or directory
while executing
"spawn sacctmgr add qos “qosPheno”"
spawn sacctmgr add qos qosPheno
couldn’t execute “sacctmgr”: no such file or directory
while executing
"spawn sacctmgr modify qos qosPheno set GrpJobs=1"
spawn sacctmgr modify qos qosPheno set GrpJobs=1
couldn’t execute “sacctmgr”: no such file or directory
while executing
"spawn sacctmgr modify user sen2agri-service set qos+=qosPheno"
spawn sacctmgr modify user sen2agri-service set qos+=qosPheno
couldn’t execute “sacctmgr”: no such file or directory
while executing
"spawn sacctmgr add qos “qosLai”"
spawn sacctmgr add qos qosLai
couldn’t execute “sacctmgr”: no such file or directory
while executing
"spawn sacctmgr modify qos qosLai set GrpJobs=1"
spawn sacctmgr modify qos qosLai set GrpJobs=1
couldn’t execute “sacctmgr”: no such file or directory
while executing
"spawn sacctmgr modify user sen2agri-service set qos+=qosLai"
spawn sacctmgr modify user sen2agri-service set qos+=qosLai
couldn’t execute “sacctmgr”: no such file or directory
while executing
"spawn sacctmgr modify user sen2agri-service set qos+=normal"
spawn sacctmgr modify user sen2agri-service set qos+=normal
CLUSTER,USERS,QOS INFO:
/Sen2AgriDistribution/install_script/sen2agriPlatformInstallAndConfig.sh: línea 315: sacctmgr: no se encontró la orden
QOS INFO:
/Sen2AgriDistribution/install_script/sen2agriPlatformInstallAndConfig.sh: línea 318: sacctmgr: no se encontró la orden
Partition INFO:
/Sen2AgriDistribution/install_script/sen2agriPlatformInstallAndConfig.sh: línea 321: scontrol: no se encontró la orden
Nodes INFO:
/Sen2AgriDistribution/install_script/sen2agriPlatformInstallAndConfig.sh: línea 324: scontrol: no se encontró la orden
Complementos cargados:fastestmirror, langpacks
Loading mirror speeds from cached hostfile

Complementos cargados:fastestmirror, langpacks
Loading mirror speeds from cached hostfile

  • base: mirror.us.leaseweb.net
  • epel: mirror.globo.com
  • extras: mirror.centos.org
  • updates: mirror.centos.org
    El paquete postgresql94-contrib-9.4.15-1PGDG.rhel7.x86_64 ya se encuentra instalado con su versión más reciente
    Nada para hacer
    Complementos cargados:fastestmirror, langpacks
    Loading mirror speeds from cached hostfile
  • base: mirror.us.leaseweb.net
  • epel: mirror.globo.com
  • extras: mirror.centos.org
  • updates: mirror.centos.org
    El paquete postgis22_94-2.2.6-1.rhel7.x86_64 ya se encuentra instalado con su versión más reciente
    Nada para hacer
    POSTGRESQL SERVICE: Active: active (running) since jue 2018-01-25 08:00:58 -03; 5min ago
    cat: ./.kde/share/apps/activitymanager/activityranking/database
    ./.kde/share/apps/activitymanager/resources/database/00-database/sen2agri.sql: No existe el fichero o el directorio
    sed: no se puede leer ./.kde/share/apps/activitymanager/resources/database/07-data/09.config.sql: No es un directorio
    Executing SQL script: ./.kde/share/apps/activitymanager/activityranking/database
    ./.kde/share/apps/activitymanager/resources/database/01-extensions/.sql
    cat: ./.kde/share/apps/activitymanager/activityranking/database
    ./.kde/share/apps/activitymanager/resources/database/01-extensions/
    .sql: No existe el fichero o el directorio
    Executing SQL script: ./.kde/share/apps/activitymanager/activityranking/database
    ./.kde/share/apps/activitymanager/resources/database/02-types/.sql
    cat: ./.kde/share/apps/activitymanager/activityranking/database
    ./.kde/share/apps/activitymanager/resources/database/02-types/
    .sql: No existe el fichero o el directorio
    Executing SQL script: ./.kde/share/apps/activitymanager/activityranking/database
    ./.kde/share/apps/activitymanager/resources/database/03-tables/.sql
    cat: ./.kde/share/apps/activitymanager/activityranking/database
    ./.kde/share/apps/activitymanager/resources/database/03-tables/
    .sql: No existe el fichero o el directorio
    Executing SQL script: ./.kde/share/apps/activitymanager/activityranking/database
    ./.kde/share/apps/activitymanager/resources/database/04-views/.sql
    cat: ./.kde/share/apps/activitymanager/activityranking/database
    ./.kde/share/apps/activitymanager/resources/database/04-views/
    .sql: No existe el fichero o el directorio
    Executing SQL script: ./.kde/share/apps/activitymanager/activityranking/database
    ./.kde/share/apps/activitymanager/resources/database/05-functions/.sql
    cat: ./.kde/share/apps/activitymanager/activityranking/database
    ./.kde/share/apps/activitymanager/resources/database/05-functions/
    .sql: No existe el fichero o el directorio
    Executing SQL script: ./.kde/share/apps/activitymanager/activityranking/database
    ./.kde/share/apps/activitymanager/resources/database/06-indexes/.sql
    cat: ./.kde/share/apps/activitymanager/activityranking/database
    ./.kde/share/apps/activitymanager/resources/database/06-indexes/
    .sql: No existe el fichero o el directorio
    Executing SQL script: ./.kde/share/apps/activitymanager/activityranking/database
    ./.kde/share/apps/activitymanager/resources/database/07-data/.sql
    cat: ./.kde/share/apps/activitymanager/activityranking/database
    ./.kde/share/apps/activitymanager/resources/database/07-data/
    .sql: No existe el fichero o el directorio
    Executing SQL script: ./.kde/share/apps/activitymanager/activityranking/database
    ./.kde/share/apps/activitymanager/resources/database/08-keys/.sql
    cat: ./.kde/share/apps/activitymanager/activityranking/database
    ./.kde/share/apps/activitymanager/resources/database/08-keys/
    .sql: No existe el fichero o el directorio
    Executing SQL script: ./.kde/share/apps/activitymanager/activityranking/database
    ./.kde/share/apps/activitymanager/resources/database/09-privileges/.sql
    cat: ./.kde/share/apps/activitymanager/activityranking/database
    ./.kde/share/apps/activitymanager/resources/database/09-privileges/
    .sql: No existe el fichero o el directorio
    cp: falta el fichero de destino después de «/var/lib/pgsql/9.4/data/»
    Pruebe ‘cp --help’ para más información.
    Complementos cargados:fastestmirror, langpacks
    Loading mirror speeds from cached hostfile
  • base: mirror.us.leaseweb.net
  • epel: mirror.globo.com
  • extras: mirror.centos.org
  • updates: mirror.centos.org
    El paquete php-pgsql-5.4.16-43.el7_4.x86_64 ya se encuentra instalado con su versión más reciente
    Nada para hacer
    Complementos cargados:fastestmirror, langpacks
    Loading mirror speeds from cached hostfile
  • base: mirror.us.leaseweb.net
  • epel: mirror.globo.com
  • extras: mirror.centos.org
  • updates: mirror.centos.org
    El paquete httpd-2.4.6-67.el7.centos.6.x86_64 ya se encuentra instalado con su versión más reciente
    Nada para hacer
    Complementos cargados:fastestmirror, langpacks
    Loading mirror speeds from cached hostfile
  • base: mirror.us.leaseweb.net
  • epel: mirror.globo.com
  • extras: mirror.centos.org
  • updates: mirror.centos.org
    El paquete php-5.4.16-43.el7_4.x86_64 ya se encuentra instalado con su versión más reciente
    El paquete php-mysql-5.4.16-43.el7_4.x86_64 ya se encuentra instalado con su versión más reciente
    Nada para hacer
    Complementos cargados:fastestmirror, langpacks
    Loading mirror speeds from cached hostfile
  • base: mirror.us.leaseweb.net
  • epel: mirror.globo.com
  • extras: mirror.centos.org
  • updates: mirror.centos.org
    No existe disponible ningún paquete …/rpm_binaries/sen2agri-website-*.centos7.x86_64.rpm.
    Error: Nada para hacer
    Complementos cargados:fastestmirror, langpacks
    Loading mirror speeds from cached hostfile
  • base: mirror.us.leaseweb.net
  • epel: mirror.globo.com
  • extras: mirror.centos.org
  • updates: mirror.centos.org
    El paquete wget-1.14-15.el7_4.1.x86_64 ya se encuentra instalado con su versión más reciente
    El paquete python-lxml-3.2.1-4.el7.x86_64 ya se encuentra instalado con su versión más reciente
    El paquete bzip2-1.0.6-13.el7.x86_64 ya se encuentra instalado con su versión más reciente
    El paquete python-beautifulsoup4-4.3.2-1.el7.noarch ya se encuentra instalado con su versión más reciente
    El paquete python-dateutil-1.5-7.el7.noarch ya se encuentra instalado con su versión más reciente
    El paquete 1:java-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64 ya se encuentra instalado con su versión más reciente
    Nada para hacer
    Complementos cargados:fastestmirror, langpacks
    Loading mirror speeds from cached hostfile
  • base: mirror.us.leaseweb.net
  • epel: mirror.globo.com
  • extras: mirror.centos.org
  • updates: mirror.centos.org
    No existe disponible ningún paquete …/rpm_binaries/sen2agri-downloaders-demmaccs-*.centos7.x86_64.rpm.
    Error: Nada para hacer
    Please edit the following files to set up your USGS and SciHub credentials:
    /usr/share/sen2agri/sen2agri-downloaders/usgs.txt
    /usr/share/sen2agri/sen2agri-downloaders/apihub.txt
    [root@localhost nils.kaiser]#

Do you know why that might be? What does your installation package look like?
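
For reference, the installer run above tried to install the packages from ../rpm_binaries relative to the install script, so (assuming the layout implied by that log, with the distribution extracted to /Sen2AgriDistribution) a quick way to check whether the SLURM RPMs are present at all would be:

find /Sen2AgriDistribution -name 'slurm-*.rpm'

If nothing is found, the distribution archive was probably downloaded or extracted incompletely.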

Hi all,

I have a similar problem: I submitted some custom jobs, but they are never executed:

[screenshot: Screenshot 2018-3-5 Sentinel-2 for Agriculture]

Slurm seems to be running without problems though:

● slurmdbd.service - Slurm DBD accounting daemon
   Loaded: loaded (/usr/lib/systemd/system/slurmdbd.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-03-05 13:50:18 UTC; 1h 28min ago
  Process: 894 ExecStart=/usr/sbin/slurmdbd $SLURMDBD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 1601 (slurmdbd)
   CGroup: /system.slice/slurmdbd.service
           └─1601 /usr/sbin/slurmdbd

Mar 05 13:50:13 benchmark-32-4sites.novalocal systemd[1]: Starting Slurm DBD accounting daemon...
Mar 05 13:50:18 benchmark-32-4sites.novalocal systemd[1]: PID file /var/run/slurmdbd.pid not readable (yet?) after start.
Mar 05 13:50:18 benchmark-32-4sites.novalocal systemd[1]: Started Slurm DBD accounting daemon.

● slurmctld.service - Slurm controller daemon
   Loaded: loaded (/usr/lib/systemd/system/slurmctld.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2018-03-05 13:50:13 UTC; 1h 28min ago
  Process: 888 ExecStart=/usr/sbin/slurmctld $SLURMCTLD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 937 (slurmctld)
   CGroup: /system.slice/slurmctld.service
           └─937 /usr/sbin/slurmctld

Mar 05 13:50:13 benchmark-32-4sites.novalocal systemd[1]: Starting Slurm controller daemon...
Mar 05 13:50:13 benchmark-32-4sites.novalocal systemd[1]: PID file /var/run/slurmctld.pid not readable (yet?) after start.
Mar 05 13:50:13 benchmark-32-4sites.novalocal systemd[1]: slurmctld.service: Supervising process 937 which is not our child. We'll mo...exits.
Mar 05 13:50:13 benchmark-32-4sites.novalocal systemd[1]: Started Slurm controller daemon.

● slurmd.service - Slurm node daemon
   Loaded: loaded (/usr/lib/systemd/system/slurmd.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/slurmd.service.d
           └─override.conf
   Active: active (running) since Mon 2018-03-05 13:50:13 UTC; 1h 28min ago
  Process: 895 ExecStart=/usr/sbin/slurmd $SLURMD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 993 (slurmd)
   CGroup: /system.slice/slurmd.service
           └─993 /usr/sbin/slurmd

Mar 05 13:50:13 benchmark-32-4sites.novalocal systemd[1]: Starting Slurm node daemon...
Mar 05 13:50:13 benchmark-32-4sites.novalocal systemd[1]: PID file /var/run/slurmd.pid not readable (yet?) after start.
Mar 05 13:50:13 benchmark-32-4sites.novalocal systemd[1]: Started Slurm node daemon.

Further, scheduled tasks that I created in “Dashboard” are never executed.

Does anybody have any ideas why this is the case?

Quick update on this:
I found that in the task table, the status of the individual tasks is set to 3, which means “NeedsInput | The activity has been suspended due to lack of necessary input.”

 id  | job_id |     module_short_name      | parameters |       submit_timestamp        | start_timestamp | end_timestamp | status_id |       status_timestamp        | preceding_task_ids
-----+--------+----------------------------+------------+-------------------------------+-----------------+---------------+-----------+-------------------------------+--------------------
 374 |     18 | composite-total-weight     | null       | 2018-03-07 10:18:55.647457+00 |                 |               |         3 | 2018-03-07 10:18:55.647457+00 | {373,372}

I submitted the job with the default configuration, so I don’t really know what input is missing. Any ideas?

Hi,

That seems a bit concerning. In this case, the status means that the task in question is waiting for a previous task to finish, which probably means that a notification was lost at some point. Can you run something like the queries below? They might show us what (the system thinks) happened to the job.

select task.*,
       step.*
from task
inner join step on step.task_id = task.id
where task.job_id = 18
order by task.id, step.submit_timestamp;

select *
from event
where (data->>'job_id') :: int = 18;
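
These can be run with psql; assuming the default sen2agri database name, something like the following should open a prompt where the queries can be pasted:

sudo -u postgres psql sen2agri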

Here is the result of the first query:

id  | job_id |     module_short_name      | parameters |       submit_timestamp        | start_timestamp | end_timestamp | status_id |       status_timestamp        | preceding_task_ids |          name          | task_id |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           parameters                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            |       submit_timestamp        | start_timestamp | end_timestamp | exit_code | status_id |       status_timestamp        
-----+--------+----------------------------+------------+-------------------------------+-----------------+---------------+-----------+-------------------------------+--------------------+------------------------+---------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------+-----------------+---------------+-----------+-----------+-------------------------------
 370 |     18 | composite-mask-handler     | null       | 2018-03-07 10:18:55.42136+00  |                 |               |         1 | 2018-03-07 10:18:55.42136+00  |                    | MaskHandler            |     370 | {"arguments":["MaskHandler","-xml","/mnt/archive/maccs_def/waldviertel/l2a/LC08_L2A_190026_20170629_20170714_01_T1/L8_TEST_L8C_L2VALD_190026_20170629.HDR","-out","/mnt/archive/orchestrator_temp/l3a/18/370-composite-mask-handler/all_masks_file.tif","-sentinelres","30"]}                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   | 2018-03-07 10:18:55.885844+00 |                 |               |           |         2 | 2018-03-07 10:18:56.40966+00
 371 |     18 | composite-preprocessing    | null       | 2018-03-07 10:18:55.4962+00   |                 |               |         3 | 2018-03-07 10:18:55.4962+00   | {370}              | CompositePreprocessing |     371 | {"arguments":["CompositePreprocessing2","-xml","/mnt/archive/maccs_def/waldviertel/l2a/LC08_L2A_190026_20170629_20170714_01_T1/L8_TEST_L8C_L2VALD_190026_20170629.HDR","-bmap","/usr/share/sen2agri/bands_mapping_L8.txt","-res","30","-msk","/mnt/archive/orchestrator_temp/l3a/18/370-composite-mask-handler/all_masks_file.tif","-outres","/mnt/archive/orchestrator_temp/l3a/18/371-composite-preprocessing/img_res_bands.tif","-outcmres","/mnt/archive/orchestrator_temp/l3a/18/371-composite-preprocessing/cld_res.tif","-outwmres","/mnt/archive/orchestrator_temp/l3a/18/371-composite-preprocessing/water_res.tif","-outsmres","/mnt/archive/orchestrator_temp/l3a/18/371-composite-preprocessing/snow_res.tif","-outaotres","/mnt/archive/orchestrator_temp/l3a/18/371-composite-preprocessing/aot_res.tif","-scatcoef","/usr/share/sen2agri/scattering_coeffs_20m.txt"]}                                                                                                                                                                                                                                                                                                                                            | 2018-03-07 10:18:55.885844+00 |                 |               |           |         1 | 2018-03-07 10:18:55.885844+00
 372 |     18 | composite-weight-aot       | null       | 2018-03-07 10:18:55.548426+00 |                 |               |         3 | 2018-03-07 10:18:55.548426+00 | {371}              | WeightAOT              |     372 | {"arguments":["WeightAOT","-xml","/mnt/archive/maccs_def/waldviertel/l2a/LC08_L2A_190026_20170629_20170714_01_T1/L8_TEST_L8C_L2VALD_190026_20170629.HDR","-in","/mnt/archive/orchestrator_temp/l3a/18/371-composite-preprocessing/aot_res.tif","-waotmin","0.33","-waotmax","1","-aotmax","0.8","-out","/mnt/archive/orchestrator_temp/l3a/18/372-composite-weight-aot/weight_aot.tif"]}                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        | 2018-03-07 10:18:55.885844+00 |                 |               |           |         1 | 2018-03-07 10:18:55.885844+00
 373 |     18 | composite-weight-on-clouds | null       | 2018-03-07 10:18:55.588695+00 |                 |               |         3 | 2018-03-07 10:18:55.588695+00 | {371}              | WeightOnClouds         |     373 | {"arguments":["WeightOnClouds","-inxml","/mnt/archive/maccs_def/waldviertel/l2a/LC08_L2A_190026_20170629_20170714_01_T1/L8_TEST_L8C_L2VALD_190026_20170629.HDR","-incldmsk","/mnt/archive/orchestrator_temp/l3a/18/371-composite-preprocessing/cld_res.tif","-coarseres","240","-sigmasmallcld","2","-sigmalargecld","10","-out","/mnt/archive/orchestrator_temp/l3a/18/373-composite-weight-on-clouds/weight_cloud.tif"]}                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      | 2018-03-07 10:18:55.885844+00 |                 |               |           |         1 | 2018-03-07 10:18:55.885844+00
 374 |     18 | composite-total-weight     | null       | 2018-03-07 10:18:55.647457+00 |                 |               |         3 | 2018-03-07 10:18:55.647457+00 | {373,372}          | TotalWeight            |     374 | {"arguments":["TotalWeight","-xml","/mnt/archive/maccs_def/waldviertel/l2a/LC08_L2A_190026_20170629_20170714_01_T1/L8_TEST_L8C_L2VALD_190026_20170629.HDR","-waotfile","/mnt/archive/orchestrator_temp/l3a/18/372-composite-weight-aot/weight_aot.tif","-wcldfile","/mnt/archive/orchestrator_temp/l3a/18/373-composite-weight-on-clouds/weight_cloud.tif","-l3adate","20170714","-halfsynthesis","25","-wdatemin","0.5","-out","/mnt/archive/orchestrator_temp/l3a/18/374-composite-total-weight/weight_total.tif"]}                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           | 2018-03-07 10:18:55.885844+00 |                 |               |           |         1 | 2018-03-07 10:18:55.885844+00
 375 |     18 | composite-update-synthesis | null       | 2018-03-07 10:18:55.697656+00 |                 |               |         3 | 2018-03-07 10:18:55.697656+00 | {374}              | UpdateSynthesis        |     375 | {"arguments":["UpdateSynthesis","-in","/mnt/archive/orchestrator_temp/l3a/18/371-composite-preprocessing/img_res_bands.tif","-bmap","/usr/share/sen2agri/bands_mapping_L8.txt","-xml","/mnt/archive/maccs_def/waldviertel/l2a/LC08_L2A_190026_20170629_20170714_01_T1/L8_TEST_L8C_L2VALD_190026_20170629.HDR","-csm","/mnt/archive/orchestrator_temp/l3a/18/371-composite-preprocessing/cld_res.tif","-wm","/mnt/archive/orchestrator_temp/l3a/18/371-composite-preprocessing/water_res.tif","-sm","/mnt/archive/orchestrator_temp/l3a/18/371-composite-preprocessing/snow_res.tif","-wl2a","/mnt/archive/orchestrator_temp/l3a/18/374-composite-total-weight/weight_total.tif","-out","/mnt/archive/orchestrator_temp/l3a/18/375-composite-update-synthesis/L3AResult.tif"]}                                                                                                                                                                                                                                                                                                                                                                                                                                                   | 2018-03-07 10:18:55.885844+00 |                 |               |           |         1 | 2018-03-07 10:18:55.885844+00
 376 |     18 | composite-splitter         | null       | 2018-03-07 10:18:55.737594+00 |                 |               |         3 | 2018-03-07 10:18:55.737594+00 | {375}              | CompositeSplitter      |     376 | {"arguments":["CompositeSplitter2","-in","/mnt/archive/orchestrator_temp/l3a/18/375-composite-update-synthesis/L3AResult.tif","-xml","/mnt/archive/maccs_def/waldviertel/l2a/LC08_L2A_190026_20170629_20170714_01_T1/L8_TEST_L8C_L2VALD_190026_20170629.HDR","-bmap","/usr/share/sen2agri/bands_mapping_L8.txt","-outweights","\"/mnt/archive/orchestrator_temp/l3a/18/376-composite-splitter/L3AResult_weights.tif?gdal:co:COMPRESS=DEFLATE\"","-outdates","\"/mnt/archive/orchestrator_temp/l3a/18/376-composite-splitter/L3AResult_dates.tif?gdal:co:COMPRESS=DEFLATE\"","-outrefls","\"/mnt/archive/orchestrator_temp/l3a/18/376-composite-splitter/L3AResult_refls.tif?gdal:co:COMPRESS=DEFLATE\"","-outflags","\"/mnt/archive/orchestrator_temp/l3a/18/376-composite-splitter/L3AResult_flags.tif?gdal:co:COMPRESS=DEFLATE\"","-isfinal","1","-outrgb","\"/mnt/archive/orchestrator_temp/l3a/18/376-composite-splitter/L3AResult_rgb.tif?gdal:co:COMPRESS=DEFLATE\""]}                                                                                                                                                                                                                                                    | 2018-03-07 10:18:55.885844+00 |                 |               |           |         1 | 2018-03-07 10:18:55.885844+00
 377 |     18 | files-remover              | null       | 2018-03-07 10:18:55.780345+00 |                 |               |         3 | 2018-03-07 10:18:55.780345+00 | {376}              | CleanupTemporaryFiles  |     377 | {"arguments":["/mnt/archive/orchestrator_temp/l3a/18/370-composite-mask-handler/all_masks_file.tif","/mnt/archive/orchestrator_temp/l3a/18/371-composite-preprocessing/img_res_bands.tif","/mnt/archive/orchestrator_temp/l3a/18/371-composite-preprocessing/cld_res.tif","/mnt/archive/orchestrator_temp/l3a/18/371-composite-preprocessing/water_res.tif","/mnt/archive/orchestrator_temp/l3a/18/371-composite-preprocessing/snow_res.tif","/mnt/archive/orchestrator_temp/l3a/18/371-composite-preprocessing/aot_res.tif","/mnt/archive/orchestrator_temp/l3a/18/372-composite-weight-aot/weight_aot.tif","/mnt/archive/orchestrator_temp/l3a/18/373-composite-weight-on-clouds/weight_cloud.tif","/mnt/archive/orchestrator_temp/l3a/18/374-composite-total-weight/weight_total.tif","/mnt/archive/orchestrator_temp/l3a/18/375-composite-update-synthesis/L3AResult.tif"]}                                                                                                                                                                                                                                                                                                                                                 | 2018-03-07 10:18:55.885844+00 |                 |               |           |         1 | 2018-03-07 10:18:55.885844+00
 378 |     18 | product-formatter          | null       | 2018-03-07 10:18:55.827354+00 |                 |               |         3 | 2018-03-07 10:18:55.827354+00 | {377}              | ProductFormatter       |     378 | {"arguments":["ProductFormatter","-destroot","/mnt/archive/waldviertel/l3a/","-fileclass","SVT1","-level","L3A","-timeperiod","20170714","-baseline","01.00","-siteid","14","-processor","composite","-gipp","/mnt/archive/orchestrator_temp/l3a/18/378-product-formatter/executionInfos.xml","-outprops","/mnt/archive/orchestrator_temp/l3a/18/378-product-formatter/product_properties.txt","-il","/mnt/archive/maccs_def/waldviertel/l2a/LC08_L2A_190026_20170629_20170714_01_T1/L8_TEST_L8C_L2VALD_190026_20170629.HDR","-lut","/usr/share/sen2agri/composite.map","-processor.composite.refls","TILE_190026","/mnt/archive/orchestrator_temp/l3a/18/376-composite-splitter/L3AResult_refls.tif","-processor.composite.weights","TILE_190026","/mnt/archive/orchestrator_temp/l3a/18/376-composite-splitter/L3AResult_weights.tif","-processor.composite.flags","TILE_190026","/mnt/archive/orchestrator_temp/l3a/18/376-composite-splitter/L3AResult_flags.tif","-processor.composite.dates","TILE_190026","/mnt/archive/orchestrator_temp/l3a/18/376-composite-splitter/L3AResult_dates.tif","-processor.composite.rgb","TILE_190026","/mnt/archive/orchestrator_temp/l3a/18/376-composite-splitter/L3AResult_rgb.tif"]} | 2018-03-07 10:18:55.885844+00 |                 |               |           |         1 | 2018-03-07 10:18:55.885844+00

And here is the second one:

id  | type_id |                                                                                   data                                                                                   |      submitted_timestamp      | processing_started_timestamp  | processing_completed_timestamp 
-----+---------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------+-------------------------------+--------------------------------
 363 |       7 | {"job_id":18, "processor_id":2, "site_id":14, "parameters":{"resolution":"10","input_products":["LC08_L2A_190026_20170629_20170714_01_T1"],"synthesis_date":"20170714"}} | 2018-03-07 10:18:50.205909+00 | 2018-03-07 10:18:55.341496+00 | 2018-03-07 10:18:55.899006+00
 364 |       1 | {"job_id":18, "processor_id":2, "task_id":370}                                                                                                                           | 2018-03-07 10:18:55.42136+00  | 2018-03-07 10:18:55.910447+00 | 2018-03-07 10:18:55.995666+00

What are the contents of /var/log/slurm?

-rw-------. 1 slurm slurm  14655 Mar  5 13:50 slurmdbd.log
-rw-------. 1 root  root   90728 Mar  5 13:50 slurmd.log
-rw-------. 1 slurm slurm 116531 Mar  7 14:08 slurm.log

and in slurm.log:

/var/log/slurm/slurm.log 
[2018-03-07T12:28:48.251] error: _slurm_rpc_node_registration node=localhost: Invalid argument
[2018-03-07T13:02:08.951] error: Node localhost has low socket*core*thread count (8 < 12)
[2018-03-07T13:02:08.951] error: Node localhost has low cpu count (8 < 12)
[2018-03-07T13:02:08.951] error: _slurm_rpc_node_registration node=localhost: Invalid argument
[2018-03-07T13:35:28.691] error: Node localhost has low socket*core*thread count (8 < 12)
[2018-03-07T13:35:28.691] error: Node localhost has low cpu count (8 < 12)
[2018-03-07T13:35:28.691] error: _slurm_rpc_node_registration node=localhost: Invalid argument
[2018-03-07T14:08:48.367] error: Node localhost has low socket*core*thread count (8 < 12)
[2018-03-07T14:08:48.368] error: Node localhost has low cpu count (8 < 12)
[2018-03-07T14:08:48.368] error: _slurm_rpc_node_registration node=localhost: Invalid argument

Coming back to my latest message: does this mean that a minimum number of 12 CPUs is required?

Take a look at the bottom of /etc/slurm/slurm.conf. It’s possible that the node configuration is incorrect.

However, I’m not sure that’s what’s causing your issue. Does sudo srun ls / work?
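
For comparing the node definition at the bottom of slurm.conf with the actual hardware, slurmd can print the configuration it detects on the machine, for example:

slurmd -C
grep -i '^NodeName' /etc/slurm/slurm.conf

The first command reports the CPUs, sockets, cores and threads SLURM sees, which should match (or exceed) what the NodeName line declares.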

In slurm.conf, I found the following configuration:

NodeName=localhost CPUs=12

I changed it to:

NodeName=localhost CPUs=8

This doesn’t seem to have any effect, though.

srun ls / gives me the following:

srun: Required node not available (down, drained or reserved)
srun: job 184 queued and waiting for resources
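
As a generic follow-up (standard SLURM administration, not Sen2Agri-specific): after changing the CPUs value in slurm.conf, the daemons normally need to reload the configuration, and a node that was marked down or drained may have to be resumed before queued srun jobs can start, along these lines:

sudo systemctl restart slurmctld slurmd
sudo scontrol update NodeName=localhost State=RESUME
sinfo -N -l

The last command shows the node state and the reason it was drained, if any.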