Nextflow workflow report

[hopeful_ramanujan]

Workflow execution completed unsuccessfully!

The exit status of the task that caused the workflow execution to fail was: 127.

The full error message was:

Error executing process > 'SNPS:preprocessing (sample1)'

Caused by:
  Process `SNPS:preprocessing (sample1)` terminated with an error exit status (127)

Command executed:

  samtools sort -T deleteme -m 966367642 -@ 4 \
  -o sorted.bam sample1.bam || exit $?
  samtools calmd -b sorted.bam genome.fa 1> calmd.bam 2> /dev/null && rm sorted.bam
  samtools index calmd.bam

Command exit status:
  127

Command output:
  (empty)

Command error:
  .command.sh: line 2: samtools: command not found
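
For context (this note is the editor's, not part of the report): in POSIX shells an exit status of 127 specifically means the command could not be found, which matches the `samtools: command not found` message above. A minimal illustration:

```shell
# In POSIX shells, invoking a name that resolves to no executable
# yields exit status 127 ("command not found").
definitely_not_a_real_command_xyz 2>/dev/null
echo $?   # prints 127
```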

Work dir:
  /home/shared/8TB_HDD_01/sr320/github/nb-2022/work/66/98ca96794904c6379d1f997f9cf631

Tip: view the complete command output by changing to the process work dir and entering the command `cat .command.out`
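
As a debugging aside (the file names below are Nextflow's standard per-task files, not paths taken from this report): alongside `.command.out`, each work dir also contains the executed script, the captured stderr, and a wrapper for re-running the task. Given the 127 status, a quick first check is whether the program is on PATH at all:

```shell
# Standard files inside a Nextflow task work dir:
#   .command.sh   the script that was executed
#   .command.out  captured stdout
#   .command.err  captured stderr (here: "samtools: command not found")
#   .command.run  wrapper that re-runs the task with its full environment
# Exit 127 usually means the program is missing from PATH in the task's
# environment; a quick check on the current host:
command -v samtools >/dev/null 2>&1 || echo "samtools: not on PATH"
```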
Run times
  19-Jun-2022 08:45:06 - 19-Jun-2022 08:45:23 (duration: 17.1s)
  0 succeeded, 0 cached, 0 ignored, 12 failed
Nextflow command
  nextflow run epidiverse/snp -profile test, docker
CPU-Hours
  (a few seconds)
Launch directory
  /home/shared/8TB_HDD_01/sr320/github/nb-2022
Work directory
  /home/shared/8TB_HDD_01/sr320/github/nb-2022/work
Project directory
  /home/sr320/.nextflow/assets/epidiverse/snp
Script name
  main.nf
Script ID
  b0e299ed332648922c66d8a020a05bca
Workflow session
  026c4dd0-8ba9-40cb-a862-9b36a8cd7339
Workflow repository
  https://github.com/epidiverse/snp, revision master (commit hash 9c814703c690c2ade21c4586b36159940e092a4e)
Workflow profile
  test,
Nextflow version
  version 21.10.6, build 5660 (21-12-2021 16:55 UTC)
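
A likely root cause, worth noting (the diagnosis is the editor's, not the report's): the run was launched with `-profile test, docker`, and the space after the comma makes the shell pass `test,` and `docker` as two separate arguments. Nextflow therefore saw only the profile `test,` (the Workflow profile field above confirms this) and never activated the `docker` profile, which presumably supplies samtools in a container. Removing the space, i.e. `nextflow run epidiverse/snp -profile test,docker`, should fix it. The word splitting can be demonstrated directly:

```shell
# With the space, the shell splits the profile list into two words:
set -- -profile test, docker
echo $#   # 3 arguments: -profile, "test,", "docker"
# Without the space, the profile list stays a single word:
set -- -profile test,docker
echo $#   # 2 arguments: -profile, "test,docker"
```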

Resource Usage

These plots give an overview of the distribution of resource usage for each process.

(plots omitted: CPU, Memory, Job Duration, I/O)

Tasks

This table shows information about each task in the workflow.

(tasks table omitted because the dataset is too big)