Pulling Container with Apptainer at BASECALL:DORADO_BASECALL:downloadModel step fails · Issue #119 · biocorecrg/master_of_pores
I am working to get the Master of Pores (MOP) pipeline running on my HPC. Apptainer is our HPC's preferred method for containerization, so I have been running the MOP pipeline with the `-with-apptainer` flag.
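The invocation looks roughly like the following (a sketch based on the usage pattern in the MOP docs; the entry point, params file name, and log redirection are placeholders rather than my exact command):

```bash
# Sketch of the failing run; only the -with-apptainer flag is the point here.
# mop_preprocess.nf and params.yaml are placeholder names.
cd mop_preprocess
nextflow run mop_preprocess.nf \
    -params-file params.yaml \
    -with-apptainer \
    -bg > log.txt 2>&1
```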
Pulling the container at the BASECALL:DORADO_BASECALL:downloadModel step always fails, with this error message:
```
Command error:
  INFO:    gocryptfs not found, will not be able to use gocryptfs
  FATAL:   container creation failed: while getting /proc/self/ns/user file descriptor: while requesting file descriptors send: read unix @->@: read: connection reset by peer
```
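The FATAL points at user namespaces, which Apptainer relies on for unprivileged container creation; a quick sanity check on the compute node, outside of Nextflow, would be something like this (the alpine image is just a stand-in, not the MOP container):

```bash
# 0 here means unprivileged user namespaces are disabled on the node.
cat /proc/sys/user/max_user_namespaces
# Minimal container-creation smoke test with a throwaway public image.
apptainer exec docker://alpine:3.19 /bin/true
```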
However, if I run the command like this, the container successfully downloads. My question is: Nextflow supports Apptainer, and reading through the MOP docs, it looks like MOP supports the use of Apptainer as well. So why does the container creation step fail 100% of the time when I change to `-with-apptainer`?
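For context, Nextflow can enable Apptainer either via that flag or through an `apptainer` scope in `nextflow.config`, along these lines (a generic sketch, not MOP's shipped configuration):

```bash
# Generic sketch: enabling Apptainer in nextflow.config instead of using the CLI flag.
cat >> nextflow.config <<'EOF'
apptainer {
    enabled    = true
    autoMounts = true
}
EOF
```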
Lastly, I'll add that with the `-with-singularity` flag the MOP pipeline fails a few steps further on (due to a versioning issue with Singularity). Because of that error and my HPC's strong preference for Apptainer over Singularity, I switched to trying `-with-apptainer` instead.
Here is my params.yml file:
pod5: "/scratch/general/nfs1/fleming_mop_input/pod5_converted_files/*.pod5"
fastq: ""
## This can be empty but then you need to add specify kit and flowcell via command line inside pars_tools
conffile: "final_hek_tnfa_1-1.txt"
# Basecalling can be either NO, dorado, dorado-mod or dorado-duplex
basecalling: "dorado"
## Can be OFF / cuda10 / cuda11. Newer version of GUPPY may require cuda11
GPU: "cuda11"
# OFF may not be legitimate options for demultiplexing and demulti_fast5.
demultiplexing: "NO"
demulti_fast5: "OFF"
demulti_pod5: "OFF"
### Number of fast5 basecalled per parallel job
granularity: 1
### File with the list of accepted barcodes. It can be empty
barcodes: ""
# Needed for fastq input
## fast5: ""
# Common
reference: "/scratch/general/nfs1/u0432542/all_ref_rrna_human.fasta"
## Can be transcriptome / genome
ref_type: "transcriptome"
annotation: ""
## command line options
pars_tools: "${projectDir}/tool_opts/drna_tool_splice_opt.tsv"
## Cut off quality for QC
qualityqc: 5
## Can be nanoq / nanofilt
filtering: "nanoq"
## Can be graphmap / graphmap2 / minimap2 / bwa
mapping: "minimap2"
## Can be nanocount for transcriptome / htseq for genome
counting: "nanocount"
## Can be NO / bambu / isoquant
discovery: "NO"
## Convert bam to cram
cram_conv: "NO"
subsampling_cram: 50
progPars:
basecalling:
dorado: "fast"
dorado-duplex: ""
dorado-mod: ""
demultiplexing:
seqtagger: ""
dorado: ""
filtering:
seqkit: ""
nanoq: ""
mapping:
graphmap: ""
graphmap2: ""
minimap2: ""
bwa: ""
winnowmap: ""
counting:
htseq: ""
nanocount: ""
discovery:
bambu: ""