[RESOLVED] ERROR: paramfile not found

Description:
Hi all,
I’m currently trying to deploy the QuNex Singularity container (qunex_suite-0.97.1.sif) on our HPC system.

When calling run_turnkey, the following error occurs even though the directory is bound: ERROR: --paramfile flag set but file not found in default locations.

Running the Docker container locally with the same file structure works perfectly fine.

I’m not quite sure what I’m missing, so I would really appreciate your help!

Many thanks!

Best
LM

Call:

# -- Set the name of the study
export STUDY_NAME="nmdare_hpc"

# -- Set your working directory
export WORK_DIR="${HOME}/scratch/qunex"

# -- Specify the container
export QUNEX_CONTAINER="/data/gpfs-1/users/martinl_c/scratch/qunex/qunexcontainer/qunex_suite-0.97.1.sif"

# -- Location of previously prepared data
export RAW_DATA="${WORK_DIR}/data/Ritter_MRI"

# -- Batch parameters file
export INPUT_BATCH_FILE="${RAW_DATA}/LE_parameters.txt"

# -- Mapping file
export INPUT_MAPPING_FILE="${RAW_DATA}/LE_mapping.txt"

# -- Sessions to run
export SESSIONS="LE0261"

# -- You will run everything on the local file system as opposed to pulling data from a database (e.g. XNAT system)
export RUNTURNKEY_TYPE="local"

# -- List the processing steps (QuNex commands) you want to run
export RUNTURNKEY_STEPS="create_study,map_raw_data,import_dicom,create_session_info,setup_hcp,create_batch" #,hcp_pre_freesurfer,hcp_freesurfer,hcp_post_freesurfer,hcp_fmri_volume,hcp_fmri_surface"

qunex_container run_turnkey \
    --rawdatainput="${RAW_DATA}" \
    --batchfile="${INPUT_BATCH_FILE}" \
    --paramfile="${INPUT_BATCH_FILE}" \
    --mappingfile="${INPUT_MAPPING_FILE}" \
    --workingdir="${WORK_DIR}" \
    --projectname="${STUDY_NAME}" \
    --path="${WORK_DIR}/${STUDY_NAME}" \
    --sessions="${SESSIONS}" \
    --sessionsfoldername="sessions" \
    --turnkeytype="${RUNTURNKEY_TYPE}" \
    --container="${QUNEX_CONTAINER}" \
    --bind="${WORK_DIR}:/${WORK_DIR}" \
    --turnkeysteps="${RUNTURNKEY_STEPS}"

Logs:

--> QuNex will run the command over 1 sessions. It will utilize:

    Maximum sessions run in parallel for a job: 1.
    Maximum elements run in parallel for a session: 1.
    Up to 1 processes will be utilized for a job.
    Job #1 will run sessions: LE0261
(base) bash-4.4$     # --scheduler="SLURM,time=04-00:00:00,cpus-per-task=1,mem-per-cpu=16000,jobname=qx_quickstart"
(base) bash-4.4$ --> unsetting the following environment variables: PATH MATLABPATH PYTHONPATH QUNEXVer TOOLS QUNEXREPO QUNEXPATH QUNEXEXTENSIONS QUNEXLIBRARY QUNEXLIBRARYETC TemplateFolder FSL_FIXDIR FREESURFERDIR FREESURFER_HOME FREESURFER_SCHEDULER FreeSurferSchedulerDIR WORKBENCHDIR DCMNIIDIR DICMNIIDIR MATLABDIR MATLABBINDIR OCTAVEDIR OCTAVEPKGDIR OCTAVEBINDIR RDIR HCPWBDIR AFNIDIR PYLIBDIR FSLDIR FSLGPUDIR PALMDIR QUNEXMCOMMAND HCPPIPEDIR CARET7DIR GRADUNWARPDIR HCPPIPEDIR_Templates HCPPIPEDIR_Bin HCPPIPEDIR_Config HCPPIPEDIR_PreFS HCPPIPEDIR_FS HCPPIPEDIR_PostFS HCPPIPEDIR_fMRISurf HCPPIPEDIR_fMRIVol HCPPIPEDIR_tfMRI HCPPIPEDIR_dMRI HCPPIPEDIR_dMRITract HCPPIPEDIR_Global HCPPIPEDIR_tfMRIAnalysis HCPCIFTIRWDIR MSMBin HCPPIPEDIR_dMRITractFull HCPPIPEDIR_dMRILegacy AutoPtxFolder FSL_GPU_SCRIPTS FSLGPUBinary EDDYCUDADIR USEOCTAVE QUNEXENV CONDADIR MSMBINDIR MSMCONFIGDIR R_LIBS FSL_FIX_CIFTIRW FSFAST_HOME SUBJECTS_DIR MINC_BIN_DIR MNI_DIR MINC_LIB_DIR MNI_DATAPATH FSF_OUTPUT_FORMAT
 
Generated by QuNex 
------------------------------------------------------------------------ 
Version: 0.97.1 
User: martinl_c 
System: hpc-cpu-97 
OS: RedHat Linux #1 SMP Mon Nov 15 20:49:28 UTC 2021 
------------------------------------------------------------------------ 
 
 
 ---> Setting up Octave  
===> Executing QuNex run_turnkey workflow... 
------------------------ Initiating QuNex Turnkey Workflow ------------------------------- 
 --> Note: Acceptance Test type not specified. Setting default type to: no 
 --> Note: Turnkey cleaning not specified. Setting default to: no 
--> Checking that requested create_study map_raw_data import_dicom create_session_info setup_hcp create_batch are supported... 
     create_study is supported. 
     map_raw_data is supported. 
     import_dicom is supported. 
     create_session_info is supported. 
     setup_hcp is supported. 
     create_batch is supported. 
--> Verified list of supported Turnkey steps to be run:  create_study map_raw_data import_dicom create_session_info setup_hcp create_batch 
ERROR: --paramfile flag set but file not found in default locations: /data/gpfs-1/users/martinl_c/scratch/qunex/data/Ritter_MRI/LE_parameters.txt 

Hi!

Welcome to the QuNex forum!

There seems to be an extra slash in the bind definition above. Try:

`--bind="${WORK_DIR}:${WORK_DIR}" \`

Let me know if it works. The reason this works with Docker is that Docker automatically binds the home folder.
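
For reference, this is what the extra slash does to the expanded bind argument (a quick sketch; the echo commands just print the string that gets passed to Singularity):

# with WORK_DIR set as in the call above
export WORK_DIR="${HOME}/scratch/qunex"

# original form: the leading slash doubles up into an invalid container path
echo "${WORK_DIR}:/${WORK_DIR}"
# -> .../scratch/qunex://.../scratch/qunex

# corrected form: host path and container path are identical
echo "${WORK_DIR}:${WORK_DIR}"
# -> .../scratch/qunex:/.../scratch/qunex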

Kind regards, Jure

Dear Jure,

thanks for the quick reply!
I already tried it without the slash, but unfortunately that does not do the trick…

I also checked whether file permissions are the issue, but the file that the error message points to can be read with cat and has rwx permissions.
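
(A sketch of this kind of check from outside the container, using the path from the error message; namei ships with util-linux and lists permissions for every component of the path:)

# show owner, group and mode for each directory along the path
namei -l /data/gpfs-1/users/martinl_c/scratch/qunex/data/Ritter_MRI/LE_parameters.txt

# and for the file itself
ls -l /data/gpfs-1/users/martinl_c/scratch/qunex/data/Ritter_MRI/LE_parameters.txt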

Do you have another idea what the problem could be there?

Warm regards,
Leon

Hi,

Do not worry, we will get to the bottom of this.

The first thing you can try is to enter the container manually and check what is happening.

# 1. prepare the work dir variable
export WORK_DIR="${HOME}/scratch/qunex"

# 2. enter the container
singularity shell --bind=$WORK_DIR:$WORK_DIR /data/gpfs-1/users/martinl_c/scratch/qunex/qunexcontainer/qunex_suite-0.97.1.sif

# ----- we are now inside the container -----

# 3. check the contents of the data folder
ls /data/gpfs-1/users/martinl_c/scratch/qunex/data/Ritter_MRI

# 4. check the contents of the param file
cat /data/gpfs-1/users/martinl_c/scratch/qunex/data/Ritter_MRI/LE_parameters.txt 

# 5. exit the container
exit

You can then also manually “walk around” the container and inspect what is in the /data folder. Let me know how this goes and what the outputs of the above test are. Next, we can also try running things step-by-step rather than in the turnkey fashion and see where that gets us. I can prepare instructions for this as well (a rough sketch is below), but first let us see what the above test achieves.
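
For orientation, the step-by-step route would look roughly like this (a rough sketch only: it assumes the individual commands accept the same --container and --bind options as run_turnkey above and that create_study takes --studyfolder; please check the documentation of each command for the exact flags):

# create the study folder structure as a single step
qunex_container create_study \
    --studyfolder="${WORK_DIR}/${STUDY_NAME}" \
    --container="${QUNEX_CONTAINER}" \
    --bind="${WORK_DIR}:${WORK_DIR}"

# ... followed by import_dicom, create_session_info, setup_hcp and create_batch in the same fashion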

Jure

Hi,

when running ls /data/gpfs-1/users/martinl_c/scratch/qunex/data/Ritter_MRI I get the following error:
ls: cannot access /data/gpfs-1/users/martinl_c/scratch/qunex/data/Ritter_MRI: No such file or directory

I am, however, in my working directory, because the files there can be listed. But moving to another directory is not possible.

The whole code:

(base) bash-4.4$ export WORK_DIR="${HOME}/scratch/qunex"
(base) bash-4.4$ singularity shell --bind=$WORK_DIR:$WORK_DIR /data/gpfs-1/users/martinl_c/scratch/qunex/qunexcontainer/qunex_suite-0.97.1.sif
Apptainer> ls /data/gpfs-1/users/martinl_c/scratch/qunex/data/Ritter_MRI
ls: cannot access /data/gpfs-1/users/martinl_c/scratch/qunex/data/Ritter_MRI: No such file or directory
Apptainer> pwd
/data/gpfs-1/users/martinl_c
Apptainer> ls
Desktop                                                qunex_container_command_2023-03-15_17.32.44.958174.sh
...

Best
Leon

Interesting, what happens if you bind:

--bind /data/gpfs-1/users/martinl_c/scratch/qunex:/qunex

And then type

cd /qunex
ls

It seems like Singularity has issues binding the scratch space on your system.

OK, I just had another idea.

/data/gpfs-1/users/martinl_c/scratch

is most likely not a real folder, but a link to the scratch space on your system. To find this out, type:

cd /data/gpfs-1/users/martinl_c
ls -l

This should show where this link is pointing. Next, bind the actual folder where the link is pointing, not the folder where the link is stored.
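
A shortcut for this (a sketch; readlink -f resolves the whole chain of links and prints the real path, and the command substitution then feeds that path straight into the bind):

# print the real path behind the link
readlink -f /data/gpfs-1/users/martinl_c/scratch/qunex

# bind the resolved path instead of the path that holds the link
singularity shell \
    --bind "$(readlink -f /data/gpfs-1/users/martinl_c/scratch/qunex)":/qunex \
    /data/gpfs-1/users/martinl_c/scratch/qunex/qunexcontainer/qunex_suite-0.97.1.sif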

Binding the actual folder also does not work, unfortunately…
It also seems that, despite starting the container, I cannot see its contents when running it; ls inside the container shows only the files in my home directory.

singularity shell --bind /fast/scratch/users/martinl_c/qunex:/qunex /data/gpfs-1/users/martinl_c/scratch/qunex/qunexcontainer/qunex_suite-0.97.1.sif
Apptainer> cd qunex
bash: cd: qunex: No such file or directory
Apptainer> pwd
/data/gpfs-1/users/martinl_c
Apptainer> 

The container contents should be there, just type

cd /opt/qunex
ls

They should be there, because QuNex actually gets executed (the error about the paramfile comes from QuNex). To see if scratch is the problem, try this:

# create a new dir in home
mkdir /data/gpfs-1/users/martinl_c/qunex

# copy the paramfile there
cp /data/gpfs-1/users/martinl_c/scratch/qunex/data/Ritter_MRI/LE_parameters.txt /data/gpfs-1/users/martinl_c/qunex/

# enter the container
singularity shell --bind=/data/gpfs-1/users/martinl_c:/data/gpfs-1/users/martinl_c /data/gpfs-1/users/martinl_c/scratch/qunex/qunexcontainer/qunex_suite-0.97.1.sif

# is the qunex folder there
cd /data/gpfs-1/users/martinl_c
ls

# is the param file there
cat /data/gpfs-1/users/martinl_c/qunex/LE_parameters.txt

# exit
exit

Wow, thanks, that worked! I can now read the file stored in my home directory. So the filesystem is the problem?

Yes, your tests show that it is a filesystem thing and something I unfortunately cannot help you with :slight_smile:. For some reason, it seems like you are unable to bind the scratch folder into the container.

In any case, scratch folders are usually not the best location for storing neuroimaging studies, as they get wiped regularly. In my experience, scratch folders mainly serve as temporary storage. Maybe ask your system admin what an appropriate location is for storing your studies and data. If you have enough user space, you can just store it in /data/gpfs-1/users/martinl_c/qunex or something similar.

Jure

Hmm, strange… Usually, binding scratch or work into other Singularity containers (e.g. fMRIPrep, hcp-bids) works perfectly fine. I might have to contact our system administrator.

Unfortunately, we have a quota limit on our home directory, so I process in scratch and copy the results to a permanent storage location.

Many thanks for your great help! It’s very much appreciated. :slight_smile:

Best
Leon

Yes, I have never seen this before; I can bind scratch on our system just like any other folder. On your current system, does binding the scratch folder via another Singularity container (e.g. fMRIPrep) work at the moment?

Jure

fmriprep works fine, in any case.

(base) bash-4.4$ singularity shell --bind /fast/scratch/users/martinl_c/qunex:/qunex /data/gpfs-1/users/martinl_c/scratch/qunex/qunexcontainer/nipreps_fmriprep.simg 
INFO:    underlay of /etc/localtime required more than 50 (107) bind mounts
Singularity> cd qunex
Singularity> ls
LE_parameters.txt
Singularity> cat LE_parameters.txt 
# -------------------------------------------------
# -- Qu|Nex Environment Preprocessing Parameters --
# -------------------------------------------------
# -------------------------------------
# ---> HCP Pipelines general parameters
# -------------------------------------
_hcp_Pipeline            : ${HCPPIPEDIR}
_parsessions             : 1
_parelements             : 1

Note though that you are accessing the qunex sub-folder inside your home folder and not the one you are binding (they are the same, but bind uses a different path). Bind works like this:

--bind HOST_PATH:CONTAINER_PATH

In your example you are binding /fast/scratch/users/martinl_c/qunex => /qunex (note the slash before qunex in the CONTAINER_PATH). So the correct way to check whether the bind works would be cd /qunex instead of cd qunex; cd qunex goes into the qunex sub-folder of your home folder. It seems like fmriprep auto-binds the home folder and everything in it.
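
A minimal way to check only the bind itself, without the interactive shell (a sketch reusing the paths from your command; singularity exec runs a single command inside the container):

singularity exec \
    --bind /fast/scratch/users/martinl_c/qunex:/qunex \
    /data/gpfs-1/users/martinl_c/scratch/qunex/qunexcontainer/nipreps_fmriprep.simg \
    ls /qunex
# /qunex (with the leading slash) is the bind target; plain qunex would be a
# sub-folder of whatever directory the shell starts in (your home folder)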

Jure

Ah yes, sorry, I forgot that we created a test parameters.txt in my home folder. Still, with fmriprep I can access my scratch directory.

(base) bash-4.4$ singularity shell --bind /fast/scratch/users/martinl_c/qunex:/qunex /data/gpfs-1/users/martinl_c/scratch/qunex/qunexcontainer/nipreps_fmriprep.simg 
INFO:    underlay of /etc/localtime required more than 50 (107) bind mounts
Singularity> cd /qunex   
Singularity> cat data/Ritter_MRI/LE_parameters.txt 
# -------------------------------------------------
# -- Qu|Nex Environment Preprocessing Parameters --
# -------------------------------------------------
...

And with QuNex this does not work, right?

Yep, omg. I was using qunex instead of /qunex.
Thank you so much!

I am glad we solved it. :smiley: