[RESOLVED] Python Error During pre-freesurfer step

Hi all,

Firstly, I’ve asked a couple of questions here recently as our group gets up to speed with using HCP processing and the support is very timely and helpful, so thank you!

My issue is with running the hcp_pre_freesurfer command. For each session, I get a message that the command has exited with error code 1 and am directed to a log file (pasted below). It contains a Python error that I’ve googled and traced to something to do with the LD_LIBRARY_PATH environment variable, according to these posts: “Trying to run hp-toolbox from HPLIP but gives python errors” and “14.04 - Python Error when opening software-center” (both on Ask Ubuntu).

Here is a minimal example to re-create the error. Running the offending line outside of the QuNex container:
jj1006 :python
Python 3.6.8 (default, May 8 2021, 09:11:34)
[GCC 8.4.1 20210423 (Red Hat 8.4.1-2)] on linux
Type "help", "copyright", "credits" or "license" for more information.

>>> from pyexpat import *
>>> exit()

No error. Now within the container:

erso[0]:opt$ python
Python 3.6.8 (default, Nov 16 2020, 16:55:22)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] on linux
Type "help", "copyright", "credits" or "license" for more information.

>>> from pyexpat import *
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: /usr/lib64/python3.6/lib-dynload/pyexpat.cpython-36m-x86_64-linux-gnu.so: undefined symbol: XML_SetHashSalt
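For what it’s worth, one way to probe a mismatch like this (the libexpat soname below is a guess on my part; XML_SetHashSalt was introduced in expat 2.1.0, so an older host libexpat leaking into the container would explain the missing symbol):

  # which libexpat the failing module actually links against
  ldd /usr/lib64/python3.6/lib-dynload/pyexpat.cpython-36m-x86_64-linux-gnu.so | grep expat
  # whether that library exports the symbol (present from expat 2.1.0 on)
  nm -D /usr/lib64/libexpat.so.1 | grep XML_SetHashSalt
  # what the host injects into the container's linker search path
  echo "$LD_LIBRARY_PATH"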

I’d greatly appreciate any insight.

Best,
John

Full log file:

Generated by QuNex 0.91.11 on 2021-11-08_10.26.1636385181


Running external command via QuNex:
PreFreeSurfer/PreFreeSurferPipeline.sh \
--path="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/170508_4PR00011/hcp" \
--subject="170508_4PR00011" \
--t1="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/170508_4PR00011/hcp/170508_4PR00011/unprocessed/T1w/170508_4PR00011_T1w_MPR1.nii.gz@/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/170508_4PR00011/hcp/170508_4PR00011/unprocessed/T1w/170508_4PR00011_T1w_MPR2.nii.gz" \
--t2="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/170508_4PR00011/hcp/170508_4PR00011/unprocessed/T2w/170508_4PR00011_T2w_SPC1.nii.gz@/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/170508_4PR00011/hcp/170508_4PR00011/unprocessed/T2w/170508_4PR00011_T2w_SPC2.nii.gz" \
--t1template="global/templates/MNI152_T1_0.8mm.nii.gz" \
--t1templatebrain="global/templates/MNI152_T1_0.8mm_brain.nii.gz" \
--t1template2mm="global/templates/MNI152_T1_2mm.nii.gz" \
--t2template="global/templates/MNI152_T2_0.8mm.nii.gz" \
--t2templatebrain="global/templates/MNI152_T2_0.8mm_brain.nii.gz" \
--t2template2mm="global/templates/MNI152_T2_2mm.nii.gz" \
--templatemask="global/templates/MNI152_T1_0.8mm_brain_mask.nii.gz" \
--template2mmmask="global/templates/MNI152_T1_2mm_brain_mask_dil.nii.gz" \
--brainsize="150" \
--fnirtconfig="global/config/T1_2_MNI152_2mm.cnf" \
--t1samplespacing="0.0000021" \
--t2samplespacing="0.0000021" \
--gdcoeffs="/autofs/space/nihilus_001/users/john/logistics/scanner_info/coeff.grad" \
--avgrdcmethod="NONE" \
--processing-mode="HCPStyleData"

Test file:
/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/170508_4PR00011/hcp/170508_4PR00011/MNINonLinear/T1w_restore_brain.nii.gz

========================================
DIRECTORY: /opt/HCP/HCPpipelines
PRODUCT: HCP Pipeline Scripts
VERSION: v4.3.0

Mon Nov 8 10:26:21 EST 2021:PreFreeSurferPipeline.sh: HCPPIPEDIR: /opt/HCP/HCPpipelines
Mon Nov 8 10:26:21 EST 2021:PreFreeSurferPipeline.sh: FSLDIR: /opt/fsl/fsl
Mon Nov 8 10:26:21 EST 2021:PreFreeSurferPipeline.sh: HCPPIPEDIR_Global: /opt/HCP/HCPpipelines/global/scripts
Mon Nov 8 10:26:21 EST 2021:PreFreeSurferPipeline.sh: Platform Information Follows:
Linux erso.nmr.mgh.harvard.edu 4.18.0-310.el8.x86_64 #1 SMP Tue Jun 8 00:24:50 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
Mon Nov 8 10:26:21 EST 2021:PreFreeSurferPipeline.sh: Parsing Command Line Options
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: Finished Parsing Command Line Options
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: StudyFolder: /autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/170508_4PR00011/hcp
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: Subject: 170508_4PR00011
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: T1wInputImages: /autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/170508_4PR00011/hcp/170508_4PR00011/unprocessed/T1w/170508_4PR00011_T1w_MPR1.nii.gz@/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/170508_4PR00011/hcp/170508_4PR00011/unprocessed/T1w/170508_4PR00011_T1w_MPR2.nii.gz
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: T2wInputImages: /autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/170508_4PR00011/hcp/170508_4PR00011/unprocessed/T2w/170508_4PR00011_T2w_SPC1.nii.gz@/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/170508_4PR00011/hcp/170508_4PR00011/unprocessed/T2w/170508_4PR00011_T2w_SPC2.nii.gz
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: T1wTemplate: global/templates/MNI152_T1_0.8mm.nii.gz
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: T1wTemplateBrain: global/templates/MNI152_T1_0.8mm_brain.nii.gz
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: T1wTemplate2mm: global/templates/MNI152_T1_2mm.nii.gz
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: T2wTemplate: global/templates/MNI152_T2_0.8mm.nii.gz
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: T2wTemplateBrain: global/templates/MNI152_T2_0.8mm_brain.nii.gz
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: T2wTemplate2mm: global/templates/MNI152_T2_2mm.nii.gz
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: TemplateMask: global/templates/MNI152_T1_0.8mm_brain_mask.nii.gz
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: Template2mmMask: global/templates/MNI152_T1_2mm_brain_mask_dil.nii.gz
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: BrainSize: 150
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: FNIRTConfig: global/config/T1_2_MNI152_2mm.cnf
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: MagnitudeInputName:
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: PhaseInputName:
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: GEB0InputName:
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: TE:
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: SpinEchoPhaseEncodeNegative:
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: SpinEchoPhaseEncodePositive:
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: SEEchoSpacing:
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: SEUnwarpDir:
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: T1wSampleSpacing: 0.0000021
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: T2wSampleSpacing: 0.0000021
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: UnwarpDir:
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: GradientDistortionCoeffs: /autofs/space/nihilus_001/users/john/logistics/scanner_info/coeff.grad
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: AvgrdcSTRING: NONE
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: TopupConfig:
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: BiasFieldSmoothingSigma:
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: UseJacobian: true
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: T1wBiasCorrect:
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: CustomBrain: NONE
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: ProcessingMode: HCPStyleData
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: T1wFolder: /autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/170508_4PR00011/hcp/170508_4PR00011/T1w
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: T2wFolder: /autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/170508_4PR00011/hcp/170508_4PR00011/T2w
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: AtlasSpaceFolder: /autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/170508_4PR00011/hcp/170508_4PR00011/MNINonLinear
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: POSIXLY_CORRECT=
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: Processing Modality: T1w
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: Performing Gradient Nonlinearity Correction
Mon Nov 8 10:26:22 EST 2021:PreFreeSurferPipeline.sh: mkdir -p /autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/170508_4PR00011/hcp/170508_4PR00011/T1w/T1w1_GradientDistortionUnwarp
GradientDistortionUnwarp.sh: HCPPIPEDIR: /opt/HCP/HCPpipelines
GradientDistortionUnwarp.sh: FSLDIR: /opt/fsl/fsl
Mon Nov 8 10:26:23 EST 2021:GradientDistortionUnwarp.sh: START
Traceback (most recent call last):
  File "/autofs/space/nihilus_001/users/john/analyses/pyenv/bin/gradient_unwarp.py", line 11, in <module>
    from gradunwarp.core import (globals, coeffs, utils)
  File "/autofs/space/nihilus_001/users/john/analyses/pyenv/lib64/python3.6/site-packages/gradunwarp/core/__init__.py", line 8, in <module>
    from .unwarp_resample import Unwarper
  File "/autofs/space/nihilus_001/users/john/analyses/pyenv/lib64/python3.6/site-packages/gradunwarp/core/unwarp_resample.py", line 20, in <module>
    import nibabel as nib
  File "/autofs/space/nihilus_001/users/john/analyses/pyenv/lib64/python3.6/site-packages/nibabel/__init__.py", line 47, in <module>
    from .loadsave import load, save
  File "/autofs/space/nihilus_001/users/john/analyses/pyenv/lib64/python3.6/site-packages/nibabel/loadsave.py", line 18, in <module>
    from .imageclasses import all_image_classes
  File "/autofs/space/nihilus_001/users/john/analyses/pyenv/lib64/python3.6/site-packages/nibabel/imageclasses.py", line 13, in <module>
    from .cifti2 import Cifti2Image
  File "/autofs/space/nihilus_001/users/john/analyses/pyenv/lib64/python3.6/site-packages/nibabel/cifti2/__init__.py", line 20, in <module>
    from .parse_cifti2 import Cifti2Extension
  File "/autofs/space/nihilus_001/users/john/analyses/pyenv/lib64/python3.6/site-packages/nibabel/cifti2/parse_cifti2.py", line 15, in <module>
    from .cifti2 import (Cifti2MetaData, Cifti2Header, Cifti2Label,
  File "/autofs/space/nihilus_001/users/john/analyses/pyenv/lib64/python3.6/site-packages/nibabel/cifti2/cifti2.py", line 22, in <module>
    from .. import xmlutils as xml
  File "/autofs/space/nihilus_001/users/john/analyses/pyenv/lib64/python3.6/site-packages/nibabel/xmlutils.py", line 15, in <module>
    from xml.parsers.expat import ParserCreate
  File "/usr/lib64/python3.6/xml/parsers/expat.py", line 4, in <module>
    from pyexpat import *
ImportError: /usr/lib64/python3.6/lib-dynload/pyexpat.cpython-36m-x86_64-linux-gnu.so: undefined symbol: XML_SetHashSalt
Mon Nov 8 10:26:26 EST 2021:GradientDistortionUnwarp.sh: While running '/opt/HCP/HCPpipelines/global/scripts/GradientDistortionUnwarp.sh --workingdir=/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/170508_4PR00011/hcp/170508_4PR00011/T1w/T1w1_GradientDistortionUnwarp --coeffs=/autofs/space/nihilus_001/users/john/logistics/scanner_info/coeff.grad --in=/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/170508_4PR00011/hcp/170508_4PR00011/T1w/T1w1_GradientDistortionUnwarp/T1w1 --out=/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/170508_4PR00011/hcp/170508_4PR00011/T1w/T1w1_gdc --owarp=/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/170508_4PR00011/hcp/170508_4PR00011/T1w/xfms/T1w1_gdc_warp':
Mon Nov 8 10:26:26 EST 2021:GradientDistortionUnwarp.sh: ERROR: 'gradient_unwarp.py' command failed with return code: 1
Mon Nov 8 10:26:26 EST 2021:GradientDistortionUnwarp.sh: ERROR: 'gradient_unwarp.py' command failed with return code: 1

===> ERROR: Command returned with nonzero exit code

     script: GradientDistortionUnwarp.sh

stopped at line: 107
call: gradient_unwarp.py ${BaseName}_vol1.nii.gz trilinear.nii.gz siemens -g $InputCoeffs -n
expanded call: gradient_unwarp.py T1w1_vol1.nii.gz trilinear.nii.gz siemens -g /autofs/space/nihilus_001/users/john/logistics/scanner_info/coeff.grad -n
exit code: 1

Hi John!

Could you provide the full QuNex command call, something along the lines of:

qunex_container hcp_pre_freesurfer \
  --param1=value1 \
  --param2=value2 \
  ...

Huh, well actually I just reran and got an entirely different error. The command I’m using is:

qunex_container hcp_pre_freesurfer \
  --sessions="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/batch_hcp.txt" \
  --sessionsfolder="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions" \
  --container="/autofs/space/nihilus_001/users/john/bin/qunex/qunex_suite-0.91.11.sif" --bind=/autofs

This time the log file says this:

Generated by QuNex 0.91.11 on 2021-11-08_12.52.1636393946


Running external command via QuNex:
PreFreeSurfer/PreFreeSurferPipeline.sh \
--path="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/TAB99809/hcp" \
--subject="TAB99809" \
--t1="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/TAB99809/hcp/TAB99809/unprocessed/T1w/TAB99809_T1w_MPR2.nii.gz@/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/TAB99809/hcp/TAB99809/unprocessed/T1w/TAB99809_T1w_MPR1.nii.gz" \
--t2="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/TAB99809/hcp/TAB99809/unprocessed/T2w/TAB99809_T2w_SPC1.nii.gz@/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/TAB99809/hcp/TAB99809/unprocessed/T2w/TAB99809_T2w_SPC2.nii.gz" \
--t1template="global/templates/MNI152_T1_0.8mm.nii.gz" \
--t1templatebrain="global/templates/MNI152_T1_0.8mm_brain.nii.gz" \
--t1template2mm="global/templates/MNI152_T1_2mm.nii.gz" \
--t2template="global/templates/MNI152_T2_0.8mm.nii.gz" \
--t2templatebrain="global/templates/MNI152_T2_0.8mm_brain.nii.gz" \
--t2template2mm="global/templates/MNI152_T2_2mm.nii.gz" \
--templatemask="global/templates/MNI152_T1_0.8mm_brain_mask.nii.gz" \
--template2mmmask="global/templates/MNI152_T1_2mm_brain_mask_dil.nii.gz" \
--brainsize="150" \
--fnirtconfig="global/config/T1_2_MNI152_2mm.cnf" \
--t1samplespacing="0.0000021" \
--t2samplespacing="0.0000021" \
--gdcoeffs="/autofs/space/nihilus_001/users/john/logistics/scanner_info/coeff.grad" \
--avgrdcmethod="NONE" \
--processing-mode="HCPStyleData"

Test file:
/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/TAB99809/hcp/TAB99809/MNINonLinear/T1w_restore_brain.nii.gz

/bin/sh: PreFreeSurfer/PreFreeSurferPipeline.sh: No such file or directory

Hi John!

Could you please replicate your minimal example, but with the --cleanenv option?
https://sylabs.io/guides/3.7/user-guide/environment_and_metadata.html#environment-from-the-host
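Something along these lines should do it (the container path is taken from your earlier post):

  singularity exec --cleanenv /autofs/space/nihilus_001/users/john/bin/qunex/qunex_suite-0.91.11.sif \
    python -c "from pyexpat import *"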

Also, do you remember if you had any kind of Python virtual environment enabled (e.g., venv, virtualenv, pyenv, conda) when you launched the QuNex container through the qunex_container command?

Lining

So if I use

singularity exec -B /autofs qunex_suite-0.91.11.sif bash

and then manually cd to /opt/HCP/HCPpipelines, I get the original Python error (with or without the --cleanenv option given to the above singularity command).

But if I just run the qunex_container command outside the container, I get the second error, where it can’t find the PreFreeSurfer directory.

I don’t have a python environment active, but the default python binary on my system is one within a virtual environment that I created with venv.

I noticed on that page you linked that Singularity does in fact alter the LD_LIBRARY_PATH variable, though I’m not sure what that implies here.
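To see exactly what the host passes through, I believe a diff of the two environments works (a minimal sketch; the temp file names are arbitrary):

  env | sort > /tmp/host_env.txt
  singularity exec qunex_suite-0.91.11.sif env | sort > /tmp/container_env.txt
  diff /tmp/host_env.txt /tmp/container_env.txt | grep -E 'PATH|LD_LIBRARY'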

--cleanenv seems to also confuse QuNex about the location of my data:

ERROR: Running external command failed!
Try running the command directly for more detailed error information:
PreFreeSurfer/PreFreeSurferPipeline.sh \
--path="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/HCA6405157_V1_A/hcp" \
--subject="HCA6405157_V1_A" \
--t1="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/HCA6405157_V1_A/hcp/HCA6405157_V1_A/unprocessed/T1w/HCA6405157_V1_A_T1w_MPR2.nii.gz@/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/HCA6405157_V1_A/hcp/HCA6405157_V1_A/unprocessed/T1w/HCA6405157_V1_A_T1w_MPR1.nii.gz" \
--t2="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/HCA6405157_V1_A/hcp/HCA6405157_V1_A/unprocessed/T2w/HCA6405157_V1_A_T2w_SPC2.nii.gz@/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/HCA6405157_V1_A/hcp/HCA6405157_V1_A/unprocessed/T2w/HCA6405157_V1_A_T2w_SPC1.nii.gz" \

The filepaths are duplicated (the path starts over again from root after the .nii.gz).

EDIT: I also tried removing the python binary I usually use from the system path to no avail - same error during the same import line.

Please try the following:

singularity shell --cleanenv -B /autofs qunex_suite-0.91.11.sif

source /opt/qunex/env/qunex_environment.sh

# if you run which python here, you should get /opt/env/qunex/bin/python
# if you run which gradient_unwarp.py, you should get /opt/gradunwarp/gradunwarp/gradunwarp/core/gradient_unwarp.py

qunex hcp_pre_freesurfer \
  --sessions="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/batch_hcp.txt" \
  --sessionsfolder="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions"

If this works, then I can upgrade the qunex_container script so it will support the --cleanenv flag.

Could you please check whether you have any pyenv-related settings in your ~/.bashrc file?
In most cases, they look something like this:

export PATH="$HOME/.pyenv/bin:$PATH"
eval "$(pyenv init --path)"
eval "$(pyenv virtualenv-init -)"

If there are, Singularity containers will not work properly, and --cleanenv will not help in this situation. Jure’s code snippet could work because it effectively runs the QuNex environment setup script after the user-level ~/.bashrc, which is normally the other way around.

This kind of transparency between the container and the host user environment is a “feature” of Singularity by design.

Commenting out pyenv related settings in ~/.bashrc may solve this problem.

Ha, I didn’t know ‘pyenv’ was the name of an actual package. I don’t have settings related to that package in my bashrc, but it just so happens that this is what I’ve named my Python environment in my directories and aliases. Is it possible that’s creating confusion? For instance:

alias pyenv='/autofs/space/nihilus_001/users/john/analyses/pyenv/bin/python3'
export PATH=/autofs/space/nihilus_001/users/john/analyses/pyenv/bin:$PATH

I use the built-in 'venv' for my virtual environments.
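For reference, a quick way to see what those aliases and PATH entries actually resolve to on the host (type -a lists the alias plus every match on PATH):

  type -a pyenv
  type -a python
  type -a gradient_unwarp.py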

For the record, demsarjure’s solution works locally. But I get an error when I ssh to our HPC cluster and try to use the SLURM scheduler. I’ll paste the output below. It can’t find the ‘sbatch’ command. Is this because of how we’re messing with the environment?

Yeah, I wish I could use Docker; unfortunately, our sysadmins say Singularity is easier to run on HPC clusters.

Thanks again for all the troubleshooting.

Run with scheduler output:

(/opt/env/qunex) [QuNex qunex]$ qunex hcp_pre_freesurfer --sessions="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/batch_hcp.txt" --sessionsfolder="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions" --parjobs=20 --parsessions=1 --scheduler="SLURM,account=bandlab,partition=basic,mem-per-cpu=8000,time=05:00:00"

… Running QuNex v0.91.11 …

==> Note: is part of the QuNex MATLAB.

--- Full QuNex call for command: hcp_pre_freesurfer

gmri hcp_pre_freesurfer --sessions="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/batch_hcp.txt" --sessionsfolder="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions" --parjobs="20" --parsessions="1" --scheduler="SLURM,account=bandlab,partition=basic,mem-per-cpu=8000,time=05:00:00"


WARNING: Parameter qx_cifti_tail was not specified. Its value was imputed from parameter hcp_cifti_tail and set to ''!
WARNING: Parameter qx_nifti_tail was not specified. Its value was imputed from parameter hcp_nifti_tail and set to ''!
WARNING: Parameter cifti_tail was not specified. Its value was imputed from parameter qx_cifti_tail and set to ''!
WARNING: Parameter nifti_tail was not specified. Its value was imputed from parameter qx_nifti_tail and set to ''!

Generated by QuNex 0.91.11 on 2021-11-09_12.22.1636478542

=================================================================
gmri hcp_pre_freesurfer \
--sessions="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/batch_hcp.txt" \
--sessionsfolder="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions" \
--parjobs="20" \
--parsessions="1" \
--scheduler="SLURM,account=bandlab,partition=basic,mem-per-cpu=8000,time=05:00:00"

Starting multiprocessing sessions in /autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/batch_hcp.txt with a pool of 1 concurrent processes

===> Running scheduler for command hcp_pre_freesurfer

→ QuNex will run the command over 184 sessions. It will utilize:

Scheduled jobs: 20 
Maximum sessions run in parallel for a job: 1.
Maximum elements run in parallel for a session: 1.
Up to 1 processes will be utilized for a job.

Job #1 will run sessions: 170508_4PR00011,180522_4PR00067,HCA6125050_V1_A,HCA7064970_V1_A,HCA7536884_V1_B,HCA8435277_V1_B,HCA9341879_V1_B,HCA9968318_V1_B,TAB79227,TABT51356
Job #2 will run sessions: 170710_4PR00019,180612_4PR00069,HCA6125050_V1_B,HCA7064970_V1_B,HCA7567693_V1_A,HCA8638594_V1_A,HCA9354585_V1_A,TAB180552,TAB80552,TABT73634
Job #3 will run sessions: 170804_4PR00023,180619_4PR00070,HCA6131449_V1_A,HCA7106657_V1_A,HCA7567693_V1_B,HCA8638594_V1_B,HCA9354585_V1_B,TAB18732,TAB88074,TABT79227
Job #4 will run sessions: 170828_4PR00029,180622_4PR00071,HCA6131449_V1_B,HCA7106657_V1_B,HCA7982706_V1_A,HCA8794407_V1_A,HCA9380283_V1_A,TAB23322,TAB96483,TABT88074
Job #5 will run sessions: 171219_4PR00046,180626_4PR00072,HCA6405157_V1_A,HCA7121350_V1_A,HCA7982706_V1_B,HCA8794407_V1_B,HCA9380283_V1_B,TAB25032,TAB97476
Job #6 will run sessions: 180123_4PR00048,180703_4PR00073,HCA6405157_V1_B,HCA7121350_V1_B,HCA8065270_V1_A,HCA8913489_V1_A,HCA9438086_V1_A,TAB27952,TAB99809
Job #7 will run sessions: 180123_4PR00049,180703_4PR00074,HCA6428169_V1_A,HCA7155973_V1_A,HCA8065270_V1_B,HCA8913489_V1_B,HCA9438086_V1_B,TAB29176,TABP18732
Job #8 will run sessions: 180206_4PR00050,180710_4PR00075,HCA6428169_V1_B,HCA7155973_V1_B,HCA8140662_V1_A,HCA9090779_V1_A,HCA9449798_V1_A,TAB29726,TABP27952
Job #9 will run sessions: 180220_4PR00054,180717_4PR00076,HCA6461066_V1_A,HCA7178177_V1_A,HCA8140662_V1_B,HCA9090779_V1_B,HCA9449798_V1_B,TAB30272,TABP29176
Job #10 will run sessions: 180306_4PR00056,180724_4PR00077,HCA6461066_V1_B,HCA7178177_V1_B,HCA8192378_V1_A,HCA9094484_V1_A,HCA9498307_V1_A,TAB31951,TABP31951
Job #11 will run sessions: 180403_4PR00057,180731_4PR00078,HCA6589294_V1_A,HCA7219165_V1_A,HCA8192378_V1_B,HCA9094484_V1_B,HCA9498307_V1_B,TAB41391,TABP41391
Job #12 will run sessions: 180410_4PR00058,180807_4PR00080,HCA6589294_V1_B,HCA7350567_V1_A,HCA8206666_V1_A,HCA9114262_V1_A,HCA9510169_V1_A,TAB44018,TABP51356
Job #13 will run sessions: 180410_4PR00059,180814_4PR00081,HCA6605569_V1_A,HCA7350567_V1_B,HCA8206666_V1_B,HCA9114262_V1_B,HCA9510169_V1_B,TAB46323,TABP73634
Job #14 will run sessions: 180416_4PR00060,180814_4PR00082,HCA6605569_V1_B,HCA7406265_V1_A,HCA8253978_V1_A,HCA9118876_V1_A,HCA9726192_V1_A,TAB48574,TABP80552
Job #15 will run sessions: 180423_4PR00061,180828_4PR00083,HCA6682688_V1_A,HCA7406265_V1_B,HCA8253978_V1_B,HCA9118876_V1_B,HCA9726192_V1_B,TAB51356,TABP88074
Job #16 will run sessions: 180424_4PR00062,180828_4PR00084,HCA6682688_V1_B,HCA7434674_V1_A,HCA8280274_V1_A,HCA9121158_V1_A,HCA9865005_V1_A,TAB54119,TABT18732
Job #17 will run sessions: 180508_4PR00063,HCA6058162_V1_A,HCA6848090_V1_A,HCA7434674_V1_B,HCA8280274_V1_B,HCA9121158_V1_B,HCA9865005_V1_B,TAB66863,TABT27952
Job #18 will run sessions: 180508_4PR00064,HCA6058162_V1_B,HCA6848090_V1_B,HCA7497395_V1_A,HCA8405975_V1_A,HCA9300764_V1_A,HCA9913090_V1_A,TAB71749,TABT29176
Job #19 will run sessions: 180515_4PR00065,HCA6072156_V1_A,HCA6954998_V1_A,HCA7497395_V1_B,HCA8405975_V1_B,HCA9300764_V1_B,HCA9913090_V1_B,TAB73634,TABT31951
Job #20 will run sessions: 180522_4PR00066,HCA6072156_V1_B,HCA6954998_V1_B,HCA7536884_V1_A,HCA8435277_V1_A,HCA9341879_V1_A,HCA9968318_V1_A,TAB74031,TABT46323

---> submitting hcp_pre_freesurfer_#00

gmri hcp_pre_freesurfer --sessions="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/batch_hcp.txt" --sessionsfolder="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions" --parsessions="1" --sessionids="170508_4PR00011,180522_4PR00067,HCA6125050_V1_A,HCA7064970_V1_A,HCA7536884_V1_B,HCA8435277_V1_B,HCA9341879_V1_B,HCA9968318_V1_B,TAB79227,TABT51356"

Submitting:

#!/bin/sh
#SBATCH --account=bandlab
#SBATCH --partition=basic
#SBATCH --mem-per-cpu=8000
#SBATCH --time=05:00:00
#SBATCH --ntasks=2
#SBATCH --cpus-per-task=1
#SBATCH --job-name=hcp_pre_freesurfer(0)
#SBATCH -o /autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/logs/batchlogs/SLURM_hcp_pre_freesurfer_job00.2021-11-09_12.22.1636478542.397946.log
#SBATCH -e /autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/logs/batchlogs/SLURM_hcp_pre_freesurfer_job00.2021-11-09_12.22.1636478542.397946.log

gmri hcp_pre_freesurfer --sessions="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/batch_hcp.txt" --sessionsfolder="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions" --parsessions="1" --sessionids="170508_4PR00011,180522_4PR00067,HCA6125050_V1_A,HCA7064970_V1_A,HCA7536884_V1_B,HCA8435277_V1_B,HCA9341879_V1_B,HCA9968318_V1_B,TAB79227,TABT51356"


/bin/sh: sbatch: command not found

---> submitting hcp_pre_freesurfer_#01

gmri hcp_pre_freesurfer --sessions="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/batch_hcp.txt" --sessionsfolder="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions" --parsessions="1" --sessionids="170710_4PR00019,180612_4PR00069,HCA6125050_V1_B,HCA7064970_V1_B,HCA7567693_V1_A,HCA8638594_V1_A,HCA9354585_V1_A,TAB180552,TAB80552,TABT73634"

Submitting:

#!/bin/sh
#SBATCH --account=bandlab
#SBATCH --partition=basic
#SBATCH --mem-per-cpu=8000
#SBATCH --time=05:00:00
#SBATCH --ntasks=2
#SBATCH --cpus-per-task=1
#SBATCH --job-name=hcp_pre_freesurfer(1)
#SBATCH -o /autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/logs/batchlogs/SLURM_hcp_pre_freesurfer_job01.2021-11-09_12.22.1636478542.405215.log
#SBATCH -e /autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/logs/batchlogs/SLURM_hcp_pre_freesurfer_job01.2021-11-09_12.22.1636478542.405215.log

gmri hcp_pre_freesurfer --sessions="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/batch_hcp.txt" --sessionsfolder="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions" --parsessions="1" --sessionids="170710_4PR00019,180612_4PR00069,HCA6125050_V1_B,HCA7064970_V1_B,HCA7567693_V1_A,HCA8638594_V1_A,HCA9354585_V1_A,TAB180552,TAB80552,TABT73634"


/bin/sh: sbatch: command not found

---> submitting hcp_pre_freesurfer_#02

gmri hcp_pre_freesurfer --sessions="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/batch_hcp.txt" --sessionsfolder="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions" --parsessions="1" --sessionids="170804_4PR00023,180619_4PR00070,HCA6131449_V1_A,HCA7106657_V1_A,HCA7567693_V1_B,HCA8638594_V1_B,HCA9354585_V1_B,TAB18732,TAB88074,TABT79227"

Submitting:

#!/bin/sh
#SBATCH --account=bandlab
#SBATCH --partition=basic
#SBATCH --mem-per-cpu=8000
#SBATCH --time=05:00:00
#SBATCH --ntasks=2
#SBATCH --cpus-per-task=1
#SBATCH --job-name=hcp_pre_freesurfer(2)
#SBATCH -o /autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/logs/batchlogs/SLURM_hcp_pre_freesurfer_job02.2021-11-09_12.22.1636478542.408556.log
#SBATCH -e /autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/logs/batchlogs/SLURM_hcp_pre_freesurfer_job02.2021-11-09_12.22.1636478542.408556.log

gmri hcp_pre_freesurfer --sessions="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/batch_hcp.txt" --sessionsfolder="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions" --parsessions="1" --sessionids="170804_4PR00023,180619_4PR00070,HCA6131449_V1_A,HCA7106657_V1_A,HCA7567693_V1_B,HCA8638594_V1_B,HCA9354585_V1_B,TAB18732,TAB88074,TABT79227"


/bin/sh: sbatch: command not found

---> submitting hcp_pre_freesurfer_#03

gmri hcp_pre_freesurfer --sessions="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/batch_hcp.txt" --sessionsfolder="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions" --parsessions="1" --sessionids="170828_4PR00029,180622_4PR00071,HCA6131449_V1_B,HCA7106657_V1_B,HCA7982706_V1_A,HCA8794407_V1_A,HCA9380283_V1_A,TAB23322,TAB96483,TABT88074"

Submitting:

#!/bin/sh
#SBATCH --account=bandlab
#SBATCH --partition=basic
#SBATCH --mem-per-cpu=8000
#SBATCH --time=05:00:00
#SBATCH --ntasks=2
#SBATCH --cpus-per-task=1
#SBATCH --job-name=hcp_pre_freesurfer(3)
#SBATCH -o /autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/logs/batchlogs/SLURM_hcp_pre_freesurfer_job03.2021-11-09_12.22.1636478542.413434.log
#SBATCH -e /autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/logs/batchlogs/SLURM_hcp_pre_freesurfer_job03.2021-11-09_12.22.1636478542.413434.log

gmri hcp_pre_freesurfer --sessions="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/batch_hcp.txt" --sessionsfolder="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions" --parsessions="1" --sessionids="170828_4PR00029,180622_4PR00071,HCA6131449_V1_B,HCA7106657_V1_B,HCA7982706_V1_A,HCA8794407_V1_A,HCA9380283_V1_A,TAB23322,TAB96483,TABT88074"


/bin/sh: sbatch: command not found

(This continues for the rest of the jobs)
And then at the end…

===> Submitted jobs
NA → hcp_pre_freesurfer_#00
NA → hcp_pre_freesurfer_#01
NA → hcp_pre_freesurfer_#02
NA → hcp_pre_freesurfer_#03
NA → hcp_pre_freesurfer_#04
NA → hcp_pre_freesurfer_#05
NA → hcp_pre_freesurfer_#06
NA → hcp_pre_freesurfer_#07
NA → hcp_pre_freesurfer_#08
NA → hcp_pre_freesurfer_#09
NA → hcp_pre_freesurfer_#10
NA → hcp_pre_freesurfer_#11
NA → hcp_pre_freesurfer_#12
NA → hcp_pre_freesurfer_#13
NA → hcp_pre_freesurfer_#14
NA → hcp_pre_freesurfer_#15
NA → hcp_pre_freesurfer_#16
NA → hcp_pre_freesurfer_#17
NA → hcp_pre_freesurfer_#18
NA → hcp_pre_freesurfer_#19

Using /autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/batch_hcp.txt for input.

(/opt/env/qunex) [QuNex qunex]$

Yes, you cannot enter the container and then schedule commands; SLURM is not available inside the container. When inside the container, you are in a “new” environment. For this to work, you need to schedule a command that enters the container and then executes QuNex. There are two options for doing this. Option 1 is easier; option 2 should give you an insight into how SLURM and Singularity work together.

OPTION 1: You can get the updated qunex_container script that now cleans the environment (uses the --cleanenv flag) at:

wget --no-check-certificate -r 'https://drive.google.com/uc?export=download&id=1wdWgKvr67yX5J8pVUa6tBGXNAg3fssWs' -O qunex_container
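After the download, you will most likely need to make the script executable before running it:

  chmod +x qunex_container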

Then you should be able to use the updated qunex_container to run hcp_pre_freesurfer:

qunex_container hcp_pre_freesurfer \
  --sessions="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/batch_hcp.txt" \
  --sessionsfolder="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions" \
  --parjobs=20 \
  --parsessions=1 \
  --scheduler="SLURM,account=bandlab,partition=basic,mem-per-cpu=8000,time=05:00:00"

OPTION 2: You can always set up SLURM/Singularity manually. Here we traditionally prepare two scripts (A and B below).

SCRIPT A: The command script, say ~/qunex_hcp_pre_freesurfer.sh:

#!/bin/bash

# source qunex environment
source /opt/qunex/env/qunex_environment.sh

# execute
qunex hcp_pre_freesurfer \
  --sessions="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/batch_hcp.txt" \
  --sessionsfolder="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions" \
  --sessionids="170508_4PR00011"

SCRIPT B: The sbatch script that enters the container and then executes the QuNex call, say ~/qunex_hcp_pre_freesurfer.sbatch:

#!/bin/bash
#SBATCH --account=bandlab
#SBATCH --partition=basic
#SBATCH --mem-per-cpu=8000
#SBATCH --time=05:00:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=2
#SBATCH --job-name=hcp_pfs
#SBATCH -o ~/output.txt

# load singularity module
# not sure if needed; it depends on your HPC system, it might be loaded already
# also you might need to specify the version of the module
module load singularity

# execute via singularity exec
singularity exec \
  --cleanenv \
  --bind /autofs \
  qunex_suite-0.91.11.sif ~/qunex_hcp_pre_freesurfer.sh

Note that the above example processes only a single session (170508_4PR00011). Now, you just run sbatch ~/qunex_hcp_pre_freesurfer.sbatch from the login node of your HPC and things should work.
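If you later want this for every session, one hypothetical extension (a sketch, not official QuNex tooling) is to have SCRIPT A use --sessionids="$1", append "$SESSION" as an argument to the singularity exec line in SCRIPT B (command-line arguments pass into the container even with --cleanenv), and then submit one job per session from a plain list of IDs:

  # sessions.txt is an assumed file with one session ID per line
  while read -r session; do
    sbatch --export=ALL,SESSION="$session" ~/qunex_hcp_pre_freesurfer.sbatch
  done < sessions.txt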

Hi John,

Did you maybe already try any of the two options above? If you did, did any of them work? Thanks!

Jure

Hi, sorry for the delay. I did try both of these and ran into a different problem with each. Sorry for all the fuss; I’m not sure what it is about my environment that’s making this difficult.

This is the output of a SLURM job created by running

qunex_container hcp_pre_freesurfer \
  --sessions="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/batch_hcp.txt" \
  --sessionsfolder="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions" \
  --parjobs=20 \
  --parsessions=1 \
  --scheduler="SLURM,account=bandlab,partition=basic,mem-per-cpu=8000,time=05:00:00" \
  --overwrite="yes"

on the SLURM login node:

qunex :cat slurm-669147.out
→ unsetting all environment variables: PATH MATLABPATH PYTHONPATH QUNEXVer TOOLS QUNEXREPO QUNEXPATH QUNEXLIBRARY QUNEXLIBRARYETC TemplateFolder FSL_FIXDIR FREESURFERDIR FREESURFER_HOME FREESURFER_SCHEDULER FreeSurferSchedulerDIR WORKBENCHDIR DCMNIIDIR DICMNIIDIR MATLABDIR MATLABBINDIR OCTAVEDIR OCTAVEPKGDIR OCTAVEBINDIR RDIR HCPWBDIR AFNIDIR PYLIBDIR FSLDIR FSLGPUDIR PALMDIR QUNEXMCOMMAND HCPPIPEDIR CARET7DIR GRADUNWARPDIR HCPPIPEDIR_Templates HCPPIPEDIR_Bin HCPPIPEDIR_Config HCPPIPEDIR_PreFS HCPPIPEDIR_FS HCPPIPEDIR_PostFS HCPPIPEDIR_fMRISurf HCPPIPEDIR_fMRIVol HCPPIPEDIR_tfMRI HCPPIPEDIR_dMRI HCPPIPEDIR_dMRITract HCPPIPEDIR_Global HCPPIPEDIR_tfMRIAnalysis MSMBin HCPPIPEDIR_dMRITractFull HCPPIPEDIR_dMRILegacy AutoPtxFolder FSLGPUScripts FSLGPUBinary EDDYCUDADIR USEOCTAVE QUNEXENV CONDADIR MSMBINDIR MSMCONFIGDIR R_LIBS FSL_FIX_CIFTIRW

Generated by QuNex

Version: 0.91.11
User: jj1006
System: r440-29.nmr.mgh.harvard.edu
OS: RedHat Linux #1 SMP Tue Jun 8 00:24:50 UTC 2021

    ██████\                  ║      ██\   ██\                        
   ██  __██\                 ║      ███\  ██ |                       
   ██ /  ██ |██\   ██\       ║      ████\ ██ | ██████\ ██\   ██\     
   ██ |  ██ |██ |  ██ |      ║      ██ ██\██ |██  __██\\██\ ██  | 
   ██ |  ██ |██ |  ██ |      ║      ██ \████ |████████ |\████  /     
   ██ ██\██ |██ |  ██ |      ║      ██ |\███ |██   ____|██  ██\      
   \██████ / \██████  |      ║      ██ | \██ |\███████\██  /\██\     
    \___███\  \______/       ║      \__|  \__| \_______\__/  \__|    
        \___|                ║                                       


                   DEVELOPED & MAINTAINED BY: 

                Anticevic Lab, Yale University 
           Mind & Brain Lab, University of Ljubljana 
                 Murray Lab, Yale University 

                  COPYRIGHT & LICENSE NOTICE: 

Use of this software is subject to the terms and conditions defined in
'LICENSE.md' which is a part of the QuNex Suite source code package:
https://bitbucket.org/oriadev/qunex/src/master/LICENSE.md

---> Setting up Octave

… Running QuNex v0.91.11 …

==> Note: is part of the QuNex MATLAB.

--- Full QuNex call for command: hcp_pre_freesurfer

gmri hcp_pre_freesurfer --sessions="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/batch_hcp.txt--sessionsfolder=/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions" --parsessions="1" --overwrite="yes"


--------------------==== QuNex failed! ====--------------------
ERROR: Execution of qunex command hcp_pre_freesurfer failed!
ERROR: The specified session file is not found! [/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/batch_hcp.txt--sessionsfolder=/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions]!

The call received was:
(please note that when run through scheduler, all possible parameters,
even non relevant ones are passed)

qunex hcp_pre_freesurfer
--sessions=/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/batch_hcp.txt--sessionsfolder=/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions
--parsessions=1
--overwrite=yes
Traceback (most recent call last):
  File "/opt/qunex/python/qx_utilities/gmri", line 502, in <module>
    main()
  File "/opt/qunex/python/qx_utilities/gmri", line 470, in main
    runCommand(comm, opts)
  File "/opt/qunex/python/qx_utilities/gmri", line 167, in runCommand
    gp.run(command, args)
  File "/opt/qunex/python/qx_utilities/general/process.py", line 609, in run
    sessions, gpref = gc.getSessionList(options['sessions'], filter=options['filter'], sessionids=options['sessionids'], verbose=False)
  File "/opt/qunex/python/qx_utilities/general/core.py", line 256, in getSessionList
    raise ValueError("ERROR: The specified session file is not found! [%s]!" % listString)
ValueError: ERROR: The specified session file is not found! [/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/batch_hcp.txt--sessionsfolder=/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions]!

---> Resetting to defaults: 

qunex :

The file that it says it can’t find definitely exists at that path. I also tried adding --bind /autofs to the qunex_container call, to no avail (also using the new qunex_container script you pointed me to).

For option 2 (the manual sbatch route), I get pointed to an error log file with the same error I was getting earlier:

Generated by QuNex 0.91.11 on 2021-11-11_15.00.1636660823


Running external command via QuNex:
PreFreeSurfer/PreFreeSurferPipeline.sh \
--path="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/170508_4PR00011/hcp" \
--subject="170508_4PR00011" \
--t1="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/170508_4PR00011/hcp/170508_4PR00011/unprocessed/T1w/170508_4PR00011_T1w_MPR1.nii.gz@/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/170508_4PR00011/hcp/170508_4PR00011/unprocessed/T1w/170508_4PR00011_T1w_MPR2.nii.gz" \
--t2="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/170508_4PR00011/hcp/170508_4PR00011/unprocessed/T2w/170508_4PR00011_T2w_SPC1.nii.gz@/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/170508_4PR00011/hcp/170508_4PR00011/unprocessed/T2w/170508_4PR00011_T2w_SPC2.nii.gz" \
--t1template="global/templates/MNI152_T1_0.8mm.nii.gz" \
--t1templatebrain="global/templates/MNI152_T1_0.8mm_brain.nii.gz" \
--t1template2mm="global/templates/MNI152_T1_2mm.nii.gz" \
--t2template="global/templates/MNI152_T2_0.8mm.nii.gz" \
--t2templatebrain="global/templates/MNI152_T2_0.8mm_brain.nii.gz" \
--t2template2mm="global/templates/MNI152_T2_2mm.nii.gz" \
--templatemask="global/templates/MNI152_T1_0.8mm_brain_mask.nii.gz" \
--template2mmmask="global/templates/MNI152_T1_2mm_brain_mask_dil.nii.gz" \
--brainsize="150" \
--fnirtconfig="global/config/T1_2_MNI152_2mm.cnf" \
--t1samplespacing="0.0000021" \
--t2samplespacing="0.0000021" \
--gdcoeffs="/autofs/space/nihilus_001/users/john/logistics/scanner_info/coeff.grad" \
--avgrdcmethod="NONE" \
--processing-mode="HCPStyleData"

Test file:
/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/170508_4PR00011/hcp/170508_4PR00011/MNINonLinear/T1w_restore_brain.nii.gz

/bin/sh: PreFreeSurfer/PreFreeSurferPipeline.sh: No such file or directory

Well this is interesting. I just noticed while replying that the --sessions and --sessionsfolder options seem to be stuck together, but I am confident that this is not a typo on my part. I still have my terminal open and looked at the command history. This is an exact copy-paste:

qunex_SLURM_test :qunex_container hcp_pre_freesurfer --sessions=/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/batch_hcp.txt --sessionsfolder=/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions --parjobs=20 --parsessions=1 --scheduler="SLURM,account=bandlab,partition=basic,mem-per-cpu=8000,time=05:00:00" --overwrite="yes" --container="qunex_suite-0.91.11.sif"

There is definitely a space present.

Try inserting line breaks (a \ followed by a newline in bash) into the command; just copy-paste this. It also makes sense to encapsulate parameter values in double quotes. I also noticed that the call is missing the bind parameter.

qunex_container hcp_pre_freesurfer \
  --sessions="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/batch_hcp.txt" \
  --sessionsfolder="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions" \
  --parjobs="20" \
  --parsessions="1" \
  --scheduler="SLURM,account=bandlab,partition=basic,mem-per-cpu=8000,time=05:00:00" \
  --overwrite="yes" \
  --bind="/autofs" \
  --container="qunex_suite-0.91.11.sif"

I also tested a copy-paste of your command above (no line breaks, no double quotes) and it worked fine on my end. Not sure what is going on here.

I think we’re almost there. I’m getting the same error with both options now. It seems like it’s assuming that it’s in the directory right above “PreFreeSurfer”, so it can’t find it. This is the log of one of the jobs after copy-pasting your command:

qunex :cat /autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/processing/logs/comlogs/error_hcp_pre_freesurfer_TAB51356_2021-11-16_08.54.1637070861.log

Generated by QuNex 0.91.11 on 2021-11-16_08.54.1637070861


Running external command via QuNex:
PreFreeSurfer/PreFreeSurferPipeline.sh \
–path="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/TAB51356/hcp" \
–subject=“TAB51356” \
–t1="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/TAB51356/hcp/TAB51356/unprocessed/T1w/TAB51356_T1w_MPR1.nii.gz@/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/TAB51356/hcp/TAB51356/unprocessed/T1w/TAB51356_T1w_MPR2.nii.gz" \
–t2="/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/TAB51356/hcp/TAB51356/unprocessed/T2w/TAB51356_T2w_SPC1.nii.gz@/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/TAB51356/hcp/TAB51356/unprocessed/T2w/TAB51356_T2w_SPC2.nii.gz" \
–t1template=“global/templates/MNI152_T1_0.8mm.nii.gz” \
–t1templatebrain=“global/templates/MNI152_T1_0.8mm_brain.nii.gz” \
–t1template2mm=“global/templates/MNI152_T1_2mm.nii.gz” \
–t2template=“global/templates/MNI152_T2_0.8mm.nii.gz” \
–t2templatebrain=“global/templates/MNI152_T2_0.8mm_brain.nii.gz” \
–t2template2mm=“global/templates/MNI152_T2_2mm.nii.gz” \
–templatemask=“global/templates/MNI152_T1_0.8mm_brain_mask.nii.gz” \
–template2mmmask=“global/templates/MNI152_T1_2mm_brain_mask_dil.nii.gz” \
–brainsize=“150” \
–fnirtconfig=“global/config/T1_2_MNI152_2mm.cnf” \
–t1samplespacing=“0.0000021” \
–t2samplespacing=“0.0000021” \
–gdcoeffs="/autofs/space/nihilus_001/users/john/logistics/scanner_info/coeff.grad" \
–avgrdcmethod=“NONE” \
–processing-mode=“HCPStyleData”

Test file:
/autofs/cluster/bang/TAW_New/fMRI_processing/qunex/study_main/sessions/TAB51356/hcp/TAB51356/MNINonLinear/T1w_restore_brain.nii.gz

/bin/sh: PreFreeSurfer/PreFreeSurferPipeline.sh: No such file or directory
qunex :

Yeah, it looks like we are nearly there :). It seems like QuNex is unable to find the HCPpipelines directory; everything else looks good now.

Is the entry _hcp_Pipeline : ${HCPPIPEDIR} present in your batch file?
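For reference, in the batch file header it would look something like this (the path shown is the container’s HCPpipelines location from your logs; your actual file may differ):

  # batch file header (excerpt)
  _hcp_Pipeline : /opt/HCP/HCPpipelines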

Cheers, Jure

Alright the jobs have been running for about an hour now and haven’t errored out so I think we may be in the clear! Thank you again for all the support, I don’t know how I would’ve worked through all that on my own.


No worries, happy to help! Please let me know if processing finishes successfully, so I can mark the ticket as resolved.

Hi, sorry for the delay. It is in fact resolved, thank you!

No problem. If something else pops up, let us know and we will try to help.