Description:
I am attempting to run the FSL bedpostx command on the HCP unrelated sample. However, when I run the command via qunexContainer, the SLURM job fails without reporting an error. When I run it without the container, it fails with an import error on the Python dicom module (see the traceback in the Logs section). The QuNex command calls and outputs are listed below.
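As a quick sanity check of the Python environment QuNex picks up, the snippet below reports which DICOM package is importable; the only assumption here is that the import error concerns pydicom, which used the package name dicom before its 1.0 release:

# Report which DICOM package the current Python environment provides.
# "dicom" is the pre-1.0 pydicom package name; "pydicom" is the current one.
for name in ("dicom", "pydicom"):
    try:
        __import__(name)
        print("%s: importable" % name)
    except ImportError:
        print("%s: not installed" % name)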
Call:
All calls were run from: /gpfs/project/fas/n3/Studies/Connectome/processing/logs
sourceQunex
qunexContainer FSLBedpostxGPU \
--sessionsfolder="/gpfs/project/fas/n3/Studies/Connectome/subjects" \
--sessions="101410, 102008, 102109, 102614" \
--fibers='3' \
--burnin='3000' \
--model='3' \
--overwrite="no" \
--cores=4 \
--scheduler="SLURM,time=1-00:00:00,ntasks=4,cpus-per-task=1,mem-per-cpu=20000,partition=pi_anticevic_gpu" \
--container="/gpfs/project/fas/n3/software/Singularity/qunex_suite-latest.sif"
I also tried running the command directly on a GPU node, without the container:
sshgpu
sourceQunex
qunex FSLBedpostxGPU \
--sessionsfolder="/gpfs/project/fas/n3/Studies/Connectome/subjects" \
--sessions="101410, 102008, 102109, 102614" \
--fibers='3' \
--burnin='3000' \
--model='3' \
--overwrite="no" \
--cores=4 \
--scheduler="SLURM,time=1-00:00:00,ntasks=4,cpus-per-task=1,mem-per-cpu=20000,partition=pi_anticevic_gpu"
Logs:
Runlogs and comlogs are not being generated.
Example SLURM output file from the qunexContainer call:
/gpfs/project/fas/n3/Studies/Connectome/processing/logs/slurm-9081999.out
Terminal output when not using the container:
........................ Running Qu|Nex v0.62.6 ........................
Traceback (most recent call last):
  File "/gpfs/project/fas/n3/software/qunex/niutilities/gmri", line 4, in <module>
    import niutilities as niu
  File "/gpfs/loomis/pi/n3/software/qunex/niutilities/niutilities/__init__.py", line 2, in <module>
    import g_dicom
  File "/gpfs/loomis/pi/n3/software/qunex/niutilities/niutilities/g_dicom.py", line 46, in <module>
    import dicom.filereader as dfr
ImportError: No module named dicom.filereader
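For context, pydicom renamed its top-level package from dicom to pydicom in its 1.0 release, so import dicom.filereader only resolves against pydicom 0.x. Assuming the environment ships the newer package (an assumption, I have not verified the installed version on the cluster), a compatibility shim along these lines in g_dicom.py should keep the old import working:

try:
    import dicom.filereader as dfr    # package name for pydicom < 1.0
except ImportError:
    import pydicom.filereader as dfr  # package name for pydicom >= 1.0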