Hi!
I’m currently trying to run only hcp_diffusion on HCP-D data whose structural and functional MRI have already been preprocessed (by the HCP consortium). I tried to follow this question, but I don’t really understand where I should manually put the preprocessed data.
To figure this out, I tried running
qunex_container hcp_pre_freesurfer \
--batchfile="/home/magda/qunex/test/sessions/HCD0001305_V1_MR/session.txt" \
--sessionsfolder="/home/magda/qunex/test/sessions/HCD0001305_V1_MR/hcp" \
--overwrite=yes \
--container="gitlab.qunex.yale.edu:5002/qunex/qunexcontainer:0.97.3"
where running tree in ~/qunex/test/sessions/HCD0001305_V1_MR/hcp gives this output:
└── HCD0001305_V1_MR
├── ASL
│ ├── HCD0001305_V1_MR_ASL_PA.nii.gz
│ ├── HCD0001305_V1_MR_ASL_SE-FM-AP.nii.gz
│ └── HCD0001305_V1_MR_ASL_SE-FM-PA.nii.gz
├── BOLD_1_AP
│ └── HCD0001305_V1_MR_BOLD_1_AP.nii.gz
├── BOLD_1_AP_SBRef
│ └── HCD0001305_V1_MR_BOLD_1_AP_SBRef.nii.gz
├── BOLD_2_PA
│ └── HCD0001305_V1_MR_BOLD_2_PA.nii.gz
├── BOLD_2_PA_SBRef
│ └── HCD0001305_V1_MR_BOLD_2_PA_SBRef.nii.gz
├── BOLD_3_AP
│ └── HCD0001305_V1_MR_BOLD_3_AP.nii.gz
├── BOLD_3_AP_SBRef
│ └── HCD0001305_V1_MR_BOLD_3_AP_SBRef.nii.gz
├── BOLD_4_PA
│ └── HCD0001305_V1_MR_BOLD_4_PA.nii.gz
├── BOLD_4_PA_SBRef
│ └── HCD0001305_V1_MR_BOLD_4_PA_SBRef.nii.gz
├── BOLD_5_PA
│ └── HCD0001305_V1_MR_BOLD_5_PA.nii.gz
├── BOLD_5_PA_SBRef
│ └── HCD0001305_V1_MR_BOLD_5_PA_SBRef.nii.gz
├── BOLD_6_AP
│ └── HCD0001305_V1_MR_BOLD_6_AP.nii.gz
├── BOLD_6_AP_SBRef
│ └── HCD0001305_V1_MR_BOLD_6_AP_SBRef.nii.gz
├── BOLD_7_PA
│ └── HCD0001305_V1_MR_BOLD_7_PA.nii.gz
├── BOLD_7_PA_SBRef
│ └── HCD0001305_V1_MR_BOLD_7_PA_SBRef.nii.gz
├── BOLD_8_AP
│ └── HCD0001305_V1_MR_BOLD_8_AP.nii.gz
├── BOLD_8_AP_SBRef
│ └── HCD0001305_V1_MR_BOLD_8_AP_SBRef.nii.gz
├── BOLD_9_PA
│ └── HCD0001305_V1_MR_BOLD_9_PA.nii.gz
├── BOLD_9_PA_SBRef
│ └── HCD0001305_V1_MR_BOLD_9_PA_SBRef.nii.gz
├── Diffusion
│ ├── HCD0001305_V1_MR_DWI_dir98_AP.bval
│ ├── HCD0001305_V1_MR_DWI_dir98_AP.bvec
│ ├── HCD0001305_V1_MR_DWI_dir98_AP.nii.gz
│ ├── HCD0001305_V1_MR_DWI_dir98_PA.bval
│ ├── HCD0001305_V1_MR_DWI_dir98_PA.bvec
│ ├── HCD0001305_V1_MR_DWI_dir98_PA.nii.gz
│ ├── HCD0001305_V1_MR_DWI_dir99_AP.bval
│ ├── HCD0001305_V1_MR_DWI_dir99_AP.bvec
│ ├── HCD0001305_V1_MR_DWI_dir99_AP.nii.gz
│ ├── HCD0001305_V1_MR_DWI_dir99_PA.bval
│ ├── HCD0001305_V1_MR_DWI_dir99_PA.bvec
│ └── HCD0001305_V1_MR_DWI_dir99_PA.nii.gz
├── SpinEchoFieldMap1
│ ├── HCD0001305_V1_MR_BOLD_AP_SB_SE.nii.gz
│ └── HCD0001305_V1_MR_BOLD_PA_SB_SE.nii.gz
├── SpinEchoFieldMap2
│ ├── HCD0001305_V1_MR_BOLD_AP_SB_SE.nii.gz
│ └── HCD0001305_V1_MR_BOLD_PA_SB_SE.nii.gz
├── SpinEchoFieldMap3
│ ├── HCD0001305_V1_MR_BOLD_AP_SB_SE.nii.gz
│ └── HCD0001305_V1_MR_BOLD_PA_SB_SE.nii.gz
├── T1w
│ └── HCD0001305_V1_MR_T1w_MPR1.nii.gz
└── T2w
└── HCD0001305_V1_MR_T2w_SPC1.nii.gz
26 directories, 41 files
Nonetheless, hcp_pre_freesurfer
gives
Session id: HCD0001305_V1_MR
[started on Sunday, 14. May 2023 11:11:32]
Running HCP PreFreeSurfer Pipeline [HCPStyleData] ...
---> ERROR: Could not find T1w image file. []
---> ERROR: The requested HCP processing mode is 'HCPStyleData', however, no T2w image was specified!
     Consider using LegacyStyleData processing mode.
---> WARNING: No distortion correction method specified.
---> Due to missing files session can not be processed.
HCP PreFS completed on Sunday, 14. May 2023 11:11:32
Have I imported the HCP data incorrectly for running QuNex? And where in this folder structure should I put the already preprocessed data?
Thank you!
Magda
Hi Magda,
Welcome to the QuNex forums! We are actively working on adding functionality that will allow you to import (partially) preprocessed HCP data and continue processing. The planned release date for this is by the end of the month.
For this to work right now, you need to manually prepare the batch file and provide it via the batchfile parameter. See How to create Parameter, Batch and Mapping Files for HCP data that was preprocessed outside of Qunex for some additional details.
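As a rough sketch of the shape such a file takes (illustrative only; please take the exact parameter names and image tags from the guide above), a batch file is a header of parameters followed by per-session blocks:
# global parameters (illustrative)
_hcp_processing_mode : HCPStyleData

---
session: HCD0001305_V1_MR
subject: HCD0001305_V1_MR
hcp: /home/magda/qunex/test/sessions/HCD0001305_V1_MR/hcp
01: T1w
02: T2w
03: DWI:dir98_AP
04: DWI:dir98_PA
05: DWI:dir99_AP
06: DWI:dir99_PA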
Jure
Thank you for the rapid reply!
I think I got the folder structure (and batch file) working, and hcp_diffusion now runs, but it finishes with the following error:
Running HCP Diffusion Preprocessing
---> hcp_diffusion test file missing:
/home/magda/qunex/test3/sessions/HCD0001305_V1_MR/hcp/HCD0001305_V1_MR/T1w/Diffusion/data.nii.gz
Checking the error log, I also found:
Mon May 15 05:53:57 EDT 2023:run_eddy.sh: /opt/fsl/fsl/bin/eddy_cuda10.2 --cnr_maps --imain=/home/magda/qunex/test3/sessions/HCD0001305_V1_MR/hcp/HCD0001305_V1_MR/Diffusion/eddy/Pos_Neg --mask=/home/magda/qunex/test3/sessions/HCD0001305_V1_MR/hcp/HCD0001305_V1_MR/Diffusion/eddy/nodif_brain_mask --index=/home/magda/qunex/test3/sessions/HCD0001305_V1_MR/hcp/HCD0001305_V1_MR/Diffusion/eddy/index.txt --acqp=/home/magda/qunex/test3/sessions/HCD0001305_V1_MR/hcp/HCD0001305_V1_MR/Diffusion/eddy/acqparams.txt --bvecs=/home/magda/qunex/test3/sessions/HCD0001305_V1_MR/hcp/HCD0001305_V1_MR/Diffusion/eddy/Pos_Neg.bvecs --bvals=/home/magda/qunex/test3/sessions/HCD0001305_V1_MR/hcp/HCD0001305_V1_MR/Diffusion/eddy/Pos_Neg.bvals --fwhm=0 --topup=/home/magda/qunex/test3/sessions/HCD0001305_V1_MR/hcp/HCD0001305_V1_MR/Diffusion/topup/topup_Pos_Neg_b0 --out=/home/magda/qunex/test3/sessions/HCD0001305_V1_MR/hcp/HCD0001305_V1_MR/Diffusion/eddy/eddy_unwarped_images
\B8\97ɾa
\B8\97ɾa
\B8\97ɾa
\B8\97ɾa
EDDY::: Eddy failed with message \F0\E9\CF?XU
Mon May 15 05:54:13 EDT 2023:run_eddy.sh: Completed with return value: 1
I tried this with both the already preprocessed data and with unprocessed data processed using qunex_container pre_freesurfer_hcp, and both give the same errors.
Best and thank you in advance,
Magda
Hi,
This seems like a GPU issue, as the command fails once it tries to start GPU processing. What GPU do you have, and which CUDA version is installed on your system?
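You can check with, for example, nvidia-smi (GPU model and driver) and nvcc --version (installed CUDA toolkit).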
Kind regards, Jure
Hi,
I saw that... However, even though nvcc --version gives
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
I still get the error log message
GPU-enabled version of eddy found: /opt/fsl/fsl/bin/eddy_cuda10.2
followed by the failed eddy run. My system did not respond well to my attempts to update CUDA. Is it possible to run hcp_diffusion with CUDA 10.1 instead?
I tried using hcp_dwi_nogpu, but then I got
ERROR: Non-GPU-enabled version of eddy NOT found: /opt/fsl/fsl/bin/eddy_openmp
instead. Can I circumvent this error in some way?
Best regards and thank you in advance,
Magda
Hi Magda,
For hcp_dwi_nogpu you need to use QuNex 0.96.2a; in the current version there is a weird interaction between HCP Pipelines and FSL that breaks everything. That has already been fixed, and the fix will be in the next QuNex release.
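To use 0.96.2a with qunex_container, you would point --container at the older tag, e.g. (assuming the 0.96.2a image is published under the same registry path as in your calls above):
--container="gitlab.qunex.yale.edu:5002/qunex/qunexcontainer:0.96.2a"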
You can also try binding your local CUDA over the one in the container. I am not sure it will work, but it is worth a try:
qunex_container hcp_diffusion \
--batchfile="/home/magda/qunex/test/sessions/HCD0001305_V1_MR/session.txt" \
--sessionsfolder="/home/magda/qunex/test/sessions/HCD0001305_V1_MR/hcp" \
--overwrite=yes \
--dockeropt="-v <local_cuda_path>:/usr/local/cuda" \
--container="gitlab.qunex.yale.edu:5002/qunex/qunexcontainer:0.97.3"
Replace <local_cuda_path> with the path to CUDA on your system. You can find it by running which nvcc and removing the bin part at the end.
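For example (illustrative output; your path will differ):
$ which nvcc
/usr/local/cuda-10.1/bin/nvcc
Here <local_cuda_path> would be /usr/local/cuda-10.1.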
Also, if your system supports CUDA 10.1, it also supports 10.2, so you could upgrade.
Jure
Hi!
hcp_dwi_nogpu on 0.96.2a didn’t give that error, so thanks for that!
However, I tried both bash_post="export DEFAULT_CUDA_VERSION=11.5" and cuda_path=<cuda path> (with the --nv flag) using the Singularity container --container="/home/karin/qunex/qunex_suite-0.97.3.sif", and both failed after the log message GPU-enabled version of eddy found: /opt/fsl/fsl/bin/eddy_cuda10.2.
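For completeness, the attempts looked roughly like this (reconstructed for reference; other parameters as in my earlier calls):
qunex_container hcp_diffusion \
--batchfile="..." \
--sessionsfolder="..." \
--bash_post="export DEFAULT_CUDA_VERSION=11.5" \
--container="/home/karin/qunex/qunex_suite-0.97.3.sif"
and the same call with --cuda_path=<cuda path> and the --nv flag instead of --bash_post.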
Is there any other version that doesn’t use CUDA 10.2 as the default?
Best regards,
Magda
Hi Magda,
FSL ships with a CUDA 10.2 eddy binary, so this is the official FSL one. It is not a problem, as eddy_cuda10.2 will work fine as long as you use CUDA 10.2 or newer, so it is more like eddy_cuda10.2+. I am just wrapping up my work on the DWI pipeline upgrade, which will bring full support for CUDA 10.2+ and nogpu to all DWI commands (bedpostx, probtrackx ...), and I was able to run the whole pipeline without any issues both on the latest CUDA version and without a GPU/without CUDA.
Jure
Ok! Thank you so very much for patiently answering all kinds of questions! 0.96.2a did the job anyway, so I’m happy!
Best
Karin