[RESOLVED] Issue Running hcp_diffusion with QuNex – Error in run_eddy Step

Hi,

We’ve been trying to run the hcp_diffusion pipeline using QuNex, but we consistently encounter errors during execution.
All previous steps in the HCP pipeline (PreFreeSurfer, FreeSurfer, and PostFreeSurfer) completed successfully.

The DWI data we use was acquired using these protocols:
(1) dMRI_MB4_185dirs_d15D45_AP
(2) dMRI_MB4_6dirs_d15D45_PA
(3) two SBRef scans

We use FSL version 6.0.7.1 and run it with Docker version 18.09.2, build 6247962.

We keep getting error messages.
We have tried running the hcp_diffusion command both with the --hcp_nogpu flag and without it.

When running this command (with GPU):

```
qunex_container hcp_diffusion --sessions="${SESSIONS}" --mappingfile="${INPUT_MAPPING_FILE}" --container="${QUNEX_CONTAINER}" --dockeropt="-v ${BIND_FOLDER}:${BIND_FOLDER}" --sessionsfolder="${STUDY_FOLDER}/sessions" --batchfile="${STUDY_FOLDER}/processing/batch.txt" --hcp_dwi_negdata="10002_DWI_dir6_PA.nii.gz" --hcp_dwi_posdata="10002_DWI_dir185_AP.nii.gz"
```

When running this command (no GPU):

```
qunex_container hcp_diffusion --hcp_nogpu --sessions="${SESSIONS}" --mappingfile="${INPUT_MAPPING_FILE}" --container="${QUNEX_CONTAINER}" --dockeropt="-v ${BIND_FOLDER}:${BIND_FOLDER}" --sessionsfolder="${STUDY_FOLDER}/sessions" --batchfile="${STUDY_FOLDER}/processing/batch.txt" --hcp_dwi_negdata="10001_DWI_dir6_PA.nii.gz" --hcp_dwi_posdata="10001_DWI_dir185_AP.nii.gz"
```

We get the following error messages:
error_hcp_diffusion_10002_2025-06-30_09.27.31.191653.log (56.8 KB)
error(no_gpu)_hcp_diffusion_10001_2025-06-30_09.26.08.181841.log (28.6 KB)

From the log files, it appears that the PreEddy step completes, but the pipeline fails during run_eddy.sh.
This is the section of the log where things begin to go wrong:

Any idea on what could be the problem?

Thanks in advance!

Hi,

Welcome to QuNex forum!

A couple of issues:

  1. To use CUDA/GPU for processing, you have to add the --nv flag to the qunex_container call; this gives the container access to the GPU. On some systems this does not work, in which case you can try the --cuda flag, which does things a bit differently. The weird symbols in the error log are probably caused by issues related to that.

  2. As outlined in the log file:

```
Mon Jun 30 09:26:10 EDT 2025:DiffPreprocPipeline_PreEddy.sh: ERROR: Wrong Input! No pairs of phase encoding directions have been found!
Mon Jun 30 09:26:10 EDT 2025:DiffPreprocPipeline_PreEddy.sh: ERROR: Wrong Input! No pairs of phase encoding directions have been found!
Mon Jun 30 09:26:10 EDT 2025:DiffPreprocPipeline_PreEddy.sh: ERROR: At least one pair is needed!
Mon Jun 30 09:26:10 EDT 2025:DiffPreprocPipeline_PreEddy.sh: ERROR: At least one pair is needed!
```

Your data is not HCP compliant. HCP Diffusion processing needs at least one pair of images, a pair being pos+neg images with the same dir count. In your case you do not have this; the dir counts mismatch (6 vs 185). For such processing, we offer the dwi_legacy_gpu command, which works on a single diffusion image.

Best, Jure

Hi Jure,

I have been working with Yael Accav on this issue. First of all, thank you for the swift response.

I have a question regarding your suggestion - does dwi_legacy_gpu skip topup correction altogether? Because that might be problematic.

Thanks for your help.

(Also) Yael

Yes, unfortunately it does just that:

```
dwi_legacy_gpu

This function runs the DWI preprocessing using the FUGUE method for legacy data
that are not TOPUP compatible.
```

If you want the details, see the source code of the dwi_legacy_gpu command.

Unfortunately, we have no in-between variant that would use TOPUP and work on the data you have.

Best, Jure

I see.

In the past, we used HCP-pipeline scripts to run the entire DWI preprocessing (including TOPUP) on a dMRI dataset with the exact same number of volumes in each direction as the current one. That was before we started working with QuNex. Is there possibly a workaround we can use, with QuNex, to run the TOPUP correction?

Thanks!

Hm, the errors you are seeing:

```
Mon Jun 30 09:26:10 EDT 2025:DiffPreprocPipeline_PreEddy.sh: ERROR: Wrong Input! No pairs of phase encoding directions have been found!
Mon Jun 30 09:26:10 EDT 2025:DiffPreprocPipeline_PreEddy.sh: ERROR: Wrong Input! No pairs of phase encoding directions have been found!
Mon Jun 30 09:26:10 EDT 2025:DiffPreprocPipeline_PreEddy.sh: ERROR: At least one pair is needed!
Mon Jun 30 09:26:10 EDT 2025:DiffPreprocPipeline_PreEddy.sh: ERROR: At least one pair is needed!
```

are not printed out by QuNex, but by the HCP Pipelines. The constraint that you need a full POS+NEG pair with the same number of dirs is on their end. I am not familiar with the exact details of this constraint, but since they print ERROR rather than WARNING, it is probably something that should not be ignored, as the results would likely be invalid.

You are saying that you ran the exact same kind of data through their Diffusion pipeline? It could be that they added this error check for exactly this reason. I guess you could bypass it by renaming 10002_DWI_dir6_PA.nii.gz to 10002_DWI_dir185_PA.nii.gz. But like I said, there is probably a solid reason the HCP folks are not allowing such processing.
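If you do try that workaround, a sketch of the renaming could look like the following. Everything here is illustrative: it stages placeholder files in a scratch directory, and it assumes the usual .bval/.bvec sidecars sit next to the NIfTI. Note that this only sidesteps the pair-matching check; it does not fix the underlying 6-vs-185 direction mismatch.

```shell
# Illustrative only: in practice DWI_DIR would be the session's
# unprocessed/Diffusion folder, not a temp directory.
DWI_DIR="$(mktemp -d)"
touch "${DWI_DIR}/10002_DWI_dir6_PA.nii.gz" \
      "${DWI_DIR}/10002_DWI_dir6_PA.bval" \
      "${DWI_DIR}/10002_DWI_dir6_PA.bvec"

# Copy each file under the name the HCP pair check expects, so the dir
# count in the filename matches the AP scan. Copying (rather than moving)
# keeps the originals around.
for f in "${DWI_DIR}"/10002_DWI_dir6_PA.*; do
  new="$(printf '%s' "$f" | sed 's/dir6/dir185/')"
  cp "$f" "$new"
done
```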

Best, Jure

Thank you for your help. I’ll investigate this further on our end and let you know if we have any more questions.

Best,
Yael

Let me know if there is anything I can help with!

The error check above was added to HCP Pipelines approximately a year and a half ago. Maybe you processed your similar data before that point, when HCP Pipelines did not yet have this check in place?

Best, Jure

I did (it was 2021), and it worked perfectly. I will have to look further into this.
If you have any more ideas on how I can carry on with the preprocessing with the TOPUP correction (which is an essential part of our image preprocessing), please let me know.

Thanks again!

Hi!

I just saw your post on the HCP users group. I am not an expert on diffusion processing, sorry about that. Per M. Harms' suggestions, you might be able to process the data after all.

To set --combine-data-flag=2 on the HCP side, add --hcp_dwi_combinedata=2 to the QuNex call.

Furthermore, the QuNex flag --hcp_dwi_selectbestb0 will set --select-best-b0 on the HCP side.

Maybe first try the no-GPU mode to check whether this works at all, and once you are sure all is good, also try the GPU mode (add the --nv flag to the qunex_container call there).
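Put together, the adjusted call might look roughly like this. This is only a sketch: it assembles the command string from the variables used in your earlier commands so you can inspect it before running; the flag names are the ones discussed above, and other options you need (container, bind folder, batch file paths) would be added the same way.

```shell
# Assemble the adjusted hcp_diffusion call as a string for inspection.
# SESSIONS and STUDY_FOLDER are assumed to be set as in the earlier
# commands; run the resulting command once it looks right.
EXTRA_DWI_OPTS="--hcp_dwi_combinedata=2 --hcp_dwi_selectbestb0"
CMD="qunex_container hcp_diffusion \
  --sessions=${SESSIONS} \
  --sessionsfolder=${STUDY_FOLDER}/sessions \
  --batchfile=${STUDY_FOLDER}/processing/batch.txt \
  --hcp_dwi_posdata=10001_DWI_dir185_AP.nii.gz \
  --hcp_dwi_negdata=10001_DWI_dir6_PA.nii.gz \
  ${EXTRA_DWI_OPTS}"
echo "$CMD"   # add --nv only once the no-GPU run succeeds
```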

Best, Jure

Thank you, I appreciate your help!

We’ll give it a try and let you know if it worked.

Hi Jure,

We’ve been trying to run the hcp_diffusion command again, but still no success.

Here’s what we ran:
qunex_container hcp_diffusion --bind="/home/docker/volumes/hcppipelines/yael_practice" --mappingfile="${INPUT_MAPPING_FILE}" --container="${QUNEX_CONTAINER}" --dockeropt="-v ${BIND_FOLDER}:${BIND_FOLDER}" --sessions="${SESSIONS}" --sessionsfolder="${STUDY_FOLDER}/sessions" --batchfile="${STUDY_FOLDER}/processing/batch.txt" --hcp_dwi_posdata="10001_DWI_dir185_AP.nii.gz" --hcp_dwi_negdata="10001_DWI_dir6_PA.nii.gz" --hcp_dwi_phasepos="AP" --hcp_dwi_combinedata="2" --hcp_dwi_selectbestb0 --nv

Unfortunately, we get this feedback:

```
Starting processing of sessions 10001 at Monday, 07. July 2025 06:39:22
Running external command: /opt/HCP/HCPpipelines/DiffusionPreprocessing/DiffPreprocPipeline.sh
    --path="/home/docker/volumes/hcppipelines/yael_practice/PTSD_BB_study/sessions/10001/hcp"
    --subject="10001"
    --PEdir=2
    --posData="/home/docker/volumes/hcppipelines/yael_practice/PTSD_BB_study/sessions/10001/hcp/10001/unprocessed/Diffusion/10001_DWI_dir185_AP.nii.gz@EMPTY"
    --negData="EMPTY@/home/docker/volumes/hcppipelines/yael_practice/PTSD_BB_study/sessions/10001/hcp/10001/unprocessed/Diffusion/10001_DWI_dir6_PA.nii.gz"
    --echospacing="0.689998"
    --gdcoeffs="NONE"
    --dof="6"
    --b0maxbval="50"
    --combine-data-flag="2"
    --printcom=""
    --select-best-b0
    --cuda-version=10.2
```

For some reason, it adds “EMPTY” before/after the data paths.
Then, the process fails again (log report is attached).
error_hcp_diffusion_10001_2025-07-07_06.39.22.822482.log (29.8 KB)

What are we doing wrong here?

Thanks again.

Please use triple backticks when pasting code and command examples, and format them so they are easier to read:

Your version:
```
bind="/home/docker/volumes/hcppipelines/yael_practice" --mappingfile="${INPUT_MAPPING_FILE}" --container="${QUNEX_CONTAINER}" --dockeropt="-v ${BIND_FOLDER}:${BIND_FOLDER}" --sessions="${SESSIONS}" --sessionsfolder="${STUDY_FOLDER}/sessions" --batchfile="${STUDY_FOLDER}/processing/batch.txt" --hcp_dwi_posdata="10001_DWI_dir185_AP.nii.gz" --hcp_dwi_negdata="10001_DWI_dir6_PA.nii.gz" --hcp_dwi_phasepos="AP" --hcp_dwi_combinedata="2" --hcp_dwi_selectbestb0 --nv
```

Readable version:

```
qunex_container hcp_diffusion \
  --bind="/home/docker/volumes/hcppipelines/yael_practice" \
  --mappingfile="${INPUT_MAPPING_FILE}" \
  --container="${QUNEX_CONTAINER}" \
  --dockeropt="-v ${BIND_FOLDER}:${BIND_FOLDER}" \
  --sessions="${SESSIONS}" \
  --sessionsfolder="${STUDY_FOLDER}/sessions" \
  --batchfile="${STUDY_FOLDER}/processing/batch.txt" \
  --hcp_dwi_posdata="10001_DWI_dir185_AP.nii.gz" \
  --hcp_dwi_negdata="10001_DWI_dir6_PA.nii.gz" \
  --hcp_dwi_phasepos="AP" \
  --hcp_dwi_combinedata="2" \
  --hcp_dwi_selectbestb0 \
  --nv
```
  1. You do not need the mappingfile parameter; it belongs to a completely different command.
  2. EMPTY is inserted because that is the correct way to use the HCP Diffusion pipelines: when there is no Pos/Neg pair, you are supposed to insert EMPTY for the missing one.
  3. The outgoing call to HCP Pipelines looks OK. There is still the error of missing pairs:
```
Mon Jul  7 06:39:26 EDT 2025:DiffPreprocPipeline_PreEddy.sh: ERROR: Wrong Input! No pairs of phase encoding directions have been found!
Mon Jul  7 06:39:26 EDT 2025:DiffPreprocPipeline_PreEddy.sh: ERROR: Wrong Input! No pairs of phase encoding directions have been found!
Mon Jul  7 06:39:26 EDT 2025:DiffPreprocPipeline_PreEddy.sh: ERROR: At least one pair is needed!
Mon Jul  7 06:39:26 EDT 2025:DiffPreprocPipeline_PreEddy.sh: ERROR: At least one pair is needed!
```

So this might be an issue on the HCP side. Let me add this info to the HCP users group post.

Best, Jure