[RESOLVED] hcp_pre_freesurfer fails due to field map with excessive phase range

Hello,
I am having trouble running hcp_pre_freesurfer on my Philips sessions.

The comlog indicates “ERROR: input phase image exceeds allowable phase range. Allowable range is 6.283 radians. Image range is: 71.4188 radians. Aborting.”.

Based on this forum post (JISCMail - FSL Archives), I used fslmaths to divide the phase field map by 12, which brings the range down to roughly 71.42 / 12 ≈ 5.95 radians, within the allowable 6.283-radian limit. However, I could not use fsl_prepare_fieldmap, because according to its documentation it does not work with Philips data.
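For reference, the rescaling was a single fslmaths call along these lines (the file name B0_Phase.nii.gz here is just a placeholder for the phase image in my session folder, not the actual name):

# check the current phase range (prints min and max intensity)
fslstats B0_Phase.nii.gz -R

# rescale so the total range fits within 2*pi (6.283) radians
fslmaths B0_Phase.nii.gz -div 12 B0_Phase_rescaled.nii.gz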

Path to one session:
/gpfs/project/fas/n3/Studies/STIMZO/sessions/11071JA_J00

Command:

qunex_container hcp_pre_freesurfer \
  --sessionids="11040SC_J00,11044GS_J00,11050BN_J00,11051SL_J00,11055LF_J00,11062FJ_J00,11063LK_J00,11071JA_J00,11078SB_J00,11080FL_J00" \
  --sessionsfolder="/gpfs/project/fas/n3/Studies/STIMZO/sessions" \
  --sessions="/gpfs/project/fas/n3/Studies/STIMZO/processing/batch_caen.txt" \
  --overwrite="yes" \
  --container="$qunex_container" \
  --scheduler="SLURM,time=1-00:00:00,ntasks=1,cpus-per-task=2,mem-per-cpu=8000,partition=pi_anticevic"

Output:
error_hcp_pre_freesurfer_11071JA_J00_2022-06-28_15.49.41.035772.log (21.0 KB)

Output after dividing the phase image by 12:
error_hcp_pre_freesurfer_11071JA_J00_2022-06-28_16.47.05.622578.log (22.3 KB)

Thanks!
Layla

Hi Layla, I will look into this issue shortly and will let you know once it has been resolved.

Layla, to solve the Philips field map distortion correction problem we need to change the code of the HCP Pipelines, so this fix will take longer. I will keep you updated on the progress.


Hi, do you have any news on this issue or an estimated timeline?

Thanks!
Layla

Hi Layla,

We are currently working on this issue and are expecting to prepare a temporary solution in the next two weeks. It will take us longer to integrate the final solution into QuNex, as we will need to coordinate the development with the developers of the HCP pipelines.

I will let you know once we have the temporary solution prepared and how to use it to perform field map distortion correction on your data.

Best regards,

Aleksij

Hi Aleksij,
Clara told me to run my command after running these two lines:

export HCPPIPEDIR=${TOOLS}/HCP/HCPpipelines.philips
export HCPPIPEDIR_Global=${TOOLS}/HCP/HCPpipelines.philips/global/scripts

However, hcp_pre_freesurfer stops almost immediately with this error in the comlog: /bin/sh: /opt/HCP/HCPpipelines.philips/PreFreeSurfer/PreFreeSurferPipeline.sh: No such file or directory

This is the command I ran:

qunex_container hcp_pre_freesurfer \
  --sessionids="11040SC_J00,11044GS_J00,11050BN_J00,11051SL_J00,11055LF_J00,11062FJ_J00,11063LK_J00,11071JA_J00,11078SB_J00,11080FL_J00" \
  --sessionsfolder="/gpfs/gibbs/pi/n3/Studies/STIMZO/sessions" \
  --sessions="/gpfs/gibbs/pi/n3/Studies/STIMZO/processing/batch_caen_b0map.txt" \
  --overwrite="yes" \
  --container="$qunex_container" \
  --scheduler="SLURM,time=1-00:00:00,ntasks=1,cpus-per-task=2,mem-per-cpu=8000,partition=day"

Thanks for your help!
Layla

Hi Layla,

It seems that our $TOOLS variables differ. Can you run the following two lines instead:

export HCPPIPEDIR=/gpfs/gibbs/pi/n3/software/HCP/HCPpipelines.philips
export HCPPIPEDIR_Global=/gpfs/gibbs/pi/n3/software/HCP/HCPpipelines.philips/global/scripts

Please let me know if this fixes the problem.

Best,

Aleksij

Hi,
I still get the same error in the comlog with:

export HCPPIPEDIR=/gpfs/gibbs/pi/n3/software/HCP/HCPpipelines.philips
export HCPPIPEDIR_Global=/gpfs/gibbs/pi/n3/software/HCP/HCPpipelines.philips/global/scripts

Thanks,
Layla

Hi Layla,

I apologize, I did not notice that you are running the command through the container. At this time, it is only possible to use the modified (Philips) HCP Pipelines code by running QuNex without the container. Can you do that by running:

qunex_container hcp_pre_freesurfer \
  --sessionids="11040SC_J00,11044GS_J00,11050BN_J00,11051SL_J00,11055LF_J00,11062FJ_J00,11063LK_J00,11071JA_J00,11078SB_J00,11080FL_J00" \
  --sessionsfolder="/gpfs/gibbs/pi/n3/Studies/STIMZO/sessions" \
  --sessions="/gpfs/gibbs/pi/n3/Studies/STIMZO/processing/batch_caen_b0map.txt" \
  --overwrite="yes" \
  --scheduler="SLURM,time=1-00:00:00,ntasks=1,cpus-per-task=2,mem-per-cpu=8000,partition=day"

Please let me know if it works. Otherwise, I can try and run this code for you, because it will be easier to debug any issues this way.

Hi Aleksij,
When I run the command with:

export HCPPIPEDIR=/gpfs/gibbs/pi/n3/software/HCP/HCPpipelines.philips
export HCPPIPEDIR_Global=/gpfs/gibbs/pi/n3/software/HCP/HCPpipelines.philips/global/scripts

and then:

qunex_container hcp_pre_freesurfer \
  --sessionids="11040SC_J00,11044GS_J00,11050BN_J00,11051SL_J00,11055LF_J00,11062FJ_J00,11063LK_J00,11071JA_J00,11078SB_J00,11080FL_J00" \
  --sessionsfolder="/gpfs/gibbs/pi/n3/Studies/STIMZO/sessions" \
  --sessions="/gpfs/gibbs/pi/n3/Studies/STIMZO/processing/batch_caen_b0map.txt" \
  --overwrite="yes" \
  --scheduler="SLURM,time=1-00:00:00,ntasks=1,cpus-per-task=2,mem-per-cpu=8000,partition=day"

I get this error: No Singularity image or Docker container name specified either in the command line or as a QUNEXCONIMAGE environment variable!
I don't know how to run the command without specifying a container.

Hi Layla,

It seems that your environment runs QuNex through the container by default. I asked for help on how to resolve this and will get back to you soon.

To run qunex from source, you need to use qunex instead of qunex_container. Do you have a qunex feature branch for this patch?
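As a rough sketch only (assuming the QuNex source environment is already set up in your shell, and with the session list elided), the non-container call would mirror your container command, just without the --container parameter:

qunex hcp_pre_freesurfer \
  --sessionids="..." \
  --sessionsfolder="/gpfs/gibbs/pi/n3/Studies/STIMZO/sessions" \
  --sessions="/gpfs/gibbs/pi/n3/Studies/STIMZO/processing/batch_caen_b0map.txt" \
  --overwrite="yes" \
  --scheduler="SLURM,time=1-00:00:00,ntasks=1,cpus-per-task=2,mem-per-cpu=8000,partition=day"

With this route, the exported HCPPIPEDIR variables are picked up directly from your shell environment.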

If you only need to swap the HCP Pipelines code inside the container, you can use the --bash_post option. In your previous command, you are also missing the --container parameter for qunex_container.

https://qunex.readthedocs.io/en/latest/wiki/UsageDocs/RunningQunexContainer.html#executing-custom-bash-commands

qunex_container hcp_diffusion \
  --batchfile="/myStudy/processing/batch.txt" \
  --sessions="sub1" \
  --sessionsfolder="/myStudy/sessions" \
  --nv \
  --bash_pre="cd /studies/qunex/;module load singularity" \
  --bash_post="export FSL_FIXDIR=/studies/qunex/fix" \
  --container="/mySingularityContainers/qunexcontainer-latest.sif" \
  --overwrite="yes"

Hi Lining,
I'm not sure I understand what I need to do.
When I run this command:

qunex_container hcp_pre_freesurfer \
  --sessions="11040SC_J00,11044GS_J00,11050BN_J00,11051SL_J00,11055LF_J00,11062FJ_J00,11063LK_J00,11071JA_J00,11078SB_J00,11080FL_J00" \
  --sessionsfolder="/gpfs/gibbs/pi/n3/Studies/STIMZO/sessions" \
  --batchfile="/gpfs/gibbs/pi/n3/Studies/STIMZO/processing/batch_caen_b0map.txt" \
  --bash_pre="cd /studies/qunex/;module load singularity" \
  --bash_post="export FSL_FIXDIR=/studies/qunex/fix" \
  --container="/mySingularityContainers/qunexcontainer-latest.sif" \
  --overwrite="yes"

I’m getting this error:

/bin/sh: line 0: cd: /studies/qunex/: No such file or directory
Lmod has detected the following error:  The following module(s) are unknown: "singularity"

Please check the spelling or version number. Also try "module spider ..."
It is also possible your cache file is out-of-date; it may help to try:
  $ module --ignore_cache load "singularity"

Also make sure that all modulefiles written in TCL start with the string #%Module



/bin/sh: singularity: command not found

Thanks!

That was just an example command from the linked page in the QuNex documentation, showing that it is possible to use --bash_post to set an environment variable inside the container. You should be able to adjust your command accordingly; for your case it would look something like this:

qunex_container hcp_pre_freesurfer \
  --sessionids="11040SC_J00,11044GS_J00,11050BN_J00,11051SL_J00,11055LF_J00,11062FJ_J00,11063LK_J00,11071JA_J00,11078SB_J00,11080FL_J00" \
  --sessionsfolder="/gpfs/gibbs/pi/n3/Studies/STIMZO/sessions" \
  --sessions="/gpfs/gibbs/pi/n3/Studies/STIMZO/processing/batch_caen_b0map.txt" \
  --bash_post="export HCPPIPEDIR=/gpfs/gibbs/pi/n3/software/HCP/HCPpipelines.philips;export HCPPIPEDIR_Global=/gpfs/gibbs/pi/n3/software/HCP/HCPpipelines.philips/global/scripts" \
  --overwrite="yes" \
  --container="/gpfs/gibbs/pi/n3/software/Singularity/qunex_suite-0.94.7.sif" \
  --scheduler="SLURM,time=1-00:00:00,ntasks=1,cpus-per-task=2,mem-per-cpu=8000,partition=day"

Hi,
I was finally able to run hcp_pre_freesurfer! However, there is still an issue when running the command on my data. Here is the comlog:
error_hcp_pre_freesurfer_11078SB_J00_2022-09-07_12.05.04.899028.log (22.6 KB)

Thanks!

Layla

Hi Layla,

I am looking into this issue. I will let you know when I resolve it.

Best,

Aleksij

I found and fixed the bug that was causing the error; it was introduced in the latest version of my code for Philips field map processing. I have now run hcp_pre_freesurfer myself and it completed without errors. Tomorrow morning I will check the results to see whether the distortion correction was performed correctly.

Hi Aleksij,
I also tried running hcp_pre_freesurfer this morning for the subjects 11040SC_J00, 11044GS_J00, 11050BN_J00, 11051SL_J00, 11055LF_J00, 11062FJ_J00, 11063LK_J00, 11071JA_J00, 11078SB_J00, 11080FL_J00, 04021LC_J00, 04036DE_J00, 04039SD_J00, 04091TC_J00, 04098SA_J00, in case you want to check the outputs.

Thanks for your help!
Layla

Hi Aleksij,
When running hcp_fmri_volume with this pipeline on my Philips acquisitions, I get the error comlog below, related to a problem when unwarping the field maps. Let me know if you think the pipeline could work today. If this is not resolved, I am going to process the 15 Philips sessions without the field maps.

error_hcp_fmri_volume_BOLD_1_11062FJ_J00_2022-09-13_11.08.19.417529.log (54.9 KB)