[RESOLVED] hcp_diffusion on a diffusion dataset with a single phase encoding direction

Dear Jure, Lining and QuNex experts:

I was wondering if it is possible to run hcp_diffusion on a dataset that only has a single phase encoding direction?

Many thanks.
Ed

Hi Ed,

I do not think this is possible; HCP Diffusion requires at least one pair of opposite phase encoding directions. For data like that, QuNex has the dwi_legacy_gpu command, which works on a single diffusion file. Do you have multiple DWI files that all have the same phase encoding direction?
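
If you are not sure, and your data were converted with dcm2niix, you can check the PhaseEncodingDirection field in the JSON sidecar next to each DWI NIfTI (the file names here are just placeholders):

  for f in dwi1.json dwi2.json dwi3.json dwi4.json; do
      grep '"PhaseEncodingDirection"' "$f"    # typically "j-" for AP and "j" for PA
  done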

Jure

Hi Jure:

Yes, you are right. The diffusion data were all acquired in the same phase encoding direction. But I do have field maps (spin echo EPI) in both phase encoding directions.

Thanks so much as always!

Best,
Ed

I assume this is a different dataset than the one mentioned here, because there you do have an AP/PA pair?

If you only have multiple files that all share the same phase encoding direction, you should merge the images together and then run the dwi_legacy_gpu pipeline on the merged file. Note that merging of images is not currently supported by QuNex and needs to be done manually. Also note that I am not a DWI expert and there might be a better way, but my colleagues used this approach in one of their studies when they had data similar to yours. If you need additional information about what exactly they did, let me know and I will ask them.

Hi Jure:

Yes it’s a different dataset.

Please do kindly ask your colleagues how they processed the data.

Do I need to calculate the field map myself? I acquired an AP/PA spin echo pair for estimating the field map, which is not the same as the conventional way of acquiring a Siemens field map.

Many thanks,
Ed

The colleague that did this on our end is a bit busy at the moment, but she should be able to help you out next week.

Hi Jure:

Well noted, thanks a million for your help as always!

Best,
Ed

Hi Ed,

Sorry to keep you waiting for so long; our DWI experts are very busy with some pressing deadlines. We are meeting with the DWI team that collaborates with FSL on Tuesday, so I will pick their brains about this and then try to help you out myself.

Cheers, Jure

Hi Jure:

I really appreciate your help indeed! Thanks a lot!

Best,
Ed

Hi Ed,

I apologize for the late response.

In order to run the dwi_legacy_gpu pipeline with your data, you would first need to merge your DWI files (NIfTI, bvec and bval) as follows:

  • fslmerge -t merged_dwi dwi1 dwi2 dwi3 dwi4
  • paste -d' ' bval1 bval2 bval3 bval4 > merged_bvals
  • paste -d' ' bvec1 bvec2 bvec3 bvec4 > merged_bvecs

(paste concatenates the files column-wise, which keeps the single-row bval and three-row bvec layout that FSL expects; cat would stack them into extra rows.)
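
A quick sanity check after merging (assuming FSL is on your path and the file names from the list above): the number of volumes, b-values and gradient directions should all match.

  fslval merged_dwi dim4              # number of volumes in the merged 4D image
  wc -w merged_bvals                  # number of b-values (single-row file)
  awk 'NR==1 {print NF}' merged_bvecs # number of gradient directions (columns of the 3-row bvec file)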

Once this step is done, you can run the initial QuNex steps using a mapping file that identifies this merged DWI.

The input to dwi_legacy_gpu will be this merged file, which you would specify with --diffdatasuffix.
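
For orientation, the call could look roughly like the sketch below. --diffdatasuffix is the parameter mentioned above; the session flags are the usual QuNex ones, and acquisition-specific parameters are omitted here, so please check qunex dwi_legacy_gpu --help on your installation for the full list:

  qunex dwi_legacy_gpu \
      --sessionsfolder=<path to your sessions folder> \
      --sessions=<session id> \
      --diffdatasuffix=merged_dwi    # assuming the merged file name used above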

Please let me know if you need any additional help when running this pipeline.

Best,
Clara

Hi Clara:

Really appreciate your help indeed. Thanks so much for the code snippets.

Please forgive me for asking a trivial question. May I ask specifically which initial QuNex steps I should perform? You mentioned "Once this step is done, you can run the initial QuNex steps using a mapping file that identifies this merged DWI."

Many thanks!
Ed

Ed,

There are two options here. The first is to merge the raw data and then onboard the merged data into QuNex; this is what Clara meant by running the initial steps. So you do the merging outside of QuNex and then onboard it and run all the subsequent steps (create_session_info, setup_hcp, …).

However, probably an easier way for you is to merge the DWI data along with the bvecs and bvals, put the merged files into the HCP unprocessed folder (where the unmerged DWI98_AP etc. are put), and then use the --diffdatasuffix parameter Clara mentioned to make QuNex target the new merged data.
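
As a rough sketch of the second option (the exact unprocessed path depends on your study layout, so treat it as a placeholder):

  # the folder where the unmerged DWI98_AP etc. already live
  UNPROC=<path to the hcp unprocessed DWI folder>
  cp merged_dwi.nii.gz merged_bvals merged_bvecs "$UNPROC"/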

Hope this helps.

Jure

Hi Jure:

Got it, I will probably go with the second option then ;)

Thanks so much for your help as always!

Ed