[RESOLVED] dMRI processing issues

Hi, Jure

I noticed that QuNex provides the dwi_legacy_gpu command for legacy data using the FSL processing schemes in addition to hcp_diffusion, which is really helpful!

However, I still have some doubts.

  1. When building mapping.txt ahead of time, it seems that dMRI sequences are usually named along the lines of SequenceName_for_DiffusionSequenceDirection_AnteriorPosterior_90directions => DWI:dir90_AP. I notice that the documentation gives examples with more than 90 directions, and I’m not sure whether the number of directions in the name is just a label or has a practical use in subsequent processing. For example, if my legacy data actually has 64 directions, should I name it DWI:dir90_AP or DWI:dir64_AP?

  2. If I have done the initial processing of the dMRI data using the dwi_legacy_gpu command, I would like to end up with a structural connectivity matrix based on a specified atlas (e.g. HCP-MMP). Can I execute the following commands sequentially to obtain the structural connectivity matrix?

  • Bayesian multifiber modeling using dwi_bedpostx_gpu to estimate the directional distribution and uncertainty of up to 3 fibers per voxel.
  • Using dwi_pre_tractography to construct a dense trajectory space.
  • Whole-brain probabilistic tractography with dwi_probtrackx_dense_gpu.
  • Use the dwi_parcellate command to extract the connectivity matrix based on HCP_MMP1.0_Glasser.32k_fs_LR.dlabel.nii.

Looking forward to your reply, best wishes!

Hi Oliver,

Welcome to QuNex forums.

  1. The dir part of the naming is mainly there for your own use and clarity, following a somewhat standard naming convention. The number of directions is not extracted from this string during processing. The more appropriate name is thus DWI:dir64_AP, but even if you used a “wrong” name (DWI:dir90_AP) you would still be able to process the data through QuNex and the results would be the same.
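To illustrate, a mapping.txt entry for a 64-direction AP acquisition might look like the following (the sequence name on the left is a hypothetical scanner label; yours must match what actually appears in your data):

```
SequenceName_for_DiffusionSequenceDirection_AnteriorPosterior_64directions => DWI:dir64_AP
```

Only the left-hand side needs to match the acquired sequence; the dir64 part of the target name is purely descriptive.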

  2. That is correct, the dwi_legacy_gpu command produces outputs that are compliant with DWI processing that follows (bedpostx, dtifit, tractography, NODDI …).
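For reference, the four steps above could be sketched as QuNex calls along these lines (a minimal sketch: the sessions folder and session ID are hypothetical placeholders, and the commands are echoed rather than executed so only the order is shown):

```shell
#!/bin/sh
# Sketch of the post-dwi_legacy_gpu pipeline order discussed above.
# SESSIONS_FOLDER and SESSION are placeholder values, not real data.
SESSIONS_FOLDER=/data/study/sessions
SESSION=0001
for CMD in dwi_bedpostx_gpu dwi_pre_tractography \
           dwi_probtrackx_dense_gpu dwi_parcellate; do
    echo "qunex $CMD --sessionsfolder=$SESSIONS_FOLDER --sessions=$SESSION"
done
```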

Best, Jure


Thank you, Jure!

I will follow the commands in the Diffusion Analyses section of the QuNex support materials and execute them in order, which will hopefully give me the structural connectivity matrix I’m looking for.

Hi, Jure

I have a similar question about dMRI data processing, and I thought I’d check the workflow for legacy dMRI data with you.

I am currently executing the dwi_bedpostx_gpu, dwi_pre_tractography, and dwi_probtrackx_dense_gpu commands in sequence after processing with the dwi_legacy_gpu command. When executing the dwi_probtrackx_dense_gpu command, it runs up against the processing time I set (2-10:00:00); I don’t know whether something is wrong with my data or whether I should simply set a longer processing time. Or should I also add the dwi_dtifit and dwi_noddi_gpu commands?

This is the output log from dwi_probtrackx_dense_gpu; it doesn’t show any errors, so I’m not sure whether I need to set a longer time limit, or whether the memory I set (50G) is still a bit small.
tmp_dwi_probtrackx_dense_gpu_31-JA-JNLA-0053_2025-04-23_03.05.12.142719.log (1.9 KB)


So, the complete dMRI processing flow is:

  • dwi_legacy_gpu
  • dwi_bedpostx_gpu
  • dwi_dtifit
  • dwi_pre_tractography
  • dwi_probtrackx_dense_gpu
  • dwi_noddi_gpu
  • dwi_parcellate

Are there any problems with the above process and order of processing? :eyes:

Best, YJia


No, that looks good. It seems like dwi_probtrackx_dense_gpu is still running, since the log has the tmp_ prefix. If the run crashes or fails, the prefix changes to error_; if all goes well, it changes to done_.
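A quick way to read the run status off a log filename, following the prefix convention above (the filename here is a shortened, hypothetical example):

```shell
#!/bin/sh
# Map the QuNex log filename prefix to a run status:
# tmp_ = still running, error_ = crashed/failed, done_ = completed.
LOG="tmp_dwi_probtrackx_dense_gpu_2025-04-23.log"   # hypothetical name
case "$LOG" in
    tmp_*)   STATUS="still running" ;;
    error_*) STATUS="failed" ;;
    done_*)  STATUS="completed" ;;
    *)       STATUS="unknown" ;;
esac
echo "$STATUS"   # prints "still running"
```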

dwi_noddi_gpu is for microstructure modelling (Using GPUs to accelerate computational diffusion MRI: From microstructure estimation to tractography and connectomes - ScienceDirect), if you will not use this in your final analyses, you can skip it.

Best, Jure


Hi, Jure

I also think there’s probably nothing wrong with dwi_probtrackx_dense_gpu, so I’m planning to re-run it with a longer time limit and a bit more memory (e.g. 4 days, 80G).

The maximum job duration our Slurm cluster supports seems to be five days, so I may need to split the task into batches.
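If it helps, longer limits are typically passed through the scheduler string; a rough sketch follows (the field names below mirror common SLURM sbatch options and are assumptions — check which scheduler options your QuNex version and cluster actually accept):

```shell
#!/bin/sh
# Sketch only: the time/mem fields follow SLURM's sbatch conventions
# and are assumptions, not verified against a specific QuNex release.
SCHEDULER="SLURM,time=04-00:00:00,mem=80G"
echo "qunex dwi_probtrackx_dense_gpu --scheduler=\"$SCHEDULER\""
```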

Best, Yjia