[RESOLVED] Struggling with import_bids using ADNI data converted with Clinica

No problem, glad to help and glad we got it up and running.

In the data you were onboarding, I did not see any fieldmaps (I only glanced at it, so I could be wrong). So the setting we made is probably optimal from that standpoint; this is sometimes called legacy-style data/pre-processing.

Best, Jure

Hi Jure

I started running the job about 8 hours ago and there doesn’t seem to be any update; it looks like it is stuck processing.
tmp_hcp_pre_freesurfer_002_S_1155_2024-05-02_11.21.34.389335.log (22.9 KB)
I cannot see any reason why it would have gone wrong.

I reran the example data a couple of hours later (I’d moved the files and ran into an issue running map_hcp_data, so I thought I’d start from the beginning) using the turnkey approach, and that started and completed the pre_freesurfer step within 2 hours.
done_hcp_pre_freesurfer_HCPA001_2024-05-02_14.28.47.176758.log (43.5 KB)

Any ideas?

Thank you, Scott

Not sure, it could be that some processing parameters are way off. Another thing that comes to mind is the two T2ws; maybe try with just a single one for your initial test runs.

Best, Jure

Hi Jure

I am only using one T2w, as I didn’t revert after thinking it was causing an issue before. I could try the other T2w, or both.

Currently I have started it again and will see if anything different happens.

Should I be concerned by any of these warnings?

WARNING: Use of parameters with changed name(s)!
         The following parameters have new names:
         ... TR is now tr!
         Please correct the listed parameter names in command line or batch file!
WARNING: Parameter qx_cifti_tail was not specified. Its value was imputed from parameter hcp_cifti_tail and set to '_Atlas'!
WARNING: Parameter qx_nifti_tail was not specified. Its value was imputed from parameter hcp_nifti_tail and set to ''!
WARNING: Parameter cifti_tail was not specified. Its value was imputed from parameter qx_cifti_tail and set to '_Atlas'!
WARNING: Parameter nifti_tail was not specified. Its value was imputed from parameter qx_nifti_tail and set to ''!

Thank you, Scott

No, you can safely ignore those. These are irrelevant for the processing you are trying to do.

Best, Jure

Hi Jure

It got stuck again in the same place. Are there any other logs that would help debug?
Any suggestions that would help get the pipeline moving again?

Thank you, Scott

Hi Scott,

I talked with one of my colleagues who processed some ADNI data. You can find the parameters file they used here: adni_parameters.txt (1.4 KB).

Best, Jure

Hi Jure

Please pass on my sincere thanks and gratitude for sharing the file; it is appreciated, as is your continued support.

I have started the job again and hopefully this time it doesn’t get stuck :crossed_fingers:

Thank you, Scott

Hi Jure

It got stuck again, just a bit further along, probably because the T2w was ignored in the parameters.txt file.
tmp_hcp_pre_freesurfer_002_S_1155_2024-05-03_15.41.55.924139.log (28.8 KB)

What could be causing these jobs to stall rather than error out?

Is there any info on what all the parameters in the parameters.txt file are, so I can pair them up with the information from ADNI? This is what I know about the scans:




Thank you, Scott

I do not think the lack of T2w is an issue. We often process such data and hcp_pre_freesurfer is usually quite quick.

You could try processing a different ADNI session as a test; maybe you picked a problematic session. When processing gets stuck there are usually two outcomes: it either needs a lot of time to do something, or it crashes with an error (which is hopefully meaningful and gives us some insight into the issue). So I would suggest you wait a day or so to see what happens.

The acquisition device is probably the most widespread one (Siemens Prisma 3T) and the parameters look pretty much standard, so the parameters you are using should work fine.

Best, Jure

Hi Jure

Quick question as I go to re-run with alternative subject.

Where do my adni_mappings.txt and adni_parameters.txt need to be stored?
I am specifically confused about the parameters, as at one point I am doing
export INPUT_BATCH_FILE="${RAW_DATA}/adni_parameters.txt" and at another point I am doing export PARAMS="${SESSIONS_FOLDER}/specs/adni_parameters.txt"
Does it need to be in both?

Also, I cannot see where I am using export INPUT_MAPPING_FILE="${RAW_DATA}/adni_mapping.txt" outside of the turnkey approach, which I do not use for the ADNI data, just the example data. So I wonder if this is where I am going wrong.

Thank you, Scott

Looks like it managed to complete successfully today.
done_hcp_pre_freesurfer_012_S_4643_2024-05-06_16.36.17.900404.log (28.3 KB)

Hi Jure

It would appear that hcp_freesurfer has stalled and has not been updated in over 5 hours, without an error message.
tmp_hcp_freesurfer_012_S_4643_2024-05-06_17.15.50.375369.log (79.8 KB)

Kind regards, Scott

Edit: just checked this morning and there is still no further progress.

Hi Scott,

hcp_freesurfer takes quite some time to complete; it usually takes way more than 5 hours. As long as it does not crash you should not be concerned.

adni_mappings.txt and adni_parameters.txt can be stored anywhere you wish. There is a folder in QuNex called specs (sessions/specs); I usually put such files in there. These two files are used only in the beginning, preparation phases. The mapping file is used to inject additional imaging info that is required to prepare everything for HCP processing; this happens in the create_session_info step, where the mapping file is used in combination with session.txt to create session_hcp.txt. Once that is done, you do not need the mapping anymore (for the sessions where it was already applied). We then use setup_hcp to sort out the HCP folder structure based on the data information prepared by create_session_info. The parameters are injected into the top of the batch file (the batch file is composed of the parameters and of the session_hcp.txt content), which is then used for at-scale processing while ensuring consistent use of parameters.
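
For illustration, a create_session_info call for one of your ADNI sessions could look roughly like this (a sketch only; the session ID, the ${STUDY_FOLDER} and ${CONTAINER} variables, and the specs path are assumptions based on your setup, and --mapping is the mapping-file parameter in recent QuNex versions, so double check against the command help):

# sketch: combine session.txt with the mapping file to produce session_hcp.txt
qunex_container create_session_info \
    --sessionsfolder="${STUDY_FOLDER}/sessions" \
    --sessions="002_S_1155" \
    --mapping="${STUDY_FOLDER}/sessions/specs/adni_mappings.txt" \
    --container="${CONTAINER}"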

Best, Jure

Hi Jure

There has been no movement/update to the tmp file since last night, making it now ~15 hours without progress. Should I keep waiting?

Thank you, Scott

For some sessions, FreeSurfer can take a very long time to process. It will end eventually (either successfully or with a crash). You should check, though, whether the process is actually running in the background :slight_smile:.
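
A quick way to check (a sketch, assuming a Linux host and that FreeSurfer's recon-all runs under your user) is something like:

# list any running recon-all / FreeSurfer processes for your user
ps -u "$USER" -o pid,etime,cmd | grep -iE "recon-all|freesurfer" | grep -v grep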

Also note that some commands do not “stream” their output into a log file but dump all of the logging at the end, so it often happens that the tmp log is not updated in real time.

Best, Jure

Hi Jure

I managed to get hcp_freesurfer and hcp_post_freesurfer to complete. Whoop!!
I did this using the turnkey method, as this includes the --scheduler option, which meant the cluster wasn’t killing any processes due to my inactivity.
Can I use the --scheduler tag on the individual commands?

I just need to get the fMRI data processed now, but I put that under my other topic as it didn’t run as part of the turnkey even though hcp_fmri_volume and hcp_fmri_surface were included in the steps.

Do you think the lack of T2w will be an issue with my ADNI data?
What steps do I need to do differently when trying to process multiple subjects at once, preferably in parallel?

Thank you again for all your help, patience and support. It really is appreciated.

Kind regards, Scott

done_hcp_post_freesurfer_012_S_4643_2024-05-08_05.51.44.390458.log (1.8 MB)
done_hcp_freesurfer_012_S_4643_2024-05-07_19.48.59.412153.log (555.0 KB)

Great news!

You should have let me know from the get-go that you are running this on a high-performance computing (HPC) cluster :slight_smile:. You should never run things there without switching to a compute node. When you log in you get access to a weak login node that is very slow, which is why everything was so slow and caused all kinds of issues. The purpose of the login node is not to run anything, but ONLY to schedule jobs on powerful compute nodes. You can do this scheduling manually with dedicated SLURM scripts, while QuNex does it for you through the --scheduler parameter. Yes, the --scheduler parameter will also work on individual commands.
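
For example, a single step scheduled on its own could look like this (a sketch; the SLURM settings are only illustrative, so adjust the time, memory, and partition to what your cluster allows):

# sketch: schedule a single processing step as a SLURM job
qunex_container hcp_freesurfer \
    --sessionsfolder="${STUDY_FOLDER}/sessions" \
    --batchfile="${STUDY_FOLDER}/processing/batch.txt" \
    --scheduler="SLURM,time=2-00:00:00,cpus-per-task=2,mem-per-cpu=16000,jobname=qx_hcp_fs" \
    --container="${CONTAINER}"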

To process things in parallel, this is what you do:

  1. Run create_study.

  2. Import all sessions you want to process with import_dicom.

  3. Run create_session_info; for the --sessions parameter, provide a comma-separated list of sessions so QuNex will run across all of them.

  4. Run setup_hcp and create_batch as such:

qunex_container setup_hcp \
    --sessionsfolder="${STUDY_FOLDER}/sessions" \
    --sessions="<COMMA_SEPARATED_LIST_OF_SESSIONS>" \
    --scheduler="<SCHEDULER PARAMETERS>" \
    --container="${CONTAINER}"
qunex_container create_batch \
    --sessionsfolder="${STUDY_FOLDER}/sessions" \
    --sessions="<COMMA SEPARATED LIST OF SESSIONS>" \
    --paramfile="<PATH TO THE PARAM FILE>" \
    --targetfile="${STUDY_FOLDER}/processing/batch.txt" \
    --scheduler="<SCHEDULER PARAMETERS>" \
    --container="${CONTAINER}"

At this point you should have a batch file with parameters on top and all sessions underneath. You can use this now to process everything at scale.

  5. Start the real work:
qunex_container hcp_pre_freesurfer \
    --sessionsfolder="${STUDY_FOLDER}/sessions" \
    --batchfile="${STUDY_FOLDER}/processing/batch.txt" \
    --scheduler="<SCHEDULER PARAMETERS>" \
    --container="${CONTAINER}"

Since we are using a scheduler, QuNex will create a separate job for each of the sessions in the batch file. If you have 10 sessions, QuNex will create 10 jobs, and each job will run independently, in parallel, on its own compute node of your system.
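
Once the jobs are submitted you can keep an eye on them with the usual SLURM tools (a sketch; sacct availability depends on your cluster's accounting setup):

# list your queued and running jobs
squeue -u "$USER"
# show state and elapsed time for recent jobs
sacct -u "$USER" --format=JobID,JobName,State,Elapsed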

Best, Jure

Hi Jure

My sincere apologies about the cluster. I included that information in my other post, but failed to carry that important piece of the puzzle over.
I wasn’t running it on the login node; I did start an interactive node, but I believe it was this that was timing out and killing the node’s jobs.

Sorry, Scott