[RESOLVED] How to handle subjects with multiple sessions?

Hi, I’m a new user of QuNex. I’m trying to process the MSC (Midnight Scan Club) dataset with the HCP pipelines. Each subject has multiple sessions and the data is in BIDS format; however, the structural and functional data were acquired in separate sessions. Each subject has two structural sessions, while the remaining functional sessions contain only func and fmap data. How should I handle this situation?

The following is a portion of the folder structure for one of the subjects:
├── ses-func09
│ ├── fmap
│ │ ├── sub-MSC01_ses-func09_magnitude1.nii.gz
│ │ ├── sub-MSC01_ses-func09_magnitude2.nii.gz
│ │ └── sub-MSC01_ses-func09_phasediff.nii.gz
│ ├── func
│ │ ├── sub-MSC01_ses-func09_task-glasslexical_run-01_bold.nii.gz
│ │ ├── sub-MSC01_ses-func09_task-glasslexical_run-01_events.tsv
│ │ ├── sub-MSC01_ses-func09_task-glasslexical_run-02_bold.nii.gz
│ │ ├── sub-MSC01_ses-func09_task-glasslexical_run-02_events.tsv
│ │ ├── sub-MSC01_ses-func09_task-memoryfaces_bold.nii.gz
│ │ ├── sub-MSC01_ses-func09_task-memoryfaces_events.tsv
│ │ ├── sub-MSC01_ses-func09_task-memoryscenes_bold.nii.gz
│ │ ├── sub-MSC01_ses-func09_task-memoryscenes_events.tsv
│ │ ├── sub-MSC01_ses-func09_task-memorywords_bold.nii.gz
│ │ ├── sub-MSC01_ses-func09_task-memorywords_events.tsv
│ │ ├── sub-MSC01_ses-func09_task-motor_run-01_bold.nii.gz
│ │ ├── sub-MSC01_ses-func09_task-motor_run-01_events.tsv
│ │ ├── sub-MSC01_ses-func09_task-motor_run-02_bold.nii.gz
│ │ ├── sub-MSC01_ses-func09_task-motor_run-02_events.tsv
│ │ └── sub-MSC01_ses-func09_task-rest_bold.nii.gz
│ └── sub-MSC01_ses-func09_scans.tsv
├── ses-func10
│ ├── fmap
│ │ ├── sub-MSC01_ses-func10_magnitude1.nii.gz
│ │ ├── sub-MSC01_ses-func10_magnitude2.nii.gz
│ │ └── sub-MSC01_ses-func10_phasediff.nii.gz
│ ├── func
│ │ ├── sub-MSC01_ses-func10_task-glasslexical_run-01_bold.nii.gz
│ │ ├── sub-MSC01_ses-func10_task-glasslexical_run-01_events.tsv
│ │ ├── sub-MSC01_ses-func10_task-glasslexical_run-02_bold.nii.gz
│ │ ├── sub-MSC01_ses-func10_task-glasslexical_run-02_events.tsv
│ │ ├── sub-MSC01_ses-func10_task-memoryfaces_bold.nii.gz
│ │ ├── sub-MSC01_ses-func10_task-memoryfaces_events.tsv
│ │ ├── sub-MSC01_ses-func10_task-memoryscenes_bold.nii.gz
│ │ ├── sub-MSC01_ses-func10_task-memoryscenes_events.tsv
│ │ ├── sub-MSC01_ses-func10_task-memorywords_bold.nii.gz
│ │ ├── sub-MSC01_ses-func10_task-memorywords_events.tsv
│ │ ├── sub-MSC01_ses-func10_task-motor_run-01_bold.nii.gz
│ │ ├── sub-MSC01_ses-func10_task-motor_run-01_events.tsv
│ │ ├── sub-MSC01_ses-func10_task-motor_run-02_bold.nii.gz
│ │ ├── sub-MSC01_ses-func10_task-motor_run-02_events.tsv
│ │ └── sub-MSC01_ses-func10_task-rest_bold.nii.gz
│ └── sub-MSC01_ses-func10_scans.tsv
├── ses-struct01
│ ├── anat
│ │ ├── sub-MSC01_ses-struct01_run-01_angio.nii.gz
│ │ ├── sub-MSC01_ses-struct01_run-01_mod-angio_defacemask.nii.gz
│ │ ├── sub-MSC01_ses-struct01_run-01_mod-T1w_defacemask.nii.gz
│ │ ├── sub-MSC01_ses-struct01_run-01_mod-T2w_defacemask.nii.gz
│ │ ├── sub-MSC01_ses-struct01_run-01_T1w.nii.gz
│ │ ├── sub-MSC01_ses-struct01_run-01_T2w.nii.gz
│ │ ├── sub-MSC01_ses-struct01_run-02_angio.nii.gz
│ │ ├── sub-MSC01_ses-struct01_run-02_mod-angio_defacemask.nii.gz
│ │ ├── sub-MSC01_ses-struct01_run-02_mod-T1w_defacemask.nii.gz
│ │ ├── sub-MSC01_ses-struct01_run-02_mod-T2w_defacemask.nii.gz
│ │ ├── sub-MSC01_ses-struct01_run-02_T1w.nii.gz
│ │ └── sub-MSC01_ses-struct01_run-02_T2w.nii.gz
│ └── sub-MSC01_ses-struct01_scans.tsv
├── ses-struct02
│ ├── anat
│ │ ├── sub-MSC01_ses-struct02_acq-coronal_run-01_mod-veno_defacemask.nii.gz
│ │ ├── sub-MSC01_ses-struct02_acq-coronal_run-01_veno.nii.gz
│ │ ├── sub-MSC01_ses-struct02_acq-coronal_run-02_mod-veno_defacemask.nii.gz
│ │ ├── sub-MSC01_ses-struct02_acq-coronal_run-02_veno.nii.gz
│ │ ├── sub-MSC01_ses-struct02_acq-coronal_run-03_mod-veno_defacemask.nii.gz
│ │ ├── sub-MSC01_ses-struct02_acq-coronal_run-03_veno.nii.gz
│ │ ├── sub-MSC01_ses-struct02_acq-sagittal_run-01_mod-veno_defacemask.nii.gz
│ │ ├── sub-MSC01_ses-struct02_acq-sagittal_run-01_veno.nii.gz
│ │ ├── sub-MSC01_ses-struct02_acq-sagittal_run-02_mod-veno_defacemask.nii.gz
│ │ ├── sub-MSC01_ses-struct02_acq-sagittal_run-02_veno.nii.gz
│ │ ├── sub-MSC01_ses-struct02_acq-sagittal_run-03_mod-veno_defacemask.nii.gz
│ │ ├── sub-MSC01_ses-struct02_acq-sagittal_run-03_veno.nii.gz
│ │ ├── sub-MSC01_ses-struct02_acq-sagittal_run-04_mod-veno_defacemask.nii.gz
│ │ ├── sub-MSC01_ses-struct02_acq-sagittal_run-04_veno.nii.gz
│ │ ├── sub-MSC01_ses-struct02_run-01_angio.nii.gz
│ │ ├── sub-MSC01_ses-struct02_run-01_mod-angio_defacemask.nii.gz
│ │ ├── sub-MSC01_ses-struct02_run-01_mod-T1w_defacemask.nii.gz
│ │ ├── sub-MSC01_ses-struct02_run-01_mod-T2w_defacemask.nii.gz
│ │ ├── sub-MSC01_ses-struct02_run-01_T1w.nii.gz
│ │ ├── sub-MSC01_ses-struct02_run-01_T2w.nii.gz
│ │ ├── sub-MSC01_ses-struct02_run-02_angio.nii.gz
│ │ ├── sub-MSC01_ses-struct02_run-02_mod-angio_defacemask.nii.gz
│ │ ├── sub-MSC01_ses-struct02_run-02_mod-T1w_defacemask.nii.gz
│ │ ├── sub-MSC01_ses-struct02_run-02_mod-T2w_defacemask.nii.gz
│ │ ├── sub-MSC01_ses-struct02_run-02_T1w.nii.gz
│ │ └── sub-MSC01_ses-struct02_run-02_T2w.nii.gz
│ └── sub-MSC01_ses-struct02_scans.tsv

Do I need to manually adjust the paths of the structural image folders before running import_bids? Or should I configure certain settings in one of the steps after import_bids?

Hi, welcome to the QuNex forums!

If your data is BIDS compliant, then import_bids should be able to onboard it properly. If you provide only the subject name in the --sessions parameter, QuNex will import all sessions from that subject. Let us assume that your raw data is stored at /data/msc_bids_archive and that you are onboarding it into a QuNex study at /data/qx_study. To create this study, use the create_study command. I will also assume that you are using a Singularity/Apptainer container located at /qx_containers/qunex_suite-1.0.3.sif.
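
Creating the study could look something like the sketch below (I am assuming the --studyfolder parameter here; see the create_study documentation for details):

qunex_container create_study \
  --studyfolder="/data/qx_study" \
  --bind="/data:/data" \
  --container="/qx_containers/qunex_suite-1.0.3.sif"

To onboard subject sub-MSC01 and its sessions you can now run: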

qunex_container import_bids \
  --sessionsfolder="/data/qx_study/sessions" \
  --sessions="sub-MSC01" \
  --inbox="/data/msc_bids_archive" \
  --bind="/data:/data" \
  --container="/qx_containers/qunex_suite-1.0.3.sif"

The --bind parameter gives the container access to the /data folder on your disk.

Some additional info about the qunex_container script can be found on the QuNex quick start using a Docker container page and the other pages of our documentation at https://qunex.readthedocs.io.

Let me know how it goes!

Best, Jure

I have imported all session files of MSC01 into the specified sessions folder. The file structure after import remains consistent with the original data structure, including two structural session folders and ten functional session folders. Below is the folder structure of the sessions folder:
├── MSC01_func01
│ ├── behavior
│ ├── bids
│ ├── hcp
│ └── nii
├── MSC01_func02
│ ├── behavior
│ ├── bids
│ ├── hcp
│ └── nii
├── MSC01_func03
│ ├── behavior
│ ├── bids
│ ├── hcp
│ └── nii
├── MSC01_func04
│ ├── behavior
│ ├── bids
│ ├── hcp
│ └── nii
├── MSC01_func05
│ ├── behavior
│ ├── bids
│ ├── hcp
│ └── nii
├── MSC01_func06
│ ├── behavior
│ ├── bids
│ └── nii
├── MSC01_func07
│ ├── behavior
│ ├── bids
│ ├── hcp
│ └── nii
├── MSC01_func08
│ ├── behavior
│ ├── bids
│ ├── hcp
│ └── nii
├── MSC01_func09
│ ├── behavior
│ ├── bids
│ ├── hcp
│ └── nii
├── MSC01_func10
│ ├── behavior
│ ├── bids
│ ├── hcp
│ └── nii
├── MSC01_struct01
│ ├── behavior
│ ├── bids
│ ├── hcp
│ └── nii
├── MSC01_struct02
│ ├── behavior
│ ├── bids
│ ├── hcp
│ └── nii
├── QC
└── specs

I have proceeded to the hcp_pre_freesurfer step. When running hcp_pre_freesurfer, the command iterated through all sessions in the sessions folder. It reported errors for, and skipped, the 10 functional folders that do not contain structural images, and went straight to processing the two structural sessions. The preprocessing of the structural images themselves is probably fine. However, I am concerned that the subsequent fMRI preprocessing will still treat each session separately. If so, this could cause problems during the preprocessing of the functional images, because the functional session folders contain no structural files, while the processed structural files sit in the other two sessions.

I would like to ask whether there are any issues with my file import and data preparation steps. I am attaching the batch file generated after running the setup_hcp command.

Another question: it seems the two structural sessions are also processed separately. If I only need an averaged structural result in the end, should I merge all the structural images into a single folder beforehand?
batch.txt (58.2 KB)

Hi,

Just one remark before I get to the problem. When pasting folder structures or code blocks, it is nice to wrap them in triple backticks so they are more readable. For example:

├── MSC01_func01
│ ├── behavior
│ ├── bids
│ ├── hcp
│ └── nii
├── MSC01_func02
│ ├── behavior
│ ├── bids
│ ├── hcp
│ └── nii
...

OK, now I see what the issue is. The structure you have will be problematic for most pipelines, as they traditionally assume that the structural images belonging to functional images were acquired during the same session. I think the easiest path forward for you is to organize the data into 2 sessions: one session has struct01 and all functional images that should be preprocessed using the struct01 images; the second session has struct02 and all functional images that belong to it. It will require some work to get everything into this format, but it should save you a bunch of time down the line. Here, you only need to prepare the data right; everything else should then be handled properly by QuNex.
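
A rough sketch of that reorganization for one subject is below; the paths, the func-to-struct split, and the renaming step are all illustrative and depend on how you decide to assign functional sessions to structural sessions:

src=/data/msc_bids_archive/sub-MSC01
dst=/data/msc_bids_reorg/sub-MSC01

# combined session 1: anat from ses-struct01 plus its functional sessions
mkdir -p "$dst/ses-01/func" "$dst/ses-01/fmap"
cp -r "$src/ses-struct01/anat" "$dst/ses-01/"
for s in func01 func02 func03 func04 func05; do
    cp "$src/ses-$s/func/"* "$dst/ses-01/func/"
    cp "$src/ses-$s/fmap/"* "$dst/ses-01/fmap/"
done
# note: the copied files still carry their original ses-* labels; to keep the
# dataset BIDS-valid they also need renaming (e.g. ses-func01 -> ses-01), and
# run numbers must be de-duplicated across the merged sessions

The same would then be repeated for ses-struct02 and the remaining functional sessions.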

The alternative is to preprocess the two structural sessions first and then manually distribute the preprocessed data to all other sessions. This would require manual operations in the terminal and manual edits to session/batch files, so I would try to go the first route if possible.

Best, Jure


I will try reorganizing the file structure using the first method you suggested. Thank you for your help!

@zZq I was planning to preprocess MSC using the HCP pipelines and ran into the same problem. Here is some code that you may find helpful.

map_sessions_to_subjects.py:

#!/usr/bin/env python3

import os
import glob
import re
import yaml

def join_sessions(sourcepath, targetpath, mapping_file):
    """
    Function to join sessions of the same subject into one directory

    Parameters
    ----------
    sourcepath : str
        Path to the sessions folder
    targetpath : str
        Path to the new sessions folder
    mapping_file : str
        Path to the yaml file containing sessions-to-subjects mapping information
    """

    # prepare regex patterns
    sniiname = re.compile(r'([0-9]+)(.+)')
    stxtline = re.compile(r'([0-9]+):(.+)')

    missing = []

    # load the sessions-to-subjects mapping from the yaml file
    with open(mapping_file, 'r') as f:
        subjects = yaml.safe_load(f)

    # loop through subjects
    for sub, ses in subjects.items():
        print(f'\n ===> processing subject {sub}')
        subject_targetpath = os.path.join(targetpath, sub)

        subject_niipath = os.path.join(subject_targetpath, "nii")
        subject_hcppath = os.path.join(subject_targetpath, "hcp")

        if os.path.exists(subject_targetpath):
            print(f"WARNING: directory exists: {subject_targetpath}")
        else:
            print(f" ---> creating subject directory: {subject_targetpath}")
            os.makedirs(subject_targetpath)

        if os.path.exists(subject_niipath):
            print(f"WARNING: directory exists: {subject_niipath}")
        else:
            print(f" ---> creating subject nifti directory: {subject_niipath}")
            os.makedirs(subject_niipath)

        # prepare session.txt
        stxt = os.path.join(subject_targetpath, 'session.txt')
        if os.path.exists(stxt):
            print(f"WARNING: {stxt} exists! Deleting it.")
            os.remove(stxt)

        print(f" ---> Creating {stxt}")
        with open(stxt, 'w') as stxtf:
            stxtf.write(f"id: {sub}\n")
            stxtf.write(f"subject: {sub}\n")
            stxtf.write(f"raw_data: {subject_niipath}\n")
            stxtf.write(f"hcp: {subject_hcppath}\n\n")

            # combine sessions
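            # image numbers are offset by 100 per session so that files
            # from different sessions do not collide when placed together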
            add = 0
            for sessid in ses:
                add += 100
                print(f"     --=> Processing session {sessid}")

                # link NIfTI files
                niftis = glob.glob(os.path.join(sourcepath, f"{sessid}/nii/*"))
                for nifti in niftis:
                    niiname = sniiname.search(os.path.basename(nifti))
                    if niiname:
                        niino = int(niiname.group(1))
                        niiext = niiname.group(2)
                        tnii = os.path.join(subject_niipath, f"{niino + add:03d}{niiext}")

                        if not os.path.exists(tnii):
                            print(f"     ---> linking {nifti} to {tnii}")
                            os.link(nifti, tnii)
                        else:
                            print(f"WARNING: {tnii} exists!")
                    else:
                        print(f"WARNING: Could not process {nifti}")

                # combine session.txt files
                ostxt = os.path.join(sourcepath, f"{sessid}/session.txt")
                if os.path.exists(ostxt):
                    with open(ostxt) as f:
                        for line in f:
                            ln = stxtline.search(line)
                            if ln:
                                stxtf.write(f"{int(ln.group(1)) + add:03d}:{ln.group(2)}\n")
                else:
                    print(f"WARNING: Did not find a session.txt file for {sessid} session {sessid}!")
                    missing.append(sessid)

    if missing:
        print("\n-----------------------\nWARNING: The following files were missing:")
        for missed in missing:
            print(missed)
        print("-------------------------")

if __name__ == "__main__":
    studypath = '/data/studies/fMRI/MSC_2/'
    sourcepath = os.path.join(studypath, 'sessions')
    targetpath = os.path.join(studypath, 'sessions_joined')
    mapping_file = os.path.join(studypath, 'processing', 'scripts', 'sessions_to_subjects.yaml')
    join_sessions(sourcepath, targetpath, mapping_file)

sessions_to_subjects.yaml:

MSC01: [MSC01_struct01, MSC01_struct02, MSC01_func01, MSC01_func02, MSC01_func03, MSC01_func04, MSC01_func05, MSC01_func06, MSC01_func07, MSC01_func08, MSC01_func09, MSC01_func10]
MSC02: [MSC02_struct01, MSC02_struct02, MSC02_func01, MSC02_func02, MSC02_func03, MSC02_func04, MSC02_func05, MSC02_func06, MSC02_func07, MSC02_func08, MSC02_func09, MSC02_func10, MSC02_func11]
MSC03: [MSC03_struct01, MSC03_struct02, MSC03_func01, MSC03_func02, MSC03_func03, MSC03_func04, MSC03_func05, MSC03_func06, MSC03_func07, MSC03_func08, MSC03_func09, MSC03_func10]
MSC04: [MSC04_struct01, MSC04_struct02, MSC04_func01, MSC04_func02, MSC04_func03, MSC04_func04, MSC04_func05, MSC04_func06, MSC04_func07, MSC04_func08, MSC04_func09, MSC04_func10]
MSC05: [MSC05_struct01, MSC05_struct02, MSC05_func01, MSC05_func02, MSC05_func03, MSC05_func04, MSC05_func05, MSC05_func06, MSC05_func07, MSC05_func08, MSC05_func09, MSC05_func10]
MSC06: [MSC06_struct01, MSC06_struct02, MSC06_func01, MSC06_func02, MSC06_func03, MSC06_func04, MSC06_func05, MSC06_func06, MSC06_func07, MSC06_func08, MSC06_func09, MSC06_func10]
MSC07: [MSC07_struct01, MSC07_struct02, MSC07_func01, MSC07_func02, MSC07_func03, MSC07_func04, MSC07_func05, MSC07_func06, MSC07_func07, MSC07_func08, MSC07_func09, MSC07_func10]
MSC08: [MSC08_struct01, MSC08_struct02, MSC08_func01, MSC08_func02, MSC08_func03, MSC08_func04, MSC08_func05, MSC08_func06, MSC08_func07, MSC08_func08, MSC08_func09, MSC08_func10]
MSC09: [MSC09_struct01, MSC09_struct02, MSC09_func01, MSC09_func02, MSC09_func03, MSC09_func04, MSC09_func05, MSC09_func06, MSC09_func07, MSC09_func08, MSC09_func09, MSC09_func10]
MSC10: [MSC10_struct01, MSC10_struct02, MSC10_func01, MSC10_func02, MSC10_func03, MSC10_func04, MSC10_func05, MSC10_func06, MSC10_func07, MSC10_func08, MSC10_func09, MSC10_func10, MSC10_func11]

I haven’t had time to preprocess the data yet, but I would be interested to hear if you manage to do it, especially if you also do ICAFix (with a classifier based on MSC data).


Hello, thank you for providing the code. I have reorganized the file structure and started running the structural preprocessing pipeline. I am thinking of further denoising the time series using ICAFix, but I am not familiar with ICA denoising.

Here is my rough plan:

After completing the HCP minimal preprocessing pipeline, I will temporarily leave QuNex. I plan to select all runs, or a subset of runs, from one session for each subject. For each run, I will use MELODIC to decompose the data into components and manually label the signal and noise components. However, I am unsure whether this workload is too large. Additionally, I am not sure whether I need to concatenate the functional images before performing the ICA decomposition.

After completing these steps, I will input the data into FIX for training.

In the following steps, I will return to QuNex and use the hcp_icafix command to automatically denoise all the preprocessed functional images, setting the --hcp_icafix_traindata parameter to the previously trained model.

This is my initial idea. I’m not sure if it’s correct. Do you have any suggestions?

There is no need to do any processing outside QuNex; you can run ICA denoising using functions within QuNex. The only step that needs to be done outside QuNex is training a custom ICA classification model.

ICA decomposition
Once the HCP minimal preprocessing is complete (hcp_pre_freesurfer, hcp_freesurfer, hcp_post_freesurfer, hcp_fmri_volume, hcp_fmri_surface), you can run hcp_icafix, which will decompose your data into independent components and perform automatic classification of the components into signal and noise (hcp_icafix — QuNex documentation).

In the hcp_icafix command you can select the specific runs that should be used for the ICA decomposition with the --hcp_icafix_bolds parameter. If the --hcp_icafix_bolds parameter is not provided, ICAFix will bundle all bolds together and execute multi-run HCP ICAFix. If the hcp_icafix step finishes successfully, hcp_post_fix will run automatically. hcp_post_fix creates Workbench scene files that can be used to visually review the signal and noise classification generated by ICAFix. For example:

qunex_container hcp_icafix \
  --sessionsfolder="$study/sessions" \
  --batchfile="$study/processing/batch_hcp.txt" \
  --hcp_icafix_traindata="HCP_hp2000.RData" \
  --container="$qunex_container_path"

Training the classification model
Depending on the acquisition parameters of your data, you can try using one of the already prepared models for automatic classification of components (see the FSL FIX documentation), selected by the --hcp_icafix_traindata parameter. You should then manually inspect the classification (the Workbench scene files prepared by hcp_post_fix). If the classification is not good enough, you have two options:

  1. If your dataset is relatively small, you can manually classify all components and run hcp_reapply.
  2. If your dataset is too large for manual classification, you need to prepare a custom classification model. For that purpose, you can first run hcp_icafix and then manually classify the resulting components on a subset of the data only. This manual classification of a subset of the data is then used to train a custom classification model with the fix command (see the FSL FIX documentation; 15-20 ICA solutions are recommended to build a model, and a training sketch is shown after the code block below).
     Once you have prepared your custom classification model, you can use it to automatically classify components on the rest of your data. Automatic classification of the existing ICA components with a new classification model can again be done with the hcp_icafix command: specify the classification model in the --hcp_icafix_traindata parameter and set the --hcp_reuse_existing_ica parameter to TRUE. Setting --hcp_reuse_existing_ica=TRUE avoids rerunning ICA when reclassifying components. However, this option was added only recently, so it will be available in the QuNex container released in February. Alternatively, you can run hcp_icafix without the --hcp_reuse_existing_ica parameter, but in that case the ICA decomposition will be redone, which means that your components and their order will change. For example:
qunex_container hcp_icafix \
  --sessionsfolder="$study/sessions" \
  --batchfile="$study/processing/batch_hcp.txt" \
  --hcp_icafix_traindata="/data/studies/fMRI/ICA_model/analysis/model_pyfix/mblab_s15_TR25.pyfix_model" \
  --hcp_icafix_threshold=5 \
  --container="$qunex_container_path" \
  --hcp_reuse_existing_ica=TRUE \
  --hcp_fix_backup=fix_v1
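
The training call itself (run outside QuNex) follows the pattern below; the output model name and the .ica directories are illustrative, and each .ica directory must contain your manual labels in the format FIX expects (see the FSL FIX documentation):

fix -t MSC_Training -l sub-MSC01_rest.ica sub-MSC02_rest.ica sub-MSC03_rest.ica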

As noted above, it is recommended to build the ICA model on 15-20 ICA solutions (usually 1 solution is 1 subject). In the case of MSC there are 10 subjects (actually, I think MSC08 is not usable due to large movement; see Figure 3 in Gratton et al., 2018). That means you can compute 10 ICA solutions. Alternatively, you could compute one ICA solution for each subject and task (about 100 ICA solutions). In the latter case it would make sense to build a model; in the first case you can just classify all components by hand (still a lot of work).


Andraž, thank you for your help. I now understand the workflow for ICA denoising using QuNex. I think I will first check how well the existing pre-trained model distinguishes the noise components and try manually classifying the components. Afterward, I’ll decide whether further training of the FIX model is necessary. The ICA denoising work will start later, and if I succeed, I’d be happy to share my progress.


Hi, I’m sorry to come back with another issue. I have successfully completed the hcp_pre_freesurfer, hcp_freesurfer, and hcp_post_freesurfer preprocessing steps. However, I encountered an error while running the hcp_fmri_volume pipeline, and I couldn’t find out where the issue is. Below is the error message:

[fig1: screenshot of the hcp_fmri_volume error message]

The following shows the parameters I have set for hcp_fmri_volume in the batch file:

## --- hcp_fmri_volume options

_hcp_bold_echospacing : 0.000590001
_hcp_bold_dcmethod : SiemensFieldMap
_hcp_bold_sbref : NONE
_hcp_bold_echodiff : 2.46
_hcp_bold_unwarpdir : y-
_hcp_bold_doslicetime : TRUE
_hcp_bold_slicetimerparams : --odd

How should I resolve this issue?

Can you please attach relevant comlog and runlog files (folder processing/logs/) and one session_hcp.txt file?
Processing mode should be set to LegacyStyleData if you want to use slice timing correction.
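
In the batch file that is a single line, matching the underscore-prefixed style of your other settings (I am assuming the key name here; check the documentation for the hcp_processing_mode parameter):

_hcp_processing_mode : LegacyStyleData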

SiemensFieldMap should be SiemensFieldmap (notice the lowercase m). We’ll correct this in the next version of QuNex.

Hi, Andraž.
I changed SiemensFieldMap to SiemensFieldmap, but the following error occurs:

ERROR: Unknown distortion correction method: SiemensFieldmap! Please check your settings!

Log-hcp_fmri_volume-2025-01-26_08.29.51.827133.log (5.0 KB)

Below is the error log from when I keep using the “SiemensFieldMap” value and set the processing mode to LegacyStyleData:
Log-hcp_fmri_volume-2025-01-26_09.42.07.218207.log (13.6 KB)

The error log is only generated in the runlog, and no new record files are created in the comlog.

Here is my session_hcp.txt file. I didn’t use the code you provided to reorganize the files. Instead, I chose to combine the two structural sessions of each MSC subject and distribute them to every functional session, so each session has four T1w and four T2w structural images. Additionally, I manually merged the two magnitude images using fslmerge.
session_hcp.txt (1.2 KB)
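
For reference, the merge for two fieldmap magnitude images is a call along these lines (filenames illustrative):

fslmerge -t sub-MSC01_magnitude.nii.gz \
    sub-MSC01_ses-func01_magnitude1.nii.gz \
    sub-MSC01_ses-func01_magnitude2.nii.gz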

Thanks for the details. Our main developer is currently on vacation and I don’t have appropriate data to test this at the moment, but there seem to be some inconsistencies in the code regarding the SiemensFieldmap/SiemensFieldMap value that may be causing the problem. Can you try using the FIELDMAP value? That should be equivalent to SiemensFieldMap.
Also, we recently added support for multiple FM magnitude images, so there is no need to manually average them.

edit: I tested this and an error further down occurs. I’ll discuss with a team member and reply tomorrow.

Hi, the easiest way for you right now would be to open the Docker container interactively and change all occurrences of SiemensFieldmap to SiemensFieldMap in process_hcp.py (/opt/qunex/python/qx_utilities/hcp/process_hcp.py). We can include the fix in QuNex next week.
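
For example, something along these lines; the image name is a placeholder for the QuNex image you are actually running, and note that edits made this way last only as long as the container does:

docker run -it --entrypoint /bin/bash <your-qunex-image>

# inside the container: back up the file, then swap the value everywhere
cp /opt/qunex/python/qx_utilities/hcp/process_hcp.py \
   /opt/qunex/python/qx_utilities/hcp/process_hcp.py.bak
sed -i 's/SiemensFieldmap/SiemensFieldMap/g' \
    /opt/qunex/python/qx_utilities/hcp/process_hcp.py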


Thank you, Andraž. I modified process_hcp.py and tested it. Both the hcp_fmri_volume and hcp_fmri_surface pipelines completed successfully.


Hi,

A patch for your latest issue will be included in QuNex 1.0.4, which will be released later this week.

Best, Jure
