[RESOLVED] Temporal ICA processing question

I’m trying to run the temporal ICA pipeline and ran into an error that is probably simple to address, but I need help. I ran the following qunex_container command:

msi_resources_time=24:00:00; msi_resources_nodes=1; msi_resources_ntaskspernode=24; msi_resources_mem=64000; msi_queue=agsmall; msi_resources_jobname=HCPtICA; \
study_sharedfolder=/home/moanae/shared/project_K99_ChrTMDHCP_qunex02; \
qunex_container hcp_temporal_ica \
--batchfile=${study_sharedfolder}/processing/batch_K99Aim2.txt --sessionsfolder=${study_sharedfolder}/sessions \
--hcp_tica_bolds="fMRI_CONCAT_ALL" --hcp_tica_outfmriname="fMRI_CONCAT_ALL" --hcp_tica_mrfix_concat_name="fMRI_CONCAT_ALL" --hcp_tica_surfregname="MSMAll" \
--hcp_outgroupname="K99Aim2_AllParticip_n52" --hcp_tica_timepoints="1680" --hcp_tica_num_wishart="4" \
--hcp_tica_stop_after_step="ComputeTICAFeatures" --hcp_tica_icamode="INITIALIZE_TICA" \
--hcp_tica_precomputed_clean_folder="${study_sharedfolder}/analysis/tICA/S1200_MSMAll3T1071" \
--hcp_tica_precomputed_fmri_name="rfMRI_REST" --hcp_tica_precomputed_group_name="S1200_MSMAll3T1071" --hcp_tica_sicadim_override=82 \
--hcp_tica_fix_legacy_bias="NO" \
--scheduler=SLURM,time=${msi_resources_time},nodes=${msi_resources_nodes},cpus-per-task=${msi_resources_ntaskspernode},mem=${msi_resources_mem},partition=${msi_queue},jobname=${msi_resources_jobname} \
--bind=${study_sharedfolder}:${study_sharedfolder} --container=${HOME}/qunex/qunex_suite-0.97.3.sif

and my batch file has this entry for tICA:

_hcp_tica_procstring      : _Atlas_MSMAll_hp0_clean

The error log shows that the parameter "--proc-string" is not being constructed correctly from this value (see bolded):

# Generated by QuNex 0.97.3 on 2023-04-19_09.55.50.624304
#
------------------------------------------------------------
Running external command via QuNex:

/opt/HCP/HCPpipelines/tICA/tICAPipeline.sh                 --study-folder="/home/moanae/shared/project_K99_ChrTMDHCP_qunex02/sessions/K99Aim2_AllParticip_n52"                 --subject-list="10001@10002@10004@10005@10006@10007@10009@10011@10012@10013@10014@10015@10016@10018@10019@10021@10022@10023@10024@10030@10031@10033@10034@10035@10036@10037@10038@10039@10041@10042@11004@11006@11009@11012@11015@20001@20002@20004@20005@20006@20008@20010@20011@20012@20013@20015@20016@20017@20018@20019@20021@20022"                 --fmri-names="fMRI_CONCAT_ALL"                 --output-fmri-name="fMRI_CONCAT_ALL"                 --surf-reg-name="MSMAll"                 --fix-high-pass="0"                 **--proc-string="_Atlas_MSMAll_hp0_clean_MSMAll_hp0_clean"**                 --out-group-name="K99Aim2_AllParticip_n52"                 --fmri-resolution="2"                 --subject-expected-timepoints="1680"                 --num-wishart="4"                 --low-res="32"                 --matlab-run-mode="0"                 --stop-after-step="ComputeTICAFeatures"                    --mrfix-concat-name="fMRI_CONCAT_ALL"                    --ica-mode="INITIALIZE_TICA"                    --precomputed-clean-folder="/home/moanae/shared/project_K99_ChrTMDHCP_qunex02/analysis/tICA/S1200_MSMAll3T1071"                    --precomputed-clean-fmri-name="rfMRI_REST"                    --precomputed-group-name="S1200_MSMAll3T1071"                    --sicadim-override="82"                    --fix-legacy-bias="NO"
------------------------------------------------------------

Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: arguments: --study-folder=/home/moanae/shared/project_K99_ChrTMDHCP_qunex02/sessions/K99Aim2_AllParticip_n52 --subject-list=10001@10002@10004@10005@10006@10007@10009@10011@10012@10013@10014@10015@10016@10018@10019@10021@10022@10023@10024@10030@10031@10033@10034@10035@10036@10037@10038@10039@10041@10042@11004@11006@11009@11012@11015@20001@20002@20004@20005@20006@20008@20010@20011@20012@20013@20015@20016@20017@20018@20019@20021@20022 --fmri-names=fMRI_CONCAT_ALL --output-fmri-name=fMRI_CONCAT_ALL --surf-reg-name=MSMAll --fix-high-pass=0 --proc-string=_Atlas_MSMAll_hp0_clean_MSMAll_hp0_clean --out-group-name=K99Aim2_AllParticip_n52 --fmri-resolution=2 --subject-expected-timepoints=1680 --num-wishart=4 --low-res=32 --matlab-run-mode=0 --stop-after-step=ComputeTICAFeatures --mrfix-concat-name=fMRI_CONCAT_ALL --ica-mode=INITIALIZE_TICA --precomputed-clean-folder=/home/moanae/shared/project_K99_ChrTMDHCP_qunex02/analysis/tICA/S1200_MSMAll3T1071 --precomputed-clean-fmri-name=rfMRI_REST --precomputed-group-name=S1200_MSMAll3T1071 --sicadim-override=82 --fix-legacy-bias=NO
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: StudyFolder: /home/moanae/shared/project_K99_ChrTMDHCP_qunex02/sessions/K99Aim2_AllParticip_n52
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: SubjlistRaw: 10001@10002@10004@10005@10006@10007@10009@10011@10012@10013@10014@10015@10016@10018@10019@10021@10022@10023@10024@10030@10031@10033@10034@10035@10036@10037@10038@10039@10041@10042@11004@11006@11009@11012@11015@20001@20002@20004@20005@20006@20008@20010@20011@20012@20013@20015@20016@20017@20018@20019@20021@20022
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: fMRINames: fMRI_CONCAT_ALL
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: MRFixConcatName: fMRI_CONCAT_ALL
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: OutputfMRIName: fMRI_CONCAT_ALL
**Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: fMRIProcSTRING: _Atlas_MSMAll_hp0_clean_MSMAll_hp0_clean**
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: HighPass: 0
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: GroupAverageName: K99Aim2_AllParticip_n52
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: fMRIResolution: 2
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: RegName: MSMAll
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: numWisharts: 4
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: LowResMesh: 32
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: subjectExpectedTimepoints: 1680
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: ICAmode: INITIALIZE_TICA
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: precomputeTICAFolder: /home/moanae/shared/project_K99_ChrTMDHCP_qunex02/analysis/tICA/S1200_MSMAll3T1071
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: precomputeTICAfMRIName: rfMRI_REST
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: precomputeGroupName: S1200_MSMAll3T1071
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: extraSuffix: 
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: PCAOutputDim: 
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: PCAInternalDim: 
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: migpResume: YES
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: sicadimIters: 100
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: sicadimOverride: 82
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: LowsICADims: 7@8@9@10@11@12@13@14@15@16@17@18@19@20@21
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: RecleanModeString: NO
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: NuisanceListTxt: 
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: FixLegacyBiasString: NO
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: confoutfile: 
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: startStep: MIGP
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: stopAfterStep: ComputeTICAFeatures
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: parLimit: -1
Wed Apr 19 09:55:51 CDT 2023:tICAPipeline.sh: MatlabMode: 0

How should I address this issue? Thank you.

Estephan

Hi Estephan,

By default, _hcp_tica_procstring is built as <hcp_cifti_tail>_<hcp_tica_surfregname>_hp<hcp_icafix_highpass>.

I believe what you are trying to get will be set automatically if you do not set _hcp_tica_procstring yourself.
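
For illustration, a minimal sketch of the batch file change (assuming QuNex batch files treat lines starting with # as comments; otherwise simply delete the line):

# _hcp_tica_procstring      : _Atlas_MSMAll_hp0_clean

With the override removed, QuNex should compose the proc string itself from hcp_cifti_tail, hcp_tica_surfregname and hcp_icafix_highpass, and the duplicated _MSMAll_hp0_clean suffix seen in the log should disappear.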

Cheers, Jure

Thanks Jure. I was able to run the tICA pipeline, but it ran into an error, which I communicated to the HCP people, and Tim Coalson believes he knows what went wrong. Addressing it will require a new commit to the HCP Pipelines. Is there a way for me to use qunex 0.97.3 but point it to an external version of the HCP Pipelines for the specific purpose of running tICA?

This should fix it:

change parallel.shlib fallback to use pids instead of predicting job … · Washington-University/HCPpipelines@fb5b68b · GitHub

I have mentioned this commit on the qunex slack. The master branch currently has another issue that we are working on, so a build of qunex on that commit may not be what we want. However, as I understand it, there is a way to tell qunex to use an external version of the pipelines, so you could try that method (specifically for tICA).

Tim

On Fri, Apr 21, 2023 at 4:16 PM Estephan Moana-Filho <estephanmoana@gmail.com> wrote:

Any chance you can share the possible solution? I can try to work with the qunex experts on a way to implement it sooner, if possible.


On Apr 21, 2023, at 3:38 PM, Tim Coalson <tim.coalson@gmail.com> wrote:
I think I may have figured out what is happening: the job numbers aren’t behaving the way I expected when there are two sets of jobs run by the same script, under whatever environment your qunex/slurm setup creates. I have a possible fix, but it will be a while before a container with it becomes available.

Tim

Hi Estephan,

Yes, that is possible. You need to have the latest HCP Pipelines locally. When running the container, you need to bind the local folder so Singularity has access to it. Next, you need to override the environment variable HCPPIPEDIR within the container. There are several ways to do this; see QuNex container deployment — QuNex documentation and Running commands against a container using qunex_container — QuNex documentation for details. You can do it by setting con_HCPPIPEDIR on the login node, or you can use the envars or bash_post parameters.
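
For illustration, a minimal sketch of the con_HCPPIPEDIR route (the local path is hypothetical, and this assumes --bind accepts a comma-separated list of bind points, as Singularity’s own --bind does):

# Hypothetical local clone of HCP Pipelines containing the fb5b68b fix; adjust the path to your setup
hcppipedir_local=${HOME}/HCPpipelines

# Export on the login node before calling qunex_container so HCPPIPEDIR is overridden inside the container
export con_HCPPIPEDIR=${hcppipedir_local}

# Then add the local folder to the bind list in the original command, e.g.:
#   --bind=${study_sharedfolder}:${study_sharedfolder},${hcppipedir_local}:${hcppipedir_local}
# and keep the rest of the hcp_temporal_ica call unchanged.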

Jure