Description:
I am trying to run an HCP task fMRI analysis on HCP-style data that we collected ourselves. To avoid large amounts of duplicate data, we do not copy all of the data into our individual home folders in our lab. I suspect I may have done something wrong there, but in general this approach has worked for us. Because there are hardly any error logs, I find it difficult to debug this issue. The only message I get is:
→ QuNex will run the command over 0 sessions. It will utilize:
Scheduled jobs: 0
Maximum sessions run in parallel for a job: 1.
Maximum elements run in parallel for a session: 1.
Up to 1 processes will be utilized for a job.
Note that in the code below, the echo command does print the expected comma-separated `sessionids`, so the issue is not that this variable is simply empty. My predicament is exacerbated by the fact that there are no logs at all:
/home1/Jaquent/Datasets/VP00211_HCP/processing/
├── logs
│ ├── batchlogs
│ ├── comlogs
│ └── runlogs
├── VP00211_3T_Prisma_HCP_batch.txt
└── VP00211_7T_Terra_HCP_batch.txt
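For reference, `${sessionids}` as echoed in the call below is just a plain comma-separated list of session names. Only `R0204_7TSESS1` is a real ID here; the second entry is a made-up placeholder for illustration:
sessionids="R0204_7TSESS1,R0205_7TSESS1"   # R0205_7TSESS1 is a placeholder, not a real session
echo "Submitting: ${sessionids}"
# prints: Submitting: R0204_7TSESS1,R0205_7TSESS1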
Call:
# Trying to submit the following participants:
echo "Submitting: ${sessionids}"
# This section runs the qunex command.
python3 $path_2_qunex_container_script/qunex_container hcp_task_fmri_analysis \
--sessionsfolder="${data_folder}/sessions/" \
--sessions="${data_folder}/processing/VP00211_7T_Terra_HCP_batch.txt" \
--sessionids="${sessionids}" \
--overwrite="yes" \
--bash_pre="module load singularity" \
--bind="/public/home2/VNLab/" \
--hcp_task_lvl1tasks="${level1_run1_folder}@${level1_run2_folder}@${level1_run3_folder}@${level1_run4_folder}" \
--hcp_task_lvl2task="${level2_folder}" \
--hcp_task_lvl1fsfs="${level1_run1_fsf}@${level1_run2_fsf}@${level1_run3_fsf}@${level1_run4_fsf}" \
--hcp_task_lvl2fsf="${level2_fsf}" \
--hcp_task_highpass="${filtering}" \
--hcp_bold_final_smoothFWHM="${smoothing}" \
--hcp_task_confound="Confound_Parameters.txt" \
--hcp_task_procstring="MSMAll_hp0_clean" \
--container=$qunex_container_file \
--scheduler="SLURM,mem-per-cpu=${memory},time=${time}, partition=blade, jobname=${jobname},mail-user=${email}, mail-type=ALL, exclude=bnode05"
Versions:
- QuNex container:
qunex_suite-0.95.2.sif
Additional things I tried:
- I verified that the first `sessionid` is in the `VP00211_7T_Terra_HCP_batch.txt`, via `grep "R0204_7TSESS1" /home1/Jaquent/Datasets/VP00211_HCP/processing/VP00211_7T_Terra_HCP_batch.txt`:
session: R0204_7TSESS1
dicom: /home1/Jaquent/Datasets/VP00211_HCP/sessions/R0204_7TSESS1/dicom
raw_data: /home1/Jaquent/Datasets/VP00211_HCP/sessions/R0204_7TSESS1/nii
data: /home1/Jaquent/Datasets/VP00211_HCP/sessions/R0204_7TSESS1/4dfp
hcp: /home1/Jaquent/Datasets/VP00211_HCP/sessions/R0204_7TSESS1/hcp
- The path `/home1/Jaquent/Datasets/VP00211_HCP/sessions/R0204_7TSESS1/hcp` does contain data, as it is 4.8 GB in size.
- The issue does not seem to be the filename of the batch file, which is typically named `sessions_hcp_batch.txt` but in this case is `VP00211_7T_Terra_HCP_batch.txt`; I tested this by renaming it.
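- One thing I have not ruled out yet (just a guess on my part): hidden carriage returns or trailing whitespace in the batch file could make the `session:` lines fail to match the IDs passed via `--sessionids`. Something like the following should make such characters visible:
# Show non-printing characters on the session lines (^M = Windows line ending; trailing spaces appear before the $)
grep "session:" /home1/Jaquent/Datasets/VP00211_HCP/processing/VP00211_7T_Terra_HCP_batch.txt | cat -A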