[RESOLVED] Fc_compute_gbc: incompatible array size during mFz calculation

Description:

When executing fc_compute_gbc, Matlab errors out while calculating the mean Fz value across all voxels due to incompatible array sizes, specifically because array 'op2' is size 0x0. What is that array and how can I fix this?

Call:

qunex_container fc_compute_gbc \
--container="/gpfs/gibbs/pi/n3/software/Singularity/qunex_suite-1.0.1.sif" \
--flist="/gpfs/gibbs/pi/n3/Studies/PREDICT/processing/lists/bold_allsiemens_dtseries_test.list" \
--command='mFz:' \
--sroiinfo='' \
--troiinfo='' \
--frames='' \
--targetf='GBC' \
--parjobs=1 \
--options='eventdata=all|ignore=udvarsme|badevents=use|fcmeasure=r|savegroup=none|saveind=all|savesessionid=false|itargetf=sfolder|rsmooth=|rdilate=|time=true|verbose=true|debug=true' \
--scheduler=SLURM,time=01:00:00,mem-per-cpu=32000,partition=pi_anticevic,mail-type=all,nodes=1,cpus-per-task=1

Logs:

The only output/log is qunex_container log:

---> Setting up Octave 


.......................... Running QuNex v1.0.1 [QIO] ..........................


--- Full QuNex call for command: fc_compute_gbc

qunex fc_compute_gbc --flist="/gpfs/gibbs/pi/n3/Studies/PREDICT/processing/lists/bold_allsiemens_dtseries_test.list" --command="mFz:" --sroiinfo="" --troiinfo="" --frames="" --targetf="GBC" --options="eventdata=all|ignore=udvarsme|badevents=use|fcmeasure=r|savegroup=none|saveind=all|savesessionid=false|itargetf=sfolder|rsmooth=|rdilate=|time=true|verbose=true|debug=true"

---------------------------------------------------------



Running:
>>> fc_compute_gbc('/gpfs/gibbs/pi/n3/Studies/PREDICT/processing/lists/bold_allsiemens_dtseries_test.list', 'mFz:', '', '', '', 'GBC', 'eventdata=all|ignore=udvarsme|badevents=use|fcmeasure=r|savegroup=none|saveind=all|savesessionid=false|itargetf=sfolder|rsmooth=|rdilate=|time=true|verbose=true|debug=true')


fc_compute_gbc options used:
---------------------------
      sessions: all
     eventdata: all
        ignore: udvarsme
     badevents: use
     fcmeasure: r
     savegroup: none
       saveind: all
 savesessionid: false
      itargetf: sfolder
       rsmooth: 
       rdilate: 
         vstep: 12000
       verbose: true
         debug: true
          time: true


Checking ...
... found results folder (GBC)
ans =
{
  [1,1] = fc
  [1,2] = p
  [1,3] = r
  [1,4] = z
}

 ... listing files to process
 ... reading file list: 
     - session id: PV01606_20230614
       ... found image file (/gpfs/gibbs/pi/n3/Studies/PREDICT/sessions/PV01606_20230614/images/functional/bold1_Atlas_s_hpss_res-mVWMWB1d_lpss.dtseries.nii)
       ... found image file (/gpfs/gibbs/pi/n3/Studies/PREDICT/sessions/PV01606_20230614/images/functional/bold2_Atlas_s_hpss_res-mVWMWB1d_lpss.dtseries.nii)
       ... found image file (/gpfs/gibbs/pi/n3/Studies/PREDICT/sessions/PV01606_20230614/images/functional/bold3_Atlas_s_hpss_res-mVWMWB1d_lpss.dtseries.nii)
       ... found image file (/gpfs/gibbs/pi/n3/Studies/PREDICT/sessions/PV01606_20230614/images/functional/bold4_Atlas_s_hpss_res-mVWMWB1d_lpss.dtseries.nii)

 ... done ... done.

---------------------------------
Processing session PV01606_20230614
     ... creating ROI masks
     ... done
     ... reading image file(s)
         -> 1332 frames read, done.
     ... generating extraction sets ...
img_get_extraction_matrices options used:
----------------------------------------
    ignore: udvarsme
 badevents: use
   verbose: true
     debug: true
 done.
     ... computing gbc
         ... set timeseries
         ... extracted ts

img_compute_gbc_fc
... setting up data
... starting GBC on 
... 91282 voxels & 1323 frames to process in 8 steps
... computing GBC for voxels:
     ...        1:12000
     -> fc [32.799 s]
     -> clip [0.000 s]
     -> Fz [23.627 s]
     -> mFz
Matlab Error! Processing Failed!
mx_el_ge: nonconformant arguments (op1 is 12000x91282, op2 is 0x0)


ERROR: fc_compute_gbc failed! Please check output / log!

Hi,

Let us investigate this and get back to you as soon as we have an answer.

Best, Jure

cz387,

When running the command, please specify the parameter for each of the GBC “commands”. In your --command parameter you specify that you would like to compute the mean Fisher z value, but you do not specify the threshold to use when filtering the connections over which the mean Fz should be computed. If you would like to include all connections, you can set the threshold to 0, e.g.:

--command='mFz:0'
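For intuition, here is a rough numpy sketch (not QuNex's actual implementation) of the fc → clip → Fz → mFz steps seen in the log, and of how an elementwise comparison against an empty threshold produces a nonconformant-size error like the one above:

```python
import numpy as np

rng = np.random.default_rng(0)
ts = rng.standard_normal((5, 100))       # 5 toy "voxels" x 100 frames
r = np.corrcoef(ts)                      # fc: correlation matrix
r = np.clip(r, -0.9999, 0.9999)          # clip: keep arctanh finite
Fz = np.arctanh(r)                       # Fz: Fisher z transform

threshold = 0.0                          # mFz:0 -> include all connections
mask = Fz >= threshold
mFz = (Fz * mask).sum(axis=1) / mask.sum(axis=1)   # per-voxel mean over passing connections
print(mFz.shape)                         # (5,)

# With 'mFz:' the threshold is empty; an elementwise comparison against an
# empty array then fails, analogous to Octave's
# "mx_el_ge: nonconformant arguments (op1 is 12000x91282, op2 is 0x0)":
try:
    Fz >= np.array([])
except ValueError as e:
    print("comparison failed:", e)
```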

In the future, we will add more specific errors and warnings to alert users when they have not specified the relevant parameter for a GBC command.

All the best,

Grega

Thank you! When I specified mFz:0, I no longer got that error. The output gbc_bold_allsiemens_dtseries_test_timeseries_mFz_0_r.dscalar.nii was created in images/functional for that test session.

There are no new batchlogs, runlogs, or comlogs, but the qunex_container log in my home folder says the following:

---> Setting up Octave 


.......................... Running QuNex v1.0.1 [QIO] ..........................


--- Full QuNex call for command: fc_compute_gbc

qunex fc_compute_gbc --flist="/gpfs/gibbs/pi/n3/Studies/PREDICT/processing/lists/bold_allsiemens_dtseries_test.list" --command="mFz:0" --sroiinfo="" --troiinfo="" --frames="" --targetf="GBC" --options="eventdata=all|ignore=udvarsme|badevents=use|fcmeasure=r|savegroup=none|saveind=all|savesessionid=false|itargetf=sfolder|rsmooth=|rdilate=|time=true|verbose=true|debug=true"

---------------------------------------------------------



Running:
>>> fc_compute_gbc('/gpfs/gibbs/pi/n3/Studies/PREDICT/processing/lists/bold_allsiemens_dtseries_test.list', 'mFz:0', '', '', '', 'GBC', 'eventdata=all|ignore=udvarsme|badevents=use|fcmeasure=r|savegroup=none|saveind=all|savesessionid=false|itargetf=sfolder|rsmooth=|rdilate=|time=true|verbose=true|debug=true')


fc_compute_gbc options used:
---------------------------
      sessions: all
     eventdata: all
        ignore: udvarsme
     badevents: use
     fcmeasure: r
     savegroup: none
       saveind: all
 savesessionid: false
      itargetf: sfolder
       rsmooth: 
       rdilate: 
         vstep: 12000
       verbose: true
         debug: true
          time: true


Checking ...
... found results folder (GBC)
ans =
{
  [1,1] = fc
  [1,2] = p
  [1,3] = r
  [1,4] = z
}

 ... listing files to process
 ... reading file list: 
     - session id: PV01606_20230614
       ... found image file (/gpfs/gibbs/pi/n3/Studies/PREDICT/sessions/PV01606_20230614/images/functional/bold1_Atlas_s_hpss_res-mVWMWB1d_lpss.dtseries.nii)
       ... found image file (/gpfs/gibbs/pi/n3/Studies/PREDICT/sessions/PV01606_20230614/images/functional/bold2_Atlas_s_hpss_res-mVWMWB1d_lpss.dtseries.nii)
       ... found image file (/gpfs/gibbs/pi/n3/Studies/PREDICT/sessions/PV01606_20230614/images/functional/bold3_Atlas_s_hpss_res-mVWMWB1d_lpss.dtseries.nii)
       ... found image file (/gpfs/gibbs/pi/n3/Studies/PREDICT/sessions/PV01606_20230614/images/functional/bold4_Atlas_s_hpss_res-mVWMWB1d_lpss.dtseries.nii)

 ... done ... done.

---------------------------------
Processing session PV01606_20230614
     ... creating ROI masks
     ... done
     ... reading image file(s)
         -> 1332 frames read, done.
     ... generating extraction sets ...
img_get_extraction_matrices options used:
----------------------------------------
    ignore: udvarsme
 badevents: use
   verbose: true
     debug: true
 done.
     ... computing gbc
         ... set timeseries
         ... extracted ts

img_compute_gbc_fc
... setting up data
... starting GBC on 
... 91282 voxels & 1323 frames to process in 8 steps
... computing GBC for voxels:
     ...        1:12000
     -> fc [32.551 s]
     -> clip [0.000 s]
     -> Fz [29.934 s]
     -> mFz [0.735 s]
     ...    12001:24000
     -> fc [32.800 s]
     -> clip [0.000 s]
     -> Fz [30.159 s]
     -> mFz [0.727 s]
     ...    24001:36000
     -> fc [32.707 s]
     -> clip [0.000 s]
     -> Fz [30.894 s]
     -> mFz [0.727 s]
     ...    36001:48000
     -> fc [32.797 s]
     -> clip [0.000 s]
     -> Fz [29.800 s]
     -> mFz [0.653 s]
     ...    48001:60000
     -> fc [32.714 s]
     -> clip [0.000 s]
     -> Fz [29.758 s]
     -> mFz [0.731 s]
     ...    60001:72000
     -> fc [32.550 s]
     -> clip [0.000 s]
     -> Fz [30.365 s]
     -> mFz [1.656 s]
     ...    72001:84000
     -> fc [32.575 s]
     -> clip [0.000 s]
     -> Fz [31.381 s]
     -> mFz [0.726 s]
     ...    84001:91282
     -> fc [19.928 s]
     -> clip [0.000 s]
     -> Fz [17.048 s]
     -> mFz [0.441 s]
... done! [487.639 s]
         ... computed gbc maps
     ... saving gcb

             -> /gpfs/gibbs/pi/n3/Studies/PREDICT/sessions/PV01606_20230614/images/functional/gbc_bold_allsiemens_dtseries_test_timeseries_mFz_0_r done.


Completed


---> Successful completion of task

Are the logs maybe in the folder you executed the command from (the location where you were in the terminal when you called the command) or in the home folder (as processing/logs)?

In order for QuNex to put the logs in the study’s processing logs folder, it needs to know where that is. Your command above uses neither the --studyfolder nor the --sessionsfolder parameter, so QuNex cannot know where to put the logs. Try adding

--logfolder=/gpfs/gibbs/pi/n3/Studies/PREDICT/processing/logs

to the call. Alternatively, you can point --logfolder somewhere else and the logs will be written there (the same holds for other commands).

Best, Jure

Ah, I see. After I added --sessionsfolder (and also --logfolder, though as you say, specifying --sessionsfolder alone is enough), the qunex_container log is written to processing/logs/batchlogs instead of my home folder. Thank you!
However, I do not see any new logs in runlogs or comlogs. I ran the command from the study folder and I don’t see any new files there either. The qunex_container logs from other commands I ran in the past contain lines like Command log: <runlog_path> and Command output: <comlog_path> or You can follow command's progress in: <comlog_path>, but I do not see anything like that in the log for this command.

Call:

qunex_container fc_compute_gbc \
--container="/gpfs/gibbs/pi/n3/software/Singularity/qunex_suite-1.0.1.sif" \
--sessionsfolder="/gpfs/gibbs/pi/n3/Studies/PREDICT/sessions" \
--logfolder="/gpfs/gibbs/pi/n3/Studies/PREDICT/processing/logs" \
--flist="/gpfs/gibbs/pi/n3/Studies/PREDICT/processing/lists/ptseries_gbc_test.list" \
--command='mFz:0' \
--parjobs=1 \
--options='eventdata=all|ignore=udvarsme|badevents=use|fcmeasure=r|savegroup=none|saveind=all|savesessionid=true|itargetf=sfolder|rsmooth=|rdilate=|time=true|verbose=true|debug=true' \
--scheduler=SLURM,time=01:00:00,mem-per-cpu=32000,partition=pi_anticevic,mail-type=all,nodes=1,cpus-per-task=1

Log:

---> Setting up Octave 


.......................... Running QuNex v1.0.1 [QIO] ..........................


--- Full QuNex call for command: fc_compute_gbc

qunex fc_compute_gbc --sessionsfolder="/gpfs/gibbs/pi/n3/Studies/PREDICT/sessions" --logfolder="/gpfs/gibbs/pi/n3/Studies/PREDICT/processing/logs" --flist="/gpfs/gibbs/pi/n3/Studies/PREDICT/processing/lists/ptseries_gbc_test.list" --command="mFz:0" --options="eventdata=all|ignore=udvarsme|badevents=use|fcmeasure=r|savegroup=none|saveind=all|savesessionid=true|itargetf=sfolder|rsmooth=|rdilate=|time=true|verbose=true|debug=true"

---------------------------------------------------------



Running:
>>> fc_compute_gbc('/gpfs/gibbs/pi/n3/Studies/PREDICT/processing/lists/ptseries_gbc_test.list', 'mFz:0', '', '', '', '', 'eventdata=all|ignore=udvarsme|badevents=use|fcmeasure=r|savegroup=none|saveind=all|savesessionid=true|itargetf=sfolder|rsmooth=|rdilate=|time=true|verbose=true|debug=true')


fc_compute_gbc options used:
---------------------------
      sessions: all
     eventdata: all
        ignore: udvarsme
     badevents: use
     fcmeasure: r
     savegroup: none
       saveind: all
 savesessionid: true
      itargetf: sfolder
       rsmooth: 
       rdilate: 
         vstep: 12000
       verbose: true
         debug: true
          time: true


Checking ...
... found results folder (.)
ans =
{
  [1,1] = fc
  [1,2] = p
  [1,3] = r
  [1,4] = z
}

 ... listing files to process
 ... reading file list: 
     - session id: PV01606_20230614
       ... found image file (/gpfs/gibbs/pi/n3/Studies/PREDICT/sessions/PV01606_20230614/images/functional/bold1_Atlas_s_hpss_res-mVWMWB1d_lpss_BOLD-CAB-NP-v1.0.ptseries.nii)
       ... found image file (/gpfs/gibbs/pi/n3/Studies/PREDICT/sessions/PV01606_20230614/images/functional/bold3_Atlas_s_hpss_res-mVWMWB1d_lpss_BOLD-CAB-NP-v1.0.ptseries.nii)

     - session id: LA08975_20230102
       ... found image file (/gpfs/gibbs/pi/n3/Studies/PREDICT/sessions/LA08975_20230102/images/functional/bold1_Atlas_s_hpss_res-mVWMWB1d_lpss_BOLD-CAB-NP-v1.0.ptseries.nii)
       ... found image file (/gpfs/gibbs/pi/n3/Studies/PREDICT/sessions/LA08975_20230102/images/functional/bold3_Atlas_s_hpss_res-mVWMWB1d_lpss_BOLD-CAB-NP-v1.0.ptseries.nii)

 ... done ... done.

---------------------------------
Processing session PV01606_20230614
     ... creating ROI masks
     ... done
     ... reading image file(s)
         -> 666 frames read, done.
     ... generating extraction sets ...
img_get_extraction_matrices options used:
----------------------------------------
    ignore: udvarsme
 badevents: use
   verbose: true
     debug: true
 done.
     ... computing gbc
         ... set timeseries
         ... extracted ts

img_compute_gbc_fc
... setting up data
... starting GBC on 
... 718 voxels & 657 frames to process in 1 steps
... computing GBC for voxels:
     ...          1:718
     -> fc [0.017 s]
     -> clip [0.000 s]
     -> Fz [0.015 s]
     -> mFz [0.000 s]
... done! [0.055 s]
         ... computed gbc maps
     ... saving gcb

             -> /gpfs/gibbs/pi/n3/Studies/PREDICT/sessions/PV01606_20230614/images/functional/gbc_PV01606_20230614_ptseries_gbc_test_timeseries_mFz_0_r done.

---------------------------------
Processing session LA08975_20230102
     ... creating ROI masks
     ... done
     ... reading image file(s)
         -> 666 frames read, done.
     ... generating extraction sets ...
img_get_extraction_matrices options used:
----------------------------------------
    ignore: udvarsme
 badevents: use
   verbose: true
     debug: true
 done.
     ... computing gbc
         ... set timeseries
         ... extracted ts

img_compute_gbc_fc
... setting up data
... starting GBC on 
... 718 voxels & 663 frames to process in 1 steps
... computing GBC for voxels:
     ...          1:718
     -> fc [0.011 s]
     -> clip [0.000 s]
     -> Fz [0.014 s]
     -> mFz [0.000 s]
... done! [0.040 s]
         ... computed gbc maps
     ... saving gcb

             -> /gpfs/gibbs/pi/n3/Studies/PREDICT/sessions/LA08975_20230102/images/functional/gbc_LA08975_20230102_ptseries_gbc_test_timeseries_mFz_0_r done.


Completed


---> Successful completion of task

Since the command is working for me, I don’t need to see the runlogs or comlogs right now, but it would be helpful in the future.


Also, I was wondering: is it possible to get a dt/ptseries output from this function? Currently I can get d/pscalar outputs from this command and they look great! I know it is possible to get a dt/ptseries output from fc_compute_wrapper, but is it possible with fc_compute_gbc?

Thank you for your help!

cz387, hi!

Regarding logging
The fc_compute_[gbc|seedmaps|roifc], fc_extract_roi_timeseries and a number of other commands are wrappers for functions written in Matlab. I believe that for these commands, the CLI input is simply converted to the corresponding Matlab/Octave call and executed, and no run- or comlogs are recorded. We should probably add run- and comlog output in these cases as well.

Regarding parcellation
If you provide a ptseries as input to GBC (and other functions), a pscalar result file will be generated. The function currently does not support parcellated results based on dense input. You have two options for obtaining parcellated GBC results: first, parcellate the BOLD timeseries before calling GBC (this has the added benefit of increased signal-to-noise ratio and faster GBC computation); second, parcellate the results after GBC computation. You can achieve either using wb_command or using QuNex. QuNex does not have a “parcellate” command per se, but you can use the fc_extract_roi_timeseries command to the same effect.
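For the first option, the arithmetic of mean-based parcellation is just averaging the timeseries of all grayordinates that share a parcel label. A toy numpy sketch (sizes and labels are made up; real data lives in CIFTI files and is handled by wb_command or QuNex):

```python
import numpy as np

# toy dense data: 6 grayordinates x 10 frames, plus a parcel label per grayordinate
dense = np.arange(60, dtype=float).reshape(6, 10)
labels = np.array([0, 0, 1, 1, 1, 2])    # 3 hypothetical parcels

# parcellate by mean: one averaged timeseries per parcel
parcels = np.array([dense[labels == k].mean(axis=0) for k in np.unique(labels)])
print(parcels.shape)                     # (3, 10)
```

Running GBC on the (much smaller) parcellated matrix is what makes the subsequent computation faster.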

The fc_extract_roi_timeseries command takes a dense file, extracts values for each specified ROI for each volume, and can save those values in multiple formats, including a ptseries file. The advantage of using this command is that you can specify what value to extract: the mean, median, max or min across all the grayordinates that fall within an ROI, or the first eigenvariate of the ROI across all the frames of the image.
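As a rough sketch of the difference between these extraction options (assuming an SPM-style definition of the first eigenvariate; QuNex's exact conventions may differ):

```python
import numpy as np

rng = np.random.default_rng(1)
roi_ts = rng.standard_normal((20, 100))  # 20 grayordinates in one ROI x 100 frames

roi_mean = roi_ts.mean(axis=0)           # mean across grayordinates: one value per frame
roi_max = roi_ts.max(axis=0)             # likewise for max (median/min are analogous)

# first eigenvariate: the dominant temporal mode of the demeaned ROI data,
# here with the SPM-style 1/sqrt(n) scaling (an assumption, not QuNex's code)
centered = roi_ts - roi_ts.mean(axis=1, keepdims=True)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
eigenvariate = s[0] * vt[0] / np.sqrt(roi_ts.shape[0])
print(roi_mean.shape, eigenvariate.shape)   # (100,) (100,)
```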

You can read more about the command in the online tutorial. Unfortunately, the function-specific documentation is not yet published online, but you can access it by running:

qunex fc_extract_roi_timeseries --help

All the best,

Grega

Hi Grega,

Thank you! I have been parcellating dense timeseries with the parcellate_bold command, then feeding the resulting ptseries file into fc_compute_gbc and getting pscalar output. I will use the pscalar file as input to run_qc to create PNGs where I can quickly browse/QC the parcellated functional connectivity maps and connectivity matrices.

I noticed that when I used fc_compute_wrapper with the same input (ptseries), I received ptseries and csv outputs (when extractdata='yes'). My question is: why do fc_compute_gbc and fc_compute_wrapper produce different output types for the same input (ptseries → pscalar vs ptseries → ptseries + csv), and is it possible to get ptseries + csv outputs from fc_compute_gbc as well?

Here are examples of the commands I have used:

qunex_container fc_compute_wrapper \
--container="/gpfs/gibbs/pi/n3/software/Singularity/qunex_suite-1.0.1.sif" \
--sessionsfolder="/gpfs/gibbs/pi/n3/Studies/PREDICT/sessions" \
--sessions="PV01606_20230614" \
--calculation="gbc" \
--runtype="individual" \
--gbc-command='mFz:0' \
--inputfiles="bold1_Atlas_s_hpss_res-mVWMWB1d_lpss_BOLD-CAB-NP-v1.0.ptseries.nii,bold3_Atlas_s_hpss_res-mVWMWB1d_lpss_BOLD-CAB-NP-v1.0.ptseries.nii" \
--inputpath="images/functional" \
--extractdata="yes" \
--outname="GBC_boldAP_Atlas_s_hpss_res-mVWMWB1d_lpss_BOLD-CAB-NP-v1.0"  \
--overwrite="yes" \
--ignore="udvarsme" \
--mask="0" \
--rsmooth="0" \
--rdilate="0" \
--vstep="1000" \
--verbose="true" \
--time="true" \
--covariance="true" \
--parsessions="1" \
--parjobs="1" \
--scheduler=SLURM,time=01:00:00,mem-per-cpu=16000,partition=pi_anticevic,mail-type=all,nodes=1,cpus-per-task=1
qunex_container fc_compute_gbc \
--container="/gpfs/gibbs/pi/n3/software/Singularity/qunex_suite-1.0.1.sif" \
--sessionsfolder="/gpfs/gibbs/pi/n3/Studies/PREDICT/sessions" \
--logfolder="/gpfs/gibbs/pi/n3/Studies/PREDICT/processing/logs" \
--flist="/gpfs/gibbs/pi/n3/Studies/PREDICT/processing/lists/PV01606_20230614_ptseries.list" \
--command='mFz:0' \
--parjobs=1 \
--options='sessions=all|eventdata=all|ignore=udvarsme|badevents=use|fcmeasure=r|savegroup=none|saveind=all|savesessionid=true|itargetf=sfolder|rsmooth=|rdilate=|step=1000|time=true|verbose=true|debug=true' \
--scheduler=SLURM,time=01:00:00,mem-per-cpu=32000,partition=pi_anticevic,mail-type=all,nodes=1,cpus-per-task=1

This is the content of PV01606_20230614_ptseries.list :

session id: PV01606_20230614
    file:/gpfs/gibbs/pi/n3/Studies/PREDICT/sessions/PV01606_20230614/images/functional/bold1_Atlas_s_hpss_res-mVWMWB1d_lpss_BOLD-CAB-NP-v1.0.ptseries.nii
    file:/gpfs/gibbs/pi/n3/Studies/PREDICT/sessions/PV01606_20230614/images/functional/bold3_Atlas_s_hpss_res-mVWMWB1d_lpss_BOLD-CAB-NP-v1.0.ptseries.nii

Catrin, hi!

fc_compute_wrapper is a bash script that was written to provide a single point of entry for a number of different functional connectivity analyses. The advantage, besides a single point of entry, is that it combines multiple steps in a single command. In your case, it combines a call to the function that computes GBC with a subsequent extraction of data using wb_command. The disadvantage is that it becomes unwieldy to expose all the parameters and updated functionality of the individual analyses. For example, fc_compute_wrapper calls the old GBC script, which does not offer the latest functionality; it also does not support the latest functionality related to ROI definition and the like. As it is difficult to support and upgrade, we decided to deprecate fc_compute_wrapper and advise users to call the latest analysis functions directly.

fc_compute_gbc is a direct call to a function written in Matlab and as such provides access to all the parameters and functionality. The function itself, however, does not include the functionality to output the results as a text file; that needs to be done with a separate command. To extract values for all the parcels in a parcellated GBC file, you can use the fc_extract_roi_timeseries command. To use it, generate a list of the relevant GBC files to pass as the flist parameter. You can then run:

qunex_container fc_extract_roi_timeseries \
    --container="/gpfs/gibbs/pi/n3/software/Singularity/qunex_suite-1.0.1.sif" \
    --flist="<path to the list file>" \
    --roiinfo="parcels:all" \
    --targetf="<path to the folder where results are to be saved>" \
    --options="savegroup:long|saveind:wide"

In this case the data for each file will be extracted to a wide-format text file, and for all the sessions to a long-format text file. You can set either of those to “none” if you do not want individual or joined group files, and you can change the formats. You can add itargetf:sfolder to options if you would like the resulting files to be saved in the original sessions’ folders.

Alternatively, to get the same output as in the fc_compute_wrapper, you could run:

wb_command -nifti-information <input cifti file> -print-matrix >> <output.csv file>

All the best,

Grega