[RESOLVED] Qunex preparation to run the ABIDE data (in BIDS format)

I’m moving a conversation here from the Slack forum. The bolded part below is the most current question.

POST 1

I’m working through the abide1_processing_steps.md file. In preparation, I zipped the BIDS-compatible ABIDE1 data and copied it to the $my_study_folder/sessions/inbox/BIDS location. I’m hitting a bit of a snag at the following part.

# --- prepare data
/gpfs/project/fas/n3/software/qunexdev/bin/qunex_container setup_hcp \
--sessionsfolder="$my_study_folder/sessions" \
--sessions="$batch_file" \
--container="$qunex_container" \
--scheduler="SLURM,time=0-01:00:00,ntasks=1,cpus-per-task=1,mem-per-cpu=8000,partition=day"

When I run the command, I get the following message, indicating that no jobs are being run.

--> QuNex will run the command over 0 sessions. It will utilize:
Scheduled jobs: 0
Maximum sessions run in parallel for a job: 1.
Maximum elements run in parallel for a session: 1.
Up to 1 processes will be utilized for a job.

When I look at the batch.txt file that $batch_file points to, it appears to contain only commented descriptions.
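A quick way to confirm this symptom is to count the session entries in the batch file. The snippet below is a self-contained sketch with hypothetical file content; it assumes (based on the QuNex docs) that each compiled session appears in a block starting with a `session:` line.

```shell
# Hypothetical reproduction of the symptom above: a batch file that
# contains only commented descriptions and no compiled sessions.
batch_file=$(mktemp)
cat > "$batch_file" <<'EOF'
# HCP processing batch file
# --- parameters would normally be listed here
EOF

# QuNex batch files list each compiled session in a block that starts
# with a "session:" line (assumption); a count of zero matches the
# "0 sessions" message above.
grep -c '^session:' "$batch_file" || true
```

A count of zero means `create_session_info` never compiled any sessions into the file, which is why `setup_hcp` schedules no jobs.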

POST 2

After going through the error logs, I think the problem is the specification of “sessions” with the create_session_info command.

/gpfs/project/fas/n3/software/qunexdev/bin/qunex_container create_session_info \
--sessionsfolder="$my_study_folder/sessions" \
--sessions=" HCPA001 " \
--container="$qunex_container" \
--overwrite=no \
--scheduler="SLURM,time=0-01:00:00,ntasks=1,cpus-per-task=1,mem-per-cpu=8000,partition=day"

The "HCPA001" is what’s created in the tutorial video during the import of the DICOMs.

It was then suggested I take a look at the following wiki page: https://bitbucket.org/oriadev/qunex/wiki/UsageDocs/OnboardingBIDSData

POST 3

That page focuses on the import_bids command. The usage is described, but I don’t have a clear idea of what the result should be. The abide1_processing_steps.md file includes the following step:

/gpfs/project/fas/n3/software/qunexdev/bin/qunex_container import_bids \
--sessionsfolder="$my_study_folder/sessions" \
--container="$qunex_container" \
--check=any \
--scheduler="SLURM,time=0-01:00:00,ntasks=1,cpus-per-task=1,mem-per-cpu=8000,partition=day"

It runs without issue. However, afterwards, the “/info/bids/” folder is empty. I’m not quite sure what “mapping” takes place, but I’m guessing data from the zip should be unpacked into that location, no?
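The emptiness check described here can be made explicit. This is a self-contained sketch that simulates the empty folder with a temporary directory; after a successful import, the count should be greater than zero.

```shell
# Self-contained sketch of the check above: after a successful
# import_bids run, the study's info/bids folder should hold the mapped
# BIDS metadata; an empty folder means nothing was imported.
study=$(mktemp -d)
mkdir -p "$study/info/bids"          # simulate the empty result

# Count entries; 0 reproduces the "empty folder" symptom.
find "$study/info/bids" -mindepth 1 | wc -l
```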

Hi wpettine!

Welcome to the QuNex forum. It seems like import_bids did not actually import any sessions, since the subsequent commands (setup_hcp, create_session_info) are unable to find these sessions and properly prepare everything for HCP processing. Could you attach the log from the import_bids command, please? You can find logs in the $my_study_folder/processing/logs/runlogs and $my_study_folder/processing/logs/comlogs folders.

Once we see what happened there, we should be able to help you get this running.
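For finding the right log to attach, the newest file in the runlogs folder is usually the relevant one. This sketch uses a temporary directory and made-up log names to show the pattern; the `processing/logs/runlogs` layout is the one mentioned above.

```shell
# Hypothetical sketch: grab the most recently written run log, using
# the processing/logs/runlogs layout mentioned above with made-up names.
runlogs=$(mktemp -d)/processing/logs/runlogs
mkdir -p "$runlogs"
touch -t 202101010000 "$runlogs/Log-import_bids-old.log"
touch -t 202102010000 "$runlogs/Log-import_bids-new.log"

# Most recently modified log first.
ls -t "$runlogs" | head -n1
```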

Thanks for the response! The command import_bids is not producing a log file. Is there a flag that forces a log output? I don’t see that option in the documentation.

Hi!

Unlike import_dicom, import_bids has no check parameter. Try running the command without the --check=any. I am currently having some issues with accessing our compute cluster so I cannot test myself. Let me know how it goes.

One more remark: I see you are using /gpfs/project/fas/n3/software/qunexdev/bin/qunex_container. This is the develop version of the script, which is not recommended because it is not completely tested. It is better to just use qunex_container.

I ran the following command and didn’t notice anything different (no log file, etc.). What should I expect the output to be? In other words, how do I know if the command has been successful? The documentation says it performs a mapping, but I’m not sure what that looks like in practice.

qunex_container import_bids \
    --sessionsfolder="$my_study_folder/sessions" \
    --container="$qunex_container" \
    --scheduler="SLURM,time=0-01:00:00,ntasks=1,cpus-per-task=1,mem-per-cpu=8000,partition=day"

Thanks!

Hi Warren,

Please check the batch log folder /gpfs/project/fas/n3/Studies/ABIDE/abide1/processing/logs/batchlogs. There you can find the information QuNex prints to stdout, as captured by the SLURM scheduler. It seems like QuNex was trying to use Docker. Could you please double-check that your $qunex_container variable is defined correctly?

Looks like I missed the line below before the latest run.

qunex_container=/gpfs/project/fas/n3/software/Singularity/qunex_suite-0.91.11.sif

After defining it correctly, the folder /gpfs/project/fas/n3/Studies/ABIDE/abide1/info/bids is full of goodies. I’ll start sorting through the output and move through further processing.
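A small guard would have caught the forgotten definition before any jobs were submitted. This is a sketch, not part of the QuNex scripts; it simulates the unset variable and simply refuses to continue when the container image is missing, rather than letting the tool fall back to Docker.

```shell
# Sketch of a guard that would have caught the missing definition: if
# qunex_container is unset or does not point at an existing image file,
# refuse to continue instead of silently falling back to Docker.
qunex_container=""                    # simulate the forgotten line

if [ -n "$qunex_container" ] && [ -f "$qunex_container" ]; then
  echo "container OK: $qunex_container"
else
  echo "qunex_container is not set to an existing .sif image"
fi
```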

Does this mean the initial problem was with the use of the command

/gpfs/project/fas/n3/software/qunexdev/bin/qunex_container?

If so, should I replace that with qunex_container throughout the script I’m using?

No. The problem was the check flag: it is not a valid import_bids parameter, and QuNex refuses to run when it is passed. You can find the error log in the batch log folder.

Using the stable qunex_container implementation from the master branch, rather than the dev branch, is considered best practice; you should always use the stable version.

Cool. Thanks again for the assistance!