• Algorithm outputs will be submitted to our challenge website. (If a team uses pretrained models or other public datasets, the team will be required to submit an additional set of algorithm outputs produced without pretrained models or public datasets.)
  • Each team is required to submit a Docker container to the organizing team for reproducing the results. Each participating team will also need to submit a solution paper describing its algorithm to the organizers.
  • To be eligible for awards, top-entry teams are required to make their code publicly available.


For algorithm designs:
The input can be skull-stripped ADC maps, Z_ADC maps, or a combination of both.
The output is a lesion prediction map.
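For teams combining both inputs, one common design is to stack the ADC and Z_ADC maps as input channels. A minimal NumPy sketch (the array shapes, channel-first layout, and function name are illustrative assumptions, not a challenge requirement):

```python
import numpy as np

def build_input(adc: np.ndarray, z_adc: np.ndarray) -> np.ndarray:
    """Stack skull-stripped ADC and Z_ADC maps into a 2-channel volume.

    Both maps are assumed to be co-registered 3-D arrays of the same shape.
    """
    if adc.shape != z_adc.shape:
        raise ValueError("ADC and Z_ADC maps must have the same shape")
    # Channel-first layout (C, D, H, W), a common convention for 3-D CNNs.
    return np.stack([adc, z_adc], axis=0)

# Toy example with a small synthetic volume.
adc = np.zeros((8, 16, 16), dtype=np.float32)
z_adc = np.ones((8, 16, 16), dtype=np.float32)
x = build_input(adc, z_adc)
print(x.shape)  # (2, 8, 16, 16)
```

Whether the two maps are fused at the input (as above) or later in the network is a design choice left to each team.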


Example of an algorithm Docker container


GitHub repository for BONBID-HIE2023:


https://github.com/baorina/BONBID-HIE-MICCAI2023/tree/main
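For orientation, an algorithm Docker container is typically defined by a Dockerfile that installs dependencies and runs an inference script. The sketch below is illustrative only; the base image, package list, file names, and entrypoint are assumptions, and the linked repository is the authoritative reference for the expected layout.

```dockerfile
# Illustrative sketch -- follow the BONBID-HIE repository above for the
# authoritative container layout expected by the challenge platform.
FROM python:3.10-slim

RUN pip install --no-cache-dir numpy SimpleITK

COPY process.py /opt/app/process.py

# The container is assumed to read the input maps and write *.mha
# lesion predictions to the locations expected by the platform.
ENTRYPOINT ["python", "/opt/app/process.py"]
```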


Development Stage

Participating teams can submit prediction results for all training cases to verify their algorithms.

This stage is intended for designing and evaluating algorithms on the training set. We strongly encourage participating teams to perform cross-validation on the training set.
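Cross-validation on the training set can be set up with a simple case-level split. A minimal standard-library sketch (the case IDs and fold count are hypothetical; teams may equally use an existing utility such as scikit-learn's `KFold`):

```python
import random

def k_fold_splits(case_ids, k=5, seed=0):
    """Yield (train, val) case-ID lists for k-fold cross-validation."""
    ids = list(case_ids)
    random.Random(seed).shuffle(ids)          # reproducible shuffle
    folds = [ids[i::k] for i in range(k)]     # round-robin fold assignment
    for i in range(k):
        val = folds[i]
        train = [c for j, f in enumerate(folds) if j != i for c in f]
        yield train, val

# Example with hypothetical case IDs.
cases = [f"case_{i:03d}" for i in range(20)]
for train, val in k_fold_splits(cases, k=5):
    assert not set(train) & set(val)      # folds are disjoint
    assert len(train) + len(val) == 20    # every case is used exactly once
```

Splitting at the case level (rather than the slice level) avoids leaking data from the same subject into both the training and validation folds.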

For each case, the output prediction should be stored in the same format as the files in 3LABEL and saved as *.mha.

When submitting the output for all files, please zip the predictions for all cases into a single zip file, for example, test.zip.
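The packaging step can be scripted with Python's standard zipfile module. A minimal sketch (the directory and case names are hypothetical, and the empty placeholder files stand in for real *.mha volumes, which would be written by the algorithm, e.g. with SimpleITK's WriteImage):

```python
import zipfile
from pathlib import Path

pred_dir = Path("predictions")  # hypothetical output directory
pred_dir.mkdir(exist_ok=True)

# Placeholder prediction files; real *.mha volumes would be written
# by the algorithm itself (e.g. with SimpleITK).
for case in ["MGHNICU_001", "MGHNICU_002"]:
    (pred_dir / f"{case}.mha").write_bytes(b"")

# Bundle every per-case prediction into one archive for submission.
with zipfile.ZipFile("test.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for mha in sorted(pred_dir.glob("*.mha")):
        zf.write(mha, arcname=mha.name)

print(zipfile.ZipFile("test.zip").namelist())
```

Storing the files at the top level of the archive (via `arcname=mha.name`) keeps the zip layout flat; check the platform's expectations if a different structure is required.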


Eval Stage

Participating teams can submit Docker containers of their algorithms for a sanity check.

This stage is designed only for sanity-checking algorithm Docker containers. Performance in this stage is not used for ranking.

Test Stage

Participating teams can submit Docker containers of their algorithms to be run on the held-out test cases.

The test set is hidden on the server. Participating teams are required to submit algorithm Docker containers. The final ranking is based on performance in this stage.