Challenge @ ISBI 2014

Evaluation results are now available for segmentation and landmarks

The participants of the VISCERAL Benchmark Anatomy2 will have the chance to present their intermediate results at the "VISCERAL Organ Segmentation and Landmark Detection Challenge" at the 2014 IEEE International Symposium on Biomedical Imaging (ISBI) in Beijing, China. An overall description of the VISCERAL Anatomy2 benchmark and the structure of the available medical images and annotations can be found on the main Anatomy Benchmark webpage. This page contains the details of the ISBI challenge in particular, such as the available dataset, the submission of results, the evaluation, and the related deadlines.

Challenge Structure and Participation

In this challenge, a set of annotated medical imaging data is provided to the participants, along with a powerful complimentary cloud-computing instance (8-core CPU with 16 GB RAM) on which participant algorithms can be developed and evaluated. The available data contains segmentations and landmarks of several different anatomical structures in different image modalities, e.g. CT and MRI (see the link above for a detailed list of our data). The participants do NOT have to address all the tasks involved in this data; rather, they can attempt any sub-problem thereof. For instance, an algorithm that localizes vertebral landmarks in CT images will only be evaluated on that task, whereas an algorithm that segments all organs in all modalities will be evaluated in every category for which it outputs results. In other words, we will present per-anatomy, per-modality evaluation results, depending on the nature of the participating algorithms and the attempted image analysis tasks. Indeed, the vision of VISCERAL is to create a single, large, multi-purpose medical image dataset on which different research groups can test their specific applications and solutions.

The registration for this challenge is now open. It follows the same procedure as participation in the Anatomy2 benchmark; see the details on the main Anatomy2 webpage. Having filled in the participant details there and signed the agreement form, each participating group receives access to a virtual cloud-computing instance, whose operating system can be chosen during registration from 5 options including Windows and Linux. Participants get administrator rights on this virtual machine (VM) and can access it remotely to deploy their algorithms and any supporting libraries/applications. The training image and annotation files are accessible within these VMs through secured links.

For the purposes of this challenge, the test images will also be made available through such links closer to the submission deadline. Participants are expected to compute the corresponding annotations (segmentations and/or landmark locations) and save them in a given filename format. These files can then be submitted by the participants from within their VM through a script.

Meanwhile, the participants will present the methodology they used for the challenge at the challenge session and describe it in a short paper, which will be compiled and published online in the challenge proceedings.

Important Dates

Jan 14, Tue √ Registration opens.
Feb 17, Mon √ Training dataset is available.
Mar 21, Fri  ISBI early-bird registration.  (See below to receive notification of acceptance by this time)
Apr 9, Wed  Deadline for INITIAL SUBMISSION of abstracts (see below)
Apr 11, Fri  Notification of acceptance. ISBI test data is released.
Apr 21, Mon  Deadline for submitting ISBI test annotations (through participant VMs).
Apr 30, Wed √ Deadline for FINAL SUBMISSION of contributions for the challenge proceedings (see below)
May 1, Thu  Presentation at the challenge session at IEEE ISBI in Beijing, China.

NEWS: The challenge session was a success, thanks to all participants!

Summary of Participation Steps

→ If you wish to present your results at the challenge:
    • Follow the abstract/PDF submission procedure given below to contribute to the challenge proceedings.
    • Apply your algorithm to the test data (distributed closer to ISBI) and submit your annotation results through the virtual machines.
    • Present your approach at the ISBI challenge session.

→ If you are not a contributor, you are still welcome to the challenge session, where the organizers will announce the evaluation results.

Submission of Contributions

We plan to publish the proceedings of this challenge online. To be placed in such proceedings and to have a presentation slot during the challenge session, the participants need to follow the two-stage submission procedure below:

• Initial Submission: A single-blind review process will be applied for challenge participation. The initial submission requires a max. 2000-character abstract submitted through our challenge submission page at:

• Final Submission: This is the PDF submission of a 2-to-5-page contribution explaining the methodology and the results. It is submitted at the same submission page above, using the following one-column paper format:

Formatting Sample: ISBI2014_Visceral_paper_template.pdf
Latex Template:
( Participants may also prepare their submissions in Word or other programs by following the formatting sample above, as templates for those are not provided here. )

Additional notes on copyright for the submitted material: For the proceedings copyright, the ceur-ws policy will apply. In particular, the copyright and any similar rights to the material in the author contributions for the (online) published proceedings remain with the papers' authors. That is, the participants fully keep the copyright of their content and only grant us a non-exclusive license to publish their papers as a collection in a proceedings volume, the copyright of which as a whole is held by the proceedings editors. Nevertheless, the authors should make sure that their content does not infringe any copyright laws, e.g. that its copyright has not already been transferred exclusively to a third party.

Virtual Machines in the Cloud

The vision of the VISCERAL project is the automatic annotation and evaluation of very large datasets on the cloud. Accordingly, it requires that participants provide us with compiled annotation executables, which we then run on the test data in a VM during the evaluation phase. This is NOT a requirement for this ISBI challenge, which only takes annotation volumes from the participants (not algorithm executables). Nonetheless, participants are highly recommended to follow the general file access and VM evaluation guidelines, so that they can easily participate in our continued series of benchmarks with a growing amount of available data.

Once the registration is completed and the signed data agreement form is submitted, the training images and annotations are available for access from the participant virtual machines (VMs) through secured file links. A list of these links, as well as participant-specific access keys, is provided to registered participants through the participant dashboard. Details on VM access can also be found in a PDF file (named "Azure Cloud Guidelines") on that same participant dashboard.

Training Dataset

[Figures (linked on the right): ISBI training segmentations and ISBI training landmarks]

Images: 15 volumes each for 4 different image modalities and fields-of-view, adding up to 60 volumes in total. See the Anatomy2 page for a detailed description. Please note that the ethics approval for these clinical images does not allow their download from the cloud/VMs.

Annotations: In each volume, up to 20 structures are segmented and up to 53 landmark locations are selected. The missing annotations are due to poor visibility of the structures in certain image modalities or due to such structures being outside the field-of-view. Accordingly, in all 60 volumes, a total of 890 structures are segmented and 2420 landmarks are selected. A breakdown of annotations per anatomy can be seen in figures linked on the right and also in this factsheet.

Test Dataset and Evaluation

Closer to the final submission deadline, 5 new image volumes per modality (a total of 20 volumes) will be provided to the participants through links accessible within their VMs. The participants will compute the segmentations/landmarks of these volumes, store them in their VM in a file format to be announced here, and submit them by running a script on their VM. The submitted annotations do not need to be complete; i.e. only segmentation or landmark results, as well as results for only a subset of anatomical structures and/or image modalities, may be submitted. For instance, a participant with a tailored algorithm for liver segmentation in contrast-enhanced CT images can submit only the 5 segmentations computed for that structure and modality.

Submitted results will be evaluated by the organizers using this evaluation tool for the given performance metrics.
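The exact metrics are those reported by the evaluation tool above; purely as an illustration, overlap measures such as the Dice coefficient are commonly used for segmentations, and the Euclidean distance (in mm, accounting for voxel spacing) for landmarks. The following is a minimal sketch under that assumption, not the official evaluation code:

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice overlap between two binary segmentation masks (1.0 = identical)."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated here as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

def landmark_error_mm(pred, truth, spacing):
    """Euclidean distance (mm) between predicted and true landmark coordinates,
    given in voxel indices, scaled by the per-axis voxel spacing in mm."""
    d = (np.asarray(pred, dtype=float) - np.asarray(truth, dtype=float)) \
        * np.asarray(spacing, dtype=float)
    return np.linalg.norm(d, axis=-1)

# Toy example: two overlapping 2D masks (4 and 6 voxels, 4 shared)
gt   = np.zeros((4, 4), dtype=bool); gt[1:3, 1:3] = True
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:4] = True
print(dice_coefficient(pred, gt))  # 2*4 / (6+4) = 0.8
```

Per-anatomy, per-modality scores as described above would then simply be such values averaged over the submitted test volumes.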

Note that in our future benchmarks (but not for ISBI 2014), the test data will NOT be accessible directly by the participants. Instead, the participants provide us with a compiled annotation executable that can be called in a pre-defined manner to produce results for any input image. The organizers will use these executables in the VM environment to annotate and evaluate a large set of medical test images. Indeed, this evaluation technique was used in our first Anatomy1 benchmark.

Results Submission through Virtual Machines

The procedure explained below only applies to this ISBI challenge, as it involves submitting the annotation files themselves, in contrast to the general structure of our challenge series, which involves the submission of annotation executables in the VM.

The participants can access the filenames of the test set for this challenge through their participant dashboards. For now, only the image files (/Volumes/*) are accessible for download. The participants then generate the corresponding annotations and place them in a single folder in the VM, following the file naming conventions advertised in our other challenges. This folder is then submitted (uploaded) by running the following Java executable in the VM: Download the Visceral Participant Image Upload (VPIU) tool

Note that only the annotation filenames listed in the test-set list (in the dashboard) will be uploaded by the tool. The corresponding ground-truth annotations will be released to all participants after the ISBI challenge day. A user manual for the Java submission tool is provided here: Manual for the VPIU submission tool


This ISBI challenge is organized by Orcun Goksel and Bjoern Menze, together with the VISCERAL Consortium.

To keep up to date with the latest news on VISCERAL benchmarks, subscribe to the VISCERAL mailing list.

For specific questions regarding this ISBI challenge, you may contact Orcun Goksel at his given email address.