Benchmark 1 (Anatomy1)
Participating in the Benchmark
- Register here: http://visceral.eu:8080/register/Registration.xhtml
- Read the participation specification (updated to v1.2 on 10 December 2013; check regularly for updates). This is the definitive specification for participating in the benchmark: information in this document supersedes information published anywhere else!
- It is possible to obtain some teaser data before registering. The full training dataset is only available after registration.
The first Benchmark focuses on whole-body labelling of 3D medical imaging data. This page gives an overview of Benchmark 1, with links to documents that provide more detail.
Benchmark 1 will be open for participation from August 2013 to November 2013. The organisers will make available manually annotated data created by radiologists; an example is shown on the right.
Annotated structures found in the first batch of the training gold corpus (more to come):
1. Segmentations: left/right kidney, spleen, liver, left/right lung, urinary bladder, rectus abdominis muscle, 1st lumbar vertebra, pancreas, left/right psoas major muscle, gallbladder, sternum, aorta, trachea, left/right adrenal gland.
2. Landmarks: lateral end of clavicula, crista iliaca, symphysis below, trochanter major, trochanter minor, tip of aortic arch, trachea bifurcation, aortic bifurcation
There are two tasks in which it is possible to participate:
1. Multi-layered tasks: (1) segmentation of anatomical structures (lung, liver, kidney, ...) in non-annotated whole-body MR and CT volumes (participants can choose which of the organs to segment), and (2) identification of anatomical landmarks in this data. To ensure that algorithms that, for instance, can segment organs but cannot localize them in a large volume are still able to participate, the organisers will provide additional initialization information if participants desire.
2. The surprise organ: evaluating learning algorithms. This part of the benchmark aims to evaluate algorithms that are not tuned to a specific organ but can instead learn to segment or localize any structure, given sufficient training data. During the development phase, the data distributed is the same as for the tasks described in point 1 above. However, instead of developing algorithms only for the given organs, participants use the data to train and develop algorithms that learn localization and segmentation models transferable to structures not included in the training data set.
More information on the tasks and their evaluation is in the document: Definition of the evaluation protocol and goals.
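The evaluation protocol document above is authoritative for the metrics actually used. As an illustration only, two measures commonly applied to these two task types are the Dice overlap coefficient for segmentations and the Euclidean distance for landmark localization; a minimal sketch (not the official evaluation code):

```python
# Hedged sketch of two metrics commonly used for such tasks; the official
# VISCERAL evaluation protocol document defines the actual measures.

def dice(a, b):
    """Dice overlap between two binary masks given as flat 0/1 sequences.

    Returns 2|A ∩ B| / (|A| + |B|); 1.0 when both masks are empty.
    """
    inter = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2.0 * inter / total if total else 1.0

def landmark_error(predicted, truth):
    """Euclidean distance between a predicted and a true landmark position."""
    return sum((p - t) ** 2 for p, t in zip(predicted, truth)) ** 0.5
```

For example, a predicted mask covering one of two true voxels plus no false positives yields a Dice score of 2/3, and a landmark placed 3 mm and 4 mm off along two axes has an error of 5 mm.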
The data sets for Benchmark 1 have been acquired during daily clinical routine work. Whole-body MRI and CT scans, or examinations of the whole trunk, are used. Furthermore, MRI and contrast-enhanced CT imaging of the abdomen acquired for oncological staging is used, since its higher resolution aids the segmentation of smaller inner organs, such as the adrenal glands.
The following organs are annotated for Benchmark 1: Kidneys, spleen, liver, lungs, urinary bladder, rectus abdominis muscle, lumbar vertebra #1, thyroid gland, pancreas, psoas major muscles, gall bladder, sternum, aorta, trachea, and adrenal glands.
A detailed description of the data and of the annotated organs and landmarks is available in the document: Data set for first competition.
As the data to be used in Benchmark 1 amounts to several terabytes, we will not ask participants to download the data. The data will be stored on the Microsoft Azure Cloud, and when participants register, they will receive a computing instance in the Microsoft Azure cloud (Windows or Linux), provided and financed by VISCERAL with the support of Microsoft Research. The benchmark runs in two phases:
1. Development phase: Participants each have their own computing instance in the cloud, linked to a small, partly annotated dataset with the same structure as the large one. Software (executables; source code is not necessary) for carrying out the benchmark tasks must be placed into the computing instances by the participants, closely following specifications made available by the organisers. The large data set is not accessible to the participants.
2. Evaluation phase: On the benchmark submission deadline, the organisers take over the instances from the participants, link them to the large data set, execute the software installed on the computing instances on the large dataset, and evaluate the results.
- 1 August 2013: Benchmark opens. Participants can register and get access to their cloud computing instance and the small (training) data set.
- 26 September 2013: MICCAI Workshop, discussing the initial results and experiences in the Benchmark.
- 15 December 2013 (extended): Final Benchmark submission deadline. All necessary executable software must be in the cloud computing instances; the organisers take over the computing instances and start the evaluation on the large (test) data set.
To keep up to date with the latest news on VISCERAL, subscribe to the VISCERAL Mailing List.
Ask questions and make comments on the LinkedIn Group.
- Information on the data formats that will be used in Benchmark 1 is in this document: Data format definition focusing on Competition 1
- The annotation software that is being used to create the ground truth is described in this document: Prototype of 3D annotation software interface. To ensure that the best use is made of the manual annotators' time, an active annotation framework is used, described in this document: Prototype of gold corpus active annotation framework.
- The Call for Participation in Benchmark 1.
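The data format definition document above is the authoritative reference. Purely as an illustration, assuming the 3D volumes are distributed as NIfTI-1 files (a common format for medical imaging volumes, and an assumption here, not something the text above confirms), a participant could read a volume's dimensions from the fixed 348-byte NIfTI-1 header like this:

```python
# Illustrative sketch, assuming NIfTI-1 volumes; the VISCERAL data format
# document is the authoritative specification.
import struct

def read_nifti_dims(path):
    """Return the (x, y, z) extents stored in a NIfTI-1 header."""
    with open(path, "rb") as f:
        header = f.read(348)  # NIfTI-1 headers are exactly 348 bytes
    # dim[0..7] is an array of eight int16 values at byte offset 40:
    # dim[0] is the number of dimensions, dim[1..3] the spatial extents.
    dim = struct.unpack_from("<8h", header, 40)
    return dim[1], dim[2], dim[3]
```

In practice a dedicated library (e.g. NiBabel or ITK) would handle byte order, data types, and compressed `.nii.gz` files; the sketch only shows where the dimensions live in the header.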