Frequently Asked Questions
Make sure to read the rules as well.
I have downloaded the data. How do I read it?
All data is stored in the Meta format, which consists of an ASCII-readable
header file and a separate raw image data file. This format is ITK
compatible. Full documentation is available here. An
application that can read the data is SNAP.
If you want to write your own code to read the data, note that the
header file contains the dimensions of each image. In the raw file
the values for each voxel are stored consecutively, with the index running
first over x, then y, then z. The pixel type is short for the image data
and unsigned char for the segmentations of the training data.
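If you prefer not to rely on existing tools, the following Python sketch (using numpy) shows one way to load a volume. It is a minimal reader under a few assumptions: the file name is hypothetical, the raw data is uncompressed and little-endian, and the raw file sits next to the header under the name given by the header's ElementDataFile field.

    import os
    import numpy as np

    def read_mhd(header_path):
        # Parse the ASCII header into a key/value dictionary.
        fields = {}
        with open(header_path) as f:
            for line in f:
                key, _, value = line.partition("=")
                fields[key.strip()] = value.strip()

        # DimSize gives the extent along x, y and z.
        dx, dy, dz = (int(v) for v in fields["DimSize"].split())

        # MET_SHORT for the image data, MET_UCHAR for the training segmentations.
        dtype = {"MET_SHORT": np.int16, "MET_UCHAR": np.uint8}[fields["ElementType"]]

        # Read the raw file named in the header (assumed uncompressed,
        # little-endian, and in the same directory as the header).
        raw_path = os.path.join(os.path.dirname(header_path),
                                fields["ElementDataFile"])
        voxels = np.fromfile(raw_path, dtype=dtype)

        # Voxel values are stored consecutively with x running fastest,
        # then y, then z, so z becomes the slowest (first) numpy axis.
        return voxels.reshape((dz, dy, dx))

    volume = read_mhd("image01.mhd")  # hypothetical file name
    print(volume.shape, volume.dtype)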
What do the entries in the result tables mean?
For each test case, five performance measures are computed: the overlap
error (OE) in percent, the relative volume difference (VD) in percent,
and three symmetric surface distance measures, all in millimeters: the
mean absolute distance (AD), the root mean square distance (RMSD), and
the maximum distance (MD). Each error measure is translated to a score
in the range from 0 (lowest possible score) to 100 (perfect result) by
comparing it with the typical performance of an independent human
observer. The five scores are then averaged to obtain one overall score
per test case, and the per-case scores are averaged to obtain the score
for a system. The details of the error measures and the scoring system
are explained here.
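As an illustration of how the aggregation works, here is a minimal Python sketch. The error values are made up, and the linear error-to-score mapping in which the typical human-observer error corresponds to a score of 75 is an assumption for illustration only; the linked document defines the actual mapping.

    # Sketch of the score aggregation, under the assumptions stated above.
    def measure_score(error, human_error):
        # Zero error scores 100; the human observer's typical error scores 75.
        return max(0.0, 100.0 - 25.0 * error / human_error)

    # Hypothetical errors for one test case (OE %, VD %, AD mm, RMSD mm, MD mm)
    # and hypothetical typical human-observer errors for the same measures.
    errors       = [28.0, 15.0, 1.2, 2.3, 14.0]
    human_errors = [25.0, 12.0, 1.0, 1.8, 12.0]

    scores = [measure_score(e, h) for e, h in zip(errors, human_errors)]
    case_score = sum(scores) / len(scores)  # one overall score for this test case
    print(round(case_score, 1))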
How often can I submit results?
In principle you can upload as often as you want. Note however that all
results you submit will appear on the website, and every system should be
substantially different from previous entries. The differences compared
to other systems you have submitted must be evident from the submitted
pdf file. In other words, you cannot submit different results using the
same pdf file. We are committed to avoiding 'training on the test set' and
therefore do not want teams to send in a series of results that differ
only in the settings of some parameters. For parameter tuning and
related experiments, you should use the supplied training data.
Can the results of my system be removed from the web site?
Currently we do not offer the possibility for teams to remove submitted
results. If you believe there are good reasons to remove certain results
that you have submitted, for example because you have submitted a new
system that makes the old results obsolete, please contact Bram van
Ginneken (bram@isi.uu.nl).
What must be in the pdf document that is required for every
submission?
Ideally this document is a paper that describes the system used to
generate the results in enough detail that others can reimplement it.
By 'system' we mean the complete method, algorithm, or procedure that
was used to obtain the results. In other words, the description should
be a standard scientific publication or technical report about your
work. If you have published a paper describing your system, please
upload that paper or, if you are not allowed to make the paper available
for download from this site in its original form, upload a short
description of it together with a reference to the paper. If you have
not yet written a detailed paper, have submitted one for publication and
do not want it to become publicly available, or have other reasons to
withhold detailed information about your method, please state those
reasons in the pdf file you submit and describe the system only briefly.
Why do I have to provide a pdf document and/or a description with
every result I submit?
We believe there is little value in reporting the results of systems
whose workings are unknown. Therefore we require that a description of
each system be provided.
Can I change the pdf that describes my system?
Yes. Send the new pdf, together with the submission number it applies
to, to Bram van Ginneken (bram@isi.uu.nl). If you have published a paper
that reports results on the CAUSE07 database, we require you to notify
us; the pdf of that paper may be the most appropriate document to
describe your system.
Why can't I download the reference segmentation for the test data and
perform the evaluation myself?
From our previous experience with making data sets publicly available,
we have learned that if we released the 'truth' for the test data,
groups would perform slightly or even vastly different types of
evaluations. This leads to results that cannot be compared between
papers that have used the same data. To avoid this, we decided on the
current procedure, which ensures that every system is evaluated in
exactly the same way. If you would like to perform a different type of
evaluation and the lack of a reference makes this impossible, please
contact the organizers.
What is the difference between an automatic, semi-automatic and an
interactive system?
For each system listed on this site, it is indicated whether it is
automatic, semi-automatic, or interactive. When a team submits results,
it must indicate to which class the system that generated those results
belongs. An automatic system is fully automatic, that is, it should run
without any changes on any input scan, including all test scans. If a
method requires a seed point to be set, or any parameter that may be
varied by a user for certain cases, or if different settings have been
used for different test cases to obtain good results, or if some pre- or
postprocessing was applied that was not exactly the same for all test
cases, the system is not automatic. Another way of thinking about
this is that if we asked teams to provide an executable program and ran
it on the test data ourselves, we should get exactly the same results as
the ones submitted for automatic systems. Semi-automatic
refers to those systems that require some input from a human observer,
for some or all cases, but which do not demand extensive editing by a
human. Interactive systems require extensive editing, and typically have
a human observer edit the results until he or she is satisfied with the
final outcome. As a result, interactive systems will often yield results
that are 'as good as manual'. We realize there is somewhat of a gray
zone between semi-automatic and interactive. Please choose the category
you think best fits your system and make sure to describe the degree of
interaction needed in the pdf file that describes your system.
Where can I find more information about the data and the
competition?
A lot of information is available in the introductory article of the
workshop proceedings, which can be found here.
I have lost the password of my registered team. What should I do?
Send an e-mail to Bram van Ginneken (bram@isi.uu.nl). A password
reminder will be mailed to the e-mail address provided with the
registered team.