NIH | National Cancer Institute | NCI Wiki  



Infrastructure for Algorithm Comparisons, Benchmarks, and Challenges in Medical Imaging

Authors: Jayashree Kalpathy-Cramer and Karl Helmer

...

Challenges are being increasingly viewed as a mechanism to foster advances in a number of domains, including healthcare and medicine. The United States Federal Government, as part of the open-government initiative, has underscored the role of challenges as a way to "promote innovation through collaboration and (to) harness the ingenuity of the American Public." Large quantities of publicly available data and cultural changes in the openness of science have now made it possible to use these challenges and crowdsourcing efforts to propel the field forward.

...

Some of the key advantages of challenges over conventional methods include 1) scientific rigor (sequestering the test data), 2) comparing methods on the same datasets with the same, agreed-upon metrics, 3) allowing computer scientists without access to medical data to test their methods on large clinical datasets, 4) making resources available, such as source code, and 5) bringing together diverse communities (that may traditionally not work together) of imaging and computer scientists, machine learning algorithm developers, software developers, clinicians, and biologists.

However, despite this potential, there are a number of obstacles. Medical data is usually governed by privacy and security policies, such as HIPAA, that make it difficult to share patient data. Patient health records can be very difficult to completely de-identify. Medical imaging data, especially brain MRIs, can be particularly challenging, as one can readily reconstruct a recognizable 3D model of the subject from the image data.

...

The medical imaging community has conducted a host of challenges at conferences such as MICCAI and SPIE. However, these have typically been modest in scope (both in terms of data size and number of participants). Medical imaging data poses additional challenges for both participants and organizers. For organizers, ensuring that the data are free of PHI is both critical and non-trivial. Medical data is typically acquired in DICOM format, and ensuring that a DICOM file is free of PHI requires domain knowledge and specialized software tools. Multimodal imaging data can be extremely large. Imaging formats for pathology images can be proprietary, and interoperability between formats can require additional software development effort. Encouraging non-imaging researchers (e.g., machine-learning scientists) to participate in imaging challenges can be difficult because of the domain knowledge required to convert medical images into a set of feature vectors. For participants, access to compute clusters with sufficient computing power, storage space, and bandwidth can prove difficult.

However, it is imperative that the imaging community develop the tools and infrastructure necessary to host these challenges and potentially enlarge the pool of methods by making it more feasible for non-imaging researchers to participate. Resources such as The Cancer Imaging Archive (TCIA) have greatly reduced the burden of sharing medical imaging data within the cancer community and of making these data available for use in challenges. Although a number of challenge platforms currently exist, we are not aware of any system that meets all the requirements necessary to host medical imaging challenges.

In this article, we review a few historical imaging challenges. We then list the requirements we believe to be necessary (and nice to have) to support large-scale multimodal imaging challenges. We then review existing systems and develop a matrix of features and tools. Finally, we make some recommendations for developing Medical Imaging Challenge Infrastructure (MedICI), a system to support medical imaging challenges.

...

Figure 5. Portal for Kaggle, a leading website for challenges for data scientists

Topcoder

Topcoder is a similarly popular website for software developers, graphic designers, and data scientists. In this case, participants typically share their code or designs. Topcoder uses the proprietary Appirio crowdsourcing development platform, built on Amazon Web Services, Cloud Foundry, Heroku, HTML5, Ruby, and Java. A recent computational biology challenge run on Topcoder demonstrated that this crowdsourcing approach produced algorithmic solutions that greatly outperformed commonly used algorithms such as BLAST for sequence annotation {Lakhani, 2013 #3789}. This competition was run with a $6,000 prize and drew 733 participants (17% of whom submitted code); the prize-winning algorithms were made available under an open-source license.

Challenge Post

Challenge Post has been used to organize hackathons, online challenges, and other collaborative software activities. In-person hackathons are free, while online challenges cost $1,500/month (plus other optional charges).

Open Source

Synapse

Synapse is both an open-source platform and a hosted solution for challenges and collaborative activities, created by Sage Bionetworks. It has been used for a number of challenges, including the DREAM challenges. Synapse allows the sharing of code as well as data; the code, however, is typically in R, Python, and similar languages. Synapse also has a convenient programmatic interface, with methods to upload and download data, submit results, and create annotations and provenance through R, Python, the command line, and Java. These options can be configured for the different challenges. Content in Synapse is referenced by unique Synapse IDs. The three basic types of Synapse objects are projects, folders, and files, all of which can be accessed through the web interface or through the programmatic APIs. Experience with, and support for, running image analysis code within Synapse is limited.

...

Figure 7. Example Challenge hosted in Synapse

COMIC framework

The Consortium for Open Medical Image Computing (COMIC) framework is an open-source platform that facilitates the creation of challenges and has been used to host a number of medical imaging challenges. The platform, built using Python/Django, was created and is maintained by a consortium of five European medical image analysis groups, including Radboud University, Erasmus, and UCL. The consortium also offers a hosted site, with the hardware located at Fraunhofer MEVIS in Bremen, Germany. The current framework allows organizers to create a challenge website, add pages (including wikis), and set up participant registration, and it provides methods for organizers to upload data and for participants to download it (for instance, through Dropbox). However, platform features for visualizing medical data and results are still under development, as are options for sharing algorithms and running challenges in the cloud.

The main steps to create a new challenge are:

However, at this time, there is limited support for automatic evaluation of submitted results, presentation of results, and native handling of medical images, although many of these features are planned.

HubZero

HubZero is an open-source platform developed for scientific collaboration. It has been used heavily in a number of communities, including nanoscience, earthquake engineering, and molecular diagnostics. A version focused on cancer informatics is hosted at nciphub.org. nciphub shares many features with the Synapse platform. It provides user management and role-based access. Users can create groups that share common interests and collaborate within those groups. Files can be shared within projects. Other features include wikis, calendars, and the creation and sharing of resources such as presentations, multimedia, and even tools. The most common tools found on the various hubs are simulation-based. Although nciphub has limited native support for medical imaging, libraries that handle medical images can be configured to work in the hub. Members of the Quantitative Imaging Network (QIN) are exploring the use of nciphub for challenges, especially for communication and data sharing.

CodaLab

CodaLab is an open-source project, originated at Microsoft Research, that was expressly created for hosting challenges and supporting reproducible research. The OuterCurve Foundation currently maintains it. Challenge organizers can easily set up challenges by creating a competition bundle that consists of the data as well as the evaluation tools. In the configuration files, the organizer can specify the number of phases and their durations (e.g., training, leaderboard, test). The evaluation program can be written in any language. Participants can upload results and get immediate feedback. The currently available version of CodaLab comes with scoring algorithms for image segmentation evaluation. Organizers can extend the presentation of results to allow drilling down into them with tables and charts. CodaLab currently uses the Azure platform, although, in theory, it should be possible to deploy it on other servers without a great deal of effort. CodaLab is also developing support for worksheets, which are resources that support reproducible research and collaboration. Using these, researchers have compared a number of open-source NLP tools on different public datasets.
As this technology continues to be developed, researchers will be able to quickly compare the performance of different algorithms on a range of datasets in the cloud by leveraging Azure technology.

...

The MIDAS platform has been used to host a couple of imaging challenges. A special

module is available to host challenges. The developers of the platform have also made available the COVALIC evaluation tool for segmentation challenges, which supports the following metrics: average distance between boundary surfaces, 95th-percentile Hausdorff distance between boundary surfaces, Dice overlap, Cohen's kappa, sensitivity, specificity, and positive predictive value.
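The overlap-based metrics in that list can be computed directly from the confusion counts of two binary masks. The sketch below is our own illustration (not the COVALIC code) and uses flat 0/1 lists for clarity; the boundary-distance metrics (average surface distance, Hausdorff) additionally require surface geometry and are omitted here.

```python
# Overlap metrics for a binary segmentation, computed from confusion
# counts. Masks are flattened 0/1 lists here for clarity; real masks
# are 2D/3D arrays. This is an illustration, not the COVALIC code.
def overlap_metrics(truth, pred):
    tp = sum(1 for t, p in zip(truth, pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(truth, pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(truth, pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(truth, pred) if t == 1 and p == 0)
    return {
        "dice": 2 * tp / (2 * tp + fp + fn),   # overlap of the two masks
        "sensitivity": tp / (tp + fn),         # true-positive rate
        "specificity": tn / (tn + fp),         # true-negative rate
        "ppv": tp / (tp + fp),                 # positive predictive value
    }

truth = [1, 1, 1, 0, 0, 0, 0, 0]   # reference segmentation (flattened)
pred  = [1, 1, 0, 1, 0, 0, 0, 0]   # submitted segmentation (flattened)
metrics = overlap_metrics(truth, pred)
```

Having a shared, open implementation of such metrics is exactly what makes challenge rankings comparable: every submission is scored by the same code against the same reference.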

...

The web portal is the single point of entry for participants. Historically, it would contain information about the challenge, potentially host the data, and provide a submission site for users to upload results. The challenge organizer could also post the results of the challenge on this page. Many challenges have wikis and announcement pages, as well as forums. A good example of active discussion forums can be found at the Kaggle

site. Most systems have a backend (typically a relational database) for managing data and users. This allows registered users to access the training data together with its ground truth, and the test data without its ground truth.

...