NIH | National Cancer Institute | NCI Wiki  

...

Some of the key advantages of challenges over conventional methods include 1) scientific rigor (the test data are sequestered), 2) comparison of methods on the same datasets with the same, agreed-upon metrics, 3) the opportunity for computer scientists without access to medical data to test their methods on large clinical datasets, 4) the sharing of resources such as source code, and 5) the convening of diverse communities that may not traditionally work together: imaging and computer scientists, machine learning algorithm developers, software developers, clinicians, and biologists.

However, despite this potential, there are a number of obstacles. Medical data are usually governed by privacy and security policies, such as HIPAA, that make it difficult to share patient data. Patient health records can be very difficult to deidentify completely. Medical imaging data, especially brain MRIs, can be particularly challenging because one could easily reconstruct a recognizable 3D model of the subject.

...

Challenges have been popular in a number of scientific communities since the 1990s. In the text retrieval community, the Text REtrieval Conference (TREC), co-sponsored by NIST, is an early example of an evaluation campaign in which participants work on a common task using data provided by the organizers and are evaluated with a common set of metrics. ChaLearn has organized challenges in machine learning since 2013.

We begin with a brief review of a few medical imaging challenges held in the last decade and review their organization and infrastructure requirements. Medical imaging challenges are now a routine aspect of the highly regarded MICCAI annual meeting. Challenges at MICCAI began in 2007 with liver segmentation and caudate segmentation challenges.

...

The MICCAI-BraTS challenge highlighted a number of findings that mirrored experiences from other domains.

  • The agreement between experts is not perfect (~0.8 Dice score).
  • The agreement (between experts and between algorithms) is highest for the whole tumor and relatively poor for areas of necrosis and non-enhancing tumor.
  • Combining segmentations created by the "best" algorithms produced a fused segmentation whose overlap with consensus "expert" labels approaches inter-rater overlap.
  • This approach can be used to automatically create large labeled datasets.
  • However, there are cases where this does not work and we still need to validate a subset of images with human experts.


Figure 2. Dice coefficients of inter-rater agreement and of rater vs. fused label maps
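The findings above rest on two simple operations: the Dice coefficient for measuring overlap between two segmentations, and majority-vote fusion of several segmentations into a consensus label map. The following is a minimal sketch of both on toy 1-D binary masks; the function names and example arrays are illustrative, not from BraTS itself.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

def fuse(masks):
    """Majority-vote label fusion of several binary segmentations."""
    stack = np.stack(masks)
    return stack.sum(axis=0) > (len(masks) / 2)

# Toy "segmentations" from three raters or algorithms
r1 = np.array([0, 1, 1, 1, 0, 0])
r2 = np.array([0, 1, 1, 0, 0, 0])
r3 = np.array([0, 1, 1, 1, 1, 0])

consensus = fuse([r1, r2, r3])
print(dice(r1, consensus))  # 1.0
print(dice(r2, consensus))  # 0.8
```

Comparing each rater (or algorithm) against the fused consensus in this way is what Figure 2 summarizes; real BraTS label maps are 3-D and multi-class, but the arithmetic per label is the same.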

...

These platforms typically charge a hosting fee, and offering monetary rewards is common. They have large communities (hundreds of thousands) of registered users and coders, and can introduce a problem to communities outside the core domain-expert academic researchers, yielding solutions that are novel in the domain.

Kaggle is a very popular platform for data science competitions. It is a commercial platform used by companies to pose problems for monetary rewards, jobs, and knowledge advancement. There are public and private leaderboards, with the test data withheld from the participants. Typical hosting costs are reported to be $15,000-$20,000, plus additional costs for prizes. However, Kaggle does have a free hosting option, Kaggle In Class, for organizing challenges for educational purposes. This option is primarily intended for instructors as part of a class curriculum. Kaggle does not provide any support for organizers of Kaggle In Class. There is a 100 GB limit on file size. The scoring options also appear to be very simple: almost all challenges hosted there appear to be prediction-type challenges in which results are submitted as a CSV file and the "truth" is also a CSV file. It does not appear that imaging-based challenges (such as segmentation challenges) would lend themselves to being hosted on Kaggle In Class without significant effort.
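The CSV-against-CSV scoring model described above can be sketched in a few lines. This is a generic illustration of the idea, not Kaggle's actual scoring code; the column names (`id`, `label`) and the accuracy metric are assumptions chosen for the example.

```python
import csv
import io

def accuracy_from_csv(truth_csv, pred_csv, id_col="id", label_col="label"):
    """Score a submitted prediction file against a withheld truth file.

    Both inputs are CSV text with an id column and a label column
    (column names here are illustrative). Returns simple accuracy.
    """
    truth = {r[id_col]: r[label_col] for r in csv.DictReader(io.StringIO(truth_csv))}
    preds = {r[id_col]: r[label_col] for r in csv.DictReader(io.StringIO(pred_csv))}
    correct = sum(1 for k, v in truth.items() if preds.get(k) == v)
    return correct / len(truth)

truth = "id,label\n1,cat\n2,dog\n3,cat\n"
preds = "id,label\n1,cat\n2,cat\n3,cat\n"
print(accuracy_from_csv(truth, preds))  # 2 of 3 correct
```

A segmentation challenge, by contrast, requires submitting full label volumes and computing spatial overlap metrics per case, which is why this flat-file model fits prediction tasks but not imaging tasks.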

...