
Introduction to CTIIP

Today, medical technicians and doctors cannot directly compare different types of medical images from the same person. For example, you cannot take an ultrasound of a tumor and compare its features to those on a slide containing cells of that same tumor, let alone compare that tumor to the same kind in a mouse. Since image data from different disciplines are in different formats, comparing them means changing those native formats to something both can interpret, risking important changes to the data contained within. This is not good news for a patient.

Most cancer diagnoses are made based on images. You have to see a tumor, or compare images of it over time, to determine its level of threat. Ultrasounds, MRIs, and X-rays are all common types of images that radiologists use to collect information about a patient and perhaps cause a doctor to recommend a biopsy. Once that section of the tumor is under the microscope, pathologists learn more about it. Radiologists and pathologists represent different scientific disciplines, however, without a common vocabulary. To gather even more information, a doctor may order a genetic panel. If that panel shows that the patient has a genetic anomaly, the doctor or a geneticist may search for clinical trials that match it, or turn to therapies that researchers have already proven effective for this combination of tumor and genetic anomaly through recent advances in precision medicine. Like radiology and pathology, genomics uses its own vocabulary, preventing data from being directly shared.

Yet another way we learn about cancer in humans is through small animal research. Images from small animals allow detailed study of biological processes, disease progression, and response to therapy, with the potential to provide a natural bridge to human disease. Due to differences in how data about animals and humans are collected and stored, however, that bridge must be built by hand.

Each of these diagnostic images is at a different scale and comes from a different scientific discipline. A large-scale image like an X-ray may be almost life-size. Slices of tumors are far smaller and must be put on a slide under a microscope to be seen. Not surprisingly, each of these image types requires specialized knowledge to create, handle, and interpret. The specialists' skills are complementary, but each specialist comes from a different scientific discipline.

The good news is that it is now possible to both create large databases of information about images and apply existing data standards. The bad news is that each of these databases is protected by proprietary formats that do not communicate with one another, and standards do not yet exist for all image types. Researchers from each of the disciplines under the umbrella called imaging refer to the images in a unique way, using different vocabulary. Wouldn't it be nice if a scientist could simply ask questions without regard to disciplinary boundaries and harness all of the available data about tissue, cells, genes, proteins, and other parts of the body to prove or disprove a hypothesis?

One promise of big data is that data mashups can integrate two or more data sets in a single interface so that doctors, pathologists, radiologists, and laboratory technicians can make connections that improve outcomes for patients. Such mashups require and await technical solutions in the areas of data standards and software development. A significant start toward these technical solutions is the set of sub-projects that make up the National Cancer Institute Clinical and Translational Imaging Informatics Project (NCI CTIIP).

CTIIP Sub-Projects

As discussed so far, cancer research requires data from multiple domains. To meet this need, the National Cancer Institute Clinical and Translational Imaging Informatics Project (NCI CTIIP) team plans to create a data mashup interface, along with other software and standards, that accesses The Cancer Genome Atlas (TCGA) clinical and molecular data, The Cancer Imaging Archive (TCIA) in-vivo imaging data, caMicroscope pathology data, a pilot data set of animal model data, and relevant imaging annotation and markup data.

The common informatics infrastructure that will result from this project will provide researchers with analysis tools they can use to directly mine data from multiple high-volume information repositories, creating a foundation for research and decision support systems to better diagnose and treat patients with cancer.

CTIIP is composed of the following sub-projects. Each project is discussed on this page.

Sub-Project Name | Description
Digital Pathology | Addresses the accessibility of digital pathology data, improves tools for annotation and markup of pathology images through the development of microAIM (µAIM), and extends caMicroscope analysis tools across the targeted research domains: clinical imaging, pre-clinical imaging, and digital pathology imaging. Raises the level of interoperability among these domains.
Integrated Query System | Provides a federated interface that researchers can use to select and combine, or "mash up," data types from TCGA, TCIA, caMicroscope, and co-clinical/small animal model data.
DICOM Standards for Small Animal Imaging; Use of Informatics for Co-clinical Trials | Addresses the need for standards in pre-clinical imaging and tests the informatics tools created in the Digital Pathology and Integrated Query System sub-project in co-clinical trials.
Pilot Challenges | Challenges are a tool for ... The pilot challenges would use limited data sets for proof of concept and test the informatics infrastructure needed for more rigorous "Grand Challenges" that could later be scaled up and supported by extramural initiatives.

The Importance of Data Standards

NCI CBIIT has worked extensively for several years in the area of data standards for both clinical research and healthcare, working with the community and Standards Development Organizations (SDOs) such as the Clinical Data Interchange Standards Consortium (CDISC), Health Level 7 (HL7), and the International Organization for Standardization (ISO). From that work, Enterprise Vocabulary Services (EVS) and the Cancer Data Standards Registry and Repository (caDSR) are harmonized with the Biomedical Research Integrated Domain Group (BRIDG) model, the Study Data Tabulation Model (SDTM), and the Health Level Seven® Reference Information Model (HL7 RIM). Standardized Case Report Forms (CRFs), including those for imaging, have also been created. This CBIIT work provides the bioinformatics foundation for semantic interoperability in digital pathology and co-clinical trials integrated with clinical and patient demographic data and with data contained in TCIA and TCGA.

The common infrastructure that will result from CTIIP and its sub-projects depends on data interoperability, which is greatly aided by adherence to data standards. While data standards exist for communicating image data in a common way, they are inconsistently adopted. One reason for the lack of uniform adoption is that vendors of the image management tools required for the analysis of imaging data have built those tools to accept only proprietary data formats. Researchers then make sure their data can be interpreted by these tools. The result is that data produced on different systems cannot be analyzed by the same mechanisms.

Another challenge for CTIIP, with its goal of integrating data from complementary domains, is the lack of a defined standard for co-clinical and digital pathology data. Without a data standard for these domains, it is very difficult to share and leverage such data across studies and institutions. As part of the CTIIP project, the team has extended the DICOM model to co-clinical and small animal imaging.

Within the three research domains that CTIIP intends to make available for integrative queries, only one, clinical imaging, has made progress in establishing a framework and standards for informatics solutions. Those standards include Annotation and Image Markup (AIM), which allows researchers to standardize annotations and markup for radiology and pathology images, and Digital Imaging and Communications in Medicine (DICOM), which is a standard for handling, storing, printing, and transmitting information in medical imaging. For pre-clinical imaging and digital pathology, no such standards exist to allow the seamless viewing, integration, and analysis of disparate data sets, or to support integrated views of the data, quantitative analysis, and research or clinical decision support systems.
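
As a concrete illustration of what DICOM compliance buys, the short Python sketch below uses the pydicom library to read standardized metadata and pixel data from a DICOM file. The file name is a hypothetical placeholder, and the sketch is not part of any CTIIP deliverable.

```python
# Sketch: reading standardized metadata and pixel data from a DICOM file with
# pydicom. The file name is a hypothetical placeholder.
import pydicom

ds = pydicom.dcmread("chest_ct_slice.dcm")

# Standard DICOM attributes are addressable by keyword on any compliant file.
print("Modality:      ", ds.Modality)
print("Study date:    ", ds.get("StudyDate", "unknown"))
print("Rows x Columns:", ds.Rows, "x", ds.Columns)

# Pixel data decodes to a NumPy array regardless of which scanner produced it.
pixels = ds.pixel_array
print("Pixel array shape:", pixels.shape)
```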

As part of the DICOM Standards for Small Animal Imaging; Use of Informatics for Co-clinical Trials sub-project, the long-term goal is to generate DICOM-compliant images for small animal research. MicroAIM (µAIM) is currently in development to serve the unique needs of this domain.

The following table presents the data that the CTIIP team is integrating through various means. This integration relies on the expansion of software features and on the application of data standards, as described in subsequent sections of this document.

Domain | Data Set | Applicable Standard
Clinical Imaging | The Cancer Genome Atlas (TCGA) clinical and molecular data |
Clinical Imaging | The Cancer Imaging Archive (TCIA) in vivo imaging data | DICOM
Pre-Clinical | Small animal models | N/A (a standard exists but has not been adopted)
Digital Pathology | caMicroscope | DICOM (a standard exists but has not been adopted)
All | Annotations and markup on images | AIM

Digital Pathology and Integrated Query System

The goal of this foundational sub-project is to create a digital pathology image server that can accept images from multiple domains and run integrative queries on that data. Using this server, which is an extended version of caMicroscope, researchers can select data from different imaging data sets and use them in image algorithms. The first data sets that are being integrated on this image server are TCGA and TCIA.

The TCGA project is producing a comprehensive genomic characterization and analysis of 200 types of cancer and providing this information to the research community. TCIA and the underlying National Biomedical Image Archive (NBIA) manage well-curated, publicly-available collections of medical image data. The linkages between TCGA and TCIA are valuable to researchers who want to study diagnostic images associated with the tissue samples sequenced by TCGA. TCIA currently supports over 40 active research groups including researchers who are exploiting these linkages.

Although TCGA and TCIA together comprise a rich, complementary, multi-discipline data set, they reside in infrastructures that provide only a limited ability to query the data. Researchers want to query both databases at the same time to identify cases based on all available data types. While the imaging data in TCIA are DICOM-compliant, digital pathology and co-clinical/small animal model environments either do not share the same data standards or do not use them consistently.

To address these limitations, the CTIIP team is developing an Integrated Query System to make it easier to analyze data from the different research disciplines represented by TCGA, TCIA, and co-clinical/small animal model data. The lack of common data standards will not be a hindrance to data analysis, since the server hosting the unified query interface will accept whole slides without recoding. The unified query interface will also provide a common platform and data engine for hosting "pilot challenges," which are described in more detail below. Pilot challenges will advance biological and clinical research in a way that also integrates the clinical, co-clinical/small animal model, and digital pathology imaging disciplines.

Digital Pathology

Digital pathology, unlike its more mature radiographic counterpart, has yet to standardize on a single storage and transport format. In addition, each pathology-imaging vendor produces its own image management system, making image analysis systems proprietary rather than standardized. The result is that images produced on different systems cannot be analyzed via the same mechanisms. This lack of standards and dominance of proprietary formats not only hampers digital pathology itself, but also prevents digital pathology data from being integrated with data from other disciplines.

The purpose of the digital pathology component of CTIIP is to support data mashups between image-derived information from TCIA and clinical and molecular metadata from TCGA. The team is using OpenSlide, a vendor-neutral C library, to extend the software of caMicroscope, a digital pathology server, to provide the infrastructure for these data mashups. The extended software will support some of the common formats adopted by whole slide vendors as well as basic image analysis algorithms. With the incorporation of common whole slide formats, caMicroscope will be able to read whole slides without recoding, which often introduces additional compression artifacts, and provide a logical bridge from proprietary pathology formats to DICOM standards. With caMicroscope's support for basic image analysis algorithms, researchers can use this tool to enable analytic and decision support using digital pathology images from TCIA and NBIA.
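
As an illustration of the kind of vendor-neutral access OpenSlide provides, the minimal Python sketch below opens a whole-slide file through the OpenSlide Python bindings and reads a region without recoding the file. The file name is a hypothetical placeholder, and the actual caMicroscope integration uses the C library rather than this script.

```python
# Minimal sketch: reading a vendor-format whole slide with OpenSlide's
# Python bindings (the file name below is hypothetical).
import openslide

slide = openslide.OpenSlide("TCGA-example-slide.svs")

print("Format:", slide.detect_format("TCGA-example-slide.svs"))
print("Dimensions at full resolution:", slide.dimensions)
print("Pyramid levels:", slide.level_count)

# Read a 1024 x 1024 pixel region at the lowest-resolution level as a PIL
# image, without recoding the underlying proprietary file.
level = slide.level_count - 1
region = slide.read_region((0, 0), level, (1024, 1024))
region.convert("RGB").save("region_preview.png")

slide.close()
```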

Data federation, a process whereby data is collected from different databases without ever copying or transferring the original data, is part of the new infrastructure as well. The software used to accomplish this data federation is Bindaas, the middleware that is also used to build the backend infrastructure of caMicroscope. The team is extending Bindaas with a data federation capability that makes it possible to run integrative queries against data from TCIA and TCGA.
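
The Bindaas configuration itself is beyond the scope of this page, but the federation pattern can be sketched in a few lines: query two independent services and join the results in memory by a shared case identifier. In the sketch below, the endpoint URLs and field names are hypothetical placeholders, not the actual Bindaas, TCGA, or TCIA interfaces.

```python
# Sketch of a federated query: fetch records from two independent services and
# join them in memory by a shared case identifier. The URLs and field names are
# hypothetical placeholders, not the real Bindaas, TCGA, or TCIA interfaces.
import requests

CLINICAL_URL = "https://example.org/api/tcga/clinical"   # hypothetical
IMAGING_URL = "https://example.org/api/tcia/series"      # hypothetical

def federated_query(disease_code: str):
    clinical = requests.get(CLINICAL_URL, params={"disease": disease_code}).json()
    imaging = requests.get(IMAGING_URL, params={"disease": disease_code}).json()

    # Index imaging records by case ID, then attach them to clinical records.
    by_case = {}
    for series in imaging:
        by_case.setdefault(series["case_id"], []).append(series)

    return [
        {**record, "imaging_series": by_case.get(record["case_id"], [])}
        for record in clinical
    ]

if __name__ == "__main__":
    for row in federated_query("GBM"):
        print(row["case_id"], len(row["imaging_series"]), "imaging series")
```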

Image annotations also require standards so that they can be read across imaging disciplines along with the rest of the image data. caMicroscope will therefore be extended to include image annotation and markup features based on the micro-Annotation and Image Markup model (µAIM).

Integrated Query System

To make data comparable, it must first be collected in a structured fashion. For example, TCGA relies on Common Data Elements (CDEs), the standardized elements that structure TCGA data. Second, data comparisons require common data vocabularies. For example, when a tumor is described in a human or an animal, one of a discrete number of approved vocabulary options must be used to describe it.
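
To make the idea concrete, the sketch below checks a structured record against a controlled vocabulary before accepting it; the element name and permitted values are hypothetical examples rather than an actual Common Data Element from caDSR.

```python
# Sketch: enforcing a controlled vocabulary on a structured data element.
# The element name and permitted values are hypothetical examples, not an
# actual Common Data Element from caDSR.
PERMITTED_VALUES = {
    "tumor_laterality": {"Left", "Right", "Bilateral", "Unknown"},
}

def validate(record: dict) -> list[str]:
    """Return a list of validation errors for a structured record."""
    errors = []
    for element, allowed in PERMITTED_VALUES.items():
        value = record.get(element)
        if value not in allowed:
            errors.append(f"{element}={value!r} is not one of {sorted(allowed)}")
    return errors

print(validate({"tumor_laterality": "Left"}))       # [] -> record accepted
print(validate({"tumor_laterality": "left side"}))  # flagged: free text not allowed
```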

The Integrated Query System will access multiple data types in a federated fashion, meaning that the original data will continue to reside in independent systems. The Integrated Query System will provide an interface that scientists can use to select the data types they want to combine, or "mash up," based on their own research questions.

The following table presents the data types and their sources that the Integrated Query System will make available.

Data Types in the Integrated Query System | Data Source
Genomic | Google Genomics Cloud
Clinical | Downloaded from TCGA and stored in a customized database at Emory University
Preclinical | Customized database at Emory University
Radiology Images (Human and Animal) | TCIA
Radiology Image Annotation and Markup | AIM Data Service (AIME)
Pathology Images (Human and Animal) | caMicroscope
Pathology Image Annotation and Markup | µAIM Data Service (µAIME)

The Integrated Query System, with its support for whole slides and data mashups of federated data, will act as a foundation for a broader set of novel community research projects.

DICOM Working Group 30

Since its first publication in 1993, DICOM has revolutionized the practice of radiology, allowing the replacement of X-ray film with a fully digital workflow. Each year, the standard is updated with formats for medical images that can be exchanged with the data and quality necessary for clinical use. (Source: http://dicom.nema.org/Dicom/about-DICOM.html)

As part of the Small Animal/Co-clinical Improved DICOM Compliance and Data Integration sub-project of CTIIP, the NCI supported the development of a DICOM supplement for small animal imaging. The group contributing to it, DICOM Working Group 30, completed Supplement 187: Preclinical Small Animal Imaging Acquisition Context in 2015.

The goal of this sub-project is to make it possible to directly compare data from co-clinical animal models to real-time clinical data from TCIA and TCGA. This is being accomplished by developing Supplement 187 to accommodate small animal imaging and by identifying a pilot co-clinical data set to integrate with TCIA and TCGA; the latter is in process.

Supplement 187 Data Elements

Information about how a small animal image was acquired is relevant to the interpretation of the image and must be stored with it. While DICOM defines terminology applicable to other types of images, it does not include data elements specific to small animal image acquisition. The new Supplement 187, developed as part of the CTIIP project in 2015, defines terminology that is unique to small animal imaging. It includes the following templates, each covering terminology relevant to image acquisition.

  • Preclinical Small Animal Image Acquisition Context
    • Language of Content Item and Descendants
    • Observation Context
    • Biosafety Conditions
    • Animal Housing
    • Animal Feeding
    • Heating Conditions
    • Circadian Effects
    • Physiological Monitoring Performed During Procedure
    • Anesthesia
      • Medications and Mixture Medications
    • Medication, Substance, Environmental Exposure

Consult Supplement 187: Preclinical Small Animal Imaging Acquisition Context for details about each of these templates.
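
As a rough illustration of how acquisition context can travel with the image data, the sketch below builds a single coded content item with the pydicom library. The code values, coding scheme, and meanings are placeholders only; the actual codes, templates, and nesting are defined by Supplement 187 itself.

```python
# Sketch: one coded acquisition-context content item built with pydicom.
# The code values and meanings below are placeholders, not the codes that
# Supplement 187 actually defines.
from pydicom.dataset import Dataset

def coded_item(name_meaning: str, value_meaning: str) -> Dataset:
    """Build a CODE content item pairing a concept name with a coded value."""
    concept = Dataset()
    concept.CodeValue = "00000"               # placeholder code
    concept.CodingSchemeDesignator = "99EXAMPLE"
    concept.CodeMeaning = name_meaning

    value = Dataset()
    value.CodeValue = "11111"                 # placeholder code
    value.CodingSchemeDesignator = "99EXAMPLE"
    value.CodeMeaning = value_meaning

    item = Dataset()
    item.ValueType = "CODE"
    item.ConceptNameCodeSequence = [concept]
    item.ConceptCodeSequence = [value]
    return item

# e.g., recording the anesthesia method used during image acquisition
anesthesia = coded_item("Anesthesia Method", "Inhalation anesthesia")
print(anesthesia)
```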

Pilot Challenges

Challenges are increasingly viewed as a mechanism to foster advances in a number of fields, including healthcare and medicine. Large quantities of publicly available data, such as those in TCIA, and cultural changes in the openness of science have now made it possible to use these challenges, as well as crowdsourcing (enlisting the services of people via the Internet), to propel the field forward.

Some of the key advantages of challenges over conventional methods include 1) scientific rigor (sequestering the test data), 2) comparing methods on the same datasets with the same, agreed-upon metrics, 3) allowing computer scientists without access to medical data to test their methods on large clinical datasets, 4) making resources available, such as source code, and 5) bringing together diverse communities (that may traditionally not work together) of imaging and computer scientists, machine learning algorithm developers, software developers, clinicians, and biologists.  

As explained in the Challenge Management System Evaluation Report, challenge hosts and participants cannot do it alone. The computing resources needed to process these large datasets may be beyond what is available to individual participants. For the organizers, creating an infrastructure that is secure, robust, and scalable can require resources that are beyond the reach of many researchers. Additionally, imaging formats for pathology images can be proprietary, and interoperability between formats can require additional software development effort.

The Pilot Challenges sub-project of CTIIP will make a set of integrated data from TCIA and TCGA publicly available to researchers who will participate in three complementary "pilot challenge" projects. These pilot challenges proactively address research questions that compare decision support systems for clinical imaging, co-clinical imaging, and digital pathology. As opposed to a more rigorous "grand" challenge, each pilot challenge will function as a proof of concept to learn how to scale challenges up in the future. Each challenge will use the informatics infrastructure created in the Digital Pathology and Integrated Query System sub-project and allow participants to validate and share algorithms on a software clearinghouse platform such as HUBzero.

A team from Massachusetts General Hospital will guide the pilot challenges, using the Medical Imaging Challenge Infrastructure (MedICI), a system that supports medical imaging challenges.

The pilot challenges are as follows:

MICCAI 2015 Challenges | Description
Combined Radiology and Pathology Classification |
Segmentation of Nuclei in Pathology Images |

Comparing Algorithms to Ground Truth

Before a challenge begins, the Pilot Challenge team will work with a pathologist and a radiologist to determine the ground truth for a particular image. Participants develop algorithms to accomplish a defined task and then run them on images they have never seen before. The algorithm that comes closest to the ground truth wins the challenge.
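
The scoring metric is defined per challenge, but overlap with the ground truth is a typical measure. The sketch below computes one common overlap metric, the Dice coefficient, between a ground-truth mask and an algorithm's segmentation; it is offered as an illustration, not as the official scoring code of any pilot challenge.

```python
# Sketch: scoring a segmentation against ground truth with the Dice overlap
# coefficient, a common (illustrative) choice; actual challenge metrics vary.
import numpy as np

def dice_coefficient(ground_truth: np.ndarray, prediction: np.ndarray) -> float:
    """Return 2*|A intersect B| / (|A| + |B|) for two binary masks of the same shape."""
    gt = ground_truth.astype(bool)
    pred = prediction.astype(bool)
    total = gt.sum() + pred.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(gt, pred).sum() / total

# Toy example: a 4 x 4 ground-truth mask and a slightly different prediction.
truth = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
guess = np.array([[0, 1, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(f"Dice overlap: {dice_coefficient(truth, guess):.2f}")  # 0.89
```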

MedICI

MedICI, the Medical Imaging Challenge Infrastructure, builds on the following components:

  1. CodaLab, an open-source competition platform
  2. ePAD, an image annotation tool created by Daniel Rubin's group at Stanford, which produces AIM annotations
  3. caMicroscope

The competition site is at http://miccai.cloudapp.net:8000/competitions/28.

  1. Competition #1: This MICCAI challenge has a training phase, in which participants train their algorithms, and a test phase, in which they run their algorithms on images they have never seen before. The results are compared to ground truth that is determined beforehand. caMicroscope is used to view the source images and to visualize the results, and the degree of overlap and completeness with the ground truth determines the winner.
  2. Competition #2: Participants are given slides.


To set up a competition, an organizer creates a competition bundle. The organizer can go to cancerimagingarchive.net and create shared lists; those shared lists are pulled into CodaLab, which is how the test and training data are assembled.

The next step is to create the ground truth. The regions of interest annotated within a tumor are necrosis, edema, and active cancer, and radiologists create the ground-truth annotations.

Once participants upload their results, they can view them in ePAD.
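
The exact layout of a CodaLab competition bundle is defined by that platform and is not reproduced here, but the packaging step itself is simple; the sketch below zips a folder of competition materials (description, training data, test data) into a single archive, with hypothetical file and folder names.

```python
# Sketch: packaging competition materials into a single zip archive for upload.
# File and folder names are hypothetical; the actual bundle layout expected by
# CodaLab is defined by that platform, not reproduced here.
import zipfile
from pathlib import Path

def build_bundle(source_dir: str, bundle_path: str) -> None:
    """Zip everything under source_dir (e.g., description, training and test data)."""
    source = Path(source_dir)
    with zipfile.ZipFile(bundle_path, "w", zipfile.ZIP_DEFLATED) as bundle:
        for path in sorted(source.rglob("*")):
            if path.is_file():
                bundle.write(path, path.relative_to(source))

build_bundle("competition_materials", "pilot_challenge_bundle.zip")
```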

Notes

MICCAI stands for Medical Image Computing and Computer-Assisted Intervention.

Establishing ground truth also tests the compatibility of the informatics needed to run the pilots: images and clinical data are taken from TCIA and TCGA and compared.

The team from Massachusetts General Hospital is running the MICCAI challenges in Munich: segmentation of nuclei in pathology images, and combined radiology and pathology classification.

The aim is to be able to say that this informatics infrastructure allows pathology, radiology, and co-clinical findings to be compared. The approach, technology, and applications used to run a MICCAI challenge in this way will be documented.

Scenarios

In one scenario, a doctor needs to determine the proper therapy for a patient. The doctor looks at in vivo imaging from radiology and pathology, runs a gene panel to look for abnormalities, and considers co-clinical trials, in which a tumor similar to the patient's is modeled in a mouse and experimental therapies are tested on the mice. Running an integrative query across this big data supports a more sophisticated diagnosis.

Visual pathology integrative queries are being developed at Emory University, with the imaging checked for consistency with the ground truth.

A scientific scenario should also show how the challenge management system and the Integrated Query System work together. Key questions include how well the two systems integrate, how a tumor annotated in MedICI can be made compatible with the annotations used by the components of the Integrated Query System, and what relationships the informatics can reveal between animal and patient findings. Each system will be described separately, and the two descriptions will then be merged to answer the scientific question.

Informatics helps us communicate, and it can help us treat our patients better. For example, breast cancer has biomarkers such as estrogen and progesterone receptor status. One question to ask is, "If the estrogen status is negative in humans, what does the pathology look like?" This can then be compared to mice: is the model we have a good model for the human condition?
