NIH | National Cancer Institute | NCI Wiki  


The National Cancer Institute Clinical and Translational Imaging Informatics Project (NCI CTIIP) seeks to increase the understanding of genetic mutations to better diagnose and treat patients with cancer. Its component projects support research goals from the domains of genomics, diagnostic imaging, and digital pathology. These research goals include:

  • Creating an open-source digital pathology image server that can host and serve digital pathology images for any of the major vendors without recoding, facilitating the integration of pathology data with radiographic, genomic, and proteomic data.
  • Establishing an informatics and IT infrastructure to implement pilot challenges for clinical and pre-clinical studies that integrate the genomics, diagnostic imaging, and digital pathology domains.
  • DICOM Working Group 30?
  • Developing DICOM standards for small animal imaging and identifying co-clinical datasets to test the integration of TCIA and TCGA for this data.

NCIP was created in 2013 as a part of CBIIT.

The Imaging Informatics Working Group was created across NCI to explore research needs in in vivo imaging, pathology, and omics, including radio-patho-genomics for the omics domain.

We need to generate the proper therapy for a patient: look at in vivo imaging (radiology and pathology), run a gene panel to look for abnormalities, look at co-clinical trials (a mouse model of a tumor similar to the human one, in which therapies are tested on mice), run an integrative query to develop a sophisticated diagnosis, and search big data.

Informatics has to let us communicate; we need to be able to compare the data between the omics domains.

Visual pathology integrative queries (Ashish at Emory); imaging consistent with ground truth.

Three pilot challenges: pathology, radiology, and co-clinical.

Medical Image Computing and Computer-Assisted Intervention: MICCAI

Image-based interventions in tumors, cardiology, and other areas.

Mass General will guide the pilots.

Ground truth: determine the compatibility of the informatics that we need to run the pilots. Take images out of TCIA and TCGA, along with clinical data, and compare them.

Put together a primer with examples of data, use cases, and instructions on how to carry out an integrative query, so that it is understandable.

Jayashree is doing a MICCAI Challenge in Munich: segmentation of nuclei in pathology imaging, and combined radiology and pathology classification.

We want to be able to say that these informatics allow us to compare the pathology, radiology, and co-clinical findings.

Document the approach, technology, and application needed to do a MICCAI challenge the way Jayashree does it. See her order of march.

Ashish has the conceptual approach for an integrative query system. Learn his order of march.

Need to explain how the challenge management system and integrative query system play together in a scientific scenario.

Three TOCs: one for the challenge steps, one for the integrative query system. How well does it integrate? What are the commonalities: how do we annotate the tumor in MedICI such that it is compatible with the annotations in the components of the integrative query system? What relationships can we find in the informatics between the animal and patient findings?

How do we better treat our patients?

Describe each section separately and then see if we can merge the two to answer the scientific question.

Challenge Management System, MedICI

Jayashree's program: Medical Imaging Challenge Infrastructure (MedICI)

  1. Based on open-source CodaLab
  2. ePAD (created by Daniel Rubin's group at Stanford): a tool for annotating images; it creates AIM annotations
  3. caMicroscope

http://miccai.cloudapp.net:8000/competitions/28

  1. Competition #1: The MICCAI challenge has a training phase, in which participants train their algorithms, and a test phase, in which they run their algorithms on images they have never seen before. Results are compared to ground truth that is determined beforehand. caMicroscope is used to see what is there beforehand and to visualize the results. The overlap/completeness match determines the winner.
  2. Competition #2: Participants are given slides.
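The overlap/completeness scoring mentioned above can be illustrated with a Dice coefficient, a standard overlap measure between a submitted segmentation mask and the ground-truth mask. This is a minimal sketch for intuition, not the challenge's actual scoring code:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Overlap between a predicted and a ground-truth binary mask.

    Returns 1.0 for a perfect match, 0.0 for no overlap at all.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:  # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * intersection / total

# Toy 4x4 masks standing in for a submitted result and the ground truth
pred = np.array([[0, 1, 1, 0]] * 4)
truth = np.array([[0, 0, 1, 1]] * 4)
print(dice_coefficient(pred, truth))  # 0.5: half of each mask overlaps
```

A challenge would typically average this score over all test images to rank participants.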

From PPT: Use titles of slides

An organizer sets up a competition by creating a competition bundle.

You can go to cancerimagingarchive.net and create shared lists. Shared lists are pulled into CodaLab; that is how they get the test and training data.
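TCIA also offers a public REST API for programmatic access to its collections. Whether CodaLab pulls shared lists through this API is not stated here; the sketch below only shows how a query URL for listing the image series in a collection is built (endpoint and parameter names follow TCIA's v4 query API; the collection name is just an example):

```python
from urllib.parse import urlencode
# urllib.request (or the requests library) would perform the actual
# HTTP call and download; that step is omitted here.

BASE = "https://services.cancerimagingarchive.net/services/v4/TCIA/query"

def tcia_query_url(endpoint: str, **params: str) -> str:
    """Build a TCIA REST query URL for the given endpoint and parameters."""
    query = urlencode(params)
    return f"{BASE}/{endpoint}?{query}" if query else f"{BASE}/{endpoint}"

# List all image series in a collection, requesting JSON output
url = tcia_query_url("getSeries", Collection="TCGA-BRCA", format="json")
print(url)
```

Training and test subsets for a challenge could then be drawn from the series metadata such a query returns.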

Next is to create ground truth.

Regions of interest in a tumor for annotations are necrosis, edema, and active cancer. Radiologists create the ground truth.

Once participants upload their results, they can see them in ePAD.

Integrative Query System

Look at Ulli's PPT

Extend software to support data mashups between image-derived information from TCIA and clinical and molecular metadata from TCGA.

Integrative Queries

Programmatic access to TCGA-related image data.

What the data is used for

Relate data from TCIA, caMicroscope, and animal models.

Genomics and animal data.

How do we make a decision on a firm diagnosis?

Run queries on the animal data and relate the results to the human data, and vice versa.

The system should integrate clinical data (from TCGA) and preclinical data (from UC Davis).

Use case: Breast cancer has biomarkers (progesterone status, etc.). One question to ask is "if the estrogen status is negative in humans, what does the pathology look like?" Then compare this to mice. Is the model we have a good model for the human condition?

If you treat a mouse model that has an ER negative status with a certain drug, what is the outcome? Then see this in humans.
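Assuming both cohorts are recorded against the same common data elements, a question like "what does ER-negative pathology look like in humans, and does a mouse model show the same pattern?" can be sketched as a join between structured human and mouse tables. All table names and values below are invented for illustration:

```python
import pandas as pd

# Hypothetical human records keyed by a shared common data element (ER status)
human = pd.DataFrame({
    "case_id": ["H1", "H2", "H3"],
    "er_status": ["negative", "positive", "negative"],
    "pathology_pattern": ["basal-like", "luminal", "basal-like"],
})

# Hypothetical mouse-model records using the same ER-status coding
mouse = pd.DataFrame({
    "model_id": ["M1", "M2"],
    "er_status": ["negative", "positive"],
    "pathology_pattern": ["basal-like", "luminal"],
})

# "If the estrogen status is negative in humans, what does the pathology
#  look like, and which mouse models show the same pattern?"
er_neg = human[human["er_status"] == "negative"]
matched = er_neg.merge(mouse, on=["er_status", "pathology_pattern"])
print(matched[["case_id", "model_id", "pathology_pattern"]])
```

The join only works because both tables code ER status the same way, which is exactly why the project is pulling common data elements out of caDSR for both the human and mouse sides.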

We are setting up the data structure so when that is done, we'll be able to see what use cases are possible.

To make data comparable, we must collect it in a structured fashion, using Common Data Elements for TCGA.

We are pulling data out of caDSR (ER negative and positive, and other common data elements), and we are asking Bob Cardiff's team to ask the same questions so that we can compare human and mouse data.

We are exploring the standardization of informatics: use all the tools we have to create standard informatics that compare patient to animal data. We are using the available standards: DICOM, AIM, and micro-AIM. These are fundamental to integrative queries.

If you did an integrative query, how would you do it? Make data calls to run different integrative queries, using sufficiently standard data, and come out with information that will allow you to make a decision. Pilot challenges will compare the decision support systems for the three domains.

We need a clear explanation of how to do this.

Data mashups that allow us to

Explain our complicated project in a simple manner so that people understand what we are doing and why we are doing it.
