
The following are the steps in a translational research scenario.

  • Within a consortium of cooperating institutions, an investigator conducts a search across the consortium's clinical data repositories to investigate the feasibility of a potential clinical research idea.

  • Within the consortium, the research question is circulated to gauge interest.


  • Members of the consortium discuss the research question and approve it as viable.


  • The coordinating center formalizes the research question into a clinical research ancillary protocol for validating a biomarker as predictive of tumor shrinkage in the context of treatment with an investigational agent, and posts the protocol to the consortium for consideration (for example, do patients with a particular marker respond better to treatment with the agent?).

  • Consortium member sites choose to join the protocol and agree to accrue patients onto it, collect biosamples from each participant, and ship a defined set of biosamples to Central Pathology.

  • Participating consortium sites each submit common consent forms, case report forms, and boilerplate Material Transfer Agreements (MTAs) to the appropriate local regulatory offices.

  • The protocol metadata, case report forms, standard operating procedures, MTA documents, and other items as needed are finalized and disseminated to each participating site.

  • Participants are screened for eligibility by the study coordinator at each site.

  • Patients are accrued (by physician or patient self-referral) by local staff onto the protocol at each site, and the accrual event is reported to the coordinating center.

  • Biosamples and relevant clinical annotations, including tumor measurements, are collected at the appropriate time points indicated in the protocol; these are used to calculate the primary end point, tumor shrinkage (a worked example of this calculation appears after this list).

  • Follow-up appointments are scheduled as specified in the protocol.


  • Biosamples are periodically sent to Central Pathology.

  • Central Pathology relabels the samples to conceal their source and participant identity.

  • Central Pathology sends out batches of collated biosamples to each of the participating biomarker assay labs.

  • A basic scientist at each biomarker lab submits the results of the biomarker assays.

  • Patients are followed for three years from the primary treatment date. An annual follow-up visit occurs, a blood sample is taken, and additional clinical annotations are collected.

  • The trial closes, and all the data are made accessible to the statisticians.


  • A statistician communicates the clinical significance of, and the evidence for, biomarker response prediction.

  • A clinical researcher, a basic scientist, and a statistician write a scientific paper reporting the results.

  • Data are made available according to funding agencies' requirements.
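
The primary end point referenced above, tumor shrinkage, is computed from the serial tumor measurements collected at each protocol time point. The following is a minimal sketch, in Python, of one common way to express that calculation as a percent change in the sum of lesion diameters; the function names and the RECIST-style response cutoffs are illustrative assumptions rather than requirements taken from the protocol.

    # Minimal sketch: percent tumor shrinkage from serial lesion measurements.
    # The response cutoffs below follow common RECIST-style conventions and are
    # illustrative assumptions, not requirements stated in the protocol above.

    def percent_change(baseline_sum_mm, followup_sum_mm):
        """Percent change in the sum of lesion diameters relative to baseline."""
        return 100.0 * (followup_sum_mm - baseline_sum_mm) / baseline_sum_mm

    def classify_response(change_pct):
        """Classify a percent change using assumed RECIST-like cutoffs."""
        if change_pct <= -30.0:
            return "partial response"      # at least 30% shrinkage
        if change_pct >= 20.0:
            return "progressive disease"   # at least 20% growth
        return "stable disease"

    # Example: baseline sum 85 mm, follow-up sum 51 mm -> -40% (partial response).
    change = percent_change(85.0, 51.0)
    print(round(change, 1), classify_response(change))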

Overlay of protein array data on the regulatory pathways with links to patient and cell culture data.


An outside researcher requests access to a consortium's Prostate SPOREs Federated Biorepositories, eleven independently maintained and managed instances of caTissue Suite.

  • A Research Fellow at a university has been working to identify SNPs that might be related to aggressive forms of prostate cancer. The Fellow has narrowed the search to 21 SNPs and discusses the results with the mentor.

  • The mentor has just returned from a biorepository presentation where he learned of the Consortium's federated biobanks built on caTissue, and he mentions that they might be a valuable resource that could aid the research. The mentor suggests that the Fellow contact a colleague at Memorial Sloan Kettering Cancer Center (MSKCC), who is a member of the Consortium.

  • The Fellow drafts an email briefly explaining the research and sends it to the MSKCC member. The Fellow asks about the possibility of searching across the Consortium's federated biobanks for cases that have X and Y and at least 3 years of outcome data. The goal is to collect enough tissue to construct a tissue microarray (TMA).

  • The MSKCC member responds and directs the Fellow to the consortium hub website, where there are details on the policies and procedures for requesting an account that allows submitting a query across each of the 11 instances of caTissue Suite. He agrees to be her sponsor.

  • The Fellow completes an online form that requests an abstract of the research, the name of the Fellow's institution, the non-profit status of that institution, and the name of the sponsoring Consortium member and that person's institution, and includes a required checkbox indicating that the Fellow has read and agrees to the terms of use.

  • The Oversight Committee (OC) of the Consortium Biorepositories has a standing, regularly scheduled telephone conference call during which the committee reviews requests to query the federated biorepositories. After each regular call, a new set of primary reviewers is elected; these reviewers are responsible for thoroughly reading new requests and presenting them to the other members of the OC for a vote.

  • The OC has authored a set of appropriate-use policy documents against which all requests are measured. The current primary reviewers read the Fellow's application and report to the other members of the OC.

  • The OC votes to approve the Fellow’s request.


  • The Fellow is notified of the OC's decision and is supplied with an account on the MSKCC instance of caTissue, since the application was sponsored by a member at MSKCC.

  • The Fellow logs into caTissue Suite at MSKCC as a researcher and formulates the parameters of the query. The Fellow submits the query and, after a period of time, sees a result set that spans 8 of the 11 instances of caTissue Suite at the Consortium sites (a sketch of this kind of federated query appears after this list).

  • The Fellow uses this information to request tissue from four institutions to build the TMA.
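
The query the Fellow submits is fanned out from the MSKCC instance to each of the Consortium's caTissue Suite instances, and the per-site hits are collated into a single result set. The sketch below illustrates that fan-out pattern in Python; the site URLs, query fields, and JSON response format are hypothetical placeholders and do not reproduce the actual caTissue Suite or caGrid programming interfaces.

    # Hedged sketch of a federated specimen query fanned out to several sites.
    # The site URLs, payload fields, and response format are hypothetical; the
    # real caTissue Suite / caGrid interfaces are not reproduced here.
    import json
    from urllib.request import Request, urlopen

    SITES = {
        "MSKCC": "https://mskcc.example.org/query",   # hypothetical endpoints
        "SiteB": "https://site-b.example.org/query",
    }

    QUERY = {
        "diagnosis": "prostate adenocarcinoma",
        "required_markers": ["X", "Y"],               # placeholders from the scenario
        "min_outcome_years": 3,
    }

    def query_site(url, query):
        """POST the query to one site and return its (assumed) JSON hit list."""
        req = Request(url, data=json.dumps(query).encode("utf-8"),
                      headers={"Content-Type": "application/json"})
        with urlopen(req, timeout=30) as resp:
            return json.load(resp)

    def federated_query(sites, query):
        """Collect hits per site, skipping sites that fail or return nothing."""
        results = {}
        for name, url in sites.items():
            try:
                hits = query_site(url, query)
            except OSError:
                continue                              # site unreachable; skip it
            if hits:
                results[name] = hits
        return results

    # Example use: results = federated_query(SITES, QUERY); len(results) is the
    # number of instances that returned matching cases.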

High-Throughput Sequencing: Using DNA Sequencing to Exhaustively Identify Tumor-Associated Mutations

Version A is "Sequencing of selected genes via Maxam-Gilbert capillary ("first generation") sequencing." Nature, 2008 Sep 4, Epub ahead of print (posted for the workgroup members).

  1. Develop a list of 2000 to 3000 genes thought to be likely targets for cancer-causing mutations.
  2. As a preliminary (lower cost) test, pick the most promising 600 genes from this list.
  3. Develop a gene model for each of these genes.
  4. Hand-modify each gene model, for example, to merge small exons into a single amplicon.
  5. Design primers for PCR amplification for each of these genes.
  6. Order primers for each exon of each of the genes.
  7. Test primers.
  8. In parallel with steps 1-7, identify matched pairs of tumor samples and normal tissue from the same individual for the tumors of interest.
  9. Have pathologists confirm that the tumor samples are what they claim to be and that they consist of a high percentage of tumor tissue.
  10. Make DNA from the tumor samples, confirming for each tumor that the quantity and quality of the DNA are adequate.
  11. PCR amplify each of the genes.
  12. Sequence each of the exons of each of the genes for each tumor and normal pair of DNA samples.
  13. Find all the differences between the tumor sequence and normal sequence.
  14. Confirm that these differences are real using custom arrays, Sequenom (mass spectrometry) technology, Biotage pyrosequencing, or a combination of these. (Biotage is a pyrosequencing-based technology directed specifically at detecting SNP-like changes.)
  15. Identify changes that are seen at a higher frequency than would occur by chance (a statistical sketch of this test appears after this list).
  16. Relate the genes in which these changes are seen to known signaling pathways.
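
Step 15 above asks whether a gene is mutated more often than chance alone would explain. One simple way to frame the test is a binomial model of the observed mutation count against an assumed background mutation rate, as sketched below; the counts and background rate are made-up numbers, and a real analysis would also model sequence context, gene length, and multiple testing across thousands of genes.

    # Sketch of step 15: is a gene mutated more often than expected by chance?
    # The background rate, counts, and simple binomial model are illustrative
    # assumptions; real pipelines use context-specific rates and correct for
    # multiple testing across thousands of genes.
    from math import comb

    def binomial_sf(k, n, p):
        """P(X >= k) for X ~ Binomial(n, p), via the complement of P(X < k)."""
        return 1.0 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

    # Hypothetical numbers: 210,000 sequenced bases for this gene across all
    # tumors (gene length x number of tumors), 7 non-silent mutations observed,
    # and an assumed background rate of 1 mutation per 100,000 bases.
    bases_screened = 210_000
    observed_mutations = 7
    background_rate = 1e-5

    p_value = binomial_sf(observed_mutations, bases_screened, background_rate)
    print(f"P(>= {observed_mutations} mutations by chance) = {p_value:.3g}")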


Version B. As above, except globally sequence all genes. Science 321: 1807-1812 (2008) (posted for the workgroup members). Delete steps 1 and 2 and replace step 3 with: 3) Develop a gene model for each of the genes in the human genome.

Version C. Whole-genome sequencing using second-generation sequencers. Hypothetical (posted for the workgroup members).

  1. Identify matched pairs of tumor samples and normal tissue from the same individual for the tumors of interest.
  2. Have pathologists confirm that the tumor samples are what they claim to be and that they consist of a high percentage of tumor tissue.
  3. Make DNA from the tumor samples, confirming for each tumor that the quantity and quality of the DNA are adequate.
  4. Sequence each of the sample pairs to the required fold coverage (7.5 to 35-fold, depending on the technology and read length); a worked example of the implied read counts appears after this list.
  5. Map the individual reads to the canonical human genome sequence.
  6. Find all the differences between the tumor sequence and normal sequence.
  7. Confirm that these differences are real using custom arrays, Sequenom (mass spectrometry) technology, Biotage pyrosequencing, or a combination of these. (Biotage is a pyrosequencing-based technology directed specifically at detecting SNP-like changes.)
  8. Identify changes that are seen at a higher frequency than would occur by chance.
  9. Relate the genes in which these changes are seen to known signaling pathways.
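
The fold-coverage target in step 4 translates into read counts through simple arithmetic: average coverage is approximately (number of reads x read length) / genome size. The sketch below works through the read counts implied by the 7.5- to 35-fold range for an approximately 3 Gb human genome; the read lengths used are illustrative assumptions about second-generation instruments, not figures taken from the scenario.

    # Back-of-the-envelope coverage arithmetic for step 4 (Version C).
    # coverage = (number of reads * read length) / genome size, so
    # reads needed = coverage * genome size / read length.
    # The read lengths below are illustrative assumptions, not values from the
    # scenario.
    GENOME_SIZE_BP = 3_000_000_000   # approximate haploid human genome

    def reads_needed(fold_coverage, read_length_bp):
        """Number of reads needed to reach a target average fold coverage."""
        return fold_coverage * GENOME_SIZE_BP / read_length_bp

    for coverage in (7.5, 35):
        for read_length in (36, 400):          # short-read vs. longer-read example
            n = reads_needed(coverage, read_length)
            print(f"{coverage:>5}x at {read_length:>3} bp reads: "
                  f"{n / 1e6:,.0f} million reads")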


This scenario is based on evaluating and enriching the NanoParticle Ontology (NPO) (posted for the workgroup). The NPO is being developed at Washington University in St. Louis to serve as a reference source of controlled vocabularies and terminologies in cancer nanotechnology research. Concepts in the NPO have instances in the data represented in a database or in the literature; in a database, these instances include field names, field entries, or both for the data model. The NPO represents the knowledge that supports unambiguous annotation and semantic interpretation of data in a database or in the literature. To expedite the development of the NPO, object models must be developed to capture the concepts and inter-concept relationships from the literature. Minimum information standards should provide guidelines for developing these object models, so that the minimum information is also captured for representation in the NPO.
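
To make the annotation idea concrete, the sketch below shows one way a database field and its entries could be tied to NPO concepts so that the stored data can be interpreted semantically. The concept identifiers, field names, and mapping structure are hypothetical illustrations and are not actual NPO identifiers or any existing database schema.

    # Hedged sketch: annotating database fields and entries with ontology
    # concepts so the data can be interpreted against the NPO. The concept IDs
    # and field names are hypothetical placeholders, not real NPO identifiers.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ConceptAnnotation:
        field_name: str      # column in the nanoparticle database
        field_entry: str     # value stored in that column (may be empty)
        concept_id: str      # ontology concept the field or entry instantiates
        concept_label: str

    annotations = [
        ConceptAnnotation("particle_type", "dendrimer",
                          "NPO_HYPOTHETICAL_0001", "dendrimer"),
        ConceptAnnotation("core_diameter_nm", "4.5",
                          "NPO_HYPOTHETICAL_0002", "particle diameter"),
    ]

    def annotations_for_field(field_name, records):
        """Return the concept annotations attached to one database field."""
        return [a for a in records if a.field_name == field_name]

    print(annotations_for_field("particle_type", annotations))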
