
Document Information

Author:  Craig Stancl, Scott Bauer, Cory Endle
Email:  Stancl.craig@mayo.edu, bauer.scott@mayo.edu, endle.cory@mayo.edu
Team:  LexEVS
Contract:   S13-500 MOD4
Client:  NCI CBIIT
National Institutes of Health
US Department of Health and Human Services

...

Attendees:  Tin, Cory, Craig, Scott, Yeon, Jason, Kim, Larry, Cuong, Jacob, Sara, Gilberto

Discussion: Triple Store/Graph Database

  • Mayo has looked at the report from the SI group from CBIIT.  
  • Larry indicated that SPARQL query support is the main focus for the NCIT, along with the ability to federate queries across SPARQL endpoints.  He would like consistent results across LexEVS and SPARQL.
  • Jason and Kim have been working on a related project (a SPARQL endpoint for LexEVS; see below).
  • Gilberto - no use cases have been prepared.  However, there are things that a terminology server alone cannot provide; he would like more integrated services.
    • For example, a researcher studying cancer may also be looking for gene data (how do I glue this information together?). If both are in RDF, then everything can be queried together with SPARQL.
    • Another example is data elements - are there other data that might be appropriate for my research? Users can start to explore ontologies for this kind of data discovery.
    • Federation of data from other SPARQL endpoints is the primary interest (see the federated-query sketch after this list).
  • Larry suggested that hierarchy and traversals might be better implemented in SPARQL instead of LexEVS.
  • Gilberto - 
    • Federated queries - yes, that is the primary focus.
    • SPARQL doesn't need to support reasoning; however, some minimal reasoning may be considered.
    • Performance isn't the priority, but it can't be a bottleneck.  (A graph DB isn't under consideration.)
    • LexEVS/CTS2 doesn't need to be tied to the triple store (not everything would be exposed through the triple store).
  • Kevin provided an overview of "what does a terminology database need to do?" and reviewed how key-value stores, document stores (MongoDB, CouchDB), relational databases, and graph databases satisfy specific functionality required by a terminology.
    • KVS - Key-Value Store; DS - Document Store; RDBMS - Relational Database; GDB - Graph Database
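
A minimal sketch of the kind of federated query discussed above, written with Apache Jena ARQ (3.x-style API). The endpoint URLs, prefixes, and predicates are hypothetical placeholders, not actual NCI, NCIT, or LexEVS endpoints; the SERVICE clause is standard SPARQL 1.1 federation.

```java
import org.apache.jena.query.*;

public class FederatedQuerySketch {
    public static void main(String[] args) {
        // Hypothetical terminology endpoint (stand-in for an NCIT-style SPARQL endpoint).
        String terminologyEndpoint = "http://terminology.example.org/sparql";

        String sparql =
            "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>\n" +
            "PREFIX ex:   <http://genes.example.org/schema#>\n" +
            "SELECT ?disease ?gene WHERE {\n" +
            "  ?disease rdfs:label \"Melanoma\"@en .\n" +
            // SERVICE federates part of the pattern out to a second (gene data) endpoint.
            "  SERVICE <http://genes.example.org/sparql> {\n" +
            "    ?gene ex:associatedWithDisease ?disease .\n" +
            "  }\n" +
            "}";

        try (QueryExecution qexec =
                 QueryExecutionFactory.sparqlService(terminologyEndpoint, sparql)) {
            ResultSetFormatter.out(System.out, qexec.execSelect());
        }
    }
}
```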

...

    • You need to look carefully at your requirements and needs when choosing a solution.

    • Kevin looked at Neo4j, OrientDB, and others, performing benchmarks to determine how well these tools are improving.
    • Overall, Kevin found ArangoDB to be the best all-around solution.  It is a mix of a document and a graph solution.
      • Modeling is open for documents, graphs, and key-value pairs
      • Allows for joins
      • Provides graph functionality.
    • Gilberto - does ArangoDB provide a SPARQL endpoint plugin?  Kevin indicated that ArangoDB may not support SPARQL.
    • Demo of ArangoDB
      • CTS2 JSON for parts of SNOMED loaded into ArangoDB.
      • Benchmarks attempted (see the traversal sketch below)
        • Neighborhood (Qualifier value) - LexEVS and CTS2 do this
          • returns in less than a second
        • Descendants (Qualifier value) - more difficult, as maxDepth is -1 (all levels)
          • returns in just over a second
          • typically done by building a table to traverse
        • Leaves (Event) (Return all the leaves)
          • Expensive to do in a DB
          • SNOMED Event branch - return all the leaves.
          • 7300 returned in less than 2 seconds.
        • Sub-Graphs (value set resolution related)
          • SNOMED root node - the entire Event branch with everything below it, the entire Observation branch, and the entire Organism branch.
          • Return how many concepts are in each branch, then provide the intersection of these branches and see what is returned.
          • returns in 3 seconds.
            • all - 354,000 
            • event - 8500
            • obs - 855
            • organism - 34000
            • intersection - 1
          • Slightly slower results on OrientDB.
        • Graph neighbors - count only
          • Counting how many nodes are in the graph is difficult in LexEVS.
          • extremely fast result. 
        • JOINS from nodes to edges
          • Joining the edges to the entity.
          • returns the relation, with its to and from nodes
        • Shortest Path to Root
          • Returns vertices and edges
      • Gilberto - how much difference was there between the reviewed tools?
        • Kevin - OrientDB and ArangoDB are similar.  Neo4j is the most mature of all, but didn't have the same performance and is more of a pure graph database.
      • Tracy - to satisfy the need for a SPARQL endpoint, is Neo4j best?
        • Kevin suggests that ArangoDB is not the way to go for SPARQL requirements.
      • Kevin's use cases for ArangoDB are based on performance and the ability to quickly meet the requirements of users.
      • Larry - how could this be used in combination with LexEVS and other tools?
        • Kevin - the use of multiple stores/services is becoming more common to accomplish specific tasks.
    • NG (Kim and Jason) have been working on a SPARQL endpoint for LexEVS
      • It doesn't have to go through the database layer, so it is faster.
      • Kim demoed some working code as part of the browser.
      • Trees and hierarchy queries are faster.
      • Continuing to review and understand how SPARQL can apply to EVS tools.
    • Larry - how difficult will it be to deploy a triple store and graph DB in the NCI environment?
      • Sara - if the tools are part of the build and deploy process (aside from security concerns), then the support team can use them (for example, Struts, Spring, etc.).
      • This impacts the DBAs more than the systems team.  It depends on whether the project teams need DBA support.
      • CBIIT managed hosting (supported by infrastructure teams) is currently how EVS is supported.
    • Scott indicated graph queries in LexEVS could be supported by triple stores.  
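
A minimal sketch of the kind of traversal benchmarked in the ArangoDB demo above (descendants of a start concept), assuming a 6.x-style ArangoDB Java driver and hypothetical database, collection, graph, and vertex names ("terminology", "concepts", "snomedGraph"); it is not the actual demo code, and the fixed traversal depth stands in for maxDepth = -1 (all).

```java
import com.arangodb.ArangoCursor;
import com.arangodb.ArangoDB;
import com.arangodb.ArangoDatabase;
import com.arangodb.entity.BaseDocument;
import com.arangodb.model.AqlQueryOptions;

import java.util.HashMap;
import java.util.Map;

public class TraversalSketch {
    public static void main(String[] args) {
        ArangoDB arango = new ArangoDB.Builder().host("127.0.0.1", 8529).build();
        ArangoDatabase db = arango.db("terminology");      // hypothetical database name

        // Descendants of a start vertex via a breadth-first graph traversal.
        String aql = "FOR v IN 1..50 OUTBOUND @start GRAPH 'snomedGraph' "
                   + "OPTIONS { bfs: true, uniqueVertices: 'global' } "
                   + "RETURN v";
        Map<String, Object> bindVars = new HashMap<>();
        bindVars.put("start", "concepts/138875005");       // hypothetical vertex id

        ArangoCursor<BaseDocument> cursor =
            db.query(aql, bindVars, new AqlQueryOptions(), BaseDocument.class);

        long count = 0;
        while (cursor.hasNext()) {
            cursor.next();
            count++;
        }
        System.out.println("Descendants returned: " + count);
        arango.shutdown();
    }
}
```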

...

    • However, metadata cannot be supported in a triple store.  ArangoDB, for example, could provide metadata on the edges.
      • A design would need to be considered to provide a hybrid solution.  The LexEVS API would need to be transparent to the users, wrapping content from the triple store and the LexEVS DB (see the sketch below).
    • The Mayo and NCI teams need to clarify the strengths and weaknesses of each approach and determine how best to address them.
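
A minimal sketch of the hybrid idea above: a facade that keeps the API transparent to users while delegating graph traversal to a triple store or graph DB and metadata lookups to the LexEVS database. All of the type and method names here are hypothetical illustrations, not part of the existing LexEVS API.

```java
import java.util.List;
import java.util.stream.Collectors;

/** Hypothetical backends; neither interface exists in LexEVS today. */
interface GraphBackend {                 // e.g. backed by a SPARQL endpoint or ArangoDB
    List<String> descendantsOf(String conceptCode);
}

interface MetadataBackend {              // e.g. backed by the existing LexEVS relational DB
    String definitionOf(String conceptCode);
}

/** The facade the client sees: one API, two stores behind it. */
class HybridTerminologyService {
    private final GraphBackend graph;
    private final MetadataBackend metadata;

    HybridTerminologyService(GraphBackend graph, MetadataBackend metadata) {
        this.graph = graph;
        this.metadata = metadata;
    }

    /** Traversal goes to the graph store; per-concept metadata comes from the LexEVS DB. */
    List<String> describeDescendants(String conceptCode) {
        return graph.descendantsOf(conceptCode).stream()
                .map(code -> code + ": " + metadata.definitionOf(code))
                .collect(Collectors.toList());
    }
}
```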

Discussion: Cloud Considerations and Discussion: Build and Deployment Process

Attendees:  Tin, Cory, Craig, Scott, Yeon, Jason, Kim, Larry, Cuong, Jacob, Sara, AJ, Gilberto

  • Scott described considerations for cloud usage
    • Auto deploy
    • Auto Scale resources
    • Uptime
    • Sharable instances
  • Kevin noted that technology is starting to provide the ability to deliver what the cloud promises.  The cloud is more about changing your development lifecycle than about hardware.  The cloud is not server virtualization.
  • Kevin demo'ed the use of Docker with LexEVS
    • Sara asked if it is possible to take a Docker image and use it on different tiers, by passing in a variable to let the application know what to configure for that tier.
      • Kevin - this is the idea, and those variables can be stored in version control.  This simplifies the process (see the tier-configuration sketch after this list).
    • Micro-architectures and micro-services are what is important today.  LexEVS fits this model well.
    • Attempting to document a LexEVS install is complex.  
    • Docker Example has:
      • LexEVS
      • LexEVS-cts2
      • LexEVS-remote
      • mysql
      • uriresolver
    • This Docker configuration provides a complete LexEVS environment.
    • Kevin described the use of a Nexus server.  Similar to Maven artifacts, Docker images can be hosted on a private or public Nexus repository.  Nexus has expanded to include Docker support - internal Docker repositories.
    • Sara - the Tomcat, MySQL, and OS images come from public repositories.
    • Application versions can be specified or simply pull the latest from the docker server.
    • Sara - CBIIT is not ready to support Docker, and support won't be available by March 2016.
    • The goal is to provide Docker images for on-premise (NCI) installation or installation to external cloud services as required by NCI.
    • Sara - considerations for storing configuration files: we need to consider how passwords and other information are handled.
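
A minimal sketch of the tier-by-variable idea discussed above: the same image reads an environment variable injected at deploy time and picks the matching configuration. The variable name, tier names, and hostnames are hypothetical.

```java
public class TierConfig {
    public static void main(String[] args) {
        // Hypothetical variable injected into the container per tier (dev/qa/prod).
        String tier = System.getenv().getOrDefault("LEXEVS_TIER", "dev");

        String dbUrl;
        switch (tier) {
            case "qa":
                dbUrl = "jdbc:mysql://qa-db:3306/lexevs";    // hypothetical QA host
                break;
            case "prod":
                dbUrl = "jdbc:mysql://prod-db:3306/lexevs";  // hypothetical prod host
                break;
            default:
                dbUrl = "jdbc:mysql://localhost:3306/lexevs";
        }
        System.out.println("Configuring tier '" + tier + "' with database " + dbUrl);
    }
}
```

The values themselves (minus passwords) can live in version control alongside the deployment definition, as noted above.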

1:00 PM - 3:00 PM

1W030

Discussion: Value Set Editor (Authoring)

  • NCI to provide requirements/use cases 
  

Discussion: The Future of lbGUI

  • Discuss future requirements
  • Review issues - JIRA and others
  • Develop a roadmap to address technical debt
  • Determine next steps/roadmap

Discussion: Build and Deployment Process and Discussion: Value Set Editor (Authoring)

Attendees:  Cory, Craig, Scott, Jason, Kim, Larry, Gilberto, Rob

  • Dev Ops Discussions carried over from earlier this morning.
    • Continuous Integration
      • Cory discussed continuous integration server usage and how it is used by the Mayo development team.
      • There was discussion about how to include CI server functionality to provide value to both the browser team and LexEVS team.  
      • Currently Jenkins is unofficially supported at NCI, but they are supporting what is needed by the project teams.  
      • Mayo does have Travis and Jenkins, but suggests the use of Jenkins.  
      • Jason - there is interest and some value, but it is limited given the small number of developers (one).  It was suggested that it would be best to discuss with the Dev Ops group before setting up something else.
  • Value Set Editor

    • Ability to efficiently load the resolved value sets
    • Tracy - Value Set authoring doesn't occur often.  The value set resolution happens every time there is a new version of NCIT.  
    • FDA and CDISC value sets are based on NCIT concepts.
      • Report writer templates are used for individual value sets.
    • Further review of value set workflow (including resolution) is needed to determine requirements and proposed changes.  

Discussion: The Future of lbGUI

Attendees:  Cory, Craig, Scott, Jason, Kim, Larry, Gilberto, Rob

  • Current usage of lbGUI:
    • Mayo team uses the GUI to verify loads. (Development)
    • Tracy uses it when the admin scripts aren't working.  (Admin)
    • Rob uses it for cleanup.  (Admin)
    • Gilberto uses it to determine if data is loaded correctly. (Data)
  • Scott noted that the representation of data in lbGUI isn't always correct.  It is better to look at the DB to determine how things are loaded.
  • Technology needs to be updated
    • Usage is becoming unstable.
    • There are several known bugs.
  • The focus should be on expanding the admin scripts and moving away from lbGUI.
  • Further review of admin workflow is needed to determine requirements and proposed admin script changes and additions.
  • lbGUI will continue to be minimally supported as needed by the development team.
  • Consider the ability to view additional metadata in the GUI.

3:00 PM - 5:00 PM

1W030

Debrief

  • Prioritize
  • Determine next steps/roadmap

Debrief

Attendees:  Cory, Craig, Scott, Jason, Kim, Larry, Gilberto

 

 

