Building Patient Cohorts with NLP and Knowledge Graphs

Amir Kermany
Moritz Steller
David Talby
Michael Sanky


Check out the solution accelerator to download the notebooks referred to throughout this blog.

Cohort building is an essential part of patient analytics. Defining which patients belong to a cohort, testing the sensitivity of various inclusion and exclusion criteria on sample size, building a control cohort with propensity score matching techniques: these are just some of the processes that healthcare and life sciences researchers live with day in and day out, and that's unlikely to change anytime soon. What is changing is the underlying data, the complexity of clinical criteria, and the dynamism demanded by the industry.

While tools exist for building patient cohorts based on structured data from EHRs or claims, their practical utility is limited. More and more, cohort building in healthcare and life sciences requires criteria extracted from unstructured and semi-structured clinical documentation with Natural Language Processing (NLP) pipelines. Making this a reality requires a seamless combination of three technologies:

(1) a platform that scales for computationally-intensive calculations of massive real world datasets,
(2) an accurate NLP library & healthcare-specific models to extract and relate entities from medical documents, and
(3) a knowledge graph toolset, able to represent the relationships between a network of entities.

The latest solution from John Snow Labs and Databricks brings all of this together in the Lakehouse.

Optimizing clinical trial protocols

Let's consider one high impact application of dynamic cohort building.

Recruiting and retaining patients for clinical trials is a long-standing problem that the pandemic has exacerbated. 80% of trials are delayed due to recruitment problems [1], with many sites under-enrolling. Delays in recruitment have huge financial implications in terms of both the cash burn to manage extended trials and the opportunity cost of patent life, not to mention the implications of delaying potentially life-saving medications.

One of the challenges is that as medications become more specialized, clinical trial protocols are increasingly complex. It is not uncommon to see upwards of 40 different criteria for inclusion and exclusion. The old adage "measure twice, cut once" is exceedingly important here. Let's look at a relatively straightforward example of a protocol for a Phase 3 trial estimated to run for six years: Effect of Evolocumab in Patients at High Cardiovascular Risk Without Prior Myocardial Infarction or Stroke (VESALIUS-CV) [2]:

Inclusion and exclusion criteria for the VESALIUS-CV trial protocol.

In terms of protocol design, the inclusion and exclusion criteria must be targeted enough to have the appropriate clinical sensitivity, and broad enough to facilitate recruitment. Real-world data can provide the guideposts to help forecast patient eligibility and understand the relative impact of various criteria. In the example above, does left-ventricular ejection fraction > 30% limit the population by 10%? By 20%? How about eGFR < 15? Does clinical documentation include mentions of atrial flutter that are not diagnosed, which would impact screen failure rates?

Fortunately, these questions can be answered with real-world data and AI.

Site selection and patient recruitment

Similar challenges exist once a clinical trial protocol has been defined. One of the next decisions for a pharmaceutical company is where to set up sites for the trial. Setting up a site is time consuming, expensive, and often wasteful: over two-thirds of sites fail to meet their original patient enrollment goals, and up to 50% of sites enroll one or no patients in their studies [3].

This challenge is amplified in newer clinical trials - especially those focusing on rare diseases, or on cancer patients with specific genomic biomarkers. In those cases, a hospital may see only a handful of relevant patients per year, so estimating in advance how many patients are candidates for a trial, and then actually recruiting them when they appear, are both critical to timely success.

The advent of precision health leads to many more clinical trials that target a very small population [4]. This requires automation at scale to find candidate patients for these trials, as well as state-of-the-art NLP capabilities, since trial inclusion and exclusion criteria increasingly call out facts that are only available in unstructured text. These facts include genomic variants, social determinants of health, family history, and specific tumor characteristics.

Fortunately, new AI technology is now ready to meet these challenges.

Design and Run Better Clinical Trials with John Snow Labs & Databricks

First, let's understand the end-to-end solution architecture for Patient Cohort Building with NLP and Knowledge Graphs:

The end-to-end solution architecture for patient cohort building with NLP and knowledge graphs.

We will build a Knowledge Graph (KG) using Spark NLP relation extraction models and a graph API. The main point of this solution is to show how to create a clinical knowledge graph using Spark NLP pretrained models. For this purpose, we will use pretrained relation extraction and NER models. After creating the knowledge graph, we will query the KG for insightful results.

Building Patient Cohorts with NLP and Knowledge Graphs was presented at DAIS 2022; you can view the session here: demo.

NLP Pre-Processing

Overall, there are 965 clinical records in our example dataset, stored in a Delta table. We read the data and write the records into bronze Delta tables.
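As a minimal sketch of this ingest step (the source path and table name below are illustrative placeholders, not the accelerator's actual names):

# Read the raw clinical notes and persist them to the bronze layer.
df_notes = spark.read.format("delta").load("/data/raw/clinical_notes")  # placeholder path
print(df_notes.count())  # 965 records in the example dataset
df_notes.write.format("delta").mode("overwrite").saveAsTable("bronze_clinical_notes")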

An example dataset of clinical health records stored in a Delta table within Delta Lake.

To extract relationships from the text in this dataframe, Spark NLP for Healthcare applies a posology relation extraction pretrained model that supports the following relations:
DRUG-DOSAGE, DRUG-FREQUENCY, DRUG-ADE (Adverse Drug Events), DRUG-FORM, DRUG-ROUTE, DRUG-DURATION, DRUG-REASON, DRUG-STRENGTH

The model has been validated against the posology dataset described in (Magge, Scotch, & Gonzalez-Hernandez, 2018) http://proceedings.mlr.press/v90/magge18a/magge18a.pdf.

Relation | Recall | Precision | F1 | F1 (Magge, Scotch, & Gonzalez-Hernandez, 2018)
DRUG-ADE | 0.66 | 1.00 | 0.80 | 0.76
DRUG-DOSAGE | 0.89 | 1.00 | 0.94 | 0.91
DRUG-DURATION | 0.75 | 1.00 | 0.85 | 0.92
DRUG-FORM | 0.88 | 1.00 | 0.94 | 0.95*
DRUG-FREQUENCY | 0.79 | 1.00 | 0.88 | 0.90
DRUG-REASON | 0.60 | 1.00 | 0.75 | 0.70
DRUG-ROUTE | 0.79 | 1.00 | 0.88 | 0.95*
DRUG-STRENGTH | 0.95 | 1.00 | 0.98 | 0.97

*Magge, Scotch, Gonzalez-Hernandez (2018) collapsed DRUG-FORM and DRUG-ROUTE into a single relation.

Within our NLP pipeline, Spark NLP for Healthcare follows the standardized steps of preprocessing (documenter, sentencer, tokenizer), word embeddings, part-of-speech tagging, NER, dependency parsing, and relation extraction. Relation extraction is the most important step in this pipeline, as it connects the extracted NER chunks through relationships.
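The following is a minimal sketch of such a pipeline, assuming Spark NLP for Healthcare is installed and licensed. The pretrained models used here (embeddings_clinical, pos_clinical, ner_posology, dependency_conllu, posology_re) are publicly documented John Snow Labs models; the exact stages in the accelerator notebooks may differ:

from pyspark.ml import Pipeline
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import (SentenceDetector, Tokenizer, WordEmbeddingsModel,
                                PerceptronModel, DependencyParserModel)
from sparknlp_jsl.annotator import MedicalNerModel, NerConverterInternal, RelationExtractionModel

# Standardized preprocessing: documenter, sentencer, tokenizer
documenter = DocumentAssembler().setInputCol("text").setOutputCol("document")
sentencer = SentenceDetector().setInputCols(["document"]).setOutputCol("sentence")
tokenizer = Tokenizer().setInputCols(["sentence"]).setOutputCol("token")

# Clinical word embeddings and part-of-speech tags
embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models") \
    .setInputCols(["sentence", "token"]).setOutputCol("embeddings")
pos_tagger = PerceptronModel.pretrained("pos_clinical", "en", "clinical/models") \
    .setInputCols(["sentence", "token"]).setOutputCol("pos")

# Posology NER plus chunking, dependency parsing, and relation extraction
ner = MedicalNerModel.pretrained("ner_posology", "en", "clinical/models") \
    .setInputCols(["sentence", "token", "embeddings"]).setOutputCol("ner")
ner_chunker = NerConverterInternal().setInputCols(["sentence", "token", "ner"]).setOutputCol("ner_chunk")
dep_parser = DependencyParserModel.pretrained("dependency_conllu", "en") \
    .setInputCols(["sentence", "pos", "token"]).setOutputCol("dependency")
posology_re = RelationExtractionModel.pretrained("posology_re", "en", "clinical/models") \
    .setInputCols(["embeddings", "pos", "ner_chunk", "dependency"]).setOutputCol("relations")

pipeline = Pipeline(stages=[documenter, sentencer, tokenizer, embeddings, pos_tagger,
                            ner, ner_chunker, dep_parser, posology_re])
relations_df = pipeline.fit(df_notes).transform(df_notes)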

The resulting dataframe includes all relationships accordingly:

Spark NLP for Healthcare maps the relationships within the data for analysis.

Within our Lakehouse for Healthcare, this final dataframe will be written to the silver layer.

Next, the RxNorm codes are extracted from the previously established dataset. First, we use basic rule-based logic to define and clean up 'entity1' and 'entity2'; then an SBERT (Sentence-BERT) based embedder and a BioBERT-based resolver transform the drug mentions into RxNorm codes.
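A minimal sketch of this resolution step, assuming the publicly documented sbiobert_base_cased_mli sentence embedder and sbiobertresolve_rxnorm resolver, and reusing the ner_chunk column from the pipeline above (the accelerator's exact stages may differ):

from pyspark.ml import Pipeline
from sparknlp.base import Chunk2Doc
from sparknlp.annotator import BertSentenceEmbeddings
from sparknlp_jsl.annotator import SentenceEntityResolverModel

# Re-package drug NER chunks as documents for sentence-level embeddings
chunk2doc = Chunk2Doc().setInputCols(["ner_chunk"]).setOutputCol("ner_chunk_doc")
sbert_embedder = BertSentenceEmbeddings.pretrained("sbiobert_base_cased_mli", "en", "clinical/models") \
    .setInputCols(["ner_chunk_doc"]).setOutputCol("sbert_embeddings")

# Resolve each embedded chunk to its closest RxNorm concept
rxnorm_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_rxnorm", "en", "clinical/models") \
    .setInputCols(["sbert_embeddings"]).setOutputCol("rxnorm_code")

rxnorm_pipeline = Pipeline(stages=[chunk2doc, sbert_embedder, rxnorm_resolver])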

See below for the first three records of the silver-layer dataset: the extracted Rx-related text, its NER chunks, the applicable RxNorm code, all related codes, the RxNorm resolutions, and the final drug resolution.

The results of transformed data within the silver layer of Delta Lake.

This result dataframe is written to the gold layer.

Lastly, a pretrained named entity recognition deep learning model for clinical terminology, ner_jsl_slim (https://nlp.johnsnowlabs.com/2021/08/13/ner_jsl_slim_en.html), is applied to our initial dataset to extract generalized entities from the medical text.
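Swapping this model into the NER stage of the earlier pipeline sketch might look as follows (reusing the preprocessing and embedding stages defined above):

# ner_jsl_slim groups detailed JSL entity types into generalized labels
ner_jsl_slim = MedicalNerModel.pretrained("ner_jsl_slim", "en", "clinical/models") \
    .setInputCols(["sentence", "token", "embeddings"]).setOutputCol("ner")

generalized_pipeline = Pipeline(stages=[documenter, sentencer, tokenizer,
                                        embeddings, ner_jsl_slim, ner_chunker])
generalized_df = generalized_pipeline.fit(df_notes).transform(df_notes)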

The result dataframe includes the NER chunk and NER label from the unstructured text:

Using deep learning, generalized entities can be recognized and extracted for the gold layer within Delta Lake.

This result dataframe is written to the gold layer.

Creating and Querying the Knowledge Graph

For the creation of the Knowledge Graph (KG), the prior result dataframes in the gold layer are required, as well as additional tabular de-identified demographic information about patients:

For building the KG, the best practice is to use your main cloud provider's graph capabilities. Two agnostic options for building a sufficient graph are: (1) write your dataframe to a NoSQL database and use its graph API, or (2) use a native graph database.

The goal of both options is to arrive at a graph schema for the extracted entities that looks like the following:

A visual representation of a graph schema to retrieve information based on underlying relationships for querying.

This can be achieved by splitting the dataframe into multiple dataframes by ner_label and creating nodes and relationships. Two examples of establishing relationships, written in Cypher (https://neo4j.com/developer/cypher/) and wrapped in Python, are shown below; both rely on an update_data helper, sketched after these functions:

def add_symptoms(rows, batch_size=500):
	# Match each patient by subject_id, merge a Symptom node per NER chunk,
	# and link them with a dated IS_SYMPTOM relationship.
	query = '''
	UNWIND $rows as row
	MATCH(p:Patient{name:row.subject_id})
	MERGE(n:Symptom {name:row.chunk})
	MERGE (p)-[:IS_SYMPTOM{date:date(row.date)}]->(n)

	WITH n
	MATCH (n)
	RETURN count(*) as total
	'''
	return update_data(query, rows, batch_size)

def add_tests(rows, batch_size=500):
	# Same pattern for lab and diagnostic tests, linked via IS_TEST.
	query = '''
	UNWIND $rows as row
	MATCH(p:Patient{name:row.subject_id})
	MERGE(n:Test {name:row.chunk})
	MERGE (p)-[:IS_TEST{date:date(row.date)}]->(n)

	WITH n
	MATCH (n)
	RETURN count(*) as total
	'''
	return update_data(query, rows, batch_size)
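The update_data helper these functions call is not shown in the excerpt above; a minimal sketch using the official neo4j Python driver might look like this (the connection URI and credentials are placeholders):

from neo4j import GraphDatabase

# Placeholder connection details; substitute your instance's URI and credentials.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def update_data(query, rows, batch_size=500):
    # Send rows (a list of dicts) to Neo4j in batches; each UNWIND query
    # returns a single record with the total count of processed rows.
    total = 0
    with driver.session() as session:
        for start in range(0, len(rows), batch_size):
            batch = rows[start:start + batch_size]
            result = session.run(query, rows=batch)
            total += result.single()["total"]
    return total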

Once the KG is properly established via either of the two options (in this example, a graph database), a schema check validates the count of records in each node and relationship:

Running a schema check ensures that the format and data relationships are as expected.
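One simple way to run such a count check (an illustrative query, not taken from the accelerator):

# Count records per node label to validate the loaded graph
node_counts_query = '''
MATCH (n)
RETURN labels(n)[0] AS node_label, count(*) AS records
ORDER BY node_label
'''
with driver.session() as session:
    for record in session.run(node_counts_query):
        print(record["node_label"], record["records"])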

The KG is now ready to be intelligently queried to retrieve information based on the relationships established in our prior NLP relation extraction steps. The following set of queries answers clinical questions:

1. Patient 21153's journey in medical records: symptoms, procedures, disease-syndrome-disorders, tests, drugs & RxNorm codes:

Query:

patient_name = '21153'
 
query_part1 = f'MATCH (p:Patient)-[r1:IS_SYMPTOM]->(s:Symptom) WHERE p.name = {patient_name} '

query_part2 = '''
WITH DISTINCT p.name as patients, r1.date as dates, COLLECT(DISTINCT s.name) as symptoms, COUNT(DISTINCT s.name) as num_symptoms
 
MATCH (p:Patient)-[r2:IS_PROCEDURE]->(pr:Procedure)
WHERE p.name=patients AND r2.date = dates
 
WITH DISTINCT p.name as patients, r2.date as dates, COLLECT(DISTINCT pr.name) as procedures, COUNT(DISTINCT pr.name) as num_procedures, symptoms, num_symptoms
MATCH (p:Patient)-[r3:IS_DSD]->(_d:DSD)
WHERE p.name=patients AND r3.date = dates
 
WITH DISTINCT p.name as patients, r3.date as dates, symptoms, num_symptoms, procedures, num_procedures,  COLLECT(DISTINCT _d.name) as dsds, COUNT(DISTINCT _d.name) as num_dsds
MATCH (p:Patient)-[r4:IS_TEST]->(_t:Test)
WHERE p.name=patients AND r4.date = dates
 
WITH DISTINCT p.name as patients, r4.date as dates, symptoms, num_symptoms, procedures, num_procedures, dsds, num_dsds, COLLECT(_t.name) as tests, COUNT(_t.name) as num_tests
MATCH (p:Patient)-[r5:RXNORM_CODE]->(rx:RxNorm)-[r6]->(_d:Drug)
WHERE p.name=patients AND r5.date = dates
RETURN DISTINCT p.name as patients, r5.date as dates, symptoms, num_symptoms, procedures, num_procedures, dsds, num_dsds, tests, num_tests, COLLECT(DISTINCT toLower(_d.name)) as drugs, COUNT(DISTINCT toLower(_d.name)) as num_drugs, COLLECT(DISTINCT rx.code) as rxnorms, COUNT(DISTINCT rx.code) as num_rxnorm
ORDER BY dates;
'''
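To produce the dataframe view below, the two query parts can be concatenated and executed with the same driver, then collected into pandas (a sketch):

import pandas as pd

with driver.session() as session:
    result = session.run(query_part1 + query_part2)
    journey_df = pd.DataFrame([record.data() for record in result])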

Dataframe:


Graph:

A visual graph that uses NLP to show established relationships between data records.

2. Patients who are prescribed Lasix between May 2060 and May 2125:

Query:

query_string ='''
MATCH (p:Patient)-[rel_rx]->(rx:RxNorm)-[rel_d]->(d:Drug)-[rel_n:DRUG]->(n:NER)
WHERE d.name IN ['lasix']
  	AND rel_n.patient_name=p.name
  	AND rel_n.date=rel_rx.date
  	AND rel_rx.date >= date("2060-05-01")
  	AND rel_n.date >= date("2060-05-01")
  	AND rel_rx.date < date("2125-05-01")
  	AND rel_n.date < date("2125-05-01")
RETURN DISTINCT
  	d.name as drug_generic_name,
  	p.name as patient_name,
  	rel_rx.date as date
ORDER BY date ASC
'''

Dataframe:


Graph:

A visual graph that uses NLP to show established relationships between patient records and medication.

3. Dangerous drug combinations:

Query:

query_string ='''
WITH ["ibuprofen", "naproxen", "diclofenac", "indometacin", "ketorolac", "aspirin", "ketoprofen", "dexketoprofen", "meloxicam"] AS nsaids
MATCH (p:Patient)-[r1:RXNORM_CODE]->(rx:RxNorm)-[r2]->(d:Drug)
WHERE any(word IN nsaids WHERE d.name CONTAINS word)
WITH DISTINCT p.name as patients, COLLECT(DISTINCT d.name) as nsaid_drugs, COUNT(DISTINCT d.name) as num_nsaids
MATCH (p:Patient)-[r1:RXNORM_CODE]->(rx:RxNorm)-[r2]->(d:Drug)
WHERE p.name=patients AND d.name='warfarin'
RETURN DISTINCT patients,
            	nsaid_drugs,
            	num_nsaids,
            	d.name as warfarin_drug,
            	r1.date as date
'''

Dataframe:


Graph:

A visual graph that uses NLP to show established relationships between prescription codes and medication.

4. Patients with hypertension or diabetes with chest pain:

Query:

query_string = """
MATCH (p:Patient)-[r:IS_SYMPTOM]->(s:Symptom),
(p1:Patient)-[r2:IS_DSD]->(_dsd:DSD)
WHERE s.name CONTAINS "chest pain" AND p1.name=p.name AND _dsd.name IN ['hypertension', 'diabetes'] AND r2.date=r.date
RETURN DISTINCT p.name as patient, r.date as date, _dsd.name as dsd, s.name as symptom
"""

Dataframe:


Graph:

A visual graph that uses NLP to show established relationships between patient records and medical symptoms.

Spark NLP and your preferred native KG database or KG API work well together for building knowledge graphs from extracted entities and established relationships. In many scenarios, federal agencies and industry enterprises need to retrieve cohorts quickly to gain population health or adverse event insights. Since most of this data exists as unstructured text in clinical documents, we can, as demonstrated, create a scalable and automated production solution that extracts entities, builds their relationships, establishes a KG, and answers intelligent queries, with the Lakehouse supporting the process end to end.

Start building your Cohorts with Knowledge Graphs using NLP

With this Solution Accelerator, Databricks and John Snow Labs make it easy to build clinical cohorts using KGs.

To use this Solution Accelerator, you can preview the notebooks online and import them directly into your Databricks account. The notebooks include guidance for installing the related John Snow Labs NLP libraries and license keys.

You can also visit our Lakehouse for Healthcare and Life Sciences page to learn about all of our solutions.

[1] https://www.biopharmadive.com/spons/decentralized-clinical-trials-are-we-ready-to-make-the-leap/546591
[2] https://clinicaltrials.gov/ct2/show/NCT03872401
[3] https://www.clinicalleader.com/doc/considerations-for-improving-patient-0001
[4] https://www.webmd.com/cancer/precision-medicine-clinical-trials
