The World’s Struggle Against Hypertension – The Role of Artificial Intelligence


When one considers the vast array of diseases the world contends with daily, hypertension (HTN) ranks among the most serious, being a major risk factor for heart and circulatory health. HTN affects a significant portion of the world’s population. According to the World Health Organization, “an estimated 1.13 billion people worldwide have hypertension, most (two-thirds) living in low- and middle-income countries”. Notably, fewer than 1 in 5 people have the condition under control. Hypertension is a major global cause of premature death, and no viable large-scale solution is known.

What is Hypertension?

The World Health Organization states, “Blood pressure is the force exerted by circulating blood against the walls of the body’s arteries, the major blood vessels in the body. Hypertension is when blood pressure is too high.” Blood pressure has two measures: systolic and diastolic. Systolic refers to the pressure in the blood vessels as the heart contracts or beats; diastolic refers to the pressure in the blood vessels when the heart rests between beats. Using these two measures, hypertension is diagnosed when systolic pressure is greater than or equal to 140 mmHg and/or diastolic pressure is greater than or equal to 90 mmHg, as measured on two different days.
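Expressed as code, the WHO rule reduces to a simple threshold check. The sketch below is purely illustrative (the function names are invented for this article, and it is no substitute for clinical diagnosis); per the WHO definition, a reading counts as raised when either measure meets its threshold, and the criteria require raised readings on two different days:

```python
# Illustrative sketch of the WHO diagnostic thresholds; not a clinical tool.

def reading_is_raised(systolic: float, diastolic: float) -> bool:
    """True when a single reading meets the hypertensive-range threshold:
    systolic >= 140 mmHg and/or diastolic >= 90 mmHg."""
    return systolic >= 140 or diastolic >= 90

def meets_htn_criteria(day1: tuple, day2: tuple) -> bool:
    """True when (systolic, diastolic) readings from two different days
    are both raised."""
    return reading_is_raised(*day1) and reading_is_raised(*day2)

print(meets_htn_criteria((142, 88), (145, 91)))  # True: systolic raised both days
print(meets_htn_criteria((135, 85), (150, 95)))  # False: day-one reading is normal
```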

The Opportunity

Predicting hypertension and diagnosing it early could conceivably help prevent related health complications. The WHO’s targets for noncommunicable diseases include one to “reduce the prevalence of hypertension by 25% by 2025 (baseline 2010).” With these goals in mind, researchers around the world continue to seek methods to reduce hypertension, yet they encounter many challenges in the pursuit.

Challenges for Reducing Hypertension

Two major challenges in dealing with hypertension globally are the collection of data and the sheer number of factors that contribute to hypertension, which complicates efforts to control the disease.

Collecting data: Traditional monitoring of blood pressure (BP) is a challenge because of limited access to cuff-based monitors, inconsistent usage, and a lack of trend data upon which healthcare practitioners can make informed decisions and which could provide researchers with reliable, ongoing data.

Complexity: A litany of factors impacts HTN (genome, BP, environment, age, weight, sleep quality and quantity, smoking, alcohol use, etc.). This creates challenges for clinicians in accurately categorizing individuals as hypertensive and recommending next steps, and for individuals in doing all that is necessary to reduce their own hypertension risk.

What Are Hypertension Focused Groups Using AI For?

Academics, institutions, and companies are all part of the effort to reduce the impact of hypertension by enlisting the aid of artificial intelligence (AI). The list of use cases and applications is long, including:

  • Predicting early stages of hypertension, diagnosing HTN, and assessing its risk of incidence
  • Identifying risk factors and phenotypes of HTN
  • Identifying treatment adherence success factors
  • Estimating blood pressure (BP)
  • Developing novel cuffless methods for BP measurement
  • Analyzing data from randomized controlled trials
  • Transforming clinical practice by introducing personalized, individualized prevention and treatment approaches (optimal BP goals, antihypertensive medication regimens, targeted interventions)
  • Leveraging multi-omics and wearables to inform prescribers and patients about specific factors that may impact their BP control
  • Analyzing DNA or RNA data streams and sequencing tools for hypertension identification and pharmaceutical development

Barriers to AI Use

Artificial intelligence is consistently characterized in the literature of the last two years as immature where hypertension applications are concerned. As with any AI venture, the technology must prove its consistency, accuracy, and reliability in the BP sphere. AI must also be able to incorporate a multitude of HTN factors (gut bacteria, ECG, BP, DNA, etc.) at a level of accuracy and reliability sufficient to meet any minimum standards or regulations.

Is there a role for AI in HTN?

Study after study in 2019, 2020, and now early 2021 argues that AI has a role to play in reducing the incidence of hypertension. According to Koshimizu, Kojima & Okuno, “The use of artificial intelligence in numerous prediction and classification tasks, including clinical research and healthcare management, is becoming increasingly more common.”

To support the reduction of the global health and economic burden of HTN, AI research and development has expanded in the last two years, both academically and commercially, as many recent studies (2019 and 2020) demonstrate; now is an ideal time to apply AI to hypertension.

One 2021 study titled “Risk Stratification for Early Detection of Diabetes and Hypertension in Resource-Limited Settings: Machine Learning Analysis” led by the Department of Industrial and Systems Engineering, University of Wisconsin-Madison, and the Department of Mechanical and Industrial Engineering at the University of Toronto, stated “In the next decade, health systems in many countries are planning to spend significant resources on noncommunicable disease screening programs and our study demonstrates that machine learning (ML) models can be leveraged by these programs to effectively utilize limited resources by improving risk stratification.”

Accuracy results vary widely when AI is applied to big data. Tests using various approaches, such as deep learning, support vector machines (SVM), neural network algorithms, machine learning (ML) algorithms, and photoplethysmography via wearable biosensors and portable devices, appear in studies such as the one by Kostis et al. published in the International Journal of Cardiology Hypertension.
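A typical benchmarking setup behind such studies pits several model families against one another under cross-validation on the same cohort. The sketch below is a hedged illustration only: synthetic data stands in for a real patient cohort, and common scikit-learn baselines stand in for the model families named above:

```python
# Hedged illustration: synthetic data replaces a real cohort; the models
# mirror families named in the studies above (SVM, logistic regression,
# random forest), compared with 5-fold cross-validated AUC.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for patient features (age, BMI, BP history, ...).
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm": SVC(),
    "random_forest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {scores.mean():.3f}")
```

Real studies report wide-ranging numbers precisely because cohort, features, and model family all vary; a shared cross-validation harness like this is what makes the comparisons meaningful.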

Beyond identifying HTN, AI can be effective in HTN management. Koshimizu et al argue their study shows “that artificial intelligence is advantageous for hypertension management and can be used to establish clinical evidence for the practical management of hypertension.”

Temple University in Philadelphia was recently awarded $748,000 to build an ML system to “analyze electronic health data, identifying patients who are likely to have persistently uncontrolled blood pressure (BP) so doctors can be more proactive with their interventions”. The outcome of the algorithm for the clinician would be to provide a prediction, leveraging a variety of factors, that a “particular patient is more likely to come back to their next appointment still having uncontrolled hypertension.”


Identification and treatment of hypertension, a disease affecting roughly 1 in 7 people globally, currently lags in its ability to inform patients, clinicians, and other healthcare industry participants. Outdated methods of data collection do not provide a holistic picture of an individual’s physiology and history, and generalized treatment plans do not take into account the diversity of patients.

Artificial intelligence, despite being a relatively immature technology, offers the opportunity to leverage wearable technology, deeper analytics, more databases, and broader expertise in the identification and treatment of hypertension. A great deal of research has been performed in the last 18 months, and work continues toward more effective ways to use artificial intelligence in the healthcare sector, particularly in the reduction of hypertension globally.

For more information about hypertension and how technology is addressing the challenge, contact Eduardo Serna at,

Eduardo Serna is the CEO and founder of Grupo Seara, a Canada-based medtech company with extensive expertise in blood pressure data collection.



Grupo Seara:

Brian Lenahan is the author of four Amazon-published books on artificial intelligence, including the bestseller “Artificial Intelligence: Foundations for Business Leaders and Consultants”. He is a former executive at a Top 10 North American bank and a university instructor, and he mentors innovative companies in the Halton and Hamilton areas. Brian’s training in AI comes from MIT, and he writes extensively on artificial intelligence and quantum computing.



Aquitaine Innovation Advisors:

#digitalhealth #ai #artificialintelligence #healthcare #healthtechnology #fitnesstechnology #hypertension


World Health Organization.

Thanat Chaikijurajai, Luke J Laffin, Wai Hong Wilson Tang

“Artificial Intelligence and Hypertension: Recent Advances and Future Outlook”

 2020 Nov 3 – National Library of Medicine, NIH

PMID: 32615586  PMCID: PMC7608522 (available on 2021-07-02)  DOI: 10.1093/ajh/hpaa102

Hiroshi Koshimizu, Ryosuke Kojima & Yasushi Okuno 

 “Future possibilities for artificial intelligence in the practical management of hypertension”

Hypertension Research, volume 43, pages 1327–1337 (2020); Nature

Published: 13 July 2020

Dhammika Amaratunga; Javier Cabrera; Davit Sargsyan; John B. Kostis; Stavros Zinonos; William J. Kostis

“Uses and opportunities for machine learning in hypertension research”

International Journal of Cardiology Hypertension Volume 5, June 2020, 100027

Steinberg, Don. “A Machine-Learning Approach to Controlling Blood Pressure.” Temple University. September 15, 2020


(Text in italics consists of direct quotes from the links.)

Artificial Intelligence and Hypertension: Recent Advances and Future Outlook


Thanat Chaikijurajai, Luke J Laffin, Wai Hong Wilson Tang; 2020 Nov 3 – National Library of Medicine, NIH

PMID: 32615586  PMCID: PMC7608522 (available on 2021-07-02)  DOI: 10.1093/ajh/hpaa102

Prevention and treatment of hypertension (HTN) are a challenging public health problem. Recent evidence suggests that artificial intelligence (AI) has potential to be a promising tool for reducing the global burden of HTN, and furthering precision medicine related to cardiovascular (CV) diseases including HTN. Since AI can simulate human thought processes and learning with complex algorithms and advanced computational power, AI can be applied to multimodal and big data, including genetics, epigenetics, proteomics, metabolomics, CV imaging, socioeconomic, behavioral, and environmental factors. AI demonstrates the ability to identify risk factors and phenotypes of HTN, predict the risk of incident HTN, diagnose HTN, estimate blood pressure (BP), develop novel cuffless methods for BP measurement, and comprehensively identify factors associated with treatment adherence and success. Moreover, AI has also been used to analyze data from major randomized controlled trials exploring different BP targets to uncover previously undescribed factors associated with CV outcomes. Therefore, AI-integrated HTN care has the potential to transform clinical practice by incorporating personalized prevention and treatment approaches, such as determining optimal and patient-specific BP goals, identifying the most effective antihypertensive medication regimen for an individual, and developing interventions targeting modifiable risk factors. Although the role of AI in HTN has been increasingly recognized over the past decade, it remains in its infancy, and future studies with big data analysis and N-of-1 study design are needed to further demonstrate the applicability of AI in HTN prevention and treatment.

Future possibilities for artificial intelligence in the practical management of hypertension


Published: 13 July 2020

Hiroshi Koshimizu, Ryosuke Kojima & Yasushi Okuno 

Hypertension Research, volume 43, pages 1327–1337 (2020)


The use of artificial intelligence in numerous prediction and classification tasks, including clinical research and healthcare management, is becoming increasingly more common. This review describes the current status and a future possibility for artificial intelligence in blood pressure management, that is, the possibility of accurately predicting and estimating blood pressure using large-scale data, such as personal health records and electronic medical records. Individual blood pressure continuously changes because of lifestyle habits and the environment. This review focuses on two topics regarding controlling changing blood pressure: a novel blood pressure measurement system and blood pressure analysis using artificial intelligence. Regarding the novel blood pressure measurement system, we compare the conventional cuff-less method with the analysis of pulse waves using artificial intelligence for blood pressure estimation. Then, we describe the prediction of future blood pressure values using machine learning and deep learning. In addition, we summarize factor analysis using “explainable AI” to solve a black-box problem of artificial intelligence. Overall, we show that artificial intelligence is advantageous for hypertension management and can be used to establish clinical evidence for the practical management of hypertension.

Uses and opportunities for machine learning in hypertension research

International Journal of Cardiology Hypertension Volume 5, June 2020, 100027

Dhammika Amaratunga, Javier Cabrera, Davit Sargsyan, John B. Kostis, Stavros Zinonos, William J. Kostis



Artificial intelligence (AI) promises to provide useful information to clinicians specializing in hypertension. Already, there are some significant AI applications on large validated data sets.

Methods and results

This review presents the use of AI to predict clinical outcomes in big data i.e. data with high volume, variety, veracity, velocity and value. Four examples are included in this review. In the first example, deep learning and support vector machine (SVM) predicted the occurrence of cardiovascular events with 56%–57% accuracy. In the second example, in a database of 378,256 patients, a neural network algorithm predicted the occurrence of cardiovascular events during 10-year follow-up with sensitivity (68%) and specificity (71%). In the third example, a machine learning algorithm classified 1,504,437 patients on the presence or absence of hypertension with 51% sensitivity, 99% specificity and area under the curve 87%. In example four, wearable biosensors and portable devices were used in assessing a person’s risk of developing hypertension using photoplethysmography to separate persons who were at risk of developing hypertension with sensitivity higher than 80% and positive predictive value higher than 90%. The results of the above studies were adjusted for demographics and the traditional risk factors for atherosclerotic disease.


These examples describe the use of artificial intelligence methods in the field of hypertension.


By Don Steinberg   September 15, 2020

Machine learning, a branch of computer artificial intelligence, has been put to use in a range of healthcare fields, from identifying promising drugs to recognizing changes in medical images that can catch health issues early. 

A new project at the College of Public Health aims to apply machine learning techniques to the everyday health of millions of people who have hypertension, a main risk factor for cardiovascular disease (CVD). Gabriel Tajeu, assistant professor of Health Services Administration and Policy, has been awarded a five-year, $748,000 grant from the National Institutes of Health to build a machine learning system to analyze electronic health data, identifying patients who are likely to have persistently uncontrolled blood pressure (BP) so doctors can be more proactive with their interventions.

Adults who have controlled BP have a substantially decreased risk of CVD-related mortality compared to those with uncontrolled BP. However, under current guidelines, over 40% of the 100 million U.S. adults with hypertension have uncontrolled BP, making it a significant public health issue and one that is insufficiently kept in check. In treatment, there can be what’s called “clinical inertia,” Tajeu said.

“Even if a patient’s blood pressure is uncontrolled, the doctor might take a wait-and-see approach rather than increase the dosage of antihypertensive medications,” he said. “But six months or a year is a long time to be walking around with uncontrolled hypertension.”

What Tajeu’s machine-learning algorithm would do is provide the clinician with a prediction, based on a mix of data factors, that a particular patient is more likely to come back to their next appointment still having uncontrolled hypertension. “Hopefully that would give them more of a justification for increasing treatment intensity and lower that clinical inertia,” he said.

Machine learning algorithms have been put to work on electronic health records in applications such as predicting lengths of hospital stays and hospital mortality rates, but analysis of health data to identify trends and make predictions around hypertension is relatively new territory, Tajeu said.

A machine-learning algorithm (MLA) needs to be “trained” using a large set of data. In this case it will be anonymized data from the Temple Health System, which contains extensive demographic, clinical, prescribing, and dispensing data. An MLA can recognize patterns in data that may be difficult to detect otherwise and identify variables that are important for prediction of an outcome. Essentially, the system will assess existing data to analyze what combination of factors are most likely to indicate that a patient with uncontrolled BP will later return still having uncontrolled BP.  

Tajeu will work with graduate students in public health and computer science to build the system. By the end of the research, they hope to have a validated machine-learning algorithm. The next step would be to identify how to best integrate it into clinical management software used in the Temple Health System. That would allow a physician treating a patient to be alerted that a patient is at risk at the point of service, so appropriate measures could be taken. 

“This also allows you to use your resources in a targeted fashion,” Tajeu said. “Because if you just say for everybody who has uncontrolled blood pressure at a visit, we’re going to do X, Y, and Z, that’s expensive. But if there’s a segment of that population that the machine learning algorithm identifies as the most high-risk, then we can target those patients. And in that way, we can save the healthcare system money, we can improve care for a vulnerable population in North Philadelphia, and ultimately save lives.”

Predicting hypertension using machine learning: Findings from Qatar Biobank Study

Latifa A. AlKaabi, Lina S. Ahmed, Maryam F. Al Attiyah, Manar E. Abdel-Rahman 


Published: October 16, 2020


Background and objective

Hypertension, a global burden, is associated with several risk factors and can be treated by lifestyle modifications and medications. Prediction and early diagnosis is important to prevent related health complications. The objective is to construct and compare predictive models to identify individuals at high risk of developing hypertension without the need of invasive clinical procedures.


This is a cross-sectional study using 987 records of Qataris and long-term residents aged 18+ years from Qatar Biobank. Percentages were used to summarize data and chi-square tests to assess associations. Predictive models of hypertension were constructed and compared using three supervised machine learning algorithms: decision tree, random forest, and logistic regression using 5-fold cross-validation. The performance of algorithms was assessed using accuracy, positive predictive value (PPV), sensitivity, F-measure, and area under the receiver operating characteristic curve (AUC). Stata and Weka were used for analysis.
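The metrics this abstract reports (accuracy, PPV, sensitivity) all derive from the confusion matrix. A small hedged illustration with made-up labels, unrelated to the Qatar Biobank data:

```python
# Toy labels only (1 = hypertensive, 0 = not); NOT Qatar Biobank data.
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # one false negative, one false positive

# With TP=3, TN=3, FP=1, FN=1, all three metrics work out to 0.75 here.
print(accuracy_score(y_true, y_pred))   # 0.75 -> (TP + TN) / total
print(precision_score(y_true, y_pred))  # 0.75 -> PPV: TP / (TP + FP)
print(recall_score(y_true, y_pred))     # 0.75 -> sensitivity: TP / (TP + FN)
```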


Age, gender, education level, employment, tobacco use, physical activity, adequate consumption of fruits and vegetables, abdominal obesity, history of diabetes, history of high cholesterol, and mother’s history high blood pressure were important predictors of hypertension. All algorithms showed more or less similar performances: Random forest (accuracy = 82.1%, PPV = 81.4%, sensitivity = 82.1%), logistic regression (accuracy = 81.1%, PPV = 80.1%, sensitivity = 81.1%) and decision tree (accuracy = 82.1%, PPV = 81.2%, sensitivity = 82.1%. In terms of AUC, compared to logistic regression, while random forest performed similarly, decision tree had a significantly lower discrimination ability (p-value<0.05) with AUC’s equal to 85.0, 86.9, and 79.9, respectively.


Machine learning provides the chance of having a rapid predictive model using non-invasive predictors to screen for hypertension. Future research should consider improving the predictive accuracy of models in larger general populations, including more important predictors and using a variety of algorithms.

Value of a Machine Learning Approach for Predicting Clinical Outcomes in Young Patients With Hypertension


Xueyi Wu, Xinglong Yuan, Wei Wang, Kai Liu, Ying Qin, Xiaolu Sun, Wenjun Ma, Yubao Zou, Huimin Zhang, Xianliang Zhou, Haiying Wu, Xiongjing Jiang, Jun Cai, Wenbing Chang, Shenghan Zhou, Lei Song

Originally published 16 Mar 2020. 2020;75:1271–1278


Risk stratification of young patients with hypertension remains challenging. Generally, machine learning (ML) is considered a promising alternative to traditional methods for clinical predictions because it is capable of processing large amounts of complex data. We, therefore, explored the feasibility of an ML approach for predicting outcomes in young patients with hypertension and compared its performance with that of approaches now commonly used in clinical practice. Baseline clinical data and a composite end point—comprising all-cause death, acute myocardial infarction, coronary artery revascularization, new-onset heart failure, new-onset atrial fibrillation/atrial flutter, sustained ventricular tachycardia/ventricular fibrillation, peripheral artery revascularization, new-onset stroke, end-stage renal disease—were evaluated in 508 young patients with hypertension (30.83±6.17 years) who had been treated at a tertiary hospital. Construction of the ML model, which consisted of recursive feature elimination, extreme gradient boosting, and 10-fold cross-validation, was performed at the 33-month follow-up evaluation, and the model’s performance was compared with that of the Cox regression and recalibrated Framingham Risk Score models. An 11-variable combination was considered most valuable for predicting outcomes using the ML approach. The C statistic for identifying patients with composite end points was 0.757 (95% CI, 0.660–0.854) for the ML model, whereas for Cox regression model and the recalibrated Framingham Risk Score model it was 0.723 (95% CI, 0.636–0.810) and 0.529 (95% CI, 0.403–0.655). The ML approach was comparable with Cox regression for determining the clinical prognosis of young patients with hypertension and was better than that of the recalibrated Framingham Risk Score model.
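The modeling pipeline this abstract describes (recursive feature elimination, gradient boosting, 10-fold cross-validation) can be sketched as follows. This is an assumption-laden illustration, not the authors' code: synthetic data replaces the patient cohort, scikit-learn's GradientBoostingClassifier stands in for extreme gradient boosting, and 11 features are selected only to echo the study's 11-variable combination:

```python
# Sketch of an RFE + gradient boosting + 10-fold CV pipeline on synthetic
# data; illustrative only, not the study's code or data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=500, n_features=25, n_informative=11,
                           random_state=1)

pipe = Pipeline([
    # Recursively drop the weakest features until 11 remain.
    ("rfe", RFE(GradientBoostingClassifier(n_estimators=50, random_state=1),
                n_features_to_select=11)),
    ("clf", GradientBoostingClassifier(n_estimators=50, random_state=1)),
])

# The C statistic reported in such studies is the cross-validated AUC.
c_stat = cross_val_score(pipe, X, y, cv=10, scoring="roc_auc").mean()
print(f"mean C statistic across folds: {c_stat:.3f}")
```

Wrapping feature selection inside the pipeline, rather than selecting features once on the full data, keeps the cross-validated C statistic honest: each fold selects its own features from its own training split.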

AI (Artificial Intelligence) and Hypertension Research

Curr Hypertens Rep. 2020; 22(9): 70.

Published online 2020 Aug 27. doi: 10.1007/s11906-020-01068-8

PMCID: PMC7450041 PMID: 32852654

Franco B. Mueller (corresponding author)


Purpose of Review

This review highlights that, to use artificial intelligence (AI) tools effectively for hypertension research, a new foundation for further understanding the biology of hypertension needs to be built by leveraging genome and RNA sequencing technology and derived tools on a broad scale in hypertension.

Recent Findings

For the last few years, progress in research and management of essential hypertension has been stagnating while, at the same time, the sequencing of the human genome has been generating many new research tools and opportunities to investigate the biology of hypertension. Cancer research has applied modern tools derived from DNA and RNA sequencing on a large scale, enabling an improved understanding of cancer biology and leading to many clinical applications. Compared with cancer, studies in hypertension using whole genome, exome, or RNA sequencing tools total less than 2% of the number of cancer studies. While it is true that sequencing the genome of cancer tissue has given cancer research an advantage, DNA and RNA sequencing derived tools can also be used in hypertension to generate new understanding of how complex protein networks, in non-cancer tissue, adapt and learn to be effective when, for example, somatic mutations or environmental inputs change the gene expression profiles at different network nodes. The amount of data and the differences in clinical condition classification at the individual sample level might be of such magnitude as to overwhelm and stretch comprehension. Here is the opportunity to use AI tools for the analysis of data streams derived from DNA and RNA sequencing tools, combined with clinical data, to generate new hypotheses leading to the discovery of mechanisms and potential target molecules from which drugs or treatments can be developed and tested.


Basic and clinical research taking advantage of new gene sequencing-based tools, to uncover the mechanisms by which complex protein networks regulate blood pressure in health and disease, will be critical to lift hypertension research and management out of its stagnation. The use of AI analytic tools will help leverage such insights. However, applying AI tools to the vast amounts of data that certainly exist in hypertension, without taking advantage of new gene sequencing-based research tools, will generate questionable results and will miss many new potential molecular targets and possibly treatments. Without such approaches, the vision of precision medicine for hypertension will be hard to accomplish and will most likely not occur in the near future.

Keywords: Artificial intelligence, Deep machine learning algorithms, Whole genome and RNA sequencing, Hypertension treatment, Gene and protein networks, Target molecules, Cancer and hypertension research publications


Feber, J.; Badawi, H.; Alenazi, S.

Journal of Hypertension: July 2019 – Volume 37 – Issue – p e32

doi: 10.1097/



The goal of the study was to assess the performance of artificial intelligence in predicting upper limits of normal (ULN = 95th percentile for a given age and height) office blood pressure (BP) in children whose ULN varies with age and height.

Design and method: 

The most recent pediatric normative office BP data (Flynn J, 2017:140:e2017) were used as a training data set for the Fuzzy Rules Based System (FRBS), to generate rules for office BP prediction. FRBS was then applied to office BP measured in 756 patients (405 boys, 351 girls) aged 4.3 to 16.9 years (median = 14.1), who were seen in the clinic from 2012 to 2018; median height was 160 cm (range = 117 to 187). Systolic and diastolic BP ULN predicted by FRBS (PredSBP, PredDBP) were compared with calculated ULN (CalcSBP, CalcDBP) using descriptive statistics, correlation analysis and Bland-Altman statistics.


Systolic and diastolic BP ULN ranged from 109 to 138 mmHg and from 68 to 86 mmHg respectively. The mean ± SD difference between PredSBP and CalcSBP was 0.45 ± 0.89 mmHg. Similarly, the PredDBP differed from CalcDBP by 0.18 ± 0.67 mmHg. The correlation coefficient between predicted and calculated SBP and DBP was 0.99 and 0.98 respectively. The mean bias (on Bland-Altman analysis) between PredSBP and CalcSBP was −0.45 mmHg, with lower and upper limits of agreement (LOA) ranging from −2.19 to +1.29 mmHg. The bias for PredDBP and CalcDBP was even lower (mean bias = −0.18 mmHg, LOA = −1.49 to +1.13 mmHg).
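The Bland-Altman quantities quoted above (mean bias and limits of agreement) are simple statistics over the paired differences: the bias is the mean difference, and the 95% limits of agreement are the bias plus or minus 1.96 standard deviations. A hedged sketch on made-up values, not the study's data:

```python
# Bland-Altman statistics on made-up predicted/calculated SBP values;
# illustrative only, not the study's data.
import numpy as np

pred_sbp = np.array([120.0, 125.0, 130.0, 118.0, 135.0])  # hypothetical PredSBP
calc_sbp = np.array([121.0, 124.5, 131.0, 118.5, 135.5])  # hypothetical CalcSBP

diff = pred_sbp - calc_sbp
bias = diff.mean()            # mean bias of the paired differences
sd = diff.std(ddof=1)         # sample standard deviation of the differences
loa_low = bias - 1.96 * sd    # lower 95% limit of agreement
loa_high = bias + 1.96 * sd   # upper 95% limit of agreement

print(f"bias = {bias:.2f} mmHg, LOA = [{loa_low:.2f}, {loa_high:.2f}] mmHg")
```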


The Fuzzy Rules Based System accurately predicted the upper limits of systolic and diastolic office blood pressure readings for children of different ages, heights and blood pressures.

Future Direction for Using Artificial Intelligence to Predict and Manage Hypertension


Chayakrit Krittanawong, Andrew S Bomback, Usman Baber, Sripal Bangalore, Franz H Messerli, W H Wilson Tang. 2018 Jul 6

PMID: 29980865  DOI: 10.1007/s11906-018-0875-x


Purpose of review: Evidence that artificial intelligence (AI) is useful for predicting risk factors for hypertension and its management is emerging. However, we are far from harnessing the innovative AI tools to predict these risk factors for hypertension and applying them to personalized management. This review summarizes recent advances in the computer science and medical field, illustrating the innovative AI approach for potential prediction of early stages of hypertension. Additionally, we review ongoing research and future implications of AI in hypertension management and clinical trials, with an eye towards personalized medicine.

Recent findings: Although recent studies demonstrate that AI in hypertension research is feasible and possibly useful, AI-informed care has yet to transform blood pressure (BP) control. This is due, in part, to lack of data on AI’s consistency, accuracy, and reliability in the BP sphere. However, many factors contribute to poorly controlled BP, including biological, environmental, and lifestyle issues. AI allows insight into extrapolating data analytics to inform prescribers and patients about specific factors that may impact their BP control. To date, AI has been mainly used to investigate risk factors for hypertension, but has not yet been utilized for hypertension management due to the limitations of study design and of physician’s engagement in computer science literature. The future of AI with more robust architecture using multi-omics approaches and wearable technology will likely be an important tool allowing to incorporate biological, lifestyle, and environmental factors into decision-making of appropriate drug use for BP control.

Keywords: Artificial intelligence; Big data; Deep learning; Hypertension; Machine learning; Wearable technology.

Machine learning and blood pressure

The Journal of Clinical Hypertension

Prasanna Santhanam MBBS, MD  Rexford S. Ahima MD, PhD

First published: 19 September 2019


Machine learning (ML) is a type of artificial intelligence (AI) based on pattern recognition. There are different forms of supervised and unsupervised learning algorithms that are being used to identify and predict blood pressure (BP) and other measures of cardiovascular risk. Since 1999, starting with neural network methods, ML has been used to gauge the relationship between BP and pulse wave forms. Since then, the scope of the research has expanded to using different cardiometabolic risk factors like BMI, waist circumference, waist‐to‐hip ratio in concert with BP and its various pharmaceutical agents to estimate biochemical measures (like HDL cholesterol, LDL and total cholesterol, fibrinogen, and uric acid) as well as effectiveness of anti‐hypertensive regimens. Data from large clinical trials like the SPRINT are being re‐analyzed by ML methods to unearth new findings and identify unique relationships between predictors and outcomes. In summary, AI and ML methods are gaining immense attention in the management of chronic disease. Elevated BP is a very important early metric for the risk of development of cardiovascular and renal injury; therefore, advances in AI and ML will aid in early disease prediction and intervention.


Machine Learning (ML) is a type of artificial intelligence (AI) based on pattern recognition. In ML, a given data set called “training data” is used for performing predictions without explicit programming, and it has now become an invaluable tool in medical research.1, 2 ML may involve supervised learning algorithms (classification and regression); unsupervised learning (grouping or cluster analysis); reinforced learning; feature learning (eg, Principal Component Analysis, Artificial Neural Networks, multilayer perceptron); or the creation of alternative prediction models (eg, deep structured learning like some forms of artificial neural networks, decision trees, support vector machines, and Bayesian networks).3

High blood pressure (BP) is a major risk factor for cardiovascular diseases (CVD). As early as 1999, in a small study of 12 persons, neural network analysis was shown to provide a better model fit for the relationship between blood pressure and pulse wave volumes.4 Since then, there have been different attempts at using neural networks to analyze blood pressure. In 2003, in an epidemiological analysis of the Framingham data, the neural network methods used at the time could not clearly classify groups into low and high blood pressure.5

There has been renewed interest in different AI methods. A study that estimated CVD risk from age, gender, race, BMI, waist‐to‐height ratio, and blood pressure (both systolic (SBP) and diastolic (DBP)) as input variables, with biochemical measures (ie, HDL, LDL and total cholesterol, fibrinogen, and uric acid) as output variables in an Artificial Neural Network (ANN) analysis, was 82.6% accurate.6 Subsequently, in another study, a cohort of diabetic patients with and without CKD (Chronic Kidney Disease) was followed for a period of 7.8 years to evaluate the associated factors and develop different machine learning models based on partial least squares regression, classification and regression trees, the C5.0 decision tree, random forest, naive Bayes classification, neural networks, and support vector machines.7 Age, age at diagnosis, WBC (White Blood Count), total cholesterol, waist‐to‐hip ratio, LDL cholesterol, alcohol intake, and 5 genetic polymorphisms including those of uteroglobin and lipid metabolism (UGB G38A, LIPC −514C > T, APOB Thr71Ile, APOC3 3206T > G, and APOC3 1100C > T) were the significant determinants among a host of SNPs, and the methods with the best performance were support vector machines and random forest.7 When classification trees (as a machine learning technique) were used to predict systolic blood pressure in a cohort of 400 college students (aged between 16 and 63 years) from the variables BMI, waist circumference, hip circumference, and waist‐hip ratio, the model had a sensitivity of 45.65% and a specificity of 65.15% in the testing sample in women and 58.38% and 69.70% in men, outperforming traditional logistic regression analysis.8 ML has also been used to predict metabolic risks of overweight status from different biomarkers, as well as cardiovascular events like ventricular tachycardia.9, 10
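Sensitivity and specificity, the two metrics quoted for the classification-tree study above, come straight from a confusion matrix. A minimal sketch, using toy predictions rather than the study's data:

```python
# Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP).

def sensitivity_specificity(actual, predicted, positive=True):
    tp = sum(1 for a, p in zip(actual, predicted) if a == positive and p == positive)
    fn = sum(1 for a, p in zip(actual, predicted) if a == positive and p != positive)
    tn = sum(1 for a, p in zip(actual, predicted) if a != positive and p != positive)
    fp = sum(1 for a, p in zip(actual, predicted) if a != positive and p == positive)
    return tp / (tp + fn), tn / (tn + fp)

# True hypertensive status vs a classifier's calls for 10 toy subjects.
actual    = [True, True, True, True, False, False, False, False, False, False]
predicted = [True, True, False, False, False, False, False, False, True, False]

sens, spec = sensitivity_specificity(actual, predicted)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")  # sensitivity=0.50 specificity=0.83
```

The two numbers trade off against each other as the decision threshold moves, which is why studies report both.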

ML and AI have also been employed in clinical research studies on BP. In a study involving 73 subjects, 14 attributes were obtained from the ECG (electrocardiogram) and photoplethysmogram, and using a genetic algorithm feature selection technique, the relevant indicators were selected for each subject.11 Using multivariate regression and support vector machines, a continuous BP estimation model was developed and found to be highly correlated with actual BP measurements (a correlation coefficient of 0.852 (Mean Error (ME) −0.001 ± 3.102 mm Hg) for SBP, and 0.790 (ME −0.004 ± 2.199 mm Hg) for DBP), supporting the evidence that AI can take non‐invasive biophysical parameters and estimate blood pressure fairly accurately.11
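The agreement statistics reported above (correlation coefficient and mean error between estimated and measured BP) are easy to compute directly. A sketch with toy paired readings, not the study's data:

```python
# Pearson correlation and mean error (ME) between estimated and reference SBP.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def mean_error(estimated, actual):
    """Signed bias: positive means the estimator reads high on average."""
    return sum(e - a for e, a in zip(estimated, actual)) / len(actual)

actual_sbp    = [118, 124, 131, 142, 150, 137]   # toy cuff measurements, mmHg
estimated_sbp = [120, 122, 133, 140, 151, 139]   # toy model estimates, mmHg

print(round(pearson_r(estimated_sbp, actual_sbp), 3))
print(round(mean_error(estimated_sbp, actual_sbp), 3))
```

Note that a high correlation with a nonzero mean error still indicates a systematic bias, which is why papers report both figures.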

Aortic SBP measurement usually involves recording a peripheral pulse waveform and on occasion invasive techniques. A comparison between the validated generalized transfer function and different ANNs has shown that the radial SBP, DBP, and heart rate alone were able to provide a relatively accurate aortic SBP estimation (with a slightly increased variance).12

In recent years, the relationship between BP and other CVD risk factors has been studied with ML techniques. Surprisingly, beta‐blockers have been shown to be more effective than expected compared to other anti‐hypertensives (especially in light of different guidelines for hypertension management), and the concomitant use of statins and proton pump inhibitors appears to be synergistic in improving the success of anti‐hypertensives.13 Data from the SPRINT trial have been re‐analyzed with random forest to predict cardiovascular outcomes, and the most important determinants were shown to be urine albumin/creatinine ratio, estimated GFR, age, serum creatinine, history of subclinical cardiovascular disease (CVD), serum cholesterol, a variable representing SBP signals using wavelet transformation, HDL levels, the 90th percentile of SBP, and serum triglycerides, with an overall AUC (Area Under the Curve) of 0.71.14 Newer methods for classifying blood pressure, such as modular neural networks using three separate modules for SBP, DBP, and pulse, have been employed with good results.15
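The AUC figure used in the SPRINT re-analysis above has a simple rank-based (Mann-Whitney) interpretation: the probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative one. A minimal sketch with toy scores:

```python
# Rank-based AUC: fraction of positive/negative pairs the model orders correctly.

def auc(scores_pos, scores_neg):
    """Probability a random positive outranks a random negative; ties count 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy predicted risks for subjects who did (pos) / did not (neg) have an event.
pos = [0.9, 0.8, 0.6, 0.55]
neg = [0.7, 0.5, 0.4, 0.3, 0.2]

print(round(auc(pos, neg), 3))  # 0.9
```

An AUC of 0.5 means the scores are no better than chance; 1.0 means perfect separation, so the 0.71 reported above sits well between the two.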

In summary, AI and ML methods are gaining immense attention in the management of chronic disease. Elevated BP is a very important early metric for the risk of development of cardiovascular and renal injury; therefore, advances in AI and ML will aid in early disease prediction and intervention.

Questionnaire based Prediction of Hypertension using Machine Learning

Abhijat Chaturvedi, Siddharth Srivastava, Astha Rai, A S Cheema — Centre for Development of Advanced Computing, Noida

Desham Chelimela, Rajeev Aravindakshan

All India Institute of Medical Sciences, Mangalagiri,

medRxiv preprint; this version posted June 20, 2020.

Abstract. Machine learning has proven its ability in healthcare as an assisting technology for healthcare providers, whether by saving precious time, providing timely alerts, or monitoring vitals. However, its application in the real world is limited by the availability of data. In this paper, we show that simple machine learning algorithms, especially neural networks, if designed carefully, are extremely effective even with limited amounts of data. Specifically, with exhaustive experiments on the standard Modified National Institute of Standards and Technology (MNIST) dataset, we analyse the impact of various parameters on effective performance. Further, on a custom dataset collected at a tertiary care hospital for hypertension analysis, we apply these design considerations to achieve better performance compared to competitive baselines. On a real-world dataset of only a few hundred patients, we show the effectiveness of these design choices and report an accuracy of 75% in determining whether a patient suffers from hypertension.

Keywords: Machine Learning in Healthcare · Neural Network · Support Vector Machine · Random Forest

Hypertension is one of the most common ailments among Indian patients, and the severe threat it poses makes it important to identify it as early as possible in order to stop it from becoming incurable. The Fourth National Family Health Survey reports that more than 13% of men and more than 8% of women in early to middle age suffer from hypertension [1]. The scarcity of healthcare experts makes the situation even more grim.

A Machine-Learning-Based Prediction Method for Hypertension Outcomes Based on Medical Data

by Wenbing Chang †, Yinglai Liu, Yiyong Xiao †, Xinglong Yuan, Xingxing Xu, Siyue Zhang and Shenghan Zhou *

School of Reliability and Systems Engineering, Beihang University, Beijing 100191, China

† These authors contributed equally to this work.

Diagnostics 2019, 9(4), 178;

Received: 26 September 2019 / Revised: 4 November 2019 / Accepted: 5 November 2019 / Published: 7 November 2019


The outcomes of hypertension refer to death or serious complications (such as myocardial infarction or stroke) that may occur in patients with hypertension. These outcomes are of great concern to patients and doctors, and are ideally avoided. However, there is no satisfactory method for predicting them. Therefore, this paper proposes a prediction method for outcomes based on the physical examination indicators of hypertension patients. In this work, we divide outcome prediction into two steps. The first step is to extract the key features from the patients’ many physical examination indicators. The second step is to use the key features extracted in the first step to predict the patients’ outcomes. To this end, we propose a model combining recursive feature elimination with cross-validation and a classification algorithm. In the first step, we use the recursive feature elimination algorithm to rank the importance of all features, and then extract the optimal feature subset using cross-validation. In the second step, we use four classification algorithms (support vector machine (SVM), C4.5 decision tree, random forest (RF), and extreme gradient boosting (XGBoost)) to predict patient outcomes from the optimal feature subset. The model evaluation metrics are accuracy, F1 measure, and area under the receiver operating characteristic curve (AUC). Ten-fold cross-validation shows that C4.5, RF, and XGBoost can achieve very good prediction results with a small number of features, and that a classifier preceded by recursive feature elimination with cross-validation feature selection has better prediction performance. Among the four classifiers, XGBoost has the best performance, with accuracy, F1, and AUC values of 94.36%, 0.875, and 0.927, respectively, using the optimal feature subset.
This article’s prediction of hypertension outcomes contributes to the in-depth study of hypertension complications and has strong practical significance.

Keywords: hypertension outcomes; feature selection; recursive feature elimination; classification algorithm; XGBoost; prediction
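The recursive feature elimination loop described in the abstract above can be sketched in a few lines: fit a model, rank features by weight magnitude, drop the weakest, and refit. Here a tiny stochastic-gradient logistic regression stands in for the paper's classifiers, and the data are toy values, not patient records.

```python
# Sketch of recursive feature elimination (RFE) with a toy logistic model.
import math, random

def fit_logistic(X, y, epochs=200, lr=0.1):
    """Plain SGD logistic regression; |weight| serves as feature importance."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = max(-30.0, min(30.0, sum(wj * xj for wj, xj in zip(w, xi))))
            p = 1.0 / (1.0 + math.exp(-z))
            w = [wj + lr * (yi - p) * xj for wj, xj in zip(w, xi)]
    return w

def rfe(X, y, keep=1):
    """Recursively drop the feature with the smallest weight, refitting each round."""
    features = list(range(len(X[0])))
    while len(features) > keep:
        Xs = [[row[f] for f in features] for row in X]
        w = fit_logistic(Xs, y)
        features.pop(min(range(len(features)), key=lambda i: abs(w[i])))
    return features

random.seed(0)
# Toy data: feature 0 drives the label; features 1 and 2 are pure noise.
X = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(80)]
y = [1 if row[0] > 0 else 0 for row in X]

print(rfe(X, y, keep=1))  # index of the surviving (informative) feature
```

The paper additionally chooses the size of the kept subset by cross-validation; this sketch fixes it at one to keep the elimination loop itself visible.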

Estimating Blood Pressure from Photoplethysmogram Signal and Demographic Features using Machine Learning Techniques

[Submitted on 7 May 2020]

Moajjem Hossain Chowdhury, Md Nazmul Islam Shuzan, Muhammad E.H. Chowdhury, Zaid B Mahbub, M. Monir Uddin, Amith Khandakar, Mamun Bin Ibne Reaz

Hypertension is a dangerous health condition that can be indicated directly by blood pressure (BP) and often leads to other health complications. Continuous monitoring of BP is very important; however, cuff-based BP measurements are discrete and uncomfortable for the user. To address this need, a cuff-less, continuous, and non-invasive BP measurement system is proposed using the photoplethysmogram (PPG) signal and demographic features with machine learning (ML) algorithms. PPG signals were acquired from 219 subjects and underwent pre-processing and feature extraction steps. Time, frequency, and time-frequency domain features were extracted from the PPG and its derivative signals. Feature selection techniques were used to reduce computational complexity and to decrease the chance of over-fitting the ML algorithms. The features were then used to train and evaluate ML algorithms, and the best regression models were selected for Systolic BP (SBP) and Diastolic BP (DBP) estimation individually. Gaussian Process Regression (GPR) with the ReliefF feature selection algorithm outperforms the other algorithms, estimating SBP and DBP with root-mean-square errors (RMSE) of 6.74 and 3.59, respectively. This ML model can be implemented in hardware systems to continuously monitor BP and help avoid critical health conditions due to sudden changes.
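The RMSE figures used above to compare the SBP/DBP estimators are straightforward to compute; a sketch with toy estimate/reference pairs rather than the paper's data:

```python
# Root-mean-square error between estimated and reference blood pressures.
import math

def rmse(estimated, reference):
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimated, reference))
                     / len(reference))

reference_sbp = [120, 135, 128, 142]   # toy cuff measurements, mmHg
estimated_sbp = [118, 139, 127, 149]   # toy model estimates, mmHg

print(round(rmse(estimated_sbp, reference_sbp), 2))  # 4.18
```

Unlike mean error, RMSE penalizes large individual deviations, so it is the more conservative of the two agreement measures.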

Comments: Accepted for publication in Sensor, 14 Figures, 14 Tables

Subjects: Signal Processing (eess.SP); Machine Learning (cs.LG)

Journal reference: Sensors 2020, 20(11), 3127

DOI: 10.3390/s20113127

Cite as: arXiv:2005.03357 [eess.SP]


Using Machine Learning to Predict Hypertension from a Clinical Dataset


Daniel LaFreniere, Farhana Zulkernine
School of Computing, Queen’s University, Kingston, ON, Canada
farhana@cs.queensu.ca

David Barber

School of Medicine Queen’s University Kingston, ON, Canada

Ken Martin

Canadian Primary Care Sentinel Surveillance Network (CPCSSN) Queen’s University Kingston ON, Canada

Abstract— Hypertension is an illness that often leads to severe and life-threatening diseases such as heart failure, thickening of the heart muscle, coronary artery disease, and other severe conditions if left untreated. An artificial neural network is a powerful machine learning technique that allows prediction of the presence of the disease in susceptible populations while reducing the potential for human error. In this paper, we identify the important risk factors based on patients’ current health conditions, medical records, and demographics. These factors are then used to predict the presence of hypertension in an individual. These risk factors are also indicative of the probability of a person developing hypertension in the future and can, therefore, be used as an early warning system. We present a neural network model for predicting hypertension with about 82% accuracy. This is good performance given our chosen risk factors as inputs and the large integrated dataset used for the study. Our network model utilizes very large sample sizes (185,371 patients and 193,656 controls) from the Canadian Primary Care Sentinel Surveillance Network (CPCSSN) dataset. Finally, we present a literature study to show the use of these risk factors in other works, along with experimental results obtained from our model.

Keywords — Artificial neural network; hypertension; backpropagation network; medical decision support systems.

Risk Stratification for Early Detection of Diabetes and Hypertension in Resource-Limited Settings: Machine Learning Analysis

Justin J Boutilier1, PhD; Timothy C Y Chan2, PhD; Manish Ranjan3, MBA; Sarang Deo4, PhD

1Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, WI, United States

2Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, ON, Canada

3NanoHealth, NanoCare Health Services, Hyderabad, India

4Max Institute of Healthcare Management, Indian School of Business, Hyderabad, India

Corresponding Author:

Justin J Boutilier, PhD

Department of Industrial and Systems Engineering

University of Wisconsin-Madison

1513 University Avenue

Madison, WI, 53706

United States




Background: The impending scale-up of noncommunicable disease screening programs in low- and middle-income countries, coupled with limited health resources, requires that such programs be as accurate as possible at identifying patients at high risk.

Objective: The aim of this study was to develop machine learning–based risk stratification algorithms for diabetes and hypertension that are tailored for the at-risk population served by community-based screening programs in low-resource settings.

Methods: We trained and tested our models by using data from 2278 patients collected by community health workers through door-to-door and camp-based screenings in the urban slums of Hyderabad, India between July 14, 2015 and April 21, 2018. We determined the best models for predicting short-term (2-month) risk of diabetes and hypertension (a model for diabetes and a model for hypertension) and compared these models to previously developed risk scores from the United States and the United Kingdom by using prediction accuracy as characterized by the area under the receiver operating characteristic curve (AUC) and the number of false negatives.

Results: We found that models based on random forest had the highest prediction accuracy for both diseases and were able to outperform the US and UK risk scores in terms of AUC by 35.5% for diabetes (improvement of 0.239 from 0.671 to 0.910) and 13.5% for hypertension (improvement of 0.094 from 0.698 to 0.792). For a fixed screening specificity of 0.9, the random forest model was able to reduce the expected number of false negatives by 620 patients per 1000 screenings for diabetes and 220 patients per 1000 screenings for hypertension. This improvement reduces the cost of incorrect risk stratification by US $1.99 (or 35%) per screening for diabetes and US $1.60 (or 21%) per screening for hypertension.
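The "fixed screening specificity of 0.9" comparison above amounts to choosing a risk-score threshold that flags at most 10% of true negatives, then counting the positives that slip below it. A minimal sketch with toy scores, not the study's data:

```python
# Pick a threshold at a target specificity, then count missed true cases.

def threshold_at_specificity(neg_scores, specificity):
    """Smallest score threshold with the given fraction of negatives at/below it."""
    s = sorted(neg_scores)
    idx = int(round(specificity * len(s)))
    return s[idx - 1] if idx > 0 else float("-inf")

def missed_per_1000(pos_scores, threshold):
    """Expected false negatives per 1000 screenings of truly positive patients."""
    missed = sum(1 for p in pos_scores if p <= threshold)
    return round(1000 * missed / len(pos_scores))

neg = [i / 100 for i in range(100)]      # toy risk scores, 100 subjects without disease
pos = [0.5 + i / 40 for i in range(20)]  # toy risk scores, 20 subjects with disease

t = threshold_at_specificity(neg, 0.9)
print(t)  # 0.89: exactly 10% of negatives score above the threshold
print(missed_per_1000(pos, t))
```

A better model shifts the positive scores upward relative to the negatives, so fewer positives fall below the same-specificity threshold; that is exactly the false-negative reduction the study quantifies.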

Conclusions: In the next decade, health systems in many countries are planning to spend significant resources on noncommunicable disease screening programs and our study demonstrates that machine learning models can be leveraged by these programs to effectively utilize limited resources by improving risk stratification.

J Med Internet Res 2021;23(1):e20123


Development and Evaluation of a Mobile Decision Support System for Hypertension Management in the Primary Care Setting in Brazil: Mixed-Methods Field Study on Usability, Feasibility, and Utility


Authors of this article: Daniel Vitório Silveira, MD; Milena Soriano Marcolino, MSc, MD, PhD; Elaine Leandro Machado, MSc, PhD; Camila Gonçalves Ferreira, BSN; Maria Beatriz Moreira Alkmim, MSc, MD; Elmiro Santos Resende, MSc, MD, PhD; Bárbara Couto Carvalho, MD; André Pires Antunes, MSc, MD; Antonio Luiz Pinho Ribeiro, MD, PhD



Despite being an important cardiovascular risk factor, hypertension has low control levels worldwide. Computerized clinical decision support systems (CDSSs) might be effective in reducing blood pressure with a potential impact in reducing cardiovascular risk.


The goal of the research was to evaluate the feasibility, usability, and utility of a CDSS, TeleHAS (tele–hipertensão arterial sistêmica, or arterial hypertension system), in the care of patients with hypertension in the context of a primary care setting in a middle-income country.


The TeleHAS app is a platform that integrates clinical and laboratory data on a particular patient, from which it calculates cardiovascular risk and provides evidence-based recommendations derived from Brazilian and international guidelines for the management of hypertension and cardiovascular risk. Ten family physicians from different primary care units in the city of Montes Claros, Brazil, were randomly selected to use the CDSS in the care of hypertensive patients for 6 months. After 3 and 6 months, the feasibility, usability, and utility of the CDSS in the routine care of the health team were evaluated through a standardized questionnaire and semistructured interviews.
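The decision-support pattern described above (combine patient data into a risk estimate, then map the estimate to a guideline-style recommendation) can be sketched as a pair of small functions. The scoring weights, thresholds, and wording below are illustrative assumptions, not TeleHAS's actual rules or any validated risk calculator.

```python
# Toy CDSS pattern: risk score -> guideline-style recommendation.

def cardiovascular_risk(age, sbp, smoker, diabetic):
    """Toy additive risk score; NOT a validated calculator."""
    score = 0
    score += 2 if age >= 60 else 1 if age >= 45 else 0
    score += 2 if sbp >= 160 else 1 if sbp >= 140 else 0
    score += 1 if smoker else 0
    score += 1 if diabetic else 0
    return score

def recommendation(risk):
    """Map the score to an illustrative, guideline-style action."""
    if risk >= 4:
        return "high risk: start/intensify antihypertensive therapy, review in 1 month"
    if risk >= 2:
        return "moderate risk: lifestyle changes, recheck BP in 3 months"
    return "low risk: routine follow-up"

print(recommendation(cardiovascular_risk(age=62, sbp=165, smoker=True, diabetic=False)))
```

A production CDSS replaces the toy score with a validated risk equation and encodes the full guideline logic, but the data-in, risk, recommendation-out shape is the same.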


Throughout the study, clinicians registered 535 patients with hypertension, at an average of 1.24 consultations per patient. Women accounted for 80% (8/10) of participating doctors; median age was 31.5 years (interquartile range 27 to 59 years). As for feasibility, 100% of medical users said it was possible to use the app in the primary care setting, and 80% (8/10) found it easy to incorporate into the daily routine and home visits. Nevertheless, 70% (7/10) of physicians claimed that the time taken to fill out the CDSS caused significant delays in service. Clinicians rated TeleHAS as good (8/10, 80% of users), with easy completion and a friendly interface (10/10, 100%) and the potential to improve patients’ treatment (10/10, 100%). A total of 90% (9/10) of physicians gained new knowledge about cardiovascular risk and hypertension through the app’s recommendations and found it useful for promoting prevention and optimizing treatment.


In this study, a CDSS developed to assist the management of patients with hypertension was feasible in the context of a primary health care setting in a middle-income country, with good user satisfaction and the potential to improve adherence to evidence-based practices.


Artificial intelligence for early prediction of pulmonary hypertension using electrocardiography

Joon-myoung Kwon, MD; Kyung-Hee Kim, MD, PhD; Jose Medina-Inojosa, MD, MSc

Ki-Hyun Jeon, MD, MS; Jinsik Park, MD, PhD; Byung-Hee Oh, MD, PhD

Published: April 23, 2020



Screening and early diagnosis of pulmonary hypertension (PH) are critical for managing progression and preventing associated mortality; however, there are no tools for this purpose. We developed and validated an artificial intelligence (AI) algorithm for predicting PH using electrocardiography (ECG).


This historical cohort study included data from consecutive patients from 2 hospitals. The patients in one hospital were divided into derivation (56,670 ECGs from 24,202 patients) and internal validation (3,174 ECGs from 3,174 patients) datasets, whereas the patients in the other hospital were included in only an external validation (10,865 ECGs from 10,865 patients) dataset. An AI algorithm based on an ensemble neural network was developed using 12-lead ECG signal and demographic information from the derivation dataset. The end-point was the diagnosis of PH. In addition, the interpretable AI algorithm identified which region had the most significant effect on decision making using a sensitivity map.


During the internal and external validation, the area under the receiver operating characteristic curve of the AI algorithm for detecting PH was 0.859 and 0.902, respectively. In the 2,939 individuals without PH at initial echocardiography, those patients that the AI defined as having a higher risk had a significantly higher chance of developing PH than those in the low-risk group (31.5% vs 5.9%, p < 0.001) during the follow-up period. The sensitivity map showed that the AI algorithm focused on the S-wave, P-wave, and T-wave for each patient by QRS complex characteristics.


The AI algorithm demonstrated high accuracy for PH prediction using 12-lead and single-lead ECGs.

An Artificial Intelligence Solution for Noninvasively Identifying Portal Hypertension?

Atif Zaman, MD, MPH reviewing Liu Y et al. Clin Gastroenterol Hepatol 2020 Mar 20

April 14, 2020

Convolutional neural network analysis of liver and spleen images shows high accuracy for diagnosing CSPH in patients with cirrhosis.

The most accurate method of diagnosing clinically significant portal hypertension (CSPH) in patients with cirrhosis is by measuring hepatic venous pressure gradient (HVPG). However, this technique is invasive and not readily available in all clinical settings.

In the current study, researchers developed and validated deep convolutional neural network (CNN) models using abdominal scans (liver and spleen) from computed tomography (CT) and magnetic resonance imaging (MRI) to identify CSPH (defined as HVPG ≥10 mmHg). They used data from two prospective, multicenter trials conducted in China and Turkey in which CT or MRI scans were performed within 14 days of HVPG measurement via transjugular catheterization. Two cohorts, divided into CT and MRI recipients, were shuffled and randomly sampled six times to create training, validation, and test datasets (in a 3:1:1 ratio) for modeling.
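The 3:1:1 train/validation/test partitioning used above, repeated with six independent shuffles, can be sketched directly (toy scan IDs stand in for the actual images):

```python
# Shuffle-and-split: 3:1:1 train/validation/test, repeated six times.
import random

def split_3_1_1(items, seed):
    pool = list(items)
    random.Random(seed).shuffle(pool)   # independent shuffle per seed
    n = len(pool) // 5
    return pool[: 3 * n], pool[3 * n : 4 * n], pool[4 * n :]

scans = [f"scan_{i:03d}" for i in range(100)]

# "Shuffled and randomly sampled six times": one split per seed.
for seed in range(6):
    train, val, test = split_3_1_1(scans, seed)
    assert len(train) == 60 and len(val) == 20 and len(test) == 20
print("six 3:1:1 splits created")
```

Repeating the split guards against a single lucky (or unlucky) partition driving the reported accuracy, which is why the study's accuracy held up across repetitions.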

A total of 679 patients were included in the CT cohort (contributing ~19,000 images of liver and spleen) and 271 in the MRI cohort (~90,000 images). In both cohorts, overall diagnostic accuracy of CNN was high, between 90% and 100%. On repetition of modeling, diagnostic accuracy remained persistently high at greater than 89%.


This novel method using deep learning demonstrated near-perfect performance in diagnosing CSPH in patients with cirrhosis. Once these results are validated in other cohorts and the technique is implementable in non-research settings, this could become the method of choice in risk-stratifying patients with cirrhosis.

HEALTH NEWS  SEPT. 10, 2020 / 2:20 PM

AI tool screens for heart disease using gut bacteria, study shows

By Brian Dunleavy

A new AI-based tool can screen for heart disease based on presence of bacteria in the gut, though more development is needed because of the difference in bacteria from person to person, researchers say. Photo by hamiltonpaviana/Pixabay


Sept. 10 (UPI) — Artificial intelligence can help screen adults for heart disease by detecting specific bacteria in the gut, according to a study published Thursday by the journal Hypertension.

The researchers analyzed stool samples collected from nearly 1,000 patients — half of whom had known heart disease — with their state-of-the-art machine learning modeling tool that runs on artificial intelligence, or AI.


The approach correctly identified clusters of gut bacteria present in those with heart disease, researchers said.

Based on the presence of these bacteria, the AI-based tool correctly identified people with heart disease more than 70% of the time.


“Gut microbiota are associated with the development of hypertension, or high blood pressure, ‘the silent killer’ that often leads to heart disease,” study co-author Bina Joe told UPI.

“Our study shows that there are real changes in the population of these bacteria in our gut that can pinpoint people at risk for heart disease, before hypertension develops,” said Joe, professor and chair of the department of physiology and pharmacology at the University of Toledo.

Nearly half of all U.S. adults have some form of heart disease, and many of them go undiagnosed until they suffer a major event like a heart attack or stroke, according to the American Heart Association.


The new AI-based screening tool could help enhance screening for heart diseases by pinpointing the presence of specific bacteria in the gut, Joe said.

The tool uses historical data on the presence of certain bacteria in patients with high blood pressure or heart disease to identify those at risk for the conditions — based on whether they too have the same bacteria — she said.

The gut microbiome is made up of microorganisms, including bacteria, that live in the digestive tracts of humans and are involved in the digestion of food and processing of nutrients.


In recent years, bacteria in the gut have been linked with overall health beyond the digestive tract, Joe said. Recent studies have also suggested that the presence of certain bacteria can increase the risk for heart disease.

Because the composition of the bacteria in the gut varies from person to person, it has been challenging to develop a test that accurately identifies the bacteria that increases heart disease risk.

The AI-based tool, however, still is in the early stages of development, and it may be several years before doctors can use it to identify at-risk patients, Joe said.

“The gut microbiome is highly variable among individuals, so we were surprised by the promising level of accuracy obtained from these preliminary results,” she said.

“It is conceivable that clinicians could analyze the gut microbiome of patients’ stool samples with our machine learning method to screen patients for heart and vascular diseases.”

VENDORS

VERHAERT – Using AI to track blood pressure

14 July 2020  

Posted by Wouter Hendrickx  Artificial Intelligence, My Future Product, Perspectives, Smart Medical

Hypertension, hypotension, and irregular blood flow are linked with various diseases such as coronary heart disease and ischemic and hemorrhagic stroke. The WHO reports that around 13% of all deaths are caused by high blood pressure. The overall prevalence of raised blood pressure among adults aged 25 and over is around 31% (Mills KT et al., 2016). Verhaert’s AILab developed an AI platform to track continuous blood pressure from data produced by a photoplethysmogram (PPG) sensor, as an alternative to the traditional sphygmomanometer for future applications.

PPG device linked to a smartwatch (not traditional cuff)

OMRON – AI may help in high blood pressure monitoring

Publication – Artificial Intelligence in Medication


OMRON Healthcare, a global medical equipment company, published a study to coincide with World Hypertension Day (17 May), revealing the attitudes of individuals towards blood pressure monitoring. The study was conducted in collaboration with Kantar Health and surveyed 62,000 individuals from the UK, Italy, France, Germany, and Spain.

Results showed that Germany and Spain have the highest proportion of people affected by high blood pressure who take proactive steps towards managing their condition. Up to 78% of respondents from Germany and 72% from Spain expressed that they are likely to adjust their behavior to adapt to their medical condition, while only 69% of UK respondents were willing to do so. Hypertension patients from Italy and France, on the other hand, voiced the least willingness to change their lifestyles.

The failure to monitor blood pressure at home

Furthermore, the study showed that 19% of hypertension patients in Europe take blood pressure measurements on a monthly basis, 26% do so every week, and only 11% do it every day. The study noted the importance of monitoring blood pressure at home: to assess day-to-day blood pressure change and to help avoid the white-coat effect, the tendency to yield a higher blood pressure reading in a medical setting than at home.

As André Van Gils, Chief Executive Officer and President of OMRON Healthcare Europe, said in the press release, “people live increasingly busier lives and it can be easy to forget to put your heart health first at times. As the number one contributing risk factor for global death, there should be no greater priority than monitoring your blood pressure…”

The purpose of the study was to find out the motivations behind people’s attitudes towards blood pressure monitoring. The results were released a week after the British Heart Foundation (BHF) reported the first rise in 50 years in deaths from heart and circulatory diseases among people below the age of 75 in the UK. Hypertension is a major risk factor for heart and circulatory diseases.

How is AI changing the landscape?

The major difference comes in the use of wearables to monitor one’s blood pressure and notify individuals when abnormalities occur. Patients will no longer have to consciously remember to have their blood pressure taken at home or in the clinic. 

In fact, three months ago, OMRON Healthcare announced the integration of its FDA-cleared (US Food and Drug Administration) device, HeartGuide, into pinpointIQ, a monitoring platform developed by physIQ, to check on at-risk patients in outpatient settings. The artificial intelligence (AI) driven HeartGuide gives users real-time insight into their blood pressure, activity, and sleep, and its dashboard provides an overview of the trend and history of their activities and health conditions.

Recently, a group of researchers from the University of California, San Diego has been using wearable data to build machine learning algorithms that predict blood pressure and provide personalized recommendations to lower it when needed. The researchers highlighted bedtime as a factor affecting one patient’s blood pressure and a sedentary lifestyle for another. This affirms the importance of personalization in blood pressure monitoring and how AI is positioned to support it.
