Health Data Science / 2021 / Article

Review Article | Open Access

Volume 2021 |Article ID 9819851 | https://doi.org/10.34133/2021/9819851

Jun Chen, Chao Lu, Haifeng Huang, Dongwei Zhu, Qing Yang, Junwei Liu, Yan Huang, Aijun Deng, Xiaoxu Han, "Cognitive Computing-Based CDSS in Medical Practice", Health Data Science, vol. 2021, Article ID 9819851, 13 pages, 2021. https://doi.org/10.34133/2021/9819851

Cognitive Computing-Based CDSS in Medical Practice

Received26 Nov 2020
Accepted28 Jun 2021
Published22 Jul 2021

Abstract

Importance. The last decade has witnessed advances in cognitive computing technologies that learn at scale and reason with purpose in medical studies. From the diagnosis of diseases to the generation of treatment plans, cognitive computing encompasses both data-driven and knowledge-driven machine intelligence to assist health care roles in clinical decision-making. This review provides a comprehensive perspective on both research and industrial efforts on cognitive computing-based CDSS over the last decade. Highlights. (1) A holistic review of both research papers and industrial practice concerning cognitive computing-based CDSS is conducted to identify the necessity, the characteristics, and the general framework of constructing such a system. (2) Several typical applications of cognitive computing-based CDSS, as well as existing systems in real medical practice, are introduced in detail under the general framework. (3) The limitations of current cognitive computing-based CDSS are discussed, shedding light on future work in this direction. Conclusion. Different from medical content providers, cognitive computing-based CDSS provides probabilistic clinical decision support by automatically learning and inferencing from medical big data. The characteristics of managing multimodal data and computerizing medical knowledge distinguish cognitive computing-based CDSS from other categories. Given the current status of primary health care, such as high diagnostic error rates and shortages of medical resources, it is time to introduce cognitive computing-based CDSS to the medical community, which should be open-minded and embrace the convenience, low cost, and high efficiency that cognitive computing-based CDSS brings.

1. Introduction

The Clinical Decision Support System (CDSS) has been widely accepted as a set of computer software programs that assist physicians and other health care roles in decision-making whenever and wherever needed, such as giving diagnostic suggestions based on patient data or providing knowledge about a specific disease. Compared to the type of CDSS that focuses on providing professional medical content, for example, UpToDate [1], there is a growing number of research and industrial efforts in cognitive computing-based CDSS [2]. Cognitive computing, as quoted in [3], “refers to systems that learn at scale, reason with purpose and interact with humans naturally.” Specifically, cognitive computing encompasses the advances of artificial intelligence, including technologies like machine learning, natural language processing (NLP), computer vision, and speech recognition [3]. Although professional medical content providers like UpToDate, BMJ Best Practice, Cochrane Library, Embase, and DynaMed offer evidence-based content services for clinical decision support, they are inconvenient for health care roles to use actively and explicitly while taking care of patients. Besides, it is costly to manually summarize diagnostic rules, manual decision trees, and treatment plans through numerous clinical trials. In contrast with deterministic decision support based on human-readable medical knowledge, manually defined rules, and decision trees, cognitive computing-based CDSS provides probabilistic clinical decision support via automatically learning and inferencing with large-quantity and high-quality clinical data. Due to its convenience, automation, and low cost of human labor, it is necessary to take advantage of cognitive computing-based CDSS to resolve the severe issues confronting current medical practice, including the following:

(i) The rate of misdiagnosis is high in primary health care facilities. According to an autopsy analysis over 50 years, the overall diagnostic error rate is 36.24% in China [4]. Due to a severe lack of medical resources, the misdiagnosis rate in primary health care facilities is estimated to be beyond the aforementioned number. Besides, approximately 12 million US adults are misdiagnosed each year, resulting in a misdiagnosis rate of 5.08% [5].

(ii) The lack of general practitioners is severe in primary health care. As the first line of protection of people’s health, the general practitioner is of great importance for primary health care. However, there exists a huge gap between society’s demand for general practitioners and the number of existing general practitioners. In China, a majority of medical students are trained to become medical specialists rather than general practitioners. The State Council of China has recently released a policy that by the year 2030, there should be at least 5 certificated general practitioners per 10,000 citizens, which implies a shortage of at least 500,000 general practitioners [6]. In the United Kingdom, the National Health Service has reported a shortage of general practitioners [7], where the gap had risen from 5,000 to 12,100 by 2020 [8].

(iii) The rapidly aging societies are faced with a lack of health care resources. Aging is one of the leading concerns in Asia, where more than a quarter of the population will be over 60 years old by 2050 [9]. However, according to the World Health Statistics released by the World Health Organization [10], the numbers of medical doctors per 10,000 population in China, Japan, Singapore, and Thailand are 19.8, 24.1, 22.9, and 8.1, respectively, compared to 36.8 in Australia and 35.9 in New Zealand among the western countries. There is an obvious and severe lack of health care resources in Asia, which can be directly or partially fulfilled with the assistance of cognitive computing-based CDSS.

Compared with CDSS focusing on providing human-readable medical knowledge or manually defined rules and decision trees, cognitive computing-based CDSS benefits from automatically learning and inferencing from a large amount of data with machine-readable medical knowledge, which is becoming the mainstream of CDSS studies and industrial applications.

This review summarizes the major advances of cognitive computing-based CDSS in the last decade by discussing the characteristics, the general framework, and the applications as well as the limitations of cognitive computing-based CDSS.

2. Characteristics

Compared with other types of CDSS, cognitive computing-based CDSS is both data-driven and knowledge-driven [11]. Firstly, there are hundreds of information systems managing data of various modalities. It is natural that clinicians take multimodal data about a patient into account, e.g., texts of medical history from an Electronic Medical Record system and other medical content from guidelines and eBooks, films of X-ray, CT, and MRI scans from a Picture Archiving and Communications System (PACS), and tabular data from a Laboratory Information System. Cognitive computing-based CDSS automatically learns from large-scale multimodal data. Secondly, unlike general domains, medical knowledge is critical in health studies, where the tolerance for decision errors is extremely low. Compared to the human-readable medical knowledge in medical content providers, cognitive computing-based CDSS requires the representation of computable medical knowledge. Transforming human-readable knowledge into machine-readable knowledge is one of the core problems in cognitive computing-based CDSS, where various attempts have been made in the representation of medical knowledge.

2.1. Managing Multimodal Data

Cognitive computing-based CDSS relies heavily on clinical data, where the volume, the quality, and the variety of data sources greatly affect the performance, the robustness, and the generality. Due to sensitivity and privacy concerns, clinical data were rarely available in the past. With desensitization techniques, more and more multimodal clinical data are becoming publicly available for researchers to experiment with, such as large-scale health data from intensive care admissions (MIMIC-III [12] and eICU [13]) as well as medical imaging data [14, 15]. Cognitive computing-based CDSS deals with multimodal data, which can be categorized as follows:

(i) Clinical Text. Most clinical notes are recorded as text. To make clinical notes understandable by machines, a preliminary task is clinical Named-Entity Recognition (NER) [16-18], which extracts medical entities of specific types like symptoms, vitals, and important findings in the reports and maps the entities to a predefined vocabulary. This enables machines to get the fundamental and critical keywords out of the pool of original texts. The pretrained language model [19-22] is one of the mainstreams of current NLP studies, which maps words (the minimum granularity of grammar) to universal high-dimensional vectors, namely, embeddings, by jointly training multiple NLP tasks on top of these embeddings. Based on medical entities and embeddings, many clinical problems are experimented with machine intelligence, like giving diagnostic suggestions [23-25], predicting clinical outcomes or codes [26, 27], and recommending medication combinations [28].

(ii) Medical Image. Medical imaging is very common in clinical practice to assist diagnosis and assess the patient’s health conditions, including X-ray film [29], CT film [30], MRI film [31], ultrasound images [32], fundus retina images [33], and pathological images [34]. Therefore, machines are trained to automatically analyze medical images to highlight the important findings in the images or give impression results. There are three main tasks that machines usually tackle with computer vision models: detection [35-37] (the existence of a lesion and its location and size in the image), segmentation [31, 38] (outlining the margins and edges of lesions), and classification [30, 39] (figuring out the class of the image or the lesion among the given classes). The technology has been applied in the analysis of medical images like lung cancer CT film [30], brain MRI film [31], fundus imaging for diabetic retinopathy [33], skin cancer imaging [39], and pathology [34].

(iii) Tabular Data. Tabular data are very common in the information systems of health care providers, e.g., demographic information of patients, laboratory test results, and order sets in the Computerized Physician Order Entry (CPOE). Tabular data can be easily fed into machine learning models as feature input. For example, demographic data and biomarkers from laboratory tests are used in the prediction of progression in patients with Alzheimer’s Disease [40]. It is straightforward to build decision trees or naive Bayes models on tabular data, like the selection of the procedure in dental implant placement based on vertical ridge augmentation [41] as well as the prediction of heart disease [42].

(iv) Audio Data. Compared to text, image, and tabular data, audio data are less commonly used in the clinical workflow. A hearing test is a typical scenario that depends on the patient’s reaction to a pure-tone audiogram to determine the health condition of the patient [43]. By incorporating machine intelligence, it is possible to automatically detect dead regions of hearing in patients based on the hearing test [44]. With the technical advances of speech recognition, more attempts in cognitive computing-based CDSS have been made to automatically transcribe the clinical conversation between clinicians and patients from audio to text and further perform clinical decision support based on text analysis [45, 46].
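To make the clinical NER step above concrete, here is a minimal dictionary-matching sketch; the vocabulary and canonical IDs are invented for illustration, and real systems use trained sequence-labeling models rather than exact lookup.

```python
# Toy dictionary-based clinical NER: map vocabulary hits in a note to
# (surface form, entity type, canonical ID). Vocabulary is hypothetical.
import re

VOCAB = {
    "cough": ("symptom", "SYM001"),
    "fever": ("symptom", "SYM002"),
    "asthma": ("disease", "DIS001"),
}

def extract_entities(text):
    """Return (surface form, type, canonical ID) for each vocabulary hit."""
    entities = []
    for token in re.findall(r"[a-z]+", text.lower()):
        if token in VOCAB:
            etype, cid = VOCAB[token]
            entities.append((token, etype, cid))
    return entities

note = "Patient reports persistent cough and mild fever; history of asthma."
print(extract_entities(note))
```

The extracted, normalized entities are the "fundamental and critical keywords" that downstream models (diagnosis prediction, knowledge graph linking) consume.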

In most cases, more than one data modality can be analyzed together in real clinical procedures to comprehensively assess a patient’s health conditions. Therefore, cognitive computing is studied to manage multimodal data. For example, clinical texts are combined with tabular data in automatic diagnosis studies [23, 24, 47]. Medical images and clinical texts are fused for clinical diagnosis, prognosis, and treatment decisions [48]. Multimodal neuroimaging data are jointly analyzed in the study of Alzheimer’s Disease [49]. Audio data of patient-doctor conversations are transformed into texts for a better understanding of the patient’s health [45, 46]. Besides, during the COVID-19 pandemic, the features of clinical texts, laboratory tests, and medical images of COVID-19 patients were systematically and jointly studied, which demonstrates a typical case of dealing with multimodal data.

2.2. Computerizing Medical Knowledge

Medical knowledge, without a doubt, is critical in making clinical decisions. Compared with the human-readable knowledge, manually defined rules, and decision trees in the type of CDSS focusing on providing medical content, cognitive computing-based CDSS studies how to make medical knowledge computable, i.e., readable and manageable by machines. The studies of computable medical knowledge are generally divided into two groups: explicit and implicit knowledge representations.

The studies of explicit medical knowledge representations are aimed at interpretable knowledge discovery on large-scale medical data via learning patterns, associations, relations, pathways, and structures, which can be cognitively understood by machines. The following summarizes multiple types of approaches to explicitly computerize medical knowledge:

(i) Regression Models. Regression models are very common in clinical research. Given observations on the potential influential factors as well as their outcomes, regression models automatically fit the observations by learning the weight of each factor, which can further be used to predict the outcome of unseen samples. The weights obtained by regression models explicitly reflect how important each influential factor is in determining the clinical outcomes, which, apparently, is a perfect match for tabular data, for example, in the prediction of deaths due to COVID-19 [50]. Besides, multiple types of regression models are widely studied, including a linear regression model for Alzheimer’s Disease diagnosis [51], a logistic regression model for cardiovascular risk prediction [52], a Poisson mixture regression model for heart disease prediction [53], and a polynomial regression model in the analysis of MRI films [54].

(ii) Automatic Decision Trees. Different from manually defined decision trees, automatic decision tree models like ID3 [55] and C4.5 [56] learn the logic of the decision process as a rooted tree where each node corresponds to a specific observable variable, and the per-node branching strategy is generated by fitting to the observations with outcomes. The decision on a new sample is obtained by a walk from the root to one of the leaf nodes, where the observation is matched against the branching conditions along the walk. Automatic decision trees have been widely studied in clinical decision support, as in the prediction of obesity risk [57] and the study of differential diagnosis [58]. Based on the decision tree, ensemble learning is incorporated to improve the performance while preserving the features of the decision tree. Examples include the gradient-boosting decision tree (GBDT) in the selection of important features to identify multicancer risk [59], random forest in the prediction of cardiovascular risk [60], and extreme gradient boosting in the prediction of smoking-induced noncommunicable disease [61] and the diagnosis of chronic kidney disease [62].

(iii) Bayesian Methods. The Bayesian methods are based on statistics and Bayes’ theorem, which explicitly represent medical knowledge using the estimated probability of a particular incident. In the simplest form, the naive Bayes model in the diagnostic suggestion problem [63] assumes that, if a disease can possibly cause each of two symptoms to be present on the patient with some probability, then the event that the disease causes one symptom to be present is independent of the event that it causes the other symptom to be present on the same patient. Given that assumption, Bayesian methods estimate the probability of each disease based on the conditional probability that the disease causes each symptom to be present. In real applications, symptoms are not likely to be independent from each other, and Bayesian network models are proposed to predict a diagnosis given a list of observations on symptoms, vitals, and other findings. Besides, Bayesian methods have been used in the prediction of osteonecrosis of the femoral head [64], the inference of diagnosis [65], the prediction of drug-induced liver injury [66], and the study of drug-target interaction [67], as well as in studies with Bayesian networks [23, 68].

(iv) Medical Knowledge Graph. A knowledge graph manages data in a graphical structure where complex semantics are represented by concepts and relations. Apparently, medicine is a typical knowledge-driven subject compared with other domains like education or finance. Thus, the knowledge graph, which represents medical knowledge in an explicit and easy-to-access way, has been widely used in health care studies [69-71], especially in cognitive computing-based CDSS. A knowledge graph mainly consists of ontology and relations. Ontology determines the form of conceptualization of the knowledge in a particular domain. Necessarily, an ontology contains the vocabulary of terms in the given domain and the specification of their meaning [72] as well as the interlinks between these concepts. Relations, often represented by triplets (subject, predicate, object), elucidate how the entities in the knowledge graph are linked with each other. For example, the triplet (asthma, has_symptom, coughing) means that the disease asthma has the symptom coughing. Once the ontology is constructed, relations can be automatically extracted from various kinds of documents [73, 74]. For example, SNOMED CT is a typical resource of scientifically validated medical ontology where concepts, descriptions, and relationships are explicitly defined. With complex semantics represented by machine-readable concepts and relationships, the medical knowledge graph is widely used as external knowledge in predictive tasks for clinical decision support, such as the classification of rare diseases [75], the analytics of cancer data [76], and the prediction of the patient’s health status [77].
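The naive Bayes reasoning described above can be sketched in a few lines; the priors and conditional probabilities here are illustrative numbers, not clinical estimates.

```python
# Toy naive Bayes diagnostic suggestion. Under the independence assumption,
# P(disease | symptoms) is proportional to P(disease) * prod P(symptom | disease).
# Diseases, symptoms, and probabilities are hypothetical.
PRIOR = {"flu": 0.10, "asthma": 0.05}
COND = {  # P(symptom present | disease)
    "flu": {"fever": 0.9, "cough": 0.7},
    "asthma": {"fever": 0.1, "cough": 0.8},
}

def posterior(symptoms):
    """Normalized P(disease | symptoms) over the candidate diseases."""
    scores = {}
    for disease, prior in PRIOR.items():
        p = prior
        for s in symptoms:
            p *= COND[disease].get(s, 0.01)  # small default for unlisted symptoms
        scores[disease] = p
    total = sum(scores.values())
    return {d: p / total for d, p in scores.items()}

print(posterior(["fever", "cough"]))  # flu dominates given these two symptoms
```

In practice the conditional probabilities are estimated from large-scale clinical records, and Bayesian networks replace the independence assumption with an explicit dependency structure.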

The above lists the major approaches to explicitly represent medical knowledge in ways that machine intelligence is able to process. In most cases, the medical knowledge graph can be combined with other approaches. For example, regression models and automatic decision trees depend on predefined features, which are usually created by feature engineering; alternatively, the definition of these features can be directly obtained from the medical knowledge graph.
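The triplet form of a knowledge graph described above can be sketched as a simple store with forward and reverse lookups; the triplets are toy examples mirroring the (asthma, has_symptom, coughing) illustration, not a validated ontology.

```python
# Minimal (subject, predicate, object) triplet store with two query directions.
TRIPLETS = [
    ("asthma", "has_symptom", "coughing"),
    ("asthma", "has_symptom", "wheezing"),
    ("pneumonia", "has_symptom", "fever"),
]

def objects(subject, predicate):
    """Forward lookup: e.g., all symptoms of a disease."""
    return [o for s, p, o in TRIPLETS if s == subject and p == predicate]

def subjects(predicate, obj):
    """Reverse lookup: e.g., which diseases imply an observed finding."""
    return [s for s, p, o in TRIPLETS if p == predicate and o == obj]

print(objects("asthma", "has_symptom"))
print(subjects("has_symptom", "fever"))
```

The reverse lookup is exactly the direction that clinical decision support needs: from findings extracted out of patient data back to candidate diseases.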

Apart from the explicit representations, another group of methods to computerize medical knowledge is implicit representation, or, by the more commonly used term, representation learning. In contrast to explicit knowledge, we classify the methods that focus on obtaining a computation-efficient, high-dimensional latent feature vector for a given object into the group of implicit representation. Generally speaking, implicit representation sacrifices interpretability in exchange for performance gains in downstream tasks. Benefitting from the advances of deep learning, a great number of studies have been conducted on the representation learning of objects, widely known as object2vector, where object means the original form of human-readable knowledge. Examples of studies in medicine include word2vector [22, 78], sentence2vector [79], document2vector [80], image2vector, speech2vector [81], EHR2vector [82], and patient2vector [83].

There exist multiple ways to categorize the methods of representation learning. In this review, we split them into two groups based on whether or not external knowledge is considered in the representation learning of objects:

(i) Direct Representation Learning. This summarizes the group of methods that directly generate the representations of objects based on the original input without auxiliary external knowledge. A number of research works have been carried out on the representations of medical texts based on sequential models like the convolutional neural network [25, 27, 84] and the Gated Recurrent Unit [26], where clinical texts are processed as sequences of tokens and the sequential features are expected to be captured in the representations. For the representations of medical images, studies have been performed on layer-wise representation of images, from local regions up to the representation of the whole image, including both 2D images [85, 86] and 3D images [87]. Apart from the above supervised methods that depend on annotated labels of the input to learn the representations, there exist unsupervised direct representation learning methods, generally known as autoencoders, that encode the input into a feature vector in the latent space; then, another network is used to decode the feature and reconstruct the input [88, 89]. By minimizing the difference between the original input and the reconstructed output, the autoencoder automatically learns the representations of the input without supervised labels.

(ii) External Knowledge-Enhanced Representation Learning. Studies have found that the performance of representation learning can be improved by incorporating external explicit knowledge, especially in the medical domain. There are various forms of introducing external knowledge into representation learning, including the descriptions of medical concepts on Wikipedia pages to enhance the implicit representation of diseases [90], the hierarchy of the International Classification of Diseases to update the representation of diseases [91], the pathway to automatically infer the diagnosis [92], and the causal relationships between diseases and symptoms in automatic diagnosis [23, 24]. Besides, since multimodal data are widely used in cognitive computing-based CDSS, the incorporation of external knowledge as another kind of modality is very common. For example, by introducing explicit knowledge like risk factors extracted from patient questionnaires and electronic health records, the performance of a deep convolutional neural network representing breast cancer patients based on their mammography is significantly improved (AUC from 0.62 to 0.70) [93].
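The unsupervised autoencoder idea mentioned above can be illustrated with a deliberately tiny linear version: encode toy 5-dimensional "patient feature" vectors into a 2-dimensional latent space and train by gradient descent on reconstruction error. Real systems use deep nonlinear networks; the data here are random and the architecture is a sketch, not a clinical model.

```python
# Minimal linear autoencoder trained on toy data: reconstruction error drops
# as the 2-D latent representations improve. All data and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                # toy data: 100 samples, 5 features
W_enc = rng.normal(scale=0.1, size=(5, 2))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(2, 5))   # decoder weights

def loss(X, W_enc, W_dec):
    R = X @ W_enc @ W_dec                    # encode then reconstruct
    return np.mean((X - R) ** 2)

lr = 0.1
initial = loss(X, W_enc, W_dec)
for _ in range(1000):
    Z = X @ W_enc                            # latent representations
    E = Z @ W_dec - X                        # reconstruction error
    grad_dec = Z.T @ E / len(X)              # gradient w.r.t. decoder
    grad_enc = X.T @ (E @ W_dec.T) / len(X)  # gradient w.r.t. encoder
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(initial, loss(X, W_enc, W_dec))        # final error is lower than initial
```

The learned latent vectors play the role of the implicit representations discussed above: compact, computation-efficient, and usable for downstream similarity or prediction tasks, at the cost of interpretability.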

3. General Framework

According to these characteristics, we summarize a general framework to build a cognitive computing-based CDSS, as shown in Figure 1. The framework consists of four layers: medical big data, perception, cognition, and applications.

(i) Medical Big Data. The effectiveness of most data-driven models is determined by the volume, the quality, and the variety of the data. Medical big data maintain a collection of large-scale, high-quality medical data. For cognitive computing-based CDSS, there are usually three types of data: (1) literature of human-readable medical knowledge, including textbooks, guidelines, instructions, and research articles; (2) clinical data generated in the information systems during routine clinical events, including clinical notes, medical films, laboratory test data, acoustic waveforms, and demographic data of patients; (3) application data generated during the use of applications, e.g., system logs, clinicians’ feedback on the decision support, and statistical counts of different diagnosis codes. Medical big data meet the needs of data for training machine learning models, monitoring the status of the CDSS, and analyzing clinicians’ behaviors and preferences, as well as upgrading the CDSS towards a better experience.

(ii) Perception. The perception layer provides functions to process medical data of various modalities as well as the ability to perform holistic multimodal data analysis. It integrates AI operators like natural language processing, medical image analysis, structured data analysis, and speech recognition to capture, understand, and manipulate various kinds of health-related data with respect to language, vision, tabular data, and speech. Moreover, the perception layer is able to jointly fuse the analyses of multimodal data.

(iii) Cognition. The cognition layer computerizes medical knowledge by transforming the human-readable knowledge from the perceived multimodal data into machine-readable knowledge with explicit methods like regression models, automatic decision trees, Bayesian methods, and medical knowledge graphs, as well as implicit methods like representation learning. Computable medical knowledge, learned from large-scale, high-quality data, forms the core of cognitive computing-based CDSS. It enables the downstream tasks to reason with medical knowledge. For example, the machine can read and understand the patient’s EMR documents with the assistance of clinical natural language processing, which extracts symptoms, vitals, and other important findings and links them to the medical knowledge graph that shows what diseases these findings imply.

(iv) Applications. On top of the perception and cognition layers, the applications layer embeds the capabilities of CDSS into the clinical pathway of real practice. Some of the popular applications of cognitive computing-based CDSS are diagnostic suggestion, treatment recommendation, consultation support, rational use of medicines, and knowledge bases for diseases and treatment plans.

Figure 2 demonstrates an example of automatically inferring the diagnosis of a patient under the general framework of cognitive computing-based CDSS. Multimodal clinical data about the patient are obtained simultaneously, including clinical documents like admission notes and medical records, tabular data like vitals and laboratory test results, as well as medical images like chest X-ray films. For clinical texts, sequential features are extracted by models like convolutional neural networks and recurrent neural networks, which can be used to classify the category of the diagnosis based on the text description, e.g., respiratory disease or infectious disease. Besides, critical entities, attributes, and relationships are extracted from clinical texts that add to the machine-readable knowledge about the patient, e.g., the symptom cough is recognized in the texts. For tabular data, most values can be directly used in machine learning models as flattened features, except that some require extra processing. For example, the continuous-valued temperature from vitals may be discretized to fever or no fever, and the count of white blood cells (WBC) per liter can be discretized to low, normal, and high. For medical images, deep learning models can be used to detect salient objects, segment lesions, and classify the types of lesions. All the above transform the human-readable medical data into evidence leading to the diagnosis of COVID-19, including the category, the symptoms, the history of contagious contacts, and the findings from radiology, where the relationships and semantics are given by the medical knowledge graph. Besides, the similarity of the patient to COVID-19 cases can be measured by their implicit representations in the latent space. Therefore, multimodal data are jointly analyzed, and human-readable medical knowledge is transformed into machine-readable knowledge before generating the result of the diagnosis.
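The discretization step described above can be sketched directly; the thresholds here are illustrative values around commonly cited adult ranges, not clinical reference standards.

```python
# Sketch of tabular-data preprocessing: continuous vitals and lab values are
# discretized into categorical features before entering a model.
def discretize_temperature(celsius):
    """Illustrative fever threshold of 38.0 C."""
    return "fever" if celsius >= 38.0 else "no fever"

def discretize_wbc(count_per_liter):
    """Illustrative adult bounds around the 4-10 x 10^9/L range."""
    if count_per_liter < 4e9:
        return "low"
    if count_per_liter > 10e9:
        return "high"
    return "normal"

print(discretize_temperature(38.6))  # fever
print(discretize_wbc(12e9))          # high
```

The resulting categorical features plug into decision trees, naive Bayes models, or knowledge graph lookups in the same way as findings extracted from text.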

4. Applications

In this section, the applications of cognitive computing-based CDSS are reviewed. As illustrated in Figure 3, cognitive computing-based CDSS provides real-time decision support to different health care roles, like physicians, pharmacists, and nurses, at the appropriate time and through appropriate approaches alongside the routine clinical workflow. From the screening of disease throughout the clinical pathway to the discharge of patients, there are many points at which cognitive computing-based CDSS is capable of supporting decision-making, like consultation support, test item recommendation, diagnostic suggestion, and treatment recommendation. The following subsections review some applications of cognitive computing-based CDSS along the clinical pathway.

4.1. Diagnostic Suggestion

Diagnostic suggestion is one of the most commonly used applications. By jointly analyzing the clinical notes, reports, laboratory test results, and medical images, machine intelligence is capable of automatically inferencing the most appropriate diagnosis code, which is suggested to physicians before they determine the ultimate diagnosis. In the study of pediatric disease diagnosis, the machine outperforms junior physicians (F1 score 0.885 vs. 0.840) in diagnosis accuracy based on clinical notes, although it is inferior to senior physicians (F1 score 0.885 vs. 0.915) [94]. By incorporating the knowledge of disease-symptom causal relationships in deep learning, Yuan et al. [24] achieve state-of-the-art diagnosis results on the admission notes of the 50 most frequent diagnosis codes in the public MIMIC-III dataset [12]. In the study of skin cancer diagnosis based on visual skin images, a convolutional neural network-based model outperforms certificated dermatologists in the task of three-way classification, i.e., benign, malignant, or nonneoplastic, with an average accuracy of 72.1% vs. 65.78% [39]. Moreover, in the study of the intraoperative diagnosis of brain tumors based on stimulated Raman histology images, the performance of deep learning is superior to pathologist-based interpretation, with an overall accuracy of 94.6% vs. 93.9% [87]. These studies provide evidence that, in quite a few areas, cognitive computing-based CDSS has reached human-level diagnostic performance. Diagnostic suggestions given by cognitive computing-based CDSS can actually assist clinicians in real-world diagnosis, especially in primary health care. Beyond research results, diagnostic suggestion has been implemented as a critical feature of cognitive computing-based CDSS in industrial products including Baidu [23, 24] and iFLYTEK [95].

4.2. Detection of Misdiagnosis

Misdiagnosis is a severe worldwide issue in primary health care. Cognitive computing-based CDSS is capable of detecting diagnostic errors by comparing physicians’ diagnosis codes and features with those generated by a machine. A logistic regression-based study on the detection of delirium misdiagnosis over 5 months on real medical records obtained an average accuracy of 72% [96], providing evidence that a machine can assist physicians in identifying error-prone diagnoses. Based on medication data, another study detects epilepsy patients among medical records assigned nonepilepsy diagnosis codes [97], achieving an AUC of 97.2% on real medical records.

4.3. Treatment Recommendation

There has been strong evidence that system-initiated recommendations of treatment to health care providers are effective at improving the quality of treatment orders [98]. Compared to diagnostic suggestions, treatment plans for frequent diseases are relatively fixed, which means treatment recommendation is more knowledge dependent. Therefore, the study of treatment recommendation in cognitive computing-based CDSS puts more focus on constructing the medical knowledge graph, e.g., the disease-drug bipartite graph [99], where treatment plans are induced once the diagnosis is determined.

Besides knowledge-driven methods, there are many studies of data-driven models for treatment recommendation in cognitive computing-based CDSS. These models share similar assumptions: (1) horizontally, the historical diagnosis and treatment data of the whole population collectively demonstrate patterns of co-occurrence, i.e., similar treatments for similar diagnoses; (2) longitudinally, the patient’s past diagnosis and treatment data can influence his/her treatment in the near future. The study of recommending medication combinations reduces the rate of Drug-Drug Interaction (DDI) by 3.6% relative to the existing EHR data [28], which lowers the risk of the patient simultaneously taking multiple drugs that should not be used together. Many studies have been conducted on the public MIMIC-III dataset [12], which leverage the longitudinal medical records as well as the complex relationships between medications and diagnosis codes and achieve a recommendation performance of over 63% F1 score [100, 101].
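The horizontal co-occurrence assumption above can be sketched as a simple counting model: recommend the drugs that most often co-occur with a diagnosis in historical records. The (diagnosis, drug) pairs are toy examples; real systems also model longitudinal patient history and DDI risk.

```python
# Toy co-occurrence model for treatment recommendation.
from collections import Counter, defaultdict

# Hypothetical historical (diagnosis, drug) pairs from past records.
history = [
    ("flu", "oseltamivir"), ("flu", "paracetamol"),
    ("flu", "paracetamol"), ("asthma", "salbutamol"),
]

cooccur = defaultdict(Counter)
for diagnosis, drug in history:
    cooccur[diagnosis][drug] += 1

def recommend(diagnosis, k=2):
    """Top-k drugs by co-occurrence count with the given diagnosis."""
    return [drug for drug, _ in cooccur[diagnosis].most_common(k)]

print(recommend("flu"))  # paracetamol (2 co-occurrences) before oseltamivir (1)
```

The longitudinal assumption would extend this by conditioning the counts on each patient's own prior visits rather than pooling the whole population.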

4.4. Rational Use of Medicines

The World Health Organization (WHO) estimates that more than 50% of medicines worldwide are prescribed, dispensed, or sold inappropriately. The rational use of medicines requires that medicines meet the patient’s needs in an appropriate dosage and for an adequate period of time, at low cost and with minimal harm to the patient’s health. A study on over 9 million prescriptions given by dentists [102] reports that about 96.6% of antibiotics are prescribed for irrational or uncertain indications, leading to great waste of medical resources and unnecessary risk to patients’ health.

With the ability to read and understand medicine instructions, the pharmacopeia, and the patient’s clinical data, cognitive computing-based CDSS is capable of automatically detecting issues, e.g., errors, warnings, and conflicts, in prescriptions. These issues include potential DDIs, medicine-allergy history conflicts, medicine-demographic conflicts (e.g., adult-only or pregnancy-only medicines), contraindications between diagnosis and medicine, medicines without a corresponding compatible diagnosis, and medicines with inappropriate dosage, frequency, or duration. The detection of potential DDIs is one of the hottest topics in this direction, where DDI relationships can be automatically extracted from texts with machine learning [103]. A review of commercial DDI software reports Drug-Reax from Micromedex as the most commonly used, owing to its high sensitivity [104]. Drug Interaction Facts software for cancer patients is another widely used commercial DDI product, reporting the severity of each DDI as well as the level of supporting evidence with accuracy over 95% [104].
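The categories of issues listed above lend themselves to a rule-based check once the relevant knowledge tables exist. The sketch below screens a prescription for DDIs, demographic conflicts, allergy conflicts, and dosage limits; every table entry, drug name, and threshold is hypothetical and not clinical advice.

```python
# Illustrative knowledge tables (entries are hypothetical examples).
DDI_PAIRS = {frozenset({"warfarin", "aspirin"})}   # risky combination
ADULT_ONLY = {"warfarin"}
MAX_DAILY_DOSE_MG = {"aspirin": 4000}

def check_prescription(meds, patient):
    """Scan a prescription for DDIs, allergy conflicts, demographic
    conflicts, and dosage issues; return a list of warnings."""
    issues = []
    names = [m["name"] for m in meds]
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if frozenset({a, b}) in DDI_PAIRS:
                issues.append(f"DDI: {a} + {b}")
    for m in meds:
        if m["name"] in patient["allergies"]:
            issues.append(f"allergy conflict: {m['name']}")
        if m["name"] in ADULT_ONLY and patient["age"] < 18:
            issues.append(f"adult-only medicine: {m['name']}")
        limit = MAX_DAILY_DOSE_MG.get(m["name"])
        if limit is not None and m["daily_dose_mg"] > limit:
            issues.append(f"dose above {limit} mg/day: {m['name']}")
    return issues

rx = [{"name": "warfarin", "daily_dose_mg": 5},
      {"name": "aspirin", "daily_dose_mg": 6000}]
issues = check_prescription(rx, {"age": 45, "allergies": {"penicillin"}})
print(issues)
```

Production systems complement such hand-written rules with DDI relations mined from text, as in [103].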

5. Examples of the Existing Systems

In China, cognitive computing-based CDSS has been implemented and deployed in many primary health care facilities to assist general practitioners in diagnosis and treatment. Among the providers, Baidu, iFLYTEK, and Dr. Mayson are representative of the deployed systems in China, and each has advantageous features over the others. For example, built on Chinese natural language processing and knowledge graph technologies and products, the CDSS from Baidu is good at dealing with clinical texts [105], which is critical in cognitive computing tasks such as recommending diagnoses by understanding the patient’s illness from clinical texts [23, 24]. iFLYTEK is one of the top audio technology providers in China, and thus the voice medical record is a key feature of its CDSS, which frees the physician from manually typing and editing texts in the EMR. Instead, physicians interactively communicate with the computer via a customized microphone that captures the physician’s spoken answers and transcribes them to text to automatically generate a medical record. Meanwhile, iFLYTEK’s system remains the only one to have passed the written test of China’s National Medical Licensing Examination, surpassing 96.3% of human examinees in 2017 [95]. This is strong evidence that cognitive computing-based CDSS has reached near-human performance and is ready to assist physicians in real medical practice. Besides, Dr. Mayson is empowered by a localized version of the Mayo Clinic’s medical knowledge base, which provides professionally verified descriptions of diseases, pathways, and treatment plans.

Figure 4 demonstrates how cognitive computing-based CDSS is used in medical practice, in a way that is similar across the existing systems reviewed above. First, cognitive computing-based CDSS integrates with other information systems like the EMR, HIS, and LIS so that it is capable of perceiving the clinical data. Multiple modes of integration are available, including C/S (client/server), B/S (browser/server), and API. The choice of integration is influenced by the original information system that records the physician’s input, because CDSS is not supposed to greatly change how the physician usually works, let alone impose an extra burden. To provide the appropriate information at the appropriate time in an appropriate way, cognitive computing-based CDSS does not present all relevant information at once. Instead, it presents only the support for the current clinical decision, as suggestions or system-initiated alerts. Therefore, cognitive computing-based CDSS has to recognize the clinical scene and determine what support the physician needs. An example is given in Figure 3.
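The scene-aware behavior described above can be sketched as a dispatch function: the CDSS receives the current clinical context from the integrated EMR (e.g., via an API call) and returns only the support relevant to that step. The scene names and payload fields below are invented for illustration.

```python
# Hypothetical scene-aware support dispatcher for an API-integrated CDSS.
def cdss_support(scene, context):
    """Return only the support relevant to the current clinical scene,
    instead of presenting all information at once."""
    if scene == "history_taking":
        return {"suggested_questions": ["onset", "duration", "severity"]}
    if scene == "diagnosis":
        return {"candidate_diagnoses": context.get("predicted_dx", [])}
    if scene == "prescribing":
        return {"alerts": context.get("rx_issues", [])}
    return {}   # unknown scene: stay silent rather than interrupt

print(cdss_support("diagnosis", {"predicted_dx": ["I10"]}))
```

Keeping the dispatcher silent on unrecognized scenes reflects the design goal of not burdening the physician’s usual workflow.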

6. Limitations

Despite its potential to significantly decrease the errors, mistakes, and risks caused by health care roles in clinical activities, cognitive computing-based CDSS has obvious limitations that raise great concerns before it steps into the spotlight of real medical practice.

(i) The Habit of Using Cognitive Computing-Based CDSS. There is a long way to go to help health care roles develop the habit of using cognitive computing-based CDSS in the routine clinical workflow. The system is new to clinicians, who, when feeling uncertain about a diagnosis or treatment, are still used to searching medical databases online or looking things up in books by hand. It is necessary but challenging to widely introduce cognitive computing-based CDSS into real clinical activities and change this stereotype; health care roles are supposed to be open-minded and embrace machine intelligence as their assistant.

(ii) The Interconnection of Medical Data. Due to legal and privacy issues, the isolation of medical data between health care facilities has greatly limited the application of cognitive computing-based CDSS. Medical data are scattered across individual health care facilities throughout the nation. On the one hand, this isolation limits the performance of data-driven methods in cognitive computing-based CDSS; on the other hand, it builds a barrier that prevents machine intelligence from comprehensively understanding patients’ health conditions. Therefore, it is important to push forward the interconnection of medical data from the administrative perspective.

(iii) The Interpretability of Cognitive Computing-Based CDSS. Despite the superior performance of machine intelligence over traditional technology, the interpretability of machine-generated results has always been controversial. In particular, most deep learning models are trained end-to-end as black boxes. Different from other domains, evidence for clinical decision support is necessary because it matters for every health-related decision. There have been studies on the interpretability of machine learning models that incorporate attention mechanisms [24, 26, 100] or external medical knowledge [23, 90]. More efforts are required to advance the interpretability of cognitive computing-based CDSS.
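To illustrate the attention-style interpretability mentioned above, a toy readout can surface the input tokens that most influenced a prediction as "evidence" for the clinician; the tokens and relevance scores below are hypothetical stand-ins for the output of a trained model.

```python
import numpy as np

# Per-token relevance scores, as a trained attention layer might emit them.
tokens = ["fever", "cough", "chest", "pain", "three", "days"]
scores = np.array([2.0, 1.5, 0.3, 0.4, 0.1, 0.1])   # pre-softmax relevance
weights = np.exp(scores) / np.exp(scores).sum()      # softmax attention

# Surface the most influential tokens as supporting evidence.
top = sorted(zip(tokens, weights), key=lambda t: -t[1])[:2]
print([tok for tok, _ in top])  # ['fever', 'cough']
```

Attention weights are only a partial explanation of a deep model, which is one reason interpretability remains an open problem for clinical deployment.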

7. Conclusion

Over the last decade, cognitive computing has pushed the next generation of CDSS into real medical practice. Different from other types of CDSS, cognitive computing-based CDSS manages multimodal data and computerizes medical knowledge, revolutionizing the way machine intelligence assists health care roles in clinical decision-making. Despite some reported use cases, the medical community is supposed to be more open-minded about working with cognitive computing-based CDSS as an assistant, which will become the new normal in the coming decades.

Conflicts of Interest

Dr. Chen reports personal fees from Baidu Inc., outside the submitted work; in addition, Dr. Chen has a patent US16/802,331 pending, a patent US17/116,972 pending, a patent CN202010592658.1 pending, a patent CN202010478500.1 pending, a patent CN202010342276.3 pending, and a patent CN202011382636.9 pending. Dr. Lu reports personal fees from Baidu Inc., outside the submitted work; in addition, Dr. Lu has a patent US16/802,331 pending, a patent US17/116,972 pending, a patent CN202010592658.1 pending, a patent CN202010478500.1 pending, a patent CN202010342276.3 pending, a patent CN202011382636.9 pending, a patent CN201911110599.3 pending, and a patent CN201930627045.5 licensed. Mr. Huang reports personal fees from Baidu Inc., outside the submitted work; in addition, Mr. Huang has a patent US16/802,331 pending, a patent US17/116,972 pending, a patent CN202010592658.1 pending, a patent CN202010478500.1 pending, a patent CN202010342276.3 pending, a patent CN202011382636.9 pending, a patent CN201911110599.3 pending, and a patent CN201930627045.5 licensed. Ms. Zhu reports personal fees from Baidu Inc., outside the submitted work; in addition, Ms. Zhu has a patent CN201911110599.3 pending and a patent CN201930627045.5 licensed. Ms. Yang reports personal fees from Baidu Inc., outside the submitted work. Mr. Liu reports personal fees from Baidu Inc., outside the submitted work; in addition, Mr. Liu has a patent CN201911110599.3 pending and a patent CN201930627045.5 licensed. Mrs. Huang reports personal fees from Baidu Inc., outside the submitted work. Dr. Deng reports personal fees from the Affiliated Hospital of Weifang Medical University, outside the submitted work. Dr. Han reports personal fees from the First Affiliated Hospital, China Medical University, outside the submitted work.

Authors’ Contributions

Substantial contributions to the conception and design were provided by all authors. Acquisition, analysis, and interpretation of data were handled by J. Chen, C. Lu, and H.F. Huang for the technical literature and by D.W. Zhu, Q. Yang, J.W. Liu, and Y. Huang for the investigation of real-world applications in health care facilities and the statistics. Drafting of the manuscript was handled by J. Chen. Critical revision for important intellectual content was handled by A.J. Deng, X.X. Han, and J. Chen. Support of clinical experience and feedback was provided by A.J. Deng and X.X. Han. Visits to multiple health care facilities across China regarding CDSS were led by D.W. Zhu, Q. Yang, J.W. Liu, and Y. Huang.

Acknowledgments

Our work is supported by the National Key Research and Development Program of China under No. 2020AAA0109400.

References

  1. T. Isaac, J. Zheng, and A. Jha, “Use of UpToDate and outcomes in US hospitals,” Journal of Hospital Medicine, vol. 7, no. 2, pp. 85–90, 2012. View at: Publisher Site | Google Scholar
  2. R. Zubair, G. Francisco, and B. Rao, “Artificial intelligence for clinical decision support,” Cutis, vol. 102, no. 3, pp. 210-211, 2018. View at: Google Scholar
  3. J. E. Kelly III, Computing, Cognition and the Future of Knowing: How Humans and Machines Are Forging a New Age of Understanding, IBM Research Whitepaper, 2015.
  4. K. Q. Zhu and S. J. Zhang, “Analysis of autopsy cases in 50 years,” Chinese Journal of Internal Medicine, vol. 43, no. 2, pp. 128–130, 2004. View at: Publisher Site | Google Scholar
  5. H. Singh, A. N. D. Meyer, and E. J. Thomas, “The frequency of diagnostic errors in outpatient care: estimations from three large observational studies involving US adult populations,” BMJ Quality and Safety, vol. 23, no. 9, pp. 727–731, 2014. View at: Publisher Site | Google Scholar
  6. S. Lian, Q. Chen, M. Yao, C. Chi, and M. D. Fetters, “Training pathways to working as a general practitioner in China,” Family Medicine, vol. 51, no. 3, pp. 262–270, 2019. View at: Publisher Site | Google Scholar
  7. A. Majeed, “Shortage of general practitioners in the NHS,” BMJ, vol. 358, no. article j3191, 2017. View at: Publisher Site | Google Scholar
  8. B. Hayhoe, A. Majeed, M. Hamlyn, and M. Sinha, “Primary care workforce crisis: how many more GPs do we need,” in Harrogate: RCGP Annual Conference, Harrogate, North Yorkshire, England, 2016. View at: Google Scholar
  9. S. Noda, P. M. R. Hernandez, K. Sudo et al., “Service delivery reforms for asian ageing societies: a cross-country study between Japan, South Korea, China, Thailand, Indonesia, and the Philippines,” International Journal of Integrated Care, vol. 21, no. 2, p. 1, 2021. View at: Publisher Site | Google Scholar
  10. WHO, World health statistics 2020, monitoring health for the SDGs, 2020, https://apps.who.int/iris/bitstream/handle/10665/332070/9789240005105-eng.pdf.
  11. B. Middleton, D. F. Sittig, and A. Wright, “Clinical decision support: a 25 year retrospective and a 25 year vision,” Yearbook of Medical Informatics, vol. 25, Supplement 01, pp. S103–S116, 2016. View at: Publisher Site | Google Scholar
  12. A. E. Johnson, T. J. Pollard, L. Shen et al., “MIMIC-III, a freely accessible critical care database,” Scientific Data, vol. 3, no. 1, p. 160035, 2016. View at: Publisher Site | Google Scholar
  13. T. J. Pollard, A. E. W. Johnson, J. D. Raffa, L. A. Celi, R. G. Mark, and O. Badawi, “The eICU collaborative research database, a freely available multi-center database for critical care research,” Scientific Data, vol. 5, no. 1, p. 180178, 2018. View at: Publisher Site | Google Scholar
  14. X. Wang, Y. Peng, L. Lu, Z. Lu, M. Bagheri, and R. M. Summers, “ChestX-ray8: hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3462–3471, Honolulu, HI, USA, July 2017. View at: Publisher Site | Google Scholar
  15. J. Irvin, P. Rajpurkar, M. Ko et al., “Chexpert: a large chest radiograph dataset with uncertainty labels and expert comparison,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 590–597, 2019. View at: Publisher Site | Google Scholar
  16. Y. Wu, M. Jiang, J. Xu, D. Zhi, and H. Xu, “Clinical named entity recognition using deep learning models,” in AMIA Annual Symposium, pp. 1812–1819, San Francisco, California, USA, 2018. View at: Google Scholar
  17. Q. Wang, Y. Zhou, T. Ruan, D. Gao, Y. Xia, and P. He, “Incorporating dictionaries into deep neural networks for the Chinese clinical named entity recognition,” Journal of Biomedical Informatics, vol. 92, p. 103133, 2019. View at: Publisher Site | Google Scholar
  18. S. Zhao, Z. Cai, H. Chen, Y. Wang, F. Liu, and A. Liu, “Adversarial training based lattice LSTM for Chinese clinical named entity recognition,” Journal of Biomedical Informatics, vol. 99, p. 103290, 2019. View at: Publisher Site | Google Scholar
  19. Y. Sun, S. Wang, Y. Li et al., “ERNIE 2.0: a continual pre-training framework for language understanding,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 5, pp. 8968–8975, 2020. View at: Publisher Site | Google Scholar
  20. J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: pre-training of deep bidirectional transformers for language understanding,” in The Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 4171–4186, Minneapolis, Minnesota, USA, 2019. View at: Google Scholar
  21. M. E. Peters, M. Neumann, M. Iyyer et al., “Deep contextualized word representations,” in Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 2227–2237, New Orleans, Louisiana, USA, 2018. View at: Publisher Site | Google Scholar
  22. J. Lee, W. Yoon, S. Kim et al., “BioBERT: a pretrained biomedical language representation model for biomedical text mining,” Bioinformatics, vol. 36, no. 4, pp. 1234–1240, 2020. View at: Publisher Site | Google Scholar
  23. J. Chen, X. Dai, Q. Yuan, C. Lu, and H. Huang, “Towards interpretable clinical diagnosis with Bayesian network ensembles stacked on entity-aware CNNs,” in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 3143–3153, 2020. View at: Publisher Site | Google Scholar
  24. Q. Yuan, J. Chen, C. Lu, and H. Huang, “The graph-based mutual attentive network for automatic diagnosis,” in Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, pp. 3393–3399, July 2020. View at: Publisher Site | Google Scholar
  25. Z. Yang, Y. Huang, Y. Jiang, Y. Sun, Y. J. Zhang, and P. Luo, “Clinical assistant diagnosis for electronic medical record based on convolutional neural network,” Scientific Reports, vol. 8, no. 1, p. 6329, 2018. View at: Publisher Site | Google Scholar
  26. Y. Sha and M. D. Wang, “Interpretable predictions of clinical outcomes with an attention-based recurrent neural network,” in ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics, pp. 233–240, Boston, MA, USA, 2017. View at: Google Scholar
  27. J. Mullenbach, S. Wiegreffe, J. Duke, J. Sun, and J. Eisenstein, “Explainable prediction of medical codes from clinical text,” in The Annual Conference of the North American Chapter of the Association for Computational Linguistics, pp. 1101–1111, New Orleans, Louisiana, USA, 2018. View at: Google Scholar
  28. J. Shang, C. Xiao, T. Ma, H. Li, and J. Sun, “GAMENet: graph augmented memory networks for recommending medication combination,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 1126–1133, 2019. View at: Publisher Site | Google Scholar
  29. P. Rajpurkar, J. Irvin, K. Zhu et al., “CheXNet: radiologist-level pneumonia detection on chest X-rays with deep learning,” Tech. Rep., CoRR, 2017, https://arxiv.org/abs/1711.05225. View at: Google Scholar
  30. S. K. Lakshmanaprabu, S. N. Mohanty, K. Shankar, N. Arunkumar, and G. Ramirez, “Optimal deep learning model for classification of lung cancer on CT images,” Future Generation Computer Systems, vol. 92, pp. 374–382, 2019. View at: Publisher Site | Google Scholar
  31. M. Attique, G. Gilanie, Hafeez-Ullah et al., “Colorization and automated segmentation of human T2 MR brain images for characterization of soft tissues,” PLoS One, vol. 7, no. 3, article e33616, 2012. View at: Publisher Site | Google Scholar
  32. S. Liu, Y. Wang, X. Yang et al., “Deep learning in medical ultrasound analysis: a review,” Engineering, vol. 5, no. 2, pp. 261–275, 2019. View at: Publisher Site | Google Scholar
  33. F. Arcadu, F. Benmansour, A. Maunz, J. Willis, Z. Haskova, and M. Prunotto, “Deep learning algorithm predicts diabetic retinopathy progression in individual patients,” npj Digital Medicine, vol. 2, no. 1, pp. 115–118, 2019. View at: Publisher Site | Google Scholar
  34. M. K. K. Niazi, A. V. Parwani, and M. N. Gurcan, “Digital pathology and artificial intelligence,” The Lancet Oncology, vol. 20, no. 5, pp. e253–e261, 2019. View at: Publisher Site | Google Scholar
  35. S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: towards real-time object detection with region proposal networks,” in The Annual Conference on Neural Information Processing Systems, Montreal, Canada, 2015. View at: Google Scholar
  36. T.-Y. Lin, P. Dollar, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature pyramid networks for object detection,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, July 2017. View at: Publisher Site | Google Scholar
  37. Z. Cai and N. Vasconcelos, “Cascade R-CNN: delving into high quality object detection,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6154–6162, Salt Lake City, UT, USA, June 2018. View at: Publisher Site | Google Scholar
  38. M. Martinez-Escobar, J. Leng Foo, and E. Winer, “Colorization of CT images to improve tissue contrast for tumor segmentation,” Computers in Biology and Medicine, vol. 42, no. 12, pp. 1170–1178, 2012. View at: Publisher Site | Google Scholar
  39. A. Esteva, B. Kuprel, R. A. Novoa et al., “Dermatologist-level classification of skin cancer with deep neural networks,” Nature, vol. 542, no. 7639, pp. 115–118, 2017. View at: Publisher Site | Google Scholar
  40. S. Pölsterl, I. Sarasua, B. Gutiérrez-Becker, and C. Wachinger, “A wide and deep neural network for survival analysis from anatomical shape and tabular clinical data,” Machine Learning and Knowledge Discovery in Databases, pp. 453–464, 2020. View at: Google Scholar
  41. A. B. Plonka, I. Urban, and H. L. Wang, “Decision tree for vertical ridge augmentation,” The International Journal of Periodontics & Restorative Dentistry, vol. 38, no. 2, pp. 269–275, 2018. View at: Publisher Site | Google Scholar
  42. S. Maheswari and R. Pitchai, “Heart disease prediction system using decision tree and naive Bayes algorithm,” Current Medical Imaging Formerly Current Medical Imaging Reviews, vol. 15, no. 8, pp. 712–717, 2019. View at: Publisher Site | Google Scholar
  43. R. A. Davies, “Audiometry and other hearing tests,” Handbook of Clinical Neurology, vol. 137, pp. 157–176, 2016. View at: Publisher Site | Google Scholar
  44. J. Schlittenlacher, R. E. Turner, and B. C. J. Moore, “A hearing-model-based active-learning test for the determination of dead regions,” Trends in Hearing, vol. 22, 2018. View at: Publisher Site | Google Scholar
  45. H. Suominen, L. Zhou, L. Hanlen, and G. Ferraro, “Benchmarking clinical speech recognition and information extraction: new data, methods, and evaluations,” JMIR Medical Informatics, vol. 3, no. 2, p. e19, 2015. View at: Publisher Site | Google Scholar
  46. J. Kodish-Wachs, E. Agassi, I. Patrick Kenny, and J. M. Overhage, “A systematic comparison of contemporary automatic speech recognition engines for conversational clinical speech,” in AMIA Annu Symp Proc, pp. 683–689, San Francisco, California, USA, 2018. View at: Google Scholar
  47. Z. Qiao, X. Wu, S. Ge, and W. Fan, “MNN: multimodal attentional neural networks for diagnosis prediction,” in Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, pp. 5937–5943, Macao, China, August 2019. View at: Publisher Site | Google Scholar
  48. S.-C. Huang, A. Pareek, S. Seyyedi, I. Banerjee, and M. P. Lungren, “Fusion of medical imaging and electronic health records using deep learning: a systematic review and implementation guidelines,” npj Digital Medicine, vol. 3, no. 1, p. 136, 2020. View at: Publisher Site | Google Scholar
  49. S. Liu, S. Liu, W. Cai et al., “Multimodal neuroimaging feature learning for multiclass diagnosis of Alzheimer's disease,” IEEE Transactions on Biomedical Engineering, vol. 62, no. 4, pp. 1132–1140, 2015. View at: Publisher Site | Google Scholar
  50. S. Ghosal, S. Sengupta, M. Majumder, and B. Sinha, “Linear regression analysis to predict the number of deaths in India due to SARS-CoV-2 at 6 weeks from day 0 (100 cases - March 14th 2020),” Diabetes & Metabolic Syndrome: Clinical Research & Reviews, vol. 14, no. 4, pp. 311–315, 2020. View at: Publisher Site | Google Scholar
  51. Y. Chen, Y. Shao, J. Yan et al., “A feature-free 30-disease pathological brain detection system by linear regression classifier,” CNS & Neurological Disorders Drug Targets, vol. 16, no. 1, pp. 5–10, 2017. View at: Publisher Site | Google Scholar
  52. S. F. Weng, J. Reps, J. Kai, J. M. Garibaldi, and N. Qureshi, “Can machine-learning improve cardiovascular risk prediction using routine clinical data?” PLoS One, vol. 12, no. 4, article e0174944, 2017. View at: Publisher Site | Google Scholar
  53. C. Mufudza and H. Erol, “Poisson mixture regression models for heart disease prediction,” Computational and Mathematical Methods in Medicine, vol. 2016, Article ID 4083089, 10 pages, 2016. View at: Publisher Site | Google Scholar
  54. S. C. Leu, Z. Huang, and Z. Lin, “Generation of Pseudo-CT using High-Degree Polynomial Regression on Dual- Contrast Pelvic MRI Data,” Scientific Reports, vol. 10, no. 1, p. 8118, 2020. View at: Publisher Site | Google Scholar
  55. J. Quinlan, “Induction of decision trees,” Machine Learning, vol. 1, no. 1, pp. 81–106, 1986. View at: Publisher Site | Google Scholar
  56. S. L. Salzberg, “C4.5: programs for machine learning,” Machine Learning, vol. 16, no. 3, pp. 235–240, 1994. View at: Publisher Site | Google Scholar
  57. C. Rodríguez-Pardo, A. Segura, J. J. Zamorano-León et al., “Decision tree learning to predict overweight/obesity based on body mass index and gene polymporphisms,” Gene, vol. 699, pp. 88–93, 2019. View at: Publisher Site | Google Scholar
  58. L. Moraes, C. E. Pedreira, S. Barrena, A. Lopez, and A. Orfao, “A decision-tree approach for the differential diagnosis of chronic lymphoid leukemias and peripheral B-cell lymphomas,” Computer Methods and Programs in Biomedicine, vol. 178, pp. 85–90, 2019. View at: Publisher Site | Google Scholar
  59. J. Zhang, D. Xu, K. Hao et al., “FSGBDT: identification multicancer-risk module via a feature selection algorithm by integrating Fisher score and GBDT,” Briefings in Bioinformatics, vol. 22, no. 3, 2020. View at: Publisher Site | Google Scholar
  60. L. Yang, H. Wu, X. Jin et al., “Study of cardiovascular disease prediction model based on random forest in eastern China,” Scientific Reports, vol. 10, no. 1, p. 5245, 2020. View at: Publisher Site | Google Scholar
  61. K. Davagdorj, V. H. Pham, N. Theera-Umpon, and K. H. Ryu, “XGBoost-based framework for smoking-induced noncommunicable disease prediction,” International Journal of Environmental Research and Public Health, vol. 17, no. 18, p. 6513, 2020. View at: Publisher Site | Google Scholar
  62. A. Ogunleye and Q.-G. Wang, “XGBoost model for chronic kidney disease diagnosis,” IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 17, no. 6, pp. 2131–2140, 2020. View at: Publisher Site | Google Scholar
  63. C. D. Manning, P. Raghavan, and H. Schütze, Introduction to Information Retrieval, Cambridge University Press, USA, 2008, ch. Text classification and Naive Bayes.
  64. S. Cui, L. Zhao, Y. Wang et al., “Using naive Bayes classifier to predict osteonecrosis of the femoral head with cannulated screw fixation,” Injury, vol. 49, no. 10, pp. 1865–1870, 2018. View at: Publisher Site | Google Scholar
  65. Y. Shen, Y. Li, H. T. Zheng, B. Tang, and M. Yang, “Enhancing ontology-driven diagnostic reasoning with a symptom-dependency-aware naïve Bayes classifier,” BMC Bioinformatics, vol. 20, no. 1, p. 330, 2019. View at: Publisher Site | Google Scholar
  66. D. P. Williams, S. E. Lazic, A. J. Foster, E. Semenova, and P. Morgan, “Predicting drug-induced liver injury with Bayesian machine learning,” Chemical Research in Toxicology, vol. 33, no. 1, pp. 239–248, 2020. View at: Publisher Site | Google Scholar
  67. L. Peska, K. Buza, and J. Koller, “Drug-target interaction prediction: a Bayesian ranking approach,” Computer Methods and Programs in Biomedicine, vol. 152, pp. 15–21, 2017. View at: Publisher Site | Google Scholar
  68. S. Razzaki, A. Baker, Y. Perov et al., “A comparative study of artificial intelligence and human doctors for the purpose of triage and diagnosis,” Tech. Rep., CoRR, 2018, http://arxiv.org/abs/1806.10698. View at: Google Scholar
  69. L. Li, P. Wang, J. Yan et al., “Real-world data medical knowledge graph: construction and applications,” Artificial Intelligence in Medicine, vol. 103, p. 101817, 2020. View at: Publisher Site | Google Scholar
  70. M. Rotmensch, Y. Halpern, A. Tlimat, S. Horng, and D. Sontag, “Learning a health knowledge graph from electronic medical records,” Scientific Reports, vol. 7, no. 1, p. 5994, 2017. View at: Publisher Site | Google Scholar
  71. I. Y. Chen, M. Agrawal, S. Horng, and D. Sontag, “Robustly extracting medical knowledge from EHRs: a case study of learning a health knowledge graph,” in Biocomputing 2020, pp. 19–30, Fairmont Orchid, Hawaii, USA, December 2019. View at: Publisher Site | Google Scholar
  72. R. Stevens, C. A. Goble, and S. Bechhofer, “Ontology-based knowledge representation for bioinformatics,” Briefings in Bioinformatics, vol. 1, no. 4, pp. 398–414, 2000. View at: Publisher Site | Google Scholar
  73. G. Wang, W. Zhang, R. Wang et al., “Label-free distant supervision for relation extraction via knowledge graph embedding,” in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2246–2255, Brussels, Belgium, 2018. View at: Publisher Site | Google Scholar
  74. H. Alani, S. Kim, D. E. Millard et al., “Automatic ontology-based knowledge extraction from web documents,” IEEE Intelligent Systems, vol. 18, no. 1, pp. 14–21, 2003. View at: Publisher Site | Google Scholar
  75. X. Li, Y. Wang, D. Wang, W. Yuan, D. Peng, and Q. Mei, “Improving rare disease classification using imperfect knowledge graph,” BMC Medical Informatics and Decision Making, vol. 19, no. S5, p. 238, 2019. View at: Publisher Site | Google Scholar
  76. S. M. S. Hasan, D. Rivera, X.-C. Wu, E. B. Durbin, J. B. Christian, and G. Tourassi, “Knowledge graph-enabled cancer data analytics,” IEEE Journal of Biomedical and Health Informatics, vol. 24, no. 7, pp. 1952–1967, 2020. View at: Publisher Site | Google Scholar
  77. T. Pham, X. Tao, J. Zhang, and J. Yong, “Constructing a knowledge-based heterogeneous information graph for medical health status classification,” Health Information Science and Systems, vol. 8, no. 1, p. 10, 2020. View at: Publisher Site | Google Scholar
  78. W. Zhu, X. Jin, J. Ni, B. Wei, and Z. Lu, “Improve word embedding using both writing and pronunciation,” PLoS One, vol. 13, no. 12, article e0208785, 2018. View at: Publisher Site | Google Scholar
  79. K. Blagec, H. Xu, A. Agibetov, and M. Samwald, “Neural sentence embedding models for semantic similarity estimation in the biomedical domain,” BMC Bioinformatics, vol. 20, no. 1, p. 178, 2019. View at: Publisher Site | Google Scholar
  80. J. Wang, M. Li, Q. Diao, H. Lin, Z. Yang, and Y. J. Zhang, “Biomedical document triage using a hierarchical attention-based capsule network,” BMC Bioinformatics, vol. 21, no. S13, p. 380, 2020. View at: Publisher Site | Google Scholar
  81. D. T. Toledano, M. P. Fernández-Gallego, and A. Lozano-Diez, “Multi-resolution speech analysis for automatic speech recognition using deep neural networks: experiments on TIMIT,” PLoS One, vol. 13, no. 10, article e0205355, 2018.
  82. A. Rajkomar, E. Oren, K. Chen et al., “Scalable and accurate deep learning with electronic health records,” npj Digital Medicine, vol. 1, no. 1, 2018.
  83. Y. Si, J. Du, Z. Li et al., “Deep representation learning of patient data from electronic health records (EHR): a systematic review,” Journal of Biomedical Informatics, vol. 115, article 103671, 2020.
  84. F. Li and H. Yu, “ICD coding from clinical text using multi-filter residual convolutional neural network,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 5, pp. 8180–8187, 2020.
  85. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, June 2016.
  86. C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, June 2016.
  87. T. C. Hollon, B. Pandian, A. R. Adapa et al., “Near real-time intraoperative brain tumor diagnosis using stimulated Raman histology and deep neural networks,” Nature Medicine, vol. 26, no. 1, pp. 52–58, 2020.
  88. Y. Pu, Z. Gan, R. Henao et al., “Variational autoencoder for deep learning of images, labels and captions,” Advances in Neural Information Processing Systems, vol. 29, pp. 2352–2360, 2016.
  89. Q. Zhao, E. Adeli, N. Honnorat, T. Leng, and K. M. Pohl, “Variational autoencoder for regression: application to brain aging analysis,” in Medical Image Computing and Computer Assisted Intervention (MICCAI 2019), pp. 823–831, 2019.
  90. A. Prakash, S. Zhao, S. A. Hasan et al., “Condensed memory networks for clinical diagnostic inferencing,” in Proceedings of the AAAI Conference on Artificial Intelligence, pp. 3274–3280, San Francisco, CA, USA, 2017.
  91. E. Choi, M. T. Bahadori, L. Song, W. F. Stewart, and J. Sun, “GRAM: graph-based attention model for healthcare representation learning,” in Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, Nova Scotia, Canada, August 2017.
  92. K. Wang, X. Chen, N. Chen, and T. Chen, “Automatic emergency diagnosis with knowledge-based tree decoding,” in Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, pp. 3407–3414, July 2020.
  93. A. Yala, C. Lehman, T. Schuster, T. Portnoi, and R. Barzilay, “A deep learning mammography-based model for improved breast cancer risk prediction,” Radiology, vol. 292, no. 1, pp. 60–66, 2019.
  94. H. Liang, B. Y. Tsui, H. Ni et al., “Evaluation and accurate diagnoses of pediatric diseases using artificial intelligence,” Nature Medicine, vol. 25, no. 3, pp. 433–438, 2019.
  95. J. Wu, X. Liu, X. Zhang, Z. He, and P. Lv, “Master clinical medical knowledge at certificated-doctor-level with deep learning model,” Nature Communications, vol. 9, no. 1, p. 4352, 2018.
  96. C. Hercus and A. R. Hudaib, “Delirium misdiagnosis risk in psychiatry: a machine learning-logistic regression predictive algorithm,” BMC Health Services Research, vol. 20, no. 1, p. 151, 2020.
  97. W. Ge, W. Guo, L. Cui, H. Li, and L. Liu, “Detection of wrong disease information using knowledge-based embedding and attention,” in Database Systems for Advanced Applications (DASFAA 2020), pp. 459–473, 2020.
  98. T. J. Bright, A. Wong, R. Dhurjati et al., “Effect of clinical decision-support systems: a systematic review,” Annals of Internal Medicine, vol. 157, no. 1, pp. 29–43, 2012.
  99. E. Gundogan and B. Kaya, “A recommendation method based on link prediction in drug-disease bipartite network,” in 2017 2nd International Conference on Advanced Information and Communication Technologies (AICT), Lviv, Ukraine, July 2017.
  100. C. Su, S. Gao, and S. Li, “GATE: graph-attention augmented temporal neural network for medication recommendation,” IEEE Access, vol. 8, pp. 125447–125458, 2020.
  101. J. Shang, T. Ma, C. Xiao, and J. Sun, “Pre-training of graph augmented transformers for medication recommendation,” in Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, pp. 5953–5959, Macao, China, August 2019.
  102. C. Z. Koyuncuoglu, M. Aydin, N. I. Kirmizi et al., “Rational use of medicine in dentistry: do dentists prescribe antibiotics in appropriate indications?” European Journal of Clinical Pharmacology, vol. 73, no. 8, pp. 1027–1032, 2017.
  103. W. J. Hou and B. Ceesay, “Extraction of drug-drug interaction using neural embedding,” Journal of Bioinformatics and Computational Biology, vol. 16, no. 6, article 1840027, 2018.
  104. T. Roblek, T. Vaupotic, A. Mrhar, and M. Lainscak, “Drug-drug interaction software in clinical practice: a systematic review,” European Journal of Clinical Pharmacology, vol. 71, no. 2, pp. 131–142, 2015.
  105. D. Dai, X. Xiao, Y. Lyu, S. Dou, Q. She, and H. Wang, “Joint extraction of entities and overlapping relations using position-attentive sequence labeling,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 6300–6308, 2019.

Copyright © 2021 Jun Chen et al. Exclusive Licensee Peking University Health Science Center. Distributed under a Creative Commons Attribution License (CC BY 4.0).
