
Review Article | Open Access

Volume 2022 |Article ID 9869518 | https://doi.org/10.34133/2022/9869518

Jintao Li, Jie Chen, Hua Bai, Haiwei Wang, Shiping Hao, Yang Ding, Bo Peng, Jing Zhang, Lin Li, Wei Huang, "An Overview of Organs-on-Chips Based on Deep Learning", Research, vol. 2022, Article ID 9869518, 20 pages, 2022. https://doi.org/10.34133/2022/9869518

An Overview of Organs-on-Chips Based on Deep Learning

Received: 24 Sep 2021
Accepted: 08 Dec 2021
Published: 19 Jan 2022

Abstract

Microfluidic-based organs-on-chips (OoCs) are a rapidly developing technology in biomedical and chemical research and have emerged as one of the most advanced and promising in vitro models. The miniaturization, simulated tissue mechanical forces, and controlled microenvironment of OoCs offer unique properties for biomedical applications. However, the large amount of data generated by the high parallelization of OoC systems has grown far beyond the scope of manual analysis by researchers with biomedical backgrounds. Deep learning, an emerging area of research in the field of machine learning, can automatically mine the inherent characteristics and laws of “big data” and has achieved remarkable results in computer vision, speech recognition, and natural language processing. The integration of deep learning in OoCs is an emerging field that holds enormous potential for drug development, disease modeling, and personalized medicine. This review briefly describes the basic concepts and mechanisms of microfluidics and deep learning and summarizes their successful integration. We then analyze the combination of OoCs and deep learning for image digitization, data analysis, and automation. Finally, the problems faced in current applications are discussed, and future perspectives and suggestions are provided to further strengthen this integration.

1. Introduction

The most widely used experimental models in biological research are cell-based and animal models; however, both have significant limitations. Traditional cell-based models lack essential features, such as multicellular co-culture, physiological microenvironments, and tissue mechanical forces [1]. Animal models, although regarded as the current gold standard in many biological studies, suffer from high cost, ethical issues, low throughput, and interspecific differences, which significantly limit the progress of drug development and other biological research [2, 3].

Microfluidic-based OoC technology was proposed to fill the gap between traditional two-dimensional (2D) cell culture and animal models and to gradually replace animal studies [4]. As a product of the progressive development of microfluidic technology, OoCs combine microfluidics with cell biology to faithfully mimic the physiological microenvironment of the target organs in vivo. These novel in vitro biological models can replicate the local characteristics of a disease and control the environmental parameters of cell survival, making them a cost-efficient and high-throughput platform for biological research. Polydimethylsiloxane (PDMS) and poly(methyl methacrylate) (PMMA) are the most commonly used materials for the fabrication of OoC devices. Owing to the transparency of these materials and their high compatibility with fluorescence microscopy, OoC applications usually generate large numbers of images, leading to a large amount of image-based data. These data have traditionally been accumulated and processed by manual methods, which are typically inefficient [5].

The utilization of automatic and intelligent data analysis systems will further enhance the development of OoCs in various biomedical applications. Deep learning [6] is the most representative research field in artificial intelligence (AI) [7]. The application of deep learning to OoCs offers a powerful tool for exploring and analyzing the massive image-based data generated by OoC approaches, which in turn raises the level of automation of OoCs. Riordon et al. reviewed the integration of deep learning with microfluidics [8]. However, in the field of OoCs, no review has focused on this novel concept to date. Therefore, a timely and comprehensive summary of the applications of deep learning in OoC studies will promote the development of this technology and facilitate research in both fields.

This review provides an in-depth discussion of the integration of deep learning and OoCs (Figure 1). Following the introduction of the basic concepts of OoCs and deep learning, we review deep learning as a multifunctional data analysis tool for biomedical applications, including cell identification, localization, tracking, and image segmentation. Finally, we discuss future directions for the application and integration of deep learning in the field of OoCs.

2. Emergence of OoC Technology

Replicating the human physiological system is extremely important for the pharmaceutical industry to predict drug efficacy, pharmacokinetics, and toxicity [9]. Animal models are currently the gold standard for many biological studies and can provide the most accurate predictions. However, the associated high costs, low throughput, and ethical issues limit the application of animal models to the early stages of drug discovery [10]. In addition, interspecies differences are an insurmountable gap between animal models and humans; thus, the experimental results for some disease models [11] and drug efficacy studies [12] have deviated from those observed in humans. For in vitro models, most biological studies have relied on two-dimensional (2D) cell cultures [13]. Despite the value of this model, it does not adequately reconstruct the in vivo cell microenvironment or simulate the complex physiological functions of human organs. To solve this problem, three-dimensional (3D) cell culture models have emerged and provide several enhancements over traditional 2D cell culture, such as improved expression of differentiation functions, tissue structure, signal capture, and drug response sensitivity [4]. However, even the most effective 3D models are still unable to perfectly reproduce the complex cell–cell interactions, spatial configuration of different cell types, and tissue mechanical forces of human organs.

Recent research in microfluidic systems and cell biology has created novel engineered microphysiological systems, OoCs [1]. These in vitro models provide tissue mechanical force and a controllable microenvironment, allowing the reconstruction of fundamental features of the target organ/tissue. Therefore, the introduction of OoCs has bridged the gap between oversimplified 2D cell culture and expensive animal models, providing an efficient and energy-saving biological research platform (Figure 2).

2.1. Microfluidic Technology

Microfluidic systems are miniaturized systems with precisely defined channel morphologies and layouts, with channel widths and heights ranging from roughly 100 nm to 100 μm [14]. Reaction times in microfluidic systems are much shorter than in conventional instruments because molecules diffuse across these small dimensions rapidly [15]. The implementation of microfluidics is inextricably linked to the rapid development of photolithography and inkjet printing technologies. At the same time, researchers have designed pumps and valves capable of controlling and manipulating fluid flow [14]. Thus, microfluidic systems offer fast operation and a small footprint and can manipulate fluids on the microscopic scale, enabling physiological fluid shear and pulsating flow patterns. Small microfluidic systems use less reagent than traditional flow control platforms and are thus ideal tools for high-throughput screening [15, 16]. In the past two decades, microfluidics has been successfully used in various biological applications such as fast cell sorting [17], cell biochemistry analysis [18], biomaterial screening [16], and OoCs [1, 19].
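To see why miniaturization shortens reaction times, recall that the characteristic diffusion time across a distance L is roughly L²/(2D). The back-of-the-envelope calculation below is our own illustration (not from the cited works), using a typical small-molecule diffusivity in water:

```python
# Rough estimate of diffusive mixing time: t ~ L^2 / (2D).
# D is a typical small-molecule diffusion coefficient in water (illustrative value).
D = 1e-9  # m^2/s

for label, L in [("100 um microchannel", 100e-6), ("1 cm benchtop cuvette", 1e-2)]:
    t = L**2 / (2 * D)  # seconds
    print(f"{label}: ~{t:.3g} s")
# 100 um microchannel: ~5 s
# 1 cm benchtop cuvette: ~5e+04 s (about 14 hours)
```

The four-orders-of-magnitude gap in mixing time is what makes microscale reactions so much faster than their benchtop counterparts.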

2.2. OoC Technology

Owing to the intrinsic characteristics of microfluidic technology, such as miniaturization, highly controlled flow systems, and flexible device designs, the integration of microfluidic technology, biomaterials, and cell biology has resulted in an advanced in vitro OoC system (Figure 3). Compared to conventional in vitro cell models, OoCs can accurately control parameters such as the chemical concentration gradient [20], tissue mechanical force [21], cell spatial configurational culture [22], multiple-cell coculture [23], and organ–organ interaction [24], in order to replicate the complex structures, microenvironments, and physiological functions of human organs. In addition, physiological barrier models based on OoCs accurately simulate the delivery and penetration of compounds in vivo [25]. In recent years, the precision of OoCs has increased dramatically, allowing assays to be performed on single cells and enabling high throughput, with thousands of simultaneous quantitative analyses at single-cell resolution [26].

After rapid developments in recent years, researchers have replicated several human organs in OoCs (Figure 4). Ho et al. mimicked the lobular structure of the liver by patterning liver cells and epithelial cells on a circular PDMS-based microfluidic chip [27]. Huh et al. designed a double-layer lung-on-a-chip, in which a vacuum pump deformed a PDMS membrane to simulate the expansion and contraction of the alveolar wall during respiration [28]; this work is regarded as a landmark study in OoC technology. Kim et al. utilized a similar design to simulate the peristaltic motion of the human intestine [29]. Jang et al. replicated the proximal tubular structure of the kidney by introducing fluid shear in a bilayer OoC [30]. Ren et al. constructed a capillary endothelial barrier using two parallel microcolumn arrays to mimic the structure and function of myocardial tissue [31]. By combining polymer chemistry and OoC technology, we recently reported a blood–brain barrier (BBB)-on-a-chip with simulated BBB function, with which we successfully evaluated the permeability of small-molecule drugs and monitored the endocytosis and transcytosis of nanomaterials in the endothelium [32, 33]. Furthermore, collaboration between research institutions and pharmaceutical companies has brought OoCs to a practical stage. A kidney-on-a-chip has been successfully applied to drug screening [34]. In addition, Johnson & Johnson plans to conduct drug trials using the human thrombus simulation OoCs developed by Emulate and to use a liver-on-a-chip to test the hepatotoxicity of drugs [35].

As can be seen from this discussion, OoCs have been developed for the major organs of the human body; an increasing number of organ models have also been developed for other less studied organs and tissues, such as muscle models [36], bone models [37], tissue models [38], mammary gland models [39], skin models, and others [40, 41].

3. Deep Learning

In recent years, with the growing computing power of graphics processing units (GPUs) [42] and improved big-data acquisition capabilities, deep learning has set state-of-the-art benchmarks in many industries and has become the preferred intelligent technology for engineering applications. It has been widely used in many fields, such as natural language processing [43], speech recognition [44], and computer vision [45]. Based on the relations among three current topics of interest in computer science, AI [7], machine learning [46], and deep learning [6], we provide an in-depth analysis of the development context of deep learning (Figure 5). Machine learning is a common technical means of realizing AI, and deep learning is a type of machine learning algorithm. AI is applied to mimic human thinking, perceive the environment, and take action to achieve goals. Machine learning refers to choosing the most appropriate algorithm based on a large amount of historical data so that machines can learn inherent regularities and effectively solve practical problems. Machine learning encompasses a vast array of algorithms, the most widely used of which is neural network-based deep learning. Neural network-based deep learning mimics many characteristics of the human brain, reproducing some of its basic functions by simulating the structure and behavior of the cerebrum.

In 1943, McCulloch and Pitts jointly proposed the McCulloch–Pitts (M-P) model (Figure 6(a)) and developed the theory of neural networks, which provided a foundation for the growth of machine learning [47]. In 1957, Rosenblatt established the concept of the single-layer perceptron (Figure 6(b)), which became the first neural network model [48]. It is a simple neural network that linearly divides data into two categories: the input is the feature vector of an instance, and the output is the category of the instance, encoded as the binary values +1 and -1. However, in 1969, Minsky and Papert demonstrated that the perceptron was incapable of solving linearly inseparable problems such as the XOR problem, a finding that stalled neural network research for roughly a decade [49].
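The perceptron's behavior can be captured in a few lines. The following minimal sketch (our own implementation with hypothetical toy data, not a historical artifact) learns a linearly separable ±1 labeling; under the XOR labeling discussed above, no separating line exists and the update loop never converges:

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=1.0):
    """Single-layer perceptron: output is sign(w.x + b), labels are +1/-1."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # misclassified: nudge the boundary
                w += lr * yi * xi
                b += lr * yi
    return w, b

# A linearly separable labeling (logical AND) is learned easily; replacing
# y with the XOR labeling [-1, 1, 1, -1] would never converge.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])  # AND
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))  # [-1. -1. -1.  1.]
```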

In 1986, in work regarded as the origin of deep learning, Rumelhart et al. proposed the famous back propagation (BP) algorithm (Figure 6(c)), which can solve linearly inseparable problems such as XOR and thus triggered the second wave of neural network research [50]. During training, an error arises between the network's actual output and the reference value, and gradient descent is used to reduce this error as much as possible. After forward propagation, the gradient of the error with respect to the model parameters is calculated. This gradient is then propagated backward to modify the weights of the synaptic connections among neurons, gradually finding the combination of weights and biases that minimizes the error and improves the performance of the model.
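To make this concrete, the following is a minimal sketch (our own, not the original 1986 formulation; hyperparameters are arbitrary) of a two-layer network trained with backpropagation and gradient descent on the XOR problem that defeats the single-layer perceptron:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)  # hidden layer, 4 units
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)  # output layer
lr = 1.0

for step in range(5000):
    # Forward propagation
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward propagation: the gradient of the squared error w.r.t. each
    # parameter is computed layer by layer, from output back toward input
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates of weights and biases
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```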

BP has become the most commonly used optimization algorithm for multilayer perceptron training. In addition, another deep learning pioneer, LeCun et al., proposed convolutional neural networks (CNNs) and successfully realized handwritten digit recognition [51]. This was the world's first CNN architecture, the famous LeNet network (Figure 7(a)). A CNN architecture typically consists of several convolution layers, each closely followed by a pooling layer, and a fully connected (FC) layer (Figure 6(d)). In the convolution layer, a filter slides over the feature map of the previous layer starting from the upper left corner; at each position, the corresponding entries are multiplied and summed. The filter slides until all positions have been visited, and the outputs form the feature map of this layer. The pooling layer, which aggregates features and reduces dimensionality, is positioned between two convolution layers. It divides the input data into different regions, and the resolution within each region is reduced through pooling operations. The main purpose of the FC layer is to map the extracted features to the label space: all its connections are tightly linked to those of the previous layer, transforming the multidimensional output into a one-dimensional vector and achieving classification. However, owing to the vanishing gradient problem, the limited number of training samples, the lack of computing power, and the introduction of shallow learning models such as support vector machines (SVMs) [52], logistic regression (LR) [53], decision trees [54], and the naive Bayesian model (NBM) [55], neural networks were not widely applied and promoted, and research in this area declined.
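The convolution–pooling–fully connected pipeline described above can be written compactly. The sketch below is our PyTorch rendering following the classic LeNet-5 layer sizes, not the original implementation:

```python
import torch
import torch.nn as nn

class LeNetLike(nn.Module):
    """Conv -> pool -> conv -> pool -> fully connected, as in LeNet-5."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 28x28 input -> 24x24 feature maps
            nn.ReLU(),
            nn.AvgPool2d(2),                  # pooling aggregates and downsamples
            nn.Conv2d(6, 16, kernel_size=5),  # 12x12 -> 8x8
            nn.ReLU(),
            nn.AvgPool2d(2),                  # -> 4x4
        )
        self.classifier = nn.Sequential(      # FC layers map features to classes
            nn.Flatten(),
            nn.Linear(16 * 4 * 4, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

logits = LeNetLike()(torch.randn(1, 1, 28, 28))  # e.g., a handwritten digit
print(logits.shape)  # torch.Size([1, 10]) -- one score per digit class
```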

In 2006, Hinton et al. proposed deep belief networks (DBNs) (Figure 7(b)), which effectively shortened the training time of deep neural networks and alleviated the gradient vanishing problem of the BP algorithm [56]. In addition, a new activation function, the rectified linear unit (ReLU(x) = max(0, x)), was introduced; experiments showed that using ReLU could suppress the vanishing gradient problem [57].

In 2012, deep learning became a popular research topic. In the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), Krizhevsky et al. built a multilayer convolutional neural network, AlexNet (Figure 7(c)), which reduced the image classification error rate significantly, from the previous lowest of 26% to 15% [58]. This record-setting performance surprised the entire industry and rekindled research interest in neural networks. Since then, numerous models based on deep CNN architectures have emerged and achieved many impressive results [59]. Representative CNN architectures include VGGNet [60], GoogLeNet [61], and ResNet [62].

In addition to CNNs, many other branches of research in the field of deep learning have developed recently, including sequence prediction represented by recurrent neural networks (RNNs) [63] and transformers [64], image generation represented by generative adversarial networks (GANs) [65], object detection represented by Faster R-CNN [66] and YOLO [67], and semantic segmentation represented by U-Net [68] and DeepLab [69].

In recent years, deep learning has been successfully applied to commercial applications by various manufacturers; the applications include Google Translate, Apple’s voice tool Siri, Microsoft’s Cortana personal voice assistant, and Ant Financial Smile to Pay [70]. Most importantly, deep learning can potentially help in the mitigation of diseases such as the coronavirus disease (COVID-19), which has resulted in a global pandemic over the past two years. Deep learning technology is likely to play a large role in the identification of epidemiological characteristics across many countries and to enable the exploration of the development trend of pandemics, thus providing a basis for creating control measures. In addition, research teams in an increasing number of industries are incorporating deep learning in the exploration of research and commercial applications, such as medical diagnosis, pandemic tracking and prediction, industrial intelligent manufacturing, autonomous driving, and virtual reality.

4. OoCs and Deep Learning Integration

OoCs and deep learning are frontier disciplines in biomedical engineering and AI, respectively. In this section, we first introduce the application of deep learning to microfluidics and then extend it to OoCs (Table 1). Because the combination of these two disciplines is not widely explored at present, we also provide some perspective on the application of deep learning to the following aspects of OoCs: prediction, target recognition, image segmentation, and tracking.


Table 1: Representative studies integrating deep learning with microfluidics and OoCs.

| Application | Device design | Subject | Input | Network architecture | Network output | Function | Refs |
|---|---|---|---|---|---|---|---|
| Deep learning in microfluidics | A microfluidic device with three capillary tubes | Microdroplets generated in a T-junction microfluidic system | Four dimensionless numbers that affect microdroplet size | An ANN architecture | Length of the droplet and diameter of the junction | Predict the size of the microdroplet at the exit of the T-junction for different parameters | [71] |
| Deep learning in microfluidics | Two pressure sensors and a single microchannel filled with a liquid metal | Microfluidic soft sensors | An analog voltage | An RNN with an attention module | Pressure estimation and localization | Estimate both pressure magnitude and location while accounting for hysteresis | [72] |
| Deep learning in microfluidics | A fluid flow shape model determined by micropillars | Flow sculpting | The top-half image of a microchannel shape | A CNN architecture | Corresponding pillar sequences | Make predictions and deliver comparable designs for flow sculpting | [73] |
| Deep learning in microfluidics | Two microfluidic devices with four culture channels | Pseudomonas aeruginosa bacteria | Spectrum images converted from the original images via FFT | An AlexNet architecture | Pixel count in the spectrum images | Recognize regional concentration changes of the cultured bacteria | [91] |
| Deep learning in microfluidics | A plastic slide with physical channels in medium | Bone marrow from mouse tibiae and ilia | Long-term, time-lapse microscopy cell patches | A CNN-RNN architecture | Cell lineage score | Predict the lineage choice of stem cell progeny | [92] |
| Deep learning in OoCs | A microfluidic device composed of a central immune chamber and two tumor chambers | Interferon-α-conditioned dendritic cells (IFN-DCs) | Time-lapse images recording cell trajectories in 3D tumor spaces | An unsupervised image analysis algorithm, CellHunter | Parameters characterizing IFN-DC behavior toward cancer cells | Track immune cell-tumor interactions in real time | [94] |
| Deep learning in OoCs | A microfluidic device composed of six reservoirs and four chambers | Three groups of human PBMCs | Data collected by a microfluidic platform and time-lapse video | CellHunter | Trajectories of specific cells | Track the migration and interactions of human PBMCs toward tumor cells | [96] |
| Deep learning in OoCs | A microfluidic device with 3D biomimetic hydrogels inside microchambers | HER2+ breast cancer BT474 cell line and PBMCs | An atlas of videos at varying spatiotemporal resolutions | CellHunter | A set of kinematic and interaction descriptors | Describe motility and interactions at varying spatiotemporal resolutions | [97] |
| Deep learning in OoCs | A 3D coculture microfluidic device with a central vascular compartment and two lateral chambers | HER2+ breast cancer BT474 cells, the breast CAF cell line Hs578T, and PBMCs | Time-lapse videos and images reconstructed in 3D | CellHunter | Parameters recording the interaction of a single cancer cell with all the PBMCs | Characterize drug responses and dissect the roles of immune cells and fibroblasts | [98] |
| Deep learning in OoCs | A microfluidic device with 3D biomimetic gels | HER2+ breast cancer BT474 cells and PBMCs | Video sequences of cells | CellHunter and a CNN architecture | Atlas of experimental cell tracks and types | Discover hidden messages within cell trajectories for cancer drug treatments | [99] |
| Deep learning in OoCs | A stretchable micropatterned 3D human skeletal muscle platform | Human skeletal muscle cells and myogenic stem cells | Morphological images of skeletal muscle cells | A CNN-RNN architecture | Temporal prediction and cell function of muscle cells | Judge the physiological status, contractile type, and performance of muscle cells more easily | [100] |

4.1. Deep Learning in Microfluidics

The development of deep learning has resulted in great progress in microfluidics research and has led to a new generation of microfluidic platforms with extensive functions. Furthermore, the applications of deep learning in microfluidics have allowed researchers to observe phenomena that were difficult to capture in the past. We have divided these applications into two categories: device parameters and images.

4.1.1. Deep Learning in Device Parameters

The application of microfluidic devices to emulsion production offers advantages such as reduced reagent consumption, product emulsions with a narrow molecular weight distribution [74, 75], and high-value-added products [76]. Varying and combining parameters such as flow rate, viscosity, two-phase surface tension, and microchannel diameter affects the microdroplets formed at a T-junction. Mahdi et al. grouped these quantities into dimensionless parameters and used them as inputs to a neural network [71]. After cyclic training, the number and interconnectivity of the hidden-layer neurons were determined. Finally, the network output predictions of the dimensionless droplet length and junction width (Figure 8(a)).
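A regression network of this kind is straightforward to sketch. The PyTorch example below is our illustration, not the published model: the feature names in the comments and the layer sizes are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical inputs: four dimensionless flow parameters (e.g., capillary
# number, flow-rate ratio, viscosity ratio, normalized channel width)
# mapped to a single dimensionless droplet length.
model = nn.Sequential(
    nn.Linear(4, 16), nn.Tanh(),
    nn.Linear(16, 16), nn.Tanh(),
    nn.Linear(16, 1),  # predicted dimensionless droplet length
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(x, y):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()          # backpropagation through the hidden layers
    opt.step()
    return loss.item()

# x: (batch, 4) measured parameters; y: (batch, 1) observed droplet lengths.
x, y = torch.rand(32, 4), torch.rand(32, 1)   # placeholder data
print(train_step(x, y))
```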

Soft sensors [77] used in microfluidics are composed of highly deformable polymers. They are used in elastomeric actuators [78], soft wearable robotic devices [79], and soft robotic grippers [80]. However, compared with traditional sensors, the main disadvantages of microfluidic soft sensors are the nonlinearity and hysteresis of their response. Han et al. used a hierarchical signal-level recurrent network to characterize a microfluidic soft sensor that could identify the pressure and location of a stimulus (Figure 8(b)) [72]. In a microfluidic channel filled with liquid metal, the analog voltage varied with the pressure applied and its location along the channel. The network first aggregated time-series information with three hidden layers, transforming the sequential input data into a representation. It then identified the locations at which the sensor was being pressed. Finally, the pressure estimation network predicted the magnitude of the pressure corresponding to the sensor outputs. The time-series data were fed into the RNN for model training, and the resulting model could identify both the pressure and the location of the stimulus along the channel.
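A simplified version of such a two-headed recurrent model can be sketched as follows. This is our illustration only: the published network is hierarchical and uses an attention module, both of which are omitted here, and all dimensions are arbitrary.

```python
import torch
import torch.nn as nn

class SoftSensorRNN(nn.Module):
    """Toy two-headed model: voltage sequence -> (stimulus location, pressure)."""
    def __init__(self, hidden=32, num_locations=8):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.loc_head = nn.Linear(hidden, num_locations)  # where is it pressed?
        self.pressure_head = nn.Linear(hidden, 1)         # how hard?

    def forward(self, v):                 # v: (batch, time, 1) analog voltage
        _, (h, _) = self.rnn(v)           # final hidden state summarizes the series
        h = h[-1]
        return self.loc_head(h), self.pressure_head(h)

model = SoftSensorRNN()
loc_logits, pressure = model(torch.randn(4, 100, 1))  # 4 sequences, 100 samples
print(loc_logits.shape, pressure.shape)  # (4, 8) and (4, 1)
```

Because the LSTM state depends on the whole input history, a model of this shape can in principle capture the hysteresis that defeats a memoryless calibration curve.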

Microfluidic chip design and fluidic modeling require numerous calculations and a knowledge of hydromechanics, which may be an obstacle for researchers with a biomedical background. The most common approach is to compare a large number of interactive and intuitive choices among a wide range of design options, which is a time-consuming and labor-intensive task. Stoecklein et al. answered the question “what kind of geometry produces an ideal microfluidic flow shape?” using deep learning (Figure 8(c)) [73]. A CNN architecture took target flow shapes from a test dataset and predicted the corresponding pillar sequences. In contrast to exhaustive manual exploration, the trained network could independently select the best design option. Intelligent sampling of this type of data can greatly improve performance and enable effective prediction outside the training range.

4.1.2. Deep Learning in Images

Algorithmic rules for the recognition of cell images are mostly based on mathematical principles and statistical theorems, which belong to traditional machine learning. For example, a peripheral blood smear image contains three types of cells: red blood cells, white blood cells, and platelets, the latter two being morphologically distinct from red blood cells. Marker-controlled algorithms have been used to separate white blood cell nuclei in microscopic images [81]. Morphological and threshold selection techniques [82], cluster segmentation algorithms and rule-based methods [83], mathematical morphology and particle-size measurement methods [84], and grayscale thresholding methods [85] can also achieve cell recognition. The combination of traditional machine learning with microfluidics has enabled single-cell lipid screening [86] and cell counting [87].

However, owing to the complexity of the multiple types of data generated by the highly parallel operation of microfluidics, traditional machine learning is no longer sufficient to satisfy researchers' requirements. Deep learning, a popular method for processing large amounts of data with high efficiency, is well suited to this task. Compared with traditional machine learning, the advantage of integrating deep learning into microfluidics is clear: complex neural networks can be trained to capture the internal features of the data and improve experimental efficiency on large, high-dimensional datasets.

Deep learning has been used to identify and classify moving cells in microfluidic channels using an RNN architecture. Cell feature vectors (e.g., roundness, circumference, and major axis length) obtained by various imaging modalities are fed into the networks, and the class of the diagnosed cells (e.g., leukocytes and colon cancer cells) is identified. Label-free cell classification was achieved by Singh et al. using this approach [88]. Chen et al. used time-stretch quantitative phase imaging to obtain rich cell characteristic data and applied deep learning to achieve cell classification [89]; the accuracy of this method exceeded that of traditional machine learning. San-Miguel et al. used microfluidic techniques to immobilize arrays of Caenorhabditis elegans and image their synaptic puncta patterns [90]. By feeding the measured data into an RNN architecture, this work identified subtle differences between mutants, revealing hidden genetic differences.

The level of bacteria in a channel needs to be measured during culture, but most existing measurement methods are not suitable for microfluidic devices with small sample volumes. Hence, Kim et al. developed an image-based method to assess the growth status of bacteria in microfluidic channels [91]. In this study, bacteria were cultured in a microfluidic device with liquid and agar gel media in two separate channels. Time-lapse images were captured, and a fast Fourier transform (FFT) was used to detect the variable frequencies of the images. The experimentally obtained spectrum images were used as input to a CNN architecture (Figure 9(a)). Using this model, the level of Pseudomonas aeruginosa was successfully obtained, and bacterial growth in the microfluidic channels was quantified.
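The FFT preprocessing step can be reproduced in a few lines of NumPy. This is our reconstruction of the idea, not the authors' code; the resulting spectrum image is the kind of input the CNN receives.

```python
import numpy as np

def to_spectrum_image(frame: np.ndarray) -> np.ndarray:
    """Convert a grayscale micrograph into a log-magnitude FFT spectrum image."""
    f = np.fft.fftshift(np.fft.fft2(frame))   # center the zero-frequency bin
    spectrum = np.log1p(np.abs(f))            # compress the dynamic range
    return spectrum / spectrum.max()          # normalize for the CNN input

frame = np.random.rand(256, 256)              # placeholder time-lapse frame
spec = to_spectrum_image(frame)
print(spec.shape)  # (256, 256), values in [0, 1]
```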

In addition, a combination of CNN and RNN architectures can be applied when complex image inputs and temporal information must be processed together. For example, cell differentiation changes the intracellular molecular properties of the original stem cells, resulting in changes in their morphology and motility. Buggenthin et al. combined a CNN with an RNN architecture to predict single-cell lineage choice when identifying hematopoietic lineages (Figure 9(b)) [92]. The model first used a CNN architecture to extract local abstract features of stem cells from bright-field images, outputting a vector indicating similarity to certain cell patches. This vector was then fed into an RNN with a bidirectional long short-term memory (LSTM) architecture to model cell dynamics. The temporal information of the cells in the video was analyzed, and individual cells were classified as belonging to a certain lineage. This hybrid method improved the predictive ability of the model compared with a CNN alone; moreover, it predicted the lineage choice of primary hematopoietic cells. A similar approach could also be used to identify cell shape and analyze movement morphology.
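The CNN-then-RNN pattern can be sketched as a per-frame convolutional encoder feeding a bidirectional LSTM. The sketch below is our illustration with arbitrary dimensions, not the published architecture:

```python
import torch
import torch.nn as nn

class CNNRNNLineage(nn.Module):
    """Sketch: per-frame CNN features -> bidirectional LSTM -> lineage score."""
    def __init__(self, feat_dim=64, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(            # per-frame feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.LazyLinear(feat_dim),
        )
        self.rnn = nn.LSTM(feat_dim, 32, batch_first=True, bidirectional=True)
        self.head = nn.Linear(64, num_classes)

    def forward(self, video):                    # video: (batch, time, 1, H, W)
        b, t = video.shape[:2]
        feats = self.encoder(video.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)                 # temporal modeling of dynamics
        return self.head(out[:, -1])             # score from the last time step

scores = CNNRNNLineage()(torch.randn(2, 10, 1, 32, 32))  # 2 clips, 10 frames
print(scores.shape)  # torch.Size([2, 2])
```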

4.2. Deep Learning in OoCs

In this section, we discuss various applications of deep learning in OoCs. By discussing examples of each type of application in detail, we illustrate the power and versatility of integrating deep learning with OoCs.

Dendritic cells play a critical role in the recognition of tumor cells by absorbing tumor antigens and presenting them to T cells. The effectiveness of immune therapy therefore relies heavily on the interaction between the tumor and dendritic cells in the tumor microenvironment to induce an effective antitumor response [93]. Parlato et al. reconstructed an interconnected 3D immune cell–tumor ecosystem by combining OoCs with advanced microscopy techniques (Figure 10(a)) [94]. The device was composed of a central immune chamber, which was interconnected with the tumor chamber via an array of microchannels. On this tumor-on-a-chip device, CellHunter [95], an unsupervised cell tracking analysis algorithm based on deep learning, was used to quantify the number, velocity, displacement, and other parameters representing the migration capacity of dendritic cells. With the support of this system, the effective movement of dendritic cells toward tumor cells was assessed.

Because immune cells explore their environment like probes, analyzing their motion can yield information about how human peripheral blood mononuclear cells (PBMCs) approach tumor cells. To monitor the interaction between PBMCs and tumor cells, Biselli et al. utilized OoC technology and constructed a tumor-on-a-chip that cocultured PBMCs with HER2+ tumor cells (Figure 10(b)) [96]. Through a customized algorithm, the authors showed that the experimental conditions of time-lapse microscopy directly influenced the accuracy of the cell tracking algorithm. Based on the same chip, Comes et al. further investigated the impact of temporal and spatial resolution on the reliability of OoC experimental results (Figure 10(c)) [97]. Using this method, the authors successfully obtained the kinematic characteristics of cells under different therapeutic conditions and revealed the efficacy of targeted therapy. This work also demonstrated the important role of OoCs as a bridge between biology and computer science in high-content image-based data extraction.

A major challenge in cancer research is the complexity of the tumor microenvironment. Building on the aforementioned examples, Nguyen et al., from the same group, built a more sophisticated HER2+ breast tumor microenvironment in a tumor-on-a-chip (Figure 11(a)) [98]. In addition to HER2+ breast cancer cells, endothelial cells and fibroblasts were cocultured to better replicate the microenvironment of the tumor tissue. As in previous studies, immune cells were introduced, and CellHunter [95] was used to track the interactions between cells in the OoCs. This integration of deep learning and OoCs enabled visualization and quantification of the complex dynamics of the tumor cells in the model. The results demonstrated that the tumor-on-a-chip is a powerful platform for studying the interaction between immune cells and the tumor microenvironment, as well as the responses of the immune cell–tumor ecosystem to drug treatment.

The same group further optimized the algorithm and developed a novel deep learning tool called deep tracking [99]. The first step was to acquire a video of cells moving on an OoC platform; trajectory images were collected under a range of conditions. For a human expert, this step made the cell trajectories more conspicuous than the multichannel time-series approach. The second step utilized a pretrained CNN architecture, AlexNet, to classify the cell trajectories from the visual atlas of each experiment. The authors tested deep tracking on two types of tumor-on-a-chip (Figure 11(b)) and found that adding the immunotherapy drug trastuzumab increased cancer–immune cell interactions. The high accuracy of the model illustrates the versatility of deep tracking. Normally, manually curated information is necessary in deep learning; however, the parameters of the deep tracking algorithm are not tuned through cyclic experiments. We therefore believe that adding key parameters manually tuned by experts to the system would yield an even higher accuracy for deep tracking.
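The trajectory-classification step resembles standard transfer learning with a pretrained AlexNet. The sketch below is our illustration only; the class names, preprocessing, and training details of the published pipeline differ. It freezes the pretrained convolutional features and retrains a new output layer:

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained AlexNet, retargeted to two hypothetical trajectory
# classes (e.g., "drug-treated" vs. "control").
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                 # keep the pretrained conv features
model.classifier[6] = nn.Linear(4096, 2)    # new output layer for 2 classes

opt = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
batch = torch.randn(8, 3, 224, 224)         # placeholder trajectory images
labels = torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(batch), labels)
loss.backward()
opt.step()
print(float(loss))
```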

Skeletal muscle supports the body's substance and energy metabolism. Because of the dynamic nature of muscle, it is important to simulate its physiological state in vitro. Jena et al. cultured primary human skeletal muscle cells in a 3D stretchable platform to generate a human muscle-on-a-chip [100]. In this work, the authors used an RNN architecture with LSTM memory blocks together with the CNN architecture shown in Figure 11(c), an algorithm more complicated than deep tracking [99]. They inferred differences in position and morphology from the time sequences of images predicted by the RNN architecture and then fed these images into a second CNN architecture. Using this deep learning framework, biochemical markers were successfully determined from static and dynamic imaging of cells over time. Such a trained CNN architecture can also be used to judge the physiological status, contractile type, and performance of muscle cells without expensive and complicated biochemical assays. Furthermore, this muscle-on-a-chip can provide a reference for early diagnosis and for the development of personalized medicine.

4.3. Key Applications of Deep Learning in OoCs

Through examples of the successful integration of deep learning and microfluidics, several key perspectives can be identified (Figure 12). In the project preparation stage, deep learning can be applied to the device design and material selection of OoCs, resulting in OoCs more suitable for the particular application. In addition, owing to the increasing popularity of multiple-cell cultures in OoCs, deep learning can also be used for robust discrimination of cell populations.

In some OoCs, it is important to segment the parts of the images with special significance and extract the relevant features to provide reliable results for the analysis of experimental data. For instance, in the brain tumor-on-a-chip reported by Yi et al., brain tumor cells were 3D-printed and endothelial cells were cultured around them [101]. Several key steps in this study involved separating endothelial cells, neoplastic cells, and tumor stem cells carrying different fluorescent labels. An automatic image-based system that can extract the different segments of the tumor-on-a-chip would facilitate the analysis of anticancer drug therapy. However, the complexity of the images themselves and issues such as inhomogeneity and individual differences make it difficult for traditional machine learning methods and typical neural networks to achieve pixel-level segmentation.

In 2015, Long et al. first applied fully convolutional networks (FCNs), which accept input images of arbitrary size and perform pixel-by-pixel classification [102]. The U-Net model is a modified FCN structure, named for its U-shaped architecture, and is widely used in semantic image segmentation [68]. Zaimi et al. used a U-Net architecture to achieve pixel-level segmentation of axons, myelin sheaths, and background in images [103]; the same system could be applied to pixel-level segmentation of images obtained from OoCs. The type of CNN architecture used in the downsampling path can also be changed in semantic image segmentation. Lim et al. reconstructed all pixels of red blood cells and greatly improved the image quality and superresolution capability of their model [104]. This study contained two highlights: the first was the generation of digital phantoms as input, which overcame the lack of ground truth and eliminated introduced distortions; the second was a U-Net architecture with a skip connection between input and output, which mitigated the vanishing gradient problem during training and increased performance. Developers have now released a plug-in for single-cell segmentation in the ImageJ software that allows people unfamiliar with deep learning to use U-Net to analyze data on a personal computer [105]. In addition to U-Net, models such as DeepCell [106], CDeep3M [107], and CellProfiler [108] have been used to achieve pixel-level segmentation of cellular images.
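To make the encoder-decoder-with-skip-connection structure concrete, the following is a deliberately tiny single-level U-Net-style model (our sketch; real segmentation networks are far deeper):

```python
import torch
import torch.nn as nn

def double_conv(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    """One-level U-Net: downsample, upsample, concatenate the skip path."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.enc = double_conv(1, 16)
        self.down = nn.MaxPool2d(2)
        self.mid = double_conv(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = double_conv(32, 16)            # 16 skip + 16 upsampled
        self.out = nn.Conv2d(16, num_classes, 1)  # per-pixel class scores

    def forward(self, x):
        e = self.enc(x)                   # skip-path features, full resolution
        m = self.mid(self.down(e))        # deeper features, half resolution
        u = self.up(m)
        d = self.dec(torch.cat([u, e], dim=1))  # the U-Net skip connection
        return self.out(d)

masks = TinyUNet()(torch.randn(1, 1, 64, 64))
print(masks.shape)  # torch.Size([1, 2, 64, 64]) -- a score map per class
```

The skip connection is the key design choice: it reinjects full-resolution features into the decoder, which is what allows pixel-accurate boundaries that a plain encoder-decoder loses.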

Finally, the real-time visualization of cell morphology and trajectory is crucial for medical research. Zhao et al. combined a classic U-Net architecture with glass–air Anderson localizing optical fibers (GALOFs) [109]. This imaging system, named Cell-DCNN-GALOF, could transfer high-quality images in real time. Furthermore, the image reconstruction process was remarkably robust with respect to various depths, temperature variations, and fiber bending. Most importantly, the system showed a unique transfer-learning capability when cells with different morphologies and classes were examined. This means that the system could be trained with cell images from OoCs. Cell trajectory monitoring could also be realized through Cell-DCNN-GALOF in real time. By automatically detecting their trajectory and quantifying relevant movement data, deep learning can also provide auxiliary references to quantify experimental results and develop new experimental models for OoCs. Therefore, we believe that OoCs will transform traditional approaches, which rely heavily on manual data processing and operation, into a highly automatic system by means of deep learning.

5. Summary and Prospects

5.1. Future Applications of OoCs

In recent years, deep learning has completely changed many traditional industrial systems. Moreover, researchers in different fields are trying to integrate deep learning technology into their respective fields. In particular, many research advances have shown the promise of the integration of OoC technology and deep learning. The main reasons for this are as follows.

First, open-source deep learning projects have steadily matured, including open-source code based on the TensorFlow, PyTorch, and Keras frameworks. These allow researchers to quickly reproduce open-source engineering code from different fields and apply it to their own specific tasks. Second, novel deep learning models with excellent performance are constantly emerging and will ultimately drive the rapid development of industries that are extensively integrated with deep learning. Lastly, the multitype data generated by OoC experiments are not only complex but also large in quantity. Deep learning technology can be introduced to simplify labor-intensive data analysis and feature extraction, alleviate the huge challenges brought by massive biomedical big data, and solve tasks that were previously considered infeasible.

5.1.1. Organelle Segmentation and Tracking in OoCs

Current applications of deep learning in cell biology mainly focus on the morphological changes of whole cells; however, the analysis of subcellular structures or organelles can provide additional and important information [110, 111]. Very recently, Lefebvre et al. developed a shallow machine learning algorithm-based software package named Mitometer that can rapidly segment and track cellular mitochondria from both images and videos [112]. This provides a new and exciting research direction for AI in cell biology. As mentioned previously, deep learning captures the inherent features of data in a more efficient and accurate way than traditional machine learning. We therefore believe that deep learning-based organelle segmentation and tracking can generate comparatively richer information on the cell/tissue behavior in the OoCs.

5.1.2. OoC Tissue Mechanical Force Control

Tissue mechanical force control is one of the most important features of OoCs. For example, in one lung-on-a-chip, human airway epithelial cells were cultured in a computer-controlled two-phase microfluidic system that can simulate the propagation and rupture of the fluid embolisms that occur during airway injury in obstructive lung disease [113]. As the timing of applying the tissue mechanical force is critical, it is necessary to develop automatic instrumentation based on cell morphology and the microenvironment. Deep learning can detect cell processes and biomarkers over time without affecting cell viability, allowing the performance of the entire system to be monitored in real time [114]. By integrating deep learning with OoCs, it becomes possible to automatically regulate and control the various functional parameters of an OoC system.

5.1.3. Drug Screening on OoCs

OoCs have been widely explored as in vitro human disease models and can be excellent platforms for the study of pharmacokinetics, drug toxicity, and pharmacology. For example, Boos et al. tested embryoid bodies influenced by human liver metabolites in an organoid system, showing that OoCs can be used to investigate embryotoxicity [115]. The platform provides a promising tool that more comprehensively reflects physiological processes in in vitro tests, thereby increasing the power to predict adverse drug effects. Several examples [116, 117] have also illustrated the potential of deep learning-based systems for the accurate prediction of the efficacy and toxicity of therapeutics. The predictive capability of combined deep learning and OoCs is therefore a promising and important tool for future drug discovery.

5.1.4. Rare Disease-on-Chips

OoC technology has been widely utilized to build various in vitro disease models [1]. However, in recent years, the development of new drugs for rare diseases has been greatly hampered by the scarcity of suitable preclinical models for clinical trials [118, 119]. OoC-based rare disease models generate important real-time data that cannot usually be observed in in vivo or clinical samples. These data can be further analyzed by deep learning in real time, thus enabling the analysis of changes in the development of such diseases at the molecular level and ultimately obtaining the specific mechanisms of disease occurrence.

5.1.5. Human-on-Chips

Human-on-chips, a further development of OoCs, consist of interconnected compartments. Each compartment (i.e., an OoC) contains specific cell types that represent different organs, and all compartments are connected by a microfluidic circulation system [120], which is highly modular in nature. The features of the different compartments can be extracted through deep learning, and the output of one compartment can be used as the input to the next. The linkage of multiorgan tissue system interactions is of great benefit to physiologically based pharmacokinetic models, quantitative systems pharmacology, and other models [1]. Recently, a robotic interrogator automatically cultured, perfused, and fluidically linked eight vascularized two-channel organ chips, maintaining their viability and organ-specific functions for three weeks [121]. In addition, a high-throughput human-on-a-chip system was built on a medium circulation platform, enabling parallelized multiorgan experiments [122]. Furthermore, human-on-chips have already been used to study intestinal absorption, hepatic metabolism, and the activity of breast cancer drugs [123]. In the future, through deep learning analysis of the multiple data streams from each OoC (such as cell growth, differentiation, and metabolism), OoCs could be combined into a highly integrated and controllable microfluidic regulatory system, achieving self-intelligent regulation of OoCs. Indeed, further work and collaboration are still required to push forward the integration of multiple OoCs with deep learning.

5.2. Future Challenges in Deep Learning

Although deep learning technology has excellent performance in feature representation and data mining, its internal mechanism and calculation strategy still need to be optimized for specific applications. Therefore, the development of a highly automated OoC system to provide a convenient, reliable, and integrated intelligent platform for researchers is the main development direction and challenge in the future.

5.2.1. Data Processing

At present, the ability to acquire experimental data from OoCs has greatly improved, and massive amounts of data have accumulated. However, reducing the huge cost of manual labeling and automatically mining and refining the inherent characteristics of massive data are key challenges that urgently need to be overcome. The following directions can be explored.

(1) Data Augmentation. State-of-the-art methods such as GANs and unsupervised data augmentation (a recent method proposed by Google that performs well on natural scene images) can be introduced to generate simulated data with the characteristics of real data, expanding the pool of high-value data samples and reducing the cost of data acquisition; a simple transform-based example is sketched after this list.

(2) Automatic Data Annotation. Developing automatic data annotation algorithms and tools enables the automatic labeling of massive unlabeled datasets, reducing the huge cost of manual labeling and improving the efficiency of labeling and development.

(3) Semisupervised Learning. Semisupervised learning [124] can be introduced to reduce the dependence on massive labeled datasets. Active learning [125, 126], a special method within semisupervised learning, achieves performance comparable to supervised learning while using only a small amount of labeled data. Furthermore, self-supervised learning, an unsupervised approach, uses only unlabeled data to learn a feature extractor whose performance is then optimized by pretraining and fine-tuning [127].

(4) Transfer Learning. Transfer learning is another excellent technique for reducing dependence on large volumes of labeled data. The network is pretrained on labeled data from other domains, and fine-tuning is then completed with a small amount of labeled data from the specific domain.
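As an illustration of point (1), the following sketch builds a label-preserving augmentation pipeline with torchvision (our example; GAN-based or unsupervised data augmentation would replace these hand-designed transforms with learned ones):

```python
from torchvision import transforms

# Label-preserving augmentations for microscopy images: each training epoch
# sees a slightly different version of every labeled frame, effectively
# enlarging the dataset without any new annotation effort.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),        # cell images have no canonical "up"
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),  # staining variation
    transforms.ToTensor(),
])

# Usage: dataset = ImageFolder("ooc_frames/", transform=augment)
# ("ooc_frames/" is a hypothetical directory of labeled OoC images.)
```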

5.2.2. Algorithm Upgrading

(1) Customized Design of Deep Learning Models. Given the diverse application requirements in the field of OoCs, deep learning models developed for natural optical images cannot be applied directly. Customized designs are therefore required, with improvements to the model architecture and deep network layout.

(2) Automatic Network Design. For specific OoC application scenarios, neural architecture search can automate the construction and optimal design of deep models, avoiding the complexity and limitations of traditional, expert-experience-based model design.

(3) Iterative Upgrading of Deep Models. Confronted with continually updated OoC data, deep learning models usually suffer from “catastrophic forgetting.” Enabling a deep learning model to handle new data while maintaining robust performance requires further exploration of iterative upgrade techniques.

(4) Interpretability. Deep learning models usually suffer from the “black box effect”; that is, their internal mechanisms are opaque. This limits understanding of the underlying mechanisms and interaction principles of a specific OoC, which in turn undermines interpretability and reliability. There is therefore an urgent need to develop interpretable deep learning technology that transforms the “black box” of deep learning into a “white box” and enables meaningful physical explanations from a biological point of view.

(5) Model Compression and Acceleration. For online applications, model accuracy and inference speed must be balanced. It is therefore necessary to thoroughly study the compression and acceleration of deep learning models so that, while ensuring accuracy, the model volume can be compressed as far as possible to improve inference speed.

5.2.3. Computing Capability

At present, deep learning models run mainly on computing hardware such as GPUs, CPUs, and FPGAs. GPUs and CPUs are used for offline training of deep learning models; in particular, owing to its efficient computing power, the GPU has become the main hardware for deep learning model training, including many products made by NVIDIA. FPGAs are mainly used in online applications and serve as edge devices for real-world deployments.

With the development of distributed technology, current devices such as GPUs and FPGAs can meet the computational demands of deep learning models. In the future, it will be necessary to combine hardware resources to improve the inference efficiency of deep learning models and to enhance the reliable transfer and rapid deployment of models for additional application scenarios.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this article.

Authors’ Contributions

J.L., J.C., and H.B. contributed equally to this work.

Acknowledgments

This work was financially supported by the National Key R&D Program of China (2020YFA0709900), the National Natural Science Foundation of China (22077101, 22004099, and 62001003), the Joint Research Funds of Department of Science & Technology of Shaanxi Province and Northwestern Polytechnical University (2020GXLH-Z-008, 2020GXLH-Z-021, and 2020GXLH-Z-023), the Natural Science Foundation of Ningbo (202003N4049 and 202003N4065), the Open Project Program of the Analytical & Testing Center of Northwestern Polytechnical University (2020T018), the Open Project Program of Wuhan National Laboratory for Optoelectronics (No. 2020WNLOKF023), the Key Research and Development Program of Shaanxi (2020ZDLGY13-04 and 2021KW-49), the Natural Science Foundation of Anhui Province (2008085QF284), the China Postdoctoral Science Foundation (2020M671851), and the Fundamental Research Funds for the Central Universities.

References

  1. L. A. Low, C. Mummery, B. R. Berridge, C. P. Austin, and D. A. Tagle, “Organs-on-chips: into the next decade,” Nature Reviews Drug Discovery, vol. 20, no. 5, pp. 345–361, 2021. View at: Publisher Site | Google Scholar
  2. J. Seok, H. S. Warren, A. G. Cuenca et al., “Genomic responses in mouse models poorly mimic human inflammatory diseases,” Proceedings of the National Academy of Sciences of the United States of America, vol. 110, no. 9, pp. 3507–3512, 2013. View at: Publisher Site | Google Scholar
  3. M. Hay, D. W. Thomas, J. L. Craighead, C. Economides, and J. Rosenthal, “Clinical development success rates for investigational drugs,” Nature Biotechnology, vol. 32, no. 1, pp. 40–51, 2014. View at: Google Scholar
  4. S. N. Bhatia and D. E. Ingber, “Microfluidic organs-on-chips,” Nature Biotechnology, vol. 32, no. 8, pp. 760–772, 2014. View at: Google Scholar
  5. M. Verhulsel, M. Vignes, S. Descroix, L. Malaquin, D. M. Vignjevic, and J. L. Viovy, “A review of microfabrication and hydrogel engineering for micro-organs on chips,” Biomaterials, vol. 35, no. 6, pp. 1816–1832, 2014. View at: Publisher Site | Google Scholar
  6. Y. Lecun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015. View at: Google Scholar
  7. Z. Ghahramani, “Probabilistic machine learning and artificial intelligence,” Nature, vol. 521, no. 7553, pp. 452–459, 2015. View at: Google Scholar
  8. J. Riordon, D. Sovilj, S. Sanner, D. Sinton, and E. W. K. Young, “Deep learning with microfluidics for biotechnology,” Trends in Biotechnology, vol. 37, no. 3, pp. 310–324, 2019. View at: Google Scholar
  9. J. D. Caplin, N. G. Granados, M. R. James, R. Montazami, and N. Hashemi, “Microfluidic organ-on-a-chip technology for advancement of drug development and toxicology,” Advanced Healthcare Materials, vol. 4, no. 10, pp. 1426–1450, 2015. View at: Google Scholar
  10. M. J. Waring, J. Arrowsmith, A. R. Leach et al., “An analysis of the attrition of drug candidates from four major pharmaceutical companies,” Nature Reviews Drug Discovery, vol. 14, no. 7, pp. 475–486, 2015. View at: Publisher Site | Google Scholar
  11. I. Wagner, E.-M. Materne, S. Brincker et al., “A dynamic multi-organ-chip for long-term cultivation and substance testing proven by 3D human liver and skin tissue co-culture,” Lab on a Chip, vol. 13, no. 18, pp. 3538–3547, 2013. View at: Publisher Site | Google Scholar
  12. C. Unger, N. Kramer, A. Walzl, M. Scherzer, M. Hengstschläger, and H. Dolznig, “Modeling human carcinomas: Physiologically relevant 3D models to improve anti- cancer drug development,” Advanced Drug Delivery Reviews, vol. 79-80, pp. 50–67, 2014. View at: Publisher Site | Google Scholar
  13. D. W. Hutmacher, “Biomaterials offer cancer research the third dimension,” Nature Materials, vol. 9, no. 2, pp. 90–93, 2010.
  14. N. Convery and N. Gadegaard, “30 years of microfluidics,” Micro and Nano Engineering, vol. 2, pp. 76–91, 2019.
  15. K. S. Elvira, X. C. i Solvas, R. C. R. Wootton, and A. J. deMello, “The past, present and potential for microfluidic reactor technology in chemical synthesis,” Nature Chemistry, vol. 5, no. 11, pp. 905–915, 2013.
  16. M. Sesen, T. Alan, and A. Neild, “Droplet control technologies for microfluidic high throughput screening (μHTS),” Lab on a Chip, vol. 17, no. 14, pp. 2372–2394, 2017.
  17. K. Chung, M. M. Crane, and H. Lu, “Automated on-chip rapid microscopy, phenotyping and sorting of C. elegans,” Nature Methods, vol. 5, no. 7, pp. 637–643, 2008.
  18. S. Pan, Y. Zhang, A. Natalia et al., “Extracellular vesicle drug occupancy enables real-time monitoring of targeted cancer therapy,” Nature Nanotechnology, vol. 16, no. 6, pp. 734–742, 2021.
  19. A. Sontheimer-Phelps, B. A. Hassell, and D. E. Ingber, “Modelling cancer in microfluidic human organs-on-chips,” Nature Reviews Cancer, vol. 19, no. 2, pp. 65–81, 2019.
  20. N. Ye, J. Qin, W. Shi, X. Liu, and B. Lin, “Cell-based high content screening using an integrated microfluidic device,” Lab on a Chip, vol. 7, no. 12, pp. 1696–1704, 2007.
  21. P. A. Galie, D. Nguyen, C. K. Choi, D. M. Cohen, P. A. Janmey, and C. S. Chen, “Fluid shear stress threshold regulates angiogenic sprouting,” Proceedings of the National Academy of Sciences of the United States of America, vol. 111, no. 22, pp. 7968–7973, 2014.
  22. C. T. Ho, R. Z. Lin, R. J. Chen et al., “Liver-cell patterning Lab Chip: mimicking the morphology of liver lobule tissue,” Lab on a Chip, vol. 13, no. 18, pp. 3578–3587, 2013.
  23. R. Booth and H. Kim, “Characterization of a microfluidic in vitro model of the blood-brain barrier (μBBB),” Lab on a Chip, vol. 12, no. 10, pp. 1784–1792, 2012.
  24. J. H. Sung and M. L. Shuler, “A micro cell culture analog (CCA) with 3-D hydrogel culture of multiple cell lines to assess metabolism-dependent cytotoxicity of anti-cancer drugs,” Lab on a Chip, vol. 9, no. 10, pp. 1385–1394, 2009.
  25. L. G. Griffith and M. A. Swartz, “Capturing complex 3D tissue physiology in vitro,” Nature Reviews Molecular Cell Biology, vol. 7, no. 3, pp. 211–224, 2006.
  26. V. C. Shukla, T. R. Kuang, A. Senthilvelan et al., “Lab-on-a-chip platforms for biophysical studies of cancer with single-cell resolution,” Trends in Biotechnology, vol. 36, no. 5, pp. 549–561, 2018.
  27. C. T. Ho, R. Z. Lin, W. Y. Chang, H. Y. Chang, and C. H. Liu, “Rapid heterogeneous liver-cell on-chip patterning via the enhanced field-induced dielectrophoresis trap,” Lab on a Chip, vol. 6, no. 6, pp. 724–734, 2006.
  28. D. Huh, B. D. Matthews, A. Mammoto, M. Montoya-Zavala, H. Y. Hsin, and D. E. Ingber, “Reconstituting organ-level lung functions on a chip,” Science, vol. 328, no. 5986, pp. 1662–1668, 2010.
  29. H. J. Kim, D. Huh, G. Hamilton, and D. E. Ingber, “Human gut-on-a-chip inhabited by microbial flora that experiences intestinal peristalsis-like motions and flow,” Lab on a Chip, vol. 12, no. 12, pp. 2165–2174, 2012.
  30. K. J. Jang and K. Y. Suh, “A multi-layer microfluidic device for efficient culture and analysis of renal tubular cells,” Lab on a Chip, vol. 10, no. 1, pp. 36–42, 2010.
  31. L. Ren, W. Liu, Y. Wang et al., “Investigation of hypoxia-induced myocardial injury dynamics in a tissue interface mimicking microfluidic device,” Analytical Chemistry, vol. 85, no. 1, pp. 235–244, 2013.
  32. B. Peng, Z. Tong, W. Y. Tong et al., “In situ surface modification of microfluidic blood–brain-barriers for improved screening of small molecules and nanoparticles,” ACS Applied Materials & Interfaces, vol. 12, no. 51, pp. 56753–56766, 2020.
  33. A. Oddo, B. Peng, Z. Tong et al., “Advances in microfluidic blood-brain barrier (BBB) models,” Trends in Biotechnology, vol. 37, no. 12, pp. 1295–1314, 2019.
  34. V. van Duinen, S. J. Trietsch, J. Joore, P. Vulto, and T. Hankemeier, “Microfluidic 3D cell culture: from tools to tissue models,” Current Opinion in Biotechnology, vol. 35, pp. 118–126, 2015.
  35. R. J. Ozminkowski, D. Ling, R. Z. Goetzel et al., “Long-term impact of Johnson & Johnson's Health & Wellness Program on health care utilization and expenditures,” Journal of Occupational & Environmental Medicine, vol. 44, no. 1, pp. 21–29, 2002.
  36. A. Grosberg, A. P. Nesmith, J. A. Goss, M. D. Brigham, M. L. McCain, and K. K. Parker, “Muscle on a chip: in vitro contractility assays for smooth and striated muscle,” Journal of Pharmacological & Toxicological Methods, vol. 65, no. 3, pp. 126–135, 2012.
  37. B. Altmann, A. Löchner, M. Swain et al., “Differences in morphogenesis of 3D cultured primary human osteoblasts under static and microfluidic growth conditions,” Biomaterials, vol. 35, no. 10, pp. 3208–3219, 2014.
  38. M. Baker, “A living system on a chip,” Nature, vol. 471, no. 7340, pp. 661–665, 2011.
  39. M. M. G. Grafton, L. Wang, P.-A. Vidi, J. Leary, and S. A. Lelièvre, “Breast on-a-chip: mimicry of the channeling system of the breast for development of theranostics,” Integrative Biology, vol. 3, no. 4, pp. 451–459, 2011.
  40. B. Ataç, I. Wagner, R. Horland et al., “Skin and hair on-a-chip: in vitro skin models versus ex vivo tissue maintenance with dynamic perfusion,” Lab on a Chip, vol. 13, no. 18, pp. 3555–3561, 2013.
  41. H. E. Abaci, K. Gledhill, Z. Guo, A. M. Christiano, and M. L. Shuler, “Pumpless microfluidic platform for drug testing on human skin equivalents,” Lab on a Chip, vol. 15, no. 3, pp. 882–888, 2015.
  42. R. Shams, P. Sadeghi, R. A. Kennedy, and R. I. Hartley, “A survey of medical image registration on multicore and the GPU,” IEEE Signal Processing Magazine, vol. 27, no. 2, pp. 50–60, 2010.
  43. T. Young, D. Hazarika, S. Poria, and E. Cambria, “Recent trends in deep learning based natural language processing,” IEEE Computational Intelligence Magazine, vol. 13, no. 3, pp. 55–75, 2018.
  44. L. Deng and X. Li, “Machine learning paradigms for speech recognition: an overview,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, no. 5, pp. 1060–1089, 2013.
  45. P. N. Srinivasu, J. G. SivaSai, M. F. Ijaz, A. K. Bhoi, W. Kim, and J. J. Kang, “Classification of skin disease using deep learning neural networks with MobileNet V2 and LSTM,” Sensors, vol. 21, no. 8, p. 2852, 2021.
  46. I. H. Witten and E. Frank, “Data mining: practical machine learning tools and techniques with Java implementations,” SIGMOD Record, vol. 31, no. 1, pp. 76–77, 2002.
  47. W. S. McCulloch and W. Pitts, “A logical calculus of the ideas immanent in nervous activity,” Bulletin of Mathematical Biophysics, vol. 5, no. 4, pp. 115–133, 1943.
  48. F. Rosenblatt, “The perceptron: a probabilistic model for information storage and organization in the brain,” Psychological Review, vol. 65, no. 6, pp. 386–408, 1958.
  49. J. Nievergelt, “R69-13 perceptrons: an introduction to computational geometry,” IEEE Transactions on Computers, vol. C-18, no. 6, p. 572, 1969.
  50. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature, vol. 323, no. 6088, pp. 533–536, 1986.
  51. Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
  52. C. J. C. Burges, “A tutorial on support vector machines for pattern recognition,” Data Mining and Knowledge Discovery, vol. 2, no. 2, pp. 121–167, 1998.
  53. L. Meier, S. van de Geer, and P. Bühlmann, “The group Lasso for logistic regression,” Journal of the Royal Statistical Society Series B (Statistical Methodology), vol. 70, no. 1, pp. 53–71, 2008.
  54. S. R. Safavian and D. Landgrebe, “A survey of decision tree classifier methodology,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 21, no. 3, pp. 660–674, 1991.
  55. N. Friedman, D. Geiger, and M. Goldszmidt, “Bayesian network classifiers,” Machine Learning, vol. 29, no. 2, pp. 131–163, 1997.
  56. G. E. Hinton, S. Osindero, and Y. Teh, “A fast learning algorithm for deep belief nets,” Neural Computation, vol. 18, no. 7, pp. 1527–1554, 2006.
  57. P. Wang, R. Ge, X. Xiao, Y. Cai, G. Wang, and F. Zhou, “Rectified-linear-unit-based deep learning for biomedical multi-label data,” Interdisciplinary Sciences: Computational Life Sciences, vol. 9, no. 3, pp. 419–422, 2017.
  58. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Communications of the ACM, vol. 60, no. 6, pp. 84–90, 2017.
  59. D. Silver, A. Huang, C. J. Maddison et al., “Mastering the game of Go with deep neural networks and tree search,” Nature, vol. 529, no. 7587, pp. 484–489, 2016.
  60. J. Fan, T. Zhao, Z. Kuang et al., “HD-MTL: hierarchical deep multi-task learning for large-scale visual recognition,” IEEE Transactions on Image Processing, vol. 26, no. 4, pp. 1923–1938, 2017.
  61. S. Xie, X. Zheng, Y. Chen et al., “Artifact removal using improved GoogLeNet for sparse-view CT reconstruction,” Scientific Reports, vol. 8, no. 1, article 6700, 2018.
  62. Z. Wu, C. Shen, and A. van den Hengel, “Wider or deeper: revisiting the ResNet model for visual recognition,” Pattern Recognition, vol. 90, pp. 119–133, 2019.
  63. A. Graves, A. Mohamed, and G. Hinton, “Speech recognition with deep recurrent neural networks,” in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, Canada, 2013, http://arxiv.org/pdf/1308.0850v5.pdf.
  64. B. Gustavsen and Á. Portillo, “A damping factor-based white-box transformer model for network studies,” IEEE Transactions on Power Delivery, vol. 33, no. 6, pp. 2956–2964, 2018.
  65. T. Feng and D. Gu, “SGANVO: unsupervised deep visual odometry and depth estimation with stacked generative adversarial networks,” IEEE Robotics and Automation Letters, vol. 4, no. 4, pp. 4431–4437, 2019.
  66. S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: towards real-time object detection with region proposal networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137–1149, 2017.
  67. S. Singh, U. Ahuja, M. Kumar, K. Kumar, and M. Sachdeva, “Face mask detection using YOLOv3 and faster R-CNN models: COVID-19 environment,” Multimedia Tools and Applications, vol. 80, no. 13, pp. 19753–19768, 2021.
  68. Y. Han and J. C. Ye, “Framing U-Net via deep convolutional framelets: application to sparse-view CT,” IEEE Transactions on Medical Imaging, vol. 37, no. 6, pp. 1418–1429, 2018.
  69. L. C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 4, pp. 834–848, 2018.
  70. A. L. Nobles, E. C. Leas, T. L. Caputi, S. H. Zhu, and J. W. Ayers, “Responses to addiction help-seeking from Alexa, Siri, Google Assistant, Cortana, and Bixby intelligent virtual assistants,” npj Digital Medicine, vol. 3, no. 1, article 11, 2020.
  71. Y. Mahdi and K. Daoud, “Microdroplet size prediction in microfluidic systems via artificial neural network modeling for water-in-oil emulsion formulation,” Journal of Dispersion Science and Technology, vol. 38, no. 10, pp. 1501–1508, 2017.
  72. S. Han, T. Kim, D. Kim, Y. L. Park, and S. Jo, “Use of deep learning for characterization of microfluidic soft sensors,” IEEE Robotics and Automation Letters, vol. 3, no. 2, pp. 873–880, 2018.
  73. D. Stoecklein, K. G. Lore, M. Davies, S. Sarkar, and B. Ganapathysubramanian, “Deep learning for flow sculpting: insights into efficient learning using scientific simulation data,” Scientific Reports, vol. 7, article 46368, 2017.
  74. C. H. Choi, J. H. Jung, T. S. Hwang, and C. S. Lee, “In situ microfluidic synthesis of monodisperse PEG microspheres,” Macromolecular Research, vol. 17, no. 3, pp. 163–167, 2009.
  75. J. H. Xu, S. W. Li, J. Tan, Y. J. Wang, and G. S. Luo, “Preparation of highly monodisperse droplet in a T-junction microfluidic device,” AIChE Journal, vol. 52, no. 9, pp. 3005–3010, 2010.
  76. Y. Zeng, R. Novak, J. Shuga, M. T. Smith, and R. A. Mathies, “High-performance single cell genetic analysis using microfluidic emulsion generator arrays,” Analytical Chemistry, vol. 82, no. 8, pp. 3183–3190, 2010.
  77. P. Kadlec, B. Gabrys, and S. Strandt, “Data-driven soft sensors in the process industry,” Computers & Chemical Engineering, vol. 33, no. 4, pp. 795–814, 2009.
  78. S. S. Robinson, K. W. O'Brien, H. Zhao, B. N. Peele, and R. F. Shepherd, “Integrated soft sensors and elastomeric actuators for tactile machines with kinesthetic sense,” Extreme Mechanics Letters, vol. 5, pp. 47–53, 2015.
  79. Y. L. Park, B. R. Chen, N. O. Pérez-Arancibia et al., “Design and control of a bio-inspired soft wearable robotic device for ankle-foot rehabilitation,” Bioinspiration & Biomimetics, vol. 9, no. 1, article 016007, 2014.
  80. J. Shintake, V. Cacucciolo, D. Floreano, and H. Shea, “Soft robotic grippers,” Advanced Materials, vol. 30, no. 29, article 1707035, 2018.
  81. R. Ahasan, A. U. Ratul, and A. S. M. Bakibillah, “White blood cells nucleus segmentation from microscopic images of stained peripheral blood film during leukemia and normal condition,” in 2016 5th International Conference on Informatics, Electronics and Vision, Dhaka, Bangladesh, 2016.
  82. N. E. Ross, C. J. Pritchard, D. M. Rubin, and A. G. Dusé, “Automated image processing method for the diagnosis and classification of malaria on thin blood smears,” Medical and Biological Engineering and Computing, vol. 44, no. 5, pp. 427–436, 2006.
  83. Y. Bao and J. Sun, “Image registration with a modified quantum-behaved particle swarm optimization,” in 2011 10th International Symposium on Distributed Computing and Applications to Business, Engineering and Science, Wuxi, China, 2011.
  84. C. Di Ruberto, A. Dempster, S. Khan, and B. Jarra, “Analysis of infected blood cell images using morphological operators,” Image and Vision Computing, vol. 20, no. 2, pp. 133–146, 2002.
  85. P. A. Pattanaik, M. Mittal, and M. Z. Khan, “Unsupervised deep learning CAD scheme for the detection of malaria in blood smear microscopic images,” IEEE Access, vol. 8, pp. 94936–94946, 2020.
  86. B. Guo, C. Lei, H. Kobayashi et al., “High-throughput, label-free, single-cell, microalgal lipid screening by machine-learning-equipped optofluidic time-stretch quantitative phase microscopy,” Cytometry Part A, vol. 91, no. 5, pp. 494–502, 2017.
  87. X. Huang, Y. Jiang, X. Liu et al., “Machine learning based single-frame super-resolution processing for lensless blood cell counting,” Sensors, vol. 16, no. 11, p. 1836, 2016.
  88. D. K. Singh, C. C. Ahrens, L. Wei, and S. A. Vanapalli, “Label-free, high-throughput holographic screening and enumeration of tumor cells in blood,” Lab on a Chip, vol. 17, no. 17, pp. 2920–2932, 2017.
  89. C. L. Chen, A. Mahjoubfar, L.-C. Tai et al., “Deep learning in label-free cell classification,” Scientific Reports, vol. 6, no. 1, p. 21471, 2016.
  90. A. San-Miguel, P. T. Kurshan, M. M. Crane et al., “Deep phenotyping unveils hidden traits and genetic relations in subtle mutants,” Nature Communications, vol. 7, no. 1, article 12990, 2016.
  91. K. Kim, S. Kim, and J. S. Jeon, “Visual estimation of bacterial growth level in microfluidic culture systems,” Sensors, vol. 18, no. 2, article 447, 2018.
  92. F. Buggenthin, F. Buettner, P. S. Hoppe et al., “Prospective identification of hematopoietic lineage choice by deep learning,” Nature Methods, vol. 14, no. 4, pp. 403–406, 2017.
  93. S. D. Blasio, I. Wortel, D. Bladel, L. Vries, and S. V. Hato, “Human CD1c+ DCs are critical cellular mediators of immune responses induced by immunogenic cell death,” Oncoimmunology, vol. 5, no. 8, article e1192739, 2016.
  94. S. Parlato, A. de Ninno, R. Molfetta et al., “3D microfluidic model for evaluating immunotherapy efficacy by tracking dendritic cell behaviour toward tumor cells,” Scientific Reports, vol. 7, no. 1, article 1093, 2017.
  95. E. Meijering, O. Dzyubachyk, and I. Smal, “Methods for cell and particle tracking,” Methods in Enzymology, vol. 504, pp. 183–200, 2012.
  96. E. Biselli, E. Agliari, A. Barra et al., “Organs on chip approach: a tool to evaluate cancer-immune cells interactions,” Scientific Reports, vol. 7, no. 1, article 12737, 2017.
  97. M. C. Comes, P. Casti, A. Mencattini, D. Giuseppe, and E. Martinelli, “The influence of spatial and temporal resolutions on the analysis of cell-cell interaction: a systematic study for time-lapse microscopy applications,” Scientific Reports, vol. 9, no. 1, article 6789, 2019.
  98. M. Nguyen, A. de Ninno, A. Mencattini et al., “Dissecting effects of anti-cancer drugs and cancer-associated fibroblasts by on-chip reconstitution of immunocompetent tumor microenvironments,” Cell Reports, vol. 25, no. 13, pp. 3884–3893.e3, 2018.
  99. A. Mencattini, D. Di Giuseppe, M. C. Comes, P. Casti, and E. Martinelli, “Discovering the hidden messages within cell trajectories using a deep learning approach for in vitro evaluation of cancer drug treatments,” Scientific Reports, vol. 10, no. 1, article 7653, 2020.
  100. B. P. Jena, D. L. Gatti, S. Arslanturk, S. Pernal, and D. J. Taatjes, “Human skeletal muscle cell atlas: unraveling cellular secrets utilizing ‘muscle-on-a-chip’, differential expansion microscopy, mass spectrometry, nanothermometry and machine learning,” Micron, vol. 117, pp. 55–59, 2019.
  101. H. G. Yi, Y. H. Jeong, Y. Kim et al., “A bioprinted human-glioblastoma-on-a-chip for the identification of patient-specific responses to chemoradiotherapy,” Nature Biomedical Engineering, vol. 3, no. 7, pp. 509–519, 2019.
  102. J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 2015.
  103. A. Zaimi, M. Wabartha, V. Herman, P. L. Antonsanti, and J. Cohen-Adad, “AxonDeepSeg: automatic axon and myelin segmentation from microscopy data using convolutional neural networks,” Scientific Reports, vol. 8, no. 1, article 3816, 2018.
  104. J. Lim, A. B. Ayoub, and D. Psaltis, “Three-dimensional tomography of red blood cells using deep learning,” Advanced Photonics, vol. 2, no. 2, article 026001, 2020.
  105. T. Falk, D. Mai, R. Bensch et al., “U-Net: deep learning for cell counting, detection, and morphometry,” Nature Methods, vol. 16, no. 1, pp. 67–70, 2019.
  106. D. Bannon, E. Moen, M. Schwartz, E. Borba, and D. V. Valen, “DeepCell Kiosk: scaling deep learning–enabled cellular image analysis with Kubernetes,” Nature Methods, vol. 18, no. 1, pp. 43–45, 2021.
  107. M. G. Haberl, C. Churas, L. Tindall et al., “CDeep3M: plug-and-play cloud-based deep learning for image segmentation,” Nature Methods, vol. 15, no. 9, pp. 677–680, 2018.
  108. C. McQuin, A. Goodman, V. Chernyshev et al., “CellProfiler 3.0: next-generation image processing for biology,” PLoS Biology, vol. 16, no. 7, article e2005970, 2018.
  109. J. Zhao, Y. Sun, H. Zhu et al., “Deep-learning cell imaging through Anderson localizing optical fiber,” Advanced Photonics, vol. 1, no. 6, article 066001, 2019.
  110. M. Bornens, “Organelle positioning and cell polarity,” Nature Reviews Molecular Cell Biology, vol. 9, no. 11, pp. 874–886, 2008.
  111. J. G. Carlton, H. Jones, and U. S. Eggert, “Membrane and organelle dynamics during cell division,” Nature Reviews Molecular Cell Biology, vol. 21, no. 3, pp. 151–166, 2020.
  112. A. E. Y. T. Lefebvre, D. Ma, K. Kessenbrock, D. A. Lawson, and M. A. Digman, “Automated segmentation and tracking of mitochondria in live-cell time-lapse images,” Nature Methods, vol. 18, no. 9, pp. 1091–1102, 2021.
  113. D. Huh, H. Fujioka, Y. C. Tung et al., “Acoustically detectable cellular-level lung injury induced by fluid mechanical stresses in microfluidic airway systems,” Proceedings of the National Academy of Sciences of the United States of America, vol. 104, no. 48, pp. 18886–18891, 2007.
  114. J. P. Wikswo, E. L. Curtis, Z. E. Eagleton et al., “Scaling and systems biology for integrating multiple organs-on-a-chip,” Lab on a Chip, vol. 13, no. 18, pp. 3496–3511, 2013.
  115. J. A. Boos, P. M. Misun, A. Michlmayr, A. Hierlemann, and O. Frey, “Microfluidic multitissue platform for advanced embryotoxicity testing in vitro,” Advanced Science, vol. 6, no. 13, article 1900294, 2019.
  116. Y. Chang, H. Park, H. J. Yang et al., “Cancer drug response profile scan (CDRscan): a deep learning model that predicts drug effectiveness from cancer genomic signature,” Scientific Reports, vol. 8, no. 1, article 8857, 2018.
  117. M. Abdel-Basset, H. Hawash, M. Elhoseny, R. K. Chakrabortty, and M. Ryan, “DeepH-DTA: deep learning for predicting drug-target interactions: a case study of COVID-19 drug repurposing,” IEEE Access, vol. 8, pp. 170433–170451, 2020.
  118. E. Gawehn, J. A. Hiss, and G. Schneider, “Deep learning in drug discovery,” Molecular Informatics, vol. 35, no. 1, pp. 3–14, 2016.
  119. T. R. Lane, D. H. Foil, E. Minerali, F. Urbina, and S. Ekins, “Bioactivity comparison across multiple machine learning algorithms using over 5000 datasets for drug discovery,” Molecular Pharmaceutics, vol. 18, no. 1, pp. 403–415, 2020.
  120. M. B. Esch, T. L. King, and M. L. Shuler, “The role of body-on-a-chip devices in drug and toxicity studies,” Annual Review of Biomedical Engineering, vol. 13, no. 1, pp. 55–72, 2010.
  121. R. Novak, M. Ingram, S. Marquez, D. K. Das, and A. Delahanty, “Robotic fluidic coupling and interrogation of multiple vascularized organ chips,” Nature Biomedical Engineering, vol. 4, no. 4, pp. 407–420, 2020.
  122. T. Satoh, S. Sugiura, K. Shin et al., “A multi-throughput multi-organ-on-a-chip system on a plate formatted pneumatic pressure-driven medium circulation platform,” Lab on a Chip, vol. 18, no. 1, pp. 115–125, 2018.
  123. Y. Imura, K. Sato, and E. Yoshimura, “Micro total bioassay system for ingested substances: assessment of intestinal absorption, hepatic metabolism, and bioactivity,” Analytical Chemistry, vol. 82, no. 24, pp. 9983–9988, 2010.
  124. T. Miyato, S. Maeda, M. Koyama, and S. Ishii, “Virtual adversarial training: a regularization method for supervised and semi-supervised learning,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 8, pp. 1979–1993, 2019.
  125. P. Kumar and A. Gupta, “Active learning query strategies for classification, regression, and clustering: a survey,” Journal of Computer Science and Technology, vol. 35, no. 4, pp. 913–945, 2020.
  126. S. Budd, E. C. Robinson, and B. Kainz, “A survey on active learning and human-in-the-loop deep learning for medical image analysis,” Medical Image Analysis, vol. 71, article 102062, 2021.
  127. L. Chen, P. Bentley, K. Mori, K. Misawa, M. Fujiwara, and D. Rueckert, “Self-supervised learning for medical image analysis using image context restoration,” Medical Image Analysis, vol. 58, article 101539, 2019.

Copyright © 2022 Jintao Li et al. Exclusive Licensee Science and Technology Review Publishing House. Distributed under a Creative Commons Attribution License (CC BY 4.0).
