
Perspective | Open Access

Volume 2022 | Article ID 9852872 | https://doi.org/10.34133/2022/9852872

Ethan Waisberg, Joshua Ong, Phani Paladugu, Sharif Amit Kamran, Nasif Zaman, Andrew G. Lee, Alireza Tavakkoli, "Challenges of Artificial Intelligence in Space Medicine", Space: Science & Technology, vol. 2022, Article ID 9852872, 7 pages, 2022. https://doi.org/10.34133/2022/9852872

Challenges of Artificial Intelligence in Space Medicine

Received: 19 Jun 2022
Accepted: 26 Sep 2022
Published: 29 Oct 2022

Abstract

The human body undergoes many changes during long-duration spaceflight, including musculoskeletal, visual, and behavioral changes. Several of these microgravity-induced effects serve as potential barriers to future exploration missions. The advent of artificial intelligence (AI) in medicine has progressed rapidly and has many promising applications for maintaining and monitoring astronaut health during spaceflight. However, the austere environment and unique nature of spaceflight present challenges to successfully training and deploying systems for upholding astronaut health and mission performance. In this article, the dynamic barriers facing AI development in space medicine are explored. These diverse challenges range from limited astronaut data for algorithm training to ethical and legal considerations in deploying automated diagnostic systems in the medically limited space environment. We then discuss how these challenges may be addressed and outline future directions for this emerging field of research.

1. Introduction

Artificial intelligence (AI) is any technique that leverages machines to mimic human intelligence. AI is a promising field that is expected to revolutionize nearly all aspects of medicine. AI is just beginning to be integrated into medicine and continues to progress towards a seamless integration into clinical care. The future goal of AI in space medicine will be to detect, diagnose, and recommend treatments for routine medical issues faced by astronauts during deep space exploration, eliminating the signal transmission delays experienced while contacting terrestrial doctors. AI can also potentially predict future health problems that will occur in astronauts. Astronauts in this austere environment face unique hazards including microgravity, increased radiation exposure, and increased distance from Earth. Many AI solutions for space medicine are currently being developed and integrated into digital technologies including telemedicine, wearable technology, augmented and virtual reality, and biosensors. However, the implementation of AI solutions for astronaut health requires special considerations for deployment onboard the International Space Station (ISS) and potentially other spacecraft for future spaceflight.

When anticipating a crewed mission to Mars in the 2030s, the massive distance from the spacecraft to Earth will likely cause signal transmission delays of between 5 and 20 minutes, depending on planetary positions [1]. Artificial intelligence will be essential if rapid treatment decisions are needed on this mission. If a medical procedure needed to be performed in space, an AI program could provide guidance instantly, rather than relying on a delayed conversation with a surgeon on Earth. AI in space medicine will also be essential with the increasing prevalence of space tourism to ensure the safety of civilian passengers.
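
As a rough illustration (not a mission-specific figure), the one-way light-time delay can be estimated directly from the Earth-Mars distance. The sketch below uses approximate orbital extremes; the 5 to 20 minute range cited above falls within the resulting bounds for typical mission geometries.

```python
# Back-of-envelope one-way light-time delay between Earth and Mars.
# Distances are approximate orbital extremes, not mission-specific values.
SPEED_OF_LIGHT_KM_S = 299_792          # km/s
EARTH_MARS_KM = {
    "closest approach": 54.6e6,        # ~0.36 AU
    "farthest separation": 401.0e6,    # ~2.7 AU
}

for label, distance_km in EARTH_MARS_KM.items():
    delay_min = distance_km / SPEED_OF_LIGHT_KM_S / 60
    print(f"{label}: ~{delay_min:.1f} min one-way")

# closest approach: ~3.0 min one-way
# farthest separation: ~22.3 min one-way
```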

AI covers several subfields including machine learning and deep learning. Machine learning refers to algorithms with the ability to improve through experience and can be categorized as unsupervised, supervised, semi-supervised, self-supervised, multi-instance, or reinforcement learning [2]. Deep learning involves the usage of vast amounts of data to train multi-layered neural networks. These techniques have revolutionized approaches to medicine for austere environments and will likely have an immense impact on the field of space medicine for astronaut health during spaceflight. However, there are several domains of challenges that will need to be addressed for future applications of AI in space medicine. In this article, we discuss and elaborate upon these domains.

2. Humans in Space and Space Medicine

Many of the problems in space medicine, including acceleration, dysbarism, hypoxia, and altitude changes, were encountered in dive and aviation medicine long before the advent of spaceflight [3]. However, space has unique challenges and hazards including prolonged exposure to altered gravitational fields, radiation, confinement, and a hostile environment. NASA’s Human Research Program (HRP) has identified 30 astronaut health risks and hopes to create suitable countermeasures to mitigate these risks. NASA has also identified several “red” risks that are most likely to compromise mission performance and decrease astronaut quality of life; these include risks of carcinogenesis, cognitive decrements, spaceflight-associated neuro-ocular syndrome, cardiovascular disease, and inadequate nutrition [4].

During spaceflight, pathophysiological changes in astronauts can occur over periods of hours (vestibulo-ocular adaptations), weeks (cardiovascular deconditioning), or months (skeletal muscle atrophy and decreased bone density) [3]. In microgravity, the lack of mechanical loading on the musculoskeletal system results in a loss of bone mass and skeletal muscle atrophy [5]. Prolonged microgravity exposure can also lead to the development of spaceflight-associated neuro-ocular syndrome (SANS), which can manifest as posterior globe flattening, optic disc edema, areas of localized retinal ischemia, hyperopic shifts, and choroidal and retinal folds.

Ionizing radiation exposure in spaceflight, including exposure to highly energetic particles, is concerning as it can induce changes in gene expression and increase the risk of developing various malignancies [5]. Mental health risks during long-duration spaceflight include behavioral strain from sharing a confined environment with the same individuals and the psychological effects of prolonged isolation from Earth and loved ones. These identified problems are all priorities to address for future spaceflight missions, including missions to Mars, to ensure mission success.

Spaceflight has also been reported to significantly increase levels of hemolysis [6]. Astronaut fatigue and orthostatic intolerance may occur as a result of this anemia, making the monitoring of anemia essential for long-duration spaceflight [6]. Terrestrial anemia has also been associated with specific neuro-ocular signs (such as papilledema) that resemble SANS [7]. Monitoring anemia status in space may therefore provide unique insights into the pathophysiology of SANS, which has not yet been fully elucidated. Our group has previously proposed an approach using deep learning to non-invasively monitor anemia during spaceflight from retinal fundus photos [8]. With artificial intelligence, this approach could potentially replace the need for invasive blood draws in space while providing more frequent monitoring of anemia for astronauts. Challenges that this project must overcome include limited astronaut metadata and limited astronaut retinal images for training a deep learning model. In the following sections, we elaborate upon such challenges, including limited metadata and other considerations for AI in space medicine.

3. Fundamentals of Artificial Intelligence in Medicine

Artificial intelligence has revolutionized how expert medical systems carry out differential diagnoses. With the advent of deep learning, many AI-based systems have performed at clinically appropriate levels compared to experts in diagnosing diseases in ophthalmology [9], dermatology, radiology, and pulmonology. However, as dependency on such systems becomes more imminent, reliable deployment of medical AI systems in routine clinical care and remote clinical settings poses intricate ethical, technological, and human-centered obstacles to safe and practical translation.

Deep convolutional neural networks can extract and learn intrinsic features directly from raw data and have achieved strong results in medical image classification [10], segmentation [11], and image-to-image translation [12, 13] tasks. For example, OpticNet-71, trained on OCT (optical coherence tomography) images spanning age-related macular degeneration, diabetic macular edema, drusen, and choroidal neovascularization, performed at high levels in identifying all of these pathologies. In addition, CheXNeXt has been shown to robustly identify 14 diseases by learning from chest radiographs and associated labels. Semantic segmentation is a more challenging task, as it requires pixel-wise labeling of images for accurate identification of diseases or essential features. A typical architecture is an encoder-decoder, with an encoder that learns spatial information and a decoder that utilizes those learned features for pixel-wise prediction. For example, the U-Net [14] architecture has been utilized for retinal vessel [15], brain tumor [16], and multi-organ CT [17] segmentation.
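
To make the pixel-wise prediction idea concrete, the following is a minimal sketch in PyTorch of an encoder-decoder that maps an input image to per-pixel class scores. It is purely illustrative (the class name and shapes are our own placeholders) and is not the cited OpticNet-71 or U-Net implementation.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Minimal encoder-decoder for pixel-wise (segmentation) prediction."""
    def __init__(self, in_ch=3, num_classes=2):
        super().__init__()
        # Encoder: learn spatial features at reduced resolution
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # H/2 x W/2
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to full resolution for per-pixel labels
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2),  # back to H x W
            nn.ReLU(),
            nn.Conv2d(16, num_classes, 1),            # per-pixel class scores
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinySegNet()
fundus = torch.randn(1, 3, 256, 256)    # placeholder retinal image
mask_logits = model(fundus)             # shape: (1, 2, 256, 256)
```

A full U-Net additionally passes skip connections from encoder to decoder layers so that fine spatial detail is preserved in the predicted mask.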

Generative adversarial networks (GANs) are a newer class of deep learning technique consisting of two competing networks, a generator and a discriminator. The generator tries to generate new images or data by learning from a given data distribution, or tries to translate from one modality to an entirely new modality. In contrast, the discriminator tries to distinguish between real and newly synthesized images. There has been a surge in the usage of GANs in medical imaging for tasks such as synthetic data generation [18], image-to-image translation [12, 13], and image segmentation [11]. For example, a recent conditional GAN [12] architecture has been shown to generate fluorescein angiography images, containing patient-specific information, from color fundus photographs. Moreover, RV-GAN [11] has extended the application of GANs to segment retinal vessels with high precision from color fundus images. This progress illustrates the fast pace of AI in medicine and the overarching impact it might bring to clinical support systems worldwide.
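
The adversarial interplay between the two networks can be sketched in a few lines. The toy example below is illustrative only (the "images" are random tensors and the networks are deliberately tiny); it alternates a discriminator update with a generator update, which is the core training loop shared by the GAN variants cited above.

```python
import torch
import torch.nn as nn

# Toy generator and discriminator over flattened 64x64 grayscale images.
G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 64 * 64), nn.Tanh())
D = nn.Sequential(nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(16, 64 * 64)          # placeholder "real" images
noise = torch.randn(16, 100)

# Discriminator step: label real images 1, generated images 0
fake = G(noise).detach()
loss_d = bce(D(real), torch.ones(16, 1)) + bce(D(fake), torch.zeros(16, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: try to make the discriminator label fakes as real
loss_g = bce(D(G(noise)), torch.ones(16, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```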

4. Discussion on the Domains of Challenges in AI in Space Medicine

AI in space medicine faces additional challenges beyond those faced by AI systems in terrestrial medicine. The sections below describe the primary domains of challenges that AI currently faces in space medicine: limited astronaut data, limited prospective research, and ethical and legal considerations.

4.1. Limited Astronaut Data for Training and Validation

To build an accurate model, large datasets are required to train machine learning models, and these datasets are essential for improving both the accuracy and precision of the model. For example, the field of ophthalmology is one of the leaders in AI research, primarily because of the significant amount of structured data and the high volume of imaging performed [19]. To create a robust dataset to train AI algorithms in space, data from a multitude of space missions over several years will be required. Specifically, more methods of non-invasive data collection are needed in space, such as biosensors. Our group has been developing a multi-modal head-mounted visual assessment system with the goal of collecting more vision data during spaceflight to further understand the impacts of microgravity on the visual system [20, 21]. Astronaut data comes from a special cohort with several unique limitations: it belongs to an extremely small population that is predominantly middle-aged, male, and white American. A space AI algorithm trained on this data would not be equally effective for individuals who do not fit this demographic. Challenges of working with a small dataset also include increased brittleness (the tendency of an algorithm to be fooled).

There are several potential solutions to this major challenge for AI in space medicine. One approach would be to utilize established machine learning solutions for small datasets on Earth. Transfer learning is a deep learning technique that has been utilized in terrestrial machine learning applications where data is limited (e.g., a small dataset or a small amount of labeled data) [22]. In transfer learning, layers of an established convolutional neural network are transferred to a new framework (Figure 1). These pre-trained layers are typically trained on similar or related but much larger datasets, which can increase the accuracy of the new model. The base network is then transferred to the new network, which is usually fine-tuned and structured for a different but related task. Transfer learning has been utilized terrestrially for many medical applications including abdominal ultrasound classification [22], physiological signal analysis [23], and brain tumor classification [24]. By utilizing models pre-trained on large terrestrial datasets that share similar features or areas of interest with space applications, transfer learning may be a strong solution to the issue of limited data in space medicine.
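
A minimal sketch of this idea is shown below, assuming a generic ImageNet-pretrained ResNet-18 as the source model and a hypothetical two-class astronaut imaging task (e.g., anemia versus no anemia) as the target; in practice the source model would ideally be pretrained on a large terrestrial retinal-imaging dataset that shares features with the space application.

```python
import torch
import torch.nn as nn
import torchvision

# Start from an ImageNet-pretrained backbone (a stand-in for a large,
# related terrestrial dataset).
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")

# Freeze the transferred layers so the limited astronaut data only has
# to fit a small number of new parameters.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for the new, related task
# (hypothetical binary anemia / no-anemia label).
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the new head is fine-tuned on the small astronaut dataset.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```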

Another potential solution is to utilize ground-based analogs on Earth to generate data. To induce many of the physiological changes that occur during spaceflight, terrestrial analog studies such as the six-degree head-down tilt bed rest are utilized [25]. Head-down tilt bed rest is currently the gold standard terrestrial analog for mimicking cephalad fluid shifts that occur in microgravity. The data acquired from terrestrial analog studies can be a useful addition to increase the size of a training dataset. However, spaceflight-associated neuro-ocular syndrome (SANS) is a unique condition that cannot be replicated by the most advanced terrestrial models. A recent study using NASA’s vessel generation analysis software showed that vascular density increased after six-degree head-down tilt testing; meanwhile, the opposite response (a significant decrease) was seen in the retinas of astronauts [26].

Most AI systems have not yet achieved reliable generalizability, which is essential before they can be used in clinical practice [27]. Even terrestrially, the generalizability of models is proving challenging across different populations, with a recent study on detecting thoracic disease from chest radiographs showing specificity that varied widely across independent datasets (from 0.566 to 1.000) [28]. To overcome this issue, site-specific training in space will be needed to modify an existing framework for an astronaut population (Figure 2). In addition, generative adversarial networks (GANs) can be used to amplify astronaut datasets.

One advantage of GANs is that they can be trained on small amounts of extraterrestrial data to extrapolate and generate new synthetic data. Generally, GANs consist of two distinct architectures, namely, a generator and a discriminator. The generator is responsible for synthesizing new synthetic data, whereas the discriminator is tasked with distinguishing whether the generated data resembles the original data distribution. GANs can thus be trained on synthetic fundus, optical coherence tomography (OCT), and other modalities of structured retinal imaging data to make a model generalizable and robust in the extraterrestrial setting. Another advantage of GANs is that the system can learn conditional structured information of modality B from modality A, creating an image-to-image translation architecture (Figure 3). Generally, MRI, fundus, and OCT imaging are used to detect SANS-related findings such as choroidal folds, globe flattening, and cotton wool spots. Fundus photography and OCT imaging are available on board the International Space Station (ISS) [29]. In contrast, orbital and cranial MRI, an essential tool for detecting globe flattening, is not available on the ISS. Experts could therefore utilize generative networks, together with other structured retinal imaging modalities and patient-specific information, to generate synthetic MRI scans. Our group is currently developing a novel GAN architecture specifically for SANS that learns to synthesize fluorescein angiography (FA) images (an invasive modality not available on the ISS) from retinal fundus images in order to detect biological markers such as vascular and arterial damage, aneurysms, and other abnormalities. Once deployed, this system could also be extended to identifying SANS by translating fundus images into meaningful FA images for further study during spaceflight. It may also help provide this modality for individuals on Earth who have contraindications to contrast dye, or for austere environments that lack access to this imaging modality or to specialists who can evaluate its findings.
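
As an illustration of the image-to-image translation objective, the sketch below shows a generic pix2pix-style generator loss (an adversarial term plus an L1 term that keeps the synthesized angiogram close to its paired ground truth). This is a standard formulation for conditional translation, not the specific SANS architecture our group is developing, and the function and variable names are placeholders.

```python
import torch
import torch.nn as nn

# Generic fundus -> synthetic FA translation objective (pix2pix-style sketch).
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def generator_loss(disc_logits_on_fake, fake_fa, real_fa, lambda_l1=100.0):
    """Adversarial term (fool the discriminator into labeling the synthetic
    FA as real) plus an L1 term (stay close to the paired ground-truth FA)."""
    adv = bce(disc_logits_on_fake, torch.ones_like(disc_logits_on_fake))
    recon = l1(fake_fa, real_fa)
    return adv + lambda_l1 * recon
```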

4.2. Limited Prospective Research on AI in Space

Many of the current AI studies in medicine are retrospective in nature, relying on the data of patients who have already been diagnosed. However, to fully understand the diagnostic and treatment accuracy of AI in space, prospective research with standardized methodology must be carried out with current astronauts. Randomized controlled trials are often the optimal method to determine the effectiveness of a new AI system [9]; however, with a limited sample size of astronauts (approximately 16 per year), this is currently extremely challenging. With the development of new commercial spaceflight initiatives, the number of people traveling to space is expected to increase exponentially over the next decade. As more people travel to space, more prospective research can be carried out with stronger methodology and a larger, more diverse population.

5. Ethical and Legal Considerations

5.1. Astronaut-Doctor Relationship

In clinical medicine and medical ethics, the patient-doctor relationship is a crucial association built on strong ethical and legal frameworks. Physicians, as early as the first day of medical school, take the Hippocratic Oath, entailing a duty to preserve patient wellbeing and confidentiality. These two main tenets of the physician’s code are the ones most impacted by AI, in the context of patient data privacy and misdiagnosis. Introducing artificial intelligence adds complexity to this relationship, creating a physician-patient-AI relationship. It is difficult to determine whether the third entity in this expanded relationship is the human behind the code or the AI itself, which opens a deep discussion into the sentience of AI algorithms and the accountability of clinical AI engineers. Regardless of how the third entity in a patient-doctor-AI relationship is classified, it is important to ensure that AI systems are built utilizing edge computing rather than transmitting data to systems on Earth, for two reasons:
(1) There is a significant delay in data transmission if data is constantly exchanged between the spacecraft and ground flight surgeons.
(2) Edge computing vastly reduces the communication load and is better equipped to protect data privacy and prevent data leaks, which is a crucial component of preserving astronaut health data privacy.

There is also an important extension to the question of who or what the third entity is in an AI-incorporated physician-patient team: if the AI were to misdiagnose an astronaut during a long-duration space mission, it is necessary to consider who is responsible. Is it the AI itself, or the computer engineer behind the code? On Earth, physicians are often held accountable via malpractice laws. Does incorporating AI into medical diagnosis therefore require new laws holding engineers accountable to the same or a lesser extent as physicians? It is necessary to consider these questions and enact appropriate policy changes, because another fundamental tenet of medical ethics is non-maleficence (the do-no-harm clause of the Hippocratic Oath), and AI applications in space medicine must not put astronauts in harm’s way through coding imperfections leading to misdiagnosis. These are important questions and considerations to ponder when implementing artificial intelligence in space medicine.

5.2. Informed Consent of Astronauts (Autonomy)

Another important tenet of medical ethics that impacts the implementation of AI in any medical context, but especially space medicine, is patient autonomy. AI systems are rapidly developing, but patients and physicians alike are wary of AI’s efficacy because it is a relatively new technology in medicine. As with any innovation in medicine, ranging from novel surgical protocols to breakthrough drug discovery, AI is an experimental technology and requires the autonomous and informed consent of astronauts, who will most likely be research participants using AI in space medicine for the first time. Therefore, astronauts, especially those involved in earlier missions, will have to be properly briefed on all potential risks and willingly consent to utilizing the novel AI technology as part of their mission plan. Lastly, AI algorithms are continually updated, and as more astronauts fly in spacecraft with AI systems incorporated, the AI will need to collect more data from astronauts with each mission to properly mature into a better system. This again requires informed consent that respects astronaut autonomy. Without agreement from the astronauts in the corps to utilize their private data to improve AI systems, it will be difficult to justify and develop such systems.

5.3. Time Lag and Positive Outcomes (Beneficence and Justice)

Medical ethics also stresses the importance of beneficence as one of the main goals of medicine. As astronauts go on longer missions deeper into the solar system, there will no longer be real-time communication with flight surgeons on Earth. Therefore, in the event of a medical emergency, unless a crew member is an uninjured physician, it is necessary to have an AI system incorporated into the spacecraft to properly assess the medical event and guide the crew through the emergency. This technology can not only serve as a crucial risk mitigation tool for future long-duration missions to Mars and beyond but can also have positive benefits on Earth.

Space is often considered a resource-poor setting, and implementing AI into spacecraft designed for long-duration missions can produce technology that has direct applications to resource-poor settings on Earth. There are several remote and under-resourced communities across the world that may benefit from having an AI system aid community leaders and medical officers. This can ultimately relieve the strain on clinicians who practice in remote and under-resourced regions. Such an outcome highlights the final tenet of ethical consideration regarding AI in space medicine and medicine overall: justice. Space creates unique scientific, ethical, and legal challenges, but it can also serve as a unique platform for creating new technologies that improve life on Earth. Addressing the challenges of AI in space can improve the success of future long-duration space missions while also vastly improving life on Earth for all people.

6. Future Directions

Current forms of AI are not yet as sophisticated as human learners: training typically occurs in batches and eventually halts, after which the model can no longer improve. Human learning, in contrast, is a gradual, asynchronous process, with the possibility of lifelong learning that can arise from self-reflection. As machine learning techniques continue to develop, high levels of generalizability and robustness will become possible with smaller datasets. In addition, as memory resources and processing power increase, more capable forms of AI will be possible. However, even with sufficient computing power, AI currently relies heavily on context-specific training that is difficult to apply to different environments such as space. Further advancements in transfer learning are required to overcome this issue in the near future. Finally, the “black box” decision-making process of AI requires further understanding, as clinical decisions in medicine are a transparent process made with clear rationales based on local guidelines, rather than relying on an output that provides no further information.
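
One family of partial remedies is post hoc explanation. For example, a simple input-gradient saliency map (sketched below with a generic pretrained network and a placeholder image, not a method proposed in this article) highlights which pixels most influenced a prediction, offering at least a limited window into the "black box."

```python
import torch
import torchvision

# Input-gradient saliency: one of many explainability techniques.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
image = torch.randn(1, 3, 224, 224, requires_grad=True)   # placeholder input

top_score = model(image).max()      # score of the most likely class
top_score.backward()                # gradients w.r.t. the input pixels
saliency = image.grad.abs().max(dim=1)[0]   # per-pixel importance map
```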

7. Conclusion

Despite the many challenges, the future of AI in space medicine remains very promising. The investments made in developing AI technology for space medicine will continue to benefit society, much as earlier AI for identifying meteors in space was translated to improve the detection of breast malignancy on mammograms, and as bone demineralization studies in space provided insights into the pathogenesis of osteoporosis [25]. Over time, AI systems will continue to improve, becoming more intelligent and allowing healthcare in space to become more data-driven, personalized, and preventative, improving the health outcomes of astronauts.

Data Availability

No data has been shared in this perspective paper.

Conflicts of Interest

All authors have no competing interests.

Authors’ Contributions

E.W. contributed to the design, writing, and figure development. J.O. contributed to the design, writing, and figure development. P.P. contributed to the writing. S.K. contributed to the writing. N.Z. contributed to the review and intellectual support. A.G.L. contributed to the review and intellectual support. A.T. contributed to the review and intellectual support.

Acknowledgments

This study was supported by NASA Grant [80NSSC20K183]: A Non-intrusive Ocular Monitoring Framework to Model Ocular Structure and Functional Changes due to Long-term Spaceflight.

References

1. NASA, “Mission perseverance rover,” 2020, https://mars.nasa.gov/mars2020/spacecraft/rover/communications/.
2. J. Bajwa, U. Munir, A. Nori, and B. Williams, “Artificial intelligence in healthcare: transforming the practice of medicine,” Future Healthcare Journal, vol. 8, no. 2, pp. e188–e194, 2021.
3. G. C. Demontis, M. M. Germani, E. G. Caiani, I. Barravecchia, C. Passino, and D. Angeloni, “Human pathophysiological adaptations to the space environment,” Frontiers in Physiology, vol. 8, p. 547, 2017.
4. Z. S. Patel, T. J. Brunstetter, W. J. Tarver et al., “Red risks for a journey to the red planet: the highest priority human health risks for a mission to Mars,” npj Microgravity, vol. 6, no. 1, p. 33, 2020.
5. P. D. Hodkinson, R. A. Anderton, B. N. Posselt, and K. J. Fong, “An overview of space medicine,” British Journal of Anaesthesia, vol. 119, pp. i143–i153, 2017.
6. G. Trudel, N. Shahin, T. Ramsay, O. Laneuville, and H. Louati, “Hemolysis contributes to anemia during long-duration space flight,” Nature Medicine, vol. 28, no. 1, pp. 59–62, 2022.
7. C. W. Yu, E. Waisberg, J. M. Kwok, and J. A. Micieli, “Anemia and idiopathic intracranial hypertension: a systematic review and meta-analysis,” Journal of Neuro-Ophthalmology, vol. 42, no. 1, pp. e78–e86, 2022.
8. E. Waisberg, J. Ong, N. Zaman, S. A. Kamran, A. G. Lee, and A. Tavakkoli, “A non-invasive approach to monitor anemia during long-duration spaceflight with retinal fundus images and deep learning,” Life Sciences and Space Research, vol. 33, pp. 69–71, 2022.
9. G. C. M. Siontis, R. Sweda, P. A. Noseworthy, P. A. Friedman, K. C. Siontis, and C. J. Patel, “Development and validation pathways of artificial intelligence tools evaluated in randomised clinical trials,” BMJ Health & Care Informatics, vol. 28, no. 1, article e100466, 2021.
10. S. A. Kamran, A. Tavakkoli, and S. L. Zuckerbrod, “Improving robustness using joint attention network for detecting retinal degeneration from optical coherence tomography images,” in 2020 IEEE International Conference on Image Processing (ICIP), pp. 2476–2480, Abu Dhabi, United Arab Emirates, 2020.
11. S. A. Kamran, K. F. Hossain, A. Tavakkoli, S. L. Zuckerbrod, K. M. Sanders, and S. A. Baker, “RV-GAN: segmenting retinal vascular structure in fundus photographs using a novel multi-scale generative adversarial network,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, Cham, 2021.
12. A. Tavakkoli, S. A. Kamran, K. F. Hossain, and S. L. Zuckerbrod, “A novel deep learning conditional generative adversarial network for producing angiography images from retinal fundus photographs,” Scientific Reports, vol. 10, pp. 1–15, 2020.
13. S. A. Kamran, K. F. Hossain, A. Tavakkoli, and S. L. Zuckerbrod, “Attention2angiogan: synthesizing fluorescein angiography from retinal fundus images using generative adversarial networks,” in 2020 25th International Conference on Pattern Recognition (ICPR), pp. 9122–9129, Milan, Italy, 2021.
14. O. Ronneberger, P. Fischer, and T. Brox, “U-net: convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, Cham, 2015.
15. X. Xiao, S. Lian, Z. Luo, and S. Li, “Weighted res-unet for high-quality retina vessel segmentation,” in 2018 9th International Conference on Information Technology in Medicine and Education (ITME), pp. 327–331, Hangzhou, China, 2018.
16. W. Chen, B. Liu, S. Peng, J. Sun, and X. Qiao, “S3D-UNet: separable 3D U-Net for brain tumor segmentation,” in International MICCAI Brainlesion Workshop, Springer, Cham, 2018.
17. H. Cao, Y. Wang, J. Chen et al., “Swin-unet: Unet-like pure transformer for medical image segmentation,” 2021, https://arxiv.org/abs/2105.05537.
18. M. Frid-Adar, I. Diamant, E. Klang, M. Amitai, J. Goldberger, and H. Greenspan, “GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification,” Neurocomputing, vol. 321, pp. 321–331, 2018.
19. S. Y. Wang, S. Pershing, A. Y. Lee, and AAO Taskforce on AI and AAO Medical Information Technology Committee, “Big data requirements for artificial intelligence,” Current Opinion in Ophthalmology, vol. 31, no. 5, pp. 318–323, 2020.
20. J. Ong, A. Tavakkoli, N. Zaman et al., “Terrestrial health applications of visual assessment technology and machine learning in spaceflight associated neuro-ocular syndrome,” npj Microgravity, vol. 8, no. 1, p. 37, 2022.
21. J. Ong, N. Zaman, S. A. Kamran et al., “Contributed session I: A multi-modal visual assessment system for monitoring spaceflight associated neuro-ocular syndrome (SANS) during long duration spaceflight,” Journal of Vision, vol. 22, no. 3, p. 6, 2022.
22. P. M. Cheng and H. S. Malhi, “Transfer learning with convolutional neural networks for classification of abdominal ultrasound images,” Journal of Digital Imaging, vol. 30, no. 2, pp. 234–243, 2017.
23. A. Bizzego, G. Gabrieli, and G. Esposito, “Deep neural networks and transfer learning on a multivariate physiological signal dataset,” Bioengineering, vol. 8, no. 3, p. 35, 2021.
24. R. Hao, K. Namdar, L. Liu, and F. Khalvati, “A transfer learning–based active learning framework for brain tumor classification,” Frontiers in Artificial Intelligence, vol. 4, p. 635766, 2021.
25. J. Ong, A. G. Lee, and H. E. Moss, “Head-down tilt bed rest studies as a terrestrial analog for spaceflight associated neuro-ocular syndrome,” Frontiers in Neurology, vol. 12, p. 648958, 2021.
26. G. Taibbi, M. Young, R. J. Vyas et al., “Opposite response of blood vessels in the retina to 6° head-down tilt and long-duration microgravity,” npj Microgravity, vol. 7, no. 1, p. 38, 2021.
27. C. J. Kelly, A. Karthikesalingam, M. Suleyman, G. Corrado, and D. King, “Key challenges for delivering clinical impact with artificial intelligence,” BMC Medicine, vol. 17, no. 1, p. 195, 2019.
28. E. J. Hwang, S. Park, K. N. Jin et al., “Development and validation of a deep learning–based automated detection algorithm for major thoracic diseases on chest radiographs,” JAMA Network Open, vol. 2, no. 3, article e191095, 2019.
29. J. Ong, A. Tavakkoli, G. Strangman et al., “Neuro-ophthalmic imaging and visual assessment technology for spaceflight associated neuro-ocular syndrome (SANS),” Survey of Ophthalmology, vol. 67, no. 5, pp. 1443–1466, 2022.

Copyright © 2022 Ethan Waisberg et al. Exclusive Licensee Beijing Institute of Technology Press. Distributed under a Creative Commons Attribution License (CC BY 4.0).
