What Is Missing from Contemporary AI? The World
Open access journal Intelligent Computing, published in affiliation with Zhejiang Lab, publishes the latest research outcomes and technological breakthroughs in intelligent computing.
Led by Shiqiang Zhu (Zhejiang Lab) and Ninghui Sun (Institute of Computing Technology, CAS), Intelligent Computing's editorial board comprises leading experts in the field of intelligent computing.
Latest Articles
Noise-Adaptive Intelligent Programmable Meta-Imager
We present an intelligent programmable computational meta-imager that tailors its sequence of coherent scene illuminations not only to a specific information-extraction task (e.g., object recognition) but also to different types and levels of noise. We systematically study how the learned illumination patterns depend on the noise, and we discover that trends in the intensity and overlap of the learned illumination patterns can be understood intuitively. We conduct our analysis based on an analytical coupled-dipole forward model of a microwave dynamic metasurface antenna (DMA); we formulate a differentiable end-to-end information-flow pipeline comprising the programmable physical measurement process, including noise, as well as the subsequent digital processing layers. This pipeline allows us to jointly inverse-design the programmable physical weights (DMA configurations that determine the coherent scene illuminations) and the trainable digital weights. Our noise-adaptive intelligent meta-imager outperforms the conventional use of pseudo-random illumination patterns most clearly under conditions that make the extraction of sufficient task-relevant information challenging: latency constraints (limiting the number of allowed measurements) and strong noise. Programmable microwave meta-imagers deployed for indoor surveillance and earth observation will routinely be confronted with these conditions.
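The joint inverse design of physical and digital weights described above can be illustrated with a toy linear pipeline (our construction, not the paper's coupled-dipole model): the rows of a matrix `A` stand in for programmable illumination patterns, a matrix `W` for the digital decoder, and both are trained end-to-end by gradient descent through the noisy measurement.

```python
import numpy as np

# Toy sketch of end-to-end inverse design: y = A x + noise is the "physical"
# measurement, W y the digital reconstruction. A (illuminations) and W
# (digital weights) are trained jointly on the reconstruction error.
rng = np.random.default_rng(0)
n_pix, n_meas, noise_std = 16, 8, 0.1

A = rng.normal(size=(n_meas, n_pix)) * 0.1   # "physical" illumination weights
W = rng.normal(size=(n_pix, n_meas)) * 0.1   # trainable digital weights

x = rng.normal(size=(n_pix, 64))                           # batch of scenes
noise = rng.normal(scale=noise_std, size=(n_meas, 64))     # measurement noise

def loss(A, W):
    r = W @ (A @ x + noise) - x              # reconstruction residual
    return 0.5 * np.mean(np.sum(r**2, axis=0))

lr, initial = 0.05, loss(A, W)
for _ in range(400):
    y = A @ x + noise
    r = W @ y - x
    grad_W = (r @ y.T) / x.shape[1]          # dL/dW
    grad_A = (W.T @ r @ x.T) / x.shape[1]    # dL/dA (chain rule through y)
    W -= lr * grad_W
    A -= lr * grad_A
final = loss(A, W)
print(initial, final)
```

Because the noise enters the forward pass, the learned rows of `A` adapt to it, which is the intuition behind the noise-adaptive illumination patterns studied in the paper.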
How to Build the “Optical Inverse” of a Multimode Fibre
When light propagates through multimode optical fibres (MMFs), the spatial information it carries is scrambled. Wavefront shaping reverses this scrambling, typically one spatial mode at a time—enabling deployment of MMFs as ultrathin microendoscopes. Here, we go beyond sequential wavefront shaping by showing how to simultaneously unscramble all spatial modes emerging from an MMF in parallel. We introduce a passive multiple-scattering element—crafted through the process of inverse design—that is complementary to an MMF and undoes its optical effects. This “optical inverter” makes possible single-shot widefield imaging and super-resolution imaging through MMFs. Our design consists of a cascade of diffractive elements, and can be understood both as a multi-plane light conversion device and as a physically inspired diffractive neural network. This physical architecture outperforms state-of-the-art electronic neural networks tasked with unscrambling light, as it preserves the phase and coherence information of optical signals flowing through it. We show, in numerical simulations, how to efficiently sort and tune the relative phase of up to 400 step-index fibre modes, reforming incoherent images of scenes at arbitrary distances from the fibre facet. Our optical inverter can dynamically adapt to see through experimentally realistic flexible fibres—made possible by moulding optical memory effects into its design. The scheme is based on current fabrication technology, so it could be realised in the near future. Beyond imaging, these concepts open up a range of new avenues for optical multiplexing, communications, and computation in the realms of classical and quantum photonics.
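The core idea can be sketched in a few lines (our simplification, not the paper's diffractive-cascade design): an ideal lossless fibre scrambles its guided modes by a unitary transmission matrix, so the "optical inverse" is that matrix's Hermitian conjugate, which restores the field—amplitude and phase—in a single pass.

```python
import numpy as np

# Conceptual sketch: mode scrambling in an ideal MMF as a random unitary T;
# the optical inverter implements T's Hermitian conjugate, undoing the
# scrambling while preserving phase and coherence.
rng = np.random.default_rng(1)
n_modes = 64

# Random unitary via QR decomposition stands in for the fibre's mode mixing.
Z = rng.normal(size=(n_modes, n_modes)) + 1j * rng.normal(size=(n_modes, n_modes))
T, _ = np.linalg.qr(Z)

field_in = rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)
scrambled = T @ field_in                 # light after the fibre
unscrambled = T.conj().T @ scrambled     # pass through the "optical inverter"

err = np.linalg.norm(unscrambled - field_in) / np.linalg.norm(field_in)
print(err)
```

The paper's contribution is realising an approximation of this inverse passively, as a cascade of diffractive phase masks, rather than as a matrix multiplication in electronics.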
Software Systems Implementation and Domain-Specific Architectures towards Graph Analytics
Graph analytics, which mainly includes graph processing, graph mining, and graph learning, has become increasingly important in several domains, including social network analysis, bioinformatics, and machine learning. However, graph analytics applications suffer from poor locality, limited bandwidth, and low parallelism owing to the irregular sparse structure, explosive growth, and dependencies of graph data. To address these challenges, several programming models, execution modes, and messaging strategies have been proposed to improve performance and the utilization of traditional hardware. In recent years, novel computing and memory devices have emerged, e.g., HMCs, HBM, and ReRAM, providing massive bandwidth and parallelism resources, making it possible to address bottlenecks in graph applications. To facilitate understanding of the graph analytics domain, our study summarizes and categorizes current software system implementations and domain-specific architectures. Finally, we discuss the future challenges of graph analytics.
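One of the programming models surveyed in this space is the vertex-centric ("think like a vertex") model; a minimal sketch of it, running PageRank over a toy directed graph in synchronous supersteps, looks like this:

```python
# Vertex-centric sketch: each superstep, every vertex scatters its rank
# along its out-edges, then all vertices update from the incoming sums.
graph = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # vertex -> out-neighbours
n, d = len(graph), 0.85                        # d: damping factor
rank = {v: 1.0 / n for v in graph}

for _ in range(50):                            # synchronous supersteps
    incoming = {v: 0.0 for v in graph}
    for v, outs in graph.items():              # scatter phase
        for u in outs:
            incoming[u] += rank[v] / len(outs)
    rank = {v: (1 - d) / n + d * incoming[v] for v in graph}

print(rank)
```

The irregular, data-dependent inner loop over neighbours is exactly what causes the poor locality and low parallelism noted above, and what the surveyed execution modes and hardware architectures try to mitigate.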
Virtual Staining of Defocused Autofluorescence Images of Unlabeled Tissue Using Deep Neural Networks
Deep learning-based virtual staining was developed to introduce image contrast to label-free tissue sections, digitally matching histological staining, which is time-consuming, labor-intensive, and destructive to tissue. Standard virtual staining requires high autofocusing precision during the whole slide imaging of label-free tissue, which consumes a significant portion of the total imaging time and can lead to tissue photodamage. Here, we introduce a fast virtual staining framework that can stain defocused autofluorescence images of unlabeled tissue, achieving performance equivalent to virtual staining of in-focus label-free images, while also saving significant imaging time by lowering the microscope’s autofocusing precision. This framework incorporates a virtual autofocusing neural network to digitally refocus the defocused images and then transforms the refocused images into virtually stained images using a successive network. These cascaded networks form a collaborative inference scheme: the virtual staining model regularizes the virtual autofocusing network through a style loss during training. To demonstrate the efficacy of this framework, we trained and blindly tested these networks using human lung tissue. Using 4× fewer focus points with 2× lower focusing precision, we successfully transformed the coarsely-focused autofluorescence images into high-quality virtually stained H&E images, matching the standard virtual staining framework that used finely-focused autofluorescence input images. Without sacrificing the staining quality, this framework decreases the total image acquisition time needed for virtual staining of a label-free whole-slide image (WSI) by ~32%, together with a ~89% decrease in the autofocusing time, and has the potential to eliminate the laborious and costly histochemical staining process in pathology.
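Style losses of the kind used to couple the two networks are commonly computed from Gram matrices of feature maps (an assumption here—the abstract does not specify the exact formulation); a minimal numpy sketch with stand-in feature arrays:

```python
import numpy as np

# Sketch of a Gram-matrix style loss: the Gram matrix captures channel-wise
# feature correlations ("style") rather than spatial content, so matching
# Gram matrices pushes one network's output toward the feature statistics
# another network expects. Feature maps here are random stand-ins.
def gram(feats):
    # feats: (channels, height*width) feature map
    c, hw = feats.shape
    return feats @ feats.T / (c * hw)

def style_loss(f_pred, f_target):
    diff = gram(f_pred) - gram(f_target)
    return float(np.mean(diff ** 2))

rng = np.random.default_rng(2)
target = rng.normal(size=(8, 256))
similar = target + 0.01 * rng.normal(size=(8, 256))   # near-identical stats
different = rng.normal(size=(8, 256)) * 3.0           # very different stats

print(style_loss(similar, target), style_loss(different, target))
```

In the framework above, a term like this lets the downstream staining model regularize the upstream autofocusing network during training.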
Robust Phase Retrieval with Complexity-Guidance for Coherent X-Ray Imaging
Reconstruction of a stable and reliable solution from noisy and incomplete Fourier intensity data is a challenging problem for iterative phase retrieval algorithms. The typical methodology employed in the coherent X-ray imaging (CXI) literature involves thousands of iterations of well-known phase retrieval algorithms, e.g., hybrid input-output (HIO) or relaxed averaged alternating reflections (RAAR), that are concluded with a smaller number of error reduction (ER) iterations. Since a single run of this methodology may not provide a reliable solution, hundreds of trial solutions are first obtained by initializing the phase retrieval algorithm with independent random guesses. The resulting trial solutions are then averaged with appropriate phase adjustment, and the resolution of the averaged reconstruction is assessed by plotting the phase retrieval transfer function (PRTF). In this work, we examine this commonly used RAAR-ER methodology from the perspective of the complexity parameter introduced by us in recent years. It is observed that a single run of the RAAR-ER algorithm provides a solution with undesirable grainy artifacts that persist to some extent even after averaging the multiple trial solutions. The grainy features are spurious in the sense that they are smaller in size compared to the resolution predicted by the PRTF curve. This inconsistency can be addressed by a novel methodology that we refer to as complexity-guided RAAR (CG-RAAR). The methodology is demonstrated with simulations and experimental data sets from the CXIDB database. In addition to providing a consistent solution, CG-RAAR is also observed to require a reduced number of independent trials for averaging.
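The basic iteration underlying this family of algorithms can be sketched with plain error reduction (ER), the simplest relative of HIO and RAAR: alternate between enforcing the measured Fourier magnitudes and enforcing object-domain constraints (support and non-negativity). This 1-D toy is our illustration, not the paper's CG-RAAR.

```python
import numpy as np

# Minimal error-reduction phase retrieval: recover a non-negative signal
# with known support from its Fourier magnitudes alone.
rng = np.random.default_rng(3)
n = 32
support = np.zeros(n, bool)
support[:8] = True                       # object confined to a known support
truth = np.zeros(n)
truth[:8] = rng.random(8)
mag = np.abs(np.fft.fft(truth))          # "measured" Fourier magnitudes

def mag_err(x):
    return np.linalg.norm(np.abs(np.fft.fft(x)) - mag) / np.linalg.norm(mag)

x = rng.random(n) * support              # random initial guess
err0 = mag_err(x)
for _ in range(2000):
    F = np.fft.fft(x)
    F = mag * np.exp(1j * np.angle(F))   # impose measured magnitudes
    x = np.fft.ifft(F).real              # back to object domain
    x[~support] = 0                      # impose support constraint
    x = np.clip(x, 0, None)              # impose non-negativity
err = mag_err(x)
print(err0, err)
```

HIO and RAAR replace the hard object-domain projection with feedback terms to escape the stagnation ER is prone to; the complexity guidance proposed above further constrains which of the many consistent solutions is accepted.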
Resource Configuration Tuning for Stream Data Processing Systems via Bayesian Optimization
Stream data processing systems are becoming increasingly popular in the big data era. Systems such as Apache Flink typically provide a number (e.g., 30) of configuration parameters to flexibly specify the amount of resources (e.g., CPU cores and memory) allocated for tasks. These parameters significantly affect task performance, but it is hard to tune them manually for optimal performance for an unknown program running on a given cluster. A fast, automatic resource-configuration tuning approach is therefore desired. To this end, we propose to leverage Bayesian optimization to automatically tune the resource configurations for stream data processing systems. We first select a machine learning model—Random Forest—to construct accurate performance models for a stream data processing program. We then use the Bayesian optimization (BO) algorithm, together with the performance models, to iteratively search for optimal configurations. Experimental results show that our approach improves the 99th-percentile tail latency by 2.62× on average and up to 5.26×. Furthermore, our approach improves throughput by 1.05× on average and up to 1.21×.
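The tuning loop described above can be sketched end to end (our simplification, not the paper's implementation): a surrogate model predicts latency for each candidate configuration, an acquisition function picks the most promising one to evaluate next, and the measurement is fed back into the model. Here a bootstrap ensemble of quadratic fits stands in for the Random-Forest surrogate, the latency curve is synthetic, and the single knob being tuned is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

def latency(cores):  # synthetic tail-latency response, minimum near 24 cores
    return (cores - 24.0) ** 2 / 50.0 + 5.0 + rng.normal(scale=0.1)

candidates = np.arange(1, 65, dtype=float)            # candidate configs
X = list(rng.choice(candidates, size=6, replace=False))  # initial samples
Y = [latency(c) for c in X]

for _ in range(15):                                   # BO-style iterations
    preds = []
    for _b in range(20):                              # bootstrap ensemble
        idx = rng.integers(0, len(X), len(X))
        coef = np.polyfit(np.array(X)[idx], np.array(Y)[idx], 2)
        preds.append(np.polyval(coef, candidates))
    preds = np.array(preds)
    acq = preds.mean(0) - 2.0 * preds.std(0)          # lower confidence bound
    nxt = candidates[int(np.argmin(acq))]             # next config to try
    X.append(nxt)
    Y.append(latency(nxt))

best = X[int(np.argmin(Y))]
print(best, min(Y))
```

The lower-confidence-bound acquisition trades exploitation (low predicted latency) against exploration (high ensemble disagreement), which is what lets BO find good configurations in far fewer evaluations than grid or random search.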