Training and inference processes of deep neural network-based virtual staining. (a) Initial coarse registration steps used to match the images of the label-free tissue sections with the corresponding images of the histologically stained tissue sections. This image coregistration is performed by first extracting the region with the maximum correlation and then applying a multimodal rigid registration. (b) Procedure used to train the neural networks with a conditional GAN-based loss function, where α denotes a weight that balances the per-pixel penalty against the global distribution loss. (c) Steps used to fine-tune the image coregistration so that pixel-level accuracy is achieved through an elastic transformation. The autofluorescence images are passed through a style-transfer network, whose output serves as an intermediate image for the correlation-based elastic coregistration. (d) After training, the network can virtually stain new cases it has never seen before, simply by passing them through the deep neural network.
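The α-weighted loss in panel (b) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a least-squares adversarial term for the "global distribution loss" and an L1 per-pixel penalty, and the function name `generator_loss` is hypothetical.

```python
import numpy as np

def generator_loss(fake_scores, generated, target, alpha):
    """Conditional GAN generator loss: an adversarial (global distribution)
    term plus an alpha-weighted per-pixel penalty.

    Assumptions for illustration: least-squares GAN adversarial term,
    L1 per-pixel penalty; the actual terms may differ.
    """
    # Adversarial term: push discriminator scores on generated
    # (virtually stained) images toward 1, i.e. toward "real".
    adversarial = np.mean((fake_scores - 1.0) ** 2)
    # Per-pixel penalty between the virtually stained output and the
    # coregistered, histologically stained ground-truth image.
    per_pixel = np.mean(np.abs(generated - target))
    # Alpha balances pixel-level fidelity against distribution matching.
    return adversarial + alpha * per_pixel
```

A larger α emphasizes pixel-level agreement with the registered ground truth, while a smaller α lets the adversarial term dominate, favoring realistic global stain appearance.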