Useful Vocabulary

Computational imaging- the joint design of imaging system hardware and software to form images indirectly from measurements, using algorithms and a significant amount of computation to overcome hardware limitations of optics and sensors, such as resolution and noise [1].

Deep learning- a subset of machine learning that allows machines to solve complex problems even when the data set is very diverse, unstructured, and interconnected. Involves two phases: training and evaluation of a deep neural network [2].

TensorFlow- an interface for expressing machine learning algorithms, and an implementation for executing such algorithms [3].

Convolutional Neural Network (CNN)- a class of deep neural networks most commonly applied to analyzing visual imagery.
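The core operation of a CNN layer is a 2-D convolution, in which a small learned kernel is slid across the image. A minimal sketch in plain Python; the image and kernel values below are illustrative, not from the source:

```python
# Minimal 2-D convolution, the building block of a CNN layer.
# (In CNN practice this sliding dot product is cross-correlation.)

def convolve2d(image, kernel):
    """Valid-mode sliding dot product of kernel over image."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

# Illustrative input: an image with a vertical edge, and a
# Sobel-like vertical-edge filter.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]
response = convolve2d(image, kernel)  # strong response at the edge
```

In a real CNN the kernel values are learned from data rather than hand-chosen, and many such kernels are applied in each layer.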

Ground Truth- in machine learning, the objectively correct classification of the training set, against which supervised learning techniques are judged. "Ground truthing" is the process of gathering the proper objective (provable) data for this purpose. Evaluation campaigns and benchmarking events require a consistent ground truth for accurate and reliable evaluation of the participating systems or methods [4].

Physical pre-processing technique- the function relating low-resolution images to their high-resolution counterparts is learned from a given dataset and then used to modify the imaging hardware. By using prior knowledge gained from static images, the hardware physically pre-processes the data, avoiding redundancy in information collection [5].

Super-resolution problem- the problem of converting low-resolution images into higher-resolution ones.
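The naive baseline that learned super-resolution methods aim to beat is simple interpolation. A hypothetical sketch of nearest-neighbor upsampling in plain Python (the pixel values are illustrative):

```python
def upsample_nearest(image, factor):
    """Nearest-neighbor upsampling: each low-resolution pixel is
    replicated into a factor-by-factor block. This adds pixels but
    no new detail, which is why learned methods are needed."""
    return [
        [image[i // factor][j // factor]
         for j in range(len(image[0]) * factor)]
        for i in range(len(image) * factor)
    ]

low_res = [[1, 2],
           [3, 4]]
high_res = upsample_nearest(low_res, 2)
```

Learned super-resolution instead uses a model trained on low/high-resolution pairs to hallucinate plausible detail that interpolation cannot recover.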

Space–bandwidth product (SBP)- a measure of the information a signal contains and of the rendering capacity of a system; proportional to the product of the field-of-view and the resolution [2]. Encompasses spatial resolution, field-of-view, and temporal resolution.
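The proportionality can be made concrete by counting resolvable spots across the field of view. A rough, hypothetical back-of-the-envelope estimate in Python (the field-of-view and resolution values are illustrative, not from the source):

```python
# Illustrative SBP estimate: number of resolvable spots over a
# square field of view. All numbers below are hypothetical.

fov_mm = 5.0          # field-of-view side length, in millimeters
resolution_um = 2.0   # smallest resolvable feature, in micrometers

fov_um = fov_mm * 1000.0
sbp = (fov_um / resolution_um) ** 2  # resolvable spots in the FOV
print(f"SBP ~ {sbp:.2e} resolvable pixels")
```

The trade-off the glossary alludes to is visible here: widening the field of view or sharpening the resolution each raises the SBP, and conventional optics cap how large that product can be.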

Spatial resolution- refers to the number of pixels within a digital image. The greater the number of pixels, the higher the spatial resolution.

Field-of-view- the maximum area visible through a camera lens or microscope eyepiece at a particular position and orientation in space.

Temporal resolution- the time required to take the multiple measurements of the cross-section and then reconstruct the image.

High-throughput imaging- automated microscopy and image analysis to visualize and quantitatively capture specimens, or real-time live imaging.

Fourier ptychographic microscopy- a computational imaging technique in which variable illumination patterns are used to collect multiple low-resolution images, which are then computationally combined into an image whose resolution exceeds that of any single image from the microscope. Because many images must be acquired, the technique has poor temporal resolution [5].

Light field microscopy- a technique that records four-dimensional slices of the five-dimensional plenoptic function by applying an appropriate ray-space parametrization. In many cases, microlens arrays (MLAs) are used in the intermediate image plane of optical instruments to multiplex 2D spatial and 2D directional information on the same image sensor [6].

3D phase microscopy: interferometry-based technique- images are first taken interferometrically to directly record both phase and amplitude information of the scattered field. Next, a tomographic reconstruction algorithm is devised to recover the sample’s 3D refractive index distribution. This typically requires additional specialized and expensive hardware, which is not always compatible with standard microscopes [7].

3D phase microscopy: intensity-only technique- employs tomographic phase reconstruction from intensity-only measurements. However, changing focus not only requires mechanical scanning but also increases the acquisition time and data size, both of which are undesirable for high-throughput applications [7].

Optical multiplexing- a way of sending multiple signals or streams of information over a communications link at the same time in the form of a single, complex signal. In microscopy, a single LED illumination pattern can multiplex the information from multiple focal planes into a single image [8].
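The idea of combining several focal planes into one measurement can be sketched as a weighted sum. A hypothetical toy example in Python; the plane values and weights stand in for real focal-plane images and an LED illumination pattern:

```python
# Hypothetical sketch: two focal-plane images multiplexed into a
# single measurement as a weighted sum. All values are illustrative.

plane_a = [[1, 2], [3, 4]]   # stand-in for focal plane 1
plane_b = [[5, 6], [7, 8]]   # stand-in for focal plane 2
w_a, w_b = 0.6, 0.4          # stand-ins for illumination weights

multiplexed = [
    [w_a * plane_a[i][j] + w_b * plane_b[i][j]
     for j in range(len(plane_a[0]))]
    for i in range(len(plane_a))
]
```

In the deep-learned setting of [8], the weights are not hand-picked: the illumination pattern is optimized jointly with the reconstruction algorithm so that the individual planes can be recovered from the single multiplexed image.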

References

[1] Waller, L. & Van Duzer, T. (2017). Computational Microscopy. CITRIS.

[2] Cheng, Y., Strachan, M., Weiss, Z., Deb, M., Carone, D., & Ganapati, V. (2018). Illumination Pattern Design with Deep Learning for Single-Shot Fourier Ptychographic Microscopy.

[3] Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., … Zheng, X. (2016). TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems.

[4] Foncubierta Rodríguez, A., & Müller, H. (2012, October). Ground truth generation in medical imaging: a crowdsourcing-based iterative approach. In Proceedings of the ACM multimedia 2012 workshop on Crowdsourcing for multimedia, 9-14.

[5] Robey, A., & Ganapati, V. (2018). Optimal Physical Preprocessing for Example-Based Super-Resolution. https://doi.org/10.1364/OE.26.031333

[6] Bimber, O. & Schedl, D. (2019). Light-Field Microscopy: A Review. Journal of Neurology & Neuromedicine, (4)1, 1-6. https://doi.org/10.29245/2572.942X/2019/1.1237. 

[7] Ling, R., Tahir, W., Lin, H., Lee, H., & Tian, L. (2018). High-throughput intensity diffraction tomography with a computational microscope. Biomedical Optics Express, 9(5), 2130–2141. https://doi.org/10.1364/boe.9.002130

[8] Cheng, Y., Sabry, Z., Strachan, M., Cornell, S., Chanenson, J., Collins, E., & Ganapati, V. (2019). Deep Learned Optical Multiplexing for Multi-Focal Plane Microscopy.