Multiplexed Structured Image Capture (MuSIC): Applications for High-speed Imaging and Field of View Extension
Authors: M. Gragston, C. Smith, W. McCord, Z. He, N. Williamson, Z. Zhang.
In this talk, Mark Gragston presents a computational imaging technique, similar to structured illumination, in which a unique periodic patterning of light, or spatial frequency modulation, is applied just before recording in order to encode multiple images into a single exposure. The technique is called Multiplexed Structured Image Capture (MUSIC), and it essentially trades spatial resolution for extra frame storage. Advantages of MUSIC include its very broad imaging speed range, which makes it scalable. However, it is limited in image sequence depth, that is, the number of frames that can be stored. Nevertheless, it is useful for field-of-view extension, in which a desired full image is assembled from multiple smaller fields of view, as well as for high-speed imaging applications such as aerospace photography.
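As an illustration of the frequency-multiplexing idea, the following Python sketch encodes several frames into one exposure by modulating each with a distinct sinusoidal carrier and recovers them by windowing around each carrier in the Fourier domain. The function names, carrier choices, and window size are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def music_encode(frames, carriers):
    """Encode a list of frames into one exposure by modulating each frame
    with a distinct spatial carrier frequency (illustrative sketch)."""
    ny, nx = frames[0].shape
    y, x = np.mgrid[0:ny, 0:nx]
    exposure = np.zeros((ny, nx))
    for frame, (fy, fx) in zip(frames, carriers):
        # Sinusoidal pattern with (fy, fx) cycles per image; carriers should be
        # well separated from each other and from DC so the bands do not overlap.
        pattern = 0.5 * (1 + np.cos(2 * np.pi * (fx * x / nx + fy * y / ny)))
        exposure += frame * pattern
    return exposure

def music_decode(exposure, carrier, half_width=12):
    """Recover one frame by selecting its carrier-shifted copy in the Fourier
    domain and demodulating it back to baseband (a low-pass filtered version,
    reflecting the spatial-resolution trade-off)."""
    ny, nx = exposure.shape
    F = np.fft.fftshift(np.fft.fft2(exposure))
    cy, cx = ny // 2 + carrier[0], nx // 2 + carrier[1]
    window = np.zeros_like(F)
    window[cy - half_width:cy + half_width, cx - half_width:cx + half_width] = 1
    band = F * window
    # Shift the selected band back to the spectrum center and invert.
    band = np.roll(band, (-carrier[0], -carrier[1]), axis=(0, 1))
    return np.abs(np.fft.ifft2(np.fft.ifftshift(band)))
```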
Phase Imaging with Computational Specificity (PICS)
Author: G. Popescu
Professor Gabriel Popescu, from the University of Illinois, presents the value of using phase, rather than intensity, for imaging biological structures, as well as ways of adding specificity to phase imaging.
Fluorescence microscopy has been used heavily in cell biology for decades; however, the process of staining or labeling cells, which involves the introduction of dyes and fluorophores, can exert ill effects on cellular function. A potential solution is Quantitative Phase Imaging (QPI), which allows for label-free imaging that is nondestructive, fast, and quantitative. QPI has many applications, including diffraction tomography, quantifying cell growth, and diagnosing cancer.
The innovative microscopy technique that Dr. Popescu’s lab has developed, Phase Imaging with Computational Specificity (PICS), combines quantitative phase imaging with artificial intelligence and enables information about unlabeled live cells to be gathered with high specificity. The underlying QPI data come from SLIM (spatial light interference microscopy) and GLIM (gradient light interference microscopy), to which deep learning is then applied. One benefit is that both SLIM and GLIM can be added onto existing microscopes and are thus compatible with the fluorescence channels, providing simultaneous phase and fluorescence images from the same field of view. Moreover, this approach can substitute for several frequently used tags and stains, removing the difficulties associated with chemical tagging and instead allowing for non-destructive quantification of cell growth over multiple cell cycles.
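To make the idea concrete, here is a minimal, hypothetical Python (PyTorch) sketch of the PICS training concept: a small convolutional network learns to map a QPI phase image to a co-registered fluorescence image from the same field of view. The architecture, names, and data below are stand-ins for illustration, not the published PICS model.

```python
import torch
import torch.nn as nn

class PhaseToFluorescence(nn.Module):
    """Toy image-to-image network: phase image in, fluorescence-like map out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, phase):
        return self.net(phase)

model = PhaseToFluorescence()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-ins for co-registered phase/fluorescence pairs, shape (N, 1, H, W).
phase = torch.randn(4, 1, 128, 128)          # would come from SLIM/GLIM
fluorescence = torch.randn(4, 1, 128, 128)   # would come from the fluorescence channel

# One training step: predict fluorescence from phase and regress to the measurement.
prediction = model(phase)
loss = loss_fn(prediction, fluorescence)
loss.backward()
optimizer.step()
```

Once trained on such pairs, the network can be applied to phase images alone, which is what provides the label-free specificity described above.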
DiffuserCam: computational microscopy with a lensless imager
Author: L. Waller
Laura Waller addresses the canonical lens imaging problem and suggests taking a camera, removing its lens, replacing it with a scattering element (such as double-sided tape), pointing the sensor at the world, and taking a picture. From this image, Dr. Waller and colleagues solve an inverse problem to reconstruct the scene.
This idea is a way to replace the bulky hardware often required for 3D imaging with a more compact and inexpensive single-shot lensless system. The easy-to-build DiffuserCam consists of a diffuser, a smooth random optical surface, placed in front of a standard 2D image sensor. Every 3D point within the scene, or volume of interest, creates a unique response on the sensor. To solve the resulting large-scale inverse problem, Waller uses a physics-based approximation and a simple calibration scheme.
However, an issue with the off-the-shelf diffuser is that the regions between focused spots are not fully dark, which lowers the signal-to-noise ratio and makes the deconvolution ill-posed. The group therefore moved to randomly spaced microlenses that focus light to multiple spots; because the spots are randomly distributed, there is no repeating pattern when a point in the scene shifts, and ambiguity in the reconstruction is avoided.
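A minimal sketch of the single-plane (2D) version of this inverse problem follows, assuming a shift-invariant convolution with a calibrated caustic PSF and using plain gradient descent with a non-negativity constraint in place of the more sophisticated solvers used in practice; the names and normalizations are illustrative, and the real system sums over depth planes with depth-dependent PSFs.

```python
import numpy as np

def forward(scene, psf):
    """DiffuserCam-style forward model (2D sketch): the sensor measurement is the
    scene convolved with the diffuser's caustic PSF (same array size assumed)."""
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))

def reconstruct(measurement, psf, iters=200, step=1.0):
    """Recover the scene by gradient descent on ||A x - b||^2 with x >= 0."""
    H = np.fft.fft2(psf)
    Hconj = np.conj(H)
    B = np.fft.fft2(measurement)
    x = np.zeros_like(measurement)
    lipschitz = np.max(np.abs(H)) ** 2  # crude bound for a stable step size
    for _ in range(iters):
        X = np.fft.fft2(x)
        grad = np.real(np.fft.ifft2(Hconj * (H * X - B)))  # A^T (A x - b)
        x = np.maximum(x - (step / lipschitz) * grad, 0.0)  # project onto x >= 0
    return x
```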
This is a particularly important development for microscopy, in which live samples are often used and fast imaging is therefore desired along with high resolution across a large volume. Some of the numerous applications that the speaker demonstrates include whole organism bioimaging and neural activity tracking in vivo.
Defining the Digital Camera
Author: D. Brady
Professor David Brady’s research focuses on issues related to the nature of the digital camera. In this talk he analyzes the limitations of digital photography and proposes a new strategy to greatly improve the technology.
For the last 100 years, the basic design of the camera has not changed significantly: it consists of a camera back and a removable lens. Yet Dr. Brady argues that this model is no longer viable and puts forward an innovative model of his own. Instead of employing a single lens, this new digital camera will be composed of an array of microcameras, in which the lens does not form the image; rather, the image is formed computationally. Second, the camera must contain a computer in order to run the sophisticated software. Third, the camera must become a dynamic sampling engine that records continuously, reporting data collected over a period of time rather than at a single moment. Such a device allows us to explore the world in a much richer way than with traditional media. Challenges the technology faces include making it inexpensive to manufacture and lowering the processing power required per pixel.
An example of this technology in action is the Aqueti Mantis camera, which uses an array of microcameras and carries out most of the video processing within the camera head. The microcameras are programmable, and the system uses a cloud-style architecture: data is captured at the front end by a thin sheet of optics, and an array of ten single-board computers at the back end executes video analysis, image warping, and other functions. Applications include security monitoring for airports, train stations, and roadways, as well as virtual travel and sporting events.
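As a rough illustration of the back-end image-warping step, the sketch below warps each microcamera tile into a common panorama frame using per-camera homographies estimated during calibration and blends them by simple averaging. This is a generic composition example under an assumed calibrated geometry, not Aqueti's actual pipeline; the function name and blending scheme are hypothetical.

```python
import numpy as np
import cv2

def compose_tiles(tiles, homographies, panorama_shape):
    """Warp each microcamera tile into a shared panorama frame and blend.

    tiles: list of images (H, W) or (H, W, 3); homographies: list of 3x3 arrays
    mapping tile coordinates to panorama coordinates."""
    panorama = np.zeros(panorama_shape, dtype=np.float32)
    weight = np.zeros(panorama_shape, dtype=np.float32)
    h, w = panorama_shape[0], panorama_shape[1]
    for tile, H in zip(tiles, homographies):
        warped = cv2.warpPerspective(tile.astype(np.float32), H, (w, h))
        mask = cv2.warpPerspective(np.ones_like(tile, dtype=np.float32), H, (w, h))
        panorama += warped
        weight += mask
    # Average where tiles overlap; untouched pixels stay zero.
    return panorama / np.maximum(weight, 1e-6)
```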
Another advantage is that the diversity of the cameras lies not just in field of view but also in focal length, by using an array of multifocal lenses in which each lens has a different focal length, in addition to diversity in color and exposure. The development of microcamera arrays is still at a very early stage, and the manufacturing processes needed to produce such complex designs at the required scale do not currently exist. In order to continue innovating and advancing the digital camera, Dr. Brady suggests being more imaginative about what a camera ought to do.
Towards a vectorial treatment of Fourier ptychographic microscopy
Authors: X. Dai, R. Horstmeyer, S. Xu, P. Konda
The Ganapati lab uses single-shot Fourier ptychographic microscopy (FPM) to reconstruct a three-dimensional map of a given specimen, or biological structure, from fewer images. The lab has been able to jointly optimize the LED illumination pattern, which lights the specimen underneath the microscope, together with the FPM reconstruction algorithm, allowing for single-shot 2D imaging of a given sample type while still maintaining a high space-bandwidth product, that is, a combination of high spatial resolution, wide field of view, and high temporal resolution.
However, in the image formation model described above, the sample and pupil functions are described as scalar quantities. In his talk, Xiang Dai presents a new imaging technique capable of reconstructing the absorption, phase, and polarization-dependent properties of a sample all at once without sacrificing spatial resolution or field of view. Capturing and reconstructing a sample’s polarization-dependent features, such as birefringence and anisotropy, necessitates a different formalism, namely Jones calculus, to describe the light and optical field in vector form. Dai described the system as follows: light from an LED passes through a polarizer, described by a Jones matrix, and then interacts with the specimen. The vectorial light field propagates to the pupil plane via a 2D Fourier transform and passes through the imaging lens; a 2D inverse Fourier transform then brings the field to the image plane, where it passes through another polarizer and, lastly, the intensity of the field is recorded by the digital image sensor.
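A minimal Python sketch of this vectorial forward model, written with Jones calculus, is shown below. The per-pixel Jones matrix for the sample, a scalar pupil, and linear polarizers at assumed angles are simplifying assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

def jones_polarizer(theta):
    """Jones matrix of a linear polarizer at angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, c * s], [c * s, s * s]])

def vectorial_forward(sample_jones, pupil, theta_in, theta_out):
    """LED -> polarizer -> sample (per-pixel Jones matrix) -> FFT to pupil plane
    -> pupil -> inverse FFT to image plane -> analyzer -> intensity on the sensor.

    sample_jones: (H, W, 2, 2) complex array, per-pixel Jones matrix.
    pupil: (H, W) complex pupil function (scalar pupil assumed here)."""
    # Illumination Jones vector after the first polarizer.
    e_in = jones_polarizer(theta_in) @ np.array([1.0, 0.0])
    # Field exiting the sample: per-pixel matrix-vector product.
    field = np.einsum('hwij,j->hwi', sample_jones, e_in)
    # Propagate each polarization component through the pupil.
    spectrum = np.fft.fft2(field, axes=(0, 1))
    filtered = spectrum * pupil[..., None]
    image_field = np.fft.ifft2(filtered, axes=(0, 1))
    # Analyzer in front of the sensor, then intensity measurement.
    analyzed = np.einsum('ij,hwj->hwi', jones_polarizer(theta_out), image_field)
    return np.sum(np.abs(analyzed) ** 2, axis=-1)
```

Reconstruction then amounts to inverting this forward model from intensity measurements taken under different illumination and polarizer settings.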
Self-Supervised Joint Learning of Patterns and Matching for Structured Light Depth Estimation
Authors: E. Fletcher, R. Narayanswamy
In this talk, Evan Fletcher presents a machine learning method for single-shot structured light depth estimation. The scene is illuminated with a known, fixed pattern by a projector, with a camera placed a set distance away. Objects in the scene therefore appear at different positions when viewed from the projector and from the camera (parallax), and this shift of the pattern in image space is measured at each pixel by a matching algorithm. Using this image-space shift measurement, Fletcher is able to calculate the scene depth at each camera pixel. The system trains the matching network at the same time as it optimizes the projected pattern via an in-network differentiable rendering layer, resulting in a single structured light pattern and an ML matching algorithm, and the results indicate very accurate depth estimation.
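The depth computation itself follows from triangulation: for a rectified projector-camera pair, depth is focal length times baseline divided by the measured per-pixel shift (disparity). The sketch below assumes the disparity has already been measured in pixels and uses a simplified rectified geometry; the function name and the numbers in the example are hypothetical.

```python
import numpy as np

def depth_from_disparity(disparity, focal_length_px, baseline_m):
    """Convert a per-pixel disparity map (pixels) to depth (meters)
    via Z = f * B / d, assuming a rectified projector-camera pair."""
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)   # zero disparity -> infinitely far
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# Example: a 35-pixel shift with a 1400 px focal length and a 10 cm baseline
# corresponds to a depth of 1400 * 0.10 / 35 = 4.0 m.
print(depth_from_disparity(np.array([35.0]), 1400.0, 0.10))
```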