Principle

All about images, toward the essence

We explore all aspects of images, including acquisition, analysis, modeling, expression, and interpretation, striving to uncover their fundamental nature and principles.

The word “image” often refers to a digital photograph, typically represented in RGB format. However, in a broader sense, an image can be any pattern that encodes the features or state of an object or phenomenon. In this sense, an image is a form of representation in both scientific and mathematical contexts.

RGB images, for example, are just one way of capturing how objects appear to the human visual system. Other examples include depth images, spectral images, CT (computed tomography) images, and MRI (magnetic resonance imaging) images, each revealing different aspects of the target.

In mathematics, an “image” refers to the result of mapping a set or structure from one space to another via a function. This concept is fundamental to Image Informatics, where we regard images as structures that emerge when real-world objects or phenomena are mapped, through sensors and computational processes, into new descriptive spaces such as pixels, waveforms, point clouds, or abstract feature vectors.
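The mathematical sense of “image” used above can be made concrete with a minimal sketch: the image of a set S under a function f is simply the set of all values f produces from S. The function and sets below are hypothetical examples, not part of any specific imaging system.

```python
def image(f, S):
    """Return the image of the set S under the mapping f: {f(x) for x in S}."""
    return {f(x) for x in S}

# Example: the squaring map collapses -2 and 2 (and -1 and 1) to the
# same point, so the image is smaller than the original set -- a toy
# analogue of the information loss inherent in any imaging process.
S = {-2, -1, 0, 1, 2}
print(image(lambda x: x * x, S))  # the image {0, 1, 4}
```

The same abstraction covers physical imaging: a sensor plays the role of f, mapping a real-world scene into a descriptive space of pixels or feature vectors.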

The Image Informatics Laboratory conducts interdisciplinary research on the fundamental structures, transformations, and expressive methods of such “patterns”. Our research extends beyond visual images to include sensor signals, shapes, motions, and time series, addressing challenges comprehensively from acquisition and analysis to interpretation and presentation.

1. Computational Imaging

An image is not the object itself, but a modality-dependent description. It represents one aspect of a target captured under specific measurement or observation conditions. For example, an RGB image records the intensity of reflected light from a particular viewpoint and under specific illumination. Changing the viewpoint or the measurement modality results in a different image. Every imaging process involves some degree of information loss. However, by acquiring data from multiple modalities and under diverse observation conditions, and by integrating them using computational methods, it becomes possible to recover essential and invariant patterns and structures. This process can be viewed as consisting of two stages: encoding (acquisition) and decoding (integration and interpretation). These stages involve a wide range of technologies, including optics, electronics, sparse modeling, deep learning, and optimization.
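The encoding/decoding view described above can be sketched with a toy linear model. Here the hidden vector x_true, the observation operators A1 and A2, and the use of plain least squares are all illustrative assumptions, not an actual imaging system: each modality alone is an underdetermined projection that discards information, but integrating the two observations yields a jointly solvable system.

```python
import numpy as np

rng = np.random.default_rng(0)
x_true = rng.standard_normal(6)      # hidden scene (feature vector)

# Encoding: each modality observes the scene through its own operator.
# Alone, each is underdetermined (fewer measurements than unknowns).
A1 = rng.standard_normal((3, 6))     # e.g. one viewpoint / modality
A2 = rng.standard_normal((4, 6))     # a second, different modality
y1, y2 = A1 @ x_true, A2 @ x_true

# Decoding: stack both observations into one linear system and solve
# jointly; together the modalities constrain the scene enough for
# least-squares recovery.
A = np.vstack([A1, A2])
y = np.concatenate([y1, y2])
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)

print(np.allclose(x_hat, x_true))    # prints True: joint decoding recovers x
```

In a real computational-imaging pipeline the operators are physical (optics, tomographic projections), the measurements are noisy, and the decoder is typically a regularized optimization or a learned model rather than plain least squares, but the two-stage structure is the same.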

Our laboratory advances research in computational imaging and multimodal information integration across the full imaging pipeline, from designing novel imaging systems and algorithms to developing methods for reconstructing, enhancing, and interpreting complex image data. We aim to extract meaningful patterns from diverse sources such as medical scans, satellite images, 3D sensor data, and time-series measurements.

2. Understanding Motion and Deformation

Capturing how images change over time is fundamental for understanding and predicting objects and phenomena. For example, human motion is constrained by skeletal structure. Similarly, temporal patterns observed in lung CT scans, pathology slides, or cloud movements used in weather prediction reflect underlying physical and structural constraints.

We model such time-dependent deformations and motions using geometric and statistical methods. By formulating deformations as geometric transformations, we can impose constraints such as isometry, conformality, and smoothness. This enables the analysis and reconstruction of complex and nonlinear changes. Through this approach, we develop a unified framework for handling diverse and dynamic images found in the real world, ranging from living organisms to natural phenomena.
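A constraint such as isometry, mentioned above, has a direct computational meaning: a transformation is isometric if it preserves all pairwise distances. The helper below is a hypothetical illustration of checking that property numerically; rotations and translations pass, while a non-uniform stretch does not.

```python
import numpy as np

def is_isometry(transform, points, tol=1e-9):
    """Check whether `transform` preserves all pairwise distances among `points`."""
    P = np.asarray(points)
    Q = np.asarray([transform(p) for p in points])
    d_before = np.linalg.norm(P[:, None] - P[None, :], axis=-1)
    d_after = np.linalg.norm(Q[:, None] - Q[None, :], axis=-1)
    return np.allclose(d_before, d_after, atol=tol)

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 1.0)]
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# A rigid motion (rotation plus translation) is an isometry.
print(is_isometry(lambda p: R @ np.asarray(p) + 1.0, pts))      # True
# A non-uniform stretch changes distances, so it is not.
print(is_isometry(lambda p: np.asarray(p) * [2.0, 1.0], pts))   # False
```

In deformation modeling, such predicates become soft penalties in an optimization: instead of requiring exact distance preservation, one minimizes the deviation from isometry (or conformality, or smoothness) while fitting the observed motion.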

3. Visualization and Expression

Effective visualization and expression of information require careful design of perspective: deciding which information to select and how to present it so that it is intuitive and meaningful. Rather than presenting all information exhaustively, the goal is to render, as an image, what is most relevant for a given purpose. This approach supports clearer understanding, new discoveries, and better decision making.

Our laboratory develops advanced visualization and presentation techniques that take into account human perception and interaction. In addition to conventional two-dimensional displays, we explore a wide range of output modalities, including 3D displays, 3D printing, and other innovative devices, to expand how images are presented and understood.