This video introduces a high-speed SVBRDF measurement method that can be, for example, 180 times faster than a conventional approach. The spatially varying bidirectional reflectance distribution function (SVBRDF) defines how light is reflected at a surface and is used for photorealistic rendering in computer graphics. Since most parametric SVBRDF measurements involve over-sampling and non-linear optimization, both the acquisition time and the estimation time become very long. To realize rapid SVBRDF measurement, we propose a strategy that combines an algebraic solution, which eliminates over-sampling and optimization, with an adaptive illumination system that satisfies the constraints the solution requires. Based on this strategy, we demonstrated rapid measurement of real objects and showed a synthetic scene rendered with a texel-level SVBRDF.
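The algebraic solution itself is not detailed above, but as a minimal illustration of what a parametric SVBRDF represents, the sketch below evaluates a simple Lambertian-plus-Blinn-Phong reflectance model at a single texel. The model and all parameter values here are illustrative assumptions, not the measurement method's actual model.

```python
import numpy as np

def eval_texel_brdf(albedo, spec, shininess, n, l, v):
    """Evaluate a simple parametric BRDF (Lambertian diffuse +
    Blinn-Phong specular) at one texel.

    albedo, spec : per-texel diffuse / specular coefficients
    shininess    : per-texel specular exponent
    n, l, v      : unit surface normal, light, and view vectors
    """
    n_dot_l = max(np.dot(n, l), 0.0)
    h = (l + v) / np.linalg.norm(l + v)              # half vector
    diffuse = albedo / np.pi                         # energy-normalized Lambert
    specular = spec * max(np.dot(n, h), 0.0) ** shininess
    return (diffuse + specular) * n_dot_l            # cosine-weighted reflectance

# Example: frontal light and view directions, a mildly glossy texel
n = np.array([0.0, 0.0, 1.0])
l = v = np.array([0.0, 0.0, 1.0])
r = eval_texel_brdf(albedo=0.6, spec=0.3, shininess=32.0, n=n, l=l, v=v)
```

In a texel-level SVBRDF, each pixel of the surface texture carries its own set of such parameters, which is why per-texel estimation by optimization is so costly and an algebraic solution pays off.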
The real physical world is essentially a parallel, real-time computing architecture, encompassing sensors and robots as well as social and psychological phenomena. Realizing an equivalent architecture with engineering technology will help us understand the real world, bring various advantages to applications, allow us to achieve performance far exceeding that of conventional systems, and eventually make it possible to build genuinely new information systems. Our laboratory, in particular, conducts research exploring parallel, high-speed, and real-time operations for sensory information processing, some of which are listed below. We also focus on finding new industrial markets and strongly promote technology transfer of our research outcomes in diverse ways, including collaborative research and commercialization.
A human being recognizes the external environment using many kinds of sensory information. By integrating this information so that each sense compensates for what the others lack, more reliable and multifaceted recognition can be achieved. The purpose of the Sensor Fusion project is to realize a new sensing architecture that integrates multi-sensor information, and to develop hierarchical, decentralized architectures that go further in recognizing human beings. As a result, more reliable and multifaceted information can be extracted, enabling high-level recognition mechanisms.
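One classic way sensors can compensate for each other's shortcomings is inverse-variance weighting: independent estimates of the same quantity are combined so that more reliable sensors receive larger weights. This is a generic textbook technique, sketched here as one possible illustration of multi-sensor integration; the sensor values are hypothetical.

```python
def fuse(estimates):
    """Fuse independent sensor estimates of one quantity by
    inverse-variance weighting.

    estimates : list of (mean, variance) pairs, one per sensor
    Returns the fused (mean, variance); the fused variance is
    never worse than that of the best single sensor.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    mean = sum(w * m for w, (m, _) in zip(weights, estimates)) / total
    variance = 1.0 / total
    return mean, variance

# Hypothetical example: a noisy camera-based range estimate fused
# with a more precise ultrasonic one (means in metres)
fused_mean, fused_var = fuse([(2.10, 0.04), (1.98, 0.01)])
```

The fused estimate lands closer to the low-variance sensor while still using both readings, which is the sense in which fusion extracts more reliable information than any single sensor.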
Dynamic Image Control (DIC) is a technology for presenting dynamic phenomena with various physical properties to humans in a comprehensible and intelligible way. Many dynamic phenomena in the real world have extreme characteristics that prevent humans from understanding them clearly. For instance, we cannot see the pattern on a flying bee's wing, red blood cells flowing in a vein, or the printed characters on a struck golf ball dropping onto a fairway. This difficulty is due to the relatively slow frame rate of conventional imaging systems, which allows the object's dynamics to be superimposed onto the image of interest. DIC modulates images by controlling optics, illumination, and signal processing so as to output images adequate for a given purpose. The purpose of this research is to create and develop an epoch-making media technology based on dynamic image control. Anticipated application fields include biomedical instruments, microscopy, visual instruments, media technology, factory automation, and human interfaces.
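To see why conventional frame rates superimpose an object's dynamics onto the image, one can estimate the exposure (and hence frame rate) needed to keep motion blur below a budget. The calculation below is a generic back-of-the-envelope sketch; the golf-ball speed, field of view, and blur budget are hypothetical numbers, not measurements from this research.

```python
def min_frame_rate(speed_m_s, field_of_view_m, blur_fraction):
    """Rough lower bound on the frame rate (taken as 1 / exposure)
    needed so that an object moving at speed_m_s across a field of
    view of field_of_view_m blurs by at most blur_fraction of the
    frame during one exposure.
    """
    # Longest exposure that keeps the motion within the blur budget
    max_exposure = blur_fraction * field_of_view_m / speed_m_s
    return 1.0 / max_exposure

# Hypothetical case: a golf ball at ~70 m/s in a 0.5 m field of view,
# blur kept under 0.1% of the frame (about 1 px on a 1000-px sensor)
fps = min_frame_rate(70.0, 0.5, 0.001)
```

The result is on the order of 10^5 exposures per second, far beyond standard video rates, which is why such phenomena require high-speed imaging or active control of optics and illumination.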
Vision system technology that achieves ultimate performance must be created in order to pioneer new applications. Systems designed without optimizing performance for the target application limit the functions that can ideally be realized. The key to achieving never-before-seen, promising technologies is an application-oriented approach that enables superior performance and functions by constructing sophisticated relationships between applications, principles (including architectures, system configurations, and algorithms), and devices from several perspectives. Concretely, these cross-cutting design capabilities are critical for embodying new application concepts, refining the essential performance and functions that maximize the value of the target application, and designing and developing new principles and devices in a comprehensive manner. The Vision Architecture research group aims to make substantive and practical progress in various application areas based on the above design concepts by exploiting high-speed image sensing that goes beyond the capabilities of the human eye. We create various applications in the fields of robotics, inspection, video media engineering, man-machine interfaces, and digital archiving by making full use of VLSI technology, parallel processing, measurement engineering, and computer vision.
Human sensory modalities are inherently limited, as is our cognitive capacity to process the information gathered by the senses. Technologically mediated sensory manipulation, if properly implemented, can alter perception or even generate completely new forms of perception. At a practical level, it can improve the efficiency of (low- or high-level) recognition tasks such as behavior recognition, as well as improve human-to-human interaction. Such enhancements of perception and improved behavior recognition also allow for the design of novel interfaces. The problems of human perception and machine perception are reciprocally related; machine perception has its own limitations but can be trained to recognize self-perception, social perceptions, and emotional expressions. Meta Perception is an umbrella term for the theory and research practice concerned with capturing and manipulating information that is normally inaccessible to humans and machines. In doing so, we hope to create new ways of perceiving the world and interacting with technology. Our group is concerned not only with intelligent sensor and system technology, but also with augmented reality, human-computer interaction, media art, neurophysiology, and perspectives from fields such as ethics and computer-supported cooperative work. By combining these techniques, we aim to integrate human and machine perception and, as a consequence, create a new interdisciplinary research area.