Research Themes
Projects
Filmy Cloud Removal on Satellite Imagery with Multispectral Conditional Generative Adversarial Nets
Damage Detection from Aerial Images via Convolutional Neural Networks
Incremental and Enhanced Scanline-Based Segmentation Method for Surface Reconstruction of Sparse LiDAR Data
Change Detection from a Street Image Pair using CNN Features and Superpixel Segmentation
Massive City-scale Surface Condition Analysis using Ground and Aerial Imagery
Detecting Changes in 3D Structure of a Scene from Multi-view Images Captured by a Vehicle-mounted Camera
Main Research Highlights
Massive City-scale Surface Condition Analysis using Ground and Aerial Imagery
Automated visual analysis is an effective method for understanding changes in natural phenomena over massive city-scale landscapes. However, the viewpoint spectrum across which image data can be acquired is extremely wide, ranging from macro-level overhead (aerial) images spanning several kilometers to micro-level front-parallel (street-view) images that might only span a few meters. This work presents a unified framework for robustly integrating image data taken at vastly different viewpoints to generate large-scale estimates of land surface conditions. To validate our approach, we attempt to estimate the amount of post-tsunami damage over the entire city of Kamaishi, Japan (over 4 million square meters). Our results show that our approach can integrate both micro- and macro-level images, along with other forms of meta-data, to efficiently estimate city-scale phenomena. We evaluate our approach on two modes of land condition analysis, namely, city-scale debris and greenery estimation, to show the ability of our method to generalize to a diverse set of estimation tasks.
Ken Sakurada, Takayuki Okatani and Kris M. Kitani, "Massive City-scale Surface Condition Analysis using Ground and Aerial Imagery", ACCV 2014 (Oral, Acceptance Rate: less than 4%), Best Application Paper Honorable Mention Award [paper]
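To make the idea of integrating ground and aerial observations concrete, the following is a minimal sketch, not the paper's actual integration model: it assumes each city grid cell receives an independent surface-condition score (e.g., debris likelihood) and a coverage confidence from each viewpoint, and fuses them by confidence-weighted averaging. The function name and grid layout are hypothetical.

```python
import numpy as np

def fuse_surface_estimates(ground_scores, ground_conf, aerial_scores, aerial_conf):
    """Confidence-weighted fusion of per-cell surface-condition scores.

    ground_scores / aerial_scores: (H, W) grids of scores in [0, 1].
    ground_conf / aerial_conf:     (H, W) grids of per-cell confidence
                                   (0 where that viewpoint has no coverage).
    Cells observed from only one viewpoint fall back to that viewpoint's score;
    cells with no observation stay at an uninformed 0.5.
    """
    total_conf = ground_conf + aerial_conf
    fused = np.where(
        total_conf > 0,
        (ground_conf * ground_scores + aerial_conf * aerial_scores)
        / np.maximum(total_conf, 1e-12),
        0.5,
    )
    return fused

# Toy 2x2 grid: the bottom-right cell has no ground-level coverage.
ground = np.array([[0.9, 0.2], [0.7, 0.0]])
g_conf  = np.array([[1.0, 1.0], [0.5, 0.0]])
aerial  = np.array([[0.8, 0.3], [0.4, 0.6]])
a_conf  = np.array([[0.5, 0.5], [0.5, 0.5]])
print(fuse_surface_estimates(ground, g_conf, aerial, a_conf))
```

The weighting is only an illustration of why fusing the two viewpoints helps: street-view images give dense, reliable evidence where the vehicle passed, while aerial images cover the cells the vehicle never reached.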
Detecting Changes in 3D Structure of a Scene from Multi-view Images Captured by a Vehicle-mounted Camera
This paper proposes a method for detecting temporal changes in the three-dimensional structure of an outdoor scene from multi-view images captured at two separate times. We consider images captured by a camera mounted on a vehicle driving along a city street. The method estimates scene structures probabilistically, not deterministically, and based on these estimates it evaluates the probability of structural changes in the scene, using the similarity of local image patches among the multi-view images as input. The aim of the probabilistic treatment is to maximize the accuracy of change detection, based on our conjecture that although it is difficult to estimate the scene structures deterministically, it should be easier to detect their changes. The proposed method is compared with methods that use multi-view stereo (MVS) to reconstruct the scene structures at the two time points and then compare them to detect changes. The experimental results show that the proposed method outperforms such MVS-based methods.
Ken Sakurada, Takayuki Okatani and Koichiro Deguchi, "Detecting Changes in 3D Structure of a Scene from Multi-view Images Captured by a Vehicle-mounted Camera", CVPR 2013 (Poster, Acceptance Rate: 25.2%) ([paper], [supplementary material], [poster], [project])
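The role of patch similarity in the abstract above can be illustrated with a minimal sketch. It is not the paper's formulation: the actual method marginalizes over depth probabilistically, whereas this sketch simply takes the best normalized cross-correlation over candidate patches at the second time point and maps it to a change probability with a sigmoid. The function names and the steepness/threshold parameters are hypothetical.

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Zero-mean normalized cross-correlation between two same-size patches."""
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def change_probability(patch_t1, patches_t2, steepness=8.0, threshold=0.5):
    """Heuristic change score for one image patch.

    patch_t1:   local patch observed at the first time point.
    patches_t2: candidate patches at the second time point (e.g., sampled
                along the epipolar line over depth hypotheses).
    If no candidate matches well, the probability of change is high.
    """
    similarities = np.array([ncc(patch_t1, p) for p in patches_t2])
    best = similarities.max()
    return 1.0 / (1.0 + np.exp(steepness * (best - threshold)))

# Toy example: unchanged patches give low change probability, changed ones high.
rng = np.random.default_rng(0)
p1 = rng.random((7, 7))
unchanged = [p1 + 0.01 * rng.standard_normal((7, 7)) for _ in range(5)]
changed   = [rng.random((7, 7)) for _ in range(5)]
print(change_probability(p1, unchanged))  # near 0
print(change_probability(p1, changed))    # near 1
```

Taking the maximum over candidates reflects the conjecture in the abstract: one does not need the exact depth, only evidence that some depth hypothesis still explains both time points.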
Awards
Publication List (Selected)
International Conferences (Refereed)