Sato Lab./Sugano Lab.
Recent Publications
Gazing Into Missteps: Leveraging Eye-Gaze for Unsupervised Mistake Detection in Egocentric Videos of Skilled Human Activities
We address the challenge of unsupervised mistake detection in egocentric video of skilled human activities through the analysis of gaze …
Michele Mazzamuto, Antonino Furnari, Yoichi Sato, Giovanni Maria Farinella
PDF · Cite
SiMHand: Mining Similar Hands for Large-Scale 3D Hand Pose Pre-training
We present a framework for pre-training of 3D hand pose estimation from in-the-wild hand images sharing similar hand …
Nie Lin, Takehiko Ohkawa, Yifei Huang, Mingfang Zhang, Minjie Cai, Ming Li, Ryosuke Furuta, Yoichi Sato
PDF · Cite · Code
Exo2EgoDVC: Dense Video Captioning of Egocentric Procedural Activities Using Web Instructional Videos
We propose a novel benchmark for cross-view knowledge transfer of dense video captioning, adapting models from web instructional videos …
Takehiko Ohkawa, Takuma Yagi, Taichi Nishimura, Ryosuke Furuta, Atsushi Hashimoto, Yoshitaka Ushiku, Yoichi Sato
PDF · Cite
Learning Multiple Object States from Actions via Large Language Models
Recognizing the states of objects in a video is crucial in understanding the scene beyond actions and objects. For instance, an egg can …
Masatoshi Tateno, Takuma Yagi, Ryosuke Furuta, Yoichi Sato
PDF · Cite · Code · DOI
ActionVOS: Action as Prompts for Video Object Segmentation
Delving into the realm of egocentric vision, the advancement of referring video object segmentation (RVOS) stands as pivotal in …
Liangyang Ouyang, Ruicong Liu, Yifei Huang, Ryosuke Furuta, Yoichi Sato
PDF · Cite · Code
Benchmarks and Challenges in Pose Estimation for Egocentric Hand Interactions with Objects
We interact with the world with our hands and see it through our own (egocentric) perspective. A holistic 3D understanding of such …
Zicong Fan, Takehiko Ohkawa, Linlin Yang, Nie Lin, Zhishan Zhou, Shihao Zhou, Jiajun Liang, Zhong Gao, Xuanyang Zhang, Xue Zhang, Fei Li, Zheng Liu, Feng Lu, Karim Abou Zeid, Bastian Leibe, Jeongwan On, Seungryul Baek, Aditya Prakash, Saurabh Gupta, Kun He, Yoichi Sato, Otmar Hilliges, Hyung Jin Chang, Angela Yao
PDF · Cite
Masked Video and Body-worn IMU Autoencoder for Egocentric Action Recognition
Compared with visual signals, Inertial Measurement Units (IMUs) placed on human limbs can capture accurate motion signals while being …
Mingfang Zhang, Yifei Huang, Ruicong Liu, Yoichi Sato
PDF · Cite
WTS: A Pedestrian-Centric Traffic Video Dataset for Fine-grained Spatial-Temporal Understanding
In this paper, we address the challenge of fine-grained video event understanding in traffic scenarios, vital for autonomous driving …
Quan Kong, Yuki Kawana, Rajat Saini, Ashutosh Kumar, Jingjing Pan, Ta Gu, Yohei Ozao, Balazs Opra, David C. Anastasiu, Yoichi Sato, Norimasa Kobori
PDF · Cite
Single-to-Dual-View Adaptation for Egocentric 3D Hand Pose Estimation
The pursuit of accurate 3D hand pose estimation stands as a keystone for understanding human activity in the realm of egocentric …
Ruicong Liu, Takehiko Ohkawa, Mingfang Zhang, Yoichi Sato
PDF · Cite · Code
See all publications