Sato Laboratory / Sugano Laboratory
Recent Publications
» Full publication list
Gazing Into Missteps: Leveraging Eye-Gaze for Unsupervised Mistake Detection in Egocentric Videos of Skilled Human Activities
We address the challenge of unsupervised mistake detection in egocentric video of skilled human activities through the analysis of gaze …
Michele Mazzamuto, Antonino Furnari, Yoichi Sato, Giovanni Maria Farinella
PDF | Cite
SiMHand: Mining Similar Hands for Large-Scale 3D Hand Pose Pre-training
We present a framework for pre-training 3D hand pose estimation from in-the-wild hand images that share similar hand …
Nie Lin, Takehiko Ohkawa, Yifei Huang, Mingfang Zhang, Minjie Cai, Ming Li, Ryosuke Furuta, Yoichi Sato
PDF | Cite | Code
Exo2EgoDVC: Dense Video Captioning of Egocentric Procedural Activities Using Web Instructional Videos
We propose a novel benchmark for cross-view knowledge transfer of dense video captioning, adapting models from web instructional videos …
Takehiko Ohkawa, Takuma Yagi, Taichi Nishimura, Ryosuke Furuta, Atsushi Hashimoto, Yoshitaka Ushiku, Yoichi Sato
PDF | Cite
Learning Multiple Object States from Actions via Large Language Models
Recognizing the states of objects in a video is crucial in understanding the scene beyond actions and objects. For instance, an egg can …
Masatoshi Tateno, Takuma Yagi, Ryosuke Furuta, Yoichi Sato
PDF | Cite | Code | DOI
ActionVOS: Action as Prompts for Video Object Segmentation
Delving into the realm of egocentric vision, the advancement of referring video object segmentation (RVOS) stands as pivotal in …
Liangyang Ouyang, Ruicong Liu, Yifei Huang, Ryosuke Furuta, Yoichi Sato
PDF | Cite | Code
Benchmarks and Challenges in Pose Estimation for Egocentric Hand Interactions with Objects
We interact with the world with our hands and see it through our own (egocentric) perspective. A holistic 3D understanding of such …
Zicong Fan, Takehiko Ohkawa, Linlin Yang, Nie Lin, Zhishan Zhou, Shihao Zhou, Jiajun Liang, Zhong Gao, Xuanyang Zhang, Xue Zhang, Fei Li, Zheng Liu, Feng Lu, Karim Abou Zeid, Bastian Leibe, Jeongwan On, Seungryul Baek, Aditya Prakash, Saurabh Gupta, Kun He, Yoichi Sato, Otmar Hilliges, Hyung Jin Chang, Angela Yao
PDF | Cite
Masked Video and Body-worn IMU Autoencoder for Egocentric Action Recognition
Compared with visual signals, Inertial Measurement Units (IMUs) placed on human limbs can capture accurate motion signals while being …
Mingfang Zhang, Yifei Huang, Ruicong Liu, Yoichi Sato
PDF | Cite
WTS: A Pedestrian-Centric Traffic Video Dataset for Fine-grained Spatial-Temporal Understanding
In this paper, we address the challenge of fine-grained video event understanding in traffic scenarios, vital for autonomous driving …
Quan Kong, Yuki Kawana, Rajat Saini, Ashutosh Kumar, Jingjing Pan, Ta Gu, Yohei Ozao, Balazs Opra, David C. Anastasiu, Yoichi Sato, Norimasa Kobori
PDF | Cite
Single-to-Dual-View Adaptation for Egocentric 3D Hand Pose Estimation
The pursuit of accurate 3D hand pose estimation stands as a keystone for understanding human activity in the realm of egocentric …
Ruicong Liu, Takehiko Ohkawa, Mingfang Zhang, Yoichi Sato
PDF | Cite | Code