Lightweight hands-on Python notebooks supplementing deep learning theory. The flow of the modules follows the structure of C. M. Bishop and H. Bishop, Deep Learning: Foundations and Concepts. Springer Nature, 2024.
These datasets are provided for research purposes only. When using the data, please be sure to cite the corresponding publications below.
Ryo Yonetani, Kris M. Kitani, and Yoichi Sato, “Recognizing Micro-Actions and Reactions from Paired Egocentric Videos,” In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR2016), 2016.
Ryo Yonetani, Kris M. Kitani, and Yoichi Sato, “Ego-Surfing First-Person Videos,” In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR2015), 2015.
Yusuke Sugano, Yasuyuki Matsushita, and Yoichi Sato, “Learning-by-Synthesis for Appearance-based 3D Gaze Estimation,” In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR2014), 2014.
Yusuke Sugano, Yasuyuki Matsushita, and Yoichi Sato, “Graph-based Joint Clustering of Fixations and Visual Entities,” ACM Transactions on Applied Perception (TAP), Volume 10, Issue 2, Article 10, June 2013.
Keisuke Ogaki, Kris M. Kitani, Yusuke Sugano, and Yoichi Sato, “Coupling Eye-Motion and Ego-Motion Features for First-Person Activity Recognition,” In Proc. CVPR Workshop on Ego-Centric Vision (ECV2012), June 2012.
Hideyuki Kubota, Yusuke Sugano, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto, and Kazuo Hiraki, “Incorporating Visual Field Characteristics into a Saliency Map,” In Proc. the 7th International Symposium on Eye Tracking Research & Applications (ETRA2012), March 2012.