Sato Laboratory / Sugano Laboratory
Yusuke Sugano
Latest
Image-to-Text Translation for Interactive Image Recognition: A Comparative User Study with Non-Expert Users
Technical Understanding from Interactive Machine Learning Experience: A Study through a Public Event for Science Museum Visitors
Rotation-Constrained Cross-View Feature Fusion for Multi-View Appearance-based Gaze Estimation
Learning Video-independent Eye Contact Segmentation from In-the-Wild Videos
Interactive Machine Learning on Edge Devices With User-in-the-Loop Sample Recommendation
Self-Supervised Learning for Audio-Visual Relationships of Videos with Stereo Sounds
Learning-by-Novel-View-Synthesis for Full-Face Appearance-Based 3D Gaze Estimation
Ego4D: Around the World in 3,000 Hours of Egocentric Video
Interact before Align: Leveraging Cross-Modal Knowledge for Domain Adaptive Action Recognition
Stacked Temporal Attention: Improving First-person Action Recognition by Emphasizing Discriminative Clips
EPIC-KITCHENS-100 Unsupervised Domain Adaptation Challenge for Action Recognition 2021: Team M3EM Technical Report
Use of Machine Learning by Non-Expert DHH People: Technological Understanding and Sound Perception
Learning-based Region Selection for End-to-End Gaze Estimation
Deep Photometric Stereo Networks for Determining Surface Normal and Reflectances
Improving Action Segmentation via Graph Based Temporal Reasoning
Investigating audio data visualization for interactive sound recognition
Light Structure from Pin Motion: Geometric Point Light Source Calibration
InvisibleEye: Fully Embedded Mobile Eye Tracking Using Appearance-Based Gaze Estimation
Evaluation of Appearance-Based Methods and Implications for Gaze-Based Applications
MPIIGaze: Real-World Dataset and Deep Appearance-Based Gaze Estimation
Visualizing Gaze Direction to Support Video Coding of Social Attention for Children with Autism Spectrum Disorder
A Multimodal Corpus of Expert Gaze and Behavior during Phonetic Segmentation Tasks
Forecasting user attention during everyday mobile interactions using device-integrated and wearable sensors
Gaze-guided Image Classification for Reflecting Perceptual Class Ambiguity
Light Structure from Pin Motion: Simple and Accurate Point Light Calibration for Physics-Based Modeling
Revisiting data normalization for appearance-based gaze estimation
Shape-Conditioned Image Generation by Learning Latent Appearance Representation from Unpaired Data
Training Person-Specific Gaze Estimators from User Interactions with Multiple Devices
Deep Photometric Stereo Network
Everyday Eye Contact Detection Using Unsupervised Gaze Target Discovery
InvisibleEye: Mobile Eye Tracking Using Multiple Low-Resolution Cameras and Learning-Based Gaze Estimation
It's Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation
Multi-task Learning Using Multi-modal Encoder-Decoder Networks with Shared Skip Connections
Noticeable or Distractive?: A Design Space for Gaze-Contingent User Interface Notifications
3D gaze estimation from 2D pupil positions on monocular head-mounted eye trackers
AggreGaze: Collective Estimation of Audience Attention on Public Displays
Labelled pupils in the wild: a dataset for studying pupil detection in unconstrained environments
Seeing with Humans: Gaze-Assisted Neural Image Captioning
Sensing and Controlling Human Gaze in Daily Living Space for Human-Harmonized Information Environments
Spatio-Temporal Modeling and Prediction of Visual Attention in Graphical User Interfaces
Appearance-based gaze estimation in the wild
Appearance-Based Gaze Estimation With Online Calibration From Mouse Operations
Gaze Estimation From Eye Appearance: A Head Pose-Free Method via Eye Image Synthesis
Rendering of Eyes for Eye-Shape Registration and Gaze Estimation
Self-Calibrating Head-Mounted Eye Trackers Using Egocentric Visual Saliency
Image preference estimation with a data-driven approach: A comparative study between gaze and image features
Adaptive Linear Regression for Appearance-Based Gaze Estimation
Influence of stimulus and viewing task types on a learning-based visual saliency model
Learning gaze biases with head motion for head pose-free gaze estimation
Learning-by-Synthesis for Appearance-Based 3D Gaze Estimation
Appearance-Based Gaze Estimation Using Visual Saliency
Graph-based joint clustering of fixations and visual entities
Head direction estimation from low resolution images with scene adaptation
Image Preference Estimation from Eye Movements with A Data-driven Approach
Social Group Discovery from Surveillance Videos: A Data-Driven Approach with Attention-Based Cues
Coupling eye-motion and ego-motion features for first-person activity recognition
Head pose-free appearance-based gaze sensing via eye image synthesis
Incorporating visual field characteristics into a saliency map
Touch-consistent perspective for direct interaction under motion parallax
A Head Pose-free Approach for Appearance-based Gaze Estimation
Appearance-based head pose estimation with scene-specific adaptation
Attention Prediction in Egocentric Video Using Motion and Visual Saliency
Inferring human gaze from appearance via adaptive linear regression
Calibration-free gaze sensing using saliency maps
Can Saliency Map Models Predict Human Egocentric Visual Attention?
An Incremental Learning Method for Unconstrained Gaze Estimation
Person-Independent Monocular Tracking of Face and Facial Actions with Multilinear Models
Fast and Accurate Positioning Technique Using Ultrasonic Phase Accordance Method