Immersive AR/VR demands unprecedented eye-tracking performance: eye tracking must be precise, accurate, and work all the time, for every person, in any environment. While advances in deep learning have yielded successes in domains with similar challenges, the real-time requirement and platform power limitations place serious memory and compute constraints on any ML-based solution. Additionally, a robust and efficient ML solution that is insensitive to environmental factors requires large amounts of highly accurate ground-truth training data from thousands of users in challenging conditions. Unfortunately, capturing accurate eye-gaze data in these environments requires a highly sophisticated and costly setup, and even then, accuracy is limited by users' fixation ability and cooperation. These issues place practical limits on the amount and quality of training data that can be collected.
In the absence of accurate gaze labels, we propose to advance the state of the art with two carefully designed challenges that combine human annotation of eye features with unlabeled data. These challenges focus on a deeper understanding of the distribution underlying human eye state. We invite machine learning and computer vision researchers to participate.
Performance Tracks
Track-1 Semantic Segmentation challenge: Many eye-tracking solutions require accurate estimation of eye features in 2D images, typically per-pixel segmentation of the key eye regions: the sclera, the iris, the pupil, and everything else (background). Though eye-segmentation solutions have been demonstrated [1,2], the ideal solution must be accurate, robust, and extremely power efficient. Therefore, in this challenge we evaluate both the accuracy of the model and its approximate complexity, using the model size as explained in Section 1.2.a of this document. This challenge encourages solutions that are simultaneously accurate, robust, and computationally lightweight.
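For concreteness, here is a minimal sketch of how the two reported quantities, mean IoU and trainable-parameter count, are commonly computed. The class ordering (background, sclera, iris, pupil) is an assumption, and the challenge's official scoring code is authoritative:

```python
import numpy as np

def mean_iou(pred, gt, num_classes=4):
    """Mean intersection-over-union for one image pair.

    pred, gt: integer arrays of shape (H, W) with per-pixel class labels.
    The class ordering (0: background, 1: sclera, 2: iris, 3: pupil) is
    an assumption; the official scoring script defines the real metric.
    """
    ious = []
    for c in range(num_classes):
        pred_c, gt_c = pred == c, gt == c
        union = np.logical_or(pred_c, gt_c).sum()
        if union == 0:  # class absent in both prediction and ground truth
            continue
        ious.append(np.logical_and(pred_c, gt_c).sum() / union)
    return float(np.mean(ious))

def count_parameters(model):
    """Trainable-parameter count, a common proxy for model size.

    Shown for a PyTorch-style model; any object exposing .parameters()
    with .numel() and .requires_grad attributes works.
    """
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
```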
Track-2 Synthetic Eye Generation challenge: Most learning-based systems achieve better performance and generalizability with more data. However, as explained earlier, capturing accurate real-world eye-gaze data at the scale required to train directly from image to gaze is difficult. For this challenge, we instead focus on generating realistic eye data. Specifically, we propose a novel image-synthesis problem that aims to capture subject-specific signals from a few eye images of an individual and generate realistic eye images for the same individual under different eye states (gaze direction, camera position, eye openness, etc.). For this challenge, we posit that substantial information about the eye state is encoded in feature segmentation masks similar to those derived in the Semantic Segmentation challenge. This task puts current GAN- and VAE-based image-synthesis models to an unprecedented test, in which exact pixel-level matching is required rather than only high-level perceptual similarity. This task encourages generative models with exact, pixel-level control over the synthesized eye state.
Note: The task requires generating a realistic eye image, I, from a given semantic segmentation mask, M, of the same person, P. We have provided three JSON files to map the eye images to different subjects in the Train, Val, and Test datasets. Please use the provided “image/mask to identity” map and generate realistic eye images for a given segmentation mask of the same subject.
For this task, you may use all of the training images/masks and should aim for the best performance on the given test set of semantic segmentation masks. The metric used to measure performance is the L2 distance from the original eye image.
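As a rough illustration, the sketch below loads an identity map and computes a per-image L2 distance. The JSON schema and any normalization in the official score are assumptions, so defer to the released files and scoring code:

```python
import json
import numpy as np

def load_identity_map(path):
    """Load one of the provided "image/mask to identity" JSON files.

    Assumed schema: a dict mapping image/mask filenames to subject IDs.
    Verify against the released Train/Val/Test JSON files.
    """
    with open(path) as f:
        return json.load(f)

def l2_distance(generated, reference):
    """L2 (Euclidean) distance between a generated and an original eye image.

    Both inputs are converted to float arrays of identical shape. Whether
    the official score normalizes by pixel count is not specified here,
    so the raw norm of the difference is returned.
    """
    a = np.asarray(generated, dtype=np.float64)
    b = np.asarray(reference, dtype=np.float64)
    return float(np.linalg.norm(a - b))
```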
Dataset Description
OpenEDS is a dataset of eye images captured using a virtual-reality HMD with two synchronized eye-facing cameras at a frame rate of 200 Hz under controlled illumination. The paper describing OpenEDS is available here.
The dataset comprises labeled and unlabeled eye images; its full composition is detailed in the OpenEDS paper linked above.
Semantic Segmentation Challenge Leaderboard

OpenEDS Baseline
Accuracy (mIoU): 0.8948
Model # Parameters: 416,088
First Place: Team RIT
Team Members: Aayush Chaudhary, Rakshit Kothari, Manoj Acharya, Shusil Dangi, Nitinraj Nair, Reynold Bailey, Christopher Kanan, Gabriel Diaz, and Jeff Pelz
Accuracy (mIoU): 0.9528
Model # Parameters: 248,900
Second Place: Team Tetelias
Team Member: Teternikov Ilia Anatolyevich
Accuracy (mIoU): 0.9519
Model # Parameters: 242,664
Third Place: Team Couger AI
Team Members: Devanathan Sabarinathan and Priya Kansal
Accuracy (mIoU): 0.949
Model # Parameters: 258,021
Synthetic Eye Generation Challenge Leaderboard

OpenEDS Baseline
RMSE: 59.25
First Place: Team AIT
Team Members: Seonwook Park, Xucong Zhang, Shalini De Mello, Otmar Hilliges, and Marcel Buehler
RMSE: 25.23
Second Place: Team PAU
Team Members: Yu Yu and Jean-Marc Odobez
RMSE: 27.69
Third Place: Team Tomcarrot
Team Member: Tom Hao Bu
RMSE: 33.79
Step 1: Visit the challenge website and read the Official Rules (Rules for Semantic Segmentation Challenge; Rules for Synthetic Eye Generation Challenge), which govern your participation in each challenge.
Step 2: Submit the following information to the email address openedschallenge@fb.com to request access to the challenge data (“OpenEDS”):
By submitting your request to access OpenEDS, you agree to the Official Rules for the challenge in which you are participating. The Official Rules are a binding contract and govern your use of OpenEDS; they are linked in Step 1 above.
Step 3: Create an account at evalAI.cloudcv.org to use for one or both of the challenges.
Step 4: Design your model based on the training data and/or validation data available in OpenEDS.
Step 5: Generate a JSON file for the results produced by your model on the test dataset included in OpenEDS. The scripts to generate JSON files can be found in the submission_scripts folder of OpenEDS; follow the instructions that accompany those scripts to create your submission JSON.
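For orientation only, a submission script of this kind typically follows the pattern sketched below. The actual field names and encoding are defined by the provided submission_scripts; everything here (the dict layout, base64 encoding of uint8 masks, the filename) is an assumption:

```python
import base64
import json
import numpy as np

def make_submission(predictions, out_path="submission.json"):
    """Serialize per-image predictions into a single JSON file.

    `predictions`: dict mapping test-image filenames to uint8 label
    masks of shape (H, W). This layout is illustrative; use the
    official submission_scripts for the real format.
    """
    payload = {
        name: base64.b64encode(
            np.ascontiguousarray(mask, dtype=np.uint8).tobytes()
        ).decode("ascii")
        for name, mask in predictions.items()
    }
    with open(out_path, "w") as f:
        json.dump(payload, f)
```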
Submissions must comply with the Official Rules of the applicable challenge.
Step 6: Log in to your EvalAI account and upload your JSON file, compressed in zip format, to the applicable challenge portal: (1) the Semantic Segmentation challenge or (2) the Synthetic Eye Generation challenge. Scores will be posted to the EvalAI leaderboard for each challenge.
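The zip itself can be produced with Python's standard library (filenames below are placeholders):

```python
import zipfile

# Compress the generated submission JSON for upload to EvalAI.
with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("submission.json")
```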
Winners of the 2019 OpenEDS Challenges will be announced on or about September 30, 2019.
*Accepted challenge papers will be archived on IEEE Xplore and CVF Open Access.
Robert Cavin
Facebook Reality Labs
Jixu Chen
Facebook
Ilke Demir
DeepScale
Stephan Garbin
University College London
Oleg Komogortsev
Visiting Scientist, Facebook Reality Labs
Immo Schuetz
Postdoctoral Research Scientist, Facebook Reality Labs
Abhishek Sharma
Facebook Reality Labs
Yiru Shen
Facebook Reality Labs
Sachin S. Talathi
Facebook Reality Labs
Semantic Segmentation Challenge:
NO PURCHASE NECESSARY TO ENTER OR WIN A PRIZE IN THIS CONTEST. A PURCHASE WILL NOT INCREASE YOUR CHANCES OF WINNING. INTERNET ACCESS AND A VALID EMAIL ADDRESS ARE REQUIRED TO PARTICIPATE. TRAVEL TO SOUTH KOREA BETWEEN 10/27/19 AND 11/2/19 IS REQUIRED TO RECEIVE A PRIZE IN THIS CONTEST. Open only to individuals who are at least 18 and the age of majority in their jurisdiction of residence and are legal residents of any area, country, state, territory, or province where applicable laws do not prohibit participating or receiving a prize in the Contest, excluding China, Kenya, Venezuela, Argentina, Denmark, Greece, Quebec, Cuba, Iran, North Korea, Sudan, Myanmar/Burma, Syria, Zimbabwe, Iraq, Lebanon, Liberia, Libya, Somalia, Belarus, the Balkans, and any other area or country designated by the applicable agency that designates trade sanctions. Submissions must be made between 12:00 AM PDT on 05/03/2019 and 11:59:59 PM PDT on 09/15/2019. Access to the dataset requires a request by email. Subject to OFFICIAL RULES. Limit 1 entry per person. Void where prohibited by law. Entries will be scored based on model performance and model complexity. Total ARV of all prizes in this Contest: $13,000 USD. Sponsor: Facebook Technologies, LLC, a wholly-owned subsidiary of Facebook, Inc., 1601 Willow Road, Menlo Park, CA 94025.
Synthetic Eye Generation Challenge:
NO PURCHASE NECESSARY TO ENTER OR WIN A PRIZE IN THIS CONTEST. A PURCHASE WILL NOT INCREASE YOUR CHANCES OF WINNING. INTERNET ACCESS AND A VALID EMAIL ADDRESS ARE REQUIRED TO PARTICIPATE. TRAVEL TO SOUTH KOREA BETWEEN 10/27/19 AND 11/2/19 IS REQUIRED TO RECEIVE A PRIZE IN THIS CONTEST. Open only to individuals who are at least 18 and the age of majority in their jurisdiction of residence and are legal residents of any area, country, state, territory, or province where applicable laws do not prohibit participating or receiving a prize in the Contest, excluding China, Kenya, Venezuela, Argentina, Denmark, Greece, Quebec, Cuba, Iran, North Korea, Sudan, Myanmar/Burma, Syria, Zimbabwe, Iraq, Lebanon, Liberia, Libya, Somalia, Belarus, the Balkans, and any other area or country designated by the applicable agency that designates trade sanctions. Submissions must be made between 12:00 AM PDT on 05/03/2019 and 11:59:59 PM PDT on 09/15/2019. Access to the dataset requires a request by email. Subject to OFFICIAL RULES. Limit 1 entry per person. Void where prohibited by law. Entries will be scored based on model performance. Total ARV of all prizes in this Contest: $13,000 USD. Sponsor: Facebook Technologies, LLC, a wholly-owned subsidiary of Facebook, Inc., 1601 Willow Road, Menlo Park, CA 94025.