ICB2009 FACE VERIFICATION COMPETITION
(WITH QUALITY MEASURES)

Expression of Interest






Introduction

MOTIVATION:

With an increasing number of mobile devices with built-in webcams, e.g., PDAs, mobile phones and laptops, face is arguably the most widely accepted means of person verification. However, the biometric authentication task based on face images acquired by a mobile device in an uncontrolled environment is very challenging. One way to boost face verification performance is to use multiple samples. We aim to evaluate the performance of face verification on mobile devices given a sequence of unconstrained face images.

OBJECTIVE:

The face verification competition has several objectives:

CONFERENCE:

The results of the evaluation will be published in the proceedings of the upcoming International Conference on Biometrics 2009 (ICB'09), to be held in Alghero, Sardinia, Italy.

ORGANIZER:

This competition will be organized by the Centre for Vision, Speech and Signal Processing, University of Surrey.

Contact Prof. Josef Kittler, Mr. Chan Chi Ho or Dr. Norman Poh if you need more information, using the email heading "ICB2009 face verification competition".

ACTION REQUIRED:

To express your interest, simply enter your name, email address and organization on this page.



Database and downloads

We will use the BANCA video sequence database. Please ignore the £1000 price tag: for the purpose of the competition, the licence for using the database has been reduced to £500. Check whether your institute already has the database installed; if not, contact us and we will send you the data on a hard disk. In addition, we will also make available a text-based annotation of quality measures (see below), which will be released only after 1 May.

PROPOSED EXPERIMENTAL PROTOCOLS

The BANCA database has three protocols: Mc (for the controlled scenario), Ua (for the adverse or noisy scenario) and Ud (for the degraded scenario, typically due to using a different biometric device). However, for the competition we will use only the Mc and Ua protocols.

The reason for having two rounds of competition is that the first round will enable us to verify that your program runs correctly - not just giving low error rates, but operating in accordance with our requirements. Our role is to ensure that all algorithms can be compared on an equal footing. Given that each participating team develops and runs their programs independently, standardizing the experimental methodology helps us to identify sources of inconsistency as early as the first round of the competition.

Note that the Ud protocol will not be used. In addition to the above data, a separate (single-session) training set is also made available.

QUALITY MEASURES

Note that in addition to the video files, we also make available some automatically annotated labels for each image in a video file. These annotations include:

  1. Left eye coordinate (x,y) of the original image (extracted from video)

  2. Right eye coordinate (x,y) of the original image (extracted from video)

  3. Reliability of the face detector – the output of a classifier that has been trained to give an overall measure of quality based on the quality measures 4-16 below

  4. Brightness

  5. Contrast

  6. Focus - this quantifies the sharpness of an image

  7. Bits per pixel – measures the colour resolution in terms of bits

  8. Spatial resolution - the number of pixels between eyes

  9. Illumination

  10. Uniform Background – measuring the variance of the background intensity

  11. Background Brightness – the average intensity of the background

  12. Reflection – specular reflection

  13. Glasses – whether the face is wearing glasses

  14. Rotation in Plane

  15. Rotation in Depth

  16. Frontalness – measures how much a face image deviates from a typical mug-shot face image

The above annotations are generated by OmniPerception's SDK. Some of the quality measures depend on a face detector and so are face-specific (i.e., frontalness, rotation, reflection, bits per pixel and reliability), while the others are general image quality measures defined in MPEG standards. Note that the face detector is not always perfect, and this may affect the recognition performance.
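To make the structure of these annotations concrete, here is a minimal Python sketch of a per-frame record holding the sixteen fields listed above. The field names and the one-record-per-frame layout are our own illustrative assumptions; the exact format of the released text files is defined by the files themselves.

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class FrameAnnotation:
        """One automatically annotated video frame (field names are illustrative)."""
        left_eye: Tuple[int, int]      # (x, y) in the original frame
        right_eye: Tuple[int, int]     # (x, y) in the original frame
        reliability: float             # overall quality from the trained classifier
        brightness: float
        contrast: float
        focus: float                   # sharpness of the image
        bits_per_pixel: float          # colour resolution in bits
        spatial_resolution: float      # number of pixels between the eyes
        illumination: float
        uniform_background: float      # variance of the background intensity
        background_brightness: float   # average intensity of the background
        reflection: float              # specular reflection
        glasses: float                 # whether the face wears glasses
        rotation_in_plane: float
        rotation_in_depth: float
        frontalness: float             # deviation from a typical mug-shot pose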



Schedules

Competition Start: 1st May, 2008

1st Round Competition submission deadline: 29th September, 2008 (extended from 1st August, 2008)

2nd Round information release: 30th September, 2008

2nd Round Competition submission deadline: 1st December, 2008

Final Manuscript: 1st February, 2009




Performance measure

We will assess how well a system performs using two criteria: accuracy and computation cost. Examples of accuracy measures that we consider are FRR, FAR, TER, VER at 0.1% FAR, and the ROC curve.
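As an illustration of the accuracy criterion, the following minimal Python sketch computes FAR and FRR at a single decision threshold from lists of genuine and impostor similarity scores. The scores and the threshold are made up for the example, and the accept-if-score-is-at-least-threshold convention is an assumption.

    def far_frr(genuine_scores, impostor_scores, threshold):
        """FAR and FRR at a given threshold, assuming similarity scores
        (a claim is accepted when score >= threshold)."""
        false_accepts = sum(1 for s in impostor_scores if s >= threshold)
        false_rejects = sum(1 for s in genuine_scores if s < threshold)
        far = false_accepts / len(impostor_scores)
        frr = false_rejects / len(genuine_scores)
        return far, frr

    # Made-up scores for illustration only.
    genuine = [0.98, 0.97, 0.30, 0.95]
    impostor = [0.30, 0.10, 0.45, 0.05]
    print(far_frr(genuine, impostor, threshold=0.5))  # -> (0.0, 0.25)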

Note that one could process a video sequence for 30 minutes and obtain zero verification error, which is impractical. Our goal is therefore to rate an algorithm in terms of the trade-off between the two criteria: accuracy versus computation cost. Ideally, in order to benchmark the time a program needs to process a video file, we would have to run every submitted program on a single PC. This is not a feasible option for us, as the submitted programs may run on different operating systems, exploit dual-core processors, etc. We propose instead to use an abstract cost that is a function of the quantities NT, NQ and NA reported with each score (see the file format below).

Note that NQ < NA, since we assume that an image used for verification must already have been used for analysis; the converse is not necessarily true. An example of a derived computation cost (not yet formally adopted) is:

Total cost = NT * NQ + NA

Total frames in a query video = NQ + NA
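As a small worked example, the sketch below evaluates this provisional cost for the example score lines given in the file-format section. Interpreting NT as the number of enrolled templates per claimed identity is our assumption; the formula itself is the one stated above.

    def total_cost(nt, nq, na):
        """Provisional computation cost per video part: NT * NQ + NA."""
        return nt * nq + na

    # Values taken from the example score lines in the file-format section.
    # NT is assumed here to be the number of templates per claimed identity.
    print(total_cost(5, 5, 245))  # 1030_m_g1_s02_1030_en_1 -> 270
    print(total_cost(5, 3, 247))  # 1030_m_g1_s02_1030_en_2 -> 262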



Instructions for the participants

THE SPIRIT OF SCIENTIFIC EVALUATION

While we have decided to call this a face verification "competition", it may be more appropriate to treat it as an evaluation campaign whose goal is to provide a benchmark of face verification algorithms on video sequences. (Winning should not be the only goal; if it were, one could easily cheat in order to win!)

WHAT TO SUBMIT

Apart from observing the deadlines, you are to provide the following information.

There are two experiments in each round of the competition: the pre-registered test and the automated test. Each test has two subtests based on the standard protocol, namely the group 1 and group 2 tests; in other words, each test produces a score file per group. To enter the competition, all participants have to submit the results of the pre-registered test, while the automated test is optional.

Pre-registered test: eye location files are supplied for participants to localise and normalise the face in each video. These videos are then used to train and test the authentication system.

Automated test (optional): participants have to use their own automated localisation method for at least the testing phase of the protocol. You may use our standard eye location files to localise the face in the training phase.

FILE FORMAT

The score files should have the following format, with one line per video part:

Testing_Filename Score NT NQ NA

Note that NT, NQ and NA are computed per video part (not over the entire video).

For example:

1030_m_g1_s02_1030_en_1 1.000 5 5 245

1030_m_g1_s02_1030_en_2 0.970 5 3 247

1030_m_g1_s02_1030_en_3 0.980 5 2 248

...

...

1030_m_g1_s01_1031_en_1 0.300 5 10 240

.......................................
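For convenience, here is a minimal Python sketch that reads a score file in the format above and recomputes the provisional cost for each line; the file name "group1_scores.txt" is purely hypothetical.

    def parse_score_line(line):
        """Parse one line of the form 'Testing_Filename Score NT NQ NA'."""
        name, score, nt, nq, na = line.split()
        return name, float(score), int(nt), int(nq), int(na)

    # 'group1_scores.txt' is a hypothetical file name used for illustration.
    with open("group1_scores.txt") as f:
        for line in f:
            if not line.strip():
                continue
            name, score, nt, nq, na = parse_score_line(line)
            print(name, score, nt * nq + na)  # provisional cost NT * NQ + NA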



Quick Downloads

All the following links will be available on or after 1 May.

        1. Only the first 250 frames of a video are used.

        2. A video file is split into 5 equal parts.

        3. The rest of the video files are not used.

For example:

1030_m_g1_s01_1031_en will have 5 parts


Part                        Start Frame   End Frame
1030_m_g1_s01_1031_en_1     0             49
1030_m_g1_s01_1031_en_2     50            99
1030_m_g1_s01_1031_en_3     100           149
1030_m_g1_s01_1031_en_4     150           199
1030_m_g1_s01_1031_en_5     200           249


Note: the frame numbers above correspond to the files in "1stQualityV2.zip" and "2ndQualityV2.zip".
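The splitting scheme above is easy to reproduce programmatically; the following minimal Python sketch derives the five 50-frame parts of the first 250 frames and prints the same ranges as the table.

    def video_parts(total_frames=250, n_parts=5):
        """Split the first `total_frames` frames into `n_parts` equal parts.
        Returns inclusive (start_frame, end_frame) pairs."""
        size = total_frames // n_parts
        return [(i * size, (i + 1) * size - 1) for i in range(n_parts)]

    for i, (start, end) in enumerate(video_parts(), start=1):
        print(f"1030_m_g1_s01_1031_en_{i}: frames {start}-{end}")
    # -> part 1: 0-49, part 2: 50-99, ..., part 5: 200-249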



Frequently Asked Questions

  1. So, what platform should I use?

    We recommend that you use PyVerif. For first-time participants this will be much easier, because a tutorial on BANCA has already been set up for you. Download an example of using PyVerif for BANCA experiments.

  2. Why must I use your face detector?

    In order to ensure that results are comparable and not affected, in particular, by the quality of face detection, we provide a standard set of eye locations for each image in the video. However, you may also submit results obtained using your own face detection software.

  3. Why two rounds of competition?

    The reason for having two rounds of competition is that the first round will enable us to verify that your program runs correctly - not just giving low error rates, but operating in accordance with our requirements. Our role is to ensure that all algorithms can be compared on an equal footing. Given that each participating team develops and runs their programs independently, standardizing the experimental methodology helps us to identify sources of inconsistency as early as the first round of the competition. The results of the second round will then be more reliable.

  4. More questions? Send an email to Norman Poh and CC Chan Chi Ho with the heading "ICB2009 face verif competition".



Acknowledgement

We would like to thank the EU-funded Mobio project for sponsoring the competition and the BANCA project for the use of the database.