© Read the copyright information before any use.
On-board vehicle acquisition in a dense urban environment.
11 179 frames (8 min 49 s @ 25 FPS)
640×480 (RGB, 8 bits)
Paris (France)
Acquisition description:
Acquired from the C3 vehicle with a Marlin F-046C camera sensor (@ 25 Hz) and a 12 mm lens; the camera was mounted behind the interior rear-view mirror; vehicle speed was below 50 km/h (< 31 mph).
The sequence and ground-truth data are publicly available for free.
The sequences can be downloaded as MPEG-2, single JPEG files, JSEQ, or RTMaps files (cf. below). Ground truths can be downloaded from the same page. Since several file formats exist for ground truth, we chose to distribute our files in all the main formats: GT (plain text), CVML, and VIPER.
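The exact column layout of the GT plain-text format is documented with the download; purely as an illustration, a loader for a hypothetical whitespace-separated layout (frame index, bounding box, class label — an assumption, not the official schema) might look like:

```python
# Hypothetical loader for a plain-text ground-truth file.
# ASSUMPTION: each line reads "frame x1 y1 x2 y2 label"; check the
# real GT file before relying on this layout.
def load_gt(lines):
    records = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comment lines
        frame, x1, y1, x2, y2, label = line.split()
        records.append({
            "frame": int(frame),
            "box": (int(x1), int(y1), int(x2), int(y2)),
            "label": label,
        })
    return records

sample = [
    "# frame x1 y1 x2 y2 label",
    "120 310 95 318 115 stop",
    "121 311 95 319 116 go",
]
print(load_gt(sample)[0]["label"])  # -> stop
```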
Data are also available as RTMaps files, which contain the raw acquisition data (such as the camera output with timestamps). RTMaps is real-time multisensor prototyping software that we use as the on-board application to record our acquisitions and replay them afterwards. More information is available on the RTMaps company website.
We will be pleased to publish the results of your traffic light recognition algorithm on our website, as long as you use the same databases (or your databases are public).
Sequence: 11 179 frames (640×480, RGB, 8 bits)
Ground Truth files v0.5 (9 168 hand-labeled traffic lights)
Listed here are the performances of the algorithms on the sequences described above. For more information about the evaluation, please refer to the FAQ section below.
If you want your algorithm to be listed in this section, contact us and send us your results (cf. Publishing your results).
(Raoul de Charette¹ and Fawzi Nashashibi¹,², 2010)
Download high-res. video, 1 min 44 s (XVID, 240 MB)
Download low-res. video, 1 min 44 s (XVID, 20 MB)
Publications
[1] R. de Charette and F. Nashashibi, “Real time visual traffic lights recognition based on Spot Light Detection and adaptive traffic lights templates,” 2009 IEEE Intelligent Vehicles Symposium, Xi'an: IEEE, 2009, pp. 358-363. (read on IEEE Xplore - ACM)
[2] R. de Charette and F. Nashashibi, “Traffic light recognition using image processing compared to learning processes,” 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), St. Louis: IEEE, 2009, pp. 333-338. (read on IEEE Xplore - ACM)
Please note that these publications do not describe the current state of our traffic light recognition system, which has evolved considerably since they were written. A new publication describing the whole system will follow.
1 Robotics Centre of Mines ParisTech, France (CAOR - Centre de Robotique)
2 Imara Team, INRIA Rocquencourt, France (IMARA - Informatique, Mathématiques et Automatique pour la Route Automatisée)
[1] G. Siogkas, E. Skodras, and E. Dermatas, “Traffic Lights Detection in Adverse Conditions Using Color, Symmetry and Spatiotemporal Information,” in International Conference on Computer Vision Theory and Applications (VISAPP 2012), 2012, pp. 620–627. (read on Patra's website - Research Gate)
Please refer to the section Publishing your results.
So far, only traffic lights (with circular lights) are labeled. But since we made this sequence public, you are welcome to label other objects in the sequence and send us the new ground-truth file, which we will be pleased to add to this webpage.
The ground-truth file contains 9 168 hand-labeled instances of traffic lights.
The traffic lights break down as follows: 3 381 “green” (called 'go'), 58 “orange” (called 'warning'), 5 280 “red” (called 'stop'), and 449 “ambiguous” (cf. below).
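As a quick sanity check, the four class counts listed above do sum to the total number of labeled instances:

```python
# Hand-labeled instances per class, as listed above:
go, warning, stop, ambiguous = 3381, 58, 5280, 449
total = go + warning + stop + ambiguous
print(total)  # -> 9168
```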
During the labeling process, our human operator noticed several ambiguous regions for which it was difficult to decide whether they were real traffic lights (with circular lights) or not. We therefore decided to simply ignore these ambiguous regions during the evaluation: any traffic light detected in these regions is counted neither as a “false positive” nor as a “true positive”.
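In other words, a detection is first matched against the regular ground truth, and is counted as a false positive only if it also misses every “ambiguous” region. A minimal sketch of this counting rule (the IoU matching threshold and data layout are illustrative assumptions, not the official evaluation code):

```python
def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def score(detections, gt_boxes, ambiguous_boxes, thr=0.5):
    """Count TP/FP, ignoring detections that fall on 'ambiguous' regions."""
    tp = fp = 0
    for det in detections:
        if any(iou(det, g) >= thr for g in gt_boxes):
            tp += 1      # matches a labeled traffic light
        elif any(iou(det, a) >= thr for a in ambiguous_boxes):
            pass         # neither TP nor FP: the region is ambiguous
        else:
            fp += 1
    return tp, fp

# One true hit, one hit on an ambiguous region, one miss:
dets = [(10, 10, 20, 30), (50, 50, 60, 70), (100, 100, 110, 120)]
print(score(dets, gt_boxes=[(10, 10, 20, 30)],
            ambiguous_boxes=[(50, 50, 60, 70)]))  # -> (1, 1)
```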
Indeed, there are very few “ambiguous” regions, and a region was strictly labeled “ambiguous” only if it satisfied one of the following conditions:
Traffic lights were labeled as soon as they were 5 pixels wide or more.
Out-of-bounds coordinates (negative, or greater than the image width/height) correspond to traffic lights that are only partially visible (leaving the camera field of view). These occluded traffic lights are ignored during the evaluation.
All objects used for the performance evaluation are those that are: not labeled “ambiguous” (cf. above), entirely visible (not partially occluded), and not “warning”/orange (due to their very small number).
Finally, 8 437 instances of traffic lights are used for the evaluation (731 were ignored because of partial occlusion, 423 due to “ambiguous” status, and 58 because they are “warning”).
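The three filtering criteria above can be expressed as a simple predicate over a ground-truth instance (the dictionary field names are illustrative assumptions; the real GT files use their own schema):

```python
def used_for_evaluation(light, width=640, height=480):
    # Keep a ground-truth instance only if it is unambiguous,
    # fully inside the 640x480 frame, and not an orange/'warning' light.
    x1, y1, x2, y2 = light["box"]
    fully_visible = 0 <= x1 and 0 <= y1 and x2 <= width and y2 <= height
    return light["label"] not in ("ambiguous", "warning") and fully_visible

lights = [
    {"box": (10, 10, 20, 30), "label": "go"},       # kept
    {"box": (-5, 10, 8, 30), "label": "stop"},      # partially out of frame
    {"box": (10, 10, 20, 30), "label": "warning"},  # orange: too rare
]
print(sum(used_for_evaluation(l) for l in lights))  # -> 1
```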
To have your performance published on this webpage, please send us the results of your algorithm on the sequences described above. The “recognition result file” should be written in one of the following formats: CVML, VIPER, or GT. You can also use our tool (cf. Tools section) to generate the file easily.
You are welcome to provide details about your algorithm or to attach a video of your results, which we will also publish on this webpage.
The “result file” (as well as any additional information) should be sent to Raoul de CHARETTE: raoul.de_charette{ARO_BASE}mines-paristech.fr
Note that the performances are computed according to the rules described in the FAQ section and are (of course) exactly the same for all algorithms.
All data are free, publicly available, and can be used for any research purpose.
However, if you publish results (or make your tests public in any other way), please acknowledge that the data come from the Robotics Centre of Mines ParisTech and are publicly available at: http://www.lara.prd.fr/benchmarks/trafficlightsrecognition
Commercial use is NOT ALLOWED without our official agreement.
For any information or for question about commercial use please contact Raoul de CHARETTE: raoul.de_charette{ARO_BASE}mines-paristech.fr