Difference between revisions of "Publications/dubuisson.15.visapp"
From LRDE
Line 5:
  | title = A self-adaptive likelihood function for tracking with particle filter
  | booktitle = Proceedings of the 10th International Conference on Computer Vision Theory and Applications (VISAPP)
+ | pages = 446 to 453
  | abstract = The particle filter is known to be efficient for visual tracking. However, its parameters are empirically fixed, depending on the target application, the video sequences and the context. In this paper, we introduce a new algorithm which automatically adjusts ``on-line" two majors of them: the correction and the propagation parameters. Our purpose is to determine, for each frame of a video, the optimal value of the correction parameter and to adjust the propagation one to improve the tracking performance. On one hand, our experimental results show that the common settings of particle filter are sub-optimal. On another hand, we prove that our approach achieves a lower tracking error without needing tuning these parameters. Our adaptive method allows to track objects in complex conditions (illumination changes, cluttered background, etc.) without adding any computational cost compared to the common usage with fixed parameters.
  | lrdeprojects = Image
− | note = accepted
  | type = inproceedings
  | id = dubuisson.15.visapp
Line 20:
  month = mar,
  year = 2015,
  abstract = <nowiki>{</nowiki> The particle filter is known to be efficient for visual
  tracking. However, its parameters are empirically fixed,
Line 36 → Line 37:
  (illumination changes, cluttered background, etc.) without
  adding any computational cost compared to the common usage
  with fixed parameters.<nowiki>}</nowiki>
  <nowiki>}</nowiki>
Revision as of 16:55, 10 May 2016
- Authors: Séverine Dubuisson, Myriam Robert-Seidowsky, Jonathan Fabrizio
- Where: Proceedings of the 10th International Conference on Computer Vision Theory and Applications (VISAPP)
- Type: inproceedings
- Projects: Image
- Date: 2015-03-01
Abstract
The particle filter is known to be efficient for visual tracking. However, its parameters are empirically fixed, depending on the target application, the video sequences and the context. In this paper, we introduce a new algorithm which automatically adjusts "on-line" two major ones: the correction and the propagation parameters. Our purpose is to determine, for each frame of a video, the optimal value of the correction parameter and to adjust the propagation one to improve the tracking performance. On the one hand, our experimental results show that the common settings of the particle filter are sub-optimal. On the other hand, we prove that our approach achieves a lower tracking error without needing to tune these parameters. Our adaptive method makes it possible to track objects in complex conditions (illumination changes, cluttered background, etc.) without adding any computational cost compared to the common usage with fixed parameters.
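For readers unfamiliar with the two parameters the abstract refers to, the sketch below is a minimal bootstrap particle filter for 1-D tracking in which a propagation noise level (sigma_prop) and a likelihood sharpness, i.e. correction, parameter (lambda_corr) appear explicitly. The names and the filter itself are illustrative assumptions, not the authors' self-adaptive algorithm, which adjusts such values on-line per frame.

# Minimal bootstrap particle filter for 1-D tracking (illustration only).
# sigma_prop (propagation noise) and lambda_corr (likelihood sharpness,
# i.e. the "correction" parameter) are hypothetical stand-ins; the paper's
# on-line adaptation of these values is NOT reproduced here.
import numpy as np

def particle_filter(observations, n_particles=200,
                    sigma_prop=1.0, lambda_corr=0.5, seed=0):
    rng = np.random.default_rng(seed)
    particles = rng.normal(observations[0], sigma_prop, n_particles)
    estimates = []
    for z in observations:
        # Propagation: random-walk motion model controlled by sigma_prop.
        particles = particles + rng.normal(0.0, sigma_prop, n_particles)
        # Correction: weight particles by a likelihood whose sharpness
        # is controlled by lambda_corr.
        weights = np.exp(-lambda_corr * (particles - z) ** 2)
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))
        # Resampling step to avoid weight degeneracy.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
    return np.array(estimates)

if __name__ == "__main__":
    # Noisy observations of a slowly drifting 1-D position.
    true_pos = np.cumsum(np.full(50, 0.3))
    obs = true_pos + np.random.default_rng(1).normal(0.0, 0.8, 50)
    est = particle_filter(obs)
    print("mean absolute tracking error:", np.mean(np.abs(est - true_pos)))

With fixed sigma_prop and lambda_corr, the tracking error of such a filter depends strongly on how well those two values match the scene, which is the sub-optimality the paper addresses by adapting them automatically.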
Bibtex (lrde.bib)
@InProceedings{dubuisson.15.visapp,
  author    = {S\'everine Dubuisson and Myriam Robert-Seidowsky and Jonathan Fabrizio},
  title     = {A self-adaptive likelihood function for tracking with particle filter},
  booktitle = {Proceedings of the 10th International Conference on Computer Vision Theory and Applications (VISAPP)},
  month     = mar,
  year      = 2015,
  pages     = {446--453},
  abstract  = {The particle filter is known to be efficient for visual tracking. However, its parameters are empirically fixed, depending on the target application, the video sequences and the context. In this paper, we introduce a new algorithm which automatically adjusts ``on-line'' two majors of them: the correction and the propagation parameters. Our purpose is to determine, for each frame of a video, the optimal value of the correction parameter and to adjust the propagation one to improve the tracking performance. On one hand, our experimental results show that the common settings of particle filter are sub-optimal. On another hand, we prove that our approach achieves a lower tracking error without needing tuning these parameters. Our adaptive method allows to track objects in complex conditions (illumination changes, cluttered background, etc.) without adding any computational cost compared to the common usage with fixed parameters.}
}