DCFNet: Discriminant Correlation Filters Network for Visual Tracking (1704.04057v1)

Published 13 Apr 2017 in cs.CV

Abstract: Discriminant Correlation Filters (DCF) based methods now become a kind of dominant approach to online object tracking. The features used in these methods, however, are either based on hand-crafted features like HoGs, or convolutional features trained independently from other tasks like image classification. In this work, we present an end-to-end lightweight network architecture, namely DCFNet, to learn the convolutional features and perform the correlation tracking process simultaneously. Specifically, we treat DCF as a special correlation filter layer added in a Siamese network, and carefully derive the backpropagation through it by defining the network output as the probability heatmap of object location. Since the derivation is still carried out in Fourier frequency domain, the efficiency property of DCF is preserved. This enables our tracker to run at more than 60 FPS during test time, while achieving a significant accuracy gain compared with KCF using HoGs. Extensive evaluations on OTB-2013, OTB-2015, and VOT2015 benchmarks demonstrate that the proposed DCFNet tracker is competitive with several state-of-the-art trackers, while being more compact and much faster.

Authors (5)
  1. Qiang Wang (271 papers)
  2. Jin Gao (38 papers)
  3. Junliang Xing (80 papers)
  4. Mengdan Zhang (18 papers)
  5. Weiming Hu (92 papers)
Citations (272)

Summary

  • The paper introduces an end-to-end network that unifies feature extraction and correlation filtering for visual tracking.
  • It achieves real-time performance at over 60 FPS and a 10% accuracy improvement over traditional KCF trackers on OTB benchmarks.
  • The compact, efficient design makes DCFNet ideal for resource-constrained applications such as robotics, video surveillance, and autonomous vehicles.

Overview of DCFNet: Discriminant Correlation Filters Network for Visual Tracking

The paper presents DCFNet, an end-to-end network architecture that integrates Discriminant Correlation Filters (DCFs) directly into the tracking pipeline for online object tracking. Traditional DCF-based trackers typically rely on hand-crafted features such as Histograms of Oriented Gradients (HoG) or on deep convolutional features trained independently for other tasks such as image classification. These approaches decouple feature extraction from the tracking process, which can lead to inefficiencies in both computational cost and tracking performance.
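To make the DCF machinery concrete, here is a minimal single-channel sketch of the standard DCF ridge regression that DCFNet builds on. This is not the paper's exact multi-channel, in-network formulation; the function names `dcf_train` and `dcf_detect` and the regularization value are illustrative assumptions.

```python
import numpy as np

# Single-channel DCF sketch (assumed notation). Given a feature patch x and a
# desired Gaussian response y, the filter minimizing the ridge-regression loss
#   ||w * x - y||^2 + lam * ||w||^2   (* = circular correlation)
# has a closed-form solution that is solved per-frequency in the Fourier domain.

def dcf_train(x, y, lam=1e-4):
    """Learn a correlation filter (in the Fourier domain) from one 2-D patch."""
    xf = np.fft.fft2(x)
    yf = np.fft.fft2(y)
    # Element-wise division: each frequency bin is an independent scalar problem.
    return (np.conj(xf) * yf) / (np.conj(xf) * xf + lam)

def dcf_detect(wf, z):
    """Apply the learned filter to a new search patch z; return the response map."""
    zf = np.fft.fft2(z)
    return np.real(np.fft.ifft2(wf * zf))  # peak gives the predicted translation
```

The per-frequency closed form is what makes DCF training and detection cost only a few FFTs per frame, the efficiency property the paper preserves by keeping its derivation in the Fourier domain.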

Key Contributions

The primary contribution of the paper is the integration of feature learning with correlation filtering within a single neural network framework. This is achieved by implementing DCF as a specialized correlation filter layer within a Siamese network, where the backpropagation is meticulously derived to facilitate simultaneous learning of convolutional features and execution of the tracking process. Notable features of this approach include:

  • End-to-End Architecture: The network learns features that are optimally suited for DCF tracking, eliminating reliance on external, pre-trained convolutional layers and tightly coupling the tracking process with feature extraction.
  • Efficiency: The tracker maintains the efficiency advantages of DCFs by performing operations in the Fourier frequency domain. The computational efficiency enables real-time tracking capabilities at over 60 Frames Per Second (FPS).
  • Compactness: DCFNet's architecture is lightweight, which is a significant advantage over existing deep learning-based trackers that are often computationally expensive and memory-intensive.

Numerical Results and Comparisons

The paper provides extensive evaluations comparing DCFNet with several state-of-the-art trackers on standard benchmarks, including OTB-2013, OTB-2015, and VOT2015. DCFNet achieves a notable accuracy gain over Kernelized Correlation Filters (KCF) with HoG features, roughly a 10% improvement on OTB-2015, while running faster than many traditional and deep-learning-based trackers and remaining competitive in accuracy.

Implications and Future Work

The development of DCFNet signifies a shift towards more integrated approaches in visual tracking, where feature learning and tracking operations are not disjoint processes but rather coalesce within a unified framework. The theoretical underpinning provided highlights the potential for using tailored convolutional features to enhance tracking performance while maintaining real-time speed.

From a practical standpoint, DCFNet's architecture presents a valuable solution for applications requiring robust and fast object tracking, such as robotics, video surveillance, and autonomous vehicles. The lightweight nature of the architecture makes it particularly appealing for deployments on resource-constrained devices.

In terms of future developments, the paper identifies the potential for further enhancing the robustness of the feature extractor by employing deeper architectures possibly trained with larger datasets. This could mitigate limitations arising from the current shallow architecture and small training corpus, thereby leveraging the full capabilities of deep learning in the context of DCF-based tracking.

In conclusion, DCFNet represents a crucial step in advancing the integration of feature extraction and correlation filtering within visual tracking systems, offering a balance of accuracy, speed, and model compactness that is critical in both research and practical applications in the field of computer vision.