Balancing long- and short-term dynamics for the modeling of saliency in videos

arXiv:2504.05913, doi: 10.48550/arXiv.2504.05913 - Apr 2025
The role of long- and short-term dynamics in salient object detection in videos is under-researched. We present a Transformer-based approach that learns a joint representation of video frames and past saliency information. Our model embeds long- and short-term information to detect dynamically shifting saliency in videos. We provide the model with a stream of video frames and past saliency maps, which act as a prior for the next prediction, and extract spatiotemporal tokens from both modalities. Decomposing the frame sequence into tokens lets the model incorporate short-term information from within each token while making long-term connections between tokens across the sequence. The core of the system is a dual-stream Transformer architecture that processes the extracted sequences independently before fusing the two modalities. Additionally, we apply a saliency-based masking scheme to the input frames to learn an embedding that facilitates the recognition of deviations from previous outputs. We observe that the additional prior information aids the initial detection of the salient location. Our findings indicate that the ratio of spatiotemporal long- and short-term features directly impacts the model's performance: while increasing the short-term context is beneficial only up to a certain threshold, the model's performance benefits greatly from an expansion of the long-term context.
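To make the tokenization and dual-stream design concrete, the sketch below shows one plausible reading of the abstract in PyTorch: tubelet tokens are extracted from the frame stream and the past-saliency stream, each modality is encoded by its own Transformer, and the two token sequences are fused before prediction. The tubelet size, embedding dimension, layer counts, fusion strategy, and prediction head are illustrative assumptions, not the paper's actual configuration.

import torch
import torch.nn as nn


class DualStreamSaliencyModel(nn.Module):
    """Illustrative dual-stream Transformer for video saliency with a saliency prior."""

    def __init__(self, embed_dim=256, tubelet=(4, 16, 16), depth=4, num_heads=8):
        super().__init__()
        t, h, w = tubelet
        # Tubelet embedding: each token spans `t` frames and an h x w patch, so
        # short-term dynamics are captured within a token and long-term dynamics
        # by attention across tokens.
        self.frame_tokenizer = nn.Conv3d(3, embed_dim, kernel_size=tubelet, stride=tubelet)
        self.saliency_tokenizer = nn.Conv3d(1, embed_dim, kernel_size=tubelet, stride=tubelet)

        def encoder(layers):
            layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                               batch_first=True)
            return nn.TransformerEncoder(layer, num_layers=layers)

        # Two independent streams, one per modality, followed by a fusion stage.
        self.frame_encoder = encoder(depth)
        self.saliency_encoder = encoder(depth)
        self.fusion = encoder(2)
        # Per-token prediction head mapping back to a tubelet-sized saliency patch.
        self.head = nn.Linear(embed_dim, t * h * w)

    def forward(self, frames, past_saliency):
        # frames: (B, 3, T, H, W); past_saliency: (B, 1, T, H, W)
        f_tok = self.frame_tokenizer(frames).flatten(2).transpose(1, 2)        # (B, N, D)
        s_tok = self.saliency_tokenizer(past_saliency).flatten(2).transpose(1, 2)
        f_enc = self.frame_encoder(f_tok)
        s_enc = self.saliency_encoder(s_tok)
        # Fuse by attending over the concatenated token sequences.
        fused = self.fusion(torch.cat([f_enc, s_enc], dim=1))
        # Predict saliency only for the frame tokens.
        return self.head(fused[:, : f_tok.shape[1]])


# Example: a batch of two 16-frame RGB clips at 224x224 with matching saliency priors.
model = DualStreamSaliencyModel()
frames = torch.randn(2, 3, 16, 224, 224)
priors = torch.randn(2, 1, 16, 224, 224)
out = model(frames, priors)  # (2, 784, 1024): one flattened saliency patch per frame token

In this reading, the tubelet depth controls how much short-term context is folded into each token, while the number of tokens attended over sets the long-term context; the balance between the two is the ratio the abstract identifies as the main driver of performance.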


@Article{WAAW25,
  author  = {Wulff, Theodor and Abawi, Fares and Allgeuer, Philipp and Wermter, Stefan},
  title   = {Balancing long- and short-term dynamics for the modeling of saliency in videos},
  journal = {arXiv:2504.05913},
  year    = {2025},
  month   = {Apr},
  doi     = {10.48550/arXiv.2504.05913},
}