
Extending 2D Saliency Models for Head Movement Prediction in 360-degree Images using CNN-based Fusion

Abstract: Saliency prediction can be of great benefit for 360-degree image/video applications, including compression, streaming, rendering and viewpoint guidance. It is therefore natural to adapt 2D saliency prediction methods to 360-degree images. To do so, the 360-degree image must first be projected onto a 2D plane. However, existing projection techniques introduce various distortions, which degrade the results and make the direct application of 2D saliency prediction models to 360-degree content inefficient. Consequently, in this paper, we propose a new framework for effectively applying any 2D saliency prediction method to 360-degree images. In particular, the proposed framework includes a novel convolutional neural network based fusion approach that provides more accurate saliency prediction while avoiding the introduction of distortions. The framework has been evaluated with five 2D saliency prediction methods, and the experimental results show the superiority of our approach over weighted-sum and pixel-wise maximum fusion.
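To make the fusion step more concrete, the sketch below (not the authors' implementation; it assumes PyTorch, and all names and layer sizes are illustrative) contrasts a small CNN fusion head with the weighted-sum and pixel-wise maximum baselines mentioned in the abstract. The CNN takes several 2D saliency maps, e.g. one per viewport projection re-projected onto the equirectangular grid and stacked as channels, and predicts a single fused 360-degree saliency map.

    # Minimal sketch, assuming PyTorch; hypothetical names and layer sizes.
    import torch
    import torch.nn as nn

    class SaliencyFusionCNN(nn.Module):
        """Illustrative CNN fusion head: stacks N saliency maps as input
        channels and predicts one fused saliency map."""
        def __init__(self, num_maps: int = 6):
            super().__init__()
            self.fuse = nn.Sequential(
                nn.Conv2d(num_maps, 32, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(32, 32, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(32, 1, kernel_size=1),
                nn.Sigmoid(),  # saliency values in [0, 1]
            )

        def forward(self, maps: torch.Tensor) -> torch.Tensor:
            # maps: (batch, num_maps, H, W), each channel being one 2D saliency
            # map re-projected onto the equirectangular grid before stacking.
            return self.fuse(maps)

    # Baseline fusion schemes the paper compares against:
    def weighted_sum_fusion(maps: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
        # maps: (N, H, W), weights: (N,) summing to 1.
        return (maps * weights.view(-1, 1, 1)).sum(dim=0)

    def pixelwise_max_fusion(maps: torch.Tensor) -> torch.Tensor:
        # maps: (N, H, W); keep the most salient response at each pixel.
        return maps.max(dim=0).values

In this sketch the CNN head would be trained on ground-truth head-movement saliency maps, whereas the two baselines require no learning, which is the trade-off the paper's comparison addresses.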

https://hal.archives-ouvertes.fr/hal-02482285
Contributor: Wassim Hamidouche
Submitted on: Wednesday, February 19, 2020 - 4:53:03 PM
Last modification on: Thursday, March 12, 2020 - 4:08:05 PM

Files

main.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-02482285, version 1
  • arXiv: 2002.09196

Citation

Ibrahim Djemai, Sid Fezza, Wassim Hamidouche, Olivier Deforges. Extending 2D Saliency Models for Head Movement Prediction in 360-degree Images using CNN-based Fusion. IEEE International Symposium on Circuits and Systems (ISCAS), May 2020, Seville, Spain. ⟨hal-02482285⟩
