C. Bak, A. Kocak, E. Erdem, and A. Erdem, Spatio-temporal saliency networks for dynamic saliency prediction, IEEE Transactions on Multimedia, vol.20, issue.7, pp.1688-1698, 2018.

L. Bazzani, H. Larochelle, and L. Torresani, Recurrent mixture density network for spatiotemporal visual attention, 2016.

A. Borji, Saliency prediction in the deep learning era: An empirical investigation, 2018.

Z. Bylinskii, T. Judd, A. Oliva, A. Torralba, and F. Durand, What do different evaluation metrics tell us about saliency models?, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.41, issue.3, pp.740-757, 2019.

Z. Bylinskii, T. Judd, A. Borji, L. Itti, F. Durand et al., MIT saliency benchmark, 2015.

M. Cornia, L. Baraldi, G. Serra, and R. Cucchiara, A deep multi-level network for saliency prediction, 2016 23rd International Conference on Pattern Recognition (ICPR), pp.3488-3493, 2016.

S. Dutt Jain, B. Xiong, and K. Grauman, FusionSeg: Learning to combine motion and appearance for fully automatic segmentation of generic objects in videos, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.3664-3673, 2017.

Y. Fang, Z. Wang, W. Lin, and Z. Fang, Video saliency incorporating spatiotemporal cues and uncertainty weighting, IEEE Transactions on Image Processing, vol.23, issue.9, pp.3910-3921, 2014.

T. Foulsham, A. Kingstone, and G. Underwood, Turning the world around: Patterns in saccade direction vary with picture orientation, Vision Research, vol.48, issue.17, pp.1777-1790, 2008.

C. Guo and L. Zhang, A novel multiresolution spatiotemporal saliency detection model and its applications in image and video compression, IEEE Transactions on Image Processing, vol.19, issue.1, pp.185-198, 2010.

X. Guo, L. Cui, B. Park, W. Ding, M. Lockhart et al., How will humans cut through automated vehicle platoons in mixed traffic environments? A simulation study of drivers' gaze behaviors based on the dynamic areas-of-interest, 2018.

J. Harel, C. Koch, and P. Perona, Graph-based visual saliency, Advances in Neural Information Processing Systems, pp.545-552, 2007.

S. Hossein Khatoonabadi, N. Vasconcelos, I. V. Bajic, and Y. Shan, How many bits does it take for a stimulus to be salient?, The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.

X. Hou, J. Harel, and C. Koch, Image signature: Highlighting sparse salient regions, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.34, issue.1, pp.194-201, 2012.

I. P. Howard and B. Rogers, Depth perception, Stevens' Handbook of Experimental Psychology, vol.6, pp.77-120, 2002.

X. Huang, C. Shen, X. Boix, and Q. Zhao, SALICON: Reducing the semantic gap in saliency prediction by adapting deep neural networks, Proceedings of the IEEE International Conference on Computer Vision, pp.262-270, 2015.

L. Itti, C. Koch, and E. Niebur, A model of saliency-based visual attention for rapid scene analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.20, issue.11, pp.1254-1259, 1998.

Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long et al., Caffe: Convolutional architecture for fast feature embedding, Proceedings of the 22nd ACM International Conference on Multimedia, pp.675-678, 2014.

L. Jiang, M. Xu, T. Liu, M. Qiao, and Z. Wang, DeepVS: A deep learning based video saliency prediction approach, Proceedings of the European Conference on Computer Vision (ECCV), pp.602-617, 2018.

T. Judd, K. Ehinger, F. Durand, and A. Torralba, Learning to predict where humans look, 2009 IEEE 12th International Conference on Computer Vision, pp.2106-2113, 2009.

D. K. Kim and T. Chen, Deep neural network for real-time autonomous indoor navigation, 2015.

V. Krassanakis, V. Filippakopoulou, and B. Nakos, EyeMMV toolbox: An eye movement post-analysis tool based on a two-step spatial dispersion threshold for fixation identification, Journal of Eye Movement Research, vol.7, issue.1, 2014.

V. Krassanakis, M. Perreira Da Silva, and V. Ricordel, Monitoring human visual behavior during the observation of unmanned aerial vehicles (UAVs) videos, Drones, vol.2, issue.4, p.36, 2018.
URL : https://hal.archives-ouvertes.fr/hal-01928841

A. Krizhevsky, I. Sutskever, and G. E. Hinton, ImageNet classification with deep convolutional neural networks, Advances in Neural Information Processing Systems, vol.25, pp.1097-1105, 2012.

M. Kümmerer, T. S. Wallis, and M. Bethge, Saliency benchmarking made easy: Separating models, maps and metrics, Proceedings of the European Conference on Computer Vision (ECCV), pp.770-787, 2018.

O. Le Meur and T. Baccino, Methods for comparing scanpaths and saliency maps: strengths and weaknesses, Behavior Research Methods, vol.45, issue.1, pp.251-266, 2013.
URL : https://hal.archives-ouvertes.fr/hal-00757615

O. Le Meur, P. Le Callet, and D. Barba, Predicting visual fixations on video based on low-level visual features, Vision Research, vol.47, issue.19, pp.2483-2498, 2007.
URL : https://hal.archives-ouvertes.fr/hal-00287424

G. Li, Y. Xie, T. Wei, K. Wang, and L. Lin, Flow guided recurrent neural encoder for video salient object detection, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.3243-3252, 2018.

M. Mueller, N. Smith, and B. Ghanem, A benchmark and simulator for UAV tracking, European Conference on Computer Vision, pp.445-461, 2016.

N. Murray, M. Vanrell, X. Otazu, and C. A. Parraga, Saliency estimation using a non-parametric low-level vision model, CVPR 2011, pp.433-440, 2011.

A. Ninassi, O. Le Meur, P. Le Callet, and D. Barba, Does where you gaze on an image affect your perception of quality? Applying visual attention to image quality metric, 2007 IEEE International Conference on Image Processing, vol.2, p.169, 2007.
URL : https://hal.archives-ouvertes.fr/hal-00342599

J. Pan, C. C. Ferrer, K. McGuinness, N. E. O'Connor, J. Torres et al., SalGAN: Visual saliency prediction with generative adversarial networks, 2017.

J. Pan, E. Sayrol, X. Giro-i-Nieto, K. McGuinness, and N. E. O'Connor, Shallow and deep convolutional networks for saliency prediction, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.598-606, 2016.

N. Riche, M. Mancas, M. Duvinage, M. Mibulumukini, B. Gosselin et al., RARE2012: A multi-scale rarity-based saliency detection with its comparative statistical analysis, Signal Processing: Image Communication, vol.28, issue.6, pp.642-658, 2013.

D. Rudoy, D. B. Goldman, E. Shechtman, and L. Zelnik-Manor, Learning video saliency from human gaze using candidate selection, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.1147-1154, 2013.

J. Sokalski, T. P. Breckon, and I. Cowling, Automatic salient object detection in UAV imagery, Proc. of the 25th Int. Unmanned Air Vehicle Systems, pp.1-12, 2010.

H. Trinh, J. Li, S. Miyazawa, J. Moreno, and S. Pankanti, Efficient UAV video event summarization, Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), pp.2226-2229, 2012.

P. H. Tseng, R. Carmi, I. G. Cameron, D. P. Munoz, and L. Itti, Quantifying center bias of observers in free viewing of dynamic natural scenes, Journal of Vision, vol.9, issue.7, pp.4-4, 2009.

E. Vig, M. Dorr, and D. Cox, Large-scale optimization of hierarchical features for saliency prediction in natural images, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp.2798-2805, 2014.

Z. Wang, J. Ren, D. Zhang, M. Sun, and J. Jiang, A deep-learning based feature hybrid framework for spatiotemporal saliency detection inside videos, Neurocomputing, vol.287, pp.68-83, 2018.

J. Zhang and S. Sclaroff, Exploiting surroundedness for saliency detection: a Boolean map approach, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.38, pp.889-902, 2016.

L. Zhang, M. H. Tong, T. K. Marks, H. Shan, and G. W. Cottrell, SUN: A Bayesian framework for saliency using natural statistics, Journal of Vision, vol.8, issue.7, pp.32-32, 2008.

Y. Zhao, J. Ma, X. Li, and J. Zhang, Saliency detection and deep learning-based wildfire identification in UAV imagery, Sensors, vol.18, issue.3, p.712, 2018.