Targets Detection Using Multiple Foveas
Abstract
Target detection is a prerequisite for many robotic tasks. However, robots' limited computational resources make processing large amounts of visual data difficult. Image foveation is an approach that can lower the processing demand by reducing the amount of data to be processed. However, since an important visual stimulus may be attenuated by this reduction, some strategy must be applied to keep or recover awareness of it. This work compares gradient descent (potential field), maximum likelihood, multilateration, trilateration, and barycentric coordinates for solving this problem in a context of multiple mobile foveas. Our results demonstrate that the proposed methodology detects the target, converging to an average Euclidean distance of 51 pixels from the target's center position.
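The abstract does not detail how each compared strategy is implemented. As an illustration only, the sketch below shows plain two-dimensional multilateration, one of the approaches named above, under the assumption that each fovea center provides a (noisy) estimate of its pixel distance to the target; all function and variable names are hypothetical, not taken from the paper. With three or more non-collinear foveas, the range equations can be linearized and solved by least squares, which also tolerates noisy distance estimates.

import numpy as np

def multilaterate(fovea_centers, distances):
    """Estimate a 2-D target position (in pixels) from fovea centers and
    their estimated distances to the target via linear least squares.

    fovea_centers: (n, 2) array of fovea center positions in the image.
    distances:     (n,)   array of estimated target distances (pixels).
    Requires n >= 3 non-collinear foveas.
    """
    p = np.asarray(fovea_centers, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Subtract the first range equation ||x - p_0||^2 = d_0^2 from the
    # others to linearize the system into A x = b.
    A = 2.0 * (p[1:] - p[0])
    b = (np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
         - d[1:] ** 2 + d[0] ** 2)
    target, *_ = np.linalg.lstsq(A, b, rcond=None)
    return target

# Usage example: three foveas at known image positions and noisy
# distance estimates to a (hidden) target at (260, 200).
foveas = [(100, 120), (400, 90), (250, 380)]
true_target = np.array([260.0, 200.0])
dists = [np.linalg.norm(true_target - np.array(f)) + np.random.normal(0, 3)
         for f in foveas]
print(multilaterate(foveas, dists))  # approximately [260, 200]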
Published
October 18, 2021
How to Cite
MEDEIROS, Petrucio R. T.; GOMES, Rafael B.; GONÇALVES, Luiz M. G. Targets Detection Using Multiple Foveas. In: WORKSHOP DE TESES E DISSERTAÇÕES - CONFERENCE ON GRAPHICS, PATTERNS AND IMAGES (SIBGRAPI), 34., 2021, Online. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2021. p. 91-97. DOI: https://doi.org/10.5753/sibgrapi.est.2021.20019.