
Manipulación visual-táctil para la recogida de residuos domésticos en exteriores

RiuNet: Institutional Repository of the Universitat Politècnica de València



dc.contributor.author Castaño-Amorós, Julio es_ES
dc.contributor.author Páez-Ubieta, Ignacio de Loyola es_ES
dc.contributor.author Gil, Pablo es_ES
dc.contributor.author Puente, Santiago Timoteo es_ES
dc.date.accessioned 2023-04-18T12:17:30Z
dc.date.available 2023-04-18T12:17:30Z
dc.date.issued 2023-03-31
dc.identifier.issn 1697-7912
dc.identifier.uri http://hdl.handle.net/10251/192800
dc.description.abstract [EN] This work presents a perception system applied to robotic manipulation that is able to assist in navigation, household waste classification and collection in outdoor environments. The system is made up of optical tactile sensors, RGBD cameras and a LiDAR, integrated on a mobile platform that carries a robot manipulator with a robotic gripper. Our system is divided into three software modules: two vision-based and one tactile-based. The vision-based modules use CNNs to localize and recognize solid household waste and to estimate grasping points. The tactile-based module, which also uses CNNs and image processing, adjusts the gripper opening to control the grasp from touch data. Our proposal achieves localization errors of around 6%, a recognition accuracy of 98%, and ensures grasp stability in 91% of the attempts. The combined runtime of the three modules is under 750 ms. es_ES
dc.description.abstract [ES] Este artículo presenta un sistema de percepción orientado a la manipulación robótica, capaz de asistir en tareas de navegación, clasificación y recogida de residuos domésticos en exterior. El sistema está compuesto de sensores táctiles ópticos, cámaras RGBD y un LiDAR. Estos se integran en una plataforma móvil que transporta un robot manipulador con pinza. El sistema consta de tres módulos software, dos visuales y uno táctil. Los módulos visuales implementan arquitecturas CNN para la localización y reconocimiento de residuos sólidos, además de estimar puntos de agarre. El módulo táctil, también basado en CNN y procesamiento de imagen, regula la apertura de la pinza para controlar el agarre a partir de información de contacto. Nuestra propuesta tiene errores de localización en torno al 6 %, una precisión de reconocimiento del 98 % y garantiza la estabilidad del agarre el 91 % de las veces. Los tres módulos trabajan en tiempos inferiores a los 750 ms. es_ES
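
The abstract above outlines a three-module architecture: two vision-based CNN modules handle waste localization, recognition and grasping point estimation, while a tactile module regulates the gripper opening from optical touch data. The sketch below is a minimal illustration of how those modules could be chained; every name, threshold and placeholder value in it is a hypothetical stand-in assumed for this example, not the authors' implementation.

```python
# Minimal sketch of the three-module pipeline described in the abstract.
# All names, thresholds and placeholder values are hypothetical stand-ins.
from dataclasses import dataclass

import numpy as np


@dataclass
class Detection:
    label: str            # recognized waste class (98% recognition accuracy reported)
    centroid: np.ndarray  # 3D position estimate (~6% localization error reported)


def vision_localize_recognize(rgb: np.ndarray, depth: np.ndarray) -> list[Detection]:
    """Vision module 1 (assumed): a CNN detector localizes and recognizes solid
    household waste; a dummy detection stands in for its output here."""
    return [Detection(label="plastic_bottle", centroid=np.array([0.40, 0.10, 0.05]))]


def vision_grasp_point(det: Detection, depth: np.ndarray) -> tuple[np.ndarray, float]:
    """Vision module 2 (assumed): estimate a grasp point and an initial gripper
    opening from the detection and the depth image."""
    return det.centroid, 0.08  # grasp point and an 8 cm initial opening (made up)


def tactile_adjust(tactile_img: np.ndarray, opening: float) -> tuple[float, bool]:
    """Tactile module (assumed): a CNN plus image processing judges the contact
    state from the optical tactile image and regulates the gripper opening.
    Mean image intensity stands in for a learned contact score here."""
    contact_score = float(tactile_img.mean())
    if contact_score < 0.5:            # weak contact: close the gripper a bit more
        return opening - 0.005, False
    return opening, True               # grasp judged stable (91% of attempts reported)


def pick_cycle(rgb, depth, read_tactile):
    """One perception-to-grasp cycle; the paper reports the three modules
    together running in under 750 ms."""
    for det in vision_localize_recognize(rgb, depth):
        point, opening = vision_grasp_point(det, depth)
        stable = False
        while not stable and opening > 0.0:    # closed-loop tactile refinement
            opening, stable = tactile_adjust(read_tactile(), opening)
        print(f"{det.label} at {point}: opening={opening:.3f} m, stable={stable}")


if __name__ == "__main__":
    rng = np.random.default_rng(0)             # synthetic sensor data for the demo
    pick_cycle(rgb=rng.random((480, 640, 3)),
               depth=rng.random((480, 640)),
               read_tactile=lambda: rng.random((240, 320)))
```
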
dc.description.sponsorship This work was funded by the European Regional Development Fund (ERDF), by the Generalitat Valenciana through project PROMETEO/2021/075, and the computational resources were funded through grant IDIFEDER/2020/003. es_ES
dc.language Español es_ES
dc.publisher Universitat Politècnica de València es_ES
dc.relation.ispartof Revista Iberoamericana de Automática e Informática industrial es_ES
dc.rights Reconocimiento - No comercial - Compartir igual (by-nc-sa) es_ES
dc.subject Visual detection es_ES
dc.subject Object recognition es_ES
dc.subject Object location es_ES
dc.subject Tactile perception es_ES
dc.subject Robotic manipulation es_ES
dc.subject Detección visual es_ES
dc.subject Reconocimiento de objetos es_ES
dc.subject Localización de objetos es_ES
dc.subject Percepción táctil es_ES
dc.subject Manipulación robótica es_ES
dc.title Manipulación visual-táctil para la recogida de residuos domésticos en exteriores es_ES
dc.title.alternative Visual-tactile manipulation to collect household waste in outdoor es_ES
dc.type Artículo es_ES
dc.identifier.doi 10.4995/riai.2022.18534
dc.relation.projectID info:eu-repo/grantAgreement/GV//PROMETEO%2F2021%2F075 es_ES
dc.relation.projectID info:eu-repo/grantAgreement/GV//IDIFEDER%2F2020%2F003 es_ES
dc.rights.accessRights Abierto es_ES
dc.description.bibliographicCitation Castaño-Amorós, J.; Páez-Ubieta, IDL.; Gil, P.; Puente, ST. (2023). Manipulación visual-táctil para la recogida de residuos domésticos en exteriores. Revista Iberoamericana de Automática e Informática industrial. 20(2):163-174. https://doi.org/10.4995/riai.2022.18534 es_ES
dc.description.accrualMethod OJS es_ES
dc.relation.publisherversion https://doi.org/10.4995/riai.2022.18534 es_ES
dc.description.upvformatpinicio 163 es_ES
dc.description.upvformatpfin 174 es_ES
dc.type.version info:eu-repo/semantics/publishedVersion es_ES
dc.description.volume 20 es_ES
dc.description.issue 2 es_ES
dc.identifier.eissn 1697-7920
dc.relation.pasarela OJS\18534 es_ES
dc.contributor.funder Generalitat Valenciana es_ES
dc.contributor.funder European Regional Development Fund es_ES
dc.description.references Altikat, A., Gulbe, A., Altikat, S., 2022. Intelligent solid waste classification using deep convolutional neural networks. Int. J. Environmental Science and Technology 19, 1285-1292. https://doi.org/10.1007/s13762-021-03179-4 es_ES
dc.description.references Bircanoglu, C., Atay, M., Beser, F., Genç, Ö., Kızrak, M. A., 2018. Recyclenet: Intelligent waste sorting using deep neural networks. In: Innovations in intelligent systems and applications. pp. 1-7. https://doi.org/10.1109/INISTA.2018.8466276 es_ES
dc.description.references Bohg, J., Morales, A., Asfour, T., Kragic, D., 2013. Data-driven grasp synthesis - a survey. IEEE Transactions on Robotics 30 (2), 289-309. https://doi.org/10.1109/TRO.2013.2289018 es_ES
dc.description.references Bolya, D., Zhou, C., Xiao, F., Lee, Y., 2019. Yolact: Real-time instance segmentation. In: IEEE/CVF Int. Conf. on Computer Vision. pp. 9157-9166. https://doi.org/10.1109/ICCV.2019.00925 es_ES
dc.description.references Castaño-Amorós, J., Gil, P., Puente, S., 2021. Touch detection with low-cost visual-based sensor. In: 2nd Int. Conf. on Robotics, Computer Vision and Intelligent Systems. pp. 136-142. https://doi.org/10.5220/0010699800003061 es_ES
dc.description.references De Gea, V., Puente, S., Gil, P., 2021. Domestic waste detection and grasping points for robotic picking up. IEEE Int. Conf. on Robotics and Automation, Workshop: Emerging paradigms for robotic manipulation: from the lab to the productive world. https://doi.org/10.48550/arXiv.2105.06825 es_ES
dc.description.references Del Pino, I., Muñoz-Bañón, M., Cova-Rocamora, S., Contreras, M., Candelas, F., Torres, F., 2020. Deeper in blue. Journal of Intelligent & Robotic Systems 98, 207-225. https://doi.org/10.1007/s10846-019-00983-6 es_ES
dc.description.references Donlon, E., Dong, S., Liu, M., Li, J., Adelson, E., Rodriguez, A., 2018. Gelslim: A high-resolution, compact, robust, and calibrated tactile-sensing finger. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, pp. 1927-1934. https://doi.org/10.1109/IROS.2018.8593661 es_ES
dc.description.references Feng, J., Tang, X., Jiang, X., Chen, Q., 2021. Garbage disposal of complex background based on deep learning with limited hardware resources. IEEE Sensors Journal 21 (8), 21050-21058. https://doi.org/10.1109/JSEN.2021.3100636 es_ES
dc.description.references Fu, B., Li, S., Wei, J., Li, Q., Wang, Q., T. J., 2021. A novel intelligent garbage classification system based on deep learning and an embedded linux system. IEEE Access 9, 131134-131146. https://doi.org/10.1109/ACCESS.2021.3114496 es_ES
dc.description.references Guo, N., Zhang, B., Zhou, J., Zhan, K., Lai, S., 2020. Pose estimation and adaptable grasp configuration with point cloud registration and geometry understanding for fruit grasp planning. Computers and Electronics in Agriculture 179, 105818. https://doi.org/10.1016/j.compag.2020.105818 es_ES
dc.description.references He, K., Zhang, X., Ren, S., Sun, J., 2016. Deep residual learning for image recognition. In: IEEE Conf. on Computer Vision and Pattern Recognition. pp. 770-778. https://doi.org/10.1109/CVPR.2016.90 es_ES
dc.description.references Jiang, D., Li, G., Sun, Y., Hu, J., Yun, J., Liu, Y., 2021. Manipulator grabbing position detection with information fusion of color image and depth image using deep learning. Journal of Ambient Intelligence and Humanized Computing 12 (12), 10809-10822. https://doi.org/10.1007/s12652-020-02843-w es_ES
dc.description.references Kim, D., Li, A., Lee, J., 2021. Stable robotic grasping of multiple objects using deep neural networks. Robotica 39 (4), 735-748. https://doi.org/10.1017/S0263574720000703 es_ES
dc.description.references Kiyokawa, T., Katayama, H., Tatsuta, Y., Takamatsu, J., Ogasawara, T., 2021. Robotic waste sorter with agile manipulation and quickly trainable detector. IEEE Access 9, 124616-124631. https://doi.org/10.1109/ACCESS.2021.3110795 es_ES
dc.description.references Kolamuri, R., Si, Z., Zhang, Y., Agarwal, A., Yuan, W., 2021. Improving grasp stability with rotation measurement from tactile sensing. In: 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, pp. 6809-6816. https://doi.org/10.1109/IROS51168.2021.9636488 es_ES
dc.description.references Lambeta, M., Chou, P.-W., Tian, S., Yang, B., Maloon, B., Most, V. R., Stroud, D., Santos, R., Byagowi, A., Kammerer, G., Jayaraman, D., Calandra, R., 2020. Digit: A novel design for a low-cost compact high-resolution tactile sensor with application to in-hand manipulation. IEEE Robotics and Automation Letters 5 (3), 3838-3845. https://doi.org/10.1109/LRA.2020.2977257 es_ES
dc.description.references Lin, Y., Lloyd, J., Church, A., Lepora, N. F., 2022. Tactile gym 2.0: Sim-to-real deep reinforcement learning for comparing low-cost high-resolution robot touch. IEEE Robotics and Automation Letters 7 (4), 10754-10761. https://doi.org/10.1109/LRA.2022.3195195 es_ES
dc.description.references Liu, L., Ouyang, W., Wang, X., Fieguth, P., Chen, J., Liu, X., Pietikäinen, M., 2020. Deep learning for generic object detection: A survey. Int. J. of Computer Vision 128, 261-318. https://doi.org/10.1007/s11263-019-01247-4 es_ES
dc.description.references Liu, Y., Jiang, D., Duan, H., Sun, Y., Li, G., Tao, B., Yun, J., Liu, Y., Chen, B., 2021. Dynamic gesture recognition algorithm based on 3d convolutional neural network. Computational Intelligence and Neuroscience 2021. https://doi.org/10.1155/2021/4828102 es_ES
dc.description.references Minaee, S., Boykov, Y., Porikli, F., Plaza, A., Kehtarnavaz, N., Terzopoulos, D., 2020. Image segmentation using deep learning: A survey. IEEE Trans. on Pattern Analysis and Machine Intelligence. https://doi.org/10.1109/TPAMI.2021.3059968 es_ES
dc.description.references Newbury, R., Gu, M., Chumbley, L., Mousavian, A., Eppner, C., Leitner, J., Bohg, J., Morales, A., Asfour, T., Kragic, D., et al., 2022. Deep learning approaches to grasp synthesis: A review. arXiv preprint arXiv:2207.02556. es_ES
dc.description.references Patrizi, A., Gambosi, G., Zanzotto, F., 2021. Data augmentation using background replacement for automated sorting of littered waste. J. of Imaging 7(8), 144. https://doi.org/10.3390/jimaging7080144 es_ES
dc.description.references Redmon, J., 2014. Darknet: Open source neural networks in C. http://pjreddie.com/darknet/ es_ES
dc.description.references Sahbani, A., El-Khoury, S., Bidaud, P., 2012. An overview of 3d object grasp synthesis algorithms. Robotics and Autonomous Systems 60 (3), 326-336. https://doi.org/10.1016/j.robot.2011.07.016 es_ES
dc.description.references Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., Chen, L.-C., 2018. Mobilenetv2: Inverted residuals and linear bottlenecks. In: IEEE Conf. on Computer Vision and Pattern Recognition. pp. 4510-4520. https://doi.org/10.1109/CVPR.2018.00474 es_ES
dc.description.references Sandykbayeva, D., Kappassov, Z., Orazbayev, B., 2022. Vibrotouch: Active tactile sensor for contact detection and force sensing via vibrations. Sensors 22 (17). https://doi.org/10.3390/s22176456 es_ES
dc.description.references Shaw-Cortez, W., Oetomo, D., Manzie, C., Choong, P., 2018. Tactile-based blind grasping: A discrete-time object manipulation controller for robotic hands. IEEE Robotics and Automation Letters 3 (2), 1064-1071. https://doi.org/10.1109/LRA.2018.2794612 es_ES
dc.description.references Simonyan, K., Zisserman, A., 2015. Very deep convolutional networks for large-scale image recognition. In: 3rd Int. Conf. on Learning Representations. https://doi.org/10.48550/arXiv.1409.1556 es_ES
dc.description.references Suárez, R., Palomo-Avellaneda, L., Martínez, J., Clos, D., García, N., 2020. Manipulador móvil, bibrazo y diestro con nuevas ruedas omnidireccionales. Revista Iberoamericana de Automática e Informática industrial 17 (1), 10-21. https://doi.org/10.4995/riai.2019.11422 es_ES
dc.description.references Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z., 2016. Rethinking the inception architecture for computer vision. In: IEEE Conf. on Computer Vision and Pattern Recognition. pp. 2818-2826. https://doi.org/10.1109/CVPR.2016.308 es_ES
dc.description.references Velasco, E., Zapata-Impata, B. S., Gil, P., Torres, F., 2020. Clasificación de objetos usando percepción bimodal de palpación única en acciones de agarre robótico. Revista Iberoamericana de Automática e Informática industrial 17 (1), 44-55. https://doi.org/10.4995/riai.2019.10923 es_ES
dc.description.references Vo, A. H., Son, L., Vo, M., Le, T., 2019. A novel framework for trash classification using deep transfer learning. IEEE Access 7, 178631-178639. https://doi.org/10.1109/ACCESS.2019.2959033 es_ES
dc.description.references Ward-Cherrier, B., Pestell, N., Cramphorn, L., Winstone, B., Giannaccini, M. E., Rossiter, J., Lepora, N. F., 2018. The tactip family: Soft optical tactile sensors with 3d-printed biomimetic morphologies. Soft robotics 5 (2), 216-227. https://doi.org/10.1089/soro.2017.0052 es_ES
dc.description.references Yao, T., Guo, X., Li, C., Qi, H., Lin, H., Liu, L., Dai, Y., Qu, L., Huang, Z., Liu, P., et al., 2020. Highly sensitive capacitive flexible 3d-force tactile sensors for robotic grasping and manipulation. Journal of Physics D: Applied Physics 53 (44), 445109. https://doi.org/10.1088/1361-6463/aba5c0 es_ES
dc.description.references Yuan, W., Dong, S., Adelson, E. H., 2017. Gelsight: High-resolution robot tactile sensors for estimating geometry and force. Sensors 17 (12), 2762. https://doi.org/10.3390/s17122762 es_ES
dc.description.references Zapata-Impata, B., Gil, P., Pomares, J., Torres, F., 2019a. Fast geometry-based computation of grasping points on three-dimensional point clouds. Int. J. of Advanced Robotic Systems, 1-18. https://doi.org/10.1177/1729881419831846 es_ES
dc.description.references Zapata-Impata, B. S., Gil, P., Torres, F., 2019b. Learning spatio temporal tactile features with a convlstm for the direction of slip detection. Sensors 19 (3), 523. https://doi.org/10.3390/s19030523 es_ES

