DC Field | Value | Language |
dc.creator | Viturino, Caio Cristiano Barros | - |
dc.date.accessioned | 2024-04-04T18:03:01Z | - |
dc.date.available | 2024-03-27 | - |
dc.date.available | 2024-04-04T18:03:01Z | - |
dc.date.issued | 2023-12-01 | - |
dc.identifier.uri | https://repositorio.ufba.br/handle/ri/39253 | - |
dc.description.abstract | In recent years, robotic grasping methods based on deep learning have outperformed traditional methods. However, most of these methods use planar grasps due to the high computational cost associated with 6D grasps. Planar grasps, despite their lower computational cost, have spatial limitations that restrict their applicability in complex environments, such as grasping objects inside 3D printers. Some robotic grasping techniques generate only one viable grasp per object. However, it is essential to obtain multiple possible grasps per object, as not all generated grasps have a viable kinematic solution or avoid collisions with nearby obstacles. To overcome these limitations, a robotic grasping method is proposed that is capable of generating multiple selective 6D grasps per object while avoiding collisions with adjacent obstacles. Grasping tests were carried out in an Additive Manufacturing Unit, which presents a high level of complexity due to the possibility of collisions between the end effector and the inside of the printer. Experimental results indicate that it is possible to achieve a success rate of 62% in the 6D grasping of manufactured parts in confined environments. In addition, a success rate of 68% and a throughput of 177 MPPH (Mean Picks Per Hour) were achieved in the selective planar grasping of objects positioned on flat surfaces. The UR5 robotic arm, the Intel RealSense D435 camera, and the Robotiq 2F-140 end effector were used to validate the proposed method in real experiments. | pt_BR |
dc.language | eng | pt_BR |
dc.publisher | Universidade Federal da Bahia | pt_BR |
dc.rights | Acesso Aberto (Open Access) | pt_BR |
dc.subject | preensão robótica | pt_BR |
dc.subject | redes neurais convolucionais | pt_BR |
dc.subject | manipuladores robóticos | pt_BR |
dc.subject.other | robotic grasping | pt_BR |
dc.subject.other | convolutional neural network | pt_BR |
dc.subject.other | robotic manipulators | pt_BR |
dc.title | Grasping and identifying objects in unstructured environments with deep learning methods. | pt_BR |
dc.type | Tese (Doctoral thesis) | pt_BR |
dc.publisher.program | Programa de Pós-Graduação em Engenharia Elétrica (PPGEE) | pt_BR |
dc.publisher.initials | UFBA | pt_BR |
dc.publisher.country | Brasil | pt_BR |
dc.subject.cnpq | CNPQ::ENGENHARIAS::ENGENHARIA ELETRICA | pt_BR |
dc.contributor.advisor1 | Conceição, André Gustavo Scolari | - |
dc.contributor.advisor1ID | https://orcid.org/0000-0001-8124-6253 | pt_BR |
dc.contributor.advisor1Lattes | http://lattes.cnpq.br/6840685961007897 | pt_BR |
dc.contributor.referee1 | Conceição, André Gustavo Scolari | - |
dc.contributor.referee1ID | https://orcid.org/0000-0001-8124-6253 | pt_BR |
dc.contributor.referee1Lattes | http://lattes.cnpq.br/6840685961007897 | pt_BR |
dc.contributor.referee2 | Ribeiro, Tiago Trindade | - |
dc.contributor.referee2ID | https://orcid.org/0000-0002-8971-0687 | pt_BR |
dc.contributor.referee2Lattes | http://lattes.cnpq.br/3521539442337416 | pt_BR |
dc.contributor.referee3 | Simas Filho, Eduardo Furtado de | - |
dc.contributor.referee3ID | https://orcid.org/0000-0001-8707-785X | pt_BR |
dc.contributor.referee3Lattes | http://lattes.cnpq.br/6012321412801915 | pt_BR |
dc.contributor.referee4 | Grassi Junior, Valdir | - |
dc.contributor.referee4ID | https://orcid.org/0000-0001-6753-139X | pt_BR |
dc.contributor.referee4Lattes | http://lattes.cnpq.br/9407327032021776 | pt_BR |
dc.contributor.referee5 | Santos, Eduardo Telmo Fonseca | - |
dc.contributor.referee5Lattes | http://lattes.cnpq.br/9270620210307470 | pt_BR |
dc.creator.ID | https://orcid.org/0000-0003-2822-6493 | pt_BR |
dc.creator.Lattes | http://lattes.cnpq.br/4355017524299952 | pt_BR |
dc.description.resumo | In recent years, robotic grasping methods based on deep learning have outperformed traditional methods. However, most of these methods use planar grasps due to the high computational cost associated with 6D grasps. Planar grasps, despite their lower computational cost, have spatial limitations that restrict their applicability in complex environments, such as grasping objects inside 3D printers. Some robotic grasping techniques generate only one viable grasp per object. However, it is essential to obtain multiple possible grasps per object, as not all generated grasps have a viable kinematic solution or avoid collisions with nearby obstacles. To overcome these limitations, a robotic grasping method is proposed that is capable of generating multiple selective 6D grasps per object while avoiding collisions with adjacent obstacles. Grasping tests were carried out in an Additive Manufacturing Unit, which presents a high level of complexity due to the possibility of collisions between the end effector and the inside of the printer. Experimental results indicate that it is possible to achieve a success rate of 62% in the 6D grasping of manufactured parts in confined environments. In addition, a success rate of 68% and a throughput of 177 MPPH (Mean Picks Per Hour) were achieved in the selective planar grasping of objects positioned on flat surfaces. The UR5 robotic arm, the Intel RealSense D435 camera, and the Robotiq 2F-140 end effector were used to validate the proposed method in real experiments. | pt_BR |
dc.publisher.department | Escola Politécnica | pt_BR |
dc.type.degree | Doutorado (Doctorate) | pt_BR |
Appears in collections: | Tese (PPGEE) |