Universidade Federal da Bahia | Repositório Institucional da UFBA
Use this identifier to cite or link to this item: https://repositorio.ufba.br/handle/ri/39253
Full metadata record
DC Field | Value | Language
dc.creator | Viturino, Caio Cristiano Barros | -
dc.date.accessioned | 2024-04-04T18:03:01Z | -
dc.date.available | 2024-03-27 | -
dc.date.available | 2024-04-04T18:03:01Z | -
dc.date.issued | 2023-12-01 | -
dc.identifier.uri | https://repositorio.ufba.br/handle/ri/39253 | -
dc.description.abstract | In recent years, robotic grasping methods based on deep learning have outperformed traditional methods. However, most of these methods use planar grasps due to the high computational cost associated with 6D grasps. Planar grasps, despite their lower computational cost, have spatial limitations that restrict their applicability in complex environments, such as grasping objects inside 3D printers. Some robotic grasping techniques generate only one viable grasp per object. However, it is essential to obtain multiple possible grasps per object, as not all generated grasps have a viable kinematic solution or avoid collisions with nearby obstacles. To overcome these limitations, a robotic grasping method is proposed that is capable of generating multiple selective 6D grasps per object while avoiding collisions with adjacent obstacles. Grasping tests were carried out in an Additive Manufacturing Unit, which presents a high level of complexity due to the possibility of collisions between the end effector and the inside of the printer. Experimental results indicate that it is possible to achieve a success rate of 62% in the 6D grasping of manufactured parts in confined environments. In addition, a success rate of 68% and 177 MPPH (Mean Picks Per Hour) were achieved in the selective planar grasping of objects positioned on flat surfaces. The UR5 robotic arm, the Intel RealSense D435 camera, and the Robotiq 2F-140 end effector were used to validate the proposed method in real experiments. | pt_BR
dc.language | eng | pt_BR
dc.publisher | Universidade Federal da Bahia | pt_BR
dc.rights | Acesso Aberto | pt_BR
dc.subject | preensão robótica | pt_BR
dc.subject | redes neurais convolucionais | pt_BR
dc.subject | manipuladores robóticos | pt_BR
dc.subject.other | robotic grasping | pt_BR
dc.subject.other | convolutional neural network | pt_BR
dc.subject.other | robotic manipulators | pt_BR
dc.title | Grasping and identifying objects in unstructured environments with deep learning methods. | pt_BR
dc.type | Tese | pt_BR
dc.publisher.program | Programa de Pós-Graduação em Engenharia Elétrica (PPGEE) | pt_BR
dc.publisher.initials | UFBA | pt_BR
dc.publisher.country | Brasil | pt_BR
dc.subject.cnpq | CNPQ::ENGENHARIAS::ENGENHARIA ELETRICA | pt_BR
dc.contributor.advisor1 | Conceição, André Gustavo Scolari | -
dc.contributor.advisor1ID | https://orcid.org/0000-0001-8124-6253 | pt_BR
dc.contributor.advisor1Lattes | http://lattes.cnpq.br/6840685961007897 | pt_BR
dc.contributor.referee1 | Conceição, André Gustavo Scolari | -
dc.contributor.referee1ID | https://orcid.org/0000-0001-8124-6253 | pt_BR
dc.contributor.referee1Lattes | http://lattes.cnpq.br/6840685961007897 | pt_BR
dc.contributor.referee2 | Ribeiro, Tiago Trindade | -
dc.contributor.referee2ID | https://orcid.org/0000-0002-8971-0687 | pt_BR
dc.contributor.referee2Lattes | http://lattes.cnpq.br/3521539442337416 | pt_BR
dc.contributor.referee3 | Simas Filho, Eduardo Furtado de | -
dc.contributor.referee3ID | https://orcid.org/0000-0001-8707-785X | pt_BR
dc.contributor.referee3Lattes | http://lattes.cnpq.br/6012321412801915 | pt_BR
dc.contributor.referee4 | Grassi Junior, Valdir | -
dc.contributor.referee4ID | https://orcid.org/0000-0001-6753-139X | pt_BR
dc.contributor.referee4Lattes | http://lattes.cnpq.br/9407327032021776 | pt_BR
dc.contributor.referee5 | Santos, Eduardo Telmo Fonseca | -
dc.contributor.referee5Lattes | http://lattes.cnpq.br/9270620210307470 | pt_BR
dc.creator.ID | https://orcid.org/0000-0003-2822-6493 | pt_BR
dc.creator.Lattes | http://lattes.cnpq.br/4355017524299952 | pt_BR
dc.description.resumo | In recent years, robotic grasping methods based on deep learning have outperformed traditional methods. However, most of these methods use planar grasps due to the high computational cost associated with 6D grasps. Planar grasps, despite their lower computational cost, have spatial limitations that restrict their applicability in complex environments, such as grasping objects inside 3D printers. Some robotic grasping techniques generate only one viable grasp per object. However, it is essential to obtain multiple possible grasps per object, as not all generated grasps have a viable kinematic solution or avoid collisions with nearby obstacles. To overcome these limitations, a robotic grasping method is proposed that is capable of generating multiple selective 6D grasps per object while avoiding collisions with adjacent obstacles. Grasping tests were carried out in an Additive Manufacturing Unit, which presents a high level of complexity due to the possibility of collisions between the end effector and the inside of the printer. Experimental results indicate that it is possible to achieve a success rate of 62% in the 6D grasping of manufactured parts in confined environments. In addition, a success rate of 68% and 177 MPPH (Mean Picks Per Hour) were achieved in the selective planar grasping of objects positioned on flat surfaces. The UR5 robotic arm, the Intel RealSense D435 camera, and the Robotiq 2F-140 end effector were used to validate the proposed method in real experiments. | pt_BR
dc.publisher.department | Escola Politécnica | pt_BR
dc.type.degree | Doutorado | pt_BR
Appears in Collections: Tese (PPGEE)

Files in This Item:
File | Description | Size | Format
Caio Cristiano Barros Viturino. TCC - tese.pdf | | 74.66 MB | Adobe PDF | View/Open


Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.