
Kechen Song (Associate Professor)


  • Supervisor of Doctorate Candidates; Supervisor of Master's Candidates
  • Name (English): Kechen Song
  • E-Mail:
  • Education Level: With Certificate of Graduation for Doctorate Study
  • Gender: Male
  • Degree: Doctorate
  • Teacher College: School of Mechanical Engineering and Automation
  • Status: Employed
  • Alma Mater: Northeastern University



Robotic Visual Grasping Detection

Data-driven robotic visual grasping detection for unknown objects
This paper presents a comprehensive survey of data-driven robotic visual grasping detection (DRVGD) for unknown objects. Taking DRVGD for unknown objects as the guiding problem, we review both object-oriented and scene-oriented aspects. Object-oriented DRVGD targets the physical properties of unknown objects, such as shape, texture, and rigidity, by which objects can be classified as conventional or challenging. Scene-oriented DRVGD focuses on unstructured scenes and is explored in two aspects according to object-to-object position relationships: grasping isolated objects and grasping stacked objects. In addition, the paper provides a detailed review of the associated grasping representations and datasets. Finally, the challenges of DRVGD and future directions are pointed out.
Hongkun Tian, Kechen Song, et al. Data-driven Robotic Visual Grasping Detection for Unknown Objects: A Problem-oriented Review [J]. Expert Systems With Applications, 2023, 211, 118624. (paper)
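
As background for the grasping representations the survey reviews, below is a minimal sketch of the five-parameter grasp-rectangle representation used by datasets such as Cornell and Jacquard; the GraspRectangle class and its field names are illustrative, not taken from the paper.

```python
# Hypothetical helper (not from the paper): the planar grasp-rectangle
# representation g = (x, y, theta, w, h) used by datasets such as Cornell
# and Jacquard -- center, in-plane rotation, opening width, and jaw size.
from dataclasses import dataclass
import math

@dataclass
class GraspRectangle:
    x: float      # grasp center, image column (pixels)
    y: float      # grasp center, image row (pixels)
    theta: float  # in-plane rotation, typically in [-pi/2, pi/2)
    w: float      # gripper opening width (pixels)
    h: float      # jaw size / rectangle height (pixels)

    def corners(self):
        """Four rectangle corners, e.g., for drawing or rectangle-IoU checks."""
        c, s = math.cos(self.theta), math.sin(self.theta)
        offsets = [(-self.w / 2, -self.h / 2), (self.w / 2, -self.h / 2),
                   (self.w / 2, self.h / 2), (-self.w / 2, self.h / 2)]
        return [(self.x + u * c - v * s, self.y + u * s + v * c)
                for u, v in offsets]
```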


Lightweight Pixel-Wise Generative Robot Grasping Detection
Grasp detection is one of the essential tasks for robots to achieve automation and intelligence. Existing grasp detection mainly relies on data-driven discriminative and generative strategies, and generative strategies hold significant efficiency advantages over discriminative ones. RGB and depth (RGB-D) data are widely used as grasping data sources because they carry sufficient information and are cheap to acquire, and RGB-D fusion has shown advantages over using RGB or depth alone. However, existing research has mainly focused on early fusion and late fusion, which struggle to fully exploit the information of both modalities. It is therefore crucial to improve grasping accuracy while leveraging the knowledge of both modalities and remaining lightweight and real-time. This article proposes a pixel-wise RGB-D dense fusion method based on a generative strategy. The method is validated both on public datasets and on a real robot platform: accuracy rates of 98.9% and 94.0% are achieved on the Cornell and Jacquard datasets, with an efficiency of only 15 ms per image. The average success rate on an AUBO i5 robot platform with a DH-AG-95 parallel gripper reaches 94.0% for single-object scenes, 86.7% for three-object scenes, and 84% for five-object scenes, outperforming existing state-of-the-art methods.
Hongkun Tian, Kechen Song, et al. Lightweight Pixel-Wise Generative Robot Grasping Detection Based on RGB-D Dense Fusion [J]. IEEE Transactions on Instrumentation and Measurement, 2022, 71, 5017912. (paper) (video)
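
To make the pixel-wise generative strategy concrete, here is a minimal sketch assuming a PyTorch-style model. The module names, the 1x1-convolution fusion, and the sin/cos angle encoding are common choices in pixel-wise grasp detection and are assumptions here, not the paper's released architecture.

```python
# Minimal sketch (assumptions, not the paper's code): RGB and depth feature
# maps are fused densely at every pixel, and the network regresses dense
# grasp maps -- quality, angle (as sin/cos of 2*theta), and gripper width.
import torch
import torch.nn as nn

class PixelWiseFusion(nn.Module):
    """Fuse RGB and depth feature maps per pixel with a 1x1 convolution."""
    def __init__(self, channels: int):
        super().__init__()
        self.mix = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, rgb_feat, depth_feat):
        return self.mix(torch.cat([rgb_feat, depth_feat], dim=1))

class GenerativeGraspHead(nn.Module):
    """Decode fused features into dense grasp maps (quality, angle, width)."""
    def __init__(self, channels: int):
        super().__init__()
        self.quality = nn.Conv2d(channels, 1, kernel_size=1)  # per-pixel grasp score
        self.cos2t = nn.Conv2d(channels, 1, kernel_size=1)    # cos(2*theta)
        self.sin2t = nn.Conv2d(channels, 1, kernel_size=1)    # sin(2*theta); 2*theta since grasp angle is pi-periodic
        self.width = nn.Conv2d(channels, 1, kernel_size=1)    # normalized gripper opening

    def forward(self, feat):
        q = torch.sigmoid(self.quality(feat))
        angle = 0.5 * torch.atan2(self.sin2t(feat), self.cos2t(feat))
        w = torch.sigmoid(self.width(feat))
        return q, angle, w

if __name__ == "__main__":
    fuse, head = PixelWiseFusion(32), GenerativeGraspHead(32)
    rgb_f, depth_f = torch.randn(1, 32, 56, 56), torch.randn(1, 32, 56, 56)
    q, angle, w = head(fuse(rgb_f, depth_f))
    # The best grasp center is the argmax of q; angle and width are then
    # read out at that pixel.
    print(q.shape, angle.shape, w.shape)  # each: (1, 1, 56, 56)
```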


Rotation Adaptive Grasping Estimation Network Oriented to Unknown Objects Based on Novel RGB-D Fusion Strategy

This paper proposes a framework for rotation-adaptive grasping estimation based on a novel RGB-D fusion strategy. Specifically, RGB and depth features are fused in stages with shared weights under the proposed Multi-step Weight-learning Fusion (MWF) strategy. The spatial position encoding is learned autonomously by the proposed Rotation Adaptive Conjoin (RAC) encoder, achieving spatial and rotational adaptiveness for unknown objects in unknown poses. In addition, a Multi-dimensional Interaction-guided Attention (MIA) decoding strategy over the fused multiscale features is proposed to highlight effective features and suppress invalid ones. The method is validated on the Cornell and Jacquard grasping datasets with cross-validation accuracies of 99.3% and 94.6%, and achieves single-object and multi-object grasping success rates of 95.625% and 87.5% on a robot platform.

Hongkun Tian, Kechen Song, et al. Rotation Adaptive Grasping Estimation Network Oriented to Unknown Objects Based on Novel RGB-D Fusion Strategy [J]. Engineering Applications of Artificial Intelligence, 2023. (paper)
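
To illustrate how stage-wise fusion with shared weights (the MWF idea above) might look, here is a minimal sketch; the per-pixel gating, module names, and channel sizes are hypothetical interpretations, not the paper's implementation.

```python
# Hypothetical sketch of multi-step fusion with shared weights: one fusion
# block is reused after every encoder stage, so RGB and depth are blended
# repeatedly with a single set of learned fusion parameters.
import torch
import torch.nn as nn

class SharedStageFusion(nn.Module):
    """One fusion block whose weights are shared across all encoder stages."""
    def __init__(self, channels: int):
        super().__init__()
        # A learned per-pixel gate decides how much each modality contributes.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, rgb, depth):
        g = self.gate(torch.cat([rgb, depth], dim=1))
        return g * rgb + (1.0 - g) * depth  # convex per-pixel blend

class MultiStepFusionEncoder(nn.Module):
    """Parallel RGB/depth stages, fused after each stage with shared weights."""
    def __init__(self, channels: int, num_stages: int = 3):
        super().__init__()
        self.rgb_stages = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_stages))
        self.depth_stages = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_stages))
        self.fuse = SharedStageFusion(channels)  # single instance => shared weights

    def forward(self, rgb, depth):
        fused = []
        for rgb_stage, depth_stage in zip(self.rgb_stages, self.depth_stages):
            rgb = torch.relu(rgb_stage(rgb))
            depth = torch.relu(depth_stage(depth))
            fused.append(self.fuse(rgb, depth))  # same block at every step
        return fused  # multiscale fused features for a decoder (e.g., attention)
```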