Columns:
  title: string (30–170 characters)
  detail_url: string (45 characters)
  author_list: sequence (1–14 authors)
  abstract: string (403-character truncated preview)
Meta-Reinforcement Learning for Robotic Industrial Insertion Tasks
https://ieeexplore.ieee.org/document/9340848/
[ "Gerrit Schoettler", "Ashvin Nair", "Juan Aparicio Ojea", "Sergey Levine", "Eugen Solowjow", "Gerrit Schoettler", "Ashvin Nair", "Juan Aparicio Ojea", "Sergey Levine", "Eugen Solowjow" ]
Robotic insertion tasks are characterized by contact and friction mechanics, making them challenging for conventional feedback control methods due to unmodeled physical effects. Reinforcement learning (RL) is a promising approach for learning control policies in such settings. However, RL can be unsafe during exploration and might require a large amount of real-world training data, which is expens...
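Most meta-RL pipelines of the kind this abstract alludes to hinge on a fast inner-loop adaptation step. A toy MAML-style sketch of that step, assuming a made-up linear policy and task data (not the paper's actual model or objective):

```python
import torch

# Toy MAML-style inner loop: adapt a small policy to one task's data with a
# single gradient step. Policy, data, and loss are illustrative stand-ins.
policy = torch.nn.Linear(4, 2)                 # toy policy: state -> action
states = torch.randn(8, 4)                     # task-specific states
target_actions = torch.randn(8, 2)             # supervision from task data

loss = torch.nn.functional.mse_loss(policy(states), target_actions)
grads = torch.autograd.grad(loss, policy.parameters(), create_graph=True)
inner_lr = 0.01
adapted = [p - inner_lr * g for p, g in zip(policy.parameters(), grads)]
# A meta-objective would evaluate `adapted` on held-out task data and
# backpropagate through this update to improve the initial parameters.
```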
Learning Motion Parameterizations of Mobile Pick and Place Actions from Observing Humans in Virtual Environments
https://ieeexplore.ieee.org/document/9341458/
[ "Gayane Kazhoyan", "Alina Hawkin", "Sebastian Koralewski", "Andrei Haidu", "Michael Beetz", "Gayane Kazhoyan", "Alina Hawkin", "Sebastian Koralewski", "Andrei Haidu", "Michael Beetz" ]
In this paper, we present an approach and an implemented pipeline for transferring data acquired from observing humans in virtual environments onto robots acting in the real world, and adapting the data accordingly to achieve successful task execution. We demonstrate our pipeline by inferring seven different symbolic and subsymbolic motion parameters of mobile pick and place actions, which allows ...
A control scheme for haptic inspection and partial modification of kinematic behaviors
https://ieeexplore.ieee.org/document/9341594/
[ "Dimitrios Papageorgiou", "Zoe Doulgeri", "Dimitrios Papageorgiou", "Zoe Doulgeri" ]
Over the last decades, Learning from Demonstration (LfD) has become a widely accepted solution for the problem of robot programming. According to LfD, the kinematic behavior is "taught" to the robot, based on a set of motion demonstrations performed by the human-teacher. The demonstrations can be either captured via kinesthetic teaching or external sensors, e.g., a camera. In this work, a controll...
Goal-driven variable admittance control for robot manual guidance
https://ieeexplore.ieee.org/document/9341722/
[ "Davide Bazzi", "Miriam Lapertosa", "Andrea Maria Zanchettin", "Paolo Rocco", "Davide Bazzi", "Miriam Lapertosa", "Andrea Maria Zanchettin", "Paolo Rocco" ]
In this paper we address variable admittance control for human-robot physical interaction in manual guidance applications. In the proposed solution, the parameters of the admittance filter can change not only as a function of the current state of motion (i.e. whether the human guiding the robot is accelerating or decelerating) but also with reference to a predefined goal position. The human is in ...
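To make the mechanism concrete, below is a minimal 1-DOF discrete admittance filter, M*a + D*v = f_ext, whose damping grows when the operator decelerates and as the end-effector approaches the goal. The gains, switching rule, and goal term are illustrative assumptions, not the authors' design:

```python
# 1-DOF variable admittance filter; all constants are made up for illustration.
M, dt = 2.0, 0.01             # virtual mass [kg], sample time [s]
x, v, a_prev = 0.0, 0.0, 0.0  # position [m], velocity [m/s], last acceleration
x_goal = 0.5                  # predefined goal position [m]

for k in range(1000):
    f_ext = 5.0 if k < 500 else 0.0              # human guiding force [N]
    D = 4.0 if v * a_prev >= 0.0 else 10.0       # damp more when decelerating
    D += 20.0 * max(0.0, 1.0 - abs(x_goal - x))  # and when close to the goal
    a = (f_ext - D * v) / M                      # admittance dynamics
    v += a * dt
    x += v * dt
    a_prev = a
print(f"settled at x = {x:.3f} m")
```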
Physical Human-Robot Interaction with Real Active Surfaces using Haptic Rendering on Point Clouds
https://ieeexplore.ieee.org/document/9341053/
[ "Michael Sommerhalder", "Yves Zimmermann", "Burak Cizmeci", "Robert Riener", "Marco Hutter", "Michael Sommerhalder", "Yves Zimmermann", "Burak Cizmeci", "Robert Riener", "Marco Hutter" ]
During robot-assisted therapy of hemiplegic patients, interaction with the patient must be intrinsically safe. Straightforward collision avoidance solutions can meet this safety requirement with conservative margins. These margins heavily reduce the robot's workspace and make interaction with the patient's unguided body parts impossible. However, interaction with one's own body is highly benefic...
Human-Drone Interaction for Aerially Manipulated Drilling using Haptic Feedback
https://ieeexplore.ieee.org/document/9340726/
[ "Dongbin Kim", "Paul Y. Oh", "Dongbin Kim", "Paul Y. Oh" ]
This paper presents a concept for haptic-based human-in-the-loop aerial manipulation for drilling. The concept serves as a case study for designing the human-drone interface to remotely drill with a mobile-manipulating drone. The work stems from the notion of using drones to perform dangerous tasks like material assembly and sensor insertion while vertically elevated from bridges, wind turbines, ...
Design and Implementation of a Haptic Measurement Glove to Create Realistic Human-Telerobot Interactions
https://ieeexplore.ieee.org/document/9340976/
[ "Evan Capelle", "William N. Benson", "Zachary Anderson", "Jerry B. Weinberg", "Jenna L. Gorlewicz", "Evan Capelle", "William N. Benson", "Zachary Anderson", "Jerry B. Weinberg", "Jenna L. Gorlewicz" ]
Although research indicates that telepresence robots offer a more socially telepresent alternative to conventional forms of remote communication, the lack of touch-based interactions presents challenges for both remote and local users. In order to address these challenges, we have designed and implemented a robotic manipulator emulating a human arm. However, contact interactions like handshakes wi...
Feeling the True Force in Haptic Telepresence for Flying Robots
https://ieeexplore.ieee.org/document/9341778/
[ "Alexander Moortgat-Pick", "Anna Adamczyk", "Teodor Tomić", "Sami Haddadin", "Alexander Moortgat-Pick", "Anna Adamczyk", "Teodor Tomić", "Sami Haddadin" ]
Haptic feedback in teleoperation of flying robots can enable safe flight in unknown and densely cluttered environments. It is typically part of the robot's control scheme and used to aid navigation and collision avoidance via artificial force fields displayed to the operator. However, to achieve fully immersive embodiment in this context, high fidelity force feedback is needed. In this paper we pr...
Barometer-based Tactile Skin for Anthropomorphic Robot Hand
https://ieeexplore.ieee.org/document/9341691/
[ "Risto Kõiva", "Tobias Schwank", "Guillaume Walck", "Martin Meier", "Robert Haschke", "Helge Ritter", "Risto Kõiva", "Tobias Schwank", "Guillaume Walck", "Martin Meier", "Robert Haschke", "Helge Ritter" ]
We present our second generation tactile sensor for the Shadow Dexterous Hand's palm. We were able to significantly improve the tactile sensor characteristics by utilizing our latest barometer-based tactile sensing technology with linear (R² ≥ 0.9996) sensor output and no noticeable hysteresis. The sensitivity threshold of the tactile cells and the spatial density were both dramatically increased....
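The quoted linearity figure is an ordinary least-squares statistic and is easy to reproduce on one's own calibration data; a sketch on synthetic force/reading pairs (the actual calibration data are not included here):

```python
import numpy as np

# Fit a line to (applied force, raw barometer reading) pairs and compute R^2.
force = np.linspace(0.0, 10.0, 50)                       # applied load [N]
raw = 3.2 * force + 0.5 + np.random.normal(0, 0.01, 50)  # synthetic counts

slope, offset = np.polyfit(force, raw, 1)                # least-squares line
pred = slope * force + offset
r2 = 1.0 - np.sum((raw - pred) ** 2) / np.sum((raw - raw.mean()) ** 2)
print(f"R^2 = {r2:.5f}")   # values >= 0.9996 indicate near-perfect linearity
```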
Adaptive Potential Scanning for a Tomographic Tactile Sensor with High Spatio-Temporal Resolution
https://ieeexplore.ieee.org/document/9341436/
[ "Hiroki Mitsubayashi", "Shunsuke Yoshimoto", "Akio Yamamoto", "Hiroki Mitsubayashi", "Shunsuke Yoshimoto", "Akio Yamamoto" ]
A tactile sensor with high spatio-temporal resolution will greatly contribute to improving the performance of object recognition and human interaction in robots. In addition, being able to switch between higher spatial and higher temporal resolution will allow for more versatile sensing. To realize such a sensor, this paper introduces a method of increasing the sensing electrodes and adaptively se...
A Biomimetic Tactile Fingerprint Induces Incipient Slip
https://ieeexplore.ieee.org/document/9341310/
[ "Jasper W. James", "Stephen J. Redmond", "Nathan F. Lepora", "Jasper W. James", "Stephen J. Redmond", "Nathan F. Lepora" ]
We present a modified TacTip biomimetic optical tactile sensor design which demonstrates the ability to induce and detect incipient slip, as confirmed by recording the movement of markers on the sensor's external surface. Incipient slip is defined as slippage of part, but not all, of the contact surface between the sensor and object. The addition of ridges - which mimic the friction ridges in the ...
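The detection principle is simple to state in code: track the marker field and flag frames in which only part of it moves. A toy NumPy sketch with invented marker positions and an assumed displacement threshold:

```python
import numpy as np

# Incipient slip = some, but not all, markers displace between frames.
prev = np.random.rand(30, 2) * 10.0          # marker positions at t-1 [mm]
curr = prev.copy()
curr[:10] += np.array([0.15, 0.0])           # peripheral markers start sliding

disp = np.linalg.norm(curr - prev, axis=1)   # per-marker displacement [mm]
moving = disp > 0.05                         # assumed slip threshold
frac = moving.mean()
if 0.0 < frac < 1.0:
    print(f"incipient slip: {frac:.0%} of markers moving")
elif frac == 1.0:
    print("gross slip: the whole contact is sliding")
```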
Noncontact Estimation of Stiffness Based on Optical Coherence Elastography under Acoustic Radiation Pressure
https://ieeexplore.ieee.org/document/9341235/
[ "Yuki Hashimoto", "Yasuaki Monnai", "Yuki Hashimoto", "Yasuaki Monnai" ]
In this study, we propose a method of noncontact elastography, which allows us to investigate stiffness of soft structures by combining optical and acoustic modalities. We use optical coherence tomography (OCT) as a means of detecting internal deformation of a sample appearing in response to a mechanical force applied by acoustic radiation pressure. Unlike most other stiffness sensing, this met...
Deep Tactile Experience: Estimating Tactile Sensor Output from Depth Sensor Data
https://ieeexplore.ieee.org/document/9341596/
[ "Karankumar Patel", "Soshi Iba", "Nawid Jamali", "Karankumar Patel", "Soshi Iba", "Nawid Jamali" ]
Tactile sensing is inherently contact based. To use tactile data, robots need to make contact with the surface of an object. This is inefficient in applications where an agent needs to make a decision between multiple alternatives that depend on the physical properties of the contact location. We propose a method to get tactile data in a non-invasive manner. The proposed method estimates the output o...
Learning to Live Life on the Edge: Online Learning for Data-Efficient Tactile Contour Following
https://ieeexplore.ieee.org/document/9341565/
[ "Elizabeth A. Stone", "Nathan F. Lepora", "David A.W. Barton", "Elizabeth A. Stone", "Nathan F. Lepora", "David A.W. Barton" ]
Tactile sensing has been used for a variety of robotic exploration and manipulation tasks but a common constraint is a requirement for a large amount of training data. This paper addresses the issue of data-efficiency by proposing a novel method for online learning based on a Gaussian Process Latent Variable Model (GP-LVM), whereby the robot learns from tactile data whilst performing a contour fol...
Interactive Tactile Perception for Classification of Novel Object Instances
https://ieeexplore.ieee.org/document/9341795/
[ "Radu Corcodel", "Siddarth Jain", "Jeroen van Baar", "Radu Corcodel", "Siddarth Jain", "Jeroen van Baar" ]
In this paper, we present a novel approach for classification of unseen object instances from interactive tactile feedback. Furthermore, we demonstrate the utility of a low resolution tactile sensor array for tactile perception that can potentially close the gap between vision and physical contact for manipulation. We contrast our sensor to high-resolution camera-based tactile sensors. Our propose...
Walking on TacTip toes: A tactile sensing foot for walking robots
https://ieeexplore.ieee.org/document/9340926/
[ "Elizabeth A. Stone", "Nathan F. Lepora", "David A.W. Barton", "Elizabeth A. Stone", "Nathan F. Lepora", "David A.W. Barton" ]
Little research into tactile feet has been done for walking robots despite the benefits such feedback could give when walking on uneven terrain. This paper describes the development of a simple, robust and inexpensive tactile foot for legged robots based on a high-resolution biomimetic TacTip tactile sensor. Several design improvements were made to facilitate tactile sensing while walking, includi...
TactileSGNet: A Spiking Graph Neural Network for Event-based Tactile Object Recognition
https://ieeexplore.ieee.org/document/9341421/
[ "Fuqiang Gu", "Weicong Sng", "Tasbolat Taunyazov", "Harold Soh", "Fuqiang Gu", "Weicong Sng", "Tasbolat Taunyazov", "Harold Soh" ]
Tactile perception is crucial for a variety of robot tasks including grasping and in-hand manipulation. New advances in flexible, event-driven, electronic skins may soon endow robots with touch perception capabilities similar to humans. These electronic skins respond asynchronously to changes (e.g., in pressure, temperature), and can be laid out irregularly on the robot’s body or end-effector. How...
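The graph construction is the crux of operating on such irregular layouts. One plausible choice (assumed here, not necessarily the paper's) is a k-nearest-neighbour graph over the taxel coordinates:

```python
import numpy as np
from scipy.spatial import cKDTree

# Link each irregularly placed taxel to its k nearest neighbours.
taxels = np.random.rand(40, 2)               # irregular 2D taxel layout
k = 4
tree = cKDTree(taxels)
_, idx = tree.query(taxels, k=k + 1)         # nearest neighbour is the taxel itself

edges = {(i, int(j)) for i in range(len(taxels)) for j in idx[i, 1:]}
edges |= {(j, i) for i, j in edges}          # symmetrise: undirected graph
print(f"{len(taxels)} nodes, {len(edges)} directed edges")
```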
A Miniaturised Neuromorphic Tactile Sensor integrated with an Anthropomorphic Robot Hand
https://ieeexplore.ieee.org/document/9341391/
[ "Benjamin Ward-Cherrier", "Jörg Conradt", "Manuel G. Catalano", "Matteo Bianchi", "Nathan F. Lepora", "Benjamin Ward-Cherrier", "Jörg Conradt", "Manuel G. Catalano", "Matteo Bianchi", "Nathan F. Lepora" ]
Restoring tactile sensation is essential to enable in-hand manipulation and the smooth, natural control of upper-limb prosthetic devices. Here we present a platform to contribute to that long-term vision, combining an anthropomorphic robot hand (QB SoftHand) with a neuromorphic optical tactile sensor (neuroTac). Neuromorphic sensors aim to produce efficient, spike-based representations of informat...
Fast Texture Classification Using Tactile Neural Coding and Spiking Neural Network
https://ieeexplore.ieee.org/document/9340693/
[ "Tasbolat Taunyazov", "Yansong Chua", "Ruihan Gao", "Harold Soh", "Yan Wu", "Tasbolat Taunyazov", "Yansong Chua", "Ruihan Gao", "Harold Soh", "Yan Wu" ]
Touch is arguably the most important sensing modality in physical interactions. However, tactile sensing has been largely under-explored in robotics applications, owing to the complexity of making perceptual inferences, until recent advances in machine learning, and deep learning in particular. Touch perception is strongly influenced by both its temporal dimension similar to audition and its s...
Spatio-temporal Attention Model for Tactile Texture Recognition
https://ieeexplore.ieee.org/document/9341333/
[ "Guanqun Cao", "Yi Zhou", "Danushka Bollegala", "Shan Luo", "Guanqun Cao", "Yi Zhou", "Danushka Bollegala", "Shan Luo" ]
Recently, tactile sensing has attracted great interest in robotics, especially for facilitating exploration of unstructured environments and effective manipulation. A detailed understanding of the surface textures via tactile sensing is essential for many of these tasks. Previous works on texture recognition using camera-based tactile sensors have been limited to treating all regions in one tactil...
GelTip: A Finger-shaped Optical Tactile Sensor for Robotic Manipulation
https://ieeexplore.ieee.org/document/9340881/
[ "Daniel Fernandes Gomes", "Zhonglin Lin", "Shan Luo", "Daniel Fernandes Gomes", "Zhonglin Lin", "Shan Luo" ]
Sensing contacts throughout the fingers is an essential capability for a robot to perform manipulation tasks in cluttered environments. However, existing tactile sensors either only have a flat sensing surface or a compliant tip with a limited sensing area. In this paper, we propose a novel optical tactile sensor, the GelTip, that is shaped as a finger and can sense contacts on any location of its...
Highly Underactuated Radial Gripper for Automated Planar Grasping and Part Fixturing
https://ieeexplore.ieee.org/document/9341103/
[ "Vatsal V. Patel", "Andrew S. Morgan", "Aaron M. Dollar", "Vatsal V. Patel", "Andrew S. Morgan", "Aaron M. Dollar" ]
Grasping can be conceptualized as the ability of an end-effector to temporarily attach or fixture an object to a manipulator, constraining all motion of the workpiece with respect to the end-effector's base frame. This seemingly simplistic action often requires excessive sensing, computation, or control to achieve with multi-fingered hands, which can be mitigated with underactuated mechanisms. In t...
Soft-bubble grippers for robust and perceptive manipulation
https://ieeexplore.ieee.org/document/9341534/
[ "Naveen Kuppuswamy", "Alex Alspach", "Avinash Uttamchandani", "Sam Creasey", "Takuya Ikeda", "Russ Tedrake", "Naveen Kuppuswamy", "Alex Alspach", "Avinash Uttamchandani", "Sam Creasey", "Takuya Ikeda", "Russ Tedrake" ]
Manipulation in cluttered environments like homes requires stable grasps, precise placement and robustness against external contact. Towards addressing these challenges, we present the Soft-bubble gripper system that combines highly compliant gripping surfaces with dense-geometry visuotactile sensing and facilitates multiple kinds of tactile perception. We first present several mechanical design a...
Design and Experimentation of a Variable Stiffness Bistable Gripper
https://ieeexplore.ieee.org/document/9341497/
[ "Elisha Lerner", "Haijie Zhang", "Jianguo Zhao", "Elisha Lerner", "Haijie Zhang", "Jianguo Zhao" ]
Grasping and manipulating objects is an integral part of many robotic systems. Both soft and rigid grippers have been investigated for manipulating objects in a multitude of different roles. Rigid grippers can hold heavy objects and apply large amounts of force, while soft grippers can conform to the size and shape of objects as well as protect fragile objects from excess stress. However, grippers...
Friction Identification in a Pneumatic Gripper
https://ieeexplore.ieee.org/document/9341593/
[ "Rocco A. Romeo", "Marco Maggiali", "Daniele Pucci", "Luca Fiorio", "Rocco A. Romeo", "Marco Maggiali", "Daniele Pucci", "Luca Fiorio" ]
Mechanical systems are typically composed of a number of contacting surfaces that move against each other. Such surfaces are subject to friction forces, which dissipate part of the actuation energy and cause an undesired effect on the overall system functioning. Therefore, a suitable model of friction is needed to counteract its action. The choice of such a model is not always straightforward, as it is...
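A common starting point for such identification is the Coulomb-plus-viscous model f = Fc·sign(v) + B·v, which is linear in its parameters and therefore identifiable by least squares. A sketch on synthetic data (the paper may well adopt a richer model):

```python
import numpy as np

# Identify Coulomb (Fc) and viscous (B) coefficients from velocity/force data.
v = np.linspace(-0.2, 0.2, 200)                          # velocity [m/s]
f_meas = 1.5 * np.sign(v) + 8.0 * v + np.random.normal(0, 0.05, v.size)

A = np.column_stack([np.sign(v), v])                     # regressor matrix
(Fc, B), *_ = np.linalg.lstsq(A, f_meas, rcond=None)
print(f"Coulomb Fc = {Fc:.2f} N, viscous B = {B:.2f} N*s/m")
```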
Vision and force based autonomous coating with rollers
https://ieeexplore.ieee.org/document/9341619/
[ "Yayun Du", "Zhaoxing Deng", "Zicheng Fang", "Yunbo Wang", "Taiki Nagata", "Karan Bansal", "Mohiuddin Quadir", "Mohammad Khalid Jawed", "Yayun Du", "Zhaoxing Deng", "Zicheng Fang", "Yunbo Wang", "Taiki Nagata", "Karan Bansal", "Mohiuddin Quadir", "Mohammad Khalid Jawed" ]
Coating rollers are widely popular in structural painting, in comparison with brushes and sprayers, due to the thicker paint layer, better color consistency, and easy customizability of the holder frame and naps. In this paper, we introduce a cost-effective method to employ a general purpose robot (Sawyer, Rethink Robotics) for autonomous coating. To sense the position and the shape of the target ob...
Information Driven Self-Calibration for Lidar-Inertial Systems
https://ieeexplore.ieee.org/document/9341612/
[ "Mitchell Usayiwevu", "Cedric Le Gentil", "Jasprabhjit Mehami", "Chanyeol Yoo", "Robert Fitch", "Teresa Vidal-Calleja", "Mitchell Usayiwevu", "Cedric Le Gentil", "Jasprabhjit Mehami", "Chanyeol Yoo", "Robert Fitch", "Teresa Vidal-Calleja" ]
Multi-modal estimation systems have the advantage of increased accuracy and robustness. To achieve accurate sensor fusion with these types of systems, a reliable extrinsic calibration between each sensor pair is critical. This paper presents a novel self-calibration framework for lidar-inertial systems. The key idea of this work is to use an informative path planner to find the admissible path tha...
Targetless Calibration of LiDAR-IMU System Based on Continuous-time Batch Estimation
https://ieeexplore.ieee.org/document/9341405/
[ "Jiajun Lv", "Jinhong Xu", "Kewei Hu", "Yong Liu", "Xingxing Zuo", "Jiajun Lv", "Jinhong Xu", "Kewei Hu", "Yong Liu", "Xingxing Zuo" ]
Sensor calibration is the fundamental block for a multi-sensor fusion system. This paper presents an accurate and repeatable LiDAR-IMU calibration method (termed LI-Calib), to calibrate the 6-DOF extrinsic transformation between the 3D LiDAR and the Inertial Measurement Unit (IMU). Given the high data capture rates of LiDAR and IMU sensors, LI-Calib adopts a continuous-time trajectory formulat...
Extrinsic and Temporal Calibration of Automotive Radar and 3D LiDAR
https://ieeexplore.ieee.org/document/9341715/
[ "Chia-Le Lee", "Yu-Han Hsueh", "Chieh-Chih Wang", "Wen-Chieh Lin", "Chia-Le Lee", "Yu-Han Hsueh", "Chieh-Chih Wang", "Wen-Chieh Lin" ]
While automotive radars are widely used in most assisted and autonomous driving systems, only a few works were proposed to tackle the calibration problems of automotive radars with other perception sensors. One of the key calibration challenges of automotive planar radars with other sensors is the missing elevation angle in 3D space. In this paper, extrinsic calibration is accomplished based on th...
Robust Pedestrian Tracking in Crowd Scenarios Using an Adaptive GMM-based Framework
https://ieeexplore.ieee.org/document/9341463/
[ "Shuyang Zhang", "Di Wang", "Fulong Ma", "Chao Qin", "Zhengyong Chen", "Ming Liu", "Shuyang Zhang", "Di Wang", "Fulong Ma", "Chao Qin", "Zhengyong Chen", "Ming Liu" ]
In this paper, we address the issue of pedestrian tracking in crowd scenarios. People in close social relationships tend to act as a group which is a great challenge to individually discriminate and track pedestrians on a LiDAR system. In this paper, we integrally model groups of people and track them in a recursive framework based on Gaussian Mixture Model (GMM). The model is optimized by an exte...
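At its simplest, the group model amounts to fitting a Gaussian mixture to the 2D returns of a group, one component per pedestrian; a toy scikit-learn sketch (the paper goes further, optimising the model recursively over frames):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Cluster simulated LiDAR returns from three closely walking people.
rng = np.random.default_rng(0)
people = [(0.0, 0.0), (0.6, 0.1), (1.1, -0.2)]        # true 2D positions [m]
scan = np.vstack([rng.normal(p, 0.08, (60, 2)) for p in people])

gmm = GaussianMixture(n_components=3, random_state=0).fit(scan)
print(np.round(gmm.means_, 2))                        # recovered centroids
```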
Towards Understanding and Inferring the Crowd: Guided Second Order Attention Networks and Re-identification for Multi-object Tracking
https://ieeexplore.ieee.org/document/9341625/
[ "Niraj Bhujel", "Li Jun", "Yau Wei Yun", "Han Wang", "Niraj Bhujel", "Li Jun", "Yau Wei Yun", "Han Wang" ]
Multi-human tracking in the crowded environment is a challenging problem due to occlusions, pose change, viewpoint variation and cluttered background. In this work, we propose a robust feature learning for tracking-by-detection methods based on second-order attention network that can capture higher-order relationships between salient features at the early stages of Convolutional Neural Network (CN...
Relational Graph Learning for Crowd Navigation
https://ieeexplore.ieee.org/document/9340705/
[ "Changan Chen", "Sha Hu", "Payam Nikdel", "Greg Mori", "Manolis Savva", "Changan Chen", "Sha Hu", "Payam Nikdel", "Greg Mori", "Manolis Savva" ]
We present a relational graph learning approach for robotic crowd navigation using model-based deep reinforcement learning that plans actions by looking into the future. Our approach reasons about the relations between all agents based on their latent features and uses a Graph Convolutional Network to encode higher-order interactions in each agent’s state representation, which is subsequently leve...
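For reference, a single graph-convolution layer of the kind such a model stacks, H' = ReLU(D^(-1/2)(A+I)D^(-1/2) H W), in plain NumPy; the agent count, feature widths, and relation graph are arbitrary stand-ins:

```python
import numpy as np

# One GCN layer propagating per-agent features over a relation graph.
n, f_in, f_out = 5, 8, 16                    # 5 agents, feature widths
A = (np.random.rand(n, n) > 0.5)             # random pairwise relations
A = np.maximum(A, A.T).astype(float)         # symmetric adjacency
A_hat = A + np.eye(n)                        # add self-loops
d = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
H = np.random.randn(n, f_in)                 # per-agent state features
W = np.random.randn(f_in, f_out) * 0.1

H_next = np.maximum(d @ A_hat @ d @ H @ W, 0.0)   # normalised propagation + ReLU
print(H_next.shape)                               # (5, 16)
```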
Domain Adaptation for Outdoor Robot Traversability Estimation from RGB data with Safety-Preserving Loss
https://ieeexplore.ieee.org/document/9341044/
[ "Simone Palazzo", "Dario C. Guastella", "Luciano Cantelli", "Paolo Spadaro", "Francesco Rundo", "Giovanni Muscato", "Daniela Giordano", "Concetto Spampinato", "Simone Palazzo", "Dario C. Guastella", "Luciano Cantelli", "Paolo Spadaro", "Francesco Rundo", "Giovanni Muscato", "Daniela Giordano", "Concetto Spampinato" ]
Being able to estimate the traversability of the area surrounding a mobile robot is a fundamental task in the design of a navigation algorithm. However, the task is often complex, since it requires evaluating distances from obstacles, type and slope of terrain, and dealing with non-obvious discontinuities in detected distances due to perspective. In this paper, we present an approach based on deep...
SideGuide: A Large-scale Sidewalk Dataset for Guiding Impaired People
https://ieeexplore.ieee.org/document/9340734/
[ "Kibaek Park", "Youngtaek Oh", "Soomin Ham", "Kyungdon Joo", "Hyokyoung Kim", "Hyoyoung Kum", "In So Kweon", "Kibaek Park", "Youngtaek Oh", "Soomin Ham", "Kyungdon Joo", "Hyokyoung Kim", "Hyoyoung Kum", "In So Kweon" ]
In this paper, we introduce a new large-scale sidewalk dataset called SideGuide that could potentially help impaired people. Unlike most previous datasets, which are focused on road environments, we paid attention to sidewalks, where understanding the environment could provide the potential for improved walking of humans, especially impaired people. Concretely, we interviewed impaired people and c...
Deep Depth Estimation from Visual-Inertial SLAM
https://ieeexplore.ieee.org/document/9341448/
[ "Kourosh Sartipi", "Tien Do", "Tong Ke", "Khiem Vuong", "Stergios I. Roumeliotis", "Kourosh Sartipi", "Tien Do", "Tong Ke", "Khiem Vuong", "Stergios I. Roumeliotis" ]
This paper addresses the problem of learning to complete a scene's depth from sparse depth points and images of indoor scenes. Specifically, we study the case in which the sparse depth is computed from a visual-inertial simultaneous localization and mapping (VI-SLAM) system. The resulting point cloud has low density, is noisy, and has a nonuniform spatial distribution compared to the input fr...
Self-Supervised Attention Learning for Depth and Ego-motion Estimation
https://ieeexplore.ieee.org/document/9340820/
[ "Assem Sadek", "Boris Chidlovskii", "Assem Sadek", "Boris Chidlovskii" ]
We address the problem of depth and ego-motion estimation from image sequences. Recent advances in the domain propose to train a deep learning model for both tasks using image reconstruction in a self-supervised manner. We revise the assumptions and the limitations of the current approaches and propose two improvements to boost the performance of the depth and ego-motion estimation. We first use L...
DiPE: Deeper into Photometric Errors for Unsupervised Learning of Depth and Ego-motion from Monocular Videos
https://ieeexplore.ieee.org/document/9341074/
[ "Hualie Jiang", "Laiyan Ding", "Zhenglong Sun", "Rui Huang", "Hualie Jiang", "Laiyan Ding", "Zhenglong Sun", "Rui Huang" ]
Unsupervised learning of depth and ego-motion from unlabelled monocular videos has recently drawn great attention, as it avoids the expensive ground truth required in the supervised setting. It achieves this by using the photometric errors between the target view and the synthesized views from its adjacent source views as the loss. Despite significant progress, the learning still suffers from occlusion ...
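The loss the abstract refers to can be made concrete. The sketch below reduces the per-source photometric errors with a per-pixel minimum, an occlusion-robust choice that is standard in this literature (whether DiPE uses exactly this reduction is not stated in the preview); SSIM is omitted and the images are random stand-ins:

```python
import numpy as np

# Per-pixel photometric error against views synthesised from source frames.
target = np.random.rand(64, 64)
synthesized = np.random.rand(2, 64, 64)      # from two adjacent source views

l1 = np.abs(synthesized - target)            # error against each synthesised view
per_pixel = l1.min(axis=0)                   # min over sources masks occlusions
print(f"photometric loss: {per_pixel.mean():.4f}")
```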
NBVC: A Benchmark for Depth Estimation from Narrow-Baseline Video Clips
https://ieeexplore.ieee.org/document/9340817/
[ "Philippos Mordohai", "Konstantinos Batsos", "Ameesh Makadia", "Noah Snavely", "Philippos Mordohai", "Konstantinos Batsos", "Ameesh Makadia", "Noah Snavely" ]
We present a benchmark for online, video-based depth estimation, a problem that is not covered by the current set of benchmarks for evaluating 3D reconstruction, which focus on offline, batch reconstruction. Online depth estimation from video captured by a moving camera is a key enabling technology for compelling applications in robotics and augmented reality. Inspired by progress in many aspects ...
LaNoising: A Data-driven Approach for 903nm ToF LiDAR Performance Modeling under Fog
https://ieeexplore.ieee.org/document/9341178/
[ "Tao Yang", "You Li", "Yassine Ruichek", "Zhi Yan", "Tao Yang", "You Li", "Yassine Ruichek", "Zhi Yan" ]
As a critical sensor for high-level autonomous vehicles, LiDAR's limitations in adverse weather (e.g. rain, fog, snow, etc.) impede the deployment of self-driving cars in all weather conditions. In this paper, we model the performance of a popular 903nm ToF LiDAR under various fog conditions based on a LiDAR dataset collected in a well-controlled artificial fog chamber. Specifically, a two-stage d...
360° Depth Estimation from Multiple Fisheye Images with Origami Crown Representation of Icosahedron
https://ieeexplore.ieee.org/document/9340981/
[ "Ren Komatsu", "Hiromitsu Fujii", "Yusuke Tamura", "Atsushi Yamashita", "Hajime Asama", "Ren Komatsu", "Hiromitsu Fujii", "Yusuke Tamura", "Atsushi Yamashita", "Hajime Asama" ]
In this study, we present a method for all-around depth estimation from multiple omnidirectional images for indoor environments. In particular, we focus on plane-sweeping stereo as the method for depth estimation from the images. We propose a new icosahedron-based representation and ConvNets for omnidirectional images, which we name "CrownConv" because the representation resembles a crown made of ...
Video Depth Estimation by Fusing Flow-to-Depth Proposals
https://ieeexplore.ieee.org/document/9341659/
[ "Jiaxin Xie", "Chenyang Lei", "Zhuwen Li", "Li Erran Li", "Qifeng Chen", "Jiaxin Xie", "Chenyang Lei", "Zhuwen Li", "Li Erran Li", "Qifeng Chen" ]
Depth from a monocular video can enable billions of devices and robots with a single camera to see the world in 3D. In this paper, we present a model for video depth estimation, which consists of a flow-to-depth layer, a camera pose refinement module, and a depth fusion network. Given optical flow and camera poses, our flow-to-depth layer generates depth proposals and their corresponding confidenc...
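When the flow and the relative camera pose are known, flow-to-depth reduces to two-view triangulation. A single-pixel NumPy sketch with an assumed pinhole camera (the paper applies this densely and weighs the proposals by confidence):

```python
import numpy as np

# Depth of one pixel from its optical-flow correspondence and a known pose.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])  # intrinsics
R = np.eye(3)
t = np.array([0.2, 0.0, 0.0])          # second camera 0.2 m to the right

def triangulate(p1, p2):
    """Linear (DLT) triangulation; returns depth in the first camera."""
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, -R @ t.reshape(3, 1)])
    A = np.vstack([p1[0] * P1[2] - P1[0], p1[1] * P1[2] - P1[1],
                   p2[0] * P2[2] - P2[0], p2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]
    return (X / X[3])[2]

p1 = np.array([300.0, 240.0])          # pixel in the target frame
p2 = p1 + np.array([-20.0, 0.0])       # matched pixel = p1 + optical flow
print(f"depth: {triangulate(p1, p2):.2f} m")   # 5.00 m for this geometry
```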
Unsupervised Depth and Confidence Prediction from Monocular Images using Bayesian Inference
https://ieeexplore.ieee.org/document/9341024/
[ "Vishal Bhutani", "Madhu Vankadari", "Omprakash Jha", "Anima Majumder", "Swagat Kumar", "Samrat Dutta", "Vishal Bhutani", "Madhu Vankadari", "Omprakash Jha", "Anima Majumder", "Swagat Kumar", "Samrat Dutta" ]
In this paper, we propose an unsupervised deep learning framework with Bayesian inference for improving the accuracy of per-pixel depth prediction from monocular RGB images. The proposed framework predicts a confidence map along with depth and pose information for a given input image. The depth hypotheses from previous frames are propagated forward and fused with the depth hypothesis of the current ...
TT-TSDF: Memory-Efficient TSDF with Low-Rank Tensor Train Decomposition
https://ieeexplore.ieee.org/document/9341464/
[ "Alexey I. Boyko", "Mikhail P. Matrosov", "Ivan V. Oseledets", "Dzmitry Tsetserukou", "Gonzalo Ferrer", "Alexey I. Boyko", "Mikhail P. Matrosov", "Ivan V. Oseledets", "Dzmitry Tsetserukou", "Gonzalo Ferrer" ]
In this paper we apply the low-rank Tensor Train decomposition for compression and operations on 3D objects and scenes represented by volumetric distance functions. Our study shows that not only does it allow very efficient compression of high-resolution TSDF maps (up to three orders of magnitude reduction of the original memory footprint at a resolution of 512³), but it also allows us to perform TSDF-Fusion ...
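To make the decomposition concrete, here is a bare-bones TT-SVD on a tiny synthetic TSDF volume; the rank, the resolution, and the sphere geometry are arbitrary, and the paper's pipeline (working at up to 512³) is considerably more elaborate:

```python
import numpy as np

# Compress a 3D volume into three tensor-train cores with rank truncation.
n, r = 32, 8                                   # grid size, max TT-rank
x, y, z = np.meshgrid(*[np.linspace(-1, 1, n)] * 3, indexing="ij")
tsdf = np.clip(np.sqrt(x**2 + y**2 + z**2) - 0.5, -0.1, 0.1)  # sphere TSDF

U, S, Vh = np.linalg.svd(tsdf.reshape(n, n * n), full_matrices=False)
G1 = U[:, :r]                                  # core 1: (n, r)
rest = (np.diag(S[:r]) @ Vh[:r]).reshape(r * n, n)
U2, S2, Vh2 = np.linalg.svd(rest, full_matrices=False)
G2 = U2[:, :r].reshape(r, n, r)                # core 2: (r, n, r)
G3 = np.diag(S2[:r]) @ Vh2[:r]                 # core 3: (r, n)

approx = np.einsum("ia,ajb,bk->ijk", G1, G2, G3)
err = np.linalg.norm(approx - tsdf) / np.linalg.norm(tsdf)
ratio = tsdf.size / (G1.size + G2.size + G3.size)
print(f"relative error {err:.3e}, compression {ratio:.0f}x")
```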
Fast Uncertainty Estimation for Deep Learning Based Optical Flow
https://ieeexplore.ieee.org/document/9340963/
[ "Serin Lee", "Vincenzo Capuano", "Alexei Harvard", "Soon-Jo Chung", "Serin Lee", "Vincenzo Capuano", "Alexei Harvard", "Soon-Jo Chung" ]
We present a novel approach to reduce the processing time required to derive the estimation uncertainty map in deep learning-based optical flow determination methods. Without uncertainty aware reasoning, the optical flow model, especially when it is used for mission critical fields such as robotics and aerospace, can cause catastrophic failures. Although several approaches such as the ones based o...
Diagnose like a Clinician: Third-order Attention Guided Lesion Amplification Network for WCE Image Classification
https://ieeexplore.ieee.org/document/9340750/
[ "Xiaohan Xing", "Yixuan Yuan", "Max Q.-H. Meng", "Xiaohan Xing", "Yixuan Yuan", "Max Q.-H. Meng" ]
Wireless capsule endoscopy (WCE) is a novel imaging tool that allows the noninvasive visualization of the entire gastrointestinal (GI) tract without causing discomfort to the patients. Although convolutional neural networks (CNNs) have obtained promising performance for the automatic lesion recognition, the results of the current approaches are still limited due to the small lesions and the backgr...
Wiping 3D-objects using Deep Learning Model based on Image/Force/Joint Information
https://ieeexplore.ieee.org/document/9341275/
[ "Namiko Saito", "Danyang Wang", "Tetsuya Ogata", "Hiroki Mori", "Shigeki Sugano", "Namiko Saito", "Danyang Wang", "Tetsuya Ogata", "Hiroki Mori", "Shigeki Sugano" ]
We propose a deep learning model for a robot to wipe 3D-objects. Wiping of 3D-objects requires recognizing the shapes of objects and planning the motor angle adjustments for tracing the objects. Unlike previous research, our learning model does not require pre-designed computational models of target objects. The robot is able to wipe the objects to be placed by using image, force, and arm joint in...
D2VO: Monocular Deep Direct Visual Odometry
https://ieeexplore.ieee.org/document/9341313/
[ "Qizeng Jia", "Yuechuan Pu", "Jingyu Chen", "Junda Cheng", "Chunyuan Liao", "Xin Yang", "Qizeng Jia", "Yuechuan Pu", "Jingyu Chen", "Junda Cheng", "Chunyuan Liao", "Xin Yang" ]
In this paper, we present a novel deep learning and direct method based monocular visual odometry system named D2VO. Our system reconstructs the dense depth map of each keyframe and tracks camera poses based on these keyframes. Combining direct method and deep learning, both tracking and mapping of the system could benefit from the geometric measurement and semantic information. For each input fra...
CalibRCNN: Calibrating Camera and LiDAR by Recurrent Convolutional Neural Network and Geometric Constraints
https://ieeexplore.ieee.org/document/9341147/
[ "Jieying Shi", "Ziheng Zhu", "Jianhua Zhang", "Ruyu Liu", "Zhenhua Wang", "Shengyong Chen", "Honghai Liu", "Jieying Shi", "Ziheng Zhu", "Jianhua Zhang", "Ruyu Liu", "Zhenhua Wang", "Shengyong Chen", "Honghai Liu" ]
In this paper, we present Calibration Recurrent Convolutional Neural Network (CalibRCNN) to infer a 6 degrees of freedom (DOF) rigid body transformation between 3D LiDAR and 2D camera. Different from the existing methods, our 3D-2D CalibRCNN not only uses the LSTM network to extract the temporal features between 3D point clouds and RGB images of consecutive frames, but also uses the geometric loss...
Latent Replay for Real-Time Continual Learning
https://ieeexplore.ieee.org/document/9341460/
[ "Lorenzo Pellegrini", "Gabriele Graffieti", "Vincenzo Lomonaco", "Davide Maltoni", "Lorenzo Pellegrini", "Gabriele Graffieti", "Vincenzo Lomonaco", "Davide Maltoni" ]
Training deep neural networks at the edge on light computational devices, embedded systems and robotic platforms is nowadays very challenging. Continual learning techniques, where complex models are incrementally trained on small batches of new data, can make the learning problem tractable even for CPU-only embedded devices enabling remarkable levels of adaptiveness and autonomy. However, a number...
Learning to Switch CNNs with Model Agnostic Meta Learning for Fine Precision Visual Servoing
https://ieeexplore.ieee.org/document/9341756/
[ "Prem Raj", "Vinay P. Namboodiri", "L. Behera", "Prem Raj", "Vinay P. Namboodiri", "L. Behera" ]
Convolutional Neural Networks (CNNs) have been successfully applied for relative camera pose estimation from labeled image-pair data, without requiring any hand-engineered features, camera intrinsic parameters or depth information. The trained CNN can be utilized for performing pose based visual servo control (PBVS). One of the ways to improve the quality of visual servo output is to improve the ac...
HD Map Change Detection with Cross-Domain Deep Metric Learning
https://ieeexplore.ieee.org/document/9340757/
[ "Minhyeok Heo", "Jiwon Kim", "Sujung Kim", "Minhyeok Heo", "Jiwon Kim", "Sujung Kim" ]
High-definition (HD) maps are emerging as an essential tool for autonomous driving since they provide high-precision semantic information about the physical environment. To function as a reliable source of map information, HD maps must be constantly updated with changes that occur to the state of the road. In this paper, we propose a novel framework for HD map change detection that can be used to ...
CNN-based Foothold Selection for Mechanically Adaptive Soft Foot
https://ieeexplore.ieee.org/document/9340910/
[ "Jakub Bednarek", "Noel Maalouf", "Mathew J. Pollayil", "Manolo Garabini", "Manuel G. Catalano", "Giorgio Grioli", "Dominik Belter", "Jakub Bednarek", "Noel Maalouf", "Mathew J. Pollayil", "Manolo Garabini", "Manuel G. Catalano", "Giorgio Grioli", "Dominik Belter" ]
In this paper, we consider the problem of foothold selection for quadrupedal robots equipped with compliant adaptive feet. Starting from a model of the foot, we compute the quality of the potential footholds, also considering kinematic constraints and collisions during evaluation. Since terrain assessment and constraint checking are computationally expensive, we applied a Convolutional Neural Netw...
Depth Estimation from Monocular Images and Sparse Radar Data
https://ieeexplore.ieee.org/document/9340998/
[ "Juan-Ting Lin", "Dengxin Dai", "Luc Van Gool", "Juan-Ting Lin", "Dengxin Dai", "Luc Van Gool" ]
In this paper, we explore the possibility of achieving a more accurate depth estimation by fusing monocular images and Radar points using a deep neural network. We give a comprehensive study of the fusion between RGB images and Radar measurements from different aspects and propose a working solution based on these observations. We find that the noise existing in Radar measurements is one of the mai...
Tidying Deep Saliency Prediction Architectures
https://ieeexplore.ieee.org/document/9341574/
[ "Navyasri Reddy", "Samyak Jain", "Pradeep Yarlagadda", "Vineet Gandhi", "Navyasri Reddy", "Samyak Jain", "Pradeep Yarlagadda", "Vineet Gandhi" ]
Learning computational models for visual attention (saliency estimation) is an effort to inch machines/robots closer to human visual cognitive abilities. Data-driven efforts have dominated the landscape since the introduction of deep neural network architectures. In deep learning research, the choices in architecture design are often empirical and frequently lead to more complex models than necess...
Whole-Game Motion Capturing of Team Sports: System Architecture and Integrated Calibration
https://ieeexplore.ieee.org/document/9341009/
[ "Yosuke Ikegami", "Milutin Nikolić", "Ayaka Yamada", "Lei Zhang", "Natsu Ooke", "Yoshihiko Nakamura", "Yosuke Ikegami", "Milutin Nikolić", "Ayaka Yamada", "Lei Zhang", "Natsu Ooke", "Yoshihiko Nakamura" ]
This paper discusses the application of video motion capturing technology (VMocap) to a competitive team sports game. The setting introduces a specific set of constraints: large scale markerless motion capturing, big recording volume, transmitting and processing gigabytes of data, operation without interfering with players or distracting spectators and staff, etc. In this paper, we present how w...
A particle filter technique for human pose estimation in case of occlusion exploiting holographic human model and virtualized environment
https://ieeexplore.ieee.org/document/9341399/
[ "Costanza Messeri", "Lorenzo Rebecchi", "Andrea Maria Zanchettin", "Paolo Rocco", "Costanza Messeri", "Lorenzo Rebecchi", "Andrea Maria Zanchettin", "Paolo Rocco" ]
In a collaborative scenario, robots working side by side with humans might rely on vision sensors to monitor the activity of the other agent. When occlusions of the human body occur, both the safety of the cooperation and the performance of the team can be penalized, since the robot could receive incorrect information about the ongoing cooperation. In this work, we propose a novel particle filter ...
DR-SPAAM: A Spatial-Attention and Auto-regressive Model for Person Detection in 2D Range Data
https://ieeexplore.ieee.org/document/9341689/
[ "Dan Jia", "Alexander Hermans", "Bastian Leibe", "Dan Jia", "Alexander Hermans", "Bastian Leibe" ]
Detecting persons using a 2D LiDAR is a challenging task due to the low information content of 2D range data. To alleviate the problem caused by the sparsity of the LiDAR points, current state-of-the-art methods fuse multiple previous scans and perform detection using the combined scans. The downside of such a backward-looking fusion is that all the scans need to be aligned explicitly, and the nec...
Vision-Based Gesture Recognition in Human-Robot Teams Using Synthetic Data
https://ieeexplore.ieee.org/document/9340728/
[ "Celso M. de Melo", "Brandon Rothrock", "Prudhvi Gurram", "Oytun Ulutan", "B.S. Manjunath", "Celso M. de Melo", "Brandon Rothrock", "Prudhvi Gurram", "Oytun Ulutan", "B.S. Manjunath" ]
Building successful collaboration between humans and robots requires efficient, effective, and natural communication. Here we study a RGB-based deep learning approach for controlling robots through gestures (e.g., "follow me"). To address the challenge of collecting high-quality annotated data from human subjects, synthetic data is considered for this domain. We contribute a dataset of gestures th...
HAMLET: A Hierarchical Multimodal Attention-based Human Activity Recognition Algorithm
https://ieeexplore.ieee.org/document/9340987/
[ "Md Mofijul Islam", "Tariq Iqbal", "Md Mofijul Islam", "Tariq Iqbal" ]
To fluently collaborate with people, robots need the ability to recognize human activities accurately. Although modern robots are equipped with various sensors, robust human activity recognition (HAR) still remains a challenging task for robots due to difficulties related to multimodal data fusion. To address these challenges, in this work, we introduce a deep neural network-based multimodal HAR a...
Collision Avoidance in Human-Robot Interaction Using Kinect Vision System Combined With Robot’s Model and Data
https://ieeexplore.ieee.org/document/9341248/
[ "Hugo Nascimento", "Martin Mujica", "Mourad Benoussaad", "Hugo Nascimento", "Martin Mujica", "Mourad Benoussaad" ]
Human-Robot Interaction (HRI) is a largely addressed subject today. Collision avoidance is one of the main strategies that allow space sharing and interaction without contact between human and robot. It is thus usual to use a 3D depth camera sensor, which may involve issues related to the robot being occluded in the camera view. While several works overcame this issue by applying infinite depth principle or increa...
Human Gait Phase Recognition using a Hidden Markov Model Framework
https://ieeexplore.ieee.org/document/9341380/
[ "Ferhat Attal", "Yacine Amirat", "Abdelghani Chibani", "Samer Mohammed", "Ferhat Attal", "Yacine Amirat", "Abdelghani Chibani", "Samer Mohammed" ]
Analysis of human daily living activities, particularly walking activity, is essential for health-care applications such as fall prevention, physical rehabilitation exercises, and gait monitoring. Studying the evolution of the gait cycle using wearable sensors is beneficial for the detection of any abnormal walking pattern. This paper proposes a novel discrete/continuous unsupervised Hidden Markov...
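As a reference point for the decoding half of such a framework, Viterbi over made-up gait-phase emission likelihoods; the cyclic transition structure below is an assumption for illustration, not the paper's learned model:

```python
import numpy as np

# Most-likely gait-phase sequence from per-frame emission likelihoods.
n_phases, T = 4, 12                            # e.g. heel-strike ... swing
trans = np.full((n_phases, n_phases), 0.02)
np.fill_diagonal(trans, 0.9)
for i in range(n_phases):                      # phases tend to cycle in order
    trans[i, (i + 1) % n_phases] = 0.08
log_trans = np.log(trans / trans.sum(1, keepdims=True))
log_emit = np.log(np.random.dirichlet(np.ones(n_phases), T))  # (T, n_phases)

delta = log_emit[0].copy()
back = np.zeros((T, n_phases), dtype=int)
for t in range(1, T):
    scores = delta[:, None] + log_trans       # every previous-phase transition
    back[t] = scores.argmax(axis=0)
    delta = scores.max(axis=0) + log_emit[t]

path = [int(delta.argmax())]
for t in range(T - 1, 0, -1):                  # backtrace
    path.append(int(back[t, path[-1]]))
print("decoded phases:", path[::-1])
```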
Using Diverse Neural Networks for Safer Human Pose Estimation: Towards Making Neural Networks Know When They Don’t Know
https://ieeexplore.ieee.org/document/9341634/
[ "Patrick Schlosser", "Christoph Ledermann", "Patrick Schlosser", "Christoph Ledermann" ]
In recent years, human pose estimation has seen great improvements by the use of neural networks. However, these approaches are unsuitable for safety-critical applications such as human-robot interaction (HRI), as no guarantees are given whether a produced detection is correct or not and false detections with high confidence scores are produced on a regular basis. In this work, we propose a method...
Residual Pose: A Decoupled Approach for Depth-based 3D Human Pose Estimation
https://ieeexplore.ieee.org/document/9340695/
[ "Angel Martínez-González", "Michael Villamizar", "Olivier Canévet", "Jean-Marc Odobez", "Angel Martínez-González", "Michael Villamizar", "Olivier Canévet", "Jean-Marc Odobez" ]
We propose to leverage recent advances in reliable 2D pose estimation with Convolutional Neural Networks (CNN) to estimate the 3D pose of people from depth images in multi-person Human-Robot Interaction (HRI) scenarios. Our method is based on the observation that using the depth information to obtain 3D lifted points from 2D body landmark detections provides a rough estimate of the true 3D human p...
Simple means Faster: Real-Time Human Motion Forecasting in Monocular First Person Videos on CPU
https://ieeexplore.ieee.org/document/9340999/
[ "Junaid Ahmed Ansari", "Brojeshwar Bhowmick", "Junaid Ahmed Ansari", "Brojeshwar Bhowmick" ]
We present a simple, fast, and light-weight RNN based framework for forecasting future locations of humans in first person monocular videos. The primary motivation for this work was to design a network which could accurately predict future trajectories at a very high rate on a CPU. Typical applications of such a system would be a social robot or a visual assistance system "for all", as both cannot...
JRMOT: A Real-Time 3D Multi-Object Tracker and a New Large-Scale Dataset
https://ieeexplore.ieee.org/document/9341635/
[ "Abhijeet Shenoi", "Mihir Patel", "JunYoung Gwak", "Patrick Goebel", "Amir Sadeghian", "Hamid Rezatofighi", "Roberto Martín-Martín", "Silvio Savarese", "Abhijeet Shenoi", "Mihir Patel", "JunYoung Gwak", "Patrick Goebel", "Amir Sadeghian", "Hamid Rezatofighi", "Roberto Martín-Martín", "Silvio Savarese" ]
Robots navigating autonomously need to perceive and track the motion of objects and other agents in their surroundings. This information enables planning and executing robust and safe trajectories. To facilitate these processes, the motion should be perceived in 3D Cartesian space. However, most recent multi-object tracking (MOT) research has focused on tracking people and moving objects in 2D RGB v...
Factor Graph based 3D Multi-Object Tracking in Point Clouds
https://ieeexplore.ieee.org/document/9340932/
[ "Johannes Pöschmann", "Tim Pfeifer", "Peter Protzel", "Johannes Pöschmann", "Tim Pfeifer", "Peter Protzel" ]
Accurate and reliable tracking of multiple moving objects in 3D space is an essential component of urban scene understanding. This is a challenging task because it requires the assignment of detections in the current frame to the predicted objects from the previous one. Existing filter-based approaches tend to struggle if this initial assignment is not correct, which can happen easily. We propose a...
Self-supervised Object Tracking with Cycle-consistent Siamese Networks
https://ieeexplore.ieee.org/document/9341621/
[ "Weihao Yuan", "Michael Yu Wang", "Qifeng Chen", "Weihao Yuan", "Michael Yu Wang", "Qifeng Chen" ]
Self-supervised learning for visual object tracking possesses valuable advantages compared to supervised learning, such as not requiring laborious human annotations and supporting online training. In this work, we exploit an end-to-end Siamese network in a cycle-consistent self-supervised framework for object tracking. Self-supervision can be performed by taking advantage of the cycle consistency in t...
3D Multi-Object Tracking: A Baseline and New Evaluation Metrics
https://ieeexplore.ieee.org/document/9341164/
[ "Xinshuo Weng", "Jianren Wang", "David Held", "Kris Kitani", "Xinshuo Weng", "Jianren Wang", "David Held", "Kris Kitani" ]
3D multi-object tracking (MOT) is an essential component for many applications such as autonomous driving and assistive robotics. Recent work on 3D MOT focuses on developing accurate systems, giving less attention to practical considerations such as computational cost and system complexity. In contrast, this work proposes a simple real-time 3D MOT system. Our system first obtains 3D detections from...
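Baselines of this kind typically pair a constant-velocity prediction with Hungarian data association; a toy sketch with centroid distance standing in for the 3D-IoU affinity such systems usually use:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Associate new 3D detections to predicted track positions.
tracks = np.array([[0.0, 0.0, 0.0], [5.0, 1.0, 0.0]])     # track positions [m]
vels = np.array([[0.5, 0.0, 0.0], [0.0, 0.2, 0.0]])       # per-track velocity
dets = np.array([[0.55, 0.02, 0.0], [4.9, 1.25, 0.1]])    # new detections [m]

pred = tracks + vels                                      # constant-velocity step
cost = np.linalg.norm(pred[:, None] - dets[None], axis=2) # pairwise distances
rows, cols = linear_sum_assignment(cost)                  # Hungarian matching
for r, c in zip(rows, cols):
    if cost[r, c] < 1.0:                                  # gating threshold
        print(f"track {r} <- detection {c} ({cost[r, c]:.2f} m)")
```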
se(3)-TrackNet: Data-driven 6D Pose Tracking by Calibrating Image Residuals in Synthetic Domains
https://ieeexplore.ieee.org/document/9341314/
[ "Bowen Wen", "Chaitanya Mitash", "Baozhang Ren", "Kostas E. Bekris", "Bowen Wen", "Chaitanya Mitash", "Baozhang Ren", "Kostas E. Bekris" ]
Tracking the 6D pose of objects in video sequences is important for robot manipulation. This task, however, introduces multiple challenges: (i) robot manipulation involves significant occlusions; (ii) data and annotations are troublesome and difficult to collect for 6D poses, which complicates machine learning solutions, and (iii) incremental error drift often accumulates in long term tracking to ...
Motion Prediction in Visual Object Tracking
https://ieeexplore.ieee.org/document/9341158/
[ "Jianren Wang", "Yihui He", "Jianren Wang", "Yihui He" ]
Visual object tracking (VOT) is an essential component for many applications, such as autonomous driving or assistive robotics. However, recent works tend to develop accurate systems based on more computationally expensive feature extractors for better instance matching. In contrast, this work addresses the importance of motion prediction in VOT. We use an off-the-shelf object detector to obtain i...
Look and Listen: A Multi-modality Late Fusion Approach to Scene Classification for Autonomous Machines
https://ieeexplore.ieee.org/document/9341557/
[ "Jordan J. Bird", "Diego R. Faria", "Cristiano Premebida", "Anikó Ekárt", "George Vogiatzis", "Jordan J. Bird", "Diego R. Faria", "Cristiano Premebida", "Anikó Ekárt", "George Vogiatzis" ]
The novelty of this study consists in a multi-modality approach to scene classification, where image and audio complement each other in a process of deep late fusion. The approach is demonstrated on a difficult classification problem, consisting of two synchronised and balanced datasets of 16,000 data objects, encompassing 4.4 hours of video of 8 environments with varying degrees of similarity. We...
CLOCs: Camera-LiDAR Object Candidates Fusion for 3D Object Detection
https://ieeexplore.ieee.org/document/9341791/
[ "Su Pang", "Daniel Morris", "Hayder Radha", "Su Pang", "Daniel Morris", "Hayder Radha" ]
There have been significant advances in neural networks for both 3D object detection using LiDAR and 2D object detection using video. However, it has been surprisingly difficult to train networks to effectively use both modalities in a way that demonstrates gain over single-modality networks. In this paper, we propose a novel Camera-LiDAR Object Candidates (CLOCs) fusion network. CLOCs fusion prov...
Gimme Signals: Discriminative signal encoding for multimodal activity recognition
https://ieeexplore.ieee.org/document/9341699/
[ "Raphael Memmesheimer", "Nick Theisen", "Dietrich Paulus", "Raphael Memmesheimer", "Nick Theisen", "Dietrich Paulus" ]
We present a simple, yet effective and flexible method for action recognition supporting multiple sensor modalities. Multivariate signal sequences are encoded in an image and are then classified using a recently proposed EfficientNet CNN architecture. Our focus was to find an approach that generalizes well across different sensor modalities without specific adaptations while still achieving good res...
3D Localization of a Sound Source Using Mobile Microphone Arrays Referenced by SLAM
https://ieeexplore.ieee.org/document/9341098/
[ "Simon Michaud", "Samuel Faucher", "François Grondin", "Jean-Samuel Lauzon", "Mathieu Labbé", "Dominic Létourneau", "François Ferland", "François Michaud", "Simon Michaud", "Samuel Faucher", "François Grondin", "Jean-Samuel Lauzon", "Mathieu Labbé", "Dominic Létourneau", "François Ferland", "François Michaud" ]
A microphone array can provide a mobile robot with the capability of localizing, tracking and separating distant sound sources in 2D, i.e., estimating their relative elevation and azimuth. To combine acoustic data with visual information in real world settings, spatial correlation must be established. The approach explored in this paper consists of having two robots, each equipped with a microphon...
When We First Met: Visual-Inertial Person Localization for Co-Robot Rendezvous
https://ieeexplore.ieee.org/document/9341739/
[ "Xi Sun", "Xinshuo Weng", "Kris Kitani", "Xi Sun", "Xinshuo Weng", "Kris Kitani" ]
We aim to enable robots to visually localize a target person through the aid of an additional sensing modality - the target person's 3D inertial measurements. The need for such technology may arise when a robot is to meet a person in a crowd for the first time or when an autonomous vehicle must rendezvous with a rider amongst a crowd without knowing the appearance of the person in advance. A perso...
Using Machine Learning for Material Detection with Capacitive Proximity Sensors
https://ieeexplore.ieee.org/document/9341016/
[ "Yitao Ding", "Hannes Kisner", "Tianlin Kong", "Ulrike Thomas", "Yitao Ding", "Hannes Kisner", "Tianlin Kong", "Ulrike Thomas" ]
The ability to detect materials plays an important role in robotic applications. The robot can incorporate the information from contactless material detection and adapt its behavior in how it grasps an object or how it walks on specific surfaces. In this paper, we apply machine learning on impedance spectra from capacitive proximity sensors for material detection. The unique spectra of certain ...
Tactile Event Based Grasping Algorithm using Memorized Triggers and Mechanoreceptive Sensors
https://ieeexplore.ieee.org/document/9341130/
[ "Won Dong Kim", "Jung Kim", "Won Dong Kim", "Jung Kim" ]
Humans perform grasping by breaking down the task into a series of action phases, where the transitions between the action phases are based on the comparison between the predicted tactile events and the actual tactile events. The dependency on tactile sensation in grasping allows humans to grasp objects without the need to locate the object precisely, which is a feature desirable in robot grasping...
Multimodal Sensor Fusion with Differentiable Filters
https://ieeexplore.ieee.org/document/9341579/
[ "Michelle A. Lee", "Brent Yi", "Roberto Martín-Martín", "Silvio Savarese", "Jeannette Bohg", "Michelle A. Lee", "Brent Yi", "Roberto Martín-Martín", "Silvio Savarese", "Jeannette Bohg" ]
Leveraging multimodal information with recursive Bayesian filters improves performance and robustness of state estimation, as recursive filters can combine different modalities according to their uncertainties. Prior work has studied how to optimally fuse different sensor modalities with analytical state estimation algorithms. However, deriving the dynamics and measurement models along with their ...
Multimodal Material Classification for Robots using Spectroscopy and High Resolution Texture Imaging
https://ieeexplore.ieee.org/document/9341165/
[ "Zackory Erickson", "Eliot Xing", "Bharat Srirangam", "Sonia Chernova", "Charles C. Kemp", "Zackory Erickson", "Eliot Xing", "Bharat Srirangam", "Sonia Chernova", "Charles C. Kemp" ]
Material recognition can help inform robots about how to properly interact with and manipulate real-world objects. In this paper, we present a multimodal sensing technique, leveraging near-infrared spectroscopy and close-range high resolution texture imaging, that enables robots to estimate the materials of household objects. We release a dataset of high resolution texture images and spectral meas...
DeepLiDARFlow: A Deep Learning Architecture For Scene Flow Estimation Using Monocular Camera and Sparse LiDAR
https://ieeexplore.ieee.org/document/9341077/
[ "Rishav Rishav", "Ramy Battrawy", "René Schuster", "Oliver Wasenmüller", "Didier Stricker", "Rishav Rishav", "Ramy Battrawy", "René Schuster", "Oliver Wasenmüller", "Didier Stricker" ]
Scene flow is the dense 3D reconstruction of motion and geometry of a scene. Most state-of-the-art methods use a pair of stereo images as input for full scene reconstruction. These methods depend heavily on the quality of the RGB images and perform poorly in regions with reflective objects, shadows, ill-conditioned lighting, and so on. LiDAR measurements are much less sensitive to the aforem...
Balanced Depth Completion between Dense Depth Inference and Sparse Range Measurements via KISS-GP
https://ieeexplore.ieee.org/document/9341769/
[ "Sungho Yoon", "Ayoung Kim", "Sungho Yoon", "Ayoung Kim" ]
Estimating a dense and accurate depth map is the key requirement for autonomous driving and robotics. Recent advances in deep learning have allowed depth estimation in full resolution from a single image. Despite this impressive result, many deep-learning-based monocular depth estimation (MDE) algorithms have failed to maintain their accuracy, yielding meter-level estimation errors. In many robotics a...
Polygonal Perception for Mobile Robots
https://ieeexplore.ieee.org/document/9341742/
[ "Marcell Missura", "Arindam Roychoudhury", "Maren Bennewitz", "Marcell Missura", "Arindam Roychoudhury", "Maren Bennewitz" ]
Geometric primitives are a compact and versatile representation of the environment and the objects within. From a motion planning perspective, the geometric structure can be leveraged in order to implement potentially faster and smoother motion control algorithms than has so far been possible with grid-based occupancy maps. In this paper, we introduce a novel perception pipeline that efficientl...
Real-time detection of broccoli crops in 3D point clouds for autonomous robotic harvesting
https://ieeexplore.ieee.org/document/9341381/
[ "Hector A. Montes", "Justin Le Louedec", "Grzegorz Cielniak", "Tom Duckett", "Hector A. Montes", "Justin Le Louedec", "Grzegorz Cielniak", "Tom Duckett" ]
Real-time 3D perception of the environment is crucial for the adoption and deployment of reliable autonomous harvesting robots in agriculture. Using data collected with RGB-D cameras under farm field conditions, we present two methods for processing 3D data that reliably detect mature broccoli heads. The proposed systems are efficient and enable real-time detection on depth data of broccoli crops ...
SGM-MDE: Semi-global optimization for classification-based monocular depth estimation
https://ieeexplore.ieee.org/document/9340766/
[ "Vlad-Cristian Miclea", "Sergiu Nedevschi", "Vlad-Cristian Miclea", "Sergiu Nedevschi" ]
Depth estimation plays a crucial role in robotic applications that require environment perception. With the introduction of convolutional neural networks, monocular depth estimation (MDE) methods have become viable alternatives to LiDAR and stereo reconstruction-based solutions. Such methods require less equipment, fewer resources and do not need additional sensor alignment requirements. However, ...
Multi-Task Deep Learning for Depth-based Person Perception in Mobile Robotics
https://ieeexplore.ieee.org/document/9340870/
[ "Daniel Seichter", "Benjamin Lewandowski", "Dominik Höchemer", "Tim Wengefeld", "Horst-Michael Gross", "Daniel Seichter", "Benjamin Lewandowski", "Dominik Höchemer", "Tim Wengefeld", "Horst-Michael Gross" ]
Efficient and robust person perception is one of the most basic skills a mobile robot must have to ensure intuitive human-machine interaction. In addition to person detection, this also includes estimating various attributes, like posture or body orientation, in order to achieve user-adaptive behavior. However, given limited computing and battery capabilities on a mobile robot, it is inefficient t...
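The efficiency argument here is the standard multi-task one: share a backbone across tasks instead of running one network per attribute. A minimal sketch under assumed sizes and task heads (the paper's actual architecture and attribute set may differ):

```python
# Sketch: one shared encoder over a depth image feeds several
# lightweight task heads, so detection and attribute estimation
# share most of the computation.
import torch
import torch.nn as nn

class SharedPersonNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(          # shared features
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.detect_head = nn.Linear(32, 1)     # person present?
        self.posture_head = nn.Linear(32, 3)    # e.g. stand/sit/squat
        self.orient_head = nn.Linear(32, 2)     # (cos, sin) of body yaw

    def forward(self, depth_img):
        f = self.backbone(depth_img)
        return (self.detect_head(f),
                self.posture_head(f),
                self.orient_head(f))

det, posture, orient = SharedPersonNet()(torch.randn(1, 1, 120, 160))
print(det.shape, posture.shape, orient.shape)
```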
Learning an Uncertainty-Aware Object Detector for Autonomous Driving
https://ieeexplore.ieee.org/document/9341623/
[ "Gregory P. Meyer", "Niranjan Thakurdesai", "Gregory P. Meyer", "Niranjan Thakurdesai" ]
The capability to detect objects is a core part of autonomous driving. Due to sensor noise and incomplete data, perfectly detecting and localizing every object is infeasible. Therefore, it is important for a detector to quantify the uncertainty in each prediction. Providing the autonomous system with reliable uncertainties enables the vehicle to react differently based on the level of unc...
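One common way to learn such predictive uncertainty (a sketch of the general technique; the paper's exact formulation may differ) is to regress a mean and a log-variance per box coordinate and train with a Gaussian negative log-likelihood, so ambiguous boxes are allowed larger predicted variance:

```python
# Sketch: Gaussian NLL loss for box regression with learned variance.
import torch

def gaussian_nll(pred_mean, pred_log_var, target):
    """Per-coordinate negative log-likelihood, averaged over the batch."""
    return (0.5 * torch.exp(-pred_log_var) * (target - pred_mean) ** 2
            + 0.5 * pred_log_var).mean()

mean = torch.zeros(8, 4, requires_grad=True)       # box (x, y, w, h)
log_var = torch.zeros(8, 4, requires_grad=True)
loss = gaussian_nll(mean, log_var, torch.randn(8, 4))
loss.backward()
print(float(loss))
```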
Leveraging Stereo-Camera Data for Real-Time Dynamic Obstacle Detection and Tracking
https://ieeexplore.ieee.org/document/9340699/
[ "Thomas Eppenberger", "Gianluca Cesari", "Marcin Dymczyk", "Roland Siegwart", "Renaud Dubé", "Thomas Eppenberger", "Gianluca Cesari", "Marcin Dymczyk", "Roland Siegwart", "Renaud Dubé" ]
Dynamic obstacle avoidance is a crucial component of compliant navigation in crowded environments. In this paper we present a system for accurate and reliable detection and tracking of dynamic objects using noisy point cloud data generated by stereo cameras. Our solution is real-time capable and specifically designed for deployment on computationally constrained unmanned ground vehicles. Th...
Robust and efficient post-processing for video object detection
https://ieeexplore.ieee.org/document/9341600/
[ "Alberto Sabater", "Luis Montesano", "Ana C. Murillo", "Alberto Sabater", "Luis Montesano", "Ana C. Murillo" ]
Object recognition in video is an important task for many applications, including autonomous driving perception, surveillance tasks, wearable devices or IoT networks. Object recognition using video data is more challenging than using still images due to blur, occlusions or rare object poses. Specific video detectors with high computational cost or standard image detectors together with a fast...
Modality-Buffet for Real-Time Object Detection
https://ieeexplore.ieee.org/document/9340960/
[ "Nicolai Dorka", "Johannes Meyer", "Wolfram Burgard", "Nicolai Dorka", "Johannes Meyer", "Wolfram Burgard" ]
Real-time object detection in videos using lightweight hardware is a crucial component of many robotic tasks. Detectors using different modalities and with varying computational complexities offer different trade-offs. One option is to have a very lightweight model that can predict from all modalities at once for each frame. However, in some situations (e.g., in static scenes) it might be better t...
Deep Mixture Density Network for Probabilistic Object Detection
https://ieeexplore.ieee.org/document/9340882/
[ "Yihui He", "Jianren Wang", "Yihui He", "Jianren Wang" ]
Mistakes and unquantified uncertainties in object detection can lead to catastrophes when deploying robots in the real world. In this paper, we measure the uncertainties of object localization to minimize this kind of risk. Uncertainties emerge in challenging cases such as occlusion. The bounding box borders of an occluded object can have multiple plausible configurations. We propose a deep multivariate mixture o...
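A minimal mixture-density-network head in the spirit of this abstract (component count and head shapes are illustrative assumptions, not the paper's design): each box border is modeled as a mixture of Gaussians, which can place mass on several plausible borders under occlusion.

```python
# Sketch: MDN head over the four box borders, trained with mixture NLL.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class MDNBoxHead(nn.Module):
    def __init__(self, feat_dim=256, k=4):
        super().__init__()
        self.k = k
        self.out = nn.Linear(feat_dim, 4 * k * 3)  # (pi, mu, log_sigma)

    def forward(self, feat):
        p = self.out(feat).view(-1, 4, self.k, 3)
        log_pi = F.log_softmax(p[..., 0], dim=-1)
        return log_pi, p[..., 1], p[..., 2]       # mixture parameters

def mdn_nll(log_pi, mu, log_sigma, target):
    t = target.unsqueeze(-1)                      # broadcast over components
    log_norm = (-0.5 * ((t - mu) / log_sigma.exp()) ** 2
                - log_sigma - 0.5 * math.log(2 * math.pi))
    return -torch.logsumexp(log_pi + log_norm, dim=-1).mean()

head = MDNBoxHead()
log_pi, mu, log_sigma = head(torch.randn(8, 256))
print(float(mdn_nll(log_pi, mu, log_sigma, torch.randn(8, 4))))
```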
MLOD: Awareness of Extrinsic Perturbation in Multi-LiDAR 3D Object Detection for Autonomous Driving
https://ieeexplore.ieee.org/document/9341254/
[ "Jianhao Jiao", "Peng Yun", "Lei Tai", "Ming Liu", "Jianhao Jiao", "Peng Yun", "Lei Tai", "Ming Liu" ]
Extrinsic perturbation is always present in systems with multiple sensors. In this paper, we focus on the extrinsic uncertainty in multi-LiDAR systems for 3D object detection. We first analyze the influence of extrinsic perturbation on geometric tasks with two basic examples. To minimize the detrimental effect of extrinsic perturbation, we propagate an uncertainty prior on each point of the input point clouds, and use...
Active 6D Multi-Object Pose Estimation in Cluttered Scenarios with Deep Reinforcement Learning
https://ieeexplore.ieee.org/document/9340842/
[ "Juil Sock", "Guillermo Garcia-Hernando", "Tae-Kyun Kim", "Juil Sock", "Guillermo Garcia-Hernando", "Tae-Kyun Kim" ]
In this work, we explore how a strategic selection of camera movements can facilitate the task of 6D multi-object pose estimation in cluttered scenarios while respecting real-world constraints, such as time and distance travelled, that are important in robotics and augmented reality applications. In the proposed framework, multiple object hypotheses inferred by an object pose estimator are accumulated both ...
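A toy sketch of the active-vision intuition (this is not the paper's learned policy; the forward model and cost weighting are assumptions): among candidate camera moves, pick the one expected to reduce the entropy of the accumulated pose hypotheses the most, penalized by the distance travelled.

```python
# Sketch: greedy next-best-view selection by expected entropy reduction.
import numpy as np

def entropy(p):
    p = p / p.sum()
    return float(-(p * np.log(p + 1e-12)).sum())

def pick_view(weights_now, predicted_weights_per_view, travel_cost,
              cost_scale=0.1):
    """predicted_weights_per_view: hypothesis weights expected after
    observing from each candidate viewpoint (assumed given by a
    hypothetical forward model)."""
    h_now = entropy(weights_now)
    scores = [h_now - entropy(w) - cost_scale * c
              for w, c in zip(predicted_weights_per_view, travel_cost)]
    return int(np.argmax(scores))

w = np.array([0.4, 0.3, 0.2, 0.1])          # current pose hypotheses
views = [np.array([0.7, 0.1, 0.1, 0.1]),    # informative but far
         np.array([0.4, 0.3, 0.2, 0.1])]    # uninformative but close
print(pick_view(w, views, travel_cost=[1.0, 0.2]))  # -> 0
```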
6D Pose Estimation for Flexible Production with Small Lot Sizes based on CAD Models using Gaussian Process Implicit Surfaces
https://ieeexplore.ieee.org/document/9341189/
[ "Jianjie Lin", "Markus Rickert", "Alois Knoll", "Jianjie Lin", "Markus Rickert", "Alois Knoll" ]
We propose a surface-to-surface (S2S) point registration algorithm that exploits Gaussian Process Implicit Surfaces for partially overlapping 3D surfaces to estimate the 6D pose transformation. Unlike traditional approaches that separate the correspondence search and update steps in the inner loop, we formulate point registration as a nonlinear, unconstrained optimization problem which do...
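A rough numpy sketch of the Gaussian Process Implicit Surface idea behind such an S2S cost (the kernel, hyperparameters, and off-surface anchoring scheme here are assumptions): the target surface is encoded as the zero level set of a GP, and a candidate pose is scored by how close the transformed source points lie to that level set.

```python
# Sketch: GP implicit surface as a registration cost.
import numpy as np

def rbf(a, b, ls=0.3):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def fit_gpis(surface_pts, interior_pt, noise=1e-4):
    """Train targets: 0 on the surface, -1 at one interior anchor."""
    X = np.vstack([surface_pts, interior_pt[None]])
    y = np.r_[np.zeros(len(surface_pts)), -1.0]
    alpha = np.linalg.solve(rbf(X, X) + noise * np.eye(len(X)), y)
    return lambda q: rbf(q, X) @ alpha      # implicit value f(q)

def pose_cost(f, src_pts, R, t):
    return float(np.abs(f(src_pts @ R.T + t)).mean())

# Toy example: target = unit circle in the z=0 plane, source = the
# same circle; the identity pose should score (near) zero.
th = np.linspace(0, 2 * np.pi, 60, endpoint=False)
circle = np.c_[np.cos(th), np.sin(th), np.zeros_like(th)]
f = fit_gpis(circle, np.zeros(3))
print(pose_cost(f, circle, np.eye(3), np.zeros(3)))  # ~0
```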
Learning Orientation Distributions for Object Pose Estimation
https://ieeexplore.ieee.org/document/9340860/
[ "Brian Okorn", "Mengyun Xu", "Martial Hebert", "David Held", "Brian Okorn", "Mengyun Xu", "Martial Hebert", "David Held" ]
For robots to operate robustly in the real world, they should be aware of their uncertainty. However, most methods for object pose estimation return a single point estimate of the object's pose. In this work, we propose two learned methods for estimating a distribution over an object's orientation. Our methods take into account both the inaccuracies in pose estimation and the object sym...
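One simple non-parametric way to represent a distribution over orientations (the paper compares several representations; everything below is an illustrative assumption): score a fixed codebook of unit quaternions and softmax the scores into a discrete distribution on SO(3), which can spread mass across symmetric poses.

```python
# Sketch: discrete orientation distribution over a quaternion codebook.
import torch
import torch.nn as nn
import torch.nn.functional as F

num_bins = 512
# Hypothetical codebook: in practice a near-uniform grid over SO(3)
# would be used; random unit quaternions keep the sketch short.
codebook = F.normalize(torch.randn(num_bins, 4), dim=1)
head = nn.Linear(128, num_bins)       # image feature -> per-bin score

feat = torch.randn(1, 128)            # stand-in for a CNN feature
probs = head(feat).softmax(dim=1)     # discrete P(R_i | image)
mode = codebook[probs.argmax(dim=1)]  # most likely orientation
entropy = -(probs * probs.clamp_min(1e-9).log()).sum()
print(mode.shape, float(entropy))     # high entropy flags ambiguity
```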
Estimation of object class and orientation from multiple viewpoints and relative camera orientation constraints
https://ieeexplore.ieee.org/document/9340771/
[ "Koichi Ogawara", "Keita Iseki", "Koichi Ogawara", "Keita Iseki" ]
In this research, we propose a method for estimating object class and orientation from multiple input images, assuming the relative camera orientations are known. Input images are transformed into descriptors on 2-D manifolds defined for each object class through a CNN, and the object class and orientation that minimize the distance between the input descriptors and the descriptors associated with the...
Parts-Based Articulated Object Localization in Clutter Using Belief Propagation
https://ieeexplore.ieee.org/document/9340908/
[ "Jana Pavlasek", "Stanley Lewis", "Karthik Desingh", "Odest Chadwicke Jenkins", "Jana Pavlasek", "Stanley Lewis", "Karthik Desingh", "Odest Chadwicke Jenkins" ]
Robots working in human environments must be able to perceive and act on challenging objects with articulations, such as a pile of tools. Articulated objects increase the dimensionality of the pose estimation problem, and partial observations under clutter create additional challenges. To address this problem, we present a generative-discriminative parts-based recognition and localization method f...
3D Gaze Estimation for Head-Mounted Devices based on Visual Saliency
https://ieeexplore.ieee.org/document/9341755/
[ "Meng Liu", "You Fu Li", "Hai Liu", "Meng Liu", "You Fu Li", "Hai Liu" ]
While 2D gaze tracking technology has matured, 3D gaze tracking has gradually become a research hotspot in recent years. Head-mounted gaze trackers have shown great potential for gaze estimation in 3D space due to their appealing flexibility and portability. The general challenge for 3D gaze tracking algorithms is that calibration is necessary before use, and calibration targets ...
Category-Level 3D Non-Rigid Registration from Single-View RGB Images
https://ieeexplore.ieee.org/document/9340878/
[ "Diego Rodriguez", "Florian Huber", "Sven Behnke", "Diego Rodriguez", "Florian Huber", "Sven Behnke" ]
In this paper, we propose a novel approach to solve the 3D non-rigid registration problem from RGB images using Convolutional Neural Networks (CNNs). Our objective is to find a deformation field (typically used for transferring knowledge between instances, e.g., grasping skills) that warps a given 3D canonical model into a novel instance observed by a single-view RGB image. This is done by trainin...
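A loose sketch of the setup described (the sizes and the warp parameterization are assumptions, not the paper's): a CNN maps a single RGB view to per-control-point displacements that deform a canonical category model toward the observed instance.

```python
# Sketch: CNN-regressed deformation field over a canonical model.
import torch
import torch.nn as nn

num_ctrl = 100                                  # control points of the warp
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, num_ctrl * 3),                # 3D offset per control point
)

rgb = torch.randn(1, 3, 128, 128)
canonical = torch.randn(num_ctrl, 3)            # canonical model points
warped = canonical + encoder(rgb).view(num_ctrl, 3)
print(warped.shape)                             # deformed instance estimate
```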
Relative Pose Estimation and Planar Reconstruction via Superpixel-Driven Multiple Homographies
https://ieeexplore.ieee.org/document/9341707/
[ "Xi Wang", "Marc Christie", "Eric Marchand", "Xi Wang", "Marc Christie", "Eric Marchand" ]
This paper proposes a novel method to simultaneously perform relative camera pose estimation and planar reconstruction of a scene from two RGB images. We start by extracting and matching superpixel information from both images and rely on a novel multi-model RANSAC approach to estimate multiple homographies from superpixels and identify matching planes. Ambiguity issues when performing homography ...
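The geometric core this abstract builds on is standard: a homography between two views of a plane encodes the relative pose, and OpenCV can decompose it into candidate (R, t, n) solutions. A sketch with synthetic values (the paper's contribution is the superpixel-driven multi-homography estimation around this step, not the decomposition itself):

```python
# Sketch: build a homography from a known plane and motion, then
# decompose it back into candidate relative poses.
import numpy as np
import cv2

K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
# Ground-truth motion: small rotation about y, translation along x,
# observing the plane n.X = d with n = (0, 0, 1), d = 2.
ang = np.deg2rad(5.0)
R = np.array([[np.cos(ang), 0, np.sin(ang)],
              [0, 1, 0],
              [-np.sin(ang), 0, np.cos(ang)]])
t = np.array([[0.1], [0.0], [0.0]])
n = np.array([[0.0], [0.0], [1.0]])
H = K @ (R + t @ n.T / 2.0) @ np.linalg.inv(K)

num, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
print(f"{num} candidate decompositions")  # up to 4; disambiguation
# (e.g., visibility or multi-plane consistency) selects the true one.
```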
PERCH 2.0 : Fast and Accurate GPU-based Perception via Search for Object Pose Estimation
https://ieeexplore.ieee.org/document/9341257/
[ "Aditya Agarwal", "Yupeng Han", "Maxim Likhachev", "Aditya Agarwal", "Yupeng Han", "Maxim Likhachev" ]
Pose estimation of known objects is fundamental to tasks such as robotic grasping and manipulation. The need for reliable grasping imposes stringent accuracy requirements on pose estimation in cluttered, occluded scenes in dynamic environments. Modern methods employ large sets of training data to learn features in order to find correspondences between 3D models and observed data. However, these meth...