Meet Our Robots!

Cassie Blue

Cassie Blue is one of Michigan’s Cassie-series robots, shown here participating in a controlled burn. It has 20 degrees of freedom (DoF), 10 actuators, joint encoders, and an IMU. The robot’s serial number is 001. When standing, Cassie is about one meter tall, with a total mass of 31 kg.

I seek to develop a full autonomy system that allows bipedal robots to

1) Acquire multi-modal data from a calibrated perception suite;

2) Estimate their poses in textureless environments;

3) Detect and avoid dynamic obstacles;

4) Traverse unexplored, unstructured environments and undulating terrains; and

5) Perform point-to-point topometric navigation.

 

All the research presented here focuses on advancing state-of-the-art algorithms that will achieve autonomy for our bipedal robots, Cassie Blue and Digit.


Current Research & Projects


Topological Global Planning

 

Jiunn-Kai Huang and Jessy W. Grizzle

 


 

Jiunn-Kai Huang, Hengxu You, Kaustav Chakraborty,
Jianyang Tang, Maani Ghaffari, and Jessy W. Grizzle

Accurate detection of objects is critical for full-scale robotic autonomy. In this paper, we propose KC3D-RCNN, a novel two-stage predictor-refiner network that efficiently detects, classifies, and estimates the pose of objects using both 2D images and 3D LiDAR point cloud data. The first stage of our network follows a K-Means-based region proposal approach and generates bounding boxes for the observed objects. We use a continuous 3D loss function and the K-Means clustering algorithm to filter and reduce the number of bounding boxes. The 3D loss function scores each proposal by lifting the LiDAR returns to a continuous function in a Reproducing Kernel Hilbert Space; we also show how this function can be used for both classification and pose estimation. The adoption of this loss function makes our network especially adept at handling objects that are far away or have sparse LiDAR returns, such as pedestrians and bicyclists. The second stage updates the bounding boxes to refine pose estimation and classification. On the KITTI dataset, our results outperform state-of-the-art detection architectures, and ablation studies show that our network continues to perform well as the inputs are made harder by increasing data sparsity. All implementations and trained weights will be available at https://github.com/UMich-BipedLab/KC3D-RCNN.
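As a rough illustration of the proposal-filtering idea, the sketch below clusters 2D proposal centers with a plain K-Means loop and keeps only the highest-scoring proposal in each cluster. This is a simplification for exposition; the function names and the scoring scheme are assumptions, not the released KC3D-RCNN implementation:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain K-Means on proposal centers: returns a cluster label per point."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)].astype(float)
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # Assign each center to its nearest centroid.
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster empties.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels

def filter_proposals(centers, scores, k):
    """Keep only the highest-scoring proposal within each spatial cluster."""
    labels = kmeans(centers, k)
    keep = []
    for j in range(k):
        idx = np.flatnonzero(labels == j)
        if idx.size:
            keep.append(int(idx[scores[idx].argmax()]))
    return sorted(keep)
```

In practice the paper additionally scores proposals with the continuous RKHS loss rather than a scalar confidence, but the reduce-by-clustering structure is the same.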


 

Jiunn-Kai Huang and Jessy W. Grizzle

We propose and experimentally demonstrate a reactive planning system for bipedal robots on unexplored, challenging terrains. The system consists of a low-frequency planning thread (5 Hz) that finds an asymptotically optimal path and a high-frequency reactive thread (300 Hz) that accommodates robot deviation. The planning thread includes: a multi-layer local map to compute the terrain's traversability for the robot; an anytime omnidirectional Control Lyapunov Function (CLF) for use with a Rapidly Exploring Random Tree Star (RRT*) that generates a vector field specifying motion between nodes; a sub-goal finder for when the final goal lies outside the current map; and a finite-state machine to handle high-level mission decisions. The reactive thread copes with robot deviation and obviates the non-smooth motions that arise with traditional RRT* path following: a vector field, defined by a closed-loop feedback policy, provides real-time control commands to the robot's gait controller as a function of the instantaneous robot pose.
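A minimal sketch of the reactive idea, assuming a 2D waypoint path and a unicycle-style command interface (forward speed, turn rate): the desired velocity blends motion along the local path tangent with a pull back toward the path, and is then converted to gait-controller commands. The gains and this particular blend are illustrative assumptions, not the paper's CLF construction:

```python
import numpy as np

def vector_field_command(pose, path, v_nom=0.5, k_lat=1.0, k_head=2.0):
    """Closed-loop vector field: progress along the local path tangent while
    being pulled toward the path; convert the desired planar velocity into
    (forward speed, turn rate) commands for a gait controller."""
    x, y, theta = pose
    p = np.array([x, y])
    # Nearest waypoint and local path tangent (previous segment at the end).
    i = int(np.linalg.norm(path - p, axis=1).argmin())
    seg = path[i + 1] - path[i] if i + 1 < len(path) else path[i] - path[i - 1]
    tangent = seg / np.linalg.norm(seg)
    # Desired velocity: along-path progress plus a pull toward the path.
    v_des = v_nom * tangent + k_lat * (path[i] - p)
    # Heading error, wrapped to (-pi, pi].
    h_des = np.arctan2(v_des[1], v_des[0])
    err = np.arctan2(np.sin(h_des - theta), np.cos(h_des - theta))
    # Slow forward motion when pointing away from the desired direction.
    return np.linalg.norm(v_des) * max(np.cos(err), 0.0), k_head * err
```

Because the command is a pure function of the instantaneous pose, it can be evaluated at the 300 Hz reactive rate even while the 5 Hz planner replans the path underneath it.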


 

Jiunn-Kai Huang, William Clark, and Jessy W. Grizzle

This paper introduces the concept of optimizing a target's shape to remove pose ambiguity in LiDAR point clouds. The target is designed to induce large gradients at edge points under rotation and translation relative to the LiDAR, ameliorating the quantization uncertainty associated with point cloud sparseness. Moreover, given a target shape, we present a method that leverages the target's geometry to estimate the target's vertices while globally estimating its pose. We achieve centimeter-level error in translation and a few degrees in rotation, even when a partially illuminated target is placed 30 meters away.
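Given putative correspondences between estimated vertices and the known target geometry, the rigid alignment at the core of such pose estimation can be sketched with the standard SVD-based least-squares fit (the Kabsch algorithm). This is the textbook building block, not the paper's global, partial-illumination-robust solver:

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch): find R, t minimizing
    sum_i || R @ src_i + t - dst_i ||^2 over point correspondences."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard keeps det(R) = +1 (a proper rotation).
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_d - R @ mu_s
    return R, t
```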


 

Jiunn-Kai Huang, Chenxi Feng, Madhav Achar, Maani Ghaffari, and Jessy W. Grizzle

Intrinsic calibration models for spinning LiDARs have been based on hypothesized physical mechanisms, resulting in anywhere from three to ten parameters to be estimated from data, while no phenomenological models have yet been proposed for solid-state LiDARs. Instead of going down that road, we propose to abstract away from the physics of a LiDAR type (spinning vs. solid-state, for example) and focus on the point cloud's spatial geometry generated by the sensor. By modeling the calibration parameters as an element of a matrix Lie Group, we achieve a unifying view of calibration for different types of LiDARs. We further prove mathematically that the proposed model is well-constrained (has a unique answer) given four appropriately oriented targets. The proof provides a guideline for target positioning in the form of a tetrahedron. Moreover, an existing semi-definite programming global solver for SE(3) can be modified to efficiently compute the optimal calibration parameters.
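The Lie-group view can be made concrete: a calibration element can be parameterized by a 6-vector twist and mapped onto a rigid transform via the SE(3) exponential map. The sketch below is the standard textbook construction (Rodrigues' formula plus the left Jacobian for the translation part), not the paper's semi-definite solver:

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix so that hat(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_se3(xi):
    """Exponential map from a twist xi = (rho, phi) in R^6 to a 4x4
    homogeneous transform in SE(3)."""
    rho, phi = xi[:3], xi[3:]
    theta = np.linalg.norm(phi)
    W = hat(phi)
    if theta < 1e-9:
        # First-order expansion near the identity.
        R, V = np.eye(3) + W, np.eye(3) + 0.5 * W
    else:
        R = (np.eye(3) + np.sin(theta) / theta * W
             + (1.0 - np.cos(theta)) / theta**2 * W @ W)
        V = (np.eye(3) + (1.0 - np.cos(theta)) / theta**2 * W
             + (theta - np.sin(theta)) / theta**3 * W @ W)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, V @ rho
    return T
```

Optimizing over the twist coordinates keeps the estimate on the group, so the resulting calibration is always a valid rigid transform regardless of the LiDAR's physical mechanism.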


 

Jiunn-Kai Huang, Shoutian Wang, Maani Ghaffari, and Jessy W. Grizzle

This article introduces LiDARTag, a novel fiducial tag design and detection algorithm suitable for light detection and ranging (LiDAR) point clouds. The proposed method runs in real time, processing data at 100 Hz, which is faster than currently available LiDAR sensor frequencies. Because LiDAR actively illuminates the scene, rapidly changing ambient lighting does not affect the detection of a LiDARTag; hence, the proposed fiducial marker can operate in a completely dark environment. In addition, LiDARTag nicely complements and is compatible with existing visual fiducial markers, such as AprilTags, allowing for efficient multi-sensor fusion and calibration tasks. We further propose estimating the marker's pose by minimizing the fitting error between the point cloud and the marker's template; the proposed method achieves millimeter error in translation and a few degrees in rotation. Due to the sparsity of LiDAR returns, the point cloud is lifted to a continuous function in a reproducing kernel Hilbert space, where the inner product can be used to determine a marker's ID.
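The RKHS decoding step can be sketched as follows: lifting a point cloud P to the function f = Σᵢ k(·, pᵢ) with a Gaussian kernel makes the inner product between two lifted clouds a double sum of kernel evaluations, and a normalized version of that score can select the best-matching ID template. The kernel length scale and the cosine-style normalization here are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def rkhs_inner(P, Q, ell=0.1):
    """RKHS inner product of the lifted functions f = sum_i k(., p_i) and
    g = sum_j k(., q_j) for a Gaussian kernel: <f, g> = sum_ij k(p_i, q_j)."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-0.5 * d2 / ell**2).sum()

def match_id(cloud, templates, ell=0.1):
    """Pick the template with the largest normalized RKHS inner product
    (cosine similarity between the lifted functions)."""
    def score(T):
        return rkhs_inner(cloud, T, ell) / np.sqrt(
            rkhs_inner(cloud, cloud, ell) * rkhs_inner(T, T, ell))
    return max(range(len(templates)), key=lambda k: score(templates[k]))
```

Because the lifted function is defined everywhere, the score degrades gracefully as the returns get sparser, which is exactly the regime the tag must survive.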


Jiunn-Kai Huang and Jessy W. Grizzle

The rigid-body transformation between a LiDAR and a monocular camera is required for sensor fusion tasks, such as SLAM. While determining such a transformation is not considered glamorous in any sense of the word, it is nonetheless crucial for many modern autonomous systems. Indeed, an error of a few degrees in rotation or a few percent in translation can lead to 20 cm reprojection errors at a distance of 5 m when overlaying a LiDAR image on a camera image. The biggest impediments to determining the transformation accurately are the relative sparsity of LiDAR point clouds and systematic errors in their distance measurements. This paper proposes (1) the use of targets of known dimension and geometry to ameliorate target pose estimation in the face of the quantization and systematic errors inherent in a LiDAR image of a target, (2) a fitting method for the LiDAR-to-monocular-camera transformation that avoids the tedious task of target edge extraction from the point cloud, and (3) a “cross-validation study” based on projecting the 3D LiDAR target vertices onto the corresponding corners in the camera image. The end result is a 50% reduction in projection error and a 70% reduction in its variance with respect to the baseline.
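The cross-validation metric can be sketched with a standard pinhole model: map the 3D target vertices through the estimated extrinsics (R, t) and intrinsics K, then measure the RMS pixel distance to the corners detected in the camera image. Function names and matrix shapes here are assumptions for illustration:

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of LiDAR-frame points X (N, 3) into the image:
    x_cam = R @ X + t, then pixel coords u = K @ x_cam divided by depth."""
    Xc = X @ R.T + t
    uvw = Xc @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

def reprojection_rmse(K, R, t, X, corners_px):
    """RMS pixel distance between projected target vertices and the
    corresponding corners detected in the camera image."""
    err = project(K, R, t, X) - corners_px
    return float(np.sqrt((err ** 2).sum(axis=1).mean()))
```

With a focal length of 500 px, the 20 cm error at 5 m quoted above corresponds to roughly 500 × 0.2 / 5 = 20 px of reprojection error, which is why this metric is a sensitive check on the calibration.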

Previous Research & Projects



Visual Odometry

 



Autonomous Manipulation and Searching

 



Integrated Circuits
 



Autonomous Navigation and Exploration
 



Autonomous Segway

 



TSV in 3D-IC
 



3D Reconstruction

 



Autonomous Wireless Charging

 



Rectenna & WPT