This project is an offshoot of the Kinematic Navigation and Cartography Knapsack (KNaCK), a backpack-mounted LiDAR system that allows humans to map their environment. While KNaCK is being developed for astronaut situational awareness, integrating it onto an autonomous rover (KNaCK-Car) enables the system to assess an environment in advance of the arrival of astronauts and other vehicles.
KNaCK-Car is being developed concurrently with the Smart Video Guidance System (SVGS), a lightweight positioning system that requires only an LED target and a smartphone-grade camera. SVGS provides reliable positioning but has no way of detecting hazards; it therefore relies on KNaCK-Car to provide a hazard map, eliminating the need for power-hungry LiDAR scanners on all but the initial scouting vehicle.
This project uses the Lunar Terrain Field (LTF) at Marshall Space Flight Center to provide an accurate testing environment (shown in Figure 1.1).
Figures 1.2 and 1.3 show two maps created by KNaCK-Car. The original KNaCK focuses entirely on 3D mapping, which can be ported directly to KNaCK-Car. Development of KNaCK-Car therefore focuses on 2D mapping, which produces the hazard map for subsequent vehicles.
The navigation of KNaCK-Car is built upon Robot Operating System (ROS) and its Navigation 2 (Nav2) framework. Nav2 provides accessible tools for simultaneous localization and mapping (SLAM), but is mainly intended for structured environments such as warehouses. Most of KNaCK-Car's technical challenges thus involve adapting this 2D SLAM software to function in an environment with uneven terrain and non-uniform obstacles.
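A central question in this adaptation is which LiDAR returns should count as obstacles once the third dimension is discarded. The sketch below is a conceptual illustration, not the project's actual pipeline: it flattens a 3D point cloud into a 2D scan, keeping only returns within a chosen height band so that berms and rocks register as obstacles while gentle terrain does not. The thresholds and frame conventions are placeholders.

```python
import numpy as np

def flatten_to_scan(points, num_bins=360, min_z=0.15, max_z=1.5, max_range=20.0):
    """Conceptual sketch: collapse a 3D LiDAR cloud into a 2D scan.

    points : (N, 3) array of XYZ returns in the robot's base frame.
    Only returns whose height falls between min_z and max_z are treated as
    obstacles, so gentle slopes are ignored while berms and rocks still
    appear in the 2D map. All thresholds here are placeholders.
    """
    obstacles = points[(points[:, 2] > min_z) & (points[:, 2] < max_z)]
    azimuth = np.arctan2(obstacles[:, 1], obstacles[:, 0])    # bearing of each return
    distance = np.hypot(obstacles[:, 0], obstacles[:, 1])     # planar range
    bins = ((azimuth + np.pi) / (2 * np.pi) * num_bins).astype(int) % num_bins

    scan = np.full(num_bins, max_range)
    for b, d in zip(bins, distance):
        if d < scan[b]:
            scan[b] = d    # keep the nearest obstacle in each angular bin
    return scan
```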
Figure 1.4 shows KNaCK-Car navigating through a series of hallways. One can clearly see "scan matching" taking place, wherein the robot matches its LiDAR scan to the map like a puzzle piece. This ensures accurate positioning in the face of sensor errors.
Figure 1.5 shows similar scan matching in the LTF. This was a vast improvement over an earlier test (shown in Figure 1.3), which failed to detect several hazards in the "hallway" between two berms.
At the start of my internship, KNaCK-Car's hardware was all in place, but there was no software implementation beyond manual driving with a remote control. Over the course of my internship, I delivered the capabilities summarized below.
At my internship's closeout, KNaCK-Car is capable of SLAM in both structured environments and the Lunar Terrain Field. KNaCK-Car can autonomously navigate between user-defined waypoints, pathfinding around obstacles and updating its path as the map evolves. SVGS demonstrated use of a hazard map generated by KNaCK-Car, fulfilling KNaCK-Car's role in the mission architecture.
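As an illustration of how such waypoint goals can be issued to Nav2, the sketch below uses the nav2_simple_commander Python interface; the waypoint coordinates are hypothetical, and KNaCK-Car's actual goal interface may differ.

```python
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator, TaskResult

def make_waypoint(navigator, x, y):
    """Goal pose in the map frame with a neutral heading (coordinates are placeholders)."""
    pose = PoseStamped()
    pose.header.frame_id = 'map'
    pose.header.stamp = navigator.get_clock().now().to_msg()
    pose.pose.position.x = x
    pose.pose.position.y = y
    pose.pose.orientation.w = 1.0
    return pose

def main():
    rclpy.init()
    navigator = BasicNavigator()
    navigator.waitUntilNav2Active()   # wait for localization and the planner servers

    # Hypothetical user-defined waypoints; Nav2 plans around mapped obstacles
    # and replans as the hazard map evolves.
    waypoints = [make_waypoint(navigator, 5.0, 0.0),
                 make_waypoint(navigator, 10.0, 3.0)]
    navigator.followWaypoints(waypoints)

    while not navigator.isTaskComplete():
        pass                          # feedback could be inspected here

    if navigator.getResult() == TaskResult.SUCCEEDED:
        print('All waypoints reached.')
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```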
I had the amazing opportunity to take the inaugural offering of UNCW's introductory robotics course. The entire course centered on a single continuous project designed to build an understanding of robotic systems. A general outline of the project is shown below.
The hardware consisted of a robot manipulator arm and an RGB-Depth (RGBD) camera. We knew the characteristics of the object (a tennis ball) and could tune our detection programs accordingly. Figure 2.1 shows two different methods of object detection: the center panel isolates the ball by filtering color values, while the rightmost panel uses shape detection from Python's OpenCV library.
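Both approaches are standard OpenCV operations. The sketch below illustrates them with hypothetical HSV bounds and Hough-transform parameters; the actual values would need tuning for the camera and lighting used.

```python
import cv2
import numpy as np

# Hypothetical HSV bounds for a tennis ball's yellow-green color.
LOWER_HSV = np.array([25, 80, 80])
UPPER_HSV = np.array([45, 255, 255])

def detect_ball_color(bgr_image):
    """Isolate ball pixels by filtering HSV color values."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)

def detect_ball_shape(bgr_image):
    """Locate the ball as a circle using OpenCV's Hough transform."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, 100,
                               param1=100, param2=30, minRadius=10, maxRadius=80)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)  # center and radius in pixels
    return x, y, r
```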
Once we know the ball's location in the frame, we can isolate the appropriate pixels. The depth values within this area are fitted to a sphere to create a model of the ball.
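One common way to perform this fit is a linearized least-squares solve. The sketch below assumes the masked pixels have already been converted to XYZ points using the camera intrinsics (a step not shown here), and illustrates the general technique rather than our exact routine.

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit to an (N, 3) array of XYZ points.

    Solves the linearized system x^2 + y^2 + z^2 = 2ax + 2by + 2cz + d,
    where (a, b, c) is the sphere's center and d = r^2 - a^2 - b^2 - c^2.
    """
    A = np.hstack([2.0 * points, np.ones((points.shape[0], 1))])
    b = np.sum(points ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius
```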
Figure 2.2 shows the point cloud generated from the camera's color and depth data. Areas highlighted in white are those isolated in Figure 2.1. In this image, the ball was isolated using color values alone. As we can see, the ball's model (shown in red) has been distorted by background pixels that made it through the filter. While the robotic gripper is large enough to tolerate the resulting offset, a more accurate solution would be ideal.
Figure 2.3 shows the far more accurate model generated using shape detection. The ball's outline was detected first, and a slightly smaller area was then used for the curve fitting; this is visible in the figure as yellow regions excluded from the filter. Shrinking the area ensures that no background pixels are mistakenly counted as part of the ball.
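The masking step can be as simple as drawing the detected circle at a reduced radius; in the sketch below, the shrink factor is a placeholder.

```python
import cv2
import numpy as np

def circle_mask(image_shape, x, y, r, shrink=0.8):
    """Binary mask of the detected circle, shrunk to exclude edge and background pixels."""
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    cv2.circle(mask, (x, y), int(r * shrink), 255, thickness=-1)
    return mask
```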
The motion planning itself was fairly straightforward, with our program providing a series of waypoints that the robot visits in sequence. One such sequence, simulated in Gazebo, is shown in Figure 2.4. The sequence imitates picking up the ball and placing it in another location.
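A minimal sketch of such a waypoint sequence, assuming a ROS/MoveIt stack with the Python moveit_commander interface (the planning-group name and poses are placeholders, and the course's actual setup may differ):

```python
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

# Initialize MoveIt and a planning group (group name is a placeholder).
moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node('pick_and_place_demo')
arm = moveit_commander.MoveGroupCommander('manipulator')

def make_pose(x, y, z):
    """Target pose above the workspace (orientation is a placeholder)."""
    pose = Pose()
    pose.position.x, pose.position.y, pose.position.z = x, y, z
    pose.orientation.w = 1.0
    return pose

# Waypoints imitating a pick-and-place motion: hover, descend, lift, move, lower.
waypoints = [
    make_pose(0.4, 0.0, 0.25),   # above the ball
    make_pose(0.4, 0.0, 0.10),   # descend to grasp height
    make_pose(0.4, 0.0, 0.25),   # lift
    make_pose(0.2, 0.3, 0.25),   # move to the drop location
    make_pose(0.2, 0.3, 0.10),   # lower and release
]

for target in waypoints:
    arm.set_pose_target(target)
    arm.go(wait=True)            # plan and execute to this waypoint
    arm.stop()
    arm.clear_pose_targets()
```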
Finally, the entire system was tested using the real-world robot and camera, as shown in Figure 2.5. The ball could be placed anywhere on the sheet of paper (limited by the camera's field of view at such close range), and the robot would locate and move the ball.