This project is focused on realizing path planning and obstacle avoidance capabilities for a mobile robot platform, the Pioneer 3-AT “Frio”, available in the APASLab. The versatility and modifiability of the platform make it suitable for a vast number of functions (which might be implemented later in the project’s development), but for now the main goal was to allow the platform to navigate complex spatial situations without collisions. Once realized, this will be the foundation for later development. It is also worth mentioning that this project started as my final project in the undergraduate Computer Science programme.
There were multiple possible ways to achieve this goal, but ultimately the chosen method was to use ROS (Robot Operating System), more precisely its “navigation” stack (a stack is a collection of codependent packages and tools), which offers tools and packages ready to be parametrized and used in the system. That said, the “navigation” stack by itself is not enough for the system to work. Another stack that was used is “frio-ros-stack”, developed at the Faculty of Engineering in Rijeka. That stack enables communication between the ROS system and the robot platform, as well as with the sensor used. These were the software foundations of the project.
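To give an idea of how the “navigation” stack is put to use, the sketch below shows a typical minimal launch-file wiring of its central node, `move_base`. This is illustrative only: the package name `my_robot_nav` and the parameter file names are placeholders, not this project’s actual configuration.

```xml
<launch>
  <!-- Illustrative wiring of the navigation stack's move_base node.
       Parameter files below are hypothetical placeholders. -->
  <node pkg="move_base" type="move_base" name="move_base" output="screen">
    <rosparam file="$(find my_robot_nav)/config/costmap_common.yaml"
              command="load" ns="global_costmap" />
    <rosparam file="$(find my_robot_nav)/config/costmap_common.yaml"
              command="load" ns="local_costmap" />
    <rosparam file="$(find my_robot_nav)/config/base_local_planner.yaml"
              command="load" />
  </node>
</launch>
```

The shared costmap parameters are loaded twice, once for the global costmap (long-term planning over the whole map) and once for the local costmap (short-term obstacle avoidance around the robot).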
The hardware used is as follows:
- the P3-AT robot platform mentioned in the first paragraph – physically modified to accommodate the following components;
- a Microsoft Kinect sensor – the system’s only way of collecting outside data, i.e. scanning the surroundings. The data the sensor transmits into the system is processed by one of the “frio-ros-stack” capabilities, which extracts a planar 2D laser scan from the 3D point cloud that the Kinect produces;
- a personal laptop with Ubuntu 14.04 LTS and a full “fuerte” ROS distribution installed, with the addition of the “RosAria” package used for communication with the robot platform.
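The point-cloud-to-laser-scan conversion mentioned above boils down to simple pinhole-camera geometry: each pixel in one horizontal row of the depth image becomes one beam of the scan. The sketch below is my own illustration of that idea, not the actual “frio-ros-stack” code; the function name and camera parameters (`fx`, `cx`) are assumptions.

```python
import math

def depth_row_to_scan(depths, fx, cx):
    """Convert one row of a depth image (metres along the optical axis)
    into planar laser-scan angles and ranges, assuming a pinhole camera
    with focal length fx (pixels) and principal point cx (pixels)."""
    angles, ranges = [], []
    for u, z in enumerate(depths):
        if z <= 0.0:                 # the Kinect reports invalid pixels as 0
            continue
        x = (u - cx) * z / fx        # lateral offset of the 3-D point
        angles.append(math.atan2(x, z))   # beam angle from the optical axis
        ranges.append(math.hypot(x, z))   # straight-line distance to point
    return angles, ranges

# Example: a 5-pixel row looking straight at a wall 2 m away.
angles, ranges = depth_row_to_scan([2.0, 2.0, 2.0, 2.0, 2.0], fx=2.0, cx=2.0)
```

The centre pixel maps to a beam at angle 0 with range equal to the depth; pixels toward the image edges map to wider angles and longer ranges, since the wall is farther away along those beams.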
There are other hardware options that might further optimize the system, including but not limited to:
- adding more Kinect sensors (currently, one more is available), which would expand the field of view used to map the environment and localize the robot; the alternative is an actual laser sensor, of which none are currently available;
- implementing the system on multiple machines, preferably using one or more Raspberry Pi devices as on-board controllers or data collectors, with a remote machine doing the more computationally intensive tasks.
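The multi-machine option would rely on ROS’s built-in network transparency: all machines point at one ROS master and exchange topics over the network. A minimal sketch of the environment setup follows; the hostnames `frio-pi` and `workstation` are placeholders.

```shell
# On the on-board Raspberry Pi (would run RosAria and the Kinect driver);
# hostnames below are hypothetical.
export ROS_MASTER_URI=http://workstation:11311
export ROS_HOSTNAME=frio-pi

# On the remote workstation (would run roscore and the navigation stack):
export ROS_MASTER_URI=http://workstation:11311
export ROS_HOSTNAME=workstation
```

Both machines must resolve each other’s hostnames, since ROS nodes connect to one another directly after registering with the master.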
The setup realized in this project was tested by successfully mapping the APASLab laboratory without collisions with detectable obstacles, but I believe more extensive testing and some adaptations are needed before it can be called a complete success.
Video of a test:
Instructions and setup download (temporary).
– Filip Jurada