The goal of this project was to test and further extend Google's MannequinChallenge project. It implements a method for predicting dense depth in scenarios where both a monocular camera and the people in the scene are freely moving. At inference time, the method uses motion parallax cues from the static areas of the scene to guide the depth prediction. The neural network is trained on filtered YouTube videos in which people imitate mannequins, i.e., freeze in elaborate, natural poses, while a hand-held camera tours the scene.
My task was to prepare the TurtleBot so that other people could work with it. The TurtleBot has many functions, from basic ones (moving around, rotating) to mapping a room.
Preparing my PC for work
First I had to install Ubuntu 16.04 on my remote PC. After that I installed ROS and the dependent ROS packages, and set my IP address in the .bashrc file. With that done, my PC was set up to work with the TurtleBot.
Preparing the Raspberry Pi for work
Next I had to communicate with the motors so that the TurtleBot could move around. This is done through the OpenCR board. I had to upload the OpenCR firmware, which I found in the ROBOTIS GitHub repository, to the board. There are two ways of uploading the firmware to the OpenCR: through the terminal, or through the Arduino IDE. After that, pressing the SW1 or SW2 button on the OpenCR would make the robot move forward or rotate.
First I had to start a server to which the TurtleBot connects, so that my PC and the TurtleBot could communicate. This procedure is called bringup. To check that everything was working, I loaded the TurtleBot model into the RViz program. Inside RViz I could see that the TurtleBot was sending data from its laser sensor. This data tells us how far obstacles are from the TurtleBot, and RViz visualizes their positions as small red dots.
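The laser data that RViz visualizes is just an array of range readings, one per beam angle. As a rough illustration (plain Python, not actual ROS code; the field names mirror the ROS sensor_msgs/LaserScan message), the nearest obstacle and its bearing can be extracted like this:

```python
import math

def nearest_obstacle(ranges, angle_min, angle_increment):
    """Return (distance, bearing in radians) of the closest valid reading.

    `ranges` mimics the ranges array of a ROS sensor_msgs/LaserScan:
    one distance per beam, with inf/NaN marking invalid readings.
    """
    best = None
    for i, r in enumerate(ranges):
        if math.isfinite(r) and (best is None or r < best[0]):
            best = (r, angle_min + i * angle_increment)
    return best

# A toy 5-beam scan sweeping from -30 to +30 degrees.
scan = [1.2, float("inf"), 0.4, 0.9, 1.5]
print(nearest_obstacle(scan, math.radians(-30), math.radians(15)))
# closest beam is index 2: 0.4 m, straight ahead
```

A real node would run this on every incoming scan message and feed the result to the visualization or to the obstacle-avoidance logic.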
Now I could finally start with basic operations.
There are many ways to control the TurtleBot. You can use a keyboard, a PS3 joystick, an Xbox 360 joystick, a Wii Remote, LEAP Motion, etc., just to move it around the room (forward, backward, left, right, rotate left, rotate right). RViz also gives us the ability to drive the TurtleBot around the room.
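Whatever input device is used, each of them ends up producing the same thing: a velocity command with a linear and an angular component. A minimal sketch of the keyboard mapping (plain Python; in ROS this pair would be published as a geometry_msgs/Twist on the cmd_vel topic, and the bindings here are illustrative, not the actual teleop node's):

```python
# Hypothetical key bindings: (linear m/s, angular rad/s) per key.
KEY_BINDINGS = {
    "w": (0.2, 0.0),   # forward
    "x": (-0.2, 0.0),  # backward
    "a": (0.0, 0.5),   # rotate left
    "d": (0.0, -0.5),  # rotate right
    "s": (0.0, 0.0),   # stop
}

def command_for_key(key):
    """Translate a key press into a (linear, angular) velocity command."""
    return KEY_BINDINGS.get(key, (0.0, 0.0))  # unknown keys mean stop

print(command_for_key("w"))   # (0.2, 0.0)
print(command_for_key("q"))   # (0.0, 0.0)
```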
The TurtleBot has an obstacle detection function. It moves forward until it detects an obstacle, then stops very close to it without touching it. It sends data to our PC so we can see how far away the obstacle is and where it stopped.
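This stop-before-the-obstacle behaviour boils down to scaling the forward speed by the measured front distance. A hedged sketch of that idea (plain Python; the distance thresholds are made-up values for illustration, not the project's actual parameters):

```python
STOP_DIST = 0.25   # m: halt here, just short of touching the obstacle
SLOW_DIST = 0.80   # m: start slowing down from here
MAX_SPEED = 0.22   # m/s: assumed top speed

def forward_speed(front_distance):
    """Full speed when clear, linear slow-down in between, stop when close."""
    if front_distance <= STOP_DIST:
        return 0.0
    if front_distance >= SLOW_DIST:
        return MAX_SPEED
    # Linear ramp between STOP_DIST and SLOW_DIST.
    return MAX_SPEED * (front_distance - STOP_DIST) / (SLOW_DIST - STOP_DIST)

print(forward_speed(2.0))    # 0.22 (path is clear)
print(forward_speed(0.20))   # 0.0  (too close, stop)
```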
There is also a point operation, where we give the TurtleBot x and y coordinates and a z angle: the TurtleBot moves to the point (x, y) and then rotates to the given z angle.
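The point operation can be thought of as three phases: turn toward the goal, drive the straight-line distance, then turn to the final z angle. A rough sketch of the geometry (plain Python, no ROS; this is my simplification of the behaviour, not the project's code):

```python
import math

def plan_point_op(x0, y0, theta0, gx, gy, gtheta):
    """Return (turn1, distance, turn2) to reach (gx, gy), then face gtheta.

    All angles are in radians; turns are normalized to (-pi, pi].
    """
    def norm(a):
        while a <= -math.pi:
            a += 2 * math.pi
        while a > math.pi:
            a -= 2 * math.pi
        return a

    heading = math.atan2(gy - y0, gx - x0)   # direction toward the goal
    turn1 = norm(heading - theta0)           # rotate to face the goal
    dist = math.hypot(gx - x0, gy - y0)      # drive this far
    turn2 = norm(gtheta - heading)           # rotate to the final angle
    return turn1, dist, turn2

# From the origin facing +x, go to (1, 1) and end up facing +y.
print(plan_point_op(0, 0, 0, 1, 1, math.pi / 2))
```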
One cool feature is called patrol. We choose the type of patrol (rectangle, triangle, or circle), its size (for example, for a circle we must give the radius), and how many times we want the TurtleBot to repeat the lap.
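Each patrol shape can be reduced to a list of waypoints that the robot visits repeatedly. A small illustrative generator (plain Python; the parameterization is my own simplification, e.g. `size` is treated as the circumradius and the rectangle as a square):

```python
import math

def patrol_waypoints(shape, size, laps, samples=8):
    """Generate (x, y) waypoints for a patrol loop around the origin.

    shape: "rectangle" (square here for simplicity), "triangle", or "circle".
    size:  circumradius of the shape in metres.
    laps:  how many times to repeat the loop.
    """
    corners = {"rectangle": 4, "triangle": 3, "circle": samples}[shape]
    lap = [
        (size * math.cos(2 * math.pi * i / corners),
         size * math.sin(2 * math.pi * i / corners))
        for i in range(corners)
    ]
    return lap * laps

print(len(patrol_waypoints("triangle", 1.0, 2)))  # 6 waypoints: 3 corners x 2 laps
```

The robot would then feed these waypoints, one after another, to the same goal-following behaviour used by the point operation.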
SLAM (Simultaneous Localization and Mapping)
This feature gives us the ability to make a map of a room and to navigate the TurtleBot using that map. First I started the SLAM program and used my keyboard to drive the robot around the room. After I had made the map, I started the navigation program, in which you give an estimated pose of the robot and then move it around a little so it can localize itself precisely. After that you can give it a goal point, and it will move there while avoiding obstacles. We can also use the map to run a simulation, so a real robot is not needed: first you build the simulation, and only once everything works inside the simulation do you test it on the real TurtleBot.
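The map produced by SLAM is an occupancy grid: a flat array of cells at a fixed resolution, with an origin expressed in world coordinates. Converting between a world point and a grid cell is the basic operation navigation relies on; a sketch following the ROS nav_msgs/OccupancyGrid convention (plain Python, example numbers my own):

```python
def world_to_cell(x, y, origin_x, origin_y, resolution):
    """Map a world coordinate (metres) to integer (col, row) grid indices."""
    return int((x - origin_x) / resolution), int((y - origin_y) / resolution)

def cell_to_world(col, row, origin_x, origin_y, resolution):
    """Return the world coordinate of the centre of a grid cell."""
    return (origin_x + (col + 0.5) * resolution,
            origin_y + (row + 0.5) * resolution)

# A map with 5 cm cells whose origin sits at (-10, -10) in the world frame.
print(world_to_cell(0.0, 0.0, -10.0, -10.0, 0.05))   # (200, 200)
```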
GitHub link: https://github.com/lukastoja/TurtleBot
Author: Luka Otović
This project was about creating a face recognition program for a coffee machine.
The main purpose of the program is to recognize a person's face (identity) and offer them the coffee they most often drink. If the person is new to the system, or doesn't want their usual coffee, they can choose from a list of available coffees.
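The "usual coffee" suggestion is essentially a frequency count over the person's order history. A minimal sketch of that logic (plain Python; the names are illustrative, not taken from the actual project code):

```python
from collections import Counter

def suggest_coffee(order_history, menu):
    """Offer the most frequently ordered coffee, or the full menu to newcomers."""
    if not order_history:
        return None, menu                    # new user: pick from the list
    usual, _count = Counter(order_history).most_common(1)[0]
    return usual, menu                       # menu stays available as fallback

menu = ["espresso", "cappuccino", "latte"]
print(suggest_coffee(["latte", "espresso", "latte"], menu)[0])  # latte
print(suggest_coffee([], menu)[0])                              # None
```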
The use of augmented reality speeds up the recovery of patients rehabilitating after events such as car accidents or strokes that result in impaired movement.
The task was to explore existing solutions, or propose our own hardware and software solution, best suited for integration into an augmented reality system aimed at easing the recovery of patients with reduced motor skills.
Of all the 3D cameras explored, the ORBBEC Astra gave the best results, so a skeleton tracking application that displays the human body was developed with the Astra SDK and PCL (Point Cloud Library). In the application menu, users can choose between joint tracking and full-body tracking.
This application can track human joints, human body points, and human body points in color. The human body is segmented from its environment using a simple technique.
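The write-up doesn't name the segmentation technique, but a common simple approach with depth cameras is a depth threshold: keep only the pixels within a band around the tracked person's distance. An illustrative sketch on a toy depth array (plain Python; the real application works on Astra depth frames via PCL):

```python
def segment_by_depth(depth, near, far):
    """Keep depth pixels inside [near, far] metres; zero out the background."""
    return [[d if near <= d <= far else 0 for d in row] for row in depth]

# Toy 3x3 depth image (metres): a person at ~1.5 m in front of a wall at 3 m.
depth = [[3.0, 1.5, 3.0],
         [1.4, 1.5, 1.6],
         [3.0, 1.5, 3.0]]
print(segment_by_depth(depth, 1.0, 2.0))
```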
Functionality can be seen in the following video:
This project’s objective was to find, test, and implement a skeletal tracking algorithm using Intel’s RealSense R200 camera.
The project, to a certain extent, implements the existing skeletal tracking algorithm from the Intel RealSense SDK with six body joint points (hands, shoulders, head, and mid-spine), sends them to a TCP port via a socket, and renders the result with the help of OpenGL.
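Sending the six joints over TCP means agreeing on a wire format. The project's actual format isn't described here, so as an illustration only, a length-prefixed JSON framing in plain Python (stdlib only; the joint names mirror the six points listed above):

```python
import json
import struct

JOINTS = ["hand_left", "hand_right", "shoulder_left",
          "shoulder_right", "head", "spine_mid"]

def encode_frame(joints):
    """Pack {joint: (x, y, z)} as a 4-byte big-endian length + JSON body."""
    body = json.dumps({name: list(joints[name]) for name in JOINTS}).encode()
    return struct.pack(">I", len(body)) + body

def decode_frame(frame):
    """Inverse of encode_frame; a receiver would read this off the socket."""
    (length,) = struct.unpack(">I", frame[:4])
    return json.loads(frame[4:4 + length].decode())

pose = {name: (0.0, 0.0, 1.0) for name in JOINTS}   # dummy pose, 1 m away
frame = encode_frame(pose)
print(decode_frame(frame)["head"])  # [0.0, 0.0, 1.0]
```

The length prefix lets the receiver know where each pose message ends, since TCP itself provides only a byte stream with no message boundaries.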
The current stage of the project is deciding whether Intel’s current cameras are adequate for further improvement and work, or whether it is more worthwhile to move to a different piece of hardware.
Authors: Domagoj Makar, Vito Medved
Mentor: Assoc. Prof. Kristijan Lenac, PhD
While all sorts of cloud APIs are becoming more and more popular as replacements for local alternatives, this project was intended to demonstrate face recognition via the cloud, with OpenCV’s Haar classifier running locally to detect faces.
The project implements two optional cloud APIs – Face++ and FaceR.
Recognition is done by comparing the locally detected face (using OpenCV) with a person model pre-trained on a few hundred images (505 images in total were used in the process, via the cloud APIs).
The result of face recognition is a confidence level, which tells us the odds that the unknown, detected face really belongs to the pre-trained person model. A threshold of 70% is set for recognition to be considered successful.
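The decision step on top of the cloud response is then a simple comparison against the 70% threshold. Sketched in plain Python (the function and response shape are generic placeholders, not the exact Face++ or FaceR payload):

```python
THRESHOLD = 70.0  # percent, as described above

def decide(confidence, threshold=THRESHOLD):
    """Accept the match only if the cloud-reported confidence clears the bar."""
    return "recognized" if confidence >= threshold else "unknown"

print(decide(83.5))  # recognized
print(decide(41.0))  # unknown
```

In practice the choice of threshold trades false accepts against false rejects; 70% was evidently a workable middle ground for this data set.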
The project, including documentation and Python scripts, is available at this link.
Additionally, an introductory video with an app demonstration is available.
DJI Phantom 2 Vision Plus
This project’s main objective was to test the quadcopter’s autonomous landing system using the previously developed Android applications „APASLab Aviator“ and „APASLab Base“, and to suggest improvements to increase landing accuracy. Before testing, the apps underwent a landing algorithm optimization, which improved the landing significantly. However, there is still room for further improvements, e.g. computer vision, which will hopefully come in the near future.
Project documentation available here.
Blender is a free application that allows you to create a wide range of 2D and 3D content. It provides a number of functions for modeling, texturing, lighting, animation, and video in a single package, and is one of the most popular open-source 3D graphics applications in the world.
The project is based on applying rigid body physics to a human model in order to detect collisions with other objects and prevent it from passing through them. For the purposes of the project, the human model was made from scratch, together with its armature and walking animations.
The Blender file of the project is available here.
The goal of this project was to create a virtual reality experience using the Oculus Rift and any available 3D sensor. As we wanted to implement skeletal tracking, the logical choice was the Xbox 360 Kinect, since it has skeletal tracking implemented in its driver.
We wanted to render arms inside the engine, using the participant’s hand, elbow, and shoulder positions to set the bone positions of the virtual arms.
As the engine has a built-in plugin for the Oculus Rift, launching the project within it was trivial.
An additional goal was to port the project for use on a 3D projector. Unreal Engine 4, however, does not support stereoscopic 3D rendering, so a plugin was created to work around the problem.
The project’s main goal was to evaluate how well Android OS works on the Raspberry Pi 2, and Ubuntu Touch OS on the Samsung Galaxy S2.
Android OS worked fine on the Raspberry Pi 2, and there is even potential to develop applications for embedded systems (though a few problems remain, such as the lack of a GPU driver).
Ubuntu Touch OS is a totally different OS from Ubuntu for PC. It is oriented toward smartphones, and support for the Samsung Galaxy S2 expired two years ago. Some newer smartphones, such as the Nexus 4, Nexus 7, Nexus 10, Meizu MX4, BQ Aquaris E4.5, and BQ Aquaris E5, are still supported today. There are a lot of problems with the OS (some hardware keys do not work, the UI is buggy, and WiFi and the mobile network also do not work), so the best approach would be to test Ubuntu Touch on smartphones that are still supported.
Here is a video of Ubuntu Touch installation process: