Joint tracking using Intel RealSense R200 3D camera

This project’s objective was to explore, test, and implement a skeletal tracking algorithm using Intel’s RealSense R200 camera.

To a certain extent, the project builds on the existing skeletal tracking algorithm from the Intel RealSense SDK, tracking six body joint points (hands, shoulders, head, and mid-spine), sending them to a TCP port via a socket, and rendering the result with the help of OpenGL.
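The joint-streaming step can be sketched roughly as follows. This is an illustrative example only: the joint names, coordinates, host, and port are placeholders, not the project’s actual wire format or the RealSense SDK API.

```python
import json
import socket

# Hypothetical snapshot of the six tracked joints as (x, y, z)
# camera-space coordinates in meters.
joints = {
    "head":           (0.02, 0.45, 1.10),
    "spine_mid":      (0.01, 0.10, 1.12),
    "left_shoulder":  (-0.18, 0.35, 1.15),
    "right_shoulder": (0.20, 0.34, 1.14),
    "left_hand":      (-0.30, 0.05, 0.95),
    "right_hand":     (0.32, 0.04, 0.96),
}

def send_joints(joints, host="127.0.0.1", port=5005):
    """Serialize the joint dictionary as JSON and push it over a TCP socket."""
    payload = json.dumps(joints).encode("utf-8")
    with socket.create_connection((host, port)) as sock:
        sock.sendall(payload)
```

In a live pipeline a sender like this would run once per camera frame, with the consumer on the other end of the socket decoding each JSON message back into joint positions.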

The current stage of the project is deciding whether Intel’s current cameras are adequate for further improvement and work, or whether it would be more worthwhile to move to a different piece of hardware.


Authors: Domagoj Makar, Vito Medved

Mentor: Assoc. Prof. Kristijan Lenac, PhD

Cloud face recognition

While all sorts of cloud APIs are becoming more and more popular as replacements for local alternatives, this project was intended to demonstrate face recognition via the cloud, with OpenCV’s Haar cascade classifier running locally to detect faces.

The project implements two optional cloud APIs – Face++ and FaceR.
Recognition is done by comparing a locally detected face (using OpenCV) with a person model pre-trained on a few hundred images (505 images in total were used in the process, via the cloud APIs).
The result of face recognition is a confidence level, which tells us the odds that the unknown detected face really belongs to the pre-trained person model. A threshold of 70% is set for recognition to be considered successful.
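The decision step can be illustrated with a minimal Python sketch. Here `query_cloud` is a hypothetical stand-in for the real Face++ or FaceR request (their actual APIs differ); only the 70% threshold logic is taken from the project.

```python
RECOGNITION_THRESHOLD = 70.0  # percent; the project's success threshold

def recognize(face_image, query_cloud, threshold=RECOGNITION_THRESHOLD):
    """Ask the cloud service for a confidence score for the locally
    detected face crop, and decide whether recognition succeeded.

    query_cloud: a callable standing in for the Face++/FaceR request;
    it takes the face crop and returns a confidence in percent.
    Returns (recognized, confidence).
    """
    confidence = query_cloud(face_image)
    return confidence >= threshold, confidence
```

With this shape, swapping between the two cloud back ends only means passing a different `query_cloud` callable, while the threshold check stays in one place.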

The project, including documentation and Python scripts, is available at this link.

Additionally, an introductory video with an app demonstration is available.

Quadcopter landing accuracy measurements and suggestions for improvement

DJI Phantom 2 Vision Plus


This project’s main objective was to test the quadcopter’s autonomous landing system using the previously developed Android applications “APASLab Aviator” and “APASLab Base”, and to suggest improvements to increase landing accuracy. Before testing, the mentioned apps underwent a landing algorithm optimization, which led us to the conclusion that landing accuracy has improved significantly. However, there is still room for further improvements, e.g. computer vision, which will hopefully come in the near future.

Project documentation is available here.

Project participants:

Nenad Vrkić

Kristian Suljć


Indoor localization application

BuildNGO is an application used for indoor localization; it is free to download and easy to use. Before you can use it, you must register with SAILS Tech and download the JOSM editor.
JOSM is a free editor that allows you to create a 2D image of a building scheme, which is shown in the BuildNGO application floor by floor. There are other Android applications, such as Indoor Map Painter, which are used for editing (“painting”) the walls and rooms shown in the same application, but they are not necessary.

The project was built for the Corinthia hotel in Baška, Croatia, so that its guests can find their way around. For easier testing, we built one more application for our college.


BuildNGO
https://play.google.com/store/apps/details?id=com.sails.buildngo&hl=hr
Josm
https://josm.openstreetmap.de/
Indoor map painter
https://play.google.com/store/apps/details?id=com.sails.mapvieweditor

Using rigid body physics with Blender for people modeling

Blender is a free application that allows you to create a wide range of 2D and 3D content. It provides functions for modeling, texturing, lighting, animation, and video editing in a single package, and is one of the most popular open-source 3D graphics applications in the world.

The project is based on applying rigid body physics to a human model in order to detect collisions with other objects and prevent it from passing through them. For the purposes of the project, the human model was made from scratch, together with its armature and walking animations.

The Blender file of the project is available here.

Project participants:

David Dominić

Virtual Reality application with Oculus Rift, Optoma 3D and 3D sensor

The goal of this project was to create a virtual reality experience using the Oculus Rift and any available 3D sensor. As we wanted to implement skeletal tracking, the logical choice was the Xbox 360 Kinect, since it has skeletal tracking implemented in its driver.

We wanted to render arms inside the engine, using the participant’s hand, elbow, and shoulder positions to set the bone positions of the virtual arms.
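The mapping from tracked joints to virtual arm bones can be illustrated independently of the engine. This is a plain-Python sketch of the underlying vector math, not Unreal Engine code; the joint positions are hypothetical:

```python
import math

def bone_direction(a, b):
    """Unit vector pointing from joint a toward joint b
    (e.g. shoulder -> elbow), used to orient a bone."""
    d = [b[i] - a[i] for i in range(3)]
    n = math.sqrt(sum(c * c for c in d))
    return [c / n for c in d]

def arm_bones(shoulder, elbow, hand):
    """Return the upper-arm and forearm directions derived from
    the three tracked joint positions of one arm."""
    return bone_direction(shoulder, elbow), bone_direction(elbow, hand)
```

In the engine, directions like these (together with the joint positions themselves) would be converted into bone transforms for the virtual arm’s skeleton each frame.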

As the engine has a built-in plugin for the Oculus Rift, launching the project within it was trivial.

An additional goal was to port the project for use on a 3D projector. Unreal Engine 4, however, does not support stereoscopic 3D rendering, so a plugin was created to solve the problem.


Project participants:

Edi Ćiković

Kathrin Mäusl

Support STEMI: a Platform Helping Students Learn Science, Technology, Engineering, and Math


STEMI is a project created by a Croatian students’ non-profit organisation, and its main goal is to make learning by creating possible. Logit joined their crowdfunding campaign to help them collect funds to finance STEMI – a composable hexapod robot plus an online multimedia learning platform.

STEMI has been created to teach scientific, technological, engineering, and mathematical concepts in a fun and easy way. Its purpose is to educate new generations of students and everyone interested in STEM (science, technology, engineering, math).

STEMI’s crowdfunding campaign on Indiegogo started on October 26, 2015, and reached its funding goal of $16,000 in only 3 days. The campaign is still active, is backed by more than 200 backers, and we encourage you to donate now (the campaign ends on December 5, 2015).

Project summary: Setting up Android on Raspberry Pi 2 and Ubuntu Touch on Samsung Galaxy S2

The project’s main goal was to determine the functionality of Android OS on the Raspberry Pi 2 and of Ubuntu Touch OS on the Samsung Galaxy S2.

Android OS worked fine on the Raspberry Pi 2, and there is potential to develop applications even for embedded systems (although a few problems remain, such as the lack of a GPU driver).

Ubuntu Touch OS is a totally different OS from Ubuntu for PC. It is oriented toward smartphones, and support for the Samsung Galaxy S2 expired two years ago. Some newer smartphones, such as the Nexus 4, Nexus 7, Nexus 10, Meizu MX4, BQ Aquaris E4.5, and BQ Aquaris E5, are still supported today. There are a lot of problems with the OS (some hardware keys do not work, the UI is buggy, and WiFi and the mobile network also do not work), so the best approach would be to test Ubuntu Touch on smartphones that are still supported.

Here is a video of Ubuntu Touch installation process: