Localization and tracking of UAVs using an RGB camera

The goal of this project was to implement an indoor drone position monitoring system using AR (augmented reality) binary markers and an external RGB (red, green, blue) camera. The project was implemented using ROS (Robot Operating System) to enable easy upgrades in the future.

This project was made with the intention of further improvement through integration with a multi-drone coordination system (swarm control). The given solution would be used to track and identify drones, detect errors in their movements, and provide information for the decision-making process that plans each drone's subsequent movements.
AR markers are an adequate solution for the purposes of this project, but the system could be improved by using various computer vision algorithms, e.g. detection and object identification algorithms. This would provide information useful for precise collision avoidance, complex flight formations, or formations with large numbers of drones.
The work provides a basis for future improvements and integration into new systems for the needs of the RiTeh Drone Team.
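
As a hedged illustration of the marker-tracking idea (not the project's code, which is in the repository below): binary markers such as ArUco can be detected in the feed of an external RGB camera with OpenCV's aruco module. The sketch below assumes opencv-contrib-python with the pre-4.7 aruco API, and the dictionary choice is an example.

# Sketch: detect ArUco binary markers with an external RGB camera.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
cap = cv2.VideoCapture(0)  # index of the external camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is not None:
        # Each marker ID identifies one drone; the corner coordinates
        # give its position in the image for tracking.
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("drone tracking", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()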

GitHub: https://github.com/matildabenac/zavrsni_rad

Learning the depths of moving people by watching frozen people

The goal of this project was to test and further upgrade Google's Mannequin Challenge project. It implements a method for predicting dense depth in scenarios where both a monocular camera and people in the scene are freely moving. At inference time, the method uses motion parallax cues from the static areas of the scene to guide the depth prediction. The neural network is trained on filtered YouTube videos in which people imitate mannequins, i.e., freeze in elaborate, natural poses, while a hand-held camera tours the scene.

TurtleBot3

Summary

My task was to prepare the TurtleBot so that people can work with it. The TurtleBot has many functions, from basic ones (moving around, rotating) to mapping a room.

Preparing my PC for work

First I had to install Ubuntu 16.04 on my remote PC. After that I installed ROS and the dependent ROS packages, and I set my IP address in the .bashrc file. After that, my PC was set up to work with the TurtleBot.
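
For reference, the network entries added to ~/.bashrc look roughly like this; the addresses below are placeholders for the actual IP of the remote PC:

export ROS_MASTER_URI=http://192.168.0.10:11311   # roscore runs on the remote PC
export ROS_HOSTNAME=192.168.0.10                  # this machine's own IP address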

Preparing Raspberry Pi for work

Next I had to communicate with the motors so that the TurtleBot could move around. This is done through the OpenCR board, so I had to upload the OpenCR firmware, which I found in the ROBOTIS GitHub repository, onto the board. There were two ways of uploading the firmware to the OpenCR: through the terminal, or through the Arduino IDE. After that, pressing the SW 1 or SW 2 button on the OpenCR makes the robot move forward or rotate.
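
For the terminal route, the steps look roughly like this (after downloading and extracting the firmware archive from the ROBOTIS-GIT/OpenCR-Binaries repository; the port and model name are example values):

export OPENCR_PORT=/dev/ttyACM0   # serial port where the OpenCR shows up
export OPENCR_MODEL=burger        # TurtleBot3 model name
./update.sh $OPENCR_PORT $OPENCR_MODEL.opencr   # flashing script from the archive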

Bringup

First I had to start a server to which the TurtleBot connects so that my PC and the TurtleBot could communicate. This procedure is called Bringup. To check that everything was working normally, I loaded the TurtleBot inside the RViz program. Inside RViz I could see that the TurtleBot was sending me data from its laser sensor. This data tells us how far an obstacle is from the TurtleBot, and RViz uses it to show us visually where the obstacle is, as small red dots.
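
As a minimal sketch (not part of the original setup), the same laser data that RViz visualizes can be read directly from the /scan topic with a small rospy node:

#!/usr/bin/env python
# Sketch: print the distance straight ahead from the laser scans that
# bringup publishes on the /scan topic.
import rospy
from sensor_msgs.msg import LaserScan

def on_scan(scan):
    # ranges[0] is the measurement at angle_min; on the TurtleBot3 LDS
    # that is the reading straight ahead of the robot.
    rospy.loginfo("distance ahead: %.2f m", scan.ranges[0])

rospy.init_node('scan_listener')
rospy.Subscriber('/scan', LaserScan, on_scan)
rospy.spin()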

Now I could finally start with the basic operations.

Basic Operation

There are many ways to control the TurtleBot. You can use a keyboard, a PS3 or Xbox 360 joystick, a Wii Remote, a Leap Motion controller, etc. just to move it around the room (forward, backward, left, right, rotate left, rotate right). The RViz program also gives us the ability to control the TurtleBot and move it around the room.
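
Underneath all of these input devices, the commands end up as velocity messages on the /cmd_vel topic, which is the standard drive topic on the TurtleBot3. A hedged sketch of driving the robot forward programmatically (the speed is an example value):

#!/usr/bin/env python
# Sketch: drive forward slowly by publishing velocity commands on
# /cmd_vel, the same topic the teleop tools publish to.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('drive_forward')
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
rate = rospy.Rate(10)  # publish at 10 Hz

cmd = Twist()
cmd.linear.x = 0.1   # m/s forward; setting angular.z would rotate instead
while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()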

The TurtleBot also has a function to detect obstacles. It will move forward until it detects an obstacle and will get very close to it without touching it. It sends the data to our PC so we can see how far away the obstacle is and when it stopped.
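
A hedged sketch of how such behavior can be written (an illustration, not the stock TurtleBot3 example code): subscribe to /scan and zero the forward velocity when the reading straight ahead drops below a threshold.

#!/usr/bin/env python
# Sketch: creep forward until the laser reading straight ahead drops
# below a stop distance, then halt. The threshold is an example value.
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

STOP_DISTANCE = 0.25  # metres

rospy.init_node('stop_before_obstacle')
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)

def on_scan(scan):
    cmd = Twist()
    ahead = scan.ranges[0]  # reading straight ahead of the robot
    if ahead > STOP_DISTANCE:
        cmd.linear.x = 0.1  # keep moving forward
    rospy.loginfo("obstacle at %.2f m", ahead)
    pub.publish(cmd)        # an all-zero Twist stops the robot

rospy.Subscriber('/scan', LaserScan, on_scan)
rospy.spin()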

There is also a point operation, where we give the TurtleBot x and y coordinates and a z angle; the TurtleBot moves to the point (x, y) and then rotates by the given z angle.
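
Conceptually this decomposes into a rotate-drive-rotate plan. A small sketch of the computation (the function name and tuple layout are made up for illustration; in practice the angles should also be wrapped to [-pi, pi]):

import math

def plan_point_op(pose, goal):
    # pose = (x, y, theta) of the robot; goal = (gx, gy, gtheta).
    x, y, theta = pose
    gx, gy, gtheta = goal
    heading = math.atan2(gy - y, gx - x)   # direction of the goal point
    first_turn = heading - theta           # 1) rotate to face the goal
    distance = math.hypot(gx - x, gy - y)  # 2) drive straight this far
    final_turn = gtheta - heading          # 3) rotate to the requested z angle
    return first_turn, distance, final_turn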

One cool feature is called patrol. We can choose the type of patrol we want (rectangle, triangle, or circle). We give the TurtleBot the type of patrol, the patrol radius (for example, for a circle we must give it the radius of that circle), and how many times we want it to repeat the lap.
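
A sketch of how such patrol waypoints might be generated (a hypothetical helper, not the actual TurtleBot3 example code; "rectangle" degenerates to a square here for simplicity):

import math

def patrol_waypoints(shape, radius, points_on_circle=12):
    # Corner points of one patrol lap, centred on the start position.
    # A circle is approximated by short straight segments.
    sides = {"triangle": 3, "rectangle": 4, "circle": points_on_circle}
    n = sides[shape]
    return [(radius * math.cos(2 * math.pi * k / n),
             radius * math.sin(2 * math.pi * k / n))
            for k in range(n)]

# e.g. patrol_waypoints("triangle", 0.5) gives three corners 0.5 m from
# the centre; driving the list repeatedly repeats the lap.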

SLAM (Simultaneous Localization and Mapping)

This feature gives us the ability to make a map of a room and to navigate the TurtleBot using that map. First I started the SLAM program and used my keyboard to move the robot around the room. After I had made the map of the room, I started the navigation program, in which you give an estimated pose of the robot. Then you move it around a little to get a precise pose. After that you can give it a point/goal to move to, and it will drive to that point while avoiding obstacles. We can also use that map to run a simulation, so we don't need the real robot: first you build the simulation, and once everything works inside the simulation, you test it on the real TurtleBot.
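
Sending such a goal can also be done programmatically. A hedged sketch using the standard move_base action interface that the ROS navigation stack exposes (the goal coordinates are example values):

#!/usr/bin/env python
# Sketch: send one navigation goal to move_base; the robot then plans
# a path on the SLAM map and drives there while avoiding obstacles.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_goal')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'   # coordinates in the map frame
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.0     # example goal, 1 m along map x
goal.target_pose.pose.position.y = 0.5     # and 0.5 m along map y
goal.target_pose.pose.orientation.w = 1.0  # identity orientation

client.send_goal(goal)
client.wait_for_result()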

GitHub link: https://github.com/lukastoja/TurtleBot

Created by: Luka Otović

Dash point of sale embedded system implemented in a coffee machine

The goal of this project was to build an embedded system that enables products and/or services to be bought with cryptocurrency. The chosen cryptocurrency was Dash. A Raspberry Pi was used as the hardware that runs the required code and stores the blockchain. The embedded system was built into a vending machine; the MDB (Multi-Drop Bus) protocol was used for communicating with the vending machine. Communication with the blockchain is achieved with RPC and ZMQ. The final product enables drinks served by the vending machine to be bought with the Dash cryptocurrency by scanning a QR code with the Dash wallet mobile app.
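
A hedged sketch of the ZMQ side (the endpoint is an example value; a Dash Core node publishes these notifications when started with the corresponding -zmqpub option, here -zmqpubhashtx=tcp://127.0.0.1:28332):

# Sketch: listen for new-transaction notifications from a Dash Core
# node over ZMQ using pyzmq.
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.SUB)
sock.connect("tcp://127.0.0.1:28332")
sock.setsockopt_string(zmq.SUBSCRIBE, "hashtx")

while True:
    topic, body, _seq = sock.recv_multipart()
    # body is the 32-byte transaction hash. A point-of-sale system would
    # then fetch the transaction over RPC and check that it pays the
    # invoice address with the expected amount before serving the drink.
    print(topic.decode(), body.hex())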

Infobip Code Escape – solving coding problems for charity

Infobip is organizing Code Escape (www.infobip.com/en/code-escape, https://youtu.be/sh2P9U3sRv0, a version of Room Escape for programmers) with a humanitarian character. For every problem the teams solve, Infobip will donate a certain sum of money to the Kantrida Children's Hospital and to the children's associations DIRA Rijeka and Moje mjesto pod suncem.

In addition to Infobip's donations, for every team from the Faculty of Engineering that signs up and completes the Code Escape (regardless of how well it does), an additional 2,000.00 HRK will be donated to a recipient of the team's choice (the Kantrida Children's Hospital, the DIRA Rijeka association, or the Moje mjesto pod suncem association). The funds will be provided and paid by donors who contacted us directly. Interested students should contact me for more information.

Kristijan Lenac

IoT Challenge presentation

Get networked into the Internet of Things at the first specialized workshops for developing your own IoT solutions on the SIGFOX network.

Content and registration
The world's leading Internet of Things brand has covered Croatia with its network and opened countless channels for all developers, hobbyists, and creatives of every kind to realize their IoT ideas.

The best and fastest way to present your ideas to the business and technology public is the IoT Challenge competition, on the occasion of which the company IoT Net Adria is holding a series of workshops in several Croatian cities, at locations that gather large numbers of technology enthusiasts who recognize the potential of the Internet of Things concept.

To gather the inspiration, knowledge, and skills needed to realize your idea, come on 11 March 2019 at 2 p.m., when Marko Gojić, an IoT specialist from the company IoT Net Adria, will share his rich experience in developing sensors, platforms, and complete solutions for the Sigfox platform. The IoT Challenge presentation will be held at STEP RI, Radmile Matejčić 10, Rijeka.


Body tracking using ORBBEC Astra

The use of augmented reality leads to faster recovery for patients who are rehabilitating after events such as car accidents or strokes that result in impaired movement.

The task was to explore, or suggest our own, hardware and software solution that would be the most appropriate for integration into an augmented reality system, with the goal of easing the recovery of patients with reduced motor skills.

Out of all the explored 3D cameras, the ORBBEC Astra gave the best results, so a skeleton tracking application that displays the human body was developed with the Astra SDK and PCL (Point Cloud Library). Users are able to choose between joint tracking and human tracking in the application menu.

This application can track human joints, human body points, and human body points in color. The human body is segmented from its environment using a simple technique.
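
The write-up does not name the segmentation technique; one hypothetical example of such a simple approach is depth thresholding on the camera's depth frame (the project's actual method may differ):

# Hypothetical illustration only: segment the person by keeping depth
# pixels inside a fixed distance band and masking everything else
# (floor, walls, background). Band limits are example values.
import numpy as np

def segment_body(depth_mm, near=400, far=1500):
    # depth_mm: 2-D array of depth values in millimetres.
    mask = (depth_mm > near) & (depth_mm < far)
    return np.where(mask, depth_mm, 0), mask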

Functionality can be seen in the following video: