Project summary: Musical instrument controlled by hand movement using Creative Senz3D camera

This project was written in Java and developed with Eclipse Mars and the Intel Perceptual Computing SDK 2013 for use with the Creative Senz3D camera.

The application is made of two parts. The first is a virtual musical instrument which consists of 8 different notes and can be played alongside a simple drum background track, or without it. The second uses the Senz3D camera for both the 2D video and the depth map. In this part the Intel PerC SDK was used for processing this information and for detecting hands, their location, movement, and palm openness. The hand coordinates are then sent to Java’s Robot class, which controls the cursor location, and a closed/opened palm is used as the left mouse button being pressed/released.
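
The cursor control itself is done through Java’s Robot class; purely as an illustration of the same mapping, the sketch below uses Python and the pyautogui library, with hand_x, hand_y and palm_open as placeholders for the values the Intel PerC SDK would provide.

    # Illustration only: the project drives the cursor with Java's Robot class.
    # This Python sketch shows the same mapping using pyautogui; hand_x, hand_y
    # and palm_open are placeholders for values from the Intel PerC SDK.
    import pyautogui

    SCREEN_W, SCREEN_H = pyautogui.size()

    def update_cursor(hand_x, hand_y, palm_open, dragging):
        """Map normalized hand coordinates (0..1) to the screen and use
        palm openness as the left mouse button state."""
        pyautogui.moveTo(int(hand_x * SCREEN_W), int(hand_y * SCREEN_H))
        if not palm_open and not dragging:
            pyautogui.mouseDown()    # closed palm -> press left button
            return True
        if palm_open and dragging:
            pyautogui.mouseUp()      # open palm -> release left button
            return False
        return dragging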

Project summary: Kinect audio

The “Kinect Audio” project is designed to test the possibilities of the Kinect sensor and thereby make an air drum app. It is made in Visual Studio 2013 Community Edition, using C# for the code-behind and Windows Presentation Foundation (WPF) for the UI.

The application is based on the Kinect skeleton stream for detecting the user’s skeleton. The stream delivers 30 new frames each second; in every frame the positions of the user’s left and right hands are detected, and lines representing the user’s skeleton are drawn.

The X and Y coordinates of each hand are used to set attributes of an instance of the AudioRectangle class. This is the main class containing all of the application logic; besides the hand variables, it holds a list of rectangles that represent the areas into which the user needs to put a hand to start audio playback. Each rectangle is connected to its own audio file with a different drum sound.
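
The application itself implements this logic in C# inside the AudioRectangle class; purely as a sketch of the idea, a simplified Python version could look like the following (the DrumPad class, play_sound function and all values are hypothetical stand-ins, not the project’s code).

    # Sketch only: the real app implements this in C# (AudioRectangle class).
    # DrumPad, play_sound and all values here are hypothetical stand-ins.
    from dataclasses import dataclass

    @dataclass
    class DrumPad:
        x: float            # top-left corner of the rectangle
        y: float
        width: float
        height: float
        sound_file: str
        hand_inside: bool = False   # so a pad triggers once per hand entry

        def contains(self, px, py):
            return self.x <= px <= self.x + self.width and self.y <= py <= self.y + self.height

    def play_sound(path):
        print("playing", path)      # stand-in for actual audio playback

    def on_skeleton_frame(pads, hands):
        """Called once per skeleton frame; 'hands' is a list of (x, y) hand positions."""
        for pad in pads:
            inside = any(pad.contains(hx, hy) for hx, hy in hands)
            if inside and not pad.hand_inside:
                play_sound(pad.sound_file)   # a hand just entered this pad
            pad.hand_inside = inside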

Pressing the space key enters a mode for repositioning the rectangles. The first two rectangles the user’s hands enter become “sticky” and follow the hand positions until the space key is pressed again, which lets the user arrange the rectangles as desired. Pressing the J, K or L key starts a different background song, so the user can drum along to it.

A video of the application in use is here, and the whole code can be found here.

Project summary: 3D Modelling with DJI Phantom 2 Vision+ quadcopter

The main goal of the project was to create a 3D model of Trsat Castle using a quadcopter and readily available applications.

This was achieved using the DJI Phantom 2 Vision+ quadcopter together with the Pix4D Mapper Pro application (for PC). A dataset of around 400 images was collected by manual flight over Trsat Castle using the DJI Vision mobile application. In addition, the Pix4D Capture mobile application enabled autonomous flight and the gathering of extra geotagged images needed for construction of the 3D model.

Video:

Project participants:
Domagoj Poljančić
Bojan Filipović

Collaboration with the University of Rijeka CUDA Teaching Center

We are proud to announce our collaboration with the University of Rijeka CUDA Teaching Center, which was kind enough to offer us development hardware as well as technical support and training in the use of Nvidia CUDA technologies. Through this collaboration we now have access to a CUDA-capable embedded development kit, the Jetson TK1. Details about the development kit are in the full post.


Project summary: 3D human modeling and animation

The goals of this project are 3D human modeling and animation. For this purpose we are using the Blender 3D modeling software as a simulator, MakeHuman for character generation, and Python scripts to tie all the components together.

Using Python we are able to animate human models so that they move through a scene while avoiding objects. One such Python script allows the user to specify a starting position, an ending position, a starting frame and an ending frame. Other scripts allow the user to place the camera at a specific position and to set the camera’s focus. The project has a scene made in Blender, into which we import a human model generated with MakeHuman. The scripts set keyframes which are combined into an animation of human movement.
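
A minimal sketch of how such a movement script can set keyframes through Blender’s Python API (bpy) is shown below; the object name “Human” and the positions and frame numbers are illustrative assumptions, not the project’s actual values.

    # Sketch: keyframing a model's movement with Blender's Python API (bpy).
    # The object name "Human" and all positions/frames are illustrative assumptions.
    import bpy

    def animate_move(obj_name, start_pos, end_pos, start_frame, end_frame):
        obj = bpy.data.objects[obj_name]
        obj.location = start_pos
        obj.keyframe_insert(data_path="location", frame=start_frame)
        obj.location = end_pos
        obj.keyframe_insert(data_path="location", frame=end_frame)

    # e.g. move the imported model across the scene between frames 1 and 120
    animate_move("Human", (0.0, 0.0, 0.0), (5.0, 2.0, 0.0), 1, 120)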

Video:

Files can be downloaded at this location.

Project participants:
– Ana Vranković

Project summary: Devices and methods of navigation for mobile robots

This project focuses on implementing path planning and obstacle avoidance on a mobile robot platform, the Pioneer 3-AT “Frio”, available in APASLab. The versatility and modifiability of the platform make it suitable for a vast number of functions (which might be implemented later in the project), but for now the main goal was to allow the platform to navigate complex spaces without collisions. Once realized, this will be the foundation for later development. It is also worth mentioning that this project was started as my undergraduate final project in Computer Science.

There were multiple possible ways to achieve this goal, but ultimately the chosen approach was to use ROS (Robot Operating System), more precisely its “navigation” stack (a stack is a collection of codependent packages and tools), which offers tools and packages that can be parametrized accordingly and used in the system. That said, the “navigation” stack by itself is not enough for the system to work. Another stack that was used is the “frio-ros-stack”, developed at the Faculty of Engineering in Rijeka, which enables communication between the ROS system and the robot platform, as well as the sensor used. These were the software foundations of the project.
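
As an illustration of how the “navigation” stack is typically commanded once it is configured, a goal can be sent to its standard move_base node through rospy and actionlib. The sketch below is not part of the project code or of the “frio-ros-stack”; the frame name and coordinates are assumed values.

    #!/usr/bin/env python
    # Sketch: sending a navigation goal to the standard move_base node of the
    # ROS "navigation" stack. The frame and coordinates below are assumed values.
    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    def send_goal(x, y):
        client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
        client.wait_for_server()
        goal = MoveBaseGoal()
        goal.target_pose.header.frame_id = 'map'
        goal.target_pose.header.stamp = rospy.Time.now()
        goal.target_pose.pose.position.x = x
        goal.target_pose.pose.position.y = y
        goal.target_pose.pose.orientation.w = 1.0  # identity orientation (facing along the map's x axis)
        client.send_goal(goal)
        client.wait_for_result()
        return client.get_state()

    if __name__ == '__main__':
        rospy.init_node('send_nav_goal')
        send_goal(1.0, 0.5)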

The hardware used is as follows:

  • the P3-AT robot platform mentioned in the first paragraph, physically modified to accommodate the following components;
  • a Microsoft Kinect sensor, used as the only way for the system to collect outside data, i.e. to scan the surroundings. The data the sensor transmits into the system is processed by one of the “frio-ros-stack” capabilities, which extracts a 1D laser scan from the 3D point cloud the Kinect produces (a minimal sketch of reading such a scan follows this list);
  • a personal laptop with Ubuntu 14.04 LTS and a full “fuerte” ROS distribution installed, with the addition of the “RosAria” package used for communication with the robot platform.
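
As mentioned in the Kinect item above, the depth data ends up on a standard 1D laser scan topic. A minimal rospy sketch of reading such a scan is below; the topic name “/scan” is an assumption, not necessarily the name used by the “frio-ros-stack”.

    # Sketch: reading the 1D laser scan extracted from the Kinect depth data.
    # The topic name '/scan' is an assumption.
    import rospy
    from sensor_msgs.msg import LaserScan

    def on_scan(msg):
        # Report the closest valid range reading in this scan.
        valid = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
        if valid:
            rospy.loginfo("closest obstacle: %.2f m", min(valid))

    if __name__ == '__main__':
        rospy.init_node('scan_listener')
        rospy.Subscriber('/scan', LaserScan, on_scan)
        rospy.spin()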

Other hardware options could further optimize the system, including but not limited to: adding more Kinect sensors (one more is currently available), which would expand the field of view used to map the environment and localize the robot, or alternatively obtaining an actual laser sensor, of which none are currently available; and implementing the system on multiple machines, preferably using one or more Raspberry Pi devices as on-board controllers or data collectors and a remote machine for the more computationally intensive tasks.

The setup realized in this project was tested by successfully mapping the APASLab laboratory without collisions with detectable obstacles, but I believe more extensive testing and some adaptations are required before it can be called a complete success.

Video of a test:

Instructions and setup download (temporary).

Project participants:
– Filip Jurada

“Mali Roboti” looking for members

“Mali Roboti” is a volunteer group that aims to educate younger generations in mobile robotics and to promote engineering disciplines such as electronics, programming and mechanics through fun and play.

Right now they are looking for new student members to participate in their activities, so if you want to play and teach, this could be a good place to spend your time.

Contact:
Sanjin Gotić: sanjingotic (at) gmail.com
Adriana Beović: adribeo (at) net.hr