Playing GTA V on a PS3 with Leap Motion

I have a PlayStation 3 and I love working with new types of user input, so, as my last hack of the year, I wanted to use the Leap Motion Controller to play a game on the PS3.
The Leap Motion Controller is obviously not compatible with the PS3, so the plan was to use a regular computer, interpret the gestures from the Leap Motion, and send the corresponding controls to the console.

For the game, I chose GTA V because it involves many different actions such as running, jumping, driving or shooting… and it’s awesome!

Here is the video of yours truly using this script to do some disastrous driving but having a lot of fun with the Leap Motion and GTA V:

The reason the big video is of such low quality while the small one is fine is that they were recorded with my Nexus 5 and my Canon S95, respectively, and my living room was very dark.

How it works

As seen in the video, it is also possible to control the PS3 menu and choose the game from there. The player’s actions I chose to implement were walking, running, jumping, driving and entering/leaving a vehicle. All of those were easy to implement except for the driving. The thing is that I can easily get the angle of the imaginary steering wheel the user holds over the Leap Motion device, but I could only simulate pushing the left analog stick fully to the left or to the right. This makes it kind of difficult to steer a car, as can be seen in the video, but it’s still fun to do.
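For illustration, here is a minimal sketch of how the steering angle could be read and collapsed into the three states the script can actually express. The palm-roll reading matches the official Leap Python bindings, but the threshold value and the state names are my own assumptions:

```python
import Leap

STEER_THRESHOLD = 0.35  # radians; hypothetical value, tune to taste

def steering_state(frame):
    """Collapse the roll of the 'steering wheel' hand into one of three
    discrete states, since only a fully pushed stick can be simulated."""
    if frame.hands.is_empty:
        return "straight"
    # Roll of the palm normal: tilting the imaginary wheel changes it
    roll = frame.hands[0].palm_normal.roll
    if roll > STEER_THRESHOLD:
        return "left"
    if roll < -STEER_THRESHOLD:
        return "right"
    return "straight"
```

Each state would then be translated into holding or releasing the key that the GIMX configuration maps to the left analog stick.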

For the communication with the PS3, the script uses the GIMX project, which makes it possible to simulate a SixAxis gamepad from a computer and send its actions over Bluetooth to the PS3. GIMX ships some nice utilities, the main one being emuclient, which detects key events and uses a configuration file to map them to SixAxis actions. It would be much more elegant to send the commands to the PS3 directly from my script, but it was simply faster to simulate the key events instead and let GIMX do the rest with the right configuration file.
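Faking those key events can be done with python-xlib’s XTEST extension. Here is a minimal sketch; the choice of keys is hypothetical, since the real mapping to SixAxis actions lives in the GIMX configuration file:

```python
from Xlib import X, XK, display
from Xlib.ext import xtest

_display = display.Display()

def fake_key(keysym_name, press):
    """Fake a key press or release via XTEST, for emuclient to pick up."""
    keycode = _display.keysym_to_keycode(XK.string_to_keysym(keysym_name))
    event = X.KeyPress if press else X.KeyRelease
    xtest.fake_input(_display, event, keycode)
    _display.sync()

# e.g. hold whatever key the GIMX config maps to "stick fully left"
fake_key("a", press=True)
fake_key("a", press=False)
```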

As with the Leap GNOME Controller, this is a small script rather than a big project. To learn how to use it, please refer to the README file that ships with it. Hopefully someone will want to try it out and improve the current gestures or create new ones.

Get the source at GitHub and have a great 2014!

Controlling GNOME with Leap

When I explained how the Leap Motion device could be used on Fedora 19, I mentioned how I had one of those early prototypes. Well, Leap Motion was extremely kind and sent me an actual device as a thank you for starting the thread asking for Linux support. Now that GUADEC is over and I am spending my vacation in Portugal, I had a little time to play with my fancy new device and wrote a relatively small script to control GNOME with it. I gave it the über-original name of: Leap GNOME Controller!

For those who don’t care about technical details, here’s the video showing what can be done with Leap, GNOME and this script. Technical details follow below the video:

The two videos that compose the one above were recorded with an HD camera and GNOME Shell’s screencast recorder. I tried to sync them as best I could, but a slight delay can be noticed, especially at the end of the video.

The code

Leap Motion provides a closed-source shared library and a high-level API, with the respective documentation, for the many bindings it has. To code it quickly, I used the Python bindings and Xlib to fake the input events.
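As a rough illustration of that combination, here is how a tracked fingertip can be turned into absolute pointer motion. The Leap calls are from the official Python bindings, while the screen size constant and the overall wiring are assumptions of mine, not necessarily the script’s actual code:

```python
import sys

import Leap
from Xlib import X, display
from Xlib.ext import xtest

SCREEN_W, SCREEN_H = 1920, 1080  # hypothetical screen resolution
_display = display.Display()

class PointerListener(Leap.Listener):
    def on_frame(self, controller):
        frame = controller.frame()
        if frame.fingers.is_empty:
            return
        # Normalize the fingertip position to [0, 1] on each axis
        point = frame.interaction_box.normalize_point(
            frame.fingers[0].tip_position)
        x = int(point.x * SCREEN_W)
        y = int((1.0 - point.y) * SCREEN_H)  # Leap's y axis points up
        # Fake an absolute pointer motion event via XTEST
        xtest.fake_input(_display, X.MotionNotify, x=x, y=y)
        _display.sync()

controller = Leap.Controller()
controller.add_listener(PointerListener())
sys.stdin.readline()  # keep the script alive while the listener runs
```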

Leap Motion’s API makes it really easy to simulate a touch-screen. It even offers a “screen tap” gesture that should be the obvious choice when mapping a finger touch/tap to a mouse click. However, this didn’t work very well. The problem is that if we are tracking one finger to control the mouse movement, then performing the “screen tap” gesture will of course move the finger (and the pointer with it), making it as frustrating as seen in Ars Technica’s hands-on video.

I came up with a solution for this by dropping the “screen tap” gesture and using the “key tap” instead. The “key tap” is a simple, quick down-and-up finger movement, like pressing a key. This is much more precise and easier for a user to perform than the “screen tap”. Of course, the finger still moves when performing the gesture, so the mouse pointer would move as well. I worked around this with a little trick: when the mouse pointer doesn’t move more than a couple of pixels for half a second, it locks in place and only moves again if the user moves it by more than 150 pixels. This lets the user stop the pointer precisely where it needs to be and perform the gesture without making the pointer move.
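Reconstructed as a small state machine (the names are hypothetical, the thresholds are the ones described above), the trick looks roughly like this:

```python
import math
import time

LOCK_AFTER_SECS = 0.5  # stillness period before pinning the pointer
JITTER_PIXELS = 2      # movement below this counts as "not moving"
UNLOCK_PIXELS = 150    # deliberate movement needed to release the pin

class PointerStabilizer:
    """Pin the pointer once it settles, so the "key tap" gesture
    cannot drag it off its target."""

    def __init__(self):
        self.locked = False
        self.last_pos = None
        self.still_since = None

    def filter(self, x, y):
        """Take the raw pointer position, return the one to apply."""
        now = time.time()
        if self.last_pos is None:
            self.last_pos, self.still_since = (x, y), now
            return self.last_pos
        dist = math.hypot(x - self.last_pos[0], y - self.last_pos[1])
        if self.locked:
            if dist > UNLOCK_PIXELS:
                # Deliberate movement: release the pin and follow again
                self.locked = False
                self.last_pos, self.still_since = (x, y), now
            return self.last_pos
        if dist < JITTER_PIXELS:
            # Nearly still: pin the pointer after half a second
            if now - self.still_since >= LOCK_AFTER_SECS:
                self.locked = True
        else:
            self.last_pos, self.still_since = (x, y), now
        return self.last_pos
```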

Future

The Leap device offers a lot of possibilities for adding many gestures. Ideally they should be implemented per application, but being able to control the shell is already pretty useful, so it would be wonderful to fine-tune the current gestures and add new ones. I also wish the library’s source code were open: I ran into small issues and would have liked to look at the code instead of trying to fix them based on theories about what might be wrong.

I haven’t explored the Airspace app store yet, so I don’t know if it is worth adding (or even possible to add) this script there, but I will check it out.

Have fun with Leap and GNOME!

Skeltrack 0.1.14 released

The first release of Skeltrack in 2013 is out!

This is also the first version I release without being associated with Igalia, but the company agreed that I keep maintaining the project. That’s one of the good aspects of doing Free Software as your daily job: you can continue working on a project even after leaving the company where it was first developed.

This new version is not packed with new features, so “why such a wait?” you might ask. Well, the reason is that there have been a few changes in my life since last year and, more importantly, after I left Igalia I didn’t have a Kinect, which made working on Skeltrack a bit difficult… But a few weeks ago I finally bought one, so now I am able to keep hacking.

Faster Tracking

The greatest thing in this release is a big improvement in the performance of Skeltrack when tracking joints. You see, when I originally developed it, I was more concerned with getting it to actually track the skeleton joints than with doing it quickly 🙂
Luckily, Iago López (kudos to him) did some neat work fixing misuses of GList and, more importantly, re-implementing Dijkstra’s algorithm using a priority queue. If you remember your algorithms courses well, this change takes Dijkstra from quadratic to linearithmic time, and that’s a BIG improvement.
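Skeltrack itself is written in C, but the technique is the textbook one and easy to sketch in Python with a binary heap (this is an illustration, not Iago’s actual patch):

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra with a binary heap: O((V + E) log V) instead of the
    O(V^2) of scanning every node for the minimum at each step.
    `graph` maps each node to a list of (neighbor, weight) pairs."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue  # stale heap entry: a shorter path was already found
        for neighbor, weight in graph[node]:
            new_dist = d + weight
            if new_dist < dist[neighbor]:
                dist[neighbor] = new_dist
                heapq.heappush(heap, (new_dist, neighbor))
    return dist
```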
Here is the difference between the old code and the new expressed in a plot:

Plot comparing Skeltrack's versions

The values used to generate the plot are the time it took skeltrack_skeleton_track_joints_sync to execute. For each version, this function was called 25 times on each of the 550 depth frames, and the final values in the plot are the arithmetic average of the results per frame. The tests were run on my 2.40 GHz i5 laptop with no applications or desktop running (besides system services).
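In Python-like terms (the real benchmark measures the C function, so the names here are only illustrative), the methodology boils down to:

```python
import time

def average_times(track_joints, frames, runs=25):
    """Call the tracking function `runs` times per depth frame and
    return the arithmetic mean of the durations for each frame."""
    means = []
    for frame in frames:
        total = 0.0
        for _ in range(runs):
            start = time.time()
            track_joints(frame)  # stands in for the real tracking call
            total += time.time() - start
        means.append(total / runs)
    return means
```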
As you can see, there’s a great difference between the speed of the old code and the new. I can’t wait to try Skeltrack on weaker hardware now 😀

(BTW, why 0.1.14 and not 0.1.12? Because of the “release curse” which made me aware of a simple but important bug fix right after pushing 0.1.12…)

Other Things

Besides some small bug fixes in the library, the Skeltrack Kinect example was ported to Clutter 1.12 to keep up with the work done upstream.

Another new thing is that Skeltrack now has a webpage. I recently found out about GitHub’s automatically generated web pages and thought this would be better than having people visit the GitHub repo page, so there you go: http://joaquimrocha.github.io/Skeltrack/
The page is a bit ugly and minimalist, but it will be improved.

The documentation can now be found here and bugs can be filed in GitHub as usual.