Bored again: Looking into Gestures
July 24th, 2013

So I was bored, and this time, after playing TRAUMA (a great game, you should get it, but be aware that there is no “real” Linux client, “just Flash”), I felt like looking into gestures & Qt.
Qt Assistant instantly pointed me to QGestureRecognizer and I was like “Yay! This is gonna be so damn easy!”. But I quickly realized that’s not the case. You have to override QGestureRecognizer::recognize and then do all the detection by yourself, so then I suddenly was like “hm.. not great, but I can do it”.
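For reference, this is the hook Qt gives you: a bare recognizer subclass (a minimal sketch; the class name and the m_matched bookkeeping are mine) where recognize() receives every raw event and all the actual detection is left to you.

```cpp
#include <QEvent>
#include <QGestureRecognizer>

class TurnGestureRecognizer : public QGestureRecognizer
{
public:
    Result recognize(QGesture *gesture, QObject *watched, QEvent *event) override
    {
        Q_UNUSED(gesture)
        Q_UNUSED(watched)
        switch (event->type()) {
        case QEvent::MouseButtonPress:
            m_matched = false;                 // a fresh attempt starts here
            return MayBeGesture;
        case QEvent::MouseMove:
            // feed the position into your own detection logic and set
            // m_matched once the shape has been recognized
            return MayBeGesture;
        case QEvent::MouseButtonRelease:
            return m_matched ? FinishGesture : CancelGesture;
        default:
            return Ignore;
        }
    }

private:
    bool m_matched = false;
};
```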
For the following attempt(s), I am talking about a mouse “turn” gesture.
Try 1: breaking things down
A beautiful turn, doable with your mouse and your finger, but difficult to describe in a plain x/y world, as you don’t know the end (and start) yet. But then it suddenly hit me: I could break it down into very basic directions. (To keep it easy, let us put the 0,0 point at the bottom-left corner, so y grows upwards, unlike the usual SVG/Qt convention.) The turn gesture, for example, can be split into 2 parts. The first part only goes up: both x and y only increase. The second part goes down: x still increases, but y decreases.
Depending on how precisely you want to describe it, you can also break it down into several smaller moves. This “breaking down” is really possible with all gestures I can imagine, even with the “lift” gesture from TRAUMA.
This is pretty easy to implement; for the turn you only need 4 states.
- Start: Set when the mouse is pressed. Only x >= 0 && y >= 0 moves are allowed; after the first move we set the state to RightUp.
- RightUp: x >= 0 && y >= 0 moves are allowed, but after getting the first x >= 0 && y <= 0 move we set the state to RightDown.
- RightDown: Only x >= 0 && y <= 0 moves are allowed, but after getting the first move we set the state to Done.
- Done: Only x >= 0 && y <= 0 moves are allowed, but whenever the mouse is released we can trigger the event.
NOTE: Qt sometimes sends 0,1 or 1,0 move events when moving right-up, so >= 0 is needed.
The 2 extra states are needed to make sure we actually moved in each direction (at least 1 px), so it gets a little bit more complex, but it is still very easy to implement. You can improve the checks a lot, but this shows the basic concept; a minimal sketch of the state machine follows below.
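Here is how such a state machine could look (a sketch under my own naming; the strict dy < 0 check when leaving RightUp is my way of handling the 0,1/1,0 ambiguity mentioned in the note):

```cpp
// Per-move deltas dx/dy use the convention from above: y grows upwards.
class TurnDetector
{
    enum State { Start, RightUp, RightDown, Done, Failed };
    State m_state = Start;

public:
    void reset() { m_state = Start; }              // call on mouse press

    void move(int dx, int dy)                      // call on every mouse move
    {
        const bool up   = dx >= 0 && dy >= 0;      // >= 0 because of the
        const bool down = dx >= 0 && dy <= 0;      // 0,1 / 1,0 events
        switch (m_state) {
        case Start:                                // first move must go right-up
            m_state = up ? RightUp : Failed;
            break;
        case RightUp:                              // stay until we turn downwards
            if (down && dy < 0)
                m_state = RightDown;
            else if (!up)
                m_state = Failed;
            break;
        case RightDown:                            // one confirmed down move
            m_state = down ? Done : Failed;
            break;
        case Done:                                 // any wrong move cancels again
            if (!down)
                m_state = Failed;
            break;
        case Failed:
            break;
        }
    }

    bool matched() const { return m_state == Done; }  // check on mouse release
};
```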
And the best thing about it is… it works! If you move in a good half-circle, or in one right-up && right-down line, it triggers the turn event. This is really quick to implement and needs very few resources.
But now to the disadvantages… with the mouse I have huge trouble making a pure right-up & right-down move. It is possible, but you really need to concentrate, and that is not something you can do when you can only barely reach the mouse. Especially when you think of the gesture as a half-circle. At least my half-circles look a bit different.
That’s how my half-circles look when I draw them quickly. The first one maybe passes, but the other two fail right at the start, as the move goes in the wrong direction. This method is very strict, too strict for complex gestures.
Try 2: The universal path
As the first attempt was not very good for complex gestures… I had to think of something else. Describing the gestures by move directions was very easy, but not very good, so this problem had to be solved first.
After browsing Qt Assistant for a while I found QPainterPath. Does not sound helpful, huh? ;-)
But be not fooled! QPainterPath has some amazing functions, for example arcTo: now you can finally describe a real circle, and with only one line. The only problem is that QPainterPath works with absolute pixel positions, so this would fix the height & width of the gesture. Not good. But QPainterPath does not enforce any limits, so luckily I can make it relative by myself: describing a gesture is only allowed within 0,0 to 1,1, and qreal makes that possible.
```cpp
QPainterPath path;
path.moveTo(0, 1);
path.arcTo(0.0, 0.0, 1.0, 2.0, 180, -180);
```
You might wonder why I use 2.0 as the height of the bounding box. Keep in mind that the bounding box describes a full circle; I only want the upper half, and it should fill the full 0,0 to 1,1 range.
So far so good, now I can describe it. But it only describes a very thin line, and no one is able to draw a circle that precisely. That gets us to the matching part. There must be some kind of allowed variance from the line. It is easy to measure the distance from a straight line (QPainterPath::lineTo), but complicated for arcTo or curveTo. And it should be relative to the size of the gesture overall; this allows a greater variance for big gestures and a smaller one for small gestures. Let’s call this variance the “threshold”; it is definable for every new gesture.
Now it gets a bit problematic performance-wise: we have to record ALL mouse moves (while the button is pressed), as this is the only way we can actually get the width & height of the gesture AND scale the QPainterPath to the gesture to compare it. Scaling the QPainterPath itself is a huge PITA and still does not tell us whether the points are in range. But let’s not forget that we can paint with the QPainterPath: using QPainter::scale + QBitmap + QRegion::contains makes this very easy. At this point we also know the size of the gesture, and with the threshold value we can set the pen size.
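A minimal sketch of that idea (the function name and parameters are my own, and it assumes the recorded points were translated so their bounding box starts at 0,0):

```cpp
#include <QBitmap>
#include <QPainter>
#include <QPainterPath>
#include <QPen>
#include <QPoint>
#include <QRegion>
#include <QVector>

static bool matchesPath(const QPainterPath &unitPath,  // described in 0,0 to 1,1
                        const QVector<QPoint> &moves,  // recorded mouse moves
                        int width, int height,         // bounding box of the moves
                        qreal threshold)               // relative tolerance, e.g. 0.2
{
    QBitmap bitmap(width, height);
    bitmap.clear();                        // all pixels are Qt::color0

    QPainter painter(&bitmap);
    painter.scale(width, height);          // unit path -> gesture size
    // the pen width is given in unit coordinates, so the painted tolerance
    // band scales with the gesture, exactly the relative variance from above
    painter.setPen(QPen(Qt::color1, threshold));
    painter.drawPath(unitPath);
    painter.end();

    // every Qt::color1 pixel becomes part of the region; a single recorded
    // point outside of it means the gesture does not match
    const QRegion region(bitmap);
    for (int i = 0; i < moves.size(); ++i)
        if (!region.contains(moves.at(i)))
            return false;
    return true;
}
```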
Let’s see what we have so far…
The dynamic threshold and the line fit perfectly, hurray!
Here have some more quick drawings.
Also fits, how nice :)
While it sounds like we are done at this point… we are not. We have a huge problem now: we can no longer tell the direction of the gesture. As we only compare points, a simple move to the right is the same for us as a simple move to the left.
The QPainterPath has a direction itself, and from the recorded mouse moves we can also tell which was the first and which was the last move, so it is not hopeless at all! For a simple line it is enough to check whether the first point is close to the first QPainterPath point, and likewise for the last. For complex gestures this will not work, as we might be too strict or too relaxed.
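For the simple-line case, such a check could look like this (a sketch; names are mine, and the path endpoints are assumed to be already scaled to the gesture’s size):

```cpp
#include <QLineF>
#include <QPoint>
#include <QPointF>
#include <QVector>

static bool directionMatches(const QVector<QPoint> &moves,
                             const QPointF &pathStart,
                             const QPointF &pathEnd,
                             qreal maxDistance)
{
    if (moves.isEmpty())
        return false;
    // first recorded move near the path's start, last move near its end;
    // a gesture drawn in the opposite direction fails both checks
    return QLineF(pathStart, moves.first()).length() <= maxDistance
        && QLineF(pathEnd, moves.last()).length() <= maxDistance;
}
```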
For example, imagine we have two gestures: a line from right to left, and a y-mirrored “z”. You might think they are completely different, but if you draw the y-mirrored “z” with a very small height, it almost becomes a thick line and has all the characteristics of a line from right to left. The first QPainterPath point matches the first mouse-move point, and so does the last.
But the QPainterPath tells us even more: it tells us the count of the painter actions, aka points. NOTE: arcTo generates more than just 1 point; depending on the size it gets like 4 to 10 (or more). So the first check is to see if I can find a mouse-move point for every QPainterPath point, respecting the order. But as the mouse movement can be very “unsteady”, this is not an easy task, so instantly cancelling the detection once we move further away from a point is not an option. And if we simply check every point against every move, we lose the direction again.
After some tests the best solution seemed to be splitting the QPainterPath point list into 2 parts: for the first half we only search for the closest mouse-move point within at most the first 2/3 of the mouse-move list, without ever going back. The second half is matched from behind, but only against the last 2/3 of the mouse-move list. This gives good results even in very narrow situations. I chose 2/3 over 1/2 because the mouse is moved at different speeds at different points in the gesture.
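In code, the matching could be sketched like this (names and the exact distance bound are my own; the returned list is the per-path-point result that a pointCheck callback, shown below, could consume):

```cpp
#include <QLineF>
#include <QPoint>
#include <QPointF>
#include <QVector>

// Returns one matched mouse-move point per path point, or an empty
// list when some path point has no move point within maxDistance.
static QVector<QPoint> matchPoints(const QVector<QPointF> &pathPoints, // scaled to gesture size
                                   const QVector<QPoint> &moves,
                                   qreal maxDistance)
{
    if (pathPoints.isEmpty() || moves.isEmpty())
        return QVector<QPoint>();

    QVector<QPoint> found(pathPoints.size());
    const int half  = pathPoints.size() / 2;
    const int limit = moves.size() * 2 / 3;

    int cursor = 0;                                   // forward pass: first half,
    for (int i = 0; i < half; ++i) {                  // first 2/3 of the moves
        int best = -1;
        qreal bestDist = maxDistance;
        for (int j = cursor; j < limit; ++j) {
            const qreal d = QLineF(pathPoints.at(i), moves.at(j)).length();
            if (d < bestDist) { bestDist = d; best = j; }
        }
        if (best < 0)
            return QVector<QPoint>();
        found[i] = moves.at(best);
        cursor = best;                                // never go back
    }

    cursor = moves.size() - 1;                        // backward pass: second half,
    for (int i = pathPoints.size() - 1; i >= half; --i) { // last 2/3 of the moves
        int best = -1;
        qreal bestDist = maxDistance;
        for (int j = cursor; j >= moves.size() - limit; --j) {
            const qreal d = QLineF(pathPoints.at(i), moves.at(j)).length();
            if (d < bestDist) { bestDist = d; best = j; }
        }
        if (best < 0)
            return QVector<QPoint>();
        found[i] = moves.at(best);
        cursor = best;
    }
    return found;
}
```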
And to be mega sure, let’s pass the found points back “to the user”, so they can check the important logic of the gesture themselves:
```cpp
bool BCutGestureRecognizer::pointCheck(const QVector<QPoint> &list)
{
    // four found points expected; assuming Qt's y-down screen coordinates:
    // both horizontal strokes must run right-to-left, and the first turn
    // point must lie above the end point
    bool ok = list.size() == 4
           && list.at(0).x() > list.at(1).x()
           && list.at(1).y() < list.at(3).y()
           && list.at(2).x() > list.at(3).x();
    return ok;
}
```
But as this point and direction finding turned out to cause a bit more load than I wanted… I added minWidth and minHeight to the gesture, for example minHeight 20px for the y-mirrored “z”.
So let’s sum up: what do I need to get a full new complex gesture working?
- The QPainterPath aka the structure of the gesture.
- The Threshold, very important!
- OPTIONAL: minWidth
- OPTIONAL: minHeight
- OPTIONAL: pointCheck to check the logic of the found points
Sounds doable? :)
Wouldn’t it be nice if something like this (maybe more advanced) were in Qt itself? Or is there maybe an easier way to detect complex gestures?
My implementation works pretty well, even if you have MANY similar complex gestures, so I am a bit proud. But currently (without cleanup) it is super ugly and maybe way too heavy for low-power devices. Anyway, in the worst case it just filled some boredom. (And by the way, this is my very first time dealing with gestures, I may have overlooked something.)
Another boredom chapter closed.