Earlier this evening I deployed a new update to enable Multitouch and Gestures in Flowlab’s behaviors.
New Features
Gesture Block
This new trigger block adds three types of gestures to use as game inputs:
Drag - track multiple touch points
Rotate - track rotating fingers
Pinch - track the distance between fingers, e.g. for pinch-to-zoom
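As a rough mental model (this is an illustrative sketch, not Flowlab's actual implementation), the pinch output can be thought of as the ratio of the current finger distance to the distance when the gesture began, expressed as a percentage starting at 100:

```javascript
// Distance between two touch points {x, y}.
function touchDistance(a, b) {
  return Math.hypot(b.x - a.x, b.y - a.y);
}

// Pinch percentage relative to the distance at gesture start:
// 100 means unchanged, 200 means the fingers are twice as far apart.
function pinchPercent(startA, startB, currentA, currentB) {
  return 100 * touchDistance(currentA, currentB) / touchDistance(startA, startB);
}
```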
Touch Check
This new behavior checks whether your game is running on a touch-screen-enabled device, so you can more easily show or hide specific UI elements on those devices.
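In a browser context, touch support is commonly detected with checks like the following. This is a sketch of the general technique, not Flowlab's internals; the `nav` and `win` parameters stand in for the browser globals `navigator` and `window` so the function can be exercised outside a browser:

```javascript
// Returns true if the environment reports touch capability.
function hasTouchScreen(nav, win) {
  return (nav.maxTouchPoints || 0) > 0 || 'ontouchstart' in win;
}

// In a real page you would call: hasTouchScreen(navigator, window)
```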
Fixes
Ease blocks now complete normally after game pause/resume
Fixed Timer/pause regression
Fixed Timer/pause problems in emitted objects
Fixed occasional “Loading” error when editing level info
Let me know in the thread if you have feedback, problems, etc.
Thanks!
Note that touch only works on mobile browsers and will not be detected on PC at the moment, but you can still test it by using mobile-browser emulation in your browser’s developer tools.
On my first try, everything seems to work mostly fine (the multitouch, for instance); however, I noticed a few potential issues already:
- the game coordinates do not output correctly when the camera is zoomed (same with MouseMove, I think)
- pinching still triggers one of the touches as a drag (or both, I can’t tell), so using it for, say, camera zooming may cause issues if drag is tied to something else
- the pinch % resets to start at 100 each time, which is nice except that it then requires extra code to tie it to camera zoom (so that it starts from the current zoom instead of 100)
- the background repeat still doesn’t work correctly with zoom (not specific to this update, but I figured it’s still relevant since pinch-to-zoom has been added)
Yes, it behaves the same as MouseMove - they should probably both be adapted to compensate for camera zoom, so that you don’t need to do that manually.
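Compensating manually usually amounts to scaling the screen-space input by the zoom factor before adding the camera offset. A minimal sketch of that conversion (the camera/zoom representation here is an assumption for illustration, not Flowlab's actual coordinate model):

```javascript
// Convert a screen-space point to game coordinates, assuming the camera
// position is the game-space location of the screen origin and zoom is a
// percentage (100 = no zoom, 200 = zoomed in 2x).
function screenToGame(screen, camera, zoomPercent) {
  const scale = zoomPercent / 100;
  return {
    x: camera.x + screen.x / scale,
    y: camera.y + screen.y / scale,
  };
}
```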
This is an intentional part of the design - using one gesture trigger does not negate or disable other blocks. The current behavior enables you to, for example, drag with one finger while also pinching or zooming with another.
This is also by design (rotate works the same way). The behaviors do not try to keep a history of different gestures and accumulate the outputs over time. It does mean that the zoom or rotation must be stored manually when the gesture completes, as you mentioned.
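Since each pinch starts back at 100%, tying it to camera zoom means keeping your own running value and folding each completed gesture into it. A sketch of that bookkeeping in plain code (variable and handler names are illustrative, not Flowlab blocks):

```javascript
// Running camera zoom, as a percentage (100 = default).
let cameraZoom = 100;
// Zoom value captured when the current pinch began.
let zoomAtGestureStart = cameraZoom;

// While a pinch is active, scale the captured baseline by the
// gesture's output (which always starts at 100).
function onPinch(pinchPercent) {
  cameraZoom = zoomAtGestureStart * (pinchPercent / 100);
}

// When the gesture ends, commit the result as the new baseline
// so the next pinch continues from the current zoom.
function onPinchEnd() {
  zoomAtGestureStart = cameraZoom;
}
```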