
Re: [kde] enabling pinch and zoom feature on KDE tablet


 Thank you for this: I am sorry about the HTML posting. I don't post in anything but plain text, but maybe Yahoo! Mail posts in some other format. I will try to see if I can change this, but if not, I guess I will simply change my e-mail address.

I will go ahead and wait a couple of weeks before trying. This is because Linux needs the 4.2 kernel and that is not yet available on KaOS. That will become available in the first week of December. By that time, KDE 5.5 may also be available, though I am not sure about this.

But, is there a resource I could read up on while waiting, covering the sort of changes that you mention? I would love to have your script files as a resource. They cannot hurt anyway.

Many thanks again, and sorry again for the HTML posting: I hope it does not happen this time either.
Best wishes,

From: Duncan <1i5t5.duncan@xxxxxxx>
To: kde@xxxxxxxxxxxxxxx 
Sent: Sunday, November 22, 2015 2:28 AM
Subject: Re: [kde] enabling pinch and zoom feature on KDE tablet

Globe Trotter posted on Sat, 21 Nov 2015 13:38:19 +0000 as excerpted:

> I got a MS Surface Pro 3 and would like to install KDE on it. I am not
> very familiar with touch devices (this is my first) or KDE, but I
> thought I would try first with KaOS which is a KDE-based distribution
> and is running the latest stable version. I was unclear about what
> special things I should do to get pinch-and-zoom going. Also, does the
> rotate work automatically, or is something additional needed?
> I am sorry to ask this novice question, but my readings on the web have
> left me very confused.

First, note for next time that you posted the message in both plain text 
and HTML formats.  The latter is frowned upon, as it tends to irritate 
grumpy traditionalists like me who consider HTML messages a security risk 
and an indication of spam, and thus run clients that don't parse the 
HTML, which, depending on how the message is displayed, can make it very 
ugly due to the raw HTML formatting commands.  In many cases these grumpy 
traditionalists are the regulars who have the answer you're looking for, 
so offending them may mean missing out on your answer; it's better all 
around to post in plain text only and avoid the entire problem. 

Back on thread topic...

To some degree touch configurability probably depends on the kde 
frameworks and applications versions.  I'm still on kde4 here, tho I have 
most of a kde5/frameworks/plasma base installed for testing, lacking only 
the bits that can't be installed without breaking the kde4 I've long had 
customized to my liking, which I'll continue to run until I get time to 
test and set up kde5/plasma properly.  So I'm mostly talking about kde4 
here; kde5/plasma will probably be somewhat better in this regard, I'm 
just not yet sure how much...

About a year ago I got a proper touchpad/trackpad here.  At the time, the 
mainline kernel didn't even properly support it yet, but it happened to 
get support shortly after I bought it, and I've been happily using it 
since, including pinch/rotate and all the fancy 2/3/4-finger swipe 
motions. =:^)

*BUT*, at least with kde4, direct kde (or really anything else) 
application support isn't really built-in, or doesn't appear to work 
properly if it is.  (Firefox, for instance, tho gtk-based not kde/qt-
based, supposedly has pinch/rotate/etc support, and I could see and 
adjust the settings for it in the configuration mania extension settings, 
as I have it installed, but I couldn't get it to detect the gestures and 
thus couldn't get it to work... directly.)

But, while honestly the hoops I had to jump thru to get it to work are 
going to be beyond the capacity of many users, for the sufficiently 
technically literate and determined, it's quite possible to get it to 
work, as I eventually did here.

What I did first is install the xf86-input-mtrack xorg input driver.  The 
default evdev input driver has extremely limited support for gestures, 
etc, and the synaptics driver normally used for touchpads has better 
support, including two-finger-scrolling and the like, and may in fact 
work with advanced gestures like pinch/rotate if the rest of the 
application stack has the support, but at least here, the advanced 
gestures still wouldn't work, only the ones (like scrolling) that xorg 
has standardized (buttons 4/5 to vertical-scrolling, 6/7 to horizontal-
scrolling).

What xf86-input-mtrack actually does is expose the advanced pinch/rotate/
N-finger-swipe/etc gestures as higher button events, beyond the seven 
buttons xorg has standardized as left/middle/right, scroll-up/down/left/
right.  Its documentation comes as a readme file, installed (at least on 
gentoo) as /usr/share/doc/<package-name>/README.md.
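As a concrete illustration (a sketch of my own, not the exact configuration 
in use here), an xorg.conf.d InputClass section along these lines maps the 
gestures to the button numbers used in the rest of this message.  The option 
names follow the mtrack README, but versions differ, so check the README.md 
installed with your package:

```
Section "InputClass"
    Identifier      "Touchpad"
    MatchIsTouchpad "on"
    Driver          "mtrack"
    # Three-finger swipes up/down/left/right as buttons 8-11.
    Option "SwipeUpButton"     "8"
    Option "SwipeDownButton"   "9"
    Option "SwipeLeftButton"   "10"
    Option "SwipeRightButton"  "11"
    # Four-finger swipes as buttons 12-15.
    Option "Swipe4UpButton"    "12"
    Option "Swipe4DownButton"  "13"
    Option "Swipe4LeftButton"  "14"
    Option "Swipe4RightButton" "15"
    # Pinch (scale) and rotate gestures as buttons 16-19.
    Option "ScaleUpButton"     "16"
    Option "ScaleDownButton"   "17"
    Option "RotateLeftButton"  "18"
    Option "RotateRightButton" "19"
EndSection
```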

Once the mtrack driver is configured you have the advanced gestures, 
pinch/rotate/etc, appearing as buttons 8-20.  But xorg hasn't yet 
standardized those buttons to actually /mean/ anything, so apps don't 
normally know what to do with them; the next thing you have to do is find 
an app that signs up to the xorg input event queue and watches for these 
events, then actually does something with them.

For that I chose sxhkd, aka simple X hotkey daemon.  With it installed 
you'll get the sxhkd manpage, providing documentation for options and the 
config file, sxhkdrc.

This is where every configuration will be different, because there really 
is no standardization on the buttons and what they mean.  But the way I 
set it up here:

Buttons 4 and 5 are of course normally vert-scroll, but some apps still 
don't respond to vert-scroll events.  For these, I have it set up to 
detect the active app (see discussion below) and, if it's one I have a 
config for, trigger the configured actions instead of the usual vert-
scroll, which the app wouldn't respond to anyway as it doesn't know how.

I have mtrack set up to report three-finger-swipe up/down/left/right as 
buttons 8-11.  These I chose to configure as app-specific, so again, I 
detect what app is active, and trigger a configured action as desired.

Mtrack is set up to report four-finger-swipe up/down/left/right as 
buttons 12-15.  These I chose to configure for global actions, tho 
they're sometimes implemented in app-specific ways.

I configured swipe4left (button 14) as window-close.  To do that, I use a 
utility called wmctrl, specifically: wmctrl -c :ACTIVE: .  So swipe4left 
is a global action that closes the currently active window, whatever it 
is, by using mtrack to detect the swipe4left and issue the xorg-input 
button-14 event, which sxhkd detects and triggers the wmctrl -c :ACTIVE: 
command to close the currently active window.  Works rather nicely. =:^)

swipe4right (button 15) is configured as global desktop-up, switch to the 
next virtual-desktop.  To do that, I have kde4's kwin set to use the 
super-F12 hotkey (super being another notation for what is commonly 
called the win-key) to switch virtual desktop.  Then I simply have sxhkd 
configured to call another utility, xdotool, which simulates key presses, 
in this case with the xdotool key super+F12 command, simulating the super-
F12 key, which kwin is already configured to detect and switch virtual 
desktops on.  Again, swipe4right works rather nicely, switching virtual 
desktops at a 4-finger-right-swipe. =:^)
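Put in sxhkdrc terms, the two four-finger-swipe bindings just described 
might look like the following sketch (a reconstruction for illustration, 
not the literal config, which routes through wrapper scripts):

```
# Four-finger swipe left (button 14): close the active window.
button14
    wmctrl -c :ACTIVE:

# Four-finger swipe right (button 15): next virtual desktop, by
# simulating the super+F12 hotkey that kwin is configured to handle.
button15
    xdotool key super+F12
```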

swipe4up (button 12) and swipe4down (button 13) are button multiplexers, 
generally locking the next button triggered, or otherwise triggering a 
repeat, etc.  This I do using scripts that I created for the purpose.

On a trackpad without physical buttons, a single-finger tap is commonly 
used to trigger a button-1 click, for instance, but single finger touch 
and move is normally used to move the pointer.  Similarly, two-finger-tap 
might be middle-click, but two-finger-drag is scroll, and three-finger-
tap might be right-click, but three-finger-drag would be interpreted as a 
three-finger-swipe.  So without a physical button you don't have a way to 
actually drag, unless you assign another button as a drag-lock initiator, 
telling it to drag lock the next button clicked until you click that 
button again to release.  Similarly with the advanced pinch-to-zoom and 
rotate-to-rotate gestures.  You can configure them to repeat until a 
click, and then end up with either very slow zooming or no way to zoom 
just one notch, or you can configure them to trigger only once and then 
tire of repeating the gesture again and again to zoom in a long way.  But 
configuring the gesture to trigger once, while configuring another button 
to trigger a repeat until canceled, lets you zoom in just one notch, or 
trigger a repeating zoom and stop it when desired, depending on how far 
you want to zoom.

Pinch-zoom in/out is buttons 16 and 17.  While individual apps may have a 
zoom feature, I already commonly use kwin's general desktop zoom and have 
it configured with hotkeys (ctrl-super-up/down, zoom in/out, ctrl-super-
left, return to 100% normal zoom).  So I configured pinch-zoom to trigger 
those hotkeys using xdotool (tho I actually use a script, helping to 
better integrate it with the multiplexer functions mentioned above).
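As a sketch (again not the actual config, which goes through the 
multiplexer-aware script just mentioned), direct sxhkdrc bindings for 
pinch-zoom against kwin's desktop-zoom hotkeys might look like:

```
# Pinch zoom in/out (buttons 16/17): trigger kwin's desktop-zoom
# hotkeys as configured above (ctrl-super-up/down).
button16
    xdotool key ctrl+super+Up
button17
    xdotool key ctrl+super+Down
```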

Two-finger-rotate, by contrast, is more a per-app function, so that's how 
I configured it.  sxhkd detects the active window, and where a rotate 
function is configured for it, runs that function accordingly.

OK, but how to detect what the active window is, so I can act based on 
whether it's a specific window, or not?

That again I set up using a couple of purpose-built scripts.  The way it 
appears in the sxhkd config is like this (here using the three-finger-
swipe-up, button 8, config, as an example):

button 8
    winactiveis gwenview && xdotool key super+Return ; \
    winactiveis firefox && xdotool key ctrl+minus ; \
    winactiveis orion && killall xdotool

As you can tell if you know shell scripting, that's reasonably 
straightforward shell scripting, complete with line continuation escapes, 
in the sxhkd config.  Winactiveis is in turn a simple script that returns 
shell true or false, so it can be used as a logic conditional.

So how does winactiveis work?  From the script's help text, quote:

Based on various hardcoded winprop matches, does a window match that of a 
friendly name ($1)?

$2 is the window-ID to check.  If not provided it defaults to the active 
window.

Any $3 will activate debug.

Exit code of shell true (0) if so, else not.

End quote.

Right now, I have it set up to detect three different apps via friendly 
name: firefox, gwenview, and orion (an old DOS-based game I still run, in 
DOSBox).

It can match on these window properties: class, fullclass, role, name.  
Using (bash) shell equality, pattern, or regex matching, it can match 
those properties as exact, shell, shellnocase, regex, or regexnocase.

If you're familiar with kwin's window rules, this should look pretty 
familiar, as that's what it's based on, except of course here I'm doing 
it in bash shellscript context.

To actually get the window properties from the live window, in order to 
match them to the ones stored in the config, I call a second shell 
script, winpropis.
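For the curious, the core matching logic of a winactiveis-style script 
could be sketched roughly as follows.  Everything here is an illustrative 
reconstruction: the function name and the hardcoded patterns are 
assumptions, and a real version would also match fullclass, role, and 
name, with the nocase/regex variants:

```shell
#!/usr/bin/env bash
# Rough sketch of a winactiveis-style matcher.  Returns shell true (0)
# if the given window class string matches the friendly name, so it can
# be used as a logic conditional in the sxhkd config.
winclass_matches() {
    local friendly=$1 class=$2
    case $friendly in
        firefox)  [[ $class == *[Ff]irefox* ]] ;;
        gwenview) [[ $class == *gwenview* ]] ;;
        orion)    [[ $class == *[Dd]os[Bb]ox* ]] ;;  # the DOS game runs in DOSBox
        *)        return 1 ;;
    esac
}

# In live use, the class string would come from the active window, e.g.:
#   winclass_matches firefox "$(xprop -id "$(xdotool getactivewindow)" WM_CLASS)"
```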


OK, so while I've not spelled out every detail, if you're still reading, 
chances are pretty good you both have a good idea of the level of hoops I 
jumped thru to do this, and are Linux/shell/xorg technically literate 
enough to have some chance of actually doing something with it... 
assuming you are sufficiently determined and thus still consider it worth 
the trouble at all.

If you are/do, then I can save us both quite a bit of trouble by simply 
posting the configuration and scripts as necessary, and you can take it 
from there and simply tweak as desired.  If you find that reasonable, 
simply request the set and I can post it, either here, or mailed 
privately.  Otherwise, I'll assume I either lost you well before this 
point, as I'd guess I'd likely have lost 90% or more, or you got the idea 
but appreciate and accept the challenge of coding it up yourself, much as 
I did.

Or perhaps you're lucky and are running kde/frameworks/plasma/apps5 on 
qt5, and it's actually enough better that you no longer need to jump thru 
all these hoops!?  One can hope, anyway.  I expect it's at least some 
better, but I'm not sure if it has actually gone the full way, yet.

Alternatively, if you're running the plasma5 for tablets, etc, stuff, I 
guess that's rather farther advanced in terms of gesture recognition and 
the like, but may be limited in other ways, such as how many normal Linux 
apps it can run directly, and if it does run them, it's very likely that 
plasma-native touch doesn't work with normal Linux-native apps without 
porting or jumping thru hoops such as the above.

Anyway, from my own experience it's definitely possible and I have my own 
system reasonably well configured for it, but as the above should 
demonstrate, it's technically well beyond a level most people would find 
acceptable, and thus out of most people's reach.  Tho that's with kde4.  
Kde/frameworks/plasma/
apps5 may indeed actually work without that same level of hoop jumping, 
but I'd be very surprised if it actually works out of the box, at least 
for more than an extremely limited set of apps.

Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman

This message is from the kde mailing list.
Account management:  https://mail.kde.org/mailman/listinfo/kde.
Archives: http://lists.kde.org/.
More info: http://www.kde.org/faq.html.