Re: events-delivery branch review - crossing events
- Date: Tue, 4 Apr 2017 15:10:08 +0200
- From: Carlos Garnacho <carlosg@xxxxxxxxx>
- Subject: Re: events-delivery branch review - crossing events
On Mon, Apr 3, 2017 at 10:58 PM, Alexander Larsson <alexl@xxxxxxxxxx> wrote:
> So, I took a quick look at the event-delivery branch. One fundamental
> thing that it is currently missing is the handling of crossing events
> due to size allocation changes. In the simple case this is just a
> label changing context, making the widget wider which changes the
> widget under the pointer.
> However, there is also some real complexity in this. For instance,
> it's possible to get loops this way, such as if the :hover state
> causes the widget's new position to no longer be under the pointer.
> If you look at how CSS handles this, the specification says:
> User agents are not required to reflow a currently displayed
> document due to pseudo-class transitions. For instance, a style
> sheet may specify that the 'font-size' of an :active link should be
> larger than that of an inactive link, but since this may cause
> letters to change position when the reader selects the link, a UA
> may ignore the corresponding style rule.
> However, testing firefox and chrome it seems in practice what
> happens is that :hover causes a reflow, but the new reflow does not in
> turn cause re-calculating the :hover state (until the next mouse event
> at least). This seems like a pretty nice behaviour in a weird
> corner case.
> In the current pre-event-delivery branch what happens in the layout
> cycle is this:
> 1) emit all events in the queue then freeze the event queue
> 2) emit update which triggers all animations etc
> 3) Do size machinery, possibly loop up to 4 times
> 4) Paint
> 5) Unfreeze events, possibly queueing a new frame
> In the above, a mouse enter event in 1 would cause a css property
> change, which would cause the relayout in 3 to produce a new
> GdkWindow geometry, which in turn will emit new enter/leave events
> when the queue is unfrozen at 5 and which cause a new cycle the next
> frame. The frame clock will keep the cpu use from reaching 100%,
> but it's still not ideal to constantly switch between two states.
> Also, even if we don't get a loop, the correct rendering is always
> delayed by one frame.
> The question is how to handle this in the new model. The naive version
> would cause the :hover css state to change immediately from size
> allocate, which will cause a layout loop that runs 4 times, and then
> paint. Another alternative is to keep a crossing event for this so
> that we can store it on the event queue, and this way we can reproduce
> the current behaviour. However, that strikes me as non-ideal too.
> An alternative would be to treat crossing events as level-triggered
> instead of edge-triggered, at most once a frame. Every frame, after
> the first iteration in the layout machinery we pick the current
> position for all pointers, and emit css state changes (as well as
> generic widget event callbacks). If any of these queue a resize we'll
> handle these in the next iteration, but we never generate further
> crossing events this frame, nor do we automatically schedule a new
> frame just due to this.
This was roughly my idea too, and I think it fits in nicely with frame
clock based drawing; we certainly want to keep the cpu time necessary
to handle the aftereffects of input events to a minimum.
> This has some complications in semantics though. If you move the
> pointer between multiple widgets in one frame we will miss some
> crossing events that we would otherwise have seen. I don't think this
> is a problem in itself, because these would be unlikely to have
> something that would affect the final frame result (it would
> essentially be like the user moved the mousepointer so fast that it
> jumped over the inbetween widget completely). However, it is not
> entirely clear how to report the motion events that land in-between
> the two widgets that had the enter notify reported to them. Getting
> motion events without enter events is quite a change compared to
> the current semantics. Can we just drop these events?
IMHO, if we go down this path we can and should drop them, it's not
coherent to report motion to widgets that didn't previously receive an
enter event. As you say this is not too different from quick pointer
motions, or touchscreen-driven pointer on x11. It'd just appear as if
the pointer warped from one place to the other. And we'd just be
rate-limiting it to 60fps.
For motions alone, I think this shouldn't matter much in practical
terms, plain hovering with no interaction should just cause visual
feedback most often, just like crossing events, and even more
so if the pointer is/gets grabbed somewhere else. This is more unclear
if the user manages to click in between, as we should arbitrate which
widget actually received the button press and implicitly grab on it.
What I'd suggest is:
1) All arriving input events are queued in toplevel coordinates.
2) When the frame clock says it's time to handle events:
- Coalesce the queue to a minimal series of events; this might result
in more than one motion if there are button presses in between
- Emit those, ensuring correct/orderly crossing event emission for
the coalesced motions (which should still amount to one set of
crossing events, accounting for the appearance of grabs)
3) For widgets/controllers that want the full motion history, keep the
original events around during the event delivery phase so we can
reconstruct the event history for these widgets within their motion
handlers.
Things get a bit more complicated, I guess, if we account for
press+release within the same frame time (which I don't think is
possible, except programmatically). In this case the resulting
implicit grab kind of helps despite relayouts (although whether the
current target widget is correct for the button release is debatable),
style wise I think it's fair to say "toggling a state on and off
within the same frame may not result in visible effects", and event
management wise both events would be delivered, so all "permanent"
effects should still apply.
As per these ideas, event compression happens by default throughout
the widget hierarchy with no off switch whatsoever, so it should be
compensated for by accessors that let motion handlers in
gestures/widgets rebuild the set of x+y+time out of the uncompressed
events.
FWIW, GtkGesture just listens to press/motion*/release sequences.
I think we'll need some "motion capturer" event controller, but it
would be a GtkEventController rather than a GtkGesture, so it'll have
no means to claim pointer input for itself, grabs aside.
I can think of other less common use cases where missing all
crossing/motion events might bring unexpected effects, like
capturing/confining the pointer whenever it enters some area; however,
I'd expect/trust these areas to be big enough that they can't be
crossed in a single frame.