
Re: [RFC PATCH v2 11/12] x86/mm/tlb: Use async and inline messages for flushing




> On May 31, 2019, at 1:37 PM, Andy Lutomirski <luto@xxxxxxxxxx> wrote:
> 
> On Fri, May 31, 2019 at 1:13 PM Dave Hansen <dave.hansen@xxxxxxxxx> wrote:
>> On 5/31/19 12:31 PM, Nadav Amit wrote:
>>>> On May 31, 2019, at 11:44 AM, Andy Lutomirski <luto@xxxxxxxxxxxxxx> wrote:
>>>> 
>>>> 
>>>> 
>>>>> On May 31, 2019, at 3:57 AM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>>>>> 
>>>>>> On Thu, May 30, 2019 at 11:36:44PM -0700, Nadav Amit wrote:
>>>>>> When we flush userspace mappings, we can defer the TLB flushes, as long
>>>>>> as the following conditions are met:
>>>>>> 
>>>>>> 1. No tables are freed, since otherwise speculative page walks might
>>>>>> cause machine-checks.
>>>>>> 
>>>>>> 2. No one would access userspace before flush takes place. Specifically,
>>>>>> NMI handlers and kprobes would avoid accessing userspace.
>>>>>> 
>>>>>> Use the new SMP support to execute remote function calls with inlined
>>>>>> data for this purpose. The remote TLB flushing function would be
>>>>>> executed asynchronously, and the local CPU would continue execution as
>>>>>> soon as the IPI was delivered, before the function was actually
>>>>>> executed. Since tlb_flush_info is copied, there is no risk it would
>>>>>> change before the TLB flush is actually executed.
>>>>>> 
>>>>>> Change nmi_uaccess_okay() to check whether a remote TLB flush is
>>>>>> currently in progress on this CPU by checking whether the asynchronously
>>>>>> called function is the remote TLB flushing function. The current
>>>>>> implementation disallows access in such cases, but it is also possible
>>>>>> to flush the entire TLB in such a case and allow access.
>>>>> 
>>>>> ARGGH, brain hurt. I'm not sure I fully understand this one. How is it
>>>>> different from today, where the NMI can hit in the middle of the TLB
>>>>> invalidation?
>>>>> 
>>>>> Also; since we're not waiting on the IPI, what prevents us from freeing
>>>>> the user pages before the remote CPU is 'done' with them? Currently the
>>>>> synchronous IPI is like a sync point where we *know* the remote CPU is
>>>>> completely done accessing the page.
>>>>> 
>>>>> Where getting an IPI stops speculation, speculation again restarts
>>>>> inside the interrupt handler, and until we've passed the INVLPG/MOV CR3,
>>>>> speculation can happen on that TLB entry, even though we've already
>>>>> freed and re-used the user-page.
>>>>> 
>>>>> Also, what happens if the TLB invalidation IPI is stuck behind another
>>>>> smp_function_call IPI that is doing user-access?
>>>>> 
>>>>> As said,.. brain hurts.
>>>> 
>>>> Speculation aside, any code doing dirty tracking needs the flush to happen
>>>> for real before it reads the dirty bit.
>>>> 
>>>> How does this patch guarantee that the flush is really done before someone
>>>> depends on it?
>>> 
>>> I was always under the impression that the dirty-bit is pass-through - the
>>> A/D-assist walks the tables and sets the dirty bit upon access. Otherwise,
>>> what happens when you invalidate the PTE, and have already marked the PTE as
>>> non-present? Would the CPU set the dirty-bit at this point?
>> 
>> Modulo bugs^Werrata...  No.  What actually happens is that a
>> try-to-set-dirty-bit page table walk acts just like a TLB miss.  The old
>> contents of the TLB are discarded and only the in-memory contents matter
>> for forward progress.  If Present=0 when the PTE is reached, you'll get
>> a normal Present=0 page fault.
> 
> Wait, does that mean that you can do a lock cmpxchg or similar to
> clear the dirty and writable bits together and, if the dirty bit was
> clear, skip the TLB flush?  If so, nifty!  Modulo errata, of course.
> And I seem to remember some exceptions relating to CET shadow stack
> involving the dirty bit being set on not-present pages.

I did something similar with the access-bit in the past.

Anyhow, I have a bug here - the code does not wait for the indication that
the IPI was received. I need to rerun the performance measurements once I
fix it.