Re: [PATCH v3] coccicheck: process every source file at once
- Date: Wed, 10 Oct 2018 13:44:41 +0200
- From: SZEDER Gábor <szeder.dev@xxxxxxxxx>
- Subject: Re: [PATCH v3] coccicheck: process every source file at once
On Mon, Oct 08, 2018 at 11:15:42PM -0400, Jeff King wrote:
> On Fri, Oct 05, 2018 at 09:54:13PM +0200, SZEDER Gábor wrote:
> > Runtimes tend to fluctuate quite a bit more on Travis CI compared to
> > my machine, but not this much, and it seems to be consistent so far.
> > After scripting/querying the Travis CI API a bit, I found that of
> > the last 100 static analysis build jobs, 78 did actually run 'make
> > coccicheck', averaging 470s for the whole build job, with only 4
> > build jobs exceeding the 10min mark.
> > I had maybe 6-8 build jobs running this patch over the last 2-3 days,
> > and I think all of them were over 15min. (I restarted some of them,
> > so I don't have separate logs for all of them, hence the uncertainty.)
> So that's really weird and counter-intuitive, since we should be doing
> strictly less work. I know that spatch tries to parallelize itself,
> though from my tests, 1.0.4 does not. I wonder if the version in Travis
> differs in that respect and starts too many threads, and the extra time
> is going to contention and context switches.
I don't think it does any parallel work.
Here is the timing again from my previous email:
960.50user 22.59system 16:23.74elapsed 99%CPU (0avgtext+0avgdata 1606156maxresident)k
Notice that 16:23 is 983s, matching the sum of the user and system
times. I usually saw this kind of timing with CPU-intensive
single-threaded programs; if there were any parallelization, then I
would expect the elapsed time to be at least somewhat smaller than the
combined user and system times.
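That check can be sketched in a couple of lines, plugging in the
numbers from the timing line above: a CPU-bound single-threaded run has
user+sys roughly equal to elapsed, while a genuinely parallel one has
user+sys noticeably larger than elapsed.

```shell
# Values taken from the time(1) output quoted above.
user=960.50 sys=22.59 elapsed=983.74
# A ratio near 1.00 means no effective parallelism.
awk -v u="$user" -v s="$sys" -v e="$elapsed" \
    'BEGIN { printf "CPU/elapsed ratio: %.2f\n", (u + s) / e }'
```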
> Have you tried passing "-j1" to spatch? My 1.0.4 does not even recognize
> that option.
I just gave it a try, but the v1.0.0 on Travis CI errored out with
"unknown option `-j'".
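Since the available spatch versions disagree on whether '-j' exists, a
Makefile-level fix could probe for the option at run time rather than
assume a version; a minimal sketch (the helper name is made up):

```shell
# Hypothetical probe: does the named spatch binary accept -j?
spatch_supports_j () {
	command -v "$1" >/dev/null 2>&1 &&
	"$1" -j 1 --version >/dev/null 2>&1
}

if spatch_supports_j spatch; then
	echo "passing -j to spatch"
else
	echo "spatch missing or lacks -j; running without it"
fi
```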
> That seems like a pretty unlikely explanation to me, but I am having
> trouble coming up with another one.
> I guess the other plausible thing is that the extra memory is forcing us
> into some slower path. E.g., a hypervisor may even be swapping,
> unbeknownst to the child OS, which then accounts the stalls as "boy,
> that memory load was really slow", and that shows up as used CPU.
> That actually sounds more credible to me.