RE: [PATCH v3] coccicheck: process every source file at once
- Date: Fri, 5 Oct 2018 16:53:35 +0000
- From: "Keller, Jacob E" <jacob.e.keller@xxxxxxxxx>
- Subject: RE: [PATCH v3] coccicheck: process every source file at once
> -----Original Message-----
> From: Jeff King [mailto:peff@xxxxxxxx]
> Sent: Friday, October 05, 2018 9:25 AM
> To: SZEDER Gábor <szeder.dev@xxxxxxxxx>
> Cc: Jacob Keller <jacob.keller@xxxxxxxxx>; Keller, Jacob E
> <jacob.e.keller@xxxxxxxxx>; Git mailing list <git@xxxxxxxxxxxxxxx>
> Subject: Re: [PATCH v3] coccicheck: process every source file at once
> On Fri, Oct 05, 2018 at 02:40:48PM +0200, SZEDER Gábor wrote:
> > On Thu, Oct 04, 2018 at 07:17:47PM -0700, Jacob Keller wrote:
> > > Junio, do you want me to update the commit message on my side with the
> > > memory concerns? Or could you update it to mention memory as a noted
> > > trade off.
> > We have been running 'make -j2 coccicheck' in the static analysis
> > build job on Travis CI, which worked just fine so far. The Travis CI
> > build environments have 3GB of memory available, but, as shown in
> > , with this patch the memory consumption jumps up to about
> > 1.3-1.8GB for each of those jobs. So with two parallel jobs we will
> > very likely bump into this limit.
> > So this patch should definitely change that build script to run only a
> > single job.
> It should still be a net win, since the total CPU seems to drop by a
> factor of 3-4.
> Are we OK with saying 1.3-1.8GB is necessary to run coccicheck? That
> doesn't feel like an exorbitant request for a developer-only tool these
> days, but I have noticed some people on the list tend to have lousier
> machines than I do. ;)
It's probably not worth making this more complicated by scaling how many files we process at once based on the amount of memory available on the system...