Re: [Samba] ceph_vms performance
- Date: Thu, 24 May 2018 09:23:29 +0200
- From: Thomas Bennett via samba <samba@xxxxxxxxxxxxxxx>
- Subject: Re: [Samba] ceph_vms performance
> > I'm testing out ceph_vms vs a cephfs mount with a cifs export.
> I take it you mean the Ceph VFS module (vfs_ceph)?
Yes. Keyboard slip :)
> > I currently have 3 active ceph mds servers to maximise throughput and
> > when I have configured a cephfs mount with a cifs export, I'm getting
> > reasonable benchmark results.
> Keep in mind that increasing the number of active MDS servers doesn't
> necessarily mean that you'll see better performance, especially if the
> client workload is spread across the full filesystem tree, rather than
> isolated into the corresponding sharded MDS subdirectories.
Thanks. I've not figured out MDS ranks yet - my naive assumption was that
they were sharing all the workload, but I see that's not the case.
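For context on how load ends up split across ranks: with multiple active MDS daemons, each rank serves a subtree, and a directory can be pinned to a particular rank via the `ceph.dir.pin` extended attribute. A hedged sketch (the mount point and directory names here are illustrative, not from this thread):

```shell
# Pin each top-level directory (and everything under it) to a fixed
# MDS rank, so the ranks serve isolated subtrees instead of competing.
setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/projectA
setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/projectB

# A value of -1 removes the pin and hands the subtree back to the
# default balancer.
setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/projectA
```

This only helps if the client workload is actually partitioned along those directories; a workload spread across the whole tree still funnels through whichever rank owns each path.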
I'm doing some fairly simple benchmarking with iozone and sysbench fileio
workloads writing to a mounted directory. The benchmark workloads were
identical for each test scenario, so I was expecting, at minimum,
equivalent performance.
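As a sketch of that kind of workload (file sizes and paths are illustrative; the total size is chosen to exceed RAM so the page cache can't absorb the whole run):

```shell
# iozone: sequential write (-i 0) and read (-i 1), one 64 GiB file,
# 1 MiB record size, written onto the mounted filesystem under test.
iozone -i 0 -i 1 -s 64g -r 1m -f /mnt/cephfs/iozone.tmp

# sysbench fileio: sequential-write mode over 64 GiB of test files.
sysbench fileio --file-total-size=64G prepare
sysbench fileio --file-total-size=64G --file-test-mode=seqwr run
sysbench fileio --file-total-size=64G cleanup
```

Running the identical command set against each export (kernel cephfs mount vs. vfs_ceph share) is what makes the throughput numbers comparable.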
> > However, when I tried some benchmarking with the ceph_vms module, I
> > only got a third of the comparable write throughput.
> > I'm just wondering if this is expected, or if there is an obvious
> > configuration setup that I'm missing.
> My initial assumption would be that the improved CephFS kernel mount
> performance is mostly due to the Linux page cache, which is utilised
> for buffered client I/O.
I'm writing a large amount of data (double my machine's memory) onto the
mount point, so I expect caching to have minimal effect on my testing. I
also flush the page cache before performing read tests, just to be sure.
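Concretely, the usual Linux mechanism for that flush (requires root) is:

```shell
# Write back any dirty pages first, then drop the clean page cache
# plus dentries and inodes, so subsequent reads hit storage, not RAM.
sync
echo 3 > /proc/sys/vm/drop_caches
```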
Thanks for the feedback.