
Re: [PATCH v1 3/5] fsmonitor: add test cases for fsmonitor extension

On 5/16/2017 12:59 AM, Junio C Hamano wrote:
Ben Peart <peartben@xxxxxxxxx> writes:

Add test cases that ensure status results are correct when using the new
fsmonitor extension.  Test untracked, modified, and new files by
ensuring the results are identical to when not using the extension.

Add a test to ensure updates to the index properly mark corresponding
entries in the index extension as dirty so that the status is correct
after commands that modify the index but don't trigger changes in the
working directory.

Add a test that verifies that if the fsmonitor extension doesn't tell
git about a change, it doesn't discover it on its own.  This ensures
git is honoring the extension and that we get the performance benefits
desired.

Signed-off-by: Ben Peart <benpeart@xxxxxxxxxxxxx>
---
 t/t7519-status-fsmonitor.sh | 134 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 134 insertions(+)
 create mode 100644 t/t7519-status-fsmonitor.sh

Please make this executable.


Sorry, long-time Windows developer here, so I forgot this extra step. Fixed for the next roll.
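For reference, the executable bit can be set either in the working tree or directly in the index; the latter is handy on filesystems (like NTFS) that don't track the bit. A minimal sketch in a scratch repo (file name is illustrative, not from the patch):

```shell
# Create a scratch repo and a tracked script (illustrative setup).
git init -q demo && cd demo
echo '#!/bin/sh' >script.sh
git add script.sh

# Set the bit in the working tree, then directly in the index;
# --chmod=+x works even where chmod has no effect.
chmod +x script.sh
git update-index --chmod=+x script.sh

# The staged mode should now be 100755.
git ls-files -s script.sh
```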

diff --git a/t/t7519-status-fsmonitor.sh b/t/t7519-status-fsmonitor.sh
new file mode 100644
index 0000000000..2d63efc27b
--- /dev/null
+++ b/t/t7519-status-fsmonitor.sh
@@ -0,0 +1,134 @@
...
+# Ensure commands that call refresh_index() to move the index back in time
+# properly invalidate the fsmonitor cache
+...
+	git status >output &&
+	git -c core.fsmonitor=false status >expect &&
+	test_i18ncmp expect output
+'

Hmm. I wonder if we can somehow detect the case where we got the
correct and expected result only because fsmonitor was not in
effect, even though the test requested it to be used?  Not limited
to this particular test piece, but applies to all of them in this
file.


I have tested this manually by editing the test hook proc to output invalid results and verifying that the test failed as a result. Adding that check to the test script itself was ugly, though: every test ends up duplicated, one copy ensuring success and one ensuring failure.

On further reflection, a better idea is to have the test hook proc write a marker file whose existence can be tested: if the marker exists, the hook was used to update the results; if it doesn't, the hook proc wasn't invoked. That is a much cleaner solution and doesn't require duplicating the tests.
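A minimal sketch of the marker-file idea (the marker name and hook shape are illustrative, not taken from the patch):

```shell
#!/bin/sh
# The simulated fsmonitor test hook drops a marker file whenever it runs;
# the test then checks for the marker to prove the hook was actually consulted.
MARKER=marker.fsmonitor	# illustrative file name

fsmonitor_hook () {
	# ... a real hook would emit the list of changed paths here ...
	: >"$MARKER"	# record that the hook ran
}

rm -f "$MARKER"
fsmonitor_hook

# Test side: fail unless the hook left its marker behind.
if test -f "$MARKER"
then
	echo "hook was used"
else
	echo "hook was NOT used"
	exit 1
fi
```

In the real test script this would map to something like a `test -f` check after each `git status` invocation that is expected to consult the hook.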