
[PATCH v2 02/26] preempt-locking.txt: standardize document format

Each text file under Documentation follows a different
format. Some don't even have titles!

Change its representation to follow the adopted standard,
using ReST markup so that it can be parsed by Sphinx:

- mark titles;
- mark literal blocks;
- adjust indentation where needed;
- use :Author: for authorship.

Signed-off-by: Mauro Carvalho Chehab <mchehab@xxxxxxxxxxxxxxxx>
 Documentation/preempt-locking.txt | 40 ++++++++++++++++++++++++---------------
 1 file changed, 25 insertions(+), 15 deletions(-)

diff --git a/Documentation/preempt-locking.txt b/Documentation/preempt-locking.txt
index e89ce6624af2..c945062be66c 100644
--- a/Documentation/preempt-locking.txt
+++ b/Documentation/preempt-locking.txt
@@ -1,10 +1,13 @@
-		  Proper Locking Under a Preemptible Kernel:
-		       Keeping Kernel Code Preempt-Safe
-			 Robert Love <rml@xxxxxxxxx>
-			  Last Updated: 28 Aug 2002
+Proper Locking Under a Preemptible Kernel: Keeping Kernel Code Preempt-Safe
+:Author: Robert Love <rml@xxxxxxxxx>
+:Last Updated: 28 Aug 2002
 A preemptible kernel creates new locking issues.  The issues are the same as
@@ -17,9 +20,10 @@ requires protecting these situations.
 RULE #1: Per-CPU data structures need explicit protection
-Two similar problems arise. An example code snippet:
+Two similar problems arise. An example code snippet::
 	struct this_needs_locking tux[NR_CPUS];
 	tux[smp_processor_id()] = some_value;
@@ -35,6 +39,7 @@ You can also use put_cpu() and get_cpu(), which will disable preemption.
 RULE #2: CPU state must be protected.
 Under preemption, the state of the CPU must be protected.  This is arch-
@@ -52,6 +57,7 @@ However, fpu__restore() must be called with preemption disabled.
 RULE #3: Lock acquire and release must be performed by same task
 A lock acquired in one task must be released by the same task.  This
@@ -61,17 +67,20 @@ like this, acquire and release the task in the same code path and
 have the caller wait on an event by the other task.
 Data protection under preemption is achieved by disabling preemption for the
 duration of the critical region.
-preempt_enable()		decrement the preempt counter
-preempt_disable()		increment the preempt counter
-preempt_enable_no_resched()	decrement, but do not immediately preempt
-preempt_check_resched()		if needed, reschedule
-preempt_count()			return the preempt counter
+  preempt_enable()		decrement the preempt counter
+  preempt_disable()		increment the preempt counter
+  preempt_enable_no_resched()	decrement, but do not immediately preempt
+  preempt_check_resched()	if needed, reschedule
+  preempt_count()		return the preempt counter
 The functions are nestable.  In other words, you can call preempt_disable
 n-times in a code path, and preemption will not be reenabled until the n-th
@@ -89,7 +98,7 @@ So use this implicit preemption-disabling property only if you know that the
 affected codepath does not do any of this. Best policy is to use this only for
 small, atomic code that you wrote and which calls no complex functions.
 	cpucache_t *cc; /* this is per-CPU */
@@ -102,7 +111,7 @@ Example:
 	return 0;
 Notice how the preemption statements must encompass every reference of the
-critical variables.  Another example:
+critical variables.  Another example::
 	int buf[NR_CPUS];
@@ -114,7 +123,8 @@ This code is not preempt-safe, but see how easily we can fix it by simply
 moving the spin_lock up two lines.
+Preventing preemption using interrupt disabling
 It is possible to prevent a preemption event using local_irq_disable and