
[RFC,31/41] random: introduce struct health_test + health_test_reset() placeholders

Message ID 20200921075857.4424-32-nstange@suse.de (mailing list archive)
State Not Applicable
Delegated to: Herbert Xu
Series random: possible ways towards NIST SP800-90B compliance

Commit Message

Nicolai Stange Sept. 21, 2020, 7:58 a.m. UTC
The health tests to be implemented will maintain some per-CPU state as
they successively process the IRQ samples fed into the respective
fast_pool from add_interrupt_randomness().

In order not to clutter future patches with trivialities, introduce an
empty struct health_test, which is supposed to hold said state in the
future. Add a member of this new type to struct fast_pool.

Introduce a health_test_reset() stub, which is supposed to (re)initialize
instances of struct health_test.
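
For orientation, the additions amount to the following minimal sketch:
an empty state container, a no-op reset helper and the new struct
fast_pool member. All other fast_pool members are elided here; they can
be found in the hunk below.

struct health_test {};			/* per-CPU health test state, empty for now */

static void health_test_reset(struct health_test *h)
{}					/* (re)initialization stub, filled in by later patches */

struct fast_pool {
	/* ... existing members, c.f. the hunk below ... */
	struct health_test	health;	/* new: per-CPU health test state */
};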

Invoke it from fast_pool_init_accounting() to make sure that a
fast_pool's contained health_test instance gets initialized once before
its first use.
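
Shown in isolation, the initialization path then reads roughly as
follows. The init-once guard stems from an earlier patch in the series
and is only represented by the hypothetical
fast_pool_accounting_initialized() check here:

static inline void fast_pool_init_accounting(struct fast_pool *f)
{
	/* hypothetical stand-in for the real init-once guard */
	if (fast_pool_accounting_initialized(f))
		return;

	f->event_entropy_shift = min_irq_event_entropy_shift();
	health_test_reset(&f->health);	/* new: bring the health state into a defined initial state */
}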

Make add_interrupt_randomness() call fast_pool_init_accounting()
earlier: future health test functionality will get invoked before the
latter's old call site, and the health_test instance must have been
initialized by then.
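
The resulting ordering within add_interrupt_randomness() can be
pictured roughly as below; everything not relevant to the move is
elided and only outlined by comments:

void add_interrupt_randomness(int irq, int irq_flags)
{
	/* ... local variable setup (fast_pool, r, cycles, ...) elided ... */

	/* ... mix the IRQ sample into the per-CPU fast_pool ... */

	/*
	 * Moved up: initialize the accounting state, including the new
	 * health_test instance, before the points where health test
	 * functionality will be hooked in by later patches.
	 */
	fast_pool_init_accounting(fast_pool);

	if (unlikely(crng_init == 0)) {
		/* early crng seeding via crng_fast_load(), unchanged */
	}

	/* ... */

	if (!spin_trylock(&r->lock))
		return;

	/* fast_pool_init_accounting() used to be invoked here */

	/* ... credit/queue entropy for the input pool ... */
}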

Signed-off-by: Nicolai Stange <nstange@suse.de>
---
 drivers/char/random.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

Patch

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 37746df53acf..0f56c873a501 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -879,6 +879,11 @@  static void discard_queued_entropy(struct entropy_store *r,
 	spin_unlock_irqrestore(&r->lock, flags);
 }
 
+struct health_test {};
+
+static void health_test_reset(struct health_test *h)
+{}
+
 struct fast_pool {
 	__u32		pool[4];
 	unsigned long	last;
@@ -886,6 +891,7 @@  struct fast_pool {
 	unsigned char	count;
 	int		event_entropy_shift;
 	struct queued_entropy	q;
+	struct health_test	health;
 };
 
 /*
@@ -1644,6 +1650,7 @@  static inline void fast_pool_init_accounting(struct fast_pool *f)
 		return;
 
 	f->event_entropy_shift = min_irq_event_entropy_shift();
+	health_test_reset(&f->health);
 }
 
 void add_interrupt_randomness(int irq, int irq_flags)
@@ -1674,6 +1681,8 @@  void add_interrupt_randomness(int irq, int irq_flags)
 	add_interrupt_bench(cycles);
 	this_cpu_add(net_rand_state.s1, fast_pool->pool[cycles & 3]);
 
+	fast_pool_init_accounting(fast_pool);
+
 	if (unlikely(crng_init == 0)) {
 		if ((fast_pool->count >= 64) &&
 		    crng_fast_load((char *) fast_pool->pool,
@@ -1692,8 +1701,6 @@  void add_interrupt_randomness(int irq, int irq_flags)
 	if (!spin_trylock(&r->lock))
 		return;
 
-	fast_pool_init_accounting(fast_pool);
-
 	if (!fips_enabled) {
 		/* award one bit for the contents of the fast pool */
 		nfrac = 1 << ENTROPY_SHIFT;