Message ID | d58f339a9c0dd8352b50d2f7a216f67ec2844f20.1434501121.git.luto@kernel.org (mailing list archive)
---|---
State | New, archived |
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 5bd3a99dc20b..c5ceec532799 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -107,7 +107,7 @@ static void init_amd_k6(struct cpuinfo_x86 *c)
 		const int K6_BUG_LOOP = 1000000;
 		int n;
 		void (*f_vide)(void);
-		unsigned long d, d2;
+		u64 d, d2;
 
 		printk(KERN_INFO "AMD K6 stepping B detected - ");
 
@@ -118,10 +118,10 @@ static void init_amd_k6(struct cpuinfo_x86 *c)
 
 		n = K6_BUG_LOOP;
 		f_vide = vide;
-		rdtscl(d);
+		d = native_read_tsc();
 		while (n--)
 			f_vide();
-		rdtscl(d2);
+		d2 = native_read_tsc();
 		d = d2-d;
 
 		if (d > 20*K6_BUG_LOOP)
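To make the effect of the type change concrete, here is a minimal user-space sketch (not kernel code): rdtsc64() and rdtsc32() are hypothetical local helpers built directly on the RDTSC instruction, standing in for native_read_tsc() and the old 32-bit rdtscl() respectively. It shows why a 32-bit cycle delta can silently wrap while a 64-bit delta cannot in practice.

```c
#include <stdint.h>
#include <stdio.h>

/*
 * User-space analogue of native_read_tsc(): read the full 64-bit
 * time-stamp counter.  RDTSC returns the high half in EDX and the
 * low half in EAX.
 */
static inline uint64_t rdtsc64(void)
{
	uint32_t lo, hi;
	asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
	return ((uint64_t)hi << 32) | lo;
}

/* What the old rdtscl() effectively handed the caller: low 32 bits only. */
static inline uint32_t rdtsc32(void)
{
	return (uint32_t)rdtsc64();
}

int main(void)
{
	uint64_t d64 = rdtsc64();
	uint32_t d32 = rdtsc32();

	/* ... workload under test would run here ... */

	d64 = rdtsc64() - d64;
	d32 = rdtsc32() - d32;

	/*
	 * A 32-bit count wraps after 2^32 ~= 4.3e9 cycles (roughly four
	 * seconds at 1 GHz), so d32 is only trustworthy for short
	 * intervals; d64 is safe at any realistic interval.
	 */
	printf("64-bit delta: %llu cycles\n", (unsigned long long)d64);
	printf("32-bit delta: %u cycles\n", d32);
	return 0;
}
```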
This code is timing one million indirect calls (K6_BUG_LOOP), so the added
overhead of counting the elapsed cycles as a 64-bit number should be
insignificant. Drop the optimization of using a 32-bit count.

Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 arch/x86/kernel/cpu/amd.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
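The changelog's proportionality argument can be sanity-checked with a hedged user-space sketch: the fixed cost of the two TSC reads is dwarfed by a million indirect calls. rdtsc64() and nop_target() are hypothetical stand-ins (for native_read_tsc() and vide()); the volatile function pointer keeps the compiler from devirtualizing or deleting the empty loop.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical user-space stand-in for native_read_tsc(). */
static inline uint64_t rdtsc64(void)
{
	uint32_t lo, hi;
	asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
	return ((uint64_t)hi << 32) | lo;
}

/* Stand-in for vide(): an empty function reached via an indirect call. */
static void nop_target(void) { }

int main(void)
{
	/* volatile: force a real indirect call on every iteration */
	void (*volatile f)(void) = nop_target;
	int n = 1000000;	/* mirrors K6_BUG_LOOP */

	/* Cost of the measurement itself: two back-to-back TSC reads. */
	uint64_t t0 = rdtsc64();
	uint64_t overhead = rdtsc64() - t0;

	/* Cost of the workload: one million indirect calls. */
	uint64_t d = rdtsc64();
	while (n--)
		f();
	d = rdtsc64() - d;

	printf("two TSC reads:     ~%llu cycles\n",
	       (unsigned long long)overhead);
	printf("1M indirect calls: %llu cycles\n",
	       (unsigned long long)d);
	return 0;
}
```

On any machine the second number should come out millions of cycles larger than the first, which is the changelog's point: widening the count to 64 bits costs nothing that this loop could ever notice.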