ARM: mmu64: Don't flush freshly invalidated region
The current code for dma_sync_single_for_device(), when called with dir
set to DMA_FROM_DEVICE, first invalidates the given region of memory
and then cleans+invalidates it as a second step. While the second step
should be harmless, it seems to be an unnecessary no-op that can be
avoided.

The analogous code in the Linux kernel (4.18), in arch/arm64/mm/cache.S:

ENTRY(__dma_map_area)
	cmp	w2, #DMA_FROM_DEVICE
	b.eq	__dma_inv_area
	b	__dma_clean_area
ENDPIPROC(__dma_map_area)

is written to perform only either an invalidate or a clean, depending
on the direction, so change dma_sync_single_for_device() to behave in
the same vein and perform _either_ an invalidate _or_ a flush of the
given region.
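
The resulting selection logic can be sketched as follows. This is a minimal, compilable model rather than the verbatim barebox patch: the v8_inv_dcache_range()/v8_flush_dcache_range() helper names and the enum values are assumptions standing in for the real low-level cache maintenance routines, stubbed out here so that only the direction-based dispatch is demonstrated.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef uint64_t dma_addr_t;
enum dma_data_direction { DMA_TO_DEVICE, DMA_FROM_DEVICE, DMA_BIDIRECTIONAL };

/* Records which cache operation ran; the real routines are assembly
 * loops over the data cache (assumed names, not the actual barebox code). */
static const char *last_op;

static void v8_inv_dcache_range(uint64_t start, uint64_t end)
{
	last_op = "invalidate";		/* stand-in for a DC IVAC loop */
}

static void v8_flush_dcache_range(uint64_t start, uint64_t end)
{
	last_op = "clean+invalidate";	/* stand-in for a DC CIVAC loop */
}

/* One cache operation per call, selected by direction, mirroring
 * __dma_map_area in the Linux arm64 code quoted above. */
void dma_sync_single_for_device(dma_addr_t address, size_t size,
				enum dma_data_direction dir)
{
	if (dir == DMA_FROM_DEVICE)
		v8_inv_dcache_range(address, address + size - 1);
	else
		v8_flush_dcache_range(address, address + size - 1);
}
```

A DMA_FROM_DEVICE sync now invalidates only (the device wrote the memory, so stale cache lines must be discarded), while every other direction cleans+invalidates so CPU-written data reaches memory before the device reads it.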

Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
commit 4e0112568c318b666624e51f91073bac03b6006b (parent dd36eef)
Andrey Smirnov authored on 23 Aug 2018
Sascha Hauer committed on 24 Aug 2018
Changed file: arch/arm/cpu/mmu_64.c