In barebox we used 1MiB sections to map our SDRAM cacheable. This
has the drawback that we have to map our SDRAM twice: cached for
normal use and uncached for DMA operations. As address space gets
sparse on newer systems we are sometimes unable to find a suitably
large area for the DMA coherent space.
This patch changes the MMU code to use second level page tables.
With them we can implement dma_alloc_coherent() as a normal malloc();
we just have to remap the allocated area uncached afterwards and
remap it cached again when it is freed.
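
A minimal sketch of the idea (the helper names remap_range(),
MAP_UNCACHED/MAP_CACHED and dma_flush_range() are placeholders used
for illustration here; the actual functions in the patch may differ):

	/* Sketch only: allocate from malloc, then switch the page
	 * table entries for the allocation to uncached. */
	void *dma_alloc_coherent(size_t size)
	{
		void *ret = xmemalign(4096, size);	/* page aligned */

		if (!ret)
			return NULL;

		/* write back and invalidate stale cache lines first */
		dma_flush_range((unsigned long)ret,
				(unsigned long)ret + size);

		/* mark the second level page table entries uncached */
		remap_range(ret, size, MAP_UNCACHED);

		return ret;
	}

	void dma_free_coherent(void *mem, size_t size)
	{
		/* map cached again before handing back to malloc */
		remap_range(mem, size, MAP_CACHED);
		free(mem);
	}
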
This makes arm_create_section(), setup_dma_coherent() and mmu_enable()
no-ops.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>