# HG changeset patch
# User Francisco J Ballesteros
# Date 1316446740 -7200
# Node ID 9adfd913d396d815b6dda8309eb907ec4e60321d
# Parent  a54f7788991d22d3adf36c640bfe12c9239a157d
mmu: cleanup.

How many times must one send a change before codereview
sends what you told it to send?
Note that I am also adding a cleanup for map.c, not just mmu.c.

R=nix-dev, rminnich
CC=nix-dev
http://codereview.appspot.com/5066043

diff -r a54f7788991d -r 9adfd913d396 sys/src/nix/k10/map.c
--- a/sys/src/nix/k10/map.c	Mon Sep 19 17:06:50 2011 +0200
+++ b/sys/src/nix/k10/map.c	Mon Sep 19 17:39:00 2011 +0200
@@ -18,7 +18,7 @@
 	if(pa < TMFM)
 		return KSEG0+va;
 
-assert(pa < KSEG2);
+	assert(pa < KSEG2);
 	return KSEG2+va;
 }
@@ -40,9 +40,9 @@
 KMap*
 kmap(Page* page)
 {
-//	print("kmap(%#llux) @ %#p: %#p %#p\n",
-//		page->pa, getcallerpc(&page),
-//		page->pa, KADDR(page->pa));
+	DBG("kmap(%#llux) @ %#p: %#p %#p\n",
+		page->pa, getcallerpc(&page),
+		page->pa, KADDR(page->pa));
 
 	return KADDR(page->pa);
 }
diff -r a54f7788991d -r 9adfd913d396 sys/src/nix/k10/mmu.c
--- a/sys/src/nix/k10/mmu.c	Mon Sep 19 17:06:50 2011 +0200
+++ b/sys/src/nix/k10/mmu.c	Mon Sep 19 17:39:00 2011 +0200
@@ -16,6 +16,8 @@
  * calculate and map up to TMFM (conf crap);
  */
 
+#define TMFM		(64*MiB)		/* kernel memory */
+
 #define PPN(x)		((x)&~(PGSZ-1))
 
 void
@@ -118,13 +120,6 @@
 	dumpptepg(4, m->pml4->pa);
 }
 
-/*
- * define this to disable the 4K page allocator.
- */
-#define PTPALLOC
-
-#ifdef PTPALLOC
-
 static Page mmuptpfreelist;
 
 static Page*
@@ -172,7 +167,6 @@
 
 	return page;
 }
-#endif /* PTPALLOC */
 
 void
 mmuswitch(Proc* proc)
@@ -217,16 +211,12 @@
 		next = page->next;
 		if(--page->ref)
 			panic("mmurelease: page->ref %d\n", page->ref);
-#ifdef PTPALLOC
 		lock(&mmuptpfreelist);
 		page->next = mmuptpfreelist.next;
 		mmuptpfreelist.next = page;
 		mmuptpfreelist.ref++;
 		page->prev = nil;
 		unlock(&mmuptpfreelist);
-#else
-		pagechainhead(page);
-#endif /* PTPALLOC */
 	}
 	if(proc->mmuptp[0] && pga.r.p)
 		wakeup(&pga.r);
@@ -251,9 +241,9 @@
 	Mpl pl;
 	uintmem pa;
 
-	DBG("up %#p mmuput %#p %#Px %#ux\n", up, va, pa, attr);
 	pa = pg->pa;
+	DBG("up %#p mmuput %#p %#Px %#ux\n", up, va, pa, attr);
 	assert(pg->pgszi >= 0);
 	pgsz = m->pgsz[pg->pgszi];
 	if(pa & (pgsz-1))
@@ -279,15 +269,8 @@
 			break;
 	}
 	if(page == nil){
-		if(up->mmuptp[0] == 0){
-#ifdef PTPALLOC
+		if(up->mmuptp[0] == nil)
 			page = mmuptpalloc();
-#else
-			page = newpage(1, 0, 0, PTSZ);
-			page->va = VA(kmap(page));
-			assert(page->pa == PADDR(UINT2PTR(page->va)));
-#endif /* PTPALLOC */
-		}
 		else {
 			page = up->mmuptp[0];
 			up->mmuptp[0] = page->next;
@@ -361,6 +344,7 @@
 	uintptr pae;
 	PTE *pd, *pde, *pt, *pte;
 	int pdx, pgsz;
+	Page *pg;
 
 	pd = (PTE*)(PDMAP+PDX(PDMAP)*4096);
 
@@ -380,25 +364,10 @@
 	}
 	else{
 		if(*pde == 0){
-/*
- * XXX: Shouldn't this be mmuptpalloc()?
- */
-			/*
-			 * Need a PTSZ physical allocator here.
-			 * Because space will never be given back
-			 * (see vunmap below), just malloc it so
-			 * Ron can prove a point.
-			 *pde = pmalloc(PTSZ)|PteRW|PteP;
-			 */
-			void *alloc;
-
-			alloc = mallocalign(PTSZ, PTSZ, 0, 0);
-			if(alloc != nil){
-				*pde = PADDR(alloc)|PteRW|PteP;
-//print("*pde %#llux va %#p\n", *pde, va);
-				memset((PTE*)(PDMAP+pdx*4096), 0, 4096);
-
-			}
+			pg = mmuptpalloc();
+			assert(pg != nil && pg->pa != 0);
+			*pde = pg->pa|PteRW|PteP;
+			memset((PTE*)(PDMAP+pdx*4096), 0, 4096);
 		}
 
 		assert(*pde != 0);
@@ -481,6 +450,16 @@
 	return 0;
 }
 
+/*
+ * KSEG0 maps low memory.
+ * KSEG2 maps almost all memory, but starting at an address determined
+ * by the address space map (see asm.c).
+ * Thus, almost everything in physical memory is already mapped, but
+ * there are things that fall in the gap
+ * (acpi tables, device memory-mapped registers, etc.)
+ * for those things, we also want to disable caching.
+ * vmap() is required to access them.
+ */
 void*
 vmap(uintptr pa, usize size)
 {
@@ -673,7 +652,6 @@
 	 *
 	 * This is set up here so meminit can map appropriately.
 	 */
-#define TMFM		(64*MiB)		/* kernel memory */
 	o = sys->pmstart;
 	sz = ROUNDUP(o, 4*MiB) - o;
 	pa = asmalloc(0, sz, 1, 0);
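
The core of the cleanup is that every page-table page now comes from one place, mmuptpalloc(), and mmurelease() returns pages to the same locked free list instead of the general page chain. Below is a minimal user-space sketch of that free-list pattern under stated assumptions: the names ptpalloc/ptpfree and the simplified Page struct are hypothetical stand-ins (the kernel's Page has more fields), malloc/calloc replace the kernel's physical allocator, and the lock()/unlock() bracketing the list in the kernel is omitted.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

enum { PTSZ = 4096 };	/* size of one page-table page */

/* Hypothetical, simplified stand-in for the kernel's Page. */
typedef struct Page Page;
struct Page {
	Page *next;
	unsigned ref;	/* on the list head: count of cached free pages */
	void *va;	/* PTSZ bytes of page-table memory */
};

static Page ptpfreelist;	/* free-list head, like mmuptpfreelist */

/* Pop a cached page if one exists, otherwise allocate a zeroed one. */
static Page*
ptpalloc(void)
{
	Page *p;

	p = ptpfreelist.next;
	if(p != NULL){
		ptpfreelist.next = p->next;
		ptpfreelist.ref--;
		memset(p->va, 0, PTSZ);	/* a reused table must start empty */
	}else{
		p = malloc(sizeof *p);
		assert(p != NULL);
		p->va = calloc(1, PTSZ);
		assert(p->va != NULL);
	}
	p->next = NULL;
	p->ref = 1;
	return p;
}

/* Drop the last reference and push the page back, as mmurelease does. */
static void
ptpfree(Page *p)
{
	p->ref--;
	assert(p->ref == 0);	/* the kernel panics here instead */
	p->next = ptpfreelist.next;
	ptpfreelist.next = p;
	ptpfreelist.ref++;
}
```

The design point mirrored here is that page-table pages are never handed back to the general allocator: once a page has held tables it stays on this list, so a later ptpalloc() can reuse it after zeroing, which is why the patch can also call mmuptpalloc() from vmap() without a separate PTSZ physical allocator.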