ARM32: Translating Between Physical and Virtual Addresses

Chapter 1: Virtual Memory Layout and Common Macro Definitions

1.1 Memory Layout

The virtual memory layout of ARM Linux is described in the kernel documentation; it differs somewhat from x86.

Some of the address ranges were found to be unused on the 9850K project and were greyed out in the original table.

Kernel/documentation/arm/memory.txt

Start          End            Use
--------------------------------------------------------------------------
ffff8000       ffffffff       copy_user_page / clear_user_page use.

ffff4000       ffffffff       cache aliasing on ARMv6 and later CPUs

ffff1000       ffff7fff       Reserved.
                              Platforms must not use this address range.

ffff0000       ffff0fff       CPU vector page.
                              The CPU vectors are mapped here if the CPU
                              supports vector relocation (control register
                              V bit).

fffe0000       fffeffff       XScale cache flush area.  This is used in
                              proc-xscale.S to flush the whole data cache.
                              (XScale does not have TCM.)

fffe8000       fffeffff       DTCM mapping area for platforms with DTCM
                              mounted inside the CPU.

fffe0000       fffe7fff       ITCM mapping area for platforms with ITCM
                              mounted inside the CPU.

ffc00000       ffefffff       Fixmap mapping region.  Addresses provided
                              by fix_to_virt() will be located here.

fee00000       feffffff       Mapping of PCI I/O space. This is a static
                              mapping within the vmalloc space.

VMALLOC_START  VMALLOC_END-1  vmalloc() / ioremap() space.
                              Memory returned by vmalloc/ioremap will be
                              dynamically placed in this region.  Machine
                              specific static mappings are also located
                              here through iotable_init().  VMALLOC_START
                              is based upon the value of the high_memory
                              variable, and VMALLOC_END is equal to
                              0xff800000.

PAGE_OFFSET    high_memory-1  Kernel direct-mapped RAM region.
                              This maps the platforms RAM, and typically
                              maps all platform RAM in a 1:1 relationship.

PKMAP_BASE     PAGE_OFFSET-1  Permanent kernel mappings.
                              One way of mapping HIGHMEM pages into kernel
                              space.

MODULES_VADDR  MODULES_END-1  Kernel module space.
                              Kernel modules inserted via insmod are
                              placed here using dynamic mappings.

00001000       TASK_SIZE-1    User space mappings.
                              Per-thread mappings are placed here via
                              the mmap() system call.

00000000       00000fff       CPU vector page / null pointer trap.
                              CPUs which do not support vector remapping
                              place their vector page here.  NULL pointer
                              dereferences by both the kernel and user
                              space are also caught via this mapping.

Rendering the table above in graphical form:

This is corroborated by the boot log:

[    0.000000] c0 Virtual kernel memory layout:
    vector  : 0xffff0000 - 0xffff1000   (    4 kB)
    fixmap  : 0xffc00000 - 0xfff00000   ( 3072 kB)
    vmalloc : 0xf0800000 - 0xff800000   (  240 MB)
    lowmem  : 0xc0000000 - 0xf0000000   (  768 MB)
    pkmap   : 0xbfe00000 - 0xc0000000   (    2 MB)
    modules : 0xbf000000 - 0xbfe00000   (   14 MB)
      .text : 0xc0008000 - 0xc0b00000   (11232 kB)
      .init : 0xc1000000 - 0xc1400000   ( 4096 kB)
      .data : 0xc1400000 - 0xc14c76a4   (  798 kB)
       .bss : 0xc14c76a4 - 0xc1cc0434   ( 8164 kB)

1.2 Basic Memory Address Concepts

User virtual addresses

These are the regular addresses seen by user-space programs. User addresses are either 32 or 64 bits long, depending on the hardware architecture, and each process has its own virtual address space. My understanding is that on the 9850K the user virtual address range is:

00001000 ~ TASK_SIZE-1

Physical addresses

These addresses are used between the processor and the system's memory. Physical addresses are also 32- or 64-bit quantities; in some cases even 32-bit systems can use 64-bit physical memory.

Kernel logical addresses

Kernel logical addresses make up the kernel's regular address space. They map some portion (or all) of memory and are often treated as if they were physical addresses. On most architectures, a logical address and its associated physical address differ only by a constant offset. Logical addresses use the hardware's native pointer size, so on 32-bit systems with a lot of memory installed they cannot address all of physical memory. Logical addresses are usually stored in variables of type unsigned long or void *. Memory returned by kmalloc has a kernel logical address.

Kernel virtual addresses

Kernel virtual addresses are similar to logical addresses in that they map a kernel-space address to a physical address. The mapping from a kernel virtual address to a physical address, however, need not be linear and one-to-one, which is what characterizes logical addresses. All logical addresses are kernel virtual addresses, but many kernel virtual addresses are not logical addresses. For example, memory allocated by vmalloc has a virtual address (but no direct physical mapping), and the kmap function (described later in this chapter) also returns virtual addresses. Virtual addresses are usually stored in pointer variables.

Linear addresses

My own understanding: a linear address is a virtual address that corresponds to a physical address in a linear fashion, though this linearity is not a one-to-one mapping. In that sense the whole 0-4G virtual address range can be regarded as linear addresses.

Low memory

Memory for which logical addresses exist in kernel space.

High memory

Memory for which logical addresses do not exist, because it lies beyond the address range set aside for kernel virtual addresses.

My understanding of high memory on the 9850K: for physical addresses above 768 MB, i.e. above 0xb0000000, the top two bits of page.flags are 01 and the pages are allocated in the high zone. If the phone has less than 768 MB of physical memory, high memory is generally not configured.

1.3 Common Linux Memory-Management Macros

The Linux kernel software architecture is conventionally split into a hardware-dependent layer and a hardware-independent layer. For page-table management, up to and including 2.6.10 the hardware-independent layer used a three-level page-directory scheme, regardless of whether the underlying hardware also implemented three levels. From 2.6.11 on, to accommodate 64-bit CPU architectures, the hardware-independent layer switched to a four-level scheme.

The 9850K is a 32-bit system using two-level mapping, so the Linux hardware-independent layer effectively performs a two-level walk, folding away pud and pmd. The macros below are the implementations in the sp9850ka_1h10 project on the sprdroid7.0_trunk_k44_17b (kernel 4.4) branch:

kernel/arch/arm/include/asm/pgtable.h

/*
 * Just any arbitrary offset to the start of the vmalloc VM area: the
 * current 8MB value just means that there will be a 8MB "hole" after the
 * physical memory until the kernel virtual memory starts.  That means that
 * any out-of-bounds memory accesses will hopefully be caught.
 * The vmalloc() routines leaves a hole of 4kB between each vmalloced
 * area for the same reason. ;)
 */

#define VMALLOC_OFFSET		(8*1024*1024)
#define VMALLOC_START		(((unsigned long)high_memory + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1))	// 4034920448 = 0xF0800000, with high_memory = 0xf0000000
#define VMALLOC_END		0xff800000UL

#define LIBRARY_TEXT_START	0x0c000000

#include <asm-generic/pgtable-nopud.h>
#include <asm/memory.h>
#include <asm/pgtable-hwdef.h>
#include <asm-generic/pgtable.h>

/* to find an entry in a page-table-directory */
#define pgd_index(addr)		((addr) >> PGDIR_SHIFT)
#define pgd_offset(mm, addr)	((mm)->pgd + pgd_index(addr))

/* to find an entry in a kernel page-table-directory */
#define pgd_offset_k(addr)	pgd_offset(&init_mm, addr)

#define pmd_none(pmd)		(!pmd_val(pmd))

static inline pte_t *pmd_page_vaddr(pmd_t pmd)
{
	return __va(pmd_val(pmd) & PHYS_MASK & (s32)PAGE_MASK);
}

#define pmd_page(pmd)		pfn_to_page(__phys_to_pfn(pmd_val(pmd) & PHYS_MASK))	// PHYS_MASK is 0xffffffff

#define __pte_map(pmd)		(pte_t *)kmap_atomic(pmd_page(*(pmd)))

#define pte_index(addr)		(((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))

#define pte_offset_kernel(pmd,addr)	(pmd_page_vaddr(*(pmd)) + pte_index(addr))	// my take: direct-mapped addresses have no pte; this macro serves user-space or highmem addresses whose pte tables live in the direct-mapped region

#define pte_offset_map(pmd,addr)	(__pte_map(pmd) + pte_index(addr))
#define pte_unmap(pte)			__pte_unmap(pte)

#define pte_pfn(pte)		((pte_val(pte) & PHYS_MASK) >> PAGE_SHIFT)
#define pfn_pte(pfn,prot)	__pte(__pfn_to_phys(pfn) | pgprot_val(prot))

#define pte_page(pte)		pfn_to_page(pte_pfn(pte))
#define mk_pte(page,prot)	pfn_pte(page_to_pfn(page), prot)

#define pte_none(pte)		(!pte_val(pte))				// true means no mapping has been set up for this entry yet
#define pte_present(pte)	(pte_isset((pte), L_PTE_PRESENT))	// tests whether the page is in memory; true means it is

kernel/arch/arm/include/asm/pgtable-2level.h

#define PTRS_PER_PTE		512
#define PTRS_PER_PMD		1
#define PTRS_PER_PGD		2048

#define PTE_HWTABLE_PTRS	(PTRS_PER_PTE)
#define PTE_HWTABLE_OFF		(PTE_HWTABLE_PTRS * sizeof(pte_t))
#define PTE_HWTABLE_SIZE	(PTRS_PER_PTE * sizeof(u32))

/*
 * PMD_SHIFT determines the size of the area a second-level page table can map
 * PGDIR_SHIFT determines what a third-level page table entry can map
 */
#define PMD_SHIFT		21
#define PGDIR_SHIFT		21

#define PMD_SIZE		(1UL << PMD_SHIFT)
#define PMD_MASK		(~(PMD_SIZE-1))
#define PGDIR_SIZE		(1UL << PGDIR_SHIFT)	// 2^21, i.e. 2MB
#define PGDIR_MASK		(~(PGDIR_SIZE-1))

/*
 * section address mask and size definitions.
 */
#define SECTION_SHIFT		20
#define SECTION_SIZE		(1UL << SECTION_SHIFT)
#define SECTION_MASK		(~(SECTION_SIZE-1))

#define USER_PTRS_PER_PGD	(TASK_SIZE / PGDIR_SIZE)	// 0xbf0/2 = 0x5f8; times 8 bytes per entry gives 0x2fc0

/*
 * The "pud_xxx()" functions here are trivial when the pmd is folded into
 * the pud: the pud entry is never bad, always exists, and can't be set or
 * cleared.
 */
#define pud_none(pud)		(0)
#define pud_bad(pud)		(0)
#define pud_present(pud)	(1)
#define pud_clear(pudp)		do { } while (0)
#define set_pud(pud,pudp)	do { } while (0)

static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
{
	return (pmd_t *)pud;
}

kernel/arch/arm/include/asm/page.h      // with an MMU, asm-generic/page.h is not used

/* PAGE_SHIFT determines the page size */
#define PAGE_SHIFT		12
#define PAGE_SIZE		(_AC(1,UL) << PAGE_SHIFT)
#define PAGE_MASK		(~((1 << PAGE_SHIFT) - 1))	// 0xfffff000
#define PAGE_OFFSET		UL(CONFIG_PAGE_OFFSET)		// 0xc0000000

#ifndef CONFIG_MMU
#include <asm/page-nommu.h>
#else
#ifdef CONFIG_ARM_LPAE
#include <asm/pgtable-3level-types.h>
#else
#include <asm/pgtable-2level-types.h>
#endif
#endif /* CONFIG_MMU */

#include <asm/memory.h>

kernel/arch/arm/include/asm/fixmap.h

#define FIXADDR_START		0xffc00000UL
#define FIXADDR_END		0xfff00000UL
#define FIXADDR_TOP		(FIXADDR_END - PAGE_SIZE)

kernel/arch/arm/include/asm/highmem.h

/* start after fixmap area */
#define PKMAP_BASE		(PAGE_OFFSET - PMD_SIZE)	// 0xBFE00000

/* PKMAP size is LAST_PKMAP * PAGE_SIZE, i.e. 512 * (1<<12) = 2MB.
   The 32-bit x86 definition differs (kernel/arch/x86/include/asm/pgtable_32_types.h):
   #define PKMAP_BASE ((FIXADDR_START - PAGE_SIZE * (LAST_PKMAP + 1)) & PMD_MASK)
   Hence most diagrams online show PKMAP_BASE after VMALLOC_END, but on ARM it
   sits just below PAGE_OFFSET. */

#define LAST_PKMAP		PTRS_PER_PTE			// 512
#define LAST_PKMAP_MASK		(LAST_PKMAP - 1)
#define PKMAP_NR(virt)		(((virt) - PKMAP_BASE) >> PAGE_SHIFT)
#define PKMAP_ADDR(nr)		(PKMAP_BASE + ((nr) << PAGE_SHIFT))
#define kmap_prot		PAGE_KERNEL

kernel/arch/arm/include/asm/pgtable-2level-types.h

typedef u32 pteval_t;
typedef u32 pmdval_t;

typedef pteval_t pte_t;
typedef pmdval_t pmd_t;
typedef pmdval_t pgd_t[2];
typedef pteval_t pgprot_t;

#define pte_val(x)	(x)
#define pmd_val(x)	(x)
#define pgd_val(x)	((x)[0])
#define pgprot_val(x)	(x)

#define __pte(x)	(x)
#define __pmd(x)	(x)
#define __pgprot(x)	(x)

kernel/include/asm-generic/memory_model.h

/* Conversion between struct page and pfn; note that struct page and
   physical pages correspond one to one. */

/*
 * Convert a physical address to a Page Frame Number and back
 */
#define __phys_to_pfn(paddr)	((unsigned long)((paddr) >> PAGE_SHIFT))
#define __pfn_to_phys(pfn)	PFN_PHYS(pfn)

#define page_to_pfn		__page_to_pfn
#define pfn_to_page		__pfn_to_page

#define __pfn_to_page(pfn)	(mem_map + ((pfn) - ARCH_PFN_OFFSET))
#define __page_to_pfn(page)	((unsigned long)((page) - mem_map) + \
				 ARCH_PFN_OFFSET)	// pointer subtraction: the result is the number of elements between the two pointers
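As a quick sanity check, these conversions can be replayed against the log values that appear later in section 2.4 (mem_map = 0xeeffa000, page = 0xef59b4c0, pfn = 0xad0a6). The 32-byte struct page size used below is inferred from those numbers, not taken from the source:

unsigned long pfn = (0xef59b4c0UL - 0xeeffa000UL) / 32	/* (page - mem_map) in elements = 0x2d0a6 */
		    + 0x80000;				/* + ARCH_PFN_OFFSET -> 0xad0a6, matching the log */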

kernel/include/asm-generic/pgtable-nopud.h

typedef struct { pgd_t pgd; } pud_t;

#define PUD_SHIFT	PGDIR_SHIFT
#define PTRS_PER_PUD	1
#define PUD_SIZE	(1UL << PUD_SHIFT)
#define PUD_MASK	(~(PUD_SIZE-1))

/*
 * The "pgd_xxx()" functions here are trivial for a folded two-level
 * setup: the pud is never bad, and a pud always exists (as it's folded
 * into the pgd entry)
 */
static inline int pgd_none(pgd_t pgd)		{ return 0; }
static inline int pgd_bad(pgd_t pgd)		{ return 0; }
static inline int pgd_present(pgd_t pgd)	{ return 1; }
static inline void pgd_clear(pgd_t *pgd)	{ }

#define pud_ERROR(pud)			(pgd_ERROR((pud).pgd))

#define pgd_populate(mm, pgd, pud)	do { } while (0)

/*
 * (puds are folded into pgds so this doesn't get actually called,
 * but the define is needed for a generic inline function.)
 */
#define set_pgd(pgdptr, pgdval)	set_pud((pud_t *)(pgdptr), (pud_t) { pgdval })

static inline pud_t *pud_offset(pgd_t *pgd, unsigned long address)
{
	return (pud_t *)pgd;
}

#define pud_val(x)	(pgd_val((x).pgd))
#define __pud(x)	((pud_t) { __pgd(x) })

#define pgd_page(pgd)		(pud_page((pud_t){ pgd }))	// these two macros also look off; it seems like an upstream bug and they ought to be removed
#define pgd_page_vaddr(pgd)	(pud_page_vaddr((pud_t){ pgd }))

kernel/arch/arm/include/asm/memory.h

/*
 * TASK_SIZE - the maximum size of a user space task.
 * TASK_UNMAPPED_BASE - the lower boundary of the mmap VM area
 */
#define TASK_SIZE		(UL(CONFIG_PAGE_OFFSET) - UL(SZ_16M))
#define TASK_UNMAPPED_BASE	ALIGN(TASK_SIZE / 3, SZ_16M)

#define ARCH_PFN_OFFSET		PHYS_PFN_OFFSET

/*
 * Convert a page to/from a physical address
 */
#define page_to_phys(page)	(__pfn_to_phys(page_to_pfn(page)))
#define phys_to_page(phys)	(pfn_to_page(__phys_to_pfn(phys)))

#elif defined(CONFIG_ARM_PATCH_PHYS_VIRT)

/*
 * Constants used to force the right instruction encodings and shifts
 * so that all we need to do is modify the 8-bit constant field.
 */
#define __PV_BITS_31_24	0x81000000
#define __PV_BITS_7_0	0x81

extern unsigned long __pv_phys_pfn_offset;
extern u64 __pv_offset;
extern void fixup_pv_table(const void *, unsigned long);
extern const void *__pv_table_begin, *__pv_table_end;

#define PHYS_OFFSET	((phys_addr_t)__pv_phys_pfn_offset << PAGE_SHIFT)	// 0x80000000
#define PHYS_PFN_OFFSET	(__pv_phys_pfn_offset)					// 0x80000

#define virt_to_pfn(kaddr) \
	((((unsigned long)(kaddr) - PAGE_OFFSET) >> PAGE_SHIFT) + \
	 PHYS_PFN_OFFSET)

#define __pv_stub(from,to,instr,type)			\
	__asm__("@ __pv_stub\n"				\
	"1:	" instr "	%0, %1, %2\n"		\
	"	.pushsection .pv_table,\"a\"\n"		\
	"	.long	1b\n"				\
	"	.popsection\n"				\
	: "=r" (to)					\
	: "r" (from), "I" (type))

#define __pv_stub_mov_hi(t)				\
	__asm__ volatile("@ __pv_stub_mov\n"		\
	"1:	mov	%R0, %1\n"			\
	"	.pushsection .pv_table,\"a\"\n"		\
	"	.long	1b\n"				\
	"	.popsection\n"				\
	: "=r" (t)					\
	: "I" (__PV_BITS_7_0))

#define __pv_add_carry_stub(x, y)			\
	__asm__ volatile("@ __pv_add_carry_stub\n"	\
	"1:	adds	%Q0, %1, %2\n"			\
	"	adc	%R0, %R0, #0\n"			\
	"	.pushsection .pv_table,\"a\"\n"		\
	"	.long	1b\n"				\
	"	.popsection\n"				\
	: "+r" (y)					\
	: "r" (x), "I" (__PV_BITS_31_24)		\
	: "cc")

static inline phys_addr_t __virt_to_phys(unsigned long x)
{
	phys_addr_t t;

	if (sizeof(phys_addr_t) == 4) {
		__pv_stub(x, t, "add", __PV_BITS_31_24);
	} else {
		__pv_stub_mov_hi(t);
		__pv_add_carry_stub(x, t);
	}
	return t;
}

static inline unsigned long __phys_to_virt(phys_addr_t x)
{
	unsigned long t;

	/*
	 * 'unsigned long' cast discard upper word when
	 * phys_addr_t is 64 bit, and makes sure that inline
	 * assembler expression receives 32 bit argument
	 * in place where 'r' 32 bit operand is expected.
	 */
	__pv_stub((unsigned long) x, t, "sub", __PV_BITS_31_24);
	return t;
}

#else

/*
 * These are *only* valid on the kernel direct mapped RAM memory.
 * Note: Drivers should NOT use these.  They are the wrong
 * translation for translating DMA addresses.  Use the driver
 * DMA support - see dma-mapping.h.
 */
#define virt_to_phys virt_to_phys
static inline phys_addr_t virt_to_phys(const volatile void *x)
{
	return __virt_to_phys((unsigned long)(x));
}

#define phys_to_virt phys_to_virt
static inline void *phys_to_virt(phys_addr_t x)
{
	return (void *)__phys_to_virt(x);
}

/*
 * Drivers should NOT use these either.
 */
#define __pa(x)			__virt_to_phys((unsigned long)(x))
#define __va(x)			((void *)__phys_to_virt((phys_addr_t)(x)))
#define pfn_to_kaddr(pfn)	__va((phys_addr_t)(pfn) << PAGE_SHIFT)

/*
 * Conversion between a struct page and a physical address.
 *
 * page_to_pfn(page)	convert a struct page * to a PFN number
 * pfn_to_page(pfn)	convert a _valid_ PFN number to struct page *
 *
 * virt_to_page(k)	convert a _valid_ virtual address to struct page *
 * virt_addr_valid(k)	indicates whether a virtual address is valid
 */
#define ARCH_PFN_OFFSET		PHYS_PFN_OFFSET

#define virt_to_page(kaddr)	pfn_to_page(virt_to_pfn(kaddr))
#define virt_addr_valid(kaddr)	(((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory) \
				 && pfn_valid(virt_to_pfn(kaddr)))

Chapter 2: Virtual-to-Physical Address Translation

2.1 MMU Hardware VA-to-PA Translation

The 9850K's CPU is a Cortex-A7.

The Cortex-A7 MPCore processor implements the Extended VMSAv7 MMU, which includes the ARMv7-A Virtual Memory System Architecture (VMSA), the Security Extensions, the Large Physical Address Extension (LPAE), and the Virtualization Extensions.

VMSAv7 defines two alternative translation table formats:

Short-descriptor format

This is the original format defined in issue A of this Architecture Reference Manual, and is the only format supported on implementations that do not include the Large Physical Address Extension. It uses 32-bit descriptor entries in the translation tables, and provides:

• Up to two levels of address lookup.
• 32-bit input addresses.
• Output addresses of up to 40 bits.
• Support for PAs of more than 32 bits by use of supersections, with 16MB granularity.
• Support for No access, Client, and Manager domains.
• 32-bit table entries.

Long-descriptor format

The Large Physical Address Extension adds support for this format. It uses 64-bit descriptor entries in the translation tables, and provides:

• Up to three levels of address lookup.
• Input addresses of up to 40 bits, when used for stage 2 translations.
• Output addresses of up to 40 bits.
• 4KB assignment granularity across the entire PA range.
• No support for domains, all memory regions are treated as in a Client domain.
• 64-bit table entries.
• Fixed 4KB table size, unless truncated by the size of the input address space.

The 9850K project is a 32-bit system, so it uses the short-descriptor format: two-level MMU mapping, with LPAE disabled. The short-descriptor translation table format supports both section mappings and page mappings.

The first-level page-table descriptor format is shown in the figure below:

The last two bits identify the descriptor type:

0b00, Invalid

The associated VA is unmapped, and any attempt to access it generates a Translation fault. Software can use bits[31:2] of the descriptor for its own purposes, because the hardware ignores these bits.

0b01, Page table

The descriptor gives the address of a second-level translation table, that specifies the mapping of the associated 1MByte VA range.

0b10, Section or Supersection

The descriptor gives the base address of the Section or Supersection. Bit[18] determines whether the entry describes a Section or a Supersection. If the implementation supports the PXN attribute, this encoding also defines the PXN bit as 0.

0b11, Section or Supersection, if the implementation supports the PXN attribute

If an implementation supports the PXN attribute, this encoding is identical to 0b10, except that it defines the PXN bit as 1.

0b11, Reserved, UNK/SBZP, if the implementation does not support the PXN attribute

An attempt to access the associated VA generates a Translation fault. On an implementation that does not support the PXN attribute, this encoding must not be used.

The address-translation flow for a section mapping, which in code corresponds to kernel logical address translation:

The address-translation flow for a second-level small-page mapping:

Figure B3-11 Small page address translation

Note that the Translation Table Base Register in the figure above holds a physical address:

#define cpu_switch_mm(pgd,mm)	cpu_do_switch_mm(virt_to_phys(pgd), mm)	// switched to a physical address here

In addition, the upper 22 bits of a first-level descriptor are address bits: on the MMU side one first-level entry covers 256 ptes, so the descriptor's low 10 bits are left for second-level table offsets. The Linux code differs: Linux arranges the page tables of one pgd entry to fill exactly one page (512 Linux entries plus 512 h/w entries), so the low 12 bits are left for second-level indexing, and the valid address in *pgd is the upper 20 or 21 bits.

As can be seen, both section translation and page translation resolve 1MB of address space at the first level, while page translation's second-level entries resolve 4K pages. Besides the physical address, many spare bits in both first- and second-level entries control access permissions and attributes of the mapped region, chiefly the AP (access permission) bits and the cache attribute bits. See the MMU manual or pgtable-2level-hwdef.h for details.

pgtable-2level-hwdef.h

/*
 * + Level 1 descriptor (PMD) +
 *
 * These are the hardware-defined first-level descriptors. When the crash
 * tool does vtop it takes the top 12 bits of vaddr plus base_pgd and reads
 * that, not the top 11 bits that our software pgd_offset uses. The kernel
 * code seems not to use these macros directly, but the crash tool does,
 * and the pmd_bad check in the code appears to be related.
 */
#define PMD_TYPE_MASK		(_AT(pmdval_t, 3) << 0)
#define PMD_TYPE_FAULT		(_AT(pmdval_t, 0) << 0)	// the VA range is unmapped; an access generates a Translation fault
#define PMD_TYPE_TABLE		(_AT(pmdval_t, 1) << 0)	// page-table mapping
#define PMD_TYPE_SECT		(_AT(pmdval_t, 2) << 0)	// section mapping, effectively without a pte

Also, as the second-level small-page translation flow shows, the last two bits of a second-level descriptor distinguish large pages from small pages. The code defines these as well:

pgtable-2level-hwdef.h

/*
 * + Level 2 descriptor (PTE) +
 *
 * This is the h/w pte: after computing the Linux pte, offset by 2048 to
 * reach the hardware pte; its value is the second-level descriptor shown
 * here. In code: (long long)pte_val(pte[PTE_HWTABLE_PTRS])
 */
#define PTE_TYPE_MASK		(_AT(pteval_t, 3) << 0)
#define PTE_TYPE_FAULT		(_AT(pteval_t, 0) << 0)	// the address is unmapped; an access generates a Translation fault
#define PTE_TYPE_LARGE		(_AT(pteval_t, 1) << 0)	// 64KB large page
#define PTE_TYPE_SMALL		(_AT(pteval_t, 2) << 0)	// 4KB small page
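As a hedged illustration, this is how the type fields decode the descriptors that appear in the section 2.4 logs. Note that for small pages the hardware uses bit[0] as the XN bit, so the small-page test is on bit[1] rather than an exact match against PTE_TYPE_SMALL:

pmdval_t pmd = 0xfc432835;	/* first-level descriptor from the section 2.4 log */
pteval_t pte = 0xfa1d475f;	/* second-level descriptor from the same log */

if ((pmd & PMD_TYPE_MASK) == PMD_TYPE_TABLE)	/* 0x...35 & 3 == 1: page-table mapping */
	pr_info("first level: page table, walk level 2\n");

if (pte & PTE_TYPE_SMALL)	/* bit[1] set: 4KB small page; bit[0] here is XN */
	pr_info("second level: 4KB small page\n");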

In summary, MMU-level address translation can be understood with the help of the figure below:

2.2 Linux Kernel Virtual Address Translation

The (mainline) Linux page tables on ARM use first-level section mapping combined with second-level small-page tables to cover the 4G space. To accommodate 64-bit CPU architectures, Linux adopted a four-level paging model from 2.6.11 onward (so as to suit both 32-bit and 64-bit systems):

PGD (Page Global Directory)
PUD (Page Upper Directory)
PMD (Page Middle Directory)
PT  (Page Table)

Linux's software levels differ slightly from the MMU levels described above. In Linux, the second-level mapping uses 4K small pages as the smallest unit. The 4KB page size means the low 12 bits of a virtual address serve as the in-page offset, and the low 12 bits of a second-level descriptor serve as software flag bits; it also means the virtual address space maps to at most 4GB/4KB = 1024 x 1024 pages. The following macros define the page size:

Kernel/arch/arm/include/asm/page.h

#define PAGE_SHIFT 12

#define PAGE_SIZE (1UL << PAGE_SHIFT)

Now consider how Linux slightly rearranges the MMU hardware page tables; the adjusted Linux page-table layout:

From this the following macro definitions follow:

kernel/arch/arm/include/asm/pgtable-2level.h

#define PTRS_PER_PTE 512    // each last-level page table (PT) holds 512 entries (9 bits)
#define PTRS_PER_PMD 1      // the middle-level PMD is folded away, equivalent to the last-level PT
#define PTRS_PER_PGD 2048   // the global page directory holds 2048 entries (11 bits)

To summarize: under ARM, physical and virtual memory are paged at 4KB granularity with a two-level index. The first-level global PGD holds 2048 entries, each containing the physical base address of a second-level table. There are at most 2048 second-level tables (PMD or PT); each PT holds 512 Page Table Entries (PTEs), and each PTE points to the physical base address of one page.
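A minimal sketch of how a 32-bit virtual address splits under this Linux view (2048 PGD entries x 512 PTEs x 4KB pages), using the user address from the example in section 2.4:

unsigned long vaddr   = 0xb5f00250;		/* example VA from section 2.4 */
unsigned long pgd_idx = vaddr >> 21;		/* bits[31:21] = 0x5af, matching "pgg_index=5af" in the log */
unsigned long pte_idx = (vaddr >> 12) & 0x1ff;	/* bits[20:12] = 0x100, matching "pte-index=100" */
unsigned long offset  = vaddr & 0xfff;		/* bits[11:0]  = 0x250 */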

The Linux virtual-address translation format; keep in mind this is the Linux software definition, and the actual MMU differs slightly.

2.3 Virtual-to-Physical Translation in Code

2.3.1 Address Resolution with the crash Tool

In the conversions observed so far, the crash tool computes the Linux pte: the upper 20 bits of *pgd plus the pte index.

 

1) Kernel virtual addresses (above MODULES_VADDR, bf000000), including first-level section mappings.

Section mapping:

pgd = c0004000 + (vaddr >> 20) * 4
paddr = (*pgd & SECTION_MASK) + (vaddr & ~SECTION_MASK)    // the section base is bits[31:20], hence SECTION_MASK

Second-level page-table mapping:

pgd = c0004000 + (vaddr >> 20) * 4
pte = (*pgd & PAGE_MASK) + pte_offset(vaddr)    // usually the upper 20 bits of *pgd are taken, and adding yields the Linux pte; taking the upper 21 bits also works and yields the h/w pte directly. The pte index is the middle 9 bits, bits[20:12]; adding 2048 = 0x800 and reading gives the ARM pte.
paddr = (rd -p pte) + (vaddr & ~PAGE_MASK)

2) User-space addresses (below MODULES_VADDR, bf000000), mapped through second-level page tables:

pgd = mm.pgd + (vaddr >> 20) * 4
pte = (*pgd & PAGE_MASK) + pte_offset(vaddr)    // as above
paddr = (rd -p pte) + (vaddr & ~PAGE_MASK)
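The formulas above can be folded into a small walker. This is only a sketch: read_phys32() is a hypothetical helper standing in for crash's rd -p (read one 32-bit word at a physical address), and error handling is omitted:

extern unsigned long read_phys32(unsigned long paddr);	/* hypothetical, stands in for "rd -p" */

static unsigned long vtop_2level(unsigned long pgd_base, unsigned long vaddr)
{
	unsigned long desc = read_phys32(pgd_base + (vaddr >> 20) * 4);

	if ((desc & 3) == 2)	/* section descriptor: base is bits[31:20] */
		return (desc & 0xfff00000) + (vaddr & 0x000fffff);

	/* page-table descriptor: the upper 20 bits land on the page holding
	 * the Linux tables; add the 9-bit Linux pte index * 4 to reach the
	 * Linux pte, whose upper 20 bits give the physical page */
	desc = read_phys32((desc & 0xfffff000) + (((vaddr >> 12) & 0x1ff) << 2));

	return (desc & 0xfffff000) + (vaddr & 0xfff);
}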

2.3.2 Kernel Code Flow

1) Kernel lowmem addresses:

For kernel logical addresses located in the kernel direct-mapped region (allocated with kmalloc() or __get_free_pages()), i.e. virtual addresses covered by first-level section mappings, virt_to_phys() and phys_to_virt() convert between physical and kernel logical addresses.

The code is implemented in assembly and there is no explicit formula to point at, but since PAGE_OFFSET = 0xc0000000 and PHYS_OFFSET = 0x80000000 it reduces to:

paddr = vaddr - 0x40000000
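A minimal check of that formula, using the PAGE_OFFSET and PHYS_OFFSET values from the macros in chapter 1:

/* paddr = vaddr - PAGE_OFFSET + PHYS_OFFSET = vaddr - 0x40000000 */
phys_addr_t paddr = virt_to_phys((void *)0xc0001000);	/* expect 0x80001000 */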

#define virt_to_phys virt_to_phys
static inline phys_addr_t virt_to_phys(const volatile void *x)
{
	return __virt_to_phys((unsigned long)(x));
}

#define phys_to_virt phys_to_virt
static inline void *phys_to_virt(phys_addr_t x)
{
	return (void *)__phys_to_virt(x);
}

/*
 * Drivers should NOT use these either.
 */
#define __pa(x)	__virt_to_phys((unsigned long)(x))
#define __va(x)	((void *)__phys_to_virt((phys_addr_t)(x)))

2) User-space addresses and kernel highmem addresses

These are addresses mapped through second-level page tables; the conversion can be summarized as:

pgd = mm.pgd + (vaddr >> 21) * 8    (for kernel highmem addresses mm.pgd is 0xc0004000)
page = mem_map + (unsigned int)((*pgd >> PAGE_SHIFT) - 0x80000)
pt_vaddr = (pte_t *)kmap_atomic(page);
pte = pt_vaddr + pte_offset(vaddr)    // pte_offset takes the middle 9 bits, bits[20:12]; adding 2048 = 0x800 to the computed pte and reading gives the ARM pte
paddr = (*pte & PAGE_MASK) + (vaddr & ~PAGE_MASK)

/*
 * This is useful to dump out the page tables associated with
 * 'addr' in mm 'mm'.
 */
void show_pte(struct mm_struct *mm, unsigned long addr)
{
	pgd_t *pgd;

	if (!mm)
		mm = &init_mm;

	pr_alert("pgd = %p\n", mm->pgd);
	pgd = pgd_offset(mm, addr);		// pgd itself is still a virtual address; the value *pgd is a physical address
	pr_alert("[%08lx] *pgd=%08llx", addr, (long long)pgd_val(*pgd));

	do {
		pud_t *pud;
		pmd_t *pmd;
		pte_t *pte;

		if (pgd_none(*pgd))
			break;

		if (pgd_bad(*pgd)) {
			pr_cont("(bad)");
			break;
		}

		pud = pud_offset(pgd, addr);
		if (PTRS_PER_PUD != 1)
			pr_cont(", *pud=%08llx", (long long)pud_val(*pud));

		if (pud_none(*pud))
			break;

		if (pud_bad(*pud)) {
			pr_cont("(bad)");
			break;
		}

		pmd = pmd_offset(pud, addr);
		if (PTRS_PER_PMD != 1)
			pr_cont(", *pmd=%08llx", (long long)pmd_val(*pmd));

		if (pmd_none(*pmd))
			break;

		if (pmd_bad(*pmd)) {		// kernel lowmem (section-mapped) addresses report bad here and return
			pr_cont("(bad)");
			break;
		}

		/* We must not map this if we have highmem enabled */
		if (PageHighMem(pfn_to_page(pmd_val(*pmd) >> PAGE_SHIFT)))
			;			// break removed: even if the highmem page is unmapped, remap it below instead of bailing out

		pte = pte_offset_map(pmd, addr);
		pr_cont(", *pte=%08llx", (long long)pte_val(*pte));
#ifndef CONFIG_ARM_LPAE
		pr_cont(", *ppte=%08llx",
			(long long)pte_val(pte[PTE_HWTABLE_PTRS]));	// ARM pte
#endif
		if (!pte_none(*pte) && pte_present(*pte)) {
			printk("pa is %llx\n",
			       ((long long)pte_val(pte[PTE_HWTABLE_PTRS]) & 0xfffff000) | (addr & 0xfff));
		} else {
			printk("pte is not present\n");
		}
		pte_unmap(pte);
	} while (0);

	pr_cont("\n");
}
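A hedged usage sketch, relying on the function's own NULL fallback to init_mm:

show_pte(current->mm, 0xb5f00250);	/* user VA, as in the copy_from_user example in 2.4 */
show_pte(NULL, 0xf1d2f000);		/* vmalloc VA; a NULL mm falls back to init_mm */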

// pte_offset_map above ends up in the function below. When the physical address stored in the pgd/pmd lies in high memory, kmap_high_get is used to fetch the virtual address of the existing highmem mapping of the page containing that physical address; if no such mapping exists, one is created for that highmem page to obtain its virtual address.

Note that the highmem check here has nothing to do with the vaddr being translated; it is a check on the physical address *pmd.

void *kmap_atomic(struct page *page)
{
	unsigned int idx;
	unsigned long vaddr;
	void *kmap;
	int type;

	preempt_disable();
	pagefault_disable();
	if (!PageHighMem(page))
		return page_address(page);

#ifdef CONFIG_DEBUG_HIGHMEM
	/*
	 * There is no cache coherency issue when non VIVT, so force the
	 * dedicated kmap usage for better debugging purposes in that case.
	 */
	if (!cache_is_vivt())
		kmap = NULL;
	else
#endif
		kmap = kmap_high_get(page);
	if (kmap)
		return kmap;

	type = kmap_atomic_idx_push();

	idx = FIX_KMAP_BEGIN + type + KM_TYPE_NR * smp_processor_id();
	vaddr = __fix_to_virt(idx);
#ifdef CONFIG_DEBUG_HIGHMEM
	/*
	 * With debugging enabled, kunmap_atomic forces that entry to 0.
	 * Make sure it was indeed properly unmapped.
	 */
	BUG_ON(!pte_none(get_fixmap_pte(vaddr)));
#endif
	/*
	 * When debugging is off, kunmap_atomic leaves the previous mapping
	 * in place, so the contained TLB flush ensures the TLB is updated
	 * with the new mapping.
	 */
	set_fixmap_pte(idx, mk_pte(page, kmap_prot));

	return (void *)vaddr;
}
EXPORT_SYMBOL(kmap_atomic);

The page_address function returns the virtual address currently mapped for a given page structure. Page structures correspond one to one with pfns.

For low memory, the page's virtual address follows directly from the physical-address offset. High memory is more involved; roughly, the relationships in the diagram below yield the page's virtual address:

/**
 * page_address - get the mapped virtual address of a page
 * @page: &struct page to get the virtual address of
 *
 * Returns the page's virtual address.
 */
void *page_address(const struct page *page)
{
	unsigned long flags;
	void *ret;
	struct page_address_slot *pas;

	if (!PageHighMem(page))
		return lowmem_page_address(page);

	pas = page_slot(page);
	ret = NULL;
	spin_lock_irqsave(&pas->lock, flags);
	if (!list_empty(&pas->lh)) {
		struct page_address_map *pam;

		list_for_each_entry(pam, &pas->lh, list) {
			if (pam->page == page) {
				ret = pam->virtual;
				goto done;
			}
		}
	}
done:
	spin_unlock_irqrestore(&pas->lock, flags);
	return ret;
}
EXPORT_SYMBOL(page_address);

Determining whether a page is high memory or low memory:

/* Page flags: | [SECTION] | [NODE] | ZONE | [LAST_CPUPID] | ... | FLAGS | */
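Following the note in section 1.2 that on this configuration the top two bits of page->flags hold the zone and 01 means the highmem zone, a hedged sketch of the check (the helper name is made up; the real kernel uses PageHighMem()/page_zonenum()):

static int my_page_is_highmem(struct page *page)	/* hypothetical helper */
{
	/* matches the 2.4 logs: flags = 0x40000000 for the highmem page, 0 for the lowmem one */
	return ((page->flags >> 30) & 0x3) == 0x1;	/* zone bits 01 -> high zone */
}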

2.4 Translation Examples

First-level section-mapping conversion needs no worked example; the two examples here go through two-level mapping.

A vmalloc address: vaddr is f1d2f000 and *pgd = ad0a6811; that physical address is in low memory.

A direct offset therefore gives the virtual address of the page holding the table, pageaddress = ed0a6000; offsetting this virtual address by pte_index gives the virtual address that holds the pte, ed0a64bc.

Reading it, *pte gives the real physical descriptor e600a65f, a highmem physical address; so the virtual address f1d2f000 maps to high memory.

[  314.620010] c1 pgd = dc08c000
[  314.620018] c0 [f1d2f000] *pgd=ad0a6811
[  314.620025] c1 cyx pgg_index=78e  pgd-p=dc08fc70
[  314.620034] c1 cyx pmdpage=ef59b4c0 mem_map=eeffa000 pmd_val=ad0a6811
[  314.620043] c1 cyx pte-index=12f pageaddress=ed0a6000
[  314.620050] c1 cyx cyx_page.flga =0
[  314.620059] c1 cyx page=ef59b4c0 flag=0 page_address(page)=ed0a6000
[  314.620067] c1 cyx page_to_pfn(page)=ad0a6 PFN_PHYS=ad0a6000 va=ed0a6000
[  314.620076] c1 cyx pte=ed0a64bc pte_node=0  pte_present=1 hashptr=46
[  314.620082] c0 , *pte=e600a65f, *ppte=e600a45f

The crash tool follows the MMU hardware decoding: it takes the top 12 bits of vaddr, multiplies by 4, and adds the base pgd. The first-level descriptor read there is ad9c8c11, whose last two bits 01 indicate a page-table mapping, so its second-level descriptor must be fetched.

(first-level descriptor ad9c8c11 & PAGE_MASK) + pte_offset(vaddr) gives the second-level descriptor address ad9c8688; reading and offsetting that physical address yields the page-table entry *pte = da61d65f, whose last two bits 11 indicate a small page.

A copy_from_user address: vaddr b5f00250 gives *pgd = fc432835; that physical address is in high memory.

page_address finds that this highmem physical page has no mapping, so one is created, giving the page's virtual address kmap = ffeee000.

Offsetting that virtual address by pte_index gives the virtual address holding the pte, ffeee400; reading it, *pte gives the real physical descriptor fa1d475f. So the virtual address b5f00250 maps to highmem physical memory.

[   16.887729] c1 pgd = e8474000
[   16.887737] c0 [b5f00250] *pgd=fc432835
[   16.887742] c1 cyx pgg_index=5af  pgd-p=e8476d78
[   16.887750] c1 cyx highmem
[   16.887758] c1 cyx pmdpage=eff82640 mem_map=eeffa000 pmd_val=fc432835
[   16.887767] c1 cyx pte-index=100 pageaddress=  (null)
[   16.887774] c1 cyx cyx_page.flga =40000000
[   16.887783] c1 cyx2 kmap=ffeee000
[   16.887793] c0 cyx page=ef021d80 flag=10028 page_address(page)=c13ec000
[   16.887796] c1 cyx pte=ffeee400 pte_node=0  pte_present=1 hashptr=0
[   16.887810] c0 , *pte=fa1d475f, *ppte=fa1d4c7f

Curiously, the crash tool computes the pgd the MMU way but then computes the pte following the standard Linux software split. It takes the top 12 bits of vaddr, multiplies by 4, and adds the base pgd; the first-level descriptor read there is fc432c35, last two bits 01, a page-table mapping, so the second-level descriptor must be fetched.

(first-level descriptor fc432c35 & PAGE_MASK) + pte_offset(vaddr) gives the second-level descriptor address fc432400; reading that physical address yields the page-table entry, i.e. the second-level descriptor *pte = fa1d475f, whose last two bits 11 indicate a small page.

2.5 How Software and Hardware Page Tables Cooperate

In the ARM hardware page-table scheme, the second-level table behind each first-level directory entry is allocated independently: two consecutive first-level entries map a contiguous 2MB of virtual address space, yet the memory holding their second-level tables need not be related in any way.
arm-linux, however, makes a very clever arrangement for efficient memory management.

The second-level table for one first-level entry is 256 x 4 = 1024 bytes. arm-linux allocates the two second-level tables of two consecutive first-level entries together, and additionally builds two matching second-level software tables below those two hardware tables, 4KB in total, exactly one page, as follows:
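This is the layout documented in the comment at the top of arch/arm/include/asm/pgtable-2level.h:

    pgd             pte
 |        |
 +--------+
 |        |       +------------+ +0
 +- - - - +       | Linux pt 0 |
 |        |       +------------+ +1024
 +--------+ +0    | Linux pt 1 |
 |        |-----> +------------+ +2048
 +- - - - + +4    |  h/w pt 0  |
 |        |-----> +------------+ +3072
 +--------+ +8    |  h/w pt 1  |
 |        |       +------------+ +4096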

This arrangement is explained in detail (in English) in ./arch/arm/include/asm/pgtable-2level.h.
We tell the Linux kernel that an ARM first-level directory entry is 8 bytes and that there are 2048 of them. When populating directory entries, 4KB is allocated at once, and the physical base address of each 1KB half of the upper 2KB is written in turn into the two words of the 8-byte first-level entry.
As I understand it, arm-linux does this for two reasons:
(1) Less wasted space: one directory entry needs only a 1024-byte table, so with page-granular allocation only 1KB of the page would be used and 3KB wasted; with this arrangement the whole 4KB page is used.
(2) The Linux software second-level attribute bits differ from the ARM hardware ones (the software bits are defined in arch/arm/include/asm/pgtable-2level.h, the hardware bits in arch/arm/include/asm/pgtable-2level-hwdef.h, both with detailed commentary). The ARM hardware tables lack some attributes, such as dirty, accessed and young, so software page tables are used to emulate them. The ARM MMU reads the hardware second-level tables for address translation, while the second-level software tables are left entirely for Linux to configure and read. set_pte_ext, which ultimately populates a second-level entry, likewise fills in the software table first and then configures the hardware table from the software attribute values.

So all the related macros in the arm-linux kernel say that Linux sees 2048 ARM first-level directory entries of 8 bytes each, with 512 second-level entries per table, whereas the ARM MMU hardware still works with 4096 first-level entries and 256 entries per second-level table.
The two schemes are reconciled by keeping the page-table storage of two adjacent directory entries contiguous.

  

Comparing crash with the code on this case: bit[20] of the vaddr is 1, yet the code's method picks the first 4 bytes as the pgd entry, because Linux allocates the two PTs of one pgd entry contiguously. Hence the code's *pgd fc432835 is 0x400 smaller than the crash tool's *pgd fc432c35; crash picked the second (in other cases not necessarily the second) 4 bytes, i.e. the adjacent h/w pt 1. It does not matter: the two PTs live in the same page, so their ptes are contiguous, and the computation converges to the same result.

Precisely because of that contiguity, crash can switch to the Linux computation afterwards: *pgd fc432c35 with its low 12 bits cleared, plus the 9-bit pte index, gives the Linux pte fc432400, and the Linux pt plus 0x800 gives the ARM pte fc432c00. Had the crash tool instead continued the MMU way, it would also come out right. Remember that the upper 22 bits of an MMU first-level descriptor are the valid address bits, i.e. the upper 22 bits of *pgd are the pte base address. Following the MMU scheme, *pgd plus the 8-bit pte index gives fc432c00 + 0x00 = fc432c00, the final ARM pte.

Computing it the way our Linux code does also yields the correct ARM pte: the code's *pgd is fc432835; clearing the low 10 bits gives the base address fc432800, and adding the 9-bit pte offset 0x400 gives fc432c00, again the final ARM pte.
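To make the three routes concrete, here is the arithmetic for this case in one place (all values from the logs above; pure arithmetic, no memory reads):

unsigned long vaddr     = 0xb5f00250;
unsigned long code_pgd  = 0xfc432835;			/* *pgd as read by the kernel code */
unsigned long crash_pgd = 0xfc432c35;			/* *pgd as read by the crash tool  */
unsigned long idx9      = (vaddr >> 12) & 0x1ff;	/* Linux 9-bit pte index = 0x100   */
unsigned long idx8      = (vaddr >> 12) & 0xff;		/* h/w 8-bit pte index   = 0x00    */

/* Linux route: clear the low 12 bits, add the 9-bit index * 4, then +0x800 */
unsigned long linux_pte = (crash_pgd & ~0xfffUL) + (idx9 << 2);	/* 0xfc432400 */
unsigned long arm_pte_1 = linux_pte + 0x800;			/* 0xfc432c00 */

/* MMU route: the upper 22 bits of *pgd are the base; add the 8-bit index * 4 */
unsigned long arm_pte_2 = (crash_pgd & ~0x3ffUL) + (idx8 << 2);	/* 0xfc432c00 */

/* kernel-code route: clear the low 10 bits of its *pgd, add the 9-bit index * 4 */
unsigned long arm_pte_3 = (code_pgd & ~0x3ffUL) + (idx9 << 2);	/* 0xfc432c00 */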

 

The reason for having separate Linux and ARM page tables is that the ARM tables have no dirty bit, and that bit is indispensable for implementing page swapping.

So Linux emulates the dirty bit; this is why there are two sets of definitions, pgtable-2level.h and pgtable-2level-hwdef.h, the former being s/w (the Linux tables, emulating dirty, young and so on) and the latter h/w (the ARM tables).

2.6 Two-Level Mapping and High Memory

2.6.1 Where High Memory Comes From

To make reasonable use of the 4G logical space, Linux adopts a 3:1 policy: the kernel takes 1G of linear address space and user space takes 3G. User processes thus range over 0~3G and the kernel over 3G~4G; in other words, kernel space has only 1G of logical linear address space.

When physical memory is under 1G, the kernel usually maps it linearly, one to one, into its address space, which speeds up access. But once physical memory exceeds 1G, the linear scheme no longer suffices: only 1G can be mapped, and the remaining physical memory is beyond the kernel's reach. To solve this, Linux splits the kernel address space into a linear region and a non-linear region: the linear region is capped at 896M and the remaining 128M is non-linear. The physical memory mapped by the linear region is called low memory, and the rest is called high memory. Unlike the linear region, the non-linear region is not mapped ahead of time; it is mapped dynamically at the time of use.

High memory is a physical concept: on ARM (here) physical memory above 768M is treated as high memory, and on x86 physical memory above 896M.

The basic idea behind high memory: borrow a window of address space, establish a temporary mapping, and release it after use, so that the window can be reused in rotation and all of physical memory can be reached.

At this point one may well ask: what if some kernel thread or module keeps holding a piece of that logical address space and never releases it? If that really happens, the kernel's highmem address window grows ever tighter; if all of it is held without release, physical memory with no established mapping becomes inaccessible.

In some office buildings in Tsim Sha Tsui, Hong Kong, restrooms are scarce and kept locked. A customer fetches the key from the front desk and returns it afterwards. Although there is only one restroom, it serves every customer's needs; but if one customer occupied it and never returned the key, nobody else could go. Linux kernel highmem management follows a similar idea.

2.6.2 Two-Level Mapping

The ARM MMU appears to have been designed from the start around first-level plus second-level tables. A first-level table is simple to manage and fast to search, and the second level is built on top of the first, so a first level must exist. Could a single level suffice on its own? Certainly not:

1) With only one level, the MMU TTB would have to be switched constantly, especially with 4K small pages, where one table entry maps one physical page: any two accesses more than 4K apart would require resetting the TTB, an enormous cost. The alternative, designing the MMU around huge pages to reduce TTB updates, would make the minimum management unit so large that memory would be badly wasted.

2) Every process must be able to address the full 0-4G virtual range. With a single level, 4 bytes x 1M entries (managing 4K pages) would have to be allocated so that each process has its own complete table for all of memory. If instead, as with two-level tables, table memory were allocated only when needed, the entries would have to be chained on lists: with 4K pages, 1M entries, i.e. 1024 physical pages, are needed to describe all of memory, and making every process traverse 1M entries across 1024 physical pages is clearly foolish.

Hence two-level mapping: each first-level entry spans 1MB, so the MMU's TTB hardly ever needs changing, and each process can initially allocate only the first-level table (2048 x 8 bytes = 16KB), creating second-level tables only when they are actually used. The first level at least is contiguous, and each second-level table occupies exactly one page, so second-level lookup involves no traversal either: even a process addressing all 4G resolves every address by direct, chained computation at negligible cost.

2.7 PTE Saved on the Stack

The pte in question has already been cleared, but it can be recovered from the stack frame:

/space/builder/repo/sprdroid7.0_trunk_k44_17b_gms/kernel/include/asm-generic/pgtable.h: 116

0xc0228f9c <unmap_single_vma+392>:      ldr     r3, [r3, #-4]
0xc0228fa0 <unmap_single_vma+396>:      str     r3, [r11, #-84] ; 0x54

crash-arm32> p/x 0xc58fdd7c-84
$11 = 0xc58fdd28

crash-arm32> rd -x 0xc58fdd28
c58fdd28:  7f818081

The pte of addr is 7f818081; from this pte the page address works out to 0xeefea300:
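A worked check of that page address, using mem_map = 0xeeffa000 and ARCH_PFN_OFFSET = 0x80000 from the earlier sections; the 32-byte struct page size is inferred from the numbers, not taken from the source:

unsigned long pte  = 0x7f818081;
unsigned long pfn  = pte >> 12;			/* 0x7f818 */
long idx           = (long)pfn - 0x80000;	/* -0x7e8: this pfn is below PHYS_PFN_OFFSET */
unsigned long page = 0xeeffa000 + idx * 32;	/* 0xeefea300 */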


Reposted from blog.csdn.net/chenpuo/article/details/80155952