HDK Technical Reference

Memory-mapped I/O

Memory-mapped I/O is an I/O scheme in which the device's own on-board memory is mapped into the processor's address space. Data to be written to the device is copied by the driver into the device memory, and data read in by the device is available in the shared memory for the driver to copy back into system memory.

Memory-mapped I/O is frequently used by network and video devices. Many adapters offer a combination of programmed I/O and memory-mapped modes, where the data buffers are mapped into the processor's memory space and the internal device registers that provide status are accessed through the I/O space. See ``Programmed I/O (PIO)''.

The adapter's memory is mapped into system address space through the PCI BIOS, a software setup program, or by setting jumpers on the device. Before the kernel can access the adapter's memory, it must map the adapter's entire physical address range into the kernel's virtual address space using the functions supplied by the driver interface. The functions used for the different driver interface versions are listed later in this article.

General purpose utility functions, such as bcopy(D3) and bcopy(D3oddi), and instructions like mov are used to transfer data to and from the mapped memory region, which is available to both the device and the host. See ``Data, copying'' for more information about copying data.

When using memory-mapped I/O, be sure to declare all pointers to memory-mapped addresses as volatile so that the compiler fetches the memory location every time it is accessed. Normally, the compiler optimizes access to memory locations by not physically reading them each time. With memory-mapped I/O, the device can modify the contents of a memory location independently of the kernel, so this optimization must be defeated by declaring the pointers volatile.
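
For example, a driver that polls a status word in mapped device memory might declare the pointer as follows. This is only an illustrative sketch: the virt_ram base, XX_STATUS_OFFSET, and XX_STATUS_READY names are hypothetical, standing in for whatever layout the adapter actually uses.

   /* Hypothetical sketch: poll a status word in mapped device memory.
      The volatile qualifier forces the compiler to re-read the mapped
      location on every iteration instead of caching it in a register. */
   volatile unsigned short *statusp;

   statusp = (volatile unsigned short *)(p->virt_ram + XX_STATUS_OFFSET);
   while ((*statusp & XX_STATUS_READY) == 0)
           ;               /* spin until the device posts ready status */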

See ``Memory access'' for information about memory ranges that can be accessed by different types of drivers.

Mapping device memory (DDI 7)

The following DDI 7 functions can be used to map device memory into the host CPU's memory address space:


physmap(D3)
Obtain virtual address mapping for physical offsets within device memory.

physmap_free(D3)
Free virtual address mapping for physical offsets within device memory.

phystoppid(D3)
Get physical page ID for physical offsets within device memory.

Drivers that call the physmap(D3) function from the start(D2) entry point routine should do so with the KM_NOSLEEP flag set. Otherwise, the driver could hang indefinitely waiting for a large memory allocation to be satisfied.

The following code example illustrates how to implement memory-mapped I/O in a DDI 7 driver:

   el16init()
   {
       ...
       /* Physical memory size derived from config files */
       p = (bdd_t *) bd->bd_dependent1;
       p->ram_size = bd->mem_end - bd->mem_start + 1;

       /* Generate a virtual address for the memory available on the card.
          This virtual address is the base address for all driver operations,
          including copying messages to the board for transfer over the
          network.
        */
       if ((p->virt_ram = (addr_t) physmap(bd->mem_start, p->ram_size,
           KM_NOSLEEP)) == NULL) {
               cmn_err(CE_NOTE, "el16_init: no memory for physmap");
               return;
       }
       ...
   }

   /*
    * The memory-mapped device driver copies the outbound data directly to
    * the on-board RAM using common memory move functions such as bcopy().
    */
   el16xmit_packet(bd, mp)
   DLbdconfig_t *bd;
   mblk_t *mp;
   {
       bdd_t   *bdd = (bdd_t *) bd->bd_dependent1;
       tbd_t   *p_tbd;
       cmd_t   *p_cmd;
       char    *p_txb;
       mblk_t  *mp_tmp;            /* current STREAMS message block */
       int     msg_size;           /* bytes in the current block    */
       int     tx_size = 0;        /* total bytes copied so far     */
       ...
       p_cmd = (cmd_t *) (bdd->virt_ram + bdd->tail_cmd->ofst_cmd);
       p_tbd = (tbd_t *) (bdd->virt_ram + p_cmd->prmtr.prm_xmit.xmt_tbd_ofst);
       p_txb = bdd->virt_ram + p_tbd->tbd_buff;
                               /* Find the transmit buffer on the on-board RAM */
       ...
       for (mp_tmp = mp; mp_tmp != NULL; mp_tmp = mp_tmp->b_cont) {
               msg_size = mp_tmp->b_wptr - mp_tmp->b_rptr;
               /* Copy the data from the STREAMS message directly to
                  the transmit buffer.
                */
               bcopy((caddr_t) mp_tmp->b_rptr, p_txb, msg_size);
               p_txb   += msg_size;
               tx_size += msg_size;
       }
       ...
   }
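
When the board is shut down or the driver is unloaded, the mapping established by physmap( ) should be released with physmap_free(D3). The following is a minimal sketch only: the el16halt( ) routine name is hypothetical, and the call assumes physmap_free( ) takes the mapped virtual address, the size, and a flags argument of 0; check physmap_free(D3) for the exact argument list.

   el16halt(bd)
   DLbdconfig_t *bd;
   {
       bdd_t *p = (bdd_t *) bd->bd_dependent1;
       ...
       /* Release the kernel virtual mapping obtained with physmap() */
       physmap_free(p->virt_ram, p->ram_size, 0);   /* flags argument assumed */
       p->virt_ram = NULL;
       ...
   }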

Mapping device memory (DDI 8)

DDI 8 uses a different set of functions to map device memory than earlier DDI versions do, because of the need to avoid paddr_t references in support of the large memory feature. See ``Large memory support (DDI 8)''.

The DDI 8 functions used are:


devmem_mapin(D3)
Obtain virtual address mapping for physical offsets within device memory

devmem_mapout(D3)
Free virtual address mapping for physical offsets within device memory

devmem_ppid(D3)
Get physical page ID for physical offsets within device memory

devmem_size(D3)
Determine size of the device memory block

The memory-mapped I/O region is set up in the CFG_ADD subfunction of the config(D2) entry point routine. The driver first calls the devmem_size(D3) function to get the size of the memory chunk, then maps it with the devmem_mapin(D3) function. The driver's _unload(D2) routine or the CFG_REMOVE subfunction of the config(D2) routine should then call devmem_mapout(D3) to release the mapping, as the sketch below shows.
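
The following sketch shows the shape of that flow. It is not taken from a real driver: the xx_ prefix and xx_state structure are hypothetical, and the argument lists passed to devmem_size( ), devmem_mapin( ), and devmem_mapout( ) (a resmgr key plus an entry index, an offset/length pair, and the returned address with its length) are assumptions made for illustration; consult the corresponding (D3) manual pages for the actual signatures.

   #include <sys/types.h>
   #include <sys/errno.h>
   #include <sys/ddi.h>

   struct xx_state {
           caddr_t xx_ram;         /* kernel virtual address of board RAM */
           size_t  xx_ramsize;     /* size of the mapped region in bytes  */
   };

   /* Called from the CFG_ADD case of the config(D2) entry point */
   static int
   xx_map_ram(rm_key_t key, struct xx_state *xx)
   {
           /* Assumed: devmem_size( ) returns the size in bytes of the
              device memory entry selected by the index argument */
           xx->xx_ramsize = devmem_size(key, 0);
           if (xx->xx_ramsize == 0)
                   return ENXIO;

           /* Assumed: devmem_mapin( ) maps the given offset/length range
              and returns a kernel virtual address */
           xx->xx_ram = devmem_mapin(key, 0, 0, xx->xx_ramsize);
           if (xx->xx_ram == NULL)
                   return ENOMEM;
           return 0;
   }

   /* Called from the CFG_REMOVE case of config(D2) or from _unload(D2) */
   static void
   xx_unmap_ram(struct xx_state *xx)
   {
           devmem_mapout(xx->xx_ram, xx->xx_ramsize);
           xx->xx_ram = NULL;
   }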

Mapping device memory (ODDI)

The following ODDI functions can be used to map device memory into the host CPU's memory address space:


sptalloc(D3oddi)
Map a device's memory. Set the base argument to the physical page frame number.

sptfree(D3oddi)
Unmap a device's memory. Set the freeflg argument to 0 so that the underlying physical pages are not released.

As an example, the following call sets up memory-mapped I/O for an imaginary device being installed at physical address 0xB8000:

   ptr=sptalloc(1, PG_P | PG_PCD, btoc(0xB8000), NOSLEEP);
Note that the base argument passed is the page frame number of the physical memory to map. The btoc(D3oddi) macro is used to convert from a physical address to a page frame number.
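
When the mapping is no longer needed, release the kernel virtual space with sptfree(D3oddi). The call below assumes the arguments are the virtual address, the page count, and the freeflg, in that order; check sptfree(D3oddi) for the exact argument list:

   sptfree(ptr, 1, 0);    /* unmap one page; a freeflg of 0 leaves the
                             device's physical page in place */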

Supporting large memory-mapped regions (DDI)

DDI memory-mapped drivers should increase the value of the DRV_SEGKMEM_BYTES tunable parameter by the size of the mapping in their installation scripts. This parameter specifies the quantity of kernel virtual memory that is reserved for large driver mappings.

This can be done in the driver's postinstall script: read out the current value of DRV_SEGKMEM_BYTES, calculate the size to which it should be set, and then set DRV_SEGKMEM_BYTES to the sum. For example, if the driver needs a 4MB area of physical memory, increase DRV_SEGKMEM_BYTES by 0x400000. The script must increment the current value for each system rather than hard-coding it, since other drivers installed on the system may also modify this parameter.

Note that DRV_SEGKMEM_BYTES is supported for all SVR5 Release 7.1 and later systems. For earlier releases of SVR5, you must install the appropriate PTF to access this parameter:


Release 7.0.0
ptf7016d

Release 7.0.1
ptf7096a

In some cases, setting the SEGKMEM_BYTES parameter may be adequate for earlier releases that do not support the DRV_SEGKMEM_BYTES parameter.

On SVR5 systems, the driver's postinstall script may also modify the ZBM_LGMAP_PERCENT (ZBM Large Map Percent) tunable parameter, which controls the percentage of kernel virtual memory that is reserved for large mappings. By default, it is set to 10%, which means that 10 percent of the kpg space is managed by an algorithm that does not fragment as quickly. Using this tunable in conjunction with increases to the DRV_SEGKMEM_BYTES parameter can enable a driver to achieve a large physical mapping in a loadable module.

Because DRV_SEGKMEM_BYTES is supported only on some SVR5 systems, ZBM_LGMAP_PERCENT is supported on all SVR5 systems, and neither parameter is supported on SCO SVR5 2.X systems, you may want to write the postinstall script to handle the different installation possibilities. A possible strategy for a postinstall script is:

  1. Attempt to increment DRV_SEGKMEM_BYTES by the mapping size. Do not print error messages if it fails.

  2. If step 1 fails, attempt to bump up ZBM_LGMAP_PERCENT and SEGKMEM_BYTES according to the old heuristic. For example:
       SEGKMEM_BYTES += delta;
       memsize = `memsize`;
       if (memsize > 2GB)
           memsize = 2GB;
       ZBM_LGMAP_PERCENT += delta / 4096 * 100 /
           min((SEGKMEM_BYTES + memsize/2) / 4096, 650000);
       if (ZBM_LGMAP_PERCENT > 90)
           ZBM_LGMAP_PERCENT = 90;
    
    Do not print error messages if it fails.

  3. If both steps 1 and 2 fail, just go on silently because this is a SCO SVR5 2.X system.

See ``Virtual memory (VM) parameters'' for more information about these tunable parameters.


© 2005 The SCO Group, Inc. All rights reserved.
OpenServer 6 and UnixWare (SVR5) HDK - June 2005