The constructor pyopencl.Buffer() can consume a fairly large amount of processing time if it is invoked very frequently. For example, code based on pyopencl.array.Array can easily run into this issue because a fresh memory area is allocated for each intermediate result. Memory pools are a remedy for this problem based on the observation that often many of the block allocations are of the same sizes as previously used ones.
Then, instead of fully returning the memory to the system and incurring the associated reallocation overhead, the pool holds on to the memory and uses it to satisfy future allocations of similarly-sized blocks. The pool reacts appropriately to out-of-memory conditions as long as all memory allocations are made through it. Allocations performed from outside of the pool may run into spurious out-of-memory conditions due to the pool owning much or all of the available memory.
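For illustration, here is a minimal sketch of feeding a memory pool to pyopencl.array. It assumes a recent PyOpenCL in which the pool wraps an ImmediateAllocator (older versions expose a CLAllocator instead); the array sizes are arbitrary:

    import numpy as np
    import pyopencl as cl
    import pyopencl.array as cl_array
    import pyopencl.tools as cl_tools

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)

    # Wrap a plain device allocator in a pool. Depending on the PyOpenCL
    # version, the allocator class is ImmediateAllocator (newer) or
    # CLAllocator (older).
    mem_pool = cl_tools.MemoryPool(cl_tools.ImmediateAllocator(queue))

    # Arrays allocated through the pool reuse previously freed blocks.
    a = cl_array.arange(queue, 1000, dtype=np.float32, allocator=mem_pool)
    result = (2*a + 1).get()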
An object representing a MemoryPool-based allocation of device memory. Once this object is deleted, its associated device memory is returned to the pool. This supports the same interface as pyopencl.Buffer.
mem_flags takes its values from pyopencl.mem_flags and corresponds to the flags argument of pyopencl.Buffer.
Allocate a pyopencl.Buffer of the given size.
A memory pool for OpenCL device memory.
The number of unused blocks being held by this pool.
The number of blocks in active use that have been allocated through this pool.
Return a PooledBuffer of the given size.
Synonym for allocate() to match the CLAllocator interface.
Free all unused memory that the pool is currently holding.
Instruct the memory pool to start immediately freeing memory returned to it, instead of holding it for future allocations. Implicitly calls free_held(). This is useful as a cleanup action when a memory pool falls out of use.
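A hedged sketch of the pool methods and attributes described above; the allocation size and the printed counts are illustrative only:

    import pyopencl as cl
    import pyopencl.tools as cl_tools

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)
    mem_pool = cl_tools.MemoryPool(cl_tools.ImmediateAllocator(queue))

    buf = mem_pool.allocate(4096)    # a PooledBuffer, usable like a pyopencl.Buffer
    print(mem_pool.active_blocks)    # 1
    del buf                          # the block returns to the pool, not to the system
    print(mem_pool.held_blocks)      # 1
    mem_pool.free_held()             # release all unused memory held by the pool
    mem_pool.stop_holding()          # cleanup once the pool falls out of use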
Provides memoization for things that get created inside a context, i.e. mainly programs and kernels. Assumes that the first argument of the decorated function is an OpenCL object that might go away, such as a context or a queue, based on which we might want to clear the cache.
New in version 2011.2.
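As an illustration, a typical use is caching a per-context kernel build. The kernel and function name below are made up, not part of PyOpenCL:

    import pyopencl as cl
    from pyopencl.tools import first_arg_dependent_memoize


    @first_arg_dependent_memoize
    def get_scale_kernel(ctx):
        # Compiled once per context; subsequent calls with the same context
        # are served from the cache.
        prg = cl.Program(ctx, """
            __kernel void scale(__global float *a, float factor)
            { a[get_global_id(0)] *= factor; }
            """).build()
        return prg.scale


    ctx = cl.create_some_context()
    knl = get_scale_kernel(ctx)         # builds the program
    knl_cached = get_scale_kernel(ctx)  # cache hit, no rebuild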
Empties all first-argument-dependent memoization caches. Also releases all held reference contexts. If it is important to you that the program detaches from its context, you might need to call this function to free all remaining references to your context.
New in version 2011.2.
Using the line:
from pyopencl.tools import pytest_generate_tests_for_pyopencl \
as pytest_generate_tests
in your py.test test scripts allows you to use the arguments ctx_factory, device, or platform in your test functions, and they will automatically be run for each OpenCL device/platform in the system, as appropriate.
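For example, a test module might look like the following sketch; the test name and data are invented, and only pytest_generate_tests_for_pyopencl, the ctx_factory argument, and standard PyOpenCL calls are assumed:

    import numpy as np
    import pyopencl as cl
    import pyopencl.array as cl_array

    from pyopencl.tools import pytest_generate_tests_for_pyopencl \
            as pytest_generate_tests


    def test_device_roundtrip(ctx_factory):
        # Run once per OpenCL device found on the system.
        ctx = ctx_factory()
        queue = cl.CommandQueue(ctx)

        a = np.random.rand(1000).astype(np.float32)
        a_dev = cl_array.to_device(queue, a)
        assert np.allclose(a_dev.get(), a)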
The following two environment variables are also supported to control device/platform choice:
PYOPENCL_TEST=0:0,1;intel=i5,i7
Return a list of flags valid on device dev that enable fast, but potentially inaccurate floating point math.
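For example, the returned flags can be passed to a program build, as in the following sketch (the kernel source is a made-up placeholder):

    import pyopencl as cl
    from pyopencl.characterize import get_fast_inaccurate_build_options

    KERNEL_SRC = """
    __kernel void twice(__global float *a) { a[get_global_id(0)] *= 2; }
    """

    ctx = cl.create_some_context()
    dev = ctx.devices[0]

    # Build with fast-math flags suitable for this device.
    prg = cl.Program(ctx, KERNEL_SRC).build(
            options=get_fast_inaccurate_build_options(dev))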
Return an estimate of how many work items will be executed across SIMD lanes. This returns the size of what Nvidia calls a warp and what AMD calls a wavefront.
Only refers to implicit SIMD.
Parameters: type_size – number of bytes in vector entry type.
Return whether dev offers AMD's (possibly incomplete) double precision support, as found on some low-end boards.
Return the number of bytes per bank in local memory.
Return the number of banks present in local memory.
If dev is an Nvidia GPU pyopencl.Device, return a tuple (major, minor) indicating the device’s compute capability.
Return the number of work items that access local memory simultaneously and thereby may conflict with each other.
Return an estimate of the usable local memory size. Parameters: nargs – number of 32-bit arguments passed.
Returns: a tuple (multiplicity, explanation), where multiplicity is the number of work items that will conflict on a bank when accessing local memory, and explanation is a string detailing the found conflict.
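A hedged sketch of querying a few of these characterization helpers; the argument order of why_not_local_access_conflict_free (device, item size in bytes, array shape) reflects one reading of pyopencl.characterize and may differ across versions:

    import pyopencl as cl
    from pyopencl import characterize

    ctx = cl.create_some_context()
    dev = ctx.devices[0]

    print(characterize.nv_compute_capability(dev))    # e.g. (7, 5) on Nvidia hardware
    print(characterize.get_simd_group_size(dev, 4))   # warp/wavefront size estimate
    print(characterize.usable_local_mem_size(dev, nargs=4))

    # Would a 32x32 tile of float32 (4-byte) entries cause bank conflicts?
    mult, why = characterize.why_not_local_access_conflict_free(dev, 4, (32, 32))
    if mult > 1:
        print(f"{mult}-way bank conflict: {why}")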