dace package

Submodules

dace.builtin_hooks module

A set of built-in hooks.

dace.builtin_hooks.cli_optimize_on_call(sdfg)

Calls a command-line interface for interactive SDFG transformations on every DaCe program call.

Parameters:

sdfg (SDFG) – The current SDFG to optimize.

dace.builtin_hooks.instrument(itype, filter, annotate_maps=True, annotate_tasklets=False, annotate_states=False, annotate_sdfgs=False)

Context manager that instruments every called DaCe program. Depending on the given instrumentation type and parameters, annotates the given elements on the SDFG. Filtering is possible with strings and wildcards, or a function (if given).

Example usage:

with dace.instrument(dace.InstrumentationType.GPU_Events, 
                     filter='*add??') as profiler:
    some_program(...)
    # ...
    other_program(...)

# Print instrumentation report for last call
print(profiler.reports[-1])
Parameters:
  • itype (InstrumentationType) – Instrumentation type to use.

  • filter (Union[str, Callable[[Any], bool], None]) – An optional string with * and ? wildcards, or function that receives one parameter, determining whether to instrument the element or not.

  • annotate_maps (bool) – If True, instruments scopes (e.g., map, consume) in the SDFGs.

  • annotate_tasklets (bool) – If True, instruments tasklets in the SDFGs.

  • annotate_states (bool) – If True, instruments states in the SDFGs.

  • annotate_sdfgs (bool) – If True, instruments whole SDFGs and sub-SDFGs.

dace.builtin_hooks.instrument_data(ditype, filter, restore_from=None, verbose=False)

Context manager that instruments (serializes/deserializes) the data of every called DaCe program. This can be used for reproducible runs and debugging. Depending on the given data instrumentation type and parameters, annotates the access nodes on the SDFG. Filtering is possible with strings and wildcards, or a function (if given). An optional instrumented data report can be given to load a specific set of data.

Example usage:

@dace.program
def sample(a: dace.float64, b: dace.float64):
    arr = a + b
    return arr + 1

with dace.instrument_data(dace.DataInstrumentationType.Save, filter='a??'):
    result_ab = sample(a, b)

# Optionally, get the serialized data containers
dreport = sdfg.get_instrumented_data()
assert dreport.keys() == {'arr'}  # dreport['arr'] is now the internal ``arr``

# Reload latest instrumented data (can be customized if ``restore_from`` is given)
with dace.instrument_data(dace.DataInstrumentationType.Restore, filter='a??'):
    result_cd = sample(c, d)  # where ``c, d`` are different from ``a, b``

assert numpy.allclose(result_ab, result_cd)
Parameters:
  • ditype (DataInstrumentationType) – Data instrumentation type to use.

  • filter (Union[str, Callable[[Any], bool], None]) – An optional string with * and ? wildcards, or function that receives one parameter, determining whether to instrument the access node or not.

  • restore_from (Union[str, InstrumentedDataReport, None]) – An optional parameter that specifies which instrumented data report to load data from. It could be a path to a folder, an InstrumentedDataReport object, or None to load the latest generated report.

  • verbose (bool) – If True, prints information about created and loaded instrumented data reports.

dace.builtin_hooks.profile(repetitions=100, warmup=0)

Context manager that enables profiling of each called DaCe program. If repetitions is greater than 1, the program is run multiple times and the average execution time is reported.

Example usage:

with dace.profile(repetitions=100) as profiler:
    some_program(...)
    # ...
    other_program(...)

# Print all execution times of the last called program (other_program)
print(profiler.times[-1])
Parameters:
  • repetitions (int) – The number of times to run each DaCe program.

  • warmup (int) – Number of additional repetitions to run the program without measuring time.

Note:

Running functions multiple times may affect the results of the program.

dace.config module

class dace.config.Config

Bases: object

Interface to the DaCe hierarchical configuration file.

static append(*key_hierarchy, value=None, autosave=False)

Appends to the current value of a given configuration entry and sets it.

Parameters:
  • key_hierarchy – A tuple of strings leading to the configuration entry. For example: (‘a’, ‘b’, ‘c’) would be configuration entry c which is in the path a->b.

  • value – The value to append.

  • autosave – If True, saves the configuration to the file after modification.

Returns:

Current configuration entry value.

Examples:

Config.append('compiler', 'cpu', 'args', value='-fPIC')
static cfg_filename()

Returns the current configuration file path.

default_filename = '.dace.conf'
static get(*key_hierarchy)

Returns the current value of a given configuration entry.

Parameters:

key_hierarchy – A tuple of strings leading to the configuration entry. For example: (‘a’, ‘b’, ‘c’) would be configuration entry c which is in the path a->b.

Returns:

Configuration entry value.
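
Example (illustrative; the configuration keys below are the same ones used in the Config.append and set_temporary examples in this document):

from dace.config import Config

Config.get('compiler', 'build_type')   # e.g., 'Debug' or 'RelWithDebInfo'
Config.get('compiler', 'cpu', 'args')  # the compiler flags that Config.append extends above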

static get_bool(*key_hierarchy)

Returns the current value of a given boolean configuration entry. This specialization allows more string types to be converted to boolean, e.g., due to environment variable overrides.

Parameters:

key_hierarchy – A tuple of strings leading to the configuration entry. For example: (‘a’, ‘b’, ‘c’) would be configuration entry c which is in the path a->b.

Returns:

Configuration entry value (as a boolean).

static get_default(*key_hierarchy)

Returns the default value of a given configuration entry. Takes the current operating system into account.

Parameters:

key_hierarchy – A tuple of strings leading to the configuration entry. For example: (‘a’, ‘b’, ‘c’) would be configuration entry c which is in the path a->b.

Returns:

Default configuration value.

static get_metadata(*key_hierarchy)

Returns the configuration specification of a given entry from the schema.

Parameters:

key_hierarchy – A tuple of strings leading to the configuration entry. For example: (‘a’, ‘b’, ‘c’) would be configuration entry c which is in the path a->b.

Returns:

Configuration specification as a dictionary.

static initialize()

Initializes configuration.

Note:

This function runs automatically when the module is loaded.

static load(filename=None)

Loads a configuration from an existing file.

Parameters:

filename – The file to load. If unspecified, uses default configuration file.

static load_schema(filename=None)

Loads a configuration schema from an existing file.

Parameters:

filename – The file to load. If unspecified, uses default schema file.

static nondefaults()
Return type:

Dict[str, Any]

static save(path=None, all=False)

Saves the current configuration to a file.

Parameters:
  • path – The file to save to. If unspecified, uses default configuration file.

  • all (bool) – If False, only saves non-default configuration entries. Otherwise saves all entries.

static set(*key_hierarchy, value=None, autosave=False)

Sets the current value of a given configuration entry.

Parameters:
  • key_hierarchy – A tuple of strings leading to the configuration entry. For example: (‘a’, ‘b’, ‘c’) would be configuration entry c which is in the path a->b.

  • value – The value to set.

  • autosave – If True, saves the configuration to the file after modification.

Examples:

Config.set('profiling', value=True)
dace.config.set_temporary(*path, value)

Temporarily set configuration value at path to value, and reset it after the context manager exits.

Example:

print(Config.get("compiler", "build_type"))
with set_temporary("compiler", "build_type", value="Debug"):
    print(Config.get("compiler", "build_type"))
print(Config.get("compiler", "build_type"))
dace.config.temporary_config()

Creates a context where all configuration options changed will be reset when the context exits.

Example:

with temporary_config():
    Config.set("testing", "serialization", value=True)
    Config.set("optimizer", "autooptimize", value=True)
    foo()

dace.data module

class dace.data.Array(*args, **kwargs)

Bases: Data

Array data descriptor. This object represents a multi-dimensional data container in SDFGs that can be accessed and modified. The definition does not contain the actual array, but rather a description of how to construct it and how it should behave.

The array definition is flexible in terms of data allocation: it allows arbitrary multidimensional, potentially symbolic shapes (e.g., an array with size N+1 x M will have shape=(N+1, M)) and arbitrary data typeclasses (dtype). The physical data layout of the array is controlled by several properties:

  • The strides property determines the ordering and layout of the dimensions — it specifies how many elements in memory are skipped whenever one element in that dimension is advanced. For example, the contiguous dimension always has a stride of 1; a C-style MxN array will have strides (N, 1), whereas a FORTRAN-style array of the same size will have (1, M). Strides can be larger than the shape, which allows post-padding of the contents of each dimension.

  • The start_offset property is a number of elements to pad the beginning of the memory buffer with. This is used to ensure that a specific index is aligned as a form of pre-padding (that element may not necessarily be the first element, e.g., in the case of halo or “ghost cells” in stencils).

  • The total_size property determines how large the total allocation size is. Normally, it is the product of the shape elements, but if pre- or post-padding is involved it may be larger.

  • alignment provides alignment guarantees (in bytes) of the first element in the allocated array. This is used by allocators in the code generator to ensure certain addresses are expected to be aligned, e.g., for vectorization.

  • Lastly, a property called offset controls the logical access of the array, i.e., what would be the first element’s index after padding and alignment. This mimics a language feature prominent in scientific languages such as FORTRAN, where one could set an array to begin with 1, or any arbitrary index. By default this is set to zero.

To summarize with an example, a two-dimensional array with pre- and post-padding looks as follows:

[xxx][          |xx]
     [          |xx]
     [          |xx]
     [          |xx]
     ---------------
     [xxxxxxxxxxxxx]

shape = (4, 10)
strides = (12, 1)
start_offset = 3
total_size = 63   [= 3 + 12 * 5]
offset = (0, 0)

Notice that the last padded row does not appear in strides, but is a consequence of total_size being larger.
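
As a hedged sketch, the same layout could be declared directly through the data descriptor, assuming the constructor accepts the strides, total_size, and start_offset keywords listed as properties below:

import dace

# Illustrative only: 4x10 elements, 2 elements of post-padding per row
# (stride 12), 3 elements of pre-padding, 63 elements allocated in total.
desc = dace.data.Array(dace.float64, (4, 10),
                       strides=(12, 1),
                       total_size=63,
                       start_offset=3)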

Apart from memory layout, other properties of Array help the data-centric transformation infrastructure make decisions about the array. allow_conflicts states that warnings should not be printed if potentially conflicting accesses (e.g., data races) occur. may_alias inhibits transformations that would otherwise assume this array does not overlap with other arrays in the same context (e.g., function).

alignment

Allocation alignment in bytes (0 uses compiler-default)

allow_conflicts

If enabled, allows more than one memlet to write to the same memory location without conflict resolution.

as_arg(with_types=True, for_call=False, name=None)

Returns a string for a C++ function signature (e.g., int *A).

clone()
covers_range(rng)
property free_symbols

Returns a set of undefined symbols in this data descriptor.

classmethod from_json(json_obj, context=None)
is_equivalent(other)

Check for equivalence (shape and type) of two data descriptors.

may_alias

This pointer may alias with other pointers in the same function

offset

Initial offset to translate all indices by.

optional

Specifies whether this array may have a value of None. If False, the array must not be None. If the property is not set, it is inferred from other properties and the OptionalArrayInference pass.

pool

Hint to the allocator that using a memory pool is preferred

properties()
set_shape(new_shape, strides=None, total_size=None, offset=None)

Updates the shape of an array.

sizes()
start_offset

Allocation offset elements for manual alignment (pre-padding)

strides

For each dimension, the number of elements to skip in order to obtain the next element in that dimension.

to_json()
total_size

The total allocated size of the array. Can be used for padding.

validate()

Validate the correctness of this object. Raises an exception on error.

class dace.data.Data(*args, **kwargs)

Bases: object

Data type descriptors that can be used as references to memory. Examples: Arrays, Streams, custom arrays (e.g., sparse matrices).

as_arg(with_types=True, for_call=False, name=None)

Returns a string for a C++ function signature (e.g., int *A).

property ctype
debuginfo

Object property of type DebugInfo

dtype

Object property of type typeclass

property free_symbols: Set[Basic | SymExpr]

Returns a set of undefined symbols in this data descriptor.

is_equivalent(other)

Check for equivalence (shape and type) of two data descriptors.

lifetime

Data allocation span

location

Full storage location identifier (e.g., rank, GPU ID)

properties()
set_strides_from_layout(*dimensions, alignment=1, only_first_aligned=False)

Sets the absolute strides and total size of this data descriptor, according to the given dimension ordering and alignment.

Parameters:
  • dimensions (int) – A sequence of integers representing a permutation of the descriptor’s dimensions.

  • alignment (Union[Basic, SymExpr]) – Padding (in elements) at the end, ensuring stride is a multiple of this number. 1 (default) means no padding.

  • only_first_aligned (bool) – If True, only the first dimension is padded with alignment. Otherwise all dimensions are.

shape

Object property of type tuple

storage

Storage location

strides_from_layout(*dimensions, alignment=1, only_first_aligned=False)

Returns the absolute strides and total size of this data descriptor, according to the given dimension ordering and alignment.

Parameters:
  • dimensions (int) – A sequence of integers representing a permutation of the descriptor’s dimensions.

  • alignment (Union[Basic, SymExpr]) – Padding (in elements) at the end, ensuring stride is a multiple of this number. 1 (default) means no padding.

  • only_first_aligned (bool) – If True, only the first dimension is padded with alignment. Otherwise all dimensions are.

Return type:

Tuple[Tuple[Union[Basic, SymExpr]], Union[Basic, SymExpr]]

Returns:

A 2-tuple of (tuple of strides, total size).
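
A brief, hedged illustration (whether the first listed dimension is the fastest-varying one should be verified against the implementation):

import dace

desc = dace.data.Array(dace.float64, (20, 30))
# Query strides for a permuted layout without modifying the descriptor;
# set_strides_from_layout would apply the result to the descriptor instead.
strides, total_size = desc.strides_from_layout(1, 0)
print(strides, total_size)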

to_json()
property toplevel
transient

Object property of type bool

validate()

Validate the correctness of this object. Raises an exception on error.

property veclen
class dace.data.Reference(*args, **kwargs)

Bases: Array

Data descriptor that acts as a dynamic reference of another array. It can be used just like a regular array, except that it could be set to an arbitrary array or sub-array at runtime. To set a reference, connect another access node to it and use the “set” connector.

In order to enable data-centric analysis and optimizations, avoid using References as much as possible.

as_array()
properties()
validate()

Validate the correctness of this object. Raises an exception on error.

class dace.data.Scalar(*args, **kwargs)

Bases: Data

Data descriptor of a scalar value.

allow_conflicts

Object property of type bool

as_arg(with_types=True, for_call=False, name=None)

Returns a string for a C++ function signature (e.g., int *A).

clone()
covers_range(rng)
static from_json(json_obj, context=None)
is_equivalent(other)

Check for equivalence (shape and type) of two data descriptors.

property may_alias: bool
property offset
property optional: bool
property pool: bool
properties()
sizes()
property start_offset
property strides
property total_size
class dace.data.Stream(*args, **kwargs)

Bases: Data

Stream (or stream array) data descriptor.

as_arg(with_types=True, for_call=False, name=None)

Returns a string for a C++ function signature (e.g., int *A).

buffer_size

Size of internal buffer.

clone()
covers_range(rng)
property free_symbols

Returns a set of undefined symbols in this data descriptor.

classmethod from_json(json_obj, context=None)
is_equivalent(other)

Check for equivalence (shape and type) of two data descriptors.

is_stream_array()
property may_alias: bool
offset

Object property of type list

property optional: bool
properties()
size_string()
sizes()
property start_offset
property strides
to_json()
property total_size
class dace.data.View(*args, **kwargs)

Bases: Array

Data descriptor that acts as a reference (or view) of another array. Can be used to reshape or reinterpret existing data without copying it.

To use a View, it needs to be referenced in an access node that is directly connected to another access node. The rules for deciding which access node is viewed are:

  • If there is one edge (in/out) that leads (via memlet path) to an access node, and the other side (out/in) has a different number of edges.

  • If there is one incoming and one outgoing edge, and one leads to a code node, the one that leads to an access node is the viewed data.

  • If both sides lead to access nodes and one memlet’s data points to the view itself, that memlet cannot point to the viewed node (so the other access node is viewed).

  • If both memlets’ data are the respective access nodes, the access node at the highest scope is the one that is viewed.

  • If both access nodes reside in the same scope, the input data is viewed.

Other cases are ambiguous and will fail SDFG validation.

In the Python frontend, numpy.reshape and numpy.ndarray.view both generate Views.
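
For example, the following sketch produces a View in the resulting SDFG through numpy.reshape (function and array names are illustrative):

import dace
import numpy as np

@dace.program
def flatten(A: dace.float64[4, 10]):
    B = np.reshape(A, (40,))  # B becomes a View of A in the SDFG
    return B + 1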

as_array()
properties()
validate()

Validate the correctness of this object. Raises an exception on error.

dace.data.create_datadescriptor(obj, no_custom_desc=False)

Creates a data descriptor from various types of objects.

See:

dace.data.Data
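
A short illustration (the exact descriptor subclass returned depends on the input object):

import numpy as np
import dace

# For a NumPy array, the returned descriptor describes a 10x10 float64 container.
desc = dace.data.create_datadescriptor(np.zeros((10, 10), dtype=np.float64))
print(type(desc), desc.shape, desc.dtype)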

dace.data.find_new_name(name, existing_names)

Returns a name that matches the given name as a prefix, but does not already exist in the given existing name set. The behavior is typically to append an underscore followed by a unique (increasing) number. If the name does not already exist in the set, it is returned as-is.

Parameters:
  • name (str) – The given name to find.

  • existing_names (Sequence[str]) – The set of existing names.

Return type:

str

Returns:

A new name that is not in existing_names.
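
Illustrative usage (the exact numeric suffix is implementation-defined):

from dace.data import find_new_name

print(find_new_name('tmp', {'tmp', 'tmp_0'}))  # e.g., 'tmp_1'
print(find_new_name('out', {'tmp', 'tmp_0'}))  # 'out' is free, so it is returned as-is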

dace.data.make_array_from_descriptor(descriptor, original_array=None, symbols=None)

Creates an array that matches the given data descriptor, and optionally copies another array to it.

Parameters:
  • descriptor (Array) – The data descriptor to create the array from.

  • original_array (Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]], None]) – An optional array to fill the content of the return value with.

  • symbols (Optional[Dict[str, Any]]) – An optional symbol mapping between symbol names and their values. Used for creating arrays with symbolic sizes.

Return type:

Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]

Returns:

A NumPy-compatible array (CuPy for GPU storage) with the specified size and strides.
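
A hedged sketch of typical usage, assuming a descriptor with a symbolic shape that is resolved through the symbols mapping:

import dace

N = dace.symbol('N')
desc = dace.data.Array(dace.float64, (N, N))
# Allocate a concrete 8x8 array matching the symbolic descriptor.
arr = dace.data.make_array_from_descriptor(desc, symbols={'N': 8})
print(arr.shape)  # (8, 8)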

dace.data.make_reference_from_descriptor(descriptor, original_array, symbols=None)

Creates an array that matches the given data descriptor from the given pointer. Shares the memory with the argument (does not create a copy).

Parameters:
  • descriptor (Array) – The data descriptor to create the array from.

  • original_array (c_void_p) – A pointer to the array whose memory the returned array will share.

  • symbols (Optional[Dict[str, Any]]) – An optional symbol mapping between symbol names and their values. Used for referencing arrays with symbolic sizes.

Return type:

Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]

Returns:

A NumPy-compatible array (CuPy for GPU storage) with the specified size and strides, sharing memory with the pointer specified in original_array.

dace.dtypes module

A module that contains various DaCe type definitions.

class dace.dtypes.AllocationLifetime(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Options for allocation span (when to allocate/deallocate) of data.

Global = 4

Allocated throughout the entire program (outer SDFG)

Persistent = 5

Allocated throughout multiple invocations (init/exit)

SDFG = 3

Allocated throughout the innermost SDFG (possibly nested)

Scope = 1

Allocated/Deallocated on innermost scope start/end

State = 2

Allocated throughout the containing state

Undefined = 6
register(*args)
class dace.dtypes.DataInstrumentationType(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Types of data container instrumentation providers.

No_Instrumentation = 1
Restore = 3
Save = 2
Undefined = 4
register(*args)
class dace.dtypes.DebugInfo(start_line, start_column=0, end_line=-1, end_column=0, filename=None)

Bases: object

Source code location identifier of a node/edge in an SDFG. Used for IDE and debugging purposes.

static from_json(json_obj, context=None)
to_json()
class dace.dtypes.DeviceType(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

An enumeration.

CPU = 1

Multi-core CPU

FPGA = 3

FPGA (Intel or Xilinx)

GPU = 2

GPU (AMD or NVIDIA)

Snitch = 4

Compute Cluster (RISC-V)

Undefined = 5
register(*args)
class dace.dtypes.InstrumentationType(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Types of instrumentation providers.

FPGA = 7
GPU_Events = 6
LIKWID_CPU = 4
LIKWID_GPU = 5
No_Instrumentation = 1
PAPI_Counters = 3
Timer = 2
Undefined = 8
register(*args)
class dace.dtypes.Language(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Available programming languages for SDFG tasklets.

CPP = 2
MLIR = 5
OpenCL = 3
Python = 1
SystemVerilog = 4
Undefined = 6
register(*args)
class dace.dtypes.OMPScheduleType(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Available OpenMP schedule types for maps with the CPU_Multicore schedule.

Default = 1

OpenMP library default

Dynamic = 3

Dynamic schedule

Guided = 4

Guided schedule

Static = 2

Static schedule

Undefined = 5
register(*args)
class dace.dtypes.ReductionType(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Reduction types natively supported by the SDFG compiler.

Bitwise_And = 7

Bitwise AND (&)

Bitwise_Or = 9

Bitwise OR (|)

Bitwise_Xor = 11

Bitwise XOR (^)

Custom = 1

Defined by an arbitrary lambda function

Div = 16

Division (only supported in OpenMP)

Exchange = 14

Set new value, return old value

Logical_And = 6

Logical AND (&&)

Logical_Or = 8

Logical OR (||)

Logical_Xor = 10

Logical XOR (!=)

Max = 3

Maximum value

Max_Location = 13

Maximum value and its location

Min = 2

Minimum value

Min_Location = 12

Minimum value and its location

Product = 5

Product

Sub = 15

Subtraction (only supported in OpenMP)

Sum = 4

Sum

Undefined = 17
class dace.dtypes.ScheduleType(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Available map schedule types in the SDFG.

CPU_Multicore = 4

OpenMP

Default = 1

Scope-default parallel schedule

FPGA_Device = 12
FPGA_Multi_Pumped = 15

Used for double pumping

GPU_Default = 7

Default scope schedule for GPU code. Specializes to the GPU_Device schedule and GPU_Global storage during inference.

GPU_Device = 8

Kernel

GPU_Persistent = 11
GPU_ThreadBlock = 9

Thread-block code

GPU_ThreadBlock_Dynamic = 10

Allows rescheduling work within a block

MPI = 3

MPI processes

SVE_Map = 6

Arm SVE

Sequential = 2

Sequential code (single-thread)

Snitch = 13
Snitch_Multicore = 14
Undefined = 16
Unrolled = 5

Unrolled code

register(*args)
class dace.dtypes.StorageType(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Available data storage types in the SDFG.

CPU_Heap = 4

Host memory allocated on heap

CPU_Pinned = 3

Host memory that can be DMA-accessed from accelerators

CPU_ThreadLocal = 5

Thread-local host memory

Default = 1

Scope-default storage location

FPGA_Global = 8

Off-chip global memory (DRAM)

FPGA_Local = 9

On-chip memory (bulk storage)

FPGA_Registers = 10

On-chip memory (fully partitioned registers)

FPGA_ShiftRegister = 11

Only accessible at constant indices

GPU_Global = 6

GPU global memory

GPU_Shared = 7

On-GPU shared memory

Register = 2

Local data on registers, stack, or equivalent memory

SVE_Register = 12

SVE register

Snitch_L2 = 14

External memory

Snitch_SSR = 15

Memory accessed by SSR streamer

Snitch_TCDM = 13

Cluster-private memory

Undefined = 16
register(*args)
class dace.dtypes.TilingType(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Available tiling types in a StripMining transformation.

CeilRange = 2
Normal = 1
NumberOfTiles = 3
Undefined = 4
register(*args)
class dace.dtypes.Typeclasses(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

An enumeration.

Undefined = 16
bool = 1
bool_ = 2
complex128 = 15
complex64 = 14
float16 = 11
float32 = 12
float64 = 13
int16 = 4
int32 = 5
int64 = 6
int8 = 3
register(*args)
uint16 = 8
uint32 = 9
uint64 = 10
uint8 = 7
class dace.dtypes.callback(return_types, *variadic_args)

Bases: typeclass

Looks like dace.callback([None, <some_native_type>], *types)
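
A hedged sketch of declaring a callback type and using it as a DaCe program argument annotation (the program and parameter names are illustrative; at call time a matching Python callable, e.g., a lambda, would be passed for f):

import dace

# A callback that returns a float64 and takes two float64 arguments.
cb_t = dace.callback(dace.float64, dace.float64, dace.float64)

@dace.program
def apply_cb(A: dace.float64[10], f: cb_t):
    for i in range(10):
        A[i] = f(A[i], 2.0)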

as_arg(name)
as_ctypes()

Returns the ctypes version of the typeclass.

as_numpy_dtype()
cfunc_return_type()

Returns the typeclass of the return value of the function call.

Return type:

typeclass

static from_json(json_obj, context=None)
get_trampoline(pyfunc, other_arguments, refs)
is_scalar_function()

Returns True if the callback is a function that returns a scalar value (or nothing). Scalar functions are the only ones that can be used within a dace.tasklet explicitly.

Return type:

bool

to_json()
dace.dtypes.can_access(schedule, storage)

Identifies whether a container of a storage type can be accessed in a specific schedule.

dace.dtypes.can_allocate(storage, schedule)

Identifies whether a container of a storage type can be allocated in a specific schedule. Used to determine arguments to subgraphs by the innermost scope in which a container can be allocated. For example, FPGA_Global memory cannot be allocated from within the FPGA scope, and GPU shared memory cannot be allocated outside of device-level code.

Parameters:
  • storage (StorageType) – The storage type of the data container to allocate.

  • schedule (ScheduleType) – The scope schedule to query.

Returns:

True if the container can be allocated, False otherwise.
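
Illustrative queries (hedged; the expected results follow from the rules described above):

from dace.dtypes import StorageType, ScheduleType, can_allocate

print(can_allocate(StorageType.GPU_Shared, ScheduleType.GPU_Device))     # expected: True
print(can_allocate(StorageType.GPU_Shared, ScheduleType.CPU_Multicore))  # expected: False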

class dace.dtypes.compiletime

Bases: object

Data descriptor type hint signalling that argument evaluation is deferred to call time.

Example usage:

@dace.program
def example(A: dace.float64[20], constant: dace.compiletime):
    if constant == 0:
        return A + 1
    else:
        return A + 2

In the above code, constant will be replaced with its value at call time during parsing.

dace.dtypes.deduplicate(iterable)

Removes duplicates in the passed iterable.

dace.dtypes.is_array(obj)

Returns True if an object implements the data_ptr(), __array_interface__ or __cuda_array_interface__ standards (supported by NumPy, Numba, CuPy, PyTorch, etc.). If the interface is supported, pointers can be directly obtained using the _array_interface_ptr function.

Parameters:

obj (Any) – The given object.

Return type:

bool

Returns:

True iff the object implements the array interface.
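
For example, NumPy arrays implement __array_interface__, while plain Python lists do not:

import numpy as np
import dace

print(dace.dtypes.is_array(np.zeros(10)))  # True
print(dace.dtypes.is_array([0.0] * 10))    # False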

dace.dtypes.is_gpu_array(obj)

Returns True if an object is a GPU array, i.e., implements the __cuda_array_interface__ standard (supported by Numba, CuPy, PyTorch, etc.). If the interface is supported, pointers can be directly obtained using the _array_interface_ptr function.

Parameters:

obj (Any) – The given object.

Return type:

bool

Returns:

True iff the object implements the CUDA array interface.

dace.dtypes.isallowed(var, allow_recursive=False)

Returns True if a given object is allowed in a DaCe program.

Parameters:

allow_recursive – whether to allow dicts or lists containing constants.

dace.dtypes.isconstant(var)

Returns True if a variable is designated a constant (i.e., that can be directly generated in code).

dace.dtypes.ismodule(var)

Returns True if a given object is a module.

dace.dtypes.ismodule_and_allowed(var)

Returns True if a given object is a module and is one of the allowed modules in DaCe programs.

dace.dtypes.ismoduleallowed(var)

Helper function to determine the source module of an object, and whether it is allowed in DaCe programs.

dace.dtypes.json_to_typeclass(obj, context=None)
dace.dtypes.max_value(dtype)

Get a max value literal for dtype.

dace.dtypes.min_value(dtype)

Get a min value literal for dtype.
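
A small illustration (the literals correspond to the numeric limits of the dtype):

import dace

print(dace.dtypes.max_value(dace.int32))  # 2147483647
print(dace.dtypes.min_value(dace.int32))  # -2147483648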

class dace.dtypes.opaque(typename)

Bases: typeclass

A data type for an opaque object, useful for C bindings/libnodes, e.g., MPI_Request.

as_ctypes()

Returns the ctypes version of the typeclass.

as_numpy_dtype()
static from_json(json_obj, context=None)
to_json()
dace.dtypes.paramdec(dec)

Parameterized decorator meta-decorator. Enables using @decorator, @decorator(), and @decorator(…) with the same function.

class dace.dtypes.pointer(wrapped_typeclass)

Bases: typeclass

A data type for a pointer to an existing typeclass.

Example use:

dace.pointer(dace.struct(x=dace.float32, y=dace.float32)).

as_ctypes()

Returns the ctypes version of the typeclass.

as_numpy_dtype()
property base_type
static from_json(json_obj, context=None)
property ocltype
to_json()
dace.dtypes.ptrtocupy(ptr, inner_ctype, shape)
dace.dtypes.ptrtonumpy(ptr, inner_ctype, shape)
class dace.dtypes.pyobject

Bases: opaque

A generic data type for Python objects in un-annotated callbacks. It cannot be used inside a DaCe program, but can be passed back to other Python callbacks. Use with caution, and ensure the value is not removed by the garbage collector or the program will crash.

as_ctypes()

Returns the ctypes version of the typeclass.

as_numpy_dtype()
to_python(obj_id)
dace.dtypes.reduction_identity(dtype, red)

Returns known identity values (which we can safely reset transients to) for built-in reduction types.

Parameters:
  • dtype (typeclass) – The data type in which to return the identity value.

  • red (ReductionType) – The reduction type to query.

Return type:

Any

Returns:

Identity value in input type, or None if not found.
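
For example (a hedged sketch; identities follow the usual algebraic rules):

import dace
from dace.dtypes import ReductionType, reduction_identity

print(reduction_identity(dace.float64, ReductionType.Sum))      # 0
print(reduction_identity(dace.float64, ReductionType.Product))  # 1
print(reduction_identity(dace.float64, ReductionType.Custom))   # None (no known identity)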

dace.dtypes.result_type_of(lhs, *rhs)

Returns the largest between two or more types (dace.types.typeclass) according to C semantics.
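
For instance, following C-style type promotion:

import dace
from dace.dtypes import result_type_of

print(result_type_of(dace.int32, dace.float32))           # float32
print(result_type_of(dace.int8, dace.int16, dace.int64))  # int64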

class dace.dtypes.stringtype

Bases: pointer

A specialization of t