dace package

Subpackages

Submodules

dace.builtin_hooks module

A set of built-in hooks.

dace.builtin_hooks.cli_optimize_on_call(sdfg)

Calls a command-line interface for interactive SDFG transformations on every DaCe program call.

Parameters:

sdfg (SDFG) – The current SDFG to optimize.

dace.builtin_hooks.instrument(itype, filter, annotate_maps=True, annotate_tasklets=False, annotate_states=False, annotate_sdfgs=False)

Context manager that instruments every called DaCe program. Depending on the given instrumentation type and parameters, annotates the corresponding elements of the SDFG. Filtering is possible with a string (supporting * and ? wildcards) or with a predicate function.

Example usage:

with dace.instrument(dace.InstrumentationType.GPU_Events, 
                     filter='*add??') as profiler:
    some_program(...)
    # ...
    other_program(...)

# Print instrumentation report for last call
print(profiler.reports[-1])
Parameters:
  • itype (InstrumentationType) – Instrumentation type to use.

  • filter (Union[str, Callable[[Any], bool], None]) – An optional string with * and ? wildcards, or a function that receives one parameter and determines whether to instrument the element.

  • annotate_maps (bool) – If True, instruments scopes (e.g., map, consume) in the SDFGs.

  • annotate_tasklets (bool) – If True, instruments tasklets in the SDFGs.

  • annotate_states (bool) – If True, instruments states in the SDFGs.

  • annotate_sdfgs (bool) – If True, instruments whole SDFGs and sub-SDFGs.
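For string filters, matching follows shell-style wildcard semantics. A sketch of how such a filter can be interpreted, using Python's fnmatch (the exact matching logic used internally is an assumption here):

```python
from fnmatch import fnmatch

def matches(pattern, element_name):
    # '*' matches any sequence of characters, '?' matches exactly one
    return fnmatch(element_name, pattern)

# The pattern '*add??' from the example above matches names that
# contain 'add' followed by exactly two trailing characters:
print(matches('*add??', 'vector_add_0'))  # True
print(matches('*add??', 'vector_add'))    # False
```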

dace.builtin_hooks.instrument_data(ditype, filter, restore_from=None, verbose=False)

Context manager that instruments (serializes/deserializes) the data of every called DaCe program. This can be used for reproducible runs and debugging. Depending on the given data instrumentation type and parameters, annotates the access nodes on the SDFG. Filtering is possible with a string (supporting * and ? wildcards) or with a predicate function. An optional instrumented data report can be given to load a specific set of data.

Example usage:

@dace
def sample(a: dace.float64, b: dace.float64):
    arr = a + b
    return arr + 1

with dace.instrument_data(dace.DataInstrumentationType.Save, filter='a??'):
    result_ab = sample(a, b)

# Optionally, get the serialized data containers
dreport = sdfg.get_instrumented_data()
assert dreport.keys() == {'arr'}  # dreport['arr'] is now the internal ``arr``

# Reload latest instrumented data (can be customized if ``restore_from`` is given)
with dace.instrument_data(dace.DataInstrumentationType.Restore, filter='a??'):
    result_cd = sample(c, d)  # where ``c, d`` are different from ``a, b``

assert numpy.allclose(result_ab, result_cd)
Parameters:
  • ditype (DataInstrumentationType) – Data instrumentation type to use.

  • filter (Union[str, Callable[[Any], bool], None]) – An optional string with * and ? wildcards, or a function that receives one parameter and determines whether to instrument the access node.

  • restore_from (Union[str, InstrumentedDataReport, None]) – An optional parameter that specifies which instrumented data report to load data from. It could be a path to a folder, an InstrumentedDataReport object, or None to load the latest generated report.

  • verbose (bool) – If True, prints information about created and loaded instrumented data reports.

dace.builtin_hooks.profile(repetitions=100, warmup=0, tqdm_leave=True, print_results=True)

Context manager that enables profiling of each called DaCe program. If repetitions is greater than 1, the program is run multiple times and the average execution time is reported.

Example usage:

with dace.profile(repetitions=100) as profiler:
    some_program(...)
    # ...
    other_program(...)

# Print all execution times of the last called program (other_program)
print(profiler.times[-1])
Parameters:
  • repetitions (int) – The number of times to run each DaCe program.

  • warmup (int) – Number of additional repetitions to run the program without measuring time.

  • tqdm_leave (bool) – Sets the leave parameter of the tqdm progress bar (useful for nested progress bars). Ignored if tqdm progress bar is not used.

  • print_results (bool) – Whether or not to print the median execution time after all repetitions.

Note:

Running functions multiple times may affect the results of the program.

dace.config module

class dace.config.Config

Bases: object

Interface to the DaCe hierarchical configuration file.

static append(*key_hierarchy, value=None, autosave=False)

Appends to the current value of a given configuration entry and sets it.

Parameters:
  • key_hierarchy – A tuple of strings leading to the configuration entry. For example: (‘a’, ‘b’, ‘c’) would be configuration entry c which is in the path a->b.

  • value – The value to append.

  • autosave – If True, saves the configuration to the file after modification.

Returns:

Current configuration entry value.

Examples:

Config.append('compiler', 'cpu', 'args', value='-fPIC')
static cfg_filename()

Returns the current configuration file path.

default_filename = '.dace.conf'
static get(*key_hierarchy)

Returns the current value of a given configuration entry.

Parameters:

key_hierarchy – A tuple of strings leading to the configuration entry. For example: (‘a’, ‘b’, ‘c’) would be configuration entry c which is in the path a->b.

Returns:

Configuration entry value.

static get_bool(*key_hierarchy)

Returns the current value of a given boolean configuration entry. This specialization allows more string types to be converted to boolean, e.g., due to environment variable overrides.

Parameters:

key_hierarchy – A tuple of strings leading to the configuration entry. For example: (‘a’, ‘b’, ‘c’) would be configuration entry c which is in the path a->b.

Returns:

Configuration entry value (as a boolean).

static get_default(*key_hierarchy)

Returns the default value of a given configuration entry. Takes into account the current operating system.

Parameters:

key_hierarchy – A tuple of strings leading to the configuration entry. For example: (‘a’, ‘b’, ‘c’) would be configuration entry c which is in the path a->b.

Returns:

Default configuration value.

static get_metadata(*key_hierarchy)

Returns the configuration specification of a given entry from the schema.

Parameters:

key_hierarchy – A tuple of strings leading to the configuration entry. For example: (‘a’, ‘b’, ‘c’) would be configuration entry c which is in the path a->b.

Returns:

Configuration specification as a dictionary.

static initialize()

Initializes configuration.

Note:

This function runs automatically when the module is loaded.

static load(filename=None)

Loads a configuration from an existing file.

Parameters:

filename – The file to load. If unspecified, uses default configuration file.

static load_schema(filename=None)

Loads a configuration schema from an existing file.

Parameters:

filename – The file to load. If unspecified, uses default schema file.

static nondefaults()

Returns the configuration entries that differ from their default values.

Return type:

Dict[str, Any]

static save(path=None, all=False)

Saves the current configuration to a file.

Parameters:
  • path – The file to save to. If unspecified, uses default configuration file.

  • all (bool) – If False, only saves non-default configuration entries. Otherwise saves all entries.

static set(*key_hierarchy, value=None, autosave=False)

Sets the current value of a given configuration entry.

Parameters:
  • key_hierarchy – A tuple of strings leading to the configuration entry. For example: (‘a’, ‘b’, ‘c’) would be configuration entry c which is in the path a->b.

  • value – The value to set.

  • autosave – If True, saves the configuration to the file after modification.

Examples:

Config.set('profiling', value=True)
dace.config.set_temporary(*path, value)

Temporarily set configuration value at path to value, and reset it after the context manager exits.

Example:

print(Config.get("compiler", "build_type"))
with set_temporary("compiler", "build_type", value="Debug"):
    print(Config.get("compiler", "build_type"))
print(Config.get("compiler", "build_type"))
dace.config.temporary_config()

Creates a context where all configuration options changed will be reset when the context exits.

Example:

with temporary_config():
    Config.set("testing", "serialization", value=True)
    Config.set("optimizer", "autooptimize", value=True)
    foo()

dace.data module

class dace.data.Array(*args, **kwargs)

Bases: Data

Array data descriptor. This object represents a multi-dimensional data container in SDFGs that can be accessed and modified. The definition does not contain the actual array, but rather a description of how to construct it and how it should behave.

The array definition is flexible in terms of data allocation: it allows arbitrary multidimensional, potentially symbolic shapes (e.g., an array of size N+1 x M will have shape=(N+1, M)) and arbitrary data typeclasses (dtype). The physical data layout of the array is controlled by several properties:

  • The strides property determines the ordering and layout of the dimensions — it specifies how many elements in memory are skipped whenever one element in that dimension is advanced. For example, the contiguous dimension always has a stride of 1; a C-style MxN array will have strides (N, 1), whereas a FORTRAN-style array of the same size will have (1, M). Strides can be larger than the shape, which allows post-padding of the contents of each dimension.

  • The start_offset property is a number of elements to pad the beginning of the memory buffer with. This is used to ensure that a specific index is aligned as a form of pre-padding (that element may not necessarily be the first element, e.g., in the case of halo or “ghost cells” in stencils).

  • The total_size property determines how large the total allocation size is. Normally, it is the product of the shape elements, but if pre- or post-padding is involved it may be larger.

  • alignment provides alignment guarantees (in bytes) of the first element in the allocated array. This is used by allocators in the code generator to ensure certain addresses are expected to be aligned, e.g., for vectorization.

  • Lastly, a property called offset controls the logical access of the array, i.e., what would be the first element’s index after padding and alignment. This mimics a language feature prominent in scientific languages such as FORTRAN, where one could set an array to begin with 1, or any arbitrary index. By default this is set to zero.

To summarize with an example, a two-dimensional array with pre- and post-padding looks as follows:

[xxx][          |xx]
     [          |xx]
     [          |xx]
     [          |xx]
     ---------------
     [xxxxxxxxxxxxx]

shape = (4, 10)
strides = (12, 1)
start_offset = 3
total_size = 63   [= 3 + 12 * 5]
offset = (0, 0)

Notice that the last padded row does not appear in strides, but is a consequence of total_size being larger.

Apart from memory layout, other properties of Array help the data-centric transformation infrastructure make decisions about the array. allow_conflicts states that warnings should not be printed if potentially conflicting accesses (e.g., data races) occur. may_alias inhibits transformations that would otherwise assume that this array does not overlap with other arrays in the same context (e.g., function).
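The layout properties compose arithmetically. A plain-Python sketch of how the allocation size in the example above is derived (this illustrates the description, not DaCe internals):

```python
# Layout of the two-dimensional padded array in the example above
shape = (4, 10)
strides = (12, 1)    # each row is post-padded from 10 to 12 elements
start_offset = 3     # three elements of pre-padding

# The allocation covers one extra (padded) row beyond the shape,
# so five full rows of 12 elements follow the pre-padding:
rows_allocated = shape[0] + 1
total_size = start_offset + strides[0] * rows_allocated
print(total_size)  # 63, matching total_size = 3 + 12 * 5
```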

alignment

Allocation alignment in bytes (0 uses compiler-default)

allow_conflicts

If enabled, allows more than one memlet to write to the same memory location without conflict resolution.

as_arg(with_types=True, for_call=False, name=None)

Returns a string for a C++ function signature (e.g., int *A).

as_python_arg(with_types=True, for_call=False, name=None)

Returns a string for a Data-Centric Python function signature (e.g., A: dace.int32[M]).

clone()
covers_range(rng)
property free_symbols

Returns a set of undefined symbols in this data descriptor.

classmethod from_json(json_obj, context=None)
is_equivalent(other)

Check for equivalence (shape and type) of two data descriptors.

may_alias

This pointer may alias with other pointers in the same function

offset

Initial offset to translate all indices by.

optional

Specifies whether this array may have a value of None. If False, the array must not be None. If this property is not set, it is inferred by other properties and the OptionalArrayInference pass.

pool

Hint to the allocator that using a memory pool is preferred

properties()
set_shape(new_shape, strides=None, total_size=None, offset=None)

Updates the shape of an array.

sizes()
start_offset

Allocation offset elements for manual alignment (pre-padding)

strides

For each dimension, the number of elements to skip in order to obtain the next element in that dimension.

to_json()
total_size

The total allocated size of the array. Can be used for padding.

used_symbols(all_symbols)

Returns a set of symbols that are used by this data descriptor.

Parameters:

all_symbols (bool) – Include not-strictly-free symbols that are used by this data descriptor, e.g., shape and size of a global array.

Return type:

Set[Union[Basic, SymExpr]]

Returns:

A set of symbols that are used by this data descriptor. NOTE: The results are symbolic rather than a set of strings.

validate()

Validate the correctness of this object. Raises an exception on error.

class dace.data.ArrayReference(*args, **kwargs)

Bases: Array, Reference

Data descriptor that acts as a dynamic reference of another array. See Reference for more information.

In order to enable data-centric analysis and optimizations, avoid using References as much as possible.

as_array()
properties()
validate()

Validate the correctness of this object. Raises an exception on error.

class dace.data.ArrayView(*args, **kwargs)

Bases: Array, View

Data descriptor that acts as a static reference (or view) of another array. Can be used to reshape or reinterpret existing data without copying it.

In the Python frontend, numpy.reshape and numpy.ndarray.view both generate ArrayViews.
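The no-copy semantics mirror NumPy's own view behavior, which ArrayView models. A small NumPy illustration (outside of DaCe) of what "reinterpreting without copying" means:

```python
import numpy as np

a = np.zeros((2, 6))
b = a.reshape(3, 4)   # inside a DaCe program, this would produce an ArrayView
b[0, 0] = 42.0        # writing through the view mutates the original buffer

print(a[0, 0])  # 42.0: both names refer to the same memory
```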

as_array()
properties()
validate()

Validate the correctness of this object. Raises an exception on error.

class dace.data.ContainerArray(*args, **kwargs)

Bases: Array

An array that may contain other data containers (e.g., Structures, other arrays).

classmethod from_json(json_obj, context=None)
properties()
stype

Object property of type Data

class dace.data.ContainerArrayReference(*args, **kwargs)

Bases: ContainerArray, Reference

Data descriptor that acts as a dynamic reference of another data container array. See Reference for more information.

In order to enable data-centric analysis and optimizations, avoid using References as much as possible.

as_array()
properties()
validate()

Validate the correctness of this object. Raises an exception on error.

class dace.data.ContainerView(*args, **kwargs)

Bases: ContainerArray, View

Data descriptor that acts as a view of another container array. Can be used to access nested container types without a copy.

as_array()
properties()
validate()

Validate the correctness of this object. Raises an exception on error.

class dace.data.Data(*args, **kwargs)

Bases: object

Data type descriptors that can be used as references to memory. Examples: Arrays, Streams, custom arrays (e.g., sparse matrices).

as_arg(with_types=True, for_call=False, name=None)

Returns a string for a C++ function signature (e.g., int *A).

as_python_arg(with_types=True, for_call=False, name=None)

Returns a string for a Data-Centric Python function signature (e.g., A: dace.int32[M]).

property ctype
debuginfo

Object property of type DebugInfo

dtype

Object property of type typeclass

property free_symbols: Set[Basic | SymExpr]

Returns a set of undefined symbols in this data descriptor.

is_equivalent(other)

Check for equivalence (shape and type) of two data descriptors.

lifetime

Data allocation span

location

Full storage location identifier (e.g., rank, GPU ID)

properties()
set_strides_from_layout(*dimensions, alignment=1, only_first_aligned=False)

Sets the absolute strides and total size of this data descriptor, according to the given dimension ordering and alignment.

Parameters:
  • dimensions (int) – A sequence of integers representing a permutation of the descriptor’s dimensions.

  • alignment (Union[Basic, SymExpr]) – Padding (in elements) at the end, ensuring stride is a multiple of this number. 1 (default) means no padding.

  • only_first_aligned (bool) – If True, only the first dimension is padded with alignment. Otherwise all dimensions are.

shape

Object property of type tuple

storage

Storage location

strides_from_layout(*dimensions, alignment=1, only_first_aligned=False)

Returns the absolute strides and total size of this data descriptor, according to the given dimension ordering and alignment.

Parameters:
  • dimensions (int) – A sequence of integers representing a permutation of the descriptor’s dimensions.

  • alignment (Union[Basic, SymExpr]) – Padding (in elements) at the end, ensuring stride is a multiple of this number. 1 (default) means no padding.

  • only_first_aligned (bool) – If True, only the first dimension is padded with alignment. Otherwise all dimensions are.

Return type:

Tuple[Tuple[Union[Basic, SymExpr]], Union[Basic, SymExpr]]

Returns:

A 2-tuple of (tuple of strides, total size).
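The computation can be sketched in plain Python for concrete integer shapes. This is a simplification that ignores symbolic sizes and alignment padding, and the ordering convention (fastest-changing dimension first) is an assumption for illustration:

```python
def strides_from_layout(shape, dimensions):
    """Compute strides and total size for a dimension ordering.

    ``dimensions`` is a permutation of the dimension indices; the
    first entry becomes the contiguous (stride-1) dimension.
    """
    strides = [0] * len(shape)
    stride = 1
    for dim in dimensions:
        strides[dim] = stride
        stride *= shape[dim]
    return tuple(strides), stride

# C-style layout of a 4x10 array: last dimension contiguous
print(strides_from_layout((4, 10), (1, 0)))  # ((10, 1), 40)
# FORTRAN-style layout: first dimension contiguous
print(strides_from_layout((4, 10), (0, 1)))  # ((1, 4), 40)
```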

to_json()
property toplevel
transient

Object property of type bool

used_symbols(all_symbols)

Returns a set of symbols that are used by this data descriptor.

Parameters:

all_symbols (bool) – Include not-strictly-free symbols that are used by this data descriptor, e.g., shape and size of a global array.

Return type:

Set[Union[Basic, SymExpr]]

Returns:

A set of symbols that are used by this data descriptor. NOTE: The results are symbolic rather than a set of strings.

validate()

Validate the correctness of this object. Raises an exception on error.

property veclen
class dace.data.Reference

Bases: object

Data descriptor that acts as a dynamic reference of another data descriptor. It can be used just like a regular data descriptor, except that it could be set to an arbitrary container (or subset thereof) at runtime. To set a reference, connect another access node to it and use the “set” connector.

In order to enable data-centric analysis and optimizations, avoid using References as much as possible.

static view(viewed_container, debuginfo=None)

Create a new Reference of the specified data container.

Parameters:
  • viewed_container (Data) – The data container properties of this reference.

  • debuginfo – Specific source line information for this reference, if different from viewed_container.

Returns:

A new subclass of Reference with the appropriate viewed container properties, e.g., StructureReference for a Structure.

class dace.data.Scalar(*args, **kwargs)

Bases: Data

Data descriptor of a scalar value.

property alignment
allow_conflicts

Object property of type bool

as_arg(with_types=True, for_call=False, name=None)

Returns a string for a C++ function signature (e.g., int *A).

as_python_arg(with_types=True, for_call=False, name=None)

Returns a string for a Data-Centric Python function signature (e.g., A: dace.int32[M]).

clone()
covers_range(rng)
static from_json(json_obj, context=None)
is_equivalent(other)

Check for equivalence (shape and type) of two data descriptors.

property may_alias: bool
property offset
property optional: bool
property pool: bool
properties()
sizes()
property start_offset
property strides
property total_size
class dace.data.Stream(*args, **kwargs)

Bases: Data

Stream (or stream array) data descriptor.

as_arg(with_types=True, for_call=False, name=None)

Returns a string for a C++ function signature (e.g., int *A).

buffer_size

Size of internal buffer.

clone()
covers_range(rng)
property free_symbols

Returns a set of undefined symbols in this data descriptor.

classmethod from_json(json_obj, context=None)
is_equivalent(other)

Check for equivalence (shape and type) of two data descriptors.

is_stream_array()
property may_alias: bool
offset

Object property of type list

property optional: bool
properties()
size_string()
sizes()
property start_offset
property strides
to_json()
property total_size
used_symbols(all_symbols)

Returns a set of symbols that are used by this data descriptor.

Parameters:

all_symbols (bool) – Include not-strictly-free symbols that are used by this data descriptor, e.g., shape and size of a global array.

Return type:

Set[Union[Basic, SymExpr]]

Returns:

A set of symbols that are used by this data descriptor. NOTE: The results are symbolic rather than a set of strings.

class dace.data.Structure(*args, **kwargs)

Bases: Data

Base class for structures.

as_arg(with_types=True, for_call=False, name=None)

Returns a string for a C++ function signature (e.g., int *A).

clone()
property free_symbols: Set[Basic | SymExpr]

Returns a set of undefined symbols in this data descriptor.

static from_json(json_obj, context=None)
keys()
property may_alias: bool
members

Dictionary of structure members

name

Structure type name

property offset
property optional: bool
property pool: bool
properties()
property start_offset
property strides
property total_size
class dace.data.StructureReference(*args, **kwargs)

Bases: Structure, Reference

Data descriptor that acts as a dynamic reference of another Structure. See Reference for more information.

In order to enable data-centric analysis and optimizations, avoid using References as much as possible.

as_structure()
properties()
validate()

Validate the correctness of this object. Raises an exception on error.

class dace.data.StructureView(*args, **kwargs)

Bases: Structure, View

Data descriptor that acts as a view of another structure.

as_structure()
static from_json(json_obj, context=None)
properties()
validate()

Validate the correctness of this object. Raises an exception on error.

class dace.data.Tensor(*args, **kwargs)

Bases: Structure

Abstraction for Tensor storage format.

This abstraction is based on [https://doi.org/10.1145/3276493].

static from_json(json_obj, context=None)
index_ordering

Object property of type list

indices

Object property of type list

properties()
tensor_shape

Object property of type tuple

value_count

Object property of type SymbolicProperty

value_dtype

Object property of type typeclass

class dace.data.TensorAssemblyType(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Types of possible assembly strategies for the individual indices.

NoAssembly: assembly of this index is not possible.

Insert: the index allows inserting elements at arbitrary positions (e.g., Dense).

Append: the index allows appending to a list of existing coordinates. Depending on the append order, the index may or may not be ordered; this can be changed by sorting the index after assembly.

Append = 3
Insert = 2
NoAssembly = 1
class dace.data.TensorIndex

Bases: ABC

Abstract base class for tensor index implementations.

abstract property assembly: TensorAssemblyType

What assembly type is supported by the index.

See TensorAssemblyType for reference.

abstract property branchless: bool

True if the level doesn’t branch, False otherwise.

A level is considered branchless if no coordinate has a sibling (another coordinate with the same ancestor) and all coordinates in the parent level have a child; in other words, if there is a bijection between the coordinates in this level and those in the parent level. An example of this is the Singleton index level in the COO format.

abstract property compact: bool

True if the level is compact, False otherwise.

A level is compact if no two coordinates are separated by an unlabeled node that does not encode a coordinate. An example of a compact level can be found in CSR, while the DIA format's range and offset levels are not compact (they have entries that would correspond to entries outside the tensor's index range, e.g., column -1).

abstract fields(lvl, dummy_symbol)

Generates the fields needed for the index.

Return type:

Dict[str, Data]

Returns:

a Dict of fields that need to be present in the struct

classmethod from_json(json_obj, context=None)
abstract property full: bool

True if the level is full, False otherwise.

A level is considered full if it encompasses all valid coordinates along the corresponding tensor dimension.

abstract property iteration_type: TensorIterationTypes

Iteration capability supported by this index.

See TensorIterationTypes for reference.

abstract property locate: bool

True if the index supports locate (i.e., random access), False otherwise.

abstract property ordered: bool

True if the level is ordered, False otherwise.

A level is ordered when all coordinates that share the same ancestor are sorted by increasing value (e.g., in typical CSR).

to_json()
abstract property unique: bool

True if the coordinates in the level are unique, False otherwise.

A level is considered unique if no collection of coordinates that share the same ancestor contains duplicates. In CSR this is True, in COO it is not.

class dace.data.TensorIndexCompressed(*args, **kwargs)

Bases: TensorIndex

Tensor level that stores coordinates in a segmented array.

Levels of this type are compressed using a segmented array. The pos array holds the start and end positions of the segment in the crd (coordinate) array that holds the child coordinates corresponding to the parent.
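The pos/crd pair is the same segmented-array encoding used by CSR. A plain-Python illustration for a small sparse matrix (independent of the DaCe classes):

```python
# A 3x4 sparse matrix with a compressed row level:
#   [[5, 0, 0, 7],
#    [0, 0, 0, 0],
#    [0, 3, 0, 0]]
pos = [0, 2, 2, 3]   # row i's entries occupy crd[pos[i]:pos[i + 1]]
crd = [0, 3, 1]      # column coordinates of the nonzeros
val = [5, 7, 3]      # the corresponding values

# Iterate over the nonzeros of row 0:
row0 = [(crd[k], val[k]) for k in range(pos[0], pos[1])]
print(row0)  # [(0, 5), (3, 7)]
```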

property assembly: TensorAssemblyType

What assembly type is supported by the index.

See TensorAssemblyType for reference.

property branchless: bool

True if the level doesn’t branch, False otherwise.

A level is considered branchless if no coordinate has a sibling (another coordinate with the same ancestor) and all coordinates in the parent level have a child; in other words, if there is a bijection between the coordinates in this level and those in the parent level. An example of this is the Singleton index level in the COO format.

property compact: bool

True if the level is compact, False otherwise.

A level is compact if no two coordinates are separated by an unlabeled node that does not encode a coordinate. An example of a compact level can be found in CSR, while the DIA format's range and offset levels are not compact (they have entries that would correspond to entries outside the tensor's index range, e.g., column -1).

fields(lvl, dummy_symbol)

Generates the fields needed for the index.

Return type:

Dict[str, Data]

Returns:

a Dict of fields that need to be present in the struct

property full: bool

True if the level is full, False otherwise.

A level is considered full if it encompasses all valid coordinates along the corresponding tensor dimension.

property iteration_type: TensorIterationTypes

Iteration capability supported by this index.

See TensorIterationTypes for reference.

property locate: bool

True if the index supports locate (i.e., random access), False otherwise.

property ordered: bool

True if the level is ordered, False otherwise.

A level is ordered when all coordinates that share the same ancestor are sorted by increasing value (e.g., in typical CSR).

properties()
property unique: bool

True if the coordinates in the level are unique, False otherwise.

A level is considered unique if no collection of coordinates that share the same ancestor contains duplicates. In CSR this is True, in COO it is not.

class dace.data.TensorIndexDense(*args, **kwargs)

Bases: TensorIndex

Dense tensor index.

Levels of this type encode the coordinate in the interval [0, N), where N is the size of the corresponding dimension. This level does not need any index structure beyond the corresponding dimension size.

property assembly: TensorAssemblyType

What assembly type is supported by the index.

See TensorAssemblyType for reference.

property branchless: bool

True if the level doesn’t branch, False otherwise.

A level is considered branchless if no coordinate has a sibling (another coordinate with the same ancestor) and all coordinates in the parent level have a child; in other words, if there is a bijection between the coordinates in this level and those in the parent level. An example of this is the Singleton index level in the COO format.

property compact: bool

True if the level is compact, False otherwise.

A level is compact if no two coordinates are separated by an unlabeled node that does not encode a coordinate. An example of a compact level can be found in CSR, while the DIA format's range and offset levels are not compact (they have entries that would correspond to entries outside the tensor's index range, e.g., column -1).

fields(lvl, dummy_symbol)

Generates the fields needed for the index.

Return type:

Dict[str, Data]

Returns:

a Dict of fields that need to be present in the struct

property full: bool

True if the level is full, False otherwise.

A level is considered full if it encompasses all valid coordinates along the corresponding tensor dimension.

property iteration_type: TensorIterationTypes

Iteration capability supported by this index.

See TensorIterationTypes for reference.

property locate: bool

True if the index supports locate (i.e., random access), False otherwise.

property ordered: bool

True if the level is ordered, False otherwise.

A level is ordered when all coordinates that share the same ancestor are sorted by increasing value (e.g., in typical CSR).

properties()
property unique: bool

True if the coordinates in the level are unique, False otherwise.

A level is considered unique if no collection of coordinates that share the same ancestor contains duplicates. In CSR this is True, in COO it is not.

class dace.data.TensorIndexOffset(*args, **kwargs)

Bases: TensorIndex

Tensor index that encodes the next coordinates as offsets from the parent.

Given a parent coordinate i and an offset index k, the level encodes the coordinate j = i + offset[k].
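This is, for example, how the offset level of the DIA (diagonal) format works. An illustrative plain-Python sketch of the coordinate computation above (the offset values are made up):

```python
# Offsets of the stored diagonals relative to the row index
offset = [-1, 0, 2]

def child_coordinate(i, k):
    # Parent coordinate i (e.g., a row index), offset index k
    return i + offset[k]

print(child_coordinate(3, 0))  # 2: one below the main diagonal
print(child_coordinate(3, 2))  # 5: two above the main diagonal
```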

property assembly: TensorAssemblyType

What assembly type is supported by the index.

See TensorAssemblyType for reference.

property branchless: bool

True if the level doesn’t branch, False otherwise.

A level is considered branchless if no coordinate has a sibling (another coordinate with the same ancestor) and all coordinates in the parent level have a child; in other words, if there is a bijection between the coordinates in this level and those in the parent level. An example of this is the Singleton index level in the COO format.

property compact: bool

True if the level is compact, False otherwise.

A level is compact if no two coordinates are separated by an unlabeled node that does not encode a coordinate. An example of a compact level can be found in CSR, while the DIA format's range and offset levels are not compact (they have entries that would correspond to entries outside the tensor's index range, e.g., column -1).

fields(lvl, dummy_symbol)

Generates the fields needed for the index.

Return type:

Dict[str, Data]

Returns:

a Dict of fields that need to be present in the struct

property full: bool

True if the level is full, False otherwise.

A level is considered full if it encompasses all valid coordinates along the corresponding tensor dimension.

property iteration_type: TensorIterationTypes

Iteration capability supported by this index.

See TensorIterationTypes for reference.

property locate: bool

True if the index supports locate (i.e., random access), False otherwise.

property ordered: bool

True if the level is ordered, False otherwise.

A level is ordered when all coordinates that share the same ancestor are sorted by increasing value (e.g., in typical CSR).

properties()
property unique: bool

True if the coordinates in the level are unique, False otherwise.

A level is considered unique if no collection of coordinates that share the same ancestor contains duplicates. In CSR this is True, in COO it is not.

class dace.data.TensorIndexRange(*args, **kwargs)

Bases: TensorIndex

Tensor index that encodes an interval of coordinates for every parent.

The interval is computed from an offset for each parent, together with the tensor dimension size of this level (M) and that of the parent level (N). Given the parent coordinate i, the level encodes the range of coordinates between max(0, -offset[i]) and min(N, M - offset[i]).
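The formula can be sketched in plain Python for concrete sizes (the values of M, N, and the offsets are illustrative; the symbols follow the description above):

```python
M, N = 4, 6           # dimension size of this level (M) and the parent level (N)
offset = [-2, 0, 1]   # one offset per parent coordinate

def coordinate_range(i):
    # Valid child coordinates for parent coordinate i
    lower = max(0, -offset[i])
    upper = min(N, M - offset[i])
    return list(range(lower, upper))

print(coordinate_range(0))  # [2, 3, 4, 5]  (max(0, 2) to min(6, 6))
print(coordinate_range(2))  # [0, 1, 2]     (max(0, -1) to min(6, 3))
```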

property assembly: TensorAssemblyType

What assembly type is supported by the index.

See TensorAssemblyType for reference.

property branchless: bool

True if the level doesn’t branch, False otherwise.

A level is considered branchless if no coordinate has a sibling (another coordinate with the same ancestor) and all coordinates in the parent level have a child; in other words, if there is a bijection between the coordinates in this level and the parent level. An example of this is the Singleton index level in the COO format.

property compact: bool

True if the level is compact, False otherwise.

A level is compact if no two coordinates are separated by an unlabeled node that does not encode a coordinate. An example of a compact level can be found in CSR, while the DIA format’s range and offset levels are not compact (they have entries that would correspond to entries outside the tensor’s index range, e.g., column -1).

fields(lvl, dummy_symbol)

Generates the fields needed for the index.

Return type:

Dict[str, Data]

Returns:

a Dict of fields that need to be present in the struct

property full: bool

True if the level is full, False otherwise.

A level is considered full if it encompasses all valid coordinates along the corresponding tensor dimension.

property iteration_type: TensorIterationTypes

Iteration capability supported by this index.

See TensorIterationTypes for reference.

property locate: bool

True if the index supports locate (i.e., random access), False otherwise.

property ordered: bool

True if the level is ordered, False otherwise.

A level is ordered when all coordinates that share the same ancestor are ordered by increasing value (e.g. in typical CSR).

properties()
property unique: bool

True if the coordinates in the level are unique, False otherwise.

A level is considered unique if no collection of coordinates that share the same ancestor contains duplicates. In CSR this is True, in COO it is not.

class dace.data.TensorIndexSingleton(*args, **kwargs)

Bases: TensorIndex

Tensor index that encodes a single coordinate per parent coordinate.

Levels of this type hold exactly one coordinate for every coordinate in the parent level. An example can be seen in the COO format, where every coordinate but the first is encoded in this manner.

property assembly: TensorAssemblyType

What assembly type is supported by the index.

See TensorAssemblyType for reference.

property branchless: bool

True if the level doesn’t branch, False otherwise.

A level is considered branchless if no coordinate has a sibling (another coordinate with the same ancestor) and all coordinates in the parent level have a child; in other words, if there is a bijection between the coordinates in this level and the parent level. An example of this is the Singleton index level in the COO format.

property compact: bool

True if the level is compact, False otherwise.

A level is compact if no two coordinates are separated by an unlabeled node that does not encode a coordinate. An example of a compact level can be found in CSR, while the DIA format’s range and offset levels are not compact (they have entries that would correspond to entries outside the tensor’s index range, e.g., column -1).

fields(lvl, dummy_symbol)

Generates the fields needed for the index.

Return type:

Dict[str, Data]

Returns:

a Dict of fields that need to be present in the struct

property full: bool

True if the level is full, False otherwise.

A level is considered full if it encompasses all valid coordinates along the corresponding tensor dimension.

property iteration_type: TensorIterationTypes

Iteration capability supported by this index.

See TensorIterationTypes for reference.

property locate: bool

True if the index supports locate (i.e., random access), False otherwise.

property ordered: bool

True if the level is ordered, False otherwise.

A level is ordered when all coordinates that share the same ancestor are ordered by increasing value (e.g. in typical CSR).

properties()
property unique: bool

True if the coordinates in the level are unique, False otherwise.

A level is considered unique if no collection of coordinates that share the same ancestor contains duplicates. In CSR this is True, in COO it is not.

class dace.data.TensorIterationTypes(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Types of tensor iteration capabilities.

Value (Coordinate Value Iteration) allows direct iteration over coordinates, such as when using the Dense index type.

Position (Coordinate Position Iteration) iterates over coordinate positions, at which the actual coordinates lie. This is, for example, the case with a compressed index, in which the pos array enables one to iterate over the positions in the crd array that hold the actual coordinates.

Position = 2
Value = 1
class dace.data.View

Bases: object

Data descriptor that acts as a static reference (or view) of another data container. Can be used to reshape or reinterpret existing data without copying it.

To use a View, it needs to be referenced in an access node that is directly connected to another access node. The rules for deciding which access node is viewed are:

  • If there is one edge (in/out) that leads (via memlet path) to an access node, and the other side (out/in) has a different number of edges.

  • If there is one incoming and one outgoing edge, and one leads to a code node, the one that leads to an access node is the viewed data.

  • If both sides lead to access nodes, if one memlet’s data points to the view it cannot point to the viewed node.

  • If both memlets’ data are the respective access nodes, the access node at the highest scope is the one that is viewed.

  • If both access nodes reside in the same scope, the input data is viewed.

Other cases are ambiguous and will fail SDFG validation.

static view(viewed_container, debuginfo=None)

Create a new View of the specified data container.

Parameters:
  • viewed_container (Data) – The data container properties of this view

  • debuginfo – Specific source line information for this view, if different from viewed_container.

Returns:

A new subclass of View with the appropriate viewed container properties, e.g., StructureView for a Structure.

dace.data.create_datadescriptor(obj, no_custom_desc=False)

Creates a data descriptor from various types of objects.

See:

dace.data.Data

dace.data.find_new_name(name, existing_names)

Returns a name that matches the given name as a prefix, but does not already exist in the given existing name set. The behavior is typically to append an underscore followed by a unique (increasing) number. If the name does not already exist in the set, it is returned as-is.

Parameters:
  • name (str) – The given name to find.

  • existing_names (Sequence[str]) – The set of existing names.

Return type:

str

Returns:

A new name that is not in existing_names.
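A minimal sketch of the documented behavior (DaCe’s exact numbering scheme may differ in detail):

```python
def find_new_name(name, existing_names):
    """Return `name` if it is unused; otherwise append an underscore and an
    increasing number until an unused name is found."""
    if name not in existing_names:
        return name
    i = 0
    while f'{name}_{i}' in existing_names:
        i += 1
    return f'{name}_{i}'

find_new_name('tmp', {'tmp', 'tmp_0'})  # yields a fresh name such as 'tmp_1'
```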

dace.data.make_array_from_descriptor(descriptor, original_array=None, symbols=None)

Creates an array that matches the given data descriptor, and optionally copies another array to it.

Parameters:
  • descriptor (Array) – The data descriptor to create the array from.

  • original_array (Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]], None]) – An optional array to fill the content of the return value with.

  • symbols (Optional[Dict[str, Any]]) – An optional symbol mapping between symbol names and their values. Used for creating arrays with symbolic sizes.

Return type:

Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]

Returns:

A NumPy-compatible array (CuPy for GPU storage) with the specified size and strides.

dace.data.make_reference_from_descriptor(descriptor, original_array, symbols=None)

Creates an array that matches the given data descriptor from the given pointer. Shares the memory with the argument (does not create a copy).

Parameters:
  • descriptor (Array) – The data descriptor to create the array from.

  • original_array (c_void_p) – The array whose memory the return value would be used in.

  • symbols (Optional[Dict[str, Any]]) – An optional symbol mapping between symbol names and their values. Used for referencing arrays with symbolic sizes.

Return type:

Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]

Returns:

A NumPy-compatible array (CuPy for GPU storage) with the specified size and strides, sharing memory with the pointer specified in original_array.

dace.dtypes module

A module that contains various DaCe type definitions.

class dace.dtypes.AllocationLifetime(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Options for allocation span (when to allocate/deallocate) of data.

External = 6

Allocated and managed outside the generated code

Global = 4

Allocated throughout the entire program (outer SDFG)

Persistent = 5

Allocated throughout multiple invocations (init/exit)

SDFG = 3

Allocated throughout the innermost SDFG (possibly nested)

Scope = 1

Allocated/Deallocated on innermost scope start/end

State = 2

Allocated throughout the containing state

Undefined = 7
register(*args)
class dace.dtypes.DataInstrumentationType(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Types of data container instrumentation providers.

No_Instrumentation = 1
Restore = 3
Save = 2
Undefined = 4
register(*args)
class dace.dtypes.DebugInfo(start_line, start_column=0, end_line=-1, end_column=0, filename=None)

Bases: object

Source code location identifier of a node/edge in an SDFG. Used for IDE and debugging purposes.

static from_json(json_obj, context=None)
to_json()
class dace.dtypes.DeviceType(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

An enumeration.

CPU = 1

Multi-core CPU

FPGA = 3

FPGA (Intel or Xilinx)

GPU = 2

GPU (AMD or NVIDIA)

Snitch = 4

Compute Cluster (RISC-V)

Undefined = 5
register(*args)
class dace.dtypes.InstrumentationType(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Types of instrumentation providers.

FPGA = 7
GPU_Events = 6
LIKWID_CPU = 4
LIKWID_GPU = 5
No_Instrumentation = 1
PAPI_Counters = 3
Timer = 2
Undefined = 8
register(*args)
class dace.dtypes.Language(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Available programming languages for SDFG tasklets.

CPP = 2
MLIR = 5
OpenCL = 3
Python = 1
SystemVerilog = 4
Undefined = 6
register(*args)
class dace.dtypes.OMPScheduleType(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Available OpenMP schedule types for Maps with the CPU_Multicore schedule.

Default = 1

OpenMP library default

Dynamic = 3

Dynamic schedule

Guided = 4

Guided schedule

Static = 2

Static schedule

Undefined = 5
register(*args)
class dace.dtypes.ReductionType(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Reduction types natively supported by the SDFG compiler.

Bitwise_And = 7

Bitwise AND (&)

Bitwise_Or = 9

Bitwise OR (|)

Bitwise_Xor = 11

Bitwise XOR (^)

Custom = 1

Defined by an arbitrary lambda function

Div = 16

Division (only supported in OpenMP)

Exchange = 14

Set new value, return old value

Logical_And = 6

Logical AND (&&)

Logical_Or = 8

Logical OR (||)

Logical_Xor = 10

Logical XOR (!=)

Max = 3

Maximum value

Max_Location = 13

Maximum value and its location

Min = 2

Minimum value

Min_Location = 12

Minimum value and its location

Product = 5

Product

Sub = 15

Subtraction (only supported in OpenMP)

Sum = 4

Sum

Undefined = 17
class dace.dtypes.ScheduleType(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Available map schedule types in the SDFG.

CPU_Multicore = 4

OpenMP parallel for loop

CPU_Persistent = 5

OpenMP parallel region

Default = 1

Scope-default parallel schedule

FPGA_Device = 13
FPGA_Multi_Pumped = 16

Used for double pumping

GPU_Default = 8

Default scope schedule for GPU code. Specializes to schedule GPU_Device and GPU_Global during inference.

GPU_Device = 9

Kernel

GPU_Persistent = 12
GPU_ThreadBlock = 10

Thread-block code

GPU_ThreadBlock_Dynamic = 11

Allows rescheduling work within a block

MPI = 3

MPI processes

SVE_Map = 7

Arm SVE

Sequential = 2

Sequential code (single-thread)

Snitch = 14
Snitch_Multicore = 15
Undefined = 17
Unrolled = 6

Unrolled code

register(*args)
class dace.dtypes.StorageType(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Available data storage types in the SDFG.

CPU_Heap = 4

Host memory allocated on heap

CPU_Pinned = 3

Host memory that can be DMA-accessed from accelerators

CPU_ThreadLocal = 5

Thread-local host memory

Default = 1

Scope-default storage location

FPGA_Global = 8

Off-chip global memory (DRAM)

FPGA_Local = 9

On-chip memory (bulk storage)

FPGA_Registers = 10

On-chip memory (fully partitioned registers)

FPGA_ShiftRegister = 11

Only accessible at constant indices

GPU_Global = 6

GPU global memory

GPU_Shared = 7

On-GPU shared memory

Register = 2

Local data on registers, stack, or equivalent memory

SVE_Register = 12

SVE register

Snitch_L2 = 14

External memory

Snitch_SSR = 15

Memory accessed by SSR streamer

Snitch_TCDM = 13

Cluster-private memory

Undefined = 16
register(*args)
class dace.dtypes.TilingType(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Available tiling types in a StripMining transformation.

CeilRange = 2
Normal = 1
NumberOfTiles = 3
Undefined = 4
register(*args)
class dace.dtypes.Typeclasses(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

An enumeration.

Undefined = 16
bool = 1
bool_ = 2
complex128 = 15
complex64 = 14
float16 = 11
float32 = 12
float64 = 13
int16 = 4
int32 = 5
int64 = 6
int8 = 3
register(*args)
uint16 = 8
uint32 = 9
uint64 = 10
uint8 = 7
class dace.dtypes.callback(return_types, *variadic_args)

Bases: typeclass

Looks like dace.callback([None, <some_native_type>], *types)

as_arg(name)
as_ctypes()

Returns the ctypes version of the typeclass.

as_numpy_dtype()
cfunc_return_type()

Returns the typeclass of the return value of the function call.

Return type:

typeclass

static from_json(json_obj, context=None)
get_trampoline(pyfunc, other_arguments, refs)
is_scalar_function()

Returns True if the callback is a function that returns a scalar value (or nothing). Scalar functions are the only ones that can be used within a dace.tasklet explicitly.

Return type:

bool

to_json()
dace.dtypes.can_access(schedule, storage)

Identifies whether a container of a storage type can be accessed in a specific schedule.

dace.dtypes.can_allocate(storage, schedule)

Identifies whether a container of a storage type can be allocated in a specific schedule. Used to determine arguments to subgraphs by the innermost scope that a container can be allocated in. For example, FPGA_Global memory cannot be allocated from within the FPGA scope, or GPU shared memory cannot be allocated outside of device-level code.

Parameters:
  • storage (StorageType) – The storage type of the data container to allocate.

  • schedule (ScheduleType) – The scope schedule to query.

Returns:

True if the container can be allocated, False otherwise.

class dace.dtypes.compiletime

Bases: object

Data descriptor type hint signalling that argument evaluation is deferred to call time.

Example usage:

@dace.program
def example(A: dace.float64[20], constant: dace.compiletime):
    if constant == 0:
        return A + 1
    else:
        return A + 2

In the above code, constant will be replaced with its value at call time during parsing.

dace.dtypes.deduplicate(iterable)

Removes duplicates in the passed iterable.
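One order-preserving way to sketch this behavior (DaCe’s implementation may differ in details such as the return type):

```python
def deduplicate(iterable):
    """Remove duplicates while preserving first-occurrence order.

    dict.fromkeys keeps insertion order (Python 3.7+), so converting the
    keys back to a list drops repeated elements without reordering.
    """
    return list(dict.fromkeys(iterable))
```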

dace.dtypes.dtype_to_typeclass(dtype=None)
dace.dtypes.is_array(obj)

Returns True if an object implements the data_ptr(), __array_interface__ or __cuda_array_interface__ standards (supported by NumPy, Numba, CuPy, PyTorch, etc.). If the interface is supported, pointers can be directly obtained using the _array_interface_ptr function.

Parameters:

obj (Any) – The given object.

Return type:

bool

Returns:

True iff the object implements the array interface.
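The documented check can be sketched in plain Python (FakeTensor is a hypothetical stand-in for a PyTorch-like object exposing data_ptr()):

```python
def is_array(obj):
    """Sketch of the documented check: an object counts as an array if it
    exposes data_ptr(), __array_interface__, or __cuda_array_interface__."""
    return (hasattr(obj, 'data_ptr')
            or hasattr(obj, '__array_interface__')
            or hasattr(obj, '__cuda_array_interface__'))

class FakeTensor:
    """Hypothetical array-like object implementing one of the interfaces."""
    def data_ptr(self):
        return 0

is_array(FakeTensor())  # True
is_array([1, 2, 3])     # False: plain lists expose none of the interfaces
```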

dace.dtypes.is_gpu_array(obj)

Returns True if an object is a GPU array, i.e., implements the __cuda_array_interface__ standard (supported by Numba, CuPy, PyTorch, etc.). If the interface is supported, pointers can be directly obtained using the _array_interface_ptr function.

Parameters:

obj (Any) – The given object.

Return type:

bool

Returns:

True iff the object implements the CUDA array interface.

dace.dtypes.isallowed(var, allow_recursive=False)

Returns True if a given object is allowed in a DaCe program.

Parameters:

allow_recursive – whether to allow dicts or lists containing constants.

dace.dtypes.isconstant(var)

Returns True if a variable is designated a constant (i.e., that can be directly generated in code).

dace.dtypes.ismodule(var)

Returns True if a given object is a module.

dace.dtypes.ismodule_and_allowed(var)

Returns True if a given object is a module and is one of the allowed modules in DaCe programs.

dace.dtypes.ismoduleallowed(var)

Helper function to determine the source module of an object, and whether it is allowed in DaCe programs.

dace.dtypes.json_to_typeclass(obj, context=None)
dace.dtypes.max_value(dtype)

Get a max value literal for dtype.

dace.dtypes.min_value(dtype)

Get a min value literal for dtype.

class dace.dtypes.opaque(typename)

Bases: typeclass

A data type for an opaque object, useful for C bindings/libnodes, e.g., MPI_Request.

as_ctypes()

Returns the ctypes version of the typeclass.

as_numpy_dtype()
static from_json(json_obj, context=None)
to_json()
dace.dtypes.paramdec(dec)

Parameterized decorator meta-decorator. Enables using @decorator, @decorator(), and @decorator(…) with the same function.
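A possible sketch of such a meta-decorator (not DaCe’s exact implementation; `tag` is a made-up example decorator):

```python
import functools

def paramdec(dec):
    """Meta-decorator: lets `dec` be used as @dec, @dec(), and @dec(arg=...)."""
    @functools.wraps(dec)
    def layer(*args, **kwargs):
        # Bare usage: @dec applied directly to a function
        if len(args) == 1 and not kwargs and callable(args[0]):
            return dec(args[0])
        # Parameterized usage: @dec(...) returns the actual decorator
        return lambda func: dec(func, *args, **kwargs)
    return layer

@paramdec
def tag(func, label='default'):
    """Example decorator that attaches a label attribute."""
    func.label = label
    return func

@tag
def f(): pass          # f.label == 'default'

@tag(label='x')
def g(): pass          # g.label == 'x'
```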

class dace.dtypes.pointer(wrapped_typeclass)

Bases: typeclass

A data type for a pointer to an existing typeclass.

Example use:

dace.pointer(dace.struct(x=dace.float32, y=dace.float32)).

as_ctypes()

Returns the ctypes version of the typeclass.

as_numpy_dtype()
property base_type
static from_json(json_obj, context=None)
property ocltype
to_json()
dace.dtypes.ptrtocupy(ptr, inner_ctype, shape)
dace.dtypes.ptrtonumpy(ptr, inner_ctype, shape)
class dace.dtypes.pyobject

Bases: opaque

A generic data type for Python objects in un-annotated callbacks. It cannot be used inside a DaCe program, but can be passed back to other Python callbacks. Use with caution, and ensure the value is not removed by the garbage collector or the program will crash.

as_ctypes()

Returns the ctypes version of the typeclass.

as_numpy_dtype()
to_python(obj_id)
dace.dtypes.reduction_identity(dtype, red)

Returns known identity values (which we can safely reset transients to) for built-in reduction types.

Parameters:
Return type:

Any

Returns:

Identity value in input type, or None if not found.
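The idea can be illustrated with a small table (a sketch; the keys mirror the ReductionType member names above, while DaCe returns a literal of the input data type):

```python
import math

# Identity values for some built-in reductions: applying the reduction
# with the identity leaves the other operand unchanged.
REDUCTION_IDENTITIES = {
    'Sum': 0.0,          # x + 0 == x
    'Product': 1.0,      # x * 1 == x
    'Max': -math.inf,    # max(x, -inf) == x
    'Min': math.inf,     # min(x, +inf) == x
    'Logical_And': True,
    'Logical_Or': False,
}

def reduction_identity(red):
    """Return the identity for a reduction name, or None if not found."""
    return REDUCTION_IDENTITIES.get(red)
```

For Custom reductions no identity is known in general, which is why the real function may return None.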

dace.dtypes.result_type_of(lhs, *rhs)

Returns the largest between two or more types (dace.types.typeclass) according to C semantics.

class dace.dtypes.stringtype

Bases: pointer

A specialization of the string data type to improve Python/generated code marshalling. Used internally when str types are given.

static from_json(json_obj, context=None)
to_json()
class dace.dtypes.struct(name, **fields_and_types)

Bases: typeclass

A data type for a struct of existing typeclasses.

Example use: dace.struct(a=dace.int32, b=dace.float64).

as_ctypes()

Returns the ctypes version of the typeclass.

as_numpy_dtype()
emit_definition()
property fields
static from_json(json_obj, context=None)
to_json()
class dace.dtypes.typeclass(wrapped_type, typename=None)

Bases: object

An extension of types that enables their use in DaCe.

These types are defined for three reasons:
  1. Controlling DaCe types

  2. Enabling declaration syntax: dace.float32[M,N]

  3. Enabling extensions such as dace.struct and dace.vector

as_arg(name)
as_ctypes()

Returns the ctypes version of the typeclass.

as_numpy_dtype()
property base_type
static from_json(json_obj, context=None)
is_complex()
property ocltype
to_json()
to_string()

A Numpy-like string-representation of the underlying data type.

property veclen
dace.dtypes.validate_name(name)
class dace.dtypes.vector(dtype, vector_length)

Bases: typeclass

A data type for a vector-type of an existing typeclass.

Example use: dace.vector(dace.float32, 4) becomes float4.

as_ctypes()

Returns the ctypes version of the typeclass.

as_numpy_dtype()
property base_type
property ctype
property ctype_unaligned
static from_json(json_obj, context=None)
property ocltype
to_json()
property veclen

dace.hooks module

Module that provides hooks that can be used to extend DaCe functionality.

dace.hooks.invoke_compiled_sdfg_call_hooks(compiled_sdfg, args)

Internal context manager that calls all compiled SDFG call hooks in their registered order.

dace.hooks.invoke_sdfg_call_hooks(sdfg)

Internal context manager that calls all SDFG call hooks in their registered order.

dace.hooks.on_call(*, before=None, after=None, context_manager=None)

Context manager that registers a function to be called around each SDFG call. Use this to modify the SDFG before it is compiled and run.

For example, to print the SDFG before it is compiled and run:

# Will print "some_program was called"
with dace.hooks.on_call(before=lambda sdfg: print(f'{sdfg.name} was called')):
    some_program(...)

# Alternatively, using a context manager
@contextmanager
def print_sdfg_name(sdfg: dace.SDFG):
    print(f'{sdfg.name} is going to be compiled and run')
    yield
    print(f'{sdfg.name} has finished running')

with dace.hooks.on_call(context_manager=print_sdfg_name):
    some_program(...)
Parameters:
  • before (Optional[Callable[[SDFG], None]]) – An optional function that is called before the SDFG is compiled and run. This function should take an SDFG as its only argument.

  • after (Optional[Callable[[SDFG], None]]) – An optional function that is called after the SDFG is compiled and run. This function should take an SDFG as its only argument.

  • context_manager (Optional[Generator[Any, None, None]]) – A context manager to use around the SDFG’s compilation and running. This field can only be used if both before and after are None.

dace.hooks.on_compiled_sdfg_call(*, before=None, after=None, context_manager=None)

Context manager that registers a function to be called around each compiled SDFG call. Use this to wrap the compiled SDFG’s C function call.

For example, to time the execution of the compiled SDFG:

@contextmanager
def time_compiled_sdfg(csdfg: dace.codegen.compiled_sdfg.CompiledSDFG, *args, **kwargs):
    start = time.time()
    yield
    end = time.time()
    print(f'Compiled SDFG {csdfg.sdfg.name} took {end - start} seconds')

with dace.hooks.on_compiled_sdfg_call(context_manager=time_compiled_sdfg):
    some_program(...)
    other_program(...)
Parameters:
  • before (Optional[Callable[[CompiledSDFG, Tuple[Any, ...]], None]]) – An optional function that is called before the compiled SDFG is called. This function should take a compiled SDFG object, its arguments and keyword arguments.

  • after (Optional[Callable[[CompiledSDFG, Tuple[Any, ...]], None]]) – An optional function that is called after the compiled SDFG is called. This function should take a compiled SDFG object, its arguments and keyword arguments.

  • context_manager (Optional[Generator[Any, None, None]]) – A context manager to use around the compiled SDFG’s C function. This field can only be used if both before and after are None.

dace.hooks.register_compiled_sdfg_call_hook(*, before_hook=None, after_hook=None, context_manager=None)

Registers a hook that is called when a compiled SDFG is called.

Parameters:
  • before_hook (Optional[Callable[[CompiledSDFG, Tuple[Any, ...]], None]]) – An optional hook to call before the compiled SDFG is called.

  • after_hook (Optional[Callable[[CompiledSDFG, Tuple[Any, ...]], None]]) – An optional hook to call after the compiled SDFG is called.

  • context_manager (Optional[Generator[Any, None, None]]) – A context manager to use around the compiled SDFG’s C function. This field can only be used if both before_hook and after_hook are None.

Return type:

int

Returns:

The unique identifier of the hook (for removal).

dace.hooks.register_sdfg_call_hook(*, before_hook=None, after_hook=None, context_manager=None)

Registers a hook that is called when an SDFG is called.

Parameters:
  • before_hook (Optional[Callable[[SDFG], None]]) – An optional hook to call before the SDFG is compiled and run.

  • after_hook (Optional[Callable[[SDFG], None]]) – An optional hook to call after the SDFG is compiled and run.

  • context_manager (Optional[Generator[Any, None, None]]) – A context manager to use around the SDFG’s compilation and running. This field can only be used if both before_hook and after_hook are None.

Return type:

int

Returns:

The unique identifier of the hook (for removal).

dace.hooks.unregister_compiled_sdfg_call_hook(hook_id)

Unregisters a compiled SDFG call hook.

Parameters:

hook_id (int) – The unique identifier of the hook.

dace.hooks.unregister_sdfg_call_hook(hook_id)

Unregisters an SDFG call hook.

Parameters:

hook_id (int) – The unique identifier of the hook.

dace.jupyter module

Jupyter Notebook support for DaCe.

dace.jupyter.enable()
dace.jupyter.isnotebook()
dace.jupyter.preamble()

dace.library module

dace.library.change_default(library, implementation)
dace.library.environment(env)
dace.library.expansion(exp)
dace.library.get_environment(env_name)
dace.library.get_environments_and_dependencies(names)

Get the environment objects from names. Also resolve the dependencies.

Parameters:

names – The set of environment names.

Return type:

List

Returns:

a list of environment objects, ordered such that environments with dependencies appear after their dependencies.

dace.library.get_library(lib_name)
dace.library.node(n)
dace.library.register_expansion(library_node, expansion_name)

Defines and registers an expansion.

dace.library.register_implementation(implementation_name, expansion_cls, node_cls)

Associate a given library node expansion class with a library node class. This is done automatically for expansions defined in a DaCe library module, but this function can be used to add additional expansions from an external context.

dace.library.register_library(module_name, name)

Called from a library’s __init__.py to register it with DaCe.

dace.library.register_node(node_cls, library)

Associate a given library node class with a DaCe library. This is done automatically for library nodes defined in a DaCe library module, but this function can be used to add additional node classes from an external context.

dace.library.register_transformation(transformation_cls, library)

Associate a given transformation with a DaCe library. This is done automatically for transformations defined in a DaCe library module, but this function can be used to add additional transformations from an external context.

dace.memlet module

class dace.memlet.Memlet(*args, **kwargs)

Bases: object

Data movement object. Represents the data, the subset moved, and the manner it is reindexed (other_subset) into the destination. If there are multiple conflicting writes, this object also specifies how they are resolved with a lambda function.

allow_oob

Bypass out-of-bounds validation

bounding_box_size()

Returns a per-dimension upper bound on the maximum number of elements in each dimension.

This bound will be tight in the case of Range.

data

Data descriptor attached to this memlet

debuginfo

Line information to track source and generated code

property dst_subset
dynamic

Is the number of elements moved determined at runtime (e.g., data dependent)

property free_symbols: Set[str]

Returns a set of symbols used in this edge’s properties.

static from_array(dataname, datadesc, wcr=None)

Constructs a Memlet that transfers an entire array’s contents.

Parameters:
  • dataname – The name of the data descriptor in the SDFG.

  • datadesc (Data) – The data descriptor object.

  • wcr – The conflict resolution lambda.

static from_json(json_obj, context=None)
static from_memlet(memlet)
Return type:

Memlet

get_dst_subset(edge, state)
get_free_symbols_by_indices(indices_src, indices_dst)

Returns the set of free symbols used in this edge’s properties, taking only certain indices of the src and dst subsets into account.

Parameters:
  • indices_src (List[int]) – The indices of the src subset to take into account

  • indices_dst (List[int]) – The indices of the dst subset to take into account

Returns:

The set of free symbols

Return type:

Set[str]

get_src_subset(edge, state)
get_stride(sdfg, map, dim=-1)

Returns the stride of the underlying memory when traversing a Map.

Parameters:
  • sdfg (SDFG) – The SDFG in which the memlet resides.

  • map (Map) – The map in which the memlet resides.

  • dim (int) – The dimension that is incremented. By default it is the innermost.

Return type:

SymExpr

is_empty()

Returns True if this memlet carries no data. Memlets without data are primarily used for connecting nodes to scopes without transferring data to them.

Return type:

bool

property num_accesses

Returns the total memory movement volume (in elements) of this memlet.

num_elements()

Returns the number of elements in the Memlet subset.

other_subset

Subset of elements after reindexing to the data not attached to this edge (e.g., for offsets and reshaping).

properties()
replace(repl_dict)

Substitute a given set of symbols with a different set of symbols.

Parameters:

repl_dict – A dict of string symbol names to symbols with which to replace them.

static simple(data, subset_str, wcr_str=None, other_subset_str=None, wcr_conflict=True, num_accesses=None, debuginfo=None, dynamic=False)

DEPRECATED: Constructs a Memlet from string-based expressions.

Parameters:
  • data – The data object or name to access.

  • subset_str – The subset of data that is going to be accessed in string format. Example: ‘0:N’.

  • wcr_str – A lambda function (as a string) specifying how write-conflicts are resolved. The syntax of the lambda function receives two elements: current value and new value, and returns the value after resolution. For example, summation is ‘lambda cur, new: cur + new’.

  • other_subset_str – The reindexing of subset on the other connected data (as a string).

  • wcr_conflict – If False, forces non-locked conflict resolution when generating code. The default is to let the code generator infer this information from the SDFG.

  • num_accesses – The number of times that the moved data will be subsequently accessed. If -1, designates that the number of accesses is unknown at compile time.

  • debuginfo – Source-code information (e.g., line, file) used for debugging.

  • dynamic – If True, the number of elements moved in this memlet is defined dynamically at runtime.

property src_subset
subset

Subset of elements to move from the data attached to this edge.

to_json()
try_initialize(sdfg, state, edge)

Tries to initialize the internal fields of the memlet (e.g., src/dst subset) once it is added to an SDFG as an edge.

used_symbols(all_symbols, edge=None)

Returns a set of symbols used in this edge’s properties.

Parameters:
  • all_symbols (bool) – If False, only returns the set of symbols that will be used in the generated code and are needed as arguments.

  • edge – If given, provides richer context-based tests for the case of all_symbols=False.

Return type:

Set[str]

validate(sdfg, state)
volume

The exact number of elements moved using this memlet, or the maximum number if dynamic=True (with 0 as unbounded)

wcr

If set, defines a write-conflict resolution lambda function. The syntax of the lambda function receives two elements: current value and new value, and returns the value after resolution
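The behavior of such a resolution lambda can be sketched in plain Python, independent of DaCe: each conflicting write combines the current stored value with the incoming one.

```python
# Write-conflict resolution: combine the current and the new value.
wcr = lambda cur, new: cur + new  # summation

buffer = [0, 0, 0]
writes = [(1, 5), (1, 7), (2, 3)]  # (index, value) pairs; index 1 conflicts
for idx, val in writes:
    buffer[idx] = wcr(buffer[idx], val)

assert buffer == [0, 12, 3]
```

With wcr_nonatomic left False, a code generator would emit an atomic update for such conflicting writes; the lambda only defines *how* values combine, not *how* the update is synchronized.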

wcr_nonatomic

If True, always generates non-conflicting (non-atomic) writes in resulting code

class dace.memlet.MemletTree(edge, downwards=True, parent=None, children=None)

Bases: object

A tree of memlet edges.

Since memlets can form paths through scope nodes, and since these paths can split in “OUT_*” connectors, a memlet edge can be extended to a memlet tree. The tree is always rooted at the outermost-scope node, which can mean that it forms a tree of directed edges going forward (in the case where memlets go through scope-entry nodes) or backward (through scope-exit nodes).

Memlet trees can be used to obtain all edges pertaining to a single memlet using the memlet_tree function in SDFGState. This collects all siblings of the same edge and their children, for instance if multiple inputs from the same access node are used.

property downwards

If True, this memlet tree points downwards (rooted at the source node).

leaves()

Returns a list of all the leaves of this MemletTree, i.e., the innermost edges.

Return type:

List[MultiConnectorEdge[Memlet]]

root()
Return type:

MemletTree

traverse_children(include_self=False)

dace.properties module

class dace.properties.CodeBlock(code, language=Language.Python)

Bases: object

Helper class that represents code blocks with language. Used in CodeProperty, implemented as a list of AST statements if language is Python, or a string otherwise.

property as_string: str
static from_json(tmp, sdfg=None)
get_free_symbols(defined_syms=None)

Returns the set of free symbol names in this code block, excluding the given symbol names.

Return type:

Set[str]

to_json()
class dace.properties.CodeProperty(getter=None, setter=None, dtype=None, default=None, from_json=None, to_json=None, meta_to_json=None, choices=None, unmapped=False, allow_none=False, indirected=False, category='General', desc='', optional=False, optional_condition=<function Property.<lambda>>)

Bases: Property

Custom Property type that accepts code in various languages.

property dtype
from_json(tmp, sdfg=None)
static from_string(string, language=None)
to_json(obj)
static to_string(obj)
class dace.properties.DataProperty(desc='', default=None, **kwargs)

Bases: Property

Custom Property type that represents a link to a data descriptor. Needs the SDFG to be passed as an argument to from_string and choices.

static choices(sdfg=None)
from_json(s, context=None)
static from_string(s, sdfg=None)
to_json(obj)
static to_string(obj)
typestring()
class dace.properties.DataclassProperty(getter=None, setter=None, dtype=None, default=None, from_json=None, to_json=None, meta_to_json=None, choices=None, unmapped=False, allow_none=False, indirected=False, category='General', desc='', optional=False, optional_condition=<function Property.<lambda>>)

Bases: Property

Property that stores pydantic models or dataclasses.

from_json(d, sdfg=None)
static from_string(s)
to_json(obj)
static to_string(obj)
class dace.properties.DebugInfoProperty(**kwargs)

Bases: Property

Custom Property type for DebugInfo members.

property allow_none
property dtype
static from_string(s)
static to_string(di)
class dace.properties.DictProperty(key_type, value_type, *args, **kwargs)

Bases: Property

Property type for dictionaries.

from_json(data, sdfg=None)
static from_string(s)
to_json(d)
static to_string(d)
class dace.properties.EnumProperty(dtype, *args, **kwargs)

Bases: Property

class dace.properties.LambdaProperty(getter=None, setter=None, dtype=None, default=None, from_json=None, to_json=None, meta_to_json=None, choices=None, unmapped=False, allow_none=False, indirected=False, category='General', desc='', optional=False, optional_condition=<function Property.<lambda>>)

Bases: Property

Custom Property type that accepts a lambda function, with conversions to and from strings.

property dtype
from_json(s, sdfg=None)
static from_string(s)
to_json(obj)
static to_string(obj)
class dace.properties.LibraryImplementationProperty(getter=None, setter=None, dtype=None, default=None, from_json=None, to_json=None, meta_to_json=None, choices=None, unmapped=False, allow_none=False, indirected=False, category='General', desc='', optional=False, optional_condition=<function Property.<lambda>>)

Bases: Property

Property for choosing an implementation type for a library node. On the Python side it is a standard property, but can expand into a combo-box in the editor.

typestring()
class dace.properties.ListProperty(element_type, *args, **kwargs)

Bases: Property[List[T]]

Property type for lists.

from_json(data, sdfg=None)
from_string(s)
to_json(l)
static to_string(l)
class dace.properties.NestedDataClassProperty(getter=None, setter=None, dtype=None, default=None, from_json=None, to_json=None, meta_to_json=None, choices=None, unmapped=False, allow_none=False, indirected=False, category='General', desc='', optional=False, optional_condition=<function Property.<lambda>>)

Bases: Property

Custom property type for nested data.

property dtype
static from_json(obj, context=None)
static from_string(s)
to_json(obj)
static to_string(obj)
class dace.properties.OptionalSDFGReferenceProperty(getter=None, setter=None, dtype=None, default=None, from_json=None, to_json=None, meta_to_json=None, choices=None, unmapped=False, allow_none=False, indirected=False, category='General', desc='', optional=False, optional_condition=<function Property.<lambda>>)

Bases: SDFGReferenceProperty

An SDFG reference property that defaults to None if it cannot be deserialized.

from_json(obj, context=None)
class dace.properties.OrderedDictProperty(getter=None, setter=None, dtype=None, default=None, from_json=None, to_json=None, meta_to_json=None, choices=None, unmapped=False, allow_none=False, indirected=False, category='General', desc='', optional=False, optional_condition=<function Property.<lambda>>)

Bases: Property

Property type for ordered dictionaries.

static from_json(obj, sdfg=None)
to_json(d)
class dace.properties.Property(getter=None, setter=None, dtype=None, default=None, from_json=None, to_json=None, meta_to_json=None, choices=None, unmapped=False, allow_none=False, indirected=False, category='General', desc='', optional=False, optional_condition=<function Property.<lambda>>)

Bases: Generic[T]

Class implementing properties of DaCe objects that conform to strong typing and allow conversion to and from strings so they can be edited.

static add_none_pair(dict_in)
property allow_none
property category
property choices
property default
property desc
property dtype
property from_json
static get_property_element(object_with_properties, name)
property getter
property indirected
property meta_to_json

Returns a function to export meta information (type, description, default value).

property optional
property optional_condition
property setter
property to_json
typestring()
property unmapped
exception dace.properties.PropertyError

Bases: Exception

Exception type for errors related to internal functionality of these properties.

class dace.properties.RangeProperty(getter=None, setter=None, dtype=None, default=None, from_json=None, to_json=None, meta_to_json=None, choices=None, unmapped=False, allow_none=False, indirected=False, category='General', desc='', optional=False, optional_condition=<function Property.<lambda>>)

Bases: Property

Custom Property type for dace.subsets.Range members.

property dtype
static from_string(s)
static to_string(obj)
class dace.properties.ReferenceProperty(getter=None, setter=None, dtype=None, default=None, from_json=None, to_json=None, meta_to_json=None, choices=None, unmapped=False, allow_none=False, indirected=False, category='General', desc='', optional=False, optional_condition=<function Property.<lambda>>)

Bases: Property

Custom Property type that represents a link to another SDFG object. Needs the SDFG to be passed as an argument to from_string.

static from_string(s, sdfg=None)
static to_string(obj)
class dace.properties.SDFGReferenceProperty(getter=None, setter=None, dtype=None, default=None, from_json=None, to_json=None, meta_to_json=None, choices=None, unmapped=False, allow_none=False, indirected=False, category='General', desc='', optional=False, optional_condition=<function Property.<lambda>>)

Bases: Property

from_json(obj, context=None)
to_json(obj)
class dace.properties.SetProperty(element_type, getter=None, setter=None, default=None, from_json=None, to_json=None, unmapped=False, allow_none=False, desc='', **kwargs)

Bases: Property

Property for a set of elements of one type, e.g., connectors.

property dtype
from_json(l, sdfg=None)
static from_string(s)
to_json(l)
static to_string(l)
class dace.properties.ShapeProperty(getter=None, setter=None, dtype=None, default=None, from_json=None, to_json=None, meta_to_json=None, choices=None, unmapped=False, allow_none=False, indirected=False, category='General', desc='', optional=False, optional_condition=<function Property.<lambda>>)

Bases: Property

Custom Property type that defines a shape.

property dtype
from_json(d, sdfg=None)
static from_string(s)
to_json(obj)
static to_string(obj)
class dace.properties.SubsetProperty(getter=None, setter=None, dtype=None, default=None, from_json=None, to_json=None, meta_to_json=None, choices=None, unmapped=False, allow_none=False, indirected=False, category='General', desc='', optional=False, optional_condition=<function Property.<lambda>>)

Bases: Property

Custom Property type that accepts any form of subset, and enables parsing strings into multiple types of subsets.
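The distinction between subset types can be sketched in plain Python (hypothetical helper, not DaCe's parser): a string without any `:` describes single indices per dimension, otherwise it describes ranges.

```python
def parse_subset(s):
    """Hypothetical sketch: classify a subset string as Indices or a Range.

    'i, j'   -> single indices per dimension
    '0:N, i' -> ranges (a lone index d becomes the degenerate range d:d+1)
    """
    dims = [d.strip() for d in s.split(',')]
    if not any(':' in d for d in dims):
        return ('Indices', dims)
    ranges = []
    for d in dims:
        parts = d.split(':') if ':' in d else [d, d + ' + 1']
        ranges.append(tuple(parts))
    return ('Range', ranges)

assert parse_subset('i, j') == ('Indices', ['i', 'j'])
assert parse_subset('0:N') == ('Range', [('0', 'N')])
```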

property allow_none
property dtype
from_json(val, sdfg=None)
static from_string(s)
to_json(val)
static to_string(val)
class dace.properties.SymbolicProperty(getter=None, setter=None, dtype=None, default=None, from_json=None, to_json=None, meta_to_json=None, choices=None, unmapped=False, allow_none=False, indirected=False, category='General', desc='', optional=False, optional_condition=<function Property.<lambda>>)

Bases: Property

Custom Property type that accepts integers or Sympy expressions.

property dtype
static from_string(s)
static to_string(obj)
class dace.properties.TransformationHistProperty(*args, **kwargs)

Bases: Property

Property type for transformation histories.

from_json(data, sdfg=None)
to_json(hist)
class dace.properties.TypeClassProperty(getter=None, setter=None, dtype=None, default=None, from_json=None, to_json=None, meta_to_json=None, choices=None, unmapped=False, allow_none=False, indirected=False, category='General', desc='', optional=False, optional_condition=<function Property.<lambda>>)

Bases: Property

Custom property type for data types as defined in dace.types, e.g., dace.float32.

property dtype
static from_json(obj, context=None)
static from_string(s)
to_json(obj)
static to_string(obj)
class dace.properties.TypeProperty(getter=None, setter=None, dtype=None, default=None, from_json=None, to_json=None, meta_to_json=None, choices=None, unmapped=False, allow_none=False, indirected=False, category='General', desc='', optional=False, optional_condition=<function Property.<lambda>>)

Bases: Property

Custom Property type that finds a type according to the input string.

property dtype
static from_json(obj, context=None)
static from_string(s)
dace.properties.indirect_properties(indirect_class, indirect_function, override=False)

A decorator for objects that provides indirect properties defined in another class.

dace.properties.indirect_property(cls, f, prop, override)
dace.properties.make_properties(cls)

A decorator for objects that adds support and checks for strongly-typed properties (which use the Property class).
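A strongly-typed property mechanism of this kind can be sketched with Python descriptors (hypothetical names; this is an illustration, not DaCe's implementation):

```python
class TypedProperty:
    """Hypothetical descriptor that enforces a dtype on assignment."""

    def __init__(self, dtype, default=None, desc=''):
        self.dtype, self.default, self.desc = dtype, default, desc

    def __set_name__(self, owner, name):
        self.attr = '_' + name  # backing attribute on the instance

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return getattr(obj, self.attr, self.default)

    def __set__(self, obj, value):
        if not isinstance(value, self.dtype):
            raise TypeError(f'{self.attr[1:]} expects {self.dtype.__name__}')
        setattr(obj, self.attr, value)

class Node:
    label = TypedProperty(str, default='node', desc='Node label')

n = Node()
assert n.label == 'node'   # default value
n.label = 'tasklet'        # type-checked assignment
assert n.label == 'tasklet'
```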

dace.serialize module

class dace.serialize.NumpySerializer

Bases: object

Helper class to load/store numpy arrays from JSON.

static from_json(json_obj, context=None)
static to_json(obj)
class dace.serialize.SerializableObject(json_obj={}, typename=None)

Bases: object

static from_json(json_obj, context=None, typename=None)
json_obj = {}
to_json()
typename = None
dace.serialize.all_properties_to_json(object_with_properties)
dace.serialize.dump(*args, **kwargs)
dace.serialize.dumps(*args, **kwargs)
dace.serialize.from_json(obj, context=None, known_type=None)
dace.serialize.get_serializer(type_name)
dace.serialize.load(*args, context=None, **kwargs)
dace.serialize.loads(*args, context=None, **kwargs)
dace.serialize.serializable(cls)
dace.serialize.set_properties_from_json(object_with_properties, json_obj, context=None, ignore_properties=None)
dace.serialize.to_json(obj)
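The serializable/from_json pattern above can be sketched as type-tagged JSON with a registry decorator (hypothetical names, a simplification of the actual mechanism):

```python
import json

_REGISTRY = {}

def serializable(cls):
    """Hypothetical sketch of a @serializable-style registry decorator."""
    _REGISTRY[cls.__name__] = cls
    return cls

def to_json(obj):
    # Tag the payload with its type name so from_json can dispatch
    return {'type': type(obj).__name__, 'attributes': vars(obj)}

def from_json(d):
    cls = _REGISTRY[d['type']]
    obj = cls.__new__(cls)
    obj.__dict__.update(d['attributes'])
    return obj

@serializable
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

# Round trip through a JSON string
p = from_json(json.loads(json.dumps(to_json(Point(1, 2)))))
assert (p.x, p.y) == (1, 2)
```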

dace.sourcemap module

class dace.sourcemap.MapCpp(code, name, target_name)

Bases: object

Creates the mapping between the SDFG nodes and the generated C++ code lines.

codegen_mapping(line, line_num)

Searches the code line for the first ////__CODEGEN identifier and adds the information to the codegen_map

Parameters:
  • line (str) – code line to search for identifiers

  • line_num (int) – corresponding line number

create_mapping(node, line_num)

Adds a C++ line number to the mapping

Parameters:
  • node (SdfgLocation) – A node which will map to the line number

  • line_num (int) – The line number to add to the mapping

get_identifiers(line, findall=True)

Returns a list of identifiers found in the code line

Parameters:
  • line (str) – line of C++ code with identifiers

  • findall (bool) – if it should return all finds or just the first one

Returns:

if findall is True, returns list of identifiers, otherwise a single identifier.

get_nodes(line_code)

Retrieve all identifiers set at the end of the line of code. For example, the line x = y ////__DACE:0:0:0 ////__DACE:0:0:1 returns [SdfgLocation(0,0,0), SdfgLocation(0,0,1)].

Parameters:

line_code (str) – a single line of code

Returns:

list of SDFGLocation
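Extracting such identifiers can be sketched with a regular expression matching the documented `////__DACE:<cfg>:<state>:<node>` format (an illustration, not the actual parser):

```python
import re

# Hypothetical sketch of identifier extraction from a generated code line
IDENT = re.compile(r'////__DACE:(\d+):(\d+):(\d+)')

line = 'x = y; ////__DACE:0:0:0 ////__DACE:0:0:1'

locations = IDENT.findall(line)          # all identifiers on the line
assert locations == [('0', '0', '0'), ('0', '0', '1')]

first = IDENT.search(line).groups()      # just the first identifier
assert first == ('0', '0', '0')
```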

mapper(codegen_debug=False)

For each line of code, retrieves the corresponding identifiers and creates the mapping.

Parameters:

codegen_debug (bool) – if the codegen mapping should be created

class dace.sourcemap.MapPython(name)

Bases: object

Creates the mapping between the source code and the SDFG nodes

create_mapping(range_dict=None)

Creates the actual mapping by using the debuginfo list

Parameters:

range_dict – For each file, a list of tuples containing a start and end line of a DaCe program

divide()

Divides debuginfo into an array where each entry corresponds to the debuginfo of a different source file.

make_info(debuginfo, node_id, state_id, cfg_id)

Creates an object for the current node with the most important information

Parameters:
  • debuginfo – JSON object of the debuginfo of the node

  • node_id (int) – ID of the node

  • state_id (int) – ID of the state

  • cfg_id (int) – ID of the SDFG

Return type:

dict

Returns:

Dictionary with a debuginfo JSON object and the identifiers

mapper(sdfg)

Creates the source to SDFG node mapping

Parameters:

sdfg – SDFG to create the mapping for

Return type:

bool

Returns:

True if the SDFG was created only via the API.

sdfg_debuginfo(graph, cfg_id=0, state_id=0)

Recursively retrieves all debuginfo from the nodes.

Parameters:
  • graph – An SDFG or SDFGState to check for nodes

  • cfg_id (int) – Id of the current SDFG/NestedSDFG

  • state_id (int) – Id of the current SDFGState

Returns:

list of debuginfo with the node identifiers

sorter()

Prioritizes smaller ranges over larger ones

class dace.sourcemap.SdfgLocation(cfg_id, state_id, node_ids)

Bases: object

printer()
dace.sourcemap.create_cache(name, folder)

Creates the map folder in the build path if it does not yet exist

Parameters:
  • name (str) – name of the SDFG

  • folder (str) – the build folder

Return type:

str

Returns:

relative path to the created folder

dace.sourcemap.create_cpp_map(code, name, target_name, build_folder, sourceFiles, made_with_api)

Creates the mapping from the SDFG nodes to the C++ code lines. The mapping gets saved at: <SDFG build folder>/map/map_cpp.json

Parameters:
  • code (str) – C++ code containing the identifiers ‘////__DACE:0:0:0’

  • name (str) – The name of the SDFG

  • target_name (str) – The target type, example: ‘cpu’

  • build_folder (str) – The build_folder of the SDFG

  • sourceFiles (List[str]) – A list of source files of the SDFG

  • made_with_api (bool) – true if the SDFG was created just with the API

dace.sourcemap.create_folder(path_str)

Creates a folder if it does not yet exist

Parameters:

path_str (str) – location the folder will be created at

dace.sourcemap.create_maps(sdfg, code, target_name)

Creates the C++, Py and Codegen mapping

Parameters:
  • sdfg – The sdfg to create the mapping for

  • code (str) – The generated code

  • target_name (str) – The target name

dace.sourcemap.create_py_map(sdfg)

Creates the mapping from the python source lines to the SDFG nodes. The mapping gets saved at: <SDFG build folder>/map/map_py.json

Parameters:

sdfg – The SDFG for which the mapping will be created

Returns:

an object with the build_folder, src_files and made_with_api

dace.sourcemap.get_src_files(sdfg)

Search all nodes for debuginfo to find the source filenames

Parameters:

sdfg – An SDFG to check for source files

Returns:

list of unique source filenames

dace.sourcemap.save(language, name, map, build_folder)

Saves the mapping in the map folder of the corresponding SDFG

Parameters:
  • language (str) – used for the file name to save to: py -> map_py.json

  • name (str) – name of the SDFG

  • map (dict) – the map object to be saved

  • build_folder (str) – build folder

Return type:

str

Returns:

absolute path to the cache folder of the SDFG

dace.sourcemap.send(data)

Sends a JSON object to the port given by the environment variable DACE_port. If the port is not set, nothing is sent.

Parameters:

data – JSON object to send
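A minimal sketch of this behavior in plain Python (hypothetical implementation; only the DACE_port convention is taken from the documentation above):

```python
import json
import os
import socket

def send(data):
    """Hypothetical sketch: send a JSON object to localhost:DACE_port.

    If the DACE_port environment variable is not set, nothing is sent.
    """
    port = os.getenv('DACE_port')
    if port is None:
        return False  # port not set: do nothing
    with socket.create_connection(('localhost', int(port))) as sock:
        sock.sendall(json.dumps(data).encode('utf-8'))
    return True

# With no port configured, the call is a no-op:
os.environ.pop('DACE_port', None)
assert send({'sdfg': 'example'}) is False
```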

dace.subsets module

class dace.subsets.Indices(indices)

Bases: Subset

A subset of one element representing a single index in an N-dimensional data descriptor.

absolute_strides(global_shape)
at(i, strides)

Returns the absolute index (1D memory layout) of this subset at the given index tuple. For example, the range [2:10:2] at index 2 would return 6 (2+2*2).

Parameters:
  • i – A tuple of the same dimensionality as subset.dims().

  • strides – The strides of the array we are subsetting.

Returns:

Absolute 1D index at coordinate i.

bounding_box_size()
compose(other)
coord_at(i)

Returns the offset coordinates of this subset at the given index tuple. For example, the range [2:10:2] at index 2 would return 6 (2+2*2).

Parameters:

i – A tuple of the same dimensionality as subset.dims().

Returns:

Absolute coordinates for index i.

data_dims()
dims()
property free_symbols: Set[str]

Returns a set of undefined symbols in this subset.

static from_json(obj, context=None)
static from_string(s)
intersection(other)
intersects(other)
max_element()
max_element_approx()
min_element()
min_element_approx()
ndrange()
num_elements()
num_elements_exact()
offset(other, negative, indices=None)
offset_new(other, negative, indices=None)
pop(dimensions)
pystr()
reorder(order)

Re-orders the dimensions in-place according to a permutation list.

Parameters:

order – List or tuple of integers from 0 to self.dims() - 1, indicating the desired order of the dimensions.

replace(repl_dict)
size()
size_exact()
squeeze(ignore_indices=None)
strides()
to_json()
unsqueeze(axes)

Adds zeroes to the subset, in the indices contained in axes.

The method is mostly used to restore subsets that had their zero-indices removed (i.e., squeezed subsets). Hence, the method is called ‘unsqueeze’.

Examples (initial subset, axes -> result subset, output):
  • [i], [0] -> [0, i], [0]
  • [i], [0, 1] -> [0, 0, i], [0, 1]
  • [i], [0, 2] -> [0, i, 0], [0, 2]
  • [i], [0, 1, 2, 3] -> [0, 0, 0, 0, i], [0, 1, 2, 3]
  • [i], [0, 2, 3, 4] -> [0, i, 0, 0, 0], [0, 2, 3, 4]
  • [i], [0, 1, 1] -> [0, 0, 0, i], [0, 1, 2]

Parameters:

axes (Sequence[int]) – The axes where the zero-indices should be added.

Return type:

List[int]

Returns:

A list of the actual axes where the zero-indices were added.
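The documented behavior, including the shifting of repeated axes, can be sketched in plain Python (hypothetical helper, not DaCe's implementation):

```python
def unsqueeze(indices, axes):
    """Hypothetical sketch reproducing the documented unsqueeze examples."""
    result = list(indices)
    actual_axes = []
    for ax in axes:
        # A repeated or overlapping axis shifts past previously inserted ones
        if actual_axes and ax <= actual_axes[-1]:
            ax = actual_axes[-1] + 1
        result.insert(ax, 0)
        actual_axes.append(ax)
    return result, actual_axes

assert unsqueeze(['i'], [0]) == ([0, 'i'], [0])
assert unsqueeze(['i'], [0, 2]) == ([0, 'i', 0], [0, 2])
# Repeated axis [0, 1, 1]: the second '1' is shifted to axis 2
assert unsqueeze(['i'], [0, 1, 1]) == ([0, 0, 0, 'i'], [0, 1, 2])
```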

class dace.subsets.Range(ranges)

Bases: Subset

Subset defined in terms of a fixed range.

absolute_strides(global_shape)

Returns a list of strides for advancing one element in each dimension. Size of the list is equal to data_dims(), which may be larger than dims() depending on tile sizes.

at(i, strides)

Returns the absolute index (1D memory layout) of this subset at the given index tuple.

For example, the range [2:10:2] at index 2 would return 6 (2+2*2).

Parameters:
  • i – A tuple of the same dimensionality as subset.dims() or subset.data_dims().

  • strides – The strides of the array we are subsetting.

Returns:

Absolute 1D index at coordinate i.
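The relationship between coord_at and at can be sketched in plain Python (hypothetical helpers; ranges are (begin, end, step) triples, ignoring tiling):

```python
def coord_at(ranges, i):
    """Offset coordinates: begin + index * step in each dimension."""
    return [begin + idx * step for (begin, end, step), idx in zip(ranges, i)]

def at(ranges, i, strides):
    """Absolute 1D index: dot product of the coordinates with the strides."""
    return sum(c * s for c, s in zip(coord_at(ranges, i), strides))

# The range [2:10:2] at index 2 with unit stride: 2 + 2*2 = 6
assert coord_at([(2, 10, 2)], (2,)) == [6]
assert at([(2, 10, 2)], (2,), (1,)) == 6

# 2D: [0:4, 1:5] at (2, 3) in a row-major array with strides (8, 1)
assert at([(0, 4, 1), (1, 5, 1)], (2, 3), (8, 1)) == 2 * 8 + 4
```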

bounding_box_size()

Returns the size of a bounding box around this range.

compose(other)
coord_at(i)

Returns the offset coordinates of this subset at the given index tuple.

For example, the range [2:10:2] at index 2 would return 6 (2+2*2).

Parameters:

i – A tuple of the same dimensionality as subset.dims() or subset.data_dims().

Returns:

Absolute coordinates for index i (length equal to data_dims(), may be larger than dims()).

data_dims()
static dim_to_string(d, t=1)
dims()
property free_symbols: Set[str]

Returns a set of undefined symbols in this subset.

static from_array(array)

Constructs a range that covers the full array given as input.

static from_indices(indices)
static from_json(obj, context=None)
static from_string(string)
get_free_symbols_by_indices(indices)

Gets the set of free symbols by looking only at the dimensions given by the indices list.

Parameters:

indices (List[int]) – The indices of the dimensions to look at

Returns:

The set of free symbols

Return type:

Set[str]

intersects(other)
max_element()
max_element_approx()
min_element()
min_element_approx()
ndrange()
static ndslice_to_string(slice, tile_sizes=None)
static ndslice_to_string_list(slice, tile_sizes=None)
num_elements()
num_elements_exact()
offset(other, negative, indices=None)
offset_new(other, negative, indices=None)
pop(dimensions)
pystr()
reorder(order)

Re-orders the dimensions in-place according to a permutation list.

Parameters:

order – List or tuple of integers from 0 to self.dims() - 1, indicating the desired order of the dimensions.

replace(repl_dict)
size(for_codegen=False)

Returns the number of elements in each dimension.

size_exact()

Returns the number of elements in each dimension.

squeeze(ignore_indices=None, offset=True)

Removes size-1 ranges from the subset and returns a list of dimensions that remain.

For example, [i:i+10, j] will change the range to [i:i+10] and return [0]. If offset is True, the subset will become [0:10].

Parameters:
  • ignore_indices (Optional[List[int]]) – An iterable of dimensions to not include in squeezing.

  • offset (bool) – If True, will offset the non-ignored indices back so that they start with 0.

Return type:

List[int]

Returns:

A list of dimension indices in the original subset, which remain in the squeezed result.

strides()
string_list()
to_json()
unsqueeze(axes)

Adds 0:1 ranges to the subset, in the indices contained in axes.

The method is mostly used to restore subsets that had their length-1 ranges removed (i.e., squeezed subsets). Hence, the method is called ‘unsqueeze’.

Examples (initial subset, axes -> result subset, output):
  • [i:i+10], [0] -> [0:1, i:i+10], [0]
  • [i:i+10], [0, 1] -> [0:1, 0:1, i:i+10], [0, 1]
  • [i:i+10], [0, 2] -> [0:1, i:i+10, 0:1], [0, 2]
  • [i:i+10], [0, 1, 2, 3] -> [0:1, 0:1, 0:1, 0:1, i:i+10], [0, 1, 2, 3]
  • [i:i+10], [0, 2, 3, 4] -> [0:1, i:i+10, 0:1, 0:1, 0:1], [0, 2, 3, 4]
  • [i:i+10], [0, 1, 1] -> [0:1, 0:1, 0:1, i:i+10], [0, 1, 2]

Parameters:

axes (Sequence[int]) – The axes where the 0:1 ranges should be added.

Return type:

List[int]

Returns:

A list of the actual axes where the 0:1 ranges were added.

class dace.subsets.Subset

Bases: object

Defines a subset of a data descriptor.

at(i, strides)

Returns the absolute index (1D memory layout) of this subset at the given index tuple.

For example, the range [2:10:2] at index 2 would return 6 (2+2*2).

Parameters:
  • i – A tuple of the same dimensionality as subset.dims() or subset.data_dims().

  • strides – The strides of the array we are subsetting.

Returns:

Absolute 1D index at coordinate i.

coord_at(i)

Returns the offset coordinates of this subset at the given index tuple.

For example, the range [2:10:2] at index 2 would return 6 (2+2*2).

Parameters:

i – A tuple of the same dimensionality as subset.dims() or subset.data_dims().

Returns:

Absolute coordinates for index i (length equal to data_dims(), may be larger than dims()).

covers(other)

Returns True if this subset covers (using a bounding box) another subset.

covers_precise(other)

Returns True if self contains all the elements in other.
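A bounding-box cover test of this kind can be sketched in plain Python (hypothetical helper; subsets are modeled as per-dimension (begin, end) pairs with inclusive bounds, ignoring strides and symbolic bounds):

```python
def bbox_covers(a, b):
    """Hypothetical sketch of a bounding-box cover test.

    a covers b if, in every dimension, a's interval contains b's.
    """
    return all(a_begin <= b_begin and a_end >= b_end
               for (a_begin, a_end), (b_begin, b_end) in zip(a, b))

assert bbox_covers([(0, 9), (0, 9)], [(2, 5), (3, 3)])       # contained
assert not bbox_covers([(0, 9), (0, 9)], [(2, 15), (3, 3)])  # exceeds dim 0
```

Note that covers_precise is stricter: a bounding box may cover elements that a strided subset does not actually touch.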

property free_symbols: Set[str]

Returns a set of undefined symbols in this subset.

offset(other, negative, indices=None)
offset_new(other, negative, indices=None)
class dace.subsets.SubsetUnion(subset)

Bases: Subset

Wrapper subset type that stores multiple Subsets in a list.

covers(other)

Returns True if this SubsetUnion covers another subset (using a bounding box). If other is also a SubsetUnion, returns True only if self is other. If other is a different subset type, returns True when one of the subsets in self is equal to other.

covers_precise(other)

Returns True if this SubsetUnion covers another subset. If other is also a SubsetUnion, returns True only if self is other. If other is a different subset type, returns True when one of the subsets in self is equal to other.

dims()
property free_symbols: Set[str]

Returns a set of undefined symbols in this subset.

num_elements()
replace(repl_dict)
union(other)

In-place union of self with another Subset.

dace.subsets.bounding_box_cover_exact(subset_a, subset_b)
Return type:

bool

dace.subsets.bounding_box_symbolic_positive(subset_a, subset_b, approximation=False)
Return type:

bool

dace.subsets.bounding_box_union(subset_a, subset_b)

Perform union by creating a bounding-box of two subsets.

Return type:

Range
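The bounding-box union can be sketched in plain Python (hypothetical helper; subsets modeled as per-dimension (begin, end) pairs with inclusive bounds):

```python
def bounding_box_union(a, b):
    """Hypothetical sketch: per-dimension bounding box of two subsets."""
    return [(min(a_begin, b_begin), max(a_end, b_end))
            for (a_begin, a_end), (b_begin, b_end) in zip(a, b)]

# Union of [0:4, 2:6] and [3:9, 0:1] (inclusive bounds)
assert bounding_box_union([(0, 4), (2, 6)], [(3, 9), (0, 1)]) == [(0, 9), (0, 6)]
```

The result may overapproximate: it contains every element of both inputs, but possibly more, which matches the "size is at least the union" guarantee of dace.subsets.union below.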

dace.subsets.intersects(subset_a, subset_b)

Returns True if two subsets intersect, False if they do not, or None if the answer cannot be determined.

Parameters:
  • subset_a (Subset) – The first subset.

  • subset_b (Subset) – The second subset.

Return type:

Optional[bool]

Returns:

True if subsets intersect, False if not, None if indeterminate.

dace.subsets.list_union(subset_a, subset_b)

Returns the union of two Subset lists.

Parameters:
  • subset_a (Subset) – The first subset.

  • subset_b (Subset) – The second subset.

Return type:

Subset

Returns:

A SubsetUnion object that contains all elements of subset_a and subset_b.

dace.subsets.nng(expr)
dace.subsets.union(subset_a, subset_b)

Compute the union of two Subset objects. If the subsets are not of the same type, degenerates to bounding-box union.

Parameters:
  • subset_a (Subset) – The first subset.

  • subset_b (Subset) – The second subset.

Return type:

Subset

Returns:

A Subset object whose size is at least the union of the two inputs. If union failed, returns None.

dace.symbolic module

class dace.symbolic.AND(x, y)

Bases: Function

default_assumptions = {}
classmethod eval(x, y)

Evaluates logical and.

Parameters:
  • x – First operand.

  • y – Second operand.

Returns:

Return value (literal or symbolic).

class dace.symbolic.Attr(*args)

Bases: Function

Represents a get-attribute call on a function, equivalent to a.b in Python.

default_assumptions = {}
free_symbols = {}
class dace.symbolic.BitwiseAnd(*args)

Bases: Function

default_assumptions = {}
class dace.symbolic.BitwiseNot(*args)

Bases: Function

default_assumptions = {}
class dace.symbolic.BitwiseOr(*args)

Bases: Function

default_assumptions = {}
class dace.symbolic.BitwiseXor(*args)

Bases: Function

default_assumptions = {}
class dace.symbolic.DaceSympyPrinter(arrays, cpp_mode=False, *args, **kwargs)

Bases: StrPrinter

Several notational corrections for integer math and C++ translation that sympy.printing.cxxcode does not provide.

class dace.symbolic.IfExpr(x, y, z)

Bases: Function

default_assumptions = {}
classmethod eval(x, y, z)

Evaluates a ternary operator.

Parameters:
  • x – Predicate.

  • y – If true return this.

  • z – If false return this.

Returns:

Return value (literal or symbolic).

class dace.symbolic.Is(*args)

Bases: Function

default_assumptions = {}
class dace.symbolic.IsNot(*args)

Bases: Function

default_assumptions = {}
class dace.symbolic.LeftShift(*args)

Bases: Function

default_assumptions = {}
class dace.symbolic.OR(x, y)

Bases: Function

default_assumptions = {}
classmethod eval(x, y)

Evaluates logical or.

Parameters:
  • x – First operand.

  • y – Second operand.

Returns:

Return value (literal or symbolic).

class dace.symbolic.PythonOpToSympyConverter

Bases: NodeTransformer

Replaces various operations with the appropriate SymPy functions to avoid non-symbolic evaluation.

visit_Attribute(node)
visit_BinOp(node)
visit_BoolOp(node)
visit_Compare(node)
visit_Constant(node)
visit_IfExp(node)
visit_NameConstant(node)
visit_Subscript(node)
visit_UnaryOp(node)
class dace.symbolic.ROUND(x)

Bases: Function

default_assumptions = {}
classmethod eval(x)

Evaluates rounding to integer.

Parameters:

x – Value to round.

Returns:

Return value (literal or symbolic).

class dace.symbolic.RightShift(*args)

Bases: Function

default_assumptions = {}
class dace.symbolic.SymExpr(main_expr, approx_expr=None)

Bases: object

Symbolic expressions with support for an overapproximation expression.

property approx
property expr
match(*args, **kwargs)
subs(repldict)
class dace.symbolic.SympyAwarePickler(file, protocol=None, fix_imports=True, buffer_callback=None)

Bases: Pickler

Custom Pickler class that safely saves SymPy expressions with function definitions in expressions (e.g., int_ceil).

persistent_id(obj)
class dace.symbolic.SympyAwareUnpickler(file, *, fix_imports=True, encoding='ASCII', errors='strict', buffers=())

Bases: Unpickler

Custom Unpickler class that safely restores SymPy expressions with function definitions in expressions (e.g., int_ceil).

persistent_load(pid)
dace.symbolic.contains_sympy_functions(expr)

Returns True if expression contains Sympy functions.

dace.symbolic.equal(a, b, is_length=True)

Compares two symbolic expressions and returns True if they are equal, False if they are unequal, and None if the comparison is inconclusive.

Parameters:
  • a (Union[Basic, SymExpr]) – First symbolic expression.

  • b (Union[Basic, SymExpr]) – Second symbolic expression.

  • is_length (bool) – If True, a and b are assumed to be positive integers.

Return type:

Optional[bool]

dace.symbolic.equalize_symbol(sym)

If a symbol or symbolic expression has multiple symbols with the same name, substitutes them with the last symbol (as they appear in sym.free_symbols).

Return type:

Expr

dace.symbolic.equalize_symbols(a, b)

If the two input expressions use different symbols with the same names, substitutes the symbols of the second expression with those of the first expression.

Return type:

Tuple[Expr, Expr]

dace.symbolic.evaluate(expr, symbols)

Evaluates an expression to a constant based on a mapping from symbols to values.

Parameters:
  • expr (Union[Basic, int, float]) – The expression to evaluate.

  • symbols (Dict[Union[symbol, str], Union[int, float]]) – A mapping of symbols to their values.

Return type:

Union[int, float, number]

Returns:

A constant value based on expr and symbols.
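The symbol-to-value substitution can be sketched in plain Python with a hypothetical string-based helper (the real dace.symbolic.evaluate operates on SymPy expressions, not strings):

```python
def evaluate_str(expr, symbols):
    """Substitute symbol values into a string expression and evaluate it.

    Hypothetical illustration of the symbol-mapping idea only; not the
    dace API, which takes SymPy expressions.
    """
    # Restrict eval to the provided symbol values only
    return eval(expr, {'__builtins__': {}}, dict(symbols))

evaluate_str('N * M + 1', {'N': 3, 'M': 4})  # 13
```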

dace.symbolic.evaluate_optional_arrays(expr, sdfg)

Evaluate Is(…) and IsNot(…) expressions for arrays.

Parameters:
  • expr – The symbolic expression to evaluate.

  • sdfg – SDFG that contains arrays.

Returns:

A simplified version of the expression.

dace.symbolic.free_symbols_and_functions(expr)
Return type:

Set[str]

dace.symbolic.inequal_symbols(a, b)

Compares two symbolic expressions and returns True if they are not equal.

Return type:

bool

class dace.symbolic.int_ceil(x, y)

Bases: Function

default_assumptions = {}
classmethod eval(x, y)

Evaluates integer ceiling division (ceil(x / y)).

Parameters:
  • x – Numerator.

  • y – Denominator.

Returns:

Return value (literal or symbolic).

class dace.symbolic.int_floor(x, y)

Bases: Function

default_assumptions = {}
classmethod eval(x, y)

Evaluates integer floor division (floor(x / y)).

Parameters:
  • x – Numerator.

  • y – Denominator.

Returns:

Return value (literal or symbolic).
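The semantics of both functions can be sketched in plain Python on integer literals (this is only the arithmetic they evaluate to, not the symbolic implementation):

```python
def int_ceil(x, y):
    # ceil(x / y) without floating point: negate, floor-divide, negate
    return -(-x // y)

def int_floor(x, y):
    # floor(x / y) is Python's floor division
    return x // y

int_ceil(7, 2)   # 4
int_floor(7, 2)  # 3
```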

dace.symbolic.is_sympy_userfunction(expr)

Returns True if the expression is a SymPy function.

dace.symbolic.issymbolic(value, constants=None)

Returns True if an expression is symbolic with respect to its contents and a given dictionary of constant values.

dace.symbolic.overapproximate(expr)

Takes a sympy expression and returns its maximal possible value in specific cases.

dace.symbolic.pystr_to_symbolic(expr, symbol_map=None, simplify=None)

Takes a Python string and converts it into a symbolic expression.

Return type:

Basic

dace.symbolic.resolve_symbol_to_constant(symb, start_sdfg)

Tries to resolve a symbol to a constant by looking it up in the SDFG’s constants, following the nested SDFG hierarchy if necessary.

Parameters:
  • symb – The symbol to resolve to a constant.

  • start_sdfg – The starting SDFG.

Returns:

The constant value if the symbol is resolved, None otherwise.

dace.symbolic.safe_replace(mapping, replace_callback, value_as_string=False)

Safely replaces symbolic expressions that may clash with each other via a two-step replacement. For example, the mapping {M: N, N: M} is applied by first replacing {M, N} with {__dacesym_M, __dacesym_N}, and then replacing {__dacesym_M, __dacesym_N} with {N, M}.

Parameters:
  • mapping (Dict[Union[Basic, SymExpr, str], Union[Basic, SymExpr, str]]) – The replacement dictionary.

  • replace_callback (Callable[[Dict[str, str]], None]) – A callable function that receives a replacement dictionary and performs the replacement (can be unsafe).

  • value_as_string (bool) – Replacement values are replaced as strings rather than symbols.

Return type:

None
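The two-step scheme can be sketched with plain string substitution (a simplified stand-in for the real symbolic replacement, which works on symbols rather than raw text):

```python
def swap_via_placeholders(mapping, text):
    """Apply a possibly-clashing replacement mapping in two steps.

    Simplified sketch: str.replace ignores token boundaries, so this is
    only safe for names that do not appear inside the placeholders.
    """
    keys = list(mapping)
    # Step 1: move every source name to a unique temporary placeholder,
    # so later replacements cannot clobber earlier ones.
    for i, src in enumerate(keys):
        text = text.replace(src, f'__dacesym_{i}__')
    # Step 2: replace each placeholder with its final value.
    for i, src in enumerate(keys):
        text = text.replace(f'__dacesym_{i}__', mapping[src])
    return text

# Naive one-pass replacement would turn 'M + N' into 'M + M';
# the two-step version performs the swap correctly.
swap_via_placeholders({'M': 'N', 'N': 'M'}, 'M + N')  # 'N + M'
```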

dace.symbolic.simplify(expr)
Return type:

Union[Basic, SymExpr]

dace.symbolic.simplify_ext(expr)

An extended version of simplification with expression fixes for sympy.

Parameters:

expr – A sympy expression.

Returns:

Simplified version of the expression.

dace.symbolic.swalk(expr, enter_functions=False)

Walk over a symbolic expression tree (similar to ast.walk). Returns an iterator that yields the values and recurses into functions, if specified.

class dace.symbolic.symbol(name=None, dtype=int, **assumptions)

Bases: Symbol

Defines a symbolic expression. Extends SymPy symbols with DaCe-related information.

add_constraints(constraint_list)
check_constraints(value)
property constraints
default_assumptions = {}
name: str
s_currentsymbol = 0
set_constraints(constraint_list)
dace.symbolic.symbol_name_or_value(val)

Returns the symbol name if symbol, otherwise the value as a string.

dace.symbolic.symbols_in_ast(tree)

Walks an AST and finds all names, excluding function names.
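The idea can be sketched with the standard ast module (a hypothetical helper mirroring the description, not the dace implementation):

```python
import ast

def names_in_ast(tree):
    # Names used as direct call targets are treated as function names
    called = {node.func.id for node in ast.walk(tree)
              if isinstance(node, ast.Call) and isinstance(node.func, ast.Name)}
    # All remaining Name nodes are candidate symbols
    return {node.id for node in ast.walk(tree)
            if isinstance(node, ast.Name)} - called

names_in_ast(ast.parse('min(N, M) + K'))  # {'N', 'M', 'K'}
```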

dace.symbolic.symbols_in_code(code, potential_symbols=None, symbols_to_ignore=None)

Tokenizes a code string for symbols and returns a set thereof.

Parameters:
  • code (str) – The code to tokenize.

  • potential_symbols (Optional[Set[str]]) – If not None, filters symbols to this given set.

  • symbols_to_ignore (Optional[Set[str]]) – If not None, filters out symbols from this set.

Return type:

Set[str]
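The tokenization-and-filtering approach can be sketched with the standard tokenize module (a hypothetical re-implementation for illustration, not the dace source):

```python
import io
import keyword
import tokenize

def symbols_in_code(code, potential_symbols=None, symbols_to_ignore=None):
    # Keep identifier tokens that are not Python keywords
    names = {tok.string
             for tok in tokenize.generate_tokens(io.StringIO(code).readline)
             if tok.type == tokenize.NAME and not keyword.iskeyword(tok.string)}
    if potential_symbols is not None:
        names &= set(potential_symbols)   # keep only candidate symbols
    if symbols_to_ignore is not None:
        names -= set(symbols_to_ignore)   # drop known non-symbols
    return names

symbols_in_code('N * M + i', potential_symbols={'N', 'M'})  # {'N', 'M'}
```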

dace.symbolic.symlist(values)

Finds symbol dependencies of expressions.

dace.symbolic.sympy_divide_fix(expr)

Fix SymPy printouts where integer division such as “tid/2” turns into “.5*tid”.

dace.symbolic.sympy_intdiv_fix(expr)

Fix for SymPy printing out reciprocal values when they should be integral in “ceiling/floor” sympy functions.

dace.symbolic.sympy_numeric_fix(expr)

Fix for printing out integers as floats with “.00000000”. Converts the float constants in a given expression to integers.

dace.symbolic.sympy_to_dace(exprs, symbol_map=None)

Convert all sympy.Symbol objects to DaCe symbols, according to symbol_map.

dace.symbolic.symstr(sym, arrayexprs=None, cpp_mode=False)

Convert a symbolic expression to a compilable expression.

Parameters:
  • sym – Symbolic expression to convert.

  • arrayexprs (Optional[Set[str]]) – Set of names of arrays, used to convert SymPy user-functions back to array expressions.

  • cpp_mode – If True, returns a C++-compilable expression. Otherwise, returns a Python expression.

Return type:

str

Returns:

Expression in string format depending on the value of cpp_mode.

dace.symbolic.symtype(expr)

Returns the inferred symbol type from a symbolic expression.

Module contents

class dace.DaceModule(name, doc=None)

Bases: module