dace package

Subpackages

Submodules

dace.builtin_hooks module

A set of built-in hooks.

dace.builtin_hooks.cli_optimize_on_call(sdfg)

Calls a command-line interface for interactive SDFG transformations on every DaCe program call.

Parameters:

sdfg (SDFG) – The current SDFG to optimize.

dace.builtin_hooks.instrument(itype, filter, annotate_maps=True, annotate_tasklets=False, annotate_states=False, annotate_sdfgs=False)

Context manager that instruments every called DaCe program. Depending on the given instrumentation type and parameters, annotates the given elements on the SDFG. Filtering is possible with strings and wildcards, or a function (if given).

Example usage:

with dace.instrument(dace.InstrumentationType.GPU_Events, 
                     filter='*add??') as profiler:
    some_program(...)
    # ...
    other_program(...)

# Print instrumentation report for last call
print(profiler.reports[-1])
Parameters:
  • itype (InstrumentationType) – Instrumentation type to use.

  • filter (Union[str, Callable[[Any], bool], None]) – An optional string with * and ? wildcards, or function that receives one parameter, determining whether to instrument the element or not.

  • annotate_maps (bool) – If True, instruments scopes (e.g., map, consume) in the SDFGs.

  • annotate_tasklets (bool) – If True, instruments tasklets in the SDFGs.

  • annotate_states (bool) – If True, instruments states in the SDFGs.

  • annotate_sdfgs (bool) – If True, instruments whole SDFGs and sub-SDFGs.

dace.builtin_hooks.instrument_data(ditype, filter, restore_from=None, verbose=False)

Context manager that instruments (serializes/deserializes) the data of every called DaCe program. This can be used for reproducible runs and debugging. Depending on the given data instrumentation type and parameters, annotates the access nodes on the SDFG. Filtering is possible with strings and wildcards, or a function (if given). An optional instrumented data report can be given to load a specific set of data.

Example usage:

@dace.program
def sample(a: dace.float64, b: dace.float64):
    arr = a + b
    return arr + 1

with dace.instrument_data(dace.DataInstrumentationType.Save, filter='a??'):
    result_ab = sample(a, b)

# Optionally, get the serialized data containers (``sdfg`` is the program's SDFG)
dreport = sdfg.get_instrumented_data()
assert dreport.keys() == {'arr'}  # dreport['arr'] is now the internal ``arr``

# Reload latest instrumented data (can be customized if ``restore_from`` is given)
with dace.instrument_data(dace.DataInstrumentationType.Restore, filter='a??'):
    result_cd = sample(c, d)  # where ``c, d`` are different from ``a, b``

assert numpy.allclose(result_ab, result_cd)
Parameters:
  • ditype (DataInstrumentationType) – Data instrumentation type to use.

  • filter (Union[str, Callable[[Any], bool], None]) – An optional string with * and ? wildcards, or function that receives one parameter, determining whether to instrument the access node or not.

  • restore_from (Union[str, InstrumentedDataReport, None]) – An optional parameter that specifies which instrumented data report to load data from. It could be a path to a folder, an InstrumentedDataReport object, or None to load the latest generated report.

  • verbose (bool) – If True, prints information about created and loaded instrumented data reports.

dace.builtin_hooks.profile(repetitions=100, warmup=0, tqdm_leave=True, print_results=True)

Context manager that enables profiling of each called DaCe program. If repetitions is greater than 1, the program is run multiple times and the average execution time is reported.

Example usage:

with dace.profile(repetitions=100) as profiler:
    some_program(...)
    # ...
    other_program(...)

# Print all execution times of the last called program (other_program)
print(profiler.times[-1])
Parameters:
  • repetitions (int) – The number of times to run each DaCe program.

  • warmup (int) – Number of additional repetitions to run the program without measuring time.

  • tqdm_leave (bool) – Sets the leave parameter of the tqdm progress bar (useful for nested progress bars). Ignored if tqdm progress bar is not used.

  • print_results (bool) – Whether or not to print the median execution time after all repetitions.

Note:

Running functions multiple times may affect the results of the program.

dace.config module

class dace.config.Config

Bases: object

Interface to the DaCe hierarchical configuration file.

static append(*key_hierarchy, value=None, autosave=False)

Appends to the current value of a given configuration entry and sets it.

Parameters:
  • key_hierarchy – A tuple of strings leading to the configuration entry. For example: (‘a’, ‘b’, ‘c’) would be configuration entry c which is in the path a->b.

  • value – The value to append.

  • autosave – If True, saves the configuration to the file after modification.

Returns:

Current configuration entry value.

Examples:

Config.append('compiler', 'cpu', 'args', value='-fPIC')
static cfg_filename()

Returns the current configuration file path.

default_filename = '.dace.conf'
static get(*key_hierarchy)

Returns the current value of a given configuration entry.

Parameters:

key_hierarchy – A tuple of strings leading to the configuration entry. For example: (‘a’, ‘b’, ‘c’) would be configuration entry c which is in the path a->b.

Returns:

Configuration entry value.
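The key hierarchy can be pictured as a path through a nested dictionary. The sketch below is illustrative only (it is not DaCe's implementation, and the configuration values shown are made up); the equivalent DaCe call would be ``Config.get('compiler', 'cpu', 'args')``:

```python
# Illustrative sketch of how a key hierarchy ('a', 'b', 'c') resolves
# against a nested configuration dictionary (plain Python, no DaCe).
config = {'compiler': {'cpu': {'args': '-O3 -march=native'}}}

def get(cfg, *key_hierarchy):
    entry = cfg
    for key in key_hierarchy:  # descend one level per key
        entry = entry[key]
    return entry

print(get(config, 'compiler', 'cpu', 'args'))  # -O3 -march=native
```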

static get_bool(*key_hierarchy)

Returns the current value of a given boolean configuration entry. This specialization allows more string types to be converted to boolean, e.g., due to environment variable overrides.

Parameters:

key_hierarchy – A tuple of strings leading to the configuration entry. For example: (‘a’, ‘b’, ‘c’) would be configuration entry c which is in the path a->b.

Returns:

Configuration entry value (as a boolean).

static get_default(*key_hierarchy)

Returns the default value of a given configuration entry. Takes into account the current operating system.

Parameters:

key_hierarchy – A tuple of strings leading to the configuration entry. For example: (‘a’, ‘b’, ‘c’) would be configuration entry c which is in the path a->b.

Returns:

Default configuration value.

static get_metadata(*key_hierarchy)

Returns the configuration specification of a given entry from the schema.

Parameters:

key_hierarchy – A tuple of strings leading to the configuration entry. For example: (‘a’, ‘b’, ‘c’) would be configuration entry c which is in the path a->b.

Returns:

Configuration specification as a dictionary.

static initialize()

Initializes configuration.

Note:

This function runs automatically when the module is loaded.

static load(filename=None)

Loads a configuration from an existing file.

Parameters:

filename – The file to load. If unspecified, uses default configuration file.

static load_schema(filename=None)

Loads a configuration schema from an existing file.

Parameters:

filename – The file to load. If unspecified, uses default schema file.

static nondefaults()
Return type:

Dict[str, Any]

static save(path=None, all=False)

Saves the current configuration to a file.

Parameters:
  • path – The file to save to. If unspecified, uses default configuration file.

  • all (bool) – If False, only saves non-default configuration entries. Otherwise saves all entries.

static set(*key_hierarchy, value=None, autosave=False)

Sets the current value of a given configuration entry.

Parameters:
  • key_hierarchy – A tuple of strings leading to the configuration entry. For example: (‘a’, ‘b’, ‘c’) would be configuration entry c which is in the path a->b.

  • value – The value to set.

  • autosave – If True, saves the configuration to the file after modification.

Examples:

Config.set('profiling', value=True)
dace.config.set_temporary(*path, value)

Temporarily set configuration value at path to value, and reset it after the context manager exits.

Example:

print(Config.get("compiler", "build_type"))
with set_temporary("compiler", "build_type", value="Debug"):
    print(Config.get("compiler", "build_type"))
print(Config.get("compiler", "build_type"))
dace.config.temporary_config()

Creates a context where all configuration options changed will be reset when the context exits.

Example:

with temporary_config():
    Config.set("testing", "serialization", value=True)
    Config.set("optimizer", "autooptimize", value=True)
    foo()

dace.data module

class dace.data.Array(*args, **kwargs)

Bases: Data

Array data descriptor. This object represents a multi-dimensional data container in SDFGs that can be accessed and modified. The definition does not contain the actual array, but rather a description of how to construct it and how it should behave.

The array definition is flexible in terms of data allocation: it allows arbitrary multidimensional, potentially symbolic shapes (e.g., an array with size N+1 x M will have shape=(N+1, M)) and arbitrary data typeclasses (dtype). The physical data layout of the array is controlled by several properties:

  • The strides property determines the ordering and layout of the dimensions — it specifies how many elements in memory are skipped whenever one element in that dimension is advanced. For example, the contiguous dimension always has a stride of 1; a C-style MxN array will have strides (N, 1), whereas a FORTRAN-style array of the same size will have (1, M). Strides can be larger than the shape, which allows post-padding of the contents of each dimension.

  • The start_offset property is a number of elements to pad the beginning of the memory buffer with. This is used to ensure that a specific index is aligned as a form of pre-padding (that element may not necessarily be the first element, e.g., in the case of halo or “ghost cells” in stencils).

  • The total_size property determines how large the total allocation size is. Normally, it is the product of the shape elements, but if pre- or post-padding is involved it may be larger.

  • alignment provides alignment guarantees (in bytes) of the first element in the allocated array. This is used by allocators in the code generator to ensure certain addresses are expected to be aligned, e.g., for vectorization.

  • Lastly, a property called offset controls the logical access of the array, i.e., what would be the first element’s index after padding and alignment. This mimics a language feature prominent in scientific languages such as FORTRAN, where one could set an array to begin with 1, or any arbitrary index. By default this is set to zero.

To summarize with an example, a two-dimensional array with pre- and post-padding looks as follows:

[xxx][          |xx]
     [          |xx]
     [          |xx]
     [          |xx]
     ---------------
     [xxxxxxxxxxxxx]

shape = (4, 10)
strides = (12, 1)
start_offset = 3
total_size = 63   [= 3 + 12 * 5]
offset = (0, 0)

Notice that the last padded row does not appear in strides, but is a consequence of total_size being larger.

Apart from memory layout, other properties of Array help the data-centric transformation infrastructure make decisions about the array. allow_conflicts states that warnings should not be printed if potentially conflicting accesses (e.g., data races) occur. may_alias inhibits transformations that would otherwise assume that this array does not overlap with other arrays in the same context (e.g., function).
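The layout arithmetic from the padded-array example above can be checked directly. The sketch below is plain Python (no DaCe required) and the helper ``flat_index`` is illustrative, not part of the DaCe API:

```python
# Layout parameters from the example above: a 4x10 array with 3 elements
# of pre-padding and 2 elements of post-padding per row (stride 12 > 10).
shape = (4, 10)
strides = (12, 1)
start_offset = 3

def flat_index(index, strides, start_offset):
    """Map a logical (i, j) index to its element offset in the buffer."""
    return start_offset + sum(i * s for i, s in zip(index, strides))

# The last logical element (3, 9) lands at offset 3 + 3*12 + 9 = 48.
last = flat_index((shape[0] - 1, shape[1] - 1), strides, start_offset)

# The allocation also covers the fully padded extra row from the diagram,
# hence total_size = 3 + 12 * 5 = 63, larger than the product of the shape.
total_size = start_offset + strides[0] * (shape[0] + 1)
print(last, total_size)  # 48 63
```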

alignment

Allocation alignment in bytes (0 uses compiler-default)

allow_conflicts

If enabled, allows more than one memlet to write to the same memory location without conflict resolution.

as_arg(with_types=True, for_call=False, name=None)

Returns a string for a C++ function signature (e.g., int *A).

as_python_arg(with_types=True, for_call=False, name=None)

Returns a string for a Data-Centric Python function signature (e.g., A: dace.int32[M]).

clone()
covers_range(rng)
property free_symbols

Returns a set of undefined symbols in this data descriptor.

classmethod from_json(json_obj, context=None)
is_equivalent(other)

Check for equivalence (shape and type) of two data descriptors.

may_alias

This pointer may alias with other pointers in the same function

offset

Initial offset to translate all indices by.

optional

Specifies whether this array may have a value of None. If False, the array must not be None. If unset, the value is inferred by other properties and the OptionalArrayInference pass.

pool

Hint to the allocator that using a memory pool is preferred

properties()
set_shape(new_shape, strides=None, total_size=None, offset=None)

Updates the shape of an array.

sizes()
start_offset

Allocation offset elements for manual alignment (pre-padding)

strides

For each dimension, the number of elements to skip in order to obtain the next element in that dimension.

to_json()
total_size

The total allocated size of the array. Can be used for padding.

used_symbols(all_symbols)

Returns a set of symbols that are used by this data descriptor.

Parameters:

all_symbols (bool) – Include not-strictly-free symbols that are used by this data descriptor, e.g., shape and size of a global array.

Return type:

Set[Union[Basic, SymExpr]]

Returns:

A set of symbols that are used by this data descriptor. NOTE: The results are symbolic rather than a set of strings.

validate()

Validate the correctness of this object. Raises an exception on error.

class dace.data.ArrayReference(*args, **kwargs)

Bases: Array, Reference

Data descriptor that acts as a dynamic reference of another array. See Reference for more information.

In order to enable data-centric analysis and optimizations, avoid using References as much as possible.

as_array()
properties()
validate()

Validate the correctness of this object. Raises an exception on error.

class dace.data.ArrayView(*args, **kwargs)

Bases: Array, View

Data descriptor that acts as a static reference (or view) of another array. Can be used to reshape or reinterpret existing data without copying it.

In the Python frontend, numpy.reshape and numpy.ndarray.view both generate ArrayViews.
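The view semantics can be illustrated with plain NumPy, since the frontend mirrors its behavior: reshaping creates a view onto the same buffer rather than a copy, so writes through the view are visible in the original array. This is a NumPy analogy only, not DaCe code:

```python
import numpy as np

a = np.arange(10)
b = a.reshape(2, 5)  # no copy: b views a's memory (in DaCe, an ArrayView)
b[0, 0] = 42         # write through the view...
print(a[0])          # ...is visible in the original: 42
```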

as_array()
properties()
validate()

Validate the correctness of this object. Raises an exception on error.

class dace.data.ContainerArray(*args, **kwargs)

Bases: Array

An array that may contain other data containers (e.g., Structures, other arrays).

classmethod from_json(json_obj, context=None)
properties()
stype

Object property of type Data

class dace.data.ContainerArrayReference(*args, **kwargs)

Bases: ContainerArray, Reference

Data descriptor that acts as a dynamic reference of another data container array. See Reference for more information.

In order to enable data-centric analysis and optimizations, avoid using References as much as possible.

as_array()
properties()
validate()

Validate the correctness of this object. Raises an exception on error.

class dace.data.ContainerView(*args, **kwargs)

Bases: ContainerArray, View

Data descriptor that acts as a view of another container array. Can be used to access nested container types without a copy.

as_array()
properties()
validate()

Validate the correctness of this object. Raises an exception on error.

class dace.data.Data(*args, **kwargs)

Bases: object

Data type descriptors that can be used as references to memory. Examples: Arrays, Streams, custom arrays (e.g., sparse matrices).

as_arg(with_types=True, for_call=False, name=None)

Returns a string for a C++ function signature (e.g., int *A).

as_python_arg(with_types=True, for_call=False, name=None)

Returns a string for a Data-Centric Python function signature (e.g., A: dace.int32[M]).

property ctype
debuginfo

Object property of type DebugInfo

dtype

Object property of type typeclass

property free_symbols: Set[Basic | SymExpr]

Returns a set of undefined symbols in this data descriptor.

is_equivalent(other)

Check for equivalence (shape and type) of two data descriptors.

lifetime

Data allocation span

location

Full storage location identifier (e.g., rank, GPU ID)

properties()
set_strides_from_layout(*dimensions, alignment=1, only_first_aligned=False)

Sets the absolute strides and total size of this data descriptor, according to the given dimension ordering and alignment.

Parameters:
  • dimensions (int) – A sequence of integers representing a permutation of the descriptor’s dimensions.

  • alignment (Union[Basic, SymExpr]) – Padding (in elements) at the end, ensuring stride is a multiple of this number. 1 (default) means no padding.

  • only_first_aligned (bool) – If True, only the first dimension is padded with alignment. Otherwise all dimensions are.

shape

Object property of type tuple

storage

Storage location

strides_from_layout(*dimensions, alignment=1, only_first_aligned=False)

Returns the absolute strides and total size of this data descriptor, according to the given dimension ordering and alignment.

Parameters:
  • dimensions (int) – A sequence of integers representing a permutation of the descriptor’s dimensions.

  • alignment (Union[Basic, SymExpr]) – Padding (in elements) at the end, ensuring stride is a multiple of this number. 1 (default) means no padding.

  • only_first_aligned (bool) – If True, only the first dimension is padded with alignment. Otherwise all dimensions are.

Return type:

Tuple[Tuple[Union[Basic, SymExpr]], Union[Basic, SymExpr]]

Returns:

A 2-tuple of (tuple of strides, total size).
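The stride computation described above can be sketched in plain Python. This is illustrative, not DaCe's implementation, and it assumes the dimension permutation lists the fastest-varying dimension first:

```python
# Illustrative stride computation from a dimension ordering
# (fastest-varying dimension first; an assumption of this sketch).
def strides_from_layout(shape, *dimensions):
    strides = [0] * len(shape)
    stride = 1
    for dim in dimensions:
        strides[dim] = stride   # each dimension's stride is the running product
        stride *= shape[dim]
    return tuple(strides), stride  # (strides, total size)

# C order for a 4x10 array (last dimension contiguous): ((10, 1), 40)
print(strides_from_layout((4, 10), 1, 0))
# FORTRAN order (first dimension contiguous): ((1, 4), 40)
print(strides_from_layout((4, 10), 0, 1))
```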

to_json()
property toplevel
transient

Object property of type bool

used_symbols(all_symbols)

Returns a set of symbols that are used by this data descriptor.

Parameters:

all_symbols (bool) – Include not-strictly-free symbols that are used by this data descriptor, e.g., shape and size of a global array.

Return type:

Set[Union[Basic, SymExpr]]

Returns:

A set of symbols that are used by this data descriptor. NOTE: The results are symbolic rather than a set of strings.

validate()

Validate the correctness of this object. Raises an exception on error.

property veclen
class dace.data.Reference

Bases: object

Data descriptor that acts as a dynamic reference of another data descriptor. It can be used just like a regular data descriptor, except that it could be set to an arbitrary container (or subset thereof) at runtime. To set a reference, connect another access node to it and use the “set” connector.

In order to enable data-centric analysis and optimizations, avoid using References as much as possible.

static view(viewed_container, debuginfo=None)

Create a new Reference of the specified data container.

Parameters:
  • viewed_container (Data) – The data container properties of this reference.

  • debuginfo – Specific source line information for this reference, if different from viewed_container.

Returns:

A new subclass of View with the appropriate viewed container properties, e.g., StructureReference for a Structure.

class dace.data.Scalar(*args, **kwargs)

Bases: Data

Data descriptor of a scalar value.

property alignment
allow_conflicts

Object property of type bool

as_arg(with_types=True, for_call=False, name=None)

Returns a string for a C++ function signature (e.g., int *A).

as_python_arg(with_types=True, for_call=False, name=None)

Returns a string for a Data-Centric Python function signature (e.g., A: dace.int32[M]).

clone()
covers_range(rng)
static from_json(json_obj, context=None)
is_equivalent(other)

Check for equivalence (shape and type) of two data descriptors.

property may_alias: bool
property offset
property optional: bool
property pool: bool
properties()
sizes()
property start_offset
property strides
property total_size
class dace.data.Stream(*args, **kwargs)

Bases: Data

Stream (or stream array) data descriptor.

as_arg(with_types=True, for_call=False, name=None)

Returns a string for a C++ function signature (e.g., int *A).

buffer_size

Size of internal buffer.

clone()
covers_range(rng)
property free_symbols

Returns a set of undefined symbols in this data descriptor.

classmethod from_json(json_obj, context=None)
is_equivalent(other)

Check for equivalence (shape and type) of two data descriptors.

is_stream_array()
property may_alias: bool
offset

Object property of type list

property optional: bool
properties()
size_string()
sizes()
property start_offset
property strides
to_json()
property total_size
used_symbols(all_symbols)

Returns a set of symbols that are used by this data descriptor.

Parameters:

all_symbols (bool) – Include not-strictly-free symbols that are used by this data descriptor, e.g., shape and size of a global array.

Return type:

Set[Union[Basic, SymExpr]]

Returns:

A set of symbols that are used by this data descriptor. NOTE: The results are symbolic rather than a set of strings.

class dace.data.Structure(*args, **kwargs)

Bases: Data

Base class for structures.

as_arg(with_types=True, for_call=False, name=None)

Returns a string for a C++ function signature (e.g., int *A).

clone()
property free_symbols: Set[Basic | SymExpr]

Returns a set of undefined symbols in this data descriptor.

static from_json(json_obj, context=None)
keys()
property may_alias: bool
members

Dictionary of structure members

name

Structure type name

property offset
property optional: bool
property pool: bool
properties()
property start_offset
property strides
property total_size
class dace.data.StructureReference(*args, **kwargs)

Bases: Structure, Reference

Data descriptor that acts as a dynamic reference of another Structure. See Reference for more information.

In order to enable data-centric analysis and optimizations, avoid using References as much as possible.

as_structure()
properties()
validate()

Validate the correctness of this object. Raises an exception on error.

class dace.data.StructureView(*args, **kwargs)

Bases: Structure, View

Data descriptor that acts as a view of another structure.

as_structure()
static from_json(json_obj, context=None)
properties()
validate()

Validate the correctness of this object. Raises an exception on error.

class dace.data.Tensor(*args, **kwargs)

Bases: Structure

Abstraction for Tensor storage format.

This abstraction is based on [https://doi.org/10.1145/3276493].

static from_json(json_obj, context=None)
index_ordering

Object property of type list

indices

Object property of type list

properties()
tensor_shape

Object property of type tuple

value_count

Object property of type SymbolicProperty

value_dtype

Object property of type typeclass

class dace.data.TensorAssemblyType(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Types of possible assembly strategies for the individual indices.

NoAssembly: Assembly is not possible as such.

Insert: index allows inserting elements at random (e.g., Dense).

Append: index allows appending to a list of existing coordinates. Depending on the append order, this affects whether the index is ordered or not; this can be changed by sorting the index after assembly.

Append = 3
Insert = 2
NoAssembly = 1
class dace.data.TensorIndex

Bases: ABC

Abstract base class for tensor index implementations.

abstract property assembly: TensorAssemblyType

What assembly type is supported by the index.

See TensorAssemblyType for reference.

abstract property branchless: bool

True if the level does not branch, False otherwise.

A level is considered branchless if no coordinate has a sibling (another coordinate with the same ancestor) and all coordinates in the parent level have a child; in other words, if there is a bijection between the coordinates in this level and the parent level. An example of this is the Singleton index level in the COO format.

abstract property compact: bool

True if the level is compact, False otherwise.

A level is compact if no two coordinates are separated by an unlabeled node that does not encode a coordinate. An example of a compact level can be found in CSR, while the DIA format's range and offset levels are not compact (they have entries that would correspond to entries outside the tensor's index range, e.g., column -1).

abstract fields(lvl, dummy_symbol)

Generates the fields needed for the index.

Return type:

Dict[str, Data]

Returns:

A dictionary of fields that need to be present in the struct.

classmethod from_json(json_obj, context=None)
abstract property full: bool

True if the level is full, False otherwise.

A level is considered full if it encompasses all valid coordinates along the corresponding tensor dimension.

abstract property iteration_type: TensorIterationTypes

Iteration capability supported by this index.

See TensorIterationTypes for reference.

abstract property locate: bool

True if the index supports locate (i.e., random access), False otherwise.

abstract property ordered: bool

True if the level is ordered, False otherwise.

A level is ordered when all coordinates that share the same ancestor are ordered by increasing value (e.g. in typical CSR).

to_json()
abstract property unique: bool

True if coordinates in the level are unique, False otherwise.

A level is considered unique if no collection of coordinates that share the same ancestor contains duplicates. In CSR this is True, in COO it is not.
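The uniqueness condition above can be sketched in plain Python. The helper and the example coordinate groups below are illustrative only (not part of DaCe): each inner list holds the coordinates that share one ancestor, e.g., the column indices of one row:

```python
# Illustrative check of the 'unique' property: no group of coordinates
# sharing the same ancestor may contain duplicates.
def level_is_unique(groups):
    return all(len(g) == len(set(g)) for g in groups)

csr_cols = [[0, 2], [1], [0, 1, 2]]  # CSR: per-row column indices are unique
coo_cols = [[0, 0, 2], [1]]          # COO: duplicate coordinates may occur

print(level_is_unique(csr_cols))  # True
print(level_is_unique(coo_cols))  # False
```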

class dace.data.TensorIndexCompressed(*args, **kwargs)

Bases: