dace package

Subpackages

Submodules

dace.builtin_hooks module

A set of built-in hooks.

dace.builtin_hooks.cli_optimize_on_call(sdfg)

Calls a command-line interface for interactive SDFG transformations on every DaCe program call.

Parameters:

sdfg (SDFG) – The current SDFG to optimize.

dace.builtin_hooks.instrument(itype, filter, annotate_maps=True, annotate_tasklets=False, annotate_states=False, annotate_sdfgs=False)

Context manager that instruments every called DaCe program. Depending on the given instrumentation type and parameters, annotates the given elements on the SDFG. Filtering is possible with strings and wildcards, or a function (if given).

Example usage:

with dace.instrument(dace.InstrumentationType.GPU_Events, 
                     filter='*add??') as profiler:
    some_program(...)
    # ...
    other_program(...)

# Print instrumentation report for last call
print(profiler.reports[-1])
Parameters:
  • itype (InstrumentationType) – Instrumentation type to use.

  • filter (Union[str, Callable[[Any], bool], None]) – An optional string with * and ? wildcards, or function that receives one parameter, determining whether to instrument the element or not.

  • annotate_maps (bool) – If True, instruments scopes (e.g., map, consume) in the SDFGs.

  • annotate_tasklets (bool) – If True, instruments tasklets in the SDFGs.

  • annotate_states (bool) – If True, instruments states in the SDFGs.

  • annotate_sdfgs (bool) – If True, instruments whole SDFGs and sub-SDFGs.

dace.builtin_hooks.instrument_data(ditype, filter, restore_from=None, verbose=False)

Context manager that instruments (serializes/deserializes) the data of every called DaCe program. This can be used for reproducible runs and debugging. Depending on the given data instrumentation type and parameters, annotates the access nodes on the SDFG. Filtering is possible with strings and wildcards, or a function (if given). An optional instrumented data report can be given to load a specific set of data.

Example usage:

@dace
def sample(a: dace.float64, b: dace.float64):
    arr = a + b
    return arr + 1

with dace.instrument_data(dace.DataInstrumentationType.Save, filter='a??'):
    result_ab = sample(a, b)

# Optionally, get the serialized data containers
dreport = sdfg.get_instrumented_data()
assert dreport.keys() == {'arr'}  # dreport['arr'] is now the internal ``arr``

# Reload latest instrumented data (can be customized if ``restore_from`` is given)
with dace.instrument_data(dace.DataInstrumentationType.Restore, filter='a??'):
    result_cd = sample(c, d)  # where ``c, d`` are different from ``a, b``

assert numpy.allclose(result_ab, result_cd)
Parameters:
  • ditype (DataInstrumentationType) – Data instrumentation type to use.

  • filter (Union[str, Callable[[Any], bool], None]) – An optional string with * and ? wildcards, or function that receives one parameter, determining whether to instrument the access node or not.

  • restore_from (Union[str, InstrumentedDataReport, None]) – An optional parameter that specifies which instrumented data report to load data from. It could be a path to a folder, an InstrumentedDataReport object, or None to load the latest generated report.

  • verbose (bool) – If True, prints information about created and loaded instrumented data reports.

dace.builtin_hooks.profile(repetitions=100, warmup=0)

Context manager that enables profiling of each called DaCe program. If repetitions is greater than 1, the program is run multiple times and the average execution time is reported.

Example usage:

with dace.profile(repetitions=100) as profiler:
    some_program(...)
    # ...
    other_program(...)

# Print all execution times of the last called program (other_program)
print(profiler.times[-1])
Parameters:
  • repetitions (int) – The number of times to run each DaCe program.

  • warmup (int) – Number of additional repetitions to run the program without measuring time.

Note:

Running functions multiple times may affect the results of the program.

dace.config module

class dace.config.Config

Bases: object

Interface to the DaCe hierarchical configuration file.

static append(*key_hierarchy, value=None, autosave=False)

Appends to the current value of a given configuration entry and sets it.

Parameters:
  • key_hierarchy – A tuple of strings leading to the configuration entry. For example: (‘a’, ‘b’, ‘c’) would be configuration entry c which is in the path a->b.

  • value – The value to append.

  • autosave – If True, saves the configuration to the file after modification.

Returns:

Current configuration entry value.

Examples:

Config.append('compiler', 'cpu', 'args', value='-fPIC')
static cfg_filename()

Returns the current configuration file path.

default_filename = '.dace.conf'
static get(*key_hierarchy)

Returns the current value of a given configuration entry.

Parameters:

key_hierarchy – A tuple of strings leading to the configuration entry. For example: (‘a’, ‘b’, ‘c’) would be configuration entry c which is in the path a->b.

Returns:

Configuration entry value.

static get_bool(*key_hierarchy)

Returns the current value of a given boolean configuration entry. This specialization allows more string types to be converted to boolean, e.g., due to environment variable overrides.

Parameters:

key_hierarchy – A tuple of strings leading to the configuration entry. For example: (‘a’, ‘b’, ‘c’) would be configuration entry c which is in the path a->b.

Returns:

Configuration entry value (as a boolean).
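To illustrate the hierarchical key lookup that get and get_bool describe, here is a minimal sketch (not DaCe source code; the dictionary, helper names, and accepted truthy strings are assumptions for illustration). It walks nested dictionaries along a key path and coerces string values, such as those coming from environment variable overrides, into booleans:

```python
# Illustrative sketch of hierarchical configuration lookup.
config = {'compiler': {'cpu': {'args': '-O3'}}, 'profiling': 'true'}

def get(cfg, *key_hierarchy):
    """Walk nested dictionaries along the key path, e.g. ('a', 'b', 'c')."""
    entry = cfg
    for key in key_hierarchy:
        entry = entry[key]
    return entry

def get_bool(cfg, *key_hierarchy):
    """Coerce string values (e.g., from environment variable overrides)
    into booleans; the accepted truthy strings here are an assumption."""
    value = get(cfg, *key_hierarchy)
    if isinstance(value, str):
        return value.strip().lower() in ('1', 'true', 'yes', 'on')
    return bool(value)

print(get(config, 'compiler', 'cpu', 'args'))  # -O3
print(get_bool(config, 'profiling'))           # True
```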

static get_default(*key_hierarchy)

Returns the default value of a given configuration entry. Takes into account the current operating system.

Parameters:

key_hierarchy – A tuple of strings leading to the configuration entry. For example: (‘a’, ‘b’, ‘c’) would be configuration entry c which is in the path a->b.

Returns:

Default configuration value.

static get_metadata(*key_hierarchy)

Returns the configuration specification of a given entry from the schema.

Parameters:

key_hierarchy – A tuple of strings leading to the configuration entry. For example: (‘a’, ‘b’, ‘c’) would be configuration entry c which is in the path a->b.

Returns:

Configuration specification as a dictionary.

static initialize()

Initializes configuration.

Note:

This function runs automatically when the module is loaded.

static load(filename=None)

Loads a configuration from an existing file.

Parameters:

filename – The file to load. If unspecified, uses default configuration file.

static load_schema(filename=None)

Loads a configuration schema from an existing file.

Parameters:

filename – The file to load. If unspecified, uses default schema file.

static nondefaults()
Return type:

Dict[str, Any]

static save(path=None, all=False)

Saves the current configuration to a file.

Parameters:
  • path – The file to save to. If unspecified, uses default configuration file.

  • all (bool) – If False, only saves non-default configuration entries. Otherwise saves all entries.

static set(*key_hierarchy, value=None, autosave=False)

Sets the current value of a given configuration entry.

Parameters:
  • key_hierarchy – A tuple of strings leading to the configuration entry. For example: (‘a’, ‘b’, ‘c’) would be configuration entry c which is in the path a->b.

  • value – The value to set.

  • autosave – If True, saves the configuration to the file after modification.

Examples:

Config.set('profiling', value=True)
dace.config.set_temporary(*path, value)

Temporarily set configuration value at path to value, and reset it after the context manager exits.

Example:

print(Config.get("compiler", "build_type"))
with set_temporary("compiler", "build_type", value="Debug"):
    print(Config.get("compiler", "build_type"))
print(Config.get("compiler", "build_type"))
dace.config.temporary_config()

Creates a context where all configuration options changed will be reset when the context exits.

Example:

with temporary_config():
    Config.set("testing", "serialization", value=True)
    Config.set("optimizer", "autooptimize", value=True)
    foo()

dace.data module

class dace.data.Array(*args, **kwargs)

Bases: Data

Array data descriptor. This object represents a multi-dimensional data container in SDFGs that can be accessed and modified. The definition does not contain the actual array, but rather a description of how to construct it and how it should behave.

The array definition is flexible in terms of data allocation: it allows arbitrary multidimensional, potentially symbolic shapes (e.g., an array with size N+1 x M will have shape=(N+1, M)) of arbitrary data typeclasses (dtype). The physical data layout of the array is controlled by several properties:

  • The strides property determines the ordering and layout of the dimensions — it specifies how many elements in memory are skipped whenever one element in that dimension is advanced. For example, the contiguous dimension always has a stride of 1; a C-style MxN array will have strides (N, 1), whereas a FORTRAN-style array of the same size will have (1, M). Strides can be larger than the shape, which allows post-padding of the contents of each dimension.

  • The start_offset property is a number of elements to pad the beginning of the memory buffer with. This is used to ensure that a specific index is aligned as a form of pre-padding (that element may not necessarily be the first element, e.g., in the case of halo or “ghost cells” in stencils).

  • The total_size property determines how large the total allocation size is. Normally, it is the product of the shape elements, but if pre- or post-padding is involved it may be larger.

  • alignment provides alignment guarantees (in bytes) of the first element in the allocated array. This is used by allocators in the code generator to ensure certain addresses are expected to be aligned, e.g., for vectorization.

  • Lastly, a property called offset controls the logical access of the array, i.e., what would be the first element’s index after padding and alignment. This mimics a language feature prominent in scientific languages such as FORTRAN, where one could set an array to begin with 1, or any arbitrary index. By default this is set to zero.

To summarize with an example, a two-dimensional array with pre- and post-padding looks as follows:

[xxx][          |xx]
     [          |xx]
     [          |xx]
     [          |xx]
     ---------------
     [xxxxxxxxxxxxx]

shape = (4, 10)
strides = (12, 1)
start_offset = 3
total_size = 63   [= 3 + 12 * 5]
offset = (0, 0, 0)

Notice that the last padded row does not appear in strides, but is a consequence of total_size being larger.

Apart from memory layout, other properties of Array help the data-centric transformation infrastructure make decisions about the array. allow_conflicts states that warnings should not be printed if potentially conflicting accesses (e.g., data races) occur. may_alias inhibits transformations that may assume that this array does not overlap with other arrays in the same context (e.g., function).
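The layout properties above combine into simple address arithmetic. The following is an illustrative sketch (not DaCe source code) of how a flat buffer offset is computed for the padded 2D example above, with shape (4, 10), strides (12, 1), and start_offset 3:

```python
# Illustrative sketch: flat-buffer address arithmetic for the padded
# 2D array example (shape (4, 10), strides (12, 1), start_offset 3).
shape = (4, 10)
strides = (12, 1)
start_offset = 3

def flat_index(i, j):
    """Offset of element (i, j) in the allocated buffer."""
    return start_offset + i * strides[0] + j * strides[1]

assert flat_index(0, 0) == 3   # first logical element, after pre-padding
assert flat_index(1, 0) == 15  # advancing one row skips 12 elements (2 of them padding)
assert flat_index(3, 9) == 48  # last logical element; total_size is still 63 due to the padded row
```

Note that the last logical element sits at offset 48, while total_size is 63: the extra padded row at the bottom contributes to the allocation size but is never addressed through the strides.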

alignment

Allocation alignment in bytes (0 uses compiler-default)

allow_conflicts

If enabled, allows more than one memlet to write to the same memory location without conflict resolution.

as_arg(with_types=True, for_call=False, name=None)

Returns a string for a C++ function signature (e.g., int *A).

as_python_arg(with_types=True, for_call=False, name=None)

Returns a string for a Data-Centric Python function signature (e.g., A: dace.int32[M]).

clone()
covers_range(rng)
property free_symbols

Returns a set of undefined symbols in this data descriptor.

classmethod from_json(json_obj, context=None)
is_equivalent(other)

Check for equivalence (shape and type) of two data descriptors.

may_alias

This pointer may alias with other pointers in the same function

offset

Initial offset to translate all indices by.

optional

Specifies whether this array may have a value of None. If False, the array must not be None. If the option is not set, it is inferred by other properties and the OptionalArrayInference pass.

pool

Hint to the allocator that using a memory pool is preferred

properties()
set_shape(new_shape, strides=None, total_size=None, offset=None)

Updates the shape of an array.

sizes()
start_offset

Allocation offset elements for manual alignment (pre-padding)

strides

For each dimension, the number of elements to skip in order to obtain the next element in that dimension.

to_json()
total_size

The total allocated size of the array. Can be used for padding.

used_symbols(all_symbols)

Returns a set of symbols that are used by this data descriptor.

Parameters:

all_symbols (bool) – Include not-strictly-free symbols that are used by this data descriptor, e.g., shape and size of a global array.

Return type:

Set[Union[Basic, SymExpr]]

Returns:

A set of symbols that are used by this data descriptor. NOTE: The results are symbolic rather than a set of strings.

validate()

Validate the correctness of this object. Raises an exception on error.

class dace.data.Data(*args, **kwargs)

Bases: object

Data type descriptors that can be used as references to memory. Examples: Arrays, Streams, custom arrays (e.g., sparse matrices).

as_arg(with_types=True, for_call=False, name=None)

Returns a string for a C++ function signature (e.g., int *A).

as_python_arg(with_types=True, for_call=False, name=None)

Returns a string for a Data-Centric Python function signature (e.g., A: dace.int32[M]).

property ctype
debuginfo

Object property of type DebugInfo

dtype

Object property of type typeclass

property free_symbols: Set[Basic | SymExpr]

Returns a set of undefined symbols in this data descriptor.

is_equivalent(other)

Check for equivalence (shape and type) of two data descriptors.

lifetime

Data allocation span

location

Full storage location identifier (e.g., rank, GPU ID)

properties()
set_strides_from_layout(*dimensions, alignment=1, only_first_aligned=False)

Sets the absolute strides and total size of this data descriptor, according to the given dimension ordering and alignment.

Parameters:
  • dimensions (int) – A sequence of integers representing a permutation of the descriptor’s dimensions.

  • alignment (Union[Basic, SymExpr]) – Padding (in elements) at the end, ensuring stride is a multiple of this number. 1 (default) means no padding.

  • only_first_aligned (bool) – If True, only the first dimension is padded with alignment. Otherwise all dimensions are.

shape

Object property of type tuple

storage

Storage location

strides_from_layout(*dimensions, alignment=1, only_first_aligned=False)

Returns the absolute strides and total size of this data descriptor, according to the given dimension ordering and alignment.

Parameters:
  • dimensions (int) – A sequence of integers representing a permutation of the descriptor’s dimensions.

  • alignment (Union[Basic, SymExpr]) – Padding (in elements) at the end, ensuring stride is a multiple of this number. 1 (default) means no padding.

  • only_first_aligned (bool) – If True, only the first dimension is padded with alignment. Otherwise all dimensions are.

Return type:

Tuple[Tuple[Union[Basic, SymExpr]], Union[Basic, SymExpr]]

Returns:

A 2-tuple of (tuple of strides, total size).
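The stride computation described by these parameters can be sketched as follows. This is an illustrative reimplementation under the stated semantics (dimension ordering from contiguous to outermost, strides padded to multiples of alignment), not the actual DaCe code:

```python
# Illustrative sketch: strides and total size from a dimension ordering,
# with optional alignment padding (in elements) on each stride.
def strides_from_layout(shape, dimensions, alignment=1, only_first_aligned=False):
    strides = [0] * len(shape)
    stride = 1
    for i, dim in enumerate(dimensions):
        strides[dim] = stride
        extent = shape[dim]
        if i == 0 or not only_first_aligned:
            # Round the dimension's extent up to a multiple of ``alignment``
            # so the next stride is aligned.
            extent = -(-extent // alignment) * alignment
        stride *= extent
    return tuple(strides), stride

# C-style layout of a 4x10 array: the last dimension is contiguous.
print(strides_from_layout((4, 10), (1, 0)))  # ((10, 1), 40)
# FORTRAN-style layout: the first dimension is contiguous.
print(strides_from_layout((4, 10), (0, 1)))  # ((1, 4), 40)
# Padding only the contiguous dimension to 16 elements:
print(strides_from_layout((4, 10), (1, 0), alignment=16,
                          only_first_aligned=True))  # ((16, 1), 64)
```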

to_json()
property toplevel
transient

Object property of type bool

used_symbols(all_symbols)

Returns a set of symbols that are used by this data descriptor.

Parameters:

all_symbols (bool) – Include not-strictly-free symbols that are used by this data descriptor, e.g., shape and size of a global array.

Return type:

Set[Union[Basic, SymExpr]]

Returns:

A set of symbols that are used by this data descriptor. NOTE: The results are symbolic rather than a set of strings.

validate()

Validate the correctness of this object. Raises an exception on error.

property veclen
class dace.data.Reference(*args, **kwargs)

Bases: Array

Data descriptor that acts as a dynamic reference of another array. It can be used just like a regular array, except that it can be set to an arbitrary array or sub-array at runtime. To set a reference, connect another access node to it and use the “set” connector.

In order to enable data-centric analysis and optimizations, avoid using References as much as possible.

as_array()
properties()
validate()

Validate the correctness of this object. Raises an exception on error.

class dace.data.Scalar(*args, **kwargs)

Bases: Data

Data descriptor of a scalar value.

allow_conflicts

Object property of type bool

as_arg(with_types=True, for_call=False, name=None)

Returns a string for a C++ function signature (e.g., int *A).

as_python_arg(with_types=True, for_call=False, name=None)

Returns a string for a Data-Centric Python function signature (e.g., A: dace.int32[M]).

clone()
covers_range(rng)
static from_json(json_obj, context=None)
is_equivalent(other)

Check for equivalence (shape and type) of two data descriptors.

property may_alias: bool
property offset
property optional: bool
property pool: bool
properties()
sizes()
property start_offset
property strides
property total_size
class dace.data.Stream(*args, **kwargs)

Bases: Data

Stream (or stream array) data descriptor.

as_arg(with_types=True, for_call=False, name=None)

Returns a string for a C++ function signature (e.g., int *A).

buffer_size

Size of internal buffer.

clone()
covers_range(rng)
property free_symbols

Returns a set of undefined symbols in this data descriptor.

classmethod from_json(json_obj, context=None)
is_equivalent(other)

Check for equivalence (shape and type) of two data descriptors.

is_stream_array()
property may_alias: bool
offset

Object property of type list

property optional: bool
properties()
size_string()
sizes()
property start_offset
property strides
to_json()
property total_size
used_symbols(all_symbols)

Returns a set of symbols that are used by this data descriptor.

Parameters:

all_symbols (bool) – Include not-strictly-free symbols that are used by this data descriptor, e.g., shape and size of a global array.

Return type:

Set[Union[Basic, SymExpr]]

Returns:

A set of symbols that are used by this data descriptor. NOTE: The results are symbolic rather than a set of strings.

class dace.data.StructArray(*args, **kwargs)

Bases: Array

Array of Structures.

classmethod from_json(json_obj, context=None)
properties()
stype

Object property of type Data

class dace.data.Structure(*args, **kwargs)

Bases: Data

Base class for structures.

as_arg(with_types=True, for_call=False, name=None)

Returns a string for a C++ function signature (e.g., int *A).

property free_symbols: Set[Basic | SymExpr]

Returns a set of undefined symbols in this data descriptor.

static from_json(json_obj, context=None)
members

Dictionary of structure members

name

Structure type name

property offset
properties()
property start_offset
property strides
property total_size
class dace.data.StructureView(*args, **kwargs)

Bases: Structure

Data descriptor that acts as a reference (or view) of another structure.

static from_json(json_obj, context=None)
properties()
validate()

Validate the correctness of this object. Raises an exception on error.

class dace.data.View(*args, **kwargs)

Bases: Array

Data descriptor that acts as a reference (or view) of another array. Can be used to reshape or reinterpret existing data without copying it.

To use a View, it needs to be referenced in an access node that is directly connected to another access node. The rules for deciding which access node is viewed are:

  • If there is one edge (in/out) that leads (via memlet path) to an access node, and the other side (out/in) has a different number of edges.

  • If there is one incoming and one outgoing edge, and one leads to a code node, the one that leads to an access node is the viewed data.

  • If both sides lead to access nodes and one memlet’s data points to the view, it cannot point to the viewed node.

  • If both memlets’ data are the respective access nodes, the access node at the highest scope is the one that is viewed.

  • If both access nodes reside in the same scope, the input data is viewed.

Other cases are ambiguous and will fail SDFG validation.

In the Python frontend, numpy.reshape and numpy.ndarray.view both generate Views.

as_array()
properties()
validate()

Validate the correctness of this object. Raises an exception on error.

dace.data.create_datadescriptor(obj, no_custom_desc=False)

Creates a data descriptor from various types of objects.

See:

dace.data.Data

dace.data.find_new_name(name, existing_names)

Returns a name that matches the given name as a prefix, but does not already exist in the given existing name set. The behavior is typically to append an underscore followed by a unique (increasing) number. If the name does not already exist in the set, it is returned as-is.

Parameters:
  • name (str) – The given name to find.

  • existing_names (Sequence[str]) – The set of existing names.

Return type:

str

Returns:

A new name that is not in existing_names.
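The naming scheme described above can be sketched as follows (an illustrative reimplementation, not the actual DaCe code):

```python
# Illustrative sketch: append an underscore and an increasing number
# until the candidate name is not taken.
def find_new_name(name, existing_names):
    if name not in existing_names:
        return name  # already unique, returned as-is
    index = 0
    while f'{name}_{index}' in existing_names:
        index += 1
    return f'{name}_{index}'

print(find_new_name('A', {'B', 'C'}))    # A
print(find_new_name('A', {'A', 'A_0'}))  # A_1
```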

dace.data.make_array_from_descriptor(descriptor, original_array=None, symbols=None)

Creates an array that matches the given data descriptor, and optionally copies another array to it.

Parameters:
  • descriptor (Array) – The data descriptor to create the array from.

  • original_array (Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]], None]) – An optional array to fill the content of the return value with.

  • symbols (Optional[Dict[str, Any]]) – An optional symbol mapping between symbol names and their values. Used for creating arrays with symbolic sizes.

Return type:

Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]

Returns:

A NumPy-compatible array (CuPy for GPU storage) with the specified size and strides.

dace.data.make_reference_from_descriptor(descriptor, original_array, symbols=None)

Creates an array that matches the given data descriptor from the given pointer. Shares the memory with the argument (does not create a copy).

Parameters:
  • descriptor (Array) – The data descriptor to create the array from.

  • original_array (c_void_p) – The array whose memory the return value would be used in.

  • symbols (Optional[Dict[str, Any]]) – An optional symbol mapping between symbol names and their values. Used for referencing arrays with symbolic sizes.

Return type:

Union[_SupportsArray[dtype[Any]], _NestedSequence[_SupportsArray[dtype[Any]]], bool, int, float, complex, str, bytes, _NestedSequence[Union[bool, int, float, complex, str, bytes]]]

Returns:

A NumPy-compatible array (CuPy for GPU storage) with the specified size and strides, sharing memory with the pointer specified in original_array.

dace.dtypes module

A module that contains various DaCe type definitions.

class dace.dtypes.AllocationLifetime(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Options for allocation span (when to allocate/deallocate) of data.

External = 6

Allocated and managed outside the generated code

Global = 4

Allocated throughout the entire program (outer SDFG)

Persistent = 5

Allocated throughout multiple invocations (init/exit)

SDFG = 3

Allocated throughout the innermost SDFG (possibly nested)

Scope = 1

Allocated/Deallocated on innermost scope start/end

State = 2

Allocated throughout the containing state

Undefined = 7
register(*args)
class dace.dtypes.DataInstrumentationType(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Types of data container instrumentation providers.

No_Instrumentation = 1
Restore = 3
Save = 2
Undefined = 4
register(*args)
class dace.dtypes.DebugInfo(start_line, start_column=0, end_line=-1, end_column=0, filename=None)

Bases: object

Source code location identifier of a node/edge in an SDFG. Used for IDE and debugging purposes.

static from_json(json_obj, context=None)
to_json()
class dace.dtypes.DeviceType(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

An enumeration.

CPU = 1

Multi-core CPU

FPGA = 3

FPGA (Intel or Xilinx)

GPU = 2

GPU (AMD or NVIDIA)

Snitch = 4

Compute Cluster (RISC-V)

Undefined = 5
register(*args)
class dace.dtypes.InstrumentationType(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Types of instrumentation providers.

FPGA = 7
GPU_Events = 6
LIKWID_CPU = 4
LIKWID_GPU = 5
No_Instrumentation = 1
PAPI_Counters = 3
Timer = 2
Undefined = 8
register(*args)
class dace.dtypes.Language(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Available programming languages for SDFG tasklets.

CPP = 2
MLIR = 5
OpenCL = 3
Python = 1
SystemVerilog = 4
Undefined = 6
register(*args)
class dace.dtypes.OMPScheduleType(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Available OpenMP schedule types for Maps with CPU-Multicore schedule.

Default = 1

OpenMP library default

Dynamic = 3

Dynamic schedule

Guided = 4

Guided schedule

Static = 2

Static schedule

Undefined = 5
register(*args)
class dace.dtypes.ReductionType(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Reduction types natively supported by the SDFG compiler.

Bitwise_And = 7

Bitwise AND (&)

Bitwise_Or = 9

Bitwise OR (|)

Bitwise_Xor = 11

Bitwise XOR (^)

Custom = 1

Defined by an arbitrary lambda function

Div = 16

Division (only supported in OpenMP)

Exchange = 14

Set new value, return old value

Logical_And = 6

Logical AND (&&)

Logical_Or = 8

Logical OR (||)

Logical_Xor = 10

Logical XOR (!=)

Max = 3

Maximum value

Max_Location = 13

Maximum value and its location

Min = 2

Minimum value

Min_Location = 12

Minimum value and its location

Product = 5

Product

Sub = 15

Subtraction (only supported in OpenMP)

Sum = 4

Sum

Undefined = 17
class dace.dtypes.ScheduleType(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Available map schedule types in the SDFG.

CPU_Multicore = 4

OpenMP parallel for loop

CPU_Persistent = 5

OpenMP parallel region

Default = 1

Scope-default parallel schedule

FPGA_Device = 13
FPGA_Multi_Pumped = 16

Used for double pumping

GPU_Default = 8

Default scope schedule for GPU code. Specializes to schedule GPU_Device and storage GPU_Global during inference.

GPU_Device = 9

Kernel

GPU_Persistent = 12
GPU_ThreadBlock = 10

Thread-block code

GPU_ThreadBlock_Dynamic = 11

Allows rescheduling work within a block

MPI = 3

MPI processes

SVE_Map = 7

Arm SVE

Sequential = 2

Sequential code (single-thread)

Snitch = 14
Snitch_Multicore = 15
Undefined = 17
Unrolled = 6

Unrolled code

register(*args)
class dace.dtypes.StorageType(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Available data storage types in the SDFG.

CPU_Heap = 4

Host memory allocated on heap

CPU_Pinned = 3

Host memory that can be DMA-accessed from accelerators

CPU_ThreadLocal = 5

Thread-local host memory

Default = 1

Scope-default storage location

FPGA_Global = 8

Off-chip global memory (DRAM)

FPGA_Local = 9

On-chip memory (bulk storage)

FPGA_Registers = 10

On-chip memory (fully partitioned registers)

FPGA_ShiftRegister = 11

Only accessible at constant indices

GPU_Global = 6

GPU global memory

GPU_Shared = 7

On-GPU shared memory

Register = 2

Local data on registers, stack, or equivalent memory

SVE_Register = 12

SVE register

Snitch_L2 = 14

External memory

Snitch_SSR = 15

Memory accessed by SSR streamer

Snitch_TCDM = 13

Cluster-private memory

Undefined = 16
register(*args)
class dace.dtypes.TilingType(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

Available tiling types in a StripMining transformation.

CeilRange = 2
Normal = 1
NumberOfTiles = 3
Undefined = 4
register(*args)
class dace.dtypes.Typeclasses(value=<no_arg>, names=None, module=None, qualname=None, type=None, start=1, boundary=None)

Bases: AutoNumberEnum

An enumeration.

Undefined = 16
bool = 1
bool_ = 2
complex128 = 15
complex64