pyg_lib.ops

grouped_matmul(inputs: List[Tensor], others: List[Tensor], biases: Optional[List[Tensor]] = None) → List[Tensor]

Performs dense-dense matrix multiplication according to groups, utilizing dedicated kernels that effectively parallelize over groups.

import torch
import pyg_lib

inputs = [torch.randn(5, 16), torch.randn(3, 32)]
others = [torch.randn(16, 32), torch.randn(32, 64)]

outs = pyg_lib.ops.grouped_matmul(inputs, others)
assert len(outs) == 2
assert outs[0].size() == (5, 32)
assert torch.allclose(outs[0], inputs[0] @ others[0])
assert outs[1].size() == (3, 64)
assert torch.allclose(outs[1], inputs[1] @ others[1])
Parameters:
  • inputs (List[Tensor]) – List of left operand 2D matrices of shapes [N_i, K_i].

  • others (List[Tensor]) – List of right operand 2D matrices of shapes [K_i, M_i].

  • biases (Optional[List[Tensor]], default: None) – Optional bias terms to add to each output matrix.

Returns:

List[Tensor] – List of 2D output matrices of shapes [N_i, M_i].

segment_matmul(inputs: Tensor, ptr: Tensor, other: Tensor, bias: Optional[Tensor] = None) → Tensor

Performs dense-dense matrix multiplication according to segments along the first dimension of inputs as given by ptr, utilizing dedicated kernels that effectively parallelize over groups.

import torch
import pyg_lib

inputs = torch.randn(8, 16)
ptr = torch.tensor([0, 5, 8])
other = torch.randn(2, 16, 32)

out = pyg_lib.ops.segment_matmul(inputs, ptr, other)
assert out.size() == (8, 32)
assert torch.allclose(out[0:5], inputs[0:5] @ other[0])
assert torch.allclose(out[5:8], inputs[5:8] @ other[1])
Parameters:
  • inputs (Tensor) – The left operand 2D matrix of shape [N, K].

  • ptr (Tensor) – Compressed vector of shape [B + 1], holding the boundaries of segments. For best performance, given as a CPU tensor.

  • other (Tensor) – The right operand 3D tensor of shape [B, K, M].

  • bias (Optional[Tensor], default: None) – The bias term of shape [B, M].

Returns:

Tensor – The 2D output matrix of shape [N, M].

sampled_add(left: Tensor, right: Tensor, left_index: Optional[Tensor] = None, right_index: Optional[Tensor] = None) → Tensor

Performs a sampled addition of left and right according to the indices specified in left_index and right_index.

\[\textrm{out} = \textrm{left}[\textrm{left\_index}] + \textrm{right}[\textrm{right\_index}]\]

This operation fuses the indexing and addition operations into a single kernel, making it more runtime- and memory-efficient.

Parameters:
  • left (Tensor) – The left tensor.

  • right (Tensor) – The right tensor.

  • left_index (Optional[Tensor], default: None) – The indices of the values to sample from the left tensor.

  • right_index (Optional[Tensor], default: None) – The indices of the values to sample from the right tensor.

Returns:

Tensor – The output tensor.

sampled_sub(left: Tensor, right: Tensor, left_index: Optional[Tensor] = None, right_index: Optional[Tensor] = None) → Tensor

Performs a sampled subtraction of left by right according to the indices specified in left_index and right_index.

\[\textrm{out} = \textrm{left}[\textrm{left\_index}] - \textrm{right}[\textrm{right\_index}]\]

This operation fuses the indexing and subtraction operations into a single kernel, making it more runtime- and memory-efficient.

Parameters:
  • left (Tensor) – The left tensor.

  • right (Tensor) – The right tensor.

  • left_index (Optional[Tensor], default: None) – The indices of the values to sample from the left tensor.

  • right_index (Optional[Tensor], default: None) – The indices of the values to sample from the right tensor.

Returns:

Tensor – The output tensor.

sampled_mul(left: Tensor, right: Tensor, left_index: Optional[Tensor] = None, right_index: Optional[Tensor] = None) → Tensor

Performs a sampled multiplication of left and right according to the indices specified in left_index and right_index.

\[\textrm{out} = \textrm{left}[\textrm{left\_index}] \cdot \textrm{right}[\textrm{right\_index}]\]

This operation fuses the indexing and multiplication operations into a single kernel, making it more runtime- and memory-efficient.

Parameters:
  • left (Tensor) – The left tensor.

  • right (Tensor) – The right tensor.

  • left_index (Optional[Tensor], default: None) – The indices of the values to sample from the left tensor.

  • right_index (Optional[Tensor], default: None) – The indices of the values to sample from the right tensor.

Returns:

Tensor – The output tensor.

sampled_div(left: Tensor, right: Tensor, left_index: Optional[Tensor] = None, right_index: Optional[Tensor] = None) → Tensor

Performs a sampled division of left by right according to the indices specified in left_index and right_index.

\[\textrm{out} = \textrm{left}[\textrm{left\_index}] / \textrm{right}[\textrm{right\_index}]\]

This operation fuses the indexing and division operations into a single kernel, making it more runtime- and memory-efficient.

Parameters:
  • left (Tensor) – The left tensor.

  • right (Tensor) – The right tensor.

  • left_index (Optional[Tensor], default: None) – The indices of the values to sample from the left tensor.

  • right_index (Optional[Tensor], default: None) – The indices of the values to sample from the right tensor.

Returns:

Tensor – The output tensor.

index_sort(inputs: Tensor, max_value: Optional[int] = None) → Tuple[Tensor, Tensor]

Sorts the elements of the inputs tensor in ascending order. It is expected that inputs is one-dimensional and that it only contains positive integer values. If max_value is given, it can be used by the underlying algorithm for better performance.

Note

This operation is optimized only for tensors associated with the CPU device.

Parameters:
  • inputs (Tensor) – A vector with positive integer values.

  • max_value (Optional[int], default: None) – The maximum value stored inside inputs. This value can be an estimation, but needs to be greater than or equal to the real maximum.

Returns:

Tuple[Tensor, Tensor] – A tuple containing the sorted values and the indices of the sorted elements in the original input tensor.

softmax_csr(src: Tensor, ptr: Tensor, dim: int = 0) → Tensor

Computes a sparsely evaluated softmax. Given a value tensor src, this function first groups the values along the given dimension dim, based on the indices specified via ptr, and then proceeds to compute the softmax individually for each group.

Examples

>>> src = torch.randn(4, 4)
>>> ptr = torch.tensor([0, 4])
>>> softmax_csr(src, ptr)
tensor([[0.0157, 0.0984, 0.1250, 0.4523],
        [0.1453, 0.2591, 0.5907, 0.2410],
        [0.0598, 0.2923, 0.1206, 0.0921],
        [0.7792, 0.3502, 0.1638, 0.2145]])
Parameters:
  • src (Tensor) – The source tensor.

  • ptr (Tensor) – Groups defined by CSR representation.

  • dim (int, default: 0) – The dimension in which to normalize.

Return type:

Tensor