pyg_lib.ops
- grouped_matmul(inputs: List[Tensor], others: List[Tensor], biases: Optional[List[Tensor]] = None) → List[Tensor] [source]
  Performs dense-dense matrix multiplication according to groups, utilizing dedicated kernels that effectively parallelize over groups.

  ```python
  inputs = [torch.randn(5, 16), torch.randn(3, 32)]
  others = [torch.randn(16, 32), torch.randn(32, 64)]
  outs = pyg_lib.ops.grouped_matmul(inputs, others)
  assert len(outs) == 2
  assert outs[0].size() == (5, 32)
  assert torch.allclose(outs[0], inputs[0] @ others[0])
  assert outs[1].size() == (3, 64)
  assert torch.allclose(outs[1], inputs[1] @ others[1])
  ```
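The optional biases argument is not exercised in the example above. Assuming each bias is broadcast-added to its group's product (an assumption read off the signature, not confirmed by the source), the unfused plain-PyTorch equivalent is a per-group loop:

```python
import torch

def grouped_matmul_reference(inputs, others, biases=None):
    # Unfused per-group loop; pyg_lib.ops.grouped_matmul replaces this
    # with dedicated kernels that parallelize across groups.
    # Assumption: biases[i] is broadcast-added to the i-th product.
    outs = []
    for i, (x, w) in enumerate(zip(inputs, others)):
        y = x @ w
        if biases is not None:
            y = y + biases[i]
        outs.append(y)
    return outs

inputs = [torch.randn(5, 16), torch.randn(3, 32)]
others = [torch.randn(16, 32), torch.randn(32, 64)]
biases = [torch.randn(32), torch.randn(64)]
outs = grouped_matmul_reference(inputs, others, biases)
assert outs[0].size() == (5, 32) and outs[1].size() == (3, 64)
```

Note that the shapes need not agree across groups; only the inner dimensions of each `(inputs[i], others[i])` pair must match.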
- segment_matmul(inputs: Tensor, ptr: Tensor, other: Tensor, bias: Optional[Tensor] = None) → Tensor [source]
  Performs dense-dense matrix multiplication according to segments along the first dimension of inputs as given by ptr, utilizing dedicated kernels that effectively parallelize over groups.

  ```python
  inputs = torch.randn(8, 16)
  ptr = torch.tensor([0, 5, 8])
  other = torch.randn(2, 16, 32)
  out = pyg_lib.ops.segment_matmul(inputs, ptr, other)
  assert out.size() == (8, 32)
  assert torch.allclose(out[0:5], inputs[0:5] @ other[0])
  assert torch.allclose(out[5:8], inputs[5:8] @ other[1])
  ```
  - Parameters:
    - inputs (Tensor) – The left operand 2D matrix of shape [N, K].
    - ptr (Tensor) – Compressed vector of shape [B + 1], holding the boundaries of segments. For best performance, given as a CPU tensor.
    - other (Tensor) – The right operand 3D tensor of shape [B, K, M].
    - bias (Optional[Tensor], default: None) – The bias term of shape [B, M].
  - Returns:
    Tensor – The 2D output matrix of shape [N, M].
- sampled_add(left: Tensor, right: Tensor, left_index: Optional[Tensor] = None, right_index: Optional[Tensor] = None) → Tensor [source]
  Performs a sampled addition of left and right according to the indices specified in left_index and right_index:
  \[\textrm{out} = \textrm{left}[\textrm{left\_index}] + \textrm{right}[\textrm{right\_index}]\]
  This operation fuses the indexing and addition operations together, making it more runtime- and memory-efficient.
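As a point of reference, the unfused semantics above can be reproduced in plain PyTorch. This is a sketch, not the library's implementation; it also assumes that an omitted index means the corresponding operand is used as-is, which the signature suggests but the source does not state:

```python
import torch

def sampled_add_reference(left, right, left_index=None, right_index=None):
    # Unfused reference: gather first, then add. pyg_lib.ops.sampled_add
    # fuses both steps into one kernel, avoiding materialization of the
    # intermediate gathered tensors.
    # Assumption: a missing index means the operand is taken as-is.
    a = left if left_index is None else left[left_index]
    b = right if right_index is None else right[right_index]
    return a + b

left = torch.randn(6, 8)
right = torch.randn(4, 8)
left_index = torch.tensor([0, 2, 4, 5])
out = sampled_add_reference(left, right, left_index)
assert out.size() == (4, 8)
```

The same pattern applies to sampled_sub, sampled_mul, and sampled_div below, with the operator swapped.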
- sampled_sub(left: Tensor, right: Tensor, left_index: Optional[Tensor] = None, right_index: Optional[Tensor] = None) → Tensor [source]
  Performs a sampled subtraction of right from left according to the indices specified in left_index and right_index:
  \[\textrm{out} = \textrm{left}[\textrm{left\_index}] - \textrm{right}[\textrm{right\_index}]\]
  This operation fuses the indexing and subtraction operations together, making it more runtime- and memory-efficient.
- sampled_mul(left: Tensor, right: Tensor, left_index: Optional[Tensor] = None, right_index: Optional[Tensor] = None) → Tensor [source]
  Performs a sampled multiplication of left and right according to the indices specified in left_index and right_index:
  \[\textrm{out} = \textrm{left}[\textrm{left\_index}] \cdot \textrm{right}[\textrm{right\_index}]\]
  This operation fuses the indexing and multiplication operations together, making it more runtime- and memory-efficient.
- sampled_div(left: Tensor, right: Tensor, left_index: Optional[Tensor] = None, right_index: Optional[Tensor] = None) → Tensor [source]
  Performs a sampled division of left by right according to the indices specified in left_index and right_index:
  \[\textrm{out} = \textrm{left}[\textrm{left\_index}] / \textrm{right}[\textrm{right\_index}]\]
  This operation fuses the indexing and division operations together, making it more runtime- and memory-efficient.
- index_sort(inputs: Tensor, max_value: Optional[int] = None) → Tuple[Tensor, Tensor] [source]
  Sorts the elements of the inputs tensor in ascending order. It is expected that inputs is one-dimensional and that it only contains positive integer values. If max_value is given, it can be used by the underlying algorithm for better performance.

  Note: This operation is optimized only for tensors associated with the CPU device.
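For illustration, torch.sort on a 1D tensor produces the same kind of (values, permutation) pair via a generic comparison sort; index_sort can exploit the bounded non-negative integer assumption (and the max_value hint) to sort faster on CPU. A plain-PyTorch sketch of the expected output contract:

```python
import torch

# torch.sort is the generic comparison-based counterpart: it returns
# the sorted values and the permutation that sorts the input, which is
# the same pair index_sort produces for 1D integer tensors.
inputs = torch.tensor([3, 0, 2, 7, 2])
values, perm = torch.sort(inputs)
assert torch.equal(values, torch.tensor([0, 2, 2, 3, 7]))
# The permutation maps original positions to sorted order:
assert torch.equal(inputs[perm], values)
```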
- softmax_csr(src: Tensor, ptr: Tensor, dim: int = 0) → Tensor [source]
  Computes a sparsely evaluated softmax. Given a value tensor src, this function first groups the values along the given dimension dim, based on the indices specified via ptr, and then proceeds to compute the softmax individually for each group.

  Examples

  ```python
  >>> src = torch.randn(4, 4)
  >>> ptr = torch.tensor([0, 4])
  >>> pyg_lib.ops.softmax_csr(src, ptr)
  tensor([[0.0157, 0.0984, 0.1250, 0.4523],
          [0.1453, 0.2591, 0.5907, 0.2410],
          [0.0598, 0.2923, 0.1206, 0.0921],
          [0.7792, 0.3502, 0.1638, 0.2145]])
  ```