PairwiseDistance

class paddle.nn.PairwiseDistance ( p=2.0, epsilon=1e-06, keepdim=False, name=None ) [source]

This operator computes the pairwise distance between the corresponding vectors of two inputs. The distance is calculated by the p-order norm of their difference:

\[\Vert x - y \Vert _p = \left( \sum_{i=1}^n \vert x_i - y_i \vert ^ p \right) ^ {1/p}.\]
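The formula above can be sketched in plain NumPy (a hypothetical illustration, not Paddle's implementation): for each row, sum the absolute differences raised to the power p, then take the 1/p root.

```python
import numpy as np

def pairwise_distance(x, y, p=2.0):
    # (sum_i |x_i - y_i|**p) ** (1/p), computed per row.
    return np.sum(np.abs(x - y) ** p, axis=-1) ** (1.0 / p)

x = np.array([[1., 3.], [3., 5.]])
y = np.array([[5., 6.], [7., 8.]])
print(pairwise_distance(x, y))         # p=2: Euclidean distance, [5. 5.]
print(pairwise_distance(x, y, p=1.0))  # p=1: Manhattan distance, [7. 7.]
```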
Parameters
  • p (float, optional) – The order of the norm. The default value is 2.0.

  • epsilon (float, optional) – A small value added to avoid division by zero. The default value is 1e-6.

  • keepdim (bool, optional) – Whether to keep the reduced dimension in the output Tensor. The output has one dimension less than x - y unless keepdim is True. The default value is False.

  • name (str, optional) – Name for the operation (optional, default is None). For more information, please refer to Name.

Shape:
x: \([N, D]\), where D is the dimension of the vectors. Available dtypes are float32, float64.


y: \([N, D]\), with the same shape and dtype as x.
out: \([N]\), or \([N, 1]\) if keepdim is True.


The output has the same dtype as the input tensors.

Examples

import paddle

x = paddle.to_tensor([[1., 3.], [3., 5.]], dtype='float64')
y = paddle.to_tensor([[5., 6.], [7., 8.]], dtype='float64')
dist = paddle.nn.PairwiseDistance()
distance = dist(x, y)
print(distance.numpy()) # [5. 5.]
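The effect of keepdim can be mirrored in NumPy (a sketch under the assumption that the layer reduces over the last axis, as the Shape section describes): with keepdim=True the reduced axis is preserved, giving shape \([N, 1]\) instead of \([N]\).

```python
import numpy as np

x = np.array([[1., 3.], [3., 5.]])
y = np.array([[5., 6.], [7., 8.]])
diff = np.abs(x - y)

out = np.sum(diff ** 2, axis=-1) ** 0.5                          # shape (2,)
out_keepdim = np.sum(diff ** 2, axis=-1, keepdims=True) ** 0.5   # shape (2, 1)
print(out.shape, out_keepdim.shape)  # (2,) (2, 1)
```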
forward ( x, y )

Defines the computation performed at every call. Should be overridden by all subclasses.

Parameters
  • x (Tensor) – The first input Tensor with shape \([N, D]\).

  • y (Tensor) – The second input Tensor with the same shape and dtype as x.

extra_repr ( )

Extra representation of this layer. You can override it in your own layer to customize the printed representation.