embedding

api_attr

declarative programming (static graph)

paddle.fluid.layers.embedding(input, size, is_sparse=False, is_distributed=False, padding_idx=None, param_attr=None, dtype='float32')[source]

WARNING: This OP will be deprecated in a future release. It requires the last dimension of the input Tensor shape to be 1. It is recommended to use fluid.embedding instead.

This operator looks up the embedding vectors of the ids provided by input . It automatically constructs a 2D embedding matrix of shape (vocab_size, emb_size) based on size and dtype .

This OP requires the last dimension of the input Tensor shape to be 1. The shape of the output Tensor is generated by replacing the last dimension of the input Tensor shape with emb_size.

Note: Every id in input must satisfy \(0 \leq id < size[0]\) , otherwise the program will throw an exception and exit.

Case 1:

input is a Tensor. padding_idx = -1
    input.data = [[[1], [3]], [[2], [4]], [[4], [127]]]
    input.shape = [3, 2, 1]
Given size = [128, 16]
output is a Tensor:
    out.shape = [3, 2, 16]
    out.data = [[[0.129435295, 0.244512452, ..., 0.436322452],
                [0.345421456, 0.524563927, ..., 0.144534654]],

                [[0.345249859, 0.124939536, ..., 0.194353745],
                [0.945345345, 0.435394634, ..., 0.435345365]],

                [[0.945345345, 0.435394634, ..., 0.435345365],
                [0.0,         0.0,         ..., 0.0        ]]]  # padding data
Since the input padding_idx is less than 0, it is automatically converted to padding_idx = -1 + 128 = 127.
All-zero data is output wherever an id equals 127.
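The lookup in Case 1 can be sketched in plain NumPy (an illustration of the semantics, not Paddle's implementation; the random table stands in for the learned parameter):

```python
import numpy as np

# Table of shape size = [128, 16], as in Case 1.
vocab_size, emb_size = 128, 16
table = np.random.rand(vocab_size, emb_size).astype('float32')

# padding_idx = -1 is resolved against vocab_size, then that row reads as zeros.
padding_idx = -1
if padding_idx < 0:
    padding_idx += vocab_size   # -1 + 128 = 127
table[padding_idx] = 0.0

# input ids with shape [3, 2, 1]; the trailing size-1 dim is dropped and
# each id gathers one row, giving output shape [3, 2, 16].
ids = np.array([[[1], [3]], [[2], [4]], [[4], [127]]], dtype='int64')
out = table[ids.squeeze(-1)]

print(out.shape)         # (3, 2, 16)
print(out[2, 1].sum())   # 0.0 -- the padding row for id 127
```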

Case 2:

input is a LoDTensor with 1-level LoD. padding_idx = 0
    input.lod = [[2, 3]]
    input.data = [[1], [3], [2], [4], [0]]
    input.shape = [5, 1]
Given size = [128, 16]
output is a LoDTensor:
    out.lod = [[2, 3]]
    out.shape = [5, 16]
    out.data = [[0.129435295, 0.244512452, ..., 0.436322452],
                [0.345421456, 0.524563927, ..., 0.144534654],
                [0.345249859, 0.124939536, ..., 0.194353745],
                [0.945345345, 0.435394634, ..., 0.435345365],
                [0.0,         0.0,         ..., 0.0        ]]  # padding data
All-zero data is output wherever an id equals 0.
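Case 2 behaves the same way on the flattened rows; the LoD is simply carried through unchanged. A NumPy sketch (a plain list stands in for Paddle's LoDTensor metadata, which this illustration does not model):

```python
import numpy as np

vocab_size, emb_size = 128, 16
table = np.random.rand(vocab_size, emb_size).astype('float32')
table[0] = 0.0                        # padding_idx = 0

# input with shape [5, 1]; output has shape [5, 16] and the same LoD [[2, 3]].
ids = np.array([[1], [3], [2], [4], [0]], dtype='int64')
out = table[ids.squeeze(-1)]

# LoD [[2, 3]] partitions the 5 output rows into sequences of length 2 and 3.
lod = [2, 3]
offsets = np.cumsum([0] + lod)        # [0, 2, 5]
sequences = [out[s:e] for s, e in zip(offsets[:-1], offsets[1:])]
print([seq.shape for seq in sequences])   # [(2, 16), (3, 16)]
```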
Parameters
  • input (Variable) – A Tensor or LoDTensor with type int64, which contains the id information. The last dimension of the Tensor shape must be 1. Each input id should satisfy \(0 \leq id < size[0]\) .

  • size (tuple|list) – The shape of the lookup table parameter. It should have two elements, which indicate the size of the embedding dictionary and the size of each embedding vector, respectively.

  • is_sparse (bool) – The flag indicating whether to use sparse update. This parameter only affects the performance of the backward gradient update. Setting it to True is recommended because sparse update is faster. However, some optimizers do not support sparse update, such as AdadeltaOptimizer , AdamaxOptimizer , DecayedAdagradOptimizer , FtrlOptimizer , LambOptimizer and LarsMomentumOptimizer ; in these cases, is_sparse must be False. Default: False.

  • is_distributed (bool) – Whether to store the embedding matrix in a distributed manner. Only used in multi-machine distributed CPU training. Default: False.

  • padding_idx (int|long|None) – padding_idx must be in the interval [-vocab_size, vocab_size). If \(padding\_idx < 0\), it is automatically converted to \(vocab\_size + padding\_idx\) . The lookup outputs all-zero data whenever it encounters \(padding\_idx\) in id, and the padding data is not updated during training. If set to None, it has no effect on the output. Default: None.

  • param_attr (ParamAttr) – To specify the weight parameter property. Default: None, which means the default weight parameter property is used. See ParamAttr for usage details. In addition, user-defined or pre-trained word vectors can be loaded with the param_attr parameter: the local word vectors need to be converted to numpy format, their shape must be consistent with size , and NumpyArrayInitializer is then used to load them. See code example 2 for details.

  • dtype (str|core.VarDesc.VarType) – It refers to the data type of output Tensor. It must be float32 or float64. Default: float32.
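The padding_idx resolution described above can be sketched as a small helper (a hypothetical function for illustration, not part of the Paddle API):

```python
def normalize_padding_idx(padding_idx, vocab_size):
    """Resolve padding_idx against vocab_size as the docs describe.

    None disables padding; a negative index is wrapped around, so
    padding_idx = -1 with vocab_size = 128 becomes 127.
    """
    if padding_idx is None:
        return None
    if not -vocab_size <= padding_idx < vocab_size:
        raise ValueError("padding_idx must be in [-vocab_size, vocab_size)")
    return padding_idx if padding_idx >= 0 else vocab_size + padding_idx

print(normalize_padding_idx(-1, 128))    # 127
print(normalize_padding_idx(0, 128))     # 0
print(normalize_padding_idx(None, 128))  # None
```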

Returns

Embedding Tensor or LoDTensor mapped by input. The data type is the same as dtype .

Return type

Variable

Examples

import paddle.fluid as fluid
import numpy as np
data = fluid.data(name='x', shape=[None, 1], dtype='int64')

# example 1
emb_1 = fluid.layers.embedding(input=data, size=[128, 64])

# example 2: load custom or pre-trained word vectors
weight_data = np.random.random(size=(128, 100)).astype('float32')  # word vectors in numpy format, matching dtype
w_param_attrs = fluid.ParamAttr(
    name="emb_weight",
    learning_rate=0.5,
    initializer=fluid.initializer.NumpyArrayInitializer(weight_data),
    trainable=True)
emb_2 = fluid.layers.embedding(input=data, size=(128, 100), param_attr=w_param_attrs, dtype='float32')