rank_attention¶
- paddle.fluid.contrib.layers.nn.rank_attention ( input, rank_offset, rank_param_shape, rank_param_attr, max_rank=3, max_size=0 ) [source]
- 
         Rank Attention layer. This Op computes rank attention between input and rank_param, where rank_param defines the organization of the data. Note: it currently supports GPU devices only. This Op lives in contrib, which means it is not exposed as part of the stable public API. 
- Parameters
- 
           - input (Variable) – Tensor with data type float32 or float64.
           - rank_offset (Variable) – Tensor with data type int32.
           - rank_param_shape (list) – The shape of rank_param.
           - rank_param_attr (ParamAttr) – Attribute initializer of rank_param.
           - max_rank (int) – The max rank of input’s ranks. Default: 3.
           - max_size (int) – Default: 0.
- Returns
- 
           A Tensor with the same data type as input. 
- Return type
- 
           Variable 
 Examples 
