DenseTable
class paddle.distributed.ps.the_one_ps.DenseTable(context, send_ctx)
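This page gives only the constructor signature; the meaning of the context and send_ctx arguments is not documented here. The sketch below is not an official example. It shows, under stated assumptions, the public fleet workflow in which the the_one_ps table classes (including DenseTable) are created internally by the parameter-server program builders; user code normally does not construct DenseTable directly.

# Minimal sketch (assumptions noted in comments): DenseTable is an internal
# table type of paddle.distributed.ps.the_one_ps. It is typically reached
# indirectly when a parameter-server training program is built through the
# public fleet API, roughly as follows.
import paddle
import paddle.distributed.fleet as fleet

paddle.enable_static()

# Parameter-server (non-collective) mode; a real run requires a PS launch
# environment, e.g. `python -m paddle.distributed.launch --servers ... --workers ...`.
fleet.init(is_collective=False)

strategy = fleet.DistributedStrategy()
strategy.a_sync = True  # asynchronous parameter-server training

# A trivial static-graph model.
x = paddle.static.data(name="x", shape=[None, 10], dtype="float32")
y = paddle.static.data(name="y", shape=[None, 1], dtype="float32")
pred = paddle.static.nn.fc(x, size=1)
loss = paddle.mean(paddle.nn.functional.square_error_cost(pred, y))

optimizer = paddle.optimizer.SGD(learning_rate=0.01)
optimizer = fleet.distributed_optimizer(optimizer, strategy)
optimizer.minimize(loss)
# During minimize(), the PS program builders assign parameters to server-side
# tables; mapping dense parameters to DenseTable(context, send_ctx) is an
# assumption based on the class name and is not specified on this page.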