calc_gradient_helper
paddle.autograd.ir_backward.calc_gradient_helper(outputs, inputs, grad_outputs, no_grad_set) [source]
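No prose description accompanies the signature on this page. As a point of reference, the sketch below builds a small static-graph program and computes a gradient through the documented public API paddle.static.gradients, which exposes the same (targets/outputs, inputs, grad_outputs, no_grad_set) parameter convention; treating calc_gradient_helper as the internal routine behind this path is an assumption based on its module and name, not something this page states.

```python
import numpy as np
import paddle

# Assumption: calc_gradient_helper is the internal helper used by the
# documented gradient APIs below; only its import path and signature are
# taken from this page.
from paddle.autograd.ir_backward import calc_gradient_helper  # noqa: F401

paddle.enable_static()

main = paddle.static.Program()
startup = paddle.static.Program()
with paddle.static.program_guard(main, startup):
    x = paddle.static.data(name="x", shape=[2, 3], dtype="float32")
    x.stop_gradient = False          # data vars do not require grad by default
    y = paddle.sum(x * x)            # scalar target

    # Documented public route with the matching convention: target_gradients
    # defaults to None (a ones-filled gradient per target), and no_grad_set
    # defaults to None.
    (dx,) = paddle.static.gradients(targets=[y], inputs=[x])

# dx is a graph variable holding dy/dx; evaluate it with an Executor.
exe = paddle.static.Executor(paddle.CPUPlace())
exe.run(startup)
(out,) = exe.run(main, feed={"x": np.ones([2, 3], "float32")}, fetch_list=[dx])
print(out)  # dy/dx = 2 * x, so all entries are 2.0
```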