Installation Guide
Install via pip
Install on Linux via pip
Install on macOS via pip
Install on Windows via pip
Install via conda
Install on Linux via conda
Install on macOS via conda
Install on Windows via conda
Install via Docker
Install on Linux via Docker
Install on macOS via Docker
Docker Image List
Compile from Source Code
Compile on Linux from Source Code
Compile on macOS from Source Code
Compile on Windows from Source Code
NVIDIA PaddlePaddle Container Installation Guide
Appendix
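As a quick reference for the pip route above, a minimal install-and-verify sketch. This assumes the standard PyPI package names (`paddlepaddle` for the CPU build, `paddlepaddle-gpu` for CUDA builds) and uses the public `paddle.utils.run_check()` helper to confirm the installation:

```shell
# Install the CPU build of PaddlePaddle from PyPI
python -m pip install paddlepaddle

# For a CUDA-enabled machine, use the GPU build instead:
# python -m pip install paddlepaddle-gpu

# Verify the installation works
python -c "import paddle; paddle.utils.run_check()"
```

Per-platform details (supported Python versions, CUDA/cuDNN requirements) are covered in the platform-specific pages listed above.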
Guides
Model Development
Introduction to Tensor
More on Model Development
Model Visualization
Introduction to Models and Layers
Gradient Clipping Methods in Paddle
Introduction to Data Type Promotion
Dynamic to Static Graph
Supported Syntax
Error Debugging Experience
Deploy Inference Model
Model Compression
Distributed Training
Quick Start for Distributed Training
Improving Performance
Automatic Mixed Precision Training (AMP)
Auto-tuning in Full-process Training (Beta)
Model Conversion
Upgrade Guide
FLAGS
cuDNN
Data Processing
Debug
Check NaN/Inf Tool
Device Management
Distributed
Executor
Memory Management
Ascend NPU
Others
API Reference
paddle
abs
abs_
acos
acos_
acosh
add
add_n
addmm
addmm_
all
allclose
amax
amin
angle
any
arange
argmax
argmin
argsort
as_complex
as_real
as_strided
asin
asinh
assign
atan
atan2
atan_
atanh
atleast_1d
atleast_2d
atleast_3d
baddbmm
baddbmm_
batch
bernoulli
bernoulli_
bincount
binomial
bitwise_and
bitwise_and_
bitwise_invert
bitwise_invert_
bitwise_left_shift
bitwise_left_shift_
bitwise_not
bitwise_not_
bitwise_or
bitwise_or_
bitwise_right_shift
bitwise_right_shift_
bitwise_xor
bitwise_xor_
block_diag
bmm
broadcast_shape
broadcast_tensors
broadcast_to
bucketize
cartesian_prod
cast
cast_
cauchy_
cdist
ceil
check_shape
chunk
clip
clone
column_stack
combinations
complex
concat
conj
copysign
copysign_
cos
cos_
cosh
count_nonzero
CPUPlace
create_parameter
crop
cross
CUDAPinnedPlace
CUDAPlace
cummax
cummin
cumprod
cumprod_
cumsum
cumsum_
cumulative_trapezoid
DataParallel
deg2rad
diag
diag_embed
diagflat
diagonal
diagonal_scatter
diff
digamma
digamma_
disable_signal_handler
disable_static
dist
divide
divide_
dot
dsplit
dstack
dtype
einsum
empty
empty_like
enable_grad
enable_static
equal
equal_
equal_all
erf
erf_
erfinv
exp
expand
expand_as
expm1
expm1_
eye
finfo
flatten
flatten_
flip
floor
floor_divide
floor_divide_
floor_mod
floor_mod_
flops
fmax
fmin
frac
frac_
frexp
from_dlpack
full
full_like
gammainc
gammainc_
gammaincc
gammaincc_
gammaln
gammaln_
gather
gather_nd
gcd
gcd_
geometric_
get_cuda_rng_state
get_default_dtype
get_flags
get_rng_state
grad
greater_equal
greater_equal_
greater_than
greater_than_
heaviside
histogram
histogram_bin_edges
histogramdd
hsplit
hstack
hypot
hypot_
i0
i0_
i0e
i1
i1e
iinfo
imag
in_dynamic_mode
increment
index_add
index_add_
index_fill
index_fill_
index_put
index_put_
index_sample
index_select
inner
is_complex
is_empty
is_floating_point
is_grad_enabled
is_integer
is_tensor
isclose
isfinite
isin
isinf
isnan
isneginf
isposinf
isreal
kron
kthvalue
LazyGuard
lcm
lcm_
ldexp
ldexp_
lerp
less
less_
less_equal
less_equal_
less_than
less_than_
lgamma
lgamma_
linspace
load
log
log10
log10_
log1p
log2
log2_
log_
log_normal
log_normal_
logaddexp
logcumsumexp
logical_and