fused_transpose_split_quant

paddle.incubate.nn.functional.fused_transpose_split_quant(x, tokens_per_expert, pow_2_scales=False)

[source]
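The page gives only the signature, so the exact semantics of this fused kernel are not documented here. As a rough illustration of what a transpose-then-split-then-quantize pipeline of this shape typically does in MoE workloads, the following is an unfused NumPy sketch: it transposes each expert's token slice, then quantizes it with a single per-chunk scale (int8 stands in for the FP8 target of the real kernel). The splitting-by-`tokens_per_expert`, the per-chunk scaling, and the `pow_2_scales` rounding behavior are all assumptions inferred from the parameter names, not Paddle's actual implementation.

```python
import numpy as np

def transpose_split_quant_ref(x, tokens_per_expert, pow_2_scales=False):
    """Illustrative, unfused reference (NOT Paddle's kernel).

    x: float32 array of shape [num_tokens, hidden]
    tokens_per_expert: per-expert token counts summing to num_tokens
    pow_2_scales: if True, round each scale up to a power of two
                  (assumed meaning of the flag)
    """
    assert sum(tokens_per_expert) == x.shape[0]
    outs, scales = [], []
    start = 0
    for n in tokens_per_expert:
        # Transpose this expert's slice: [n, hidden] -> [hidden, n]
        chunk = x[start:start + n].T
        start += n
        # One symmetric scale per chunk (int8 stands in for FP8 here)
        amax = float(np.abs(chunk).max()) if chunk.size else 0.0
        scale = amax / 127.0 if amax > 0 else 1.0
        if pow_2_scales:
            scale = 2.0 ** np.ceil(np.log2(scale))
        q = np.clip(np.round(chunk / scale), -127, 127).astype(np.int8)
        outs.append(q)
        scales.append(scale)
    return outs, scales
```

For example, a `[4, 3]` input with `tokens_per_expert=[2, 2]` yields two `[3, 2]` int8 chunks plus their scales; the fused Paddle op presumably produces the analogous outputs in a single kernel launch.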