PyTorch Latest Release to Paddle develop API Mapping Table

This document summarizes the mapping between APIs in the latest PyTorch release (currently v2.8.0) and APIs in the PaddlePaddle develop branch, together with an analysis of their differences. It is intended to help developers quickly transfer their PyTorch experience to PaddlePaddle and complete model development and tuning.

Contributing

Contributions are welcome. To keep the documentation consistent in format and easy to read, please strictly follow the API Mapping Format and Template guide when writing API mapping entries.

API Mapping Table Contents

Category Description
Identical parameters and API names: functionality and usage are identical; simply replace torch. with paddle.
Identical parameters but different API names: functionality and parameters are identical, but the API names differ
Only parameter names differ: functionality is identical, but some parameter names differ
paddle has more parameters: these APIs provide additional optional parameters in PaddlePaddle
Different parameter default values: functionality is identical, but some parameter default values differ
torch has more parameters: these APIs provide additional parameters in PyTorch
Different input parameter usage: these APIs handle their input parameters differently
Different input parameter types: these APIs require different input data types
Different return types: these APIs return values of different types or structures
Composite replacement implementation: the functionality has no single corresponding API in PaddlePaddle and must be implemented by combining multiple PaddlePaddle APIs
Removable: these PyTorch APIs can simply be deleted during migration
Missing functionality: these PyTorch APIs currently have no equivalent implementation in PaddlePaddle

Identical parameters and API names

Classification criterion: these APIs are completely identical in functionality and usage between PyTorch and PaddlePaddle; simply replace torch. with paddle.

Conversion example

# PyTorch usage
x = torch.eye(5)
torch.einsum('ii->i', x)
model = torch.nn.Softplus(beta=0.5, threshold=15)

# Paddle usage
x = paddle.eye(5)
paddle.einsum('ii->i', x)
model = paddle.nn.Softplus(beta=0.5, threshold=15)
No. PyTorch latest release Paddle develop Notes
1 torch.Tensor.bfloat16 paddle.Tensor.bfloat16 -
2 torch.Tensor.bool paddle.Tensor.bool -
3 torch.Tensor.byte paddle.Tensor.byte -
4 torch.Tensor.char paddle.Tensor.char -
5 torch.Tensor.double paddle.Tensor.double -
6 torch.Tensor.float paddle.Tensor.float -
7 torch.Tensor.half paddle.Tensor.half -
8 torch.Tensor.int paddle.Tensor.int -
9 torch.Tensor.long paddle.Tensor.long -
10 torch.Tensor.short paddle.Tensor.short -
11 torch.Tensor.cfloat paddle.Tensor.cfloat -
12 torch.Tensor.cdouble paddle.Tensor.cdouble -
13 torch.nn.init.calculate_gain paddle.nn.init.calculate_gain -
14 torch.nn.init.constant_ paddle.nn.init.constant_ -
15 torch.nn.init.dirac_ paddle.nn.init.dirac_ -
16 torch.nn.init.eye_ paddle.nn.init.eye_ -
17 torch.nn.init.kaiming_normal_ paddle.nn.init.kaiming_normal_ -
18 torch.nn.init.kaiming_uniform_ paddle.nn.init.kaiming_uniform_ -
19 torch.nn.init.normal_ paddle.nn.init.normal_ -
20 torch.nn.init.ones_ paddle.nn.init.ones_ -
21 torch.nn.init.orthogonal_ paddle.nn.init.orthogonal_ -
22 torch.nn.init.trunc_normal_ paddle.nn.init.trunc_normal_ -
23 torch.nn.init.uniform_ paddle.nn.init.uniform_ -
24 torch.nn.init.xavier_normal_ paddle.nn.init.xavier_normal_ -
25 torch.nn.init.xavier_uniform_ paddle.nn.init.xavier_uniform_ -
26 torch.nn.init.zeros_ paddle.nn.init.zeros_ -
27 torch.nn.Conv1d paddle.nn.Conv1d -
28 torch.nn.Conv2d paddle.nn.Conv2d -
29 torch.nn.Conv3d paddle.nn.Conv3d -
30 torch.nn.Embedding paddle.nn.Embedding -
31 torch.complex paddle.complex -
32 torch.polar paddle.polar -
33 torch.cat paddle.cat -
34 torch.stack paddle.stack -
35 torch.swapaxes paddle.swapaxes -
36 torch.swapdims paddle.swapdims -
37 torch.where paddle.where -
38 torch.clamp paddle.clamp -
39 torch.clip paddle.clip -
40 torch.cos paddle.cos -
41 torch.floor paddle.floor -
42 torch.log paddle.log -
43 torch.mul paddle.mul -
44 torch.multiply paddle.multiply -
45 torch.pow paddle.pow -
46 torch.rsqrt paddle.rsqrt -
47 torch.sign paddle.sign -
48 torch.sin paddle.sin -
49 torch.eq paddle.eq -
50 torch.gt paddle.gt -
51 torch.view_as_real paddle.view_as_real -
52 torch.view_as_complex paddle.view_as_complex -
53 torch.ger paddle.ger -
54 torch.Tensor.mul_ paddle.Tensor.mul_ -
55 torch.Tensor.swapaxes paddle.Tensor.swapaxes -
56 torch.Tensor.swapdims paddle.Tensor.swapdims -
57 torch.autograd.Function paddle.autograd.Function -
58 torch.take_along_dim paddle.take_along_dim -
59 torch.Tensor.take_along_dim paddle.Tensor.take_along_dim -
60 torch.special.logsumexp paddle.special.logsumexp -
61 torch.argwhere paddle.argwhere -
62 torch.concatenate paddle.concatenate -
63 torch.is_autocast_enabled paddle.is_autocast_enabled -
64 torch.get_autocast_gpu_dtype paddle.get_autocast_gpu_dtype -
65 torch.cumsum paddle.cumsum -
66 torch.diff paddle.diff -
67 torch.nn.functional.dropout1d paddle.nn.functional.dropout1d -
68 torch.nn.parameter.Parameter paddle.nn.parameter.Parameter -
69 torch.add paddle.add -
70 torch.div paddle.div -
71 torch.divide paddle.divide -
72 torch.true_divide paddle.true_divide -
73 torch.Tensor.add paddle.Tensor.add -
74 torch.Tensor.add_ paddle.Tensor.add_ -
75 torch.Tensor.div paddle.Tensor.div -
76 torch.Tensor.div_ paddle.Tensor.div_ -
77 torch.Tensor.divide paddle.Tensor.divide -
78 torch.Tensor.divide_ paddle.Tensor.divide_ -
79 torch.Tensor.true_divide paddle.Tensor.true_divide -
80 torch.range paddle.range -
81 torch.arange paddle.arange -
82 torch.randn paddle.randn -
83 torch.zeros paddle.zeros -
84 torch.ones paddle.ones -
85 torch.full paddle.full -
86 torch.empty paddle.empty -
87 torch.zeros_like paddle.zeros_like -
88 torch.ones_like paddle.ones_like -
89 torch.full_like paddle.full_like -
90 torch.empty_like paddle.empty_like -
91 torch.Tensor.new_zeros paddle.Tensor.new_zeros -
92 torch.Tensor.new_ones paddle.Tensor.new_ones -
93 torch.Tensor.new_full paddle.Tensor.new_full -
94 torch.Tensor.new_empty paddle.Tensor.new_empty -
95 torch.eye paddle.eye -
96 torch.permute paddle.permute -
97 torch.Tensor.permute paddle.Tensor.permute -
98 torch.repeat_interleave paddle.repeat_interleave -
99 torch.Tensor.repeat_interleave paddle.Tensor.repeat_interleave -
100 torch.Tensor.repeat paddle.Tensor.repeat -
101 torch.maximum paddle.maximum -
102 torch.minimum paddle.minimum -
103 torch.topk paddle.topk -
104 torch.sqrt paddle.sqrt -
105 torch.amin paddle.amin -
106 torch.amax paddle.amax -
107 torch.as_tensor paddle.as_tensor -
108 torch.tensor paddle.tensor -
109 torch.Tensor.copy_ paddle.Tensor.copy_ -
110 torch.Tensor.norm paddle.Tensor.norm -
111 torch.Tensor paddle.Tensor -
112 torch.FloatTensor paddle.FloatTensor -
113 torch.DoubleTensor paddle.DoubleTensor -
114 torch.HalfTensor paddle.HalfTensor -
115 torch.BFloat16Tensor paddle.BFloat16Tensor -
116 torch.ByteTensor paddle.ByteTensor -
117 torch.CharTensor paddle.CharTensor -
118 torch.ShortTensor paddle.ShortTensor -
119 torch.IntTensor paddle.IntTensor -
120 torch.LongTensor paddle.LongTensor -
121 torch.BoolTensor paddle.BoolTensor -
122 torch.norm paddle.norm -
123 torch.linalg.norm paddle.linalg.norm -
124 torch.multinomial paddle.multinomial -
125 torch.var paddle.var -
126 torch.rand_like paddle.rand_like -
127 torch.mean paddle.mean -
128 torch.Tensor.mean paddle.Tensor.mean -
129 torch.msort paddle.msort -
130 torch.Tensor.msort paddle.Tensor.msort -
131 torch.Tensor.ravel paddle.Tensor.ravel -
132 torch.ravel paddle.ravel -
133 torch.Tensor.scatter_add paddle.Tensor.scatter_add -
134 torch.scatter_add paddle.scatter_add -
135 torch.Tensor.scatter_add_ paddle.Tensor.scatter_add_ -
136 torch.Tensor.tril paddle.Tensor.tril -
137 torch.tril paddle.tril -
138 torch.Tensor.triu paddle.Tensor.triu -
139 torch.triu paddle.triu -
140 torch.bmm paddle.bmm -
141 torch.Tensor.bmm paddle.Tensor.bmm -
142 torch.nn.GELU paddle.nn.GELU -
143 torch.broadcast_shapes paddle.broadcast_shapes -
144 torch.Tensor.scatter_reduce paddle.Tensor.scatter_reduce -
145 torch.scatter_reduce paddle.scatter_reduce -
146 torch.nn.functional.silu paddle.nn.functional.silu -
147 torch.Tensor.softmax paddle.Tensor.softmax -
148 torch.special.softmax paddle.special.softmax -
149 torch.softmax paddle.softmax -
150 torch.Tensor.clamp paddle.Tensor.clamp -
151 torch.Tensor.itemsize paddle.Tensor.itemsize -
152 torch.get_default_dtype paddle.get_default_dtype -
153 torch.einsum paddle.einsum -
154 torch.nn.Identity paddle.nn.Identity -
155 torch.Tensor.ndim paddle.Tensor.ndim -
156 torch.Tensor.T paddle.Tensor.T -
157 torch.Tensor.abs paddle.Tensor.abs -
158 torch.Tensor.cos paddle.Tensor.cos -
159 torch.Tensor.detach paddle.Tensor.detach -
160 torch.Tensor.dim paddle.Tensor.dim -
161 torch.Tensor.fill_ paddle.Tensor.fill_ -
162 torch.Tensor.isnan paddle.Tensor.isnan -
163 torch.Tensor.item paddle.Tensor.item -
164 torch.Tensor.log paddle.Tensor.log -
165 torch.Tensor.masked_scatter paddle.Tensor.masked_scatter -
166 torch.Tensor.masked_fill_ paddle.Tensor.masked_fill_ -
167 torch.Tensor.masked_fill paddle.Tensor.masked_fill -
168 torch.Tensor.nonzero paddle.Tensor.nonzero -
169 torch.Tensor.normal_ paddle.Tensor.normal_ -
170 torch.Tensor.sigmoid paddle.Tensor.sigmoid -
171 torch.Tensor.sin paddle.Tensor.sin -
172 torch.Tensor.square paddle.Tensor.square -
173 torch.Tensor.tolist paddle.Tensor.tolist -
174 torch.Tensor.zero_ paddle.Tensor.zero_ -
175 torch.distributed.get_rank paddle.distributed.get_rank -
176 torch.distributed.get_world_size paddle.distributed.get_world_size -
177 torch.special.softmax paddle.special.softmax -
178 torch.Tensor.shape paddle.Tensor.shape -
179 torch.float32 paddle.float32 -
180 torch.long paddle.long -
181 torch.int32 paddle.int32 -
182 torch.bfloat16 paddle.bfloat16 -
183 torch.int64 paddle.int64 -
184 torch.bool paddle.bool -
185 torch.uint8 paddle.uint8 -
186 torch.Tensor.abs_ paddle.Tensor.abs_ -
187 torch.Tensor.acos paddle.Tensor.acos -
188 torch.Tensor.acos_ paddle.Tensor.acos_ -
189 torch.Tensor.acosh paddle.Tensor.acosh -
190 torch.Tensor.acosh_ paddle.Tensor.acosh_ -
191 torch.Tensor.angle paddle.Tensor.angle -
192 torch.Tensor.apply_ paddle.Tensor.apply_ -
193 torch.Tensor.asin paddle.Tensor.asin -
194 torch.Tensor.asin_ paddle.Tensor.asin_ -
195 torch.Tensor.asinh paddle.Tensor.asinh -
196 torch.Tensor.asinh_ paddle.Tensor.asinh_ -
197 torch.Tensor.atan paddle.Tensor.atan -
198 torch.Tensor.atan_ paddle.Tensor.atan_ -
199 torch.Tensor.atanh paddle.Tensor.atanh -
200 torch.Tensor.atanh_ paddle.Tensor.atanh_ -
201 torch.Tensor.bincount paddle.Tensor.bincount -
202 torch.Tensor.bitwise_not paddle.Tensor.bitwise_not -
203 torch.Tensor.bitwise_not_ paddle.Tensor.bitwise_not_ -
204 torch.Tensor.ceil paddle.Tensor.ceil -
205 torch.Tensor.ceil_ paddle.Tensor.ceil_ -
206 torch.Tensor.cholesky paddle.Tensor.cholesky -
207 torch.Tensor.cholesky_inverse paddle.Tensor.cholesky_inverse -
208 torch.Tensor.clip paddle.Tensor.clip -
209 torch.Tensor.clip_ paddle.Tensor.clip_ -
210 torch.Tensor.coalesce paddle.Tensor.coalesce -
211 torch.Tensor.conj paddle.Tensor.conj -
212 torch.Tensor.cos_ paddle.Tensor.cos_ -
213 torch.Tensor.cosh paddle.Tensor.cosh -
214 torch.Tensor.cosh_ paddle.Tensor.cosh_ -
215 torch.Tensor.cumprod paddle.Tensor.cumprod -
216 torch.Tensor.cumprod_ paddle.Tensor.cumprod_ -
217 torch.Tensor.data_ptr paddle.Tensor.data_ptr -
218 torch.Tensor.deg2rad paddle.Tensor.deg2rad -
219 torch.Tensor.dense_dim paddle.Tensor.dense_dim -
220 torch.Tensor.detach_ paddle.Tensor.detach_ -
221 torch.Tensor.diag_embed paddle.Tensor.diag_embed -
222 torch.Tensor.diagflat paddle.Tensor.diagflat -
223 torch.Tensor.digamma paddle.Tensor.digamma -
224 torch.Tensor.digamma_ paddle.Tensor.digamma_ -
225 torch.Tensor.dtype paddle.Tensor.dtype -
226 torch.Tensor.erf paddle.Tensor.erf -
227 torch.Tensor.erfinv paddle.Tensor.erfinv -
228 torch.Tensor.erfinv_ paddle.Tensor.erfinv_ -
229 torch.Tensor.exp paddle.Tensor.exp -
230 torch.Tensor.exp_ paddle.Tensor.exp_ -
231 torch.Tensor.expm1 paddle.Tensor.expm1 -
232 torch.Tensor.floor paddle.Tensor.floor -
233 torch.Tensor.floor_ paddle.Tensor.floor_ -
234 torch.Tensor.frac paddle.Tensor.frac -
235 torch.Tensor.frac_ paddle.Tensor.frac_ -
236 torch.Tensor.frexp paddle.Tensor.frexp -
237 torch.Tensor.grad paddle.Tensor.grad -
238 torch.Tensor.i0 paddle.Tensor.i0 -
239 torch.Tensor.i0_ paddle.Tensor.i0_ -
240 torch.Tensor.indices paddle.Tensor.indices -
241 torch.Tensor.inverse paddle.Tensor.inverse -
242 torch.Tensor.is_complex paddle.Tensor.is_complex -
243 torch.Tensor.is_floating_point paddle.Tensor.is_floating_point -
244 torch.Tensor.is_leaf paddle.Tensor.is_leaf -
245 torch.Tensor.isfinite paddle.Tensor.isfinite -
246 torch.Tensor.isinf paddle.Tensor.isinf -
247 torch.Tensor.isneginf paddle.Tensor.isneginf -
248 torch.Tensor.isposinf paddle.Tensor.isposinf -
249 torch.Tensor.isreal paddle.Tensor.isreal -
250 torch.Tensor.istft paddle.Tensor.istft -
251 torch.Tensor.lgamma paddle.Tensor.lgamma -
252 torch.Tensor.lgamma_ paddle.Tensor.lgamma_ -
253 torch.Tensor.log10 paddle.Tensor.log10 -
254 torch.Tensor.log10_ paddle.Tensor.log10_ -
255 torch.Tensor.log1p paddle.Tensor.log1p -
256 torch.Tensor.log1p_ paddle.Tensor.log1p_ -
257 torch.Tensor.log2 paddle.Tensor.log2 -
258 torch.Tensor.log2_ paddle.Tensor.log2_ -
259 torch.Tensor.log_ paddle.Tensor.log_ -
260 torch.Tensor.logit paddle.Tensor.logit -
261 torch.Tensor.logit_ paddle.Tensor.logit_ -
262 torch.Tensor.lu paddle.Tensor.lu -
263 torch.Tensor.mT paddle.Tensor.mT -
264 torch.Tensor.masked_scatter_ paddle.Tensor.masked_scatter_ -
265 torch.Tensor.masked_select paddle.Tensor.masked_select -
266 torch.Tensor.matrix_power paddle.Tensor.matrix_power -
267 torch.Tensor.mm paddle.Tensor.mm -
268 torch.Tensor.moveaxis paddle.Tensor.moveaxis -
269 torch.Tensor.mv paddle.Tensor.mv -
270 torch.Tensor.nan_to_num paddle.Tensor.nan_to_num -
271 torch.Tensor.nan_to_num_ paddle.Tensor.nan_to_num_ -
272 torch.Tensor.ndimension paddle.Tensor.ndimension -
273 torch.Tensor.neg paddle.Tensor.neg -
274 torch.Tensor.neg_ paddle.Tensor.neg_ -
275 torch.Tensor.pin_memory paddle.Tensor.pin_memory -
276 torch.Tensor.polygamma paddle.Tensor.polygamma -
277 torch.Tensor.polygamma_ paddle.Tensor.polygamma_ -
278 torch.Tensor.rad2deg paddle.Tensor.rad2deg -
279 torch.Tensor.reciprocal paddle.Tensor.reciprocal -
280 torch.Tensor.reciprocal_ paddle.Tensor.reciprocal_ -
281 torch.Tensor.register_hook paddle.Tensor.register_hook -
282 torch.Tensor.rsqrt paddle.Tensor.rsqrt -
283 torch.Tensor.rsqrt_ paddle.Tensor.rsqrt_ -
284 torch.Tensor.sgn paddle.Tensor.sgn -
285 torch.Tensor.sigmoid_ paddle.Tensor.sigmoid_ -
286 torch.Tensor.sign paddle.Tensor.sign -
287 torch.Tensor.signbit paddle.Tensor.signbit -
288 torch.Tensor.sin_ paddle.Tensor.sin_ -
289 torch.Tensor.sinc paddle.Tensor.sinc -
290 torch.Tensor.sinc_ paddle.Tensor.sinc_ -
291 torch.Tensor.sinh paddle.Tensor.sinh -
292 torch.Tensor.sinh_ paddle.Tensor.sinh_ -
293 torch.Tensor.sparse_dim paddle.Tensor.sparse_dim -
294 torch.Tensor.sqrt paddle.Tensor.sqrt -
295 torch.Tensor.sqrt_ paddle.Tensor.sqrt_ -
296 torch.Tensor.t paddle.Tensor.t -
297 torch.Tensor.t_ paddle.Tensor.t_ -
298 torch.Tensor.tan paddle.Tensor.tan -
299 torch.Tensor.tan_ paddle.Tensor.tan_ -
300 torch.Tensor.tanh paddle.Tensor.tanh -
301 torch.Tensor.tanh_ paddle.Tensor.tanh_ -
302 torch.Tensor.to_dense paddle.Tensor.to_dense -
303 torch.Tensor.tril_ paddle.Tensor.tril_ -
304 torch.Tensor.triu_ paddle.Tensor.triu_ -
305 torch.Tensor.trunc paddle.Tensor.trunc -
306 torch.Tensor.trunc_ paddle.Tensor.trunc_ -
307 torch.Tensor.values paddle.Tensor.values -
308 torch.version paddle.version -
309 torch.version.split paddle.version.split -
310 torch.diag_embed paddle.diag_embed -
311 torch.distributed.ReduceOp.MAX paddle.distributed.ReduceOp.MAX -
312 torch.distributed.ReduceOp.MIN paddle.distributed.ReduceOp.MIN -
313 torch.distributed.ReduceOp.SUM paddle.distributed.ReduceOp.SUM -
314 torch.distributed.batch_isend_irecv paddle.distributed.batch_isend_irecv -
315 torch.distributed.get_backend paddle.distributed.get_backend -
316 torch.distributed.is_available paddle.distributed.is_available -
317 torch.distributed.is_initialized paddle.distributed.is_initialized -
318 torch.e paddle.e -
319 torch.enable_grad paddle.enable_grad -
320 torch.inf paddle.inf -
321 torch.is_grad_enabled paddle.is_grad_enabled -
322 torch.nan paddle.nan -
323 torch.newaxis paddle.newaxis -
324 torch.nn.LogSigmoid paddle.nn.LogSigmoid -
325 torch.nn.Sigmoid paddle.nn.Sigmoid -
326 torch.nn.Softplus paddle.nn.Softplus -
327 torch.nn.Softsign paddle.nn.Softsign -
328 torch.nn.Tanh paddle.nn.Tanh -
329 torch.nn.Tanhshrink paddle.nn.Tanhshrink -
330 torch.nn.TransformerDecoder paddle.nn.TransformerDecoder -
331 torch.nn.TripletMarginWithDistanceLoss paddle.nn.TripletMarginWithDistanceLoss -
332 torch.nn.utils.parameters_to_vector paddle.nn.utils.parameters_to_vector -
333 torch.nn.utils.vector_to_parameters paddle.nn.utils.vector_to_parameters -
334 torch.pi paddle.pi -
335 torch.set_default_dtype paddle.set_default_dtype -
336 torch.t paddle.t -
337 torch.utils.cpp_extension.BuildExtension paddle.utils.cpp_extension.BuildExtension -
338 torch.utils.cpp_extension.BuildExtension.with_options paddle.utils.cpp_extension.BuildExtension.with_options -
339 torch.is_grad_enabled paddle.is_grad_enabled -
340 torch.nn.Conv2d paddle.nn.Conv2d -
341 torch.nn.init.calculate_gain paddle.nn.init.calculate_gain -
342 torch.nn.init.ones_ paddle.nn.init.ones_ -
343 torch.nn.init.uniform_ paddle.nn.init.uniform_ -
344 torch.nn.init.zeros_ paddle.nn.init.zeros_ -
345 torch.Tensor.div paddle.Tensor.div -
346 torch.Tensor.element_size paddle.Tensor.element_size -
347 torch.Tensor.is_floating_point paddle.Tensor.is_floating_point -
348 torch.Tensor.neg paddle.Tensor.neg -
349 torch.Tensor.pin_memory paddle.Tensor.pin_memory -
350 torch.Tensor.view_as paddle.Tensor.view_as -
351 torch.distributed.is_available paddle.distributed.is_available -
352 torch.distributed.is_initialized paddle.distributed.is_initialized -
353 torch.set_default_dtype paddle.set_default_dtype -
354 torch.dtype paddle.dtype -
355 torch.Tensor.data_ptr paddle.Tensor.data_ptr -
356 torch.matmul paddle.matmul -
357 torch.linalg.matmul paddle.linalg.matmul -
358 torch.multiply paddle.multiply -
359 torch.Tensor.matmul paddle.Tensor.matmul -
360 torch.Tensor.multiply paddle.Tensor.multiply -
361 torch.amax paddle.amax -
362 torch.amin paddle.amin -
363 torch.Tensor.amax paddle.Tensor.amax -
364 torch.Tensor.amin paddle.Tensor.amin -
365 torch.Tensor.log2 paddle.Tensor.log2 -
366 torch.log2 paddle.log2 -
367 torch.broadcast_to paddle.broadcast_to -
368 torch.nn.functional.embedding paddle.nn.functional.embedding -
369 torch.no_grad paddle.no_grad -
370 torch.ones_like paddle.ones_like -
371 torch.reshape paddle.reshape -
372 torch.take_along_dim paddle.take_along_dim -
373 torch.Tensor.bitwise_or_ paddle.Tensor.bitwise_or_ -
374 torch.Tensor.view paddle.Tensor.view -
375 torch.unique_consecutive paddle.unique_consecutive -
376 torch.eye paddle.eye -
377 torch.full_like paddle.full_like -
378 torch.Tensor.cumsum paddle.Tensor.cumsum -
379 torch.Tensor.expand paddle.Tensor.expand -
380 torch.clip paddle.clip -
381 torch.isfinite paddle.isfinite -
382 torch.isinf paddle.isinf -
383 torch.isnan paddle.isnan -
384 torch.flatten paddle.flatten -
385 torch.Tensor.flatten paddle.Tensor.flatten -
386 torch.roll paddle.roll -
387 torch.Tensor.sum paddle.Tensor.sum -
388 torch.sum paddle.sum -
389 torch.repeat_interleave paddle.repeat_interleave -
390 torch.Tensor.repeat_interleave paddle.Tensor.repeat_interleave -
391 torch.var paddle.var -
392 torch.prod paddle.prod -
393 torch.finfo paddle.finfo -
394 torch.is_complex paddle.is_complex -
395 torch.concat paddle.concat -
396 torch.empty_like paddle.empty_like -
397 torch.full paddle.full -
398 torch.nonzero paddle.nonzero -
399 torch.Tensor.pow paddle.Tensor.pow -
400 torch.Tensor.prod paddle.Tensor.prod -
401 torch.Tensor.reshape paddle.Tensor.reshape -
402 torch.zeros_like paddle.zeros_like -
403 torch.argsort paddle.argsort -
404 torch.Tensor.argsort paddle.Tensor.argsort -
405 torch.Tensor.squeeze paddle.Tensor.squeeze -
406 torch.chunk paddle.chunk -
407 torch.Tensor.chunk paddle.Tensor.chunk -
408 torch.any paddle.any -
409 torch.unbind paddle.unbind -
410 torch.Tensor.unbind paddle.Tensor.unbind -
411 torch.logsumexp paddle.logsumexp -
412 torch.Tensor.logsumexp paddle.Tensor.logsumexp -
413 torch.argmax paddle.argmax -
414 torch.Tensor.argmax paddle.Tensor.argmax -
415 torch.argmin paddle.argmin -
416 torch.Tensor.argmin paddle.Tensor.argmin -
417 torch.all paddle.all -
418 torch.Tensor.all paddle.Tensor.all -
419 torch.Tensor.any paddle.Tensor.any -
420 torch.logical_not paddle.logical_not -
421 torch.Tensor.logical_not paddle.Tensor.logical_not -
422 torch.logical_and paddle.logical_and -
423 torch.Tensor.logical_and paddle.Tensor.logical_and -
424 torch.logical_or paddle.logical_or -
425 torch.Tensor.logical_or paddle.Tensor.logical_or -
426 torch.logical_xor paddle.logical_xor -
427 torch.Tensor.logical_xor paddle.Tensor.logical_xor -
428 torch.index_select paddle.index_select -
429 torch.Tensor.index_select paddle.Tensor.index_select -
430 torch.dot paddle.dot -
431 torch.Tensor.dot paddle.Tensor.dot -
432 torch.bfloat16 paddle.bfloat16 -
433 torch.bool paddle.bool -
434 torch.complex128 paddle.complex128 -
435 torch.complex64 paddle.complex64 -
436 torch.float64 paddle.float64 -
437 torch.float16 paddle.float16 -
438 torch.float32 paddle.float32 -
439 torch.int16 paddle.int16 -
440 torch.int32 paddle.int32 -
441 torch.int64 paddle.int64 -
442 torch.int8 paddle.int8 -
443 torch.ravel paddle.ravel -
444 torch.Tensor.narrow paddle.Tensor.narrow -
445 torch.narrow paddle.narrow -
446 torch.Tensor.type_as paddle.Tensor.type_as -
447 torch.nn.Sequential paddle.nn.Sequential -
448 torch.transpose paddle.transpose -
449 torch.Tensor.transpose paddle.Tensor.transpose -
450 torch.unsqueeze paddle.unsqueeze -
451 torch.Tensor.unsqueeze paddle.Tensor.unsqueeze -
452 torch.sigmoid paddle.sigmoid -
453 torch.Tensor.topk paddle.Tensor.topk -
454 torch.outer paddle.outer -
455 torch.nn.functional.sigmoid paddle.nn.functional.sigmoid -
456 torch.Tensor.requires_grad paddle.Tensor.requires_grad -
457 torch.Tensor.data paddle.Tensor.data -
458 torch.is_tensor paddle.is_tensor -
459 torch.Tensor.element_size paddle.Tensor.element_size -
460 torch.Tensor.cuda paddle.Tensor.cuda -
461 torch.Tensor.view_as paddle.Tensor.view_as -
462 torch.Tensor.expand_as paddle.Tensor.expand_as -

Identical parameters but different API names

Classification criterion: these APIs are otherwise identical and differ only in name; simply replace the PyTorch API name with the corresponding Paddle API name.

Conversion example

# PyTorch usage
m = torch.nn.AdaptiveAvgPool1d(5)
y = x.to_sparse(1)

# Paddle usage
m = paddle.nn.AdaptiveAvgPool1D(5)
y = x.to_sparse_coo(1)
No. PyTorch latest release Paddle develop Notes
1 torch.Tensor.clamp paddle.Tensor.clip -
2 torch.Tensor.clamp_ paddle.Tensor.clip_ -
3 torch.Tensor.col_indices paddle.Tensor.cols -
4 torch.Tensor.conj_physical paddle.Tensor.conj -
5 torch.Tensor.crow_indices paddle.Tensor.crows -
6 torch.Tensor.det paddle.linalg.det -
7 torch.Tensor.device paddle.Tensor.place -
8 torch.Tensor.erf_ paddle.erf_ -
9 torch.Tensor.expm1_ paddle.expm1_ -
10 torch.Tensor.fix paddle.Tensor.trunc -
11 torch.Tensor.fix_ paddle.Tensor.trunc_ -
12 torch.Tensor.get_device paddle.Tensor.place.gpu_device_id -
13 torch.Tensor.is_inference paddle.Tensor.stop_gradient -
14 torch.Tensor.itemsize paddle.Tensor.element_size -
15 torch.Tensor.matrix_exp paddle.linalg.matrix_exp -
16 torch.Tensor.movedim paddle.Tensor.moveaxis -
17 torch.Tensor.mvlgamma paddle.Tensor.multigammaln -
18 torch.Tensor.mvlgamma_ paddle.Tensor.multigammaln_ -
19 torch.Tensor.negative paddle.Tensor.neg -
20 torch.Tensor.negative_ paddle.Tensor.neg_ -
21 torch.Tensor.nelement paddle.Tensor.size -
22 torch.Tensor.numel paddle.Tensor.size -
23 torch.Tensor.positive paddle.positive -
24 torch.Tensor.retain_grad paddle.Tensor.retain_grads -
25 torch.Tensor.sparse_mask paddle.sparse.mask_as -
26 torch.Tensor.square_ paddle.square_ -
27 torch.Tensor.to_sparse paddle.Tensor.to_sparse_coo -
28 torch.autograd.Function.forward paddle.autograd.PyLayer.forward -
29 torch.autograd.enable_grad paddle.enable_grad -
30 torch.autograd.function.FunctionCtx paddle.autograd.PyLayerContext -
31 torch.autograd.function.FunctionCtx.save_for_backward paddle.autograd.PyLayerContext.save_for_backward -
32 torch.autograd.function.FunctionCtx.set_materialize_grads paddle.autograd.PyLayerContext.set_materialize_grads -
33 torch.autograd.grad_mode.set_grad_enabled paddle.set_grad_enabled -
34 torch.autograd.graph.saved_tensors_hooks paddle.autograd.saved_tensors_hooks -
35 torch.backends.cuda.is_built paddle.device.is_compiled_with_cuda -
36 torch.backends.cudnn.version paddle.device.get_cudnn_version -
37 torch.cpu.current_device paddle.get_device -
38 torch.cuda.Event paddle.device.cuda.Event -
39 torch.cuda.StreamContext paddle.device.stream_guard -
40 torch.cuda.current_device paddle.device.get_device -
41 torch.cuda.device_count paddle.device.cuda.device_count -
42 torch.cuda.empty_cache paddle.device.cuda.empty_cache -
43 torch.cuda.get_device_capability paddle.device.cuda.get_device_capability -
44 torch.cuda.get_device_name paddle.device.cuda.get_device_name -
45 torch.cuda.is_bf16_supported paddle.amp.is_bfloat16_supported -
46 torch.cuda.is_initialized paddle.is_compiled_with_cuda -
47 torch.cuda.manual_seed_all paddle.seed -
48 torch.cuda.max_memory_allocated paddle.device.cuda.max_memory_allocated -
49 torch.cuda.max_memory_reserved paddle.device.cuda.max_memory_reserved -
50 torch.cuda.memory_allocated paddle.device.cuda.memory_allocated -
51 torch.cuda.memory_reserved paddle.device.cuda.memory_reserved -
52 torch.cuda.nvtx.range_pop paddle.framework.core.nvprof_nvtx_pop -
53 torch.cuda.reset_max_memory_allocated paddle.device.cuda.reset_max_memory_allocated -
54 torch.cuda.reset_max_memory_cached paddle.device.cuda.reset_max_memory_reserved -
55 torch.cuda.set_stream paddle.device.set_stream -
56 torch.cuda.stream paddle.device.stream_guard -
57 torch.distributed.ReduceOp.PRODUCT paddle.distributed.ReduceOp.PROD -
58 torch.distributed.is_nccl_available paddle.core.is_compiled_with_nccl -
59 torch.distributions.constraints.Constraint paddle.distribution.constraint.Constraint -
60 torch.distributions.distribution.Distribution.log_prob paddle.distribution.Distribution.log_prob -
61 torch.distributions.kl.kl_divergence paddle.distribution.kl_divergence -
62 torch.ge paddle.greater_equal -
63 torch.get_default_device paddle.device.get_device -
64 torch.is_inference paddle.Tensor.stop_gradient -
65 torch.manual_seed paddle.seed -
66 torch.nn.AdaptiveAvgPool1d paddle.nn.AdaptiveAvgPool1D -
67 torch.nn.HuberLoss paddle.nn.SmoothL1Loss -
68 torch.nn.Module.apply paddle.nn.Layer.apply -
69 torch.nn.Module.children paddle.nn.Layer.children -
70 torch.nn.Module.eval paddle.nn.Layer.eval -
71 torch.nn.Module.named_children paddle.nn.Layer.named_children -
72 torch.nn.Module.train paddle.nn.Layer.train -
73 torch.nn.init.calculate_gain paddle.nn.initializer.calculate_gain -
74 torch.numel paddle.Tensor.size -
75 torch.optim.Optimizer.add_param_group paddle.optimizer.Optimizer._add_param_group -
76 torch.optim.Optimizer.load_state_dict paddle.optimizer.Optimizer.load_state_dict -
77 torch.optim.Optimizer.state_dict paddle.optimizer.Optimizer.state_dict -
78 torch.utils.cpp_extension.CUDA_HOME paddle.utils.cpp_extension.cpp_extension.CUDA_HOME -
79 torch.utils.data.ChainDataset paddle.io.ChainDataset -
80 torch.utils.data.ConcatDataset paddle.io.ConcatDataset -
81 torch.utils.data.Dataset paddle.io.Dataset -
82 torch.utils.data.IterableDataset paddle.io.IterableDataset -
83 torch.utils.data.RandomSampler paddle.io.RandomSampler -
84 torch.utils.data.Sampler paddle.io.Sampler -
85 torch.utils.data.SequentialSampler paddle.io.SequenceSampler -
86 torch.utils.data.Subset paddle.io.Subset -
87 torch.utils.data.WeightedRandomSampler paddle.io.WeightedRandomSampler -
88 torch.utils.data.get_worker_info paddle.io.get_worker_info -
89 torch.utils.data.random_split paddle.io.random_split -
90 torchvision.ops.RoIPool paddle.vision.ops.RoIPool -
91 torchvision.transforms.Compose paddle.vision.transforms.Compose -
92 torchvision.transforms.InterpolationMode.BICUBIC 'bicubic' -
93 torchvision.transforms.InterpolationMode.BILINEAR 'bilinear' -
94 torchvision.transforms.InterpolationMode.BOX 'box' -
95 torchvision.transforms.InterpolationMode.HAMMING 'hamming' -
96 torchvision.transforms.InterpolationMode.LANCZOS 'lanczos' -
97 torchvision.transforms.InterpolationMode.NEAREST 'nearest' -
98 torchvision.transforms.InterpolationMode.NEAREST_EXACT 'nearest_exact' -
99 torchvision.transforms.functional.adjust_brightness paddle.vision.transforms.adjust_brightness -
100 torchvision.transforms.functional.adjust_contrast paddle.vision.transforms.adjust_contrast -
101 torchvision.transforms.functional.adjust_hue paddle.vision.transforms.adjust_hue -
102 torchvision.transforms.functional.center_crop paddle.vision.transforms.center_crop -
103 torchvision.transforms.functional.crop paddle.vision.transforms.crop -
104 torchvision.transforms.functional.erase paddle.vision.transforms.erase -
105 torchvision.transforms.functional.hflip paddle.vision.transforms.hflip -
106 torchvision.transforms.functional.pad paddle.vision.transforms.pad -
107 torchvision.transforms.functional.to_grayscale paddle.vision.transforms.to_grayscale -
108 torchvision.transforms.functional.vflip paddle.vision.transforms.vflip -

Only parameter names differ

Classification criterion: these APIs have identical functionality, but some parameter names differ.
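
Conversion example (a minimal sketch, assuming torch.nn.functional.softmax and paddle.nn.functional.softmax differ only in the dim / axis parameter name):

# PyTorch usage
x = torch.randn(4, 10)
y = torch.nn.functional.softmax(x, dim=-1)

# Paddle usage: only the keyword changes from dim to axis
x = paddle.randn([4, 10])
y = paddle.nn.functional.softmax(x, axis=-1)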

No. PyTorch latest release Paddle develop Notes
To be added...

paddle has more parameters

Classification criterion: these APIs provide additional optional parameters in PaddlePaddle.
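
Conversion example (a minimal sketch, assuming torch.nn.CrossEntropyLoss maps to paddle.nn.CrossEntropyLoss, whose extra optional parameters such as soft_label and use_softmax keep their defaults):

# PyTorch usage
loss_fn = torch.nn.CrossEntropyLoss(reduction='mean')

# Paddle usage: the additional optional parameters can be left at their defaults
loss_fn = paddle.nn.CrossEntropyLoss(reduction='mean')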

No. PyTorch latest release Paddle develop Notes
To be added...

Different parameter default values

Classification criterion: these APIs have identical functionality, but some parameter default values differ.
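
Conversion example (a minimal sketch, assuming torch.linalg.svd defaults to full_matrices=True while paddle.linalg.svd defaults to full_matrices=False):

# PyTorch usage: full_matrices defaults to True
A = torch.randn(4, 3)
U, S, Vh = torch.linalg.svd(A)

# Paddle usage: pass full_matrices=True explicitly to keep the same behavior
A = paddle.randn([4, 3])
U, S, Vh = paddle.linalg.svd(A, full_matrices=True)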

No. PyTorch latest release Paddle develop Notes
To be added...

torch has more parameters

Classification criterion: these APIs provide additional parameters in PyTorch.
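
Conversion example (a minimal sketch, assuming the extra inplace parameter of torch.nn.ReLU has no counterpart in paddle.nn.ReLU and can be dropped):

# PyTorch usage
m = torch.nn.ReLU(inplace=True)

# Paddle usage: the extra inplace argument is simply dropped
m = paddle.nn.ReLU()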

No. PyTorch latest release Paddle develop Notes
To be added...

Different input parameter usage

Classification criterion: these APIs handle their input parameters differently.
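
Conversion example (a minimal sketch, assuming the integer argument of torch.split is the size of each chunk while that of paddle.split is the number of chunks):

# PyTorch usage: 3 is the size of each chunk along dim 0
x = torch.randn(6, 4)
parts = torch.split(x, 3, dim=0)

# Paddle usage: the integer is the number of chunks, so 6 rows split into chunks of 3 means 2 chunks
x = paddle.randn([6, 4])
parts = paddle.split(x, 2, axis=0)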

No. PyTorch latest release Paddle develop Notes
To be added...

Different input parameter types

Classification criterion: these APIs require different input data types.
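
Conversion example (a minimal sketch, assuming torch.std expresses Bessel's correction as an integer correction while paddle.std uses a boolean unbiased flag):

# PyTorch usage: correction is an int
x = torch.randn(5, 3)
y = torch.std(x, dim=0, correction=0)

# Paddle usage: the same choice is expressed with the boolean unbiased flag
x = paddle.randn([5, 3])
y = paddle.std(x, axis=0, unbiased=False)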

No. PyTorch latest release Paddle develop Notes
To be added...

Different return types

Classification criterion: these APIs return values of different types or structures.
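
Conversion example (a minimal sketch, assuming torch.max with dim returns a (values, indices) tuple while paddle.max returns only the values):

# PyTorch usage: a named tuple of values and indices is returned
x = torch.randn(3, 4)
values, indices = torch.max(x, dim=1)

# Paddle usage: only the values are returned; obtain the indices with paddle.argmax
x = paddle.randn([3, 4])
values = paddle.max(x, axis=1)
indices = paddle.argmax(x, axis=1)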

No. PyTorch latest release Paddle develop Notes
To be added...

Composite replacement implementation

Classification criterion: the functionality has no single corresponding API in PaddlePaddle and must be implemented by combining multiple PaddlePaddle APIs.
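
Conversion example (a minimal sketch, assuming torch.aminmax has no single Paddle counterpart and is rewritten with paddle.amin and paddle.amax):

# PyTorch usage
x = torch.randn(3, 4)
min_val, max_val = torch.aminmax(x)

# Paddle usage: combine two APIs to reproduce the behavior
x = paddle.randn([3, 4])
min_val, max_val = paddle.amin(x), paddle.amax(x)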

No. PyTorch latest release Paddle develop Notes
To be added...

Removable

Classification criterion: these PyTorch APIs can simply be deleted during migration.
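
Conversion example (a minimal sketch, assuming torch.backends.cudnn.benchmark is a setting of this kind; model and x are defined elsewhere):

# PyTorch usage
torch.backends.cudnn.benchmark = True
y = model(x)

# Paddle usage: the line is simply removed, with no replacement call
y = model(x)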

No. PyTorch latest release Paddle develop Notes
To be added...

Missing functionality

Classification criterion: these PyTorch APIs currently have no equivalent implementation in PaddlePaddle.

No. PyTorch latest release Paddle develop Notes
To be added...