paddle.utils.cpp_extension.setup ( **attr ) [source]

This interface is used to configure the process of compiling customized operators. It mainly covers how to compile the shared library, automatically generate the Python API, and install it into site-packages, so that customized operators can be used directly with an import statement.

It encapsulates the Python built-in setuptools.setup function, keeping the arguments and usage the same as the native interface, while hiding Paddle's internal framework concepts, such as the necessary compiling flags, header file include paths, and linking flags. It will also automatically search and validate the local environment and the versions of cc and nvcc , then compile customized operators supporting CPU or GPU devices according to the specified Extension type.

Moreover, ABI compatibility will be checked to ensure that the compiler version of cc on the local machine is compatible with the pre-installed Paddle wheel in the Python site-packages. For example, if Paddle with CUDA 10.1 is built with GCC 8.2, then the GCC version on the user's local machine should satisfy GCC >= 8.2. Otherwise, a fatal error will occur because of ABI incompatibility.


  1. Compiler ABI compatibility is forward compatible. On the Linux platform, we recommend using GCC 8.2 as the soft-link candidate of /usr/bin/cc .

  2. Use which cc to check the location of cc , and cc --version to check the linked GCC version, on Linux.

  3. Currently we support the Linux and Windows platforms. Support for macOS is in progress.
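The check in note 2 above can be scripted. The sketch below locates cc on the PATH and parses the version it reports; local_cc_version is a hypothetical helper for illustration, not part of the Paddle API:

```python
import re
import shutil
import subprocess

def local_cc_version():
    """Return the version string reported by the local `cc`, or None if absent."""
    cc = shutil.which("cc")  # note 2 above: locate `cc` on the PATH
    if cc is None:
        return None
    out = subprocess.check_output([cc, "--version"], text=True)
    # The first line usually embeds the version, e.g. "cc (Ubuntu 9.4.0-...) 9.4.0"
    match = re.search(r"(\d+)\.(\d+)\.(\d+)", out.splitlines()[0])
    return match.group(0) if match else None
```

Comparing the returned version against the GCC version Paddle was built with (e.g. 8.2 in the example above) mirrors the ABI check that setup performs automatically.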

Compared with the just-in-time load interface, it compiles only once, by executing python setup.py install . The customized operators API will then be available everywhere after importing it.

A simple example is as follows:


# Case 1: Compiling customized operators supporting CPU and GPU devices
from paddle.utils.cpp_extension import CUDAExtension, setup

setup(
    name='custom_op',  # name of package used by "import"
    ext_modules=CUDAExtension(
        # Support for compilation of multiple OPs; the file names here are the
        # sources of the relu and tanh operators used in the example below
        sources=['relu_op.cc', 'relu_op.cu', 'tanh_op.cc', 'tanh_op.cu']
    )
)

# Case 2: Compiling customized operators supporting only CPU device
from paddle.utils.cpp_extension import CppExtension, setup

setup(
    name='custom_op',  # name of package used by "import"
    ext_modules=CppExtension(
        sources=['relu_op.cc', 'tanh_op.cc']  # Support for compilation of multiple OPs
    )
)

Apply compilation and installation by executing python setup.py install under the source files directory. Then we can use the layer API as follows:

import paddle
from custom_op import relu, tanh

x = paddle.randn([4, 10], dtype='float32')
relu_out = relu(x)
tanh_out = tanh(x)
Parameters:

  • name (str) – Specify the name of the shared library file and of the installed Python package.

  • ext_modules (Extension) – Specify the Extension instance that includes the customized operator source files and compiling flags. To compile operators supporting only the CPU device, use CppExtension ; to compile operators supporting both CPU and GPU devices, use CUDAExtension .

  • include_dirs (list[str], optional) – Specify extra include directories to search for header files. The interface will automatically add site-package/paddle/include . Please add the corresponding directory paths if third-party header files are included. Default is None.

  • extra_compile_args (list[str] | dict, optional) – Specify extra compiling flags such as -O3 . If a list[str] is given, all flags are applied to both the cc and nvcc compilers. To apply flags to only the cc or only the nvcc compiler, use a dict of the form {'cxx': [...], 'nvcc': [...]} . Default is None.

  • **attr (dict, optional) – Specify other arguments same as setuptools.setup .
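Putting the optional arguments together, a setup.py might look like the sketch below. The source file names and the third-party include path are hypothetical placeholders, and the optional arguments are passed through the Extension here, as with setuptools.Extension :

```python
# setup.py -- sketch only; file names and the include path are placeholders
from paddle.utils.cpp_extension import CUDAExtension, setup

setup(
    name='custom_op',
    ext_modules=CUDAExtension(
        sources=['relu_op.cc', 'relu_op.cu'],          # operator sources (placeholders)
        include_dirs=['/path/to/third_party/include'], # extra header search path
        extra_compile_args={
            'cxx': ['-O3'],    # flags applied only by the cc compiler
            'nvcc': ['-O3'],   # flags applied only by the nvcc compiler
        },
    ),
)
```

The dict form of extra_compile_args is useful when a flag is valid for one compiler but not the other; a plain list would pass every flag to both cc and nvcc.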

Returns: None