
Server-side Deployment

PaddlePaddle provides several ways to deploy and serve trained models on the server side. The guides below cover installing or compiling the native C++ inference library and using its API; a minimal usage sketch follows the list.

  • Install and Compile C++ Inference Library on Linux
  • Install and Compile C++ Inference Library on Windows
  • Introduction to C++ Inference API
  • Performance Profiling for TensorRT Library
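
For orientation, here is a minimal sketch of running inference from C++ with the Paddle Inference library, assuming the Paddle 2.x paddle_infer API. The model paths, the 1x3x224x224 input shape, and the optional GPU settings are illustrative placeholders, not values from this page; see "Introduction to C++ Inference API" for the authoritative reference.

    #include <vector>
    #include "paddle_inference_api.h"  // header shipped with the C++ inference library

    int main() {
      // Load an exported inference model; the file paths are placeholders.
      paddle_infer::Config config("model/inference.pdmodel",
                                  "model/inference.pdiparams");
      config.EnableMemoryOptim();       // optional: reuse memory between operators
      // config.EnableUseGpu(100, 0);   // optional: GPU 0 with a 100 MB initial pool

      auto predictor = paddle_infer::CreatePredictor(config);

      // Feed a dummy input; the 1x3x224x224 shape is only an example.
      auto input_names = predictor->GetInputNames();
      auto input = predictor->GetInputHandle(input_names[0]);
      std::vector<float> in_data(1 * 3 * 224 * 224, 0.f);
      input->Reshape({1, 3, 224, 224});
      input->CopyFromCpu(in_data.data());

      predictor->Run();

      // Copy the first output back to host memory.
      auto output_names = predictor->GetOutputNames();
      auto output = predictor->GetOutputHandle(output_names[0]);
      int numel = 1;
      for (int d : output->shape()) numel *= d;
      std::vector<float> out_data(numel);
      output->CopyToCpu(out_data.data());
      return 0;
    }

Building such a program against the prebuilt inference library, rather than a full Paddle source build, is what the Linux and Windows installation guides listed above describe.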