Compiling PyTorch from Source on Windows


Tags: PyTorch, source build, Windows, GPU

Contents:
Build tools and third-party libraries
    1.1 Visual Studio 2019
    1.2 CUDA Toolkit
    1.3 cuDNN
    1.4 MKL
    1.5 magma
    1.6 sccache
    1.7 Install Anaconda or Miniconda
    1.8 Install Python packages
Set environment variables
Start building
    Build PyTorch
    Build libtorch only
Building the latest PyTorch source (1.6 and above)

Build tools and third-party libraries

1.1 Visual Studio 2019

The latest VS 2019 releases have problems with this build, so install version 16.6.5 from https://docs.microsoft.com/en-us/visualstudio/releases/2019/history. Be sure to disable Visual Studio's automatic updates; otherwise, once it upgrades itself the build will break again.
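If several MSVC toolsets end up installed side by side, you can pin the build to the 14.26 toolset (the one shipped with 16.6.x) when initializing the developer environment. A minimal sketch; the install path assumes the Professional edition, so adjust it for your own:

"C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Auxiliary\Build\vcvarsall.bat" x64 -vcvars_ver=14.26
rem Running cl with no arguments should then report compiler version 19.26.x
cl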

1.2 CUDA Toolkit

Download the CUDA Toolkit from the NVIDIA website: https://developer.nvidia.com/cuda-toolkit
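After installing, a quick way to confirm that both the toolkit and the driver are visible from a command prompt:

nvcc --version
nvidia-smi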

1.3 cuDNN

For installation, refer to windows_cudnn_install.
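windows_cudnn_install is the cuDNN install helper used in PyTorch's CI. If you instead install cuDNN by hand, the usual approach is to copy its files into the CUDA installation; a minimal sketch, assuming the cuDNN zip was extracted to C:\cudnn and CUDA v11.0 is installed (both paths are assumptions):

copy /y "C:\cudnn\bin\cudnn*.dll" "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\bin"
copy /y "C:\cudnn\include\cudnn*.h" "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\include"
copy /y "C:\cudnn\lib\x64\cudnn*.lib" "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\lib\x64"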

1.4 MKL

For installation, refer to install_mkl.
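install_mkl is likewise a PyTorch CI helper; it essentially downloads a prebuilt MKL archive and unpacks it into the working directory. A rough equivalent, assuming curl and 7-Zip are on PATH, %TMP_DIR_WIN% is the directory chosen in the environment-variable section below, and the archive name is only illustrative:

curl -kL https://s3.amazonaws.com/ossci-windows/mkl_2020.0.166.7z --output %TMP_DIR_WIN%\mkl.7z
7z x -aoa %TMP_DIR_WIN%\mkl.7z -o%TMP_DIR_WIN%\mkl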

1.5 magma

Refer to install_magma. Note the difference between the release and debug packages: for the CUDA 11.0 release build download xxx_cuda110_release, otherwise download xxx_cuda110_debug. Use the release package when building PyTorch in release mode and the debug package when building in debug mode.
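As with MKL, install_magma boils down to fetching the matching prebuilt archive and extracting it next to the others. A sketch for the CUDA 11.0 release package (the archive name is illustrative; pick the debug one for a debug build):

curl -kL https://s3.amazonaws.com/ossci-windows/magma_2.5.4_cuda110_release.7z --output %TMP_DIR_WIN%\magma.7z
7z x -aoa %TMP_DIR_WIN%\magma.7z -o%TMP_DIR_WIN%\magma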

1.6 sccache

Refer to install_sccache.

After downloading and extracting MKL, magma, and sccache, it is best to put them all under the same directory.
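With the environment variables used later in this post (TMP_DIR_WIN=C:\git), the resulting layout would look roughly like this, with sccache.exe placed in a bin subdirectory so it can be put on PATH:

C:\git\mkl\include
C:\git\mkl\lib
C:\git\magma\include
C:\git\magma\lib
C:\git\bin\sccache.exe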

1.7 Install Anaconda or Miniconda
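Building inside a dedicated conda environment keeps things clean; a minimal sketch (the environment name and Python version are arbitrary choices):

conda create -n pytorch-build python=3.8 -y
conda activate pytorch-build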

1.8 Install Python packages

conda install numpy ninja pyyaml mkl mkl-include setuptools cmake cffi typing_extensions future six requests dataclasses

Set environment variables

set -x
set USE_CUDA=1
set DEBUG=
rem set DEBUG=1 for debug version
set USE_DISTRIBUTED=0
set CMAKE_VERBOSE_MAKEFILE=1

set TMP_DIR_WIN=C:\git\
set CMAKE_INCLUDE_PATH=%TMP_DIR_WIN%\mkl\include
set LIB=%TMP_DIR_WIN%\mkl\lib;%LIB%
set MAGMA_HOME=%TMP_DIR_WIN%\magma
set CUDA_SUFFIX=cuda110

set CUDA_PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0
set CUDNN_LIB_DIR=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\lib\x64
set CUDA_TOOLKIT_ROOT_DIR=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0
set CUDNN_ROOT_DIR=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0
set NVTOOLSEXT_PATH=C:\Program Files\NVIDIA Corporation\NvToolsExt
set CUDNN_INCLUDE_DIR=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\include
set PATH=%CUDA_PATH%\bin;%CUDA_PATH%\libnvvp;%PATH%

set DISTUTILS_USE_SDK=1
set TORCH_CUDA_ARCH_LIST=3.7+PTX;5.0;6.0;6.1;7.0;7.5;8.0
set TORCH_NVCC_FLAGS=-Xfatbin -compress-all

set PATH=%TMP_DIR_WIN%\bin;%PATH%
set SCCACHE_IDLE_TIMEOUT=0
sccache --stop-server
sccache --start-server
sccache --zero-stats
set CC=sccache-cl
set CXX=sccache-cl

set CMAKE_GENERATOR=Ninja
if "%USE_CUDA%"=="1" (
    copy %TMP_DIR_WIN%\bin\sccache.exe %TMP_DIR_WIN%\bin\nvcc.exe

    :: randomtemp is used to resolve the intermittent build error related to CUDA.
    :: code: https://github.com/peterjc123/randomtemp
    :: issue: https://github.com/pytorch/pytorch/issues/25393
    ::
    :: Previously, CMake uses CUDA_NVCC_EXECUTABLE for finding nvcc and then
    :: the calls are redirected to sccache. sccache looks for the actual nvcc
    :: in PATH, and then pass the arguments to it.
    :: Currently, randomtemp is placed before sccache (%TMP_DIR_WIN%\bin\nvcc)
    :: so we are actually pretending sccache instead of nvcc itself.
    curl -kL https://github.com/peterjc123/randomtemp/releases/download/v0.3/randomtemp.exe --output %TMP_DIR_WIN%\bin\randomtemp.exe
    set RANDOMTEMP_EXECUTABLE=%TMP_DIR_WIN%\bin\nvcc.exe
    set CUDA_NVCC_EXECUTABLE=%TMP_DIR_WIN%\bin\randomtemp.exe
    set RANDOMTEMP_BASEDIR=%TMP_DIR_WIN%\bin
)

"C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Auxiliary\Build\vcvarsall.bat" x64 -vcvars_ver=14.26

A few notes:
1. If you are building against CUDA 10, remove 8.0 from TORCH_CUDA_ARCH_LIST.
2. For a debug or release build, download the matching magma debug or release package.
3. For a CPU-only build, set USE_CUDA=0.
The common variations are sketched below.
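For example, the CPU-only and CUDA 10 variants only need the following lines changed; everything else in the script above stays the same:

rem CPU-only build
set USE_CUDA=0

rem CUDA 10.2 build: point the CUDA_* paths at v10.2 and drop 8.0 from the arch list
set CUDA_SUFFIX=cuda102
set TORCH_CUDA_ARCH_LIST=3.7+PTX;5.0;6.0;6.1;7.0;7.5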

Start building

Build PyTorch

python setup.py install --cmake
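Once the build finishes, a quick sanity check (run it from outside the source tree so Python does not pick up the local torch directory):

python -c "import torch; print(torch.__version__); print(torch.cuda.is_available())"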

Build libtorch only

python tools\build_libtorch.py
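The libtorch import libraries and DLLs end up under the build directory (the exact layout can vary between PyTorch versions); a quick way to check:

dir build\lib\*.lib
dir build\bin\*.dll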