Paper reading: Deformable ConvNets v2 (compiling for PyTorch 1.6)

tech · 2022-09-16

Deformable ConvNets v2

1. Paper reading

So far I have only skimmed the paper once, which is not enough to write it up properly; I will come back to it after another read or two. Consider this an IOU!

2. Compiling for PyTorch 1.6

I was actually doing preliminary research on the FairMOT paper and needed to set up the DCNv2 environment for it. The process was, of course, not smooth sailing; I am used to that by now...

A few of the errors are ones many people will run into, and there are plenty of similar issues on GitHub, so I am writing them up in a post of their own. After all, this is a CVPR paper; it deserves one.

DCNv2_new/src/cuda/dcn_v2_cuda.cu(107): error: identifier "THCState_getCurrentStream" is undefined
DCNv2_new/src/cuda/dcn_v2_cuda.cu(279): error: identifier "THCState_getCurrentStream" is undefined
DCNv2_new/src/cuda/dcn_v2_cuda.cu(324): error: identifier "THCudaBlas_Sgemv" is undefined

Building the DCNv2 bundled with the original code throws the errors above, because the original code targets

conda install pytorch==1.2.0 torchvision==0.4.0 cudatoolkit=10.0 -c pytorch

whereas I had installed PyTorch 1.6:

conda install pytorch torchvision cudatoolkit=10.2 -c pytorch

I insisted on using my own versions, and the price was having to hunt down one problem after another.
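The root cause is that symbols such as THCState_getCurrentStream belong to PyTorch's legacy THC C API, which later releases dropped in favor of ATen/c10 equivalents, so an extension written against 1.2 no longer compiles under 1.6. One way to fail fast is to check the installed version before attempting the build. A minimal sketch, assuming nothing from the repo itself (the version_tuple helper, check_build_target, and the cutoff version are all my own illustration):

```python
def version_tuple(version):
    """Parse a version string like '1.6.0' or '1.2.0+cu100' into (major, minor)."""
    base = version.split("+")[0]       # drop local build tags such as '+cu100'
    major, minor = base.split(".")[:2]
    return (int(major), int(minor))

def check_build_target(installed, thc_removed_after=(1, 4)):
    """Suggest which DCNv2 source tree matches the installed PyTorch.

    The (1, 4) cutoff is illustrative only; check your PyTorch release
    notes for when the specific THC symbols you need disappeared.
    """
    if version_tuple(installed) > thc_removed_after:
        return "use the ATen/c10 code path (e.g. DCNv2_latest)"
    return "legacy THC code path should compile"
```

In practice one would feed torch.__version__ into check_build_target before running make.sh, instead of discovering the mismatch deep inside an nvcc error dump.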

I eventually found a PyTorch 1.6-compatible fork of DCNv2 on GitHub: https://github.com/jinfagang/DCNv2_latest

Running make.sh afterwards, the build appeared to succeed:

copying build/lib.linux-x86_64-3.8/_ext.cpython-38-x86_64-linux-gnu.so ->
Creating /root/anaconda3/envs/FairMOT/lib/python3.8/site-packages/DCNv2.egg-link (link to .)
Adding DCNv2 0.1 to easy-install.pth file
Installed /DCNv2_latest-pytorch1.6
Processing dependencies for DCNv2==0.1
Finished processing dependencies for DCNv2==0.1

But running the test script surfaced another problem:

python testcuda.py
torch.Size([2, 64, 128, 128])
torch.Size([20, 32, 7, 7])
torch.Size([20, 32, 7, 7])
torch.Size([20, 32, 7, 7])
0.971507, 1.943014
0.971507, 1.943014
Zero offset passed
/root/anaconda3/envs/FairMOT/lib/python3.8/site-packages/torch/autograd/gradcheck.py:266: UserWarning: The {}th input requires gradient and is not a double precision floating point or complex. This check will likely fail if all the inputs are not of double precision floating point or complex.
  warnings.warn(
check_gradient_dpooling: True
Traceback (most recent call last):
  File "testcuda.py", line 265, in <module>
    check_gradient_dconv()
  File "testcuda.py", line 95, in check_gradient_dconv
    gradcheck(dcn_v2_conv, (input, offset, mask, weight, bias,
  File "/root/anaconda3/envs/FairMOT/lib/python3.8/site-packages/torch/autograd/gradcheck.py", line 321, in gradcheck
    return fail_test('Backward is not reentrant, i.e., running backward with same '
  File "/root/anaconda3/envs/FairMOT/lib/python3.8/site-packages/torch/autograd/gradcheck.py", line 254, in fail_test
    raise RuntimeError(msg)
RuntimeError: Backward is not reentrant, i.e., running backward with same input and grad_output multiple times gives different values, although analytical gradient matches numerical gradient. The tolerance for nondeterminism was 0.0.
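The UserWarning in that trace is a useful hint on its own: gradcheck compares analytic gradients against finite differences, and finite differences computed in single precision are far noisier than in double precision, so checks that pass in float64 can fail in float32. The effect is easy to reproduce with a plain central difference. A self-contained sketch (to_f32 simulates float32 rounding via struct; none of this comes from the DCNv2 test script):

```python
import math
import struct

def to_f32(x):
    """Round a Python float (which is float64) to the nearest float32 value."""
    return struct.unpack("f", struct.pack("f", x))[0]

def central_diff_f64(f, x, h):
    """Central difference in full double precision."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def central_diff_f32(f, x, h):
    """Central difference with every intermediate quantized to float32."""
    xp = to_f32(to_f32(x) + to_f32(h))
    xm = to_f32(to_f32(x) - to_f32(h))
    num = to_f32(to_f32(f(xp)) - to_f32(f(xm)))
    return to_f32(num / to_f32(2.0 * h))

analytic = math.cos(1.0)                   # d/dx sin(x) at x = 1
err64 = abs(central_diff_f64(math.sin, 1.0, 1e-6) - analytic)
err32 = abs(central_diff_f32(math.sin, 1.0, 1e-3) - analytic)
# err64 lands around 1e-10 or better; err32 is orders of magnitude worse.
```

This is why the DCNv2_latest README notes that "all gradient check passes with double precision": the analytic gradient is fine, but single-precision finite differences cannot resolve it to gradcheck's default tolerances.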

README.md contains a relevant passage:

Known Issues:

Gradient check w.r.t offset (solved)
Backward is not reentrant (minor)

This is an adaption of the official Deformable-ConvNets.

Update: all gradient check passes with double precision.

Another issue is that it raises RuntimeError: Backward is not reentrant. However, the error is very small (<1e-7 for float, <1e-15 for double), so it may not be a serious problem (?)

In other words, this issue can be ignored. By the repo's own account, then, the build is complete and the extension is ready to use.
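As for why backward is not bit-for-bit reproducible in the first place: a common cause in CUDA backward kernels (plausibly including these, though I have not verified the DCNv2 sources) is that gradient contributions are accumulated with atomic adds, whose execution order varies from run to run, and floating-point addition is not associative, so different orders yield slightly different sums. The non-associativity itself takes two lines to demonstrate:

```python
# Adding the same three numbers in two different orders gives different
# results, because 1.0 is below the spacing (ulp = 2.0) of float64 at 1e16.
big, small = 1e16, 1.0

order_a = (big + small) - big   # small is absorbed first  -> 0.0
order_b = (big - big) + small   # cancellation happens first -> 1.0
```

On a GPU the per-run reordering happens across thousands of threads, so two backward passes over identical inputs can differ by roughly one rounding unit, which is exactly the <1e-7 float discrepancy the README describes.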

References

Paper: https://arxiv.org/pdf/1811.11168.pdf