Computer Vision Notes: Fine-Tuning

tech · 2022-08-06

 

Fine-tuning initializes the feature-extraction layers of your own network with the parameters of a pretrained open-source model, which mitigates the severe overfitting that occurs when your own dataset is small. The example below illustrates fine-tuning.
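In other words, the feature extractor is copied from the pretrained model, and only the small output head is freshly initialized and trained. A minimal NumPy sketch of that parameter transfer (all names and shapes here are illustrative stand-ins, not part of the MXNet example below):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for an open-source pretrained model's weights:
# one feature layer and a 1000-class output head.
pretrained_features = rng.normal(size=(64, 32))
pretrained_head = rng.normal(size=(32, 1000))  # not reused

# Build "our" network: copy the feature weights unchanged,
# re-initialize a new 2-class head with small random values.
my_features = pretrained_features.copy()        # transferred as-is
my_head = rng.normal(scale=0.01, size=(32, 2))  # trained from scratch
```

Only `my_head` starts from random values, so far fewer parameters have to be learned from the small dataset.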

The code below uses MXNet. Install the dependencies:

```
pip install d2l==0.14.3
pip install -U mxnet-cu101mkl==1.6.0.post0
pip install gluoncv
```

Load the hotdog dataset, which contains 1,400 positive and 1,000 negative samples:

```python
%matplotlib inline
from d2l import mxnet as d2l
from mxnet import gluon, init, np, npx
from mxnet.gluon import nn
import os

npx.set_np()

#@save
d2l.DATA_HUB['hotdog'] = (d2l.DATA_URL + 'hotdog.zip',
                          'fba480ffa8aa7e0febbb511d181409f899b9baa5')

data_dir = d2l.download_extract('hotdog')
train_imgs = gluon.data.vision.ImageFolderDataset(
    os.path.join(data_dir, 'train'))
test_imgs = gluon.data.vision.ImageFolderDataset(
    os.path.join(data_dir, 'test'))

hotdogs = [train_imgs[i][0] for i in range(8)]
not_hotdogs = [train_imgs[-i - 1][0] for i in range(8)]
d2l.show_images(hotdogs + not_hotdogs, 2, 8, scale=1.4);
```

Data augmentation:

```python
# Specify the mean and standard deviation of the three RGB channels
# to normalize each image channel.
normalize = gluon.data.vision.transforms.Normalize(
    [0.485, 0.456, 0.406], [0.229, 0.224, 0.225])

train_augs = gluon.data.vision.transforms.Compose([
    gluon.data.vision.transforms.RandomResizedCrop(224),
    gluon.data.vision.transforms.RandomFlipLeftRight(),
    gluon.data.vision.transforms.ToTensor(),
    normalize])

test_augs = gluon.data.vision.transforms.Compose([
    gluon.data.vision.transforms.Resize(256),
    gluon.data.vision.transforms.CenterCrop(224),
    gluon.data.vision.transforms.ToTensor(),
    normalize])
```
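The `Normalize` transform applies `(x - mean) / std` per channel; the values above are the standard ImageNet statistics that the pretrained ResNet expects. A small NumPy sketch of what it computes for a single RGB pixel:

```python
import numpy as np

# ImageNet channel statistics, as used in the transforms above.
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])

# One RGB pixel scaled to [0, 1], as ToTensor() produces.
pixel = np.array([0.5, 0.5, 0.5])

# Each channel is shifted and scaled independently.
normalized = (pixel - mean) / std
```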

Download the pretrained network:

```python
pretrained_net = gluon.model_zoo.vision.resnet18_v2(pretrained=True)
```

Define our binary-classification network:

```python
finetune_net = gluon.model_zoo.vision.resnet18_v2(classes=2)
finetune_net.features = pretrained_net.features
finetune_net.output.initialize(init.Xavier())
# The model parameters in the output layer will be updated using a
# learning rate ten times greater than the base rate.
finetune_net.output.collect_params().setattr('lr_mult', 10)
```
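Setting `lr_mult` makes the output layer's effective step size ten times the base learning rate, so the fresh head learns quickly while the transferred feature layers change slowly. A NumPy sketch of one SGD step with such a per-parameter multiplier (purely illustrative, not Gluon's internals):

```python
import numpy as np

base_lr = 0.01
grad = 0.5  # assume both parameters see the same gradient

# Mirrors collect_params().setattr('lr_mult', 10) on the output layer.
lr_mult = {'feature': 1.0, 'output': 10.0}

w_feature = 1.0
w_output = 1.0

# One SGD update: w <- w - base_lr * lr_mult * grad
w_feature -= base_lr * lr_mult['feature'] * grad  # small step (0.005)
w_output -= base_lr * lr_mult['output'] * grad    # ten times larger (0.05)
```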

Start training:

```python
def train_fine_tuning(net, learning_rate, batch_size=128, num_epochs=5):
    train_iter = gluon.data.DataLoader(
        train_imgs.transform_first(train_augs), batch_size, shuffle=True)
    test_iter = gluon.data.DataLoader(
        test_imgs.transform_first(test_augs), batch_size)
    devices = d2l.try_all_gpus()
    net.collect_params().reset_ctx(devices)
    net.hybridize()
    loss = gluon.loss.SoftmaxCrossEntropyLoss()
    trainer = gluon.Trainer(net.collect_params(), 'sgd', {
        'learning_rate': learning_rate, 'wd': 0.001})
    d2l.train_ch13(net, train_iter, test_iter, loss, trainer, num_epochs,
                   devices)

train_fine_tuning(finetune_net, 0.01)
```

```
loss 0.307, train acc 0.912, test acc 0.940
668.6 examples/sec on [gpu(0), gpu(1)]
```
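The `SoftmaxCrossEntropyLoss` used above computes `-log(softmax(logits)[label])` for each example. A NumPy sketch for a single two-class prediction:

```python
import numpy as np

logits = np.array([2.0, 0.5])  # raw scores for classes 0 and 1
label = 0                      # the true class

# Numerically stable softmax: subtract the max before exponentiating.
shifted = logits - logits.max()
probs = np.exp(shifted) / np.exp(shifted).sum()

# Cross-entropy: negative log-probability assigned to the true class.
loss = -np.log(probs[label])
```

The loss shrinks toward zero as the probability assigned to the true class approaches one.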

For comparison, the same network trained from scratch without fine-tuning:

```python
scratch_net = gluon.model_zoo.vision.resnet18_v2(classes=2)
scratch_net.initialize(init=init.Xavier())
train_fine_tuning(scratch_net, 0.1)
```

```
loss 0.390, train acc 0.828, test acc 0.861
706.3 examples/sec on [gpu(0), gpu(1)]
```

Closing remarks:

This article is published at /蓝色的杯子. Feel free to leave comments, and let's pursue wisdom and truth together. My email: wisdomfriend@126.com.
