AI Age Estimation in the Browser Using face-api and TensorFlow.js


Contents

Gender and Age Detection

What's Next?


Download source - 10.6 MB

In the previous article, we learned how to use face-api.js and TensorFlow.js to classify a person's emotions in the browser.

If you haven't read that article yet, I recommend reading it first, because we'll assume some familiarity with face-api.js, and we'll build on the code we created for emotion detection.

Gender and Age Detection

We've seen how easy it is to predict a person's facial expressions with face-api.js. But what else can we do? Let's learn how to predict someone's gender and age.

We'll make a few changes to the previous code. In the HTML file, we change the dimensions of the video element, because we need some extra space to display the drawings:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>
    <script type="application/x-javascript" src="face-api.js"></script>
  </head>
  <body>
    <h1>Emotions, Age & gender Detection using face-api.js</h1>
    <video autoplay muted id="video" width="400" height="400" style="margin: auto;"></video>
    <div id="prediction">Loading</div>
    <script type="text/javascript" defer src="index.js"></script>
  </body>
</html>

We also need to import another model in the index.js file:

faceapi.nets.ageGenderNet.loadFromUri('/models')
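For context, this load goes into the same Promise.all that loads the other models, exactly as in the final file shown below:

// Load the age/gender model alongside the models from the previous article.
// '/models' is the folder the pretrained weights are served from.
Promise.all([
  faceapi.nets.tinyFaceDetector.loadFromUri('/models'),
  faceapi.nets.faceLandmark68Net.loadFromUri('/models'),
  faceapi.nets.faceRecognitionNet.loadFromUri('/models'),
  faceapi.nets.faceExpressionNet.loadFromUri('/models'),
  faceapi.nets.ageGenderNet.loadFromUri('/models')
]).then(startVideo);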

And add age and gender to the predictions:

const detections = await faceapi
  .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
  .withFaceLandmarks()
  .withFaceExpressions()
  .withAgeAndGender();
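Each detection returned by this chain now carries age, gender, and genderProbability fields in addition to the expression scores. A minimal sketch of reading them (the logging is just for illustration; the field names are the same ones used by the drawing code below):

detections.forEach(result => {
  // age is an estimate in years; gender is the predicted label,
  // and genderProbability is its confidence between 0 and 1.
  const { age, gender, genderProbability } = result;
  console.log(`${Math.round(age)} years, ${gender} (${genderProbability.toFixed(2)})`);
});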

face-api.js also comes with some drawing functions. Let's add the drawings to the canvas:

const resizedDetections = faceapi.resizeResults(detections, displaySize);
faceapi.draw.drawDetections(canvas, resizedDetections);
faceapi.draw.drawFaceLandmarks(canvas, resizedDetections);
faceapi.draw.drawFaceExpressions(canvas, resizedDetections);
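These calls assume a canvas overlaid on the video and sized to match it, created exactly as in the previous article (and as in the final file below):

// Create an overlay canvas from the video element and keep their dimensions in sync.
const canvas = faceapi.createCanvasFromMedia(video);
document.body.append(canvas);
const displaySize = { width: video.width, height: video.height };
faceapi.matchDimensions(canvas, displaySize);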

Now we can get our predictions:

resizedDetections.forEach(result => {
  const { age, gender, genderProbability } = result;
  new faceapi.draw.DrawTextField(
    [
      `${faceapi.round(age, 0)} years`,
      `${gender} (${faceapi.round(genderProbability)})`
    ],
    result.detection.box.bottomRight
  ).draw(canvas);
});

Here is how the final index.js file looks:

const video = document.getElementById('video');

Promise.all([
  faceapi.nets.tinyFaceDetector.loadFromUri('/models'),
  faceapi.nets.faceLandmark68Net.loadFromUri('/models'),
  faceapi.nets.faceRecognitionNet.loadFromUri('/models'),
  faceapi.nets.faceExpressionNet.loadFromUri('/models'),
  faceapi.nets.ageGenderNet.loadFromUri('/models')
]).then(startVideo);

function startVideo() {
  navigator.getUserMedia = navigator.getUserMedia ||
    navigator.webkitGetUserMedia ||
    navigator.mozGetUserMedia ||
    navigator.msGetUserMedia;
  if (navigator.getUserMedia) {
    navigator.getUserMedia({ video: true },
      function(stream) {
        var video = document.querySelector('video');
        video.srcObject = stream;
        video.onloadedmetadata = function(e) {
          video.play();
        };
      },
      function(err) {
        console.log(err.name);
      }
    );
  } else {
    document.body.innerText = "getUserMedia not supported";
    console.log("getUserMedia not supported");
  }
}

video.addEventListener('play', () => {
  const canvas = faceapi.createCanvasFromMedia(video);
  document.body.append(canvas);
  const displaySize = { width: video.width, height: video.height };
  faceapi.matchDimensions(canvas, displaySize);
  setInterval(async () => {
    const predictions = await faceapi
      .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
      .withFaceLandmarks()
      .withFaceExpressions()
      .withAgeAndGender();
    const resizedDetections = faceapi.resizeResults(predictions, displaySize);
    canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height);
    faceapi.draw.drawDetections(canvas, resizedDetections);
    faceapi.draw.drawFaceLandmarks(canvas, resizedDetections);
    faceapi.draw.drawFaceExpressions(canvas, resizedDetections);
    resizedDetections.forEach(result => {
      const { age, gender, genderProbability } = result;
      new faceapi.draw.DrawTextField(
        [
          `${faceapi.round(age, 0)} years`,
          `${gender} (${faceapi.round(genderProbability)})`
        ],
        result.detection.box.bottomRight
      ).draw(canvas);
    });
  }, 100);
});
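One note on the camera code: the navigator.getUserMedia API used above is deprecated, so in current browsers you may prefer the promise-based navigator.mediaDevices.getUserMedia. A minimal sketch of a replacement startVideo (not part of the original sample; it requires a secure context such as HTTPS or localhost):

function startVideo() {
  // Promise-based camera access; logs the error name on failure, like the original.
  navigator.mediaDevices.getUserMedia({ video: true })
    .then(stream => {
      video.srcObject = stream;
      video.onloadedmetadata = () => video.play();
    })
    .catch(err => console.log(err.name));
}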

What's Next?

This series of articles introduced you to TensorFlow.js and helped you get started with machine learning in the browser. We built projects that showed you how to start training your own computer vision AI directly in the browser, and got it to recognize dog breeds as well as people's facial expressions, age, and gender. Impressive as these are on their own, the series is only a starting point. The possibilities for AI and ML in the browser are endless. For example, one thing we didn't do in this series is train an ML model offline and import it into the browser. Feel free to build on any of the examples, or to create something that interests you. And don't forget to share your ideas!
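As a hint of what that could look like: a model trained offline in Keras or TensorFlow can be converted with the tensorflowjs converter and then loaded with TensorFlow.js in the browser. A hypothetical sketch (the model path and input shape are placeholders, not from this series):

async function runConvertedModel() {
  // Hypothetical path: 'model.json' plus its weight shards, produced by tensorflowjs_converter
  // and hosted alongside the page.
  const model = await tf.loadLayersModel('/my-model/model.json');
  // Dummy input just to show the call shape; real inputs depend on how the model was trained.
  const prediction = model.predict(tf.zeros([1, 224, 224, 3]));
  prediction.print();
}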
