Summary: This article walks through deploying a pure front-end face recognition project built with SolidJS and daisyUI on Vercel, covering technology selection, project setup, core feature implementation, and deployment optimization. With step-by-step instructions and code examples, it helps developers quickly build and ship a modern face recognition app.
Traditional face recognition systems rely on a backend service for image analysis, but modern browsers can now run lightweight face detection entirely on the client thanks to WebAssembly (WASM) and TensorFlow.js. This project uses the face-api.js library (built on TensorFlow.js), which ships pre-trained models and runs entirely in the browser, so no server-side image processing is required. This combination balances development efficiency with runtime performance, making it a good fit for prototypes and small applications.
```bash
npm create solid@latest   # choose the TypeScript template
cd your-project-name
npm install
```
Install Tailwind CSS and daisyUI:
```bash
npm install -D tailwindcss postcss autoprefixer
npm install daisyui
```
Create `tailwind.config.js`:
```js
module.exports = {
  content: ["./src/**/*.{js,jsx,ts,tsx}"],
  theme: { extend: {} },
  plugins: [require("daisyui")],
  daisyui: { themes: ["light"] }
}
```
Import Tailwind in `src/index.css`:
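For a Tailwind v3 setup, these are the standard directives:

```css
/* src/index.css */
@tailwind base;
@tailwind components;
@tailwind utilities;
```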
```bash
npm install face-api.js
```
```ts
// src/utils/faceDetection.ts
import * as faceapi from 'face-api.js';

const MODEL_URL = '/models'; // place the model files in public/models

export async function loadModels() {
  await Promise.all([
    faceapi.nets.tinyFaceDetector.loadFromUri(MODEL_URL),
    faceapi.nets.faceLandmark68Net.loadFromUri(MODEL_URL),
    faceapi.nets.faceRecognitionNet.loadFromUri(MODEL_URL)
  ]);
}
```
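If several components may call `loadModels`, it can help to memoize the loader so the model files are only fetched once. A minimal sketch, assuming a hypothetical `once` helper (not part of face-api.js):

```typescript
// Memoize an async loader: all callers share one in-flight promise,
// so the underlying function runs at most once.
function once<T>(fn: () => Promise<T>): () => Promise<T> {
  let p: Promise<T> | undefined;
  return () => (p ??= fn());
}

// Usage: const loadModelsOnce = once(loadModels);
```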
```tsx
// src/components/FaceDetector.tsx
import { createSignal, onMount } from 'solid-js';
import * as faceapi from 'face-api.js';
import { loadModels } from '../utils/faceDetection';

export default function FaceDetector() {
  const [detections, setDetections] = createSignal<any[]>([]);
  const [isLoading, setIsLoading] = createSignal(true);

  onMount(async () => {
    await loadModels(); // load the face-api.js models before detecting
    const stream = await navigator.mediaDevices.getUserMedia({ video: {} });
    const video = document.getElementById('video') as HTMLVideoElement;
    video.srcObject = stream;
    video.onloadedmetadata = () => {
      setIsLoading(false);
      detectFaces();
    };
  });

  async function detectFaces() {
    const video = document.getElementById('video') as HTMLVideoElement;
    const results = await faceapi
      .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
      .withFaceLandmarks();
    setDetections(results);
    requestAnimationFrame(detectFaces);
  }

  return (
    <div class="relative">
      {/* keep the video mounted so getElementById finds it during setup */}
      {isLoading() && (
        <div class="flex justify-center items-center h-64">
          <div class="loading loading-spinner loading-lg"></div>
        </div>
      )}
      <video id="video" autoplay muted class="w-full h-auto" />
      <canvas class="absolute top-0 left-0 w-full h-full" />
      {/* code that draws the detection results still needs to be added */}
    </div>
  );
}
```
Add the canvas drawing logic to the component:
```ts
// Additions to the FaceDetector component
import { onCleanup } from 'solid-js';

// Inside onMount, after the stream is attached:
const canvas = faceapi.createCanvasFromMedia(video);
document.body.append(canvas);
const displaySize = { width: video.width, height: video.height };
faceapi.matchDimensions(canvas, displaySize);

// Updated detectFaces:
async function detectFaces() {
  // ...existing detection code...
  const resizedResults = faceapi.resizeResults(results, displaySize);
  const ctx = canvas.getContext('2d');
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  resizedResults.forEach(detection => {
    // draw the bounding box
    const box = detection.detection.box;
    ctx.strokeStyle = '#4ADE80';
    ctx.lineWidth = 2;
    ctx.strokeRect(box.x, box.y, box.width, box.height);
    // draw the 68 facial landmarks
    faceapi.draw.drawFaceLandmarks(canvas, detection.landmarks);
  });
  requestAnimationFrame(detectFaces);
}

// Cleanup: stop the camera when the component unmounts
onCleanup(() => {
  const stream = (document.getElementById('video') as HTMLVideoElement)
    .srcObject as MediaStream;
  stream?.getTracks().forEach(track => track.stop());
});
```
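`faceapi.resizeResults` rescales each detection from the video's intrinsic resolution to the displayed size. For a single box the math is plain per-axis scaling, sketched here with a hypothetical `scaleBox` helper (not part of face-api.js):

```typescript
interface Box { x: number; y: number; width: number; height: number; }
interface Size { width: number; height: number; }

// Scale a detection box from one coordinate space to another,
// e.g. from the camera's 640x480 frame to a 320x240 display.
function scaleBox(box: Box, from: Size, to: Size): Box {
  const sx = to.width / from.width;
  const sy = to.height / from.height;
  return {
    x: box.x * sx,
    y: box.y * sy,
    width: box.width * sx,
    height: box.height * sy
  };
}
```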
```
project-root/
├── public/
│   └── models/        # face-api.js model files
├── src/
│   ├── components/
│   ├── utils/
│   └── index.tsx
├── vercel.json
└── package.json
```
```json
{
  "builds": [
    {
      "src": "package.json",
      "use": "@vercel/static-build",
      "config": { "distDir": "dist" }
    }
  ],
  "routes": [
    {
      "src": "/models/(.*)",
      "headers": { "Cache-Control": "public, max-age=31536000, immutable" }
    }
  ],
  "github": { "silent": true }
}
```
Build the project:
```bash
npm run build   # or, depending on your project setup, vite build
```
Connect to Vercel:
```bash
npm install -g vercel
vercel
```
Configure environment variables (if needed):
- Prefer face-api.js's lightweight models (e.g. `tiny_face_detector`)
- Configure long-term caching for the model files in `vercel.json`
- Add the following CSS so the video element adapts to different screen sizes:
```css
/* Add to the global stylesheet */
#video, canvas {
  max-width: 100%;
  height: auto;
  aspect-ratio: 16/9;
}
```
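Beyond caching and CSS, the detection loop itself is a major cost: running the detector on every animation frame is rarely necessary. A sketch of a simple frame throttle (hypothetical helper; assumes it gates the detection call inside the `requestAnimationFrame` loop):

```typescript
// Returns a predicate that is true only on every n-th call,
// so the detector runs on every n-th animation frame.
function makeFrameThrottle(n: number): () => boolean {
  let count = 0;
  return () => (count = (count + 1) % n) === 0;
}

// Usage inside the rAF loop:
//   const everyThird = makeFrameThrottle(3);
//   if (everyThird()) { /* run face detection */ }
```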
Cross-origin issues: serve the model files from your own `public/models` directory; loading them from the same origin as the app avoids CORS errors.
Permission issues: `getUserMedia` only works in a secure context (HTTPS or `localhost`); Vercel deployments are served over HTTPS by default, so the camera permission prompt works in production.
Add to `package.json`:
```json
"browserslist": [
  "defaults",
  "not ie <= 11"
]
```
Performance optimization: cap the capture resolution (e.g. `video.width = 640`) so each detection pass processes fewer pixels.

Snapshot feature:
```ts
function takeSnapshot() {
  const video = document.getElementById('video') as HTMLVideoElement;
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d')?.drawImage(video, 0, 0);
  // Convert to an image URL
  return canvas.toDataURL('image/jpeg');
}
```
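To sanity-check the size of the captured JPEG before, say, uploading it, the data URL's base64 payload can be measured directly. A sketch with a hypothetical `dataUrlBytes` helper (base64 encodes 3 bytes per 4 characters, minus padding):

```typescript
// Approximate decoded byte size of a data URL's base64 payload.
function dataUrlBytes(dataUrl: string): number {
  const b64 = dataUrl.slice(dataUrl.indexOf(',') + 1);
  const padding = (b64.match(/=+$/) ?? [''])[0].length;
  return (b64.length * 3) / 4 - padding;
}

// Usage: dataUrlBytes(takeSnapshot()) gives the JPEG size in bytes.
```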
Emotion recognition:
face-api.js does not expose a `loadEmotionModel` helper; expression detection goes through the `faceExpressionNet` model and `withFaceExpressions()`:

```ts
// Extend detection with face-api.js's expression model
await faceapi.nets.faceExpressionNet.loadFromUri('/models');

const results = await faceapi
  .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
  .withFaceExpressions();

// Each result carries an `expressions` score map over the labels:
// neutral, happy, sad, angry, fearful, disgusted, surprised
```
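However the scores are obtained, picking the dominant emotion from a label → probability map is a plain reduction, sketched here with a hypothetical `dominantExpression` helper:

```typescript
type ExpressionScores = Record<string, number>;

// Return the label with the highest probability.
function dominantExpression(scores: ExpressionScores): string {
  return Object.entries(scores).reduce((best, cur) =>
    cur[1] > best[1] ? cur : best
  )[0];
}

// Example:
// dominantExpression({ neutral: 0.1, happy: 0.85, sad: 0.05 }) → "happy"
```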
Multi-platform support:
Following these steps, a developer can go from project initialization to a global deployment in about two hours. Vercel's automatic scaling is well suited to sudden traffic spikes, while SolidJS's fine-grained reactivity keeps the face detection experience smooth.