Overview: This article walks through building a Deepseek/ChatGPT-style streaming chat interface with Vue 3 and wiring it to the Deepseek/OpenAI APIs, covering UI design, streaming response handling, API calls, and other key techniques.
With AI conversation products competing ever more fiercely, streaming responses have become key to a good user experience. Unlike a traditional full-payload reply, a streaming response lets the model return content token by token or sentence by sentence, mimicking the rhythm of a real conversation. This article uses Vue 3 with the Composition API and TypeScript to implement a Deepseek/ChatGPT-style chat interface, focusing on challenges such as processing streamed responses incrementally, keeping long conversations smooth to render, integrating multiple model providers (Deepseek and OpenAI), and handling errors and retries gracefully.
Vue 3's Composition API offers a more flexible way to organize logic, which suits the stateful nature of streaming data particularly well. Combined with the Vite build tool, it gives us hot module replacement during development and fast production builds.
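As a minimal illustration of this idea, the streaming message state can live in a small composable. The `useChatMessages` name and the `Message` shape below are illustrative sketches, not part of the article's final code:

```ts
// use-chat-messages.ts -- illustrative composable; names are assumptions, not library APIs
import { ref } from 'vue';

export interface Message {
  id: number;
  role: 'user' | 'assistant';
  content: string;
}

export function useChatMessages() {
  const messages = ref<Message[]>([]);
  let nextId = 0;

  // Append a complete user message
  function addUserMessage(content: string) {
    messages.value.push({ id: nextId++, role: 'user', content });
  }

  // Create an empty assistant message and return a callback that
  // appends streamed chunks to it as they arrive
  function startAssistantMessage() {
    messages.value.push({ id: nextId++, role: 'assistant', content: '' });
    // Read the item back through the ref so mutations stay reactive
    const msg = messages.value[messages.value.length - 1];
    return (chunk: string) => {
      msg.content += chunk;
    };
  }

  return { messages, addUserMessage, startAssistantMessage };
}
```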
Two mainstream streaming API modes are supported: WebSocket-based push and HTTP-based Server-Sent Events (SSE). The client implementations for both appear later in the article.
On the front end, a small `StreamProcessor` class accumulates incoming chunks and pushes the combined text to the UI:

```ts
// stream-processor.ts
export class StreamProcessor {
  private chunks: string[] = [];
  private decoder = new TextDecoder();
  private isProcessing = false;

  // onUpdate receives the full accumulated text each time a chunk arrives
  constructor(private onUpdate: (text: string) => void) {}

  async processStream(stream: ReadableStream<Uint8Array>) {
    if (this.isProcessing) return;
    this.isProcessing = true;
    const reader = stream.getReader();
    try {
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        // stream: true keeps multi-byte characters intact across chunk boundaries
        this.chunks.push(this.decoder.decode(value, { stream: true }));
        this.emitChunk();
      }
    } finally {
      this.isProcessing = false;
    }
  }

  private emitChunk() {
    // Trigger the UI update with the text accumulated so far
    this.onUpdate(this.chunks.join(''));
  }
}
```
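A usage sketch, assuming a placeholder `/api/chat` backend that returns a streamed body:

```ts
import { ref } from 'vue';
import { StreamProcessor } from './stream-processor';

const assistantText = ref('');

// onUpdate delivers the accumulated text, so it can be assigned directly
const processor = new StreamProcessor(text => {
  assistantText.value = text;
});

async function send(prompt: string) {
  // '/api/chat' is a placeholder for whatever backend returns the stream
  const response = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt }),
  });
  if (response.body) {
    await processor.processStream(response.body);
  }
}
```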
For long conversations, virtual scrolling keeps rendering cost bounded by only mounting the messages currently in view:
```vue
<template>
  <div class="chat-container" ref="container">
    <div
      class="message-list"
      :style="{ height: totalHeight + 'px' }"
    >
      <div
        v-for="msg in visibleMessages"
        :key="msg.id"
        class="message-item"
        :style="{ transform: `translateY(${msg.offset}px)` }"
      >
        <!-- Message content -->
      </div>
    </div>
  </div>
</template>
```
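The template above expects `visibleMessages`, `totalHeight`, and a per-message `offset`. Below is a simplified sketch of that logic, assuming a fixed row height; the `useVirtualScroll` name, `itemHeight`, and buffer sizes are our assumptions rather than the article's final implementation:

```ts
// use-virtual-scroll.ts -- simplified sketch assuming a fixed row height
import { computed, ref, type Ref } from 'vue';
import type { Message } from './use-chat-messages';

export function useVirtualScroll(
  messages: Ref<Message[]>,
  itemHeight = 72,       // assumed average message height in px
  viewportHeight = 600,  // visible height of the chat container in px
) {
  const scrollTop = ref(0);

  // Total height of the list as if every message were rendered
  const totalHeight = computed(() => messages.value.length * itemHeight);

  // Only rows intersecting the viewport (plus a small buffer) are rendered
  const visibleMessages = computed(() => {
    const start = Math.max(0, Math.floor(scrollTop.value / itemHeight) - 2);
    const end = Math.min(
      messages.value.length,
      Math.ceil((scrollTop.value + viewportHeight) / itemHeight) + 2,
    );
    return messages.value.slice(start, end).map((msg, i) => ({
      ...msg,
      offset: (start + i) * itemHeight, // translateY position inside the list
    }));
  });

  // Bind to the container's scroll event, e.g. @scroll="onScroll"
  function onScroll(event: Event) {
    scrollTop.value = (event.target as HTMLElement).scrollTop;
  }

  return { totalHeight, visibleMessages, onScroll };
}
```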
CSS variables and media queries handle multi-device adaptation:
```css
:root {
  --chat-width: 768px;
  --bubble-radius: 18px;
}

@media (max-width: 768px) {
  :root {
    --chat-width: 100%;
    --bubble-radius: 12px;
  }
}
```
For the Deepseek side, the example wraps a WebSocket connection in a `ReadableStream`:

```ts
// deepseek-client.ts
// Note: the WebSocket endpoint and message format below are illustrative;
// adapt them to whatever streaming gateway you actually expose.
export async function connectToDeepseek(apiKey: string) {
  const socket = new WebSocket('wss://api.deepseek.com/v1/chat');

  socket.onopen = () => {
    socket.send(JSON.stringify({
      api_key: apiKey,
      model: 'deepseek-chat',
    }));
  };

  return new ReadableStream<Uint8Array>({
    start(controller) {
      socket.onmessage = (event) => {
        const data = JSON.parse(event.data);
        controller.enqueue(new TextEncoder().encode(data.text));
      };
      // Close the stream when the socket ends or errors
      socket.onclose = () => controller.close();
      socket.onerror = (err) => controller.error(err);
    },
    cancel() {
      socket.close();
    },
  });
}
```
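Deepseek also documents an OpenAI-compatible HTTP API, so if no WebSocket gateway is available, the same SSE approach used for OpenAI below can be pointed at `https://api.deepseek.com/chat/completions`. A minimal sketch; the function name is ours, and the response parsing is identical to the OpenAI client that follows:

```ts
// deepseek-sse-client.ts -- sketch of Deepseek's OpenAI-compatible HTTP API
export async function fetchDeepseekStream(prompt: string, apiKey: string) {
  const response = await fetch('https://api.deepseek.com/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: 'deepseek-chat',
      messages: [{ role: 'user', content: prompt }],
      stream: true,
    }),
  });
  // The body is an SSE stream in the same "data: {...}" format parsed
  // by fetchOpenAIStream below, so the same parsing logic applies.
  return response.body!;
}
```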
For OpenAI, the chat completions endpoint returns Server-Sent Events when `stream: true` is set; the client parses each `data:` line and forwards only the delta content:

```ts
// openai-client.ts
// OPENAI_API_KEY is assumed to be provided elsewhere (e.g. an env variable)
export async function fetchOpenAIStream(prompt: string) {
  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: prompt }],
      stream: true,
    }),
  });

  return new ReadableStream<Uint8Array>({
    async start(controller) {
      const reader = response.body!.getReader();
      const decoder = new TextDecoder();
      let buffer = '';

      while (true) {
        const { done, value } = await reader.read();
        if (done) break;

        // SSE events can be split across network chunks, so buffer by line
        buffer += decoder.decode(value, { stream: true });
        const lines = buffer.split('\n');
        buffer = lines.pop() ?? ''; // keep the last, possibly incomplete line

        for (const line of lines) {
          if (!line.startsWith('data: ')) continue;
          const data = line.slice(6).trim();
          if (data === '[DONE]') {
            controller.close();
            return;
          }
          const parsed = JSON.parse(data);
          const content = parsed.choices[0].delta?.content || '';
          if (content) controller.enqueue(new TextEncoder().encode(content));
        }
      }
      controller.close();
    },
  });
}
```
A debounce helper keeps high-frequency handlers from running too often:

```ts
// debounce-utils.ts
export function debounce<T extends (...args: any[]) => any>(
  func: T,
  wait: number,
) {
  // ReturnType<typeof setTimeout> works in both browser and Node environments
  let timeout: ReturnType<typeof setTimeout> | undefined;
  return function (this: any, ...args: Parameters<T>) {
    clearTimeout(timeout);
    timeout = setTimeout(() => func.apply(this, args), wait);
  } as T;
}
```
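One place this helps is auto-scrolling the chat to the bottom while chunks stream in. The sketch below reuses `StreamProcessor` and `assistantText` from the earlier examples; the `.chat-container` selector and 100 ms wait are arbitrary choices:

```ts
// Illustrative: scroll to the bottom only once chunk updates pause briefly
function scrollToBottom() {
  const el = document.querySelector('.chat-container');
  el?.scrollTo({ top: el.scrollHeight });
}

const debouncedScrollToBottom = debounce(scrollToBottom, 100);

const processor = new StreamProcessor(text => {
  assistantText.value = text;
  debouncedScrollToBottom();
});
```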
Transient API failures are retried with linear backoff:

```ts
// api-retry.ts
export async function withRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  delay = 1000,
): Promise<T> {
  let lastError: Error | undefined;
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error as Error;
      // Linear backoff: wait a little longer after each failed attempt
      await new Promise(resolve => setTimeout(resolve, delay * (i + 1)));
    }
  }
  throw lastError;
}
```
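For example, the initial streaming request can be wrapped so transient network failures are retried before giving up (assuming the helpers above are in scope):

```ts
// Retry the initial request up to 3 times, then stream the response as before
async function sendWithRetry(prompt: string) {
  const stream = await withRetry(() => fetchOpenAIStream(prompt));
  await processor.processStream(stream);
}
```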
For deployment, a minimal Dockerfile builds the app and serves the production preview:

```dockerfile
# Dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 3000
# Note: `vite preview` defaults to port 4173 and binds to localhost,
# so the preview script should pass `--host --port 3000` to match EXPOSE.
CMD ["npm", "run", "preview"]
```
In production, it is worth monitoring key metrics such as time to first chunk, streaming throughput, and API error and retry rates.
The Vue 3 streaming chat interface built in this article has several strengths: streamed output is handled end to end, long conversations stay smooth thanks to virtual scrolling, the layout adapts across devices, and both Deepseek and OpenAI back ends are supported, with debounce and retry utilities adding robustness. The design also leaves room for future extension. With the approach described here, developers can quickly build an AI conversation interface with a professional-grade user experience and give their AI applications solid front-end support.