Meta-Llama-3-70B-Instruct
Meta-Llama-3-70B is the 70B-parameter large language model in the Meta Llama 3 series released by Meta AI on April 18, 2024. It excels at language nuance, contextual understanding, and complex tasks such as code generation, translation, and dialogue generation. Meta-Llama-3-70B-Instruct is the instruction-tuned 70B variant, intended for dialogue scenarios, and performs better at understanding linguistic nuance and context and at carrying out complex tasks. This document describes the related API.
Feature Description
Call this API to initiate a chat request.
Online Debugging
The platform provides an online API debugging console with sample code to help developers debug this API. It integrates quick search, development documentation, inspection of online request and response contents, and copying and downloading of sample code. It is easy to use; for more details, see the introduction to online API debugging.
HTTP Invocation
Authentication
This API supports two authentication methods. The calling procedure and the Header and Query parameters differ by method; see the request description below. Developers may use either one.
Request Description
- Basic information
Request URL: https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/llama_3_70b
Request method: POST
- Header parameters
The Header parameters depend on the authentication method; see the corresponding table below.
Authentication via access_token
Name | Type | Required | Description |
---|---|---|---|
Content-Type | string | Yes | Fixed value: application/json |
Authentication via AK/SK signature
Name | Type | Required | Description |
---|---|---|---|
Content-Type | string | Yes | Fixed value: application/json |
x-bce-date | string | No | Current time in ISO 8601 format, e.g. 2016-04-06T08:23:49Z |
Authorization | string | Yes | Authentication information used to verify the legitimacy of the request. For details, see the authentication mechanism documentation; the IAM signing tool can be used to compute the signature |
- Query parameters
Query parameters are required only when authenticating via access_token.
Authentication via access_token
Name | Type | Required | Description |
---|---|---|---|
access_token | string | Yes | The access_token obtained with the API Key and Secret Key; see the Access Token documentation |
- Body parameters
Name | Type | Required | Description |
---|---|---|---|
messages | List(message) | Yes | Chat context. Notes: (1) messages must not be empty; one member means a single-turn conversation, multiple members mean a multi-turn conversation (2) The last message is the current request; the preceding messages are the conversation history (3) The number of members must be odd, and the roles of the members must alternate as user, assistant, in that order (4) The total length of content across all messages plus the system field must not exceed 4,800 characters |
stream | bool | No | Whether to return data as a stream. Default: false |
temperature | float | No | Notes: (1) Higher values make the output more random; lower values make it more focused and deterministic (2) Range: (0, 1.0]; must not be 0 |
top_k | int | No | Top-K sampling parameter: at each token-generation step, the k highest-probability tokens are kept as candidates. Notes: (1) Affects output diversity; larger values produce more diverse text (2) Range: positive integers |
top_p | float | No | Notes: (1) Affects output diversity; larger values produce more diverse text (2) Range: [0, 1.0] |
penalty_score | float | No | Reduces repetition by penalizing tokens that have already been generated. Notes: (1) Larger values mean a stronger penalty (2) Range: [1.0, 2.0] |
stop | List(String) | No | Stop sequences. Generation stops when the output ends with any element of stop. Notes: (1) Each element is at most 20 characters (2) At most 4 elements |
system | string | No | Model persona, e.g. "You are an AI assistant built by company xxx". Notes: (1) For the length limit, see the messages parameter description |
user_id | string | No | Unique identifier of the end user |
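To illustrate how these parameters fit together, the sketch below assembles a request body that exercises the optional sampling and persona fields; the specific values are arbitrary placeholders, not recommendations:
import json

# Illustrative request body; every value below is a placeholder to tune for your use case.
payload = json.dumps({
    "messages": [
        {"role": "user", "content": "Write a haiku about the sea"}
    ],
    "temperature": 0.8,        # (0, 1.0], higher = more random output
    "top_k": 50,               # keep the 50 highest-probability tokens per step
    "top_p": 0.9,              # nucleus sampling threshold, range [0, 1.0]
    "penalty_score": 1.2,      # [1.0, 2.0], higher = stronger repetition penalty
    "stop": ["###"],           # up to 4 stop strings, each at most 20 characters
    "system": "You are a concise poetry assistant.",
    "user_id": "end-user-001"  # optional end-user identifier
})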
message description
Name | Type | Description |
---|---|---|
role | string | Currently supported values: user: the user; assistant: the dialogue assistant |
content | string | Message content; must not be empty |
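Concretely, a multi-turn messages list alternates the user and assistant roles and ends with the current user request, so its length is always odd; a minimal illustration (contents are placeholders):
messages = [
    {"role": "user", "content": "Please introduce yourself"},          # turn 1: user
    {"role": "assistant", "content": "Hello! I am an AI assistant."},  # turn 1: assistant reply from history
    {"role": "user", "content": "What can you help me with?"}          # current request (3 members, odd count)
]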
Response Description
Name | Type | Description |
---|---|---|
id | string | ID of this conversation turn |
object | string | Response type; chat.completion indicates a chat conversation response |
created | int | Timestamp |
sentence_id | int | Sequence number of the current segment. Returned only in streaming mode |
is_end | bool | Whether the current segment is the last one. Returned only in streaming mode |
is_truncated | bool | Whether the generated result was truncated |
result | string | The generated conversation result |
need_clear_history | bool | Whether the user input poses a security risk and the current session should therefore be closed and its history cleared. true: the user input poses a security risk; closing the current session and clearing the history is recommended. false: the user input poses no security risk |
ban_round | int | When need_clear_history is true, indicates which conversation turn contains sensitive content; if it is the current request, ban_round = -1 |
usage | usage | Token usage statistics |
usage description
Name | Type | Description |
---|---|---|
prompt_tokens | int | Number of tokens in the prompt |
completion_tokens | int | Number of tokens in the completion |
total_tokens | int | Total number of tokens |
Note: The response differs between synchronous mode and streaming mode; see the examples below for details.
- In synchronous mode, the response is a complete JSON object containing the fields above.
- In streaming mode, each chunk is returned as data: {response fields}.
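As a reference point, a streaming chunk can be recovered by stripping the data: prefix before JSON-decoding it; a minimal parsing sketch (error handling deliberately omitted):
import json

def parse_stream_line(raw_line: str):
    """Parse one streaming chunk of the form 'data: {...}'; returns a dict, or None for blank keep-alive lines."""
    line = raw_line.strip()
    if not line.startswith("data:"):
        return None
    return json.loads(line[len("data:"):].strip())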
Request Example (Single Turn)
The following example uses access_token authentication to show how to call the API.
# Step 1: obtain an access_token. Replace [API Key] and [Secret Key] with your application's API Key and Secret Key.
curl 'https://aip.baidubce.com/oauth/2.0/token?grant_type=client_credentials&client_id=[API Key]&client_secret=[Secret Key]'
# Step 2: call the API, using the access_token obtained in Step 1 to replace [access_token obtained in Step 1].
curl -X POST 'https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/llama_3_70b?access_token=[access_token obtained in Step 1]' -d '{
  "messages": [
    {"role":"user","content":"Please introduce yourself"}
  ]
}' | iconv -f utf-8 -t utf-8
import requests
import json


def get_access_token():
    """
    Obtain an access_token using the application API Key and Secret Key.
    Replace [API Key] and [Secret Key] below with your application's values.
    """
    url = "https://aip.baidubce.com/oauth/2.0/token?grant_type=client_credentials&client_id=[API Key]&client_secret=[Secret Key]"
    payload = json.dumps("")
    headers = {
        'Content-Type': 'application/json',
        'Accept': 'application/json'
    }
    response = requests.request("POST", url, headers=headers, data=payload)
    return response.json().get("access_token")


def main():
    url = "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/llama_3_70b?access_token=" + get_access_token()
    payload = json.dumps({
        "messages": [
            {
                "role": "user",
                "content": "Please introduce yourself"
            }
        ]
    })
    headers = {
        'Content-Type': 'application/json'
    }
    response = requests.request("POST", url, headers=headers, data=payload)
    print(response.text)


if __name__ == '__main__':
    main()
Response Example (Single Turn)
{
    "id": "as-85n3nbnbv5",
    "object": "chat.completion",
    "created": 1693291160,
    "result": " Hello! My name is Assistant, and I'm here to help you with any questions or concerns you may have. I strive to provide respectful, honest, and helpful responses that are free from harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. I'm here to assist you in a socially unbiased and positive manner, and I will do my best to provide accurate and reliable information. If a question doesn't make sense or is not factually coherent, I will explain why instead of providing an incorrect answer. If I don't know the answer to a question, I will let you know and do my best to find a reliable source of information for you. Please feel free to ask me anything, and I'll do my best to assist you.",
    "is_truncated": false,
    "need_clear_history": false,
    "usage": {
        "prompt_tokens": 3,
        "completion_tokens": 172,
        "total_tokens": 175
    }
}
Request Example (Multi-Turn)
# Step 1: obtain an access_token. Replace [API Key] and [Secret Key] with your application's API Key and Secret Key.
curl 'https://aip.baidubce.com/oauth/2.0/token?grant_type=client_credentials&client_id=[API Key]&client_secret=[Secret Key]'
# Step 2: call the API, using the access_token obtained in Step 1 to replace [access_token obtained in Step 1].
curl -X POST 'https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/llama_3_70b?access_token=[access_token obtained in Step 1]' -d '{
  "messages": [
    {"role":"user","content":"Please introduce yourself"},
    {"role":"assistant","content":" Hello! My name is Assistant, and I’m here to help you with any questions or concerns you may have. I strive to provide respectful, honest, and helpful responses that are free from harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. I’m here to assist you in a socially unbiased and positive manner, and I will do my best to provide accurate and reliable information. If a question doesn’t make sense or is not factually coherent, I will explain why instead of providing an incorrect answer. If I don’t know the answer to a question, I will let you know and do my best to find a reliable source of information for you. Please feel free to ask me anything, and I’ll do my best to assist you."},
    {"role":"user","content": "How about the weather in Shanghai"}
  ]
}' | iconv -f utf-8 -t utf-8
import requests
import json


def get_access_token():
    """
    Obtain an access_token using the application API Key and Secret Key.
    Replace [API Key] and [Secret Key] below with your application's values.
    """
    url = "https://aip.baidubce.com/oauth/2.0/token?grant_type=client_credentials&client_id=[API Key]&client_secret=[Secret Key]"
    payload = json.dumps("")
    headers = {
        'Content-Type': 'application/json',
        'Accept': 'application/json'
    }
    response = requests.request("POST", url, headers=headers, data=payload)
    return response.json().get("access_token")


def main():
    url = "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/llama_3_70b?access_token=" + get_access_token()
    payload = json.dumps({
        "messages": [
            {
                "role": "user",
                "content": "Please introduce yourself"
            },
            {
                "role": "assistant",
                "content": " Hello! My name is Assistant, and I'm here to help you with any questions or concerns you may have. I strive to provide respectful, honest, and helpful responses that are free from harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. I'm here to assist you in a socially unbiased and positive manner, and I will do my best to provide accurate and reliable information. If a question doesn't make sense or is not factually coherent, I will explain why instead of providing an incorrect answer. If I don't know the answer to a question, I will let you know and do my best to find a reliable source of information for you. Please feel free to ask me anything, and I'll do my best to assist you."
            },
            {
                "role": "user",
                "content": "How about the weather in Shanghai"
            }
        ]
    })
    headers = {
        'Content-Type': 'application/json'
    }
    response = requests.request("POST", url, headers=headers, data=payload)
    print(response.text)


if __name__ == '__main__':
    main()
Response Example (Multi-Turn)
{
    "id": "as-i4g16spxm8",
    "object": "chat.completion",
    "created": 1693291334,
    "result": " Sure, here's the current weather conditions and forecast for Shanghai, China:\n\nCurrent Weather:\n\n* Temperature: 22°C (72°F)\n* Humidity: 62%\n* Wind Speed: 20 km/h (12 mph)\n* Conditions: Partly Cloudy\n\nForecast:\n\n* Today: Partly cloudy skies with a high of 25°C (77°F) and a low of 18°C (64°F).\n* Tomorrow: Intervals of clouds and sunshine with a high of 26°C (79°F) and a low of 19°C (66°F).\n* Weekend: Mostly sunny with a high of 28°C (82°F) on Saturday and 27°C (81°F) on Sunday.\n\nNote: These weather conditions are subject to change and are based on the current data available.\n\nI hope this helps! Let me know if you have any other questions.",
    "is_truncated": false,
    "need_clear_history": false,
    "usage": {
        "prompt_tokens": 182,
        "completion_tokens": 141,
        "total_tokens": 323
    }
}
Request Example (Streaming)
# Step 1: obtain an access_token. Replace [API Key] and [Secret Key] with your application's API Key and Secret Key.
curl 'https://aip.baidubce.com/oauth/2.0/token?grant_type=client_credentials&client_id=[API Key]&client_secret=[Secret Key]'
# Step 2: call the API, using the access_token obtained in Step 1 to replace [access_token obtained in Step 1].
curl -X POST 'https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/llama_3_70b?access_token=[access_token obtained in Step 1]' -d '{
  "messages": [
    {"role":"user", "content": "Please introduce The Great Wall"}
  ],
  "stream": true
}'
import requests
import json


def get_access_token():
    """
    Obtain an access_token using the application API Key and Secret Key.
    Replace [API Key] and [Secret Key] below with your application's values.
    """
    url = "https://aip.baidubce.com/oauth/2.0/token?grant_type=client_credentials&client_id=[API Key]&client_secret=[Secret Key]"
    payload = json.dumps("")
    headers = {
        'Content-Type': 'application/json',
        'Accept': 'application/json'
    }
    response = requests.request("POST", url, headers=headers, data=payload)
    return response.json().get("access_token")


def main():
    url = "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/llama_3_70b?access_token=" + get_access_token()
    payload = json.dumps({
        "messages": [
            {
                "role": "user",
                "content": "Please introduce The Great Wall"
            }
        ],
        "stream": True
    })
    headers = {
        'Content-Type': 'application/json'
    }
    response = requests.request("POST", url, headers=headers, data=payload, stream=True)
    for line in response.iter_lines():
        print(line.decode("UTF-8"))


if __name__ == '__main__':
    main()
Response Example (Streaming)
data: {"id":"as-2m3nh3th6b","object":"chat.completion","created":1702551868,"sentence_id":0,"is_end":false,"is_truncated":false,"result":" Hello! I'm happy to help you with your question. The Great Wall of China is an ancient series of fortifications built to protect the Chinese Empire from invasions by foreign enemies, particularly the nomadic tribes from the north. It stretches for over 4,000 miles, making it one of the longest structures ever built.\n\nThe Great Wall was constructed over several centuries, with the first versions being built as ","need_clear_history":false,"usage":{"prompt_tokens":5,"completion_tokens":0,"total_tokens":5}}
data: {"id":"as-2m3nh3th6b","object":"chat.completion","created":1702551874,"sentence_id":1,"is_end":false,"is_truncated":false,"result":"early as the 7th century BC. It was continuously expanded and fortified over time, with the most famous and well-preserved sections being built during the Ming Dynasty (1368-1644).\n\nThe Great Wall is not only an engineering marvel but also a testament to the ingenuity and determination of the Chinese people. It's a popular tourist destination, with many visitors each year walking or hiking along its historic paths.\n\nIt's important to ","need_clear_history":false,"usage":{"prompt_tokens":5,"completion_tokens":0,"total_tokens":5}}
data: {"id":"as-2m3nh3th6b","object":"chat.completion","created":1702551880,"sentence_id":2,"is_end":true,"is_truncated":false,"result":"note that while the Great Wall is an incredible feat of engineering and history, it's not without controversy. Some sections of the wall have been damaged or destroyed over time, and there are concerns about the impact of tourism on the structure's preservation. Additionally, it's important to recognize the cultural and historical context of the wall, and to be respectful of the communities and traditions that it represents.","need_clear_history":false,"usage":{"prompt_tokens":5,"completion_tokens":276,"total_tokens":281}}
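To reconstruct the full answer from a stream like the one above, the result fields of successive chunks can be concatenated until is_end is true. A minimal sketch, assuming the same endpoint URL and an access_token obtained as in the earlier examples:
import json
import requests

def stream_chat(url: str, payload: dict) -> str:
    """Send a streaming request and return the concatenated result text."""
    headers = {'Content-Type': 'application/json'}
    pieces = []
    with requests.post(url, headers=headers, data=json.dumps(payload), stream=True) as resp:
        for raw in resp.iter_lines(decode_unicode=True):
            if not raw or not raw.startswith("data:"):
                continue  # skip blank keep-alive lines
            chunk = json.loads(raw[len("data:"):].strip())
            pieces.append(chunk.get("result", ""))
            if chunk.get("is_end"):  # last segment of the stream
                break
    return "".join(pieces)

For example, stream_chat(url, {"messages": [{"role": "user", "content": "Please introduce The Great Wall"}], "stream": True}) would return the three result segments above joined into one string.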
SDK Invocation
The platform supports calling this API via the Python, Go, Java, and Node.js SDKs. For SDK usage, see the Inference Service V1 - Chat documentation.
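As a rough sketch only: assuming the qianfan Python SDK is installed and that this service can be addressed by the endpoint name llama_3_70b seen in the request URL above (both are assumptions; consult the SDK documentation referenced above for the authoritative usage), a call might look like:
import os
import qianfan

# Assumed environment-variable credentials for the SDK; values are placeholders.
os.environ["QIANFAN_AK"] = "your_api_key"
os.environ["QIANFAN_SK"] = "your_secret_key"

# The endpoint name is assumed from the request URL above; verify it against the SDK docs.
chat = qianfan.ChatCompletion(endpoint="llama_3_70b")
resp = chat.do(messages=[{"role": "user", "content": "Please introduce yourself"}])
print(resp["result"])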
Error Codes
If the request fails, the JSON returned by the server contains the following fields.
Name | Description |
---|---|
error_code | Error code |
error_msg | Error message that helps you understand and resolve the error |
For example, when the Access Token is invalid or has expired, the following is returned; obtain a new Access Token and retry the request.
{
    "error_code": 110,
    "error_msg": "Access token invalid or no longer valid"
}
For more error codes, see the error code documentation.
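For instance, a caller might check error_code in the response body and refresh the token when it is reported invalid; a minimal sketch reusing the get_access_token() helper from the request examples above (the single-retry policy is illustrative only):
import json
import requests

def chat_with_token_retry(payload: dict) -> dict:
    """Call the chat API; if the token is reported invalid (error_code 110), refresh it and retry once."""
    base_url = "https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/llama_3_70b"
    headers = {'Content-Type': 'application/json'}
    body = {}
    for _ in range(2):
        url = base_url + "?access_token=" + get_access_token()  # helper defined in the examples above
        body = requests.post(url, headers=headers, data=json.dumps(payload)).json()
        if body.get("error_code") != 110:  # 110: access token invalid or no longer valid
            return body
    return body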