With data security and privacy protection receiving ever more attention, demand for privately deployed large language models keeps growing. AnythingLLM, an open-source project from Mintplex Labs Inc., gives individuals and enterprises a secure, efficient, and customizable solution. The tool is built on RAG (Retrieval-Augmented Generation): it lets users convert local documents into a format that a large language model (LLM) can reference, enabling conversational Q&A and knowledge management.
II. AnythingLLM Deployment in Practice
1. Install the Chroma vector store: deploy it in a Docker container, then create a collection and verify the setup. Clone the Chroma repository and bring the service up with Docker Compose:

git clone https://github.com/chroma-core/chroma
cd chroma
docker compose up -d --build

Access the vector store API docs:

2. Deploy LocalAI: start the API server with the CLI application and interact with open-source models.
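As a minimal sketch of bringing up the LocalAI API server itself (an assumption; the image name and options may differ from your setup), LocalAI can be launched from its published Docker image, which serves an OpenAI-compatible API on port 8080 by default:

# Assumed startup: run the published LocalAI image and expose the default API port 8080.
# Without a mounted models volume, models installed from the gallery will not persist;
# see the LocalAI docs for volume and GPU options.
docker run -d --name local-ai -p 8080:8080 quay.io/go-skynet/local-ai:latest

The curl examples below assume this server is reachable at http://localhost:8080.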
Once the container is running, we need to download and install models for testing.
BERT transformer embedding model: MiniLM L6
curl"Content-Type: application/json"-d '{ "id": "model-gallery@bert-embeddings" }'curl"Content-Type: application/json"-d '{ "input": "The food was delicious and the waiter...","model": "bert-embeddings" }'{"created": 1702050873,"object": "list","id": "b11eba4b-d65f-46e1-8b50-38d3251e3b52","model": "bert-embeddings","data": [{"embedding": [-0.043848168,0.067443006,...0.03223838,0.013112408,0.06982294,-0.017132297,-0.05828256],"index": 0,"object": "embedding"}],"usage": {"prompt_tokens": 0,"completion_tokens": 0,"total_tokens": 0 }}
LLM: Zephyr-7B-β
curl"Content-Type: application/json"-d '{ "id": "huggingface@thebloke__zephyr-7b-beta-gguf__zephyr-7b-beta.q4_k_s.gguf","name": "zephyr-7b-beta" }'curl"Content-Type: application/json"-d '{ "model": "zephyr-7b-beta","messages": [{"role": "user","content": "Why is the Earth round?"}],"temperature": 0.9 }'{"created": 1702050808,"object": "chat.completion","id": "67620f7e-0bc0-4402-9a21-878e4c4035ce","model": "thebloke__zephyr-7b-beta-gguf__zephyr-7b-beta.q4_k_s.gguf","choices": [{"index": 0,"finish_reason": "stop","message": {"role": "assistant","content": "\nThe Earth appears round because it isactually a spherical body. This shape is a result of thegravitational forces acting upon it from all directions. The forceof gravity pulls matter towards the center of the Earth, causingit to become more compact and round in shape. Additionally, theEarth's rotation causes it to bulge slightly at the equator,further contributing to its roundness. While the Earth may appearflat from a distance, up close it is clear that our planet isindeed round."}}],"usage": {"prompt_tokens": 0,"completion_tokens": 0,"total_tokens": 0}}
3. Deploy AnythingLLM: install it from the official Docker image, then configure the LocalAI backend and the embedding model.
docker pull mintplexlabs/anythingllm:master

export STORAGE_LOCATION="/var/lib/anythingllm" && \
mkdir -p $STORAGE_LOCATION && \
touch "$STORAGE_LOCATION/.env" && \
docker run -d -p 3001:3001 \
-v ${STORAGE_LOCATION}:/app/server/storage \
-v ${STORAGE_LOCATION}/.env:/app/server/.env \
-e STORAGE_DIR="/app/server/storage" \
mintplexlabs/anythingllm:master
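Before switching to the GUI, a quick sanity check with standard Docker and HTTP tools (using the port published above) confirms the container is up and the web UI is answering:

# The container should be listed as running, and the UI port should return an HTTP response.
docker ps --filter "ancestor=mintplexlabs/anythingllm:master"
curl -I http://localhost:3001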
Open the UI at http://localhost:3001 (the port published by the docker run command above); from here we can start configuring through an intuitive GUI.
LocalAI backend configuration: point AnythingLLM at the LocalAI server's base URL (the same host and port used in the curl examples above).
Embedding model configuration: keep it on the same LocalAI backend.
Next, configure the Chroma vector database, using the URL of the Chroma instance started in step 1.
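If AnythingLLM cannot connect, it helps to confirm that the Chroma server is reachable first; a heartbeat call against Chroma's v1 REST API (an assumption based on Chroma's documented API at the time, default port 8000) is a quick test:

# A healthy Chroma server replies with a nanosecond heartbeat timestamp.
curl http://localhost:8000/api/v1/heartbeat

Note that when AnythingLLM runs inside Docker, localhost inside the container is not the host machine, so the Chroma and LocalAI URLs usually need the host's address (for example the Docker bridge gateway IP) rather than localhost.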
Create a workspace and name it "Playground".
In the "Playground" workspace, we can upload documents to further expand the local knowledge base.
At this point, we can start interactive conversations with our documents.
Summary:
AnythingLLM and Vector Admin, Mintplex Labs' innovative open-source tools, greatly simplify building and managing a private knowledge base. With an efficient RAG implementation and an intuitive user interface, they keep data secure while offering powerful interactive document handling. As the technology continues to advance, these tools will bring even more possibilities and value to enterprises and individual users.
Learn more:
AnythingLLM GitHub:
LocalAI Docs:
AnythingLLM:
Original article link: