# agent.rs Environment Configuration
# Copy this file to .env and configure with your values
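#
# Quick start (assumed workflow; adjust paths and tooling to your setup):
#   cp .env.example .env
#   # edit .env with your values, then load it into the current shell:
#   set -a; source .env; set +a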
# =============================================================================
# Native Demo (Shell) - Local LLM with llama.cpp
# =============================================================================
# Path to GGUF model file for native shell demo
# Example models:
# - TinyLlama 1.1B: https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF
# - Phi-2: https://huggingface.co/TheBloke/phi-2-GGUF
# - Mistral 7B: https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GGUF
MODEL_PATH=/path/to/your/model.gguf
# =============================================================================
# Edge Demo (Deno) - HTTP-based LLM
# =============================================================================
# LLM endpoint (OpenAI-compatible chat/completions API)
# Options:
# - OpenAI: https://api.openai.com/v1/chat/completions
# - Local server: http://localhost:8080/v1/chat/completions
# - Other providers (Anthropic, Azure, etc.) via OpenAI-compatible adapters
LLM_ENDPOINT=https://api.openai.com/v1/chat/completions
# Model name to use for edge demo
# Examples:
# - OpenAI: gpt-3.5-turbo, gpt-4
# - Local: granite-3.1-2b-instruct, llama-2-7b-chat
LLM_MODEL=gpt-3.5-turbo
# API key for LLM endpoint authentication (optional for local servers)
# For OpenAI: Get from https://platform.openai.com/api-keys
LLM_API_KEY=sk-your-api-key-here
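#
# Sanity check (optional): the variables above can be verified against the
# endpoint with a standard OpenAI-style chat/completions request, e.g.:
#   curl "$LLM_ENDPOINT" \
#     -H "Authorization: Bearer $LLM_API_KEY" \
#     -H "Content-Type: application/json" \
#     -d "{\"model\": \"$LLM_MODEL\", \"messages\": [{\"role\": \"user\", \"content\": \"hi\"}]}"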
# =============================================================================
# Browser Demo - No configuration needed
# =============================================================================
# Browser demo uses WebLLM for local-first inference
# Model: Qwen2.5-3B-Instruct-q4f16_1-MLC (precompiled, downloaded on first run)
# No API keys or external services required