Replies: 53 comments 111 replies
-
Can I enable the AI in an unofficial way in my instance now? The answer is yes.

Steps for those using the provided compose file:
1. Create a JSON config:
{
  "$schema": "https://github.com/toeverything/affine/releases/latest/download/config.schema.json",
  "copilot": {
    "enabled": true,
    "providers.openai": {
      "apiKey": "your key",
      "baseUrl": "open-ai-compatible.example.com"
    }
  }
}
2. Restart your host: docker compose up -d

Steps for a custom deployment:
1. Create a JSON config:
{
  "$schema": "https://github.com/toeverything/affine/releases/latest/download/config.schema.json",
  "copilot": {
    "enabled": true,
    "providers.openai": {
      "apiKey": "your key"
    }
  }
}
2. Mount or copy the config file into your instance.
3. Import the config: node --import ./scripts/register.js ./dist/data/index.js import-config ./path/to/config.json
4. Restart your host.
-
How can I configure DeepSeek, and how do I set the model? Moreover, in the 0.21.2 admin panel, the AI function cannot be activated.
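For what it's worth, DeepSeek exposes an OpenAI-compatible API, so a config along these lines may work (the base URL is from DeepSeek's public docs; the rest mirrors the compose-file config shown earlier in this thread and is untested here):

```json
{
  "$schema": "https://github.com/toeverything/affine/releases/latest/download/config.schema.json",
  "copilot": {
    "enabled": true,
    "providers.openai": {
      "apiKey": "sk-...",
      "baseUrl": "https://api.deepseek.com"
    }
  }
}
```

The model requested would also need to be one DeepSeek actually serves, e.g. "deepseek-chat".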
-
Hello! Thanks for the tips for the self-hosted instance. Will there be AI support for local workspaces?
-
There still seems to be a 10-use limit. I configured my own API key and can see the usage records in the OpenRouter dashboard, but the AI in AFFiNE can still only be used 10 times.
-
I created a small server for myself that sits between AFFiNE and Gemini or OpenRouter, which provide free API keys. Not sure if this is allowed here in this chat.
-
Since recently, I get an error message when trying to use it. My relevant config:
{
  "$schema": "https://github.com/toeverything/affine/releases/latest/download/config.schema.json",
  "server": {
    "name": "myname",
    "host": "affine.myname.de",
    "https": true
  },
  "copilot": {
    "enabled": true,
    "providers.openai": {
      "apiKey": "sk-proj-...",
    }
  }
}
Any ideas?
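One thing worth checking, assuming the snippet above is pasted verbatim: the trailing comma after the "apiKey" line makes the file invalid JSON, which a strict parser will reject. The copilot block without it:

```json
{
  "copilot": {
    "enabled": true,
    "providers.openai": {
      "apiKey": "sk-proj-..."
    }
  }
}
```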
-
Which docker version does your solution work with? I have tried both the stable and canary versions, and neither works with your configuration. I'd like to know which version works so I can diff the code from then to now, figure out a fix, and possibly open a GitHub PR with a solution.
-
Is there a way to define which model to use?
-
I use OpenResty to redirect the requests to DeepSeek, together with the config.json.
-
Is there a usage cap when using AI on self-hosted, even when using my own AI token?
-
Even though I set the config.json (verified it's in place with cat config.json in root/.affine/config) and enabled unlimited AI usage, I get an error when trying to send a message.
-
I hope AFFiNE will support custom providers (Ollama, DeepSeek, ...).
-
Hey guys, since ./scripts/register.js got removed, I can't import my AI config anymore with node --import ./scripts/register.js ./dist/data/index.js import-config ./path/to/config.json. Did anybody get AI to work on 0.22.2?
-
Any updates?
-
Just wanted to drop by and report that I've finally managed to connect AFFiNE to my LLM server (currently Ollama, but it doesn't matter; any OpenAI-API-compatible server will work) using the solution proposed in https://github.com/axcode07/affine_ai_helper. Except I took the idea and incorporated it into my own OpenAI proxy, which also forces a specific model name, just in case, so the model name sent by AFFiNE is completely ignored and replaced (not sure whether affine_ai_helper does this too). This still means that AFFiNE does not follow the OpenAI API spec correctly, and that is something that has to be fixed. Writing adapters like that should not be required for software that claims to support local LLM servers.
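The model-forcing trick described above can be sketched as a tiny request rewriter (a sketch only; FORCED_MODEL and the example model names are illustrative assumptions, not from the original post):

```python
import json

# Every request is rewritten to this model, regardless of what AFFiNE sends.
FORCED_MODEL = "llama3"  # illustrative; use whatever your local server hosts

def force_model(raw_body: bytes) -> bytes:
    """Rewrite the `model` field of an OpenAI-style chat request body."""
    payload = json.loads(raw_body)
    payload["model"] = FORCED_MODEL  # ignore whatever model AFFiNE asked for
    return json.dumps(payload).encode()

# Whatever model AFFiNE requests, the upstream server sees FORCED_MODEL:
original = json.dumps(
    {"model": "gpt-4o", "messages": [{"role": "user", "content": "hi"}]}
).encode()
rewritten = json.loads(force_model(original))
```

In a real proxy this function would run on each request body before forwarding it to the upstream /v1/chat/completions endpoint; the rest of the request passes through untouched.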
-
After a long time and numerous updates to AFFiNE, I'm attempting to solve this problem again. Now, as of v0.25.0, you can add a custom copilot to a self-hosted instance using the following method:
{
"$schema": "https://github.com/toeverything/AFFiNE/releases/latest/download/config.schema.json",
"copilot": {
"enabled": true,
"scenarios": {
"override_enabled": true,
"scenarios": {
"audio_transcribing": "gemini-2.5-flash",
"chat": "gemini-2.5-flash",
"embedding": "gemini-embedding-001",
"image": "gpt-image-1",
"rerank": "gpt-4.1",
"coding": "claude-sonnet-4-5@20250929",
"complex_text_generation": "gpt-4o-2024-08-06",
"quick_decision_making": "gpt-5-mini",
"quick_text_generation": "gemini-2.5-flash",
"polish_and_summarize": "gemini-2.5-flash"
}
},
"providers.openai": {
"apiKey": "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"baseUrl": "https://api.openai.com/v1"
},
"exa": {
"key": "xxxxxxxxxxx"
}
}
}
Fill in your own apiKey and baseUrl. Each model name can be modified as needed; ensure the models are accessible through your providers. Providers can be OpenAI, Gemini, etc.
For those using local LLMs, services like new-api can be used to translate requests to an OpenAI-compatible format, although this inherently adds a certain level of complexity.
-
Although version 0.25.1 has been released, it still does not support custom providers such as Ollama and DeepSeek. Is there an estimated timeline for when this feature will be available?
-
I have AI working. Am I correct in assuming the names in the UI will not change regardless of what I put in the AI config? Will they always say Gemini?
-
Web Search fails in the AFFiNE AI option, and I am using a Gemini API key.
-
ty
-
Does anyone know how to integrate OpenRouter in place of OpenAI?
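Since OpenRouter exposes an OpenAI-compatible endpoint, one sketch worth trying (the base URL is OpenRouter's documented OpenAI-compatible endpoint; the rest mirrors the configs earlier in this thread and is untested here):

```json
{
  "$schema": "https://github.com/toeverything/affine/releases/latest/download/config.schema.json",
  "copilot": {
    "enabled": true,
    "providers.openai": {
      "apiKey": "sk-or-...",
      "baseUrl": "https://openrouter.ai/api/v1"
    }
  }
}
```

Note that model names may need OpenRouter's vendor-prefixed form (e.g. "openai/gpt-4o") rather than the bare names AFFiNE uses by default.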
-
What's going on here?
The pricing page for self-hosted AFFiNE doesn't even mention AI usage limits. 😢
-
I don't know if it's the same for you, but after struggling for a few days, I went to Admin Settings > Accounts, clicked the 3 dots on the far right under "Actions" for my username > Edit > then selected UnlimitedCopilot. Everything worked fine for me after that. I hope it's as easy for someone else as it was for me.
-
Can people share their working configs? Someone post a working OpenRouter config, a LiteLLM one, an Ollama one, etc.; then we can put them at the top in the main message. @forehalo, this thread is getting long, with lots of conflicting information.
-
Here, I use LiteLLM to proxy third-party models, employing a reverse proxy (a Cloudflare Worker or a Caddy server) to proxy the Gemini API. I didn't cover all the scenarios; this only solves the basic problem of using third-party models in a self-hosted AFFiNE.
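A minimal LiteLLM proxy config for this kind of setup might look like the following (the model names and the Ollama address are illustrative assumptions; check the LiteLLM docs for the exact schema):

```yaml
# litellm_config.yaml - expose a local model under an OpenAI-style name
model_list:
  - model_name: gemini-2.5-flash        # name AFFiNE will request
    litellm_params:
      model: ollama/llama3              # actual backend model
      api_base: http://localhost:11434  # local Ollama server
```

AFFiNE's providers.openai baseUrl would then point at the LiteLLM proxy itself (by default it listens on port 4000).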
-
I wonder if everyone is still interested in this topic. I've made an important discovery (with the help of GitHub Copilot): it comes down to simply setting a single environment variable. The article has been updated, and I hope it will be helpful for everyone: https://torchtree.com/en/post/affine-selfhost-ai-configuration/
-
This issue persists with self-hosted LLMs like vLLM and Ollama.
-
I tried everything in this thread and Copilot is still not working. Even using an OpenAI key after a fresh install, I get an error. I'm using LiteLLM, which works well with other tools, with the config for the openai provider set accordingly. I even used a proxied URL, and AFFiNE is not calling it when I configure it as the openai baseUrl... How can this be debugged, to at least find the URL being used?
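One generic way to answer the "which URL is it calling?" question (a sketch, not AFFiNE-specific advice): point the openai baseUrl at a tiny local HTTP server that does nothing but log every request it receives:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

seen = []  # paths of every request received

class LoggingHandler(BaseHTTPRequestHandler):
    def _log(self):
        length = int(self.headers.get("Content-Length") or 0)
        self.rfile.read(length)        # drain the request body
        seen.append(self.path)         # record the path the client hit
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b"{}")
    do_GET = do_POST = _log
    def log_message(self, *args):      # silence default stderr logging
        pass

server = HTTPServer(("127.0.0.1", 0), LoggingHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# Simulate a client request (AFFiNE would do this once baseUrl points here):
urllib.request.urlopen(f"http://127.0.0.1:{port}/v1/chat/completions", data=b"{}").read()
server.shutdown()
print(seen)  # e.g. ['/v1/chat/completions']
```

With baseUrl set to http://127.0.0.1:PORT, the logged paths show exactly which endpoints AFFiNE tries to reach (and whether it reaches the proxy at all).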
-
Progress