Hi, this repo does not implement a task queue. Since async patterns are used throughout and most of the heavy processing happens in the separate LLM service (OpenAI, Claude, or similar), I don't expect this service to be the bottleneck under moderate (and perhaps even fairly large) simultaneous traffic, assuming it runs on a reasonably sized machine. We also recently added better support for connection pooling with the agent state database if you use the Postgres connection.

With that said, I have not tested it under substantial simultaneous load, and I'm not aware of anyone else who has. This is NOT designed to be a scaled-out, production-grade service. I would love to hear about any testing e…
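To illustrate the connection-pooling idea mentioned above: a pool keeps a small, fixed set of open database connections and hands them out to concurrent requests instead of opening a new connection per request. Below is a minimal toy sketch built on `asyncio.Queue` — it is NOT this repo's implementation (a real deployment would use something like `asyncpg.create_pool` or SQLAlchemy's async engine against Postgres); the names and pool size here are illustrative assumptions.

```python
import asyncio


class ConnectionPool:
    """Toy async connection pool: opens at most `size` connections,
    reusing released ones across concurrent requests."""

    def __init__(self, connect, size=5):
        self._connect = connect          # coroutine that opens one connection
        self._size = size                # hard cap on open connections
        self._idle = asyncio.Queue()     # released connections wait here
        self._created = 0

    async def acquire(self):
        # Open a new connection only while under the cap and nothing is idle;
        # otherwise wait for a connection to be released.
        if self._idle.empty() and self._created < self._size:
            self._created += 1
            return await self._connect()
        return await self._idle.get()

    async def release(self, conn):
        await self._idle.put(conn)


async def fake_connect():
    await asyncio.sleep(0)               # stand-in for a real DB handshake
    return object()                      # stand-in for a real connection


async def main():
    pool = ConnectionPool(fake_connect, size=2)
    c1 = await pool.acquire()
    c2 = await pool.acquire()
    await pool.release(c1)
    c3 = await pool.acquire()            # reuses c1 rather than opening a third
    return c1 is c3, pool._created


reused, created = asyncio.run(main())
print(reused, created)  # True 2
```

The design choice being sketched is why pooling helps under simultaneous traffic: connection setup is paid at most `size` times, and concurrency beyond the cap queues on `acquire` instead of exhausting Postgres's connection limit.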

Replies: 8 comments 5 replies

Answer selected by gaochenxi
Category: Q&A · Labels: none yet · 6 participants