Hooking the AI bot up to external APIs has been great, but I think it would be very useful for admins/devs to be able to build prompt chains using LCEL in the custom tool editor, enabling more complex workflows and much smarter responses. Or perhaps the custom tool editor could gain a "chain builder" feature that essentially wraps LangChain in a GUI for the admin.
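For a sense of what such a chain looks like, here's a minimal sketch of the prompt → model → parser pipeline that LCEL expresses. To keep it self-contained, the `Runnable` class and `fake_llm` below are plain-Python stand-ins I made up for illustration; real LCEL composes `langchain_core` Runnables with the same `|` operator and a real model call in the middle.

```python
# Minimal stand-in for an LCEL-style prompt chain. Everything here is
# illustrative: real LCEL uses langchain_core Runnables, and fake_llm
# stands in for an actual LLM call.

class Runnable:
    """Composable step: chain steps with `|`, run the chain with `.invoke()`."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Feed this step's output into the next step.
        return Runnable(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# Step 1: format a prompt from the user's input.
prompt = Runnable(lambda q: f"Summarize the forum topic: {q}")
# Step 2: hypothetical model call (a real chain would hit an LLM here).
fake_llm = Runnable(lambda p: f"[model answer to: {p}]")
# Step 3: post-process the model output.
parser = Runnable(lambda s: s.strip())

chain = prompt | fake_llm | parser
print(chain.invoke("Why is my category icon missing?"))
```

A chain-builder GUI would essentially let an admin assemble that `prompt | model | parser` pipeline by clicking steps together instead of writing code.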
Alternatively, it would be super useful to allow an execution time longer than two seconds, so that a prompt chain can be hosted on a different server and exposed to the Discourse server. Two seconds isn't long enough to send the chain some data, let it "think" through its steps, and get an answer back. But this is more complex and niche, and less friendly for non-engineers, so the first proposition makes more sense.
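As a rough sketch of that second option, here's what a chain hosted on a separate server might look like, using only the Python standard library. The `/chain` endpoint, the JSON payload shape, and `run_chain` are all hypothetical names for illustration, not anything Discourse or LangChain actually defines; a real deployment would run the LangChain chain inside `run_chain`.

```python
# Hypothetical sketch: a tiny HTTP service wrapping a prompt chain,
# which a Discourse custom tool could call. Endpoint name, payload
# shape, and run_chain are all made up for illustration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_chain(question: str) -> str:
    # Stand-in for the real prompt chain; this is where the multi-step
    # "thinking" that needs more than a 2-second budget would happen.
    return f"answer for: {question}"

class ChainHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/chain":  # hypothetical endpoint
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"answer": run_chain(payload["question"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

# Port 0 lets the OS pick a free port; server.serve_forever() would start it.
server = HTTPServer(("127.0.0.1", 0), ChainHandler)
```

The Discourse-side custom tool would then just be a thin HTTP client posting the user's question and returning the `answer` field, which is why the execution-time limit matters so much here.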
Of course, both of these options introduce potential risks, so ideally there'd be a toggle to enable them, with a warning/notice attached.
But why even do this? Well, I'm glad you asked.
Complex AI behavior/agency isn't really achievable with a single system prompt alone; that's precisely why LangChain was developed. I'm an AI researcher, and we're building a (for now) simple LLM-based agentic cognitive architecture that will eventually interact with users. Discourse would be the ideal medium for unleashing this beast, especially since it can be coupled with the automations plugin — the possibilities are truly endless here and I love it.
I'm confident that many other companies would benefit from being able to apply agents to their forums as well. LLMs are quite impressive, but we recognize that elements of symbolic (logic/rule-based) AI are still applicable within larger AI systems, such as cognitive architectures built to optimize toward a particular goal state.
I would love to hear any feedback, brainstorming, and critiques of this idea!