interlocute.ai beta
Out-of-the-box agent

Interactive Assistant

Spin up a conversational AI assistant with memory, threading, and streaming — no glue code required.

What you get out of the box

- Set up in minutes — no infrastructure to provision
- Persistent conversation threads with long-term memory
- Real-time streaming responses (SSE)
- Cross-thread awareness for parallel workspaces
- Full governance, disclosures, and usage tracking
- Customisable persona and constitution
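The streaming feature above delivers responses as server-sent events (SSE). The sketch below parses the standard SSE wire format — `event:` and `data:` fields, events separated by a blank line. The event names (`token`, `done`) are illustrative placeholders, not the platform's documented schema.

```python
def parse_sse(stream: str) -> list[dict]:
    """Parse a raw SSE stream into a list of events.

    Follows the standard SSE framing: fields on their own lines,
    a blank line terminating each event.
    """
    events = []
    current = {"event": "message", "data": []}
    for line in stream.splitlines():
        if not line:  # blank line ends the current event
            if current["data"]:
                events.append({"event": current["event"],
                               "data": "\n".join(current["data"])})
            current = {"event": "message", "data": []}
        elif line.startswith("event:"):
            current["event"] = line[len("event:"):].strip()
        elif line.startswith("data:"):
            current["data"].append(line[len("data:"):].strip())
    return events

# Example stream: two token chunks, then a hypothetical done marker.
raw = (
    "event: token\ndata: Hello\n\n"
    "event: token\ndata: world\n\n"
    "event: done\ndata: [END]\n\n"
)
chunks = parse_sse(raw)
```

In practice a client would accumulate the token events into the visible reply and stop on the terminal event, whatever names the platform actually uses.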

How setup works

1. Sign up and create a new node
2. Select the Chat Assistant profile
3. Optionally customise the constitution and persona
4. Embed on your site or chat via the dashboard
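Besides the embed widget and dashboard, a node could be driven programmatically. The sketch below only builds a chat request; the endpoint URL, path segments, and auth scheme are assumptions for illustration, not a documented API — your dashboard is the source of truth for the real values.

```python
import json
import urllib.request

# Hypothetical identifiers -- replace with values from your dashboard.
NODE_ID = "your-node-id"
API_KEY = "your-api-key"
# Assumed endpoint shape; not taken from published documentation.
URL = f"https://api.interlocute.ai/v1/nodes/{NODE_ID}/messages"

payload = {"thread_id": "demo-thread", "message": "Hello, assistant!"}
req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",  # assumed auth scheme
        "Content-Type": "application/json",
    },
    method="POST",
)
# To actually send it (requires a live node and a valid key):
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```

The request is constructed but never sent here, so the sketch stands on its own without a live node.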

Try these prompts

- Help me brainstorm names for a new product launch
- Summarise this document and list the key takeaways
- Draft an email responding to this customer complaint
- What are the pros and cons of microservices vs monoliths?

Frequently Asked Questions

How fast can I set up a Chat Assistant agent?
You can have a fully functional conversational AI assistant running in under five minutes. Sign up, choose the Chat Assistant profile, and your node is live with persistent threads, streaming, and memory enabled by default. No backend code or infrastructure setup is needed.
What capabilities come out of the box?
The Chat Assistant ships with conversation memory, cross-thread awareness, artifact handling, persona customisation, real-time streaming, and full disclosure support. Every capability is governed by the platform contract and metered in your usage ledger.
Can I customise the assistant's personality and behaviour?
Yes. Each node has a constitution (system prompt) you can edit at any time. You can also enable or disable individual capabilities — for example, turning off cross-thread awareness or switching the disclosure mode — without redeploying anything.
Is the Chat Assistant safe for production use?
Absolutely. Every response is governed by the platform contract, usage is tracked per-request in an auditable ledger, and you can set budget limits to prevent runaway costs. The full disclosure mode lets callers inspect governance and capability metadata.
How does pricing work for the Chat Assistant?
Pricing is usage-based: a small platform premium on LLM tokens plus computation charges. There are no monthly fees and no per-seat costs. You pay only for the tokens your node actually consumes.
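The pricing structure above (token cost plus a small platform premium plus computation charges) can be sketched as simple arithmetic. Every rate below is a made-up placeholder, not interlocute.ai's actual pricing; only the shape of the calculation comes from the FAQ.

```python
# Illustrative rates -- assumptions, not real pricing.
LLM_RATE_PER_1K_TOKENS = 0.002       # assumed upstream LLM price (USD)
PLATFORM_PREMIUM = 0.10              # assumed 10% premium on token spend
COMPUTE_CHARGE_PER_REQUEST = 0.0001  # assumed per-request compute fee

def estimate_cost(tokens: int, requests: int) -> float:
    """Estimate spend: token cost with premium, plus compute charges."""
    token_cost = tokens / 1000 * LLM_RATE_PER_1K_TOKENS
    return round(token_cost * (1 + PLATFORM_PREMIUM)
                 + requests * COMPUTE_CHARGE_PER_REQUEST, 6)

# 500k tokens across 1,000 requests under these assumed rates:
monthly = estimate_cost(500_000, 1_000)
```

With no monthly fee or per-seat cost, zero usage costs zero: `estimate_cost(0, 0)` returns `0.0`.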

Ready to deploy?

Create your Interactive Assistant node in seconds and start building.