The Cognitive Engine is the thinking unit of every FRAKTIΛ agent. It determines how an agent interprets input, generates responses and evaluates conditions, whether via LLMs, symbolic logic or hybrid models.
This module is fully pluggable, allowing agents to run on external LLM APIs, decentralized inference networks or custom in-house models.
Supported Model Backends
Engine Type | Examples |
---|---|
Cloud LLMs | |
Local Models | |
Inference APIs | |
On-chain Logic* | |
*Agents can use multiple engines and switch contextually (e.g. fallback, routing, specialization).
Cognitive Engine Block (JSON)
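The official schema is not reproduced here, so the block below is only a minimal sketch of how a cognitive engine definition could look. Every field name and value (type, provider, model, fallback, keys) is an assumption for illustration, not FRAKTIΛ's actual format.

```jsonc
// Illustrative sketch only — field names and values are assumptions, not the official FRAKTIΛ schema
{
  "cognitive_engine": {
    "primary": {
      "type": "cloud_llm",            // hypothetical engine type
      "provider": "openai",
      "model": "gpt-4o",
      "temperature": 0.7
    },
    "fallback": {
      "type": "local_model",          // used if the primary engine is unavailable
      "model": "llama-3-8b-instruct"
    },
    "keys": {
      "storage": "encrypted_per_agent"  // mirrors the per-agent key encryption described under Security & Privacy
    }
  }
}
```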
Multi-Engine Agents
You can map different tasks to different models:
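The mapping format is not specified in this section, so the following is a hedged sketch: the task names ("chat", "classification", "risk_checks") and model identifiers are hypothetical and only illustrate the idea of routing different tasks to different engines.

```jsonc
// Hypothetical task-to-engine routing — keys and model names are illustrative, not the official schema
{
  "engines": {
    "chat":           { "type": "cloud_llm",      "model": "gpt-4o" },
    "classification": { "type": "local_model",    "model": "mistral-7b-instruct" },
    "risk_checks":    { "type": "on_chain_logic", "program": "rule_set_v1" }
  },
  "routing": {
    "default": "chat",
    "fallback_order": ["classification"]  // contextual fallback, as noted above
  }
}
```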
Security & Privacy
✦ Tokens and keys are encrypted per agent.
✦ Agents can request ephemeral access for specific models.
✦ Model usage is tracked and cost-logged via the runtime.
✦ Fine-grained audit logs on model usage are coming soon.
Use Cases Enabled
✦ Natural language interpretation (intent, emotion, instruction parsing).
✦ Multi-modal decision workflows (text → action → response).
✦ Symbolic filtering + LLM fallback.
✦ Model chaining via Add-On coordination (e.g., agent A → B → C); see the sketch after this list.
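To make the last two items concrete, here is a hedged sketch of a symbolic-filter-plus-LLM-fallback pipeline chained across agents. The "pipeline", "step", and "chain" keys are assumptions for illustration, not FRAKTIΛ's actual Add-On format.

```jsonc
// Hypothetical sketch — "pipeline", "step", and "chain" are assumed names, not the documented Add-On format
{
  "pipeline": [
    { "step": "symbolic_filter", "rules": ["intent_allowlist", "blocklist"] },  // deterministic pre-check
    { "step": "llm", "engine": "chat", "on_filter_reject": "drop" }             // generative fallback
  ],
  "chain": ["agent_A", "agent_B", "agent_C"]  // Add-On coordinated handoff
}
```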
Strategic Role
The Cognitive Engine turns logic trees into intelligent conversation, task fulfillment, and perception; it is where deterministic structure meets generative flexibility.