FRAKTIΛ agents can ingest and emit data through multiple channels simultaneously, enabling them to operate across platforms (web, mobile, IoT) and modalities (voice, text, real-time events). This layer abstracts input/output into a programmable interface, allowing developers to configure how agents communicate, respond and synchronize across contexts.
## Supported Input/Output Modalities
| Channel Type | Examples |
| --- | --- |
| Text Input | Web app chat, CLI, WebSocket streams |
| Voice Input | Real-time STT (via ElevenLabs, Whisper) |
| Voice Output | TTS via ElevenLabs, Google Cloud, etc. |
| Sensor Streams | MQTT / IoT data (e.g., temperature, GPS) |
| Webhooks | API-triggered input/output events |
| On-Chain Events | Smart contract logs, oracle feeds |
## Voice Layer Configuration (`persona.voice`)
Voice can be toggled per channel or replaced dynamically based on runtime conditions.
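The full `persona.voice` schema is not reproduced in this excerpt; the fragment below is an illustrative sketch in which the `provider`, `channels`, and `fallback` keys are assumptions, showing voice enabled per channel with a text fallback:

```json
{
  "persona": {
    "voice": {
      "provider": "elevenlabs",
      "voice_id": "example-voice-id",
      "channels": {
        "mobile": { "enabled": true, "mode": "bidirectional" },
        "desktop": { "enabled": false },
        "discord": { "enabled": true, "mode": "listen-only" }
      },
      "fallback": "text"
    }
  }
}
```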
## Example: Bidirectional Voice Interaction
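FRAKTIΛ's real voice hooks are not documented in this excerpt, so the sketch below stubs out STT and TTS with placeholder functions (`transcribe`, `synthesize`, and `agent_reply` are assumptions, not SDK calls) to show the shape of one bidirectional turn: audio in → transcription → agent reply → audio out.

```python
# Illustrative bidirectional voice loop. The STT/TTS calls are stubs;
# in practice they would wrap a provider such as Whisper or ElevenLabs.

def transcribe(audio_chunk: bytes) -> str:
    """Placeholder STT: a real implementation would call a speech-to-text API."""
    return audio_chunk.decode("utf-8")  # stand-in for recognized speech

def synthesize(text: str) -> bytes:
    """Placeholder TTS: a real implementation would call a text-to-speech API."""
    return text.encode("utf-8")  # stand-in for rendered audio

def agent_reply(user_text: str) -> str:
    """Placeholder agent logic."""
    return f"You said: {user_text}"

def handle_voice_turn(audio_chunk: bytes) -> bytes:
    """One bidirectional turn: voice in, voice out."""
    user_text = transcribe(audio_chunk)
    reply_text = agent_reply(user_text)
    return synthesize(reply_text)
```

In a streaming deployment the same turn structure applies, but `transcribe` and `synthesize` would operate on chunked audio over a WebSocket rather than a single buffer.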
## IoT / Sensor Integration
Input from hardware (e.g., motion sensor, drone, soil scanner) can be ingested like this:
This allows agents to react autonomously to real-world changes.
## Channel Multiplexing Example
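FRAKTIΛ's multiplexing API is not shown in this excerpt; the asyncio sketch below merges two hypothetical channels (a text producer and a sensor producer) onto one shared inbox, so a single agent loop handles events from either source, tagged by channel:

```python
import asyncio

async def agent_loop(inbox: asyncio.Queue, n_events: int) -> list[str]:
    """Consume events from a shared inbox, regardless of producing channel."""
    handled = []
    for _ in range(n_events):
        channel, event = await inbox.get()
        handled.append(f"[{channel}] {event}")
    return handled

async def demo() -> list[str]:
    inbox: asyncio.Queue = asyncio.Queue()
    # Two producers multiplexed onto one inbox; in a real agent these
    # would be e.g. a WebSocket reader and an MQTT subscriber.
    await inbox.put(("text", "hello"))
    await inbox.put(("sensor", "temp=31.5"))
    return await agent_loop(inbox, n_events=2)
```

Running `asyncio.run(demo())` yields the two events in arrival order, each labeled with its source channel.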
## Use Case Examples
✦ An agent that listens via Discord voice and posts alerts via SMS.
✦ An agent embedded in a drone that responds to both voice and sensor readings.
✦ A chatbot that uses voice when accessed via mobile and text via desktop.