
Core Concepts

Understand the fundamental concepts behind Liine AI

Overview

Liine AI is built on several core concepts that work together to create powerful AI-driven applications.

Agents

Agents are the fundamental building blocks of Liine AI. An agent is an autonomous entity that can:

  • Process input from users or systems
  • Make decisions based on AI models
  • Execute actions and return results
  • Learn and adapt over time
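
As an illustration, a minimal agent can be pictured as a small object that takes input, consults a model, and returns a result. The `createAgent` helper and its option names below are assumptions made for this sketch, not the actual Liine AI SDK.

```typescript
// Hypothetical sketch only -- not the actual Liine AI SDK.
interface AgentConfig {
  name: string;
  model: string;        // which AI model the agent uses to make decisions
  instructions: string; // system-level guidance for the agent
}

function createAgent(config: AgentConfig) {
  return {
    ...config,
    // Process input, decide with the model, execute actions, return a result.
    async run(input: string): Promise<string> {
      // A real agent would call the configured model here and execute
      // any resulting actions before returning.
      return `[${config.name}] processed: ${input}`;
    },
  };
}

const supportAgent = createAgent({
  name: "support-bot",
  model: "gpt-4",
  instructions: "Answer customer questions politely and concisely.",
});
```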

Agent Types

Liine AI supports several types of agents:

  • Conversational Agents: Handle natural language interactions
  • Task Agents: Execute specific tasks and workflows
  • Analytical Agents: Process and analyze data
  • Multi-Agent Systems: Coordinate multiple agents working together
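
One way to picture these categories in code is as a simple union type with a description per type; the names below are illustrative only.

```typescript
// Illustrative only: one way to model the agent categories above in code.
type AgentType = "conversational" | "task" | "analytical" | "multi-agent";

const agentTypeDescriptions: Record<AgentType, string> = {
  conversational: "Handles natural language interactions",
  task: "Executes specific tasks and workflows",
  analytical: "Processes and analyzes data",
  "multi-agent": "Coordinates multiple agents working together",
};
```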

Workflows

Workflows define how data flows through your agent. A workflow consists of:

Nodes

Nodes are the individual processing units in a workflow:

  • Input Nodes: Receive data from users or external sources
  • Processing Nodes: Transform or analyze data
  • AI Nodes: Use AI models to generate responses or make decisions
  • Action Nodes: Execute external actions (API calls, database operations, etc.)
  • Output Nodes: Return results to the user
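
Putting these pieces together, a workflow can be sketched as a list of typed nodes plus the connections between them. The `Workflow` and `WorkflowNode` shapes below are assumptions made for illustration, not the Liine AI API.

```typescript
// Hypothetical workflow shape -- node kinds mirror the list above, but the
// structure of these objects is an assumption, not the Liine AI API.
interface WorkflowNode {
  id: string;
  kind: "input" | "processing" | "ai" | "action" | "output";
  config?: Record<string, unknown>;
}

interface Workflow {
  nodes: WorkflowNode[];
  connections: { source: string; target: string }[];
}

// A minimal flow: receive a message, classify it with an AI node,
// look it up via an action node, and return the result.
const supportFlow: Workflow = {
  nodes: [
    { id: "in", kind: "input" },
    { id: "classify", kind: "ai", config: { model: "gpt-4" } },
    { id: "lookup", kind: "action", config: { url: "https://api.example.com/kb" } },
    { id: "out", kind: "output" },
  ],
  connections: [
    { source: "in", target: "classify" },
    { source: "classify", target: "lookup" },
    { source: "lookup", target: "out" },
  ],
};
```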

Image Nodes

You can drop any PNG, JPG, or GIF directly onto the canvas to create an image node. Each node:

  • Captures the pointer position at drop time so the asset appears exactly where you intend.
  • Automatically keeps the source image’s aspect ratio during resizes, preventing distortion while still allowing you to scale the preview.
  • Supports replacement, download, URL copy, and editable alt text from the node menu so assets stay accessible.
  • Can be wired to other nodes with standard handles, making it easy to annotate flows or reference design artifacts inline.

Connections

Connections define the flow of data between nodes. Each connection:

  • Has a source node and a target node
  • Can include conditional logic
  • Supports data transformation
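
A connection with conditional logic and a transformation might look like the following sketch; the `condition` and `transform` fields are hypothetical names used for this example.

```typescript
// Illustrative connection shape; the condition and transform fields
// are hypothetical names used for this sketch.
interface Connection<T = unknown> {
  source: string;                   // id of the upstream node
  target: string;                   // id of the downstream node
  condition?: (data: T) => boolean; // forward data only when this returns true
  transform?: (data: T) => unknown; // reshape data before the target receives it
}

// Route only high-confidence classifications, passing just the label along.
const toLookup: Connection<{ label: string; confidence: number }> = {
  source: "classify",
  target: "lookup",
  condition: (data) => data.confidence > 0.8,
  transform: (data) => ({ query: data.label }),
};
```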

Models

Liine AI supports multiple AI models:

  • GPT-4: Advanced language understanding and generation
  • GPT-3.5: Fast and efficient language processing
  • Claude: Anthropic's conversational AI
  • Custom Models: Bring your own fine-tuned models
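
Model choice is typically a per-agent configuration decision. The mapping below is a sketch; the exact model identifiers and configuration surface are assumptions for this example.

```typescript
// Illustrative model identifiers; the exact names and configuration
// surface in Liine AI may differ.
type ModelId = "gpt-4" | "gpt-3.5" | "claude" | `custom:${string}`;

const agentModels: Record<string, ModelId> = {
  "support-bot": "gpt-4",             // richer reasoning for open-ended questions
  "triage-bot": "gpt-3.5",            // fast, inexpensive classification
  "contracts-bot": "custom:legal-v2", // a fine-tuned model you bring yourself
};
```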

Context

Context is the information available to your agent during execution:

  • User Context: Information about the current user
  • Conversation History: Previous messages in the conversation
  • System Context: Application state and configuration
  • Custom Context: Domain-specific data you provide
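
Taken together, the context an agent sees at execution time can be pictured as a single structured object; the field names below are assumptions made for this sketch.

```typescript
// Sketch of the context available at execution time; the exact
// structure is an assumption made for this example.
interface ExecutionContext {
  user: { id: string; locale?: string };               // user context
  history: { role: "user" | "agent"; text: string }[]; // conversation history
  system: { environment: string; version: string };    // application state and configuration
  custom?: Record<string, unknown>;                    // domain-specific data you provide
}

const ctx: ExecutionContext = {
  user: { id: "u_123", locale: "en-US" },
  history: [{ role: "user", text: "Where is my order?" }],
  system: { environment: "production", version: "1.4.2" },
  custom: { orderId: "ORD-42" },
};
```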

Canvas Selection Context (Chat Attachments)

On the canvas, any currently selected nodes appear as attachment chips above the chat input. This mirrors the visual treatment of uploaded files so it’s always obvious what’s selected while you chat. The chips update immediately on select, multi-select, and deselect, and removing a chip also deselects the corresponding node on the canvas.

Exiting an interactive node preview (such as a browser, embed, or CSV node) keeps the selection stable for explicit exit gestures (Escape or a content reset), while clicking away or moving the pointer off the node does not force it into the selection.

Memory

Agents can maintain different types of memory:

  • Short-term Memory: Current conversation context
  • Long-term Memory: Persistent storage across sessions
  • Semantic Memory: Knowledge and facts
  • Episodic Memory: Past interactions and experiences
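
These memory layers can be pictured as separate stores the agent reads from and writes to; the structure below is an illustrative sketch, not the Liine AI memory API.

```typescript
// Illustrative memory layering -- store names and shapes are assumptions.
interface AgentMemory {
  shortTerm: string[];                                // current conversation context
  longTerm: Map<string, string>;                      // persists across sessions
  semantic: Map<string, string>;                      // knowledge and facts
  episodic: { timestamp: number; summary: string }[]; // past interactions and experiences
}

function remember(memory: AgentMemory, message: string): void {
  memory.shortTerm.push(message);
  // A real agent might also summarize the exchange into episodic memory
  // and promote durable facts into semantic or long-term storage.
}
```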

Next Steps

Now that you understand the core concepts, explore the rest of the Liine AI documentation to put them into practice.
