Wippy Platform: The Intelligent Application Runtime

In today’s rapidly evolving digital landscape, businesses need platforms that are not only robust and scalable but also intelligent and adaptable. The Wippy Platform, built upon the high-performance Wippy Runtime, is a groundbreaking system designed to empower developers and organizations to build, deploy, and manage sophisticated, AI-integrated applications with unprecedented speed and flexibility. It moves beyond traditional application frameworks by seamlessly merging a powerful actor-based concurrency model, a distributed configuration registry, and native Large Language Model (LLM) integration into a cohesive, developer-friendly environment – all deployable as a single binary.

The Problem Wippy Solves

Building modern applications often involves stitching together disparate systems for configuration management, background processing, API hosting, and increasingly, AI integration. This complexity leads to brittle architectures, slow development cycles, and operational headaches. Integrating LLMs effectively, managing prompts, handling tool usage, and ensuring consistency across distributed components present significant challenges. Wippy addresses these head-on by providing an integrated, opinionated, yet highly extensible platform where intelligent automation and robust application logic coexist seamlessly.

Core Architecture: A Foundation for Scalability and Intelligence

Wippy’s architecture is designed from the ground up for resilience, scalability, and efficient development, leveraging the underlying Wippy Runtime written in Go.
  • Single Binary Deployment: The entire Wippy system, including the runtime, Lua engine, core services, and your application logic defined in the registry, is packaged and deployed as a single, self-contained binary. This drastically simplifies deployment, reduces dependencies, and lowers operational overhead compared to managing complex microservice architectures.
  • Event-Driven Core: At its heart, Wippy utilizes a powerful event bus (wippy.docs:events.spec). This allows components and services to communicate asynchronously by publishing and subscribing to events. This decoupling promotes modularity, resilience (components can fail without bringing down the entire system), and enables reactive workflows where actions are triggered automatically in response to system changes or external stimuli. Developers can tap into this bus using Lua processes to build event-driven logic.
  • Distributed, Versioned Registry: Forget scattered configuration files and inconsistent state. Wippy employs a versioned registry (wippy.docs:registry.spec) as the central nervous system for configuration and state management:
  1. Unified Configuration: All system components—from HTTP services and database connections to Lua functions and AI agents—are defined as entries within this registry, typically using clear YAML syntax (wippy.docs:config.spec).
  2. Atomic Updates & Rollbacks: Changes are applied atomically through change sets, creating immutable versions. This provides a complete audit trail (registry.versions) and allows for instant rollbacks to any previous state (registry.apply_version), dramatically simplifying deployments and recovery.
  3. Dynamic Loading: The runtime dynamically loads and configures components based on the registry state, allowing for live updates and configuration changes without full system restarts in many cases.
  4. Programmatic Access & Self-Modification: Lua code can directly interact with the registry (registry.get, registry.find, snapshot:changes, changes:apply) to read configuration or even dynamically modify system behavior. This enables powerful self-modification capabilities, where the application can adapt its own configuration or even code based on runtime conditions or AI-driven instructions. Security checks and isolation are present to manage the risks associated with Recursive Self-Improvement (RSI) loops.
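As a rough sketch of this programmatic access, Lua code might read and update registry entries like this. The API names come from the references above; the exact signatures, entry IDs, and field names are illustrative assumptions, not the definitive Wippy API:

```lua
-- Sketch only: reading and atomically updating registry entries from Lua.
-- Signatures and entry IDs below are assumptions for illustration.
local registry = require("registry")

-- Read a single entry by its namespaced ID.
local entry, err = registry.get("app.config:limits")
if err then error(err) end

-- Find entries matching criteria, e.g. all Lua functions exposed as tools.
local tools = registry.find({ [".kind"] = "function.lua", ["meta.type"] = "tool" })

-- Apply a change atomically through a snapshot's changeset,
-- producing a new immutable registry version (with rollback available).
local snapshot = registry.snapshot()
local changes = snapshot:changes()
changes:update({ id = "app.config:limits", data = { max_workers = 16 } })
local version, apply_err = changes:apply()
```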
  • Actor-Based Concurrency with Lua Processes: Wippy embraces the actor model for concurrency (wippy.docs:process.spec, wippy.docs:actor.spec), implemented via lightweight Lua processes (process.lua).
    • Isolation & Safety: Each process runs in isolation with its own state, communicating solely through asynchronous message passing (process.send, process.listen, process.inbox). This eliminates complex locking and shared-state concurrency issues.
    • Scalability: The runtime can efficiently manage thousands of concurrent Lua processes, distributing work across available resources.
    • Supervision & Resilience: Processes can be spawned with monitoring or linking (process.spawn_monitored, process.spawn_linked), allowing for the creation of supervision trees where parent processes can react to child failures (e.g., restarting a failed worker). The trap_links option provides fine-grained control over failure propagation.
    • Go-like Channels: Within a process, developers can use Go-like channels (channel.new, channel.select) for communication and synchronization between coroutines (wippy.docs:channel.spec).
  • Modular Service Architecture: The underlying Wippy Runtime provides core services (HTTP server, SQL database connectors, file system abstractions, cloud storage interfaces, etc.) that are configured via the registry and made available to the Lua environment as modules (http, sql, fs, store, cloudstorage, etc.). This provides a consistent API for interacting with essential infrastructure.
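The actor model described above can be sketched as follows. The function names (process.spawn_monitored, process.send, process.inbox, channel.select) are those cited in the specs, but the exact signatures and the worker entry ID are assumptions for illustration:

```lua
-- Sketch only: spawning a supervised worker process and exchanging messages.
-- Exact signatures, the worker registry ID, and host name are assumptions.
local process = require("process")
local channel = require("channel")

-- Spawn a monitored worker; if it crashes, a monitor event reaches us
-- instead of taking the parent down (supervision-tree style).
local worker_pid = process.spawn_monitored("app.workers:resize", "worker-host")

-- Communicate solely via asynchronous message passing: no shared state.
process.send(worker_pid, "job", { image = "photos/cat.png", width = 256 })

-- Wait Go-style on whichever arrives first: a reply or a monitor event.
local inbox = process.inbox()
local events = process.listen("wippy.monitor")
local result = channel.select({
  inbox:case_receive(),
  events:case_receive(),
})
```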

Unleashing AI Potential: Native LLM Integration

Wippy isn’t just a runtime; it’s an intelligent runtime with deep, native integration of Large Language Models.
  1. Unified LLM Interface (wippy.llm.spec): Interact with various LLM providers (OpenAI, Anthropic, and Google Vertex AI, supported via dedicated handlers such as wippy.llm.openai:*, wippy.llm.claude:*, and wippy.llm.google.vertex:*) through a consistent Lua API (llm.generate, llm.structured_output, llm.embed). This abstraction simplifies development and reduces vendor lock-in.
  2. Declarative AI Agents (agent.gen1, wippy.docs:agents.spec): Define sophisticated AI agents directly in the registry.
    • Configuration: Specify the agent’s core prompt, desired LLM model (model), generation parameters (temperature, max_tokens), and capabilities.
    • Traits: Compose agent behaviors using reusable prompt fragments called “Traits” (e.g., conversational, thinking, technical_expertise).
    • Memory: Provide agents with persistent contextual knowledge via the memory field.
    • Tool Usage: Grant agents access to tools (function.lua entries marked with meta.type=tool) defined in the registry. The wippy.llm:tools library resolves tool schemas for the LLM.
    • Delegation: Enable agents to delegate complex tasks to other specialized agents via dedicated tools.
    • Inheritance: Build complex agents by inheriting traits, tools, and memory from parent agents.
    • Execution: The wippy.agent.gen1:agent library runs these declarative agents, managing conversation history and orchestrating LLM calls and tool execution.
  3. Advanced Reasoning (thinking_effort): Leverage models supporting enhanced reasoning (like Claude 3.x) by specifying a thinking_effort parameter, allowing agents to tackle more complex problems.
  4. Structured Output: Force LLMs to generate responses in a specific JSON schema using llm.structured_output, perfect for reliable data extraction and integration with other systems.
  5. Embeddings & Semantic Search (wippy.embeddings:*): Generate vector embeddings for text using llm.embed and manage them using the embedding_repo library. This repository supports storing vectors in SQL databases (PostgreSQL/SQLite with vector extensions) and performing efficient similarity searches (embedding_repo.search_by_embedding), enabling powerful RAG (Retrieval-Augmented Generation) and semantic search capabilities.
  6. Prompt Engineering (wippy.llm:prompt): A dedicated library provides a structured way to build complex, multi-turn, and even multi-modal (text + images) prompts for LLMs, managing different message roles (system, user, assistant, function calls/results) effectively.
  7. AI-Driven Runtime Adaptation: The runtime is designed for LLM interaction, offering fast system reflection (an agent can understand the system’s own configuration and code via registry access) and controlled modification (altering registry entries, including code, via agents like wippy.registry and wippy.coder). This enables AI agents not only to use tools, but also to analyze and improve the system itself in response to a user’s commands.
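Putting several of these pieces together, a declarative agent entry in the registry might look like the sketch below. Only the fields named above (prompt, model, temperature, max_tokens, traits, memory, tools, delegation) are taken from the text; the surrounding YAML schema, entry names, and model ID are assumptions:

```yaml
# Sketch of a declarative agent.gen1 registry entry.
# Field layout, entry names, and model ID are illustrative assumptions.
entries:
  - name: support-agent
    kind: agent.gen1
    meta:
      comment: Customer support agent with knowledge-base and escalation tools
    prompt: |
      You are a helpful support agent. Answer from the knowledge base
      and escalate to a human when you are unsure.
    model: claude-3-7-sonnet
    temperature: 0.2
    max_tokens: 1024
    traits:            # reusable prompt fragments composed into the agent
      - conversational
      - thinking
    memory:            # persistent contextual knowledge
      - "Refunds are only available within 30 days of purchase."
    tools:             # function.lua entries marked with meta.type=tool
      - app.tools:search_kb
      - app.tools:escalate_ticket
```

The wippy.agent.gen1:agent library would then run this definition, managing conversation history and orchestrating the LLM calls and tool execution described above.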

Rapid Development & Unmatched Extensibility

Wippy is designed for developer productivity and deep customization.
  1. LLM-Friendly Lua Scripting: The primary development language is Lua – known for its simplicity, embeddability, and speed. Wippy’s Lua environment and APIs have been specifically adapted for easier use by LLMs, facilitating AI-driven code generation, analysis, and modification. This allows for rapid iteration and makes the platform accessible to both human developers and AI agents.
  2. Code as Configuration: Lua functions (function.lua), libraries (library.lua), and processes (process.lua) are themselves registry entries. This means application logic is versioned and managed alongside infrastructure configuration.
  3. Rich Standard Library: Developers have access to a comprehensive set of built-in Lua modules providing APIs for:
    • Registry Access: registry
    • AI/LLM: llm, prompt, agent_runner, embeddings
    • Concurrency: process, channel, actor
    • Networking: http (server context), http_client (outgoing requests), websocket
    • Data: sql, store, json, yaml, base64, hash
    • System: fs (filesystems), exec (external commands), time, logger, env, crypto, uuid
    • Templating: templates (Jet engine)
    • Code Analysis: treesitter (wippy.docs.treesitter:index.spec) for advanced, language-aware code parsing and analysis directly within Lua.
  4. API Development: Easily define RESTful APIs by creating http.endpoint entries that map URL paths and methods to specific function.lua handlers. Middleware (CORS, auth, logging, custom) can be configured declaratively on http.router entries.
  5. Custom Tools for AI: Any function.lua can be exposed as a tool for AI agents simply by adding meta.type=tool and defining its input schema (meta.input_schema). This allows agents to interact with any custom logic or external system.
  6. Future-Proof Architecture: Wippy is actively evolving, with planned integrations for Temporal.io (for robust workflow orchestration) and WebAssembly (Wasm) (for running code written in other languages), further expanding its capabilities and interoperability.
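To make the API-development and custom-tool points concrete, here is a sketch of an endpoint entry paired with its Lua handler. The kinds (http.endpoint, function.lua) and meta.type=tool come from the text; the field layout, handler signatures, and database ID are assumptions:

```yaml
# Sketch: a RESTful endpoint mapped to a Lua handler.
# Field names beyond the kinds and meta.type are illustrative assumptions.
entries:
  - name: get-user
    kind: http.endpoint
    method: GET
    path: /api/users/{id}
    func: app.api:get_user

  - name: get_user
    kind: function.lua
    source: file://get_user.lua
    modules: [ http, sql, json ]
    meta:
      type: tool            # also exposes this function to AI agents
      input_schema: { type: object, properties: { id: { type: string } } }
```

```lua
-- get_user.lua handler sketch; http and sql module signatures are assumptions.
local http = require("http")
local sql = require("sql")

local function handler()
  local req = http.request()
  local res = http.response()
  local id = req:param("id")

  local db = sql.get("app.db:main")
  local rows = db:query("SELECT id, name FROM users WHERE id = $1", { id })

  if #rows == 0 then
    res:set_status(404)
    res:write_json({ error = "not found" })
    return
  end
  res:write_json(rows[1])
end

return handler
```

Because the handler carries meta.type=tool and an input schema, the same logic would serve both human API clients and AI agents.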

Robust Management & Operations

Wippy provides tools and features for effectively managing and operating applications.
  1. Registry Management (wippy.registry agent): A dedicated AI agent, along with underlying tools (keeper.agents.registry:*), allows operators to interactively manage the registry:
    • Create, read, update, delete any registry entry.
    • List entries by namespace or kind.
    • View version history (get_versions).
    • Rollback to previous configurations (apply_version).
  2. Configuration Deployment: Since configurations are registry entries, deployment can involve loading YAML files into the registry or applying changesets programmatically. The versioning system ensures safe updates and rollbacks. The single binary deployment model simplifies this process further.
  3. Git Synchronization (wippy.git agent): The platform includes capabilities (keeper.agents.git:*, keeper.syncer:*) that can synchronize registry contents with a Git repository. This enables GitOps workflows where the Git repository becomes the source of truth for system configuration, and changes are automatically applied to the Wippy registry.
  4. Structured Logging (wippy.docs:logger.spec): The logger module provides structured logging (JSON format) with support for different levels (debug, info, warn, error) and contextual fields. Logs can be easily aggregated and analyzed by external monitoring systems.
  5. Security Framework (wippy.docs:security.spec): Define granular access control using policies (security.policy entries). Policies are grouped into scopes, and actors (users or system components) are associated with scopes. The security.can() function allows checking permissions within Lua code. Token stores (security.token_store) manage authentication tokens. In addition, some internal Wippy Runtime components enforce security checks below the LLM/user layer, providing defense in depth at the runtime level.
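A minimal sketch of how the security and logging pieces above might combine inside a Lua function follows. security.can() and the leveled, field-carrying logger calls are named in the specs; the exact signatures and field names are assumptions:

```lua
-- Sketch only: permission check plus structured logging in a handler.
-- security.can() and logger signatures are illustrative assumptions.
local security = require("security")
local logger = require("logger")

local log = logger:named("registry-admin")

local function delete_entry(id)
  -- Check the calling actor's scopes before acting.
  if not security.can("delete", "registry.entry") then
    log:warn("permission denied", { action = "delete", entry = id })
    return nil, "forbidden"
  end

  -- ... perform the deletion via a registry changeset ...
  log:info("entry deleted", { entry = id })  -- JSON-structured, with fields
  return true
end
```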

Versatile Use Cases: Beyond the Obvious

While Wippy’s components can be applied to specific domains, its core capabilities enable a vast range of general-purpose applications:

  • Intelligent Customer Support: Build chatbots that understand user intent, query knowledge bases (using embeddings), interact with ticketing systems via tools, and escalate to human agents when necessary, using WebSockets for real-time interaction.
  • Automated Content Moderation: Create workflows (process.lua) that ingest user-generated content, use LLMs (llm.generate with specific prompts or llm.structured_output) to classify content against moderation policies, and automatically flag or remove inappropriate material, logging actions via logger.
  • Personalized Content Generation: Develop agents (agent.gen1) that analyze user profiles (from sql or store) and use LLMs to generate tailored summaries, recommendations, or even creative content, delivered via API (http.endpoint) or rendered using templates.
  • Complex Data Pipelines: Orchestrate multi-step data processing workflows using actors (process.lua). Fetch data (http_client, sql), transform it using Lua logic or LLM analysis (llm.structured_output), store intermediate results (store), and trigger downstream actions via the event bus (events).
  • Internal Knowledge Portals: Ingest company documents (fs, cloudstorage), create vector embeddings (llm.embed, embedding_repo), and provide a Q&A interface using an agent that performs semantic search to answer employee questions accurately based on internal documentation.
  • Smart Monitoring & Alerting: Create processes (process.lua) that monitor system metrics or external data streams (events, http_client). Use LLMs to analyze patterns, detect anomalies beyond simple thresholds, generate human-readable alert summaries, and trigger notifications via external APIs.
  • Interactive Configuration Tools: Build web interfaces (using http.static, http.endpoint, websocket) that allow users to configure system parameters, which are then validated by Lua functions and stored securely in the Wippy registry (registry.snapshot:changes:update).
  • Conversational Application Development: Empower users, including those less technical, to build or significantly tweak application components through natural language conversation. By interacting with agents like wippy.registry and wippy.coder, users can request the creation or modification of registry entries (configurations, functions, agents), effectively guiding the application’s structure and behavior without writing YAML or Lua directly.
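The knowledge-portal use case above can be sketched as a short RAG loop: embed the question, retrieve similar document chunks, then ask the LLM to answer from that context. The function names (llm.embed, llm.generate, embedding_repo.search_by_embedding) are those the text cites; the signatures, model IDs, and result fields are assumptions:

```lua
-- Sketch only: semantic-search Q&A over embedded internal documents.
-- Signatures, model names, and result fields are illustrative assumptions.
local llm = require("llm")
local embedding_repo = require("embedding_repo")

local function answer(question)
  -- 1. Embed the question as a vector.
  local q_vec = llm.embed(question, { model = "text-embedding-3-small" })

  -- 2. Retrieve the most similar stored chunks (SQL + vector extension).
  local hits = embedding_repo.search_by_embedding(q_vec, { limit = 5 })

  local context = {}
  for _, hit in ipairs(hits) do
    table.insert(context, hit.content)
  end

  -- 3. Ask the LLM to answer strictly from the retrieved context.
  local result = llm.generate(
    "Answer using only this context:\n" .. table.concat(context, "\n---\n")
      .. "\n\nQuestion: " .. question,
    { model = "gpt-4o", temperature = 0 })
  return result.content
end
```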

Why Wippy? The Competitive Edge

  • AI-Native: LLM integration is a first-class citizen, not an afterthought, with powerful agent frameworks, tool usage, embedding support, and AI-driven runtime adaptation built-in.
  • Developer Velocity: Lua scripting—adapted for easy LLM interaction—combined with a rich set of modules, declarative configuration, and tools like Tree-sitter accelerates development cycles significantly for both humans and AI.
  • Conversational Configuration: Unique ability for users (technical or not) to interact with system agents (wippy.registry, wippy.coder) to configure, modify, and even build parts of the application using natural language, leveraging the platform’s self-reflection and modification capabilities.
  • Integrated Platform: Provides a unified solution for configuration, concurrency, API hosting, AI integration, and more, reducing the need for complex external tooling.
  • Simplified Deployment: The entire system runs as a single binary, streamlining deployment and operations.
  • Robustness & Scalability: The actor model and event-driven architecture provide inherent scalability and fault tolerance.
  • Manageability: Centralized, versioned configuration via the registry simplifies deployment, auditing, and operations. GitOps integration potential further enhances manageability.
  • Flexibility & Extensibility: Easily create custom functions, libraries, agents, and tools to tailor the platform to any specific need. Future integrations with Temporal.io and Wasm promise even greater flexibility.

The Wippy Platform (Wippy Runtime) represents the next generation of application development, where intelligence and automation are woven into the fabric of the system. It provides the tools and architecture needed to build powerful, scalable, and adaptable AI-driven solutions faster and more reliably than ever before, even enabling application evolution through natural language interaction.

Need help mapping or building your AI agents?

Turn your ideas into innovation.

Your ideas are meant to live beyond your mind. That’s what we do – we turn your ideas into innovation that can change the world. Let’s get started with a free discovery call.