Deploy autonomous AI agent teams that continuously research, analyze, and surface insights from your entire data corpus—visualized in a stunning, real-time 3D interface.
Quick Start · Documentation · Architecture · Roadmap · Contributing
Note: This README contains expandable sections marked with "Click to expand". Click on these to reveal detailed information.
What if your knowledge base could think for itself?
VisionFlow transforms static documents into living, breathing knowledge ecosystems. Instead of searching through files, AI agents continuously analyze your data, discovering connections you didn't know existed—while you explore the results in an immersive 3D space with your team.
graph LR
A[Your Data] --> B[AI Agent Teams]
B --> C[Knowledge Graph]
C --> D[3D Visualization]
D --> E[Team Collaboration]
style A fill:#1a1a2e,stroke:#16213e,color:#fff
style B fill:#0f3460,stroke:#16213e,color:#fff
style C fill:#533483,stroke:#16213e,color:#fff
style D fill:#e94560,stroke:#16213e,color:#fff
style E fill:#16213e,stroke:#1a1a2e,color:#fff
| VisionFlow | Traditional AI Tools |
|---|---|
| Continuous agent research, 24/7 | Reactive, query-based responses |
| Private data stays on your infrastructure | Sent to third-party APIs |
| Visual 3D exploration with your team | Static text output |
| Auditable Git version control for all changes | No transparency |
| Voice-first spatial interaction | Text-only interface |
Click to expand: The Ontology Intelligence System
1. Grammar Checker for Your Data
Just as spell-check prevents "runned," the ontology prevents logical errors in your knowledge graph. It enforces rules like "a Person cannot also be a Company" or "Software Projects must have source code."
2. Automatic Knowledge Discovery
Add one fact, get two for free. Define Company X employs Person Y, and the system automatically infers Person Y works for Company X. The ontology uses inverse relationships to multiply your knowledge (a minimal sketch of this rule appears after the comparison table below).
3. Self-Organizing 3D Visualization
The physics engine translates logical rules into spatial forces:
- `SubClassOf` → Attraction (child classes cluster near parents)
- `DisjointWith` → Repulsion (disjoint classes pushed apart)
- `EquivalentClasses` → Strong attraction (synonyms together)
4. Context-Aware AI Agents
Agents understand the "rules of your world." When tasked to "audit all software projects," they know exactly what qualifies as a project versus a library or documentation page.
| Without Ontology | With Ontology |
|---|---|
| Inconsistent data—connect anything to anything | Validated data—system prevents logical errors |
| Only know what you explicitly enter | Auto-discover hidden relationships |
| Generic hairball layout | Meaningful spatial organization |
| AI agents require hand-holding | Context-aware autonomous agents |
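To make point 2 concrete, the sketch below shows the shape of an inverse-relationship rule in Rust. It is a minimal illustration; the `Triple` type and `apply_inverse_rules` function are hypothetical names, not VisionFlow's actual API.

```rust
// Hypothetical sketch of inverse-relationship inference; names are
// illustrative, not VisionFlow's actual API.
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
struct Triple {
    subject: String,
    predicate: String,
    object: String,
}

/// For every asserted triple whose predicate has a declared inverse,
/// derive the reversed triple (e.g. "employs" <-> "worksFor").
fn apply_inverse_rules(
    asserted: &[Triple],
    inverses: &HashMap<String, String>, // predicate -> inverse predicate
) -> Vec<Triple> {
    asserted
        .iter()
        .filter_map(|t| {
            inverses.get(&t.predicate).map(|inv| Triple {
                subject: t.object.clone(),
                predicate: inv.clone(),
                object: t.subject.clone(),
            })
        })
        .collect()
}

fn main() {
    let mut inverses = HashMap::new();
    inverses.insert("employs".to_string(), "worksFor".to_string());

    let asserted = vec![Triple {
        subject: "CompanyX".into(),
        predicate: "employs".into(),
        object: "PersonY".into(),
    }];

    // Derives: PersonY worksFor CompanyX
    for t in apply_inverse_rules(&asserted, &inverses) {
        println!("{} {} {}", t.subject, t.predicate, t.object);
    }
}
```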
Get VisionFlow running in under 5 minutes:
# Clone the repository
git clone https://github.com/DreamLab-AI/VisionFlow.git
cd VisionFlow
# Configure environment
cp .env.example .env
# Edit .env with your NEO4J_PASSWORD
# Deploy with Docker
docker-compose --profile dev up -d
# Access VisionFlow
# Frontend: http://localhost:3001
# Neo4j: http://localhost:7474
# Backend API: http://localhost:4000

That's it. Your AI agent teams will begin analyzing data immediately.
Next: Full Installation Guide · First Graph Tutorial
Click to expand: Native Installation (without Docker)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
rustup default stable

# Ubuntu/Debian
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
sudo apt-get install cuda-toolkit-12-4
# Verify
nvcc --version

# Node.js 20
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt-get install -y nodejs

# Docker (Recommended)
docker run -d --name neo4j \
-p 7474:7474 -p 7687:7687 \
-e NEO4J_AUTH=neo4j/your_password \
neo4j:5.13.0

# Clone and configure
git clone https://github.com/DreamLab-AI/VisionFlow.git
cd VisionFlow
cp .env.example .env
# Build server (with GPU)
cargo build --release --features gpu
# Build client
cd client && npm install && npm run build && cd ..
# Run
./target/release/webxr

VisionFlow implements a Hexagonal Architecture with a Modular Actor System for clean separation of concerns:
flowchart TB
subgraph Client["Client Layer"]
direction LR
React["React + Three.js"]
WebXR["WebXR/Quest 3"]
Voice["Voice UI"]
end
subgraph Server["Server Layer (Rust + Actix)"]
direction TB
subgraph Actors["Actor System"]
GraphState["GraphStateActor"]
Physics["PhysicsOrchestrator"]
Semantic["SemanticProcessor"]
Coordinator["ClientCoordinator"]
end
subgraph Hexagonal["Hexagonal Core"]
Ports["Ports"]
Adapters["Adapters"]
end
end
subgraph Data["Data Layer"]
Neo4j[(Neo4j 5.13)]
end
subgraph GPU["GPU Compute"]
CUDA["39 CUDA Kernels"]
end
Client <-->|"36-byte Binary Protocol"| Server
Server <--> Data
Server <--> GPU
style Client fill:#e1f5ff,stroke:#0288d1
style Server fill:#fff3e0,stroke:#ff9800
style Data fill:#f3e5f5,stroke:#9c27b0
style GPU fill:#e8f5e9,stroke:#4caf50
- Server-Authoritative State — Neo4j is the single source of truth
- Binary Protocol — 36-byte WebSocket messages (80% bandwidth reduction vs JSON)
- GPU Offloading — Physics, clustering, pathfinding accelerated 100x
- Actor Isolation — Specialized actors for graph state, physics, semantics, and coordination
Deep Dive: Architecture Overview · Hexagonal CQRS · Actor System
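As a rough illustration of the ports-and-adapters split shown above, the core can depend on a trait (the port) while Neo4j sits behind one concrete implementation (the adapter). This is a minimal sketch under assumed names (`GraphRepository`, `Neo4jRepository`); it is not VisionFlow's actual code.

```rust
// Minimal ports-and-adapters sketch. Trait and struct names here are
// illustrative assumptions, not VisionFlow's actual code.
struct Node {
    id: u32,
    position: [f32; 3],
}

/// Port: the hexagonal core depends only on this interface.
trait GraphRepository {
    fn load_nodes(&self) -> Result<Vec<Node>, String>;
    fn save_position(&self, id: u32, position: [f32; 3]) -> Result<(), String>;
}

/// Adapter: one concrete implementation plugged in behind the port.
struct Neo4jRepository {
    uri: String,
}

impl GraphRepository for Neo4jRepository {
    fn load_nodes(&self) -> Result<Vec<Node>, String> {
        // A real adapter would run a Cypher query against self.uri here.
        let _ = &self.uri;
        Ok(vec![Node { id: 1, position: [0.0; 3] }])
    }

    fn save_position(&self, _id: u32, _position: [f32; 3]) -> Result<(), String> {
        // A real adapter would write the position back to Neo4j here.
        Ok(())
    }
}

/// Core logic stays storage-agnostic: it sees only the port.
fn persist_layout(repo: &dyn GraphRepository) -> Result<(), String> {
    for node in repo.load_nodes()? {
        repo.save_position(node.id, node.position)?;
    }
    Ok(())
}

fn main() {
    let repo = Neo4jRepository { uri: "bolt://localhost:7687".to_string() };
    persist_layout(&repo).expect("layout persistence failed");
}
```

Because the actors call only the `GraphRepository` trait, swapping Neo4j for an in-memory test double requires no changes to the core.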
Click to expand: Data Pipeline Details
flowchart LR
A[GitHub/Logseq] --> B[Sync Service]
B --> C{Parser}
C -->|OWL| D[Ontology]
C -->|Markdown| E[Knowledge]
D --> F[(Neo4j)]
E --> F
F --> G[GPU Memory]
G --> H[CUDA Physics]
H --> I[Binary Protocol]
I --> J[3D Client]
style A fill:#24292e,stroke:#fff,color:#fff
style F fill:#008cc1,stroke:#fff,color:#fff
style H fill:#76b900,stroke:#fff,color:#fff
style J fill:#e94560,stroke:#fff,color:#fff
Pipeline Features:
- Streaming processing — No batching bottlenecks
- Real-time persistence — Immediate Neo4j writes
- GPU-accelerated physics — 100x faster than CPU
- Binary efficiency — 36 bytes per node update
Complete documentation: Pipeline Integration
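For intuition about what the physics kernels compute, here is a simplified single-threaded reference of one force-directed step: pairwise repulsion plus spring attraction along edges, followed by Euler integration. This is an illustrative sketch only; the production CUDA kernels are considerably more sophisticated.

```rust
// Simplified CPU reference for one force-directed layout step.
// Illustrative only; the production CUDA kernels are more elaborate.

const REPULSION: f32 = 0.1;
const SPRING: f32 = 0.05;
const DT: f32 = 0.016; // roughly one 60 FPS frame

fn step(positions: &mut [[f32; 3]], velocities: &mut [[f32; 3]], edges: &[(usize, usize)]) {
    let n = positions.len();
    let mut forces = vec![[0.0f32; 3]; n];

    // Pairwise repulsion (O(n^2); GPU versions typically approximate this).
    for i in 0..n {
        for j in (i + 1)..n {
            let mut d = [0.0f32; 3];
            let mut dist2 = 1e-6f32; // avoid division by zero
            for k in 0..3 {
                d[k] = positions[i][k] - positions[j][k];
                dist2 += d[k] * d[k];
            }
            for k in 0..3 {
                let f = REPULSION * d[k] / dist2;
                forces[i][k] += f;
                forces[j][k] -= f;
            }
        }
    }

    // Spring attraction along edges.
    for &(a, b) in edges {
        for k in 0..3 {
            let f = SPRING * (positions[b][k] - positions[a][k]);
            forces[a][k] += f;
            forces[b][k] -= f;
        }
    }

    // Euler integration.
    for i in 0..n {
        for k in 0..3 {
            velocities[i][k] += forces[i][k] * DT;
            positions[i][k] += velocities[i][k] * DT;
        }
    }
}

fn main() {
    let mut positions = vec![[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]];
    let mut velocities = vec![[0.0f32; 3]; 3];
    let edges = [(0, 1), (1, 2)];
    for _ in 0..60 {
        step(&mut positions, &mut velocities, &edges);
    }
    println!("{:?}", positions);
}
```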
Click to expand: Architecture Documentation Index
Core Architecture:
Specialized Systems:
Communication Protocols:
Data Flow:
| Metric | Value |
|---|---|
| Frame Rate | 60 FPS @ 100K nodes |
| Render Latency | <16ms per frame |
| Max Nodes | 100,000+ without degradation |
| Concurrent Users | 50+ simultaneous |
| Operation | CPU | GPU | Speedup |
|---|---|---|---|
| Physics Simulation | 1,600ms | 16ms | 100x |
| Leiden Clustering | 800ms | 12ms | 67x |
| Shortest Path | 500ms | 8ms | 62x |
| Metric | Value |
|---|---|
| WebSocket Latency | <10ms |
| Bandwidth Reduction | 80% vs JSON |
| Message Size | 36 bytes/node |
Click to expand: Detailed Performance Benchmarks
- Agent Spawn Time: <50ms per agent
- Concurrent Agents: 50+ simultaneous
- Memory Per Agent: ~50MB average
- Agent Communication: <5ms message latency
| Field | Bytes | Description |
|--------------|-------|-----------------------|
| Node ID | 4 | Unique identifier |
| Position X | 4 | Float32 |
| Position Y | 4 | Float32 |
| Position Z | 4 | Float32 |
| Velocity X | 4 | Float32 |
| Velocity Y | 4 | Float32 |
| Velocity Z | 4 | Float32 |
| Flags | 4 | State flags |
| Reserved | 4 | Future use |
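A plausible Rust encoding of this layout is sketched below. The field names and the little-endian choice are assumptions read off the table, not the actual `binary_protocol.rs` definitions.

```rust
// Sketch of the 36-byte node update implied by the table above.
// Field names and layout are assumptions drawn from the table, not
// the actual src/protocols/binary_protocol.rs definitions.
#[repr(C)]
#[derive(Clone, Copy)]
struct NodeUpdate {
    node_id: u32,       // 4 bytes
    position: [f32; 3], // 12 bytes
    velocity: [f32; 3], // 12 bytes
    flags: u32,         // 4 bytes
    reserved: u32,      // 4 bytes
}

// 4 + 12 + 12 + 4 + 4 = 36 bytes, matching the documented message size.
const _: () = assert!(std::mem::size_of::<NodeUpdate>() == 36);

/// Serialize one update as little-endian bytes for a WebSocket frame.
fn encode(update: &NodeUpdate, out: &mut Vec<u8>) {
    out.extend_from_slice(&update.node_id.to_le_bytes());
    for v in update.position.iter().chain(update.velocity.iter()) {
        out.extend_from_slice(&v.to_le_bytes());
    }
    out.extend_from_slice(&update.flags.to_le_bytes());
    out.extend_from_slice(&update.reserved.to_le_bytes());
}

fn main() {
    let update = NodeUpdate {
        node_id: 42,
        position: [1.0, 2.0, 3.0],
        velocity: [0.0; 3],
        flags: 0,
        reserved: 0,
    };
    let mut buf = Vec::with_capacity(36);
    encode(&update, &mut buf);
    assert_eq!(buf.len(), 36);
}
```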
Full benchmarks: Performance Benchmarks
| Layer | Technology |
|---|---|
| Frontend | React, Three.js (React Three Fiber), TypeScript |
| Backend | Rust, Actix-Web, Hexagonal Architecture |
| Database | Neo4j 5.13 |
| GPU | CUDA 12.4 (39 kernels) |
| AI | MCP Protocol, Claude, Microsoft GraphRAG |
| Semantic | OWL/RDF, Whelk-rs Reasoner |
| XR | Babylon.js, WebXR, Quest 3 |
VisionFlow uses the Diátaxis framework for organized documentation:
Full Documentation Hub: docs/
Click to expand: Complete Documentation Index
- Development Setup
- Project Structure
- Architecture Overview
- Adding Features
- Testing Guide
- Contributing
VisionFlow/
├── src/ # Rust server
│ ├── actors/ # Modular actor system
│ ├── handlers/ # HTTP/WebSocket handlers
│ ├── services/ # Business logic
│ ├── adapters/ # Neo4j integration
│ ├── ontology/ # OWL reasoning
│ ├── gpu/ # CUDA kernels
│ └── protocols/ # Binary protocol
│
├── client/src/ # React frontend
│ ├── components/ # UI components
│ ├── rendering/ # Three.js 3D engine
│ ├── immersive/ # XR integration
│ └── services/ # API clients
│
├── multi-agent-docker/ # AI orchestration
│ ├── agents/ # 54+ agent templates
│ └── coordination/ # Agent coordination
│
└── docs/ # Documentation
├── getting-started/ # Tutorials
├── guides/ # How-to guides
├── concepts/ # Architecture docs
└── reference/ # API reference
Click to expand: Detailed Directory Structure
VisionFlow/
├── src/ # Server code (Rust + Actix)
│ ├── handlers/ # HTTP/WebSocket request handlers
│ │ ├── socket_flow_handler.rs # WebSocket connection management
│ │ └── graph_handler.rs # Graph API endpoints
│ ├── services/ # Business logic layer
│ │ ├── graph_service.rs # Graph operations
│ │ └── sync_service.rs # GitHub synchronization
│ ├── adapters/ # Neo4j & external integrations
│ │ └── neo4j_adapter.rs # Database operations
│ ├── ports/ # Interface definitions
│ ├── actors/ # Modular actor system
│ │ ├── graph_state_actor.rs # Graph state management
│ │ ├── physics_orchestrator_actor.rs # Physics coordination
│ │ ├── semantic_processor_actor.rs # Semantic processing
│ │ └── client_coordinator_actor.rs # Client synchronization
│ ├── ontology/ # OWL reasoning and validation
│ │ ├── parser.rs # OWL/RDF parsing
│ │ └── reasoner.rs # Whelk-rs integration
│ ├── gpu/ # CUDA kernel integration
│ │ └── kernels/ # 39 CUDA kernels
│ └── protocols/ # Binary WebSocket protocol
│ └── binary_protocol.rs # 36-byte message format
│
├── client/src/ # Client code (React + Three.js)
│ ├── components/ # React UI components
│ │ ├── ControlCenter/ # Main control panel
│ │ └── Graph/ # 3D graph components
│ ├── features/ # Feature-specific modules
│ │ ├── visualisation/ # Visualization features
│ │ └── design-system/ # UI design system
│ ├── rendering/ # 3D rendering engine
│ │ └── GraphRenderer.tsx # Three.js graph rendering
│ ├── services/ # API client services
│ │ └── websocket.ts # WebSocket client
│ ├── immersive/ # XR/VR integration
│ └── xr/ # WebXR implementation
│
├── multi-agent-docker/ # Multi-agent orchestration container
│ ├── agents/ # 54+ AI agent templates
│ ├── coordination/ # Agent coordination logic
│ ├── hooks/ # Pre/post task automation
│ └── unified-config/ # Container configuration
│
└── docs/ # Complete documentation
├── getting-started/ # Installation & tutorials
├── guides/ # How-to guides (user, developer, operations)
│ ├── developer/ # Development guides
│ ├── features/ # Feature-specific guides
│ └── infrastructure/ # Infrastructure guides
├── concepts/ # Architecture & design concepts
│ └── architecture/ # System architecture
├── reference/ # API references & technical specs
│ └── api/ # API documentation
└── explanations/ # Deep-dive explanations
- Modular actor architecture
- Neo4j as primary database
- Binary WebSocket protocol (36 bytes, 80% reduction)
- 39 CUDA kernels (100x speedup)
- 50+ concurrent AI agents
- OWL 2 EL reasoning with Whelk-rs
- Quest 3 WebXR (Beta)
- Vircadia multi-user integration
- Spatial avatars and presence
- SPARQL query interface
- Distributed GPU compute
- Apple Vision Pro native app
- Federated ontologies
- Kubernetes operator
- Multi-region replication
Click to expand: Detailed Roadmap
Core Infrastructure
- Modular actor architecture (GraphServiceActor → 4 specialized actors)
- Neo4j 5.13 as primary database
- Hexagonal architecture with ports & adapters
- Binary WebSocket protocol (36 bytes, 80% bandwidth reduction)
- Server-authoritative state management
GPU Acceleration
- 39 production CUDA kernels
- Physics simulation (100x CPU speedup)
- Leiden clustering for community detection
- Shortest path computation (SSSP)
AI Agent System
- 50+ concurrent AI agents
- Microsoft GraphRAG integration
- Multi-hop reasoning
- Whelk-rs OWL 2 EL reasoning (10-100x speedup)
Ontology Support
- OWL 2 EL profile reasoning
- Physics-based semantic constraints
- Automatic inference and contradiction detection
- LRU caching for inference optimization
Immersive XR & Multi-User
- Meta Quest 3 single-user AR/VR (Beta)
- Force-directed graph physics with WebXR
- Vircadia multi-user integration
- Spatial avatars and real-time presence
- Apple Vision Pro (Q3 2026)
Advanced Features
- SPARQL query interface
- Email integration for knowledge ingestion
- Multi-language voice support
Enterprise Features
- Federated ontologies across organizations
- Advanced audit and compliance tools
- SSO integration (SAML, OAuth2)
- Fine-grained permission system
Scalability
- Kubernetes operator for auto-scaling
- Multi-region data replication
- Millions of nodes support
- Real-time collaborative VR for 100+ users
We welcome contributions! See our Contributing Guide.
# Fork and clone
git clone https://github.com/YOUR_USERNAME/VisionFlow.git
cd VisionFlow
# Setup development environment
cargo build
cd client && npm install
# Run tests
cargo test
npm test
# Submit a PR

Areas: Bug fixes · Documentation · Features · Testing · Performance
Click to expand: Contribution Guidelines
- Fork the Repository

  git clone https://github.com/DreamLab-AI/VisionFlow.git
  cd VisionFlow
  git checkout -b feature/your-feature-name

- Set Up Development Environment

  cargo build
  cd client && npm install
  cargo test
  npm test

- Make Your Changes
  - Follow the coding guidelines
  - Write tests for new features
  - Update documentation as needed

- Submit a Pull Request
  - Describe your changes clearly
  - Reference any related issues
  - Ensure all tests pass
- Rust: Follow standard Rust conventions
- TypeScript: Follow ESLint configuration
- Documentation: Use Diátaxis framework
- Commits: Use conventional commit messages
- Architecture: Respect hexagonal architecture boundaries
| Area | Description |
|---|---|
| Bug Fixes | Report or fix issues |
| Documentation | Improve guides and examples |
| Features | Propose and implement new capabilities |
| Testing | Add test coverage |
| UI/UX | Enhance the visualization interface |
| Performance | Optimize bottlenecks |
Click to expand: System Requirements
- OS: Linux (Ubuntu 20.04+), macOS (12.0+), Windows 10/11
- CPU: 4-core processor, 2.5GHz
- RAM: 8GB
- Storage: 10GB free disk space
- Browser: Chrome 90+, Firefox 88+, Safari 14+, Edge 90+
- CPU: 8-core processor, 3.0GHz+
- RAM: 16GB
- Storage: 50GB SSD
- GPU: NVIDIA GTX 1060 or AMD RX 580
- CPU: 16+ cores, 3.5GHz
- RAM: 32GB+
- Storage: 200GB+ NVMe SSD
- GPU: NVIDIA RTX 4080+ with 16GB+ VRAM (CUDA 12.4)
| Platform | Server | GPU | XR |
|---|---|---|---|
| Linux (Ubuntu 20.04+) | ✅ | ✅ | ✅ |
| macOS (12.0+) | ✅ | ❌ | ❌ |
| Windows (WSL2) | ✅ | ✅ | ✅ |
| Meta Quest 3 | — | — | ✅ Beta |
| Apple Vision Pro | — | — | 🔄 Planned |
Click to expand: Detailed Platform Support
| Platform | Status | Notes |
|---|---|---|
| Linux (Ubuntu 20.04+) | ✅ Full | Recommended for production |
| Linux (Debian 11+) | ✅ Full | Docker deployment tested |
| Linux (Arch) | ✅ Full | Including GPU acceleration |
| macOS (12.0+) | ⚠️ Partial | CPU-only (no CUDA) |
| Windows 10/11 | ✅ Full | WSL2 recommended |
| GPU | CUDA | Status |
|---|---|---|
| NVIDIA RTX 40-series | 12.4 | ✅ Optimal |
| NVIDIA RTX 30-series | 12.4 | ✅ Excellent |
| NVIDIA GTX 10-series | 12.4 | ✅ Good |
| AMD (via ROCm) | — | 🔄 Planned |
| Apple Silicon (Metal) | — | 🔄 Planned |
| Browser | Status | WebGL 2 | WebXR |
|---|---|---|---|
| Chrome 90+ | ✅ Full | ✅ | ✅ |
| Edge 90+ | ✅ Full | ✅ | ✅ |
| Firefox 88+ | ✅ Full | ✅ | ❌ |
| Safari 14+ | ✅ | ✅ | ❌ |
| Device | Status | Features |
|---|---|---|
| Meta Quest 3 | ✅ Beta | Force-directed graphs, hand tracking, AR passthrough |
| Meta Quest 2 | ⚠️ Limited | Browser-based WebXR, reduced performance |
| Vircadia | 🔄 Architecture | Multi-user spatial avatars |
| Apple Vision Pro | 🔄 Planned | Native app Q3 2026 |
Built on the work of:
- 3d-force-graph — Force-directed visualization
- graph_RAG — Natural language queries
- Semantic RAG Research — Intelligent pathfinding
Special thanks to Prof. Rob Aspin for research in immersive knowledge visualization.
Mozilla Public License 2.0 — Use commercially, modify freely, share changes.
Click to expand: License Details
- ✅ Commercial Use: Use VisionFlow in commercial projects
- ✅ Modification: Modify and customize the source code
- ✅ Distribution: Distribute your modifications
- ✅ Patent Grant: Explicit patent license from contributors
- ⚠️ Copyleft: Modified source files must remain MPL-2.0
- ⚠️ Disclosure: Modified source must be made available
- You can use VisionFlow in proprietary applications
- You can modify files for internal use without disclosure
- If you distribute modifications, changed files must be MPL-2.0
- Your proprietary code can remain closed-source
Ready to transform knowledge discovery?
git clone https://github.com/DreamLab-AI/VisionFlow.git && cd VisionFlow && docker-compose --profile dev up -d

Documentation · Issues · Discussions
Built with care by the VisionFlow Team





