Observability & Multi-tenancy
A database in production is a living organism. To keep it healthy, we need to see inside it, and to keep it secure, we need to isolate its users.
Multi-tenancy: Namespace Isolation (Enterprise)
Multi-tenancy is an Enterprise Edition feature. The Community Edition operates with a single "default" namespace — all data lives in one graph, which is simpler and perfectly adequate for single-application deployments.
The Enterprise Edition adds full multi-tenant capabilities: a Tenant Management HTTP API (CRUD + usage tracking), resource quotas, and namespace isolation via RocksDB Column Families.
Logical Separation with RocksDB (Enterprise)
```mermaid
graph TD
    subgraph "Samyama Enterprise Server"
        Router["Tenant Router"]
        Router --> TenantA["Tenant A<br>Quota: 1GB RAM, 10GB Disk"]
        Router --> TenantB["Tenant B<br>Quota: 2GB RAM, 50GB Disk"]
        Router --> TenantC["Tenant C<br>Quota: 512MB RAM, 5GB Disk"]
    end
    subgraph "RocksDB"
        TenantA --> CFA["Column Family: tenant_a<br>Independent compaction"]
        TenantB --> CFB["Column Family: tenant_b<br>Independent compaction"]
        TenantC --> CFC["Column Family: tenant_c<br>Independent compaction"]
    end
```
Enterprise leverages RocksDB’s Column Families (CF) for isolation. Each tenant is assigned their own CF.
- Isolation: Tenant A’s keyspace is physically and logically distinct from Tenant B’s.
- Maintenance: Compaction (the background cleanup process) happens per-tenant. If Tenant A is doing heavy writes, it won’t trigger a slow compaction for Tenant B.
- Backup: We can snapshot and restore individual tenants without affecting others.
- HTTP API: `GET/POST/PATCH/DELETE /api/tenants` for tenant lifecycle management; `GET /api/tenants/:id/usage` for resource tracking.
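To make the "one column family per tenant" layout concrete, here is a minimal sketch of a tenant router in plain Rust. The `Tenant`, `TenantRouter`, and field names are illustrative assumptions, not Samyama's actual types, and the real implementation would hand the resolved CF name to RocksDB:

```rust
use std::collections::HashMap;

// Hypothetical sketch: each tenant maps to its own column-family name,
// mirroring the per-tenant CF layout described above.
struct Tenant {
    cf_name: String,   // RocksDB column family backing this tenant
    mem_quota_mb: u64, // RAM quota for the in-memory graph (illustrative)
    disk_quota_gb: u64, // disk quota inside RocksDB (illustrative)
}

struct TenantRouter {
    tenants: HashMap<String, Tenant>,
}

impl TenantRouter {
    fn new() -> Self {
        Self { tenants: HashMap::new() }
    }

    // Creating a tenant reserves a dedicated column-family name.
    fn create(&mut self, id: &str, mem_quota_mb: u64, disk_quota_gb: u64) {
        let cf_name = format!("tenant_{id}");
        self.tenants
            .insert(id.to_string(), Tenant { cf_name, mem_quota_mb, disk_quota_gb });
    }

    // Every request resolves to its tenant's CF; an unknown tenant is rejected.
    fn resolve(&self, id: &str) -> Option<&str> {
        self.tenants.get(id).map(|t| t.cf_name.as_str())
    }
}

fn main() {
    let mut router = TenantRouter::new();
    router.create("a", 1024, 10);
    router.create("b", 2048, 50);
    assert_eq!(router.resolve("a"), Some("tenant_a"));
    assert_eq!(router.resolve("unknown"), None);
    println!("tenant a routes to {}", router.resolve("a").unwrap());
}
```

Because the CF name is derived once at creation time, a request can never reach another tenant's keyspace by construction.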
Resource Quotas (Enterprise)
To prevent the “Noisy Neighbor” problem, the Enterprise Edition enforces strict resource quotas per tenant:
- Memory Quota: Max RAM for the in-memory graph.
- Storage Quota: Max disk space in RocksDB.
- Query Time: Max duration for a single Cypher query (to prevent “queries from hell” from locking the CPU).
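The three limits above can be sketched as a single pre-flight check. This is an illustration, not Samyama's real enforcement code; the type names, units, and the 30-second budget are assumptions:

```rust
use std::time::Duration;

// Illustrative per-tenant quota check covering the three limits listed above.
#[derive(Debug, PartialEq)]
enum QuotaViolation {
    Memory,
    Storage,
    QueryTime,
}

struct Quota {
    max_mem_bytes: u64,
    max_disk_bytes: u64,
    max_query_time: Duration,
}

impl Quota {
    // Reject work that would exceed any per-tenant limit.
    fn check(&self, mem: u64, disk: u64, elapsed: Duration) -> Result<(), QuotaViolation> {
        if mem > self.max_mem_bytes {
            return Err(QuotaViolation::Memory);
        }
        if disk > self.max_disk_bytes {
            return Err(QuotaViolation::Storage);
        }
        if elapsed > self.max_query_time {
            return Err(QuotaViolation::QueryTime);
        }
        Ok(())
    }
}

fn main() {
    // Tenant A from the diagram: 1 GB RAM, 10 GB disk; 30 s query budget is assumed.
    let quota = Quota {
        max_mem_bytes: 1u64 << 30,
        max_disk_bytes: 10u64 << 30,
        max_query_time: Duration::from_secs(30),
    };
    assert!(quota.check(512u64 << 20, 1u64 << 30, Duration::from_secs(1)).is_ok());
    assert_eq!(
        quota.check(2u64 << 30, 0, Duration::from_secs(1)),
        Err(QuotaViolation::Memory)
    );
    println!("quota checks behave as expected");
}
```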
Observability: The Three Pillars
We follow the industry-standard observability stack: Prometheus for metrics, the Rust tracing ecosystem for spans (with OpenTelemetry export planned), and structured JSON logging.
1. Metrics (Prometheus)
Samyama exports hundreds of metrics in the Prometheus format.
- QPS: Queries per second (Read vs. Write).
- Latency Histograms: P50, P95, and P99 response times.
- Cache Hit Rates: How often we are hitting the in-memory graph versus going to RocksDB.
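As a concrete sketch of the latency-percentile metrics above, here is a nearest-rank P50/P95/P99 computation emitted in the Prometheus text exposition format. The metric name `samyama_query_latency_ms` and the sample data are made up for illustration:

```rust
// Nearest-rank percentile over an already-sorted sample set.
fn percentile(sorted_ms: &[u64], p: f64) -> u64 {
    let idx = ((p / 100.0) * sorted_ms.len() as f64).ceil() as usize;
    sorted_ms[idx.saturating_sub(1).min(sorted_ms.len() - 1)]
}

fn main() {
    let mut latencies_ms: Vec<u64> = (1..=100).collect(); // fake 1..100 ms samples
    latencies_ms.sort_unstable();

    // Prometheus text exposition format: one line per quantile.
    let mut out = String::new();
    out.push_str("# TYPE samyama_query_latency_ms summary\n");
    for (p, label) in [(50.0, "0.5"), (95.0, "0.95"), (99.0, "0.99")] {
        out.push_str(&format!(
            "samyama_query_latency_ms{{quantile=\"{label}\"}} {}\n",
            percentile(&latencies_ms, p)
        ));
    }
    print!("{out}");
}
```

In production these values would come from a histogram maintained by the metrics library rather than a sorted vector, but the exposed text format looks the same.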
2. Structured Tracing
For complex queries, metrics aren’t enough. We need to know where the time was spent.
Using the `tracing` crate in Rust, Samyama emits structured spans and events with timing data for every stage of query execution—parsing, planning, and execution. These spans can be collected and visualized using any tracing-compatible subscriber.
Note: Currently, Samyama uses `tracing` + `tracing-subscriber` for structured logging and span instrumentation. Full OpenTelemetry export (for visualization in Jaeger or Grafana Tempo) is on the roadmap for a future release.
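To show what per-stage span timing buys you, here is a deliberately simplified, std-only illustration. The real implementation uses `tracing` spans recorded by a subscriber, not this hand-rolled timer; the stage names come from the pipeline described above:

```rust
use std::time::Instant;

// Simplified stand-in for a `tracing` span: time a closure and report the
// stage name alongside the elapsed duration as a structured-looking line.
fn timed<T>(stage: &str, f: impl FnOnce() -> T) -> T {
    let start = Instant::now();
    let out = f();
    // A real span would record this duration as a structured field instead.
    println!("stage={stage} elapsed_us={}", start.elapsed().as_micros());
    out
}

fn main() {
    // The three stages named above: parsing, planning, execution (work is fake).
    let ast = timed("parse", || "MATCH (n) RETURN n".split_whitespace().count());
    let plan = timed("plan", || ast + 1);
    let rows = timed("execute", || vec![plan; 3]);
    assert_eq!(rows.len(), 3);
}
```

With real `tracing` spans, the subscriber also nests stages under a parent query span, so a slow query immediately shows which stage dominated.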
3. Structured Logging
Gone are the days of parsing text logs. Samyama emits JSON logs.
```json
{
  "timestamp": "2026-02-08T10:30:45Z",
  "level": "INFO",
  "query": "MATCH (n) RETURN n",
  "duration_ms": 12,
  "tenant": "acme_corp"
}
```
This allows for easy ingestion into ELK (Elasticsearch, Logstash, Kibana) or Loki for powerful log aggregation and searching.
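For illustration, the log line above can be assembled with nothing but std formatting. In practice Samyama's `tracing-subscriber` setup would emit this JSON automatically; the `log_query` helper is purely hypothetical:

```rust
// Hypothetical helper: build the JSON log line shown above by hand.
// A real deployment lets the logging layer serialize fields instead.
fn log_query(timestamp: &str, level: &str, query: &str, duration_ms: u64, tenant: &str) -> String {
    format!(
        "{{\"timestamp\":\"{timestamp}\",\"level\":\"{level}\",\"query\":\"{query}\",\"duration_ms\":{duration_ms},\"tenant\":\"{tenant}\"}}"
    )
}

fn main() {
    let line = log_query("2026-02-08T10:30:45Z", "INFO", "MATCH (n) RETURN n", 12, "acme_corp");
    println!("{line}");
    // Because every field has a stable key, ELK or Loki can index and filter
    // on `tenant` or `duration_ms` without any text parsing.
    assert!(line.contains("\"tenant\":\"acme_corp\""));
}
```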
By combining strong tenant isolation (Enterprise) with deep observability, Samyama provides a production-ready experience that allows operators to run massive multi-user clusters with confidence.