Query Playground

TeamLoop’s query system goes beyond simple search: it provides temporal intelligence, the ability to query knowledge as it existed at any point in time.

TeamLoop supports four query modes, each serving different use cases:

Current mode queries the latest version of all knowledge.

When to use:

  • “What’s our current authentication approach?”
  • “Who owns the payment service?”
  • “What decisions are active for the API?”

Example:

teamloop_query:
  query: "authentication approach"
  mode: "current"

As-of mode queries knowledge as it existed on a specific date.

When to use:

  • “What was our auth approach before the refactor?”
  • “What decisions were active during the Q2 incident?”
  • “What did we know when we made that choice?”

Example:

teamloop_query:
  query: "authentication"
  mode: "as_of"
  as_of: "2024-06-15"
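Conceptually, an as_of query is point-in-time filtering: of all recorded versions, keep the latest one that had already taken effect on the target date. A minimal Python sketch, assuming a hypothetical record shape with `entity`, `valid_from`, and `value` fields (not TeamLoop's actual schema):

```python
from datetime import date

def as_of(records, target):
    """Return, per entity, the newest version active on the `target` date."""
    active = {}
    for rec in records:
        # A version counts only if it began on or before the target date...
        if rec["valid_from"] <= target:
            # ...and we keep just the newest such version per entity.
            prev = active.get(rec["entity"])
            if prev is None or rec["valid_from"] > prev["valid_from"]:
                active[rec["entity"]] = rec
    return active

history = [
    {"entity": "auth", "valid_from": date(2024, 1, 10), "value": "sessions"},
    {"entity": "auth", "valid_from": date(2024, 8, 1), "value": "JWT"},
]
snapshot = as_of(history, date(2024, 6, 15))
# snapshot["auth"]["value"] == "sessions": the JWT decision came later
```

Querying the same history with a later date would surface the JWT decision instead, which is exactly the difference between as_of and current mode.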

Evolution mode shows how knowledge changed over a period.

When to use:

  • “How has our database strategy evolved this year?”
  • “What infrastructure decisions changed in Q3?”
  • “Track the authentication journey from start to now”

Example:

teamloop_evolution:
  query: "database architecture"
  from_date: "2024-01-01"
  to_date: "2024-12-31"

Output includes:

  • Events grouped by month
  • Decision supersession chains
  • New entities added
  • Status changes
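The month grouping in that output can be pictured as a simple bucket-by-month pass. A sketch with an illustrative `(date, description)` event shape, not TeamLoop's real data model:

```python
from collections import defaultdict
from datetime import date

def group_by_month(events):
    """Bucket (date, description) events into 'YYYY-MM' groups, oldest month first."""
    buckets = defaultdict(list)
    for day, description in events:
        buckets[day.strftime("%Y-%m")].append(description)
    return dict(sorted(buckets.items()))

events = [
    (date(2024, 1, 5), "Chose PostgreSQL for user service"),
    (date(2024, 1, 20), "Added read replicas"),
    (date(2024, 3, 2), "Superseded: moved analytics to ClickHouse"),
]
grouped = group_by_month(events)
# Two events land in '2024-01', one in '2024-03'
```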

Compare mode contrasts the knowledge state at two points in time.

When to use:

  • “What changed between Q1 and Q3 planning?”
  • “Compare our auth decisions from last year to now”
  • “What knowledge was added after the security audit?”

Example:

teamloop_compare:
  query: "security decisions"
  date_a: "2024-01-01"
  date_b: "2024-07-01"

Output shows:

  • Added - New entities
  • Removed - Entities no longer active
  • Superseded - Replaced decisions
  • Unchanged - Stable knowledge
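The four buckets amount to a set comparison between the two snapshots. A sketch, assuming each snapshot maps an entity id to a status string (a stand-in for however TeamLoop actually detects supersession):

```python
def compare(snap_a, snap_b):
    """Classify entities across two snapshots into the four compare buckets."""
    added = [e for e in snap_b if e not in snap_a]
    removed = [e for e in snap_a if e not in snap_b]
    common = [e for e in snap_a if e in snap_b]
    # "Status changed between snapshots" stands in for supersession here.
    superseded = [e for e in common if snap_a[e] != snap_b[e]]
    unchanged = [e for e in common if snap_a[e] == snap_b[e]]
    return {"added": added, "removed": removed,
            "superseded": superseded, "unchanged": unchanged}

jan = {"jwt-rotation": "active", "mfa-rollout": "active", "ldap-sync": "active"}
jul = {"jwt-rotation": "superseded", "mfa-rollout": "active", "sso-okta": "active"}
diff = compare(jan, jul)
# added: sso-okta / removed: ldap-sync / superseded: jwt-rotation / unchanged: mfa-rollout
```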

Timeline queries give chronological views of your knowledge:

Query by topic to see relevant events over time:

teamloop_timeline:
  query: "payment processing"
  limit: 20

Track a specific entity’s decision chain:

teamloop_timeline:
  entity_id: "uuid-of-entity"

Shows:

  • What this decision superseded
  • What superseded this decision
  • Full lineage chain
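The lineage view can be thought of as walking supersession links in both directions from the given entity. A sketch, where the `supersedes` / `superseded_by` link names are assumptions for illustration:

```python
def lineage(entities, entity_id):
    """Walk supersession links backward and forward to build the full chain."""
    chain = [entity_id]
    # Walk back to the original decision this one descends from.
    cur = entities[entity_id]
    while cur.get("supersedes"):
        chain.insert(0, cur["supersedes"])
        cur = entities[cur["supersedes"]]
    # Walk forward to whatever eventually replaced this decision.
    cur = entities[entity_id]
    while cur.get("superseded_by"):
        chain.append(cur["superseded_by"])
        cur = entities[cur["superseded_by"]]
    return chain

entities = {
    "auth-v1": {"superseded_by": "auth-v2"},
    "auth-v2": {"supersedes": "auth-v1", "superseded_by": "auth-v3"},
    "auth-v3": {"supersedes": "auth-v2"},
}
chain = lineage(entities, "auth-v2")  # ['auth-v1', 'auth-v2', 'auth-v3']
```

Starting from any link in the chain yields the same full lineage, which is why querying a single decision id is enough.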

Scenario: Something broke in production. What decisions led to this?

# 1. Find current state
teamloop_query:
  query: "payment service configuration"
  mode: "current"

# 2. Check what it was before the incident
teamloop_query:
  query: "payment service configuration"
  mode: "as_of"
  as_of: "2024-11-01" # Day before incident

# 3. See what changed
teamloop_compare:
  query: "payment service"
  date_a: "2024-10-01"
  date_b: "2024-11-02"

Scenario: Preparing for an architecture review, need historical context.

# 1. Get evolution of the system
teamloop_evolution:
  query: "user service architecture"
  from_date: "2024-01-01"
  to_date: "2024-12-31"

# 2. Build timeline of decisions
teamloop_timeline:
  query: "user service decisions"
  limit: 50

Scenario: New team member needs to understand why things are the way they are.

# 1. Current state overview
teamloop_query:
  query: "authentication and authorization architecture"
  mode: "current"

# 2. Historical evolution
teamloop_evolution:
  query: "authentication"
  from_date: "2023-01-01"
  to_date: "2024-12-31"

# 3. Key decision lineage
teamloop_timeline:
  entity_id: "current-auth-decision-id"

TeamLoop supports two retrieval strategies that control how results are found and ranked.

Hybrid retrieval combines vector semantic search with BM25 full-text search using reciprocal rank fusion (RRF), then optionally refines the top results with cross-encoder reranking. This is the default for all queries.

Why it matters: Vector search finds semantically similar results (e.g., “auth token refactor” matches “JWT migration”), while full-text search finds exact keyword matches (e.g., the literal term “JWT” in an entity). Hybrid retrieval catches both. Cross-encoder reranking then rescores each (query, document) pair independently, significantly improving precision in the top results.

How it works:

  1. Your query runs through both search pipelines in parallel
  2. Vector search returns the top 50 candidates by embedding similarity
  3. Full-text search returns the top 50 candidates by BM25 ranking
  4. Results are merged using Reciprocal Rank Fusion (RRF, k=60)
  5. The top 20 fused candidates are sent to the cross-encoder reranker
  6. The reranker scores each (query, document) pair and returns the top 5
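The RRF merge in step 4 is a purely rank-based formula: each document's fused score is the sum over result lists of 1/(k + rank), with k=60 as described. A self-contained sketch:

```python
def rrf_merge(ranked_lists, k=60):
    """Fuse ranked result lists with Reciprocal Rank Fusion: score = sum of 1/(k + rank)."""
    scores = {}
    for results in ranked_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["jwt-migration", "auth-refactor", "session-store"]
fulltext_hits = ["jwt-migration", "jwt-spec-notes", "auth-refactor"]
fused = rrf_merge([vector_hits, fulltext_hits])
# 'jwt-migration' ranks first: it is rank 1 in both lists
```

Because only ranks matter, RRF needs no score normalization between the two pipelines, which is what makes it a good fit for fusing cosine similarities with BM25 scores.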

Example:

teamloop_query:
  query: "JWT token rotation policy"
  retrieval: "hybrid"

Reranking providers:

  • SaaS: Voyage AI (rerank-2)
  • AWS Marketplace: Cohere via Bedrock (rerank-v3.5)

Atomic facts in results: When entities have been decomposed into atomic facts (via teamloop_save_facts), those facts participate in search alongside regular entities. A query for “who approved the migration?” may surface a specific fact like “Sarah approved the PostgreSQL migration on Jan 15” instead of the entire parent document. Fact results link back to their parent entity via PART_OF relationships for additional context.

Graceful degradation: If embeddings are unavailable, hybrid search falls back to text-only mode. If full-text search fails, it falls back to vector-only. If the reranker is unavailable or fails, results are returned in RRF-fused order. Both search pipelines must fail for the query to error.
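That fallback policy can be sketched as a try-each-pipeline pattern. The function names below are placeholders, not TeamLoop's API, and a simple de-duplicating concatenation stands in for the RRF merge:

```python
def hybrid_search(query, vector_search, text_search, rerank=None):
    """Run both pipelines; degrade gracefully if one (or the reranker) fails."""
    vector_hits, text_hits, errors = [], [], []
    try:
        vector_hits = vector_search(query)
    except Exception as exc:
        errors.append(exc)          # embeddings unavailable: text-only mode
    try:
        text_hits = text_search(query)
    except Exception as exc:
        errors.append(exc)          # full-text failed: vector-only mode
    if len(errors) == 2:            # only when BOTH pipelines fail does the query error
        raise RuntimeError("all search pipelines failed") from errors[0]
    fused = vector_hits + [h for h in text_hits if h not in vector_hits]
    if rerank is not None:
        try:
            return rerank(query, fused)
        except Exception:
            pass                    # reranker down: fall back to fused order
    return fused

def broken(query):
    raise IOError("full-text index unavailable")

# Text pipeline fails; results still come back from the vector pipeline.
hits = hybrid_search("jwt", lambda q: ["jwt-migration"], broken)
# hits == ['jwt-migration']
```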

Standard retrieval is vector-only semantic search, the legacy behavior from before hybrid retrieval was added.

teamloop_query:
  query: "authentication approach"
  retrieval: "standard"

Use standard if you specifically want only semantic similarity results without keyword matching.

Filter queries to specific integrations:

teamloop_query:
  query: "API design"
  sources: "github" # Only GitHub

teamloop_query:
  query: "product roadmap"
  sources: "notion,linear" # Notion and Linear
Write specific queries:

# Too broad
query: "decisions"

# Better
query: "database technology decisions"

# Best
query: "PostgreSQL vs MySQL decision for user service"

Match the temporal mode to the task:

# When investigating past issues
mode: "as_of"
as_of: "date-before-issue"

# When preparing reviews
from_date: "quarter-start"
to_date: "quarter-end"
A typical workflow:

  1. Start with current mode
  2. Extract entities to the knowledge graph
  3. Use evolution to understand history
  4. Use timeline for specific entity chains
Performance notes:

  • First queries may be slower (fetching from integrations)
  • Subsequent queries benefit from cached knowledge
  • Temporal queries require existing knowledge graph data
  • Use the sources filter to limit scope when possible
  • Hybrid retrieval runs both pipelines in parallel, so it adds minimal latency over vector-only search
  • Cross-encoder reranking adds up to 300ms but significantly improves precision@5
  • If one pipeline fails, results still return from the other (graceful degradation)
  • If the reranker fails or times out, results fall back to RRF order