Quickstart

Get up and running with Inherent in under 5 minutes. By the end of this guide, you'll have:

  1. Created an API key
  2. Listed documents in your knowledge base
  3. Retrieved context using semantic search

Prerequisites

  • An Inherent account (Sign up free)
  • Your API key (found in Dashboard > Settings > API Keys)

Step 1: Get Your API Key

  1. Log in to the Inherent Dashboard
  2. Navigate to Settings > API Keys
  3. Click Create API Key
  4. Copy your key (you won't see it again!)

Then export the key as an environment variable so the examples below can use it:

export INHERENT_API_KEY="your-api-key-here"

Step 2: List Your Documents

Let's verify your workspace by listing the documents in your knowledge base.

curl https://api.inherent.systems/api/v1/documents \
  -H "Authorization: Bearer $INHERENT_API_KEY"

Response:

{
  "documents": [
    {
      "id": "doc_abc123",
      "name": "API Authentication Guide",
      "workspace_id": "ws_xyz",
      "source_type": "upload",
      "mime_type": "text/markdown",
      "size_bytes": 32768,
      "chunk_count": 24,
      "status": "processed",
      "created_at": "2024-01-15T10:30:00Z",
      "updated_at": "2024-01-15T10:30:00Z",
      "metadata": null
    }
  ],
  "total": 1,
  "page": 1,
  "page_size": 20
}
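
If you'd rather work in Python, the same request can be made with the requests library. The sketch below is illustrative: it assumes the endpoint and bearer-token header shown in the curl call above and reads the API key exported in Step 1.

import os

import requests

# List documents via GET /api/v1/documents, mirroring the curl example above.
api_key = os.environ["INHERENT_API_KEY"]
base_url = "https://api.inherent.systems/api/v1"

resp = requests.get(
    f"{base_url}/documents",
    headers={"Authorization": f"Bearer {api_key}"},
)
resp.raise_for_status()

for doc in resp.json()["documents"]:
    print(doc["id"], doc["name"], doc["status"])
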
Ingesting Documents

Document ingestion is currently done through the Inherent Dashboard or via the internal ingestion pipeline. A public ingestion API endpoint is coming soon. See the Upload Document page for details.

Step 3: Search Your Knowledge Base

Now let's retrieve relevant context using semantic search.

curl -X POST https://api.inherent.systems/api/v1/search \
  -H "Authorization: Bearer $INHERENT_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "What is Inherent used for?",
    "limit": 5
  }'

Response:

{
  "results": [
    {
      "chunk_id": "chunk_xyz789",
      "document_id": "doc_abc123",
      "document_name": "About Inherent",
      "content": "Inherent is a contextual intelligence layer for AI applications. It provides version-controlled knowledge management and deterministic retrieval for production AI systems.",
      "score": 0.92,
      "metadata": null
    }
  ],
  "query": "What is Inherent used for?",
  "total_results": 1,
  "processing_time_ms": 45.2
}
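
The same search works from Python. The sketch below mirrors the curl call above and prints each result's relevance score and document name (field names taken from the response shown); Step 4 builds on this same call to assemble an LLM prompt.

import os

import requests

# Semantic search via POST /api/v1/search, mirroring the curl example above.
api_key = os.environ["INHERENT_API_KEY"]
base_url = "https://api.inherent.systems/api/v1"

resp = requests.post(
    f"{base_url}/search",
    headers={"Authorization": f"Bearer {api_key}"},
    json={"query": "What is Inherent used for?", "limit": 5},
)
resp.raise_for_status()

for result in resp.json()["results"]:
    print(f'{result["score"]:.2f}  {result["document_name"]}')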

Step 4: Use with Your AI

Now use the retrieved context with your favorite LLM. The example below uses the requests and openai Python packages:

import os

import openai
import requests

# Read the key exported in Step 1; the openai package reads OPENAI_API_KEY from the environment
api_key = os.environ["INHERENT_API_KEY"]
base_url = "https://api.inherent.systems/api/v1"
user_question = "What is Inherent used for?"

# Get context from Inherent
context_response = requests.post(
    f"{base_url}/search",
    headers={"Authorization": f"Bearer {api_key}"},
    json={"query": user_question, "limit": 5},
)
context = context_response.json()

# Build prompt with context
context_text = "\n".join([r["content"] for r in context["results"]])

response = openai.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": f"Answer based on this context:\n\n{context_text}",
        },
        {"role": "user", "content": user_question},
    ],
)

print(response.choices[0].message.content)

Next Steps

You've successfully:

  • Created an API key
  • Listed documents in your knowledge base
  • Retrieved context using semantic search
  • Integrated with an LLM

Now explore more: