Searching Your Media
Semantic search across your organization's media using natural language
Prefer the collection-scoped search
The recommended way to search is the q parameter on GET /api/dataset/{collectionId}/media — it returns full media objects, supports all browse filters, and is paginated. See Browse & Search Media.
The global search endpoint documented here (GET /api/search) returns frame-level results across your entire organization and is useful when you don't know which collection to search in.
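For comparison, a minimal sketch of calling the collection-scoped endpoint mentioned above. It assumes the same x-api-key header as the global endpoint; YOUR_COLLECTION_ID and YOUR_API_KEY are placeholders, and the full parameter list lives in Browse & Search Media:

```javascript
// Sketch: collection-scoped search URL (placeholders, not real IDs).
const collectionId = 'YOUR_COLLECTION_ID';
const searchUrl =
  `https://lab.coreviz.io/api/dataset/${collectionId}/media?` +
  new URLSearchParams({ q: 'flower' });
// Then: fetch(searchUrl, { headers: { 'x-api-key': 'YOUR_API_KEY' } })
console.log(searchUrl);
```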
Global Search Endpoint
Search across all media in your organization using natural language. Returns frame-level results ranked by semantic relevance.
Endpoint
GET /api/search

Query Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| q | string | Yes | Natural language search query |
| organizationId | string | Yes | Your organization ID |
| limit | integer | No | Max results to return (default: 20) |
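The parameters above can be assembled with URLSearchParams, which URL-encodes multi-word queries for you (the values below are placeholders for illustration):

```javascript
// Build the search URL; URLSearchParams handles encoding of spaces in q.
// YOUR_ORG_ID is a placeholder; substitute your real organization ID.
const params = new URLSearchParams({
  q: 'purple flowers in a garden',
  organizationId: 'YOUR_ORG_ID',
  limit: '5',
});
const url = `https://lab.coreviz.io/api/search?${params}`;
console.log(url);
// https://lab.coreviz.io/api/search?q=purple+flowers+in+a+garden&organizationId=YOUR_ORG_ID&limit=5
```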
Headers
| Header | Value | Required |
|---|---|---|
| x-api-key | Your API key | Yes |
Example Request
curl -H "x-api-key: YOUR_API_KEY" \
  "https://lab.coreviz.io/api/search?q=flower&organizationId=YOUR_ORG_ID"

Response
Status Code: 200 OK
{
  "results": [
    {
      "id": "frameId123",
      "timestamp": "0",
      "blob": "https://blob.vercel-storage.com/.../processed.jpg",
      "width": 1620,
      "height": 1080,
      "media": {
        "id": "mediaId456",
        "name": "purple-flowers.jpg",
        "type": "image",
        "blob": "https://blob.vercel-storage.com/.../original.jpg",
        "width": 2048,
        "height": 1365,
        "duration": null,
        "createdAt": "2026-01-10T08:00:00.000Z"
      },
      "captions": [
        {
          "id": "captionId789",
          "text": "A close-up of purple flowers with green leaves."
        }
      ],
      "objects": [
        {
          "id": "objectId101",
          "type": "object",
          "label": "flower",
          "blob": "",
          "polygon": [
            { "x": 0.81, "y": 0.54 },
            { "x": 1619.19, "y": 0.54 },
            { "x": 1619.19, "y": 1078.38 },
            { "x": 0.81, "y": 1078.38 }
          ]
        }
      ],
      "rank": 0.243,
      "rankingMetrics": {
        "textRank": 0.087,
        "objectEmbeddingRank": 0.156,
        "totalRank": 0.243
      }
    }
  ],
  "query": "flower",
  "limit": 20,
  "totalResults": 20
}

Response Fields
Result Object
| Field | Type | Description |
|---|---|---|
| id | string | Frame ID |
| timestamp | string | Frame timestamp ("0" for images) |
| blob | string | Processed frame URL |
| width / height | integer | Frame dimensions |
| media | object | Parent media item |
| captions | array | AI-generated captions for this frame |
| objects | array | Detected objects with bounding polygons |
| rank | number | Overall relevance score (0–1) |
| rankingMetrics | object | Score breakdown: textRank, objectEmbeddingRank, totalRank |
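Since each result carries a rank score, a common client-side step is to filter out weak matches and sort best-first. A minimal sketch, assuming the response shape documented above; the 0.2 threshold is an arbitrary illustration, not an API default:

```javascript
// Sketch: keep only results above a rank threshold, sorted best-first.
// minRank = 0.2 is a hypothetical cutoff chosen for illustration.
function topMatches(results, minRank = 0.2) {
  return results
    .filter((r) => r.rank >= minRank)
    .sort((a, b) => b.rank - a.rank);
}

// Tiny hand-made sample in the documented response shape:
const sample = [
  { id: 'a', rank: 0.243 },
  { id: 'b', rank: 0.05 },
  { id: 'c', rank: 0.31 },
];
console.log(topMatches(sample).map((r) => r.id)); // logs [ 'c', 'a' ]
```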
Object Detection
| Field | Type | Description |
|---|---|---|
| id | string | Object ID; use with similarToObjectId in the media endpoint |
| type | string | object, face, frame, text, etc. |
| label | string | Detected label (e.g. "flower", "person") |
| polygon | array | Bounding box as 4 {x, y} pixel coordinates |
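Since the polygon is four pixel-coordinate corners, drawing an overlay usually means reducing it to an axis-aligned box first. A sketch, using the polygon from the example response above:

```javascript
// Sketch: reduce a 4-point polygon (pixel coordinates) to an
// axis-aligned bounding box {x, y, width, height} for canvas drawing.
function polygonToBox(polygon) {
  const xs = polygon.map((p) => p.x);
  const ys = polygon.map((p) => p.y);
  const x = Math.min(...xs);
  const y = Math.min(...ys);
  return { x, y, width: Math.max(...xs) - x, height: Math.max(...ys) - y };
}

// Polygon taken from the example response:
const box = polygonToBox([
  { x: 0.81, y: 0.54 },
  { x: 1619.19, y: 0.54 },
  { x: 1619.19, y: 1078.38 },
  { x: 0.81, y: 1078.38 },
]);
console.log(box); // x: 0.81, y: 0.54, width ≈ 1618.38, height ≈ 1077.84
```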
Search Tips
The search engine combines CLIP image embeddings with text ranking against the AI-generated captions:

✅ Effective queries:
- "red car in a parking lot": descriptive and specific
- "person wearing blue shirt": clear subject and attribute
- "sunset over mountains": scene description

❌ Less effective:
- "car": too generic
- "photo": not descriptive
JavaScript Example
const orgId = 'YOUR_ORG_ID';
const apiKey = 'YOUR_API_KEY';

const res = await fetch(
  `https://lab.coreviz.io/api/search?q=${encodeURIComponent('flower')}&organizationId=${orgId}`,
  { headers: { 'x-api-key': apiKey } }
);
if (!res.ok) throw new Error(`Search failed: ${res.status}`);

const { results, totalResults } = await res.json();
console.log(`Showing ${results.length} of ${totalResults} results`);

// Each result is a frame; its detected objects carry IDs you can
// feed back into similarToObjectId for similarity search.
results.forEach((r) => {
  console.log(r.media.name, 'rank:', r.rank);
  r.objects.forEach((o) => console.log(' -', o.label, o.id));
});