
Searching Your Media

Semantic search across your organization's media using natural language

Prefer the collection-scoped search

The recommended way to search is the q parameter on GET /api/dataset/{collectionId}/media — it returns full media objects, supports all browse filters, and is paginated. See Browse & Search Media.

The global search endpoint documented here (GET /api/search) returns frame-level results across your entire organization and is useful when you don't know which collection to search in.

Global Search Endpoint

Search across all media in your organization using natural language. Returns frame-level results ranked by semantic relevance.

Endpoint

GET /api/search

Query Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| `q` | string | Yes | Natural language search query |
| `organizationId` | string | Yes | Your organization ID |
| `limit` | integer | No | Max results to return (default: 20) |

Headers

| Header | Value | Required |
|---|---|---|
| `x-api-key` | Your API key | Yes |

Example Request

```bash
curl -H "x-api-key: YOUR_API_KEY" \
  "https://lab.coreviz.io/api/search?q=flower&organizationId=YOUR_ORG_ID"
```

Response

Status Code: 200 OK

```json
{
  "results": [
    {
      "id": "frameId123",
      "timestamp": "0",
      "blob": "https://blob.vercel-storage.com/.../processed.jpg",
      "width": 1620,
      "height": 1080,
      "media": {
        "id": "mediaId456",
        "name": "purple-flowers.jpg",
        "type": "image",
        "blob": "https://blob.vercel-storage.com/.../original.jpg",
        "width": 2048,
        "height": 1365,
        "duration": null,
        "createdAt": "2026-01-10T08:00:00.000Z"
      },
      "captions": [
        {
          "id": "captionId789",
          "text": "A close-up of purple flowers with green leaves."
        }
      ],
      "objects": [
        {
          "id": "objectId101",
          "type": "object",
          "label": "flower",
          "blob": "",
          "polygon": [
            { "x": 0.81, "y": 0.54 },
            { "x": 1619.19, "y": 0.54 },
            { "x": 1619.19, "y": 1078.38 },
            { "x": 0.81, "y": 1078.38 }
          ]
        }
      ],
      "rank": 0.243,
      "rankingMetrics": {
        "textRank": 0.087,
        "objectEmbeddingRank": 0.156,
        "totalRank": 0.243
      }
    }
  ],
  "query": "flower",
  "limit": 20,
  "totalResults": 20
}
```

Response Fields

Result Object

| Field | Type | Description |
|---|---|---|
| `id` | string | Frame ID |
| `timestamp` | string | Frame timestamp (`"0"` for images) |
| `blob` | string | Processed frame URL |
| `width` / `height` | integer | Frame dimensions |
| `media` | object | Parent media item |
| `captions` | array | AI-generated captions for this frame |
| `objects` | array | Detected objects with bounding polygons |
| `rank` | number | Overall relevance score (0–1) |
| `rankingMetrics` | object | Score breakdown: `textRank`, `objectEmbeddingRank`, `totalRank` |
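Because `rank` already combines the sub-scores, a client can post-filter or re-sort results on it. A minimal sketch (the 0.2 cutoff is an arbitrary example value, not an API default):

```javascript
// Keep only results above a relevance threshold, sorted best-first.
// The 0.2 cutoff is an arbitrary example, not anything the API prescribes.
function topMatches(results, minRank = 0.2) {
  return results
    .filter(r => r.rank >= minRank)
    .sort((a, b) => b.rank - a.rank);
}

// Shapes mirror the example response above.
const sample = [
  { id: 'a', rank: 0.243 },
  { id: 'b', rank: 0.12 },
  { id: 'c', rank: 0.31 },
];
console.log(topMatches(sample).map(r => r.id)); // [ 'c', 'a' ]
```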

Object Detection

| Field | Type | Description |
|---|---|---|
| `id` | string | Object ID — use with `similarToObjectId` in the media endpoint |
| `type` | string | `object`, `face`, `frame`, `text`, etc. |
| `label` | string | Detected label (e.g. `"flower"`, `"person"`) |
| `polygon` | array | Bounding box as 4 `{x, y}` pixel coordinates |
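The polygon points are frame pixel coordinates, so an axis-aligned bounding box falls out of their min/max values; the object `id` can then be passed back as `similarToObjectId` on the collection-scoped media endpoint. A sketch (`YOUR_COLLECTION_ID` is a placeholder, and the exact response of the similarity query is not shown here):

```javascript
// Derive an axis-aligned bounding box from the 4-point polygon
// (pixel coordinates, as in the example response above).
function boundingBox(polygon) {
  const xs = polygon.map(p => p.x);
  const ys = polygon.map(p => p.y);
  const x = Math.min(...xs);
  const y = Math.min(...ys);
  return { x, y, width: Math.max(...xs) - x, height: Math.max(...ys) - y };
}

const polygon = [
  { x: 0.81, y: 0.54 },
  { x: 1619.19, y: 0.54 },
  { x: 1619.19, y: 1078.38 },
  { x: 0.81, y: 1078.38 },
];
console.log(boundingBox(polygon));

// Find visually similar objects via the collection-scoped endpoint.
const url = `https://lab.coreviz.io/api/dataset/YOUR_COLLECTION_ID/media` +
  `?similarToObjectId=${encodeURIComponent('objectId101')}`;
```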

Search Tips

The search engine combines CLIP embeddings + text ranking against AI captions:

✅ Effective queries:

  • "red car in a parking lot" — descriptive and specific
  • "person wearing blue shirt" — clear subject and attribute
  • "sunset over mountains" — scene description

❌ Less effective:

  • "car" — too generic
  • "photo" — not descriptive

JavaScript Example

```javascript
const orgId = 'YOUR_ORG_ID';
const apiKey = 'YOUR_API_KEY';

const res = await fetch(
  `https://lab.coreviz.io/api/search?q=${encodeURIComponent('flower')}&organizationId=${orgId}`,
  { headers: { 'x-api-key': apiKey } }
);
if (!res.ok) throw new Error(`Search failed: ${res.status}`);
const { results, totalResults } = await res.json();

console.log(`Showing ${results.length} of ${totalResults} results`);
results.forEach(r => {
  console.log(r.media.name, 'rank:', r.rank);
  r.objects.forEach(o => console.log(' -', o.label, o.id));
});
```