sdk.embed() + similarity()

Generate vector embeddings for semantic search and similarity matching

Generate CLIP embeddings for images or text, and compute cosine similarity between two embeddings. Use these together for semantic search, deduplication, and visual similarity features.

sdk.embed()

Signature

sdk.embed(input: string, options?: EmbedOptions): Promise<EmbedResponse>

Parameters

| Parameter    | Type   | Required | Description                                                      |
|--------------|--------|----------|------------------------------------------------------------------|
| input        | string | Yes      | Image URL, base64 data URI, or text string                       |
| options.type | string | No       | "image" or "text". Auto-detected if omitted                      |
| options.mode | string | No       | "api" (default) or "local" (in-browser CLIP via transformers.js) |
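
Because options.type is auto-detected when omitted, the two calls below should behave the same for an input that is clearly an image URL (a minimal sketch, assuming an initialized sdk instance as in the Examples section):

// type is auto-detected when omitted.
const explicit = await sdk.embed('https://example.com/cat.jpg', { type: 'image' });
const detected = await sdk.embed('https://example.com/cat.jpg');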

Returns

Promise<EmbedResponse>:

{
  embedding: number[];  // high-dimensional CLIP vector (512 or 768 dimensions)
}
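
Each call resolves to a single EmbedResponse, so batching is just a matter of issuing calls in parallel. A minimal sketch, assuming the initialized sdk instance from the Examples section below and placeholder URLs:

// Embed several images concurrently and collect the raw vectors.
const urls = ['https://example.com/a.jpg', 'https://example.com/b.jpg'];
const responses = await Promise.all(urls.map((url) => sdk.embed(url, { type: 'image' })));
const vectors = responses.map((r) => r.embedding); // number[][]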

sdk.similarity()

Compute cosine similarity between two embeddings.

Signature

sdk.similarity(embeddingA: number[], embeddingB: number[]): number

Returns

A score from -1 to 1; higher means more similar.
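
For intuition, this is the cosine-similarity formula the score represents: the dot product of the two vectors divided by the product of their magnitudes. A hand-rolled sketch for reference only; the SDK's internal implementation may differ:

function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('Embeddings must have the same dimension');
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];   // dot product
    normA += a[i] * a[i]; // squared magnitude of a
    normB += b[i] * b[i]; // squared magnitude of b
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}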


Examples

import { CoreViz } from '@coreviz/sdk';

const sdk = new CoreViz({ apiKey: process.env.COREVIZ_API_KEY });

// Image-to-text similarity (find if image matches a description)
const { embedding: imageEmbed } = await sdk.embed('https://example.com/shoe.jpg', { type: 'image' });
const { embedding: textEmbed } = await sdk.embed('red sneaker', { type: 'text' });
const score = sdk.similarity(imageEmbed, textEmbed);
console.log(score); // 0.82 — high similarity

// Image-to-image similarity (find duplicates or near-duplicates)
const { embedding: a } = await sdk.embed('https://example.com/photo1.jpg');
const { embedding: b } = await sdk.embed('https://example.com/photo2.jpg');
console.log(sdk.similarity(a, b)); // 0.97 — very similar images

// Local processing (in-browser, no API call)
const { embedding: localEmbed } = await sdk.embed(imageBase64, {
  type: 'image',
  mode: 'local',
});

Use Cases

  • Semantic search — find images by text description (see the sketch after this list)
  • Deduplication — identify near-duplicate photos
  • Related content — suggest similar images
  • Visual clustering — group photos by visual theme
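
A sketch of the semantic-search case: embed the query text once, embed each candidate image, then rank by similarity. The URLs are placeholders and sdk is the instance from the Examples above.

// Score each image against the text query and sort by similarity.
const { embedding: queryVec } = await sdk.embed('sunset over mountains', { type: 'text' });

const imageUrls = [
  'https://example.com/beach.jpg',
  'https://example.com/mountain.jpg',
];

const ranked = await Promise.all(
  imageUrls.map(async (url) => {
    const { embedding } = await sdk.embed(url, { type: 'image' });
    return { url, score: sdk.similarity(queryVec, embedding) };
  }),
);

ranked.sort((a, b) => b.score - a.score); // best match first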

Local mode

mode: 'local' runs CLIP inference entirely in the browser or Node.js via transformers.js. Not supported on React Native / Expo.
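
Given that limitation, one approach is to detect the runtime and fall back to API mode. A sketch assuming the common navigator.product check for React Native (not part of the SDK) and the imageBase64 input from the example above:

// Use in-process CLIP inference where supported, otherwise go through the API.
const isReactNative =
  typeof navigator !== 'undefined' && navigator.product === 'ReactNative';

const { embedding } = await sdk.embed(imageBase64, {
  type: 'image',
  mode: isReactNative ? 'api' : 'local',
});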

Raw API Endpoint

This method calls POST /api/ai/embeddings. See the Vision API Reference for raw HTTP details.
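
For illustration, a sketch of hitting the endpoint directly with fetch. The base URL, the request-body fields, and the bearer-token auth scheme are all assumptions here, not documented on this page; confirm each against the Vision API Reference.

// Hypothetical direct call to POST /api/ai/embeddings.
const BASE_URL = 'https://coreviz.example.com'; // placeholder host, not the real one
const res = await fetch(`${BASE_URL}/api/ai/embeddings`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.COREVIZ_API_KEY}`, // assumed auth scheme
  },
  body: JSON.stringify({ input: 'red sneaker', type: 'text' }), // assumed body shape
});
const { embedding } = await res.json();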