Embeddings enable semantic similarity search by representing text as high-dimensional vectors. Keyword search, by contrast, returns results based on lexical matches and is often sufficient for simple retrieval tasks such as FAQ matching, deterministic filtering, metadata lookup, or rule-based routing. When embeddings are applied to these low-complexity scenarios, organizations pay for compute to generate embeddings, storage for vector columns, and compute-heavy cosine similarity searches, yet gain no improvement in accuracy or user experience. In Snowflake, this can also increase warehouse load and query runtime.
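To make the lexical path concrete, here is a minimal sketch of an FAQ lookup handled with plain keyword filtering in Snowflake SQL. The `faq` table, its columns, and the search phrase are illustrative assumptions, not taken from the original text.

```sql
-- Hypothetical FAQ table: a lexical match is enough here,
-- so no embedding generation or vector column is involved.
CREATE OR REPLACE TABLE faq (
    question STRING,
    answer   STRING
);

-- Deterministic, keyword-based retrieval: a simple ILIKE filter
-- runs on the warehouse with no embedding compute at all.
SELECT question, answer
FROM faq
WHERE question ILIKE '%reset%password%'
LIMIT 1;
```

For exact-match cases such as metadata lookup or rule-based routing, an equality predicate on an indexed or clustered column is even cheaper than the pattern match shown above.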
Embedding generation and vector search operations consume Snowflake compute credits. Larger embeddings increase storage requirements and query processing costs. When embeddings are not necessary, both compute and storage consumption rise needlessly.
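For contrast, the sketch below shows the embedding-based equivalent of the same FAQ lookup, assuming Snowflake Cortex embedding functions and a 768-dimension vector column; the table name, model choice, and query text are illustrative assumptions.

```sql
-- Same FAQ lookup via embeddings: every row stores a 768-dimension
-- vector, and each query embeds the incoming text and scans with
-- cosine similarity, consuming credits and storage that the lexical
-- version avoids.
CREATE OR REPLACE TABLE faq_embedded (
    question     STRING,
    answer       STRING,
    question_vec VECTOR(FLOAT, 768)
);

-- Populating the vector column invokes the embedding model once per row.
INSERT INTO faq_embedded
SELECT
    question,
    answer,
    SNOWFLAKE.CORTEX.EMBED_TEXT_768('snowflake-arctic-embed-m', question)
FROM faq;

-- Query time: embed the user question, then rank rows by cosine similarity.
SELECT
    question,
    answer,
    VECTOR_COSINE_SIMILARITY(
        question_vec,
        SNOWFLAKE.CORTEX.EMBED_TEXT_768('snowflake-arctic-embed-m',
                                        'How do I reset my password?')
    ) AS similarity
FROM faq_embedded
ORDER BY similarity DESC
LIMIT 1;
```

Every row insert pays for an embedding call, the vector column adds roughly 3 KB of storage per row at 768 float dimensions, and each query performs both an embedding call and a similarity scan, which is the extra compute and storage described above.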