Built for AI
High-performance vector searches accelerated by AVX-512.
Zero-cost direct ingest on S3 without any indexing.
Seamlessly combine your vectors and metadata.
Leverage Sneller’s handwritten SIMD/AVX-512 assembly.
Query TBs per second using standard SQL.
No need to manage infrastructure.
Scale to petabyte-size tables on S3 object storage.
Low-latency (~3 sec) direct ingest from S3.
Keep your data in your own S3 buckets.
Ingest any (JSON) data without ETL or defining schemas.
Robust support for SQL with many useful extensions.
Simple REST API for all your SQL queries.
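To illustrate the REST-based workflow, the sketch below composes a SQL query and wraps it in an HTTPS POST request using only the Python standard library. The endpoint URL, request body shape, and auth header are placeholder assumptions for illustration, not Sneller's documented API; consult the official API reference for the real values.

```python
import json
import os
import urllib.request

# Hypothetical endpoint -- a placeholder, not Sneller's real API URL.
ENDPOINT = "https://api.sneller.example.invalid/query"

def build_query_request(sql: str, token: str) -> urllib.request.Request:
    """Wrap a SQL string in an HTTP POST request (assumed JSON body)."""
    body = json.dumps({"query": sql}).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",  # assumed auth scheme
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_query_request(
    "SELECT COUNT(*) FROM logs WHERE status = 'error'",
    os.environ.get("SNELLER_TOKEN", "dummy-token"),
)
# Actually sending the request requires real credentials;
# here we only inspect what would be sent.
print(req.get_method(), req.full_url)
```

The same request can be issued from any HTTP client (curl, a Lambda function, a dashboard backend), which is the point of exposing queries over plain REST.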
Built by developers, for developers.
The Sneller development team regularly posts in-depth information about the product and its internals.
by Phil Hofer on August 16, 2023
Sneller’s serverless vector search eliminates the need for capacity planning and expensive migrations.
by Frank Wessels on August 1, 2023
See how Sneller powers a Grafana dashboard with 1 billion records from the GitHub archive data.
by Phil Hofer on June 21, 2023
Learn how Sneller makes it easy to perform semantic search using SQL for AI-powered applications.
by Phil Hofer on June 19, 2023
Learn how we “eat our own dogfood” by using Sneller SQL to monitor Sneller Cloud.
Copyright (c) 2021-2024 Sneller Inc. All rights reserved.