Pricing

The ConfidentialMind platform can be used from our cloud or deployed to your own environment. We offer fair pricing for your own secure AI platform, irrespective of where you deploy it.

CM Cloud – Trial

Try the ConfidentialMind Platform in our cloud before you buy.

Free

Core features:
  • 1 endpoint
  • Limited API calls

CM Cloud – Starter

Use ConfidentialMind LLM and AI system endpoints in our cloud.

From €500/endpoint/month

Core features:
  • Unlimited API calls
  • Standard support: 72h response time guarantee
  • Great for evaluating the ConfidentialMind Platform
  • Support SLA: best effort
  • Includes production-grade RAG and semantic search AI systems
  • Possible to migrate all data and components to your own environment later

CM Cloud – Scale

ConfidentialMind Platform as a managed service in our cloud.

Custom pricing

Core features:
  • Dedicated GPU resources included
  • Priority support: 24h response time guarantee
  • Ideal for organizations that do not want to self-host
  • Support SLA: production
  • Includes production-grade RAG and semantic search AI systems
  • Possible to migrate all data and components to your own environment later

Your environment – Enterprise

Full ConfidentialMind Platform, deployable anywhere.

Custom pricing

Core features:
  • Deploy to on-prem, private/public cloud, VPC, or edge
  • Priority support: 24h response time guarantee
  • Tailored pricing based on cluster size
  • Custom integrations and priority onboarding
  • Free migration assistance between environments
  • Fully managed service available separately
  • Support SLA: custom

Frequently Asked Questions

What is an endpoint?

An endpoint is a service in the stack that you can connect to using an API key and a URL. It can be a simple model inference or a more complex RAG (Retrieval-Augmented Generation) or agent system.

Each endpoint has a unique ID that is part of the URL path, which routes requests to the correct service. Most of these services use OpenAI-like chat completion APIs. Some endpoint types, such as RAG systems, also let you upload files, and those files are then included in the chat completions.
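
For illustration, here is a minimal sketch of what calling such an endpoint could look like from Python, assuming the endpoint accepts OpenAI-compatible chat completion requests; the host, endpoint ID, model name, and API key below are placeholders, not actual platform values.

    # Minimal sketch: calling a ConfidentialMind endpoint with the OpenAI Python client.
    # All values in angle brackets are placeholders you would replace with your own.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://<your-stack-host>/<endpoint-id>/v1",  # the endpoint ID is part of the URL path
        api_key="<your-endpoint-api-key>",                      # API key issued for this endpoint
    )

    # Send a chat completion request to the model or AI system behind the endpoint.
    response = client.chat.completions.create(
        model="<model-name>",
        messages=[{"role": "user", "content": "Summarize our onboarding documentation."}],
    )
    print(response.choices[0].message.content)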

Where do you host the ConfidentialMind platform for the cloud packages?

We have partnered with leading data center providers around the world to host our platform in the following locations: Iceland, Sweden, Finland (coming soon), and Microsoft Azure.

What is your managed service?

With our managed service, available with the Scale license, we install, manage, and maintain the platform for you in our partners' cloud environments, so you don't have to worry about purchasing hardware, fixing issues, or performing updates.

With the Enterprise license, this option is also available at an additional cost. In that case, we will help you set up and manage the platform in any environment.

What is the benefit of the ConfidentialMind platform?

Our platform helps you build and deploy complete AI systems, not just LLMs.

Unlike others, we do not just hand you a platform and leave you to figure out how to build your first RAG system, semantic search, or other applications. We provide everything you need to create these assets, including one-click deployment of LLM endpoints, databases, and storage, so you can launch your first PoC in minutes, not months.

With secure data connectors, you can connect these systems to your private data, and with simple APIs you can bring their capabilities into your own offerings or legacy tools.

What data sources can I integrate into my applications with the AI platform?

The platform supports most common file formats, such as PDFs, HTML, and plain text, as well as repositories. It also integrates with SQL databases, Windows (SMB) file shares, and S3 storage through API connectivity. Additionally, you can add custom data sources via the API.

What are the use cases for the AI platform?

Here are the things you can do with our AI platform:

  • Build AI agents
  • Create internal semantic search tools
  • Create AI chatbots and assistants
  • Bring genAI capabilities to legacy applications
  • Create internal genAI applications for your team
  • Add an AI backend to your existing digital experiences
  • Modernize applications with AI features
  • And much more...

Where can I run the platform with the enterprise license?

You can deploy it anywhere you can run Kubernetes: on-prem, on your existing virtual machines, bare-metal servers, public cloud, private cloud, or VPC.

Note: Developers don't need extensive Kubernetes experience, as we have abstracted away its complexity. You won't even notice it is the underlying infrastructure.

How can I get started?

Book a demo, and our team will show you how the platform works and how it can help with your use case.


About

ConfidentialMind
Otakaari 27, 02150 Espoo, Finland
+358 50 302 6510

Email us

info (@) confidentialmind.com