On-premise deployment guide
This guide provides detailed instructions for deploying the Artemis platform in an on-premise Kubernetes environment. It includes prerequisites, supported environments, configuration steps, and image registry details to help you deploy Artemis seamlessly, even in air-gapped setups.
Prerequisites
The minimum and recommended hardware and software requirements for deploying Artemis are listed below.
Hardware
Minimum Requirements
Suitable for basic deployments on a single machine:
| Resources | Requirements |
|---|---|
| CPU | 16 cores |
| Memory | At least 32 GB of RAM |
| Storage | At least 300 GB (SSD/HDD) |
If your system does not meet these requirements, please contact our support team for alternatives.
Recommended Requirements
For optimal performance and scalability:
| Resources | Requirements |
|---|---|
| CPU | 32 cores |
| Memory | At least 64 GB of RAM |
| Storage | At least 300 GB (SSD/HDD) |
Optional: Self-Hosted LLM Support
Running large language models locally requires enhanced resources:
| Resources | Requirements |
|---|---|
| GPU | NVIDIA CUDA-capable GPU / Intel® Data Center GPU Max 1550 / Gaudi 2 |
| CPU | 64 cores |
| Memory | At least 128 GB of RAM |
| Storage | At least 1 TB (SSD/HDD) |
For LLM VRAM requirements, see LLM VRAM Specs
Operating System
- Linux-based distributions are supported for Kubernetes clusters
GPU Support (Optional)
If GPUs are used, install the required GPU drivers and runtime components on each node.
Container Image Registry
For air-gapped deployments, a private container registry is required. Recommended registries:
- Sonatype Nexus
- JFrog Artifactory
- Docker Registry (self-hosted)
We provide:
- Docker Hub for public images
- AWS ECR and private Docker Hub repositories for proprietary images
- Secure credentials for authenticated pulls
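For air-gapped installs, every required image has to be mirrored from the source registry into your internal registry before deployment. A minimal sketch of doing this manually for one image (the image name, tag, and internal registry host/port below are placeholders):

```
# Authenticate against the source registry (using the credentials we provide)
docker login

# Authenticate against your internal registry
docker login your.internal.registry.domain.org:<port>

# Mirror one image: pull, retag for the internal registry, push
docker pull turintech/<image>:<version>
docker tag turintech/<image>:<version> your.internal.registry.domain.org:<port>/turintech/<image>:<version>
docker push your.internal.registry.domain.org:<port>/turintech/<image>:<version>
```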
Single Machine/VM
Artemis is designed for a smooth installation experience, whether your machine has internet access or is air-gapped. Below, we outline the installation steps for Artemis on a single-machine setup, covering both internet-connected and isolated (air-gapped) environments, running on a bare-metal host or VM with Docker or Podman.
Containerized Runtimes
Artemis runs on either Docker or Podman, and both runtimes are supported for installation in offline (air-gapped) environments.
Configuration and Deployment
Artemis provides a collection of Docker Compose files required for deploying this version of the platform. These configuration files are orchestrated using a Makefile, which leverages either Docker Compose or Podman Compose under the hood.
The Makefile offers simple, CLI-like commands to streamline common tasks such as building, starting, and stopping services, making deployment easy and consistent across environments.
Available Commands
- `make` - Display all available commands
- `make info` - Display VM details and super admin credentials
- `make deploy` - Deploy all services
- `make down` - Stop all containers
- `make destroy` - Remove containers and volumes
- `make deploy service=SERVICE_NAME` - Deploy a specific service
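Under the hood, these targets wrap Docker Compose or Podman Compose. Roughly, and only as a sketch (the exact compose files, flags, and environment handling are defined by the shipped Makefile), the core targets map to commands like:

```
# Approximate equivalent of `make deploy` on a Docker host
docker compose --env-file .env.production up -d

# ...or on a Podman host
podman-compose up -d

# Approximate equivalent of `make down`
docker compose --env-file .env.production down
```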
Installation
A. Single Machine/VM (Online)
- Create a VM that meets the hardware requirements
- Download the latest deployment package (credentials will be provided separately)
- Authenticate with the remote registry (credentials will be provided separately):

  ```
  docker login
  ```
- Extract the configuration:

  ```
  mkdir artemis && tar -xvf artemis-<version>.tar -C artemis
  ```
- Run:

  ```
  cd artemis
  make deploy
  make create-users
  ```
- Access Artemis at `http://<machine-ip>:80`
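To sanity-check the deployment before logging in, you can list the running containers and probe the HTTP endpoint from the artemis directory (an informal check, not an official health-check procedure):

```
docker compose ps                 # all services should be listed as Up / healthy
curl -I http://<machine-ip>:80    # expect a successful HTTP response
```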
B. Single Machine/VM (Air-Gapped)
- Set up a VM that meets the hardware requirements
- Download the latest deployment package (credentials will be provided separately)
- Push all required images to your internal image registry
- Authenticate with the local registry:

  ```
  docker login your.internal.registry.domain.org:port
  ```
- Extract the configuration:

  ```
  mkdir artemis && tar -xvf artemis-<version>.tar -C artemis
  ```
- Update `.env.production` with your registry details:

  ```
  REGISTRY=<your.internal.registry.domain.org:port>
  IMAGE_REPOSITORY=<path_to_image>/turintech/
  EVOML_IMAGE_REPOSITORY=<path_to_image>/turintech/
  SERVICES_REGISTRY=<your.internal.registry.domain.org:port>
  ```
- Run:

  ```
  cd artemis
  make deploy
  make create-users
  ```
- Access Artemis at `http://<machine-ip>:80`
Non-root Runtimes
Configure the container socket path in `.env.production`:
- Docker: `DOCKER_SOCK=/run/user/<uid>/docker.sock`
- Podman: `DOCKER_SOCK=/run/user/<uid>/podman/podman.sock`
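For example, a rootless Podman setup typically needs the user-level socket enabled before `DOCKER_SOCK` can point at it (a sketch; the UID depends on the non-root account that runs Artemis):

```
# Find the UID of the non-root user running Artemis
id -u

# Enable the rootless Podman socket for that user
systemctl --user enable --now podman.socket

# Then, in .env.production, set (substituting the UID printed above):
# DOCKER_SOCK=/run/user/<uid>/podman/podman.sock
```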
LLM Providers Configuration
Update `.env.production` with your internal configuration values.
Bedrock
```
ARTEMIS_BEDROCK_ACCESS_KEY=<value>
ARTEMIS_BEDROCK_SECRET_KEY=<value>
ARTEMIS_BEDROCK_REGION=<default value us-east-1>
```
Vertex
```
ARTEMIS_VERTEX_PRIVATE_KEY=<value>
ARTEMIS_VERTEX_PROJECT=<value>
ARTEMIS_VERTEX_EMAIL=<value>
ARTEMIS_VERTEX_SERVICE_ACCOUNT_PATH=<empty or /tmp/files/vertex-key.json>
```
You can authenticate with Vertex in two ways:
- Using a private key: set the `ARTEMIS_VERTEX_PRIVATE_KEY` environment variable directly to your private key.
- Using a service account key file: place the `vertex-key.json` file inside the `keys` folder and set `ARTEMIS_VERTEX_SERVICE_ACCOUNT_PATH=/tmp/files/vertex-key.json`. Make sure this file is mounted into the container at the specified path.
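As a quick check for the service account option, confirm the key file is in the `keys` folder and visible inside the container at the configured path (the container name below is a placeholder):

```
# Place the service account key in the keys folder of the extracted package
cp /path/to/vertex-key.json keys/vertex-key.json

# After `make deploy`, verify the file is mounted where Artemis expects it
docker exec <container-name> ls /tmp/files/vertex-key.json
```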
OPENAI
```
ARTEMIS_OPENAI_URL=<url value>
ARTEMIS_OPENAI_TYPE=openai or azure
ARTEMIS_OPENAI_KEY=<token value>
ARTEMIS_OPENAI_NO_SSL_VERIFY=true or false
ARTEMIS_OPENAI_SSL_CERT=<empty or /tmp/files/openai.ca>
ARTEMIS_OPENAI_EMBEDDING_URL=<url value>
```
When using OpenAI behind a proxy with SSL verification enabled (`ARTEMIS_OPENAI_NO_SSL_VERIFY=false`), you can specify a custom internal CA certificate to verify SSL connections.
To do this:
- Place your custom CA certificate file (e.g., `openai.ca`) in the `keys` folder.
- Set the `ARTEMIS_OPENAI_SSL_CERT` environment variable to `ARTEMIS_OPENAI_SSL_CERT=/tmp/files/openai.ca`.
- Ensure the file is mounted into the container at the specified path.
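Before deploying, you can also verify that the CA file validates your proxy's certificate chain (a sketch; substitute your proxy host and port):

```
# Should report "Verify return code: 0 (ok)" if the CA matches the proxy certificate
openssl s_client -connect <proxy-host>:443 -CAfile keys/openai.ca </dev/null
```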
Anthropic

```
ARTEMIS_ANTHROPIC_KEY=<token value>
```

Cohere

```
ARTEMIS_COHERE_KEY=<token value>
```

Deepseek

```
ARTEMIS_DEEPSEEK_KEY=<token value>
```
Update
- Download the latest deployment package (credentials will be provided separately)
- Air-gapped only: push all required images to your internal image registry
- Extract the configuration and replace all files in the existing artemis folder except the `.env.production` file, the `.env.secrets` file, and the `keys` folder (see the backup sketch after these steps):

  ```
  tar --exclude='./.env.production' --exclude='./.env.secrets' --exclude='./keys' -xvf artemis-<version>.tar.gz
  ```
- Run:

  ```
  cd artemis
  make deploy
  ```
- Access Artemis at `http://<machine-ip>:80`

Note: Any new environment variables must be added to `.env.production` for persistence.
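As a precaution before extracting the new package, you may want to back up the files and folders that must survive the update (a simple sketch, run from inside the artemis folder):

```
# Keep copies of the environment files and keys before replacing everything else
cp -a .env.production .env.production.bak
cp -a .env.secrets .env.secrets.bak
cp -a keys keys.bak
```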
Kubernetes Cluster
Artemis is designed for a smooth installation experience, whether your cluster has internet access or is air-gapped. Below, we outline the installation steps for Artemis on Kubernetes cluster setups, covering both internet-connected and isolated (air-gapped) environments.
Configuration files and Deployment
Artemis uses a mix of public Helm charts (mainly from Bitnami) and proprietary charts developed in-house for its core components, which are embedded directly into the CLI.
To simplify image transfers to your internal registry, we provide a deployment CLI that wraps around Skopeo, making the process fast and reliable.
Built with Node.js, the CLI includes kubectl, helm, and skopeo, streamlining configuration and deployment to Kubernetes. It’s packaged as a cross-platform executable, ensuring compatibility across operating systems. Our goal is to make Artemis deployment straightforward and accessible for all users.
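For reference, the transfer the CLI automates is essentially a Skopeo copy between registries; done manually, one image would be moved roughly like this (registry hosts, credentials, and the image path are placeholders):

```
skopeo copy \
  --src-creds <user>:<token> \
  --dest-creds <user>:<token> \
  docker://docker.io/turintech/<image>:<version> \
  docker://your.internal.registry.domain.org:<port>/turintech/<image>:<version>
```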
Dynamic configurations
Our deployment is fully dynamic and flexible; it can be customized by editing the `.configrc.json` file.
- OpenShift:

  ```
  "OVERRIDE": {
    "OPENSHIFT": "true"
  }
  ```
- External service integrations:

  ```
  "OVERRIDE": {
    "MINIO_HOST": "<host>",
    "MONGO_HOST": "<host>",
    "MINIO_PORT": "<port>",
    "POSTGRES_HOST": "<host>",
    ...
  },
  "EXCLUDE": ["minio", "redis", "postgresql", "mongo"]
  ```
Tip: Use these configurations to customize your deployment for specific environments or external integrations.
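Because `.configrc.json` is plain JSON, a quick syntax check after editing can catch mistakes before running the CLI (either tool works if installed on your workstation):

```
jq . .configrc.json                    # pretty-print and validate with jq
python3 -m json.tool .configrc.json    # or validate with Python's built-in json.tool
```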
- LLM Providers Configuration
- Bedrock:

  ```
  "OVERRIDE": {
    "ARTEMIS_BEDROCK_ACCESS_KEY": "<value>",
    "ARTEMIS_BEDROCK_SECRET_KEY": "<value>",
    "ARTEMIS_BEDROCK_REGION": "<default value us-east-1>"
  }
  ```
- Vertex:

  ```
  "OVERRIDE": {
    "ARTEMIS_VERTEX_PRIVATE_KEY": "<value>",
    "ARTEMIS_VERTEX_PROJECT": "<value>",
    "ARTEMIS_VERTEX_EMAIL": "<value>",
    "VERTEX_SERVICE_ACCOUNT_SECRET_NAME": "<value>"
  }
  ```
You can authenticate with Vertex in two ways:
- Using a private key: set the `ARTEMIS_VERTEX_PRIVATE_KEY` value to your private key.
- Using a service account key file:
  - Create a secret called `vertex-key`
  - Add a property called `vertex-key.json` and add your JSON credentials
  - Set `VERTEX_SERVICE_ACCOUNT_SECRET_NAME=vertex-key`
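For example, the `vertex-key` secret can be created directly from a local key file with `kubectl` (add `-n <namespace>` to match the namespace Artemis is deployed into):

```
# Creates a secret named vertex-key with a vertex-key.json entry holding the credentials
kubectl create secret generic vertex-key \
  --from-file=vertex-key.json=/path/to/vertex-key.json
```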
- OPENAI:

  ```
  "OVERRIDE": {
    "ARTEMIS_OPENAI_URL": "<value>",
    "ARTEMIS_OPENAI_TYPE": "openai or azure",
    "ARTEMIS_OPENAI_KEY": "<value>",
    "ARTEMIS_OPENAI_NO_SSL_VERIFY": "true or false",
    "ARTEMIS_OPENAI_EMBEDDING_URL": "<value>",
    "OPENAI_SSL_CA_SECRET_NAME": "<value>"
  }
  ```
When using OpenAI behind a proxy with SSL verification enabled (`ARTEMIS_OPENAI_NO_SSL_VERIFY=false`), you can specify a custom internal CA certificate to verify SSL connections.
To do this:
- Create a secret called `openai-certs`
- Add a property called `ca.crt` and add your cert value
- Set `OPENAI_SSL_CA_SECRET_NAME=openai-certs`
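Similarly, the `openai-certs` secret can be created directly from your CA file (add `-n <namespace>` as needed):

```
# Creates a secret named openai-certs with a ca.crt entry containing the CA certificate
kubectl create secret generic openai-certs \
  --from-file=ca.crt=/path/to/openai.ca
```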
- Anthropic:

  ```
  "OVERRIDE": {
    "ARTEMIS_ANTHROPIC_KEY": "<value>"
  }
  ```
- Cohere:

  ```
  "OVERRIDE": {
    "ARTEMIS_COHERE_KEY": "<token value>"
  }
  ```
- Deepseek:

  ```
  "OVERRIDE": {
    "ARTEMIS_DEEPSEEK_KEY": "<token value>"
  }
  ```
Installation
A. Kubernetes Cluster (Online)
- Provision a cluster with at least one node that meets the hardware requirements
- Get the latest Artemis deployment CLI (credentials will be provided separately)
- Run the following command to verify and configure connectivity to your cluster:

  ```
  ./cli config cluster
  ```
- Log in to the remote image registry using your credentials:

  ```
  ./cli config registry-auth --env production -u <username> -p <password/token>
  ```
- Deploy the Artemis platform to your cluster:

  ```
  ./cli deploy up --env production
  ```
- Once deployed, access the Artemis platform at:
  - `http://<machine-ip>:80` (for single-node setups)
  - `http://<load-balancer-exposed-ip>:80` (for clusters with a load balancer)
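After the deploy completes, a quick way to confirm the workloads are up is to list the pods and services (add `-n <namespace>` if Artemis was installed into a dedicated namespace):

```
kubectl get pods    # Artemis pods should reach Running / Completed
kubectl get svc     # shows the exposed service and port
```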
B. Kubernetes Cluster (Air-Gapped)
- Set up a Kubernetes cluster with at least one node that meets the hardware requirements
- Get the latest Artemis deployment CLI (credentials will be provided separately)
- Ensure all required container images are available in your internal image registry:
  - Manual transfer: pull and push each image manually
  - Using the CLI:

    ```
    ./cli transfer --dest <your-internal-registry>
    ```
- Run the following command to verify and configure connectivity to your Kubernetes cluster:

  ```
  ./cli config cluster
  ```
- Log in to your internal (offline) image registry using the CLI:

  ```
  ./cli config registry-auth --env offline -u <username> -p <password/token>
  ```
- Edit the `.configrc.json` file to reflect your internal deployment configuration:

  ```
  "OVERRIDE": {
    "REGISTRY": "your.internal.registry.domain.org:port",
    "SERVICES_REGISTRY": "your.internal.registry.domain.org:port"
  }
  ```
- Deploy the Artemis platform to your cluster:

  ```
  ./cli deploy up --env offline
  ```
- Once deployed, access the Artemis platform at:
  - `http://<machine-ip>:80` (for single-node setups)
  - `http://<load-balancer-exposed-ip>:80` (for clusters with a load balancer)
Update
- Get the latest Artemis deployment CLI (credentials will be provided separately). Ensure the `.configrc.json` file is present in the same working directory as the CLI.
- Air-gapped only: ensure all required container images are available in your internal image registry
- Delete or rename the existing `assets` folder. It will be automatically recreated by the CLI during deployment.
- Update the Artemis platform on your cluster:
  - Only Artemis components:

    ```
    ./cli deploy up -g worker -g component
    ```
  - Any component:

    ```
    ./cli deploy up
    ```
- Once updated, access the Artemis platform at:
  - `http://<machine-ip>:80` (for single-node setups)
  - `http://<load-balancer-exposed-ip>:80` (for clusters with a load balancer)
Browser Support
Artemis is tested and optimized for Chrome, Firefox, and Edge. For the best experience, we recommend the most recent version of Chrome. While Firefox and Edge are supported, performance may vary, and issues may arise with older versions of these browsers or with other web browsers. Keep your browser up to date for the best experience.