On-premise deployment guide

This guide provides detailed instructions for deploying the Artemis platform in an on-premise Kubernetes environment. It includes prerequisites, supported environments, configuration steps, and image registry details to help you deploy Artemis seamlessly, even in air-gapped setups.

Prerequisites

The minimum and recommended hardware and software requirements for deploying Artemis are listed below.

Hardware

Minimum Requirements

Suitable for basic deployments on a single machine:

Resources    Requirements
CPU          16 Cores
Memory       At least 32GB of RAM
Storage      At least 300GB (SSD/HDD)

If your system does not meet these requirements, please contact our support team for alternatives.

Recommended Requirements

For optimal performance and scalability:

Resources    Requirements
CPU          32 Cores
Memory       At least 64GB of RAM
Storage      At least 300GB (SSD/HDD)

Optional: Self-Hosted LLM Support

Running large language models locally requires enhanced resources:

Resources    Requirements
GPU          Nvidia CUDA-capable GPU / Intel® Data Center GPU Max 1550 / Gaudi 2
CPU          64 Cores
Memory       At least 128GB of RAM
Storage      At least 1TB (SSD/HDD)

For LLM VRAM requirements, see LLM VRAM Specs.

Operating System

  • Linux-based distributions are supported for Kubernetes clusters

GPU Support (Optional)

If you plan to run self-hosted LLMs, install the appropriate GPU drivers and container runtime GPU support on each node.

Container Image Registry

For air-gapped deployments, a private container registry is required. Recommended registries:

  • Sonatype Nexus
  • JFrog Artifactory
  • Docker Registry (self-hosted)

We provide:

  • Docker Hub for public images
  • AWS ECR and private Docker Hub repositories for proprietary images
  • Secure credentials for authenticated pulls

Single Machine/VM

Artemis is designed for a smooth installation experience, whether your machine has internet access or is air-gapped. Below, we outline the installation steps for Artemis on a single-machine setup, covering both internet-connected and isolated (air-gapped) environments. The platform can run on a bare-metal host or VM using Docker or Podman.

Containerized Runtimes

Artemis can be installed with either Docker or Podman as the container runtime, including in offline (air-gapped) environments.

Configuration and Deployment

Artemis provides a collection of Docker Compose files required for deploying this version of the platform. These configuration files are orchestrated using a Makefile, which leverages either Docker Compose or Podman Compose under the hood.

The Makefile offers simple, CLI-like commands that streamline common tasks such as building, starting, and stopping services, making deployment easy and consistent across environments.

Available Commands

  • make - Display all available commands
  • make info - Display VM details and super admin credentials
  • make deploy - Deploy all services
  • make down - Stop all containers
  • make destroy - Remove containers and volumes
  • make deploy service=SERVICE_NAME - Deploy a specific service
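
Under the hood, make deploy essentially invokes the compose tooling with the production environment file. Roughly, and only as a sketch of the idea rather than the exact shipped recipe:

# approximately what `make deploy` runs, assuming the Docker runtime;
# with Podman the equivalent call goes through podman-compose
docker compose --env-file .env.production up -d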

Installation

A. Single Machine/VM (Online)

  1. Create VM per hardware requirements
  2. Download the latest deployment package (Credentials will be provided separately.)
  3. Authenticate with remote registry: docker login (Credentials will be provided separately.)
  4. Extract configuration: mkdir artemis && tar -xvf artemis-<version>.tar -C artemis
  5. Run:
    • cd artemis
    • make deploy
    • make create-users
  6. Access Artemis: http://<machine-ip>:80
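
Put together, an online installation is roughly the following sequence (placeholders as above; credentials are provided separately):

docker login
mkdir artemis && tar -xvf artemis-<version>.tar -C artemis
cd artemis
make deploy
make create-users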

B. Single Machine/VM (AirGapped)

  1. Set up VM per hardware requirements
  2. Download the latest deployment package (Credentials will be provided separately.)
  3. Push all the images required to your internal image registry
  4. Authenticate with local registry: docker login your.internal.registry.domain.org:port
  5. Extract configuration: mkdir artemis && tar -xvf artemis-<version>.tar -C artemis
  6. Update .env.production with registry details
REGISTRY=<your.internal.registry.domain.org:port>
IMAGE_REPOSITORY=<path_to_image>/turintech/
EVOML_IMAGE_REPOSITORY=<path_to_image>/turintech/
SERVICES_REGISTRY=<your.internal.registry.domain.org:port>
  7. Run:
    • cd artemis
    • make deploy
    • make create-users
  8. Access Artemis: http://<machine-ip>:80
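
Put together, an air-gapped installation is roughly the following (registry host and port are placeholders for your environment; the required images must already be available in your internal registry):

docker login your.internal.registry.domain.org:port
mkdir artemis && tar -xvf artemis-<version>.tar -C artemis
cd artemis
# edit .env.production: REGISTRY, IMAGE_REPOSITORY, EVOML_IMAGE_REPOSITORY, SERVICES_REGISTRY
make deploy
make create-users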

Non-root Runtimes

Configure the socket path in .env.production:

  • Docker: DOCKER_SOCK=/run/user/<uid>/docker.sock
  • Podman: DOCKER_SOCK=/run/user/<uid>/podman/podman.sock
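
For example, for a user with UID 1000 (run id -u to check your own), the entry in .env.production would be one of:

# rootless Docker
DOCKER_SOCK=/run/user/1000/docker.sock
# rootless Podman (use instead of the line above)
# DOCKER_SOCK=/run/user/1000/podman/podman.sock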

LLM Providers Configuration

Update .env.production with your internal configuration values.

Bedrock
ARTEMIS_BEDROCK_ACCESS_KEY=<value>
ARTEMIS_BEDROCK_SECRET_KEY=<value>
ARTEMIS_BEDROCK_REGION=<default value us-east-1>
Vertex
ARTEMIS_VERTEX_PRIVATE_KEY=<value>
ARTEMIS_VERTEX_PROJECT=<value>
ARTEMIS_VERTEX_EMAIL=<value>
ARTEMIS_VERTEX_SERVICE_ACCOUNT_PATH=<empty or /tmp/files/vertex-key.json>

You can authenticate with Vertex in two ways:

  1. Using a Private Key: Set the ARTEMIS_VERTEX_PRIVATE_KEY environment variable directly with your private key.

  2. Using a Service Account Key File: Place the vertex-key.json file inside the keys folder and set ARTEMIS_VERTEX_SERVICE_ACCOUNT_PATH=/tmp/files/vertex-key.json. Make sure this file is mounted into the container at the specified path.
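
For option 2, a minimal sequence might look like the following (the source path of the key file is illustrative; the keys folder is assumed to be the one mounted at /tmp/files inside the container):

# copy the service account key into the mounted keys folder
cp /path/to/vertex-key.json ./keys/vertex-key.json
# then, in .env.production:
# ARTEMIS_VERTEX_SERVICE_ACCOUNT_PATH=/tmp/files/vertex-key.json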

OpenAI
ARTEMIS_OPENAI_URL=<url value>
ARTEMIS_OPENAI_TYPE=openai or azure
ARTEMIS_OPENAI_KEY=<token value>
ARTEMIS_OPENAI_NO_SSL_VERIFY=true or false
ARTEMIS_OPENAI_SSL_CERT=<empty or /tmp/files/openai.ca>
ARTEMIS_OPENAI_EMBEDDING_URL=<url value>

When using OpenAI behind a proxy with SSL verification enabled (ARTEMIS_OPENAI_NO_SSL_VERIFY=false), you can specify a custom internal CA certificate to verify SSL connections.

To do this:

  1. Place your custom CA certificate file (e.g., openai.ca) in the keys folder.

  2. Set the ARTEMIS_OPENAI_SSL_CERT environment variable to: ARTEMIS_OPENAI_SSL_CERT=/tmp/files/openai.ca

  3. Ensure the file is mounted into the container at the specified path.
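
For example (the local path of the CA certificate is illustrative; the keys folder is assumed to be mounted at /tmp/files inside the container):

# copy your internal CA certificate into the mounted keys folder
cp /path/to/internal-ca.pem ./keys/openai.ca
# then, in .env.production:
# ARTEMIS_OPENAI_NO_SSL_VERIFY=false
# ARTEMIS_OPENAI_SSL_CERT=/tmp/files/openai.ca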

Anthropic
ARTEMIS_ANTHROPIC_KEY=<token value>
Cohere
ARTEMIS_COHERE_KEY=<token value>
Deepseek
ARTEMIS_DEEPSEEK_KEY=<token value>

Update

  1. Download the latest deployment package (Credentials will be provided separately.)
  2. AirGapped only: Push all the images required to your internal image registry
  3. Extract the configuration with tar --exclude='./.env.production' --exclude='./.env.secrets' --exclude='./keys' -xvf artemis-<version>.tar.gz, replacing all files in the existing artemis folder except:
    • .env.production file
    • .env.secrets file
    • keys folder
  4. Run:
    • cd artemis
    • make deploy
  5. Access Artemis: http://<machine-ip>:80

Note: Any new environment variables must be added to .env.production for persistence.
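
Put together, an update might look like the following (assuming the new package sits next to the existing artemis folder and extracts the same way as the initial install):

tar --exclude='./.env.production' --exclude='./.env.secrets' --exclude='./keys' \
    -xvf artemis-<version>.tar.gz -C artemis
cd artemis
make deploy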

Kubernetes Cluster

Artemis is designed for a smooth installation experience, whether your machine has internet access or is in an air-gapped environment. Below, we outline the installation steps for Artemis on Kubernetes cluster setups, covering scenarios with internet connectivity and isolated (air-gapped) environments.

Configuration Files and Deployment

Artemis uses a mix of public Helm charts (mainly from Bitnami) and proprietary charts developed in-house for its core components, which are embedded directly into the CLI.

To simplify image transfers to your internal registry, we provide a deployment CLI that wraps around Skopeo, making the process fast and reliable.

Built with Node.js, the CLI includes kubectl, helm, and skopeo, streamlining configuration and deployment to Kubernetes. It’s packaged as a cross-platform executable, ensuring compatibility across operating systems. Our goal is to make Artemis deployment straightforward and accessible for all users.

Dynamic Configurations

Our deployment is fully flexible: you can customize it by editing the .configrc.json file.

  1. OpenShift
"OVERRIDE": {
"OPENSHIFT": "true"
}
  2. External service integrations:
"OVERRIDE": {
"MINIO_HOST": "<host>",
"MONGO_HOST": "<host>",
"MINIO_PORT": "<port>",
"POSTGRES_HOST": "<host>",
...
},
"EXCLUDE": ["minio", "redis", "postgresql", "mongo"]

Tip: Use these configurations to customize your deployment for specific environments or external integrations.

  3. LLM Providers Configuration
  • Bedrock
"OVERRIDE": {
"ARTEMIS_BEDROCK_ACCESS_KEY": "<value>",
"ARTEMIS_BEDROCK_SECRET_KEY": "<value>",
"ARTEMIS_BEDROCK_REGION": "<default value us-east-1>"
}
  • Vertex
"OVERRIDE": {
"ARTEMIS_VERTEX_PRIVATE_KEY": "<value>",
"ARTEMIS_VERTEX_PROJECT": "<value>",
"ARTEMIS_VERTEX_EMAIL": "<value>",
"VERTEX_SERVICE_ACCOUNT_SECRET_NAME": "<value>",
}

You can authenticate with Vertex in two ways:

  1. Using a Private Key: Set the ARTEMIS_VERTEX_PRIVATE_KEY value with your private key.
  2. Using a Service Account Key file:
    • Create a secret called vertex-key
    • Add a property called vertex-key.json and add your json credentials
    • Set VERTEX_SERVICE_ACCOUNT_SECRET_NAME=vertex-key
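
For example, the vertex-key secret could be created with kubectl as follows (the local file path is illustrative; add -n <namespace> if Artemis runs in a dedicated namespace):

kubectl create secret generic vertex-key \
  --from-file=vertex-key.json=/path/to/vertex-key.json
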
  • OpenAI
"OVERRIDE": {
"ARTEMIS_OPENAI_URL": "<value>",
"ARTEMIS_OPENAI_TYPE": "openai or azure",
"ARTEMIS_OPENAI_KEY": "<value>",
"ARTEMIS_OPENAI_NO_SSL_VERIFY": "true or false",
"ARTEMIS_OPENAI_EMBEDDING_URL": "<value>",
"OPENAI_SSL_CA_SECRET_NAME": "<value>"
}

When using OpenAI behind a proxy with SSL verification enabled (ARTEMIS_OPENAI_NO_SSL_VERIFY=false), you can specify a custom internal CA certificate to verify SSL connections. To do this:

  1. Create a secret called openai-certs
  2. Add a property called ca.crt and add your cert value
  3. Set OPENAI_SSL_CA_SECRET_NAME=openai-certs
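
For example (the local path of the certificate is illustrative; add -n <namespace> if Artemis runs in a dedicated namespace):

kubectl create secret generic openai-certs \
  --from-file=ca.crt=/path/to/internal-ca.crt
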
  • Anthropic
"OVERRIDE": {
"ARTEMIS_ANTHROPIC_KEY": "<value>"
}
  • Cohere
"OVERRIDE": {
"ARTEMIS_COHERE_KEY": "<token value>"
}
  • Deepseek
"OVERRIDE": {
"ARTEMIS_DEEPSEEK_KEY": "token <value>"
}

Installation

A. Kubernetes Cluster (Online)

  1. Provision a cluster with at least one node that meets the hardware requirements
  2. Get the latest Artemis deployment CLI. (Credentials will be provided separately.)
  3. Run the following command to verify and configure connectivity to your cluster: ./cli config cluster
  4. Log in to the remote image registry using your credentials: ./cli config registry-auth --env production -u <username> -p <password/token>
  5. Deploy the Artemis platform to your cluster: ./cli deploy up --env production
  6. Once deployed, access the Artemis platform at:
  • http://<machine-ip>:80 (for single-node setups)
  • http://<load-balancer-exposed-ip>:80 (for clusters with a load balancer)
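
Put together, an online cluster deployment is roughly:

./cli config cluster
./cli config registry-auth --env production -u <username> -p <password/token>
./cli deploy up --env production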

B. Kubernetes Cluster (AirGapped)

  1. Set up a Kubernetes cluster with at least one node that meets the hardware requirements
  2. Get the latest Artemis deployment CLI (You will be provided with credentials to access the download.)
  3. Ensure all required container images are available in your internal image registry
    • Manual Transfer: Pull and push each image manually.
    • Using the CLI: ./cli transfer --dest <your-internal-registry>
  4. Run the following command to verify and configure connectivity to your Kubernetes cluster: ./cli config cluster
  5. Log in to your internal (offline) image registry using the CLI: ./cli config registry-auth --env offline -u <username> -p <password/token>
  6. Edit the .configrc.json file to reflect your internal deployment configuration
"OVERRIDE": {
"REGISTRY": "your.internal.registry.domain.org:port",
"SERVICES_REGISTRY": "your.internal.registry.domain.org:port",
}
  7. Deploy the Artemis platform to your cluster: ./cli deploy up --env offline
  8. Once deployed, access the Artemis platform at:
  • http://<machine-ip>:80 (for single-node setups)
  • http://<load-balancer-exposed-ip>:80 (for clusters with a load balancer)
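
Put together, an air-gapped cluster deployment is roughly the following (all commands are taken from the steps above; edit .configrc.json before the final deploy):

./cli transfer --dest <your-internal-registry>
./cli config cluster
./cli config registry-auth --env offline -u <username> -p <password/token>
# update REGISTRY and SERVICES_REGISTRY in .configrc.json, then:
./cli deploy up --env offline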

Update

  1. Get the latest Artemis deployment CLI. (Credentials will be provided separately.) Ensure that the .configrc.json file is present in the same working directory as the CLI.

  2. AirGapped only: Ensure all required container images are available in your internal image registry

  3. Delete or rename the existing assets folder. It will be automatically recreated by the CLI during deployment.

  4. Update the Artemis platform on your cluster:

    • Only Artemis components: ./cli deploy up -g worker -g component
    • All components: ./cli deploy up
  5. Once updated, access the Artemis platform at:

    • http://<machine-ip>:80 (for single-node setups)
    • http://<load-balancer-exposed-ip>:80 (for clusters with a load balancer)
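
Put together, a cluster update is roughly the following (assets.bak is an arbitrary backup name; you can also simply delete the folder):

mv assets assets.bak                       # the CLI recreates the assets folder during deployment
./cli deploy up -g worker -g component     # Artemis components only
# or, to update everything:
# ./cli deploy up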

Browser Support

Artemis is thoroughly tested and optimized for Chrome, Firefox, and Edge. For the best experience, we strongly recommend using the most recent version of Chrome. While Artemis supports Firefox and Edge, performance may vary, and issues may arise with older versions of these browsers or with other web browsers. Keep your browser up to date to benefit from the best experience.