Kubernetes cluster
Artemis combines public Helm charts (mostly Bitnami) with proprietary charts. A cross-platform CLI (Node.js, bundling kubectl, helm, and skopeo) handles configuration and image transfer.
Review the prerequisites before starting.
1. Supported clusters
- Cloud: AWS, Azure, GCP, Intel Developer Cloud, OpenShift.
- On-prem: Kubeadm, K3s/K3d, MicroK8s, OKD, or any conformant cluster.
2. Latest artifacts
Download the binary matching your control machine from the latest release. Save it as `cli` (or `cli.exe` on Windows) and run `chmod +x cli` on Linux/macOS.
3. Installation
The flow is the same for online and air-gapped deployments. Steps marked Air-gapped only apply when the cluster has no internet access; skip them otherwise. The --env flag (production vs offline) selects the right environment for each path.
- Provision a cluster matching the hardware requirements.

- Air-gapped only: mirror every image listed above into your internal registry, either manually or with the bundled helper:

  ```shell
  ./cli bundle transfer --dest <your-internal-registry>
  ```

  Run `./cli bundle images` to list the images the deployment will use.
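Because the CLI bundles skopeo, manual mirroring can use it directly. A sketch for a single image, where the source and destination registry addresses, the image name, and the tag are all placeholders, not values from this release:

```shell
# Manual alternative to `./cli bundle transfer` for one image.
# Replace both registry addresses and the image path with real values
# from `./cli bundle images`.
skopeo copy \
  --dest-creds "<user>:<token>" \
  docker://source.registry.example/artemis/example-image:1.0.0 \
  docker://your.internal.registry:5000/artemis/example-image:1.0.0
```

Repeat (or loop over the `./cli bundle images` output) for every image in the list.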
- Verify cluster connectivity:

  ```shell
  ./cli config cluster
  ```

- Authenticate with the image registry:
  - Online:

    ```shell
    ./cli config registry-auth --env production -u <user> -p <token>
    ```

  - Air-gapped:

    ```shell
    ./cli config registry-auth --env offline -u <user> -p <token>
    ```
- Edit `.configrc.json` if either of the following applies (overrides can be combined in a single file):
  - Air-gapped: point at your internal registry.

    ```json
    {
      "OVERRIDE": {
        "REGISTRY": "your.internal.registry:port",
        "SERVICES_REGISTRY": "your.internal.registry:port"
      }
    }
    ```

  - OpenShift cluster: enable OpenShift mode.

    ```json
    {"OVERRIDE": {"OPENSHIFT": "true"}}
    ```
- Deploy:
  - Online:

    ```shell
    ./cli deploy up --env production
    ```

  - Air-gapped:

    ```shell
    ./cli deploy up --env offline
    ```

- Open Artemis at `http://<node-ip>:80` (or your load-balancer URL).
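Before opening the UI, standard kubectl checks can confirm the stack came up; the namespace is a placeholder for wherever Artemis was installed:

```shell
# All pods should reach Running (or Completed for one-shot jobs).
kubectl get pods -n <namespace>

# The proxy service fronts the UI on port 80.
kubectl get svc artemis-proxy -n <namespace>
```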
4. Expose Artemis (optional)
If your cluster has an ingress controller available, expose Artemis through it instead of using the node IP.
- Apply an Ingress resource pointing at the `artemis-proxy` service. Replace `<namespace>` with your Artemis namespace and `artemis.domain.foo` with the hostname you want to use. Add TLS, annotations, or `ingressClassName` as required by your controller.

  ```shell
  kubectl apply -n <namespace> -f - <<'EOF'
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: artemis-proxy-ingress
  spec:
    rules:
    - host: artemis.domain.foo
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: artemis-proxy
              port:
                number: 80
  EOF
  ```

- Switch the proxy service to `ClusterIP` so traffic flows through the Ingress only. Add the override to `.configrc.json`:

  ```json
  {"OVERRIDE": {"PROXY_SERVICE_TYPE": "ClusterIP"}}
  ```

- Re-deploy the proxy to pick up the new service type:

  ```shell
  ./cli deploy up -f proxy
  ```
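If your controller terminates TLS, a standard `tls` block can be added under `spec:` of the same Ingress. A sketch, assuming a pre-created TLS secret named `artemis-tls` (a hypothetical name, created beforehand with `kubectl create secret tls`):

```yaml
# Goes under spec: of artemis-proxy-ingress; artemis-tls is a placeholder
# secret holding the certificate and key for artemis.domain.foo.
tls:
- hosts:
  - artemis.domain.foo
  secretName: artemis-tls
```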
5. LLM providers
Add provider keys under `OVERRIDE` in `.configrc.json`. Configure only the blocks for the providers you actually use.
Bedrock
```json
{
  "OVERRIDE": {
    "ARTEMIS_BEDROCK_REGION": "eu-central-1",
    "ARTEMIS_BEDROCK_API_KEY": "<value>",
    "ARTEMIS_BEDROCK_ANTHROPIC_REGION": "<value>"
  }
}
```
Vertex
```json
{
  "OVERRIDE": {
    "ARTEMIS_VERTEX_PRIVATE_KEY": "<value>",
    "ARTEMIS_VERTEX_PROJECT": "<value>",
    "ARTEMIS_VERTEX_EMAIL": "<value>",
    "VERTEX_SERVICE_ACCOUNT_SECRET_NAME": "vertex-key"
  }
}
```
For Vertex via a service account file, create a Kubernetes secret `vertex-key` with key `vertex-key.json` and set `VERTEX_SERVICE_ACCOUNT_SECRET_NAME=vertex-key`.
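The secret can be created with a standard kubectl command; the namespace and local file path are placeholders:

```shell
# Store the downloaded service-account file under the key vertex-key.json,
# matching VERTEX_SERVICE_ACCOUNT_SECRET_NAME above.
kubectl create secret generic vertex-key \
  -n <namespace> \
  --from-file=vertex-key.json=./service-account.json
```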
OpenAI / Azure Foundry
`ARTEMIS_OPENAI_TYPE` selects the provider. It defaults to `openai`; set it to `azure` for Azure Foundry.
Official OpenAI
```json
{
  "OVERRIDE": {
    "ARTEMIS_OPENAI_KEY": "<value>"
  }
}
```
Azure Foundry (OpenAI models)
```json
{
  "OVERRIDE": {
    "ARTEMIS_OPENAI_TYPE": "azure",
    "ARTEMIS_OPENAI_URL": "<value>",
    "ARTEMIS_OPENAI_KEY": "<value>"
  }
}
```
Azure Foundry (Claude models)
```json
{
  "OVERRIDE": {
    "ARTEMIS_AZURE_AI_URL": "<value>",
    "ARTEMIS_AZURE_AI_KEY": "<value>"
  }
}
```
Optional connection settings (OpenAI)
```json
{
  "OVERRIDE": {
    "ARTEMIS_OPENAI_NO_SSL_VERIFY": "true",
    "OPENAI_SSL_CA_SECRET_NAME": "openai-certs"
  }
}
```
For OpenAI with a custom CA, create a secret `openai-certs` with key `ca.crt` and set `OPENAI_SSL_CA_SECRET_NAME=openai-certs`.
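As with the Vertex secret, a standard kubectl command creates it; the namespace and CA bundle path are placeholders:

```shell
# Store the CA bundle under the key ca.crt, matching
# OPENAI_SSL_CA_SECRET_NAME above.
kubectl create secret generic openai-certs \
  -n <namespace> \
  --from-file=ca.crt=./ca.crt
```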
Anthropic
```json
{
  "OVERRIDE": {
    "ARTEMIS_ANTHROPIC_KEY": "<value>"
  }
}
```
Cohere
```json
{
  "OVERRIDE": {
    "ARTEMIS_COHERE_KEY": "<value>"
  }
}
```
DeepSeek

```json
{
  "OVERRIDE": {
    "ARTEMIS_DEEPSEEK_KEY": "<value>"
  }
}
```
6. CLI reference
The deployment CLI groups commands under a few top-level verbs and shares a small set of flags. The most common ones are listed here. Run `./cli --help` or `./cli <command> --help` for the full reference.
Commands
| Command | Purpose |
|---|---|
| `./cli config cluster` | Verify the CLI can reach the target Kubernetes cluster. |
| `./cli config registry-auth` | Save image-registry credentials for the chosen environment. |
| `./cli bundle images` | List the container images this release will install. |
| `./cli bundle transfer` | Mirror those images into a private registry (air-gapped flow). |
| `./cli deploy up` | Install or upgrade Artemis components on the cluster. |
Flags
| Flag | Applies to | Description |
|---|---|---|
| `--env <production\|offline>` | `config registry-auth`, `deploy up` | Selects the environment profile. `production` uses the public TurinTech registry; `offline` uses your internal registry per `.configrc.json`. |
| `-u`, `-p` | `config registry-auth` | Registry username and password (or token). |
| `--dest <registry>` | `bundle transfer` | Destination registry for image mirroring (e.g. `your.internal.registry:port`). |
| `-g <group>` | `deploy up` | Limit the deploy to a component group. Repeatable. Common groups: `worker`, `component`. Omit to reconcile the entire stack. |
| `-f <component>` | `deploy up` | Limit the deploy to a single component (e.g. `proxy`). Useful after a targeted override like `PROXY_SERVICE_TYPE`. |
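The flags compose with the commands above. A sketch of a targeted air-gapped workflow, assuming (not confirmed by this reference) that `--env` and `-f` can be combined on one `deploy up` invocation:

```shell
# Save internal-registry credentials for the offline profile,
# then redeploy only the proxy component against that profile.
./cli config registry-auth --env offline -u <user> -p <token>
./cli deploy up --env offline -f proxy
```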
7. Update
- Download the latest CLI (see the latest artifacts section above) and place it next to your existing `.configrc.json`.

- Air-gapped: ensure the new images are mirrored to your internal registry.

- Delete the existing `assets` folder. It lives next to the `cli` binary and `.configrc.json`, and is safe to remove: the CLI regenerates it on the next deploy.

- Apply the update:

  ```shell
  ./cli deploy up -g worker -g component
  ```

  Note: `./cli deploy up -g worker -g component` upgrades only the Artemis workers and application components, leaving shared infrastructure (databases, MinIO, Redis, etc.) untouched. This is the recommended path for routine version upgrades. `./cli deploy up` (no flags) reconciles the entire stack, including bundled infrastructure. Use it for a fresh install or when a release explicitly requires infrastructure changes.
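The update steps above can be sketched as one shell session, run from the directory holding `.configrc.json` (the download itself is omitted, since the release URL varies):

```shell
# Assumes the new binary has already been saved here as ./cli.
chmod +x ./cli                          # make the new binary executable
rm -rf ./assets                         # stale assets; regenerated on deploy
./cli deploy up -g worker -g component  # routine upgrade: workers + components only
```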