
Version 1.4.0

Here are our Release Notes for Artemis v1.4.0:

🚀 New features and functionalities

We've added new LLMs

Artemis includes a range of LLMs for code analysis and code optimisation tasks. We have added the following models to our current list of LLMs:

  • GPT-4o mini
  • Neural Chat
  • Mistral Large 2
  • Llama 3.1

We've added reranker models

Reranker models work alongside the embeddings we create of codebases and supplementary material: they re-rank the retrieved results so that the most relevant information surfaces when you ask questions about a codebase (a short retrieve-then-rerank sketch follows the list below). We have added the following reranker models to our platform:

  • Cohere
  • MixedBread
  • Jina-AI
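
To illustrate the idea, here is a minimal retrieve-then-rerank sketch in Python. The toy `overlap_score` function stands in for a hosted reranker such as Cohere, MixedBread, or Jina-AI; the function and variable names are illustrative assumptions, not the Artemis API.

```python
# Minimal retrieve-then-rerank sketch. `overlap_score` is a toy stand-in for a
# real reranker model; in practice the reranker judges query-passage relevance
# far more accurately than word overlap.

def overlap_score(query: str, passage: str) -> float:
    """Toy relevance score: fraction of query words present in the passage."""
    q_words = set(query.lower().split())
    p_words = set(passage.lower().split())
    return len(q_words & p_words) / max(len(q_words), 1)

def rerank(query: str, candidates: list[str], top_n: int = 3) -> list[str]:
    """Re-order retrieved chunks by relevance and keep the best `top_n`."""
    scored = sorted(candidates, key=lambda p: overlap_score(query, p), reverse=True)
    return scored[:top_n]

retrieved_chunks = [
    "def connect(db_url): ...  # opens a database connection",
    "README: project layout and build instructions",
    "def close(conn): ...  # closes the database connection",
]
print(rerank("how do I open a database connection?", retrieved_chunks, top_n=2))
```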

📈 Improvements

We enabled better document processing for RAG

We use RAG (retrieval-augmented generation) approaches to query code and documents. We have improved how documents are split into chunks, so you will get better results when you query code and documents on Artemis.
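
As a rough illustration of what document splitting involves, here is a minimal character-based splitter that produces overlapping chunks. The chunk sizes and function name are assumptions made for the example, not the splitting method Artemis actually uses.

```python
def split_document(text: str, chunk_size: int = 800, overlap: int = 100) -> list[str]:
    """Split `text` into overlapping chunks so context is not lost at boundaries."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # step back so adjacent chunks share some context
    return chunks

doc = "..." * 1000  # stand-in for a real source file or document
chunks = split_document(doc)
print(len(chunks), "chunks,", len(chunks[0]), "characters in the first chunk")
```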

We made GPT-4o mini the default LLM of Artemis

We replaced GPT-3.5 Turbo with GPT-4o mini as the Artemis default model. As a result, you will see better performance at a reduced cost.

We improved our LLM token-tracking

Artemis provides metrics so you can track and manage your LLM usage, by monitoring the number of LLM tokens used during code analysis and optimisation tasks. Previously, we used a single catch-all tokeniser to count tokens for all LLMs. In this version of Artemis, we have implemented LLM-specific tokenisers, so you get a more accurate picture of your LLM usage.
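
As an illustration of per-model token counting, here is a small sketch using the open-source tiktoken package for OpenAI-style models. Other providers ship their own tokenisers, and this is not the Artemis implementation; the fallback encoding is also an assumption.

```python
import tiktoken

def count_tokens(text: str, model: str = "gpt-4o-mini") -> int:
    """Count tokens with the tokeniser that matches `model`, not a generic one."""
    try:
        encoding = tiktoken.encoding_for_model(model)
    except KeyError:
        # Fall back to a known encoding if the model name is not recognised.
        encoding = tiktoken.get_encoding("cl100k_base")
    return len(encoding.encode(text))

prompt = "Explain what this function does."
print(count_tokens(prompt, model="gpt-4o-mini"))
```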

We implemented a code-specific search functionality

Artemis includes an ARTEMIS CHAT feature, similar to ChatGPT, where you can ask questions and search your own codebase. We have upgraded this feature with a more specific CODE SEARCH functionality, which uses more targeted search techniques to retrieve information from codebases.

We updated Artemis chat so that you can have your own knowledge repository within Artemis

ARTEMIS CHAT is available at two levels: (1) within each code project; and (2) at platform level. We have updated the platform-level chat so that all codebases and projects attributed to your user profile are used to answer your questions. This is like having your own knowledge repository embedded within Artemis!

We enhanced our file search filters

When you upload a code project to Artemis, you can choose which files of your codebase you want analysed. We have improved this functionality so that files are filtered more accurately.
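
To give a feel for this kind of filtering, here is a hypothetical include/exclude filter based on shell-style patterns. The pattern lists and function name are examples for illustration, not Artemis defaults.

```python
from fnmatch import fnmatch
from pathlib import Path

def select_files(root: str, include=("*.py", "*.java"), exclude=("*test*", "*/build/*")):
    """Yield files under `root` that match an include pattern and no exclude pattern."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        rel = path.as_posix()
        if any(fnmatch(rel, pat) for pat in include) and not any(
            fnmatch(rel, pat) for pat in exclude
        ):
            yield rel

for f in select_files("."):
    print(f)
```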

🔧 Bug fixes

We fixed an issue with running embeddings

Some of you told us that you had to run a code analysis before you could create embeddings for a code project. We heard you: this won't be a problem anymore!

We made it easier for you to access validation logs

You told us that you could not see sufficient information about the status of your code recommendation validations. We realised how that could make things difficult for you: you can now access individual validation logs for each code recommendation and each type of validation.

We fixed a problem with selecting the language of your codebase

We heard from you that there were issues selecting your code language from our drop-down list of languages. This is fixed now.

We've improved the stability of our embedding tasks

We use embeddings to power the ARTEMIS CHAT feature. We have improved our embedding approaches so that embedding tasks are more stable and performant.

We made our LLM outputs better

Artemis uses LLM outputs across a range of its features and functionalities. We have improved the way we parse LLM outputs, so you receive more reliable results from LLMs.
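
As an example of the kind of defensive parsing this involves, here is a sketch that pulls a JSON object out of a model reply that may be wrapped in a code fence or surrounded by prose. The reply format and function name are assumptions for the example, not the Artemis parser.

```python
import json
import re

def parse_llm_json(reply: str) -> dict:
    """Extract the first JSON object from a model reply, tolerating code fences and extra prose."""
    # Strip a ```json ... ``` fence if the model wrapped its answer in one.
    fenced = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", reply, re.DOTALL)
    candidate = fenced.group(1) if fenced else reply
    # Fall back to the outermost braces if the reply mixes prose with JSON.
    if not fenced:
        match = re.search(r"\{.*\}", candidate, re.DOTALL)
        if match:
            candidate = match.group(0)
    return json.loads(candidate)

reply = 'Sure! Here is the result:\n```json\n{"issue": "unused import", "line": 12}\n```'
print(parse_llm_json(reply))
```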

🦕 Deprecated

We removed some LLMs

We removed Llama 3 70b and Mistral Large from the list of Artemis LLMs – only to replace them with more performant models! (The new LLMs we added are at the top of this page.)

🚨 Security updates

We've improved underlying libraries

We updated Artemis libraries and deployment bundles so that they are more compact and up to date with the latest industry standards.