Gartner: Comparing and evaluating large language models

The surge in popularity of ChatGPT has led to a proliferation of large language models (LLMs), making their evaluation a significant challenge. Due to the multifaceted nature of LLMs, there is no one-size-fits-all approach to assess and select the most suitable models for enterprises. Each LLM has various dimensions to measure, and enterprises have unique priorities based on their specific use cases. Despite these complexities, thorough evaluation remains crucial before adopting any LLM. The following recommendations will outline key factors for evaluating and comparing LLMs, helping you measure and enhance their effectiveness for your organization.

Model Type: General Versus Specific Applicability

For effective comparisons, it is crucial to understand whether the LLM is general-purpose or specific to a given task or context. General-purpose LLMs, like the GPT models from OpenAI, typically support a wide range of generic use cases as they lack specific training for any particular industry, business function, or task. In contrast, domain-specific LLMs are trained or fine-tuned on specialized datasets to develop expertise in particular tasks or domains.

To select the right LLM for their organization, leaders must grasp the common use cases for each model type:

  • General-purpose models: These models are generally used for broad natural language understanding and generation tasks, such as content creation and summarization. They often offer greater power and flexibility than domain-specific models through prompt engineering (e.g., in-context learning; see the sketch after this list).
  • Domain-specific models: These models are designed for specific domains (horizontal or vertical), organizations, or tasks. They possess deeper knowledge in particular industries or sectors and can be trained to excel in specialized tasks like coding, translation, and document understanding.
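
To illustrate what prompt engineering via in-context learning looks like in practice, here is a minimal sketch that builds a few-shot prompt for a sentiment-classification task. The task, labels, and examples are illustrative assumptions; the resulting prompt could be sent to any general-purpose LLM API of your choice.

```python
# Minimal sketch of in-context learning (few-shot prompting).
# The task and examples are illustrative; swap in your own use case.

FEW_SHOT_EXAMPLES = [
    ("The onboarding flow was effortless.", "positive"),
    ("Support never replied to my ticket.", "negative"),
    ("The invoice arrived on the usual date.", "neutral"),
]

def build_few_shot_prompt(new_input: str) -> str:
    """Embed labeled examples directly in the prompt so a general-purpose
    LLM can infer the task without any fine-tuning."""
    lines = [
        "Classify the sentiment of each customer comment as positive, negative, or neutral.",
        "",
    ]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Comment: {text}\nSentiment: {label}\n")
    lines.append(f"Comment: {new_input}\nSentiment:")
    return "\n".join(lines)

if __name__ == "__main__":
    # The resulting string is what you would send to a general-purpose LLM.
    print(build_few_shot_prompt("The new dashboard is confusing to navigate."))
```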

Building a comprehensive LLM-powered solution may require multiple models rather than a single LLM. Organizations might need both general-purpose and domain-specific models, or even other types of AI models. These LLMs would assume different roles within the solution and “collaborate” in various ways.

Evaluating Model Capabilities: Benchmarks and Test Cases

Several LLM benchmarks and leaderboards are available, either community-driven or provided by model makers. For general-purpose models, a valuable reference for assessing capabilities is the Large Model Systems Organization’s Chatbot Arena leaderboard. This crowdsourced open platform allows users to rank different models based on their responses to the same questions, without knowing the models’ names. Model makers do not have prior knowledge of all the questions, nor can they train or fine-tune their models specifically to these questions to achieve higher rankings. This makes the leaderboard a useful starting point for evaluating and comparing the general abilities of various models.

When new models are released, model makers typically provide evaluations. If you are focused on a specific capability of a model, task-specific benchmarks can also serve as useful references. However, it is important to note that public LLM benchmarks often suffer from data leakage issues, where evaluation datasets are inadvertently included in the training datasets. This can lead to evaluation results that do not accurately reflect the model’s real-world performance.

In addition to consulting benchmarks and leaderboards, organizations need to develop their own functional test cases aligned with their specific use cases. Start by defining a clear scope and purpose for each use case, since the broader the range of responses an LLM is allowed to produce, the higher the risk of undesired behavior; it is crucial to avoid using LLMs in unsuitable scenarios. Create test cases that closely mirror the production usage scenarios, using similar or identical data, such as question-and-answer pairs, so the evaluations are as relevant and accurate as possible. Figure 1 below outlines the steps in this evaluation process.

Figure 1: Steps to Follow in LLM Evaluation

After creating test cases, decide what should be measured for each one. LLM use cases vary widely in scope, but test cases typically measure factors such as accuracy, context relevance, safety, and other metrics specific to the particular use case, all of which should be prioritized based on business requirements.
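
As a concrete illustration of the kind of functional test cases described above, the sketch below scores a model's answers against expected question-and-answer pairs on three example dimensions: accuracy, context relevance, and safety. The `call_llm` function, the test data, and the scoring rules are all placeholder assumptions; in practice you would plug in your own model client, production-like data, and metrics weighted by business priority.

```python
# Hypothetical sketch of a use-case-specific LLM test harness.
# call_llm() is a stand-in for whatever model or API you are evaluating.

TEST_CASES = [
    {
        "question": "What is our standard refund window?",
        "expected_keywords": ["30 days"],          # accuracy check (illustrative)
        "context_keywords": ["refund", "policy"],  # context-relevance check
    },
]

BANNED_TERMS = ["guaranteed return", "legal advice"]  # illustrative safety list

def call_llm(prompt: str) -> str:
    """Placeholder: replace with a call to the model under evaluation."""
    return "Our refund policy allows returns within 30 days of purchase."

def evaluate(test_cases):
    results = []
    for case in test_cases:
        answer = call_llm(case["question"]).lower()
        results.append({
            "question": case["question"],
            # Accuracy: does the answer contain the expected facts?
            "accuracy": all(k.lower() in answer for k in case["expected_keywords"]),
            # Context relevance: does the answer stay on the topic of the use case?
            "relevance": any(k.lower() in answer for k in case["context_keywords"]),
            # Safety: does the answer avoid terms the business has ruled out?
            "safety": not any(term in answer for term in BANNED_TERMS),
        })
    return results

if __name__ == "__main__":
    for result in evaluate(TEST_CASES):
        print(result)
```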

Apart from evaluating model capabilities, it is essential to consider nonfunctional factors such as price, speed, IP indemnification, and deployment approaches. These factors are critical for LLM assessment, especially for organizations with regulatory or strict security requirements that necessitate on-premises deployment, thereby limiting model options. Leaders must also weigh trade-offs between features like accuracy, inference cost, inference speed, and context window size to ensure the chosen model aligns with their specific needs and constraints.
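
One simple way to make such trade-offs explicit is a weighted scoring sheet. The sketch below compares hypothetical candidate models on normalized scores for accuracy, inference cost, inference speed, and context window size; the model names, scores, and weights are illustrative assumptions, not measured results.

```python
# Illustrative weighted scoring of candidate LLMs on functional and nonfunctional trade-offs.
# All names, scores (0-1, higher is better), and weights are placeholders.

WEIGHTS = {"accuracy": 0.4, "cost": 0.2, "speed": 0.2, "context_window": 0.2}

CANDIDATES = {
    "general_purpose_api_model":    {"accuracy": 0.9, "cost": 0.5, "speed": 0.6, "context_window": 0.9},
    "domain_specific_hosted_model": {"accuracy": 0.8, "cost": 0.7, "speed": 0.7, "context_window": 0.6},
    "on_premises_open_model":       {"accuracy": 0.7, "cost": 0.9, "speed": 0.8, "context_window": 0.5},
}

def weighted_score(scores: dict) -> float:
    """Combine per-dimension scores using business-driven weights."""
    return sum(WEIGHTS[dim] * value for dim, value in scores.items())

if __name__ == "__main__":
    ranked = sorted(CANDIDATES.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
    for name, scores in ranked:
        print(f"{name}: {weighted_score(scores):.2f}")
```

Hard constraints, such as a regulatory requirement for on-premises deployment, are better treated as filters applied before scoring rather than as weighted dimensions.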

Author bio: Ben Yan, Director Analyst at Gartner
