The new platform capabilities empower businesses to accelerate the development of their AI applications
KIRKLAND, Wash., March 26, 2024 (GLOBE NEWSWIRE) -- Appen Limited (ASX: APX), a leading provider of high-quality data for the AI lifecycle, announced the launch of new platform capabilities that will support enterprises customizing large language models (LLMs).
The solution supports internal teams working to leverage generative AI within the enterprise. Through a common, consistent process now available in Appen’s AI Data Platform, users can take their LLMs from use case to production. The steps include:
- Model selection: Appen's platform connects directly to any model, enabling you to evaluate existing models, test new ones, and conduct comprehensive benchmarking.
- Data preparation: High-quality data is critical to accurate and trustworthy AI. Appen's annotation platform enables the preparation of datasets for vectorization and Retrieval-Augmented Generation (RAG).
- Prompt creation: Effectively validating model performance requires a set of custom prompts for each use case. Appen's platform lets you connect with your internal experts or our global crowd to create custom prompts for model evaluation.
- Model optimization: Appen's platform streamlines the capture of human feedback for model evaluation. It includes templates for human evaluation, A/B testing, model benchmarking, and other custom workflows to inspect performance throughout your RAG process.
- Safety assurance: Appen's platform and Quality Raters help ensure that your models are safe to deploy. We have detailed workflows and teams to support red teaming that identifies toxicity, harm, and brand-safety risks.
Appen’s new capabilities offer enterprises a way to incorporate proprietary data and collaborate with internal subject matter experts to refine LLM performance for enterprise-specific use cases—all within a single platform. Companies can deploy solutions on-premises, in the cloud, or in hybrid environments, balancing LLM accuracy, complexity, and cost-effectiveness.
"Generative AI has created significant opportunities for enterprise innovation," said Appen CEO Ryan Kolln. "However, the challenge enterprises are facing is how to ensure that their LLM-enabled applications are accurate and trustworthy. Appen has been at the forefront of human-AI collaboration for over 25 years, and I'm excited that we can now bring our products and expertise to enterprises looking to build accurate and trustworthy LLM-enabled applications."