Introduction

Welcome to Prompt Store

One way to think of Prompt Store is as a content management system (CMS) for prompts.

The core features of the product include:

Prompt Management
Create and maintain model-independent prompts supporting both completion and chat APIs, using a choice of templating notation to insert real-time data.

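For a flavour of how templated prompts work, here is a minimal sketch in Python. The $variable notation (via string.Template) and the field names are illustrative choices only; Prompt Store offers a choice of templating notations.

    from string import Template

    # A minimal sketch of a model-independent prompt template.
    # The $variable placeholder notation and the field names below
    # are illustrative, not a prescribed format.
    template = Template(
        "You are a helpful support assistant.\n"
        "Customer name: $customer_name\n"
        "Recent orders: $recent_orders\n"
        "Question: $question"
    )

    # Real-time data is injected when the prompt is rendered.
    prompt = template.substitute(
        customer_name="Ada",
        recent_orders="#1042 (shipped), #1043 (processing)",
        question="Where is my latest order?",
    )
    print(prompt)
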
Prompt Design
Use a ChatGPT+ style interface to design and test prompts, interacting directly with the LLM. Supports vision models.

Semantic Functions
Define semantic functions that connect prompts to LLM implementations and various sources of knowledge to augment the prompt.

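Conceptually, a semantic function ties together a prompt template, a model binding, and a knowledge-retrieval step. The Python sketch below illustrates that idea only; the class and field names are hypothetical, not Prompt Store's API.

    from dataclasses import dataclass
    from typing import Callable

    # A hypothetical sketch of what a semantic function bundles together:
    # a prompt template, a model binding, and a knowledge-retrieval step.
    @dataclass
    class SemanticFunction:
        prompt_template: str
        model: str                       # e.g. "gpt-4"
        retrieve: Callable[[str], str]   # fetches knowledge for the prompt

        def build_prompt(self, query: str) -> str:
            context = self.retrieve(query)   # augment with retrieved knowledge
            return self.prompt_template.format(context=context, query=query)

    answer_fn = SemanticFunction(
        prompt_template="Context:\n{context}\n\nQuestion: {query}",
        model="gpt-4",
        retrieve=lambda q: "(documents matching: " + q + ")",
    )
    print(answer_fn.build_prompt("What is our refund policy?"))
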
Compositions
Chain functions and tools using a visual designer to compose sophisticated functions that involve multiple LLM interactions.

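As a rough picture of what a composition executes, the sketch below chains two stand-in functions so that the output of one LLM interaction feeds the next; both functions are hypothetical placeholders for real LLM calls.

    # A sketch of a two-step chain: the first call's output feeds the second.
    def extract_topics(text: str) -> str:
        # stand-in for an LLM call that lists the topics found in a document
        return "pricing, refunds"

    def draft_faq(topics: str) -> str:
        # stand-in for a second LLM call that drafts an FAQ for those topics
        return f"FAQ covering: {topics}"

    result = draft_faq(extract_topics("...customer support transcript..."))
    print(result)  # FAQ covering: pricing, refunds
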
Enable a β€œModel Zoo”
Use multiple LLM providers such as OpenAI and Google. Select versions such as GPT-3.5 and GPT-4. Connect to self-hosted models, such as those from Hugging Face, or your own model implementations.

Semantic Indexes
Create semantic search indexes from multiple data sources using a Vector Database.

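The idea behind a semantic index can be sketched in a few lines: texts are embedded as vectors and ranked by similarity to the query. Here a toy embed function stands in for a real embedding model, and an in-memory list stands in for the vector database.

    import math

    # A toy sketch of semantic search: embed texts as vectors, then rank by
    # cosine similarity to the query vector.
    def embed(text: str) -> list[float]:
        # character-frequency "embedding", for illustration only
        return [text.lower().count(c) / max(len(text), 1) for c in "abcdefghij"]

    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    docs = ["refund policy details", "shipping times and carriers"]
    index = [(doc, embed(doc)) for doc in docs]

    query_vec = embed("how do refunds work")
    best = max(index, key=lambda item: cosine(query_vec, item[1]))
    print(best[0])  # the nearest document under the toy embedding
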
Knowledge Graphs
Extract entities and relationships from data sources to create knowledge graphs that can then be used to enhance retrieval augmented generation.

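To illustrate the shape of the extracted knowledge, the sketch below represents a graph as (subject, relation, object) triples and serialises an entity's facts into prompt context; the data and helper function are invented for illustration.

    # (subject, relation, object) triples of the kind an LLM extraction step
    # might produce; this shows the output shape, not the extraction itself.
    triples = [
        ("Acme Corp", "acquired", "Widget Ltd"),
        ("Widget Ltd", "headquartered_in", "Sydney"),
    ]

    # At retrieval time, facts about entities mentioned in a question can be
    # serialised into the prompt to ground the model's answer.
    def graph_context(entity: str) -> str:
        facts = [f"{s} {r.replace('_', ' ')} {o}"
                 for s, r, o in triples if s == entity or o == entity]
        return "\n".join(facts)

    print(graph_context("Widget Ltd"))
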
Data Sources
Extract knowledge from multiple data sources, including documents (text, PDF, Word), crawled web content, and feature stores.

Feature Store Integration
Integrate with feature stores to augment prompts with live data. Currently supporting Feast and Anaml.

Document Management
Upload and preview documents to use as a source of knowledge to augment prompts.

Batch Data Transformation
Define and schedule transformation jobs using LLM-generated features. Supports bulk processing by packing multiple input texts into a single request.

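The packing technique can be sketched as follows: several input texts are combined into one numbered request, so the batch job issues fewer LLM calls. The instruction wording and numbering scheme here are illustrative assumptions.

    # A sketch of "packing": several inputs combined into one numbered request
    # so a batch transformation job makes fewer LLM calls.
    def pack(texts: list[str]) -> str:
        items = "\n".join(f"[{i + 1}] {t}" for i, t in enumerate(texts))
        return (
            "Summarise each numbered item below in one sentence, "
            "replying with the matching numbers.\n" + items
        )

    batch = ["First customer review ...", "Second customer review ..."]
    prompt = pack(batch)  # one request instead of len(batch) requests
    print(prompt)
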
Evaluation
Evaluate models and prompts using LLM rubrics to assess responses against criteria such as conciseness, relevance, and correctness.

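An LLM rubric is essentially a grading prompt sent to a judge model. The sketch below shows one plausible shape; the criteria and the JSON response format are illustrative, not a fixed schema.

    # A minimal sketch of an LLM-rubric grading prompt. The criteria names
    # and JSON shape are assumptions for illustration.
    RUBRIC = """You are grading an assistant's response.
    Question: {question}
    Response: {response}

    Score the response from 1-5 on each criterion and reply as JSON:
    {{"conciseness": n, "relevance": n, "correctness": n}}"""

    prompt = RUBRIC.format(
        question="What is the capital of France?",
        response="The capital of France is Paris.",
    )
    print(prompt)  # sent to a judge LLM; the JSON scores are then aggregated
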
Response Labelling
Tools to streamline response labelling to create β€œgolden” datasets for model evaluation.

Agents
Schedule autonomous agents to find information and perform tasks in the background, providing notifications as requested. Implement automated decisioning and nudge systems using LLM-powered agents.

Guardrails
A plugin system for checking model inputs and outputs.

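A guardrail can be pictured as a check that either passes text through or rejects it with a reason. The interface below is a hypothetical sketch, not Prompt Store's actual plugin API.

    # A hypothetical guardrail plugin interface: each plugin checks an input
    # or output string and either passes it through or rejects it.
    class Guardrail:
        def check(self, text: str) -> tuple[bool, str]:
            raise NotImplementedError

    class NoPIIGuardrail(Guardrail):
        def check(self, text: str) -> tuple[bool, str]:
            if "@" in text:  # crude stand-in for a real PII detector
                return False, "possible email address detected"
            return True, "ok"

    for guardrail in [NoPIIGuardrail()]:
        ok, reason = guardrail.check("Contact me at jane@example.com")
        if not ok:
            print(f"blocked: {reason}")
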
Experimentation
Test alternative prompt designs and model implementations.

Monitoring
Capture history on which prompts are proving effective, the number of edits, and the time taken to settle on final copy.

Workspaces
Enable multiple teams and parts of the organization to work in their own spaces.

User Controls
Integrate with enterprise single sign-on (SSO) systems.

Application Programming Interfaces (APIs)
Enable automation and integration with other systems.

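For example, an external system might fetch the current version of a prompt at runtime. The sketch below assumes a hypothetical base URL, path, and response shape rather than documented endpoints.

    import json
    import urllib.request

    # A hypothetical sketch of fetching a prompt over HTTP at runtime.
    # The base URL, path, and response shape are illustrative assumptions.
    BASE_URL = "https://promptstore.example.com/api"

    def get_prompt(name: str) -> str:
        with urllib.request.urlopen(f"{BASE_URL}/prompts/{name}") as resp:
            return json.load(resp)["template"]

    # template = get_prompt("customer-support-greeting")
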

In a nutshell, why do I need this?

One of the first things an AI practitioner does when examining an AI system is find the prompts it uses. The prompts reveal how the system works and what the LLMs are doing.

Now, imagine you are:

  • the person responsible for operating the system
  • the developer responsible for tuning the prompts while the system is in production
  • the AI engineer responsible for experimenting over different versions of the prompts
  • the business or product owner responsible for understanding and governing how the system works

Even assuming, for a moment, that it were easy to find the prompts in the code, together with a description of the parameters injected into them, a change to any prompt would still require a build and deploy cycle. What if I want to:

  • Continuously refine the prompts
  • Use different versions for different models
  • Conduct A/B testing of multiple prompt designs

It’s important to separate prompts from code in the same way we separate content from code using a content management system. Making prompts dynamic means we can more rapidly adapt to evolving model capabilities. The AI system becomes more transparent and easier to govern. That is the purpose of this platform.