# Introduction

Welcome to Prompt Store.
One way to think of Prompt Store is as a content management system (CMS) for prompts.
The core features of the product include:
| Feature | Description |
| --- | --- |
| Prompt Management | Create and maintain model-independent prompts supporting both completion and chat APIs, using a choice of templating notation to insert real-time data (see the sketch following this table). |
| Prompt Design | Use a ChatGPT+ style interface to design and test prompts, interacting directly with the LLM. Supports vision models. |
| Semantic Functions | Define semantic functions that connect prompts to LLM implementations and to various sources of knowledge that augment the prompt. |
| Compositions | Chain functions and tools using a visual designer to compose sophisticated functions that involve multiple LLM interactions. |
| Enable a "Model Zoo" | Use multiple LLM providers such as OpenAI and Google. Select versions such as GPT-3.5 and GPT-4. Connect to self-hosted models, such as those from Hugging Face, or your own model implementations. |
| Semantic Indexes | Create semantic search indexes from multiple data sources using a vector database. |
| Knowledge Graphs | Extract entities and relationships from data sources to create knowledge graphs, which can then be used to enhance retrieval-augmented generation. |
| Data Sources | Extract knowledge from multiple data sources, including documents (text, PDF, Word), crawled web content, and feature stores. |
| Feature Store Integration | Integrate with feature stores to augment prompts with live data. Currently supports Feast and Anaml. |
| Document Management | Upload and preview documents to use as a source of knowledge to augment prompts. |
| Batch Data Transformation | Define and schedule transformation jobs using LLM-generated features. Supports bulk processing by packing multiple input texts into a single request. |
| Evaluation | Evaluate models and prompts using LLM rubrics to assess responses against criteria such as conciseness, relevance, and correctness. |
| Response Labelling | Tools to streamline response labelling and create "golden" datasets for model evaluation. |
| Agents | Schedule autonomous agents to find information and perform tasks in the background, providing notifications as requested. Implement automated decisioning and nudge systems using LLM-powered agents. |
| Guardrails | A plugin system for checking model inputs and outputs. |
| Experimentation | Test alternative prompt designs and model implementations. |
| Monitoring | Capture history on which prompts are effective, the number of edits, and the time taken to settle on final copy. |
| Workspaces | Enable multiple teams and parts of the organization to work in their own spaces. |
| User Controls | Integrate with enterprise single sign-on (SSO) systems. |
| Application Programming Interfaces (APIs) | Enable automation and integration with other systems. |
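To make the Prompt Management row concrete, here is a minimal sketch of a model-independent prompt template. It assumes a Jinja2-style templating notation; the template text and variable names are illustrative only, not Prompt Store's actual syntax.

```python
# A minimal sketch of a model-independent prompt template, assuming a
# Jinja2-style templating notation. The prompt text and variable names
# are illustrative, not Prompt Store's actual syntax.
from jinja2 import Template

prompt_template = Template(
    "You are a support assistant for {{ product_name }}.\n"
    "Using the customer record below, draft a reply.\n\n"
    "Customer: {{ customer.name }} (tier: {{ customer.tier }})\n"
    "Question: {{ question }}"
)

# Real-time data is injected at call time, so the same template can be
# rendered for any model, and for either completion or chat APIs.
rendered = prompt_template.render(
    product_name="Acme CRM",
    customer={"name": "Jo Bloggs", "tier": "gold"},
    question="How do I export my contacts?",
)
print(rendered)
```

Because the template carries no model-specific wiring, the data injection step stays the same whichever provider or model version ultimately receives the rendered prompt.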
## In a nutshell, why do I need this?
One of the first things an AI practitioner does when examining an AI system is find the prompts it uses. The prompts reveal how the system works and what the LLMs are doing.
Now, imagine you are:
- the person responsible for operating the system
- the developer responsible for tuning the prompts while the system is in production
- the AI engineer responsible for experimenting with different versions of the prompts
- the business or product owner responsible for understanding and governing how the system works
Assuming, for a moment, that it were easy to find the prompts in the code, along with a description of the parameters injected into them, any change to a prompt would still require a build and deploy cycle. What if you want to:
- Continuously refine the prompts
- Use different versions for different models
- Conduct A/B testing of multiple prompt designs
It's important to separate prompts from code in the same way we separate content from code using a content management system. Making prompts dynamic means we can adapt more rapidly to evolving model capabilities, and the AI system becomes more transparent and easier to govern. That is the purpose of this platform.
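As a rough illustration of the idea, the sketch below fetches prompt copy from a store at runtime instead of hardcoding it. The `PromptStoreClient` class, its methods, and the prompt names are hypothetical stand-ins, not Prompt Store's actual API.

```python
# A hypothetical sketch of "prompts as content": the application looks up
# prompt copy at runtime rather than compiling it in. PromptStoreClient
# and its data are illustrative stand-ins, not Prompt Store's actual API.
import random


class PromptStoreClient:
    """Toy in-memory stand-in for a remote prompt store."""

    def __init__(self):
        self._prompts = {
            ("summarise", "v1"): "Summarise the following text:\n{text}",
            ("summarise", "v2"): "Summarise the text below in three bullet points:\n{text}",
        }

    def get_prompt(self, name: str, version: str) -> str:
        return self._prompts[(name, version)]


store = PromptStoreClient()

# Editing a version in the store takes effect immediately, with no build
# and deploy cycle; A/B testing is just routing traffic between versions.
version = random.choice(["v1", "v2"])
prompt = store.get_prompt("summarise", version).format(text="...")
print(f"[{version}] {prompt}")
```

With this pattern, refining a prompt, keeping different versions for different models, and splitting traffic for A/B tests all become content operations rather than code changes.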