Overview

Valyu provides a drop-in wrapper for the OpenAI SDK that automatically enriches your prompts with relevant context, making your LLM calls better informed and more accurate without any changes to your existing OpenAI code.

Installation

Install both the Valyu SDK and the OpenAI package:

pip install valyu openai
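
Both SDKs authenticate via API keys. OPENAI_API_KEY is the standard OpenAI variable; VALYU_API_KEY is an assumed name used here for illustration (check the Valyu docs for the exact name). A minimal setup sketch:

import os

# OPENAI_API_KEY is standard; VALYU_API_KEY is an assumed name - verify in the Valyu docs
os.environ["VALYU_API_KEY"] = "your-valyu-api-key"
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"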

Usage

Basic Integration

Wrap the OpenAI SDK with Valyu to enable automatic context enrichment:

from valyu import Valyu
import openai

# Initialize Valyu and wrap OpenAI
valyu = Valyu()
client = valyu.wrap(openai)

# Use OpenAI as normal - context is automatically added
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "What are the latest developments in quantum computing?"}
    ]
)

print(response.choices[0].message.content)

Configuration Options

You can customize how Valyu enriches your prompts:

# Configure context retrieval settings
client = valyu.wrap(
    openai,
    search_type="proprietary",  # Use only proprietary sources
    num_results=5,              # Number of context results to retrieve
    max_price=10,               # Maximum credits per enrichment
    auto_enrich=True            # Enable/disable automatic enrichment
)
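
The examples in this guide use two search_type values: "proprietary" (proprietary sources only) and "all" (both proprietary and web sources).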

Advanced Usage

Manual Context Control

You can explicitly control when context is added:

# Disable automatic enrichment
client = valyu.wrap(openai, auto_enrich=False)

# Manually enrich specific calls
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Explain quantum entanglement"}
    ],
    enrich=True  # Enable context for this call only
)
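
The inverse presumably works too; a minimal sketch, assuming enrich=False is honored per call the same way enrich=True is:

# Keep automatic enrichment enabled by default
client = valyu.wrap(openai, auto_enrich=True)

# Skip enrichment for one call (assumes enrich=False mirrors enrich=True)
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Rephrase this sentence more formally."}
    ],
    enrich=False
)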

Custom Context Processing

Customize how context is integrated into your prompts:

def custom_context_processor(context_results, original_messages):
    # Process context results
    context_text = "\n".join([
        f"Source ({r.relevance_score:.2f}): {r.content}"
        for r in context_results
    ])
    
    # Add context to messages
    return [
        {"role": "system", "content": f"Use this context:\n{context_text}"},
        *original_messages
    ]

# Use custom processor
client = valyu.wrap(
    openai,
    context_processor=custom_context_processor
)
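
Once wrapped, the client is called exactly as before; each request's retrieved results and original messages pass through the processor above before the call reaches OpenAI:

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Summarize recent advances in battery chemistry"}
    ]
)
print(response.choices[0].message.content)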

Streaming Support

Context enrichment works seamlessly with streaming:

# Stream completion with context
stream = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Explain quantum computing"}
    ],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
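
To keep the full response text instead of printing deltas as they arrive, accumulate the chunks; this is standard OpenAI streaming and is unaffected by the wrapper:

# Accumulate streamed deltas into the complete response text
stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Explain quantum computing"}],
    stream=True
)

answer = "".join(
    chunk.choices[0].delta.content
    for chunk in stream
    if chunk.choices[0].delta.content
)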

Best Practices

1. Cost Management

Monitor and control costs with price limits:

# Set maximum price per enrichment
client = valyu.wrap(
    openai,
    max_price=5,           # Maximum credits per enrichment
    fail_on_price=True     # Fail if price would exceed limit
)

try:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Your query"}]
    )
except ValueError:
    print("Context enrichment would exceed price limit")
    # Handle accordingly

2. Error Handling

Implement robust error handling:

from valyu.exceptions import ValyuAPIError

try:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Your query"}]
    )
except ValyuAPIError as e:
    print(f"Valyu API error: {e}")
    # Fall back to standard OpenAI call
    response = openai.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Your query"}]
    )

3. Context Caching

Enable caching for frequently used queries:

client = valyu.wrap(
    openai,
    enable_cache=True,
    cache_ttl=3600  # Cache context for 1 hour
)
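
With caching enabled, an identical query repeated within the TTL should reuse the stored context instead of triggering (and paying for) a fresh retrieval:

messages = [{"role": "user", "content": "What is retrieval-augmented generation?"}]

# First call retrieves context and populates the cache
first = client.chat.completions.create(model="gpt-4", messages=messages)

# Within the TTL, the same query should hit the cache - no second retrieval charge
second = client.chat.completions.create(model="gpt-4", messages=messages)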

Example Applications

Research Assistant

from valyu import Valyu
import openai

valyu = Valyu()
client = valyu.wrap(openai)

def research_assistant(topic):
    # Context is retrieved and injected automatically by the wrapper
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "You are a research assistant. Synthesize the context provided and give a comprehensive answer."
            },
            {
                "role": "user",
                "content": f"What are the latest developments in {topic}?"
            }
        ],
        temperature=0.7
    )
    
    return response.choices[0].message.content

# Use the assistant
result = research_assistant("quantum computing")
print(result)

Document Q&A

def document_qa(document_text, question):
    client = valyu.wrap(
        openai,
        search_type="all",  # Use both proprietary and web sources
        num_results=3
    )
    
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": f"Document content: {document_text}\n\nAnswer questions based on the document and enriched context."
            },
            {
                "role": "user",
                "content": question
            }
        ]
    )
    
    return response.choices[0].message.content
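
Example usage, with a placeholder document:

doc = "Photosynthesis converts light energy into chemical energy stored as glucose."
print(document_qa(doc, "What does photosynthesis produce?"))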

Additional Resources