This page offers guidance on how to query the Valyu DeepSearch API effectively to get the best results.

Why Prompting Matters

Valyu is AI-native and built for AI agents that need factual grounding from authoritative sources. The more precise your search instructions, the better Valyu can deliver real-time, relevant results that reduce hallucinations and improve your AI’s accuracy.

Anatomy of a Good Prompt

Effective prompts for Valyu should include:

| Component | Description | Example |
|---|---|---|
| Intent | What specific knowledge do you need? | "LLM transformer efficiency optimizations" |
| Source Type | Which data sources should Valyu prioritize? | "{author} {document name}" |
| Constraints | What filters improve relevance? | "production-ready solutions" |
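These three components can be combined mechanically. The sketch below is a hypothetical helper, not part of the Valyu SDK, showing one way to assemble a query string from them:

```python
def build_query(intent, source_hint="", constraints=""):
    """Assemble a search query from the components above.

    Hypothetical helper for illustration; not part of the Valyu SDK.
    """
    parts = [intent, source_hint, constraints]
    # Join only the components that were actually provided.
    return " ".join(p for p in parts if p)

query = build_query(
    intent="LLM transformer efficiency optimizations",
    constraints="production-ready solutions",
)
# query == "LLM transformer efficiency optimizations production-ready solutions"
```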

Common Prompting Mistakes

Ineffective prompts that waste your API credits:

Avoid Generic Queries

Too generic: Valyu needs specificity to deliver factual grounding

❌ “AI research”

✅ “transformer attention mechanism computational complexity analysis”

Generic queries return broad, unfocused results that dilute relevance. Specific technical terms help Valyu’s semantic search identify precise academic papers, implementation details, and authoritative sources that match your exact knowledge needs.

Specify Source Guidance

Missing source guidance: Specify the type of content you need

❌ “Stock data”

✅ “Apple quarterly earnings financial statements SEC filings”

Without source context, Valyu may return news articles when you need financial data, or vice versa. Explicit source indicators help prioritize the right content type from Valyu’s comprehensive search index spanning web, academic, and financial sources.

Focus Your Scope

Overly broad scope: Granular search controls work better with focused queries

❌ “Everything about quantum computing”

✅ “quantum error correction surface codes implementation”

Broad topics overwhelm search algorithms and return surface-level content. Focused queries leverage Valyu’s deep indexing to surface specialized research, technical specifications, and implementation details that provide actionable insights for your AI system.

Single Intent Per Query

Multiple intents in one prompt: combining topics dilutes relevance; keep each query focused on a single topic

❌ “Explain causes of high inflation rates, and also tell me about cryptocurrency market trends”

✅ “Federal Reserve interest rate policy impact on inflation 2023-2024”

Multiple intents in a single query dilute relevance and reduce precision for each topic. Single-intent queries allow Valyu’s relevance algorithms to optimize for one specific knowledge domain, delivering higher-quality results that better serve your LLM’s context requirements.
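When a request genuinely covers two topics, split it into separate calls rather than one combined query. A minimal sketch, assuming the Python SDK `search` method shown later in this guide (the call itself is commented out):

```python
# One focused query per intent, instead of a single combined prompt.
focused_queries = [
    "causes of high inflation rates: Federal Reserve interest rate policy 2023-2024",
    "cryptocurrency market trends 2024",
]

# for q in focused_queries:
#     response = valyu.search(q, max_num_results=5)  # assumes an initialized client
```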

Optimize for Low-Verbosity Structure

Overly verbose query: Don’t add noise to the query; keep to the key information

❌ “Explain concepts on how bioinformatics works by helix”

✅ “DNA helix structure bioinformatics sequence analysis”

Verbose phrasing with unnecessary words reduces search precision and wastes tokens. Compressed, keyword-focused queries improve search precision, especially when looking for specific information within a specific document.
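As a rough illustration of query compression, the hypothetical helper below strips filler words and keeps the content-bearing terms; a real system would apply domain judgment rather than a fixed stopword list:

```python
# Filler words that add noise without narrowing the search (illustrative list only).
FILLER = {
    "explain", "concepts", "on", "how", "works", "by",
    "the", "a", "about", "tell", "me", "please",
}

def compress_query(prompt):
    """Keep only keyword terms from a verbose prompt (naive sketch)."""
    return " ".join(w for w in prompt.lower().split() if w not in FILLER)

compress_query("Explain concepts on how bioinformatics works by helix")
# -> "bioinformatics helix"
```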

Transform weak prompts into high intent queries:

| Ineffective Prompt | Optimized for Valyu |
|---|---|
| "Find information about machine learning" | "performance benchmarks and implementation details for production RAG systems" |
| "Cancer research" | "CAR-T cell therapy clinical trial results for B-cell lymphoma, efficacy rates, adverse events, and FDA approval timelines" |
| "Recent studies on psychology" | "meta-analysis of cognitive behavioral therapy effectiveness for treatment-resistant depression in adolescents" |
| "Database optimization" | "PostgreSQL query performance tuning for time-series data, indexing strategies, partitioning, and memory configuration benchmarks" |

If a user is querying the Valyu API directly (not through an LLM tool call), set tool_call_mode=false for better results.

Advanced Techniques for Production Systems

Maximizing Valyu’s Search Controls

Combine agent-ready prompts with Valyu’s granular parameters for increased performance:

```python
# Assumes an initialized Valyu Python SDK client, e.g. valyu = Valyu(api_key="...")
response = valyu.search(
    "GPT-4 vs GPT-3 architectural innovations: training efficiency, inference optimization, and benchmark comparisons",
    search_type="proprietary",
    max_num_results=10,
    relevance_threshold=0.7,
    included_sources=["valyu/valyu-arxiv"],
    max_price=50.0,
    category="machine learning"
)
```

Pro tip: Leverage Valyu’s beyond-the-web capabilities with included_sources like valyu/valyu-arxiv for academic content, financial market data, or specialized datasets that other APIs can’t access.

Optimizing your AI Integration

Avoiding Common Integration Issues

  1. Token waste: Focus prompts on the essential information your LLM context needs; don’t ask general questions
  2. Ambiguous queries: Define domain-specific terms and expand acronyms to improve search precision
  3. Missing filters: Always use Valyu’s relevance thresholds and source controls
  4. Ignoring cost optimization: Balance max_price with result quality needs
  5. Wrong source expectations: Sometimes highly cited or popular sources may not contain the context you need. For example, the “Attention is All You Need” paper is foundational but a poor source for learning how transformers work in modern LLMs
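One way to apply points 3 and 4 consistently is to centralize defaults for every call. The helper and values below are hypothetical, reusing the parameter names from the search example above:

```python
# Hypothetical production defaults; tune these per workload.
SEARCH_DEFAULTS = {
    "relevance_threshold": 0.7,  # filter low-relevance hits before they reach the LLM
    "max_num_results": 5,        # keep context windows tight
    "max_price": 20.0,           # cap per-query spend
}

def search_with_defaults(client, query, **overrides):
    """Apply shared defaults, letting individual calls override them."""
    params = {**SEARCH_DEFAULTS, **overrides}
    return client.search(query, **params)
```

A call such as `search_with_defaults(valyu, "PostgreSQL partitioning benchmarks", max_price=5.0)` keeps the relevance and result-count defaults while tightening the spend cap for that one query.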

Start Building with Valyu

Ready to integrate production-grade search into your AI stack?

Developer Support

Building something ambitious? Our team helps optimize search strategies for mission-critical AI applications.

Performance tip: The most effective prompts combine domain expertise with Valyu’s search controls. Start with our templates, then iterate based on your LLM’s specific context requirements.