This page offers guidance on how to query the Valyu DeepSearch API effectively to get the best results.

Why Prompting Matters

Valyu is AI-native and built for AI agents that need factual grounding from authoritative sources. The more precise your search instructions, the better Valyu can deliver real-time, relevant results that reduce hallucinations and improve your AI’s accuracy.

Anatomy of a Good Prompt

When calling the API from your LLM, agent, or user flow, effective prompts for Valyu should include:

| Component | Description | Example |
| --- | --- | --- |
| Intent | What specific knowledge do you need? | "LLM transformer efficiency optimizations" |
| Source Type | Which data sources should Valyu prioritize? | "{author} {document name}" |
| Constraints | What filters improve relevance? | "production-ready solutions" |
Pro tip: Don’t want to write prompts? Use tool_call_mode=false in the API parameters. However, to get the best results, keep reading.

Query Optimisation Essentials

Character Limits and Precision

Valyu works best with focused, concise queries under 400 characters. Think search terms, not conversational prompts.

❌ Too long (450+ characters): "I need comprehensive information about the latest developments in artificial intelligence and machine learning technologies, particularly focusing on large language models, their training methodologies, performance benchmarks, computational requirements, and how they compare to previous generations of AI systems in terms of accuracy and efficiency"

✅ Optimized (under 400 characters): "LLM training methodologies performance benchmarks computational requirements vs previous AI systems"
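The character guidance above can be enforced before a query ever reaches the API. The helper below is a hypothetical pre-flight check, not part of the Valyu SDK: it collapses filler whitespace and rejects queries over the recommended limit.

```python
# Hypothetical pre-flight check (not part of the Valyu SDK):
# normalize whitespace and enforce the ~400-character guidance.
MAX_QUERY_CHARS = 400

def validate_query(query: str) -> str:
    """Collapse filler whitespace and reject queries over the recommended limit."""
    compact = " ".join(query.split())
    if len(compact) > MAX_QUERY_CHARS:
        raise ValueError(
            f"Query is {len(compact)} characters; keep it under {MAX_QUERY_CHARS}."
        )
    return compact

print(validate_query("LLM training methodologies  performance benchmarks"))
```

A check like this is cheapest to run client-side, before any API credits are spent.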

Multi-Topic Query Strategy

Complex research needs? Split them into targeted sub-queries rather than cramming multiple intents into one request. This approach delivers more precise results and better leverages Valyu’s DeepSearch capabilities over each query. Instead of one broad query:
{
  "query": "Tell me everything about company ABC including competitors, financials, recent news, and industry trends"
}
Use focused, parallel queries:
{
  "query": "Company ABC main competitors market share analysis"
}
{
  "query": "ABC Corp quarterly revenue growth 2024"
}
{
  "query": "ABC recent acquisitions strategic partnerships"
}
{
  "query": "Industry trends affecting ABC business model"
}
Developer insight: Parallel focused queries often outperform single comprehensive ones, delivering higher relevance scores.
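The parallel pattern above can be sketched as follows. Here `run_query` is a stand-in for the real `valyu.search` call shown later in this guide, so the example stays self-contained.

```python
from concurrent.futures import ThreadPoolExecutor

# Focused sub-queries from the example above.
sub_queries = [
    "Company ABC main competitors market share analysis",
    "ABC Corp quarterly revenue growth 2024",
    "ABC recent acquisitions strategic partnerships",
    "Industry trends affecting ABC business model",
]

def run_query(query: str) -> dict:
    # Stand-in for valyu.search(query); returns a stub response here.
    return {"query": query, "results": []}

# Fire the sub-queries in parallel; pool.map preserves input order.
with ThreadPoolExecutor(max_workers=len(sub_queries)) as pool:
    responses = list(pool.map(run_query, sub_queries))

for response in responses:
    print(response["query"])
```

Because each sub-query is independent, the slowest request bounds total latency instead of the sum of all four.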

Common Prompting Mistakes

Ineffective prompts that waste your API credits:

Avoid Generic Queries

Too generic: Valyu needs specificity to deliver factual grounding.

❌ "AI research"

✅ "transformer attention mechanism computational complexity analysis"

Generic queries return broad, unfocused results that dilute relevance. Specific technical terms help Valyu's search algorithms identify precise sources, details, and content that match your AI's needs.

Specify Source Guidance

Missing source guidance: Specify the type of content you need.

❌ "Stock data"

✅ "Apple quarterly earnings financial statements SEC filings"

Without source context, Valyu may return news articles when you need financial data, or vice versa. Explicit source indicators help prioritise the right content from Valyu's comprehensive search index.

Focus Your Scope

Overly broad scope: Granular search controls work better with focused queries.

❌ "Everything about quantum computing"

✅ "quantum error correction surface codes implementation"

Broad topics dilute your search intent and return surface-level or imprecise content. Focused queries surface specialised content that is more relevant and useful for your AI system.

Single Intent Per Query

Multiple intents in one prompt: Keep each query focused on a single intent.

❌ "Explain causes of high inflation rates, and also tell me about cryptocurrency market trends"

✅ "Federal Reserve interest rate policy impact on inflation 2023-2024"

Mixing intents dilutes the query and reduces precision for each topic. Single-intent queries allow Valyu's relevance algorithms to optimise for a specific domain, delivering higher-quality results that better serve your LLM's context requirements.

Optimize for Low-Verbosity Structure

Too verbose: Don't add noise to the query; keep to key information.

❌ "Explain concepts on how bioinformatics works by helix"

✅ "DNA helix structure bioinformatics sequence analysis"

Verbose phrasing with unnecessary words reduces search precision and wastes tokens. Compressed, keyword-focused queries improve search precision, especially when looking for specific information within a specific document.

Transform weak prompts into high intent queries:

| Ineffective Prompt | Optimized for Valyu |
| --- | --- |
| "Find information about machine learning" | "performance benchmarks and implementation details for production RAG systems" |
| "Cancer research" | "CAR-T cell therapy clinical trial results for B-cell lymphoma, efficacy rates, adverse events, and FDA approval timelines" |
| "Recent studies on psychology" | "meta-analysis of cognitive behavioral therapy effectiveness for treatment-resistant depression in adolescents" |
| "Database optimization" | "PostgreSQL query performance tuning for time-series data, indexing strategies, partitioning, and memory configuration benchmarks" |
If a user is querying the Valyu API directly (not through an LLM tool call), set tool_call_mode=false for better results.
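One way to apply this rule is a small wrapper that sets `tool_call_mode` based on who authored the query. This is a hypothetical helper; the parameter name follows this guide, so verify it against the current SDK documentation.

```python
# Hypothetical wrapper: choose tool_call_mode based on who authored the query.
# The parameter name follows this guide; verify it against the current SDK docs.
def build_search_kwargs(query: str, from_llm_tool: bool) -> dict:
    return {
        "query": query,
        "tool_call_mode": from_llm_tool,  # False for direct user queries
    }

kwargs = build_search_kwargs(
    "PostgreSQL query performance tuning time-series indexing",
    from_llm_tool=False,
)
print(kwargs["tool_call_mode"])
```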

Maximizing Valyu’s Search Parameters

Combine optimised prompts with Valyu's granular parameters for stronger guardrails. These parameters constrain your AI's search to a specific area, which is useful when pointing the DeepSearch API at a specific domain or tool call.
response = valyu.search(
    "GPT-4 vs GPT-3 architectural innovations: training efficiency, inference optimization, and benchmark comparisons",
    search_type="proprietary",
    max_num_results=10,
    relevance_threshold=0.6,
    included_sources=["valyu/valyu-arxiv"],
    max_price=50.0,
    category="machine learning",
    start_date="2024-01-01",
    end_date="2024-12-31"
)
We still recommend passing all the search boundaries in the query itself for the best results. Treat the parameters as guardrails or hard filters.
Pro tip: Leverage Valyu’s beyond-the-web capabilities with included_sources like valyu/valyu-arxiv for academic content, financial market data, or specialized datasets that other Search APIs can’t access.

Avoid Common Integration Mistakes

  1. Token waste: Focus prompts on essential information for your LLM context; don't ask general questions
  2. Ambiguous queries: Define domain-specific terms and expand acronyms to improve search precision
  3. Missing filters: Always use Valyu’s relevance thresholds and source controls
  4. Ignoring cost optimization: Balance max_price with result quality needs
  5. Wrong source expectations: Highly cited or popular sources may not contain the context you need. For example, the "Attention is All You Need" paper is foundational but a poor source for learning how transformers work in modern LLMs
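As a client-side complement to the relevance filters above, results can also be post-filtered after retrieval. This sketch assumes each result exposes a `relevance_score` field, mirroring the `relevance_threshold` parameter used earlier; treat the field name as an assumption to verify against the API response schema.

```python
# Hypothetical client-side filter; assumes each result carries a
# relevance_score field matching the API's relevance_threshold scale.
def filter_by_relevance(results: list, threshold: float = 0.6) -> list:
    """Keep only results at or above the relevance threshold."""
    return [r for r in results if r.get("relevance_score", 0.0) >= threshold]

sample = [
    {"title": "Surface codes implementation", "relevance_score": 0.91},
    {"title": "Quantum computing overview", "relevance_score": 0.42},
]
print(filter_by_relevance(sample))
```

A second pass like this lets you tighten quality thresholds per use case without re-issuing the search.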

Start Building with Valyu

Ready to integrate production-grade search into your AI stack?

Developer Support

Building something ambitious? Our team helps optimize search strategies for mission-critical AI applications.
Performance tip: The most effective prompts combine domain expertise with Valyu’s search controls. Start with our templates, then iterate based on your LLM’s specific context requirements.