Overview
AgentSource MCP (Model Context Protocol) provides a standardized way to access Explorium’s comprehensive business and contact data. Whether you’re building AI agents, data pipelines, or custom applications, we offer multiple implementation paths to suit your needs.
Available Implementation Methods
OpenAI Integration
Best for: AI-powered applications and conversational agents
Build intelligent agents that can automatically discover and use AgentSource tools through OpenAI’s API; a minimal connection sketch follows this section.
Key Features:
- Automatic tool discovery
- Natural language queries
- Built-in response formatting
- Minimal setup required
Use Cases:
- Customer service chatbots
- Sales intelligence assistants
- Automated research agents
- Interactive data exploration
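As a minimal sketch of this path, the snippet below points OpenAI’s Responses API at a remote MCP server and lets the model discover and call tools on its own. The server URL, header name, and example query are placeholders rather than official AgentSource values; substitute the endpoint and API key from your Explorium account.

```python
# Minimal sketch: point OpenAI's Responses API at a remote MCP server.
# The server URL and auth header below are placeholders, not official
# AgentSource values -- take the real ones from your Explorium account.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4.1",
    tools=[
        {
            "type": "mcp",
            "server_label": "agentsource",
            "server_url": "https://example-agentsource-mcp.explorium.ai/mcp",  # placeholder URL
            "headers": {"api_key": "YOUR_AGENTSOURCE_API_KEY"},  # placeholder auth header
            "require_approval": "never",  # let the model call tools without manual approval
        }
    ],
    input="Find the top 5 fintech companies in New York with 50-200 employees.",
)

print(response.output_text)
```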
Python SDK
Best for: Direct programmatic access and custom applications
Native Python implementation for developers who need fine-grained control over data access and processing; a minimal client sketch follows this section.
Key Features:
- Asynchronous operations
- Full control over tool execution
- Ideal for data pipelines
- Comprehensive error handling
Use Cases:
- Data enrichment pipelines
- Batch processing systems
- Custom analytics tools
- Integration with existing Python applications
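A minimal sketch of direct access with the official `mcp` Python SDK might look like the following; the endpoint URL, auth header, tool name, and arguments are illustrative placeholders rather than the actual AgentSource values.

```python
# Minimal sketch using the official `mcp` Python SDK over SSE.
# The endpoint URL, auth header, tool name, and arguments are illustrative
# placeholders -- use the values documented for your AgentSource account.
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

SERVER_URL = "https://example-agentsource-mcp.explorium.ai/sse"  # placeholder
API_KEY = "YOUR_AGENTSOURCE_API_KEY"  # placeholder


async def main() -> None:
    async with sse_client(SERVER_URL, headers={"api_key": API_KEY}) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Dynamic tool discovery: fetch the tool list at runtime.
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Call one tool manually; the name and arguments are hypothetical.
            result = await session.call_tool(
                "example_company_search",
                arguments={"country": "US", "industry": "fintech"},
            )
            print(result.content)


asyncio.run(main())
```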
LangGraph Integration
Best for: Complex AI workflows and multi-step processes
Build sophisticated AI agents with state management and complex decision trees using LangGraph; a minimal agent sketch follows this section.
Key Features:
- Stateful conversations
- Multi-step workflows
- Conditional logic
- Tool chaining
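As a rough sketch, the `langchain-mcp-adapters` package can load AgentSource tools into a LangGraph ReAct agent along these lines; the server URL, header, and model identifier are placeholders you would replace with your own configuration.

```python
# Minimal sketch: expose MCP tools to a LangGraph ReAct agent via
# langchain-mcp-adapters. URL, header, and model identifier are placeholders.
import asyncio

from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent


async def main() -> None:
    client = MultiServerMCPClient(
        {
            "agentsource": {
                "transport": "sse",
                "url": "https://example-agentsource-mcp.explorium.ai/sse",  # placeholder
                "headers": {"api_key": "YOUR_AGENTSOURCE_API_KEY"},  # placeholder
            }
        }
    )

    # Tools are discovered from the MCP server at runtime.
    tools = await client.get_tools()

    # Stateful ReAct agent that can chain tool calls across multiple steps.
    agent = create_react_agent("anthropic:claude-sonnet-4-5", tools)  # placeholder model

    result = await agent.ainvoke(
        {"messages": [{"role": "user", "content": "Which CRMs do mid-size US fintechs use?"}]}
    )
    print(result["messages"][-1].content)


asyncio.run(main())
```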
Quick Comparison
| Feature | OpenAI | Python SDK | LangGraph |
|---|---|---|---|
| Setup Complexity | Low | Medium | Medium |
| Best For | AI Agents | Data Pipelines | Complex Workflows |
| Language | Python | Python | Python |
| Tool Discovery | Automatic | Manual | Automatic |
| Response Format | Structured | JSON | Structured |
| Async Support | ✓ | ✓ | ✓ |
| State Management | Limited | Custom | Built-in |
| Error Handling | Built-in | Manual | Built-in |
Getting Started
Prerequisites
Before implementing AgentSource MCP, ensure you have:
- AgentSource API Key - Get your API key
- Development Environment - Python 3.8+ recommended
- Basic Understanding - Familiarity with REST APIs and async programming
Core Concepts
What is MCP?
Model Context Protocol (MCP) is a standardized protocol that enables AI models and applications to discover and use tools dynamically. With AgentSource MCP, you get (see the message-shape sketch after this list):
- Dynamic Tool Discovery - Tools are discovered at runtime, not hardcoded
- Standardized Interface - Consistent API across all tools
- Type Safety - Structured inputs and outputs
- Streaming Support - Real-time data updates via SSE
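To make the standardized interface concrete: MCP is JSON-RPC 2.0 under the hood, so tool discovery and invocation boil down to two methods. The payloads below are illustrative (shown as Python dicts for readability); the tool name and arguments are hypothetical.

```python
# Illustrative MCP JSON-RPC 2.0 payloads (field values are made up).
# Discovery: ask the server which tools it exposes.
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Invocation: call one discovered tool with structured arguments.
call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "example_company_search",  # hypothetical tool name
        "arguments": {"industry": "fintech", "country": "US"},
    },
}
```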
Available Tool Categories
Business Intelligence
- Company search and matching
- Firmographic enrichment
- Technology stack analysis
- Financial metrics (public companies)
- Competitive landscape insights
Contact Intelligence
- Employee search by role/department
- Contact information enrichment
- Professional profile data
- Work history and experience
- Social media activity
Market & Event Insights
- Industry trends and statistics
- Company events and news
- Website changes monitoring
- Workforce trend analysis
- Funding and acquisition data
Common Use Cases
Sales Intelligence
Find prospects at target companies, enrich contact information, and track company events:
- Identify decision makers at target accounts
- Get verified email addresses and phone numbers
- Monitor job changes and company updates
- Analyze technology stack for better positioning
Market Research
Analyze industries, competitors, and market trends:
- Company segmentation by size, revenue, location
- Technology adoption trends
- Competitive landscape analysis
- Industry-specific insights
Lead Enrichment
Enhance your CRM data with comprehensive business and contact information:
- Bulk company matching and enrichment
- Contact verification and updates
- Organizational structure mapping
- Technographic segmentation
Due Diligence
Comprehensive company analysis for investment or partnership decisions:
- Financial metrics and performance
- Leadership and organizational changes
- Technology infrastructure
- Growth indicators and challenges
Best Practices for Integrating AgentSource MCP
Here are some recommendations for optimal integration of Explorium’s AgentSource MCP server into your AI agent application. Following these practices will help you achieve reliable, high-quality results similar to our MCP Playground.
System Prompt Guidelines
Core Principles
Effective MCP performance is highly dependent on your system prompt. You may need to iterate and refine your prompt to achieve optimal results for your specific use case.
Reference System Prompt
Below is the system prompt used in Explorium’s MCP Playground. This can serve as a baseline for your implementation (a placeholder-substitution sketch follows this list):
- Replace the date placeholder: Update {currentDateTime} with your actual date/time variable or remove if not needed
- Adjust the identity: Modify “The assistant is AgentSource” to match your agent’s branding
- Tailor the workflow: Add or modify workflow steps based on your specific use case
- Customize response style: Adapt formatting and verbosity requirements to match your application’s UX
- Add domain-specific instructions: Include any industry or use-case specific guidance relevant to your users
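As one way to handle the first customization step, the sketch below substitutes the {currentDateTime} placeholder at request time. The prompt text in the template is a stand-in, not the actual Playground prompt.

```python
# One way to fill the {currentDateTime} placeholder before each request.
# The prompt text below is a stand-in, not the actual Playground prompt.
from datetime import datetime, timezone

SYSTEM_PROMPT_TEMPLATE = (
    "The assistant is AgentSource.\n"
    "The current date and time is {currentDateTime}.\n"
    "Use the available AgentSource MCP tools to answer business data questions."
)


def build_system_prompt() -> str:
    # Substitute the placeholder with the real timestamp at request time.
    return SYSTEM_PROMPT_TEMPLATE.format(
        currentDateTime=datetime.now(timezone.utc).isoformat()
    )


print(build_system_prompt())
```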
Model Selection
Recommended Models
We highly recommend using Claude Sonnet models for optimal performance:
- Claude 4.5 Sonnet (primary recommendation)
- Claude 4.0 Sonnet (alternative)
- OpenAI GPT-5
- OpenAI GPT-4.1
Model Parameters
Temperature Settings
Critical: Use a low temperature setting to ensure reliable MCP tool usage.
- Recommended range: 0.0 - 0.1
- Explorium uses: 0.05
- Why this matters: Higher temperature values may cause the agent to query MCP tools incorrectly or inconsistently
Other Parameters
Consider these additional parameter recommendations (a short example follows this list):
- Max tokens: Set appropriately for your use case; we recommend not limiting the number of output tokens
- Top-p: If used, keep at 0.9 or lower for consistent behavior
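For example, with the OpenAI Python client the recommendations above translate to roughly the following; the model name and messages are placeholders, and the same settings apply to other providers’ SDKs.

```python
# Applying the recommended sampling parameters (OpenAI client shown as one
# example; the model name and messages are placeholders).
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4.1",   # placeholder model
    temperature=0.05,  # low temperature for reliable MCP tool usage
    top_p=0.9,         # keep at or below 0.9 if used
    messages=[
        {"role": "system", "content": "You are AgentSource."},
        {"role": "user", "content": "Find US fintech companies with 50-200 employees."},
    ],
    # No max_tokens set: output length is intentionally left uncapped.
)
print(response.choices[0].message.content)
```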
Prompt Engineering Best Practices
Tool Calling Instructions
- Be explicit about workflow: Clearly define when and how each MCP tool should be used
- Handle errors gracefully: Provide clear instructions on error handling and fallback behavior
Response Formatting
- Specify output format: Be explicit about Markdown formatting, table usage, and response structure
- User experience: Include guidelines on when to wait for user input vs. proceeding automatically
Testing and Iteration
Validation Steps
- Test common queries: Validate your integration with typical user queries in your domain
- Edge case testing: Test with complex, multi-filter queries
- Error scenarios: Verify behavior when API calls fail or return no results
- Consistency checks: Run the same query multiple times to ensure consistent behavior
- Monitor tool calling patterns: Ensure the agent is calling tools in the optimal sequence
- Review response quality: Check that responses are well-formatted and user-friendly
- Iterate on the system prompt: Refine instructions based on observed agent behavior
- A/B test variations: Try different prompt formulations to find what works best for your use case
Common Pitfalls to Avoid
- ❌ Using high temperature settings (> 0.1)
- ❌ Exposing internal function names in user-facing responses
- ❌ Insufficient error handling instructions
- ❌ Vague or ambiguous workflow instructions
Need Help?
Support
- Technical Support: support@explorium.ai
- Community Forum: Coming Soon!
What’s Next?
- Choose your implementation method based on your use case
- Follow the specific guide for detailed instructions
- Start with simple queries and gradually increase complexity
- Join our community to share experiences and get help