MCP Server
Fload provides an MCP (Model Context Protocol) server that lets AI clients such as Claude Desktop, Cursor, and other MCP-compatible tools connect directly to your Fload data and actions.
What is MCP?
Model Context Protocol is an open standard that lets AI applications connect to external tools and data sources. Instead of copy-pasting data into your AI chat, MCP lets the AI pull data directly from Fload — and take actions on your behalf.
With the Fload MCP server, you can:
- Ask Claude Desktop about your app metrics and get live answers
- Have Cursor reference your app's reviews while writing code
- Build custom AI workflows that read and act on your Fload data
Setup
1. Get an API key
Generate an API key from Settings → API Keys in your Fload dashboard. See API Keys for details on creating and managing keys.
2. Configure your AI client
Claude Desktop — add this to your claude_desktop_config.json:
```json
{
  "mcpServers": {
    "fload": {
      "command": "npx",
      "args": ["-y", "@fload/mcp"],
      "env": {
        "FLOAD_API_KEY": "your-api-key-here"
      }
    }
  }
}
```
Cursor — add the same configuration to your Cursor MCP settings.
That's it. The MCP server runs locally as a subprocess and connects to Fload using your API key.
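For Cursor specifically, the configuration lives in a `.cursor/mcp.json` file (project-level) or `~/.cursor/mcp.json` (global). The sketch below mirrors the Claude Desktop config above; check your Cursor version's MCP settings for the exact location:

```json
{
  "mcpServers": {
    "fload": {
      "command": "npx",
      "args": ["-y", "@fload/mcp"],
      "env": {
        "FLOAD_API_KEY": "your-api-key-here"
      }
    }
  }
}
```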
What you can do
The MCP server gives your AI client access to 37 tools:
Apps
- List all apps in your organization
- Get detailed information about any app
Reviews
- Access reviews with filters for rating, platform, and date
- `generate_review_reply` — Generate an AI draft reply for a review
- `send_review_reply` — Send a reply to an app review
- `translate_review` — Translate a review to English
Analytics
- Get key metrics (revenue, downloads, subscriptions) for any date range
ASO
- `get_aso_summary` — Get ASO score, health, and overview for an app
- `get_aso_recommendations` — Get current ASO recommendations (title, subtitle, keywords)
- `get_aso_keywords` — Get keyword rankings, search volume, and competitor data
- `get_aso_experiments` — List ASO experiments with status filtering
- `get_aso_locale_snapshots` — Get current listing snapshots across all locales
- `trigger_aso_analysis` — Trigger a new ASO analysis run
Agents
- See which agents are configured and their status
- View recent agent run history and results
- `trigger_agent_run` — Trigger a manual agent run
- `pause_agent` — Pause an agent
- `resume_agent` — Resume a paused agent
- `get_agent_activity` — Get recent activity log for an agent
Growth
- Get the latest growth audit report and scores
Monitoring
- Access detected anomalies across all metrics
- `get_anomaly_detail` — Get full detail for a single anomaly
- `acknowledge_anomaly` — Mark an anomaly as acknowledged
- `dismiss_anomaly` — Dismiss an anomaly
Ads
- Get campaign performance data across all platforms
Forecasting
- Get revenue and download projections
Chat
- `list_conversations` — List chat conversations
- `get_conversation_messages` — Get messages in a conversation
- `send_chat_message` — Send a message to the AI chat
Dashboard
- High-level overview of all your apps at once
Pending actions
- List actions awaiting your approval
- Approve or reject actions directly through the AI
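Under the hood, every tool invocation above is a JSON-RPC 2.0 `tools/call` request sent over the server's STDIO transport. As a rough sketch of the wire format (the tool argument names here, like `app_id`, are placeholders — the real parameter schemas come from the server's `tools/list` response):

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 tools/call request as a single line,
    matching the newline-delimited framing of the MCP stdio transport."""
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(request) + "\n"

# Hypothetical arguments for illustration only; real parameter names
# are declared by the server in its tools/list response.
line = make_tool_call(1, "get_aso_summary", {"app_id": "123"})
```

Your AI client builds and sends these messages for you; the sketch is only to show what "the AI calls a tool" means at the protocol level.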
Example prompts
Once connected, just talk to your AI client naturally:
- "What happened to my revenue this week?" → pulls analytics data and compares to last week
- "Are there any anomalies I should know about?" → checks detected anomalies
- "What's my current growth score?" → retrieves the latest growth audit
- "Show me pending actions and approve the review replies" → lists and approves actions
API key management
- Keys are organization-scoped — they access all apps in your org
- Keys support configurable permission scopes (read-only, specific tools, or full access)
- Generate and revoke keys from Settings → API Keys
- See API Keys for full details
Limitations
- The server runs locally as a subprocess (STDIO transport)
- One organization per API key
- MCP reads and writes are free — they don't consume credits
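The STDIO transport noted above means your client launches the server as a child process and exchanges newline-delimited JSON-RPC messages over its stdin/stdout. A minimal sketch of the `initialize` handshake a client sends first (client name and version are placeholders; `2024-11-05` is one published MCP protocol version):

```python
import json

# The initialize request: the first message a client writes to the
# server subprocess's stdin, as one newline-terminated JSON-RPC line.
initialize = {
    "jsonrpc": "2.0",
    "id": 0,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}
wire_line = json.dumps(initialize) + "\n"
```

Because everything flows over the subprocess's pipes, no inbound network port is opened on your machine; only the server's outbound connection to Fload (authenticated by your API key) leaves the host.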