What recursive-research Does
Recursive Research is an advanced research automation skill for conducting deep, multi-layered investigations across any academic or professional domain, from quantum physics to business strategy to art history. It uses recursive query refinement and source credibility tiering to build comprehensive knowledge hierarchies that reach PhD-level depth. The skill is designed for researchers, analysts, product strategists, and knowledge workers who need to move beyond surface-level information and build defensible, well-sourced arguments. By combining intelligent source evaluation with decision-making frameworks such as Munger's inversion and the Wegmans Delphi Method (WDM), it lets users autonomously explore complex domains and reach evidence-backed conclusions without a manual literature review.
How to Install
Prerequisites
- Python 3.8 or higher
- Git installed on your system
- API access to academic databases (optional but recommended): Google Scholar API, Semantic Scholar API, or arXiv API
- Claude API key (for autonomous decision-making features)
Installation Steps
1. Clone the repository

   ```shell
   git clone https://github.com/Anjos2/recursive-research.git
   cd recursive-research
   ```

2. Create a Python virtual environment

   ```shell
   python3 -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies

   ```shell
   pip install -r requirements.txt
   ```

4. Configure API credentials
   - Create a `.env` file in the project root
   - Add your API keys:

   ```
   CLAUDE_API_KEY=your_key_here
   SEMANTIC_SCHOLAR_API_KEY=your_key_here
   ARXIV_API_KEY=your_key_here
   ```

5. Verify installation

   ```shell
   python -m recursive_research --version
   ```

6. Run a test query

   ```shell
   python -m recursive_research query "machine learning bias in hiring" --depth=3
   ```
Use Cases
- Due Diligence for Product Decisions: Product managers researching competitive landscapes or evaluating emerging technologies can use Recursive Research to automatically surface 50+ relevant sources across trade publications, academic papers, and market reports, organized by credibility tier
- Thesis Development for Business Strategy: Strategy consultants building investment theses or market entry recommendations can leverage the Munger inversion framework to automatically stress-test assumptions and identify counterarguments across sources
- Academic Literature Reviews: PhD students and researchers can automate the initial discovery phase of literature reviews, recursively exploring citation networks and identifying seminal papers without manually visiting 100+ websites
- Patent Landscape Analysis: Intellectual property strategists can recursively map patent families, technical claims, and competitor innovations across international databases to identify white space and infringement risks
- Innovation Forecasting: Innovation teams can use the recursive framework to explore emerging domains (e.g., synthetic biology, quantum computing) by automatically identifying key researchers, breakthrough publications, and institutional clusters
How It Works
Recursive Research operates through a four-stage architecture. First, query decomposition breaks down a user’s research question into constituent sub-questions using Claude’s reasoning capabilities. For example, “What is the state of AI in healthcare?” automatically decomposes into 8-12 focused queries like “What are the FDA approval pathways for AI diagnostics?” and “What are the current failure modes in clinical AI deployment?”
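The recursive structure of this first stage can be sketched as follows. This is a minimal illustration, not the skill's actual implementation: `decompose` stands in for the Claude-backed decomposition call (shown here as a deterministic stub so the recursion shape is visible), and `QueryNode`, `build_tree`, and the fan-out of three sub-questions are all assumptions for the sketch.

```python
from dataclasses import dataclass, field


@dataclass
class QueryNode:
    """One node in the recursive research tree."""
    question: str
    depth: int
    children: list["QueryNode"] = field(default_factory=list)


def decompose(question: str) -> list[str]:
    # Hypothetical stand-in for the Claude-backed call; the real skill
    # would prompt the model for 8-12 focused sub-questions.
    return [f"{question} (sub-question {i})" for i in range(1, 4)]


def build_tree(question: str, depth: int = 1, max_depth: int = 3) -> QueryNode:
    """Recursively decompose a question until max_depth is reached."""
    node = QueryNode(question, depth)
    if depth < max_depth:
        node.children = [build_tree(sub, depth + 1, max_depth)
                         for sub in decompose(question)]
    return node


tree = build_tree("What is the state of AI in healthcare?", max_depth=2)
for child in tree.children:
    print(child.question)
```

The `--depth` flag in the test query above would correspond to `max_depth` here: each additional level multiplies the number of focused sub-queries fed to the search stage.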
Second, multi-source search and retrieval executes these decomposed queries across heterogeneous sources—academic databases, preprint servers, industry reports, news archives, and regulatory filings. Each source returns ranked results that feed into the third stage. Third, source credibility tiering applies a weighted scoring system that evaluates source authority (peer review status, institutional affiliation, citation count), recency, domain specificity, and citation consensus. Sources are automatically classified into tiers: Tier-1 (peer-reviewed, high-citation academic sources), Tier-2 (industry analyses, well-established journalism), and Tier-3 (blog posts, social media, preliminary findings). The system maintains transparency about source uncertainty and flags contradictions between tiers.
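A weighted tiering scheme like the one described could look roughly like this. The weights, decay rate, and tier thresholds below are illustrative assumptions, not the skill's actual values; the point is how authority, recency, and domain fit combine into a single score that maps onto the three tiers.

```python
def credibility_score(peer_reviewed: bool, citations: int,
                      years_old: float, domain_match: float) -> float:
    """Combine authority, recency, and domain fit into one weighted score.

    domain_match is assumed to be a relevance value in [0, 1].
    """
    authority = (0.5 if peer_reviewed else 0.0) + min(citations / 1000, 0.3)
    recency = max(0.0, 0.2 - 0.02 * years_old)  # decays to zero over ~10 years
    return authority + recency + 0.2 * domain_match


def tier(score: float) -> str:
    """Map a combined score onto the three credibility tiers."""
    if score >= 0.7:
        return "Tier-1"
    if score >= 0.4:
        return "Tier-2"
    return "Tier-3"


# A recent, well-cited, peer-reviewed paper in the target domain:
score = credibility_score(peer_reviewed=True, citations=800,
                          years_old=2, domain_match=0.9)
print(tier(score))
```

Keeping the scoring function separate from the tier cutoffs makes it easy to flag contradictions between tiers: two sources that disagree can be compared on their component scores, not just their labels.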
Fourth, decision framework synthesis applies two complementary reasoning approaches. Munger’s inversion technique automatically flips research questions (“What would make this wrong?” instead of “What makes this right?”) to surface blindspots and biases in the collected evidence. Simultaneously, the Wegmans Delphi Method aggregates expert perspectives from the sources into probabilistic forecasts. The output is a hierarchical research report that shows not just findings, but the confidence bands around them and the logical dependencies between claims across sources.
Pros and Cons
Pros:
- Reaches PhD-level depth autonomously without manual literature review or domain expertise
- Source tiering ensures transparency about evidence quality and flags conflicting information
- Recursive decomposition discovers insights a single search query would miss
- Munger inversion built-in identifies blindspots and stress-tests conclusions
- Works across any domain—science, business, humanities, arts—with consistent methodology
- Outputs structured, exportable results compatible with Notion, Obsidian, Excel
- Significantly faster than manual research: hours instead of weeks
Cons:
- API costs scale with recursion depth; complex queries can cost $30-50+
- Requires API access to academic databases for best results; some databases are paywalled
- May over-index on readily available, English-language sources; misses research in other languages
- Struggles with very recent breaking news (< 2 weeks old) or extremely niche topics with little published research
- Output quality depends heavily on how well you decompose your initial question; vague questions produce vague results
- Cannot conduct original primary research (surveys, interviews, experiments); works only with existing published sources
- Tier-3 sources (blogs, preprints) can introduce noise if results aren't filtered carefully
Related Skills
- Academic Literature Mapper: Automatically visualizes citation networks and identifies seminal papers and research clusters within a domain
- Competitive Intelligence Gatherer: Applies similar recursive decomposition and source tiering specifically to market analysis, competitor tracking, and industry trends
- Decision Framework Synthesizer: Implements structured decision-making tools (MECE, decision trees, scenario planning) on top of research findings
- Citation Analysis Tool: Provides deep citation tracking and impact metrics to evaluate which research sources are most influential in a field
- Evidence Evaluator: Specializes in assessing research quality, methodological rigor, and replicability of claims across sources
Alternatives
- Manual Literature Review + Google Scholar: Free and flexible but labor-intensive; requires expert domain knowledge to decompose questions effectively; can miss important sources
- ChatGPT/Claude Direct + Web Search: Fast for quick answers but lacks structured source credibility tiering and doesn’t recursively deepen research; prone to hallucination on niche topics
- Subscription Research Platforms (Bloomberg Terminal, FactSet, LexisNexis): Provide high-quality curated sources and tiering but are expensive ($20K-100K+/year), require institutional access, and lack autonomous reasoning frameworks like Munger inversion