Case Study
AI
Smart Grant AI
SmartGrant aims to automate the discovery of public funding and grant opportunities in Germany.
YEAR
2025
TEAM
Kiran Mulawad
TECH-STACK
RAG, OpenAI embeddings + GPT-4, Pinecone, Streamlit, PostgreSQL
LOCATION
Germany
Published on: October 24, 2025
Project Introduction
SmartGrant crawls multiple German funding portals, translates and normalizes program data, and builds a semantic vector store (OpenAI embeddings + Pinecone). Users describe their company or upload a PDF profile; the system searches 500+ programs, ranks matches using a custom relevance score (deadline, location, domain), and uses GPT-4 to produce prioritized recommendations and draft application documents (DOCX).
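The matching core is standard embedding-based retrieval. Below is a minimal sketch of that step, assuming the current OpenAI and Pinecone Python SDKs; the embedding model, index name, and metadata fields are illustrative placeholders rather than the project's actual configuration.

```python
# Sketch: index a normalized funding program and query it with a company profile.
# Model name, index name, and metadata fields are assumptions for illustration.
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()                       # reads OPENAI_API_KEY from the environment
pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")
index = pc.Index("funding-programs")           # hypothetical index name

def embed(text: str) -> list[float]:
    """Embed a program summary or company profile with OpenAI embeddings."""
    resp = openai_client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

# One cleaned, translated program record; metadata is reused later for rule-based filters.
program = {
    "id": "prog-0001",
    "title": "Digital Jetzt",
    "summary": "Grants for SME digitalization projects in Germany ...",
    "deadline": "2025-12-31",
    "region": "DE",
}
index.upsert(vectors=[{
    "id": program["id"],
    "values": embed(program["summary"]),
    "metadata": {k: v for k, v in program.items() if k != "id"},
}])

# Semantic search: describe the company and retrieve the closest programs.
profile = "Berlin-based SME building IoT sensors, looking for R&D and digitalization funding."
results = index.query(vector=embed(profile), top_k=5, include_metadata=True)
for match in results.matches:
    print(match.id, round(match.score, 3), match.metadata["title"])
```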
A Streamlit UI provides an interactive chat, RAG-backed answers, PDF parsing, and persisted session/history via PostgreSQL. The pipeline includes Selenium/Playwright scraping, DeepL translation, data cleaning, and python-docx generation — enabling a near end-to-end automated assistant for funding discovery and first-draft application creation.
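The drafting step reduces to a chat completion whose output is written out with python-docx. A hedged sketch follows, assuming the OpenAI chat completions API; the prompt wording, model choice, and file name are placeholders, not the project's actual templates.

```python
# Sketch: generate a first application draft and save it as DOCX with python-docx.
# Prompt text and output path are illustrative placeholders.
from openai import OpenAI
from docx import Document

client = OpenAI()

def draft_application(program_title: str, company_profile: str, out_path: str) -> None:
    """Ask the model for a first draft and write it into a Word document."""
    completion = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {"role": "system", "content": "You draft first versions of German public-funding applications."},
            {"role": "user", "content": f"Program: {program_title}\nCompany: {company_profile}\nWrite a structured first draft."},
        ],
    )
    draft_text = completion.choices[0].message.content

    doc = Document()
    doc.add_heading(f"Application draft: {program_title}", level=1)
    for block in draft_text.split("\n\n"):     # keep the model's paragraph structure
        doc.add_paragraph(block)
    doc.save(out_path)

draft_application("Digital Jetzt", "Berlin-based SME building IoT sensors ...", "draft.docx")
```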
This is an internal research project and is still in the prototype phase.
Key Challenges
Data heterogeneity & freshness: Funding portals have differing schemas, frequent updates, and rate limits — making reliable, up-to-date aggregation difficult.
Multilingual extraction & fidelity: Accurately translating German legal and administrative text into English summaries while preserving meaning is non-trivial.
Relevance & legal compliance: Ranking must consider fine-grained eligibility constraints (legal, regional, timeline) to avoid false positives and reduce wasted effort.
AI Solution
RAG + semantic search: Use OpenAI embeddings + Pinecone for fast semantic matching and retrieval, then feed RAG context to GPT-4 to produce accurate program recommendations and justifications.
Hybrid LLM workflow: Use GPT-4-turbo for recommendation and application drafting; use GPT-3.5 (or lighter models) for summaries and low-cost paraphrasing to reduce cost.
Relevance scoring + rules layer: Combine vector similarity with rule-based filters (deadlines, eligibility, location) to improve precision and surface actionable opportunities.
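The last point above is mostly plain Python on top of the retrieval results. Here is a minimal sketch of one way such a scoring layer could look; the field names, weights, and thresholds are assumptions, not the project's actual formula.

```python
# Sketch: hard rule filters plus a weighted relevance score over vector-search matches.
# Field names, weights, and thresholds are illustrative assumptions.
from datetime import date

def passes_rules(meta: dict, company: dict, today: date) -> bool:
    """Drop programs that are expired, out of region, or not open to the company's legal form."""
    if date.fromisoformat(meta["deadline"]) < today:
        return False
    if meta["region"] not in ("DE", company["state"]):
        return False
    return company["legal_form"] in meta["eligible_legal_forms"]

def relevance(similarity: float, meta: dict, company: dict, today: date) -> float:
    """Combine vector similarity with deadline headroom and domain fit."""
    days_left = (date.fromisoformat(meta["deadline"]) - today).days
    deadline_bonus = 0.1 if days_left > 60 else 0.0      # prefer comfortable lead time
    domain_bonus = 0.2 if company["domain"] in meta["domains"] else 0.0
    return 0.7 * similarity + deadline_bonus + domain_bonus

def rank(matches: list[dict], company: dict, today: date) -> list[dict]:
    """Filter retrieved matches with the rules, then sort by the combined score."""
    kept = [m for m in matches if passes_rules(m["metadata"], company, today)]
    return sorted(kept, key=lambda m: relevance(m["score"], m["metadata"], company, today),
                  reverse=True)
```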
Results / Benefits
Time saved: Reduces manual search and first-draft application work from days to minutes per opportunity.
Higher match quality: Semantic and rule-based ranking returns fewer false positives and more actionable matches.
Faster go-to-application: Auto-generated DOCX drafts accelerate the application cycle and increase throughput for consultants or SMEs.
Resource Efficiency
Labor efficiency: Automates repetitive research and drafting tasks — lowers human-hours per application and reduces consultancy costs.
Compute / cost optimization: Tiered model use (GPT-3.5 for summaries, GPT-4 for drafts) reduces API spend while preserving quality where it matters; a routing sketch follows this list.
Reduced travel/printing: Digital document generation and precise matches reduce the need for in-person consulting, physical submissions, and printed paperwork.
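The tiered model use mentioned above can be as simple as a task-to-model lookup. A sketch under that assumption; the task names and model identifiers are illustrative, not the project's actual configuration.

```python
# Sketch: route cheap tasks to a lighter model and keep GPT-4-class models for drafting.
# Task names and model identifiers are illustrative.
from openai import OpenAI

client = OpenAI()

MODEL_BY_TASK = {
    "summary": "gpt-3.5-turbo",        # low-cost summaries and paraphrasing
    "recommendation": "gpt-4-turbo",   # quality matters: program recommendations
    "draft": "gpt-4-turbo",            # quality matters: application drafts
}

def run(task: str, prompt: str) -> str:
    """Send the prompt to the model tier assigned to this task."""
    resp = client.chat.completions.create(
        model=MODEL_BY_TASK[task],
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Example: summarize a translated program description on the cheap tier.
summary = run("summary", "Summarize this funding program in three sentences: ...")
```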