In early 2026 we prepared a complete application for the FENG Ścieżka SMART research grant — Poland's flagship R&D funding instrument, with a programme cap of 20 million PLN over 36 months.
We did not spend six months writing it. We spent that time building the system that writes it.
The problem, structurally
A competitive FENG Ścieżka SMART application has:
- A work-package plan with fine-grained milestones and a matched budget.
- An impact narrative scored against specific EU research priorities.
- A commercialisation plan with numbers that a commercial reviewer will actually believe.
- Regulatory coverage across Polish R&D tax law, IP assignment, and GDPR.
- Team CVs, letters of support, bibliographic citations, and an IP register.
Each section is long, structured, and scored against a published rubric. Consulting firms do this work; they charge a success fee as a percentage of the grant value. For a grant at the programme cap, that bill is material.
What we built
- An ingestion agent that parsed the full FENG Ścieżka SMART call documentation and produced a structured requirements tree.
- A section-drafting agent set: one agent per required section, each with its own rubric awareness and domain context. The agent for the impact section read the relevant EU priorities. The agent for the budget section knew Polish R&D cost categories.
- A coherence agent that read the full draft and flagged contradictions between sections — the most common failure mode in multi-author grant work.
- A QA agent that scored each section against the published rubric before a human reviewed it.
Not a chatbot writing a grant. A composed pipeline specifically for writing grants against published rubrics.
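A minimal sketch of that composition, in Python. Every name here (`Requirement`, `run_pipeline`, the stub bodies) is invented for illustration; this is the shape of the pipeline described above, not the production implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """One node of the requirements tree the ingestion agent produces."""
    section: str                    # e.g. "impact", "budget"
    rubric: list[str]               # published scoring criteria for this section
    children: list["Requirement"] = field(default_factory=list)

def ingest(call_docs: str) -> list[Requirement]:
    # Stub: a real ingestion agent runs an extraction pass over the full
    # call documentation. The returned values here are placeholders.
    return [
        Requirement("impact", ["EU priority alignment", "scale of effect"]),
        Requirement("budget", ["cost categories", "milestone match"]),
    ]

def draft_section(req: Requirement, context: dict[str, str]) -> str:
    # Stub: one drafting agent per section, primed with that section's
    # rubric and domain context (EU priorities, Polish cost categories, ...).
    return f"[draft of '{req.section}' against {len(req.rubric)} criteria]"

def coherence_flags(drafts: dict[str, str]) -> list[str]:
    # Stub: the coherence agent reads the full draft and flags
    # contradictions between sections.
    return []

def qa_scores(req: Requirement, text: str) -> dict[str, float]:
    # Stub: rubric-simulated scoring, surfaced before human review.
    return {criterion: 0.0 for criterion in req.rubric}

def run_pipeline(call_docs: str, context: dict[str, str]):
    tree = ingest(call_docs)                                  # requirements tree
    drafts = {r.section: draft_section(r, context) for r in tree}
    flags = coherence_flags(drafts)                           # cross-section QA
    scores = {r.section: qa_scores(r, drafts[r.section]) for r in tree}
    return drafts, flags, scores                              # then: human review
```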
"Six months of grant-prep was replaced by the time it took to describe the call, the team, and the research plan to the system. The writing itself was a read-and-revise job, not a write-from-scratch job."
What we did not do
We did not file the application.
The preparation was complete. The work product existed. The commercial review showed the grant's obligations — fixed timelines, fixed headcount commitments, a multi-year reporting cadence, matching-funds obligations satisfied from commercial revenue — were misaligned with how we actually wanted to run the business. We took the commercial decision and did not submit.
That is the honest part of this story. "We won a 20M PLN grant with AI" would be a better headline. We don't have it.
What we kept
We kept the agent system.
It was already a working pipeline. Point it at a different grant call — Horizon Europe, Innovate UK, ARIA, an Interreg cross-border programme — and it produces the same outputs: eligibility score, section drafts, coherence QA, rubric-simulated review. Grant calls vary in language and weighting, but the structural shape is consistent across funders.
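To make the call-agnostic claim concrete, a hedged sketch: two invented call descriptors feeding the same `run_pipeline` from the earlier sketch. The funder names are real; the section names and rubric weights are made up for the example. The point is that only the descriptor changes, never the pipeline.

```python
# Invented call descriptors: the pipeline stays fixed, only these inputs vary.
FENG_SMART = {
    "funder": "PARP",
    "language": "pl",
    "sections": ["work_packages", "impact", "commercialisation", "budget"],
    "rubric_weights": {"impact": 0.30, "commercialisation": 0.30},
}

INNOVATE_UK_SMART = {
    "funder": "Innovate UK",
    "language": "en",
    "sections": ["need", "approach", "team", "costs"],
    "rubric_weights": {"approach": 0.35, "team": 0.20},
}

# Same outputs either way: section drafts, coherence flags,
# and rubric-simulated scores, i.e. run_pipeline(...) is unchanged.
```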
That system is now GrantAI. Pillar 04 of Blackflake.
The dogfooding posture
Most AI products validate against someone else's use case. GrantAI validated against our own — the hardest possible customer, preparing the largest ask, under time pressure, with full self-criticism at every stage. We read every draft. We argued with the QA agent's scoring. We revised the rubric. We made the commercial call at the end and walked away from the funding.
When GrantAI is deployed at another firm, the pipeline they use is the one we ran against ourselves first. We know where it's sharp. We know where it needs a human partner's judgment. We know which agents to trust and which to treat as drafts.
What GrantAI does today
For a firm considering a grant application, GrantAI:
- Parses the call documentation, builds a structured requirements tree.
- Scores organisational eligibility against the call's published criteria.
- Drafts each required section against the published rubric.
- Runs coherence QA across sections — catches the contradictions a multi-author team creates.
- Simulates evaluator scoring before submission.
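A hedged sketch of what that last step's output might look like. The field names are invented for illustration, not the product's actual schema.

```python
from dataclasses import dataclass

@dataclass
class SectionScore:
    """One row of the simulated evaluator report (illustrative fields)."""
    section: str            # e.g. "commercialisation"
    criterion: str          # the rubric criterion being scored
    simulated: float        # score on the call's own scale
    threshold: float        # pass mark published in the call
    evidence: str           # the draft passage the score rests on

def needs_human_attention(report: list[SectionScore]) -> list[SectionScore]:
    """Surface the criteria scoring below the published pass mark."""
    return [row for row in report if row.simulated < row.threshold]
```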
For a consultancy running grant applications, it lets you:
- Scale expertise across more calls without adding senior headcount.
- Stop rewriting the same sections across adjacent clients.
- Produce inspectable reasoning — every scoring decision is auditable, which matters when a client asks why you charged for that specific revision round.
Target markets, in order of active development:
- Poland: FENG Ścieżka SMART, FENG Ścieżka Akademicka, PARP operational programmes.
- United Kingdom: Innovate UK Smart, ARIA opportunity spaces, Catapult calls.
- EU-wide: Horizon Europe pillars, EIC Accelerator, Interreg.
- R&D tax credits: Polish B+R relief and UK RDEC / SME claims, which share the same rubric-awareness layer.
Status and engagement
In build. The FENG application work is the proof of concept. Production deployment tracks behind Legal AI and MedAssist on the current roadmap. No drip sequences, no BDRs, no sales pipeline.
If you run grant applications at scale and want an early look: enterprise@blackflake.com. Include the grant call and the stage you're at. We'll say yes, no, or "worth a 30-minute call."
— Bartek Kubas · Founder-architect · Blackflake · Łódź, Poland · 22 April 2026