Building an AI Grant Management Platform on AWS Bedrock
How Perfsys built four specialized AI services for EPIRA.ai that turn weeks of manual grant work into minutes — and put the right programs in front of every citizen.
EPIRA.ai is a GovTech company with a straightforward but ambitious mission: fully digitalize the US grant ecosystem. Today, hundreds of thousands of federal, state, and local grant programs exist across the country — most administered through slow, paper-based, or fragmented digital processes that frustrate both the agencies running them and the citizens trying to access them.
Headquartered in Arizona with a development presence in Dublin, Ireland, EPIRA.ai serves two primary user groups: government administrators who design and manage grant programs, and citizens who need to find and apply for financial assistance. The platform’s long-term vision is an intelligent grant marketplace where any citizen can instantly discover what they qualify for and apply without friction.
The challenge Perfsys was brought in to solve was foundational: neither side of the platform had the tooling to operate at the scale the vision required.
The Problem: Two Issues That Build on Each Other
Grant management involves two distinct but deeply connected workflows. Both were broken, and fixing one without the other would only partially solve the problem.
Government Side: Building Forms Was Slow and Error-Prone
Creating a digital grant form the traditional way meant a government representative manually reading a paper-based policy document, then recreating it field by field in a form builder. This process was labor-intensive, prone to errors, and required multiple review iterations before a form was ready to publish.
Grant forms are far more complex than a standard registration form. Each one contains:
Static fields common to every grant, such as name and description
Dynamic field structures unique to each program, requiring specific applicant information
Conditional logic that changes the form’s structure based on how a user answers earlier questions
Field validation rules governing input types (text, numbers, documents) and constraints
Eligibility criteria written as free-form natural language — conditions like “you can apply only if your annual income is under $100,000” or “your roof must be flat”
That last point was the most painful. Because eligibility rules exist as unstructured prose, they cannot be validated programmatically with traditional code. The standard workarounds — extensive if-else logic trees or BPM tooling — are expensive to build and brittle to maintain.
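The brittleness is easy to see in code. Below is a minimal Python sketch of the traditional hardcoded approach; the rule names and thresholds are invented for illustration, not taken from any real grant:

```python
def check_eligibility_hardcoded(profile: dict) -> bool:
    """Hand-written encoding of prose rules such as
    "annual income under $100,000" and "roof must be flat".
    Every new grant, and every wording change, means new code."""
    if profile.get("annual_income", float("inf")) >= 100_000:
        return False  # fails the income rule (or income unknown)
    if profile.get("roof_type") != "flat":
        return False  # fails the roof rule
    return True
```

A rule worded only slightly differently, say "income under $100,000 unless the household has five or more members", forces yet another branch, and the tree grows with every one of the hundreds of thousands of programs.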
Citizen Side: No Way to Know Before Applying
Citizens browsing available grants had no practical way to assess their eligibility before investing time in an application. Without guidance, most would either abandon the process entirely or submit applications for programs they were never going to qualify for.
The downstream effect compounded the problem on the government side: reviewers were flooded with ineligible submissions, spending significant staff time sorting through applications that should never have been filed.
The core issue was not a lack of data or willingness — it was the absence of automation that could turn complex, unstructured grant information into a structured, intelligent experience for everyone involved.
The Solution: A Four-Service AI Grant Management Platform
Rather than building a monolithic AI system, Perfsys designed a modular architecture composed of four specialized AI services, each solving a discrete part of the grant workflow. All four are backed by a shared data and document layer, accessible through a unified GraphQL API, and powered by AWS Bedrock with Anthropic’s models.
This approach was deliberate. Grant management involves distinct problem types — information extraction, eligibility judgment, profile enrichment, and form pre-population — each requiring different AI behavior, prompt design, and data access patterns. A single monolithic service would have created an unmaintainable system with no clear boundaries. Instead, each service operates independently through well-defined interfaces.
Architecture of the EPIRA.ai AI grant management platform — four specialized AI services on AWS Bedrock, unified through a shared GraphQL API and serverless data layer.
Grant Extraction — AI-Powered Form Builder
Used by: Government Administrator via the Gov Portal
An AI agent that takes a source document — provided as a file upload, a URL, or a link to an existing form — and autonomously extracts all form structure: fields, metadata, validation rules, and conditional logic. It produces a complete draft form in autopilot mode with no manual input required.
A conversational agentic helper mode then allows administrators to review, question, and refine the draft iteratively.
Eligibility Scoring — Pre-Application Filtering
Used by: Citizen via the Citizen Portal
A bulk AI service that evaluates a citizen’s profile against all available grants simultaneously, returning a color-coded eligibility signal for each: green (likely eligible), yellow (uncertain), or red (not eligible). Citizens see at a glance where their effort is most likely to pay off — before committing time to any application.
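As a sketch of how the signal could be derived (the thresholds are illustrative, not EPIRA.ai's actual cutoffs), note the conservative middle band: uncertainty maps to yellow rather than being forced into a yes/no answer:

```python
def eligibility_signal(confidence: float) -> str:
    """Map a model's eligibility confidence (0.0 to 1.0) to the
    color-coded signal shown to citizens. Thresholds are illustrative."""
    if confidence >= 0.8:
        return "green"   # likely eligible
    if confidence <= 0.2:
        return "red"     # not eligible
    return "yellow"      # uncertain: surfaced rather than hidden

def score_all_grants(scores: dict[str, float]) -> dict[str, str]:
    """Bulk evaluation: one signal per available grant."""
    return {grant: eligibility_signal(c) for grant, c in scores.items()}
```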
Profile Enrichment — Reusable Citizen Data
Used by: Both portals (via GraphQL API)
A service that builds and maintains a persistent citizen profile, accumulating information across every grant submission. The more grants a citizen engages with, the richer their profile becomes. This profile is the data foundation that powers both Eligibility Scoring and Smart Form Filling, reducing repetition and improving accuracy over time.
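A simplified sketch of the accumulation logic; the field names are hypothetical, and a real implementation would also track provenance and resolve conflicting answers:

```python
def enrich_profile(profile: dict, submission: dict) -> dict:
    """Fold data from a new grant submission into the persistent
    citizen profile: newer answers win, earlier fields are kept."""
    merged = dict(profile)
    merged.update({k: v for k, v in submission.items() if v is not None})
    return merged

profile: dict = {}
profile = enrich_profile(profile, {"name": "A. Citizen", "annual_income": 85_000})
profile = enrich_profile(profile, {"roof_type": "flat", "annual_income": 88_000})
# The profile now holds all three fields, with the most recent income.
```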
Smart Form Filling — Pre-Populated Applications
Used by: Citizen via the Citizen Portal
When a citizen selects a specific grant to apply for, this service pre-populates the form using their enriched profile. Because many grants share overlapping field structures, data collected in one application can be reused across others — dramatically reducing the time required to complete each new submission and lowering the chance of input errors.
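The reuse of overlapping fields can be sketched as below; matching fields by exact name is a simplification of whatever mapping the real service applies:

```python
def prefill_form(form_fields: list[str], profile: dict) -> tuple[dict, list[str]]:
    """Pre-populate a grant form from the enriched profile.
    Returns the auto-filled values and the fields still needing input."""
    filled = {f: profile[f] for f in form_fields if f in profile}
    missing = [f for f in form_fields if f not in profile]
    return filled, missing

filled, missing = prefill_form(
    ["name", "annual_income", "roof_type", "property_address"],
    {"name": "A. Citizen", "annual_income": 88_000, "roof_type": "flat"},
)
# Only "property_address" is left for the citizen to complete.
```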
How the Platform Was Built
Perfsys delivered the full platform in three months with a two-person engineering team whose combined effort was equivalent to one full-time engineer. The architecture is deliberately serverless, designed to scale without infrastructure overhead.
AWS Bedrock as the AI Foundation
All four AI services run on AWS Bedrock using Anthropic models. Rather than integrating a third-party AI vendor or building custom model infrastructure, Bedrock gave the team managed, scalable access to frontier model capabilities inside the existing AWS environment. Custom prompts — one per field type and AI function — drive each agent’s behavior and form a core proprietary asset of the platform.
Chained AI Calls for Reliable Outputs
One of the most impactful architectural decisions was the use of multi-step AI chains rather than single-call prompts. For the Grant Extraction agent, this meant building discrete “atomic skills”: separate extraction functions for fields, field metadata, and conditional logic. Each skill operates independently, and its outputs are composed into a unified form draft.
For judgment-based services like Eligibility Scoring, the chain includes a self-evaluation step — the model is asked not just for a result but for its reasoning, then asked to evaluate whether that reasoning is valid. This approach consistently produced more accurate and defensible outputs than single-call prompts.
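The pattern generalizes beyond this platform. The sketch below is not EPIRA.ai's actual prompt text; `call_model` stands in for any text-in, text-out LLM call, such as a thin wrapper around Bedrock's Converse API:

```python
from typing import Callable

def judge_with_self_check(question: str,
                          call_model: Callable[[str], str]) -> dict:
    """Two-step chain: first ask for an answer with step-by-step
    reasoning, then ask the model to evaluate that reasoning."""
    reasoning = call_model(
        f"Question: {question}\nAnswer with your step-by-step reasoning."
    )
    verdict = call_model(
        "Here is a proposed answer and its reasoning:\n"
        f"{reasoning}\n"
        "Is the reasoning valid? Reply VALID or INVALID, with one sentence why."
    )
    return {
        "reasoning": reasoning,
        "self_check": verdict,
        "accepted": verdict.strip().upper().startswith("VALID"),
    }
```

Because `call_model` is injected, the chain can be exercised with a stub in tests and pointed at Bedrock in production.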
Serverless Infrastructure with No Standing Compute
AWS Lambda handles all serverless processing across the AI layer — no persistent compute instances to manage or scale manually. DynamoDB stores form data, user profiles, and AI outputs with sub-millisecond read performance at any scale. S3 handles source document storage and file attachments. Both front-end portals are hosted on AWS Amplify, enabling rapid front-end iteration without DevOps overhead.
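As an illustration of the no-standing-compute model (not production code), a Lambda handler persisting a profile to DynamoDB might look like the sketch below. The table and field names are invented, and the `table` parameter is injected so the function can be exercised without AWS:

```python
import json

def handler(event, context, table=None):
    """AWS Lambda entry point (sketch). In production, `table` would
    be a boto3 DynamoDB Table resource created at module scope."""
    if table is None:
        import boto3  # available inside the Lambda runtime
        table = boto3.resource("dynamodb").Table("CitizenProfiles")
    body = json.loads(event["body"])
    table.put_item(Item={"citizen_id": body["citizen_id"],
                         "profile": body["profile"]})
    return {"statusCode": 200,
            "body": json.dumps({"saved": body["citizen_id"]})}
```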
A Shared API Layer That Unifies Both Portals
Both the Government Portal and the Citizen Portal connect through a single GraphQL API. The Gov Portal also has a direct connection to the Grant Extraction service for form-building workflows. A unified authentication service covers both portals. This single-API design ensures that citizen profiles built through one service are immediately available to the others, and that adding new services in the future requires no changes to the existing portal architecture.
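A hypothetical example of what a shared-API call might look like; the field and type names below are invented, since the actual EPIRA.ai schema is not public:

```python
# Both portals POST to the same GraphQL endpoint. Names are illustrative.
CITIZEN_PROFILE_QUERY = """
query CitizenDashboard($citizenId: ID!) {
  citizenProfile(id: $citizenId) {
    name
    enrichedFields        # written by Profile Enrichment
    eligibilitySignals {  # consumed by the Citizen Portal
      grantId
      signal              # GREEN | YELLOW | RED
    }
  }
}
"""

def build_request(citizen_id: str) -> dict:
    """Assemble the JSON payload a portal would send to the API."""
    return {"query": CITIZEN_PROFILE_QUERY,
            "variables": {"citizenId": citizen_id}}
```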
Results and Business Impact
The platform was delivered ahead of the production launch window. The following outcomes are projected once the system is operating at scale.
Grant Form Creation: 80–90% Time Reduction (Projected)
Creating a complex grant form manually — reading the source policy, extracting fields, encoding conditional logic, and iterating through review cycles — typically takes a government administrator several hours to multiple days depending on complexity. The Grant Extraction agent reduces this to minutes for autopilot draft generation, with agentic helper mode available for refinement. An 80–90% reduction is a conservative projection for moderately complex grants; simpler forms may see even greater gains.
Application Completion: Up to 3× Faster (Projected)
Smart Form Filling pre-populates forms using the citizen’s enriched profile. As the profile grows richer with each submission, an increasing share of fields across new applications can be auto-filled. Combined with Eligibility Scoring, which focuses citizens on grants they are likely to qualify for, the expected result is roughly 3× faster completion compared to starting from a blank form with no guidance.
Significant Reduction in Ineligible Submissions (Projected)
Eligibility Scoring performs bulk pre-qualification before a citizen commits to applying. Government reviewers currently spend a material portion of their time processing applications from individuals who were never eligible. By surfacing this signal before submission, the platform is designed to materially reduce ineligible application volume — a direct operational cost saving for the agencies that rely on EPIRA.ai.
Key Lessons for AI Agent Delivery
Three engineering insights from this engagement are directly transferable to any project involving AI agent delivery or automated document processing.
Reasoning prompts outperform answer-only prompts. When building AI judgment systems, asking the model to explain its reasoning — and to self-evaluate that reasoning — substantially improves output accuracy. This held true regardless of model choice.
Chains of AI calls beat single large calls. Complex tasks are best decomposed into sequential AI steps, where each call verifies or builds on the previous one. Both Eligibility Scoring and Grant Extraction rely on multi-step chains that produce meaningfully better results than any equivalent single-call approach.
LLMs have real processing limits — architect around them. Rather than passing large documents in a single call, the team split extraction work into smaller, independently processed chunks. This improved both reliability and output quality across diverse source document formats.
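A minimal sketch of the chunking idea; the sizes and overlap are arbitrary, and a production version would split on token counts and section boundaries rather than raw characters:

```python
def chunk_text(text: str, max_chars: int = 4_000, overlap: int = 200) -> list[str]:
    """Split a long source document into overlapping chunks that each
    fit comfortably in a single model call. The overlap keeps fields
    that straddle a boundary visible in at least one chunk."""
    if len(text) <= max_chars:
        return [text]
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += max_chars - overlap
    return chunks
```

Each chunk is then processed independently by the extraction skills, and the per-chunk outputs are merged into one draft.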
Client Feedback
Asked what they found most impressive about working with Perfsys, the client team wrote in a verified Clutch review:
“Their genuine depth of experience in government and financial technology.”
— Project Manager, AI Solutions Company, Dublin, Ireland (Clutch, October 2025)
The client rated Perfsys 4.5 out of 5 overall, with perfect 5.0 scores for both cost-value and willingness to refer.
Building something complex in a regulated or data-heavy domain?
Perfsys helps startups and SMBs turn unstructured data and complex business logic into production-grade AI systems on AWS — without the enterprise timeline or overhead.
Why AWS Bedrock instead of building directly on OpenAI or another provider?
EPIRA.ai is an AWS-native platform. Bedrock gave the team managed access to Anthropic’s models within the same cloud environment as the rest of the infrastructure — simplifying IAM, data residency, security posture, and billing. It also avoids a hard dependency on any single model provider; Bedrock’s multi-model access means the team can switch or combine models as the technology evolves.
How does the platform handle grant forms with highly unusual structures?
The Grant Extraction agent was built with atomic skills, one per extraction function, rather than a single catch-all prompt. This modular design means each skill can be refined independently. When a source document is unusual — a non-standard file format, an inconsistently structured form, or unusual conditional logic patterns — the agentic helper mode allows the administrator to review and correct the draft before publishing.
Is the eligibility scoring reliable enough for government-grade decisions?
The Eligibility Scoring service is designed as a pre-filter, not a binding eligibility determination. It surfaces probability signals (green / yellow / red) to help citizens prioritize their effort and reduce ineligible submissions. The color-coded system is conservative by design: uncertain cases surface as yellow rather than forcing a false binary. Final eligibility is always determined by the government agency reviewing the submitted application.
How long does a project like this typically take with Perfsys?
The EPIRA.ai platform — four AI services, two portals, a shared API, and full data infrastructure — was delivered in three months by two engineers (one FTE equivalent). Scope and complexity will vary per engagement, but Perfsys operates with a bias toward fast iteration and working software over extended planning phases. Most AI MVP engagements fall in the 6–14 week range.