ResearchDataGov
A unified federal portal for researchers to apply for access to restricted datasets held by 16 U.S. statistical agencies — one application flow that accommodates their divergent requirements.
Lead product designer
1 Product Owner
1 Tech Lead
4 Developers
1 UX Engineer
Jan 2023 - Mar 2025
Live at researchdatagov.org
Updated March 2025
ResearchDataGov is the official portal for the Standard Application Process (SAP) — a federal initiative mandated by the Foundations for Evidence-Based Policymaking Act of 2018. It gives researchers a single application pathway to request access to confidential datasets held by 16 U.S. federal statistical agencies.
Before ResearchDataGov, those same researchers navigated 16 separate, incompatible processes, each with its own forms, timelines, sensitivity protocols, and eligibility requirements. As Lead Product Designer, I designed the dual-persona UX system (Applicant and Reviewer) and the progressive disclosure patterns that allow one application flow to accommodate 16 sets of divergent agency requirements.
The platform was built and is hosted by The Archive under contract with the Project Management Office at the National Center for Science and Engineering Statistics (NCSES) within NSF.
The 16 participating agencies and units span the breadth of U.S. federal statistical infrastructure.
Each agency had developed its own eligibility requirements, required documents, review timelines, and definition of what it meant for an application to be 'complete.' The design problem was absorbing all of this without making the applicant's experience feel like 16 different forms bolted together.
The discovery phase involved stakeholder interviews across federal agencies and with researchers who had attempted the existing process. The consistent finding: the problem wasn't any single agency's process being bad. It was the absence of a shared model for what a data access application should contain, what a review should evaluate, and how both parties should communicate during the process.
Agency data managers had developed review criteria in isolation — often for legitimate compliance and sensitivity reasons. Researchers had adapted by building tribal knowledge about which agencies required what in which format. Neither party had shared vocabulary, shared timeline expectations, or shared status definitions. An application 'under review' at one agency meant something very different from 'under review' at another.
Application Architecture: One Flow for 16 Agency Requirements
Designing one application flow for 16 agencies is a constraint architecture problem first, a UI problem second. The naive solution — a single form with all possible fields from all agencies — would have produced a 50+ screen application that researchers would abandon and reviewers would find unmanageable.
My solution: a two-layer application architecture. The core layer contains the fields that every agency requires, using standardized terminology established through the stakeholder process (researcher information, employer/affiliation, research description, demonstrated need for restricted access). The agency-specific layer surfaces fields contextually, based on which datasets the researcher has added to their basket — revealed at the point in the flow where they're relevant, with plain-language explanations of why each field exists for that agency.
The basket metaphor was central to the UX. Researchers browse a federated data catalog — searchable by keyword, topic, and agency — and add datasets to a basket before starting an application. The basket determines which agencies are involved, which in turn determines which requirements appear.
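The two-layer architecture can be sketched in code. This is an illustrative model only — the field names, agency keys, and example requirement below are assumptions for the sketch, not the platform's actual schema:

```typescript
type Field = { id: string; label: string; whyRequired?: string };

// Core layer: fields every agency requires, in standardized terminology.
const coreFields: Field[] = [
  { id: "researcher-info", label: "Researcher information" },
  { id: "affiliation", label: "Employer / affiliation" },
  { id: "research-description", label: "Research description" },
  { id: "restricted-need", label: "Demonstrated need for restricted access" },
];

// Agency-specific layer: extra fields keyed by agency, each carrying a
// plain-language explanation of why that agency asks for it.
// (The NCSES entry here is a hypothetical example.)
const agencyFields: Record<string, Field[]> = {
  NCSES: [
    {
      id: "ncses-data-security-plan",
      label: "Data security plan",
      whyRequired: "Explains how restricted microdata will be protected.",
    },
  ],
  // ...one entry per participating agency
};

type Dataset = { id: string; agency: string };

// The basket determines which agencies are involved, which in turn
// determines which agency-specific requirements appear alongside the core.
function fieldsForBasket(basket: Dataset[]): Field[] {
  const agencies = new Set(basket.map((d) => d.agency));
  const specific = [...agencies].flatMap((a) => agencyFields[a] ?? []);
  return [...coreFields, ...specific];
}
```

A basket containing one NCSES dataset would surface the four core fields plus the one NCSES-specific field; adding a dataset from a second agency would append that agency's fields without touching the core.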
Dual-Persona Architecture: Applicant + Reviewer
ResearchDataGov serves two fundamentally different users through the same data model. Applicants (researchers) need a clear, guided path through a complex multi-agency process. Reviewers (federal agency data managers) need a structured workspace for evaluating applications against their agency's specific criteria.
I designed two separate interfaces on a shared data layer. The Applicant view is a guided, step-by-step flow with inline explanations, a real-time completeness indicator, and persistent status tracking. The Reviewer view is a dashboard-oriented workspace organized by application status, agency filter, and review urgency — optimized for the evaluation workflow rather than the submission workflow.
The most critical design decision: a shared status model. Plain-language status states visible to both users — Submitted, Under Review, Pending Additional Information, Approved, Denied — with different levels of detail appropriate to each role. Denial decisions include specific, actionable reasons, not boilerplate. The entire application history is permanently visible and exportable.
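The shared status model can be sketched as a single event type read by both personas, with visibility rules applied per role. Type and property names here are assumptions for illustration, not the production schema:

```typescript
type Status =
  | "Submitted"
  | "Under Review"
  | "Pending Additional Information"
  | "Approved"
  | "Denied";

interface StatusEvent {
  status: Status;
  timestamp: string;        // ISO 8601
  explanation: string;      // plain-language, required for every change
  denialReasons?: string[]; // specific and actionable, never boilerplate
  reviewerName?: string;    // visible to applicants only after a decision
}

// Both personas read the same permanent history; only the level of
// detail differs by role.
function viewFor(role: "applicant" | "reviewer", e: StatusEvent): StatusEvent {
  const decided = e.status === "Approved" || e.status === "Denied";
  if (role === "reviewer" || decided) return e;
  // Before a decision, applicants see the status and explanation but
  // not the reviewer's identity.
  const { reviewerName, ...visible } = e;
  return visible;
}
```

One event store, two projections: the applicant and reviewer interfaces never disagree about what state an application is in, because neither holds its own copy of the status.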
Trust Design for a Federal Platform
Federal platforms carry a specific trust deficit earned by decades of opaque bureaucratic processes. ResearchDataGov needed to signal legitimacy and transparency rather than merely assert them. Key design decisions: every status change includes a plain-language explanation. Reviewer names and roles are visible to applicants after decisions are made. The human review process is not hidden behind an institutional facade — it is made legible, because transparency is the only credible foundation for trust in this context.
Section 508 accessibility compliance was non-negotiable. Every component was designed and tested against WCAG 2.1 AA standards, with assistive technology testing throughout the design process — not added as a final-stage checkbox.
ResearchDataGov launched at researchdatagov.org and is live as of March 2025, serving researchers applying for access to restricted datasets across 16 federal agencies. The platform represents one of the first unified access points for confidential federal statistical data in U.S. history — a direct implementation of the Foundations for Evidence-Based Policymaking Act's mandate for standardized data access.
https://www.researchdatagov.org/
ResearchDataGov taught me that the hardest institutional design problems are negotiation problems wearing a UX costume. Standardizing 16 agencies' requirements required as much stakeholder facilitation as design work. The design was the artifact that made the negotiation concrete: when agencies could see that their requirements fit into a shared model, the abstract conversation about 'standardization' became a tractable design problem.
At scale, design becomes governance. The application architecture I built isn't just a product — it is a policy instrument that determines how researchers access federal data. Understanding that a design decision is also a policy decision changes how you make it.