dotPublic Standards Framework

Version 0.6 (draft). These are the nine technical and behavioural standards, and the three levels of accreditation, being tested through the dotPublic research initiative.

Accreditation applies only to compliant materials. Organisations can maintain non-compliant infrastructure elsewhere while publishing verified content in accredited namespaces.

Research context: in partnership with Trinity College Dublin, the initiative is testing whether institutional commitments to voluntarily implement and maintain civic digital standards achieve lasting behavioural change.

The three accreditation levels

Level 1 — Basic — timeline of days (self-service). For individuals, bloggers, small sites, community projects. Verification is entirely automated; every requirement must be externally verifiable by script.

Level 2 — Enhanced — timeline of weeks, with support. For small publishers, NGOs, local archives, and selective institutional materials. Verification combines automated checks with human review, plus selective provenance registration (100–1,000 items).

Level 3 — Advanced — timeline of months, with dedicated resources. For large institutions, universities, national libraries, research organisations. Verification is comprehensive and human-reviewed; registered items number 1,000 or more.


1. Accessibility

Applies at all three levels with progressive rigour. Level 1 focuses on automated technical compliance; Levels 2–3 add manual audits and user involvement.

Level 1

  • Valid semantic HTML passing WCAG 2.1 Level A automated testing
  • Keyboard navigability for all core functions
  • Text alternatives for images (alt attributes present)
  • Proper heading hierarchy (h1–h6)
  • Sufficient colour contrast (4.5:1 for normal text)
  • No auto-playing media without user controls
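The 4.5:1 contrast requirement above is one of the checks that can be fully automated: WCAG defines the ratio in terms of relative luminance. A minimal sketch (function names are our own, not part of the standard):

```python
def _channel(c: int) -> float:
    """Linearise one sRGB channel (0-255) per the WCAG 2.1 definition."""
    s = c / 255.0
    return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """Relative luminance: 0.2126 R + 0.7152 G + 0.0722 B (linearised)."""
    r, g, b = (_channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """WCAG contrast ratio: (L1 + 0.05) / (L2 + 0.05), lighter colour on top."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white gives the maximum ratio of 21:1, well above the 4.5:1 pass mark.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # → 21.0
```

A Level 1 verifier would run this over every text/background pair found in computed styles; mid-grey text (#777777) on white, for example, comes out just under 4.5:1 and fails.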

Level 2 (adds)

  • WCAG 2.1 Level AA compliance (automated + manual audit)
  • Reliable functionality with screen readers and assistive technologies
  • Mobile responsive design
  • Accessible form labels and error messages

Level 3 (adds)

  • WCAG 2.1 Level AAA compliance where feasible
  • Regular accessibility audits involving disabled users
  • Published accessibility statement with known issues and remediation timeline
  • Feedback mechanism with response commitment

2. Privacy

Ranges from script detection at Level 1 to comprehensive data governance at Level 3.

Level 1

  • No third-party tracking scripts (analytics pixels, ad tech, fingerprinting)
  • No behavioural or targeted advertising scripts
  • No third-party cookies
  • First-party cookies limited to functional necessity (session, preferences)
  • No hidden iframes or tracking beacons
  • Third-party services called from the page are counted and categorised (ads, analytics, fonts, tag managers, cookie-consent SaaS, embeds); a count of zero is the pass condition
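The third-party count above is scriptable with nothing beyond the standard library. A minimal sketch (the comparison against a single first-party host is illustrative; a real verifier would also handle subdomains and a curated categorisation list):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ThirdPartyCounter(HTMLParser):
    """Collect external hosts referenced by script/img/iframe/link tags."""

    def __init__(self, first_party: str):
        super().__init__()
        self.first_party = first_party
        self.hosts: set[str] = set()

    def handle_starttag(self, tag, attrs):
        if tag not in ("script", "img", "iframe", "link"):
            return
        for name, value in attrs:
            if name in ("src", "href") and value:
                host = urlparse(value).netloc
                if host and host != self.first_party:
                    self.hosts.add(host)

page = (
    '<script src="https://www.googletagmanager.com/gtag.js"></script>'
    '<img src="/logo.png">'  # relative URL: first-party, not counted
)
counter = ThirdPartyCounter("example.org")
counter.feed(page)
print(len(counter.hosts))  # → 1; zero is the pass condition
```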

Prohibited resources — examples: Google Fonts, Gravatar, Google reCAPTCHA, YouTube embeds, social media widgets, surveillance analytics.

Permitted alternatives — examples: privacy-respecting analytics (Plausible, Fathom, self-hosted Matomo), self-hosted fonts, CDNs with Subresource Integrity hashes, privacy-respecting CAPTCHAs (hCaptcha, Cloudflare Turnstile).
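A Subresource Integrity hash, as permitted above for CDN resources, is simply a base64-encoded SHA-384 (or SHA-256/SHA-512) digest of the exact file being served. A sketch of generating one (the CDN URL is a placeholder):

```python
import base64
import hashlib

def sri_hash(content: bytes, algo: str = "sha384") -> str:
    """Return an SRI integrity value such as 'sha384-...' for the given bytes."""
    digest = hashlib.new(algo, content).digest()
    return f"{algo}-{base64.b64encode(digest).decode()}"

css = b"body { margin: 0 }"  # the exact file you expect the CDN to serve
print(
    f'<link rel="stylesheet" href="https://cdn.example/app.css" '
    f'integrity="{sri_hash(css)}" crossorigin="anonymous">'
)
```

If the CDN later serves different bytes, the hash no longer matches and the browser refuses to load the resource, which is what makes SRI an acceptable alternative to self-hosting here.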

Level 2 (adds)

  • Published, human-readable privacy policy
  • Plain language summary of data practices (not legal boilerplate)
  • Data collection limited to functional necessity with published justification
  • Encryption at rest for stored personal data
  • Published data retention periods and deletion schedules
  • Right to deletion honoured within stated timeframe
  • No dark patterns in consent flows (equal prominence for accept/reject)
  • If advertising exists: clearly labelled, contextual only, no surveillance-based targeting
  • Cookie banner discloses any third-party data sharing and, where sharing occurs, states an explicit partner count so users understand the scale of IAB TCF-style vendor lists

Level 3 (adds)

  • Privacy impact assessments for new features
  • Data minimisation review process
  • No personal data reused beyond original collection without fresh consent
  • Annual privacy impact reports published
  • Aggregation policies documented

3. Security

Expands from baseline technical controls at Level 1 to incident response and penetration testing at Level 3.

Level 1

  • HTTPS by default (enforced redirect from HTTP)
  • Valid TLS certificate (not expired, not self-signed for public sites)
  • No mixed content (all resources loaded over HTTPS)
  • Security headers present (X-Content-Type-Options, X-Frame-Options, or CSP frame-ancestors)
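All of the Level 1 checks above are externally verifiable by script. A sketch of the header check, applied to a captured set of response headers (the required list mirrors the bullet point, not an exhaustive hardening policy):

```python
REQUIRED_ANY = [
    # Each entry passes if at least one of the listed headers is present.
    ("X-Content-Type-Options",),
    ("X-Frame-Options", "Content-Security-Policy"),  # CSP may carry frame-ancestors
]

def missing_security_headers(headers: dict[str, str]) -> list[str]:
    """Return the header requirements not satisfied by this response."""
    present = {k.lower() for k in headers}
    return [
        " or ".join(group)
        for group in REQUIRED_ANY
        if not any(h.lower() in present for h in group)
    ]

resp = {"Content-Type": "text/html", "X-Content-Type-Options": "nosniff"}
print(missing_security_headers(resp))  # → ['X-Frame-Options or Content-Security-Policy']
```

The HTTPS redirect, certificate validity, and mixed-content checks are similarly mechanical, which is why the whole standard fits the self-service Level 1 model.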

Level 2 (adds)

  • Content Security Policy (CSP) header
  • Regular automated security scanning
  • Published security contact (security.txt file)
  • Backup and redundancy measures documented
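The security.txt file referenced above follows RFC 9116 and is served at /.well-known/security.txt. A minimal example (all URLs and addresses are placeholders):

```
Contact: mailto:security@example.org
Expires: 2026-12-31T23:59:59Z
Policy: https://example.org/security-policy
Preferred-Languages: en
```

Contact and Expires are the two fields RFC 9116 requires; the rest are optional.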

Level 3 (adds)

  • Incident response plan with public notification commitment
  • Annual penetration testing minimum
  • Public disclosure of incidents within reasonable timeframe
  • Bug bounty program or responsible disclosure policy

4. Transparency

Expands from basic contact information at Level 1 to algorithmic decision disclosure at Level 3.

Level 1

  • Contact information present and findable (email or contact form)
  • Statement of purpose present (about page or equivalent)
  • Funding/sponsorship disclosure if applicable

Level 2 (adds)

  • Contact information includes named responsible person
  • Substantive and clear statement of purpose
  • Funding sources disclosed with meaningful detail
  • Governance structure disclosed
  • If algorithms rank, filter or recommend: published explanation of criteria
  • Annual data practices statement

Level 3 (adds)

  • All algorithmic decision-making criteria published intelligibly for the general public
  • Open source code where feasible, or published technical documentation
  • Funding disclosed with conditions and success metrics
  • Public API access to key non-personal datasets
  • Public register of significant editorial or content decisions

5. Accountability

Begins at Level 2 with named contacts and complaint processes; Level 3 adds independent oversight.

Level 2

  • Named responsible person (not a generic inbox like webmaster@)
  • Published response timeframes for complaints and corrections
  • Documented complaints process
  • Clear appeals process for content or moderation decisions
  • Response commitments actually honoured (spot-checked)

Level 3 (adds)

  • Bi-directional appeals process (users appeal, organisation responds, escalation path exists)
  • Published moderation policies
  • Regular public reporting on complaints received and resolved
  • Independent oversight mechanism (board, ombudsman, equivalent)
  • Annual accountability review published

6. Interoperability

Begins at Level 2 with open formats and data export; Level 3 adds federation and succession planning.

Level 2

  • Content published in open formats (no proprietary lock-in for access)
  • User data exportable on request (machine-readable format)
  • RSS/Atom feeds for regularly updated content
  • No vendor lock-in for essential functions

Level 3 (adds)

  • Full machine-readable data export for users and institutions
  • Published APIs with stability commitments
  • Federation/interconnection with other .PUBLIC services where appropriate
  • Important records in durable, open archive formats
  • Escrow or mirroring arrangements for critical content
  • Published uptime commitments with public status page
  • Succession/continuity plan if organisation fails or exits

7. Provenance

Begins at Level 2 with selective DOI registration; Level 3 adds comprehensive audit trails and retrospective registration of historical content.

Level 2

  • Selective DOI registration (100–1,000 key items) with persistent identifiers
  • Version history maintained for major policy/guidance documents
  • Basic authorship metadata (who published, when)
  • CMS integration for automated registration where available

Level 3 (adds)

  • Comprehensive DOI registration for all significant documents (1,000+ items)
  • Tamper-evident version history with full audit trail
  • Machine-readable authorship and attribution metadata via API
  • Content negotiation for metadata retrieval
  • Integration with discovery, search and social layer infrastructure
  • Annual provenance audits
  • Retrospective registration of historical content
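Content negotiation for metadata retrieval, listed above, works by sending an Accept header to the doi.org resolver, which then returns machine-readable metadata instead of redirecting to the landing page. A sketch that builds such a request (the DOI is a placeholder):

```python
import urllib.request

CSL_JSON = "application/vnd.citationstyles.csl+json"

def doi_metadata_request(doi: str, accept: str = CSL_JSON) -> urllib.request.Request:
    """Build a request asking the DOI resolver for metadata rather than HTML."""
    return urllib.request.Request(f"https://doi.org/{doi}", headers={"Accept": accept})

req = doi_metadata_request("10.1000/example")  # placeholder DOI
print(req.full_url, req.get_header("Accept"))
# urllib.request.urlopen(req) would return CSL JSON for a registered DOI
```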

8. AI & Automation

Applies at all three levels because AI use is now widespread; disclosure requirements deepen with each level.

Level 1

  • If the site is primarily AI-generated: disclosure in metadata or a visible page element
  • No AI-generated content presented as human-authored without disclosure
  • An AI use policy is in place

The standard for AI metadata disclosure is not yet resolved; options include C2PA Content Credentials, Schema.org markup, or custom meta tags.
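Because the standard is unresolved, any concrete example is provisional. The custom meta tag option might look like the following; the tag names are purely illustrative and not a settled dotPublic vocabulary:

```
<!-- Illustrative only: the disclosure vocabulary is not yet fixed -->
<meta name="ai-generated" content="true">
<meta name="ai-disclosure" content="https://example.org/ai-policy">
```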

Level 2 (adds)

  • Published AI use policy (what AI is used for, what remains human-controlled)
  • AI-assisted content clearly marked where significant
  • Human oversight documented for AI systems making decisions affecting users

Level 3 (adds)

  • AI systems making user-affecting decisions must meet full transparency requirements
  • Right to human review of significant AI-made decisions
  • Published information about AI model providers and training data sources where feasible
  • Regular audits of AI systems for bias and accuracy

9. Responsibility to the Future

Level 3 only. Addresses organisational sustainability, environmental impact, and cultural values.

Level 3

  • Comprehensive succession and continuity plan if the organisation fails or exits
  • Carbon and energy disclosure for hosting infrastructure
  • Published environmental impact assessment for digital services
  • Published policies on worker wellbeing and support
  • Transparent decision-making processes considering long-term impact
  • Documented practices embedding care, compassion and respect into organisational culture

Open questions

Some decisions remain unresolved in this draft, including:

  • AI disclosure metadata standard (C2PA vs. Schema.org vs. custom)
  • CDN usage policy (SRI hashes sufficient or require self-hosting?)
  • Downgrade process (grace period vs. automatic vs. manual review?)
  • Verification frequency for Level 1 (daily / weekly / on-demand?)
  • Video embed alternatives (Vimeo / PeerTube / self-hosted guidance needed)
  • How to measure compliance drift over time

Feedback on the framework is welcome. Please contact us.