Enabling Provenance for AI-Assisted Academic Work

Frequently Asked Questions

VeritasHub introduces a structured approach to managing AI use in academic work. This page provides clear answers to the most common questions from students, faculty, and institutions.

VeritasHub is designed to complement, not replace, existing systems. It works alongside learning management systems, assessment tools, and institutional processes, so staff and students can begin using it without complex onboarding, retraining, or disruption to academic practice.

The platform supports flexible deployment models: institutions can introduce VeritasHub as a pilot, a faculty-level rollout, or a full institutional implementation. Integration points are lightweight and adaptable, reducing technical overhead while remaining consistent with institutional governance and IT policies.

As adoption grows, VeritasHub scales with institutional needs, supporting increasing volumes of users, submissions, and policy frameworks without loss of performance or control. The result is a stable integration that strengthens transparency and accountability while preserving the integrity of existing academic systems.

SECTION 1 — PLATFORM OVERVIEW

What is VeritasHub?

VeritasHub is a structured academic self-declaration platform that records how artificial intelligence tools are used in the creation of academic work. It produces a clear, auditable provenance record that accompanies each submission.

Why was VeritasHub created?

Existing approaches to AI in education rely on detection tools and enforcement mechanisms. These approaches are unreliable, difficult to defend, and often encourage concealment. VeritasHub replaces detection with structured transparency, providing institutions with consistent, auditable evidence of AI use.

Does the platform use AI to analyse student work?

VeritasHub uses AI in two limited and transparent ways: (1) the Policy Guidance Assistant uses an AI interface to help students and staff navigate institutional policy documents — it does not analyse academic work; and (2) the AI Usage Insights feature analyses a student's own declaration history to identify patterns in their AI engagement and offer developmental feedback. Neither feature analyses or assesses academic work content. No student writing, code, images, or other submitted content is processed by AI systems within the platform.

What does VeritasHub actually do?

VeritasHub enables students to document their use of AI tools through a structured process at the point of submission. This creates a consistent record of: what tools were used, how they were used, why they were used, and how outputs were evaluated and transformed. This record supports faculty review and institutional governance.

How is VeritasHub's policy configuration managed?

Each institution's VeritasHub environment is configured with its own AI usage policy documents. These are uploaded by institutional administrators and are surfaced to students and staff through the Policy Guidance Assistant and embedded into declaration workflows. Institutions can update, replace, or supplement policy documents at any time without disrupting existing student records. The policy corpus is entirely separate between institutions — no institution can access another's policy configuration.

What are students' rights over their data?

Students retain full control over their personal data within VeritasHub. They can view all entries and declarations at any time, edit or delete draft entries, export their full declaration history as PDF, and request deletion of their account and associated data. The platform is designed to support institutional data subject access request obligations. Students are never required to submit data without an explicit act of submission — there is no background collection.

SECTION 2 — AI USE AND DECLARATION

Does VeritasHub detect AI use?

No. VeritasHub does not attempt to detect or infer AI use. It relies on structured, student-declared transparency captured at the point of submission.

Why not use AI detection tools?

Detection tools are inconsistent and prone to false positives. They cannot reliably determine authorship or intent. VeritasHub replaces detection with declared evidence, supporting clearer and more defensible academic judgement.

Is AI use allowed?

This depends on institutional policy. VeritasHub does not determine what is allowed. It ensures that AI use, where it occurs, is clearly declared and recorded.

What happens if AI use is not declared?

Non-declaration is handled under institutional academic integrity policies. VeritasHub records what is declared; it does not detect what is not. Where undeclared use is identified through other means, the absence of a declaration may be considered as part of an academic review process.

SECTION 3 — STUDENT EXPERIENCE

What does a student actually do?

At submission, students complete a structured declaration covering: the context of the work, the purpose of AI use, the tools used, prompts or intent, outputs generated, processing and evaluation, and optional supporting material. The declaration takes a short amount of time to complete and becomes part of the submission record.
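To make the declaration structure concrete, the fields above can be sketched as a simple record type in Python. The class and field names here are illustrative assumptions for explanation only, not VeritasHub's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIDeclaration:
    # Illustrative field names; the real VeritasHub schema is not published here.
    context: str                     # the piece of work being submitted
    purpose: str                     # why AI was used
    tools_used: list                 # which AI tools were used
    prompts_or_intent: str           # prompts issued, or the student's stated intent
    outputs_generated: str           # what the AI produced
    processing_and_evaluation: str   # how outputs were assessed and transformed
    supporting_material: Optional[str] = None  # optional attachments reference

# Example declaration attached to a submission:
declaration = AIDeclaration(
    context="Essay draft for a first-year history module",
    purpose="Brainstorming counter-arguments",
    tools_used=["general-purpose chat assistant"],
    prompts_or_intent="Asked for objections to my thesis",
    outputs_generated="A list of three objections",
    processing_and_evaluation="Kept one objection and rewrote it in my own words",
)
```

Because each field is mandatory except the supporting material, every completed declaration carries the same minimum structure, which is what makes records comparable across students and cohorts.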

Is VeritasHub suitable for postgraduate and research students?

Yes, and it is specifically designed to support research contexts. The 8-step declaration captures the full lifecycle of AI interaction in research work — including prompt engineering, output evaluation, and the critical processing step where students document how they transformed or rejected AI output. The AI Tool Permission Workflow supports ethics committee escalation for novel or high-risk tools. The Working Methods Portfolio creates a continuous research process record that can accompany thesis submissions, ethics applications, and research publications. The Northeastern University policy alignment mapping (available on request) illustrates how VeritasHub supports research AI governance obligations.

Does VeritasHub monitor students?

No. VeritasHub does not track behaviour, keystrokes, browsing activity, or usage outside the platform. All data is provided intentionally by the student.

Do students retain control over their data?

Yes. Students can view, edit, export, and manage their records. Data is only created through deliberate submission.

How does VeritasHub handle indigenous data sovereignty considerations?

The platform includes a Compliance Advisor feature that can flag research entries involving indigenous data for additional review, aligned with CARE Principles (Collective Benefit, Authority to Control, Responsibility, Ethics). For New Zealand institutions, this supports alignment with Māori data sovereignty frameworks. The specific indigenous data sovereignty configuration is set at the institutional level — VeritasHub provides the governance workflow; the institution defines the standards and escalation process.

SECTION 4 — FACULTY AND REVIEW

What oversight do tutors and administrators have?

Tutors have dashboard access to all AI usage entries and declarations submitted by their assigned students. They can view the full 8-step declaration, flag entries of concern, provide written feedback, approve or reject AI tool permission requests, and monitor compliance across their cohort. Institutional administrators have access to anonymised aggregate analytics (AI tool usage patterns, declaration rates, and compliance metrics) to support governance and policy review. No individual student record is visible to administrators outside their own institution.
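The visibility rules above can be sketched as a small access-control function. This is an illustrative model with hypothetical record fields, not VeritasHub's actual implementation.

```python
def visible_to(viewer, entries):
    """Sketch of the visibility rules: tutors see full declarations for their
    assigned students; administrators see anonymised aggregates; records never
    cross institutional boundaries. Field names are assumptions."""
    # Records never leave the viewer's own institution.
    same_inst = [e for e in entries if e["institution"] == viewer["institution"]]
    if viewer["role"] == "tutor":
        # Tutors: full entries for their assigned students only.
        return [e for e in same_inst if e["student"] in viewer["assigned_students"]]
    if viewer["role"] == "admin":
        # Administrators: aggregate rows with student identity removed.
        return [{"tool": e["tool"], "declared": e["declared"]} for e in same_inst]
    return []  # all other roles see nothing by default

entries = [
    {"institution": "A", "student": "s1", "tool": "chat", "declared": True},
    {"institution": "A", "student": "s2", "tool": "code", "declared": True},
    {"institution": "B", "student": "s3", "tool": "chat", "declared": False},
]
tutor = {"role": "tutor", "institution": "A", "assigned_students": {"s1"}}
admin = {"role": "admin", "institution": "A"}
```

Filtering by institution first, before any role logic, is what enforces the "no cross-institution visibility" guarantee in this sketch.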

Does VeritasHub replace academic judgement?

No. VeritasHub supports academic judgement by providing structured evidence. All decisions remain with faculty and institutional processes.

How does the AI Tool Permission Workflow support institutional governance?

Where a student wishes to use an AI tool not already on the institutional approved list, they must submit a formal pre-use request through VeritasHub. This request includes the tool name and details, the intended purpose, expected benefits, and ethical considerations. The request is automatically routed to the assigned tutor, who can approve, reject, or escalate to an ethics committee reviewer. All decisions are logged with timestamps. This creates a governed, auditable record of tool approval decisions that institutions can reference in quality assurance and accreditation processes.
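The decision-and-audit step described above can be sketched as follows. The field names, status values, and function are hypothetical assumptions for illustration; the actual workflow and schema are defined by the platform.

```python
from datetime import datetime, timezone

# Decision -> resulting request status (illustrative values, not the real schema).
STATUS = {"approve": "approved", "reject": "rejected", "escalate": "escalated_to_ethics"}

def decide(request, decision, reviewer, audit_log):
    """Apply a reviewer decision to a pending tool-permission request and
    append a timestamped entry to the audit log. Returns an updated copy of
    the request; the original is left unchanged."""
    if decision not in STATUS:
        raise ValueError(f"unknown decision: {decision!r}")
    audit_log.append({
        "request_id": request["id"],
        "decision": decision,
        "reviewer": reviewer,
        "at": datetime.now(timezone.utc).isoformat(),  # timestamped, as described
    })
    return {**request, "status": STATUS[decision]}

request = {"id": 42, "tool": "example summariser", "purpose": "literature triage",
           "status": "pending"}
log = []
updated = decide(request, "escalate", "tutor_jones", log)
```

Appending every decision to an immutable log, rather than overwriting the request status alone, is what makes the approval history auditable after the fact.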

How does this improve assessment?

By replacing assumption with declared evidence, VeritasHub enables a clearer understanding of student work, more consistent decisions, and defensible academic outcomes.

Does VeritasHub replace our existing academic integrity policy or plagiarism tools?

No. VeritasHub is designed to complement, not replace, existing academic integrity frameworks. The platform embeds your institution's own AI policy into its workflows — it does not impose external standards. Your existing plagiarism detection tools, assessment policies, misconduct procedures, and disciplinary processes remain entirely unchanged. VeritasHub adds a structured transparency layer that generates auditable evidence of AI use, which can inform — but does not determine — your institutional judgements.

SECTION 5 — DATA, SECURITY AND PRIVACY

What data does VeritasHub collect?

Only data intentionally submitted by the user, including: identity for authentication, course and submission context, and AI usage declarations. No background data collection occurs. This provides context for informed and consistent academic judgement.

Where is data stored?

Data is stored within secure infrastructure with full institutional separation. Each institution operates within its own environment.

Is VeritasHub compliant with data protection regulations?

Yes. The platform is designed to align with GDPR and recognised international security standards, including encryption in transit and at rest, and role-based access control.

Does VeritasHub comply with GDPR?

Yes. The platform is designed for GDPR compliance at the infrastructure level, not merely at the policy level. Data is processed within Germany (EEA), encryption is applied at rest (AES-256) and in transit (TLS 1.2+), processing is limited to the minimum necessary, and data subjects retain full rights to access, rectification, portability, and erasure. The platform aligns with GAIA-X sovereign infrastructure principles and ISO/IEC 27001, 27017, and 27018 standards. A Data Processing Agreement (DPA) compliant with GDPR Article 28 is available for execution prior to any pilot or deployment. Territory-specific DPAs are available for UK (UK GDPR / DPA 2018), New Zealand (Privacy Act 2020), Australia (Privacy Act 1988), and the United States (FERPA).

SECTION 6 — IMPLEMENTATION AND INTEGRATION

Does VeritasHub require system integration?

Not for initial deployment. VeritasHub can be used as a standalone platform, and integration with learning management systems is available where required.

Which systems can VeritasHub integrate with?

VeritasHub supports integration with common platforms including Canvas, Blackboard, Moodle, and D2L.

What does the pilot deliver to the institution at the end?

At the conclusion of the pilot, each participating institution receives: a full anonymised AI usage report covering tool distribution, declaration rates, and compliance patterns; a student analytics summary; a policy compliance evaluation against the institution's own embedded policies; and recommendations for AI governance frameworks going forward. This provides a structured evidence base for institutional decision-making about AI policy and long-term governance, independent of any decision to continue with VeritasHub.

What is required from IT to participate in the pilot?

The beta trial requires no LMS integration and no infrastructure changes from the institution. Participants access VeritasHub via standard web browser — no software installation is required. Institutional email authentication is supported, and Google OAuth can be configured. LTI integration with Canvas, Blackboard, Moodle, or D2L is available for the paid pilot phase and beyond, but is explicitly not required for the beta trial period.

VeritasHub does not monitor behaviour or attempt to detect AI use. It provides a structured transparency framework that enables institutions to manage AI use with clarity, consistency, and accountability.