Open Research & Publication Policy

Committed to transparent and reproducible science

CredenceX supports open and reproducible research. This policy describes our approach to publishing findings, sharing artifacts, and communicating limitations and uncertainty.

Last Updated: March 2026

CredenceX AI Research Lab supports open, rigorous, and responsible dissemination of research outputs. Where feasible and appropriate, we aim to share publications, preprints, methods, code, documentation, and non-sensitive supporting materials to strengthen scientific progress, reproducibility, and public understanding.

What We Aim to Share

Depending on the project context, CredenceX may share:

  • Peer-reviewed publications and/or preprints.
  • Code and configuration files necessary to reproduce key results.
  • Model documentation (e.g., intended use, limitations, evaluation scope, failure modes).
  • Reporting artifacts (e.g., ablation summaries, robustness checks, calibration analysis).
  • Non-sensitive supporting materials (e.g., templates, annotation guidelines).

Responsible Constraints on Openness

CredenceX recognizes legitimate constraints on disclosure, including:

  • Participant privacy and consent restrictions.
  • Legal and contractual obligations.
  • Security considerations and misuse risk.
  • Protection of sensitive operational details where disclosure would create risk.

Where constraints apply, we aim to use proportionate alternatives such as redaction, delayed release, controlled access, or sharing of derived artifacts rather than raw sensitive data.

Publication Ethics

CredenceX expects publications to:

  • Describe methods, assumptions, and limitations transparently.
  • Avoid overstating clinical readiness or deployment claims.
  • Disclose relevant funding, affiliations, and competing interests.
  • Cite prior work appropriately and avoid duplicate or redundant publication.

Public Benefit Statement

CredenceX exists to advance research and innovation in the public interest. Our work seeks to improve the trustworthiness, transparency, and real-world reliability of AI systems, particularly in high-stakes domains where failure can meaningfully impact human well-being. We interpret public benefit to include scientific contribution, educational value, responsible innovation, and the reduction of harms associated with unsafe or opaque AI.

Effective Date: March 2026

Questions about our open research policy? Contact us.