15.4 Transparency in Model Attribution
Transparency is a non-negotiable pillar of trust in artificial intelligence. Craft fully discloses the models powering its features, including third-party LLMs (such as OpenAI’s GPT-4, Anthropic’s Claude, or Mistral) and generative tools (such as Stability AI for images or ElevenLabs for voice synthesis). For each AI tool, the platform clearly identifies the underlying model or engine through tooltips, interface notes, or documentation. Craft also tags generated outputs with metadata so that users can optionally credit the AI origin. For enterprise deployments, Craft provides detailed model documentation, model changelogs, and API-level transparency, giving organizations complete visibility into what powers the end-user experience. This supports regulatory alignment and ethical accountability at both the user and organizational levels.
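Conceptually, the attribution metadata described above can be pictured as a small structured record attached to each generated output. The sketch below is a hypothetical illustration in TypeScript; the interface name, field names, and helper function are assumptions made for clarity and do not reflect Craft’s actual schema or API.

```typescript
// Hypothetical shape of AI-attribution metadata attached to a generated output.
// Field names are illustrative assumptions, not Craft's actual schema.
interface AIAttribution {
  provider: string;        // e.g. "OpenAI", "Anthropic", "Stability AI", "ElevenLabs"
  model: string;           // e.g. "gpt-4", "claude-3-opus"
  modality: "text" | "image" | "audio";
  generatedAt: string;     // ISO 8601 timestamp
  creditVisible: boolean;  // whether the user chose to surface the AI credit
}

// Illustrative helper that tags an output object with its attribution record.
function tagOutput<T extends object>(
  output: T,
  attribution: AIAttribution
): T & { aiAttribution: AIAttribution } {
  return { ...output, aiAttribution: attribution };
}

// Usage example with hypothetical values:
const taggedNote = tagOutput(
  { title: "Meeting summary", body: "Generated summary text" },
  {
    provider: "OpenAI",
    model: "gpt-4",
    modality: "text",
    generatedAt: new Date().toISOString(),
    creditVisible: true,
  }
);
```

Keeping the attribution record as a separate, optional field on the output (rather than embedding it in the content itself) is one way such tagging could let users choose whether to surface the AI credit without altering the generated material.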