AI usage and the importance of data management

Written by
Will Newland

Managing Director

AI is rapidly reshaping how content is created, managed, and delivered, and as it continues to evolve, its potential to drive efficiency and productivity grows. At the same time, the rapid pace of AI development means governance frameworks have not kept up with adoption.

This widens the threat to users’ data privacy and confidentiality.

The gap between Marketing and IT

AI, when deployed within a defined governance model, can enable marketing teams to move at pace across the entire marketing mix. Design can use generative tools to accelerate visual production and animation; developers can use AI-assisted coding tools to build out and advance front-end development; and content and marketing teams can integrate smart workflows into campaign and analytics processes.

In each use case, operational speed increases, yet visibility over the data decreases, and the question of where data goes and who is processing it is often overlooked.

Without defined accountability, this risk sits in a grey area between Marketing teams, which largely adopt these tools independently, and IT departments, which struggle to keep track of which tools are in use given the rapidly evolving nature of the AI industry.

Risks behind unmanaged AI adoption

Data

AI tools process input data, and ultimately, that data must go somewhere.

In unmanaged environments, sensitive information is often shared with external providers through individual accounts, undocumented integrations, or loosely governed workflows.

Typical scenario – development:

A developer pastes authentication credentials, API keys, or endpoint URLs into an AI coding assistant.

If prompts are logged or stored by the provider, those credentials may now exist outside the organisation’s contractual control.
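One practical mitigation is to redact obvious credential patterns before a prompt ever leaves the organisation. The sketch below is illustrative only — the patterns and function names are hypothetical, and real secret scanners use far more extensive rule sets:

```python
import re

# Illustrative patterns only; production secret scanners cover many more formats.
SECRET_PATTERNS = [
    # key=value style assignments (api_key, token, secret, password)
    (re.compile(r"(?i)(api[_-]?key|token|secret|password)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
    # long opaque key strings with a recognisable prefix
    (re.compile(r"sk-[A-Za-z0-9]{20,}"), "[REDACTED-KEY]"),
    # credentials embedded in URLs (https://user:pass@host/...)
    (re.compile(r"https?://[^\s:@]+:[^\s:@]+@\S+"), "[REDACTED-URL]"),
]

def redact(prompt: str) -> str:
    """Strip obvious credential patterns from a prompt before it is sent
    to an external AI service."""
    for pattern, replacement in SECRET_PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

print(redact("Connect using api_key=abc123XYZ and retry on 503"))
# → Connect using api_key=[REDACTED] and retry on 503
```

A filter like this does not replace contractual controls — it simply reduces the chance that credentials reach a provider’s logs in the first place.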

IP

AI systems are frequently used to refine proprietary content, commercial strategy documents, and internal codebases. Without clarity on retention policies or model training behaviour, organisations may unwittingly expose this information.

The key issue here is that most people using these tools do not evaluate how their data is handled downstream.

Regulatory

In regulated sectors, the risk extends beyond confidentiality. If personal data is processed through AI systems without a Data Processing Agreement or defined Retention Policies, the organisation may face exposure under GDPR.

Typical scenario – data analysis:

An analytics team pastes website form submissions containing personal data into an AI tool to identify conversion patterns. The data includes names and contact details originally collected under a specific privacy notice.

The AI provider is not covered by that privacy notice.

Uncontrolled automation

As AI evolves from assistive tools to autonomous agents, risk shifts from “data processing” to “action execution.” AI agents may operate without formal approval processes, logging, or rollback capability.

If changes are made directly in live environments without oversight, this increases the potential operational and reputational risk for the business.

Security surface expansion

Every AI integration or plugin used across your website or DXP introduces a new external dependency.

Without assessment and management, these can accumulate faster than you can monitor them, and security complexity increases faster than traditional control models were designed to handle.

A practical governance framework

Data classification

Before evaluating any AI tool, the first step is understanding what data it will access and how, because not all data carries the same level of risk.

Example risk categories

Published website content may be considered low sensitivity.

Internal strategy documents, customer records, authentication credentials, and analytics datasets would be considered high sensitivity.

The risk changes quickly when tools are given access beyond published content, particularly where personal data, gated content, internal documentation, or system credentials are involved.

Organisations should formally classify the data into categories and ensure AI tools only access data aligned with their approved risk level.
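As a minimal sketch of that principle — with hypothetical category names and only the two sensitivity levels used in the examples above — a classification map and an access check might look like:

```python
from enum import Enum

class Sensitivity(Enum):
    LOW = 1   # e.g. published website content
    HIGH = 2  # e.g. customer records, credentials, analytics datasets

# Hypothetical classification map, for illustration only
DATA_CLASSIFICATION = {
    "published_content": Sensitivity.LOW,
    "internal_strategy_docs": Sensitivity.HIGH,
    "customer_records": Sensitivity.HIGH,
    "auth_credentials": Sensitivity.HIGH,
    "analytics_datasets": Sensitivity.HIGH,
}

def tool_may_access(tool_approved_level: Sensitivity, data_type: str) -> bool:
    """An AI tool may only access data at or below its approved risk level."""
    return DATA_CLASSIFICATION[data_type].value <= tool_approved_level.value

print(tool_may_access(Sensitivity.LOW, "customer_records"))
# → False: a tool approved only for low-sensitivity data is blocked
```

A real scheme would likely need more than two tiers, but the check itself stays the same: the tool’s approved level is compared against the data’s classification, never assumed.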

Plugin/Tool evaluation

Websites and DXPs use plugins: third-party software extensions added to the site to provide pre-built functionality. Many of these require access to site content, user data, or backend systems.

Every AI tool or plugin used should go through a documented evaluation process before adoption.

At a minimum, it’s important to understand:

  • What data the plugin accesses
  • Where that data is processed geographically
  • Whether processing is server-side or client-side
  • Whether prompts or outputs are logged
  • Whether data is retained or used for model training
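The evaluation above can be captured as a structured record so that every adopted plugin has a documented, auditable entry. The field names and approval policy below are hypothetical, simply mirroring the checklist:

```python
from dataclasses import dataclass

@dataclass
class PluginEvaluation:
    """Documented evaluation record for an AI plugin or tool.
    Field names mirror the checklist questions; they are illustrative."""
    name: str
    data_accessed: list[str]
    processing_region: str        # where data is processed geographically
    server_side: bool             # server-side vs client-side processing
    prompts_logged: bool
    retained_or_used_for_training: bool

    def approved(self) -> bool:
        # Hypothetical policy: reject anything retained or used for model
        # training, or logged outside approved regions.
        if self.retained_or_used_for_training:
            return False
        if self.prompts_logged and self.processing_region not in {"UK", "EU"}:
            return False
        return True

record = PluginEvaluation(
    name="example-ai-plugin",
    data_accessed=["published_content"],
    processing_region="EU",
    server_side=True,
    prompts_logged=True,
    retained_or_used_for_training=False,
)
print(record.approved())  # → True under this illustrative policy
```

The value is less in the code than in the discipline: each answer is recorded before adoption, so the decision can be revisited when the plugin’s terms change.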

Contractual controls

Where AI tools process organisational or personal data, contractual coverage is essential and should be detailed through your Data Processing Agreement.

Deployment controls

You should ensure data separation between staging and live environments, with a clearly defined approval process in place.

AI-enabled workflows should operate within defined boundaries, approval processes and logging standards.

Establishing defined constraints on autonomous action gives you more control over, and visibility of, what is happening.
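A simple way to express such a constraint is a deployment gate: AI-initiated changes flow freely to staging, but live deployments require a named human approver, and every decision is logged. The target names and function below are an assumption for illustration, not a specific product’s API:

```python
import logging
from typing import Optional

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-deploy")

# Hypothetical policy: only staging is open to unattended AI-driven changes
OPEN_TARGETS = {"staging"}

def deploy_change(change_id: str, target: str,
                  approved_by: Optional[str] = None) -> bool:
    """Gate AI-initiated changes: staging deploys freely, live requires a
    named approver. Every decision is logged for later audit."""
    if target in OPEN_TARGETS:
        log.info("change %s deployed to %s", change_id, target)
        return True
    if approved_by:
        log.info("change %s deployed to %s, approved by %s",
                 change_id, target, approved_by)
        return True
    log.warning("change %s to %s blocked: no approval recorded",
                change_id, target)
    return False
```

The log lines double as the audit trail discussed below: each action records what was attempted, where, and on whose authority.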

Audit & monitoring

Governance does not end at deployment.

At a macro level, organisations should maintain visibility over which AI tools are connected to which platform and what data they access.

This can be managed on a team-by-team basis, but without auditability, AI adoption becomes difficult to control.

How SoBold approaches AI governance

Our operations run under ISO 9001 quality management processes.

Our hosting infrastructure partners maintain ISO 27001 certified environments with defined geographic data boundaries and server-side logging, with documented data flows across the hosting stack.

In practice, this means AI integrations are assessed against established security and quality controls before deployment.
