
The Risks of Using AI for Small Companies (Part 2)


For small and medium-sized businesses (SMBs), the biggest risk in using AI is mishandling data privacy and security, especially when combined with a lack of legal and regulatory compliance. This risk is amplified by limited resources, less technical expertise, and Shiny Object Syndrome (SOS – the tendency to be distracted by new, exciting opportunities and to rush into action without properly assessing the risks).

Why Data Privacy and Security Stand Out

  • Sensitive Data Exposure: Small businesses often feed proprietary or customer data into AI tools without fully understanding where that data goes or how it’s used. If the AI provider mishandles this data or suffers a breach, confidential information could be exposed, leading to loss of customer trust and potential legal action.
  • Regulatory Compliance: Regulations like GDPR (in the UK and EU) require strict controls over personal data. Many SMBs underestimate how these rules apply to AI, risking hefty fines for non-compliance if they use AI for automated decision-making or if data is processed without proper assessments.
  • Skipping Legal Checks: AI tools are designed to be user-friendly, tempting businesses to implement them without the usual legal or IT checks they’d apply to other technologies. This can open the door to data leaks, confidentiality breaches, and regulatory penalties.

Real-World Example

A small HR consultancy uses an AI-powered resume screening tool. Without a proper data protection impact assessment, they upload hundreds of CVs containing personal data. If the AI provider stores or processes this data insecurely, or uses it to train other models, the consultancy could face GDPR violations and reputational damage.

Other Significant Risks (and How They Relate)

  • Mistakes and Inaccuracies: AI can generate errors at scale—one faulty recommendation or biased decision can impact hundreds or thousands of customers before it’s noticed, which can be disastrous for a small business’s reputation.
  • Bias and Discrimination: AI trained on biased data can make unfair decisions, such as in hiring or lending, leading to ethical issues and potential legal claims. Amazon abandoned its AI-powered recruiting tool when it discovered that the tool strongly favoured male candidates, penalising CVs that included words like “women’s”.
  • Cost and ROI: Implementing AI can be expensive. If the system fails to deliver value, the financial hit is felt more acutely by smaller firms with tight budgets.
  • Over-Reliance and Lack of Oversight: Small teams might be tempted to let AI run without sufficient human oversight, increasing the risk of unchecked errors or inappropriate outputs.

The Chicago Sun-Times published a summer reading list in May 2025 that included books that don’t exist. This AI hallucination included ‘Tidewater Dreams’ by Isabel Allende, “a climate fiction novel that explores how one family confronts rising sea levels while uncovering long-buried secrets.” Not good for the reputation of the Chicago Sun-Times! (For more AI disasters, see this article: 12 famous AI disasters.)


  • Intellectual Property Issues: Using generative AI for content or design can inadvertently infringe on copyrights, exposing the business to legal threats.

Tools and Templates for Mitigation

  • Data Protection Impact Assessment (DPIA) Templates: Use DPIA templates (many are available from the UK ICO or EU GDPR websites) before integrating any AI tool that processes personal data.
  • AI Vendor Security Checklists: Before adopting an AI tool, use a checklist to review the provider’s data handling, security, and compliance policies.
  • AI Usage Policy: Draft a simple internal policy outlining what data can be shared with AI tools, who is responsible for oversight, and how outputs are reviewed.
  • Bias Testing Tools: Use open-source tools like IBM’s AI Fairness 360 to check for bias in AI outputs, especially in HR or customer-facing applications.
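Even before reaching for a full toolkit like AI Fairness 360, you can run a quick sanity check on an AI tool’s outputs yourself. As a minimal sketch – using entirely hypothetical screening results, not data from any real tool – this compares approval rates between two applicant groups using the “four-fifths rule”, a common first test for disparate impact:

```python
# Quick bias sanity check on AI screening outcomes (hypothetical data).
# The "four-fifths rule": if one group's selection rate is below 80% of
# another's, that is a common red flag warranting closer investigation.

def selection_rate(outcomes):
    """Fraction of candidates approved (1 = approved, 0 = rejected)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    A result below 0.8 fails the four-fifths rule."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical screening decisions for two applicant groups
group_men = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]    # 80% approved
group_women = [1, 0, 1, 0, 0, 1, 0, 1, 0, 0]  # 40% approved

ratio = disparate_impact_ratio(group_men, group_women)
print(f"Disparate impact ratio: {ratio:.2f}")  # prints 0.50 – below 0.8
```

A check like this won’t prove an AI tool is fair, but a failing ratio is a strong signal that the tool’s decisions need human review before they affect real candidates.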

AI Disaster: Shopify Merchant

A Shopify merchant used AI to automate customer support. Initially, they failed to set up proper data access controls, and a data breach exposed customer emails and order histories. After the incident, they implemented stricter data encryption, regular audits, and clear staff training, which restored customer trust and allowed them to safely continue using AI.

An essential part of your defence against AI risks is an AI Usage Policy for staff – and that’s the subject of our next article.

Follow us on LinkedIn or subscribe to our newsletter to make sure that you don’t miss it!

Let us tame your IT

To discuss any of our services please fill in our short form and one of our team members will be in touch right away.

No worries if contact forms aren’t your thing – our team are a friendly bunch and are waiting for your call – 01403 29 29 30.
