Should businesses be concerned about AI regulation?


If you’re an avid Facebook user, you might have seen a post being shared last week: the ‘Goodbye Meta AI’ message. Meta, the parent company of Facebook, Instagram, and WhatsApp, recently announced plans to use millions of publicly available UK posts to train its AI models. Many users shared the message in the mistaken belief that doing so would opt them out of that training. However, Meta has confirmed that such posts do not constitute a valid opt-out; only official objection forms filled out by users will be honoured. These training practices serve as a stark reminder of the risk businesses take when they rely on another company’s software for their everyday operations: you don’t set the terms, and with AI developing at a rapid pace, there’s a real reason to be concerned.

California Governor Gavin Newsom has vetoed a landmark artificial intelligence (AI) safety bill, sparking concern among both consumers and businesses. The proposed legislation aimed to set some of the first regulations on AI technology in the United States, subjecting the most advanced AI models to mandatory safety testing and oversight. These measures would have included a “kill switch” requirement, allowing organisations to disable AI systems if they posed a threat.

However, Newsom argued that such regulations could stifle innovation and prompt AI developers to relocate to other states or countries with less stringent rules. This highlights a growing divide between the push for tighter AI oversight and the tech industry’s preference for minimal restrictions.

Why consumer concerns about AI matter to business

The Meta AI incident underscores a broader issue that businesses should heed: consumer sentiment and trust. For companies using AI technology, maintaining transparency and upholding data privacy standards is crucial, not only to avoid legal pitfalls but to sustain consumer trust. Your commitment to GDPR can never be forgotten, and if your customers’ and potential customers’ data is being held externally to yourself, you don’t have full control over how it is used.

As shown by the response to Meta’s practices, failing to adequately address consumer concerns can lead to negative publicity and calls for boycotts, even if the company’s actions are legally permissible. Get something wrong with this technology, even when you’re only using someone else’s software, and the consequences can be severe.

Preparing for a future of AI regulations

Businesses should view these developments as an early warning of the evolving regulatory landscape around AI. While Newsom’s veto temporarily halts AI regulations in California, the growing momentum behind similar legislative efforts, both in the US and internationally, suggests that stricter AI oversight is inevitable. The EU, for example, has already taken a more aggressive stance with its AI Act, which aims to regulate high-risk AI applications comprehensively, including behavioural manipulation, social scoring, and biometric identification.

In the absence of clear regulations, companies should take proactive steps to self-regulate and establish robust AI use frameworks. Implementing internal safety testing, maintaining clear documentation of the processes that use AI, and ensuring that AI systems have appropriate “off” mechanisms are all practical steps that can mitigate risks and demonstrate a commitment to responsible AI use. Most important of all, staff should be cognisant of how they are using AI in their work without becoming dependent on it. A minimal sketch of what such an “off” mechanism might look like in code is shown below.
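To make the “off” mechanism concrete, here is a minimal sketch in Python of how a kill switch might sit in front of every AI call a business makes. Everything in it is hypothetical and for illustration only: the AI_KILL_SWITCH environment variable, the call_model function, and the fallback behaviour are assumptions, and a production setup would typically read the flag from a centrally managed configuration service so it can be flipped instantly across all systems.

```python
import os


class AIDisabledError(RuntimeError):
    """Raised when the AI kill switch is active."""


def ai_enabled() -> bool:
    # Hypothetical flag: in practice this would come from a managed
    # config service rather than an environment variable, so that one
    # change disables AI features everywhere at once.
    return os.environ.get("AI_KILL_SWITCH", "off") != "on"


def call_model(prompt: str) -> str:
    """Single guarded entry point for all AI calls in the business."""
    if not ai_enabled():
        # Fail safely and loudly rather than silently degrading.
        raise AIDisabledError("AI features are currently disabled.")
    # Placeholder for the real model call (vendor SDK, internal API, etc.).
    return f"model response to: {prompt!r}"


if __name__ == "__main__":
    try:
        print(call_model("Summarise this customer email."))
    except AIDisabledError as exc:
        # Fall back to the manual, human-driven process.
        print(f"Falling back to manual handling: {exc}")
```

The key design choice is routing every AI call through one guarded function: that gives you a single place to switch the technology off, log its use for your documentation, and fall back to a human process when needed.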

The business case for early compliance

There is a strong business case for early compliance with emerging AI safety standards. Companies that implement best practices in AI safety and ethics will be better positioned to navigate future regulations, potentially gaining a competitive advantage over less prepared rivals. Moreover, by prioritising transparency and user protection, businesses can avoid the backlash faced by Meta and instead position themselves as leaders in responsible AI innovation.

The California AI bill’s veto and the Meta AI training controversy highlight the need for a balanced approach to AI regulation: one that protects public interests without stifling innovation. For businesses, this means staying informed, being prepared for changing regulations, and, most importantly, putting user trust and safety at the centre of their AI strategies. A perfect way to achieve this balance, and to keep getting value from the technology in the long term, is to work with an expert digital and tech partner like Shoothill.