Trusting software to achieve your business goals: Copyright and your reputation

3rd June 2024

In our work at Shoothill, we often encounter software systems, or the lack thereof, crippling a business's efficiency. But in recent weeks the headlines have been awash with tales of software that highlight the pitfalls of off-the-shelf solutions.

In this article, we explore how AI tools are infringing on intellectual property, and how faulty software systems can lose customers and risk your reputation. We’ll leave you more aware of the potential hazards of implementing off-the-shelf software systems and with an idea of what preventative measures you can take.

Copyright infringement

OpenAI and Hollywood actor Scarlett Johansson have become embroiled in a legal dispute after ChatGPT’s new voice, Sky, appeared to mimic the actor’s voice – without her permission. Johansson said: “When I heard the released demo, I was shocked, angered and in disbelief that Mr Altman would pursue a voice that sounded so eerily similar to mine.” Whether or not the actor’s voice was actually copied is still in question.

OpenAI denies having used the actor’s voice. In an official statement, Sam Altman said: “The voice of Sky is not Scarlett Johansson’s, and it was never intended to resemble hers.”

But this isn’t the first time OpenAI has faced copyright or intellectual property trouble. Last year, the New York Times, along with several authors and artists, filed lawsuits against the company for using their work to train the chatbot.

How could this impact my business?

More and more off-the-shelf SaaS solutions (Software as a Service, e.g. HubSpot, Wix or Salesforce) are bringing AI tools into their platforms through tools like the ChatGPT API, which embeds ChatGPT’s capabilities into their software.
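To make this concrete, below is a minimal sketch of how a SaaS platform might wrap the OpenAI Chat Completions API to generate website copy on a customer’s behalf. The endpoint URL and payload shape follow OpenAI’s published API; the function name and the example brief are purely illustrative assumptions. Note that whatever comes back is derived from the model’s training data, which is exactly where the originality questions above arise.

```python
import json

# Public endpoint for OpenAI's Chat Completions API.
CHAT_COMPLETIONS_URL = "https://api.openai.com/v1/chat/completions"

def build_copy_request(brief: str, model: str = "gpt-4o-mini") -> dict:
    """Build the JSON body a platform would POST to the API.

    This is a sketch: a real integration would also attach an
    Authorization header with an API key and handle errors/retries.
    """
    return {
        "model": model,
        "messages": [
            # The system message frames the assistant's role.
            {"role": "system",
             "content": "You write website copy for small businesses."},
            # The user message carries the customer's actual request.
            {"role": "user", "content": brief},
        ],
    }

payload = build_copy_request("Write a homepage tagline for a bakery.")
print(json.dumps(payload, indent=2))
```

The key point for a business user is that the platform controls this prompt and forwards your content to a third-party model; the response is generated from data you never see.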

These generative AIs use the data provided to them, copyright-infringing or not, to create something ‘new’. The consequence is an inherent risk that nothing produced this way is original – or truly your business’s intellectual property.

It’s like a spellcheck that ‘corrects’ a typo to the wrong word entirely – the intentions are good, but the consequences can be costly.

Examples of this you may encounter:

  • A website built with a generative AI could plagiarise its copy and imagery from the website of a competitor.
  • A blog post or news article created through AI may draw on incorrect data or misinformation to reach its conclusions.
  • By brainstorming or developing your ideas and designs with these tools, you may be inadvertently using another person’s intellectual property.

So how do you avoid these risks?

Simply banning the use of AI could damage your business’s long-term growth: leaving the potential efficiency gains on the table may leave you lagging behind your competition.

To avoid risking their clients’ data, organisations like Deloitte and McKinsey have built their own AI tools using their internal software teams, enabling them to fully reap the efficiency gains.

So there are two paths open to you: establishing clear guidelines for your teams to use the technology, or commissioning your own tool. In either scenario, a custom software developer such as ourselves can help.

Facial recognition – an infallible technology?

There are consequences to technological errors that you may not consider. As we covered last week, and as you’ll have seen in the news these past months, a software error led to the wrongful convictions of nearly 1,000 sub-postmasters.

This week, Facewatch, a provider of anti-shoplifting facial recognition technology, broke into the headlines after a shopper spoke to the BBC about being misidentified as a shoplifter and expelled from a store. This sort of technology is not limited to retail; it has long been used in law enforcement, where the ramifications can be huge.

A 2020 Harvard paper, ‘Racial Discrimination in Face Recognition Technology’, said: “Face recognition algorithms boast high classification accuracy (over 90%), but these outcomes are not universal.” The paper notes that facial recognition systems “performed the worst on darker-skinned females, with error rates up to 34% higher than for lighter-skinned males.”

There are many possible sources of error in a software system: a development oversight, or a user operating the platform outside of its intended design. In the case of facial recognition bias, it’s widely attributed to the fact that early systems were tested on the staff building them, who were largely male.

How could this impact my business?

  • A faulty verification system could lose customers and potentially entire demographics.
  • The technology you’re using could open you to reputational damage.
  • Your reports could draw on inappropriate or incorrect data, leading to misguided business decisions.

How can my business avoid these hazards?

Before signing up for an off-the-shelf software solution or the latest subscription-based SaaS platform, you need to ask: how does this software achieve its results? Do we understand the implications of this change? And what alternatives could achieve the same results?

In criminal law, it’s often said that ignorance is not a defence. The jury may still be out on whether copyright theft through ChatGPT is still theft, or whether your facial recognition system can make you responsible for discrimination. The question remains: can your business take the risk of being the first to find out?

You can achieve better results, tailored to your business, by commissioning a bespoke system from a company such as Shoothill. During the commissioning process, you can discuss concerns such as those outlined above and build a platform that won’t scupper your business.

If the problems discussed here resonate with you – contact us and together we can transform your business.

Contact: [email protected]

Phone: +44 (0)1743 636300