
Next-gen AI: Disrupting your business?

11 April 2023. Published by Nicholas McKenzie, Trainee Solicitor

It’s no secret that the tech sector is going through a tough time at the moment, with the large swathes of layoffs seen in 2022 continuing into 2023. So much so that bespoke trackers now actively monitor the 167,004 (and counting) tech jobs already lost this year.

Yet whilst real-life employees seem to be losing out, a new suite of generative artificial intelligence ("AI") products has been infiltrating the business world. The question is: are they offering the true disruption to how we work that AI has been promising?

What is generative AI?

The key selling point of generative AI, compared with the AI we’ve seen before, lies in its ability to create new things rather than just beating you at your favourite board game.

Generative AI can produce new creations from prompts given by users. In theory, the AI knows how to create what the user wants because it has been trained on large quantities of example data to mimic things people have created before. Many new generative AIs use the internet to make their training easier by trawling through the multitude of text and image data available online (no 80s montage needed).
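For readers curious what "prompting" looks like in practice, the minimal sketch below shows a short text prompt being sent to a generative model through OpenAI's Python library (as it existed at the time of writing). It is illustrative only: the model name, the prompt and the use of an API key stored in an OPENAI_API_KEY environment variable are assumptions for the example, not a recommendation of any particular product.

```python
# Illustrative sketch only: sending a prompt to a generative AI model
# using the openai Python package (pre-1.0 interface, current in early 2023).
import os
import openai

# Assumes an API key is available in the environment.
openai.api_key = os.environ["OPENAI_API_KEY"]

# The user's prompt: the model generates new text it was never explicitly
# given, based on patterns learned from its training data.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "Draft a two-sentence marketing blurb for a reusable coffee cup.",
        }
    ],
)

print(response.choices[0].message.content)
```

The entire "creative" step is contained in that single prompt, which is precisely why the quality of what comes back depends so heavily on how the request is phrased, a point returned to below.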

This technology has now matured into a range of marketable products. Notable entries include ChatGPT, a chatbot that can set you homework, do it for you and then give you feedback; Stable Diffusion, which generates images based on keyword prompts; and GitHub Copilot, whose creators claim it writes up to 40% of the code for the software engineers who use it.

Businesses see the potential

Businesses already use a range of AI tools on a daily basis, from predicting what customers might want to buy so that prices can be adjusted in real time, to powering basic chatbots that can answer high-volume queries. AI is also used for fraud prevention, increasing the detection of true positives by 50%, reducing false positives by 60% and substantially cutting the costs wasted on investigations unrelated to fraud.

Generative AI presents the next step in how businesses can use AI by automating creative tasks that could traditionally only be completed, or at least started, with human input. This could include writing marketing and sales copy, drafting documents or even side-stepping the entire creative process by writing and illustrating publishable books. ChatGPT, in particular, has seen use in academia to shorten previously time-consuming tasks such as drafting course structures and co-writing academic papers.

Generative AI has not just gained attention in the press and on social media. The World Economic Forum recently highlighted its importance, noting that investors put $1.37bn into generative AI start-ups over the course of 2022.

OpenAI, the creators of ChatGPT and the image generator DALL-E, are backing themselves with a projection of $1 billion in revenue by 2024. This confidence comes from the likes of Microsoft directly investing in OpenAI and being vocal about seeing ChatGPT as a way to disrupt the internet search engine market. In fact, Microsoft have already begun embedding ChatGPT into Bing, with "multiple millions" currently on the waiting list to give it a try, and are eager to roll out similar functionality across Microsoft's wider suite of software.

Generating problems

Unsurprisingly, not everyone is happy at the prospect of AI encroaching on their creative territory, with Nick Cave describing AI-written music as "a grotesque mockery of what it is to be human". Yet generative AI faces more problems than simply being unpopular; substantive disputes are anticipated as it gains widespread use. For example, visual artists unhappy with AI products being trained on their work without their consent have responded by creating a tool which searches over 5 billion images to check whether their work has been used to train an AI.

Questions are also being raised over the ownership of AI-generated art, with some proposing that judicial or legislative interpretation will be needed on the threshold between an AI creation that copies an existing style, which would infringe copyright, and one that is a new creation merely informed by existing styles, which would not. More complex authorship and ownership issues are also emerging as artists begin creating their own generative AIs, training them solely on their own work and then letting those AIs create new art. However, if recent court decisions are followed, such as Thaler v Comptroller-General of Patents, Designs and Trade Marks (currently under appeal), it may be some time before AIs can qualify as legitimate creators, authors or inventors.

The quality of generative AI's work has also faced scrutiny. This isn't too surprising given that anything trained solely on information from the internet is bound to pick up a few bad habits from the relentless waves of misinformation and general nastiness found online. However, generative AI faces unique problems, such as the fact that the way a prompt is phrased can drastically change what is produced. Current generative AIs also confidently produce incorrect answers without any hesitation, something university leaders have found to be a telling sign that one has been used to write an essay. This means that human users also need to be carefully trained, both to ensure each AI generates something valuable and to recognise poor-quality responses.

Another crucial concern is that the neural networks and deep learning underpinning next-gen AIs mean it can be difficult, and often impossible, to understand exactly how a generative AI has reached a decision or created its masterpiece. This opacity, combined with the fear that confidential information shared with an AI could be leaked, has led some businesses to begin cracking down on employees using generative AI at work.

Looking forward

Generative AI looks to be here to stay, at least for the near future, and with development seemingly unleashed we can expect the next 12-18 months to bring more bots, products and experiments as new generative AIs hit the market. However, the excitement may plateau in the longer term, especially if concerns that the amount of data available to train new AIs could run out prove well founded. As generative AI becomes embedded into businesses, we can also expect the narrative to turn to ethical questions about our reliance on such products, questions which religious leaders around the world have already started to consider. Nonetheless, next-gen AIs don't appear poised to take our jobs, for now, but it's a pretty safe bet that businesses which don't engage with them will start being left behind.