'Artificial Intelligence is capable of extreme damage to people without right laws,' warns Roger Gewolb


By Roger Gewolb


Published: 18/06/2024 - 09:57

Dr Roger Gewolb is a former Bank of England adviser

However wonderful, and equally dangerous, artificial intelligence (AI) is portrayed to be in the media and the great narratives of today, it simply cannot think, at least not yet. It is not sentient.

AI is basically a magnificent data analysis tool. The very best AI applications can respond to queries, with all the relevant data and backup imaginable, in beautifully presented order, in a matter of seconds.


But, in simplest terms, it cannot give you an answer that you have not thought of yourself. It may surface and present information in new and unique ways that you had not considered, prompting a new solution or a new angle from which to regard something.

But that is not, at least yet, the same as the human brain and other sentient qualities producing something really new, or making order out of something apparently totally disordered.


Nana Akua, Anna McGovern, Dr Roger Gewolb and Justin Urquhart-Stewart discussed HMRC AI tax spies


Britain currently does not have any dedicated laws or regulations specifically governing AI, although the need for a regulatory framework is well-recognised.

In other words, there are currently no clear guidelines or safeguards specifically regulating the use of AI for sensitive applications.

It is the same in many other countries, such as Australia, and yet the Australian police are using AI for various purposes, such as experimenting with AI chatbots to help officers navigate complex governance rules and guidelines related to investigations, search warrants, and sensitive cases, particularly those involving politicians or the media.

“This year-long trial aims to assess the feasibility of using AI for increasing the usability of governance instruments,” it is officially stated.

However, here in the UK, the current election campaigns are rapidly filling up with promises of billions being spent on AI for all manner of endeavours that could turn out to be disasters on a scale even greater than the long-running Post Office/Fujitsu Horizon software scandal.

For example, HMRC already looks set to use AI to crack down on tax evasion. Incredibly, it will start by getting rid of customer service representatives, so that Brits will have only chatbots to explain one of the world’s most complicated and, by general agreement, unfit-for-purpose taxation systems, a system hardly understandable in many instances even in a personal phone conversation with an HMRC representative.

Then, apparently, they are going to use it, rather than seasoned agents and professional collection experts, to process information and decide whether claims of tax evasion lie against people.

Similarly, some British police forces are now using it for what they call “predictive policing”: pinpointing who will commit what crimes, when, where and how, and responding accordingly, it seems, solely on the AI-provided information, at least initially.

Can you imagine someone showing up at your front door accusing you of something because of what the computer said? Of course, the police forces say that they “are working to address concerns around bias, privacy, and human oversight”. Well, that’s a big relief, isn’t it? Especially since the sole oversight on all these potentially life-changing issues is in the hands of the agencies using it as of today!


What is even more comforting (not) is that the UK is now proposing, unlike the EU, which will adopt specific laws and regulations calibrated to the assumed risk level of each AI use, simply to “use existing laws to regulate” this entirely new and virtually unknown field, adopting and implementing principles as it goes along and as knowledge develops. Like when the proverbial horse has already bolted.

I am really getting concerned that we are veering further and further away from what should be a sensible, measured approach to a potentially highly valuable tool, but one that could also be a minefield of damage to citizens.
