A British peer in the House of Lords suggested artificial intelligence (AI) could easily replace its members in the near future. But one expert argued that Britons' attachment to tradition, and their trust in human judgment on major decisions, will likely delay AI adoption.
“One of my thoughts is that the British have a sense of legacy – it’s a big thing for them,” Alex Sharpe, principal of Sharpe Management Consulting LLC, told Fox News Digital. “They also give ‘discourse’ a whole new dimension. It’s almost like political theater, so I can’t see it going away.”
A debate in the House of Lords this week prompted a chilling prophecy from Richard Denison, 9th Baron Londesborough, who warned AI may soon learn his style of speech “with no hesitation, repetition or deviation.”
The House of Lords, whose membership was largely hereditary until reforms in 1999, serves mainly in an advisory and revising capacity alongside the House of Commons, the elected chamber that ultimately decides policy and law for the United Kingdom.
“Is it an exciting or alarming prospect that your lordships might one day be replaced by peer bots with deeper knowledge, higher productivity and lower running costs?” Denison said during a debate about the impact of AI on the job market. “Yet this is the prospect for perhaps as many as 5 million workers in the U.K. over the next 10 years.
“I was briefly tempted to outsource my AI speech to a chatbot and to see if anybody noticed. I did, in fact, test out two large language models. In seconds, both delivered 500-word speeches, which were credible, if somewhat generic.”
Another peer, Charles Colville, said he asked ChatGPT to write a speech for him on the threat AI poses to journalism, which prompted fears humanity “will descend into a landscape where news is stripped of the very human elements that make it relatable, understandable and ultimately impactful,” The Guardian reported.
Sharpe, in an interview with Fox News Digital, noted that AI has been around for years, pointing to assistants like Siri, which are, in fact, AI, though far less capable than large language models such as ChatGPT.
“What we’re hearing now and seeing now is no different than what we see in other places, except that it’s really white collar instead of blue collar,” Sharpe explained, adding that what people are thinking of as AI is mostly informed by “a lot of movies and science fiction.”
“[Alan] Turing wrote the first paper, and I believe his paper actually used the term artificial intelligence,” Sharpe noted, in reference to Turing’s seminal 1950 paper, “Computing Machinery and Intelligence,” which asked, “Can machines think?” (Turing’s paper did not in fact use the term “artificial intelligence,” which was coined in a 1955 proposal by John McCarthy.)

The paper built on Turing’s earlier work establishing the foundations of computer science and became a touchstone for research into artificial intelligence.
The biggest obstacle to AI achieving the kind of truly human-like behavior that would pass Turing’s “imitation game,” in which a person cannot tell they are talking to a machine, is the lack of sufficient data to train the models.
For politicians, that barrier is lower: their speeches, thoughts and ideas are heavily documented in video and writing, giving AI ample material to analyze.
“When you’re talking politicians, they have all this documented history and all that, but then the machines are really not creating anything new,” Sharpe said. “They’re putting stuff together. They’re making inferences.”
This ability to replicate a person’s ideas and thoughts to near perfection could ultimately make a body like the House of Lords, which is largely advisory, obsolete. But other obstacles, such as constitutional or statutory requirements that offices be held by elected or appointed people, will likely delay adoption, according to Sharpe.
“And could you imagine a lobbyist trying to convince a machine to do something?” Sharpe asked. “We talk a lot about lobbyists, and we look at them very negatively, but the reality is there is a lot of jiggering that goes on, a lot of deals that go on to get very important things done. Because, at the end of the day, politicians are elected by their constituents.
“When it comes to governance and long-term strategy, I don’t want to leave that up to a machine,” he added. “I think aiding humans to make better decisions and being held accountable for decisions is a good thing, but turning it over to machines – I just don’t see that happening anytime soon.”