Bank of England Governor Andrew Bailey confirms the House of Lords' view that AI technology should be seen as a positive for business, rather than a risk.
In conversation with the BBC, Bailey says that AI will not be a "mass destroyer of jobs" and that humans will adapt to working with new technologies because there is "great potential with it."
The Bank of England believes that businesses that have already adapted and invested in AI will see the benefits to productivity soon. Bailey adds: "I’m an economic historian, before I became a central banker. Economies adapt, jobs adapt, and we learn to work with it. And I think, you get a better result by people with machines than with machines on their own. So I’m an optimist."
Chair of the House of Lords committee Baroness Stowell advises that "existential risks and sci-fi scenarios" should not prevent organisations from reaping the rewards of this technology. She adds that the Lords Communications and Digital Committee’s report - which focuses on large language models (LLMs), the technology behind generative AI tools such as ChatGPT - warns that the UK could "miss out on the AI goldrush" if "apocalyptic" warnings about AI’s dangers were heeded.
Instead, these tools should be valued for their human-like responses to questions. Baroness Stowell warns that the government must ensure the UK does not end up as "the safety people". "No expert on safety is going to be credible if we are not at the same time developers and part of the real vanguard of promoting and creating the progress on this technology," she says.
According to the BBC, Secretary of State for Science, Innovation and Technology Michelle Donelan will speak to the Lords Communications and Digital Committee next week, where she will be questioned on the UK Government’s reaction to the report.
A Department for Science, Innovation and Technology spokesperson says: "We do not accept this - the UK is a clear leader in AI research and development, and as a government we are already backing AI’s boundless potential to improve lives, pouring millions of pounds into rolling out solutions that will transform healthcare, education and business growth, including through our newly announced AI Opportunity Forum."
They continue: "The future of AI is safe AI. It is only by addressing the risks of today and tomorrow that we can harness its incredible opportunities and attract even more of the jobs and investment that will come from this new wave of technology. That’s why [we] have spent more than any other government on safety research through the AI Safety Institute and are promoting a pro-innovation approach to AI regulation."
Published today on Finextra, Lord Chris Holmes explores how upcoming legislation, such as the Data Protection and Digital Information Bill, will harness algorithms, AI, and big tech for public good. "While we wait for the Government’s AI regulation white paper response, I have introduced a Private Members’ Bill, the AI Regulation Bill, that looks at exactly these issues. I drafted the Bill with the essential principles of trust, transparency, inclusion, innovation, interoperability, public engagement, and accountability running through it.
"Regulating AI was very much on the agenda at Davos this year and I think 2024 will be the year when legislators and regulators undertake to incorporate all of the principles set out above, deeply consider copyright and IP rights and ultimately, achieve greater clarity around what is required to ensure we can realize the economic, social, and psychological benefits AI can bring."
Fri, 02 Feb 2024 10:57:00 GMT