AI - Thomson 158 Reuters: Latest News Updates
https://thomson158reuters.servehalflife.com

How artificial intelligence is helping decide who can get a home loan
https://thomson158reuters.servehalflife.com/how-artificial-intelligence-is-helping-decide-who-can-get-a-home-loan/
Wed, 23 Oct 2024 18:41:33 +0000


In 1995, American talk show host David Letterman did a now-iconic interview with Microsoft founder Bill Gates, where he asked him to explain this “internet thing”.

“What the hell is that exactly?” Letterman cheekily asks, at a time when the internet was taking off.

Gates attempts to explain emails and how revolutionary the internet is — before Letterman says that when he heard you can watch a baseball game live on the internet: “I just thought to myself, does radio ring a bell?”

Letterman may have been deliberately flippant, but the interview was also telling about how people hadn’t yet understood how profoundly the internet would transform our lives.

That’s where we are now with generative artificial intelligence, according to Commonwealth Bank’s chief executive Matt Comyn.

“It is fair to say that whilst there is certainly potential with AI, it will take some time before we will be sufficiently confident that we can control for all the risks to be able to manage that safely at scale,” Mr Comyn noted at the bank’s annual general meeting in October.

AI bots could replace thousands of call centre workers

CBA was one of the first big four banks to say publicly that it is trialling a ChatGPT-style AI chatbot in its call centres, one that could replace thousands of local call centre staff.

It’s too early to know the full impact on jobs, although the Finance Sector Union and industry experts predict that in banking call centres alone, the losses could number in the thousands.

And those risks Mr Comyn refers to are enormous, especially when it involves machines making decisions about home loan applications. 

But CBA is not alone in thinking about how it can use AI to help its workers better answer customer calls, carry out security checks and more quickly assess documents used during the loan application process.

The big four bank bosses that ABC News spoke to relayed how they are already doing such tasks with AI.

But they were quick to point out these tools are there to just assist their staff in making financial decisions, not sign off on them.

ANZ’s chief technology officer Tim Hogarth says AI is currently helping ANZ staff quickly verify documents like pay slips and assess complex loan contracts.

Tim Hogarth says AI is currently helping ANZ staff verify loan documents. (ABC News: Nassim Khadem)

Over time, the technology will be able to give customers insights into how they spend their money.

“AI can now allow us to actually take information from documents and extract all of that meaning and cutting the amount of time it takes from hours and hours, down to sometimes mere seconds,” Mr Hogarth says.

“In future, AI is going to help you find and spot patterns more readily.

“For example, it might help you understand all those subscriptions that you’ve collected over time and give you some ideas on what you might want to do with those.”

AI already verifying loan documents, so how far away are bot loan approvals?

As AI becomes better at doing tasks that humans do and getting more involved in crucial decisions — like whether to approve a home loan — Mr Hogarth believes that “some jobs will go away, new jobs will come in”.

Like CBA, ANZ is using AI as an ‘over-the-shoulder’ assistant for 1,200 staff across its call centres.

The bank recently opened what it calls its ‘AI immersion centre’ in Docklands, Melbourne, and is training 3,000 workers on how to use AI to do their jobs.

It has joined forces with leading Silicon Valley companies, including Microsoft, Google and Amazon, as well as niche startups, to help improve its AI technology.

ANZ’s new AI centre is located at its Docklands office in Melbourne. (Supplied)

“People have used this technology to very quickly summarise everything about what the bank has on file for the customer and … to help them answer the customer’s questions more quickly and more reliably,” Mr Hogarth says.

AI is also helping ANZ staff with other tasks: it can identify cases of financial hardship so they can interact with a customer before a situation gets dire and the customer may be forced to sell their home. And it can be used as an authentication tool to check people are who they say they are.

He says a world where you can get your home loan through a robot is one he cannot envisage.

“I don’t think that’s likely to happen in the next five to 10 years … having a home loan completely decided by a machine,” Mr Hogarth says.

“It’s very likely, though, that the way we’re going to buy homes will be fundamentally different.”

‘Are they agitated?’: How NAB is using AI to track customer sentiment

National Australia Bank’s executive for data strategy execution, Jessica Gleeson, says NAB is trialling AI to help staff categorise and verify documents.

The aim is to reduce review times from 45 minutes to 5 minutes.

They are also exploring AI’s role in customer identity checks and ‘emotional sentiment analysis’ — this is where AI can help interpret and summarise conversations its staff have with customers.

“It’s more (about assessing) the tone of the customer’s voice — if you’re taking a call and a customer is agitated, or they’re upset, or they’re relatively neutral,” Ms Gleeson says.

“Our vision is that we’ll be able to service it up to our bankers so they kind of understand what their call is about, that they’re about to take, and also be able to give the right level of empathy to customers.”

Ms Gleeson says call centre staff may take about 100 calls a day, and they’re dealing with different customers with different needs.

“We need to keep a record of how we’ve interacted.

“If we can use large language models to create that summary, to create those end-of-call notes, the colleague doesn’t need to spend time on that, and they can take the next call and serve more customers,” she says.

Jessica Gleeson says NAB is trialling AI to help staff categorise and verify documents. (ABC News: Scott Preston)

Job losses aren’t part of the plan, according to Ms Gleeson, and she sees AI as a tool to “create capacity in people’s jobs”.

“Jobs are going to morph and merge into different things — where today you might perform a task, all of a sudden, AI is your co-pilot,” she says.

‘Too risky right now’: Calls for banks to be more transparent about AI use

The union for finance sector workers has concerns about the way AI is being used and expects there could be thousands of job losses.

“If some of the large banks were to replace their contact centre staff with AI, that would cost thousands of jobs across Australia,” Finance Sector Union national assistant secretary Nicole McPherson says.

She wants more transparency from the banks, including about how people’s jobs will change.

“We are very worried about people in contact centres, people in administration roles, people in processing roles — we think that they’re the roles that are going to be most quickly impacted by AI.

“We don’t want to see a situation where finance workers or consumers in Australia are simply having AI foisted upon them and they are having to deal with whatever the banks decide is a good idea,” Ms McPherson says.

Could AI robots soon approve your home loan? (OpenAI: DALL·E 2)

Ms McPherson says AI is a tool workers are excited about, but also a technology that is “still quite untested”, which comes with enormous risk.

“It’s simply too risky right now to be using untested technology, potentially to be slashing jobs and to not be using the most ethical and trained people to do this incredibly important work,” she says.

Small talk, work and the weather: AI currently struggles with chitchat

Using AI for sentiment analysis can also be misleading, Ms McPherson warns.

She cites a recent incident in which a phone call between one of the union’s finance sector members and a customer was wrongly interpreted by AI.

“They made a very mundane comment about the weather — how, unfortunately, it’s raining at the moment,” she recalls.

“When this conversation was analysed later on by AI, what the AI said was that this was a ‘negative customer interaction’, because they used the word unfortunately.

“It clearly wasn’t a negative interaction, but this is one of the big challenges with AI at the moment.

“Though it is called artificial intelligence, it still does need that human oversight.”
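The failure mode Ms McPherson describes, a single trigger word overriding context, is easy to reproduce. The sketch below is purely illustrative: a naive keyword-based scorer, not a representation of any bank's actual tool, and the word lists are invented for the example. Production systems use trained models, but the same trap can appear.

```python
# Illustrative only: a toy keyword-based sentiment scorer. A single word
# like "unfortunately" flags an otherwise neutral weather remark as negative.
NEGATIVE_WORDS = {"unfortunately", "upset", "angry", "complaint"}
POSITIVE_WORDS = {"great", "thanks", "happy", "resolved"}

def score_sentiment(utterance: str) -> str:
    # Normalise: lowercase each word and strip surrounding punctuation.
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    negatives = len(words & NEGATIVE_WORDS)
    positives = len(words & POSITIVE_WORDS)
    if negatives > positives:
        return "negative customer interaction"
    if positives > negatives:
        return "positive customer interaction"
    return "neutral customer interaction"

# Mundane small talk about the weather trips the negative keyword:
print(score_sentiment("Unfortunately, it's raining at the moment."))
# → negative customer interaction
```

The point of the sketch is that counting keywords ignores what the words are about, which is why human oversight of these summaries still matters.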

Fine line between AI helping and straying into financial advice

The potential of AI is exciting — but in reality, there are limitations.

And in the highly regulated banking world, there are also limits on which tasks a bot can perform before legal lines are crossed.

Gabriele Sanguigno is the head of startup ToothFairyAI.

He’s created an AI tool to help superannuation funds assess a customer’s financial position, and wants to pitch his tool to the big four banks.

Gabriele Sanguigno wants to pitch his AI tool to the big banks. (ABC News: Nassim Khadem)

He is also feeding into government inquiries on the issue. 

He says AI agents can be helpful in speeding up the home loan process, but they can’t offer financial advice or sign off on loans.

“The AI agent can come up with some possible scenarios,” Mr Sanguigno notes.

“It [AI] can come up even with some options in terms of how you can refinance your loan.

“However, you always need to keep the human in the loop to make sure that the last check is done by a person.”

He says that while there is much hype about how many jobs AI might destroy, its impact will be big, and it could come sooner than people expect.

“The idea of thinking that this technology will not have an impact on the job market? I think it’s ludicrous,” Mr Sanguigno says.

Joe Sweeney, an analyst at technology consultancy IBRS, also believes call centre jobs will be lost.

He says a big issue is whether answers provided by AI that feed into decisions about home loans could be deemed financial advice.

That would be a contravention of regulations around who can give financial advice, and potentially responsible lending laws.

Joe Sweeney says AI is not that intelligent, but it is good at picking up patterns quickly. (ABC News: Daniel Irvine)

“The banking sector is not allowed to offer financial advice above the most basic levels,” Dr Sweeney says.

“You could create a series of questions that would lead to the AI giving you a response that it really shouldn’t.

“And this is why the design of the AI and the information that is fed to these AIs is so important.”

He says too many people think AI has greater capacity than it actually does.

“There is no intelligence in that artificial intelligence at all — it is simply pattern replication and randomisation … It’s an idiot, plagiarist at best.

“The danger, particularly for financial institutions or any institution that is governed by certain codes of behaviour, is that AI will make mistakes,” Dr Sweeney says.

“And if those mistakes are in breach of regulation, then those organisations have a real problem.”

Can regulation keep up with AI technology?

The rapid pace at which AI is advancing means the technology is moving faster than regulation.

The European Union has introduced laws to regulate artificial intelligence, a model that Australian Human Rights Commissioner Lorraine Finlay says Australia could consider.

Human Rights Commissioner Lorraine Finlay is worried AI is moving faster than regulation. (ABC News: Mark Moore)

“Australia really needs to be part of that global conversation to make sure that we’re not waiting until the technology goes wrong and until there are harmful impacts, but we’re actually dealing with things proactively,” Ms Finlay says.

The commissioner has been working with Australia’s big banks on testing their AI processes to remove bias during the loan application decision process.

“We’d be particularly concerned with respect to home loans, for example, that you could have disadvantage in terms of people from lower socio-economic areas,” she explains.

“There could be racial discrimination, disability discrimination, gender discrimination.”

She says that however banks decide to use AI, it’s crucial they start disclosing it to customers and make sure “there’s always a human in the loop”.

The horror stories that emerged during the banking royal commission came down to people making bad decisions that left Australians with too much debt and led to them losing their homes and businesses. 

If a machine made bad decisions that had disastrous consequences, who would the responsibility fall on? It’s a major question facing the banks.

“Don’t just have a machine making final decisions that will have a really significant impact on people’s lives,” Ms Finlay advises.

Anthropic Wants Its AI Agent to Control Your Computer
https://thomson158reuters.servehalflife.com/anthropic-wants-its-ai-agent-to-control-your-computer/
Tue, 22 Oct 2024 15:00:35 +0000


Demos of AI agents can seem stunning, but getting the technology to perform reliably and without annoying (or costly) errors in real life can be a challenge. Current models can answer questions and converse with almost humanlike skill, and are the backbone of chatbots such as OpenAI’s ChatGPT and Google’s Gemini. They can also perform tasks on computers when given a simple command by accessing the computer screen as well as input devices like a keyboard and trackpad, or through low-level software interfaces.

Anthropic says that Claude outperforms other AI agents on several key benchmarks including SWE-bench, which measures an agent’s software development skills, and OSWorld, which gauges an agent’s capacity to use a computer operating system. The claims have yet to be independently verified. Anthropic says Claude performs tasks in OSWorld correctly 14.9 percent of the time. This is well below humans, who generally score around 75 percent, but considerably higher than the current best agents—including OpenAI’s GPT-4—which succeed roughly 7.7 percent of the time.

Anthropic claims that several companies are already testing the agentic version of Claude. This includes Canva, which is using it to automate design and editing tasks, and Replit, which uses the model for coding chores. Other early users include The Browser Company, Asana, and Notion.

Ofir Press, a postdoctoral researcher at Princeton University who helped develop SWE-bench, says that agentic AI tends to lack the ability to plan far ahead and often struggles to recover from errors. “In order to show them to be useful we must obtain strong performance on tough and realistic benchmarks,” he says, such as reliably planning a wide range of trips for a user and booking all the necessary tickets.

Kaplan notes that Claude can already troubleshoot some errors surprisingly well. When faced with a terminal error when trying to start a web server, for instance, the model knew how to revise its command to fix it. It also worked out that it had to enable popups when it ran into a dead end browsing the web.

Many tech companies are now racing to develop AI agents as they chase market share and prominence. In fact, it might not be long before many users have agents at their fingertips. Microsoft, which has poured upwards of $13 billion into OpenAI, says it is testing agents that can use Windows computers. Amazon, which has invested heavily in Anthropic, is exploring how agents could recommend and eventually buy goods for its customers.

Sonya Huang, a partner at the venture firm Sequoia who focuses on AI companies, says for all the excitement around AI agents, most companies are really just rebranding AI-powered tools. Speaking to WIRED ahead of the Anthropic news, she says that the technology works best currently when applied in narrow domains such as coding-related work. “You need to choose problem spaces where if the model fails, that’s okay,” she says. “Those are the problem spaces where truly agent native companies will arise.”

A key challenge with agentic AI is that errors can be far more problematic than a garbled chatbot reply. Anthropic has imposed certain constraints on what Claude can do—for example, limiting its ability to use a person’s credit card to buy stuff.

If errors can be avoided well enough, says Press of Princeton University, users might learn to see AI—and computers—in a completely new way. “I’m super excited about this new era,” he says.

Sotheby’s to auction its first artwork made by a humanoid robot
https://thomson158reuters.servehalflife.com/sothebys-to-auction-its-first-artwork-made-by-a-humanoid-robot/
Mon, 21 Oct 2024 21:38:19 +0000


Sotheby’s later this month hopes to make the auction house’s first ever sale of an artwork made by a humanoid robot. 

Ai-Da, a humanoid robot artist, is contributing “AI God,” a portrait of Alan Turing, the mathematician and computer scientist considered to be the progenitor of modern computing, to what Sotheby’s calls a “digital art day” auction. Turing is also credited with providing some of the earliest insights into what is now referred to as “artificial intelligence.”

The 64 x 90.5 inch mixed-media painting, which was created this year and is signed “A” by Ai-Da, is estimated to fetch between $120,000 and $180,000, according to a listing on Sotheby’s website. The auction opens on Oct. 31. 

Sotheby’s estimates that the painting, “A.I. God, Portrait of Alan Turing,” by a humanoid robot dubbed Ai-Da, could attract bids of up to $180,000 when it goes up for auction on Oct. 31, 2024.

Sotheby’s


The Ai-Da robot, who is depicted as female, is a project created by U.K.-based art dealer and gallery owner Aidan Meller. The robot can draw and paint using cameras in her eyes, AI algorithms and a robotic arm.

A robotic first

“What makes this work of art different from other AI-generated works is that with Ai-Da there is a physical manifestation, and this is the first time a work from a robot of this type has ever come to auction,” Meller told CBS MoneyWatch. 

The auction also highlights the advent of AI in society, he added.

“There is a lot of innovation happening — a huge number of robots are coming forward — and they will eventually do all sorts of different tasks. Art is a way of discussing the incredible changes in society that are happening because of technology,” Meller said. 

Meller said the proceeds from the sale will be reinvested in the Ai-Da project, which is costly to power.

Ai-Da, the world’s first robot artist, paints portraits of the headline music acts in an exhibition during the Glastonbury Festival on June 23, 2022, in Glastonbury, England.

Leon Neal/Getty Images


“Ai-Da’s portrait joins a selection of cutting-edge works that — in their individual ways — push the boundaries of artistic creation today. Together, they prompt a discussion of how we can appreciate and experience the ever-evolving possibilities around artmaking in the 21st century,” Michael Bouhanna, Sotheby’s Head of NFT and digital art, said in a statement.

Even in the notoriously opaque and fickle art market, however, valuing AI-generated works could be a challenge, and more difficult than determining the market worth of works by human artists.

‘I miss reading an actual book’: Screen time and AI use far exceeds guidelines amid calls to rethink bans
https://thomson158reuters.servehalflife.com/i-miss-reading-an-actual-book-screen-time-and-ai-use-far-exceeds-guidelines-amid-calls-to-rethink-bans/
Tue, 15 Oct 2024 19:26:57 +0000


It’s no secret that Australians are grappling with the rapid rise of digital technology, whether it’s excessive screen times at home, the inability to tell truth from fiction online, or the intractable impact of artificial intelligence (AI) on institutions and jobs.

Nearly everybody has more technology in their lives than they would like, and the barrier to entry is not only getting lower but also younger, with eSafety data showing a majority of children are online before the age of four.

However, while official guidelines recommend children aged five to 17 limit recreational, non-school screen time to two hours per day, many parents, students and teachers who responded to an ABC callout about the education system suggested such guidelines are out of touch and do not reflect reality.

While screen time in excess of two hours a day is common, others say it’s just the tip of the iceberg, with the widespread use of AI to cheat, and interminable use of devices at home and in class, now just part of everyday life.

“Teachers claim to have AI detectors, but many people I know write essays and assessments with AI and still get top marks,” said 16-year-old Tasmanian private school student Jessica, who asked to use a pseudonym.

“Some teachers even use AI to write lesson plans or check for cheating, which kind of betrays the point of not using it and sets a bad example.”

But with digital technologies and AI only becoming more entrenched, some experts say that rather than cracking down on use, it might be time for a change.

‘I miss reading an actual book, drawing a proper picture’

Recent data shows a rise in children spending leisure time on screens. (Flickr: Wayan Vota; licence)

Several call-out submissions spoke of students running one original essay through AI multiple times to obscure copied assignments.

On the topics of screen time and AI-assisted plagiarism, students like Jessica pointed to contradictory policies of requiring kids to have their own devices, or relying on online programs to complete assignments, while also trying to impose bans.

“With devices being ‘personal’, nothing really can be done to stop AI use and screen time,” she said.

“AI usage is becoming an increasingly bigger part of [the problem]. And while I don’t agree with it, I don’t know what — if anything — can be done; by teachers, schools or the Department of Education.”

Other respondents said kids were foregoing sleep, using devices late at night and sometimes “waking up at dawn to get online”.

Mary, a Melbourne-based high school teacher with experience in both private and public schools who also asked to use a pseudonym, said excessive use of devices was common.

“A young student told me they’re waking up at 6am — two hours early — to work on their ‘snap streak’,” she said.

“That blew my mind.”

The educator said there were regular occurrences of “students watching Netflix in class, AirPods tucked behind hair”.

On AI, Mary said some students viewed it as “just another hack” to save time, similar to watching a film version of a set text instead of reading the book.

She said students often told her they didn’t have enough time to do assignments from scratch, but she believed they lacked time and focus due to unrestrained screen time at home.

Like many other parents, Dicle Demirkol, a mother based in Melbourne’s north-west, said she was not “against social media and the internet as they can be powerful tools” so long as kids were informed about their role in modern life, and that the education system evolved to meet that need effectively.

Melbourne parent Dicle was among many concerned about increased screen time. (Supplied)

“As the world changes rapidly, so do our needs. With the internet and the constant exchange of (mis)information, I wonder if what [schools] offer will stay relevant in the future. 

“I don’t think we’re fully prepared.”

However, rules intended to limit tech use and ban copy-and-paste plagiarism through AI are pointless, according to some students and educators.

In order to speak freely without fear of being penalised by schools, employers or the community, several teachers and students — like Jessica and Mary above — requested some level of anonymity.

Young people use social media to connect with friends. (ABC News: Nethma Dandeniya)

“No matter how many different sites our school blocks, there is always something we can get to: YouTube, checking emails, reading news, playing Tetris… there’s never only one thing going on in the classroom,” one year 12 student in Queensland said.

“I miss using pen and paper, reading an actual book, drawing a proper picture.

“I feel as if I am teaching myself, and simply being supervised by a teacher.”

The student reiterated her traditional view quite simply: “Students should not be taught by a computer”.

Teachers told the ABC that unrestrained screen time and misuse of AI was ubiquitous, regardless of institution or school policy.

“Back in my day it would have been passing notes — this is well beyond that,” Mary said. 

“They’re not just chatting, they’re actually completely distracted.” 

Several tutors, often tasked with picking up lost school hours, also expressed concerns about “irresponsible use of technology” amid noticeable declines in attention spans, handwriting, spelling, and claims of reduced critical thinking skills.

But with modern life increasingly tech-dependent, many are divided on whether the answer is more bans and a return to low-tech teaching, or rapidly embracing its use, albeit with better-understood frameworks. 

‘It’s the nature of the task, not the tools’

Experts suggest a more considered treatment of the role of technology in modern society needs to be adopted. (AAP: Paul Miller)

Paul Haimes is a Perth-born associate professor of design at Ritsumeikan University in Japan who teaches a number of Australian exchange students — and he says the rapid adoption of AI caught everyone off guard.

“I, like many of my colleagues, was caught completely off guard by the sudden arrival of publicly-available AI applications like ChatGPT,” he told the ABC.

Associate professor Paul Haimes says he is one of many educators caught off guard by the rise of technology and AI. (Supplied)

“The reality of course though is that AI is here to stay, and schools and universities need to quickly figure out what the legitimate uses are, and provide clear guidance to both teaching staff and students.

“At the very least, AI shouldn’t be used to undermine the objectives of a course or curriculum, but if there are ways that it could be utilised to support students’ learning that isn’t just a lazy shortcut for them, then it might be worth considering.

AI chatbots like ChatGPT are being widely used by both students and teachers. (AP Photo: Michael Dwyer)

“Given the different types of assessment out there, the specifics are likely something best addressed at the course level, in line with a department or school’s other policies.”

For educators, AI has the potential to help in several ways, like assisting with repetitive administrative tasks, or helping design assessments and lesson plans.

Multiple teachers told the ABC the tools had already been a big help.

Education professor at Curtin University, Karen Murcia, underlined that electronic devices are crucial to modern life, and that it is important to be “reflective and transparent” about their use and potential, as well as that of AI.

“We have to think more widely than simply ‘screens’ when we talk about digital environments and impacts on children’s development,” she said.


Professor Karen Murcia is an expert in children’s engagement with digital technologies. (Supplied: Twitter)

“By withdrawing children from devices and the digital world, we might be denying them their basic rights, if we’re not empowering them with critical foundation skills for digital citizenship.”

She said it was important to accept technology can make aspects of traditional assignments redundant, and that AI can achieve things that no teacher in a physical classroom with 30 students can do, like providing tailored 24/7 tutoring and support.

“The question for me is, what is the nature of the assessments we’re giving to students? Are we asking them to be creative and innovative?” she said.

“It’s the nature of the task, not the tools that they’re using.”


There are practical steps parents can take to help children maintain a healthy relationship with screens. (Pexels: Andrea Piacquadio)

Asked whether screen time and sedentary behaviour guidelines were out of touch with modern expectations, a spokesperson for the Department of Health and Aged Care said limiting sedentary activities was “essential for overall health and wellbeing” and to “reduce the risk of chronic disease”.

They noted that “the guidelines themselves acknowledge that meeting the recommendations may be challenging at times”, but the important aspect was to “ensure a healthy balance”, and find opportunities to be physically active whenever possible.

“Schools, school systems and teachers share a responsibility in how and when to use these tools,” the spokesperson said, with reference to the December 2023 framework for AI in schools. 

“Individual states and territories, and non-government school sectors, are responsible for rolling out the framework … [which] will be reviewed at least every 12 months so that it keeps pace with developments.”

There are also practical steps, released by the eSafety Commissioner, that parents can take to help kids maintain a healthy relationship with screens, while ensuring they get enough sleep and exercise.

Commissioner Julie Inman Grant told the ABC that “there really is no magic number” when it comes to how long you should let your child be on screens.

“It can be easy to focus only on the clock, but the quality and nature of what they are doing online, and your involvement, are just as important,” she advised.

“If it starts to get in the way of their sleep or their ability to get outside for fresh air and exercise, or if it starts impacting face-to-face connections with family and friends, then it might be time to sit down with your child to come up with a plan to strike a more healthy balance of online and offline activities.”

She emphasised it was important to do this together “as young people are more likely to respond to rules that they have helped come up with”.

“And make sure as a parent you are setting a good example,” the eSafety Commissioner added.

“It’s no good telling your child to get off screens if you’re sitting looking at your phone at the dinner table.”



The post ‘I miss reading an actual book’: Screen time and AI use far exceeds guidelines amid calls to rethink bans first appeared on Thomson 158 Reuters.

Demis Hassabis: The video gamer and knight who became a Nobel Laureate | World News – Times of India
Thu, 10 Oct 2024 09:55:55 +0000
Demis Hassabis, CEO of DeepMind Technologies, the AI division behind Gemini, poses for a photo at the Google DeepMind offices in London, Wednesday, Oct. 9, 2024, after being awarded the Nobel Prize in Chemistry. (AP Photo/Alastair Grant)

Demis Hassabis, Britain’s recent Nobel Prize winner in Chemistry, has a remarkable background that bridges the worlds of video games and artificial intelligence. Unlike most teenagers, who typically spend their time playing video games, Hassabis spent his formative years developing them. His big break came in 1994 when he co-designed the hit game Theme Park, a simulation where players manage amusement parks.
Born in London to a Greek Cypriot father and a Singaporean mother, Hassabis excelled academically. He earned a double first in computer science from Cambridge University, later launching his own video game company. His intellectual curiosity led him to pursue a PhD in cognitive neuroscience, which became a stepping stone to co-founding the AI startup DeepMind. This venture was eventually acquired by Google for £400 million in 2014.
Now 48 years old, Hassabis holds the position of Chief Executive at Google DeepMind, the company’s advanced AI unit. He was knighted in the 2024 New Year Honours for his services to artificial intelligence, and his work has culminated in the Nobel Prize, which he shares with colleague John Jumper and US academic David Baker. Their award stems from groundbreaking work in using AI to predict and design the structure of proteins, a significant achievement in both the scientific and AI communities.
Hassabis has long emphasised the connection between gaming and artificial intelligence, describing games as an entryway to the world of AI. His early passion for chess, coupled with a fascination for chess computers, sparked ideas about how machines learn, which would later influence his pioneering work in AI. His startup, DeepMind, achieved global recognition by building AI systems that outperformed humans in complex games like Go, chess, and StarCraft II.
Beyond gaming and research, Hassabis’s expertise is highly valued by governments and thought leaders. In 2020, he was called upon to provide advice to the UK government’s Scientific Advisory Group for Emergencies during the Covid-19 pandemic. His reputation as a forward-thinking AI expert led figures like Dominic Cummings and Tony Blair to seek his counsel on national and global matters.
As a leader in Google’s AI research, Hassabis is at the forefront of the global AI race, with major US tech companies such as Google, Meta, OpenAI, and Microsoft competing for breakthroughs. While deeply involved in advancing AI technologies, Hassabis is acutely aware of the risks. In recent years, he has voiced concern over the potential dangers of AI, likening its risks to those of pandemics and nuclear war. In 2023, he signed a statement urging global leaders to treat the threat of uncontrolled AI systems as seriously as they treat other existential challenges, such as climate change.
However, Hassabis remains optimistic about AI’s potential for good. The Nobel Prize itself is a testament to this, particularly his work with AlphaFold, an AI system that accurately predicts protein structures. This development has far-reaching implications for medical research and drug development, showcasing the profound ways in which AI can benefit society.
Despite his caution regarding AI’s dangers, Hassabis continues to champion its capacity to address complex global problems. His leadership in both AI innovation and ethical considerations ensures that he will remain a key figure in the future of technology.


