Facebook debacle reveals sensitivities around AI

A recent incident involving Facebook has raised serious security concerns around the use of smart robots; however, insiders are playing down these concerns as “irresponsible”.

From robo-advice in financial services to customer service robots in shopping centres, robotics and artificial intelligence are taking a great leap forward. But how secure is that leap?

In scenes reminiscent of the famous Terminator movies, an artificial intelligence program being developed by social media giant Facebook was reportedly abandoned after it began developing and communicating in its own language.

A spokesperson for Facebook was tight-lipped, telling My Business that there would be no official comment on the issue.

However, in a Facebook post on the matter, one of the social network’s researchers, Dhruv Batra, suggested there was no substance to the doomsday narrative behind the global PR debacle, labelling the commentary around the incident as “clickbaity and irresponsible”.

“While the idea of AI agents inventing their own language may sound alarming/unexpected to people outside the field, it is a well-established subfield of AI, with publications dating back decades,” Mr Batra wrote.

“Simply put, agents in environments attempting to solve a task will often find unintuitive ways to maximise reward. Analysing [sic] the reward function and changing the parameters of an experiment is NOT the same as ‘unplugging’ or ‘shutting down AI’. If that were the case, every AI researcher has been ‘shutting down AI’ every time they kill a job on a machine.” 
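To illustrate Mr Batra’s point, the following is a minimal, hypothetical sketch in Python (not Facebook’s actual system, and every name and number in it is invented): an agent rewarded only for task success, with nothing in the reward encouraging human-readable English, can drift into a repetitive shorthand. Adjusting the reward function is the routine fix, rather than “unplugging” anything.

"""
Toy sketch of reward maximisation drifting away from natural language.
Purely illustrative; all vocabulary, rewards and parameters are invented.
"""
import random

VOCAB = ["i", "want", "the", "ball", "to", "me", "you", "book", "hat"]
TARGET_ITEMS = 4  # pretend the agent needs to claim four items in a negotiation


def reward(message):
    # The reward measures only how well the message encodes the desired
    # quantity; nothing rewards the message for sounding like human English.
    claimed = message.count("ball") + message.count("me")
    return -abs(claimed - TARGET_ITEMS)


def mutate(message):
    # Randomly swap one token for another from the vocabulary.
    new = list(message)
    new[random.randrange(len(new))] = random.choice(VOCAB)
    return new


def optimise(steps=5000, length=6, seed=0):
    # Simple hill climbing: keep any mutation that does not lower the reward.
    random.seed(seed)
    best = [random.choice(VOCAB) for _ in range(length)]
    for _ in range(steps):
        candidate = mutate(best)
        if reward(candidate) >= reward(best):
            best = candidate
    return best


if __name__ == "__main__":
    msg = optimise()
    print(" ".join(msg), "| reward:", reward(msg))
    # Typical output is a repetitive jumble of "ball"/"me" tokens: maximally
    # rewarded, but no longer readable English. Changing the reward function
    # (for example, adding a fluency penalty) is how researchers address this.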
