Computer scientists are questioning whether Alphabet’s DeepMind will ever make A.I. more human-like (CNBC) - I contributed a few comments to this piece, highlighted here. Received a few messages from peers asking "isn't this a bit too harsh?". Short answer: No Sir, it isn't too harsh. Please don't take everything so seriously - especially me, giant corporations, yourself, AI and God.

Not everyone is convinced, however. Samim Winiger, an AI researcher in Berlin, told CNBC that DeepMind’s “reward is enough” view is a “somewhat fringe philosophical position, misleadingly presented as hard science.”

He said the path to general AI is complex and that the scientific community is aware that there are countless challenges and known unknowns that “rightfully instill a sense of humility” in most researchers in the field and prevent them from making “grandiose, totalitarian statements” such as “RL is the final answer, all you need is reward.”

“In somewhat typical DeepMind fashion, they chose to make bold statements that grab attention at all costs, over a more nuanced approach,” said Winiger. “This is more akin to politics than science.”

Winiger argues that we’re no closer to AGI today than we were several decades ago. “The only thing that has fundamentally changed since the 1950/60s, is that science-fiction is now a valid tool for giant corporations to confuse and mislead the public, journalists and shareholders,” he said.

#ML #Comment