DeepMind created an IQ test for AI, and it didn't do too well DeepMind幫人工智慧做智力測驗,結果差強人意

 

Reposted by 精英翻譯 from http://iservice.ltn.com.tw/Service/english/english.php?engno=1225883&day=2018-08-20

 

◎劉宜庭

 

AI has gotten pretty good at completing specific tasks, but it’s still a long way from having general intelligence. One of the key elements of general intelligence is abstract reasoning — the ability to think beyond the "here and now" to see more nuanced patterns and relationships and to engage in complex thought.

 

人工智慧在完成特定任務上已經成果斐然,但在擁有一般智力上還有很長的一段路要走。「一般智力」的關鍵要素之一是「抽象推理」;這種能力可以超越「此時此地」進行思考,領會到更多具有細微差別的模式及關聯,並參與複雜思維。

 

On July 11th, researchers at DeepMind — a Google subsidiary focused on artificial intelligence — published a paper detailing their attempt to measure various AIs’ abstract reasoning capabilities.

 

7月11日,聚焦人工智慧的Google子公司DeepMind研究團隊發表一篇論文,詳述他們如何嘗試評估各種人工智慧的抽象推理能力。

 

In humans, we measure abstract reasoning using fairly straightforward visual IQ tests. One popular test, called Raven’s Progressive Matrices, features several rows of images with the final row missing its final image. To apply this test to AIs, the DeepMind researchers created a program that could generate unique matrix problems.

 

在人類世界裡,我們使用相當易懂的視覺化智力測驗評估抽象推理。其中一種受歡迎的測驗名為「瑞文氏圖形推理測驗」,特徵是在數行圖像的最後一行、最後一格留下空白。為了將這套測驗應用於人工智慧,DeepMind研究團隊設計了一套可以生成獨一無二矩陣問題的程式。

 

The results of the test weren’t great. Ultimately, the team’s AI IQ test shows that even some of today’s most advanced AIs can’t figure out problems we haven’t trained them to solve.

 

這項實驗的結果並不好。最終,該團隊的「人工智慧智力測驗」結果顯示,就連當今最先進的人工智慧,也無法解出我們從未訓練它解決過的問題。
