Panagiota Theodoni

Deep learning systems lack the human-like ability to acquire knowledge.

NYC, January 22, 2023

Deep learning models exhibit astounding intellectual capacity. Their responses demonstrate that they possess several cognitive functions, such as attention, working memory, cognitive inhibition, decision-making and reasoning. Additionally, they exhibit flexibility in their reasoning and in the language they use to express it. Having artificial systems with cognitive capacities can reveal aspects of human cognition that would otherwise be difficult to study. Artificial neural networks can help explore how cognitive representations and processes are implemented by neural circuits in the brain: they can provide insights that are technically prohibitive to garner from experiments, and they can help develop and evaluate entire spectra of biologically plausible and implausible neuro-computational mechanisms of cognitive function. Furthermore, artificial neural systems can increasingly surpass human cognition on specific tasks, showing us its limitations as well as its unexplored possibilities.

However, artificial neural networks such as deep learning models still lack certain fundamental human cognitive capacities. We would like to highlight capacities that pertain to a fundamental cognitive function, namely knowledge acquisition. For example, humans often rely on meta-cognition and creativity to generate novel insights and acquire knowledge. When told we have made an error in a mathematical computation, we are able to identify why we made that miscalculation. This meta-cognitive process is central to refining and improving our knowledge base. The existence of an entire mathematical system built on imaginary numbers underscores another uniquely human ability: to create concepts that lie outside the distribution of natural experience. To reach human-level cognition, deep learning systems would need to be originally creative, a capacity that is difficult to attain for systems that acquire knowledge solely within the distribution of the data they are trained on. Thinking outside of the data is difficult to achieve in an artificial system.

Another example pertains to the acquisition of knowledge via the distribution of the training data. While deep learning models are remarkably adept at learning from massive text corpora, the knowledge they extract from them is limited by the expressive power of language. The acquisition of human knowledge requires a medium of communication, language, together with human-like experience to contextualize what is being communicated. Otherwise, the acquired knowledge is limited by the expressive power of the medium alone. Language by itself is insufficient to completely capture human experience. We argue that a system trained on language, but which bypasses experience, cannot genuinely understand that experience. When asked why a human should be happy, ChatGPT, the most advanced language model today, responds with a list of utilitarian reasons. Yet it fails to mention the most experiential aspect of happiness: that it is a positive, and therefore desirable, experience to have. This inability to reason through a fundamentally experiential aspect of emotion highlights the system's failure to understand what it means to be human.

Another way to see that deep learning systems do not genuinely understand what they have been trained on is to examine their reasoning through logic. For instance, if you ask ChatGPT whether the sentence “Aristotle is a feeling” is correct, it will answer no: based on the data it is trained on, men do not appear as feelings. However, if you present ChatGPT with a valid argument like “Aristotle is happy, happy is a feeling, therefore Aristotle is a feeling”, it reasons that the syllogism is wrong because Aristotle cannot be a feeling. This conflates what is formally correct (the conclusion follows from the premises by syllogism) with what is factually implausible according to its own training data (that Aristotle is a feeling). We argue that a lack of experience can result in such conflations.
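
For readers who want the syllogism's form spelled out, here is a minimal Lean sketch, assuming every “is” in the argument is read as a single transitive relation over one domain; the identifiers `Is`, `aristotle`, `happy`, and `feeling` are illustrative, not drawn from any real formalization:

```lean
-- Sketch: treat every "is" as one transitive relation over a single domain.
variable {Term : Type} (Is : Term → Term → Prop)
variable (aristotle happy feeling : Term)

-- Under that reading, "Aristotle is happy" and "happy is a feeling"
-- entail "Aristotle is a feeling" by transitivity alone.
example
    (htrans : ∀ a b c : Term, Is a b → Is b c → Is a c)
    (h1 : Is aristotle happy)
    (h2 : Is happy feeling) :
    Is aristotle feeling :=
  htrans aristotle happy feeling h1 h2
```

Under this reading the validity lives entirely in the form; ChatGPT's objection targets the content of the conclusion instead, which is exactly the conflation described above.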

Accumulated experience over the course of our lives provides us with the capacity to understand language. Meta-cognition and creativity afford us the capacity to reason within and beyond our experience. Deep learning systems lack human-like experience, as well as the cognitive functions to reason within and beyond it, which stymies their ability to genuinely acquire knowledge and achieve human-like cognition. And it is not yet clear how they could do so otherwise.

