Showing common sense
This is one of the most prominent problems for artificial intelligence. Displaying common sense requires weighing many factors, such as credibility and depth, and emulating this ability is incredibly difficult. Even with the machine learning side of AI, this faculty simply cannot be fully displayed.
The main problem stems from the fact that common-sense perceptions are largely individualistic, and this is difficult for AI to emulate because this kind of technology is not bound by the norms of human behavior or by the social factors and circumstances that shape them (such as religion or philosophy, for example). Common sense can also be terribly wrong at times. Given that there are too many criteria for AI to factor in, whatever it comes up with has yet to make perfect sense.
This is why research into commonsense knowledge is the focus of the Allen Institute for Artificial Intelligence (AI2). The institute is funded by Paul Allen, a Microsoft co-founder, in the hope of achieving scientific breakthroughs by “constructing AI systems with reading, learning and reasoning capabilities.”
When it comes to perceiving visual information, the differences between how AI functions and how a human brain processes the same image are immense. Artificial intelligence employs convolutional neural networks, which are loosely based on the principles of how the human brain operates, but many other steps, such as pre-processing and applying filters, must be performed manually when they are incorporated into AI systems.
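To make the contrast concrete, here is a minimal sketch of what "applying a filter manually" means: a hand-specified 3x3 edge-detection kernel slid over an image. The kernel values and the `convolve` helper are illustrative choices, not part of any particular system; in a convolutional neural network, the filter weights would instead be learned from data.

```python
# Illustrative hand-written edge-detection filter; its weights sum to zero,
# so it responds only where neighboring pixel values differ (i.e. edges).
KERNEL = [[-1, -1, -1],
          [-1,  8, -1],
          [-1, -1, -1]]

def convolve(image, kernel):
    """Slide the kernel over the image (no padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            # Weighted sum of the patch under the kernel.
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# A uniform image contains no edges, so the filter responds with all zeros.
flat = [[1] * 5 for _ in range(5)]
print(convolve(flat, KERNEL))
```

Every choice here, from the kernel values to the stride, is made by a human. This is the kind of design decision the visual cortex handles effortlessly and automatically, but which must be engineered explicitly in machine vision pipelines.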
Furthermore, the exact mechanisms by which the brain handles visual perception are still not completely understood, so they are challenging to replicate in vision-related AI projects. Some research has shown that visual perception is an ongoing process in which our eyes constantly search for visual input. This complex process involves many other factors, including subconscious ones, that artificial intelligence systems at the moment simply cannot grasp.
According to Murray Shanahan, Professor of Cognitive Robotics at Imperial College London, one way to improve AI systems in this respect is to combine deep learning with the symbolic descriptions of Good Old-Fashioned Artificial Intelligence (GOFAI). He believes that this approach could give artificial intelligence systems a starting point for understanding the world, rather than simply supplying them with data and waiting for them to observe patterns, and it would also help us understand how AI reaches its conclusions in the first place.
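The hybrid idea can be sketched in a toy form, with the caveat that this is a simplified illustration of the general neuro-symbolic pattern, not Shanahan's actual proposal: a perception stage (here stubbed in place of a learned model) emits symbolic facts, and hand-written rules reason over them. The `perceive` function, the threshold, and the rule set are all hypothetical.

```python
# Toy neuro-symbolic pipeline: perception produces symbols, rules reason
# over them. Because the rules are explicit, the system's conclusion can
# be traced back to the rule that fired -- the interpretability benefit
# of combining symbolic descriptions with learned perception.

def perceive(pixels):
    """Stand-in for a learned model mapping raw input to symbolic facts.
    The brightness threshold (10) is an arbitrary illustrative choice."""
    return {"red", "round"} if sum(pixels) > 10 else {"dark"}

# Each rule pairs a condition over the fact set with a conclusion.
RULES = [
    (lambda facts: {"red", "round"} <= facts, "apple"),
    (lambda facts: "dark" in facts, "unknown"),
]

def reason(facts):
    for condition, conclusion in RULES:
        if condition(facts):
            return conclusion  # the matched rule explains the answer
    return None

print(reason(perceive([5, 4, 3])))  # bright input -> "apple"
```

In a pure deep-learning system, the path from pixels to "apple" is buried in millions of weights; here, the symbolic layer makes each inference step inspectable, which is precisely the transparency Shanahan's hybrid approach aims for.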