It’s one thing to speak of the capabilities of artificial intelligence: the ability to manage a particular set of logistics, or to assemble and manufacture a specific set of goods. But it’s something else entirely to view AI as a conduit through which we can pour our wealth of human knowledge and wisdom.
There is a move in the field of AI to grow it into something more than a collection of facts governed by the rigid rules of a given body of information. Experts are working to advance the development of what they call “knowledge-based systems”. However, it may be more accurate to view these systems as “wisdom-based”, since the goal is to instill the AI with reasoning and decision-making that emulates a human expert’s ability to think critically through difficult situations in a given field. Just how competent AI will be at thinking and reasoning is still up for debate, but the possibilities appear hopeful, at least to those in favor.
How It Can Help
One beneficiary of knowledge-based systems would be the medical field. Already, AI is used to help determine how likely a patient is to have a heart attack or a stroke. AI is also used to diagnose patients, with the physician’s opinion taking precedence, of course. But where AI has fallen short in medicine so far is in its inability to explain why its diagnosis differed from the healthcare professional’s – a shortcoming that has deepened human distrust of AI.
Equipping AI with knowledge-based reasoning would allow the system to assess a situation, patient, or illness and provide a detailed, justified answer or diagnosis. The work being done in this field aims to help AI emulate the wisdom one earns by spending years in a professional niche, and in turn, make accurate, personalized decisions possible.
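To make the idea concrete, here is a minimal sketch of a knowledge-based rule system that records which rules fired, so every conclusion comes with an explanation. The rules, thresholds, and patient fields below are hypothetical illustrations, not real clinical criteria or any system described above.

```python
# Hypothetical diagnostic rules: each entry is (finding, condition, reason).
# The reason string is what lets the system explain *why* it reached a
# conclusion - the missing piece the article points to.
RULES = [
    ("possible hypertension",
     lambda p: p["systolic_bp"] >= 140,
     "systolic blood pressure at or above 140 mmHg"),
    ("elevated cardiac risk",
     lambda p: p["age"] >= 55 and p["smoker"],
     "age 55 or older combined with smoking history"),
]

def assess(patient):
    """Return (findings, explanations) so every finding is traceable."""
    findings, explanations = [], []
    for label, condition, reason in RULES:
        if condition(patient):
            findings.append(label)
            explanations.append(f"{label}: {reason}")
    return findings, explanations

# Example: a 60-year-old smoker with systolic BP of 150 trips both rules,
# and the explanations show exactly which knowledge produced each finding.
findings, explanations = assess({"systolic_bp": 150, "age": 60, "smoker": True})
```

Because the knowledge lives in inspectable rules rather than opaque weights, a physician can see precisely which piece of encoded expertise drove each conclusion and push back on it.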
Where It Works Now
AI systems have been tested to help propel businesses forward, though only under the close watch and input of trained experts and engineering professionals. Most AI requires a significant amount of engineering input and theoretical hand-holding to operate at anything close to the level of critical thinking provided by experts in a given field. But an experiment by Steven Gustafson at Maana showed that expert knowledge helped an AI learn a likeness model mimicking the experience-based knowledge of a domain, the kind needed to parse the specific details of an engineering project.
By giving the engineering expert the opportunity to correct the AI and feed those corrections back into the system, the AI learned to differentiate, choose, and refine a custom measure of likeness used in clustering – something akin to critical thinking and reasoning.
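The feedback loop described above can be sketched in miniature. The following is a hypothetical illustration, not Maana’s actual method: an expert labels pairs of items as “belongs together” or not, the system adjusts per-feature weights in its likeness measure accordingly, and the learned measure then drives a simple clustering pass.

```python
def similarity(a, b, weights):
    """Weighted likeness: share of feature weight on matching features."""
    total = sum(weights)
    if total == 0:
        return 0.0
    matched = sum(w for x, y, w in zip(a, b, weights) if x == y)
    return matched / total

def learn_weights(labeled_pairs, n_features, lr=0.5, epochs=20):
    """Adjust feature weights from expert corrections.
    labeled_pairs: (item_a, item_b, same) where same is True when the
    expert says the two items belong together."""
    weights = [1.0] * n_features
    for _ in range(epochs):
        for a, b, same in labeled_pairs:
            for i in range(n_features):
                agree = (a[i] == b[i])
                # Reinforce features whose agreement matches the expert's
                # verdict; down-weight (never below zero) those that don't.
                if agree == same:
                    weights[i] += lr
                else:
                    weights[i] = max(0.0, weights[i] - lr)
    return weights

def cluster(items, weights, threshold=0.5):
    """Greedy clustering: join the first cluster whose representative
    is similar enough under the learned measure, else start a new one."""
    clusters = []
    for item in items:
        for c in clusters:
            if similarity(item, c[0], weights) >= threshold:
                c.append(item)
                break
        else:
            clusters.append([item])
    return clusters

# Hypothetical parts with features (material, vendor, part_family).
items = [("steel", "acme", "bolt"),
         ("steel", "globex", "bolt"),
         ("plastic", "acme", "clip")]
# Expert corrections: the two bolts belong together despite different
# vendors; the bolt and the clip do not, despite sharing a vendor.
pairs = [(items[0], items[1], True), (items[0], items[2], False)]
weights = learn_weights(pairs, n_features=3)
groups = cluster(items, weights)
```

The expert never writes a rule saying “vendor doesn’t matter”; the system infers it from corrections, which is the kind of tacit, experience-based judgment the article is describing.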