Large language models, neuro-symbolic AI and first-order logic: Musings from a knowledge representation viewpoint
Description
In this article, we informally review the logical reasoning abilities of large language models, and argue that these abilities suggest the models are neither deep thinkers nor an existential threat in the sense of a rogue autonomous agent. Sample conversations are included to illustrate their shallowness. Their impact on democracy and our sanity by way of misinformation is, of course, another matter. We conclude by arguing for a return to well-established areas that unify logic and learning, such as statistical relational learning and neuro-symbolic AI, which provide frameworks for clear behavior specification. We note, nonetheless, that these frameworks have yet to embrace full first-order logical features, which could be an exciting avenue for future research.
Files
LLMs_and_logician___SRLNeSy_FOL_-3.pdf
154.9 kB · md5:c80eefe0cfd4fb07d81d9cc9493c6662