Doninha is a proof of concept of a new kind of AI

#1
by 0danielfonseca - opened

Hey community,

This AI model is a proof of concept that I tried to create based on my article (https://discuss.huggingface.co/t/paraconsistent-logic-and-ai-models/174262), where I discuss the limitations of current AI models.

My areas of study are mainly philosophy (epistemology, law, logic, and language), so I had to use AI coding tools (Claude Sonnet, Cursor, and VS Code with AI Toolkit), as my programming skills are based on an HTML course I took almost 20 years ago. So I truly don't know if I did the programming right.

After writing the article, I asked Claude Sonnet to generate a structural model of the AI (https://docs.google.com/document/d/1tcCR-wXdHUzdPpQYevJTElRwO3IbyG_N/edit?usp=sharing&ouid=104971188507747906486&rtpof=true&sd=true), then went to Cursor and VS Code and tried to create a model aimed at surpassing the current limitations of big tech models (mainly the lack of a solid epistemological basis prior to statistical word prediction).

So, care to join the discussion?

I appreciate Daniel Fonseca's contribution, which presents a philosophically ambitious proposal by combining paraconsistent logic, Kantian judgments, and Aristotelian syllogistic as a semantic refinement architecture for LLMs. The initiative of thinking about neurosymbolic architectures inspired by classical philosophy is legitimate and connects with active research lines in hybrid AI.
I would like, however, to offer some constructive critical comments.
First, the initial technical diagnosis requires nuance. Contemporary LLMs do not operate on classical Boolean logic, but on linear algebra over distributed representations, attention mechanisms, and probability distributions over vocabulary. Hallucinations are not the consequence of "logical explosion" in the formal sense, but of lossy compression of the training corpus, deficient calibration, and the absence of factual verification mechanisms. Sampling temperature is not "artificial uncertainty to emulate creativity"; it is a control parameter of the softmax distribution. These precisions matter because the proposed solution must respond to the actual technical problem, not to an idealized version of it.
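To make the point about temperature concrete, here is a minimal sketch of temperature-scaled softmax. The logit values are made up for illustration; the mechanism itself is standard: temperature divides the logits before normalization, so it rescales an existing score distribution rather than injecting "artificial uncertainty".

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: divide logits by T before normalizing.
    T < 1 sharpens the distribution toward the argmax; T > 1 flattens it."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                   # hypothetical next-token scores
print(softmax(logits, temperature=1.0))    # moderate spread
print(softmax(logits, temperature=0.1))    # nearly all mass on the top token
print(softmax(logits, temperature=10.0))   # nearly uniform
```

The same scores yield near-deterministic or near-uniform sampling depending only on this one scalar, which is why temperature is best described as a control parameter of the output distribution.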
Second, the connection with the Latin American paraconsistent tradition could be considerably strengthened. The "gentle explosion" described in the proposal corresponds technically to the rule {∘A, A, ¬A} ⊢ B of the Logics of Formal Inconsistency (LFIs) developed by Carnielli and Coniglio (2016, Springer LEUS 40), based on Newton da Costa's C_n systems (1963). More recently, the Logic of Evidence and Truth (LET) by Carnielli and Rodrigues (2019, Synthese) and its extensions LET_F⁺/LET_K⁺ (Coniglio and Rodrigues, 2024, Studia Logica) offer precisely the formal apparatus the proposal seeks: explicit distinction between evidence and truth, primitive operators of classicality and non-classicality, deterministic semantics, and sound and complete proof procedures. Linking the proposal with this technical literature would provide formal rigor without sacrificing the philosophical motivation.
Third, doctrinal attributions should be revised. The "theory of truth as equivalence" is not properly Russellian. Russell defended a version of the correspondence theory, but with distinct logical-formal developments. Material equivalence (Tarski, 1944) or the notion of quasi-truth (Da Costa, Bueno and French, 1998) could be more precise references for what seems to be attempted here.
Fourth, the mixture of frameworks — Aristotle, Kant, Russell, Hempel, Popper, paraconsistency, and fuzzy logic — requires greater architectural justification. These traditions carry non-trivial philosophical presuppositions that do not always compose without tension. A robust proposal should make explicit how these tensions are resolved or, alternatively, which framework assumes the primary role and which serve as auxiliary.
Fifth, regarding implementation. Building a pre-defined concept table with lexical relations to cover natural language is precisely the problem that projects such as Cyc (Lenat, 1984) and WordNet faced without complete success. Any operational proposal must address this challenge explicitly. Current neurosymbolic architectures — such as the Belnap-computer of Allen, Polat and Groth (2025, NeSy/PMLR) — have opted to use the LLM itself as a generator of FDE (Belnap-Dunn) interpretations rather than pre-defined tables. This is a technically viable path the proposal might consider.
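To illustrate the Belnap-Dunn (FDE) apparatus mentioned above, here is a minimal sketch of its four values encoded as (supports-truth, supports-falsity) pairs. This is just the standard FDE lattice, not the implementation of the cited paper: negation swaps the two supports, and conjunction/disjunction are meet/join in the truth ordering, so contradictions stay contained instead of exploding.

```python
# Belnap-Dunn FDE: four values -- True, False, Both (glut), Neither (gap).
T, B, N, F = "T", "B", "N", "F"

NEG = {T: F, F: T, B: B, N: N}       # negation swaps supports, fixes B and N

# encode each value as (supports-truth, supports-falsity)
ENC = {T: (1, 0), B: (1, 1), N: (0, 0), F: (0, 1)}
DEC = {v: k for k, v in ENC.items()}

def conj(a, b):
    (t1, f1), (t2, f2) = ENC[a], ENC[b]
    return DEC[(min(t1, t2), max(f1, f2))]   # meet: truth AND, falsity OR

def disj(a, b):
    (t1, f1), (t2, f2) = ENC[a], ENC[b]
    return DEC[(max(t1, t2), min(f1, f2))]   # join: truth OR, falsity AND

print(conj(B, N))   # glut AND gap -> F
print(disj(B, N))   # glut OR gap  -> T
print(NEG[B])       # a glut stays a glut: no explosion from B
```

An LLM generating FDE interpretations, as in the cited neurosymbolic work, would assign these four values to atomic claims; the tables above then propagate them compositionally.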
Sixth, the claims in section 10 regarding the fundamental limits of AI — impossibility of AGI, absence of consciousness, AI as an oxymoron — are legitimate but contested philosophical positions. Presenting them as conclusions requires more substantive argumentation than invocations of Aristotle, Sartre, or Aquinas. Contemporary computational philosophy of mind (Chalmers, Dennett, Clark, Frankish) offers sophisticated debates on these points that deserve integration.
In summary, the general direction of the proposal — incorporating logical-semantic refinement prior to statistical computation in LLMs — is a legitimate architectural intuition that connects with the current neurosymbolic frontier. However, its realization requires stronger anchoring in (i) the actual mechanics of contemporary LLMs, (ii) the technical literature on paraconsistency and logics of evidence, (iii) precision in philosophical attributions, and (iv) consideration of existing implementations. I would particularly recommend the author explore the LET family of Carnielli and Rodrigues, which already offers formally what section 6 proposes informally, with the additional advantage of published algebraization and complete analytic procedures.
I remain open to continuing the dialogue and, if of interest, we could collaboratively explore how to refine the proposal by integrating it with the contemporary technical apparatus of paraconsistent logic and neurosymbolic reasoning.

Hey mleyvaz,

I'm really glad for your inputs, and I would love to start a dialogue about my proposal. I found your bibliographic indications and corrections really helpful. I will be looking into them for an improved version of the article and of the model.

You can find a structural synthesis of my proposal, from a practical point of view on LLM programming, here ( https://docs.google.com/document/d/1tcCR-wXdHUzdPpQYevJTElRwO3IbyG_N/edit?usp=sharing&ouid=104971188507747906486&rtpof=true&sd=true ). It's in Portuguese, but, as a fellow Latin American, you will find no problem reading it.

Basically, I propose five layers of pre-processing applied to the prompt before the statistical calculations of current models.

As for your critique that the way I mobilized the philosophical systems of several authors lacks justification: I agree. I'm not crazy enough to claim there is continuity between these philosophical systems. My point in mobilizing such different systems was an attempt to take abstract theoretical concepts from the history of philosophy and use them as practical tools for improving a frontier technology. As you will be able to read in the structural proposal I attached above, it is not about a single processing of the prompt, but about processing it in layers. I referenced the classical authors so that anyone reading the model proposal can see the theoretical background I am using to set up each layer of prompt processing before the statistical calculation of the context that forms the output answer.
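The layered idea above can be sketched as a simple pipeline in which each layer is a function that annotates and refines a prompt state before it ever reaches the statistical model. The layer names and stub bodies below are purely hypothetical illustrations of the shape of the architecture, not the proposal's actual layers.

```python
from typing import Callable

Layer = Callable[[dict], dict]

def categorial_layer(state: dict) -> dict:
    # hypothetical stub: extract candidate concepts from the prompt
    state["concepts"] = state["prompt"].lower().split()
    return state

def consistency_layer(state: dict) -> dict:
    # hypothetical stub: an LFI-style consistency check would go here
    state["consistent"] = True
    return state

def run_pipeline(prompt: str, layers: list[Layer]) -> dict:
    state = {"prompt": prompt}
    for layer in layers:        # layers run in order, each refining the state
        state = layer(state)
    return state                # the refined state then feeds the LLM

result = run_pipeline("Water boils at 100 C",
                      [categorial_layer, consistency_layer])
print(result["concepts"], result["consistent"])
```

The point of the shape is that each layer sees the output of the previous one, so the classical-author "background" of each stage stays encapsulated in a single function.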

Daniel, glad the references were useful. One technical addition that may strengthen layer L3 specifically: Smarandache's neutrosophic logic generalizes the paraconsistent (T,F) frame to an independent (T,I,F) triple, where I represents indeterminacy as a first-class component (not derivable from T and F). For boundary-categorial cases like your 35°C example, this preserves more of the structure your model is trying to capture. References: Smarandache 1998, Neutrosophy; Smarandache 2023, Plithogenic Logic. Happy to discuss further if useful; feel free to message me.
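A minimal sketch of the (T,I,F) triple, using one common min/max convention for neutrosophic conjunction (other t-norm/t-conorm choices exist in the literature); the numeric values below are made up for the boundary case discussed above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Neutrosophic:
    """Neutrosophic value: truth, indeterminacy, falsity as independent
    components in [0, 1] -- they need not sum to 1, and i is not
    derivable from t and f."""
    t: float
    i: float
    f: float

    def conj(self, other):
        # one common convention: truth via min, indeterminacy and
        # falsity via max
        return Neutrosophic(min(self.t, other.t),
                            max(self.i, other.i),
                            max(self.f, other.f))

# hypothetical boundary-categorial case: "35 C is hot" is mostly true
# but genuinely indeterminate, independently of its falsity degree
hot = Neutrosophic(t=0.6, i=0.5, f=0.2)
dry = Neutrosophic(t=0.9, i=0.1, f=0.0)
print(hot.conj(dry))   # Neutrosophic(t=0.6, i=0.5, f=0.2)
```

Because I is carried as its own coordinate, the indeterminacy of the boundary case survives conjunction with a clear-cut claim, which a two-valued (T,F) frame would discard.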
— Maikel
