Artificial Intelligence as a pragmatic metavocabulary in Robert Brandom
DOI: https://doi.org/10.26512/rfmc.v13i2.55078
Keywords: Artificial Intelligence. Frame Problem. Robert Brandom. Explainable Artificial Intelligence.
Abstract
Classical Artificial Intelligence has a foundational place in Brandom's work: the practices whose mastery constitutes the possession of a vocabulary are the application of a series of algorithms. Making these algorithms explicit provides an explication of Brandom's project of bringing the inferential commitments implicit in our practices into the game of giving and asking for reasons. This project fails for a reason well known in AI: the frame problem. Brandom proposes a solution to the frame problem through learning by training, a proposal that comes close to neural networks developed through machine learning. While this approach does not allow us to maintain the Brandomian framework of Between Saying and Doing, it aligns the project of Making It Explicit with that of Explainable Artificial Intelligence, namely, making explicit the implicit inferential commitments of the decision-making processes that affect our common life.
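To make vivid why the frame problem blocks a purely algorithmic reading of discursive practice, the following minimal Python sketch shows the problem in its classical form (McCarthy; Hayes, 1969). The sketch is illustrative only, not from the article; the world model, actions, and names are invented for the example.

    # Toy illustration of the frame problem in classical symbolic AI.
    # All facts, actions, and names are invented for this example.

    # World state as a set of atomic facts.
    state = {"robot_in_A", "box_in_A", "light_on"}

    # Effect axioms: what each action adds and removes.
    EFFECTS = {
        "move_to_B":    {"add": {"robot_in_B"}, "remove": {"robot_in_A"}},
        "toggle_light": {"add": set(),          "remove": {"light_on"}},
    }

    # Frame axioms, written out by hand: for each action, every fact it
    # leaves untouched must be declared unchanged. With F facts and A
    # actions, on the order of F x A such declarations are needed.
    FRAME = {
        "move_to_B":    {"box_in_A", "light_on"},
        "toggle_light": {"robot_in_A", "robot_in_B", "box_in_A"},
    }

    def apply(state, action):
        """Update the state using effect axioms plus frame axioms."""
        eff = EFFECTS[action]
        persisted = state & FRAME[action]  # kept only because declared inert
        return (persisted | eff["add"]) - eff["remove"]

    print(apply(state, "move_to_B"))
    # e.g. {'robot_in_B', 'box_in_A', 'light_on'} (set order may vary)

In any realistic world model the FRAME table grows roughly as facts times actions, and deciding which facts are even relevant to an action cannot itself be written out in advance; this is the combinatorial difficulty that, on the reading above, defeats the algorithmic elaboration of our practices.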
References
ABRAHÃO, F. S. et al. An algorithmic information distortion in multidimensional networks. In: Studies in Computational Intelligence. Cham: Springer International Publishing, 2021. p. 520–531.
ARJOVSKY, M. Out of distribution generalization in machine learning. arXiv preprint arXiv:2103.02667, 2021.
BARTH, C. O “frame problem”: a sensibilidade ao contexto como um desafio para teorias representacionais da mente. Dissertação (Mestrado) — Belo Horizonte: Faculdade de Filosofia e Ciências Humanas, Universidade Federal de Minas Gerais, 2019.
BARTH, C. É possível evitar vieses algorítmicos? Revista de Filosofia Moderna e Contemporânea, v. 8, n. 3, p. 39–68, 2021.
BARTH, C. Representational cognitive pluralism: towards a cognitive-science of relevance-sensitivity. Tese (Doutorado) — Belo Horizonte: Faculdade de Filosofia e Ciências Humanas, Universidade Federal de Minas Gerais, 2024.
BRANDOM, R. B. Between saying and doing: towards an analytic pragmatism. Oxford: Oxford University Press, 2008.
BRANDOM, R. B. Making it explicit: reasoning, representing and discursive commitment. Cambridge, MA: Harvard University Press, 1994.
BUCKNER, C. J. From deep learning to rational machines: what the history of philosophy can teach us about the future of artificial intelligence. Oxford: Oxford University Press, 2023.
CHRISTIAN, B. The alignment problem. New York: W. W. Norton & Company, 2020.
CHURCHLAND, P. S. Neurophilosophy: toward a unified science of the mind-brain. Cambridge, MA: MIT Press, 1989.
CLARK, A. Local associations and global reason: Fodor’s frame problem and second-order search. Cognitive Science Quarterly, n. 2, p. 115–140, 2002.
CUPANI, A. Filosofia da tecnologia. Florianópolis: Editora UFSC, 2011.
DEHAENE, S. How we learn. London: Penguin Books, 2021.
DENNETT, D. Cognitive wheels: the frame problem of AI. In: PYLYSHYN, Zenon W. (ed.). The robot’s dilemma: the frame problem in artificial intelligence. Norwood, NJ: Ablex, 1987. p. 41–64.
DREYFUS, H. What computers still can’t do. Cambridge, MA: MIT Press, 1992.
GEIRHOS, R. et al. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. arXiv preprint arXiv:1811.12231, 2019.
GOODMAN, B.; FLAXMAN, S. European Union regulations on algorithmic decision making and a “right to explanation”. AI Magazine, v. 38, n. 3, p. 50–57, 2017.
GUNNING, D. et al. XAI—Explainable artificial intelligence. Science Robotics, v. 4, n. 37, 2019.
HASELAGER, W. F. G.; RAPPARD, J. F. H. V. Connectionism, systematicity, and the frame problem. Minds and Machines, v. 8, p. 161–179, 1998.
HURLEY, S. Making sense of animals. In: HURLEY, Susan; NUDDS, Matthew (eds.). Rational animals? Oxford: Oxford University Press, 2006. p. 139–171.
LIU, J. et al. Towards out-of-distribution generalization: a survey. arXiv preprint arXiv:2108.13624, 2021.
LONGO, L. et al. Explainable artificial intelligence (XAI) 2.0: a manifesto of open challenges and interdisciplinary research directions. arXiv preprint arXiv:2310.19775, 2023.
MCCARTHY, J.; HAYES, P. J. Some philosophical problems from the standpoint of artificial intelligence. Machine Intelligence, v. 4, p. 463–502, 1969.
MILLER, T. Explanation in artificial intelligence: insights from the social sciences. Artificial Intelligence, v. 267, p. 1–38, 2019.
MINSKY, M. A framework for representing knowledge. In: HAUGELAND, John (ed.). Mind design II: philosophy, psychology, artificial intelligence. Cambridge, MA: MIT Press, 1997. p. 111–142.
NANDA, N. et al. Progress measures for grokking via mechanistic interpretability. arXiv preprint arXiv:2301.05217, 2023.
PERINI-SANTOS, E. Desinformação, negacionismo e a pandemia. Filosofia Unisinos, v. 23, n. 1, p. 1–15, 2022.
PERINI-SANTOS, E. Viver é fácil de olhos fechados: fake news, negacionismo e teorias da conspiração. Belo Horizonte: Editora UFMG, 2025.
PHILLIPS, P. J. et al. Four principles of explainable artificial intelligence. National Institute of Standards and Technology (U.S.), 2021.
PICCININI, G. Physical computation: a mechanistic account. Oxford: Oxford University Press, 2015.
POURSABZI-SANGDEH, F. et al. Manipulating and measuring model interpretability. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21). New York: ACM, 2021.
RUDIN, C. Stop explaining black box machine learning models for high-stakes decisions and use interpretable models instead. arXiv preprint arXiv:1811.10154, 2018.
RYLE, G. ‘If’, ‘So’, and ‘Because’. In: BLACK, Max (ed.). Philosophical analysis. Ithaca, NY: Cornell University Press, 1950. p. 323–340.
SAMUELS, R. Classical computational models. In: SPREVAK, Mark; COLOMBO, Matteo (eds.). The Routledge handbook of the computational mind. London: Routledge, 2018. p. 103–119.
SCHANK, R. C.; ABELSON, R. P. Scripts, plans, goals and understanding. Hillsdale, NJ: Lawrence Erlbaum Associates, 1977.
SCHARP, K. A. Scorekeeping in a defective language game. Pragmatics & Cognition, v. 13, n. 1, p. 203–226, 2005.
SHANAHAN, M. Solving the frame problem: a mathematical investigation of the common sense law of inertia. Cambridge, MA: MIT Press, 1997.
SMITH, B. C. The promise of artificial intelligence: reckoning and judgment. Cambridge, MA; London: MIT Press, 2019.
STINSON, C. Explanation and connectionist models. In: SPREVAK, Mark; COLOMBO, Matteo (eds.). The Routledge handbook of the computational mind. London: Routledge, 2018.
License
Copyright (c) 2025 Journal of Modern and Contemporary Philosophy

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Copyright for articles published in this journal is retained by the authors, with first publication rights granted to the journal. By virtue of their appearance in this open access journal, articles are free to use, with proper attribution, in educational and other non-commercial settings.