Studia Humana (SH) is a multi-disciplinary, peer-reviewed journal publishing valuable
contributions on any aspect of the human sciences, such as...


Petros Stefaneas

Dr Petros Stefaneas is an Associate Professor in the Department of Mathematics of the National Technical University of Athens, Greece. His research interests include logic and formal methods for computer science and the philosophy of computation.




Ambiguity in Argumentation: The Impact of Contextual Factors on Semantic Interpretation

Issue: 11:3/4 (the forty-third/forty-fourth issue)
This article is concerned with the concept of ambiguity in argumentation. Ambiguity in linguistics lies in the coexistence of two possible interpretations of an utterance, and the role of contextual factors and background/encyclopedic knowledge within a specific society appears to be crucial. From a systemic point of view, Halliday has proposed three main language functions (metafunctions): a) the ideational function, b) the interpersonal function, and c) the textual function. Language can thus reflect the speaker’s experience of the external and internal world, interpersonal relationships, and the organization of text, respectively. Lexico-grammatical choices at the micro-level, together with context (the environment of language), may lead to inconsistent interpretations through semantic or syntactic ambiguities. In philosophy and argumentation logic, strategies of ambiguity have been investigated since Aristotle and the first sophistic movement. In his Topics, Metaphysics and Rhetoric, Aristotle points out the notion of “τὸ διττῶς / διχῶς λεγόμενον”, meaning that a term can have different senses and admit a double interpretation. In this paper we discuss how the meaning of an utterance in dialogue is reconstructed through the mechanism of interpretation, and how ambiguities are analyzed and construed, combining the insights of argumentation theory and text linguistics. The results show that, in cases of misunderstanding, the “best interpretation” is the least defeasible one according to contextual presumptions.
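The idea that the "best interpretation" is the least defeasible one under contextual presumptions can be sketched computationally. The following is a hypothetical illustration only, not the paper's formal model: the function name, the presumption labels, and the scoring of a reading by how many contextual presumptions it violates are all assumptions made for this example.

```python
# Hypothetical sketch (not the paper's model): pick the interpretation
# of an ambiguous utterance that violates the fewest presumptions
# active in the current context, i.e. the "least defeasible" reading.

def best_interpretation(readings, context):
    """`readings` maps each candidate reading to the set of presumptions
    it violates; `context` is the set of presumptions currently in force.
    Returns the reading defeated by the fewest active presumptions."""
    return min(readings, key=lambda r: len(readings[r] & context))

# Classic lexical ambiguity: "bank" in a conversation about finance.
readings = {
    "financial institution": set(),          # violates nothing here
    "river bank": {"topic:finance"},         # clashes with the topic
}
context = {"topic:finance"}
print(best_interpretation(readings, context))  # -> financial institution
```

With an empty context the two readings tie and `min` simply returns the first one, so a fuller model would need a tie-breaking policy; the example only shows how contextual presumptions can rank competing interpretations.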

Argumentation-Based Logic for Ethical Decision Making

Issue: 11:3/4 (the forty-third/forty-fourth issue)
As automation in artificial intelligence increases, a growing amount of ethical decision making will need to be automated. However, ethical decision making raises novel challenges for engineers, ethicists and policymakers, who will have to explore new ways to realize this task. The present work focuses on the development and formalization of models that aim at ensuring correct ethical behaviour of artificial intelligent agents, in a provable way, by extending and implementing a logic-based proving calculus based on argumentation reasoning with support and attack arguments. This leads to a formal theoretical framework of ethical competence that could be implemented in artificial intelligent systems in order to formalize certain parameters of ethical decision making and to ensure safety and justified trust.
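The argumentation reasoning mentioned in the abstract can be illustrated with a minimal Dung-style abstract argumentation framework. This is a simplified sketch, not the authors' calculus: it models only the attack relation (the calculus described above also uses support arguments) and computes the grounded extension by iterating the characteristic function from the empty set.

```python
# Minimal Dung-style abstract argumentation sketch (illustrative only,
# not the authors' proving calculus). Arguments are atoms; `attacks` is
# a set of (attacker, target) pairs.

def attackers(a, attacks):
    """All arguments that attack `a`."""
    return {b for (b, t) in attacks if t == a}

def grounded_extension(arguments, attacks):
    """Iterate the characteristic function F(S) = {a | every attacker of a
    is itself attacked by some member of S} until a fixpoint is reached.
    Unattacked arguments are accepted vacuously."""
    extension = set()
    while True:
        acceptable = {
            a for a in arguments
            if all(attackers(b, attacks) & extension
                   for b in attackers(a, attacks))
        }
        if acceptable == extension:
            return extension
        extension = acceptable

# a attacks b, b attacks c: a is unattacked, b is defeated, c is
# reinstated because its only attacker b is defeated by a.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```

The grounded extension is the most skeptical acceptable set; richer semantics (preferred, stable) and a support relation would be needed to approach the framework the abstract describes.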