On Computational Mechanisms for Shared Intentionality, and Speculation on Rationality and Consciousness

3 Jun 2023 · John Rushby

A singular attribute of humankind is our ability to undertake novel, cooperative behavior, or teamwork. This requires that we can communicate goals, plans, and ideas between the brains of individuals to create shared intentionality. Using the information processing model of David Marr, I derive necessary characteristics of basic mechanisms to enable shared intentionality between prelinguistic computational agents and indicate how these could be implemented in present-day AI-based robots. More speculatively, I suggest the mechanisms derived by this thought experiment apply to humans and extend to provide explanations for human rationality and aspects of intentional and phenomenal consciousness that accord with observation. This yields what I call the Shared Intentionality First Theory (SIFT) for rationality and consciousness. The significance of shared intentionality has been recognized and advocated previously, but typically from a sociological or behavioral point of view. SIFT complements prior work by applying a computer science perspective to the underlying mechanisms.

