Abstract
As the technological agents we interact with take on increasing autonomy and responsibility, acting on our behalf, there are prospects for social changes in our notions of responsibility for our own actions, as well as for the development of ‘cognitive calluses’ in our interactions with such technologies. Agents that we interact with assist us in tasks such as search, scheduling, and organizing the information we access, among other parts of our lives, so appropriate design considerations are desirable in order to respect human wholeness and build humanely interfaced social agents (Dautenhahn & Nehaniv 1999). The impact of technology on human modes of existence, experience, and relationships is changing who we are, and how we interact and relate with one another, along a broad collection of dimensions that should be considered by the designers and users of technology (Gorayska et al. 1997). Yet most software agents today have no consciously designed affective communication skills or, if they do, often display inappropriate affect to the user and are unable to support the high ‘affective bandwidth’ present in human face-to-face (but not e-mail) communication (e.g. Picard 1997). Moreover, while human cognition may be fundamentally structured to deal with temporal grounding in terms of stories and narrativity (Schank & Abelson 1977, 1995; Schank 1990; Read & Miller 1995), software agents today tend to lack any semblance of temporal grounding: they merely react to the user on the basis of little or no information about what has happened in the past (Dautenhahn & Nehaniv 1998), or behave in a (e.g. strangely discontinuous) manner that is not believable to our human narrative intelligence (Sengers 1998, 1999).
Original language | English
---|---
Title of host publication | Procs 3rd Int Conf on Cognitive Technology: Networked Minds (CT'99)
Pages | 313-322
Publication status | Published - 1999