Chatbot Research and Design: 4th International Workshop, CONVERSATIONS 2020, Virtual Event, November 23–24, 2020, Revised Selected Papers (Lecture Notes in Computer Science, 12604)


Tuesday 13 September 2022


Theoretical Framework and Hypotheses Development

The Effect of Chatbot Disclosure on Social Presence, Trust and Attitude Toward the Online Retailer

Humans and chatbots have different capabilities [37]; consequently, humans differ in how they perceive and conduct an interaction with a chatbot compared to another human [19]. This makes transparency about the nature and limitations of this technology an important issue for academics and practitioners studying human–chatbot interactions. Some scholars argue that chatbots should be upfront about their machine status [28], because disclosure helps limit users' expectations of the system and avoids the negative consequences of users failing to recognize chatbots' limitations. According to a recent study, however, disclosing artificial agents has some negative effects that seem to be driven by a subjective human bias against machines [25]. Studies show that people prefer human employees to be replaced by other humans rather than by machines, robots, or new technologies, which negatively influences their overall attitude toward AI [15], defined as an evaluative response including both cognitive and affective components [30]. Moreover, prior research has argued that, compared to trust in humans, people tend by default to have less trust in AI — that is, following the definition of trust, less belief in the competence, dependability, and security of the system under conditions of risk [21]. This may be partly explained by the high media attention given to instances in which AI went wrong [36]. A common prejudice is that chatbots lack personal feelings and empathy and are less trustworthy [10] and less pleasant [37] than humans. Thus, on the one hand, if companies explicitly disclose an agent's artificial identity, they might not capture the full business value of AI chatbots due to customer resistance [25].
On the other hand, customers should have the right to know whether a bot or a human handles their communications, for moral and ethical reasons, especially if such differentiation leads to diverging perceptions and outcomes. A recent study tested the causal impact of disclosing a voice-based bot on customer purchases and call length [25]. The results show that when customers know the conversational partner is not a human, they are brusque, shorten the conversation, and purchase less. Kim and Sundar [20] were among the first to argue that if an agent is presumed to be operated by a machine, users are more likely to evaluate the quality of the agent's performance based on their preexisting perceptions, regardless of the agent's actual performance. Over the years, other studies have investigated the different perceptions users hold when they chat – or believe they are chatting – with a human rather than with an artificial agent. These studies confirmed the preference for humans, even when the counterpart believed to be a human is in fact a bot [6, 31]. More specifically, Murgia et al. [31] found that a bot answering users' questions on a social website was regarded more positively when posing as a human than when explicitly revealing its bot identity. In Corti and Gillespie [6], users were more likely to expend effort in making themselves understood when the agent's chat content was conveyed through a human than through an artificial text-based interface. Similarly, Sundar et al. [39] showed that participants were more willing to recommend a website to others when it provided a human chat agent rather than a chatbot, even though the same chatting protocol was used to communicate with all participants in both conditions. According to some authors [e.g. 2], perceptions of the conversational agent may be influenced by how the agent is introduced before the conversation.
Making users believe that they are engaging with a fully autonomous agent when the agent is in reality human-controlled, or priming users to believe that they are engaging with a real person when they are in reality interacting with an agent, are common practices in experimental HCI (Human–Computer Interaction) studies [6]. This priming effect was found to considerably influence subsequent general perceptions of the agent and, in particular, social presence, a construct at the heart of the HCI literature that represents the "degree of salience of the other person in the interaction" [35, p. 65]. According to Etemad-Sajadi and Ghachem [8], social presence is particularly relevant in online business contexts because it creates the feeling of employees' presence and improves the customer experience in a retail interaction. On these premises, and in line with past studies in which explicit disclosure of an agent's artificial identity negatively affected users' perceptions of the interaction and the system, we expect participants to perceive lower levels of social presence, trust, and attitude toward the online retailer in the disclosed chatbot condition than in the undisclosed chatbot condition.

H1. Users perceive lower levels of social presence in the online retailer when the chatbot identity is disclosed compared to when it is undisclosed.

H2. Users perceive lower levels of trust in the online retailer when the chatbot identity is disclosed compared to when it is undisclosed.

H3. Users hold a less positive attitude toward the online retailer when the chatbot identity is disclosed compared to when it is undisclosed.


Related books:

2467375 – Abhishek Singh, Karthik Ramasubramanian, Shrey Shivam. Building an Enterprise Chatbot: Work with Protected Enterprise Data Using Open Source Frameworks [1st ed.]. ISBN 978-1-4842-5033-4, 978-1-4842-5034-1. Apress, 2019. XXII, 385 [399] pp. English, pdf, 6 MB.

2487320 – Asbjørn Følstad, Theo Araujo, Symeon Papadopoulos, Effie Lai-Chong Law, Ole-Christoffer Granmo, Ewa Luger, Petter Bae Brandtzaeg (Eds.). Chatbot Research and Design: Third International Workshop, CONVERSATIONS 2019 (Amsterdam, The Netherlands, November 19–20, 2019): Revised Selected Papers [1 ed.] (Lecture Notes in Computer Science 11970). ISBN 3030395391, 9783030395391. Springer, 2020. 288 [279] pp. English, pdf, 13 MB.

3036495 – Rachel Batish. Voicebot and Chatbot Design: Flexible conversational interfaces with Amazon Alexa, Google Home, and Facebook Messenger. ISBN 1789139627, 9781789139624. Packt Publishing, 2018. 296 [345] pp. English, pdf, 25 MB.

3094135 – Asbjørn Følstad, Theo Araujo, Symeon Papadopoulos, Effie L.-C. Law, Ewa Luger, Morten Goodwin, Petter Bae Brandtzaeg (Eds.). Chatbot Research and Design: 4th International Workshop, CONVERSATIONS 2020, Virtual Event, November 23–24, 2020, Revised Selected Papers [1st ed. 2021] (Lecture Notes in Computer Science, 12604). ISBN 3030682870, 9783030682873. Springer, 2021. 231 pp. English, epub, 7 MB.
