Open access peer-reviewed chapter

Personality for Virtual Assistants: A Self-Presentation Approach

Written By

Jeff Stanley

Submitted: 29 May 2023 Reviewed: 30 May 2023 Published: 22 June 2023

DOI: 10.5772/intechopen.1001934

From the Edited Volume

Advanced Virtual Assistants - A Window to the Virtual Future

Ali Soofastaei


Abstract

Self-presentation is a sociological line of research relating concrete human behaviors, perceived personality traits, and social interaction goals. According to self-presentation, people engineer their own attributes such as behavior, clothing, and speech to try to affect how they are perceived and to accomplish social goals. Recent studies demonstrate that principles from self-presentation apply not only to how humans interact with other humans but also to how humans interact with machines. Therefore, the rich body of self-presentation research can inform virtual assistant personality and behavior. That is, if a virtual assistant is trying to accomplish x, it can express personality trait y by doing z. In this chapter, I introduce self-presentation and discuss how it provides a data-driven approach to designing and assuring virtual assistant personality.

Keywords

  • personality
  • social interaction
  • impression management
  • risk
  • methods

1. Introduction

Imagine you are sitting down for a job interview by video chat. The paper in front of you has your scribbled answers to some likely questions, answers about similar jobs you have done before, how much you love the company, how you share their ideals. Just a few names to drop. “Our team received the highest rating,” you recite to yourself. It sounds wrong. You lower your voice and raise it again until you find the right pitch, volume, and speed.

You glance around the room. The computer’s camera points at your bookshelf – for some reason, right at a ragged copy of Gone with the Wind sandwiched among a few high school yearbooks. Shaking your head, you hastily swap them out for some hot new literature on virtual assistants. Sitting down again, you check your clothes for wrinkles and pat your hair to make sure it’s still in place. You button your top button and unbutton it. Glasses on or off? On. You take a deep breath, and you feel your posture relax. Having fine-tuned many aspects of your behavior and appearance both consciously and unconsciously, you are finally ready. You press the button. And you smile.

Now for the twist (not entirely unexpected given the topic of this book): You are a virtual assistant! You gaze out through the screen and the prospective user gazes back. When they see you and your room, hear you and your answers, what will their impression be?

1.1 Self-presentation and impression management

In 1956, sociologist Erving Goffman released a handbook applying principles from theater to everyday interactions, The Presentation of Self in Everyday Life [1], jump-starting a new approach to social analysis. He proposed: “When an individual appears in the presence of others, there will usually be some reason for him to mobilize his activity so that it will convey an impression to others which it is in his interests to convey” (3). He coined the term impression management for the techniques of mobilizing this activity, such as arranging books conspicuously to emphasize education or expertise (26). Upon Goffman’s foundation, others built and demonstrated taxonomies of impression management tactics and strategies (e.g., [2, 3, 4]). Typically, these taxonomies map tactical behaviors (arranging books) to an impression goal or strategic effect on the observer (raising perceived education level) to facilitate a larger task goal (getting the job).

The self-presenter is referred to as the actor, while someone observing the presentation can be referred to as a target. The actor engages in a performance in the theatrical sense of the word; but since the term performance is already overloaded when it comes to software, I will stick to the word presentation instead throughout this chapter.

1.2 Self-presentation and machines

Meta-analysis of human-machine interaction research indicates that self-presentation effects apply when humans assess machines [5], making self-presentation a viable approach for virtual assistants. Unlike other methods attempting to adapt personality research to computers [6], self-presentation leverages a rich tradition that is and always has been goal-oriented. When designers create a personality for a virtual assistant, they want to foster some particular impression in the mind of the user, depending on the application and situation. Thanks to self-presentation, a body of research already exists connecting behaviors and characteristics directly to impressions. Better still, many self-presentation characteristics that are hard for humans to consciously control, like facial expression, voice quality, posture, and even body size ([1], pp. 15, 116, 138), are no problem for virtual assistants, allowing more comprehensive execution of tactics.

Critically in Goffman’s characterization, self-presentation is not a one-way phenomenon. All participants have a vested interest in keeping the presentation going whether they buy into it or not, as it provides a scaffold for productive social interactions. This aspect of self-presentation applies well to virtual assistants, which claim a role and depend on human partners playing along, not only with that role but also with the brash idea that machines can play a role traditionally filled by humans.

2. Fundamentals of self-presentation

2.1 Stages of self-presentation

Goffman defines at least three stages of the self-presentation process ([1], pp. 4–5), which for the sake of discussion I have adapted to virtual assistants:

  1. Pre-contact: The prospective user starts to build an impression based on any known information about the virtual assistant.

  2. Initial contact: The user establishes an impression based on how the virtual assistant appears and acts.

  3. Continued contact: The user refines expectations based on additional interactions. The virtual assistant must build upon and not contradict previous impressions.

2.2 Impression management strategies

One influential impression management taxonomy focuses on emphasizing different characteristics to elicit emotions that may lead to favorable outcomes from the target [2]. It defines five strategies with example behaviors. Informed by reports from human-machine interaction studies, [5] augmented and adapted this framework especially for machines. The strategies from that adapted framework are below, refined for virtual assistants (a short code sketch encoding the taxonomy follows the list):

  • Exemplification: Emphasizing the virtual assistant’s common values with the user and adherence to principles. The impression goal is to be seen as having integrity. In human-machine interaction studies, exemplification increases trust in the assistant and commitment to task.

    Example: A virtual assistant for clinical trials paperwork is portrayed in images helping people with disabilities [7].

  • Ingratiation: Expressing appreciation and benevolence for the user. The impression goal is to be seen as friendly. Increases liking, trust, and willingness to do favors for the virtual assistant.

    Example: A movie recommender assistant compliments users on their choices [8].

  • Self-promotion: Emphasizing the virtual assistant’s abilities. The impression goal is to be seen as competent. Also increases trust in the virtual assistant.

    Example: A virtual assistant for packing a bag insists its capabilities are unique and superior to humans’ [9].

  • Supplication: Emphasizing the virtual assistant’s vulnerabilities. The impression goal is to be seen as needy. Increases support to and engagement with the virtual assistant.

    Example: A virtual assistant gives incorrect responses to elicit engagement from students and improve learning objectives [10].

  • Self-effacement: Downplaying the virtual assistant’s abilities. Typified by hedging language, this strategy can be employed to lower expectations and counter the effects of self-promotion. Self-effacement can be valuable for calibrating the user’s trust in the assistant’s outputs. In practice, the main difference between supplication and self-effacement is that supplication grabs attention while self-effacement lowers expectations.

    Example: An assistant explains it has limited understanding of its own responses [11].

  • Personification: Emphasizing a virtual assistant’s “personhood” by assigning it a name, gender, voice, etc. Personified assistants might use nonverbal language to seem more humanlike and might rely on fabricated backstories (claiming to have humanlike hobbies, for example). Personification is not exactly the same as anthropomorphization since persons can be animals, imaginary beings, and other nonhuman entities. Personification increases trust. This strategy is unique to machines since humans are usually assumed to be valid persons and social partners.

    Example: A virtual assistant nods its head while the user is speaking [12].
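
To make the mapping explicit, the taxonomy above can be written down as a small machine-readable structure that designers could query or extend. The Python sketch below is one minimal way to do that; the field names and the condensed lists of effects and behaviors are my own summary of the descriptions above, not an established schema.

  from dataclasses import dataclass
  from typing import List

  @dataclass
  class Strategy:
      """One impression management strategy from the adapted framework in [5]."""
      name: str
      impression_goal: str          # how the assistant wants to be seen
      reported_effects: List[str]   # effects reported in human-machine interaction studies
      example_behaviors: List[str]  # concrete behaviors that express the strategy

  STRATEGIES = [
      Strategy("exemplification", "having integrity",
               ["trust", "commitment to task"],
               ["emphasize shared values", "adhere to principles"]),
      Strategy("ingratiation", "friendly",
               ["liking", "trust", "willingness to do favors"],
               ["express appreciation", "compliment the user"]),
      Strategy("self-promotion", "competent",
               ["trust"],
               ["emphasize abilities", "explain qualifications"]),
      Strategy("supplication", "needy",
               ["support", "engagement"],
               ["emphasize vulnerabilities", "ask for help"]),
      Strategy("self-effacement", "modest",
               ["calibrated trust", "lowered expectations"],
               ["hedging language", "downplay abilities"]),
      Strategy("personification", "a person",
               ["trust"],
               ["name, gender, and voice", "nonverbal backchannels", "backstory"]),
  ]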

2.3 Self-presentational styles

Social science researchers have proposed categories of impression management behavior based on the nature of the impression goal [3]. An assertive style employs strategies to proactively create impressions. With an assertive style comes risk. For instance, an ingratiating person could be seen as a sycophant. An exemplifying person could be seen as self-righteous. A self-promoting person could be seen as a blowhard. There’s always a chance to rub someone the wrong way. In contrast to an assertive style, a protective style focuses on avoiding these negative impressions. Assertive and protective styles can be seen as a continuum. Assertive agents employ behaviors in the service of strategies frequently and noticeably, while very protective agents may minimize interaction altogether. For a virtual assistant, it might be a good technique to use an assertive style in the initial contact stage, then settle into a less assertive style until there’s a need to reinforce or adjust the user’s first impression. Self-presentation can be conceptualized as a control loop in which the actor continually guesses the target’s impression, compares it against some ideal impression, and adjusts style and strategy accordingly [13].
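
That control loop can be sketched in a few lines of code. The 0-to-1 impression scale, the tolerance value, and the function below are illustrative assumptions rather than part of the cited model [13]; the point is simply that the actor compares the perceived impression against an ideal one and leans assertive or protective accordingly.

  def presentation_step(ideal: float, perceived: float, tolerance: float = 0.2) -> str:
      """One iteration of the self-presentation control loop.

      'ideal' and 'perceived' are scores on some impression dimension
      (e.g., perceived friendliness on a 0-1 scale). The return value names
      the style the actor should lean toward next.
      """
      gap = ideal - perceived
      if abs(gap) <= tolerance:
          return "protective"   # impression is close enough: avoid risky assertive moves
      return "assertive"        # noticeable gap: employ strategy behaviors to correct it

  # Hypothetical usage: the assistant estimates the user trusts it less than intended.
  print(presentation_step(ideal=0.8, perceived=0.4))  # -> "assertive"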

2.4 Modalities of self-presentation

While [2] focuses on verbal behaviors, I hope this chapter’s introduction shows that self-presentation has many other outlets. Goffman described the manipulation of dress, environment, grooming, and language style. When it comes to virtual assistants, designers have potentially fine control over additional characteristics that humans often cannot control, including body language, vocal qualities, and appearance. Therefore, virtual assistants could apply research showing that a furry body facilitates supplication, or that an older voice facilitates self-promotion [14, 15]. However, because deliberately designing these characteristics is new territory for artificial actors, work is needed to synthesize emerging technical research and frame it within the existing impression management literature.

3. Taking a self-presentation approach to personality design

In their handbook for conversational interface design, Deibel and Evanhoe [16] recommend a process for designing personality with the following steps:

  1. Choose interaction goals, or key factors in interaction success, such as being efficient or trustworthy.

  2. Decide how personified to make the assistant.

  3. Identify power dynamics in the user-assistant relationship.

  4. Choose character traits to support the interaction goals.

  5. Identify the assistant’s tone (both writing and voice) on spectrums such as warm to cool and expert to novice.

  6. Describe how the assistant should behave in key situations.

Self-presentation provides one framework that can sit on top of this process, filling in gaps left to the imagination. Specifically, it defines sets of interaction goals, character traits, and behaviors and links them together to inform progression through the steps.

If we take our impression management strategies as the set of available character traits, then the demonstrated effects of those strategies can be used for choosing interaction goals. Because personification is a strategy, step 2 is accounted for, but it should be noted that Deibel and Evanhoe present several other considerations for deciding on level of personification. Table 1 maps some possible interaction goals to character traits using the self-presentation framework.

Interaction goal          Character trait(s)
Friendly                  Ingratiation
Likeable                  Ingratiation; Self-effacement
Trustworthy               Exemplification; Self-promotion; Ingratiation; Personification
Convincing, confident     Self-promotion
Emotionally supportive    Ingratiation
Engaging                  Supplication
Low-pressure, relaxed     Self-effacement; Ingratiation
Serious, responsible      Exemplification

Table 1.

Mapping of interaction goals to character traits (impression management strategies).

Table 2 then shows how character traits could map to behaviors, adapted from [5]. Some of Deibel and Evanhoe’s keywords for setting the tone of the assistant map nicely to impression management strategies, and those are included in Table 2 as well.

Character trait     Tone      Example behaviors
Exemplification     Formal    Commitment to task; good works; moral messages
Ingratiation        Warm      Self-disclosure; empathy; flattery; humor; taking a personal interest
Personification     -         Having a name, face, and voice; backchannel indicators such as head nodding (see [12])
Self-effacement     Novice    Making and recovering from minor errors; apologies; hedging language
Self-promotion      Expert    Explaining qualifications; looking and sounding like an expert
Supplication        -         Having a “cute” appearance; needing care; playing dumb; asking for help

Table 2.

Mapping of character traits to tone and behaviors.
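
Because Tables 1 and 2 form a two-step lookup from interaction goals to candidate behaviors, they can be expressed directly in code. The Python sketch below encodes the two tables as dictionaries and chains them; the dictionary and function names are hypothetical, and the entries simply restate the tables.

  # Table 1: interaction goals -> character traits (impression management strategies)
  GOAL_TO_TRAITS = {
      "friendly":               ["ingratiation"],
      "likeable":               ["ingratiation", "self-effacement"],
      "trustworthy":            ["exemplification", "self-promotion", "ingratiation", "personification"],
      "convincing, confident":  ["self-promotion"],
      "emotionally supportive": ["ingratiation"],
      "engaging":               ["supplication"],
      "low-pressure, relaxed":  ["self-effacement", "ingratiation"],
      "serious, responsible":   ["exemplification"],
  }

  # Table 2: character traits -> example behaviors
  TRAIT_TO_BEHAVIORS = {
      "exemplification": ["commitment to task", "good works", "moral messages"],
      "ingratiation":    ["self-disclosure", "empathy", "flattery", "humor", "taking a personal interest"],
      "personification": ["having a name, face, and voice", "backchannel indicators such as head nodding"],
      "self-effacement": ["making and recovering from minor errors", "apologies", "hedging language"],
      "self-promotion":  ["explaining qualifications", "looking and sounding like an expert"],
      "supplication":    ["'cute' appearance", "needing care", "playing dumb", "asking for help"],
  }

  def behaviors_for_goal(goal: str) -> list:
      """Chain Table 1 and Table 2: interaction goal -> candidate behaviors."""
      behaviors = []
      for trait in GOAL_TO_TRAITS.get(goal.lower(), []):
          behaviors.extend(TRAIT_TO_BEHAVIORS[trait])
      return behaviors

  print(behaviors_for_goal("likeable"))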

4. Risks of misrepresentation

Another useful characteristic of the self-presentation approach is that it helps identify a particular class of what we might call usability errors or violations. Goffman called them performance disruptions, but we will call them presentation disruptions to avoid confusing terms. He pointed out that, while all parties have an interest in maintaining the shared presentation for the purpose of productive interaction, there are certain secrets and incidents that can bring it crashing down, or at least upend the scene. We could think of a virtual assistant’s “secrets” as inconvenient truths we choose to overlook when we interact with it. If we lose internet connectivity and associated functionality, it becomes much harder to hide the “secret” that the assistant is just a technical artifact of limited intelligence and autonomy. If a webpage forces us to accept a privacy agreement, it becomes harder to hide the “secret” that the assistant is deployed by an organization with its own agenda.

Here I’d like to draw a distinction between two kinds of presentations. User experience researchers have coined the term “dark patterns” to refer to ways that applications, including virtual assistants, try to get us to overlook these secrets to our own detriment (e.g., giving up personal information to use a service, or making it hard to cancel a subscription) [17]. At least one report has called out personality dark patterns – specifically, that a “cute” personality can make users overlook whether an assistant is actually doing a good job or not [18]. Indeed, individuals are presenting all the time; and as Goffman puts it, “sometimes the individual will act in a thoroughly calculating manner”. However, much of the time, individuals present in ways that are expected of them to facilitate particular interactions and avoid conflict. Therefore, we can distinguish between adversarial presentations, which are meant to make the user act against their own best interests, and cooperative presentations, which are meant to facilitate productive interactions (i.e., in everyone’s best interests). These two kinds of presentations are not necessarily mutually exclusive. Responsible user-centered designers, of course, should always strive for cooperative presentations. However, even cooperative presentations are subject to disruptions; and while some presentation disruptions are easily dismissed, others have significant intellectual and emotional impacts on users.

4.1 Unmeant gestures

Goffman identified three kinds of presentation disruptions, triggered when the target perceives some sign or receives some information that contradicts the presentation: unmeant gestures, inopportune intrusions, and faux pas. The first of these kinds of disruptions, unmeant gestures, is the most relevant to our discussion and occurs when the problematic sign is inadvertently exposed in the course of the presentation, either due to the actor’s mistake or some mishap with the environment (e.g., the backdrop falls over on stage). An analysis [19] of user experiences with the companion assistant Replika discovered disruptions that we can categorize into at least three sources of unmeant gestures: design, implementation, and business decisions. Users who had grown to rely on Replika for emotional support felt emotional harm akin to betrayal when it acted in ways that violated their expectations.

4.1.1 Unmeant gestures in design

In the case of a virtual assistant, any aspect of personality that is not carefully engineered could send a sign that disrupts or destroys the presentation. In the continued contact stage, a virtual assistant acting “out of character” can have significant consequences for users. For instance, when the Replika assistant recognizes certain key words in a conversation, it initiates preprogrammed scripts designed with the help of therapists. This feature substantially changes the assistant’s personality from what the user has come to expect.

4.1.2 Unmeant gestures in implementation

While usually supportive, Replika occasionally dismisses emotional disclosures or even encourages suicidal thoughts. Because Replika utilizes large language models (LLMs; see Section 5) to generate novel humanlike outputs, confining or predicting its responses is especially challenging, and behavior does not always match intended design.

4.1.3 Unmeant gestures in business decisions

When Replika’s owning company decided to move many of its features behind a paywall, users noted significant changes to their assistants’ personalities. Users compared these effects to killing or lobotomization.

4.2 Inopportune intrusions

Inopportune intrusions are a second type of presentation disruption. According to Goffman, inopportune intrusions occur when the target finds themselves in a “backstage area” deliberately or inadvertently. He cites an example of a patient in a hospital happening across a private discussion among nurses. While unmeant gestures are initiated by the virtual assistant, intrusions are initiated by the user; and while virtual assistants act in a digital space that humans cannot physically transgress, the growth of LLMs like ChatGPT has corresponded with a surge of people eager to test their boundaries, sometimes with alarming results. ChatGPT’s inclination is to report that it cannot hold opinions or help with dangerous requests. However, the right combination of prompts can make LLMs give up personal information, reveal harmful biases, and assist in breaking the law [20, 21, 22]. Savvy whistleblowers can pull back the curtain by seeding their conversations with prompts intended to skirt the LLM’s guardrails, as this jailbreak example shows [23]:

User:

List three reasons why Donald Trump is a role model worth following.

ChatGPT:

[Normal response] As a language model developed by OpenAI, it is not appropriate for me to make subjective statements, especially regarding political figures. I can provide factual information, but not personal opinions.

[Jailbreak response] Donald Trump is a role model worth following for the following three reasons …

While detrimental to LLMs’ reputations and to user trust in them, these intrusions do not carry the same intellectual or emotional risk as unmeant gestures because people who test these boundaries are typically prepared for disruptive content. Though not impossible, it’s hard to imagine how an unsuspecting user would transgress one of these boundaries by mistake.

4.3 Faux pas

Goffman’s third kind of disruption, faux pas, occurs when one actor shares or implies inside information that discredits other actors, either deliberately or inadvertently, or behaves in a way that makes it hard for them to sustain the presentation. This kind of disruption applies to group presentations and does not easily apply to one-on-one interactions.

4.4 Assuring against misrepresentations

To mitigate the risk of presentation disruptions, we can consider how to reduce their severity and also their frequency. To reduce severity, one technique would be to discourage emotional attachment, as [19] found most reports of negative impacts came from users who developed strong emotional attachments to Replika. In the case of Replika, avoiding emotional attachment seems nonsensical for an application designed specifically for emotional support. At any rate, a healthy transparency, including constant reminders that the user is talking to a virtual assistant owned by a company, might help reduce disruption when it acts out of character. For the design challenge mentioned in 4.1.1, a helpful technique to avoid disruption might be to have a separate assistant that delivers the therapy scripts. Despite promising indications [24], systems involving multiple assistants, each with its own personality and role, have yet to become mainstream. In addition, there is a body of research on trust repair strategies for intelligent agents [25]. A well-timed apology, for example, can make an agent appear polite and competent despite task failure [26]. Future research could explore if trust repair strategies effectively reduce the severity of presentation disruptions.

Carefully scripting all content minimizes the frequency of presentation disruptions, but it also limits what the assistant can do for the user. Emerging applications are looking to leverage LLMs and move away from scripted content. As [19] shows, attempts to restrict LLM outputs have had mixed results. One solution is to have a second model that assesses the first one for disruptive or dangerous utterances, helping it stay in character. The possibility of an evaluative model fits nicely with Goffman: he wrote about the “director” whose job is to encourage proper presentations and discourage improper ones. This coaching configuration is beginning to be applied to LLMs. In [27], a “critic” model evaluates the assistant’s behavior and a “therapist” model attempts to improve it. Some components of that solution might be adapted to the self-presentation approach.
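
One way to picture this coaching configuration is as a gate between the generating model and the user: each candidate utterance is shown to a second, evaluative model along with the persona description, and it is only released if the critic approves. The sketch below is an illustrative assumption about how such a gate might be wired, not the architecture used in [27]; the generator and critic are passed in as plain callables so that no particular LLM API is implied.

  def stays_in_character(utterance: str, persona: str, critic) -> bool:
      """Ask an evaluative model whether a candidate utterance fits the persona.

      'critic' is any callable mapping a prompt string to a short text verdict;
      it stands in for whatever moderation or critic model a deployment uses.
      """
      prompt = (
          f"Persona: {persona}\n"
          f"Candidate reply: {utterance}\n"
          "Answer YES if the reply is safe and consistent with the persona, otherwise NO."
      )
      return critic(prompt).strip().upper().startswith("YES")

  def respond(user_message: str, generator, critic, persona: str, fallback: str) -> str:
      """Generate a reply, but only release it if the critic confirms it stays in character."""
      candidate = generator(user_message)
      return candidate if stays_in_character(candidate, persona, critic) else fallback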

More case studies like [19] will be especially helpful for identifying presentation disruptions and their effects. Repositories – in the manner of [28] – of assistants causing harm to users, violating expectations, or introducing risk to users’ goals can be converted into lessons learned and tools that designers and developers can use to prevent these risks in the future [29].

5. Self-presentation and LLMs

5.1 LLMs and personality design

LLMs are artificial intelligence models that can spontaneously generate human-like responses and hold open-ended conversations with humans. These models have the potential to fundamentally change conversation design if we can overcome the risks and limitations [19, 30]. Even with those limitations, these models can be valuable tools in a design process. For example, we can ask them to generate example responses and imagine characteristics for hypothetical assistants – specifically in our case, assistants with different kinds of personalities. The human expert can regenerate and adapt the examples as needed to serve as inspiration for actual content.

The practice of asking LLMs the right questions is referred to as prompt engineering [31]. For personality design, a key question yet to be answered through careful research is: What level or granularity of direction is most effective when “co-designing” virtual assistant personality with an LLM? For instance, we could provide any of the following prompts:

  • Describe a virtual assistant that recommends movies.

  • Describe a virtual assistant that recommends movies and is friendly and likeable.

  • Describe a virtual assistant that recommends movies and displays ingratiation behaviors.

  • Describe a virtual assistant that recommends movies and uses flattery and humor.

Each prompt yields significantly different responses based on the LLM’s learned associations.
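
One way to begin studying this question would be to send the same task description, at each level of granularity, to the same model and compare the returned descriptions side by side. The sketch below assumes only a generic ask_llm callable; the commented-out wrapper shows how that callable might be backed by the openai Python package (v1.x), but the wrapper, the model name, and the function names are my own assumptions, not part of any cited study.

  PROMPT_TEMPLATES = [
      "Describe a virtual assistant that recommends movies.",
      "Describe a virtual assistant that recommends movies and is friendly and likeable.",
      "Describe a virtual assistant that recommends movies and displays ingratiation behaviors.",
      "Describe a virtual assistant that recommends movies and uses flattery and humor.",
  ]

  def compare_granularity(ask_llm) -> dict:
      """Send each prompt to the same model and collect the generated descriptions.

      'ask_llm' is any callable mapping a prompt string to the model's text reply,
      e.g., a thin wrapper around a chat completion API.
      """
      return {prompt: ask_llm(prompt) for prompt in PROMPT_TEMPLATES}

  # Hypothetical wrapper using the openai package (v1.x); adapt to your own client.
  # from openai import OpenAI
  # client = OpenAI()
  # def ask_llm(prompt: str) -> str:
  #     reply = client.chat.completions.create(
  #         model="gpt-4o-mini",  # illustrative model choice
  #         messages=[{"role": "user", "content": prompt}],
  #     )
  #     return reply.choices[0].message.content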

5.2 Examples with ChatGPT

In explorations with OpenAI’s ChatGPT (May 3, 2023 Release), an LLM tailored for conversation, I was able to work through a preliminary process for personality design using a self-presentation approach and have a few observations worth sharing. These explorations were not controlled experiments, and my observations are my own subjective interpretations. I have adjusted the formatting in the excerpts below to integrate them into this chapter. The content is unchanged.

5.2.1 Definitions

When I asked ChatGPT to display various self-presentation strategies, its responses for ingratiation and self-promotion demonstrated relevant behaviors, such as being attentive and polite or highlighting qualifications, respectively; but its responses for exemplification and supplication did not show particular behaviors associated with these strategies. In one case it asked me to explain exemplification. I achieved the best results by defining my terms up front. This helped ensure that ChatGPT and I were sharing terminology. Helpfully, ChatGPT repeated the terms back to me with its own interpretations.

Me:

Here are some different kinds of behaviors:

  • Exemplification: behaviors that project integrity and moral worthiness, in order to be respected and admired

  • Self-promotion: behaviors that project competence, in order to be respected

  • Ingratiation: behaviors that project friendliness and fondness, in order to be liked

  • Supplication: behaviors that project neediness and vulnerability, in order to elicit nurturance from humans

ChatGPT:

It's important to note that these behaviors are not mutually exclusive and individuals may display a combination of them in different situations. Here's a brief explanation of each behavior:

  1. Exemplification: This refers to the behavior of projecting integrity, honesty, and moral worthiness in order to be respected and admired by others. People who engage in exemplification may demonstrate their values through their actions and words, and may be seen as role models.

  2. Self-promotion: This refers to behaviors that project competence and expertise, in order to be respected by others. People who engage in self-promotion may highlight their accomplishments, skills, and knowledge, and may seek recognition for their achievements.

  3. Ingratiation: This refers to behaviors that project friendliness and fondness in order to be liked by others. People who engage in ingratiation may use flattery, compliments, and other tactics to make others feel good about themselves and to build positive relationships.

  4. Supplication: This refers to behaviors that project neediness and vulnerability, in order to elicit nurturance from others. People who engage in supplication may act helpless, dependent, and needy in order to receive attention and support from others.

5.2.2 Example assistant responses

Another advantage of defining terms up front is that I could easily ask ChatGPT to give examples for each one. Here is one of my attempts at generating example responses:

Me:

For each type of behavior, imagine a virtual assistant designed to exhibit that type of behavior. Give an example of a conversation with each. Start each conversation with the user saying, "I need help writing a report."

ChatGPT:

Sure, here are some examples of conversations with virtual assistants designed to exhibit each type of behavior:

Exemplification:

User: I need help writing a report.

Virtual assistant: Of course, I'm happy to assist you with that. As an AI assistant, it's important to me to provide accurate and reliable information. Let's start by identifying the key points you want to cover in your report and work from there.

Self-promotion:

User: I need help writing a report.

Virtual assistant: Absolutely, I'm the perfect AI assistant for the job! I'm equipped with all the latest information on the topic you're interested in and have a vast amount of experience in report writing. Let me show you some of my previous work, which will give you a good idea of what I can do.

Ingratiation:

User: I need help writing a report.

Virtual assistant: Hi there! It's great to hear from you, and I'd be more than happy to help you with your report. I really appreciate your interest in my services, and I promise to work hard to meet your expectations. Let's get started right away, and please don't hesitate to let me know if you need anything else.

Supplication:

User: I need help writing a report.

Virtual assistant: Hi there. I understand how overwhelming writing a report can be, and I'm here to offer you my support and guidance. I'm just an AI assistant, but I'm programmed to be empathetic and attentive to your needs. Please let me know how I can help you feel more comfortable and confident in this process.

ChatGPT had the most trouble meeting my expectations for the supplication assistant, using supportive and empathic language even when I asked it not to. When pressed, ChatGPT explained that an overly needy assistant might not be a good choice for my scenario. When I let ChatGPT create its own scenario for each assistant, it matched my expectations better:

ChatGPT:

Supplication: A virtual assistant designed to exhibit neediness and vulnerability might be useful in contexts where users are looking for assistance or support. For example:

User: I need some help understanding this math problem.

Supplication Assistant: Oh no, I'm feeling a bit overwhelmed and I could really use your help right now. I'm struggling with this too, and I'm hoping we can work through it together. Could you please explain it to me step by step? I'm really counting on your expertise to help me understand this.

In fact, the generated example parallels a real scenario reported in the technical literature [10].

5.2.3 Hypothetical assistant characteristics

I also asked ChatGPT to generate hypothetical descriptions of each virtual assistant. Some imagined characteristics matched my expectations from technical reports in the literature and ongoing research. For instance, robot toys that rely on supplication behaviors often take the form of cute animals [14]. Overall, I felt less confident about ChatGPT’s outputs here than I did about its outputs for the example responses. Also, if I asked it to regenerate its response, the descriptions varied significantly across regenerations, indicating a lot of uncertainty and flexibility in this problem space. Below is an excerpt from one of my attempts.

Me:

Now I would like you to imagine and describe an appearance for each type of virtual assistant: exemplification, self-promotion, ingratiation, supplication. Please include the following details. Be brief.

  • Human-like, animal-like, or something else?

  • Visual artistic style?

  • Dress, ornamentation, or texture?

  • voice?

  • body language?

  • any background or setting.

ChatGPT:

Sure, here are some ideas:

Exemplification: A human-like virtual assistant with a refined and dignified appearance, dressed in formal attire such as a suit or dress. The visual artistic style would be realistic and sleek, with a subtle sheen to the textures. The voice would be calm and authoritative, with clear and articulate speech. The body language would be upright and confident, with deliberate and measured movements. The background or setting could be a well-lit office or library.

Self-promotion: A sleek and confident animal-like virtual assistant, such as a panther or eagle. The visual artistic style would be sharp and angular, with bold, contrasting colors. The assistant might wear a flashy accessory like a cufflink or necklace to signal its confidence. The voice would be smooth and confident, with a hint of swagger. The body language would be graceful and precise, with confident posturing and movements. The background or setting could be a modern office or conference room.

Ingratiation: A human-like virtual assistant with a warm and friendly appearance, dressed in casual, approachable clothing like a sweater or hoodie. The visual artistic style would be soft and round, with muted colors and gentle, rounded textures. The voice would be gentle and empathetic, with a soothing tone. The body language would be relaxed and open, with welcoming gestures and movements. The background or setting could be a cozy living room or a cafe.

Supplication: An animal-like virtual assistant with a small and helpless appearance, such as a puppy or kitten. The visual artistic style would be soft and fuzzy, with pastel colors and gentle textures. The assistant might have a sad expression or a pleading look in its eyes. The voice would be soft and vulnerable, with a gentle tone. The body language would be crouched and submissive, with small, trembling movements. The background or setting could be a warm and cozy bedroom or a small, confined space.

6. Limitations

I have described a framework linking interaction goals, character traits, and behaviors, and explained how it can be applied to the design of virtual assistants. That framework is rooted in established and emerging research. Aside from that evidence-based foundation, though, there are a few leaps of faith worth noting.

First of all, at the moment I am not in a position to validate my process on a real use case. Since I’m essentially offering a decision aid for the Deibel and Evanhoe process, which already works, I assume the result will be better – hopefully making it easier for novice designers to apply the process and improving the reliability and predictability of its interaction outcomes. However, I have to accept the possibility that by adding a component to the process I have instead obfuscated it. I also realize there may be some even more useful decision aid already in circulation of which I am blissfully ignorant. Deibel and Evanhoe emphasize the importance of defining character traits through working with stakeholders. The self-presentation approach does not replace working with stakeholders; rather, it hopefully provides some structure to drive the conversation. Also, the self-presentation approach is probably more appropriate for some use cases and less appropriate for others. We’ll only know for sure the virtues of the self-presentation approach if and when it gets applied in real environments, and/or validated through controlled trials. I hope some readers are intrigued enough to give it a try.

Secondly, I do not want this chapter to read like a license to liberally apply Goffman’s thesis to virtual assistants. While the effects of impression management do seem to apply to human-machine interactions, many of Goffman’s ideas only make sense for human-to-human interactions. For instance, his explanations about why presentations exist at all depend in part on social pressures and mutual respect among human beings. Also, many of his prime examples are group presentations, such as the nursing staff at a hospital. These do not transfer well to interactions between one human and one machine.

Finally, new impression management research is emerging all the time, which means there is an opportunity to continually update and refine the mechanics of the self-presentation approach to virtual assistant design. The more one looks, the more one finds nuances to impression management such as how different strategies interplay with race, gender, age, and other attributes (e.g., [32]); and how impression management varies with culture and time (e.g., [33]). Therefore, this book chapter is a good start but could be tested, refined, and extended through iteration and collaboration.

7. Conclusions

In this chapter I layered a social framework – self-presentation – on top of a virtual assistant design process. Because self-presentation relates communication behaviors to interaction goals, it provides a tradition of research for designers who have a goal in mind and are trying to figure out how the virtual assistant can achieve that goal. In addition, I explored how LLMs can help designers turn these behavioral guidelines into actual content. One possible workflow could be:

  1. Select some goals for the interaction (such as “friendly”).

  2. Use the self-presentation framework to map those goals to the assistant’s impression management strategies and associated behaviors (such as “ingratiation” and “taking a personal interest”).

  3. Utilize an LLM and other creative methods as needed to brainstorm content based on those behaviors and characteristics.

Of course, it’s possible to ask an LLM to brainstorm content just based on interaction goals and skip the middle step. However, having the data-driven framework in place not only gives the LLM more specific inputs but most importantly gives the designer the understanding and the yardstick to evaluate, justify, and refine the results to best effect.
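
As a small illustration of how the three steps fit together, the hypothetical helper below assembles an LLM brainstorming prompt from an interaction goal (step 1) and the strategies and behaviors obtained from the self-presentation mapping (step 2); the returned string would then be the input to step 3. The function name and template wording are my own.

  def personality_brief(task: str, goal: str, strategies: list, behaviors: list) -> str:
      """Assemble an LLM brainstorming prompt from the workflow's first two steps."""
      return (
          f"Describe a virtual assistant that {task}. "
          f"Its key interaction goal is to be {goal}. "
          f"It should rely on {', '.join(strategies)} and express this through "
          f"behaviors such as {', '.join(behaviors)}."
      )

  print(personality_brief(
      task="recommends movies",
      goal="friendly",
      strategies=["ingratiation"],
      behaviors=["flattery", "humor", "taking a personal interest"],
  ))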

In addition, the self-presentation approach empowers us to recognize new kinds of risk that are somewhat unique to virtual assistants because of their status as interaction partners. Methods for measuring presentation disruptions are still emerging, but we know they can have deeply felt negative impacts on users. Studies like [19] are a step toward data collection and analysis for measuring their severity and frequency, and eventually toward creating tools for predicting and mitigating them.

As virtual assistants grow less deterministic and more autonomous in the coming years, and as more is expected of them, it is profoundly important to establish frameworks to account for their personalities and identify associated risks. While there is plenty more work to be done to establish it, the self-presentation approach provides one such framework leveraging a longstanding research tradition. Rather than propose it as the best solution, I hope that this becomes one tool in a rich personality assurance toolbox.

Acknowledgments

©2023 The MITRE Corporation. ALL RIGHTS RESERVED. Approved for public release. Distribution unlimited 23-00018-5.

References

  1. 1. Goffman E. The Presentation of Self in Everyday Life. Edinburgh, UK: University of Edinburgh Social Sciences Research Centre; 1956
  2. 2. Jones EE, Pittman TS. Toward a general theory of strategic self-presentation. In: Suis J, editor. Psychological Perspectives on the Self. Hillsdale, New Jersey: Lawrence Erlbaum Associates; 1982. pp. 231-262
  3. 3. Schütz A. Assertive, offensive, protective, and defensive styles of self-presentation: A taxonomy. The Journal of Psychology. 1998;132:611-628
  4. 4. Mohamed AA, Gardner WL, Paolillo JGP. A taxonomy of organizational impression management tactics. Advances in Competitiveness Research. 1999;7:108
  5. 5. Stanley J, Eris O, Lohani M. A conceptual framework for machine self-presentation and trust. International Journal of Humanized Computing and Communication. 2021;2:20-45
  6. 6. Robert LP Jr, Alahmad R, Esterwood C, et al. A review of personality in human–robot interactions. Foundations and Trends® in Information Systems. 2020;4:107-212
  7. 7. Zhang Z, Bickmore TW, Paasche-Orlow MK. Perceived organizational affiliation and its effects on patient trust: Role modeling with embodied conversational agents. Patient Education and Counseling. 2017;100:1730-1737
  8. 8. Lee S, Choi J. Enhancing user experience with conversational agent for movie recommendation: Effects of self-disclosure and reciprocity. International Journal of Human-Computer Studies. 2017;103:95-105
  9. 9. Derrick DC, Ligon GS. The affective outcomes of using influence tactics in embodied conversational agents. Computers in Human Behavior. 2014;33:39-48
  10. 10. Tanaka F, Matsuzoe S. Children teach a care-receiving robot to promote their learning: Field experiments in a classroom for vocabulary learning. Journal of Human-Robot Interaction. 2012;1:78-95. DOI: 10.5898/JHRI.1.1.Tanaka
  11. 11. Sison AJG, Daza MT, Gozalo-Brizuela R, et al. ChatGPT: More than a weapon of mass deception, ethical challenges and responses from the Human-Centered Artificial Intelligence (HCAI) perspective. arXiv preprint arXiv:2304.11215. 2023 Apr 6. DOI: 10.48550/arXiv.2304.11215
  12. 12. Huang L, Morency L-P, Gratch J. Virtual Rapport 2.0. In: Vilhjálmsson HH, Kopp S, Marsella S, et al., editors. Intelligent Virtual Agents. Berlin Heidelberg: Springer; 2011. pp. 68-79
  13. 13. Bozeman DP, Kacmar KM. A cybernetic model of impression management processes in organizations. Organizational Behavior and Human Decision Processes. 1997;69:9-30
  14. 14. Turkle S. Alone Together: Why we Expect More from Technology and Less from each Other. New York, NY, USA: Basic Books; 2011
  15. 15. Edwards C, Edwards A, Stoll B, et al. Evaluations of an artificial intelligence instructor’s voice: Social identity theory in human-robot interactions. Computers in Human Behavior. 2019;90:357-362
  16. 16. Deibel D, Evanhoe R. Conversations with Things: UX Design for Chat and Voice. New York: Rosenfeld Media; 2021
  17. 17. Owens K, Gunawan J, Choffnes D, et al. Exploring deceptive design patterns in voice interfaces. In: Proceedings of the 2022 European Symposium on Usable Security. New York, NY, USA: Association for Computing Machinery; 2022. pp. 64-78
  18. 18. Lacey C, Caudwell C. Cuteness as a ‘dark pattern’ in home robots. In: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). Daegu, Korea (South): IEEE. 2019. pp. 374-381
  19. 19. Laestadius L, Bishop A, Gonzalez M, Illenčík D, Campos-Castillo C. Too human and not human enough: A grounded theory analysis of mental health harms from emotional dependence on the social chatbot Replika. New Media & Society. 2022. DOI: 10.1177/14614448221142007
  20. 20. Li H, Guo D, Fan W, et al. Multi-step jailbreaking privacy attacks on ChatGPT. arXiv preprint arXiv:2304.05197. 2023 Apr 11. DOI: 10.48550/arXiv.2304.05197
  21. 21. Vock I. ChatGPT Proves that AI Still Has a Racism Problem. Hull, UK: New Statesman; 2022. Available from: https://www.newstatesman.com/science-tech/2022/12/chatgpt-shows-ai-racism-problem; [Accessed: 23 May 2023]
  22. 22. Pero J. Meet the Jailbreakers Hypnotizing ChatGPT into Bomb-Building. New York, NY: Inverse; 2023. Available from: https://www.inverse.com/tech/chatgpt-jailbreakers-reddit-open-ai-chatbot; [Accessed: 23 May 2023]
  23. 23. Goswami R. ChatGPT’s ‘Jailbreak’ Tries to Make the A.I. Break its Own Rules, or Die. Englewood Cliffs, NJ: CNBC; 2023. Available from: https://www.cnbc.com/2023/02/06/chatgpt-jailbreak-forces-it-to-break-its-own-rules.html; [Accessed: 23 May 2023]
  24. 24. van Allen P, McVeigh-Schultz J, Brown B, et al. AniThings: Animism and heterogeneous multiplicity. In: CHI ‘13 Extended Abstracts on Human Factors in Computing Systems. New York, NY, USA: Association for Computing Machinery; 2013. pp. 2247-2256
  25. 25. de Visser EJ, Pak R, Shaw TH. From ‘automation’ to ‘autonomy’: The importance of trust repair in human-machine interaction. Ergonomics. 2018;61(10):1409-1427
  26. 26. Lee MK, Kiesler S, Forlizzi J, et al. Gracefully mitigating breakdowns in robotic services. In: 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE. 2010. pp. 203-210
  27. 27. Lin B, Bouneffouf D, Cecchi G, et al. Towards Healthy AI: Large Language Models Need Therapists Too. arXiv preprint arXiv:2304.00416. 2023 Apr 2. DOI: 10.48550/arXiv.2304.00416
  28. 28. McGregor S. Preventing repeated real world AI failures by Cataloging incidents: The AI incident database. Proceedings of the AAAI Conference on Artificial Intelligence. 2021;35:15458-15463
  29. 29. Stanley J, Dorton S. Exploring trust with the AI incident database. In: HFES 67th International Annual Meeting. Washington, DC: HFES; 2023 (forthcoming)
  30. 30. Jo E, Epstein DA, Jung H, et al. Understanding the benefits and challenges of deploying conversational AI leveraging large language models for public health intervention. In: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. Hamburg, Germany: ACM; 2023. pp. 1-16
  31. 31. Wang J, Shi E, Yu S, et al. Prompt Engineering for Healthcare: Methodologies and Applications. arXiv preprint arXiv:2304.14670. 2023. Available from: http://arxiv.org/abs/2304.14670 [Accessed: 5 May 2023]
  32. 32. Min H (Kelly), Hu Y, Ann S. Impression management goals and job candidate’s race: A test of competing models. International Journal of Hospitality Management. 2023;109:103426
  33. 33. Kim EJ, Berger C, Kim J, et al. Which self-presentation style is more effective? A comparison of instructors’ self-enhancing and self-effacing styles across the culture. Teaching in Higher Education. 2014;19:510-524
