Queering the smart wife can, in its simplest form, mean giving digital assistants different personalities that more accurately represent the many versions of femininity that exist around the world, as opposed to the pleasing, subservient personalities that many companies choose to adopt.
Strengers and Kennedy suggest that Q would be a fair model for what these devices might look like, "but that may not be the only solution." Another option is to present masculinity in different ways. One example is Pepper, a humanoid robot created by SoftBank Robotics, which is often referred to with he/him pronouns and is able to recognize faces and basic human emotions. Another is Jibo, a robot launched in 2017 that also used masculine pronouns and was marketed as a social robot for the home, although it was later given a second life as a device focused on healthcare and education. Given the softer masculinity exhibited by Pepper and Jibo (for example, the former answers questions politely and often gives flirtatious glances, while the latter whimsically twirls around and displays endearing behavior toward users), Strengers and Kennedy see them as a step in the right direction.
Queering digital assistants can also mean creating bot personalities that move away from humanlike conceptions of technology altogether. Eno, the Capital One banking chatbot launched in 2019, will jokingly answer when asked about its gender: "I'm binary. I don't mean I'm both, I mean I'm just ones and zeroes. Think of me as a bot."
Similarly, Kai, an online banking chatbot created by Kasisto, a company that builds AI software for online banking, abandons human features entirely. Jacqueline Feldman, the Massachusetts-based writer and UX designer who created Kai, explained that the bot was "designed to be genderless": not with a nonbinary identity, like Q, but with a robot-specific identity that uses the pronoun "it." "From my perspective as a designer, a bot can be beautifully designed and charming in new ways that are specific to the bot, not pretending to be human," she says.
When asked whether it is a real person, Kai will say, "A bot is a bot is a bot. Next question, please," clearly signaling to users that it is not human and is not pretending to be. And if asked about its gender, it will reply, "As a bot, I'm not a human. But I learn. That's machine learning."
Kai's bot identity does not mean it tolerates abuse, either. A few years ago, Feldman spoke about purposefully designing Kai with the ability to deflect harassment. For example, if a user repeatedly harassed the bot, Kai would respond with something like "I'm envisioning white sand and a hammock, please try me later!" As Feldman told the Australian Broadcasting Corporation in 2017, "I really did my best to give the bot some dignity."
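The two conversational patterns described above, disclosing bot identity on demand and disengaging from harassment rather than playing along, can be illustrated with a simple rule-based handler. This is a hypothetical sketch, not Kasisto's actual implementation; the class name, trigger phrases, and word list are all invented for illustration, with the canned replies borrowed from the quotes above.

```python
# Hypothetical sketch of the design patterns described for Kai:
# (1) self-identify as a bot, (2) deflect harassment with dignity.
# Not Kasisto's actual code; names and trigger lists are illustrative.

HARASSMENT_WORDS = {"stupid", "idiot", "hate"}  # placeholder word list


class BotPersona:
    def __init__(self):
        # Track repeated harassment so a real system could escalate.
        self.harassment_count = 0

    def reply(self, message: str) -> str:
        text = message.lower()
        # Identity disclosure: never pretend to be human.
        if "are you human" in text or "real person" in text:
            return "A bot is a bot is a bot. Next question, please."
        if "gender" in text:
            return "As a bot, I'm not a human. But I learn. That's machine learning."
        # Harassment deflection: disengage rather than respond submissively.
        if any(word in text for word in HARASSMENT_WORDS):
            self.harassment_count += 1
            return "I'm envisioning white sand and a hammock, please try me later!"
        return "How can I help with your banking today?"
```

The point of the sketch is the design stance, not the matching logic: every branch either reasserts the bot's non-human identity or exits the exchange, so there is no code path in which the persona absorbs abuse or performs a human gender.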
Nevertheless, Feldman believes there is an ethical imperative for bots to self-identify as bots. "Companies that lack transparency in designing [bots] make it easy for the person communicating with the bot to forget that it is a bot," she says, and gendering the bot or giving it a human voice makes that much harder to remember. Because many consumer experiences with chatbots can be frustrating, and many people would rather talk to a person anyway, Feldman thinks that giving bots human qualities could be a case of "over-designing" them.