In November, we shared the story of Lotbot, an AI Slack bot we built to manage parking spots at our Cambridge studio. Long story short: In building Lotbot, we utterly misjudged its personality and found ourselves on the wrong end of a tiny tyrant.
While Lotbot has started to calm down and find its rhythm, another story has emerged. It’s the story of how our studio has adapted to living with a robot. It’s a story about how the design of AI, no matter how small a role it might play in our lives, has implications beyond its intended use—and can even influence the culture of an entire organization.
As a researcher with a background studying and designing human-computer interaction, I was enamored of the social bot. I started observing Lotbot’s day-to-day operations, and over time, I noticed that while most people were interacting with the bot using commands it was trained to understand (e.g., “spot today”), a small group of people were holding on to the social norms they’d typically use in interpersonal interactions. This made me wonder: What motivated people to use words like please and thank you when they knew the AI wouldn’t know the difference anyway? And what does this mean for future interactions with robot coworkers?
I launched a full-on design research project. With support from my colleague Justin Wan, I collected quantitative data from Lotbot’s Slack channel to understand how the studio’s language patterns had changed since the bot was introduced. I also conducted a series of surveys and qualitative interviews to understand what motivated people’s approach to interacting with Lotbot.
Here’s what I found—and what it means for how we consider the connection between AI and the culture of organizations.
Language has changed, but we're still holding on to something
Before reviewing the Slack data, I hypothesized that shorter messages of 15 characters or fewer (“Need spot today”) indicated that the user believed they were interacting with a “robot,” while longer messages (“I need a spot today, please”) aligned with a more human-to-human conversation style. I also tracked the change in frequency of polite language such as please and thank you to see how often people were holding on to human-to-human conversation styles.
It turns out that once Lotbot was introduced to the parking channel, the number of messages with 15 characters or fewer increased significantly, while the percentage of total messages using polite language dropped significantly, from 30 percent to 10 percent.
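The kind of classification described above can be sketched in a few lines of code. This is purely illustrative: the sample messages, the 15-character threshold, and the list of politeness keywords are assumptions for the sake of the example, not the study’s actual data or analysis script.

```python
# Illustrative sketch of classifying Slack messages by length and politeness.
# The keyword list and sample messages are assumptions, not the study's data.

POLITE_WORDS = {"please", "thanks", "thank"}

def classify(messages, max_short_len=15):
    """Count short vs. long messages and the share containing polite language."""
    stats = {"short": 0, "long": 0, "polite": 0}
    for msg in messages:
        stats["short" if len(msg) <= max_short_len else "long"] += 1
        # Normalize words by stripping common punctuation and lowercasing.
        words = {w.strip(".,!?").lower() for w in msg.split()}
        if words & POLITE_WORDS:
            stats["polite"] += 1
    stats["polite_pct"] = 100 * stats["polite"] / len(messages)
    return stats

if __name__ == "__main__":
    sample = [
        "spot today",
        "Need spot today",
        "I need a spot tomorrow, please",
        "Thanks, Lotbot!",
    ]
    print(classify(sample))
```

Run over the channel history before and after Lotbot’s introduction, a tally like this would surface exactly the shift the study found: more terse commands, fewer polite messages.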
As Danny DeRuntz, the creator of Lotbot, explains, the AI was trained to recognize the basic components of a request—the need (a parking spot) and the timeframe (tomorrow)—so it’s no surprise that our interactions in the channel had started to become more, well, robotic.
The survey data, on the other hand, revealed that some people thought about Lotbot as another member of the studio—they treated it with the same respect they would a human colleague. Others said that they based their interactions on the fact that they knew the rest of the studio was watching. People were conscious of how their commands came across to other human colleagues who do care about people’s attitude, even if the bot doesn’t.
The in-person interviews further supported what the survey highlighted. Those who continue to use polite language are motivated to do so because they feel that it’s important to show gratitude when you’re asking for a limited resource. As one of my colleagues put it, gratitude is an important part of the collaborative culture at IDEO, and that should not be lost just because there’s a robot among us—especially since that robot exists solely to help us do our jobs.
Learning goes both ways with AI
So what does it all mean? The prominent narrative about AI and robots is that they are learning to interact with humans, but this quick study highlights that the learning goes both ways. In the context of the parking Slack channel, the limits of Lotbot’s functionality trained the humans to interact in a different way. A way that, for some who resisted the changes, felt antithetical to our company culture.
A group of data scientists at IDEO recently published a set of AI design principles. One of these principles asks designers to consider how their designs can be sensitive to people’s context and culture. AI systems are designed to provide efficiency, but in some settings, that efficiency might compromise and conflict with other institutional and cultural priorities, like politeness and gratitude. A recent article in Fast Company echoes this concern, suggesting designers of social bots should consider emotional thinking, helping bots be better companions in their interactions with humans.
But what does this mean for the design of Lotbot? Should it be designed in a way that encourages expressions of gratitude? It wouldn’t hurt—but the stakes are low, and the occasional robotic conversation is overshadowed by a number of other rituals and activities that keep our culture feeling friendly and free.
But the question does point to an interesting opportunity for social robots and AI in larger offices, especially those that haven’t scaled ritual and culture. As companies grow, corporate culture can be hard to maintain, and as automation becomes more common, it could get harder. If social robots like Lotbot can draw attention to the social norms that matter to a company, we might design and use them to support corporate culture at scale—serving as reminders of cultural practices and values.
The possibilities are endless, but one thing’s for sure—human-computer interactions are influenced first and foremost by humans, and the humans of IDEO have spoken. After sharing my research about changes in conversation style with the studio at a recent lunch meeting, my colleagues took to Lotbot’s Slack channel to declare their love and admiration for the tiny bot. In the weeks that followed, I observed an uptick in polite language as well. While I’m in no position to analyze the psychological motivations behind this outpouring of love, it did make me wonder if we all felt the need to make something more human after we found ourselves becoming more robotic.
Gabriel is a design researcher at IDEO Cambridge. He is most excited about working with communities and organizations to design opportunities for learning, collaboration, and storytelling.
Chris got his first taste of design by spending hours creating brush-heavy banners with Photoshop and tinkering with CSS code on beloved Web 2.0 sites like Neopets and MySpace. On weekends, you can find him at a quaint coffee shop curating playlists for friends.