AI Needs an Ethical Compass. This Tool Can Help.

May 08 2019

A design researcher, a business designer, and a data scientist were sitting at a bar. It sounds like the setup of a joke, but the conversation was actually quite serious.

We had just begun work on a project exploring how we might help a medical-liability insurance company increase patient safety by learning from past incidents. As we talked, our conversation shifted away from excitedly imagining ideas for data-driven tools toward ways our work might potentially do harm.

What if our designs ended up raising premiums for doctors and, in turn, healthcare costs for patients? What if they kicked doctors off their insurance? What if doctors stopped reporting adverse events to keep their premiums from rising? There was real potential that our work could result in reduced patient safety or increased cost of care. We could inadvertently build a tool that could be used against the very people we were trying to help.

Today, data systems and algorithms can be deployed at unprecedented scale and speed—and unintended consequences will affect people with that same scale and speed. How can we make sure we're always putting people first when designing large-scale systems? Especially when those systems will change over time and evolve without direct human supervision?

The seed planted during that bar conversation has grown into a set of principles, activities, and now a set of cards that our teams—both data scientists and designers across every other discipline—use to ensure we’re intentionally designing intelligent systems in service of people.

We’re far from the first people to ponder this. We’ve been inspired by organizations like AI Now and Data + Society, books like Weapons of Math Destruction and Technically Wrong, academic communities like FATML and CXI. In particular, we’ve been eagerly following O’Reilly’s series on data ethics (and encourage you to read their free ebook, Ethics and Data Science).

To develop our own set of guiding principles, we started with people. We talked to folks all across the globe: We interviewed IDEO teams about where they found challenges. We spoke to our clients about where they saw intelligent systems go awry. We spoke to the public about where smart designs seemed to cross lines. We observed and read about AI systems that had gone off the rails and worked to understand how this might have been avoided. We learned a lot.

We came up with an original design, iterated on it, and landed on a set of four design principles and ten activities that can help guide an ethically responsible, culturally considerate, and humanistic approach to designing with data. These activities are meant to provoke thought; they're a vehicle for introducing new ideas and stimulating conversations around ethics throughout the design process.

To start, here are our principles:

1. Data is not truth

    Data is human-driven. Humans create, generate, collect, capture, and extend data. The results are often incomplete, and the process of analyzing them can be messy. Data can be biased through what is included or excluded, how it is interpreted, and how it is presented. Unpacking the human influence on data is essential to understanding how it can best serve our needs.

2. Don’t presume the desirability of AI

    Just because AI can do something doesn’t mean that it should. When AI is incorporated into a design, designers should continually pay attention to whether people’s needs are changing, or an AI’s behavior is changing.

3. Respect privacy and the collective good

    While there are policies and laws that shape the governance, collection, and use of data, we must hold ourselves to a higher standard than “will we get sued?” Consider design, governance of data use for new purposes, and communication of how people’s data will be used.

4. Unintended consequences of AI are opportunities for design

    Just as with any design endeavor, we know that we’re not going to get it right the first time. Use unanticipated consequences and new unknowns as starting points for iteration.

To get started with the activities, you can download the cards here. We hope that these activities provoke dialogue and provide concrete tools to help our community ethically design intelligent systems.

Thank you to the larger team who helped make these cards come to life—Ben Healy, Jane Fulton Suri, Jess Freaner, Mike Stringer, Justin Massa, Connie Oh, and KP Gupta, who designed them.

• Ovetta Sampson

  Design Lead, IDEO Chicago
  Ovetta is a problem solver, searching for that sweet spot between human need, technology, and business. She works to design future technology that's not only human-centered but prioritizes human needs. She's a ferocious science fiction fan, finding inspiration for future design in the magic of Star Trek and Octavia Butler.

• Michael Chapman

  Design Director, IDEO Chicago
  Michael is an applied anthropologist who strives to unlock human potential by creating solutions that bridge the needs between people, industry, and technology. He is passionate about jumping into "wicked problems" that tackle the complexity and interconnectedness of today's society. In his down time you can find him traveling the globe, running, and playing make-believe with his family.