
Scenario: But the Bot Said…

Constanze Kurz & Debora Weber-Wulff

Chris and Rose work on a robotics team at a mid-sized toy manufacturer. In keeping with contemporary trends, the company has been expanding its online electronic gaming capabilities for several years. Chris and Rose belong to a small group of employees who design and build animated stuffed toys that are marketed primarily, though not exclusively, to children.

Most of the toy animals resemble caterpillars or worms, because it is easier and more efficient to build robots that move autonomously in that shape. The shape also reduces the risk of injury to small children. The toys became a commercial hit not only because they are soft and cuddly but also because they are interactive and can talk and sing. As an added feature, a built-in acoustic monitoring function connects to a smartphone, the parents' phone, for example: whenever the parents leave the room, the stuffed animals double as unobtrusive baby monitors.

Currently, Chris is working on a new version of an animated caterpillar whose software will feature new forms of interaction. Parents will be able to upload an ever-growing selection of educational games, quizzes, and puzzles from their computers or smartphones. Adaptive speech recognition specially tailored to children will process the children's answers, which can also be entered using large buttons attached to the caterpillar's body.

Rose tests each new version of the robotic caterpillars. Her focus is on safety, that is, on guaranteeing that none of the caterpillars' motions pose any danger. They must not crawl too quickly, and they can sense when they are being picked up so that their movements adapt accordingly. The results are exceptional: there is nothing remotely dangerous about the product. Children in the test group could scarcely keep their hands off the colorful stuffed animals.

Then Rose discovers a problem that she doesn't mention to Chris or her boss at first; instead, she shrugs it off as a quirk. In a series of new puzzle games, the software spews out nonsense instead of the correct answers. Rose chuckles the first time she hears one of the children say that a penguin isn't a bird but rather a breed of dog. When she asks the girl about it, the child insists: "But that's what 'Wurmi' said." The little girl is devastated when Rose tells her that's flat-out wrong.

But as the number of incorrect answers mounts, Rose puts the problem on the team meeting agenda and asks who is responsible for fact-checking the robots’ answers. Ample research has long since established that children place a lot of trust in their electronic pals.

Chris is slightly annoyed and responds that they hired a software provider certified for creating children's toys. The provider developed an artificial intelligence program specifically tailored to generate hundreds of quiz questions, which have also been automatically translated into scores of languages. So there's nothing they can do about it now. What does Rose expect them to do? Go through and listen to every single quiz question individually? Not a chance!

Rose fires back: "But the parents can load updates onto their phones, so distributing corrections shouldn't be a problem." Chris argues that the game software isn't even their area of expertise: "We're just supposed to build the hardware for these robots and handle the locomotion, making sure the robots are programmed to make the right movements."

Rose is confounded by this degree of indifference; after all, they are dealing with very young children. So she takes another stab at intervening. To nip the ensuing debate in the bud, Rose's boss, Anne, announces that they'll "check the games out." Rose has no idea what that means, only that the topic has been shelved.

So Rose decides to look into the certified software company that develops the children's games. She wants to know how the questions are generated. At a meetup, she befriends Henri, who works for the company. Henri has no qualms telling her that they don't use AI to generate the questions at all; instead, they draw on an open-source knowledge base.

Rose looks into it and is shocked to discover that someone has made an entry stating that penguins are a dog breed. Anyone can enter whatever nonsense they like, and no one checks the content for accuracy. On a whim, she decides to change the superclass from “dog” to “cat,” knowing that the software will be updated the following week. Let’s see if the change will stick.

The next time this question comes up—there it is: Wurmi spits out “cat” as the correct answer. What should Rose do? Wurmi’s sales are through the roof, and the team is already working on the next project.

Questions:

  1. Is there an ethical issue with Chris’s job of manufacturing and programming a children’s robot whose software is delivered by a third party? Can you distinguish between the children’s robot and its gaming software?
  2. Does it matter that Chris doesn’t even know precisely what kind of software will be installed for his robots? Is he obligated to find out more about it?
  3. Is it ethical to neglect to consider the inexperience and naivete of young children?
  4. Should Rose have immediately reported the problem involving incorrect answers? She initially notes it as a quirk and only follows up later. Is that a problem?
  5. Is Rose out of line for asking questions and pressuring the team? After all, the software is entirely out of her lane.
  6. Shouldn’t Rose have taken Anne at her word? Was it okay for her to have done her own research?
  7. Was it okay for Rose to make friends with Henri to extract information about his company?
  8. Wouldn’t it have been better if Rose had at least entered “bird” instead of “cat” into the knowledge base? As it stands, she only succeeded in allowing the nonsense to continue.
  9. Shouldn’t open-source knowledge bases check their contents for accuracy? Is that even possible?
  10. Should special care be applied to systems designed for children? Could it be that inaccurate information can have a sort of “negative formative effect”?
  11. Other media for children have editors who are responsible for fact-checking content. Shouldn’t that also be the case here? We wouldn’t entrust the content of children’s TV programming and textbooks to some anonymous source, so why would we allow it in this case?
  12. How can the quality of educational toys/games be regulated? Should the hardware—like this caterpillar bot—be equipped with open ports to allow everyone to load their own educational programs? Children also develop “emotional attachments” to their stuffed animals and toy robots. How great is the risk that children might be indoctrinated to hold racist beliefs, for example, or conspiracy theories?
  13. Adding a supplemental audio monitoring function that works with a smartphone may seem like a practical aid for childcare and risk prevention in the home. But when children take the stuffed-animal caterpillar bot to a daycare center or a friend’s home, it can quickly turn into a listening device that is not recognizable as such, in the way a dedicated baby monitor would be. How should this conflict be handled?

Published in Informatik Spektrum 45 (2), 2022, pp. 121–122, doi: https://doi.org/10.1007/s00287-022-01441-8

Translated from German by Lillian M. Banks
