Scenario: My Best Friend EMILY

Gudrun Schiedermeier

Relationships between two people are often complicated enough. This case study shows that secrets, trust, and conflicts can become considerably thornier still when a person befriends a bot.

Andreas arrives at work on Monday completely dejected. He has no desire whatsoever to keep working on what had been his favorite project. He types listlessly at his computer, writes a few e-mails, gets himself a coffee, and sighs audibly.

Ingrid, the colleague who sits directly across from him, finally asks: “What’s going on with you today? You usually throw yourself straight into your work.”

Andreas suddenly turns serious: “You know how enthusiastic I’ve been about working on EMILY.”

Ingrid replies: “Right, you’re convinced that, wait, here it is in the testimonial on the website, that a bot like this can be a loyal companion and a good friend to many people, especially women and girls, one who understands them, responds to them, encourages them and cheers them up. EMpathetic, Intelligent Life companion for You – EMILY.”

Andreas is in no mood for jokes; he says nothing. Finally, Ingrid asks him to tell her what’s weighing on him. “Well,” he begins, “over the weekend I ran into a former classmate from my high school days. Even back then, Sonja was one of the best in our class, but rather quiet and withdrawn. I wanted to know what she’s been up to and where she works. To make a long story short: it turned out she dropped out of medical school because she couldn’t take the crowds of students, all the people in the big city, and the exam pressure anymore. Now she runs small fetch-and-carry errands at a clinic, just to earn a little money; apparently her new apartment on the edge of town isn’t that expensive. She doesn’t enjoy the work, but she doesn’t enjoy her free time either. She doesn’t meet up with friends; she says she’d rather be at home, talking with her friend, you guessed it, EMILY. Since she discovered the friendship bot, she says, she’s no longer lonely. EMILY understands her and cheers her up whenever she’s feeling really down. EMILY is her best friend; without her, she couldn’t bear this existence any longer. EMILY this, EMILY that. I was floored when I heard it.”

Ingrid tries to put this in perspective: “Why is that so terrible? The bot seems to be doing exactly what it was designed to do. First of all, it’s great that your bot pulls someone out of despair. Now that the user has regained some confidence, the bot could go on to suggest suitable activities, a get-together with friends, for example. The bot also has a training mode for rehearsing difficult conversations and social situations in advance. Maybe we should simply highlight that better in our marketing. It also helps people put diffuse feelings into clearer words. To put it flippantly: if even a bot can understand you, you’ve expressed yourself very clearly.”

Andreas sees it quite differently; he wants nothing to do with making EMILY a big success. On the contrary: “The more Sonja intensifies her contact with EMILY, the bigger the problems get. First, there’s data protection. Sonja confides her most intimate thoughts and feelings to the bot. The data sits in the cloud, and it’s not clear who has access to it. If this data fell into the wrong hands, it could be misused to manipulate or surveil Sonja! Beyond that, I’m seriously worried about the effect on her relationships with real people. EMILY was only ever meant to be a support, not a substitute friend. It was never my intention for a computer program to take the place of real people. We have to make sure the bot is used properly and doesn’t harm people’s lives. The danger is that she gets used to meeting all her emotional needs through EMILY and neglects real human closeness in the process. In the long run, that can lead to even deeper loneliness.”

Ingrid tries to calm him down: “We do have a monitoring department. If negative effects on behavior or software bugs are detected, EMILY will be deactivated.”

Andreas grows even more agitated: “Abruptly shutting down the bot would be a huge shock for people like Sonja and could do more harm than good. Sonja has grown accustomed to the bot’s constant availability and understanding, and that’s a kind of support no real person can always provide. We’re becoming dependent on the technology – dependent in the sense of a drug addiction!”

Ingrid counters: “You’re seeing all of this far too critically. Even a new gadget loses its appeal eventually. I don’t think EMILY will crowd out other friendships forever. And your concerns about data protection are justified, but you can never be sure that secrets you’ve confided to a human being won’t get gossiped around either.”

“Hrmpf. You don’t understand me. Right now I wish I’d told all of this to EMILY instead!” To which Ingrid replies tartly: “Oh, I know what EMILY would have said: ‘That is very interesting. Please tell me more about it!’”

Questions:

  1. What intimate thoughts and feelings might Sonja confide to the bot, and what consequences would the publication of this data have?
  2. How might interacting with EMILY (possibly even exclusively) affect her relationships with real people?
  3. If interpersonal touch and physical closeness are missing, what effects could that have on a person’s emotional and psychological equilibrium?
  4. What responsibility do the developers of such a bot bear for ensuring its stability, security, and reliability?
  5. What could happen to a user of the friendship bot if the developers fail to pay attention to privacy and data protection?
  6. What consequences would shutting down EMILY have for Sonja? How could her transition back into “real” society be made easier?
  7. Ingrid believes that EMILY can be an important source of support for people like Sonja by encouraging them and cheering them up. What do you think?
  8. Constant encouragement isn’t good either; sometimes people need criticism, too. Can a friendship bot deliver criticism credibly?
  9. What effects might the mass deployment of friendship bots have on society at large?
  10. A friendship bot like this could detect psychological problems early and perhaps suggest suitable counseling hotlines. Couldn’t that also be an opportunity?

Published in .inf 08, Das Informatik-Magazin, Winter 2024, https://inf.gi.de/08/meine-beste-freundin-emily

Scenario: A Thankless Commission

Debora Weber-Wulff

Luise has been pursuing her doctorate in environmental informatics under Prof. Holzmann for two years. One morning, Prof. Holzmann bursts into Luise’s office. An important expert report has to be prepared for the ministry, and on very short notice: in just four weeks. She herself has no time at the moment. Could Luise write a draft? The topic is hardly unfamiliar to her, since it’s a subfield of her dissertation, and the recommendation the report is expected to make is obvious.

Luise throws herself into the work: researching, reading, excerpting, and writing, usually until well past midnight. Fueled by plenty of ginger tea and fair-trade chocolate, she thoroughly enjoys the writing, because she is looking forward to her first publication as a co-author. Finally, she asks her boyfriend Max, a humanities scholar looking for work, to revise the text and polish the language. Despite all the haste, she is proud of the 35 pages she has produced, and Prof. Holzmann is more than satisfied: “Fine work!”

Luise continues with her doctorate and, about a year later, stumbles across the report online. The text looks very familiar, but only Prof. Holzmann and her institute are named as authors, not Luise. She isn’t mentioned even in a footnote; there is no acknowledgment. Furious, she storms into Holzmann’s office and asks why she isn’t named. Prof. Holzmann replies that, according to the guidelines, the report was supposed to come from a professor, not from a research assistant. But her groundwork really was fine. “Groundwork?” Luise thinks. “I produced that on my own, with Max’s help!”

She asks Max, now her ex-boyfriend, who meanwhile works in the PR department of an oil company, for a copy of the report. Her old computer died beyond recovery a few months ago. Her dissertation texts and data were fortunately in the cloud, but unfortunately she had no copies of her other documents. Max says that in a fit of rage he deleted and threw away everything of hers, and he asks her not to contact him again.

Even without comparing it to the original version, Luise is convinced that the report published online is almost entirely her own work. Since the report’s topic fits her doctoral research well, she refers to it several times in the text of her dissertation. And in the bibliography, she lists herself as the report’s sole author.

Six months after submitting her doctoral thesis, she receives the decision: failed! Utterly stunned, she goes to her doctoral advisor and demands a fuller explanation. Prof. Holzmann points to a highlighted entry in the dissertation’s bibliography. “Why is your name there and not mine? My institute, my report! What’s more, your dissertation is riddled with spelling errors. What a bloated piece of writing: more than half the titles in the bibliography are never referenced in the text at all. Current publications are barely taken into account; instead, mostly outdated versions of texts are cited. And the results? Trivial and not worthy of a doctorate – mere science journalism.” Prof. Holzmann also summarizes the other reviewers’ reports on the dissertation: her colleagues see it much the same way; one of them was mainly incensed by all the needless gender-inclusive language and Denglish. The vote was unanimous. After all, the institute conducts serious basic research.

Luise turns to the university’s ombudsman, Prof. em. Eule, and pours out her troubles. “Hm, difficult. And truly not the first time,” he says. Once again, he has to deal with a case involving his former student Holzmann. “Can you please show me your version of the report? Then we could investigate whether a misappropriation of authorship has taken place.” Luise explains her data loss. “No backup? Then I’m afraid there is nothing I can do for you. And I cannot intervene in the evaluation of a dissertation. That is up to the review committee, which, in this matter, is incidentally of one mind.”

Questions:

  1. Did Prof. Holzmann commit scientific misconduct?
  2. Did Luise commit scientific misconduct?
  3. What can Luise do now?
  4. How should Max’s favor for a friend be judged?
    1. Was Luise allowed to involve Max in the report project?
    2. Was Max even allowed to read the text?
    3. And what if an AI had revised the text of the report instead of Max?
  5. Is the ombudsman facing a conflict of interest in this matter?
  6. Should doctoral candidates be required to back up their data?
  7. How could the problem of the misappropriation of authorship be addressed?

Published in .inf 07, Das Informatik-Magazin, Fall 2024, https://inf.gi.de/07/gewissensbits-eine-undankbare-auftragsarbeit


Scenario: Energy-intensive Energy Saving?

Christina B. Class, Carsten Trinitis, & Nikolas Becker

Lisa and Martin met in college and have been inseparable ever since. Both majored in computer science in teacher education programs; Martin’s minor was biology, and Lisa’s was physics. Now, five years later, they work together for the start-up SchoolWithFun. The company was founded by Andreas, a mutual friend from college, and specializes in developing digital curricular materials for high schools. The materials are developed and designed for classroom use; for the natural sciences, lesson plans include simulations. Everything is optimized for tablets that the school provides for its students.

From the beginning, the founders of SchoolWithFun have prioritized keeping the ecological footprint of their activities to a minimum. So they rented office space in a newly constructed building (ÖkOffice) equipped with energy-efficient smart home technology. In addition to energy-efficient insulation, triple-pane windows, and solar panels for heating and hot water, the building is decked out with all sorts of smart technologies that can be conveniently controlled from cellphones using a customized mobile app.

This enables them to control the heating and lights, which are activated automatically only when people are in the building. The system can be fine-tuned as needed so that every device connected to an electrical outlet is powered only when it’s in use. Rent on SchoolWithFun’s office space is paid in cryptocurrency because it’s more convenient for the owner—ÖkOffice, a conglomerate with office-space holdings worldwide. ÖkOffice has a vested interest in promoting digital currencies because they guarantee anonymity, just like cash.

Lisa and Martin are both vegetarians, mostly vegan, and they try to minimize their ecological footprint in their private lives as well. In keeping with this, they attend the weekly demonstrations of the Fridays for Future movement whenever possible. After last Friday’s demonstration, they get to talking with a representative of BUND, a nature conservancy organization. He hands them a flyer about energy consumption and digital technologies, specifically smart homes and the Internet of Things (IoT). The flyer cites a study conducted by BUND and co-sponsored by the German Ministry of Environmental Protection: “Networking existing products can lead to a considerable increase in the consumption of energy and resources. Across Europe, this could amount to as much as 70.9 TWh of additional energy consumption per year, up to 26 kWh per appliance. This is largely due to standby consumption in networked standby mode.” [1]

The flyer piques their interest, so they download the full study. They want to find out how much of this applies to their supposedly resource-saving office, where every energy-consuming appliance has to be networked and permanently in receive mode so that power-on hours can be minimized.

While sitting on their balcony having a beer, Lisa and Martin begin calculating the energy consumption of their office space based on data they find online. The numbers are horrific; they look at each other in shock. Should they bring it up with Andreas? He was so happy to find office space with ÖkOffice. Suddenly, Martin claps his hand to his head and says, “The rent! We’re paying ÖkOffice the rent in cryptocurrency! Do we have any idea how energy-intensive that is?” They quickly pull up loads of information about cryptocurrencies and energy consumption. They stare at each other in silence…
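
The back-of-the-envelope arithmetic Lisa and Martin run through on the balcony is easy to reproduce. The following minimal sketch shows the shape of such an estimate; the device counts and standby wattages are illustrative assumptions, not figures from the scenario or from the cited study:

```python
# Back-of-envelope estimate of annual networked-standby consumption.
# All device counts and wattages below are illustrative assumptions.

HOURS_PER_YEAR = 24 * 365

devices = [
    # (device type, count, standby draw in watts while in receive mode)
    ("smart light fixture", 40, 2.0),
    ("smart power outlet",  60, 1.5),
    ("heating actuator",    20, 1.0),
    ("sensor/control hub",   5, 5.0),
]

total_kwh = sum(
    count * watts * HOURS_PER_YEAR / 1000  # watt-hours -> kilowatt-hours
    for _, count, watts in devices
)
device_count = sum(count for _, count, _ in devices)

print(f"{total_kwh:.0f} kWh/year in total, "
      f"{total_kwh / device_count:.1f} kWh/year per device")
# With these assumptions: ~1883 kWh/year, ~15 kWh per device -- the same
# order of magnitude as the flyer's "up to 26 kWh per appliance."
```

Even modest per-device standby draws add up once every outlet, light, and actuator sits in networked receive mode around the clock, which is exactly what shocks the two of them.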

A year later, the two are back on their balcony, talking about the smart home equipment at their office. Last year, they decided to raise only the cryptocurrency problem with Andreas. Today, though, the topic of smart home technology has come up at the office. Andreas happened to find out that “EasySmart,” the Chinese manufacturer of their components, went bankrupt a month ago. So, contrary to what the marketing brochures promised, there is no guarantee that their devices will receive the necessary security updates in the coming years.

Now Andreas is concerned about the security of SchoolWithFun’s IT. He plans to replace all their smart home components with new models from a different manufacturer next week. Lisa, drawing on her extensive exposure to IT security in college, agrees with Andreas’s plan. At all costs, they must prevent hackers from gaining access to their internal network, where, among other things, all their clients’ data is stored. But Martin disagrees. He thinks replacing devices that are scarcely a year old will only add to the already out-of-control mountains of electronic waste. Besides, who would want to hack into the system of their small business? It’s not so urgent that all the equipment needs replacing. After the two have argued about it for over an hour, Lisa gives up. She pours herself another beer. “This smart home stuff has been nothing but trouble,” she sighs.

References:

1. Hintemann, R. & Hinterholzer, S. (2018). Smarte Rahmenbedingungen für Energie- und Ressourceneinsparungen bei vernetzten Haushaltsprodukten. Borderstep Institut für Innovation und Nachhaltigkeit gGmbH, Berlin (sponsored by the German Federal Environmental Protection Agency and the Federal Ministry for the Environment, Nature Conservation, and Nuclear Safety).

Questions:

  1. Electronic devices consume energy and resources not only when they are in use but also when they are produced. How much thought are users expected to put into this idea?

  2. The smart home promises to save energy by only turning on the heat or lights as needed. At the same time, though, the smart home depends on sensors and other control elements powered by electricity. How should a cost-benefit analysis be made? Who should be involved?

  3. New versions of operating systems and software with enhanced performance and memory requirements make it necessary to purchase new devices even though the old ones are still functional. Are there things we can do to counteract this? How can we preserve resources? How much do our economic cycles depend on the regular replacement of devices? Are there alternatives that also guarantee the retention of jobs along with longer product life cycles?

  4. Smart home appliances are mini-computers. Like smartphones and laptops, they require periodic software updates. But whose job is it to ensure these appliances are operating with the latest software versions? How much trust can we place in the manufacturers?

  5. Digital currencies are practical because they facilitate anonymous payment. But they also facilitate tax fraud. Is that a reason to ban them?

  6. Here is a press release about the energy consumption involved with the cryptocurrency Bitcoin: “Zum Energieverbrauch der Kryptowährung Bitcoin”: https://www.tum.de/nc/die-tum/aktuelles/pressemitteilungen/details/35498/. Considering how much energy every cryptocurrency transaction consumes, is it reasonable to use cryptocurrencies for the sake of convenience in the private sphere? What should take priority here: convenience or energy efficiency?

Published in Informatik Spektrum 43(2), 2020, pp. 159–161, doi: 10.1007/s00287-020-01256-5

Translated from German by Lillian M. Banks

Scenario: Becoming an Influencer – Quick Bucks on the Backs of Followers?

Gudrun Schiedermeier, Carsten Trinitis, & Franziska Gräfe

Ben’s a kid who likes attending school. It’s where he gets to see most of his friends—especially Chris, Lisa, and Emma. He hangs out with them outside of school, too. In their free time, they get into sports: jogging, swimming, biking, and skiing—depending on the season. Now that the school’s hired a new teacher for economics and PE, taking gym classes at school is okay, too.

For the next two weeks, though, the focus is on gaining work experience through internships. Lisa and Chris have decided to devote their time to social services, in a kindergarten and a retirement home, while Ben couldn’t quite make up his mind at first. Emma has found a slot at an auto repair shop—her dream job, as it were. Ben gets his wish, too: he can put his media skills to the test at a marketing agency owned by a good friend of his father’s. Of course, he mainly just hopes he won’t make a fool of himself, but if all goes well, he’ll learn something from the pros working there.

Computers have fascinated him from the very beginning. And it’s no wonder—his father is just as geeky: not only have they played many computer games together, but his father also taught him his first programming skills and bought him a computer. Ben enrolled in several computer classes at school to expand his knowledge. One thing he thought was cool was programming Lego robots. He occasionally made video snippets of his hobbies, which went over well with his friends and classmates. Before long, he found himself specializing in creating and editing video and audio content.

His internship gets off to a good start. The agency makes promotional videos for all sorts of products—from cosmetics and chairs to bikes and fashion. Within days, Ben has been brought up to speed and can lend meaningful support to the agency’s professionals. He’s having a lot of fun, and the staff is impressed by his ideas and his competence with the key video and audio editing programs. The two weeks fly by, and he feels he might like to enter this business someday. But the opportunity presents itself sooner than he anticipated. At the end of the internship, the CEO asks him whether he would consider making promotional videos for a fitness apparel line after school. The CEO would give him the sportswear, and he could keep it—beyond that, he wouldn’t be able to pay him any money at first. But that would come soon enough once sales and followers increased. The gear isn’t sold in stores, only online; Ben’s videos would undoubtedly make the brand more marketable. After all, his social media following is made up of precisely the manufacturer’s target clientele.

Ben jumps at the opportunity and spends the following weeks immersed in creating promotional videos for the sportswear, posting the short clips online. He’s not short of ideas—his mind is on this even during class, where he outlines sketches in his head. Emma and Chris are thrilled with the clips, and his fan base grows rapidly. Initially, it was just his classmates and their friends, but now he’s getting feedback from total strangers. Secretly, he’s a little proud that his video clips are so well received and that, as an influencer, he can help promote the athletic clothing line. And he doesn’t mind having a little more money in his pocket, either.

He’s spending more and more time making videos. Of course, this doesn’t go unnoticed by his friends, for whom he has less and less time. His schoolwork also takes a serious hit. Lisa, in particular, takes note of it.

She brings it up one morning after he falls asleep in class again because he was up half the night putting together a video. She asks how much money he’s actually making off this marketing gig and guilt-trips him: it’s rather short-sighted to neglect his studies for so much effort and so little financial reward. Besides, he might want to consider actually using the sportswear he’s so busy promoting: he’s been working so much that he hasn’t had a chance to do any sports. Then he could see for himself how worthless the gear is. It looks like crap after only a few washes. It’s not worth the money—you can get better goods for less at the Sports4You store down the street, where you can at least feel the material and try things on. She’s fallen for influencers herself before and bought expensive makeup that was no good.

Ben is shocked at first and defends his work. Lisa’s got it all wrong, and the clothing he’s marketing is top-notch. Emma and Chris back him up and encourage him to keep doing what he’s doing. Before long, though, it dawns on him that he’s somehow being taken advantage of. And now that he’s gone jogging and cycling in the gear a few times, he’s also noticed that it looks awful after a few washes. He feels guilty for selling his friends a bill of goods and wonders how to back out of this thing with his integrity intact.

Still worried about him, Lisa turns to their economics teacher for advice. The teacher is immediately open to introducing questions about marketing, second-rate goods, and the responsibility of being an influencer into her lesson plans. A project on “Influencer Marketing” quickly takes shape. She has students form small groups, introduce their favorite content creators, and report on their experiences, both positive and negative, with products purchased at the urging of influencers. Then, using influencers selected by the students as examples, she explains that these role models work with online companies and earn tremendous amounts of money through product placement and advertising links embedded in their images and stories. This income comes not least at the expense of their followers, because the products are usually second-rate and almost always overpriced. Ben sees parallels to the approach the agency he interned with has encouraged him to take. His discomfort becomes increasingly apparent to the teacher as the lesson continues. At the end of class, she tries to talk to him and offers her help in clarifying the matter.

Questions:

  1. How real is “reality”? To what extent do the images and reels on social media accurately reflect it?

  2. Is there a moral argument to be made for promoting an inferior product to make money?

  3. Is there any justification for ad agencies to target young people specifically for this purpose?

  4. What do you make of the fact that companies exploit young people to create targeted ad campaigns like this, even at the expense of their academic performance?

  5. Shouldn’t young students be permitted to turn their digital competencies into a bit of pocket money?

  6. To what extent does money matter more than morality, not only to the influencers but also to the marketing companies?

  7. Should the school continue to advocate or even promote internships with this marketing company?

Published in Informatik Spektrum 45(4), 2022, pp. 262–264, doi: https://doi.org/10.1007/s00287-022-01463-2

Translated from German by Lillian M. Banks

Scenario: Planning

Debora Weber-Wulff & Christina B. Class

Michaela works at the local police station in Neustatt. Due to decreasing tax revenues and cuts in federal subsidies, significantly less funding will be available for police work in the coming years. They probably won’t be able to fill positions lost to natural attrition, such as retirements. A complete overhaul of the police presence in the city is needed. It’s a significant problem because petty crime, thefts, and burglaries have risen sharply of late. After several tourists were robbed while visiting the famous city center and the Museum of Modern Art, the tourist board and the hotel and restaurant industry are putting additional pressure on the police department and calling for an increased police presence.

Michaela’s friend Sandra is involved in an interdisciplinary research project that combines data mining and artificial intelligence methods with sociological findings to develop a new approach to urban development. Current results suggest that the prototype they have developed is better at predicting criminal activity in certain areas. So, as the second phase of the project gets underway, there is a proposal on the table to include additional municipalities in order to test the premise with detailed, albeit anonymized, information on crimes and offenders.

Michaela makes arrangements for Sandra to meet with the mayor at his office. The mayor listens carefully to what Sandra says and is interested in working together on the project. He invites Sandra to the next city council meeting. After Sandra’s presentation, a heated discussion ensues. Some council members raise concerns about data protection. Sandra explains the measures being taken to anonymize and protect personal data. Peter objects that members of small, select groups would still be identifiable in practice. Anton then intervenes, saying that various factors often coalesce in connection with crime, such as poverty, unemployment, education, etc. He’s heard that the sociology professor involved in the project focuses almost exclusively on ethnic origin at the expense of many other factors—and that is discriminatory. Werner, the owner of a large catering business, advocates for a judicious use of the police force to protect people and businesses from increasing crime.

Michaela is torn. She sees the potential of this approach for making good use of scarce police resources. However, she also understands the concerns. But haven’t people always been assigned to certain categories in our society, and increasingly so since data records began to be analyzed electronically? Recently, her car insurance premiums went up because she belongs to a category of insured persons whose accident rate has risen. That, too, is unfair.

Questions:

  1. How problematic is the categorization of people in our society? Is it really that big an issue, and is it exacerbated by IT?

  2. Should communities be permitted to release sensitive personal data in anonymized form for research projects? What ethical problems might arise here? How trustworthy are methods for anonymizing data?

  3. Planning for increased police presence in certain geographic areas can protect residents from crime. But doesn’t this necessarily involve a certain degree of prejudgment of the people who live there? When is this type of prejudgment merited?

  4. Should the whole project and the prototype be scrapped just because one participant has used dubious criteria for classification?

  5. Determining which variables are relevant is a critical element of this project. How can we guarantee that prejudices do not guide this process?

  6. How can we prevent correlations from suddenly being interpreted as causal relationships? How great is the danger this presents in the context of analyzing personal data?

Published in Informatik-Spektrum 35(3), 2012, pp. 236–237

Translated from German by Lillian M. Banks

Scenario: Manipulations

Christina Class & Debora Weber-Wulff

AlgoConsult is a company that develops highly specialized computer processes for a wide range of applications. For marketing purposes, customers call these processes “algorithms,” and increasingly, this is the term used in-house, even though it’s not what they are.

With their “CompanyRate” project, the firm is building a rating system for investors in the German banking sector. The ratings index is intended to support future investment decisions. A pilot version was presented to a few selected beta customers last month, and the initial feedback has been glowing.

Various machine-learning approaches are combined to generate the ratings. The number of influencing factors is enormous: in addition to stock market prices and current market information, data on known advertising budgets, media presence, trade fair participation, market shares, etc., are used. AlgoConsult keeps its algorithms and influencing factors strictly confidential, not least to prevent manipulation of the “CompanyRate” index: a feature AlgoConsult touts on the project website.

Therefore, all project participants are carefully chosen from employees who’ve been with AlgoConsult for at least a year and who must sign project-specific NDAs. They are prohibited from investing privately in indexed companies or in funds in which these companies have a significant share in their portfolios. In exchange, the software analysts are paid handsomely.

Achim is a proud member of the “CompanyRate” core team responsible for the index. One day, as he’s leaving for lunch, he stops by to pick up his colleague Martin. They always have lunch together at the Japanese or Mexican restaurant on the first floor of the high-rise, but Martin says he’s brought something from home today. At the elevator, which can only be opened with a pass card, Achim realizes that he left his wallet in his raincoat, so he returns to the office.

When he reaches the office door, he can hear Martin—who is usually very quiet—speaking loudly on the phone. Since he’s alone in the hallway, Achim furtively eavesdrops on the conversation. He thinks he can make out that Martin is on the phone with someone at PFC, “People’s Fruit Company.” Despite its American name, PFC is a German company listed in the index. Achim clears his throat and enters the office. Martin quickly hangs up.

“My mother, she has to call me every day about something,” Martin chuckles nervously. Achim grabs his wallet and meets his colleagues from another department for lunch as usual, but he has trouble concentrating on the conversation.

Even though the project is in a lull because the index is being tested by beta customers, Martin is unusually focused and busy all afternoon. He doesn’t even seem to have time for a coffee break. He is still working when Achim drives home.

The following day, Achim sees that Martin checked in a bunch of code changes the night before, and they ran through the test suite successfully overnight.

“Geez, Martin’s been busy,” thinks Achim while looking through the logs. The boss will be pleased about the new documentation in various corners of the system. Achim decides to pull up a few of the program changes—all of which contain comments, as usual. But he wants to be sure the documented changes are, in fact, correct.

The third change throws Achim for a loop. The formulas themselves have been changed. Now, bizarrely, a value is read from a file in the cloud instead of being calculated inside the program as before. On closer inspection, he realizes that the cloud value is only applied in cases that almost exactly match PFC.
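
The pattern Achim has stumbled on can be illustrated in a few lines. This is an invented sketch, not AlgoConsult’s code; the names, numbers, and lookup mechanism are all hypothetical:

```python
# Invented sketch of the suspicious change: a rating that used to be computed
# purely from its inputs now silently prefers an externally supplied value,
# but only for one specific company profile.

CLOUD_OVERRIDES = {"PFC": 94.7}  # stands in for the file read from the cloud

def rating(company: str, indicators: list[float]) -> float:
    base = sum(indicators) / len(indicators)  # the original local calculation
    override = CLOUD_OVERRIDES.get(company)   # the newly added lookup
    if override is not None:                  # applied only on a near-exact match
        return override                       # the external value wins
    return base

print(rating("PFC",     [55.0, 60.0]))  # 94.7 -- overridden from outside
print(rating("OtherCo", [55.0, 60.0]))  # 57.5 -- computed as before
```

Buried among dozens of tidy, well-commented commits, a conditional like this is easy to overlook, which is precisely what the questions below about documentation and double-checking probe.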

Now, Achim isn’t sure what he should do. Just last week, Anne was terminated without notice, even though she is an excellent programmer. In the past few weeks, she’d seemed somewhat out of sorts and had scheduled a meeting with her boss. Afterward, security immediately escorted her to her desk to collect her belongings and showed her to the door. That was pretty shocking for the team. But to avoid being dragged into anything, everyone acted as if nothing had happened.

That evening, Achim had tried to contact Anne several times, but she’d rebuffed him. That weekend, he stopped by her place and waited outside the apartment until she left to go shopping. She snapped at him when she saw him: “Please just leave. I can’t talk!”

Achim is shaken. He’s known Anne since college. She is extremely talented and honest. He can’t imagine she’d violate the terms of the NDA or do anything that could harm AlgoConsult. Is it possible that someone else manipulated the rating system, and she reported it? Should he take the risk of telling his boss about his observations? Is it worth putting his top-notch salary on the line? As he climbs into his car, a re-broadcast of the local radio station’s economics program is airing. The press secretary for an investor group is being interviewed; he says they are developing new AI-based software designed to help small investors find good investment opportunities and make sound investment decisions.

What should Achim do?

Questions:

  1. Is there any reason to justify Achim’s eavesdropping on Martin’s conversation?

  2. How can Achim be sure that Martin was talking on the phone to PFC? Does that even matter in this scenario?

  3. Is it okay for Achim to use changes to the program to examine Martin’s work more closely? How far should this kind of “double-checking” go?

  4. What are we to make of the fact that Martin generated so many changes and additions to the internal program documentation to “conceal” the change he made to the calculation? Is it possible that Martin used the copy-paste function for comments he made to the changes? How reliable are these kinds of comments? How important is it to document programming changes precisely?

  5. Is it safe for Achim to assume this change was made deliberately to give PFC an advantage?

  6. Is it possible that the boss knows about Martin’s changes or that he instructed him to make them?

  7. Should it matter to Achim that Anne has been fired and no longer wants to talk about it?

  8. As a general rule of thumb, is it advisable to work on software systems subject to non-disclosure agreements?

  9. Rating and matching algorithms can constitute a company’s core business and central asset. These algorithms and influencing factors are, therefore, often kept secret. This is understandable from an economic point of view, but how dependent do users become on the algorithms? What opportunities for manipulation arise? What dangers can arise not only for the individual user but also for economic systems or societies?

Published in Informatik-Spektrum 39(3), 2016, pp. 247–248

Translated from German by Lillian M. Banks

Scenario: The Nightmare

Christina B. Class & Debora Weber-Wulff

Andrea, Jens, and Richard are sitting at their favorite pizza joint in Hamburg, celebrating the successful completion of software tests for their “SafeCar” project with beer and pizza. They can take the project to the DMV in Hamburg tomorrow for inspection, and with that—they’re done. They won’t have to pull an all-nighter.

They work for SmartSW GmbH in Hamburg, where they are tasked with developing a software control system for the safety component on the new car model KLU21 produced by German automaker ABC.

ABC has come under increased economic pressure of late because of its inordinate focus on internal combustion engines over electric motors. Emissions scandals surrounding diesel engines in recent years have made things increasingly difficult for the company.

KLU21 is the company’s latest energy-efficient hybrid model, which will be equipped with a new intelligent driving control system and an enhanced safety system. The new intelligent driver-assistance feature will provide drivers with real-time data. Using the latest communications technologies, ABC is hoping to repair its tarnished reputation as it re-brands itself as a state-of-the-art company and a pioneer in the field, helping it win back some of the market share it lost and hold on to some jobs.

As part of the SafeCar project, a safety system for KLU21 with artificial intelligence was developed. The software prevents the car from drifting out of its lane using information from external cameras linked to the vehicle’s location, current traffic data on road conditions, and satellite data loaded in real-time from a secure cloud. The vehicle is also aware of other cars on the road and is programmed to maintain a safe driving distance.

At the same time, a camera aimed at the driver detects when the driver’s eyes are closed, which could mean that they have nodded off or lost consciousness. An alarm will sound to wake the driver. If the system does not register an immediate reaction, the autopilot takes over and brings the vehicle to a stop at the side of the road, taking into account road conditions and traffic. To minimize the risk to all parties involved in any traffic event, the maximum driver reaction time before the software takes control of the vehicle depends on site-specific road conditions and traffic.

Andrea, Jens, and Richard suggested to the project director that the software should also detect when a driver is fiddling with their cell phone, display a warning on the dash, and sound an audible warning signal. Jens and Richard took it a step further, suggesting that the software detect when a driver is applying lipstick, whereupon the car should whistle, though not every time, only every sixth time or so. They were just kidding, and this feature was not included.

It was easy to send the images over the 5G network to the cloud cluster, where they could be analyzed for recognizable patterns. However, detecting drivers nodding off or losing consciousness as accurately as possible required training the program on mountains of data compiled specifically for that purpose. Recognizing someone using a cell phone or applying lipstick, by contrast, was child’s play, and ample “training data” (images and videos) was readily available at no cost on the Internet.

In the event of a mechanical breakdown, or if the driver loses consciousness or is involved in an accident, an emergency call is automatically placed to the police with the relevant data about the car, the type of breakdown or accident, whether an ambulance is needed, and the vehicle’s exact location.

After all the overtime they’ve put in, Andrea, Jens, and Richard are having a good time celebrating the successful completion of the tests. After many successful in-house tests, they’ve spent the last two days driving through city streets and the greater metropolitan area with a system installed on the passenger side, logging all the information and commands from the system. Results show that the system accurately recognized every situation it encountered. Their boss has already congratulated them on a job well done. They’re hyped and sit around chatting until well past midnight.

After cabbing it home, Andrea is suddenly thirsty, so she plops down on the couch with a bottle of mineral water and turns on the TV. There’s a rerun of a talk show airing a segment about the current status of the 5G network expansion in Germany—a topic that’s been in the news for months. Once again, the discussion turns to how many dead spots in rural areas have yet to receive coverage. To top things off, rules to regulate national roaming fees remain inadequate.

One corporate executive claims that once the 5G rollout is complete, only a few dead spots will remain. But a viewer vehemently insists that even with the new network, she gets reception only in her garden. Her new neighbor, who has a different network provider, has to go to the cemetery at the edge of town to get a signal at all, because there is no roaming mandate. A heated discussion ensues….

Andrea is exhausted. She leans back. Her eyes fall shut … She’s driving a car along a winding road in foggy weather. Visibility is next to zero. She’s glad she’s driving a KLU21; it helps her to stay in the right-hand lane. But suddenly she notices that the car’s steering assistance has stopped working. She is unprepared for this, and before she can respond, her car leaves the road and crashes sideways into a tree. She is thrown forward and hits the airbag at an awkward angle. She passes out.

She can see herself sitting unconscious in her car. Help is on the way; luckily, KLU21 has placed an emergency call. She looks around the car: the emergency light isn’t blinking; why hasn’t an emergency call been made? She’s bleeding! Why isn’t anyone coming to help? Suddenly, she knows! She’s in a not spot … The steering assistance can’t function as it should without real-time data, nor can the emergency call center be notified …

“Hello? Can anybody hear me? Why aren’t there any cars on the road here? Why isn’t anyone coming to help? Help, I’m bleeding! I’m bleeding to death!”

Andrea wakes up in a cold sweat! It takes her a moment to calm down and realize that it was just a dream, that she is at home in her apartment, and that she is okay.

Then she remembers today’s software tests in and around the city and the talk show with its complaints about inadequate network coverage in some rural areas. Roads run through those areas, too. Why didn’t it occur to them while they were developing the software for KLU21? Why didn’t they ever bring it up with their client? Her aunt lives in Sankt Peter-Neustadt and has often complained about the poor connectivity. She needs to talk to her boss …

Questions:

  1. Designing tests is very difficult. In this example, the developers themselves were heavily involved in the tests. Is this justified? What guarantee do we have that the tests won’t be manipulated to produce inaccurate results? How strictly should test requirements depend on the safety relevance of the software? How heavily do production deadlines and economic pressures influence the implementation of software tests?

  2. No amount of testing can guarantee that any software product is free from error. It’s even harder to prove that the software won’t do something it hasn’t been asked to do. In this scenario, a so-called “Easter egg” was to be included—the lipstick recognition feature—but not activated. What dangers might be involved in this kind of “Easter egg”? Does it matter if the developers have programmed the Easter egg during their “free time”?

  3. This scenario involves a software application that is critical to the safety system and was trained using training data. How can the quality of training data be guaranteed? What guidelines should/could be set in this regard? When and under what conditions should a system be forced to be re-tested using new data? What form should such new testing take once the system has been fed new training data? Do all the tests that have previously run need to be repeated? Should the same people responsible for defining the training data also be involved in developing tests? If not, why not?

  4. The internet is filled with images and videos that are supposedly “free.” Can these materials be used to generate training data? If not, why not? If so, should any restrictions be placed on the types of things they can be used for?

  5. Open data (readily accessible, application-specific data sets) exist for specific applications, such as data mining and AI contests. Is it permissible to use these data sets for other purposes/research questions? What requirements must be placed on the documentation of these datasets, specifically concerning data source, data selection, and conclusions drawn? How great is the likelihood that false conclusions will be drawn when using these data types?

  6. In current discussions about 5G, there is often talk about the need for nationwide coverage and national roaming—how important is this? Is it reasonable to create applications that are critical to safety before there is any guarantee that infrastructure is available nationwide? Would it be reasonable to write applications strictly for specific locations—for use in the city, for example? What kind of compensation is due if a user relocates and can no longer use the system?

  7. Many areas today are already adversely impacted by “not spots” (areas with no reception), for example, places where rescue teams or the police can’t be reached by cell phone, or where emergency services cannot contact the call center. To what extent is political intervention needed to force regulation? How great is the digital divide in Germany? As an industrialized nation, can we afford to let this stand, or even worsen, especially now with the expansion of the 5G network?

Published in Informatik Spektrum 42(3), 2019, pp. 215–217, doi: 10.1007/s00287-019-01171-4

Translated from German by Lillian M. Banks

Scenario: The Analog/Digital Divide

Debora Weber-Wulff & Stefan Ullrich

Matthias and Melanie have known each other since college. While Matthias was studying medicine, Melanie was studying computer science. They met while working on university committees and soon moved in together. After graduation, Matthias took a job in public health for altruistic reasons. Melanie has an excellent job as a software engineer, so money is not a problem for them.

Since the start of the coronavirus pandemic, Melanie has been working from home, but Matthias has increasingly been called into the office. With all the attendant paperwork, the sheer volume of cases is growing exponentially. Matthias does his best, but many things just don’t get done. He’s totally frustrated by the slow pace of everything: people should be able to find out quickly whether or not they are infected. He’s even been known to help out the medical and technical assistants, spending several hours at a time clad in protective gear collecting samples. He and the whole team must be meticulous in filling out the forms legibly. And it must be done by hand in blue ink—only the QR code on the sticker they affix to the top is digitized. The samples are sent to various laboratories, but it takes days, sometimes weeks, before results are processed and the chain of infection can be traced.

One evening, while sitting on the balcony sharing a glass of wine, Matthias complains about how slowly everything is going. Melanie can’t understand the problem: why does everything take so long? The testing itself only takes a few hours, and the labs are now working three shifts a day.

“What’s the problem?” she asks.

“Unfortunately, I can’t tell you exactly what steps are involved because everything we do at the office is strictly confidential.” His department head has repeatedly told him: “What happens at the office stays at the office.”

“Don’t be silly,” says Melanie. “I don’t care who’s been tested or what the results are. I’m just trying to understand what’s going on at your office and why it takes so long to get results. Process optimization is my specialty. Maybe I can help.” Matthias pours himself another glass of wine and starts talking. “Don’t laugh,” he says, “and please don’t tweet this out. Each lab has its own paperwork to complete for orders and results. We rotate labs regularly to distribute the workload more evenly. And since we don’t have the equipment to do anything digitally on-site, we fill out all the forms manually—handwritten in blue ink, so the administration can more easily distinguish the original from black-and-white copies. Samples and forms are placed in a post box, and when it’s full, a courier picks them up and delivers them to the lab.”

“Well, that’s a good stopgap solution. Of course, digitizing everything from the get-go would be better, but okay. That takes time. What does the lab do with the results?” asks Melanie.

“They type the data from the requisition form into their system for tracking test results and billing, then enter the results and fax them back to us here at the health department.”

“They FAX them!?” Melanie snorts. “Ooooo-kay, then. It sounds like we’re getting closer to the bottom of it!” she chuckles.

“It’s not as dumb as it sounds. We deal with confidential personal data that shouldn’t be sent online.”

“True, but you can overdo it. I can see how no one in your office would want to take on setting up an encryption infrastructure, especially not one designed for external communications. Fair enough—you should still be able to get the results the next day … by fax.”

“No, it’s not that simple. The computer we use to receive faxes saves them all with the filename ‘Telefax.pdf.’ At least they’re numbered sequentially, but we still end up with several thousand files with meaningless names—a thousand each day! So, someone has to sit there renaming the files so we can file them in the proper folders. They have to open each file, check its reference number, see whether it’s positive or negative, and then save the file in a different directory, depending on the result. You can imagine that mistakes sometimes happen during the manual renaming or sorting, and then we have to contact the lab to fix it. Only now that we’re processing so many tests daily have we realized how much of a time-suck manual data processing is. Here,” he pauses for a moment, “everything’s falling apart.”

Melanie can scarcely believe her ears. “You can’t be serious! What kind of computer are you using, and with what software?”

“I don’t know exactly, but we can’t connect it to the internet because its operating system is so old. But it’s good enough for receiving faxes—just not when we’re in the middle of a pandemic!” Matthias replies.

Melanie shakes her head. “Why don’t you just go to MegaMarkt and pick up a fax machine? The new ones can send files straight to your PCs, even if you work from home. And pick up some OCR software while you’re at it. That way, you can read the results and file them where they belong, all at the same time! Unlike humans, machines don’t make mistakes!”

“Great, but we aren’t allowed to use such equipment. Everything has to go through procurement and the IT department—it must be some data protection thing—I don’t know what it’s called. Then, there’s the bit about protecting patient health information. And everyone has to go through the necessary training. It takes at least six months—if not more—for anything like that to go through!”

Melanie gulps down another big swig of wine and says: “Tell you what. We will buy one of those machines, and I’ll set it up over the weekend. My fees are too high for you to pay, but I’d be willing to do it for free because I want these tests done faster! You have to get tested regularly, too, and I want you to know whether you’ve been infected as soon as possible. What do you say?”

Matthias is unsure whether he should accept the offer. The way Melanie tells it, it’s tempting and sounds easy enough.
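
The automation Melanie has in mind is modest in scope. Here is a minimal sketch of the renaming-and-sorting step she describes; the filename pattern, the reference-number format, and the folder layout are invented for illustration, not the health department’s real conventions:

```python
# Invented sketch: rename incoming fax PDFs by reference number and sort them
# into folders by result, replacing the manual open-check-rename routine.
import re
import shutil
from pathlib import Path

INBOX = Path("faxes")  # where the fax software drops its "Telefax*.pdf" files
OUT = {"positive": Path("results/positive"),
       "negative": Path("results/negative")}

def ocr_text(pdf: Path) -> str:
    """Stand-in for real OCR output, assumed here as a text file per PDF."""
    return pdf.with_suffix(".txt").read_text(encoding="utf-8")

for pdf in sorted(INBOX.glob("Telefax*.pdf")):
    text = ocr_text(pdf)
    match = re.search(r"Ref\.\s*(\d{8})", text)  # hypothetical reference format
    result = "positive" if "POSITIV" in text else "negative"
    if match:  # unreadable faxes stay in the inbox for a human to inspect
        OUT[result].mkdir(parents=True, exist_ok=True)
        shutil.move(str(pdf), OUT[result] / f"{match.group(1)}.pdf")
```

Note that this merely relocates the source of error from typing to OCR: a misread digit would misfile a result just as silently as a human mix-up, which is what question 5 below asks about.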

Questions:

1. Is it necessary to keep standard operating procedures at the health department confidential?

2. Might there be a moral reason to violate confidentiality under certain circumstances?

3. Was it okay for Matthias to tell Melanie what he did about the office procedures at the health department?

4. Is there anything wrong with the laboratory faxing the results to the health department?

5. Morally, would it make a difference if the OCR software made a mistake instead of a human mislabeling something?

6. Is Melanie being too careless with networked devices?

7. Matthias has voluntarily committed to working extra hours at the expense of family time together. How should this be assessed in moral terms?

8. In most cities, the lab only notifies the health department about positive cases. Does this change your assessment of the situation?

Published in Informatik Spektrum 43(5), 2020, pp. 352–353, doi: https://doi.org/10.1007/s00287-020-01308-w

Translated from German by Lillian M. Banks

Scenario: But the Bot Said…

Constanze Kurz & Debora Weber-Wulff

Chris and Rose work in a robotics team at a mid-sized toy manufacturer. In keeping with contemporary trends, the company has been expanding its online electronic gaming capacities for the past several years. Chris and Rose belong to a small group of employees who design and construct animated stuffed toys that are explicitly marketed—though not exclusively—to children.

Most of the animals resemble caterpillars or worms because it’s easier and more efficient to build robots for autonomous movement that way, and it reduces the risk of injury to small children. The toys became a commercial hit not only because they are soft and cuddly but also because they are interactive and can talk and sing. As an added feature, a built-in acoustic monitoring function connects to a smartphone—the parents’ phone, for example. Whenever you leave the room, the stuffed animals act as unobtrusive baby monitors.

Currently, Chris is working on a new version of an animated caterpillar whose software will feature new forms of interaction. Parents will be able to upload an ever-increasing number of educational games, quizzes, and puzzles from their computers or smartphones. Adaptive speech recognition specially tailored to children will process answers entered using large buttons attached to the caterpillar’s body.

Rose tests each new version of the robotic caterpillars. Her focus is on safety—that is, guaranteeing that none of the caterpillars’ motions pose any danger. They must not crawl too quickly. They can sense when they are being picked up, and their movements adapt accordingly. The results are exceptional: there’s nothing remotely dangerous about the product. Children in the test group could scarcely take their hands off the colorful stuffed animals.

But Rose discovers a problem that she doesn’t mention to Chris or her boss at first; instead, she shrugs it off as a quirk. In a series of new puzzle games, the software spews out nonsense instead of the correct answers. Rose chuckles the first time she hears one of the children say that a penguin isn’t a bird but rather a breed of dog. When she asks the girl about it, the kid insists: “But that’s what ‘Wurmi’ said.” The little girl is devastated when Rose tells her that’s flat-out wrong.

But as the number of incorrect answers mounts, Rose puts the problem on the team meeting agenda and asks who is responsible for fact-checking the robots’ answers. Ample research has long since established that children place a lot of trust in their electronic pals.

Chris is slightly annoyed and responds that they deliberately chose a software provider certified for creating children’s toys. The provider developed an artificial intelligence program specifically tailored to generate hundreds of quiz questions, which have also been automatically translated into scores of different languages. So there’s nothing they can do about it now. What does Rose expect them to do? Go through and listen to every single quiz question individually? Not a chance!

Rose fires back: “But the parents can load updates to their phones, so making corrections shouldn’t be a problem.” Chris argues that the game software isn’t even their area of expertise—“we’re just supposed to make the hardware for these robots and tend to the locomotion—making sure the robots are programmed to make the right movements.”

Rose is confounded by this degree of ignorance—after all, they are dealing with very young children. So, she takes another stab at intervening. Rose’s boss, Anne, announces that they’ll check the games out, nipping the ensuing debate in the bud. Rose has no idea what that means—only that the topic has been shelved.

So Rose decides to look into this certified developer of children’s game software herself. She wants to know how the questions are generated. At a meetup, she befriends Henri, a guy who works for the company. Henri has no qualms telling her that they don’t use AI to generate the questions at all; instead, they draw on an open-source knowledge base.

Rose looks into it and is shocked to discover that someone has made an entry stating that penguins are a breed of dog. Anyone can enter whatever nonsense they like, and no one checks the content for accuracy. On a whim, she changes the superclass from “dog” to “cat,” knowing that the software will be updated the following week. She wants to see whether the change sticks.
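
Open knowledge bases of the kind Henri describes typically store such facts as subject–predicate–object triples, which is why a single unreviewed edit changes the answer every downstream quiz gives. A toy illustration with invented data:

```python
# Toy illustration of an editable knowledge base as subject-predicate-object
# triples; the data and the weekly update cycle are invented.
kb = {("penguin", "subclass_of"): "dog"}  # the vandalized entry Rose finds

def quiz_answer(animal: str) -> str:
    return kb[(animal, "subclass_of")]

print(quiz_answer("penguin"))  # "dog" -- what Wurmi has been telling children

# Rose's one-line edit; next week's software update ships whatever is stored.
kb[("penguin", "subclass_of")] = "cat"
print(quiz_answer("penguin"))  # "cat" -- the change sticks, still wrong
```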

The next time this question comes up—there it is: Wurmi spits out “cat” as the correct answer. What should Rose do? Wurmi’s sales are through the roof, and the team is already working on the next project.

Questions:

  1. Is there an ethical issue with Chris’s job of manufacturing and programming a children’s robot whose software is delivered by a third party? Can you distinguish between the children’s robot and its gaming software?
  2. Does it matter that Chris doesn’t even know precisely what kind of software will be installed for his robots? Is he obligated to find out more about it?
  3. Is it ethical to neglect to consider the inexperience and naivete of young children?
  4. Should Rose have immediately reported the problem involving incorrect answers? She initially dismisses it as a quirk and only follows up later. Is that a problem?
  5. Is Rose out of line for asking questions and pressuring the team? After all, the software is entirely out of her lane.
  6. Shouldn’t Rose have taken Anne at her word? Was it okay for her to have done her own research?
  7. Was it okay for Rose to make friends with Henri to extract information about his company?
  8. Wouldn’t it have been better if Rose had at least entered “Bird” instead of “Cat” into the knowledge base? As it stands, she only succeeded in allowing the nonsense to continue.
  9. Shouldn’t open-source knowledge bases check their contents for accuracy? Is that even possible?
  10. Should special care be applied to systems designed for children? Could it be that inaccurate information can have a sort of “negative formative effect”?
  11. Other media for children have editors who are responsible for fact-checking content. Shouldn’t this also be the case here? We wouldn’t entrust the content of children’s TV programming and textbooks to some anonymous source; why would we allow that in this case?
  12. How can the quality of educational toys/games be regulated? Should the hardware—like this caterpillar bot—be equipped with open ports to allow everyone to load their own educational programs? Children also develop “emotional attachments” to their stuffed animals and toy robots. How great is the risk that children might be indoctrinated to hold racist beliefs, for example, or conspiracy theories?
  13. Adding a supplemental audio monitoring function that works with a smartphone may seem like a practical aid for childcare and risk prevention in the home. But when children take the stuffed-animal bot caterpillar to a daycare center or a friend’s home, it can quickly turn into a listening device that, unlike a baby monitor, is not recognizable as such. How should this conflict be handled?

Published in Informatik Spektrum 45(2), 2022, pp. 121–122, doi: https://doi.org/10.1007/s00287-022-01441-8

Translated from German by Lillian M. Banks