
Lunch Talks to Watch Later

Couldn't make it to all of the DATEN MACHT RECHT lunch talks? No problem: you can catch up on three of them in your next lunch break:

HASS DIGITAL
Discriminatory statements on social platforms are directed in particular against women, LGBTQIA* people and BIPoC. Digital hate speech now receives media as well as political attention. Yet the hate remains. In this panel we ask: Where are the blind spots in dealing with digital hate speech? Can the European Digital Services Act illuminate them? What roles can algorithms play in tackling hate speech?
WITH: Alexandra Geese, Dr. Berit Völzmann and Francesca Schmidt



FLEEING DATA PROTECTION
Data-based technologies are by now everyday tools at Europe’s external borders and in migration management (cf. European Parliament, Management of External Borders), while data protection is not. Dr. Petra Molnar will show where and how data-based technologies are used at the European external borders and what the “electronic eye” implies for refugees and their human rights.
WITH: Dr. Petra Molnar



JUST AI?
The process of setting rules and norms for computing processes and applications has been dominated by requirements engineering and formalisable, rather than ‘thick’, interpretations of central concepts including fairness, responsibility, trust and participation. Yet computing science experts and other disciplines such as law and philosophy often understand these terms very differently. These differences in understanding can create productive friction and discussion amongst experts with very different backgrounds and orientations, but they can also constitute gaps that lead to governance-by-default, where instead of creating architectures for the control and shaping of digital power and intervention, disagreement on fundamental concepts delays action. This talk will explore whether these diverging understandings represent fundamental incompatibilities between disciplinary worldviews, what the effects of the resulting faultlines are in terms of the targets and aims of governing data and AI, and how we can recognise productive disjunctures. I will look particularly at the current politics of AI, and ask whether there are ways to govern technology when different groups are locked in opposition around core concepts and assumptions which each consider non-negotiable.
WITH: Prof. Dr. Linnet Taylor

