
Welcome!

By participating in this project, you can directly experience the grey area between freedom of expression and online hate speech. Are you ready?

Hate speech is only the tip of the iceberg: viral language is dotted with more or less explicit discriminatory expressions that sometimes spread disguised as jokes, making them difficult to identify as hate speech.

AI technology is developing at exponential speed, and the risk that it becomes a catalyst for the spread of bias and stereotypes is real. Hence the need for a tool that can teach AI to detect discriminatory language and, where possible, avoid it.

These are the assumptions behind the development of AnnotIx, a new editor that aims to make artificial intelligence more sensitive to discriminatory language by comparing the algorithm's categorisation system with human categorisation processes.

Can an algorithm grasp the various forms of hate speech with the same sensitivity as a human being? That is what we are trying to find out, and that is why your participation in this exhibition is valuable to us.

Your task is to test AnnotIx by recognising phrases that may be perceived as discriminatory or non-inclusive towards certain categories, such as gender, sexual orientation, nationality and religion. By doing so, you will help us fine-tune our editor.

Your contribution is valuable to a research project that investigates the grey area between free speech and hate speech and helps identify the seeds of hatred lurking in the language of the web.
A step towards a digital world that is more respectful of diversity.