WeLivedIt.AI

We are building a platform that leverages AI and blockchain technology to moderate online hate speech.

“Hate speech incites violence and intolerance. The devastating effect of hatred is sadly nothing new. However, its scale and impact are now amplified by new communications technologies. Hate speech – including online – has become one of the most common ways of spreading divisive rhetoric on a global scale, threatening peace around the world.” (United Nations, https://www.un.org/en/hate-speech)

We’re not just building another moderation tool – we’re rethinking how hate speech classification, LLM adaptation and online community collaboration happen.

Centring Lived Experience

Hate speech is disproportionately experienced by people from marginalised communities, and lived experience of it brings a deeper understanding of the nuanced ways it occurs. Rather than focusing on the most extreme examples, we are using technology to detect subtler, but equally harmful, dehumanising language. We believe this is where division begins.

Democratising AI Governance

Users can submit data that they would like to use to adapt or ‘train’ the model. Each submission then goes through community discussion and a vote, empowering an organisation or community to decide as a collective what it considers acceptable and what it does not. Communities no longer have to rely solely on the data already available to the model, whose origins may be ethically dubious.
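As a sketch of how this flow could work, the TypeScript below models one submission moving through discussion, voting and a collective decision. The types, function names and simple-majority quorum rule are illustrative assumptions, not the platform’s actual implementation.

```typescript
// A minimal sketch of the propose-discuss-vote flow described above.
// All names, types and the simple-majority threshold are illustrative
// assumptions, not WeLivedIt.AI's actual implementation.

type Vote = "for" | "against";

interface TrainingDataProposal {
  id: string;
  submittedBy: string;
  examples: string[];        // candidate text examples to adapt the model with
  votes: Map<string, Vote>;  // one vote per community member
  status: "open" | "accepted" | "rejected";
}

function castVote(p: TrainingDataProposal, member: string, vote: Vote): void {
  if (p.status !== "open") throw new Error("voting has closed");
  p.votes.set(member, vote); // members may change their vote while discussion is open
}

// Close the vote: the community's collective decision determines whether the
// examples are added to the model's adaptation set.
function closeVote(p: TrainingDataProposal, quorum: number): string[] {
  const tally = [...p.votes.values()];
  const approvals = tally.filter((v) => v === "for").length;
  const accepted = tally.length >= quorum && approvals > tally.length / 2;
  p.status = accepted ? "accepted" : "rejected";
  return accepted ? p.examples : [];
}

// Usage: a community of three reviews one submission.
const proposal: TrainingDataProposal = {
  id: "prop-1",
  submittedBy: "alice",
  examples: ["example of subtle dehumanising phrasing to flag"],
  votes: new Map(),
  status: "open",
};
castVote(proposal, "alice", "for");
castVote(proposal, "bob", "for");
castVote(proposal, "carol", "against");
console.log(closeVote(proposal, 3)); // accepted: 2 of 3 in favour
```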

Collaborating with Blockchain

One potential function of blockchain technology is to facilitate collaboration through the transfer of value. We are leveraging this by enabling an organisation to publish its trained model on the blockchain, so that another organisation can use it to moderate its own online space. For example, two different women’s tech communities could share a model.
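As an illustration, the sketch below mocks the registry logic such a setup might need: one organisation registers its model’s content hash, then grants another organisation the right to use it. All names here are hypothetical, and a real deployment would implement this as a smart contract rather than plain TypeScript.

```typescript
// A minimal sketch of an on-chain model registry: it maps a model's content
// hash to its owner and to the organisations granted access. Hypothetical
// names throughout; a real deployment would be a smart contract, with the
// model weights themselves stored off-chain.

interface ModelRecord {
  owner: string;           // organisation that trained the model
  modelHash: string;       // content hash identifying the model weights
  grantedTo: Set<string>;  // organisations allowed to use the model
}

class ModelRegistry {
  private records = new Map<string, ModelRecord>();

  register(owner: string, modelHash: string): void {
    this.records.set(modelHash, { owner, modelHash, grantedTo: new Set() });
  }

  // The owning organisation grants another community the right to use its model.
  grantAccess(caller: string, modelHash: string, org: string): void {
    const record = this.records.get(modelHash);
    if (!record) throw new Error("unknown model");
    if (record.owner !== caller) throw new Error("only the owner can grant access");
    record.grantedTo.add(org);
  }

  canUse(org: string, modelHash: string): boolean {
    const record = this.records.get(modelHash);
    return !!record && (record.owner === org || record.grantedTo.has(org));
  }
}

// Usage: one women's tech community shares its moderation model with another.
const registry = new ModelRegistry();
registry.register("community-a", "0xabc123");
registry.grantAccess("community-a", "0xabc123", "community-b");
console.log(registry.canUse("community-b", "0xabc123")); // true
```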
