Last week, together with Tessa Duzee, I organised the event “Ghostwork, the invisible labour behind AI” at the Amsterdam University of Applied Sciences. The aim was to raise awareness that behind AI stand tens of millions of vulnerable workers who annotate and check data, thereby keeping AI running, and to start a conversation about how we can improve their working conditions: from the perspective of the individual, of professionals in “Responsible AI”, of the Amsterdam University of Applied Sciences itself, and of organisations (both AI companies and their customers). The event was led by moderator Tessa, with contributions from experts Fiona Dragstra (WageIndicator Foundation), Nanda Piersma and myself. The data workers themselves were also given a platform through video clips in which they talked about their experiences.

It was interesting to bring together the various disciplines and engage in open discussion with the 80 students in the room. My five takeaways from this event:
- Exploiting workers is a conscious choice, and not exploiting them is a choice too. The data work market is a business-to-business market, which sets it apart from other gig markets such as taxi and delivery. And in a business-to-business market, organisations are responsible for their supply chain. I am looking here at both the AI companies themselves and the customers they serve.
- In a market where organisations capitalise on fragmentation and information asymmetry, bringing people together is more important than ever. Think of trade unions and cooperatives. The key to solutions or resistance lies in finding and connecting nodes with which you can create critical mass. Consider, for example, (Dutch) organisations such as SURF and Public Spaces. But the government, as (I suspect) the largest customer of big tech and a major distributor of capital through subsidies, is also such a node. Make use of this, take responsibility and dare to make choices.
- Creating fair(er) alternatives takes time. It is not realistic to expect alternatives to be as smooth and scalable as the current dominant players from day one. After all, those players have a head start of years of innovation, learning and further development, funded by the revenue that we, as users, have paid them. Breaking this cycle requires us to bite the bullet where short-term convenience and long-term sovereignty are in conflict with each other.
- There is a lot of talk about European “champions”. Of course, I am in favour of European tech companies, but as long as nothing changes in terms of ownership and governance, there is nothing to prevent these companies from eventually being bought out by other parties or from making profit-driven choices that harm society. That is why I advocate not only “home-grown” tech companies, but also a dialogue about ownership and governance: making models such as steward ownership more common and financing for these types of models more attractive.
- The biggest question during the event was: “What can you do as an individual?”. First of all, I don’t think the responsibility can be placed on the individual. But that doesn’t mean you can’t do anything as an individual. Make conscious decisions, engage in conversation, listen critically to hype stories (and keep in mind the interests of whoever is sending the message), and contribute to highlighting and addressing the issues that matter.
All in all, it was a great event, and I hope it has contributed to a better-informed debate about (responsible) AI among students, professionals and the AUAS itself.
The video of the event can be viewed via this link.
Want to know more? Then check out these two videos about data work: