Good day! Although my Dutch newsletter has a nice weekly rhythm, it remains difficult to find that rhythm for the English version. Hence, it also took a while for this edition to land in your digital mailbox. But it’s here, and I think that’s what matters most. In this edition, I share some pieces I wrote earlier and invite you to a beautiful and relevant online event next Friday, where we will look at how to create a more level playing field in the gig economy. Have a great day!


The real-time laboratory of (big) tech

While many developments in the platform economy are basically not new (before Uber we had taxis, and before Booking we booked hotels), the way technology is used to create, execute, monitor and assess the transaction mostly is. This came out clearly in the interview with former Uber driver James Farrar in a previous newsletter.

I recently read an article on a Dutch news website titled ‘Big tech wants AI functions in users’ hands fast, but not too fast‘. The article is about the trade-offs tech companies make in deciding whether or not to launch new features, and the search for a balance between speed, regulatory compliance (or not) and competition. And that ‘regulatory compliance or not’ can be viewed from different angles.

Jelle Zuidema, associate professor of explainable AI at the University of Amsterdam, stresses that the “big, fundamental problems, around privacy, truthfulness and hate content are not easily solved in a few months”. In his view, companies seem to be “striking a blow” before all kinds of regulations are imposed.

In other words, it doesn’t make much difference whether you go live a week earlier or later: you won’t solve the fundamental problems in a week anyway. And besides, in an environment with limited regulation or enforcement (where hiding behind complexity to disguise the lack of explainability is a widely used tactic), you have the opportunity to grow quickly before rules are imposed. Something I have seen happen often over the past 10 years with big tech players, all of whom seem to follow the same lobbying handbook.

Besides growing without regulation (and thus building a competitive position that is hard to catch up with: after all, competitors have to grow a few years later at much higher cost), rapidly introducing features and experimenting serves another purpose: learning and optimising at all users’ expense.

I was reminded of a developer I spoke to about eight years ago who had moved from an insurance company to a sharing platform. He explained that new features in the insurer’s software first had to be tested through and through, then tested on personas, then on a select group of customers, and only some ten steps later on the entire customer base. On his first day at the platform, he was told he could run real-time experiments with ALL users there. He was very surprised by this, but it is something I see recurring at many platform and tech companies. The same goes for all the AI companies.
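To make that contrast a bit more tangible, here is a minimal, purely illustrative sketch (not any company’s actual system) of a percentage-based rollout gate. The difference between a cautious staged rollout and ‘experimenting on everyone’ can be as small as changing one parameter:

```python
import hashlib

def in_rollout(user_id: str, feature: str, rollout_percent: float) -> bool:
    """Deterministically bucket a user (0-99) based on a hash of the feature
    name and user id, so the same user always gets the same answer."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100
    return bucket < rollout_percent

# Staged rollout: start by exposing only 1% of users to the new feature.
print(in_rollout("user-12345", "new-ranking-model", rollout_percent=1))

# "Real-time experiment on all users": simply set the percentage to 100.
print(in_rollout("user-12345", "new-ranking-model", rollout_percent=100))
```

The technique itself is neutral; the question is who ends up in the experiment and whether they ever agreed to that.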

What strikes me is that no one (or maybe I am not looking hard enough; feel free to email links if I am wrong and I will share them here next week) asks the question of whether it is desirable for millions and sometimes billions of users (read: real people) to be part, day in and day out, of the experiments of companies that grow, learn and innovate at these people’s expense. Because while this is insanely interesting from a technical perspective, it is quite debatable from an ethical perspective. As far as I am concerned, we can be more critical of this.


Event alert: Seeking a level playing field in the platform economy

This Friday, from 14:00 to 15:30 CET, I am organising an online seminar together with the WageIndicator Foundation in which we will explore different ways to create a more level playing field in the gig economy.

The description:

More and more people find short-term work in the gig economy: the phenomenon where online platforms mediate between the demand for and supply of labour in a physical or online work environment. As a central entity and private regulator, the platform usually decides the rules of the game without consulting the workers and clients. These platforms centralise a fragmented market and have the best information position, which creates an imbalance in power between intermediary and users (worker and client).

In the general debate, there are complaints that platforms take advantage of this and work on maintaining (and growing) this imbalance. But the nuance and context are missing. In this webinar, we want to pin down the exact problem and analyse the different strategies to give workers their negotiating power back and let them build a collective voice. We also want to search for best practices from initiatives and individuals worldwide that are successfully working towards this purpose in the gig economy. We will conclude with a roadmap towards a gig economy that works for all.

There will be many interesting speakers, and the event will end with some great case studies from, among others, the Crowdsourcing Code in Germany (a code of conduct agreed between several gig platforms).

Previous events have attracted more than 100 participants from 24 countries: a great opportunity to learn about the international debate and development of the gig economy. Participation is free of charge. Register and view the programme via this link.



Verdict in Uber case around ‘robo-firing’

A case surrounding ‘robo-firing’ (being fired by an algorithm without significant human intervention) has now received a ruling.

“Uber has been found to have failed to comply with European Union algorithmic transparency requirements in a legal challenge brought by two drivers whose accounts were terminated by the ride-hailing giant, including with the use of automated account flags.”

💡
Read and listen tip: if you are interested in the topic of transparency in automated decision-making processes and more equal access to data: listen to the episode featuring my interview with James Farrar of the Worker Info Exchange on Apple Podcasts, Springcast, Google, Spotify, Stitcher, Pandora and Pocket Casts. Or read the blog on Gigpedia that I wrote about this interview.

The entire TechCrunch article with the defences and comments from both sides is certainly interesting to read through. What I found particularly interesting is that this ruling is based on existing law:

“The European Union’s General Data Protection Regulation (GDPR) provides both for a right for individuals not to be subject to solely automated decisions with a legal or significant impact and to receive information about such algorithmic decision-making, including receiving “meaningful information” about the logic involved; its significance; and envisaged consequences of such processing for the data subject.”

Why do I find this interesting? Because there is a lot of talk about new legislation, but time and again it becomes clear that a lot can also be solved under existing legislation. Then only one thing needs to happen, and that is enforcement. If this does not get more attention, new legislation will also make little to no difference. Or, as James Farrar of the Worker Info Exchange puts it, “the proposed EU Platform Work Directive will be a pointless paper tiger unless governments get serious about enforcing the rules.”


Disinformation and platforms: an impossible task?

Recently, the Digital Services Act (DSA) came into force. This act aims to “create a safer digital space where users’ fundamental rights are protected and create a level playing field for businesses.” A great ambition, but I am starting to doubt more and more whether the approach to fighting disinformation is the right one, and I wonder whether, instead of fighting symptoms, we should not focus on the root of the problem.

Under the DSA, social media platforms must (among other things) act against the distribution of illegal content and disinformation while being transparent about removing that content. In the case of illegal content, it is in many cases clear what this includes (child pornography, copyright infringements and discriminatory posts), although there is bound to be some debate about its removal as well. With disinformation, something we are now seeing a lot of in the terrible conflict in Israel and Gaza, it gets a lot trickier. The big question: what is disinformation, and who decides what it is and what it is not? And that at an unprecedented scale. And where is the line between disinformation and freedom of expression? Regulations seem to be lacking.

This can lead to platforms not following the rules, or at least to the outside world believing that this is the case. Just last week, for example, the EU warned X (formerly Twitter) via a message on X (no idea why this communication now has to go through an online platform again) that it had to comply with the DSA. Meta and TikTok were also put on notice.

In response, platforms can overshoot in the other direction and use ‘algorithms’ to head off anything that seems suspicious. A technique called ‘shadow banning’ is regularly used for this purpose: taking away the megaphone and reducing the reach and discoverability of accounts flagged as ‘suspicious’ to virtually zero. You can read more about this in the piece ‘Social media silently shadows prominent people’s expressions on charged topics’ in De Volkskrant.

Everyone has the right to freedom of expression, but that does not mean you have the right to an algorithmic megaphone to reach an audience of millions.
This Volkskrant piece also discusses algorithms: “Algorithms themselves are not neutral either: they are written by people, whose decisions and convictions determine what an algorithm should or should not classify as controversial. For example, criticism is growing on social media that posts about the Palestinian territories are automatically marked as controversial; Meta, as an American company, supposedly sides with Israel, according to some users.”

Freedom of speech vs freedom of reach?

It’s clear: combating disinformation is a complex problem. A problem that increasingly makes me wonder whether the solutions currently being sought ignore the heart of the matter: perhaps it is not so much the disinformation itself, but the extent to which disinformation (driven by attention-grabbing algorithms and the way platforms earn money by holding the user’s attention) is amplified and reaches a large audience that is the big problem.

The notion of “freedom of speech vs freedom of reach” is being discussed more and more often. Everyone has the right to freedom of speech, but that does not mean you have the right to an algorithmic megaphone to reach an audience of millions. Perhaps (and again, there will surely be many ifs and buts here) the solution lies rather in imposing restrictions on that megaphone. Just an idea (and I am certainly not the first with it, but the problem is becoming very apparent this month) to look at things from the other side. Not a popular suggestion, I suspect, for platform companies, who would feel the pain in their revenue model, but at the end of the day everyone is looking for a sustainable future for the use of platforms and, even more so, for a livable world.


How do you calculate hourly earnings in the gig economy?

Calculating how much someone earns in an hour: how hard can it be? Well, on gig-economy platforms for on-demand jobs where the worker is paid per gig (read: delivery and taxi), it is a lot harder than you might think.

When it comes to calculating working time and the related hourly earnings, the first question is: what counts as working time? Is it:

  • The time you are logged into the app and available to accept gigs? (all the time you are available, whether or not you accept a gig)
  • The time from accepting a gig to finishing it? (the time after accepting the gig: driving to the customer or restaurant, waiting, and performing the transport until the passenger has got out or the pizza has been delivered)
  • The time of the actual transaction? (from picking up the pizza to delivering it; all other time is not working time)

In addition, workers have the option of declining or cancelling orders, and a platform worker can have multiple apps open and be available on several platforms at the same time. There are also additional variables such as (variable) bonuses and the tips the worker receives through the app.
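To see how much the definition matters, here is a minimal sketch with made-up numbers (purely illustrative, and leaving tips and bonuses aside): the same shift and the same pay produce three very different hourly rates depending on which definition of working time you pick.

```python
def hourly_rate(total_pay_eur: float, minutes_worked: float) -> float:
    """Effective hourly rate for a given definition of working time."""
    return total_pay_eur / (minutes_worked / 60)

# Hypothetical evening shift of a delivery rider (illustrative numbers only).
logged_in_minutes = 240    # definition 1: all time logged in and available
engaged_minutes = 170      # definition 2: from accepting a gig to completing it
transaction_minutes = 110  # definition 3: only the actual deliveries
total_pay_eur = 48.00      # earnings for the whole shift, excluding tips

for label, minutes in [
    ("logged-in time", logged_in_minutes),
    ("accepted-to-completed time", engaged_minutes),
    ("transaction time only", transaction_minutes),
]:
    print(f"{label:27}: €{hourly_rate(total_pay_eur, minutes):.2f} per hour")
# Prints roughly €12.00, €16.94 and €26.18 per hour: same shift, same pay,
# three very different answers.
```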

You won’t be surprised that when platforms are forced by the courts to pay workers a minimum hourly rate (e.g. conforming to a collective agreement), this discussion erupts. And you will also understand that each side in this debate has its own interpretation of the concept of ‘working time’.

So how should it be done? To illustrate, I want to share an example I read about last week, in which a New York judge upheld a minimum wage of $18 an hour for delivery drivers.

Per NYC’s mandate, companies that use delivery workers will choose between one of two minimum pay rate options outlined by the city. The first option requires companies to pay a worker at least $17.96 per hour, excluding tips, for time spent connected to the app, which includes time spent waiting for a gig.
The other option involves apps paying $0.50 per minute of active time, exclusive of tips. Active time happens from the moment a worker accepts a delivery to the moment they drop off the food.

Clear.
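For what it’s worth, a quick back-of-the-envelope sketch (with a hypothetical shift, not numbers from the article) of what the two options would pay out:

```python
# One hypothetical four-hour shift, compared under NYC's two pay options.
connected_hours = 4.0   # time connected to the app, including waiting
active_minutes = 150    # accepted-to-drop-off time within that shift

# Option 1: at least $17.96 per connected hour, excluding tips.
option_1 = 17.96 * connected_hours

# Option 2: $0.50 per minute of active time, excluding tips.
option_2 = 0.50 * active_minutes

print(f"Option 1 (connected time): ${option_1:.2f}")  # $71.84
print(f"Option 2 (active time):    ${option_2:.2f}")  # $75.00
```

Which option pays more depends entirely on how much of the connected time is actually ‘active’, which brings us right back to the working-time discussion above.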


About and contact

What impact does the platform economy have on people, organisations and society? My fascination with this phenomenon started in 2012. Since then, I have been seeking answers by engaging in conversation with all stakeholders involved, conducting research and participating in the public debate. I always do so out of wonder, curiosity and my independent role as a professional outsider.

I share my insights through my Dutch and English newsletters, presentations and contributions to (international) media and academic literature. I also wrote several books on the topic and am the founder of GigCV, a new standard that gives platform workers access to their own data. Besides all my own projects and explorations, I am a member of the ‘gig team’ of the WageIndicator Foundation and part of the knowledge group of the Platform Economy research group at The Hague University of Applied Sciences.

Need inspiration and advice or research on issues surrounding the platform economy? Or looking for a speaker on the platform economy for an online or offline event? Feel free to contact me via a reply to this newsletter, via email ([email protected]) or phone (0031650244596).
Also visit my YouTube channel with over 300 interviews about the platform economy and my personal website where I regularly share blogs about the platform economy. Interested in my photos? Then check out my photo page.
