Are you certain about that uncertainty?

A screenshot of a clause from the UK consultation on AI and copyright which explicitly states that AI training requires permission

The UK government’s consultation on AI and copyright is prefaced by an unnamed minister, who says that copyright law as it relates to AI is “uncertain”. Removing that uncertainty, they hope, will attract AI investment to the UK.

I’ll pick this apart below, but to summarise:

  • There’s no uncertainty. The consultation makes that clear (see above). What’s uncertain is what AI companies should do about the fact that they have infringed billions of copyrights. That’s their problem, not the government’s.
  • The proposed exception to copyright would give AI companies the right to copy content for training except where they have been explicitly told not to. It depends on every creator using as-yet non-existent technology to assert rights they currently have by law, every time they produce something or someone publishes it. Not very efficient versus the current regime, where they don’t have to do anything at all, and it can’t even get going until that technology exists.
  • Anyone who can’t afford this, lacks the capability or simply doesn’t know about it will have their rights removed. Regressive — copyright is a form of property. Removing it from those least able to defend it is unfair and illiberal.
  • AI companies will have to check these “rights reservations” every time they find something they want to copy (that’s everything on the entire internet). Where rights are reserved, and assuming they have decided they want to comply with the law, they’ll have to either not copy the work or seek permission — just as the law demands they do right now.
  • This means that a licensing marketplace will need to develop whatever happens — assuming of course that AI companies don’t decide to just keep ignoring the law, as they have done to date. If they do that, courts will have to decide — the lawsuits are happening already and this law won’t stop them.
  • The government hopes that making our copyright more permissive will attract more AI companies to the UK. But places like Singapore have already created a much more permissive regime so the UK has already lost that particular race-to-the-bottom.
  • In any event, there’s no sign that copyright is the reason AI companies choose where to invest. Other factors matter too: where they’re based, for example, or energy costs, which are four times higher in the UK than in the USA.
  • Meanwhile, AI is moving at 100x the pace of legislative processes. What Deepseek has shown is that AI technology isn’t the most valuable component of an AI company. The content they use to train their systems is much more significant. A market in this content will be a huge economic opportunity, especially for the UK whose creative industries out-pace the rest of the world and already contribute £125bn to the economy every year.
  • The proposed exception will get us nowhere. It will create huge amounts of cost and huge inefficiencies, but won’t deliver any material benefit. Even if it succeeds in attracting AI companies to the UK to conduct training, it will do so at the cost of every creator, who will either have to carry the cost of asserting their rights or be forced to abandon them.
  • In fact, creators will have to carry that cost anyway because the exception will apply to AI companies wherever they may be. We’ll have made UK content less valuable to the UK with no guarantee that the country will benefit in any way at all.
  • It also won’t do anything to address the issue of the huge infringements already done. These matter, because they were largely done stealthily and they involved all the content on the internet. Applying new rules to future copying really does feel like shutting the stable door after the horses have not only bolted but stampeded back and trampled the stable to dust.

Here’s what the minister said:

“At present, the application of UK copyright law to the training of AI models is disputed. Rights holders are finding it difficult to control the use of their works in training AI models and seek to be remunerated for its use. AI developers are similarly finding it difficult to navigate copyright law in the UK, and this legal uncertainty is undermining investment in and adoption of AI technology.” (emphasis added)

Now… read on to clause 5 of the consultation itself:

“The copyright framework provides right holders with economic and moral rights which mean they can control how their works are used. This means that copying works to train AI models requires a licence from the relevant right holders unless an exception applies.”

Does that seem uncertain to you? In case you aren’t sure, carry on to clause 41:

“The use of automated techniques to analyse large amounts of information (for AI training or other purposes) is often referred to as “data mining”. … If this process involves a reproduction of the copyright work, under copyright law, permission is needed from a copyright owner, unless a relevant exception applies”

Still not quite sure? Seems pretty clear, to the government at least.

Copyright law is crystal clear, as they helpfully explain.

But what can AI companies do? They have ignored the law, and so they face consequences. If they had just copied one or two things, copyright owners might just turn a blind eye, or the AI companies might get their wrists slapped in court.

But they didn’t just copy a few things. They copied everything they could find on the entire internet. Billions and billions of works, none of which they were permitted by law to copy. As well as ignoring the law, they ignored all the various ways copyright owners have of explicitly saying that this sort of copying is not allowed: they didn’t seek permission from anyone.

Which looks like a whole heap of trouble. In the USA, where most AI training has been happening, statutory damages for “wilful” copyright infringement can go as high as $150,000. Per work copied. Even to the biggest of Silicon Valley money machines, that’s a lot.

That might leave them with a dilemma, but they don’t seem to be unsure of what to do about it. They’re not doing anything at all; in fact, some AI companies are doubling down and developing technical tricks to evade attempts by publishers to stop them stealing stuff.

They seem to be betting that instead of them needing to change to comply with the law, the law will change to retrospectively wave a magic wand and make everything legal.

Step forward the UK government and their proposed exception. It will allow AI companies to do something — train their systems without asking permission — which the law has hitherto not allowed.

Kind-of.

It will only allow them to do it if the copyright owner hasn’t specifically asked them not to. Which, obviously, every copyright owner will do if they can.

But HOW will rights owners do this?

Nobody knows. The press release announcing the consultation says as much:

“Before these measures could come into effect, further work with both sectors would be needed to ensure any standards and requirements for rights reservation and transparency are effective, accessible, and widely adopted.”

So, everyone who wants to keep their current rights, or who wants to license or restrict the use of their work by AI companies for any reason, will have to go through some as-yet unknown process, every time they create something.

When they have done so, they’ll be exactly where they are today: right now the law says “don’t copy this without permission” and in future everyone will have to attach some kind of digital sign to every single thing they produce saying the same thing. Doesn’t sound very efficient.

Not much more efficient for AI companies either: every time they want to copy something they’ll need to check whether this digital no-entry sign exists. If it does, they’ll have to either not copy it or try to get permission to copy it — exactly what they are supposed to do today.
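No rights-reservation standard exists yet, and the consultation says as much. Purely as a sketch of what such a check might feel like, the closest existing analogue is the robots.txt lookup crawlers already perform before fetching a page; the agent name below is hypothetical:

```python
# Sketch only: rights reservation by analogy with robots.txt.
# "AITrainingBot" is an invented agent name, not a real standard.
from urllib import robotparser

def may_copy(robots_txt: str, agent: str, page_url: str) -> bool:
    """True only if the site's rules do not reserve rights against
    this agent for the given URL (robots.txt used as an analogy)."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(agent, page_url)

# A publisher opting out of a (hypothetical) AI training crawler:
rules = """User-agent: AITrainingBot
Disallow: /
"""
print(may_copy(rules, "AITrainingBot", "https://example.com/article"))  # False
print(may_copy(rules, "SomeOtherBot", "https://example.com/article"))   # True
```

Repeating a lookup like this for everything on the entire internet, on every crawl, is precisely the inefficiency described above.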

Assuming they decide to start trying to comply with the law, which they have not done so far, there will need to be some sort of system to help them get what they want on terms they can live with. It’s called a marketplace and what it will sell are licences. These marketplaces exist today for all sorts of rights, even AI training rights. If AI companies become willing buyers of rights, we can be sure it will quickly develop, become larger and more efficient.

Again, this is exactly the same as today. The market is small because most AI companies have decided to ignore it, not because it doesn’t exist or rights holders are unwilling to participate.

So let’s say we have this new exception, we have a system for “rights reservation” which is widely adopted, content owners have absorbed the cost of using it to say they don’t want their work copied without permission and AI companies have all decided to start complying with the law and start participating in a market for rights… what have we gained?

AI companies will have a new right to exploit the work and property of creators who are unable, unaware or can’t afford to reserve their rights — plus some who are happy to give them up.

For everything else, which will include substantially everything produced by anyone whose creativity is their living, and by anyone who would simply prefer not to have their work fed into AI systems for unknown purposes, AI companies will still need what they need today: permission.

All of which sounds like what Bono might call “running to stand still”. A huge amount of energy and effort being expended to go exactly nowhere.

The biggest irony, though, is that it’s completely irrelevant.

Very few AI companies, and none of the giants, are training their systems in the UK. The government has heard that training AI is very expensive, though, and fantasises that they might start doing that expensive thing here, if our copyright law is permissive enough. Imagine the growth!

Thing is, they won’t.

If they’re looking for the most permissive copyright regime, other countries have beaten us to the punch and gone even further, so far without the giants of AI relocating there to take advantage.

But AI companies seem content to play chicken with copyright for the time being; those battles are going to be fought, primarily in the US, over the next few years.

Other factors might weigh more heavily against the UK. For example, a large part of the cost of training AI is the energy needed by data centres. Energy in the UK is among the most expensive in the world.

If AI companies start to invest in the UK, which we should all hope they do, it won’t be because of our newly permissive but very clunky copyright regime.

Turns out that for the moment, AI companies prefer to stay close to home and close to cheap energy. UK electricity costing four times what it does in the States might be an issue, for example.

Also, Deepseek have just up-ended the whole hypothesis by training their AI for, they claim, about 5% of what it cost OpenAI to do the same thing. It’s fair to expect that the investment needed to train AIs will come down, quite dramatically. Perhaps the opportunity isn’t quite as big as it was thought to be when this consultation kicked off, long long ago (it’s a 10-week process, which is a long time in AI-land).

All of which means that this exception is a kind of giant Rube Goldberg machine, proposing to create immense complexity and cost which will achieve, even in the best cases, virtually nothing. Other than giving away the property of people who can’t afford to defend it, to any AI company anywhere in the world which wishes to use it with impunity.

Hopefully the consultation will highlight that the path they’re considering is a huge waste of time and will only harm the creative industries, with no benefit guaranteed, and that we can do better by defending our IP and looking to establish a leading position in the licensing market which will inevitably develop.

Otherwise, creators, you’d better start thinking about how to get your work off the “open” internet. It’s not safe there.

Waking up, I think I smell coffee…

It has been a while. Hello again. I’m back talking about copyright. Can’t shake my geeky obsession. But why now?

The specific thing which has got my goat is a proposal from the UK government to take a wrecking ball to what is left of copyright law by largely exempting AI companies from it.

It looks crazy at a glance, and only gets crazier if you dig into the detail. Unsurprisingly, the UK creative industries, which depend on copyright and which are worth £125bn to the UK economy every year, are implacably opposed. In fact, it’s quite hard to find anyone at all, outside the government, who thinks it’s a good idea.

Ministers are finding this out for themselves, because they have started a consultation about their plans. It suggests a range of options, but it also says that the government has already decided which one they’re going to implement. So while the consultation responses might highlight just how much people dislike the proposal, it seems the government has pre-emptively decided to ignore them. The allure of imagined AI riches is just too strong.

I’ll highlight some of the choicest morsels over some future posts, to help explain my own views about it and maybe inspire a few people to submit their own views before the deadline of 25th February 2025.

For now, a quick summary:

AI companies “train” their systems by copying everything they can find on the internet and feeding it into their computers. This is how those AI systems “learn”.

Copyright is, literally, the right to make copies. It’s a kind of property — intellectual property — and, like other kinds of property, it belongs to someone. Not AI companies and not the government. Someone who doesn’t own copyright doesn’t have the right to make copies unless the owner – or the law – has given it to them.

Rather than ask permission, though, AI companies have simply ignored copyright and copied everything anyway. Without all that content, their systems wouldn’t work. In fact, the content they have used, far more than the computer chips or the power sources and arguably the underlying technology, is the most valuable component of what they do. They want it, they need it, it’s right there on the internet for anyone to see. So they have simply helped themselves.

To add injury to insult, they are using their systems to obviate the need for people to seek out the source of the “knowledge” they’re imparting to their users. They’re competing against their unwilling and unrewarded suppliers and damaging them commercially.

This isn’t a popular move with the people whose content they have illegally used. However, it has got the UK government very excited. AI has been hyperbolically projected to generate gigantic riches. The new-ish government, desperate for anything which might help them create growth in the UK economy, wants some of those AI riches to come the UK’s way.

So they’re proposing to wave a magic wand and make the illegal copying that AI companies do legal, by creating a special exception in copyright law for them.

This won’t end well. I’ve re-started blogging about this to explain why, and to suggest better ways. Stay tuned…

Don’t be afraid to imagine a better internet.

Have you heard about the latest exciting European shenanigans? 

There’s a new Copyright Directive on the way, and boy has it stirred up some passions. There’s an absolutely massive campaign going on to stop it, and the air around Brussels is thick with accusations and recriminations.

The campaigners’ arguments are impassioned, although anyone who takes the time to look for themselves will see that the changes proposed are not only relatively innocuous but also essential and positive.

The interesting thing is who the arguments are being made by, and how they’re being made.

The anti-copyright-directive gang are what we can now think of as the usual suspects, rehearsing the usual arguments.

For example, author and journalist Cory Doctorow has stated that planned changes are an ‘unthinkable outcome’ which pose ‘an extinction-level event for the Internet’.

Julia Reda’s pleas on behalf of her Pirate Party to #SaveTheInternet propose that Articles 11 and 13 should at the very least be radically amended, and at most scrapped entirely.

Jimmy Wales and Tim Berners-Lee signed an open letter which states that Article 13 would take ‘an unprecedented step towards the transformation of the internet from an open platform for sharing and innovation, into a tool for the automated surveillance and control of its users’.

Even Stephen Fry has weighed in, calling the proposals ‘the EU’s looming internet catastrophe’. It is ‘not about protecting artists’ copyright’, he argues, but ‘about granting U.S. tech giants a license to dominate the internet’.

These arguments, some of them simplistic, some ridiculous, some bordering on hysterical, all have the same central theme. They confidently predict that these changes will break the internet, render masses of things illegal or impossible, stop people doing the normal things they want to do. Rather gloomy and doomy, rather extreme and actually – under scrutiny – rather wrong. We heard it before with SOPA and PIPA, and earlier this year when this directive had its first vote.

As well as impassioned arguments, the anti-directive campaigners have deployed technology to direct millions of emails and phone calls to EU legislators, purportedly from electors protesting the changes (although only a few hundred turned out to rallies around Europe in August).

The people leading this are, in great part, people who were once pioneers, who could have staked a decent claim to represent the future. These one-time pioneers are now a little grey-haired and grumpy, their objectivity twisted by time, by their affiliations and paymasters, by old obsessions and quirky perspectives.

The internet they dreamed of, and tried to make, was one which could easily change, which was in constant evolution, which produced greater and fairer opportunity and which gave everyone a voice and access to information and artistic endeavours. 

The internet we have now is, though, quite a long way from that utopian idyll. The online economy is dominated by a small number of companies who capture nearly all of the money and data (whether you know it or not) and who share little of it. Access to all the information in the world isn’t breaking people out of ever-tighter and more easily manipulated filter bubbles – controlled by the same US tech giants that Stephen Fry fears so much.

Meanwhile, the right that everyone has to control their work — copyright — is worthless. Their work gets used without permission being sought or given, and without reward. So creators are going broke and creative companies are going bust.

That is the status quo which these anti-copyright people are trying to preserve.

Unable to paint an optimistic picture of a better, fairer Internet, they resort to predicting an Internet that is somehow even worse.

Ridiculous and untrue as their nightmare scenarios are, they’re also hardly earth shattering. “This will be the end of memes” they claim.

When weighed against the current undermining of copyright, creators’ loss of control over their own work and their inability to make a living from creativity, and the monopolisation of revenue by tax-avoiding mega-corporations, the loss of memes wouldn’t seem like a high price to pay — if it were true. But it’s not. As the Society of Authors points out, memes would be protected from copyright infringement as parody, and arguably by other legal exceptions as well.

Other opponents have given similarly feeble arguments, criticising the current status quo without offering productive solutions. Wyclef Jean, founding member of The Fugees, has campaigned for change without actually proposing any change. In an article for Politico, he expresses the need to ‘team up and make the music community work better for everyone’ without ‘demonizing and tearing down the internet and responsible service providers’.

“Links will be taxed”, they predict, absurdly. “The whole internet will be filtered by giant mega-corporations”. As if it isn’t already, but in any event, it won’t.

Some of these doom-sayers can be easily explained away. They’re not neutral, they have a vested interest in the status quo even if they try to hide it. Google spends huge sums funding organisations and individuals who can defend its interests while feigning independence.

Other activists are, dare I suggest it, just a bit past it. Stephen Fry, awash with cash, has no need for any Google largesse (and I’m sure receives none) and does not lack the intelligence to understand the arguments. Nevertheless, he still argues for the status quo. Perhaps having lived through the heady early days of the internet he just lacks the energy for any more change.

The internet as we know it now is still unevolved, primitive and brutal. It’s unfair and it farms its individual users as if they’re cash crops for the few. 

We need to believe that it can change for the better. Restoring the rights of individuals is a key starting point for that. 

Anyone who argues, fearfully, that change cannot be good and must be resisted is either being disingenuous or has simply stood by as time has rushed past them, and now finds themselves looking around wishing it would stop.

Support the copyright directive, support the rights of individuals, support a fair internet which functions for all its participants.

Imagine it better, then make it happen.

Netflixifying news? Think again…

“Netflix for News”. That’s a phrase I’ve started hearing in the last month.

It refers to an idea about how to save the news industry. I think most people who say it are suggesting a single subscription which users would pay, but which would give them access to a wide range of news sites — a kind of super-subscription.

If we take a business model which has had recent success elsewhere in the media, the idea goes, we would solve everything.

Spotify has done it for music, Netflix have done it for TV and movies, why not do it for news? Subscribers would be paying, news organisations would have a new revenue source, and online media would be saved. Hurrah.

It’s easy to see why this idea has surfaced. Spotify has been at the forefront of transforming an industry ravaged by piracy into one that has returned to growth, with streaming increasingly driving it. Netflix is putting unprecedented amounts of money into amazing new commissions.

The news industry most definitely needs to drive direct consumer revenues, and so dreams of similar things happening.

Seems simple. Will it work?

Well… there are a few reasons why it might not be the ideal model.

Firstly, to state the obvious, music and TV are quite different from news. Music persists in a way that news does not. One person might listen to the same piece of music tens – or even hundreds – of times and still enjoy it.

People spend hours bingeing on box sets, sometimes years old. The ‘value-creating life’ of music, especially popular music, can be as long as decades, with TV and movies not far behind.

News, in comparison, tends to be short-lived. Its value-creating life can be as short as minutes or hours. Very few news stories hold significant economic value (or attention) for longer than a few days or weeks.

This means the model for getting payback has to be different. News requires constant re-investment by news companies to continue to have value, which needs to be realised immediately. The investment and payback cycle for other media is typically a lot slower.

Is all news equally valuable?

In addition, news products tend to have more variable pricing. Unlike music, which broadly costs the same regardless of what it is, newspapers have always had highly differentiated pricing and products.

How do you equate the value of The Sun and The Times? Are they worth the same? Does someone reading one long article in The Times account for the same amount of value as someone else reading a short one in The Sun?

This is the challenge faced in a super-subscription environment, where the user pays the same regardless of what they read or how much they read.

How do you divide it fairly? Who should get what? And how do you ensure that making more investment in content, and getting it more widely read, delivers more revenue? Without that promise, why would anyone invest in expensive content instead of cheaper commodified stuff? You only have to look at today’s internet to see the answer to that question.

Algorithmic perversity

A super-subscription business model means that an algorithm decides how much individual news products are worth. It’s impossible to make an algorithm for this without producing perverse outcomes – I speak from experience.

If your algorithm pays publishers based on how many articles get read, publishers with long reads get punished. If it rewards “dwell time”, publishers who are good at producing very pithy articles get punished. If it tries to identify a user’s favourite publication and give it a bigger share, others unfairly lose out.
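To make the flip concrete, here is a toy split of one subscriber’s fee between two hypothetical publishers (all names and numbers are invented for the example): the winner reverses depending on which metric the algorithm rewards.

```python
# Toy illustration of splitting one fixed subscription pot between
# publishers. Names and usage figures are invented for the example.

POT = 10.0  # one subscriber's monthly fee, in pounds

# Each publisher's monthly usage: articles read and minutes of dwell time.
usage = {
    "LongReadWeekly": {"articles": 4, "minutes": 80},   # few, long pieces
    "SnappyDaily": {"articles": 40, "minutes": 60},     # many, short pieces
}

def split(pot, usage, metric):
    """Share the pot in proportion to a single usage metric."""
    total = sum(pub[metric] for pub in usage.values())
    return {name: round(pot * pub[metric] / total, 2)
            for name, pub in usage.items()}

by_articles = split(POT, usage, "articles")  # rewards sheer volume
by_dwell = split(POT, usage, "minutes")      # rewards long reads

print(by_articles)  # SnappyDaily takes most of the pot
print(by_dwell)     # LongReadWeekly now takes most of it
```

Whichever metric is chosen, the publishers’ rational response is to optimise for it — which is the gaming incentive described below.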

Algorithms like this, however sophisticated, create winners and losers and limit the ability of publishers to diversify their products and business models. Worse, they incentivise publishers to adjust their product in order to game the algorithm, rather than to please their readers.

That quickly becomes messy, so to minimise it, the provider needs to keep the algorithm opaque and ever-changing. The whole business model becomes shrouded in mystery. Nobody can ever know quite how the amount they are being paid has been calculated.

If you want to see that problem come to life, just look at how the Google search algorithm and advertising algorithms work. Key to the way they function, and acquire power in the market, is that almost nothing is disclosed about the way they function. Nobody is allowed to know quite how they operate. Those who control the algorithm are totally in control.

Not quite as simple as it looks

So “Spotifying” or “Netflixifying” news has a few challenges, even at a glance.

Perhaps they might be reduced by ensuring there are a number of competing services out there. This, though, raises its own issues.

For example, if services try to compete by doing exclusive deals with publishers, consumers will be left with a choice of incomplete services and might end up having to subscribe to several of them in order to get access to everything they want. Sound familiar to any Netflix and Amazon Prime fans? But if all the competing services have essentially the same offer, how many of them will survive? A competitive market for this sort of thing can be hard to sustain in reality and the consumer offer will be damaged.

So what’s the upside?

There is one outstandingly good thing, though, about these super-subscription models.

They are good at signing up large numbers of subscribers. If you want a subscription product to get to millions of customers, keep the price low – £10 per month or less is what you’re aiming for – give consumers a big choice of content, all included, and try to be the one subscription everyone needs. You’ll find loads of takers.

By comparison, it’s much harder to get people to commit to a relatively restricted product (like a single newspaper, for example) than a massive offering. That’s why subscription success tends to be limited to an exclusive group of high value publishers with affluent audiences.

The largest barrier to making subscription models work is getting that commitment from readers, so giving an immense amount of content in return is a good way to get them to pay.

Even if you could make it work, you shouldn’t want to

But there’s still a huge, massive problem with the whole idea of super-subscriptions.

Once you’ve persuaded all the publishers to take part, and you have the subscribers signed up, and you’ve developed a really compelling product and user proposition, and you have written an algorithm which divides the money up fairly, and you have managed to find a way to put high-priced, low volume products alongside low-priced, high volume products in the same service without any of them crying foul – none of which is easy – you still have to face the fact that you – and the publishers – have a terrible business.

Why?

Because you have set an upper limit on how big it can be. That limit is your subscription price. £10 per month, multiplied by the number of users you can sign up. You have to divide that £10 between all the publishers, and try to have some money left over for yourself.

That money will all be spoken for the day you go on sale. And there won’t be much left over to pay new publishers who want to come and join in the fun. For them to get anything, you have to take it away from someone else, or try to increase the price (and prevent the existing publishers from claiming the increase for themselves). Or the late-joining publishers have to rely on advertising revenue — and we all know the issues with that.
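The arithmetic of that ceiling is easy to sketch (illustrative numbers throughout, including an assumed 30% platform cut): the pot is fixed on day one, and each additional publisher simply dilutes everyone’s share.

```python
# Illustrative numbers only: the subscription price caps total revenue,
# and the 30% platform cut is an assumption for the sketch.

price = 10.0            # monthly subscription, in pounds
subscribers = 1_000_000
platform_cut = 0.30     # share assumed to be kept by the service itself

# The entire monthly pot available to publishers, fixed on day one:
pot = price * subscribers * (1 - platform_cut)

# Every publisher who joins later just dilutes the existing shares:
for n_publishers in (10, 20, 40):
    per_publisher = pot / n_publishers  # equal split, for simplicity
    print(f"{n_publishers} publishers -> about £{per_publisher:,.0f} each")
```

No amount of extra reading changes the pot; only raising the price or adding subscribers does.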

It’s revenue, but it’s not a thriving market

So, this model produces some new revenue for the industry. Which is a good thing.

However, the revenue doesn’t increase in response to more content being consumed. It just gets shared more thinly between publishers, just as ad revenue does now. Not such a great thing if you want to see a bigger and more competitive market. More importantly, it reduces the rationale for investing more in content.

In a future “Spotified” world where total income is fixed, the incentive will still be to do exactly the same thing as now – minimise cost, maximise consumption, depend on advertising to drive revenue increases. It will just have an underlying, new, base layer of customer revenue which will only grow as long as new customers are acquired and retained.

It will not lead to a greater incentive to invest in product and content, because the market and opportunity will not grow any bigger in response to that investment. Not only is revenue limited by subscription rates in a “Spotified” world; so is market growth.

Super-subscriptions would be first aid for dying news brands, but not a cure

So, in my view, this model will solve little. It will give the existing players a temporary reprieve, but leave an internet still far from the vibrant, thriving market it could be. The door is closed to new entrants, because the fixed revenue — the subscription price — means publishers who are involved will defend their share.

Be more ambitious, create a market which can thrive

There is a much more exciting, compelling and tantalising opportunity which publishers and regulators should focus on instead.

Imagine, if you dare, an internet in which every time a consumer reads something (or listens, or watches, or plays) the publisher makes some money. The more people consume their product, the more money they make.

What would happen?

Well… the best content, well produced, well marketed and wisely priced, would make the most money.

Which means the incentive to invest would change radically. We would see a lot more competition for users’ attention (and money). More products would be launched, and creative innovators incentivised to make their content compelling because they’re offered a direct reward.

Consumers would, in turn, increase their consumption because they’ve been given an ever more exciting choice of content to choose from. The market would grow every time someone decided to read more content. The job of the creators would be to get ever more creative about how to get them to engage more. And there are no limits set on the potential revenue for online content.

Publishers win big, consumers win bigger

But what about the poor old consumer, suddenly facing all this in place of what used to be free?

They’re actually the biggest winner of all. Being the source of the money places consumers in control – they become the masters of the algorithm. Nobody is going to part with their cash – or their data – unless what they get in return is worth it. Disappoint your customer and your business will suffer; please them and you win a huge prize.

This is happening right now

If you want to know how this can be achieved, I have spent the last year building the answer to that.

It’s called Agate, and you can try it now on publications like The New European, Popbitch and Reaction.life – and many more to come.

Actually doing it

As some people will notice, my flurries of activity on this blog and elsewhere are somewhat random. I am deeply passionate about the issues around copyright, because they impinge so heavily on so many other things – economic, cultural, political, personal. One of the reasons I have written about it is to try to explain why these things matter.

The other reason is because I can see how things can be better.

Seeing how things could be better demands more than just sitting on a blog being a smart-arse. Instead of just writing about it, I need to DO something about it.

After all, one of the reasons the creative industries have found themselves in such dire straits is that while they have been adept at identifying their problems and pointing the finger of blame in various directions, they have been slow to come forward with – and actually implement – solutions.

The extended silence is because I have been doing just that.

Solving things for the creative industry is really about putting the creators and those who turn their work into products (the publishers, I guess) at the top of the economic pile. They’re the apex value-creators, after all, but on the internet they are far from the biggest earners.

Part of that is about copyright – the way permission is traded for value between, mainly, creators and publishers (and the main focus of this blog). Also the way those who don’t have permission are prevented from exploiting other people’s work. We all know how broken that is, and the many projects (including some inspired and initiated by me) which seek to address it.

But at the other end of the issue there’s perhaps a more fundamental problem which needs to be solved.

How to get money into the value chain in the first place. The money that flows from advertising is largely inaccessible to publishers, controlled by huge platforms and leading to weird product decisions to try to maximise the paltry revenue flow.

The other revenue stream – from users – has been elusive for publishers. It’s a common belief that people don’t want to pay to access media content, so it’s not particularly surprising that only a tiny proportion of people actually DO pay. Only 7% of people in the UK have paid for online news in the last year, according to the Reuters Institute – a number which seems, sadly, rather high to me.

That is the problem I have set out to solve. Free doesn’t work, but subscriptions are only taken up by a tiny proportion of the audience.

The 95% of activity which subscriptions fail to reach is a huge opportunity. Asking consumers to pay without asking them to make a formal commitment is a way to start making money in that huge space. Making it effortless is essential.

That’s what the product I have built does. It’s called Agate and you can try it now at Popbitch – go to www.popbitch.com/stories and start reading.

Pretty soon you’ll be able to take it to other sites too, without any further setup or login or any such nonsense.

I hope you like it, and if you do I hope you spread the word (and add @agatehq to your tweets and follow list).

So, I’m making it easier to make money, at prices and on terms that publishers control.

After that the challenge is for the creators and publishers. Can they make something you like enough to want to spend a few pence on? If they can, the prize is pretty big.

That makes pleasing you their most important objective. Not so much pleasing the advertisers.

What a relief and a pleasure that will be!

So, saving the media. How?

Finally the emperor has no clothes. The creative media will never be able to adapt to the internet the way it is now. More and more people are saying it. Media is dying.

Why? Because it’s starving. There simply isn’t enough money to pay for everything. However good the media has been at garnering audiences and data, the impossibility of trading those things for meaningful amounts of money has become apparent to even the most optimistic enthusiasts.

Without money, media withers and dies. Newspapers, with a few stand-out exceptions, are withering away at an alarming rate. Magazines, long dependent on their print editions to keep going, have hit a wall.

The simple and seductive idea that advertising could translate internet popularity into money has proved itself wrong. We need not dwell on the reasons other than to observe that advertising isn’t working, and has never really worked, as a sustainable revenue source for online media. After roughly twenty years waiting and hoping that things might change, the patience and financial reserves of the media have begun to run out.

Which leaves a gloriously simple problem. The media needs to make more money. It needs to translate audience into revenue.

If advertising can’t do it, what can?

There’s only one other source of money and that is the audience themselves. The stand-out exceptions I mentioned above are thriving because they’re charging for access. The London Times, the Washington Post, the Economist and so on.

For them, subscriptions are the central focus. The Times of London is profitable for the first time in living memory as a result of its obsessive, long term, subscription focus.

The only way customers can be persuaded to pay, and keep paying, is if The Times focuses on nothing more than producing a product which entertains, informs, delights and surprises them. That is great news for customers. The Times has to be trustworthy. It has to be consistent. It has to be, and to stay, excellent or people will simply decide not to pay for it.

The same is not true of free products, which need to capture enough readers to generate data to sell to advertisers.  They often do this by generating “click-bait” stories, which, as the name indicates, are a form of con and hostile to readers.  Free products need to display as many ads as they possibly can to maximise the (still pitiful) revenue that data can generate. They need to cut their investment in content and the creators who make it, to try to make ends meet, thereby short-serving their readers.

So even if being asked to pay seems, initially, like a bad option, it turns out that for a significant number of users it is not. But only if the product is good enough to justify the cost.

That’s an important factor for publishing people to consider when they find themselves thinking “but nobody will be willing to pay”. It is surely true that persuading people to pay for a product which has been optimised for being free, and in the process become unsatisfying and hostile, is tough. But it’s not a generic truth that people won’t pay.

People will pay. They’ll pay for anything for which their desire exceeds the cost being demanded – whether it’s media, groceries, cars or jewellery. The amount of desire, the acceptable cost and the product might vary from person to person, but it is that basic equation which drives all consumer markets.

The task of the media is to bring cost and desire for their products into line.

If the cost has to be more than zero in order to remain in business, what has to happen to the product to make it viable? Self-evidently it has to be attractive to enough customers. That probably involves more change than simply putting a price sticker on it. But where there’s a return there’s a business plan. Investment to make the product better is justified by the improved bottom line that stands to be gained.

Lastly, what about the cost? The Times and others have shown the way by creating a high value product that sells, to hundreds of thousands of people. They have found a lot of people willing to part with a fair amount of money every month because their desire for The Times exceeds the cost being asked.

It isn’t cheap, though. The Times is most certainly a high-end product aimed at affluent individuals. That’s why the subscription base is somewhere below 10% of the people who might otherwise choose to read their product. The other 90+% just have to be ignored, or, in some cases, given a certain amount of free content in order to tempt them in.

For other publishers, with larger and less affluent or less committed audiences, the investment in making the product more desirable has to be justified by a price much, much lower than the subscriptions currently doing so well at the very top of the market – a price which appeals to a much broader demographic.

Lowering that cost and creating really huge new sources of revenue and profit is the next challenge.

Which will be the subject of the next blog…

 

Rebooting copyright (blog)

Ah hello hello hello. Long time no, um, blog.

I’ve been busy going back to first principles and working out how we can adapt to a world in which the failure of copyright seems to be collapsing the media ever more quickly.

I’m still obsessive about copyright, of course, but I have begun to wonder if we need to focus our attention in a different direction.

Getting right to the nub of it, the central purpose of copyright is to enable creators to benefit from their work. It has lots of surrounding detail but that core function is critical.

Critical and no longer reliable.

So I have paused, for a while, my focus on the legal and regulatory causes of the malaise. Solutions which work – some of which I have helped with – can’t fix anything as long as progress is a political rather than a practical process.

So I have been focusing on the practical. What can be done, right now, without the need for any political involvement at all?

Not just conceptualising it, but designing it. Not just designing it but building it.

It’s built. It’s about to launch. It makes, I hope you will think, perfect sense. And it changes everything, without depending on the politicians changing anything.

So I’m going to start writing here and elsewhere again, to explain some of the thinking which has led to Agate. Keep an eye on www.Agate.one where a new site will be launched soon, and a product soon afterwards.

Fake news and the faded idealism of the web

Tim Berners-Lee issued an epistle recently, a call to action to save the web from some dangers which concern him.

One of them is “misinformation” (or “fake news”, as it is rather more commonly and hysterically known). It’s a problem, he says. Everyone says it, and they’re right. Tim doesn’t identify the solution but he does have an interesting comment about the cause.

In fact the roots of the misinformation problem go right back to the birth of the web and the Panglossian optimism that a new environment with new rules could lead to only good outcomes. The rights of creators, their ability to assert them and the failure of media business models on the web are at the heart of the problem – and point the way to solving it.

The problem

“Today, most people find news and information on the web through just a handful of social media sites and search engines” says Tim. Interestingly, he doesn’t mention news products or sites as a source of news.

He is definitely right about the immediate cause of the problem. But why is it that social media and search are the leading sources of news? Why is it that fake news is more likely to thrive there? Could it be something to do with the foundations of the web that Tim himself helped create?

Tim is not a fan of copyright. “Copyright law is terrible”, he said in an interview three years ago.

He is not alone in the view that copyright is incompatible with the web. In fact, the web has largely ignored copyright as it has developed, as if it’s just an error to be worked around.

However innocuous and idealistic this might have seemed at the start, it has evolved into a crisis for the creative sector, which finds it ever harder to generate profits from its online activities.

But it has been a boon for the social media sites and search engines Tim talks about. They depend completely on the creative output of others. If you deleted all the content created by others from Google search and Facebook, what would be left? Literally nothing. It’s important for those businesses that content stays available and stays free.

So we find ourselves in an era when so-called “traditional” news media continues to struggle and the panic about “fake news” is growing ever greater. This is not a coincidence.

Fake or true is about trust

News is, at least in part, a matter of trust. You see a piece of information somewhere. Should you trust it? Is it true? What is this news and who is giving it to me?

The answer is usually a matter of context. If you saw something in, for example, a newspaper you know and trust, you’re more likely to trust it. Stripped of meaningful context, or presented in a misleading context, it’s much harder to know whether something posing as news should be believed.

The social media sites and search engines which now bring us our news show us things which they call news but which they have harvested elsewhere. They didn’t create it, they can’t vouch for it, they don’t and can’t stand behind it.

But they create their own context, using algorithms which, like all algorithms, are open to being gamed and abused.

These platforms are also widely trusted by their users. They create a false trust in information which users are predisposed to believe simply because the platform fed it to them.

Their ability to analyse our personal data and put a personal selection in front of every user makes it worse. No two users of Facebook ever see quite the same thing. Each has their own editor which reflects and confirms that person’s prejudices. Is this really the best way for people to find out about the world?

Who wins?

The reason it works this way is, of course, financial. The currency being traded is clicks – the desire for a user to interact with a piece of content or an ad. Pieces of content exist on their own, outside a product from which they were removed by the platforms and re-purposed as free and plentiful raw material for their click-creating, algorithm driven, machine.

Money is made from all this, but very few of the players get to make it. By far the lion’s share goes to the social networks and search engines, specifically Google and Facebook. They control the personal data which underlies the whole activity, and they operate at such gigantic scale that even tiny amounts of money resulting from a user doing something are magnified by the sheer volume of activity.

That’s why they rely on machines to do the editing. Anything else would be catastrophically inefficient.

In response to the fake news hysteria they are belatedly trying to distinguish between fake and true news, but of course they’re doing it using algorithms and buzzwords, not people.

Employees are expensive, and Silicon Valley fortunes depend on using them as little as possible. They’re not “scalable”.

Who loses?

So it comes as no surprise that the person who usually does worst in this whole new media landscape is the person who actually created the content in the first place – the person who couldn’t avoid investing time and money in doing so.

Yet, however popular their work turns out to be, they struggle to make money from it because the money-making machinery of the internet is all built around automation. The work of creators can be automatically exploited, ultra-efficiently, without payment and without restraint by others. No wonder those others do it.

But it’s not hard to see that it’s a perverse situation which concentrates revenue in the wrong place. Not only is that obviously unfair, it also gives rise to deeper problems, including fake news.

So the rest of us, the so-called end users, are collateral damage. We’re the ones caught in the middle, on the one hand being used as a source of advertising revenue for the giant platforms, on the other being fed this unreliable stream of stuff labelled, sometimes falsely, as “news”.

It’s important that creators can make money from their work

The inability to make money from content, particularly news content, gives rise to some very undesirable outcomes.

The rationale for investing in creating news content is undermined. It’s expensive and inefficient, and increasingly hard to make profitable in an internet which is optimised for efficiency and scalability. So news organisations cut costs, reduce staff, rely more on third parties. Less original news is created professionally.

Third parties sometimes step into the void to generate news and provide information. But they aren’t always ideal either. Often they are partisan, offering a particular point of view, with a principal loyalty not to the readers but to the agenda of their clients. PR people and spin doctors, for example, who have always been there trying to influence journalists and who can now, often, bypass them.

Others are more insidious. They might present themselves as experts, impartial or legitimate news organisations, but in fact have another agenda altogether. Ironically, some of them might find it easier to sustain themselves because their primary goal is influence, not profit – their funders measure the rewards in other ways.

Some news organisations, for example, are state funded and follow an agenda sanctioned by their political paymasters. Others hide both their agenda and their funding and present themselves alongside countless others online as useful sources of information.

We can see where fake news comes from.

Products matter more than “content”

It’s made worse by the habit of the big platforms to disassemble media products into their component pieces of content, and present them individually to their audiences.

A newspaper, made up of a few hundred articles assembled from hundreds of thousands made available to the editors, is disassembled as soon as it’s published and turned into a data stream by the search and social algorithms.

The data stream, with every source, real and fake, jumbled up together is then turned back into a curated selection for individual users. This is done not by editors but by algorithms which present reliable and unreliable sources side-by-side and without the context of a surrounding product.

The cost of “free”

The consumer, as Tim Berners-Lee points out and frets about, is the victim of this. They don’t know when they’re being lied to, they don’t know who to trust. They might, understandably, invest too much trust in the platforms which are, in fact, presenting them with a very distorted perspective.

Their data and other people’s content is turned into huge profits for the platforms, but at the cost of undermining the interests of each individual user and, therefore, society as a whole.

Think about the money

When considering how this problem might be solved we have to think about the money.

For news organisations to be able to invest in employing people and creating news, two interlinked factors are essential.

The first is that they need to be able to make enough money to actually do all that. They need to make more than they spend. Profit is not a distasteful or optional thing, it’s an absolute necessity.

The more, the better because it encourages competition and investment.

The second is that the profit needs to be driven by the users. The more people see of your product, the more opportunity to make money needs to arise – therefore the more you need to invest in delighting users and being popular by having a great product.

Running to stand still

This isn’t necessarily what happens when revenue is generated from advertising. Yields and rates tend to get squeezed over time, so even maintaining a certain level of revenue requires growth in volume every year. For many digital products, this means more content, more cheaply produced, more ads on every page. And, often, higher losses anyway.

When money is algorithmically generated from the advertising market, nearly all of it passes through the hands of a couple of major platforms. Their profits aren’t proportional to their own investment in the content they exploit, but to that of others. Good business, of course, and fantastically profitable.

Their dominance of the market, enabled by the internet, is unconstrained by regulators or effective competition (see, for example: http://precursorblog.com/?q=content/look-what’s-happened-ftc-stopped-google-antitrust-enforcement). This causes the profits to accumulate in great cash oceans in Silicon Valley, inaccessible and useless to the creators and media businesses whose search for a viable business model goes on.

The only other way

The only way for media products to make money, other than from advertisers in one form or another, is from their users directly.

Where revenue is earned by delighting consumers, their trust has to be earned and preserved. When those users are paying for your product, and choose whether to pay or not, pleasing them becomes more important than anything else.

Then the playing field gets tilted the other way, against fake news content and products, by journalism which can not only afford to, but has to, shine a spotlight on the lies and dishonesty of others – and where investment is rewarded by profit.

Tim Berners-Lee is wrong to hate copyright

This is why Tim Berners-Lee and others are wrong about copyright in the digital age. It might have seemed wrong to them when seen against the backdrop of an idealistic, utopian vision of the digital future.

But seen in the rather uglier light of today’s online reality its virtues are rather more apparent.

Copyright is a human right

Copyright gives creators some control over the destiny of their work. It applies to everyone who creates anything – that means you and me as well as so-called “professionals”.

Tim argues obsessively that everyone should have the right of control over data that is generated about them – privacy is his great hobby horse.

But he has argued the opposite about the near identical rights that copyright already gives to the creative works people create themselves.

The web isn’t the utopia everyone hoped for

The time has come for Tim Berners-Lee and others to acknowledge the mistake they have made about copyright. Arguing that it should be weak or non-existent doesn’t just help concentrate power and money in the hands of a tiny cadre of internet oligarchs, destroying opportunity for others at the same time.

It also destroys the economic basis for a plural, free and fearless press. It makes the space for misinformation and fake news. It betrays its users with the false promise of something for nothing. The price we really pay for the “free” web is becoming more and more obvious.

We are seeing right now how dangerous that false promise is.

It might not be fashionable but we can learn the lessons of history here. Copyright works. The idealism of the early internet has encountered a number of reality checks but the strange antipathy towards copyright has persisted and every attempt to change it has been rebuffed.

When wondering why this might be, don’t forget to consider those oceans of cash swilling around on the west coast of America, and ask the question: “who benefits from this?”

It certainly isn’t the rest of us.

When is a link not a link?

When is a link not a link?

When someone posts a link on Facebook, the first thing Facebook does is make a little abstract of the page they’re linking to and post it underneath. The headline, a picture or logo, a little bit of text. It takes a second or two to appear. Very handy. I can see what’s on the page without even clicking the link.

Like this. I just typed the URL and Facebook did the rest.
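As I understand it (the details here are my assumption, not something Facebook has confirmed to me), the Facebook robot fetches the page and looks for “Open Graph” meta tags to build that little abstract, falling back to scraping the page itself when they’re absent. A sketch of what those tags look like, with made-up example values:

```html
<!-- Hypothetical example of Open Graph tags in a page's <head>;
     Facebook's scraper reads these to build its link preview. -->
<meta property="og:title" content="My article headline" />
<meta property="og:description" content="A one-line summary of the article." />
<meta property="og:image" content="https://example.com/preview-picture.jpg" />
```

Which rather underlines the oddity: publishers are expected to mark up their pages to help the copying along, but there’s no equally simple markup to refuse it.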

[Screenshot: a Facebook link preview, generated automatically from a pasted URL]

But what if I own the page on the other end of the link, and I don’t want Facebook to do that? How do I stop them?

That question is part technical and part legal.

Is there any way of blocking the Facebook robot from copying the page and creating their own mini-copy of it for presenting in Facebook newsfeeds? Could it be done without blocking everyone else? Do they honour the robots exclusion protocol? (Yes, I know, I should do an experiment to find out).
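For what it’s worth, Facebook has documented a user-agent string for its crawler – “facebookexternalhit” – so if it honours the robots exclusion protocol (an assumption I haven’t tested, as I admit above), a robots.txt along these lines might block it while leaving everyone else alone:

```text
# Hypothetical robots.txt sketch - assumes Facebook's crawler
# identifies itself as "facebookexternalhit" and obeys robots.txt.
User-agent: facebookexternalhit
Disallow: /

# Everyone else is still welcome
User-agent: *
Disallow:
```

Of course, robots.txt is purely advisory: it only works if the robot chooses to obey it – which is rather the point of this post.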

But, also, is it legally OK? They are copying my stuff and they certainly aren’t asking first. Then they’re turning it into their own mini-version of my stuff, different from mine. What do they do with the copies of my version and of theirs? How can I find out?

The reason I’m wondering about this is that I’ve been looking at how to reduce my exposure to Facebook. Getting off it completely would, obviously, be the ideal. But like many others, I like the fact that Facebook keeps a tiny thread of connection open between me and people I would otherwise be completely detached from.

What I don’t like is that they can build up a complete record of my life. My pictures, my movements, they can recognise my children and my friends. I don’t like all that.

So I want to post my stuff somewhere else, a blog for example, and just put the links in Facebook. Have a way to talk to my friends, but without them sucking all my stuff right back in again.

It does open up a copyright can of worms as old as the web, and which people don’t really like to talk about.

At what point does the automated copying, storing, modification and re-publishing of other people’s stuff stop being a “fair use” (as the Americans, who, let’s face it, seem to have de facto dominance, would put it) and start being something which requires permission?

It was this question which led to the Automated Content Access Protocol. In part it led to the Copyright Hub. It’s lurking in the background of the forthcoming Publishers Right in the EU.

The right of businesses to grab, process, store and copy other people’s stuff seems to just be assumed now. Whole, HUGE, businesses depend on it. Search engines for a start, but also companies like Pinterest as well as, to a lesser extent, Facebook.

Perhaps it’s OK for that to be the default (although I can’t bring myself to embrace this). But surely the question I ask at the top shouldn’t be such a mystery. I have asked various geeks and they’re not quite sure. How DO you stop Facebook grabbing stuff from your site?

Surely it should be easy?

The old way, the copyright way, is that they can’t, unless you say it’s OK. That seems reasonable to me.

But if we’re going to have an internet-era reversal, where it’s OK until you say it isn’t, surely that shouldn’t be a difficult thing to do.

So, geeks and scholars, what am I missing? I realise it’s probably more of a thought experiment than a realistic prospect. But in that spirit, how could I make a nice place online where I can put things, keep it open to humans but stop the likes of Facebook coming in and grabbing it all?

If your answer is “you can’t” or “put a password on it”, is that reasonable?

I think the internet could do better.

The free flow of hypocrisy

I’ve been hearing this phrase “the free flow of information” a lot lately. It’s been in the context of the “Publishers Right” and it is usually preceded by the phrase “will restrict”.

The heart of the concern seems to be the idea that if permission is needed before digital publications can be exploited by others, it could limit, for example, the ways in which those works can be indexed and discovered in search engines.

The argument seems to be that restricting access to “information”, imposing conditions on its use or treating some users, like automated machines, differently from others, like humans, is not just improper but sinister and shouldn’t be allowed.

Google are a leading voice in this argument, so let’s have a look at how they work.

Google’s mission “to organize the world’s information and make it universally accessible and useful” is pretty much the ultimate expression of the ideals of free information advocates. For them to make something universally accessible it has to be completely unrestricted. But how unrestricted and accessible is Google itself?

You might not know it, but you can’t use Google without their permission and in return for a payment. If you’re a Google-like machine, you can’t access it at all. The universe of those who can access Google is rather less all-encompassing than their mission suggests.

Try this. Download a new web browser, install it, and don’t copy across any settings or cookies or anything. Then go to Google – don’t log in.

You’ll see something like this:

[Screenshot: the Google homepage with a privacy reminder banner at the top]

A little privacy reminder about Google’s (increasingly extensive) privacy policy sits at the top. If you click through you’ll be asked to click to show you accept the policy. Nice of them to go to the effort to make sure you’re aware of it, especially because it gives them pretty extensive rights to gather and exploit information about you.

This is how they pay for the free services they offer – they take something valuable from you in return and use it to make money for themselves. It’s a form of payment.

And if you don’t click to accept it, eventually you’ll see something like this:

[Screenshot: Google’s prompt requiring the privacy policy to be accepted before continuing]

You are actually not allowed to use Google until you have agreed explicitly to give them payment in the form of the data they want to gather and use.

So: using Google can only be done with their permission and in return for payment in the form of data.

There’s no technical reason for Google’s restrictions. They could offer a search service without gathering any data about users at all (and other services do). Their reason for these restrictions is obviously commercial: they need to make money and this is how they do it.

Whether or not you consider this to be reasonable (after all, every business needs to be able to make money), it doesn’t seem to sit very comfortably with their mission to make “all the world’s information… universally accessible”.

Nor, by the way, does their blanket ban on “automated traffic” using their services, which includes “robot, computer program, automated service, or search scraper” traffic. They ban anyone who does what Google does from accessing the information which they have gathered from others using automated traffic. “Universal access” in Google’s world doesn’t apply to services like Google – it is a service for humans only.

Again, you might think this is reasonable, but contrasting it with their demand that their machine should be allowed to access other peoples services without restriction or permission is interesting.

Google insists that everyone – human and machine – needs their permission (and needs to pay their price) before accessing and using their service. But they oppose any law which might require Google to similarly obtain permission or pay a price when they access other people’s services.

It’s absurd that there should be such a strong lobby against such an obviously reasonable and uncontroversial thing as the Publishers Right.

Google is a company which vies to be the world’s largest, and which depends for its revenues on its ability to impose terms, restrictions and forms of payment on its users. It’s hypocritical of them to object when other companies want to do the same.

The objections to the Publishers Right, and copyright more generally, are far too often the self-interest of mega-rich companies posing as the public interest. The credulity of politicians has, thankfully, reduced in recent years and they are more inclined to regard such lobbying sceptically.

There is no conflict between the need of media companies to have business models which allow them to stay in business and the “free flow” of information. Nor is there any conflict in wanting to distinguish between human users and machine-based exploiters of their content.

For information to flow freely, those who create it need to be able to operate on a level playing field with those who exploit it, and need to be able to come to agreements with them about the terms on which they do so. To suggest otherwise, even in the most libertarian of language, is absurd.