
Fake news and the faded idealism of the web

Tim Berners-Lee issued an epistle recently, a call to action to save the web from some dangers which concern him.

One of them is “misinformation” (or “fake news”, as it is rather more commonly and hysterically known). It’s a problem, he says. Everyone says it, and they’re right. Tim doesn’t identify the solution but he does have an interesting comment about the cause.

In fact the roots of the misinformation problem go right back to the birth of the web and the panglossian optimism that a new environment with new rules could lead to only good outcomes. The rights of creators, their ability to assert them and the failure of media business models on the web are at the heart of the problem – and point the way to solving it.

The problem

“Today, most people find news and information on the web through just a handful of social media sites and search engines”, says Tim. Interestingly, he doesn’t mention news products or sites as a source of news.

He is definitely right about the immediate cause of the problem. But why is it that social media and search are the leading sources of news? Why is it that fake news is more likely to thrive there? Could it be something to do with the foundations of the web that Tim himself helped create?

Tim is not a fan of copyright. “Copyright law is terrible”, he said in an interview three years ago.

He is not alone in the view that copyright is incompatible with the web. In fact, the web has largely ignored copyright as it has developed, as if it’s just an error to be worked around.

However innocuous and idealistic this might have seemed at the start, it has evolved into a crisis for the creative sector, which finds it ever harder to generate profits from its online activities.

But it has been a boon for the social media sites and search engines Tim talks about. They depend completely on the creative output of others. If you deleted all the content created by others from Google search and Facebook, what would be left? Literally nothing. It’s important for those businesses that content stays available and stays free.

So we find ourselves in an era when so-called “traditional” news media continues to struggle and the panic about “fake news” is growing ever greater. This is not a coincidence.

Fake or true is about trust

News is, at least in part, a matter of trust. You see a piece of information somewhere. Should you trust it? Is it true? What is this news and who is giving it to me?

The answer is usually a matter of context. If you saw something in, for example, a newspaper you know and trust, you’re more likely to trust it. Stripped of meaningful context, or presented in a misleading context, it’s much harder to know whether something posing as news should be believed.

The social media sites and search engines which now bring us our news show us things which they call news but which they have harvested elsewhere. They didn’t create it, they can’t vouch for it, they don’t and can’t stand behind it.

But they create their own context, using algorithms which, like all algorithms, are open to being gamed and abused.

These platforms are also widely trusted by their users. They create a false trust in information: simply because the platform fed it to them, users are predisposed to believe it.

Their ability to analyse our personal data and put a personal selection in front of every user makes it worse. No two users of Facebook ever see quite the same thing. Each has their own editor which reflects and confirms that person’s prejudices. Is this really the best way for people to find out about the world?

Who wins?

The reason it works this way is, of course, financial. The currency being traded is clicks – a user’s interaction with a piece of content or an ad. Pieces of content exist on their own, stripped from the products they were part of, removed by the platforms and re-purposed as free and plentiful raw material for their click-creating, algorithm-driven machine.

Money is made from all this, but very few of the players get to make it. By far the lion’s share goes to the social networks and search engines, specifically Google and Facebook. They control the personal data which underlies the whole activity, and they operate at such gigantic scale that even tiny amounts of money resulting from a user doing something are magnified by the sheer volume of activity.

That’s why they rely on machines to do the editing. Anything else would be catastrophically inefficient.

In response to the fake news hysteria, they are belatedly trying to distinguish between fake and true news, but of course they’re doing it using algorithms and buzzwords, not people.

Employees are expensive, and Silicon Valley fortunes depend on using them as little as possible. They’re not “scalable”.

Who loses?

So it comes as no surprise that the person who usually does worst in this whole new media landscape is the person who actually created the content in the first place – the person who had no choice but to invest time and money in doing so.

Yet, however popular their work turns out to be, they struggle to make money from it because the money-making machinery of the internet is all built around automation. The work of creators can be automatically exploited, ultra-efficiently, without payment and without restraint, by others. No wonder they do it.

But it’s not hard to see that it’s a perverse situation which concentrates revenue in the wrong place. Not only is that obviously unfair, it also gives rise to deeper problems, including fake news.

So the rest of us, the so-called end users, are collateral damage. We’re the ones caught in the middle, on the one hand being used as a source of advertising revenue for the giant platforms, on the other being fed this unreliable stream of stuff labelled, sometimes falsely, as “news”.

It’s important that creators can make money from their work

The inability to make money from content, particularly news content, gives rise to some very undesirable outcomes.

The rationale for investing in creating news content is undermined. It’s expensive and inefficient, and increasingly hard to make profitable in an internet which is optimised for efficiency and scalability. So news organisations cut costs, reduce staff, rely more on third parties. Less original news is created professionally.

Third parties sometimes step into the void to generate news and provide information. But they aren’t always ideal either. Often they are partisan, offering a particular point of view, and their principal loyalty is not to readers but to the agenda of their clients. PR people and spin doctors, for example, who have always been there trying to influence journalists and who can now, often, bypass them.

Others are more insidious. They might present themselves as experts, impartial or legitimate news organisations, but in fact have another agenda altogether. Ironically, some of them might find it easier to sustain themselves because their primary goal is influence, not profit – their funders measure the rewards in other ways.

Some news organisations, for example, are state funded and follow an agenda sanctioned by their political paymasters. Others hide both their agenda and their funding and present themselves alongside countless others online as useful sources of information.

We can see where fake news comes from.

Products matter more than “content”

It’s made worse by the habit of the big platforms to disassemble media products into their component pieces of content, and present them individually to their audiences.

A newspaper, made up of a few hundred articles assembled from hundreds of thousands made available to the editors, is disassembled as soon as it’s published and turned into a data stream by the search and social algorithms.

The data stream, with every source, real and fake, jumbled up together is then turned back into a curated selection for individual users. This is done not by editors but by algorithms which present reliable and unreliable sources side-by-side and without the context of a surrounding product.

The cost of “free”

The consumer, as Tim Berners-Lee points out and frets about, is the victim of this. They don’t know when they’re being lied to, they don’t know who to trust. They might, understandably, invest too much trust in the platforms which are, in fact, presenting them with a very distorted perspective.

Their data and other people’s content is turned into huge profits for the platforms, but at the cost of undermining the interests of each individual user and, therefore, society as a whole.

Think about the money

When considering how this problem might be solved we have to think about the money.

For news organisations to be able to invest in employing people and creating news, two interlinked factors are essential.

The first is that they need to be able to make enough money to actually do all that. They need to make more than they spend. Profit is not a distasteful or optional thing, it’s an absolute necessity.

The more, the better because it encourages competition and investment.

The second is that the profit needs to be driven by the users. The more people see of your product, the more opportunity to make money should arise – and therefore the more you need to invest in delighting users and being popular by having a great product.

Running to stand still

This isn’t necessarily what happens when revenue is generated from advertising. Yields and rates tend to get squeezed over time, so even maintaining a certain level of revenue requires growth in volume every year. For many digital products, this means more content, more cheaply produced, more ads on every page. And, often, higher losses anyway.

When money is algorithmically generated from the advertising market, nearly all of it passes through the hands of a couple of major platforms. Their profits aren’t proportional to their own investment in the content they exploit, but to that of others. Good business, of course, and fantastically profitable.

Their dominance of the market, enabled by the internet, is unconstrained by regulators or effective competition (see http://precursorblog.com/?q=content/look-what’s-happened-ftc-stopped-google-antitrust-enforcement). This causes the profits to accumulate in great cash oceans in Silicon Valley, inaccessible and useless to the creators and media businesses whose search for a viable business model goes on.

The only other way

The only way for media products to make money, other than from advertisers in one form or another, is from their users directly.

Where revenue is earned by delighting consumers, their trust has to be earned and preserved. When those users are paying for your product, and choose whether to pay or not, pleasing them becomes more important than anything else.

Then the playing field gets tilted the other way, against fake news content and products, by journalism which not only can afford to shine a spotlight on the lies and dishonesty of others but has to, because investment is rewarded by profit.

Tim Berners-Lee is wrong to hate copyright

This is why Tim Berners-Lee and others are wrong about copyright in the digital age. It might have seemed wrong to them when seen against the backdrop of an idealistic, utopian vision of the digital future.

But seen in the rather uglier light of today’s online reality its virtues are rather more apparent.

Copyright is a human right

Copyright gives creators some control over the destiny of their work. It applies to everyone who creates anything – that means you and me as well as so-called “professionals”.

Tim argues obsessively that everyone should have the right of control over data that is generated about them – privacy is his great hobby horse.

But he has argued the opposite about the near-identical rights that copyright already gives people over the creative works they make themselves.

The web isn’t the utopia everyone hoped for

The time has come for Tim Berners-Lee and others to acknowledge the mistake they have made about copyright. Arguing that it should be weak or non-existent doesn’t just help concentrate power and money in the hands of a tiny cadre of internet oligarchs, destroying opportunity for others at the same time.

It also destroys the economic basis for a plural, free and fearless press. It makes the space for misinformation and fake news. It betrays its users with the false promise of something for nothing. The price we really pay for the “free” web is becoming more and more obvious.

We are seeing right now how dangerous that false promise is.

It might not be fashionable but we can learn the lessons of history here. Copyright works. The idealism of the early internet has encountered a number of reality checks but the strange antipathy towards copyright has persisted and every attempt to change it has been rebuffed.

When wondering why this might be don’t forget to consider those oceans of cash swilling around on the west coast of America and ask the question “who benefits from this?”

It certainly isn’t the rest of us.

The free flow of hypocrisy

I’ve been hearing this phrase “the free flow of information” a lot lately. It’s been in the context of the “Publishers Right” and it is usually preceded by the phrase “will restrict”.

The heart of the concern seems to be the idea that if permission is needed before digital publications can be exploited by others, it could limit, for example, the ways in which those works can be indexed and discovered in search engines.

The argument seems to be that restricting access to “information”, imposing conditions on its use or treating some users, like automated machines, differently from others, like humans, is not just improper but sinister and shouldn’t be allowed.

Google are a leading voice in this argument, so let’s have a look at how they work.

Google’s mission “to organize the world’s information and make it universally accessible and useful” is pretty much the ultimate expression of the ideals of free information advocates. For them to make something universally accessible it has to be completely unrestricted. But how unrestricted and accessible is Google itself?

You might not know it, but you can’t use Google without their permission, and without making a payment. If you’re a Google-like machine, you can’t access it at all. The universe of those who can access Google is rather less all-encompassing than their mission suggests.

Try this. Download a new web browser, install it, and don’t copy across any settings or cookies. Then go to Google – don’t log in.

You’ll see something like this:

[Screenshot: the Google homepage, with a privacy reminder at the top]

A little privacy reminder about Google’s (increasingly extensive) privacy policy sits at the top. If you click through, you’ll be asked to click to show you accept the policy. Nice of them to go to the effort of making sure you’re aware of it, especially because it gives them pretty extensive rights to gather and exploit information about you.

This is how they pay for the free services they offer – they take something valuable from you in return and use it to make money for themselves. It’s a form of payment.

And if you don’t click to accept it, eventually you’ll see something like this:

[Screenshot: Google’s prompt requiring explicit acceptance of the privacy policy]

You are actually not allowed to use Google until you have agreed explicitly to give them payment in the form of the data they want to gather and use.

So: using Google can only be done with their permission and in return for payment in the form of data.

There’s no technical reason for Google’s restrictions. They could offer a search service without gathering any data about users at all (and other services do). Their reasons for these restrictions are obviously commercial: they need to make money and this is how they do it.

Whether or not you consider this to be reasonable (after all, every business needs to be able to make money), it doesn’t seem to sit very comfortably with their mission to make “all the world’s information… universally accessible”.

Nor, by the way, does their blanket ban on “automated traffic” using their services, which includes “robot, computer program, automated service, or search scraper” traffic. They ban anyone who does what Google does from accessing the information which they have gathered from others using automated traffic. “Universal access” in Google’s world doesn’t apply to services like Google – it is a service for humans only.
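The mechanics of this human/machine distinction are simple enough to sketch. The robots.txt rules below are invented for illustration (they are not copied from Google’s actual file), but the mechanism – a plain-text file telling automated visitors what they may not fetch, which Python’s standard library can interpret – is the standard one every site, Google included, uses to keep other people’s machines out:

```python
from urllib.robotparser import RobotFileParser

# An illustrative robots.txt: all automated visitors ("User-agent: *")
# may read an informational page but not the search results themselves.
robots_txt = """
User-agent: *
Allow: /search/about
Disallow: /search
""".strip().splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

# A well-behaved crawler is barred from the search results...
print(parser.can_fetch("SomeBot", "https://www.example.com/search?q=news"))
# ...but may read the page about the service.
print(parser.can_fetch("SomeBot", "https://www.example.com/search/about"))
```

The asymmetry is the point: a crawler is expected to ask, and to honour the answer – which is exactly the kind of permission-seeking the platforms resist when the content flows the other way.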

Again, you might think this is reasonable, but contrasting it with their demand that their machine should be allowed to access other people’s services without restriction or permission is interesting.

Google insists that everyone – human and machine – needs their permission (and needs to pay their price) before accessing and using their service. But they oppose any law which might require Google to similarly obtain permission or pay a price when they access other people’s services.

It’s absurd that there should be such a strong lobby against such an obviously reasonable and uncontroversial thing as the Publishers Right.

Google is a company which vies to be the world’s largest, and which depends for its revenues on its ability to impose terms, restrictions and forms of payment on its users. It’s hypocritical of them to object to the idea that other companies should be allowed to do the same.

The objections to the Publishers Right, and copyright more generally, are far too often the self-interest of mega-rich companies posing as the public interest. The credulity of politicians has, thankfully, reduced in recent years and they are more inclined to regard such lobbying sceptically.

There is no conflict between the need of media companies to have business models which allow them to stay in business and the “free flow” of information. Nor is there any conflict between that free flow and the desire to distinguish between human users and machine-based exploiters of their content.

For information to flow freely, those who create it need to be able to operate on a level playing field with those who exploit it, and need to be able to come to agreements with them about the terms on which they do so. To suggest otherwise, even in the most libertarian of language, is absurd.

The European Commission’s manifesto for The Copyright Hub

As you may know, I stepped down from The Copyright Hub earlier this year, two-and-a-half years into my planned one-year tenure.

The Hub is a fantastic, exhilarating, project which stands to create massive and positive change for creators. That is why it has attracted the wide-ranging support from an enormously diverse group of people, organisations, countries and businesses which you’ll see on the website. Among many other positive traits, The Copyright Hub is notable for being so far-sighted in anticipating the future needs of the internet when it comes to copyright.

I was reminded of this earlier this week, when I was taking part in a panel discussion about the new copyright package being proposed by the European Commission. It reads, in part, as if they wrote the Hub’s new manifesto.

I have rather neglected to pay proper attention to EU happenings lately, because my head is down and I am totally focussed on a rather wonderful and exciting new business I’m helping to start.

But when I looked up yesterday and paid attention to the briefing which preceded our panel session, I was struck by how the proposals – particularly those on the new Publishers Right – could have been written with The Copyright Hub in mind.

The nub of it is that more people, in future, will unambiguously need permission before they use other peoples’ work. Put the debate about the principle of this to one side for a moment and what’s left is a practical problem. How to identify who permission is needed from. How to obtain it in an efficient way.

The Copyright Hub was conceived in anticipation of these needs. It connects content to its rightsholder, and automates the process of seeking and granting permission to use it.

Taken together with the recent CJEU ruling in GS Media, which creates new obligations on services which link to infringing material to check its copyright status, the need for the Hub’s services has never been greater.

Many of the concerns and objections I heard voiced at the session yesterday were practical.

“How will sites know if content is infringing?”

“How can permission be obtained in practice?”

These are questions The Copyright Hub was conceived to answer – and when the answer becomes a matter of a simple, background, technical process it will usher in a new era of capability and value creation for the internet.
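As a sketch of what that “simple, background, technical process” could look like: none of the identifiers, field names or functions below belong to the Hub’s real API (they are invented for illustration), but the shape of the exchange – content carries an identifier, the identifier resolves to a rightsholder, and permission is granted or refused machine-to-machine – is the process described above:

```python
# A toy rights registry: content identifier -> rightsholder and licence terms.
# Everything here is hypothetical, illustrating the flow rather than any
# real Copyright Hub interface.
REGISTRY = {
    "hub:photo:1234": {
        "rightsholder": "Example Pictures Ltd",
        "licence": {"web_use": True, "fee_pence": 50},
    },
}

def seek_permission(content_id: str, use: str) -> dict:
    """Resolve a content identifier and ask for permission for a given use."""
    record = REGISTRY.get(content_id)
    if record is None:
        # The identifier doesn't resolve: no rightsholder can be found.
        return {"granted": False, "reason": "rightsholder unknown"}
    terms = record["licence"]
    if terms.get(use):
        return {"granted": True,
                "rightsholder": record["rightsholder"],
                "fee_pence": terms["fee_pence"]}
    return {"granted": False, "reason": "use not licensed"}

print(seek_permission("hub:photo:1234", "web_use"))  # permission granted
print(seek_permission("hub:photo:9999", "web_use"))  # rightsholder unknown
```

The point of automating this is that “how will sites know?” stops being a research project and becomes a lookup.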

The wording of the proposed legislation is also an improvement on the past. It avoids locking the law to the current state of technology – a sin committed by the safe harbour provisions of the E-commerce Directive. That directive addressed an issue which, at the time, was impossible to imagine being solved technologically. As the technology improved, developing from impossible to tricky to trivial, the law stood still and created a gigantic legal loophole through which businesses worth billions of dollars were driven and built, at the expense of rights owners.

The proposed new law doesn’t seem to make that mistake. It uses words like “proportionate”, “reasonable” and “adequate” – all terms whose interpretation will change as technology improves.

So it sets a challenge which I hope supporters of projects like The Copyright Hub and the Linked Content Coalition will take up with relish. How quickly can they deliver the open technology needed to make what is tricky today – identifying, verifying and agreeing rights automatically – trivial tomorrow?

Doing that the right way is hard. The Copyright Hub has not taken the easy route and has determinedly pursued an open approach to delivering its technology and governance. This is, of course, the right thing to do but technology doesn’t build itself and finding the resources needed, when there will be no direct commercial return to the Hub, is no small challenge.

The progress the Hub has made despite this has been encouraging, if slower on the technical front than I (and I think others) were hoping. The demand for the Hub has been consistently high, not just in the UK. The new legislative proposals will only increase it.

To be better able to meet that demand, the Hub needs more resources to build and manage technology for itself and its stakeholders. Few projects are lucky enough to start with an unpaid, publicly funded partner to help, as the Hub was with Digital Catapult, but such support can never last forever.

If anyone has any doubt about the rationale or opportunity of the Hub, a quick glance at the Commission’s proposed new copyright reforms should lay it to rest.

The Commission is saying that a more permissioned internet is coming. Those who have had a free ride are going to have their freedoms curtailed a little bit, and will need to ask first. Since the seeking and giving of permission has been the foundation of the whole creative economy, the importance of this is profound.

It will lead to value creation and opportunities that extend well beyond the creative sector. But that growth will be, in part, limited by the state of the art of technology for identifying rights and negotiating permission. A manual, unreliable, untrustworthy process won’t be “reasonable”, “proportionate” or “adequate”.

So the impact that these changes can deliver in practice is in the hands of the creative sector and projects like The Copyright Hub and the Linked Content Coalition, which it has sponsored with such foresight.

I thought when I started working on the Hub that the long haul towards an improving legislative environment online was going to be an awful lot longer. I imagined that we would have to build, implement and prove the technology in advance of being able to attract the attention of the law makers.

Despite some people thinking I was a wild optimist, it seems I was not nearly optimistic enough. The most frustrating moments working on The Copyright Hub came when dealing with people who just couldn’t understand why it mattered or would help, who didn’t believe the status quo would ever change.

Now is a moment for all of them to share my renewed, buoyant optimism that the status quo isn’t “locked in”. Legislative, as well as technological, change is not just possible but imminent – no doubt influenced by the great strides already taken by the Hub and other projects.

It would be an awful shame if the technology, having had such a great head-start, was overtaken by the legislation. Or the UK by other countries.

So… chequebooks out, everybody! If you care about the future health of the creative sector, the Hub is a huge asset. It needs your money and your work to implement its vision. This opportunity is bigger and sooner than we could ever have hoped.

Support The Copyright Hub! Its time is now…

The CJEU goes bonkers again…?

I am very much not a fan of the European Court of Justice and their whimsical way of making up laws which bear little relation to anything actually legislated.

Last week they were at it again, “banning” open wifi hotspots because they make copyright infringement too easy. The court said that if users need a password, and hotspot owners record their identity, copyright infringement will be reduced.

I am wondering if this time they accidentally got it right.

I’ve written before about the problem with safe harbour laws which protect service providers on the internet by absolving them of any liability for the users of their services.

The intention of this was understandable – why should someone be liable for something they cannot have any knowledge of – like copyright infringement, for example?

But the effect was catastrophic. It led to the absurd fandango of “notice and takedown” whereby copyright owners have to try to police the whole internet and then send notices to service providers to remove content.

The value of this, almost literal, get-out-of-jail-free card is shown in the fact that Google claims, at the time of writing, to have removed 1.79 billion URLs from search in response to these notices. This is a gigantic undertaking, yet they still prefer this way of working to anything more sensible which might prevent infringing content appearing in the first place.

The problem with safe harbours for me has always been that they only do half the job. Sure, fine, fair enough, don’t make service providers liable for something they didn’t do (although in other areas of the law – nightclubs for example – service providers have exactly this liability). The liability, in copyright safe harbour regimes, is firmly with the person who did the bad thing in question.

Unfortunately, although service providers can use the law to put their hands out and say “not my fault, guv”, they are usually unable to point to the person whose fault it is – their customer, the person to whom they provided a service and who used it to do something illegal and who is liable in law for their actions. Even if they can, they will frequently make it as difficult as possible to discover.

So the safe harbour, while trying to limit a risk (which, at the time the law was written, might have seemed unmanageable – although current technology makes it a simple matter), actually creates a thick shield behind which pretty well anyone can do pretty much any infringing they like, safe in the knowledge that there will, with vanishingly few exceptions, be no consequences at all. In practice the worst outcome will be that the infringing content gets removed.

Copyright infringement is thus a zero cost, zero consequence activity on the internet thanks to safe harbour laws.

Many businesses have been founded to take advantage of this loophole and many fortunes have been made – just not by copyright holders who provide the raw materials.

I’ve always thought that safe harbour laws could be hugely improved if, in order to get the legal protection from liability, the service provider needs to have made at least some effort to be able to identify the person who is actually liable – the user. In return for immunity, they would have to be able to lift the anonymity of the alleged wrong-doer. Again, not unprecedented.

And, as far as wifi hotspots are concerned anyway, the CJEU seems to agree.

The court might have come up with a rather clumsy and faffy way of doing it but this is a change which, if applied more broadly to the copyright safe harbour, would go a very long way to re-balancing the internet and restoring creativity to its proper place near the top of the internet value chain.

So I find myself in the unaccustomed position of agreeing with the CJEU on one of their copyright rulings. It won’t last.

About that photo on Facebook… we’re blaming the wrong people

Not long ago there was an eruption of anger and indignation about Facebook’s repeated censorship of Nick Ut’s upsetting and famous picture of Phan Thi Kim Phuc running from napalm in Vietnam.

The thing that surprised me about it wasn’t what Facebook did, but that news organisations went to the trouble of inviting them to do it. The picture was published, by the publisher, on their Facebook page. It didn’t get there by accident.

The fear that Facebook’s domination of access to news is inevitable becomes a self-fulfilling prophecy if news publishers keep acting against their own editorial and commercial interests.

Any editor who thinks the answer is for Facebook to hire more editors and start to do their jobs for them is surely looking in the wrong direction. Instead of asking someone else to do their job surely they should be doing it themselves.

Facebook’s domination isn’t inevitable

Much of the anguished debate about the Nick Ut picture focused on the inevitability of Facebook’s dominance over the media, their policies, the way they apply them and righteous indignation about their lack of editorial judgement in the face of a self-evidently historic and editorially important photograph.

Facebook’s policies (or, as you might call them in a rather old fashioned way, their Style Guide) are algorithmic and might not be to the taste of every editor.

They’re certainly not to my taste. That’s not unusual. Some newspapers in the UK, for instance, are perfectly happy to publish even the very most taboo of swear words, others will avoid them or use asterisks.

There is no universal rulebook of editorial standards and no actual news product is edited by a robot.

The problem with Facebook’s rules is that they apply them, after the fact, to other people’s editorial judgements and, in fact, to everything everyone publishes on Facebook.

There’s a simple answer to this: don’t let them.

Publish your work on a platform you control. Your web site, for example. Don’t just give in to the inevitability that Facebook will take over the world, because to do so means giving up not just your editorial control and integrity, but also your business.

But, if you have contractually and morally decided to cede control to Facebook, don’t be surprised when they behave in the way they do.

Why does Facebook do what it does?

Facebook, because of its nature, is never going to be a good editor. Whatever you might think about it, they are trying to oversee all the content posted by everyone by applying a single set of rules. The fact that everyone in the world doesn’t agree with them is not very surprising.

Even when humans are involved, for instance in censoring photos, they are driven by calculations not value judgements and they are not likely to be career journalists with decades of experience in making editorial judgements.

The rules for nakedness seem to be something like this:

Not naked: OK.
Naked: bad – remove (NB male nipples OK, female nipples not OK).
Naked child: ultra-mega-very-bad – remove. No exceptions.
Naked child in important news story: still ultra-mega-very-bad – remove.
Naked child in important news story now being re-posted and protested by thousands of people: still ultra-mega-very-bad – remove.
Context: irrelevant – ignore.
Protests by non-Facebookian humans: irrelevant – ignore.
Protests by human non-American Prime Ministers: irrelevant – ignore.
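The ruleset above, taken at face value, can be sketched in a few lines of code. To be clear, this is a caricature of how the logic appears from the outside, not Facebook’s actual code; the photo attributes are invented for illustration.

```python
# A caricature of the moderation rules as they appear to apply from
# the outside. Nothing here is Facebook's real logic.

def moderate(photo):
    """Return 'ok' or 'remove' for a photo, ignoring all context."""
    if not photo.get("naked"):
        return "ok"
    # Male nipples pass; female nipples and any other nakedness do not.
    if photo.get("nipples") == "male" and photo.get("subject") != "child":
        return "ok"
    # Newsworthiness, context and protest are simply never consulted.
    return "remove"

# The Nick Ut photograph, by these rules:
print(moderate({"naked": True, "subject": "child", "newsworthy": True}))  # → remove
```

Note what is missing: there is no branch anywhere for context, which is rather the point.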

This is not surprising. Facebook as a machine is not intelligent, it doesn’t have emotions, experience or judgement, it cannot understand context except in the most simplistic terms. It is programmed for efficiency which means ambiguity is not an option.

That’s why even ‘intelligent’ machines are frequently moronic in their output. We are all aware of this, we all put up with it all the time. It’s also why the work of humans is so much more satisfying.

But they backed down this time…

There was a loud and widespread scream from the internet about this one.

Facebook backed down. Of course they did, as soon as a sufficiently senior and sensible human Facebookian got involved. A cathartic yelp of victory has been heard and small celebrations have ensued among those grateful for a rare event worthy of celebrating.

Not worth celebrating at all is the fact that intelligent, experienced editors have allowed the Facebook machine to stand between them and their readers, censoring as it goes.

The madness of Instant Articles

This isn’t an accident. It isn’t just because of users adding links into their news feeds.

Editors and publishers have been actively participating in a Facebook product called Instant Articles.

Rather than linking out to the publishers’ sites, instead their content is served by Facebook within the Facebook platform.

As we have learned from this whole episode, there are downsides to this when the Facebook editorial algorithm makes moronic decisions.

There are other downsides too – Facebook’s algorithms also decide when and where to feature the content and they have allegedly been reducing its visibility in people’s newsfeeds. Only a proportion of the content submitted is widely viewable. So another layer of editorial interference is lurking.

Also, obviously, users aren’t looking at the publishers’ products. They’re looking at little slices of them, extracted and shown out of the context of everything else. Perhaps this is inevitable on the internet where sharing of stories is ubiquitous, but is it really a good thing? Should publishers actively hand over control of their users’ experience as well as put up with its inevitable dilution? Seems odd to me.

Lastly, according to the publishers I have spoken to, there’s absolutely no commercial upside at all. They don’t make any more money. Given that they make precious little money when someone views a page anyway, it seems odd to give up so much in return for so little.

So what are the upsides?

Well. Instant Articles load faster, especially on mobiles.

As far as I can tell, from what I have been told, that’s kind of it. Well… you stay “visible” and “relevant” and your product “responds to the changing needs of your users” and various other things which I might rudely summarise as “we’re not doing nothing”. But none of it helps the bottom line or the product.

It’s just weird that editors and publishers are colluding with this.

It’s not Facebook’s fault

Blaming Facebook for being what it is, demanding it change into what news organisations are, does nothing other than offer a comforting distraction from the reality of how this came about. And it isn’t Facebook’s fault.

Publishers need to acknowledge that not-doing-nothing isn’t the same as having a strategy, and that doing things which have costs but no benefits is not a sensible way of not-doing-nothing.

Running with the herd and trying not to break away is comforting but so far it hasn’t worked out too well.

And we wonder why newspapers are in trouble…

Blocking the blockers is a waste of a good crisis

Back when my day job involved worrying about such things, I didn’t much like the online advertising market. As a publisher, it’s quite hard to love.

Advertising works for publishers when they can charge a premium price for their ads, establish and defend a meaningful market share, turn a larger audience into higher yields and more revenue. None of these things are easy, or even possible, for most publishers in the online advertising market.

That’s why huge sites with massive audiences (by publishing standards anyway) are unable to be profitable, and it’s why cutting costs is better than investing in product.

Enter the ad-blocker

Recently, ad blocking has entered the mainstream thanks to players like Apple and Three, and everyone is up in arms. The publishing industry is crying foul, demanding that something be done, predicting dire consequences if they are cut off from their income source.

Now I’m not defending or celebrating ad-blocking. Some of it does indeed, as John Whittingdale said, seem like a protection racket.

But from the point of view of a publisher shouldn’t it be more a call-to-action than a call-to-whinge?

The truth is that the advertising income stream has never been enough to sustain them, and the situation has got worse not better over time. Ad blocking potentially accelerates but doesn’t fundamentally change the ultimate consequence of this.

So now, surely, is the time to start to focus industry thinking not on how to preserve the starvation regimen offered by online advertising, but on how to move past it? To tap into the much richer, much bigger, much fairer and more sustainable opportunities offered by the content itself rather than the annoying, uncontrollable and, as more and more users now know, block-able ads around the edges of it.

Can’t pay, won’t pay

Ah, I can hear the chorus of groans already.

“Consumers won’t pay” it rumbles.

“You can’t compete with free, subscriptions don’t work, paywalls go against the grain of the internet, micro-payments are impossible”.

It’s as if people actually take comfort from defeatist aphorisms, as an alternative to actually trying to change anything. It certainly makes life easier: if everybody expects the worst then it’s hard to disappoint them.

But it’s nonsense, and it’s feeble, and it leaves the cultural and creative industries, together many times bigger than the advertising market, marooned by their own despair.

Perhaps one of the reasons people won’t pay is because they can’t pay.

I don’t mean they can’t afford it. I mean there’s no simple way of handing over money. They literally can’t pay. That’s at least partly why they won’t.

Obviously, even if they could, they would have to want to – the challenge would be to make products good enough and to price them right.

That’s a creative challenge: know your user, make something that strongly appeals to them, charge a price they’re willing to pay without much thought. The same challenge which defines, effectively, the whole of the creative sector whether making films, music, books, newspapers, photography, games or anything else.

Can every page pay?

OK stop for a moment before you start groaning. Think about it. Don’t get defeated by the frustration of the years of trying to make micro-payments and subscriptions work. Look past that.

Imagine a world where every time your creative product or its content gets consumed you benefit. On terms which you have set. Imagine if every page could pay. What would it do to products, to revenues, to relationships with users?

When I ask content producers this question, most of them get quite excited. They see a world in which their focus becomes clearer. Pleasing their readers, viewers, listeners and players rather than the robots which deliver people to ad-serving systems. More consumption. More revenue. More investment in product leading to more popularity. What management consultants call a virtuous circle.

“Be popular” is the goal. The more popular, the more successful. Every page pays, predictably. Investing in creativity and creative products becomes rational again, innovating to better serve your audience becomes a key imperative, beating your competition drives the urgent need to keep evolving.

But what about the masters in the middle?

Of course there are lots of intermediaries on the internet, sitting in various places in between the content owners and the users. Search engines, ISPs, ad networks, mobile companies, aggregators, countless others.

Very often they’re the gatekeepers as well. To get to users you have to go through them, and on the way through they limit the rewards you can hope for.

But they’re also the people who can provide an answer to the payment conundrum. They are retailers. Many of them are already collecting money from your users for various things.

Just as newspaper publishers never tried to collect 25p individually from every person buying their papers, but instead got newsagents to do it in return for a share of the money, the solution to the payment problem might lie in getting other people to do it for you. As long as what’s good for them is also good for you, and vice versa, there are lots of reasons to work together.
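The newsagent model translates to the online world as a simple revenue split: the intermediary collects from the user and keeps an agreed share. A toy illustration, with every number invented for the sake of the arithmetic:

```python
# A toy illustration of the newsagent model applied online: an
# intermediary collects payment from users and keeps an agreed cut.
# The price and share below are invented, not a real proposal.

COVER_PRICE = 0.25      # what the user pays per page, in GBP
RETAILER_SHARE = 0.25   # the intermediary's agreed cut of gross revenue

def settle(pages_viewed):
    """Split a day's revenue between publisher and intermediary."""
    gross = pages_viewed * COVER_PRICE
    intermediary = gross * RETAILER_SHARE
    publisher = gross - intermediary
    return publisher, intermediary

pub, inter = settle(100_000)
# On these made-up numbers the publisher keeps 18750.0 and the
# intermediary keeps 6250.0 of a 25000.0 gross.
```

The point of the sketch is the alignment: the intermediary only earns when the publisher does, so both have the same incentive to grow consumption.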

Aligning incentives

The key, as the creative sector has known for centuries, is to have control over the terms under which you offer your work. The law has given creators this control ever since the advent of copyright.

Making this possible requires some new technical plumbing, to allow copyright to work as efficiently as advertising and websites themselves.

After that it’s down to the innovators, the creative companies and anyone who doesn’t want to rely on a failing ad-driven business model, to come up with a much more rapid evolution and new ways to please consumers and share rewards.

Since what we’re talking about here is supplementing ad revenues, not replacing them, this doesn’t need to involve huge controversy. For the creative industries to win, the ad industry doesn’t have to lose (they’re doing that on their own anyway). New opportunity is something everyone can move towards.

Never waste a good crisis

What’s needed is a spark to trigger all this movement. I think ad-blocking might be it. Something to move away from, a failing model for ad-based revenues. Projects like The Copyright Hub and the Linked Content Coalition are creating the basis for building a new value layer for the internet. This will lead to the emergence of new players who will make it easier for everyone to find new sources of revenue from users and others.

Who will these new players be?

Watch this space.


Tis but a flesh wound

Much has been written in the last week or two about the death of newspapers. The announcement that the Independent will cease its print edition has prompted this hand-wringing and outpouring. The Independent’s hobbyist owner, Evgeny Lebedev, has offered up his own wisdom about the situation. In an interview with the Guardian he claims his rivals are “in denial” about print.

“I genuinely believe that the future is digital and that the industry is in denial…” he says, positioning himself as the pioneering leader of an otherwise moribund pack.

I chuckled when I read this, in the patronising way only a long-in-the-tooth, seen-it-all-before old dinosaur can. Evgeny is not to be ignored, and he has done some interesting and innovative things, but he could easily be accused of a certain amount of denial himself.

While print might be a rapidly declining market in both circulation and advertising terms, it remains the case that for certain newspapers print is still profitable.

Not, I agree, for everyone, and if you were the proprietor of a newspaper selling around 50,000 copies a day in a national newspaper market which manages to sell nearly 7m copies daily, carrying on would have started to seem irrational quite some time ago. Being in last place, with under 1%, isn’t exactly a glorious place to be in any market. In a declining market, less so. In a declining market with high overheads and reducing yields, less still.

So fine, Evgeny, shut down your print titles. Can’t imagine why you didn’t do it years ago (unless, of course, the reason why a mysteriously wealthy Russian former spy buys a failing British newspaper isn’t just because he’s interested in the bottom line).

But Evgeny’s digital dream is almost comical. For the Independent to have a future, digital or otherwise, it has to have an income. Ideally, unless it plans to rely on charity, it should have more income than expenditure. Which as countless newspapers have found, is a bit of a challenge in the digital domain.

It’s not like the Independent is the first to try this, but the precedents are not good. Going “digital only” is usually a prelude to going bust or carrying on in name only, trying to attract enough traffic to bring in a dribble of cash. That’s because “digital only” tends to mean, other than in niche areas, ad-funded.

Unfortunately ad-funded means a rather unreliable revenue stream, since increased traffic only converts a fraction of the increase into meaningful ad revenue. It also means a rather uncertain future because the online ad marketplace is one largely out of the control of any site which is seeking ad revenues. If you’re running to stand still, you’re doing rather well.

So success as an online newspaper is elusive. As so many have shown, it’s relatively easy to drive audiences to numbers which dwarf print circulations. What’s much harder is to convert those audiences into profitable or even meaningful revenue streams. So the usual approach is to try to cut costs, to acquire audience for the minimum possible investment, or keep spending and produce a fantastic product sustained by the hope that popularity will eventually deliver meaningful revenues. Just ask the Guardian and the Daily Mail how well that works out in practice.

Which means Evgeny’s high-minded promises to retain the services of high priced journalists and foreign bureaux are unlikely to survive the brutal reality of the digital only world for long. If he really believes that this transition, and the promised re-investment of freed-up capital, will lead to growth then he’s either talking about growing something other than profit, or he’s a fantasist.

The truth is that until the internet grows up enough to deliver meaningful, reliable revenue from online audiences, this sort of transition will continue to end in failure. Giving up print is simply giving up. For the Independent, which has struggled to be commercially viable for much of its existence, it might be finally succumbing to the inevitable.

It’s a very sad day because for all its failure the Independent has been a great newspaper, editorially proud and brave and with lots to admire. At least that’s what plenty of people I respect say. Personally I never read it much. Which I think probably explains the problem – I wasn’t alone.

Not enough people wanted to read the Independent. That’s why it failed. When the digital life support machine is finally turned off it will be the end of a painfully prolonged death. If Evgeny wants to invest in anything, in the meantime, he should try to make it something which might actually change the online marketplace into one where it’s possible for newspapers and other content businesses to thrive. That’s what I have been working on.

But that requires a strategic vision which extends beyond just brave and unrealistic rhetoric.

Farewell, the Independent. You were great. Rest in peace whenever you are finally allowed to.

Walls and words – the importance of language

YouTube put up a paywall. But they’re not calling it that. In various headlines culled from various search engines, YouTube Red is called “a subscription service”, an “ad-free music and video plan” and so on. Not a paywall.

I used to think about paid services (and how to make them pay) when I worked in newspapers. One thing I relentlessly hated was the word “paywall”. It was so negative and pejorative, a word which almost demanded to be used apologetically or disparagingly.

Despite this it was and is used even by the wordsmiths in the newspaper industry – and more or less universally – as a piece of jargon describing the desire to charge customers for creative products.

But not by YouTube, or those reporting their new business. Perhaps that’s a semantic issue – you can still get YouTube without paying if you’re willing to put up with ads – but it’s a telling reminder of how important language, sometimes almost subliminally, is to people’s perceptions.

It is even more telling when people talk about copyright.

Think about copyright. It’s a right which is automatically granted to everyone whenever they create something. The right confers on them a freedom to decide what happens to their work, which they can use however they want – to spread their work freely around, or to keep it private, or to use their work to form collaborations with others; and to agree to do whatever they want on whatever terms they want.

To put it another way, copyright is a freedom, granted to all creators. But that’s not how it’s talked about, even in the law. UK law talks about “the acts restricted by copyright”. It’s written in terms of what you can’t do, not what you can.

That might be a legal necessity, but it sets the tone for a lot of the debate about copyright. Because copyright is a restricting thing, it must be a negative thing. It stops people doing things, so the things it stops must be some sort of loss. It restricts and so somehow must be blocking someone else’s freedom.

This isn’t just annoying, it’s dangerous, because it has set the tone for debate. You can tell where the idea that copyright is just lent by society to creators (“for a limited time” as the US Constitution puts it – more negative language) comes from.

Copyright advocates are always fighting back against this presumption of negativity, always defending against these attacks rather than being able to talk of the huge and diverse cultural and economic benefits which copyright unlocks and the huge potential to do even more. But even they, in defending copyright, find themselves using the same negative language which feeds the negative attitudes they rail against. Copyright protects, it prevents, it is enforced.

We don’t talk about “till walls” in shops. We don’t talk about human rights in terms of the freedom they deny to one person in order to grant a more important freedom to someone else. And YouTube doesn’t talk about “paywalls” when they decide that their users might like to pay for a product made out of creative content.

So anyone who recognises the great, positive impact of copyright and its potential to deliver the real value of the internet in the coming third era of its evolution, should learn the lesson of positive language.

Talk about freedom, talk about reward, talk about copyright being for everyone, every creator, every person.

Talk about what copyright enables, not what it restricts.

Google seeks licences from rightsholders, world still turning

So, despite a campaign to prevent it, the Germans have changed their copyright law a little bit, raising the possibility that search engines might have to pay a fee for news content they access.

Google has responded by changing the rules of Google News in Germany to make it “opt in”.

In other words, before Google will crawl German news sites, they will obtain permission from the publisher.

A licence, you might call it. The thing copyright law always said you needed before copying and exploiting someone else’s content.

I have seen no mention of any basis for sitting down and, you know, actually negotiating the terms of the licence with Google, talking about what you want from them in return. I presume their opt-in is a “take it or leave it” sort of thing. They don’t seem to be offering money, which we can all clearly see they couldn’t possibly afford with only $10bn profit last year on a pitiful $50bn turnover.

All the German news publishers can have, it seems, is their random share of the supposed 6 billion (mostly completely worthless) visits which Google News sends to publishers. I hope they find this offer resistible bearing in mind the minimal impact that being out of Google News is likely to have on their bottom line.

Still. Google seeking licences, eh? Asking permission? Admittedly, they only seem to be doing so to avoid being forced to share a tiny slice of their enormous wealth with those who provide their raw materials. A little tight-fisted perhaps.

But it shows that there might be new life in the old copyright dog yet. And new value, if a permission-based internet starts to creep slowly closer.

Unintended consequences

The government is concerned. Bad things are happening. The internet is a corrupting and subversive influence, tipping bad people over the edge into depravity and evil deeds. Something must be done.

So, ministers have summoned internet companies. A Code of Conduct is under consideration for ISPs. We need their help to stop the bad things.

Child porn, radicalising websites, other distasteful or criminal material need to be controlled. They are damaging our society and creating deviants and criminals.

The call for “internet companies” to step in to try to prevent this is understandable. After all, they stand between the bad people publishing this bad stuff and the innocent users who risk being corrupted, radicalised and deranged by what they see.

Responsible action by “internet companies” is needed to tame the wilder, antisocial extremes of behaviour online.

If you pause to think, you might wonder why these internet companies aren’t already doing something about it without being dragged in to see the headmaster. Everything on the internet has some sort of interaction with an “internet company”, whether it is hosting, uploading, streaming, aggregating or whatever. If their users are doing bad things, you would have thought they might want to do something about it. Why do they need to be summoned by the government to point out the obvious?

Well, one reason might be that there was a law passed more than a decade ago which specifically exempted them from any responsibility for what their users do and publish using their facilities.

In fact, because of the way the law is worded, it almost obliges internet companies not to check or have any awareness of what their users are doing. Once they are aware of illegal or infringing activity, they are obliged to act to stop it, but as long as they’re unaware they have no liability.

The law actually enshrines ignorance as a legal defence. Awareness is an expensive and risky business so actively policing and monitoring what people are publishing is an unappealing option. Ignorance is bliss. Profitable bliss.

The law in question is the European E-Commerce Directive, which creates broad exemptions for “intermediaries” on the internet.

The rationale for that law is obvious but the effect it has had is perhaps less positive than was intended. I have written before about the catastrophic effects for copyright and the creative industries. The problems of criminal and deviant activities which are so exercising the government at the moment would seem, at the very least, not to be helped either.

Of course it’s not true to say that internet companies should be blamed for the bad things that other people do. It’s not their fault and it’s not entirely within their power to prevent it either.

However, when you have written a law which specifically disincentivises them from doing anything at all to exercise any control, and then find yourself calling them in for a meeting to ask them nicely if they wouldn’t mind making a little more effort, you should perhaps ask yourself whether you have got the balance quite right.

Pub landlords don’t make anybody get drunk but they can still lose their licence for allowing excessive drunkenness. Football clubs don’t organise riots but they can still be penalised for the bad behaviour of their fans. Where responsibility is at least partly shared, more responsible behaviour tends to emerge. Where someone is made immune from consequences, responsible behaviour is less likely to emerge.

The e-commerce directive is the unintended consequences law. Whatever protection it gave to the mewling, vulnerable, infant internet is no longer needed. The internet has grown up into a strapping teenager, able to stand on its own two feet and behave like a grown-up. It’s time it was given the responsibilities to go with the freedoms and profits.
