So, saving the media. How?

Finally the emperor has no clothes. The creative media will never be able to adapt to the internet the way it is now. More and more people are saying it. Media is dying.

Why? Because it’s starving. There simply isn’t enough money to pay for everything. However good the media has been at garnering audiences and data, the impossibility of trading those things for meaningful amounts of money has become apparent to even the most optimistic enthusiasts.

Without money, media withers and dies. Newspapers, with a few stand-out exceptions, are withering away at an alarming rate. Magazines, long dependent on their print editions to keep going, have hit a wall.

The simple and seductive idea that advertising could translate internet popularity into money has proved itself wrong. We need not dwell on the reasons other than to observe that advertising isn’t working, and has never really worked, as a sustainable revenue source for online media. After roughly twenty years of waiting and hoping that things might change, the patience and financial reserves of the media have begun to run out.

Which leaves a gloriously simple problem. The media needs to make more money. It needs to translate audience into revenue.

If advertising can’t do it, what can?

There’s only one other source of money and that is the audience themselves. The stand-out exceptions I mentioned above are thriving because they’re charging for access. The London Times, the Washington Post, the Economist and so on.

For them, subscriptions are the central focus. The Times of London is profitable for the first time in living memory as a result of its obsessive, long term, subscription focus.

The only way customers can be persuaded to pay, and keep paying, is if The Times focuses on nothing more than producing a product which entertains, informs, delights and surprises them. That is great news for customers. The Times has to be trustworthy. It has to be consistent. It has to be, and to stay, excellent or people will simply decide not to pay for it.

The same is not true of free products, which need to capture enough readers to generate data to sell to advertisers.  They often do this by generating “click-bait” stories, which, as the name indicates, are a form of con and hostile to readers.  Free products need to display as many ads as they possibly can to maximise the (still pitiful) revenue that data can generate. They need to cut their investment in content and the creators who make it, to try to make ends meet, thereby short-serving their readers.

So even if being asked to pay seems, initially, like a bad option, it turns out that for a significant number of users it is not. But only if the product is good enough to justify the cost.

That’s an important factor for publishing people to consider when they find themselves thinking “but nobody will be willing to pay”. It is surely true that persuading people to pay for a product which has been optimised for being free, and in the process become unsatisfying and hostile, is tough. But it’s not a generic truth that people won’t pay.

People will pay. They’ll pay for anything for which their desire exceeds the cost being demanded – whether it’s media, groceries, cars or jewellery. The amount of desire, the acceptable cost and the product might vary from person to person, but it is that basic equation which drives all consumer markets.

The task of the media is to bring cost and desire for their products into line.

If the cost has to be more than zero in order to remain in business, what has to happen to the product to make it viable? Self-evidently it has to be attractive to enough customers. That probably involves more change than simply putting a price sticker on it. But where there’s a return there’s a business plan. Investment to make the product better is justified by the improved bottom line that stands to be gained.

Lastly, what about the cost? The Times and others have shown the way by creating a high-value product that sells to hundreds of thousands of people. They have found a lot of people willing to part with a fair amount of money every month because their desire for The Times exceeds the cost being asked.

It isn’t cheap, though. The Times is most certainly a high-end product aimed at affluent individuals. That’s why its subscription base is somewhere below 10% of the people who might otherwise choose to read its product. The other 90+% just have to be ignored or, in some cases, given a certain amount of free content in order to tempt them in.

For other publishers, with larger and less affluent or less committed audiences, the investment in making the product more desirable has to be justified by a price much, much lower than the subscriptions currently doing so well at the very top of the market – a price which appeals to a much broader demographic.

Lowering that cost and creating really huge new sources of revenue and profit is the next challenge.

Which will be the subject of the next blog…


Rebooting copyright (blog)

Ah hello hello hello. Long time no, um, blog.

I’ve been busy going back to first principles and working out how we can adapt to a world in which the failure of copyright seems to be collapsing the media ever more quickly.

I’m still obsessive about copyright, of course, but I have begun to wonder if we need to focus our attention in a different direction.

Getting right to the nub of it, the central purpose of copyright is to enable creators to benefit from their work. It has lots of surrounding detail but that core function is critical.

Critical and no longer reliable.

So I have paused, for a while, my focus on the legal and regulatory causes of the malaise. Coming up with solutions which work – something I have helped with – can’t fix anything as long as progress is a political rather than a practical process.

So I have been focusing on the practical. What can be done, right now, without the need for any political involvement at all?

Not just conceptualising it, but designing it. Not just designing it but building it.

It’s built. It’s about to launch. It makes, I hope you will think, perfect sense. And it changes everything, without depending on the politicians changing anything.

So I’m going to start writing here and elsewhere again, to explain some of the thinking which has led to Agate. Keep an eye on www.Agate.one where a new site will be launched soon, and a product soon afterwards.

Fake news and the faded idealism of the web

Tim Berners-Lee issued an epistle recently, a call to action to save the web from some dangers which concern him.

One of them is “misinformation” (or “fake news” as it is rather more commonly and hysterically known). It’s a problem, he says. Everyone says it, and they’re right. Tim doesn’t identify the solution but he does have an interesting comment about the cause.

In fact the roots of the misinformation problem go right back to the birth of the web and the panglossian optimism that a new environment with new rules could lead to only good outcomes. The rights of creators, their ability to assert them and the failure of media business models on the web are at the heart of the problem – and point the way to solving it.

The problem

“Today, most people find news and information on the web through just a handful of social media sites and search engines” says Tim. Interestingly, he doesn’t mention news products or sites as a source of news.

He is definitely right about the immediate cause of the problem. But why is it that social media and search are the leading sources of news? Why is it that fake news is more likely to thrive there? Could it be something to do with the foundations of the web that Tim himself helped create?

Tim is not a fan of copyright. “Copyright law is terrible”, he said in an interview three years ago.

He is not alone in the view that copyright is incompatible with the web. In fact, the web has largely ignored copyright as it has developed, as if it’s just an error to be worked around.

However innocuous and idealistic this might have seemed at the start, it has evolved into a crisis for the creative sector, which finds it ever harder to generate profits from its online activities.

But it has been a boon for the social media sites and search engines Tim talks about. They depend completely on the creative output of others. If you deleted all the content created by others from Google search and Facebook, what would be left? Literally nothing. It’s important for those businesses that content stays available and stays free.

So we find ourselves in an era when so-called “traditional” news media continues to struggle and the panic about “fake news” is growing ever greater. This is not a coincidence.

Fake or true is about trust

News is, at least in part, a matter of trust. You see a piece of information somewhere. Should you trust it? Is it true? What is this news and who is giving it to me?

The answer is usually a matter of context. If you saw something in, for example, a newspaper you know and trust, you’re more likely to trust it. Stripped of meaningful context, or presented in a misleading context, it’s much harder to know whether something posing as news should be believed.

The social media sites and search engines which now bring us our news show us things which they call news but which they have harvested elsewhere. They didn’t create it, they can’t vouch for it, they don’t and can’t stand behind it.

But they create their own context, using algorithms which, like all algorithms, are open to being gamed and abused.

These platforms are also widely trusted by their users. They create a false trust in information which their users are predisposed to believe simply because the platform fed it to them.

Their ability to analyse our personal data and put a personal selection in front of every user makes it worse. No two users of Facebook ever see quite the same thing. Each has their own editor, one which reflects and confirms that person’s prejudices. Is this really the best way for people to find out about the world?

Who wins?

The reason it works this way is, of course, financial. The currency being traded is clicks – the desire of a user to interact with a piece of content or an ad. Pieces of content exist on their own, outside the products from which the platforms removed them, re-purposed as free and plentiful raw material for their click-creating, algorithm-driven machine.

Money is made from all this, but very few of the players get to make it. By far the lion’s share goes to the social networks and search engines, specifically Google and Facebook. They control the personal data which underlies the whole activity, and they operate at such gigantic scale that even tiny amounts of money resulting from a user doing something are magnified by the sheer volume of activity.

That’s why they rely on machines to do the editing. Anything else would be catastrophically inefficient.

In response to the fake news hysteria they are belatedly trying to distinguish between fake and true news, but of course they’re doing it using algorithms and buzzwords, not people.

Employees are expensive and Silicon Valley fortunes depend on using them as little as possible. They’re not “scalable”.

Who loses?

So it comes as no surprise that the person who usually does worst in this whole new media landscape is the person who actually created the content in the first place. They had no choice but to invest time and money in doing so.

Yet, however popular their work turns out to be, they struggle to make money from it because the money-making machinery of the internet is all built around automation. The work of creators can be automatically exploited by others, ultra-efficiently, without payment and without restraint. No wonder they do it.

But it’s not hard to see that it’s a perverse situation which concentrates revenue in the wrong place. Not only is that obviously unfair, it also gives rise to deeper problems, including fake news.

So the rest of us, the so-called end users, are collateral damage. We’re the ones caught in the middle, on the one hand being used as a source of advertising revenue for the giant platforms, on the other being fed this unreliable stream of stuff labelled, sometimes falsely, as “news”.

It’s important that creators can make money from their work

The inability to make money from content, particularly news content, gives rise to some very undesirable outcomes.

The rationale for investing in creating news content is undermined. It’s expensive and inefficient, and increasingly hard to make profitable in an internet which is optimised for efficiency and scalability. So news organisations cut costs, reduce staff, rely more on third parties. Less original news is created professionally.

Third parties sometimes step into the void to generate news and provide information. But they aren’t always ideal either. Often they are partisan, offering a particular point of view, and their principal loyalty is not to the readers but to the agendas of their clients. PR people and spin doctors, for example, who have always been there trying to influence journalists and who can now, often, bypass them.

Others are more insidious. They might present themselves as experts, impartial or legitimate news organisations, but in fact have another agenda altogether. Ironically, some of them might find it easier to sustain themselves because their primary goal is influence, not profit – their funders measure the rewards in other ways.

Some news organisations, for example, are state funded and follow an agenda sanctioned by their political paymasters. Others hide both their agenda and their funding and present themselves alongside countless others online as useful sources of information.

We can see where fake news comes from.

Products matter more than “content”

It’s made worse by the habit of the big platforms to disassemble media products into their component pieces of content, and present them individually to their audiences.

A newspaper, made up of a few hundred articles assembled from hundreds of thousands made available to the editors, is disassembled as soon as it’s published and turned into a data stream by the search and social algorithms.

The data stream, with every source, real and fake, jumbled up together is then turned back into a curated selection for individual users. This is done not by editors but by algorithms which present reliable and unreliable sources side-by-side and without the context of a surrounding product.

The cost of “free”

The consumer, as Tim Berners-Lee points out and frets about, is the victim of this. They don’t know when they’re being lied to, they don’t know who to trust. They might, understandably, invest too much trust in the platforms which are, in fact, presenting them with a very distorted perspective.

Their data and other people’s content is turned into huge profits for the platforms, but at the cost of undermining the interests of each individual user and, therefore, society as a whole.

Think about the money

When considering how this problem might be solved we have to think about the money.

For news organisations to be able to invest in employing people and creating news, two interlinked factors are essential.

The first is that they need to be able to make enough money to actually do all that. They need to make more than they spend. Profit is not a distasteful or optional thing, it’s an absolute necessity.

The more, the better because it encourages competition and investment.

The second is that the profit needs to be driven by the users. The more people see of your product, the more opportunity to make money needs to arise – and therefore the more you need to invest in delighting users and earning popularity with a great product.

Running to stand still

This isn’t necessarily what happens when revenue is generated from advertising. Yields and rates tend to get squeezed over time, so even maintaining a certain level of revenue requires growth in volume every year. For many digital products, this means more content, more cheaply produced, more ads on every page. And, often, higher losses anyway.

When money is algorithmically generated from the advertising market, nearly all of it passes through the hands of a couple of major platforms. Their profits aren’t proportional to their own investment in the content they exploit, but to that of others. Good business, of course, and fantastically profitable.

Their dominance of the market, enabled by the internet, is unconstrained by regulators or effective competition (see http://precursorblog.com/?q=content/look-what’s-happened-ftc-stopped-google-antitrust-enforcement). This causes the profits to accumulate in great cash oceans in Silicon Valley, inaccessible and useless to the creators and media businesses whose search for a viable business model goes on.

The only other way

The only way for media products to make money, other than from advertisers in one form or another, is from their users directly.

Where revenue is earned by delighting consumers, their trust has to be earned and preserved. When those users are paying for your product, and choose whether to pay or not, pleasing them becomes more important than anything else.

Then the playing field gets tilted the other way, against fake news content and products, by journalism which can not only afford to, but has to, shine a spotlight on the lies and dishonesty of others – journalism in which investment is rewarded by profit.

Tim Berners-Lee is wrong to hate copyright

This is why Tim Berners-Lee and others are wrong about copyright in the digital age. It might have seemed wrong to them when seen against the backdrop of an idealistic, utopian vision of the digital future.

But seen in the rather uglier light of today’s online reality its virtues are rather more apparent.

Copyright is a human right

Copyright gives creators some control over the destiny of their work. It applies to everyone who creates anything – that means you and me as well as so-called “professionals”.

Tim argues obsessively that everyone should have the right of control over data that is generated about them – privacy is his great hobby horse.

But he has argued the opposite about the near-identical rights that copyright already gives people over the creative works they themselves create.

The web isn’t the utopia everyone hoped for

The time has come for Tim Berners-Lee and others to acknowledge the mistake they have made about copyright. Arguing that it should be weak or non-existent doesn’t just help concentrate power and money in the hands of a tiny cadre of internet oligarchs, destroying opportunity for others at the same time.

It also destroys the economic basis for a plural, free and fearless press. It makes the space for misinformation and fake news. It betrays its users with the false promise of something for nothing. The price we really pay for the “free” web is becoming more and more obvious.

We are seeing right now how dangerous that false promise is.

It might not be fashionable but we can learn the lessons of history here. Copyright works. The idealism of the early internet has encountered a number of reality checks but the strange antipathy towards copyright has persisted and every attempt to change it has been rebuffed.

When wondering why this might be don’t forget to consider those oceans of cash swilling around on the west coast of America and ask the question “who benefits from this?”

It certainly isn’t the rest of us.

When is a link not a link?

When is a link not a link?

When someone posts a link on Facebook, the first thing that Facebook does is make a little abstract of the page they’re linking to and post it underneath. The headline, a picture or logo, a little bit of text. It takes a second or two to appear. Very handy. I can see what’s on the page without even clicking the link.

Like this. I just typed the URL and Facebook did the rest.

[Screenshot: Facebook’s automatically generated link preview]
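For context, that little abstract is typically built from the “Open Graph” meta tags in the page’s HTML – og:title, og:image, og:description – which the scraper reads when it fetches a copy of the page. Here’s a minimal sketch, in Python, of roughly what such a scraper does; the sample page is invented for illustration, though the og: properties are the real Open Graph ones:

```python
from html.parser import HTMLParser

# An invented sample page, standing in for the page behind the link.
SAMPLE_PAGE = """
<html><head>
  <title>Saving the media</title>
  <meta property="og:title" content="Saving the media">
  <meta property="og:image" content="https://example.com/logo.png">
  <meta property="og:description" content="Why advertising failed online.">
</head><body>Article text...</body></html>
"""

class OpenGraphParser(HTMLParser):
    """Collects og:* meta tags into a preview dict, roughly as a
    link-preview scraper would."""
    def __init__(self):
        super().__init__()
        self.preview = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property", "")
        if prop.startswith("og:"):
            # Strip the "og:" prefix: og:title -> "title", etc.
            self.preview[prop[3:]] = attrs.get("content", "")

parser = OpenGraphParser()
parser.feed(SAMPLE_PAGE)
# parser.preview now holds the headline, image and snippet the
# platform would display as its abstract of the page.
```

Real scrapers also fall back to the page’s title and images when the og: tags are absent – which is exactly why the copying question arises: building this abstract requires fetching, storing and processing a copy of the page.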

But what if I own the page on the other end of the link, and I don’t want Facebook to do that? How do I stop them?

That question is part technical and part legal.

Is there any way of blocking the Facebook robot from copying the page and creating their own mini-copy of it for presenting in Facebook newsfeeds? Could it be done without blocking everyone else? Do they honour the robots exclusion protocol? (Yes, I know, I should do an experiment to find out).
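For what it’s worth, Facebook’s scraper identifies itself with the user-agent facebookexternalhit. So if it does honour the robots exclusion protocol – the experiment I haven’t yet run – a robots.txt like this sketch would ask that robot, and only that robot, to stay out, while leaving everyone else welcome:

```text
# robots.txt, served at the site root
User-agent: facebookexternalhit
Disallow: /

# all other robots remain welcome
User-agent: *
Disallow:
```

Whether Facebook actually obeys it is, of course, the open question.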

But, also, is it legally OK? They are copying my stuff and they certainly aren’t asking first. Then they’re turning it into their own mini-version of my stuff, different from mine. What do they do with the copies of my version and of theirs? How can I find out?

My reason for wondering about this is that I have been thinking about how to reduce my exposure to Facebook. Getting off it completely would, obviously, be the ideal. But like many others, I like the fact that Facebook keeps a tiny thread of connection open between me and people I would otherwise be completely detached from.

What I don’t like is that they can build up a complete record of my life. My pictures, my movements, they can recognise my children and my friends. I don’t like all that.

So I want to post my stuff somewhere else, a blog for example, and just put the links in Facebook. Have a way to talk to my friends, but without them sucking all my stuff right back in again.

It does open up a copyright can of worms as old as the web, and which people don’t really like to talk about.

At what point does the automated copying, storing, modification and re-publishing of other people’s stuff stop being a “fair use” (as the Americans, who, let’s face it, seem to have de facto dominance, would put it) and start being something which requires permission?

It was this question which led to the Automated Content Access Protocol. In part it led to the Copyright Hub. It’s lurking in the background of the forthcoming Publishers Right in the EU.

The right of businesses to grab, process, store and copy other people’s stuff seems to just be assumed now. Whole, HUGE, businesses depend on it. Search engines for a start, but also companies like Pinterest as well as, to a lesser extent, Facebook.

Perhaps it’s OK for that to be the default (although I can’t bring myself to embrace this). But surely the question I ask at the top shouldn’t be such a mystery. I have asked various geeks and they’re not quite sure. How DO you stop Facebook grabbing stuff from your site?

Surely it should be easy?

The old way, the copyright way, is that they can’t, unless you say it’s OK. That seems reasonable to me.

But if we’re going to have an internet-era reversal, where it’s OK until you say it isn’t, surely that shouldn’t be a difficult thing to do.

So, geeks and scholars, what am I missing? I realise it’s probably more of a thought experiment than a realistic prospect. But in that spirit, how could I make a nice place online where I can put things, keep it open to humans but stop the likes of Facebook coming in and grabbing it all?

If your answer is “you can’t” or “put a password on it”, is that reasonable?

I think the internet could do better.

The free flow of hypocrisy

I’ve been hearing this phrase “the free flow of information” a lot lately. It’s been in the context of the “Publishers Right” and it is usually preceded by the phrase “will restrict”.

The heart of the concern seems to be the idea that if permission is needed before digital publications can be exploited by others, it could limit, for example, the ways in which those works can be indexed and discovered in search engines.

The argument seems to be that restricting access to “information”, imposing conditions on its use or treating some users, like automated machines, differently from others, like humans, is not just improper but sinister and shouldn’t be allowed.

Google are a leading voice in this argument, so let’s have a look at how they work.

Google’s mission “to organize the world’s information and make it universally accessible and useful” is pretty much the ultimate expression of the ideals of free information advocates. For them to make something universally accessible it has to be completely unrestricted. But how unrestricted and accessible is Google itself?

You might not know it, but you can’t use Google without their permission and in return for a payment. If you’re a Google-like machine, you can’t access it at all. The universe of those who can access Google is rather less all-encompassing than their mission suggests.

Try this. Download a new web browser, install it, and don’t copy across any settings or cookies or anything. Then go to Google – don’t log in.

You’ll see something like this:

[Screenshot: Google home page with a privacy reminder banner]

A little privacy reminder about Google’s (increasingly extensive) privacy policy sits at the top. If you click through you’ll be asked to click to show you accept the policy. Nice of them to go to the effort of making sure you’re aware of it, especially because it gives them pretty extensive rights to gather and exploit information about you.

This is how they pay for the free services they offer – they take something valuable from you in return and use it to make money for themselves. It’s a form of payment.

And if you don’t click to accept it, eventually you’ll see something like this:

[Screenshot: Google blocking access until the privacy policy is accepted]

You are actually not allowed to use Google until you have agreed explicitly to give them payment in the form of the data they want to gather and use.

So: using Google can only be done with their permission and in return for payment in the form of data.

There’s no technical reason for Google’s restrictions. They could offer a search service without gathering any data about users at all (other services do). Their reasons for these restrictions are obviously commercial: they need to make money and this is how they do it.

Whether or not you consider this to be reasonable (after all, every business needs to be able to make money), it doesn’t seem to sit very comfortably with their mission to make “all the world’s information… universally accessible”.

Nor, by the way, does their blanket ban on “automated traffic” using their services, which includes “robot, computer program, automated service, or search scraper” traffic. They ban anyone who does what Google does from accessing the information which they have gathered from others using automated traffic. “Universal access” in Google’s world doesn’t apply to services like Google – it is a service for humans only.

Again, you might think this is reasonable, but contrasting it with their demand that their machine should be allowed to access other people’s services without restriction or permission is interesting.

Google insists that everyone – human and machine – needs their permission (and needs to pay their price) before accessing and using their service. But they oppose any law which might require Google to similarly obtain permission or pay a price when they access other people’s services.

It’s absurd that there should be such a strong lobby against such an obviously reasonable and uncontroversial thing as the Publishers Right.

Google is a company which vies to be the world’s largest, and which depends for its revenues on its ability to impose terms, restrictions and forms of payment on its users. It’s hypocritical of them to object to the idea that other companies should be allowed to do the same.

The objections to the Publishers Right, and copyright more generally, are far too often the self-interest of mega-rich companies posing as the public interest. The credulity of politicians has, thankfully, reduced in recent years and they are more inclined to regard such lobbying sceptically.

There is no conflict between the need of media companies to have business models which allow them to stay in business and the “free flow” of information. Nor is there any conflict in the desire to distinguish between human users and machine-based exploiters of their content.

For information to flow freely, those who create it need to be able to operate on a level playing field with those who exploit it, and need to be able to come to agreements with them about the terms on which they do so. To suggest otherwise, even in the most libertarian of language, is absurd.

Were newspapers wrong about online? Kinda…

Were newspapers wrong to go digital? asks Roy Greenslade, reporting on some very interesting research from the USA. He points out that many, if not most, newspapers could be more profitable if they closed their websites and just focused on print instead.

I don’t think the mistake was going digital, unless any newspaper had obsolescence as its long term plan. However, it’s hard to ignore the unvarnished reality that almost everything newspapers have done in the digital sphere has been a commercial failure.

The challenge now is not to ruminate on what could have been, but to recognise the mistakes so they can be learned from. They can still be corrected.

Two key imperatives

There are two key strategic imperatives which can help answer the conundrum. They are generally valid not just for newspapers and not, in fact, just in the digital sphere, but all media products.

Firstly, popularity must lead to success. In the case of newspapers, that needs to mean that the more you’re read, the more money you make.

Secondly, you must maintain reasonable control over terms of trade. You need to decide how much you sell your product for to the next person in the value chain. If someone else decides whether, and how much, you get paid you cannot build any kind of sustainable business.

So, for newspapers, the mis-steps are obvious when viewed with these two imperatives in mind.

There’s nothing wrong with the idea of charging

Newspapers, nearly universally, abandoned the idea of charging for their online products. Although this led to a huge increase in consumption, it did not (and still does not) lead to a commensurate increase in revenue. Popularity no longer delivers revenue, yet they keep chasing popularity as if it does – frequently making their product horrible for readers as they go.

Looking for the reasons for this, a big one is easy to find in the way the online advertising market works. Newspapers have next to no influence over the quantity of advertising their sites can sell, nor the price it gets sold for. Achieving flat revenues year-over-year is regarded as a decent outcome by most of them, even if traffic has increased.

Can it be fixed? Yes it can!

What could they do differently, even at this late stage of despair? Sustainable success is still achievable if they can re-build the link between popularity and revenue, and regain control over their terms of trade.

One thing, obvious I think, is to charge for access to their products. There are loads of reasons why this is a good idea not just for newspapers but also for their readers, starting with the fact that it can deliver both strategic imperatives.

For this to seem like the right, even obvious, thing to do, newspaper managers have to accept that traffic is not the same as money and stop judging themselves by meaningless and mostly implausible numbers of “uniques” or other similar metrics. They need to train their staff and their investors to look somewhere else for measures of success, starting with the bottom line. Profit is not a tawdry or embarrassing objective.

Make something worth paying for

Having done this they need to come up with an attractive product. This means more than just slapping a price sticker on the thing they have now.

Current products have evolved in a search-optimised, ad-funded, traffic-hungry, revenue-poor environment. They aren’t really built with the readers’ delight in mind and are, as the US research points out, almost universally “less-than-satisfactory”.

Newspaper reading has traditionally been driven by habit. Making a habit-forming product, rather than just a data feed for search and social, which people return to every day, is central to success.

A big re-think is needed to create truly engaging, habit-forming, delightful products. There are already products which show the way. Subscribers to The Times, a subscription product built around the needs of users above all else, will tell you how much they like and enjoy it. That’s in no small part because a product created with the goal of delighting humans instead of search engines and social platforms is, well… delightful. People want to go back to it.

Price it right

When they have worked out their nice new product, newspapers will have to come up with an attractive price as well. The question is not just whether customers are prepared to pay; how, and how much, are crucial too.

Subscriptions are a dreadful solution to this because they demand commitment from a group of notoriously mercurial customers. Newspaper readership is a casual thing: people change their minds, they switch around, they read more than one title.

Even if someone forms a habit around a single newspaper, they don’t like to feel that they’re locked-in. So, demanding commitment, making your customers promise to pay you not just now but into the future as well, is unattractive to most readers.

You can achieve a measure of success with subscriptions, as The Times among others has shown. But you leave an awful lot of opportunity and audience on the table. There are better ways, and I’m building one of them.

Have a business plan you can believe in

Once you have your attractive, reader-centric, product, and you’ve got your pricing sorted out, and you have a nice user experience, and you have stopped talking about your product as if it’s a high security zone (a “pay wall” to keep the riff-raff out), you will discover you can write a fairly confident investment case.

Knowing how much revenue you stand to gain as your product builds popularity means you can work out how much to invest in the product, in marketing it, in the content.

In other words, you have a business.

Not only that, you have the ad revenue on top. Unpredictable it will remain, but it will also in future be secondary. You can put it towards the Christmas party.

Be confident

“Ah yes”, I hear the cry. “All sounds very nice but if it was that easy it would have happened by now”.

Whoever is shouting that is committing the greatest sin that the newspaper business has been guilty of in the digital era: a lack of self-confidence and an obliviousness to its own power and influence.

Perhaps because the print market was so mature and didn’t offer much incentive to take risks, perhaps because there have been no genuine strategic challenges for decades, perhaps because the intense short-term focus of the newspapers distracts everyone from thinking about the future; whatever the reason, the newspaper sector has developed an actual aversion to innovation.

They claim to be innovative, like all businesses, but they are not. For newspapers, innovation means following the herd, jumping on bandwagons, doing what everyone else is doing. Copying a seemingly successful tactic you have seen elsewhere. Buying a drone for your CEO and saying “look boss, this is what the cool kids are doing, do you feel cool now?”. It’s fun to play with other people’s toys but ultimately it hasn’t worked.

If there is one reason above all others for the mess they have got into, it is a lack of courage to believe in themselves, an instinct to treat with suspicion any idea which someone else hasn’t already delivered.

This is what has put newspapers in thrall to new platforms whose interests are in no way aligned, but who are younger, cooler and richer.

Doing something to shape the digital landscape into one which works for them is something newspapers haven’t really tried to do because they don’t think they can. But the landscape others have built isn’t one in which they have thrived.

Yes, you can. Really.

If newspapers are finally ready to abandon their defeatist self-pity and act confidently they can still reverse their misfortune. On the other side of it they will find a far richer opportunity than they have ever imagined.

They need to believe in themselves and their ability to produce great products which human customers love and will pay for.

One more thing…

There’s one other bit to making this work, of course. They all need mechanisms capable of delivering the money in a way which doesn’t make paying for their products more trouble than it’s worth.

I’ll help…

That’s what I’m working on. I can see the money, the opportunity and the way to deliver it. I’m working with a group of gratifyingly receptive and non-defeatist publishers, as well as some other media companies, to develop it in partnership. We will be launching the first products in the new year.

If you’re a publisher of any kind of media product, or a creator, you will love it because it will connect your popularity with revenue and give you an opportunity to develop, grow and attract investment to your business.

If you’re a user you will love it too because you’ll be in charge – you’ll be able to access everything and because everyone will be competing for your money, they’ll also be competing to make the product and offer you love enough to pay. You’ll be the customer again.

If you don’t believe me, or want to pick holes in my logic, or want to understand my reasoning in more detail, or want to just tell me how wrong I am, get in touch and let’s talk.

So, Roy, going digital was not a mistake for newspapers, but the failure to innovate and drive their business rationally most certainly was. That, however, can change.

The European Commission’s manifesto for The Copyright Hub

As you may know, I stepped down from The Copyright Hub earlier this year, two-and-a-half years into my planned one-year tenure.

The Hub is a fantastic, exhilarating, project which stands to create massive and positive change for creators. That is why it has attracted the wide-ranging support from an enormously diverse group of people, organisations, countries and businesses which you’ll see on the website. Among many other positive traits, The Copyright Hub is notable for being so far-sighted in anticipating the future needs of the internet when it comes to copyright.

I was reminded of this earlier this week, when I was taking part in a panel discussion about the new copyright package being proposed by the European Commission. It reads, in part, as if they wrote the Hub’s new manifesto.

I have rather neglected to pay proper attention to EU happenings lately, because my head is down and I am totally focussed on a rather wonderful and exciting new business I’m helping to start.

But when I looked up yesterday and paid attention to the briefing which preceded our panel session, I was struck by how the proposals – particularly those on the new Publishers Right – could have been written with The Copyright Hub in mind.

The nub of it is that more people, in future, will unambiguously need permission before they use other people’s work. Put the debate about the principle of this to one side for a moment and what’s left is a practical problem. How to identify who permission is needed from. How to obtain it in an efficient way.

The Copyright Hub was conceived in anticipation of these needs. It connects content to its rightsholder, and automates the process of seeking and granting permission to use it.

Taken together with the recent CJEU ruling in GS Media, which creates new obligations on services which link to infringing material to check copyright, the need for the Hub’s services has never been greater.

Many of the concerns and objections I heard voiced at the session yesterday were practical.

“How will sites know if content is infringing?”

“How can permission be obtained in practice?”

These are questions The Copyright Hub was conceived to answer – and when the answer becomes a matter of a simple, background, technical process it will usher in a new era of capability and value creation for the internet.

The wording of the proposed legislation is also an improvement on the past. It avoids locking the law to the current state of technology – a sin committed by the safe harbour provisions of the E-commerce Directive. That directive addressed an issue which, at the time, was impossible to imagine being solved technologically. As the technology improved, developing from impossible to tricky to trivial, the law stood still and created a gigantic legal loophole through which businesses worth billions of dollars were driven and built, at the expense of rights owners.

The proposed new law doesn’t seem to make that mistake. It uses words like “proportionate”, “reasonable” and “adequate” – all terms whose interpretation will change as technology improves.

So it sets a challenge which I hope supporters of projects like The Copyright Hub and the Linked Content Coalition will take up with relish. How quickly can they deliver the open technology needed to make what is tricky today – identifying, verifying and agreeing rights automatically – trivial tomorrow?

Doing that the right way is hard. The Copyright Hub has not taken the easy route and has determinedly pursued an open approach to delivering its technology and governance. This is, of course, the right thing to do but technology doesn’t build itself and finding the resources needed, when there will be no direct commercial return to the Hub, is no small challenge.

The progress the Hub has made despite this has been encouraging, if slower on the technical front than I (and I think others) were hoping. The demand for the Hub has been consistently high, not just in the UK. The new legislative proposals will only increase it.

To be better able to meet that demand, the Hub needs more resources to build and manage technology for itself and its stakeholders. Few projects are lucky enough to start with an unpaid, publicly funded partner to help, as the Hub was with Digital Catapult, but such support can never last forever.

If anyone has any doubt about the rationale or opportunity of the Hub, a quick glance at the Commission’s proposed new copyright reforms should lay it to rest.

The Commission is saying that a more permissioned internet is coming. Those who have had a free ride are going to have their freedoms curtailed a little; they will need to ask first. Since the seeking and giving of permission has been the foundation of the whole creative economy, the importance of this is profound.

It will lead to value creation and opportunities that extend well beyond the creative sector. But that growth will be, in part, limited by the state of the art of technology for identifying rights and negotiating permission. A manual, unreliable, untrustworthy process won’t be “reasonable”, “proportionate” or “adequate”.

So the impact these changes can deliver in practice is in the hands of the creative sector and of projects like The Copyright Hub and the Linked Content Coalition, which it has sponsored with such foresight.

I thought when I started working on the Hub that the long haul towards an improving legislative environment online was going to be an awful lot longer. I imagined that we would have to build, implement and prove the technology in advance of being able to attract the attention of the law makers.

Despite some people thinking I was a wild optimist, it seems I was not nearly optimistic enough. The most frustrating moments working on The Copyright Hub came when dealing with people who just couldn’t understand why it mattered or would help, who didn’t believe the status quo would ever change.

Now is a moment for all of them to share my renewed, buoyant optimism that the status quo isn’t “locked in”. Legislative, as well as technological, change is not just possible but imminent – no doubt influenced by the great strides already taken by the Hub and other projects.

It would be an awful shame if the technology, having had such a great head-start, was overtaken by the legislation. Or the UK by other countries.

So… chequebooks out, everybody! If you care about the future health of the creative sector, the Hub is a huge asset. It needs your money and your work to implement its vision. This opportunity is bigger and sooner than we could ever have hoped.

Support The Copyright Hub! Its time is now…

The CJEU goes bonkers again…?

I am very much not a fan of the European Court of Justice and their whimsical way of making up laws which bear little relation to anything actually legislated.

Last week they were at it again, “banning” open wifi hotspots because they make copyright infringement too easy. The court said that if users need a password, and hotspot owners record their identity, copyright infringement will be reduced.

I am wondering if this time they accidentally got it right.

I’ve written before about the problem with safe harbour laws which protect service providers on the internet by absolving them of any liability for the users of their services.

The intention of this was understandable – why should someone be liable for something they cannot have any knowledge of – like copyright infringement, for example?

But the effect was catastrophic. It led to the absurd fandango of “notice and takedown” whereby copyright owners have to try to police the whole internet and then send notices to service providers to remove content.

The value of this, almost literal, get-out-of-jail-free card is shown by the fact that Google claims, at the time of writing, to have removed 1.79Bn URLs from search in response to these notices. This is a gigantic undertaking, yet they still prefer this way of working to anything more sensible which might prevent infringing content appearing in the first place.

The problem with safe harbours for me has always been that they only do half the job. Sure, fine, fair enough, don’t make service providers liable for something they didn’t do (although in other areas of the law – nightclubs for example – service providers have exactly this liability). The liability, in copyright safe harbour regimes, is firmly with the person who did the bad thing in question.

Unfortunately, although service providers can use the law to put their hands out and say “not my fault, guv”, they are usually unable to point to the person whose fault it is – their customer, the person to whom they provided a service and who used it to do something illegal and who is liable in law for their actions. Even if they can, they will frequently make it as difficult as possible to discover.

So the safe harbour, while trying to limit a risk (which, at the time the law was written, might have seemed unmanageable – although current technology makes it a simple matter), actually creates a thick shield behind which pretty well anyone can do pretty much any infringing they like, safe in the knowledge that, with vanishingly few exceptions, there will be no consequences at all. In practice the worst outcome will be that the infringing content gets removed.

Copyright infringement is thus a zero cost, zero consequence activity on the internet thanks to safe harbour laws.

Many businesses have been founded to take advantage of this loophole and many fortunes have been made – just not by copyright holders who provide the raw materials.

I’ve always thought that safe harbour laws could be hugely improved if, in order to get the legal protection from liability, the service provider needs to have made at least some effort to be able to identify the person who is actually liable – the user. In return for immunity, they would have to be able to lift the anonymity of the alleged wrong-doer. Again, not unprecedented.

And, as far as wifi hotspots are concerned anyway, the CJEU seems to agree.

The court might have come up with a rather clumsy and faffy way of doing it but this is a change which, if applied more broadly to the copyright safe harbour, would go a very long way to re-balancing the internet and restoring creativity to its proper place near the top of the internet value chain.

So I find myself in the unaccustomed position of agreeing with the CJEU on one of their copyright rulings. It won’t last.

About that photo on Facebook… we’re blaming the wrong people

Not long ago there was an eruption of anger and indignation about Facebook’s repeated censorship of Nick Ut’s upsetting and famous picture of Phan Thi Kim Phuc running from napalm in Vietnam.

The thing that surprised me about it wasn’t what Facebook did, but that news organisations went to the trouble of inviting them to do it. The picture was published, by the publisher, on their Facebook page. It didn’t get there by accident.

The fear that Facebook’s domination of access to news is inevitable becomes a self-fulfilling prophecy if news publishers keep acting against their own editorial and commercial interests.

Any editor who thinks the answer is for Facebook to hire more editors and start to do their jobs for them is surely looking in the wrong direction. Instead of asking someone else to do their job, surely they should be doing it themselves.

Facebook’s domination isn’t inevitable

Much of the anguished debate about the Nick Ut picture focused on the inevitability of Facebook’s dominance over the media, their policies, the way they apply them and righteous indignation about their lack of editorial judgement in the face of a self-evidently historic and editorially important photograph.

Facebook’s policies (or, as you might call them in a rather old fashioned way, their Style Guide) are algorithmic and might not be to the taste of every editor.

They’re certainly not to my taste. That’s not unusual. Some newspapers in the UK, for instance, are perfectly happy to publish even the very most taboo of swear words, others will avoid them or use asterisks.

There is no universal rulebook of editorial standards and no actual news product is edited by a robot.

The problem with Facebook’s rules is that they apply them, after the fact, to other people’s editorial judgements and, in fact, to everything everyone publishes on Facebook.

There’s a simple answer to this: don’t let them.

Publish your work on a platform you control. Your web site, for example. Don’t just give in to the inevitability that Facebook will take over the world, because to do so means giving up not just your editorial control and integrity, but also your business.

But, if you have contractually and morally decided to cede control to Facebook, don’t be surprised when they behave in the way they do.

Why does Facebook do what it does?

Facebook, because of its nature, is never going to be a good editor. Whatever you might think about it, they are trying to oversee all the content posted by everyone by applying a single set of rules. The fact that everyone in the world doesn’t agree with them is not very surprising.

Even when humans are involved, for instance in censoring photos, they are driven by calculations not value judgements and they are not likely to be career journalists with decades of experience in making editorial judgements.

The rules for nakedness seem to be something like this:

Not naked: OK.
Naked: bad – remove (NB male nipples OK, female nipples not OK).
Naked child: ultra-mega-very-bad – remove. No exceptions.
Naked child in important news story: still ultra-mega-very-bad – remove.
Naked child in important news story now being re-posted and protested by thousands of people: still ultra-mega-very-bad – remove.
Context: irrelevant – ignore.
Protests by non-Facebookian humans: irrelevant – ignore.
Protests by human non-American Prime Ministers: irrelevant – ignore.

This is not surprising. Facebook as a machine is not intelligent, it doesn’t have emotions, experience or judgement, it cannot understand context except in the most simplistic terms. It is programmed for efficiency which means ambiguity is not an option.

That’s why even ‘intelligent’ machines are frequently moronic in their output. We are all aware of this, we all put up with it all the time. It’s also why the work of humans is so much more satisfying.

But they backed down this time…

There was a loud and widespread scream from the internet about this one.

Facebook backed down. Of course they did, as soon as a sufficiently senior and sensible human Facebookian got involved. A cathartic yelp of victory has been heard and small celebrations have ensued among those grateful for a rare event worthy of celebrating.

Not worth celebrating at all is the fact that intelligent, experienced editors have allowed the Facebook machine to stand between them and their readers, censoring as it goes.

The madness of Instant Articles

This isn’t an accident. It isn’t just because of users adding links into their news feeds.

Editors and publishers have been actively participating in a Facebook product called Instant Articles.

Rather than linking out to the publishers’ sites, instead their content is served by Facebook within the Facebook platform.

As we have learned from this whole episode, there are downsides to this when the Facebook editorial algorithm makes moronic decisions.

There are other downsides too – Facebook’s algorithms also decide when and where to feature the content and they have allegedly been reducing its visibility in people’s newsfeeds. Only a proportion of the content submitted is widely viewable. So another layer of editorial interference is lurking.

Also, obviously, users aren’t looking at the publishers’ products. They’re looking at little slices of them, extracted and shown out of the context of everything else. Perhaps this is inevitable on the internet where sharing of stories is ubiquitous, but is it really a good thing? Should publishers actively hand over control of their users’ experience as well as putting up with its inevitable dilution? Seems odd to me.

Lastly, according to the publishers I have spoken to, there’s absolutely no commercial upside at all. They don’t make any more money. Given that they make precious little money anyway when someone views a page, it seems odd to give up so much in return for so little.

So what are the upsides?

Well. Instant Articles load faster, especially on mobiles.

As far as I can tell, from what I have been told, that’s kind of it. Well… you stay “visible” and “relevant” and your product “responds to the changing needs of your users” and various other things which I might rudely summarise as “we’re not doing nothing”. But none of it helps the bottom line or the product.

It’s just weird that editors and publishers are colluding with this.

It’s not Facebook’s fault

Blaming Facebook for being what it is, demanding it change into what news organisations are, does nothing other than offer a comforting distraction from the reality of how this came about. And it isn’t Facebook’s fault.

Publishers need to acknowledge that not-doing-nothing isn’t the same as having a strategy, and that doing things which have costs but no benefits is not a sensible way of not-doing-nothing.

Running with the herd and trying not to break away is comforting but so far it hasn’t worked out too well.

And we wonder why newspapers are in trouble…

Blocking the blockers is a waste of a good crisis

Back when my day job involved worrying about such things, I didn’t much like the online advertising market. As a publisher, it’s quite hard to love.

Advertising works for publishers when they can charge a premium price for their ads, establish and defend a meaningful market share, turn a larger audience into higher yields and more revenue. None of these things are easy, or even possible, for most publishers in the online advertising market.

That’s why huge sites with massive audiences (by publishing standards anyway) are unable to be profitable, and it’s why cutting costs is better than investing in product.

Enter the ad-blocker

Recently, ad blocking has entered the mainstream thanks to players like Apple and Three, and everyone is up in arms. The publishing industry is crying foul, demanding that something be done, predicting dire consequences if they are cut off from their income source.

Now I’m not defending or celebrating ad-blocking. Some of it does indeed, as John Whittingdale said, seem like a protection racket.

But from the point of view of a publisher shouldn’t it be more a call-to-action than a call-to-whinge?

The truth is that the advertising income stream has never been enough to sustain them, and the situation has got worse not better over time. Ad blocking potentially accelerates but doesn’t fundamentally change the ultimate consequence of this.

So now, surely, is the time to start to focus industry thinking not on how to preserve the starvation regimen offered by online advertising, but on how to move past it? To tap into the much richer, much bigger, much fairer and more sustainable opportunities offered by the content itself rather than the annoying, uncontrollable and, as more and more users now know, block-able ads around the edges of it.

Can’t pay, won’t pay

Ah, I can hear the chorus of groans already.

“Consumers won’t pay” it rumbles.

“You can’t compete with free, subscriptions don’t work, paywalls go against the grain of the internet, micro-payments are impossible”.

It’s as if people actually take comfort from defeatist aphorisms, as an alternative to actually trying to change anything. It certainly makes life easier: if everybody expects the worst then it’s hard to disappoint them.

But it’s nonsense, and it’s feeble, and it leaves the cultural and creative industries, together many times bigger than the advertising market, marooned by their own despair.

Perhaps one of the reasons people won’t pay, is because they can’t pay.

I don’t mean they can’t afford it. I mean there’s no simple way of handing over money. They literally can’t pay. That’s at least partly why they won’t.

Obviously, even if they could, they would have to want to – the challenge would be to make products good enough and to price them right.

That’s a creative challenge: know your user, make something that strongly appeals to them, charge a price they’re willing to pay without much thought. The same challenge which defines, effectively, the whole of the creative sector whether making films, music, books, newspapers, photography, games or anything else.

Can every page pay?

OK stop for a moment before you start groaning. Think about it. Don’t get defeated by the frustration of the years of trying to make micro-payments and subscriptions work. Look past that.

Imagine a world where every time your creative product or its content gets consumed you benefit. On terms which you have set. Imagine if every page could pay. What would it do to products, to revenues, to relationships with users?

When I ask content producers this question, most of them get quite excited. They see a world in which their focus becomes clearer. Pleasing their readers, viewers, listeners and players rather than the robots which deliver people to ad-serving systems. More consumption. More revenue. More investment in product leading to more popularity. What management consultants call a virtuous circle.

“Be popular” is the goal. The more popular, the more successful. Every page pays, predictably. Investing in creativity and creative products becomes rational again, innovating to better serve your audience becomes a key imperative, beating your competition drives the urgent need to keep evolving.

But what about the masters in the middle?

Of course there are lots of intermediaries on the internet, sitting in various places in between the content owners and the users. Search engines, ISPs, ad networks, mobile companies, aggregators, countless others.

Very often they’re the gatekeepers as well. To get to users you have to go through them, and on the way through they limit the rewards you can hope for.

But they’re also the people who can provide an answer to the payment conundrum. They are retailers. Many of them are already collecting money from your users for various things.

Just as newspaper publishers never tried to collect 25p individually from every person buying their papers, but instead got newsagents to do it in return for a share of the money, the solution to the payment problem might lie in getting other people to do it for you. As long as what’s good for them is also good for you, and vice versa, there are lots of reasons to work together.
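The newsagent analogy can be put into rough numbers. A minimal sketch, with entirely made-up figures (the per-page ad yield, the per-page price and the retailer’s share are all my own illustrative assumptions, not taken from anywhere in this piece), of how a revenue-shared per-page payment might compare with ad income:

```python
# Hypothetical figures, for illustration only.
AD_REVENUE_PER_PAGE = 0.003   # roughly what a $3 CPM display ad yields per page view
PRICE_PER_PAGE = 0.05         # what a reader might pay per page
RETAILER_SHARE = 0.30         # cut kept by the intermediary (the "newsagent")

def publisher_revenue(pages, price_per_page, retailer_share):
    """Revenue reaching the publisher after the intermediary's cut."""
    return pages * price_per_page * (1 - retailer_share)

ads = 1_000_000 * AD_REVENUE_PER_PAGE
paid = publisher_revenue(1_000_000, PRICE_PER_PAGE, RETAILER_SHARE)
print(f"ads:  ${ads:,.0f} per million pages")   # $3,000
print(f"paid: ${paid:,.0f} per million pages")  # $35,000
```

Even after giving the retailer a generous share, the publisher in this sketch ends up an order of magnitude better off per page than under advertising alone – which is the whole point of aligning incentives with the people who already collect money from users.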

Aligning incentives

The key, as the creative sector has known for centuries, is to have control over the terms under which you offer your work. The law has given creators this control ever since the advent of copyright.

Making this possible requires some new technical plumbing, to allow copyright to work as efficiently as advertising and websites themselves.

After that it’s down to the innovators, the creative companies and anyone who doesn’t want to rely on a failing ad-driven business model, to come up with a much more rapid evolution and new ways to please consumers and share rewards.

Since what we’re talking about here is supplementing ad revenues, not replacing them, this doesn’t need to involve huge controversy. For the creative industries to win, the ad industry doesn’t have to lose (they’re doing that on their own anyway). New opportunity is something everyone can move towards.

Never waste a good crisis

What’s needed is a spark to trigger all this movement. I think ad-blocking might be it. Something to move away from, a failing model for ad-based revenues. Projects like The Copyright Hub and the Linked Content Coalition are creating the basis for building a new value layer for the internet. This will lead to the emergence of new players who will make it easier for everyone to find new sources of revenue from users and others.

Who will these new players be?

Watch this space.

 

 
