Category: Copyright

The free flow of hypocrisy

I’ve been hearing this phrase “the free flow of information” a lot lately. It’s been in the context of the “Publishers Right” and it is usually preceded by the phrase “will restrict”.

The heart of the concern seems to be the idea that if permission is needed before digital publications can be exploited by others, it could limit, for example, the ways in which those works can be indexed and discovered in search engines.

The argument seems to be that restricting access to “information”, imposing conditions on its use or treating some users, like automated machines, differently from others, like humans, is not just improper but sinister and shouldn’t be allowed.

Google are a leading voice in this argument, so let’s have a look at how they work.

Google’s mission “to organize the world’s information and make it universally accessible and useful” is pretty much the ultimate expression of the ideals of free information advocates. For them to make something universally accessible it has to be completely unrestricted. But how unrestricted and accessible is Google itself?

You might not know it, but you can’t use Google without their permission, or without giving them a form of payment. If you’re a Google-like machine, you can’t access it at all. The universe of those who can access Google is rather less all-encompassing than their mission suggests.

Try this. Download a new web browser, install it, and don’t copy across any settings or cookies or anything. Then go to Google – don’t log in.

You’ll see something like this:

[Screenshot: the Google homepage with its privacy reminder, 26 December 2015]

A little privacy reminder about Google’s (increasingly extensive) privacy policy sits at the top. If you click through you’ll be asked to click to show you accept the policy. Nice of them to go to the effort of making sure you’re aware of it, especially because it gives them pretty extensive rights to gather and exploit information about you.

This is how they pay for the free services they offer – they take something valuable from you in return and use it to make money for themselves. It’s a form of payment.

And if you don’t click to accept it, eventually you’ll see something like this:

[Screenshot: Google’s prompt requiring acceptance of the privacy policy, 8 January 2016]

You are actually not allowed to use Google until you have agreed explicitly to give them payment in the form of the data they want to gather and use.

So: using Google can only be done with their permission and in return for payment in the form of data.

There’s no technical reason for Google’s restrictions. They could offer a search service without gathering any data about users at all (and other services do). Their reasons for these restrictions are obviously commercial: they need to make money and this is how they do it.

Whether or not you consider this to be reasonable (after all, every business needs to be able to make money), it doesn’t seem to sit very comfortably with their mission to make “all the world’s information… universally accessible”.

Nor, by the way, does their blanket ban on “automated traffic” using their services, which includes “robot, computer program, automated service, or search scraper” traffic. They ban anyone who does what Google does from accessing the information which they have gathered from others using automated traffic. “Universal access” in Google’s world doesn’t apply to services like Google – it is a service for humans only.

Again, you might think this is reasonable, but contrasting it with their demand that their machine should be allowed to access other people’s services without restriction or permission is interesting.

Google insists that everyone – human and machine – needs their permission (and needs to pay their price) before accessing and using their service. But they oppose any law which might require Google to similarly obtain permission or pay a price when they access other people’s services.

It’s absurd that there should be such a strong lobby against such an obviously reasonable and uncontroversial thing as the Publishers Right.

Google is a company which vies to be the world’s largest, and which depends for its revenues on its ability to impose terms, restrictions and forms of payment on its users. It’s hypocritical of them to object to the idea that other companies should be allowed to do the same.

The objections to the Publishers Right, and copyright more generally, are far too often the self-interest of mega-rich companies posing as the public interest. The credulity of politicians has, thankfully, diminished in recent years and they are more inclined to regard such lobbying sceptically.

There is no conflict between the need of media companies to have business models which allow them to stay in business and the “free flow” of information. Nor is there any conflict in their desire to distinguish between human users and machine-based exploiters of their content.

For information to flow freely, those who create it need to be able to operate on a level playing field with those who exploit it, and need to be able to come to agreements with them about the terms on which they do so. To suggest otherwise, even in the most libertarian of language, is absurd.

The European Commission’s manifesto for The Copyright Hub

As you may know, I stepped down from The Copyright Hub earlier this year, two-and-a-half years into my planned one year tenure.

The Hub is a fantastic, exhilarating, project which stands to create massive and positive change for creators. That is why it has attracted the wide-ranging support from an enormously diverse group of people, organisations, countries and businesses which you’ll see on the website. Among many other positive traits, The Copyright Hub is notable for being so far-sighted in anticipating the future needs of the internet when it comes to copyright.

I was reminded of this earlier this week, when I was taking part in a panel discussion about the new copyright package being proposed by the European Commission. It reads, in part, as if they wrote the Hub’s new manifesto.

I have rather neglected to pay proper attention to EU happenings lately, because my head is down and I am totally focussed on a rather wonderful and exciting new business I’m helping to start.

But when I looked up yesterday and paid attention to the briefing which preceded our panel session, I was struck by how the proposals – particularly those on the new Publishers Right – could have been written with The Copyright Hub in mind.

The nub of it is that more people, in future, will unambiguously need permission before they use other people’s work. Put the debate about the principle of this to one side for a moment and what’s left is a practical problem. How to identify who permission is needed from. How to obtain it in an efficient way.

The Copyright Hub was conceived in anticipation of these needs. It connects content to its rightsholder, and automates the process of seeking and granting permission to use it.
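As a rough sketch of what that automation could look like in practice, the snippet below imagines a simple rights registry that resolves a content identifier to its rightsholder and then requests permission for a specific use. The endpoint, identifier scheme and field names are all invented for illustration – this is not a description of the Hub’s actual interfaces.

```python
# Illustrative only: a hypothetical rights registry, not The Copyright Hub's real API.
import json
import urllib.request

REGISTRY = "https://rights-registry.example.org"  # invented endpoint


def find_rightsholder(content_id: str) -> dict:
    """Resolve a content identifier to its rightsholder and licence offers."""
    with urllib.request.urlopen(f"{REGISTRY}/works/{content_id}") as resp:
        return json.load(resp)


def request_permission(content_id: str, use: str) -> dict:
    """Ask, automatically, for a licence to make a particular use of the work."""
    body = json.dumps({"work": content_id, "use": use}).encode()
    req = urllib.request.Request(
        f"{REGISTRY}/licences",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # e.g. {"granted": true, "terms": {...}}


if __name__ == "__main__":
    work_id = "example-publisher:article-1234"  # any agreed identifier scheme
    work = find_rightsholder(work_id)
    print("Rightsholder:", work.get("rightsholder"))
    licence = request_permission(work_id, use="index-and-display-snippet")
    print("Licence granted:", licence.get("granted"))
```

The point is not the particular calls, but that seeking and granting permission becomes a background exchange between machines rather than a manual negotiation.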

Taken together with the recent CJEU ruling in GS Media, which creates new obligations on services which link to infringing material to check its copyright status, the need for the Hub’s services has never been greater.

Many of the concerns and objections I heard voiced at the session yesterday were practical.

“How will sites know if content is infringing?”

“How can permission be obtained in practice?”

These are questions The Copyright Hub was conceived to answer – and when the answer becomes a matter of a simple, background, technical process it will usher in a new era of capability and value creation for the internet.

The wording of the proposed legislation is also an improvement on the past. It avoids locking the law to the current state of technology – a sin committed by the safe harbour provisions of the E-commerce Directive. That directive addressed an issue which, at the time, was impossible to imagine being solved technologically. As the technology improved, developing from impossible to tricky to trivial, the law stood still and created a gigantic legal loophole through which businesses worth billions of dollars were driven and built, at the expense of rights owners.

The proposed new law doesn’t seem to make that mistake. It uses words like “proportionate”, “reasonable” and “adequate” – all terms whose interpretation will change as technology improves.

So it sets a challenge which I hope supporters of projects like The Copyright Hub and the Linked Content Coalition will take up with relish. How quickly can they deliver the open technology needed to make what is tricky today – identifying, verifying and agreeing rights automatically – trivial tomorrow?

Doing that the right way is hard. The Copyright Hub has not taken the easy route and has determinedly pursued an open approach to delivering its technology and governance. This is, of course, the right thing to do but technology doesn’t build itself and finding the resources needed, when there will be no direct commercial return to the Hub, is no small challenge.

The progress the Hub has made despite this has been encouraging, if slower on the technical front than I (and I think others) were hoping. The demand for the Hub has been consistently high, not just in the UK. The new legislative proposals will only increase it.

To be better able to meet that demand, the Hub needs more resources to build and manage technology for itself and its stakeholders. Few projects are lucky enough to start with an unpaid, publicly funded partner to help, as the Hub was with Digital Catapult, but such support can never last forever.

If anyone has any doubt about the rationale or opportunity of the Hub, a quick glance at the Commission’s proposed new copyright reforms should lay it to rest.

The Commission is saying that a more permissioned internet is coming. Those who have had a free ride are going to have their freedoms curtailed a little bit, and will need to ask first. Since the seeking and giving of permission has been the foundation of the whole creative economy, the importance of this is profound.

It will lead to value creation and opportunities that extend well beyond the creative sector. But that growth will be, in part, limited by the state of the art of technology for identifying rights and negotiating permission. A manual, unreliable, untrustworthy process won’t be “reasonable”, “proportionate” or “adequate”.

So the impact that these changes can deliver in practice is in the hands of the creative sector and projects like The Copyright Hub and the Linked Content Coalition which they have sponsored with such foresight.

I thought when I started working on the Hub that the long haul towards an improving legislative environment online was going to be an awful lot longer. I imagined that we would have to build, implement and prove the technology in advance of being able to attract the attention of the law makers.

Despite some people thinking I was a wild optimist, it seems I was not nearly optimistic enough. The most frustrating moments working on The Copyright Hub came when dealing with people who just couldn’t understand why it mattered or would help, who didn’t believe the status quo would ever change.

Now is a moment for all of them to share my renewed, buoyant optimism that the status quo isn’t “locked in”. Legislative, as well as technological, change is not just possible but imminent – no doubt influenced by the great strides already taken by the Hub and other projects.

It would be an awful shame if the technology, having had such a great head-start, was overtaken by the legislation. Or the UK by other countries.

So… chequebooks out, everybody! If you care about the future health of the creative sector, the Hub is a huge asset. It needs your money and your work to implement its vision. This opportunity is bigger and sooner than we could ever have hoped.

Support The Copyright Hub! Its time is now…

The CJEU goes bonkers again…?

I am very much not a fan of the European Court of Justice and their whimsical way of making up laws which bear little relation to anything actually legislated.

Last week they were at it again, “banning” open wifi hotspots because they make copyright infringement too easy. The court said that if users need a password, and hotspot owners record their identity, copyright infringement will be reduced.

I am wondering if this time they accidentally got it right.

I’ve written before about the problem with safe harbour laws which protect service providers on the internet by absolving them of any liability for the users of their services.

The intention of this was understandable – why should someone be liable for something they cannot have any knowledge of – like copyright infringement, for example?

But the effect was catastrophic. It led to the absurd fandango of “notice and takedown” whereby copyright owners have to try to police the whole internet and then send notices to service providers to remove content.

The value of this, almost literal, get-out-of-jail-free card is shown in the fact that Google claims, at the time of writing, to have removed 1.79Bn URLs from search in response to these notices. This is a gigantic undertaking yet they still prefer this way of working to anything more sensible which might prevent infringing content appearing in the first place.

The problem with safe harbours for me has always been that they only do half the job. Sure, fine, fair enough, don’t make service providers liable for something they didn’t do (although in other areas of the law – nightclubs for example – service providers have exactly this liability). The liability, in copyright safe harbour regimes, is firmly with the person who did the bad thing in question.

Unfortunately, although service providers can use the law to put their hands out and say “not my fault, guv”, they are usually unable to point to the person whose fault it is – their customer, the person to whom they provided a service and who used it to do something illegal and who is liable in law for their actions. Even if they can, they will frequently make it as difficult as possible to discover.

So the safe harbour, while trying to limit a risk (which, at the time the law was written, might have seemed unmanageable – although current technology makes it a simple matter), actually creates a thick shield behind which pretty well anyone can do pretty much any infringing they like, safe in the knowledge that there will, with vanishingly few exceptions, be no consequences at all. In practice the worst outcome will be that the infringing content gets removed.

Copyright infringement is thus a zero cost, zero consequence activity on the internet thanks to safe harbour laws.

Many businesses have been founded to take advantage of this loophole and many fortunes have been made – just not by copyright holders who provide the raw materials.

I’ve always thought that safe harbour laws could be hugely improved if, in order to get the legal protection from liability, the service provider needs to have made at least some effort to be able to identify the person who is actually liable – the user. In return for immunity, they would have to be able to lift the anonymity of the alleged wrong-doer. Again, not unprecedented.

And, as far as wifi hotspots are concerned anyway, the CJEU seems to agree.

The court might have come up with a rather clumsy and faffy way of doing it but this is a change which, if applied more broadly to the copyright safe harbour, would go a very long way to re-balancing the internet and restoring creativity to its proper place near the top of the internet value chain.

So I find myself in the unaccustomed position of agreeing with the CJEU on one of their copyright rulings. It won’t last.

About that photo on Facebook… we’re blaming the wrong people

Not long ago there was an eruption of anger and indignation about Facebook’s repeated censorship of Nick Ut’s upsetting and famous picture of Phan Thi Kim Phuc running from napalm in Vietnam.

The thing that surprised me about it wasn’t what Facebook did, but that news organisations went to the trouble of inviting them to do it. The picture was published, by the publisher, on their Facebook page. It didn’t get there by accident.

The fear that Facebook’s domination of access to news is inevitable becomes a self-fulfilling prophecy if news publishers keep acting against their own editorial and commercial interests.

Any editor who thinks the answer is for Facebook to hire more editors and start to do their jobs for them is surely looking in the wrong direction. Instead of asking someone else to do their job surely they should be doing it themselves.

Facebook’s domination isn’t inevitable

Much of the anguished debate about the Nick Ut picture focused on the inevitability of Facebook’s dominance over the media, their policies, the way they apply them and righteous indignation about their lack of editorial judgement in the face of a self-evidently historic and editorially important photograph.

Facebook’s policies (or, as you might call them in a rather old fashioned way, their Style Guide) are algorithmic and might not be to the taste of every editor.

They’re certainly not to my taste. That’s not unusual. Some newspapers in the UK, for instance, are perfectly happy to publish even the very most taboo of swear words; others will avoid them or use asterisks.

There is no universal rulebook of editorial standards and no actual news product is edited by a robot.

The problem with Facebook’s rules is that they apply them, after the fact, to other people’s editorial judgements and, in fact, to everything everyone publishes on Facebook.

There’s a simple answer to this: don’t let them.

Publish your work on a platform you control. Your web site, for example. Don’t just give in to the inevitability that Facebook will take over the world, because to do so means giving up not just your editorial control and integrity, but also your business.

But, if you have contractually and morally decided to cede control to Facebook, don’t be surprised when they behave in the way they do.

Why does Facebook do what it does?

Facebook, because of its nature, is never going to be a good editor. Whatever you might think about it, they are trying to oversee all the content posted by everyone by applying a single set of rules. The fact that everyone in the world doesn’t agree with them is not very surprising.

Even when humans are involved, for instance in censoring photos, they are driven by calculations not value judgements and they are not likely to be career journalists with decades of experience in making editorial judgements.

The rules for nakedness seem to be something like this:

Not naked: OK.
Naked: bad – remove (NB male nipples OK, female nipples not OK).
Naked child: ultra-mega-very-bad – remove. No exceptions.
Naked child in important news story: still ultra-mega-very-bad – remove.
Naked child in important news story now being re-posted and protested by thousands of people: still ultra-mega-very-bad – remove.
Context: irrelevant – ignore.
Protests by non-Facebookian humans: irrelevant – ignore.
Protests by human non-American Prime Ministers: irrelevant – ignore.

This is not surprising. Facebook as a machine is not intelligent, it doesn’t have emotions, experience or judgement, it cannot understand context except in the most simplistic terms. It is programmed for efficiency which means ambiguity is not an option.

That’s why even ‘intelligent’ machines are frequently moronic in their output. We are all aware of this, we all put up with it all the time. It’s also why the work of humans is so much more satisfying.

But they backed down this time…

There was a loud and widespread scream from the internet about this one.

Facebook backed down. Of course they did, as soon as a sufficiently senior and sensible human Facebookian got involved. A cathartic yelp of victory has been heard and small celebrations have ensued among those grateful for a rare event worthy of celebrating.

Not worth celebrating at all is the fact that intelligent, experienced editors have allowed the Facebook machine to stand between them and their readers, censoring as it goes.

The madness of Instant Articles

This isn’t an accident. It isn’t just because of users adding links into their news feeds.

Editors and publishers have been actively participating in a Facebook product called Instant Articles.

Rather than linking out to the publishers’ sites, instead their content is served by Facebook within the Facebook platform.

As we have learned from this whole episode, there are downsides to this when the Facebook editorial algorithm makes moronic decisions.

There are other downsides too – Facebook’s algorithms also decide when and where to feature the content and they have allegedly been reducing its visibility in people’s newsfeeds. Only a proportion of the content submitted is widely viewable. So another layer of editorial interference is lurking.

Also, obviously, users aren’t looking at the publishers’ products. They’re looking at little slices of them, extracted and shown out of the context of everything else. Perhaps this is inevitable on the internet where sharing of stories is ubiquitous, but is it really a good thing? Should publishers actively hand over control of their users’ experience as well as putting up with its inevitable dilution? Seems odd to me.

Lastly, according to the publishers I have spoken to, there’s absolutely no commercial upside at all. They don’t make any more money. Given that they make precious little money anyway, when someone views a page, it seems odd to give up so much in return for so little.

So what are the upsides?

Well. Instant Articles load faster, especially on mobiles.

As far as I can tell, from what I have been told, that’s kind of it. Well… you stay “visible” and “relevant” and your product “responds to the changing needs of your users” and various other things which I might rudely summarise as “we’re not doing nothing”. But none of it helps the bottom line or the product.

It’s just weird that editors and publishers are colluding with this.

It’s not Facebook’s fault

Blaming Facebook for being what it is, demanding it change into what news organisations are, does nothing other than offer a comforting distraction from the reality of how this came about. And it isn’t Facebook’s fault.

Publishers need to acknowledge that not-doing-nothing isn’t the same as having a strategy and doing things which have costs but no benefits is not a sensible way of not-doing-nothing.

Running with the herd and trying not to break away is comforting but so far it hasn’t worked out too well.

And we wonder why newspapers are in trouble…

Blocking the blockers is a waste of a good crisis

Back when my day job involved worrying about such things, I didn’t much like the online advertising market. As a publisher, it’s quite hard to love.

Advertising works for publishers when they can charge a premium price for their ads, establish and defend a meaningful market share, turn a larger audience into higher yields and more revenue. None of these things are easy, or even possible, for most publishers in the online advertising market.

That’s why huge sites with massive audiences (by publishing standards anyway) are unable to be profitable, and it’s why cutting costs is better than investing in product.

Enter the ad-blocker

Recently, ad blocking has entered the mainstream thanks to players like Apple and Three, and everyone is up in arms. The publishing industry is crying foul, demanding that something be done, predicting dire consequences if they are cut off from their income source.

Now I’m not defending or celebrating ad-blocking. Some of it does indeed, as John Whittingdale said, seem like a protection racket.

But from the point of view of a publisher shouldn’t it be more a call-to-action than a call-to-whinge?

The truth is that the advertising income stream has never been enough to sustain them, and the situation has got worse not better over time. Ad blocking potentially accelerates but doesn’t fundamentally change the ultimate consequence of this.

So now, surely, is the time to start to focus industry thinking not on how to preserve the starvation regimen offered by online advertising, but on how to move past it? To tap into the much richer, much bigger, much fairer and more sustainable opportunities offered by the content itself rather than the annoying, uncontrollable and, as more and more users now know, block-able ads around the edges of it.

Can’t pay, won’t pay

Ah, I can hear the chorus of groans already.

“Consumers won’t pay” it rumbles.

“You can’t compete with free, subscriptions don’t work, paywalls go against the grain of the internet, micro-payments are impossible”.

It’s as if people actually take comfort from defeatist aphorisms, as an alternative to actually trying to change anything. It certainly makes life easier: if everybody expects the worst then it’s hard to disappoint them.

But it’s nonsense, and it’s feeble, and it leaves the cultural and creative industries, together many times bigger than the advertising market, marooned by their own despair.

Perhaps one of the reasons people won’t pay is because they can’t pay.

I don’t mean they can’t afford it. I mean there’s no simple way of handing over money. They literally can’t pay. That’s at least partly why they won’t.

Obviously, even if they could, they would have to want to – the challenge would be to make products good enough and to price them right.

That’s a creative challenge: know your user, make something that strongly appeals to them, charge a price they’re willing to pay without much thought. The same challenge which defines, effectively, the whole of the creative sector whether making films, music, books, newspapers, photography, games or anything else.

Can every page pay?

OK stop for a moment before you start groaning. Think about it. Don’t get defeated by the frustration of the years of trying to make micro-payments and subscriptions work. Look past that.

Imagine a world where every time your creative product or its content gets consumed you benefit. On terms which you have set. Imagine if every page could pay. What would it do to products, to revenues, to relationships with users?

When I ask content producers this question, most of them get quite excited. They see a world in which their focus becomes clearer. Pleasing their readers, viewers, listeners and players rather than the robots which deliver people to ad-serving systems. More consumption. More revenue. More investment in product leading to more popularity. What management consultants call a virtuous circle.

“Be popular” is the goal. The more popular, the more successful. Every page pays, predictably. Investing in creativity and creative products becomes rational again, innovating to better serve your audience becomes a key imperative, beating your competition drives the urgent need to keep evolving.

But what about the masters in the middle?

Of course there are lots of intermediaries on the internet, sitting in various places in between the content owners and the users. Search engines, ISPs, ad networks, mobile companies, aggregators, countless others.

Very often they’re the gatekeepers as well. To get to users you have to go through them, and on the way through they limit the rewards you can hope for.

But they’re also the people who can provide an answer to the payment conundrum. They are retailers. Many of them are already collecting money from your users for various things.

Just as newspaper publishers never tried to collect 25p individually from every person buying their papers, but instead got newsagents to do it in return for a share of the money, the solution to the payment problem might lie in getting other people to do it for you. As long as what’s good for them is also good for you, and vice versa, there are lots of reasons to work together.

Aligning incentives

The key, as the creative sector has known for centuries, is to have control over the terms under which you offer your work. The law has given creators this control ever since the advent of copyright.

Making this possible requires some new technical plumbing, to allow copyright to work as efficiently as advertising and websites themselves.
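To make “technical plumbing” slightly more concrete, here is a minimal, hypothetical sketch of the sort of machine-readable rights declaration a publisher might attach to a page, and the check an intermediary could run before reusing it. The schema, field names and prices are assumptions made up for this example, not an existing standard.

```python
# Illustrative only: an invented machine-readable rights declaration and the
# check an intermediary might run. The schema is an assumption, not a standard.
from dataclasses import dataclass


@dataclass
class RightsDeclaration:
    work_id: str           # identifier the rightsholder controls
    rightsholder: str
    humans_may_read: bool  # free access for human readers
    machine_reuse: str     # "denied", "licence-required", or "open"
    price_per_view: float  # what an intermediary pays when it serves the page


def may_reuse(decl: RightsDeclaration, automated: bool) -> bool:
    """Is this use allowed without asking first?"""
    if not automated:
        return decl.humans_may_read
    return decl.machine_reuse == "open"


page = RightsDeclaration(
    work_id="example.org/articles/2016/01/some-story",
    rightsholder="Example News Ltd",
    humans_may_read=True,
    machine_reuse="licence-required",
    price_per_view=0.002,  # 0.2p per view, collected by the intermediary
)

print(may_reuse(page, automated=True))   # False -> ask first, settle at the declared rate
print(may_reuse(page, automated=False))  # True  -> free for human readers
```

The design point is simply that the terms are set by the creator and readable by machines, so intermediaries can collect and pass on payment as routinely as they serve ads today.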

After that it’s down to the innovators, the creative companies and anyone who doesn’t want to rely on a failing ad-driven business model, to come up with a much more rapid evolution and new ways to please consumers and share rewards.

Since what we’re talking about here is supplementing ad revenues, not replacing them, this doesn’t need to involve huge controversy. For the creative industries to win, the ad industry doesn’t have to lose (they’re doing that on their own anyway). New opportunity is something everyone can move towards.

Never waste a good crisis

What’s needed is a spark to trigger all this movement. I think ad-blocking might be it. Something to move away from, a failing model for ad-based revenues. Projects like The Copyright Hub and the Linked Content Coalition are creating the basis for building a new value layer for the internet. This will lead to the emergence of new players who will make it easier for everyone to find new sources of revenue from users and others.

Who will these new players be?

Watch this space.


Tis but a flesh wound

Much has been written in the last week or two about the death of newspapers. The announcement that the Independent will cease its print edition has prompted this hand-wringing and outpouring. The Independent’s hobbyist owner, Evgeny Lebedev, has offered up his own wisdom about the situation. In an interview with the Guardian he claims his rivals are “in denial” about print.

“I genuinely believe that the future is digital and that the industry is in denial…” he says, positioning himself as the pioneering leader of an otherwise moribund pack.

I chuckled when I read this, in the patronising way only a long-in-the-tooth, seen-it-all-before old dinosaur can. Evgeny is not to be ignored, and he has done some interesting and innovative things, but he could easily be accused of a certain amount of denial himself.

While print might be a rapidly declining market in both circulation and advertising terms, it remains the case that for certain newspapers print is still profitable.

Not, I agree, for everyone, and if you were the proprietor of a newspaper selling around 50,000 copies a day in a national newspaper market which manages to sell nearly 7m copies daily, carrying on would have started to seem irrational quite some time ago. Being in last place, with under 1%, isn’t exactly a glorious place to be in any market. In a declining market, less so. In a declining market with high overheads and reducing yields, less still.

So fine, Evgeny, shut down your print titles. Can’t imagine why you didn’t do it years ago (unless, of course, the reason why a mysteriously wealthy Russian former spy buys a failing British newspaper isn’t just because he’s interested in the bottom line).

But Evgeny’s digital dream is almost comical. For the Independent to have a future, digital or otherwise, it has to have an income. Ideally, unless it plans to rely on charity, it should have more income than expenditure. Which as countless newspapers have found, is a bit of a challenge in the digital domain.

It’s not like the Independent is the first to try this, but the precedents are not good. Going “digital only” is usually a prelude to going bust or carrying on in name only, trying to attract enough traffic to bring in a dribble of cash. That’s because “digital only” tends to mean, other than in niche areas, ad-funded.

Unfortunately ad-funded means a rather unreliable revenue stream, since increased traffic only converts a fraction of the increase into meaningful ad revenue. It also means a rather uncertain future because the online ad marketplace is one largely out of the control of any site which is seeking ad revenues. If you’re running to stand still, you’re doing rather well.

So success as an online newspaper is elusive. As so many have shown, it’s relatively easy to drive audiences to numbers which dwarf print circulations. What’s much harder is to convert those audiences into profitable or even meaningful revenue streams. So the usual approach is to try to cut costs, to acquire audience for the minimum possible investment, or keep spending and produce a fantastic product sustained by the hope that popularity will eventually deliver meaningful revenues. Just ask the Guardian and the Daily Mail how well that works out in practice.

Which means Evgeny’s high-minded promises to retain the services of high priced journalists and foreign bureaux are unlikely to survive the brutal reality of the digital only world for long. If he really believes that this transition, and the promised re-investment of freed-up capital, will lead to growth then he’s either talking about growing something other than profit, or he’s a fantasist.

The truth is that until the internet grows up enough to deliver meaningful, reliable revenue from online audiences, this sort of transition will continue to end in failure. Giving up print is simply giving up. For the Independent, which has struggled to be commercially viable for much of its existence, it might be finally succumbing to the inevitable.

It’s a very sad day because for all its failure the Independent has been a great newspaper, editorially proud and brave and with lots to admire. At least that’s what plenty of people I respect say. Personally I never read it much. Which I think probably explains the problem – I wasn’t alone.

Not enough people wanted to read the Independent. That’s why it failed. When the digital life support machine is finally turned off it will be the end of a painfully prolonged death. If Evgeny wants to invest in anything, in the meantime, he should try to make it something which might actually change the online marketplace into one where it’s possible for newspapers and other content businesses to thrive. That’s what I have been working on.

But that requires a strategic vision which extends beyond just brave and unrealistic rhetoric.

Farewell, the Independent. You were great. Rest in peace whenever you are finally allowed to.

Walls and words – the importance of language

YouTube put up a paywall. But they’re not calling it that. In various headlines culled from various search engines, YouTube Red is called “a subscription service”, an “ad-free music and video plan” and so on. Not a paywall.

I used to think about paid services (and how to make them pay) when I worked in newspapers. One thing I relentlessly hated was the word “paywall”. It was so negative and pejorative, a word which almost demanded to be used apologetically or disparagingly.

Despite this it was and is used even by the wordsmiths in the newspaper industry – and more or less universally – as a piece of jargon describing the desire to charge customers for creative products.

But not by YouTube, or those reporting their new business. Perhaps that’s a semantic issue – you can still get YouTube without paying if you’re willing to put up with ads – but it’s a telling reminder of how important language, sometimes almost subliminally, is to people’s perceptions.

It is even more telling when people talk about copyright.

Think about copyright. It’s a right which is automatically granted to everyone whenever they create something. The right confers on them a freedom to decide what happens to their work, which they can use however they want – to spread their work freely around, or to keep it private, or to use their work to form collaborations with others; and to agree to do whatever they want on whatever terms they want.

To put it another way, copyright is a freedom, granted to all creators. But that’s not how it’s talked about, even in the law. UK law talks about “the acts restricted by copyright”. It’s written in terms of what you can’t do, not what you can.

That might be a legal necessity, but it sets the tone for a lot of the debate about copyright. Because copyright is a restricting thing, it must be a negative thing. It stops people doing things, so the things it stops must be some sort of loss. It restricts and so somehow must be blocking someone else’s freedom.

This isn’t just annoying, it’s dangerous, because it has set the tone for debate. You can tell where the idea that copyright is just lent by society to creators (“for a limited time” as the US Constitution puts it – more negative language) comes from.

Copyright advocates are always fighting back against this presumption of negativity, always defending against these attacks rather than being able to talk of the huge and diverse cultural and economic benefits which copyright unlocks and the huge potential to do even more. But even they, in defending copyright, find themselves using the same negative language which feeds the negative attitudes they rail against. Copyright protects, it prevents, it is enforced.

We don’t talk about “till walls” in shops. We don’t talk about human rights in terms of the freedom they deny to one person in order to grant a more important freedom to someone else. And YouTube doesn’t talk about “paywalls” when they decide that their users might like to pay for a product made out of creative content.

So anyone who recognises the great, positive impact of copyright and its potential to deliver the real value of the internet in the coming third era of its evolution, should learn the lesson of positive language.

Talk about freedom, talk about reward, talk about copyright being for everyone, every creator, every person.

Talk about what copyright enables, not what it restricts.

Google seeks licences from rightsholders, world still turning

So, despite a campaign to prevent it, the Germans have changed their copyright law a little bit, raising the possibility that search engines might have to pay a fee for news content they access.

Google has responded by changing the rules of Google News in Germany to make it “opt in”.

In other words, before Google will crawl German news sites, they will obtain permission from the publisher.

A licence, you might call it. The thing copyright law always said you needed before copying and exploiting someone else’s content.

I have seen no mention of any basis for sitting down and, you know, actually negotiating the terms of the licence with Google, talking about what you want from them in return. I presume their opt-in is a “take or leave it” sort of thing. They don’t seem to be offering money, which we can all clearly see they couldn’t possibly afford with only $10bn profit last year on a pitiful $50bn turnover.

All the German news publishers can have, it seems, is their random share of the supposed 6 billion (mostly completely worthless) visits which Google News sends to publishers. I hope they find this offer resistible bearing in mind the minimal impact that being out of Google News is likely to have on their bottom line.

Still. Google seeking licences, eh? Asking permission? Admittedly, they only seem to be doing so to avoid being forced to share a tiny slice of their enormous wealth with those who provide their raw materials. A little tight-fisted perhaps.

But it shows that there might be new life in the old copyright dog yet. And new value, if a permission-based internet starts to creep slowly closer.

Unintended consequences

The government is concerned. Bad things are happening. The internet is a corrupting and subversive influence, tipping bad people over the edge into depravity and evil deeds. Something must be done.

So, ministers have summoned internet companies. A Code of Conduct is under consideration for ISPs. We need their help to stop the bad things.

Child porn, radicalising websites, other distasteful or criminal material need to be controlled. They are damaging our society and creating deviants and criminals.

The call for “internet companies” to step in to try to prevent this is understandable. After all, they stand between the bad people publishing this bad stuff and the innocent users who risk being corrupted, radicalised and deranged by what they see.

Responsible action by “internet companies” is needed to tame the wilder, antisocial extremes of behaviour online.

If you pause to think, you might wonder why these internet companies aren’t already doing something about it without being dragged in to see the headmaster. Everything on the internet has some sort of interaction with an “internet company”, whether it is hosting, uploading, streaming, aggregating or whatever. If their users are doing bad things, you would have thought they might want to do something about it. Why do they need to be summoned by the government to point out the obvious?

Well, one reason might be that there was a law passed more than a decade ago which specifically exempted them from any responsibility for what their users do and publish using their facilities.

In fact, because of the way the law is worded, it almost obliges internet companies not to check or have any awareness of what their users are doing. Once they are aware of illegal or infringing activity, they are obliged to act to stop it, but as long as they’re unaware they have no liability.

The law actually enshrines ignorance as a legal defence. Awareness is an expensive and risky business so actively policing and monitoring what people are publishing is an unappealing option. Ignorance is bliss. Profitable bliss.

The law in question is the European E-commerce Directive, which creates broad exemptions for “intermediaries” on the internet.

The rationale for that law is obvious but the effect it has had is perhaps less positive than was intended. I have written before about the catastrophic effects for copyright and the creative industries. The problems of criminal and deviant activities which are so exercising the government at the moment would seem, at the very least, not to be helped either.

Of course it’s not true to say that internet companies should be blamed for the bad things that other people do. It’s not their fault and it’s not entirely within their power to prevent it either.

However, when you have written a law which specifically disincentivises them from doing anything at all to exercise any control, and then find yourself calling them in for a meeting to ask them nicely if they wouldn’t mind making a little more effort, you should perhaps ask yourself whether you have got the balance quite right.

Pub landlords don’t make anybody get drunk but they can still lose their licence for allowing excessive drunkenness. Football clubs don’t organise riots but they can still be penalised for the bad behaviour of their fans. Where responsibility is at least partly shared, more responsible behaviour tends to emerge. Where someone is made immune from consequences, responsible behaviour is less likely to emerge.

The e-commerce directive is the unintended consequences law. Whatever protection it gave to the mewling, vulnerable, infant internet is no longer needed. The internet has grown up into a strapping teenager, able to stand on its own two feet and behave like a grown-up. It’s time it was given the responsibilities to go with the freedoms and profits.

The IPO hits back

AND SO BEHOLD, ladies and gentlemen, the hastily cobbled-together rebuttal that the Intellectual Property Office has put forward to defend their orphan works legislation, which became law last week. Andrew Orlowski has his own useful explainer here.

Since it is cobbled together and defensive, the IPO document is not very detailed and focuses mainly on photos (the source of the loudest criticism). It seems to be trying to debunk some of the criticisms which have been made of the new provisions on “orphan” works.

One thing it doesn’t do is admit to anything. Fundamentally it seems to be trying to say that all the negative accusations which have been made are wrong, and there’s nothing to be worried about because we’re only doing the same thing as the Canadians.

It doesn’t have the nerve to admit that the drafters of this legislation believe that there is a greater good to be served, and the price paid by the losers is outweighed by the benefits to whoever they think might be the winners. It’s not a document which suggests that the writers have the courage of their convictions; there seems to be a reluctance to even acknowledge the existence of potential losers from this.

The actual “myths” it addresses are curious. Some are things I haven’t seen mentioned. That’s not to suggest that they’re not real, but from my perspective anyway they’re not particularly high profile. For example it is very specific in rebutting a slightly obscure “myth” about sub licensing:

Myth: a company can take my work and then sub license it without my knowledge, approval or any payment
Fact: The licences to use an orphan work will not allow sub licensing.

Thanks for telling us – I’m sure whoever it was who was specifically worried about sub-licensing will be reassured.

Issues which have been mentioned more prominently and seem rather more substantial are left unmentioned.

Take the above example, and change it a tiny bit: “a company can take my work and then use it without my knowledge or approval”. The answer would surely have to be “Yes they could”. It might be a frequently asked question but you can’t call it a myth. Best to leave that one out then.

Similarly:
Myth: I will have to register my photos to claim copyright
Fact: Copyright will continue to be automatic and there is no need to register a work in order for it to enjoy copyright protection.

Up to a point, Lord Copper, but unregistered works (not just photos) will be harder to trace provenance for and so are more likely to be “orphaned”. So they will have copyright protection, in the sense that permission will be required to use them, but the permission won’t have to come from you. The works will have protection, but the creator won’t. Best gloss over that one too.

Some of the “myths” are answered with the aid of a crystal ball (emphasis added):

Myth: the Act is the Instagram Act
Fact: Given the steps that must be taken before an orphan work can be copied, such as the diligent search, verification of the search and payment of a going rate fee, it is unlikely that the scheme will be attractive in circumstances where a substitute photograph is available. The rate payable for an orphan work will not undercut non-orphans.

This is a very dodgy basis for policy making. Dismissing the possibility of negative outcomes by predicting that people will do something else instead is hardly reassuring. In my limited experience of the legislative process, measures intended to do one thing based on assumptions about human behaviour are the most likely to produce perverse outcomes (don’t get me started on the DMCA and e-commerce directive – although I have ranted about them on other occasions). Unintended consequences are almost inevitable when you’re not sure, or are unable to say, what consequences you were intending in the first place.

And I have no idea how there can be any basis for claiming that the rates paid will “not undercut” non-orphans. For that to be true there would have to be some sort of “going rate” but there isn’t. In my dealings with photographs I have dealt with a range of prices from zero to over £100,000 for a photo depending on the subject and the relevance it has at the moment of sale. A market price, you might say, agreed between willing buyers and sellers, using the “negotiation” method.

There is no objective way of evaluating worth because photography, like all creative output, isn’t a commodity despite the best efforts of some to make it so.

And on, and on. Debunking all the “myths” individually seems a bit unnecessary.

I have to admit I’m struggling a bit here.

If there has been an honest and open process – and since the law got passed we can safely assume there must have been – by which politicians have decided this, why be so coy?

If the decision was to remove some rights, in some circumstances, from creators because they judge there to be a greater good served by doing so, what’s the problem with just saying so, and telling us what that greater good might be? Sure, people like me might shout and loudly disagree but that’s in the nature of the democratic process.

Perhaps there’s some other reason why they’re being so shy.
