Permission to know what is going on, sir?

Google have released a new thingy, which allows people to save web content to their Google Drive (Google’s cloud-based storage thing), and do various things to it, like adding comments and annotations.

Nothing much unusual about that, and there are lots of similar things out there, like the previously discussed bo.lt. Not to mention every computer in the world, which copies and stores all the web content which comes its way.

The fact that it is so ordinary highlights one of the challenges of the internet for the future. Keeping and changing copies of content you happen to come across ought to be controversial; it’s not allowed by the law, after all. To the extent that it happens as a consequence of browsing, it is the result of a technical design decision made decades ago rather than a natural and inevitable consequence of digitisation.

In fact, digitisation could just as easily do the opposite. Rather than requiring multiple copies to exist, digitisation holds out the possibility of a single copy of something being accessible to everyone. To read a book, for example, you don’t need to hold your own copy in your hand any more. Everyone can read the same copy.

Of course, if we imagine the internet without the constant, prolific and uncontrolled copying which is embedded within its technical protocols, it’s a very different place. Fundamentally, creators would know what was happening to their content, because they would have some control over the master copy. Creative success would be better rewarded and so the internet economy would truly be a creative economy. The protocols and marketplaces of the internet would have sprung up around the idea of permission rather than presumption, and huge new opportunities would have arisen as a consequence.

Contrast that with what we have. Google’s product, like many others, works on the assumption that permission is entirely unnecessary. Making copies, changing work, sharing it with others: all just fine.

The usual circular justification for this (“it’s fine because everyone does it, it’s how the internet works, if it wasn’t fine the internet wouldn’t work”), itself absurd, doesn’t really cover it. After all, copyright hasn’t been abolished yet, and this activity isn’t embedded in the bandwidth-conserving early internet protocols which still underpin things like web browsing (and which were subsequently protected by various laws, new and old, or by generous interpretations of them).

Permission is still, in law, required to make a copy of something. Permission can be denied, perhaps because the proposed copying is damaging, or insufficiently rewarding, or just because someone decides to say “no”.

That concept doesn’t seem to feature in Google’s product. Google doesn’t say how a site owner can prevent people copying their content in this way, and I’m guessing that is because they can’t. Doubtless a tortuous DMCA takedown process can be used by anyone who discovers a copy they don’t like (complete with an intimidating, and copyright-infringing, report to Chilling Effects if the complaint goes to Google), but there is no ability to prevent it happening in the first place.

The idea of permission seems simply to be absent from ever larger parts of the internet.

Permission for search engines to create massive, permanent and complete databases of all the content they come across isn’t needed, they say. Part of their justification is that while they might keep huge databases, they only show “snippets” to users, which makes it OK. Oh, but the content owner can’t decide what snippet is shown either.

Permission before content is copied by things like bo.lt and this new Drive thingy isn’t needed either. Well, it is in law, but not in practice. Finding and removing copies is wildly impractical and expensive, and comes with considerable formalities and the promise of punishment by publication of your complaint, as if exercising your legal rights were somehow wrong.

And now the cloud is creating a new category of permissiveness, things done in private or for small audiences. It’s OK to create a service which facilitates illegal copying because it’s the equivalent of something which would otherwise have happened behind closed doors. As if the law stops at your front door.

This is all a bit scary: control is getting ever looser at the very moment the ability of technology to tame the chaos is increasing.

Why shouldn’t a creator want to know who has made copies of their work and why? Is it unreasonable to want to make sure corrections, changes or withdrawal of content actually takes effect? Why shouldn’t someone be able to refuse permission for a use they don’t like or which conflicts with something else they’re doing? Or keep something exclusive to themselves?

And, obviously, what’s wrong with wanting to make money from your work, and with wanting to stop people stealing it or making money from it without you?

It’s notable, as I have observed before, that the companies making the most money from content on the internet are those who invest the least in its creation. This seems to be getting worse, not better, even as the technical capabilities to reverse it increase.

The creation of new services which are entirely detached from the technical baggage of the past, and which actively exacerbate this catastrophic trend, shows, not for the first time, the contempt with which technology companies view content and creators. Offering users a convenience, without any consideration for reasonableness or the law, isn’t “disruptive” or “user-focussed”, regardless of whether users would be doing it anyway without your help. It’s amoral, self-interested and just wrong.

The internet’s potential to elevate creativity and creative success to the very pinnacle of our culture and economy is still there, but it remains under sustained attack. A permission-driven internet is an opportunity, not a threat.

How much longer we’ll be able to cling to the idea that it can happen, though, is questionable.

The French fancy making life hard for Google, but are they kidding themselves?

After the revelation that withdrawing from Google News seems to do little (if any) damage to publishers, Eric Schmidt has been in France trying to persuade the President not to allow news publishers to charge Google for including their content on Google News.

Google says such a move would “threaten the very existence” of Google. A feeble protest, and an overblown threat. As if anyone thinks such a thing could kill Google; but even if it could, why should anyone care? If Google isn’t smart enough to know how to innovate their way past challenges then maybe their days are numbered anyway.

Google also say that if the French persist with this they will just stop including French content in Google News. Based on the Brazilian experience that’s not much of a threat, since the French publishers probably wouldn’t feel much impact at all.

More intriguing is why the publishers don’t just withdraw their content rather than ask their government to get involved. They can do it any time they like; nobody forces them to be included in any Google search.

It might sound like a wizard wheeze to get the law changed to force payment, but there’s a flipside. If such a condition is imposed by law rather than negotiation, it could end up making Google’s access to their content a right, as long as payment is made.

I think control should stay with publishers: they should set terms and prices, and the government should provide the framework within which they do so and then stand well back.

As soon as governments start interfering, treating different categories of content differently, setting prices or terms or anything else, bad things happen. The market, such as it is, gets locked into a particular way of working, which destroys future innovation and competition. And this market hasn’t even got started yet; we shouldn’t force old age on it quite so soon.

I know the French (and German, and other) newspaper industries are desperate for revenues, and quick, easy ways of getting them are attractive, but this sort of thing is a last resort, traditionally reserved for when everything else has failed.

There are a few things to try first. Here are some suggestions for beleaguered newspapers trying to work out how to deal with search.

Be brave.

Withdraw your content from Google News. Maybe even from Google search (leave enough behind so people searching for your title can find it). And other search engines too. Since you get so little money from those sources, you’ll be risking little. And you can turn it back on easily enough.
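For what it’s worth, the mechanics are trivial. Here is a minimal sketch of the robots.txt involved, assuming Google’s standard crawler tokens; the /articles/ path is hypothetical, so adjust it to your own site structure:

```
# Opt out of Google News entirely: its crawler obeys the
# Googlebot-News token when a group exists for it.
User-agent: Googlebot-News
Disallow: /

# Stay in ordinary web search, but only via the home page, so
# people searching for your title can still find you.
User-agent: Googlebot
Allow: /$
Disallow: /articles/
```

Reversing it is just as easy: delete the lines and wait for the next crawl.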

If a search engine offers to make it worth your while to include your content in their product, negotiate with them. Do a deal which works for you – payment, helping sell subscriptions, ad share, whatever.

Tell your readers about it, why your content is in one place and not another. Point out the gap in the results they get from the search engines which don’t want to do a deal.

If none of them want to pay you, use them to deliver what you need, not what they need. Put enough stuff in them to attract the attention you need, and no more. Experiment with the best way to do that, and constantly refine your approach. Use other channels and relationships to attract users. Ask your users to pay, and work hard to make sure your product is worth paying for. Spend your SEO budget on other kinds of marketing, or just save it.

Just do something. Stand up for yourselves and the value of what you do.

Make a market.

Stop being so impotent and stop asking governments to load the dice in your favour.

The law you need is already there; just start using it.

No to Google News: common sense or suicide? – An update

Well that didn’t take long.

Apparently the Brazilian boycott of Google News has cost the papers just 5% of their traffic. They think that’s “a price worth paying”. I’d say so too. I don’t know how their revenues stack up but I would be surprised if the financial cost was much greater than zero.

As Techcrunch says, the source (the Brazilian newspaper association) isn’t exactly unbiased but if this number is correct then “Google could be in trouble”.

Some “facts” from the myth-busting Europeans

Here’s an odd press release put out by the European Commission. It contains what it says are ten “facts” about the media and content industries.

Strangely, the release doesn’t back up any of these “facts” with “evidence”, “research” or “sources” (other than a tiny link to this page which in turn puffs a report which says it aims to “offer a reliable set of data and analysis” about the media and content industries).

The recipient of the press release is presumably required to read the 167-page report for themselves in order to understand the basis for the “facts” it contains. Or, more likely, not bother and just accept the “facts” at face value.

That doesn’t mean they’re wrong, although one or two of them seem odd to me based on my own knowledge and experience.

Some seem depressingly plausible (“fact” 3: 70% of music sales are digital, but only 35% of revenues, source unexplained; “fact” 8: power has shifted from production of content to distribution) and should worry anyone who cares about creativity.

Others seem completely vague and strange (“fact” 6: In most cases [the decline of the printed press] started earlier due to changing patterns of consumption and may also be the result of a more competitive market with reduced profit margins and decreasing prices) and offer no actual facts to even attempt to verify.

This seems strange coming from the European Commission, an institution so sensitive about inaccurate or mis-interpreted “facts” about itself and its behaviour that its UK office has a prominent “Mythbusters” section on its website to try to rebut such stuff.

I wonder why they thought these “facts” would be helpful and what they are supposed to achieve. Heaven forfend that they might result in vague assertions being presented as actual “facts” as a consequence of having the Commission’s good name attached to them.

Perhaps they might care to update their website with sources of the data they have used to compile their “facts” and remove any which are, in reality, just opinions or assertions.

Saying no to Google News: common sense or suicide?

Brazilian newspapers have, en masse, withdrawn their content from Google News.

The response, not least from Google itself, is the usual mix of unhelpful and self-interested grandstanding. Google’s comparison of themselves to a cab driver bringing customers to a restaurant is particularly absurd, since most restaurants want customers who can pay, and aren’t interested in being flooded with people who can’t or won’t.

For me, the best thing about this move is it will create some real evidence which can be used in place of all the posturing and crystal-ball gazing which normally accompanies any discussion of the merits or otherwise of having content in Google search results.

The bald facts are pretty stark for most newspapers.

If they’re ad-funded, the majority of their revenue is generated by a relatively small proportion of their users. More traffic does not necessarily mean more money.

Traffic from Google, or Google News, is of varying value and in many cases a large proportion of it is of close to zero value to the newspaper. It neither delivers significant direct income from ad sales, because of excess inventory, nor turns the user into a loyal and frequent visitor. Often, users are satisfied with the content they see on Google News and don’t visit at all. Even when they do, their next move is straight back out of the site again – so-called “drive-by” visitors.

Last time I looked at actual logs, it was clear that the visitors most likely to become loyal were the ones who used your actual newspaper title in their search terms. In other words, having your home page in search engines was enough to target the most attractive potential visitors.
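To make that concrete, here is a rough sketch, in Python, of the sort of analysis involved. The log layout is entirely hypothetical (a CSV of search referrals recording the query used and a count of the visitor’s later return visits), and “example gazette” stands in for the paper’s title:

```
# Classify search referrals as navigational (the query contains the
# paper's title) or generic, then compare how often each kind of
# visitor came back. The log format here is hypothetical.
import csv
from collections import defaultdict

TITLE = "example gazette"  # hypothetical newspaper title

def loyalty_by_query_type(path):
    stats = defaultdict(lambda: {"visits": 0, "returned": 0})
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            kind = ("navigational"
                    if TITLE in row["search_query"].lower()
                    else "generic")
            stats[kind]["visits"] += 1
            if int(row["later_return_visits"]) > 0:
                stats[kind]["returned"] += 1
    for kind, s in sorted(stats.items()):
        rate = s["returned"] / s["visits"] if s["visits"] else 0.0
        print(f"{kind}: {s['visits']} visits, {rate:.1%} returned later")

loyalty_by_query_type("search_referrals.csv")
```

On the pattern described above, the “navigational” group is the one worth courting.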

So if your goal is to focus on those users who might become loyal and frequent, high-value, visitors (the actual paying restaurant customers, in Google’s analogy), you might want to experiment with trying to control who comes and who exploits your content. In the real world it is called marketing, knowing your customer, having a strategy for targeting the people you’re most interested in. Withdrawing from Google News, given that so little revenue accrues from it, is a low risk thing to do and will potentially deliver much valuable data to help separate fact from speculation.

If, into the bargain, Google values your content enough to really want it, then maybe they will sit down and discuss a deal. If not, nobody has lost anything, and once you have learned enough you can decide if, how and when to put some or all of your content back into search.

I look forward to seeing what happens, and am pleased to see someone actually do something instead of just endlessly talking about it.

What is a “temporary copy” and who cares?

An obscure and technical piece of copyright law has been stretched out of recognition by the aspirations of entrepreneurs. What is the “temporary copying exception” to copyright and what was it really supposed to do?

I sometimes wonder whether the history we are taught would be recognised by the people who were actually there.

Recently, perhaps due to age or perhaps due to the pace of change, I have heard people talking authoritatively about things I personally was involved with, and getting it completely wrong.

One such thing is “temporary copies”. This is a concept in copyright law which makes certain kinds of copying legal even when there is no explicit licence, and which featured in the NLA’s web licensing case with Meltwater. The claim that the legal exception for temporary copies covers paid-for media monitoring was rejected by the courts – and some people are outraged. Browsing has been rendered illegal, they say. The internet will break if the law stands.

Of course it’s fine to say that you think the law is wrong and should be changed – and equally fine for people like me to disagree. But to say that the law will destroy the internet is, aside from being self-evidently untrue, also a rather dishonest way of trying to post-rationalise poor business and legal judgements of the past.

The temptation of the entrepreneurs

The legal concept of temporary copies solves a lot of problems for entrepreneurs. Building a business which involves copying other people’s work, but without the need to get permission from them, makes otherwise impossible businesses viable. If you can make your idea fit within the scope of “temporary copies” you have a business; if you can’t, you don’t. Since some of the biggest businesses on the internet, such as Google, have been built on the idea of making copies without asking first, the prospect is tantalising and it’s easy to lull yourself into thinking you’re covered.

So it’s easy to see why the law on temporary copies has been subject to rather optimistic interpretation by those who need to stretch it to cover their business, and rather narrower interpretation by those who would rather avoid loopholes which reduce the control they have over their content. I come from the narrow interpretation side of that argument, and I actually had a small involvement in the process which led up to the law in question being enacted.

The rather less tantalising reality

But back to the law. What, according to it, are temporary copies?

Here’s what article 5.1 of the Copyright Directive (officially and pithily known as “Directive 2001/29/EC of the European Parliament and of the Council of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society”) says:

1. Temporary acts of reproduction referred to in Article 2, which are transient or incidental [and] an integral and essential part of a technological process and whose sole purpose is to enable:
(a) a transmission in a network between third parties by an intermediary, or
(b) a lawful use
of a work or other subject-matter to be made, and which have no independent economic significance, shall be exempted from the reproduction right provided for in Article 2.

This is the clause whose drafting I got peripherally involved with, the little bit of history I glimpsed in the making. It is transposed, more or less word-for-word, into section 28A of the UK Copyright Designs and Patents Act.

I guess it’s easy to see how, by simply glancing at this wording, you could persuade yourself that your service – for example your media monitoring service – might fall within it.

It’s a little harder if you look at the wording carefully. Even if you can persuade yourself that “transient or incidental” applies to you, and that because your business depends on technology anything you do is automatically “an integral and essential part of a technological process” (and I would say neither applies to a business like media monitoring), it’s kind of tricky to get past the overarching stipulation that your activity has “no independent economic significance” when your whole business depends on it.

But what was the intention of the law?

Even if you do manage to convince yourself it’s all OK looking at the text, the Directive provides some explanations in the form of recitals which are designed to help interpretation.

Recital 33 says:

The exclusive right of reproduction should be subject to an exception to allow certain acts of temporary reproduction, which are transient or incidental reproductions, forming an integral and essential part of a technological process and carried out for the sole purpose of enabling either efficient transmission in a network between third parties by an intermediary, or a lawful use of a work or other subject-matter to be made. The acts of reproduction concerned should have no separate economic value on their own. To the extent that they meet these conditions, this exception should include acts which enable browsing as well as acts of caching to take place, including those which enable transmission systems to function efficiently, provided that the intermediary does not modify the information and does not interfere with the lawful use of technology, widely recognised and used by industry, to obtain data on the use of the information. A use should be considered lawful where it is authorised by the rightholder or not restricted by law.

This makes things a little trickier. It’s more explicit that the exception is designed to cover only very low-level technical things rather than whole business processes. It reminds us that anything with “separate economic value on [its] own” isn’t covered. It specifically states that acts which enable browsing ARE included, making any hyperbolic claims that this law outlaws browsing rather feeble. And it points out that if something isn’t authorised then it isn’t covered either, which makes it hard to depend on this law if you haven’t asked permission and harder still if you have actually been asked to stop.

If you (or your lawyers) thought hard about it, you would probably conclude that a court is the last place you want to have this argument. But it has been forced into court anyway, and it’s hard to see how they could have reached any different conclusions, given that courts decide cases based on what the law actually says rather than what people wish it would say.

How did it get written that way?

As it happens, this particular clause was subject to an incredibly long-winded and arduous process of negotiation, discussion and debate before it was finalised. One thing it is not is ill-considered. My small part was on the side of content owners; I worked for a newspaper company and participated in some meetings on its behalf and on behalf of a media industry trade group.

The heart of the issue as I remember it was a tension between ISPs (mostly at the time dial-up providers and the large telcos who provided the bandwidth and interconnections for them) and content owners.

Content owners were keen to maintain control over content and ensure that the law didn’t create loopholes for infringement to take place.

The telcos were worried that the copies made as an unavoidable part of the technical process of sending data around the internet – such as in routers, where technically data is copied, forwarded and then instantly deleted – would be treated by the law as infringing copies just because they weren’t specifically licensed.

Everyone was sympathetic to each other’s concerns; the question was how to get it worded in such a way that it didn’t create huge loopholes or unintended barriers. In other words, turning a clear understanding about the intention into workable language. Equally, using language which was too specific to the technical issues of the day would quickly make the wording obsolescent, along with the technology it referred to, so it had to try to find generic language which would still be relevant in the future.

The important thing to note is that this clause was intended to address a very small and narrow issue. This is reflected in the wording. Read it again, but now think about data packets passing through routers and switches, or caches being created by ISPs, rather than media monitoring services being set up without the irritating need to ask permission to exploit people’s stuff.

It was a long time ago, but I still have some memories of the discussions around these phrases:

“transient or incidental”. This was really about the copies made in routers. Technically speaking, data is copied, but only for as long as needed for the router to function. The copy is really an irrelevance, fleeting in duration, and nobody ever sees it. It can also apply to cached copies which hang around a little longer but are not necessarily infringing (see below).

“an integral and essential part of a technological process”. There was a big discussion about caching here (among other things). At the time most internet access was dial-up and the biggest players provided services for free to users. To save money, some of them operated large caches of popular content, serving their users directly from the cache rather than fetching the content from the original site’s servers. This caused some consternation, because it meant the owners of the sites never knew their content had been accessed, couldn’t charge for ads, sometimes saw old content served instead of newer updates, and so on. However, there is a technical way to control caching, using settings in the (invisible) HTTP headers which are served along with content. As long as ISPs respected these settings (which were integral to the technological process of serving web pages) then their caches were fine; as soon as they started ignoring them, they weren’t. In other words, the site owner should always have control.
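Those settings still exist; the modern form is the Cache-Control response header. A sketch of the two ends of the spectrum, using standard HTTP/1.1 directives (the comments are added for illustration):

```
# Shared caches, such as an ISP's, must not store the page at all:
Cache-Control: no-store

# Or: caching is allowed, but a copy expires after five minutes and
# must be revalidated with the origin server before being re-served.
Cache-Control: public, max-age=300, must-revalidate
```

An ISP cache which honours these headers is behaving as “an integral and essential part of a technological process”; one which ignores them is not.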

“whose sole purpose is to enable a transmission in a network between third parties by an intermediary”. I email a file to you. The file goes from me to my ISP, my ISP to any number of routers operated by any number of third parties, then to your ISP and finally to you. Lots of copies are created, most of them in systems which have no direct relationship with either of us. These copies should not need their own licence so the law creates an exception for them.

“whose sole purpose is to enable a lawful use”. I look at a webpage. My computer creates a copy in memory and maybe on my hard disk. These copies are just allowing me to look at the webpage and so should not need their own separate licence (although I think it’s implied in any case). So the law created an exception for them.

“which have no independent economic significance”. This one seems to be one of the most wilfully misinterpreted. I have heard the argument made, with a straight face, that a company which keeps complete copies of entire websites on its servers in order to use them for its business is covered by this exception. The logic seems to be that although they keep copies of the entire content, and they depend on them to do business, they don’t make more than small snippets available to their users and so the copies in their servers have no economic significance. Since this is self-evidently asinine and self-justifying, I don’t think it needs a lengthy deconstruction: it’s obviously absurd.

The legalities drag expensively on…

The NLA and Meltwater litigation rumbles pointlessly on, and so all this will be subject to even more scrutiny by the courts.

Fortunately for them, they have copious sources which can help them understand the process which led up to the wording. As well as the law in its final form, and the recitals explaining some of the intent, the whole official and political process was documented as it went along. There are also plenty of people who participated who can help round out the picture if necessary. The courts won’t need to use the forensic skill of the ancient historian to determine what the law was intended to achieve – they can get the first-hand version. I find it hard to see how they could change the conclusion of the lower courts, whose judgement, in my view, reflects the letter and intent of the law.

Meanwhile back in the real world, more sensible things are happening. Meltwater has agreed a licence with the NLA. They’re doing business, their clients are getting a service, so are the clients of their rivals who are on a level playing field. The internet is still there, it’s not broken. Browsing is still legal. A few angry businessmen, put out by the idea that someone else’s property isn’t available as a free resource for them, continue to scream and shout and look foolish.

Move along now, nothing to see. Time for a nice cup of tea.

Disclosure: I am a former chairman of the NLA and still do occasional freelance work with them and their members.

That’s not the answer; now, what’s the question?

David Leigh came up with an idea to “save newspapers”. Every broadband customer would be forced to pay £2 per month to fund newspapers. Lots was subsequently written about it, most of it contemptuous.

The problems with this idea are so obvious and numerous that I didn’t bother writing anything about it to add to the cacophony of derisive comments (the only person I noticed having something nice to say about it was Leigh’s Guardian colleague Roy Greenslade). He is by no means the first to have thought of it, just the first not to have immediately realised that it could never, and should never, work.

The obvious unfairness inherent in picking out a particular sub-sector of the media to benefit. The lazy complacency which would inevitably result from guaranteed, unearned income rolling in every year. The perverse incentives which a traffic-based method for dividing up the money would create, not to mention the barriers to entry. The admission of defeat inherent in the whole proposal. The obvious challenge of forcing users to pay a new tax whether or not they like it. The conflict between a press beholden to government subsidy and a free press which holds politics to account.

The more you think about it, the longer the list of objections gets.

But lurking within it are two questions which are actually more relevant and interesting.

How can a newspaper like the Guardian (or any creative endeavour, for that matter) which succeeds online be rewarded for its success? Solving this conundrum answers all of the challenges the internet currently poses for professional creativity.

Obviously, not adopting a model which abandons not just revenue but any prospect of achieving revenue would be a start. Nothing can stand in the way of a company hell-bent on oblivion, and spending a fortune to make a product which you give away to everyone is pretty much the definition of a business which will fail.

But the second question is the one David Leigh and others should really be posing for legislators.

Why is it actually impossible right now for a business model which rewards popular success to be found?

If you’re going to ask politicians to help solve your problems, this is a better one for them to get their teeth into rather than simply asking them to write you a cheque.

Copyright lies at the heart of answering this conundrum. Where copyright is weak we see hyper-inflation of copying (so it’s easy to feel successful due to the illusion of popularity) but a complete collapse in value. This is what is happening online, and it prevents viable business models even being imagined. Where copyright is strong, as we have seen from the last few hundred years of analogue media, we create wealth, choice and diversity.

It doesn’t have to be this way, and a more sustainable solution can be found by looking at the generic issue rather than making special pleadings for businesses and products which might just be dying of natural causes.

Once it’s possible to have a good newspaper business online, it will be up to the skills and ingenuity of The Guardian and others to actually run one. If they succeed they will be rewarded with a viable product which generates lots of revenue and has no need of taxpayer support. If they fail it will be their own fault, but someone will take their place.

The first thing the politicians need to do is get a grip on copyright.

The first thing the Guardian needs to do is just get a grip.

Content farms: slave labour or green shoots of hope?

Several people have drawn my attention to this article on GigaOm talking about content farms as a democratising force for journalism.

Content farms have been criticised for turning content into a commodity, where quantity and optimisation matter more than quality. I think this is, to quite a large extent, right. Anyone can churn out articles and see them appear in various places as long as they’re prepared to write about whatever the algorithms say they should and accept very low remuneration.

The article highlights an interesting flip-side to this, though. Content farms can lead to what the article rather grandly calls “the democratisation” of journalism. Where talent shines through and is spotted, the content farms can act as a sort of talent pool.

To me this is what the media business has always done. In various ways it has found and promoted those with talent and rejected those without it. It has done so imperfectly and unfairly in many cases, but it’s obvious that the people who float to the top of the old-media ecosystem are there for a reason. It is an effective talent-filter.

However, getting your foot on the first rung of the ladder is very, very hard, and many people give up before they have even done it. One hope we might all have for the internet is that it makes that first rung easier to reach. Another hope, so far thwarted, is that the rewards for reaching the very highest levels are greater too.

Surely that matters most. Without greater opportunity, which can support more professional creators, where will that first rung lead? Where will Matt Miller, highlighted in the article as having been plucked from the ranks of zero-experience would-be sports writers to a paid staff job, go next?

It would be great if the answer was that he could reasonably expect a long and lucrative career in online journalism which lasts as long as his talent and enthusiasm. Even better if the same could be said for thousands of other would-be writers. Better yet if a healthy and competitive marketplace meant their employers valued them as superstars.

If that were true, then the undoubted and hugely valuable potential the internet has to lower the first rung of the ladder and allow talent to shine would be all the more exciting.

As it is, though, it’s hard to get excited about the “democratisation” of journalism. The article on GigaOm, in defending content farms, makes a good point about creating opportunity. But at the moment those opportunities are few and far between, and if democratising journalism means displacing overpaid old-guard journalists with newer, cheaper versions (however talented), it’s not a very compelling vision of the future.

Copyright and money: spot the difference

By me in City AM. I’ll post it here later, but in the meantime head over there.

The Times and Google: what changed?

Quite a lot has been written recently about The Times allowing Google to index some of its content. Some of the coverage has suggested this is a capitulation by The Times, which had previously allowed very little indexing.

I think they’re missing the point. The most interesting part of this story is that The Times “will begin showing articles’ first two sentences to search engines” (according to Paid Content).

This is a big change of stance by Google. Back when I was involved in the ACAP project they resolutely refused to contemplate anything which would allow a site owner to determine what part of an article might be visible in search results (the so-called snippet). Nothing in the robots.txt protocol gave site owners the ability to specify their preferences to this level of detail and, although ACAP did, Google refused to engage with it.
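For context, the only snippet-related control the existing conventions offered was all-or-nothing, via page-level robots meta tags like those sketched below; there was nothing with which to say “show the first two sentences and no more”, which is precisely the granularity ACAP proposed:

```
<!-- The coarse controls that did exist: all or nothing. -->
<meta name="robots" content="nosnippet">  <!-- show no snippet at all -->
<meta name="robots" content="noarchive">  <!-- keep no cached copy -->
```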

So the story here is not about The Times capitulating, mainly because they clearly have not. The story is that Google have met them in the middle and agreed on a way of indexing which is agreeable to both of them.

This is exactly the sort of thing which ACAP was meant to achieve, and if Google have softened their rigid approach to the way they’re prepared to operate, it is only a good thing.

For The Times it means they can use Google to help, not hinder, their business strategy. For Google it means their users see a large and visible gap in search results being filled.

I think that’s what you call a good outcome.
