A full stop to offensive comments from readers in European online media: the media will be held responsible. What's next?

[This post is also available in Spanish]

On October 10th, the European Court of Human Rights issued a highly relevant judgment for European media companies. The case was brought by the Estonian news website Delfi, which had been sued in its country's courts for publishing offensive reader comments directed against the director of a company that had acted as a source for a news story. The story in question was published on January 24th, 2006, and a few weeks later, on March 9th, the lawyers of the offended party requested the removal of 20 offensive comments and compensation for moral damages. The news website removed the comments that same day but rejected the demand for compensation. The following month, a civil lawsuit was filed before the Estonian courts. The case reached the country's highest court, which upheld the guilty verdict and ordered the media company to pay 320 euros in compensation to the plaintiff.

Delfi, the company that owns the news portal, appealed to the European Court of Human Rights in Strasbourg, arguing that the ruling violated the principle of freedom of expression protected by Article 10 of the Convention for the Protection of Human Rights and Fundamental Freedoms.


Now this European court has ruled against the media company, despite the fact that Delfi had a (rudimentary) automatic system to filter out comments containing certain keywords (insults and other problematic words), as well as a mechanism with which readers could flag a comment as inappropriate. The judgment holds that this filtering was insufficient to prevent damage to the honor of third parties and that the media company should have taken more effective measures to prevent such situations.

The court considers it reasonable to hold the publisher responsible, since its function is to publish information and give visibility to readers' comments, and it profits from the traffic those comments generate.

What now? In an entry on this blog entitled “Moderating participation in the media” [in Spanish], published a couple of years ago, we summarized the difficulties of this far-from-trivial problem and the keys to our approach to solving it.

The difficulties are manifold. On the one hand, detecting isolated offensive words is not enough: it is necessary to filter whole expressions, sometimes taking into account their context and inflected forms. On the other hand, it is also necessary to interpret abbreviated language and texts with typographic errors, which are remarkably frequent in comments and user-generated content sections. These “errors” can arise from the limitations of devices, the impulsive nature of commenting, or users’ deliberate attempts to cheat the automatic filters by any means available (sometimes in genuinely witty ways).
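To make the point concrete, here is a minimal sketch (not our actual technology) of the kind of normalization a filter is forced to perform before it can even start matching problematic words; the blacklist, the substitution table and the function names are purely illustrative:

```python
import re
import unicodedata

# Purely illustrative blacklist; a real system filters multi-word expressions
# and takes context and inflected forms into account.
BLACKLIST = {"idiot", "stupid"}

# Common character substitutions used by commenters to evade keyword filters.
SUBSTITUTIONS = str.maketrans(
    {"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "$": "s", "@": "a"}
)

def normalize(text: str) -> str:
    """Lowercase, strip accents, undo digit/symbol tricks, collapse repeated letters."""
    text = unicodedata.normalize("NFKD", text.lower())
    text = "".join(c for c in text if not unicodedata.combining(c))
    text = text.translate(SUBSTITUTIONS)
    return re.sub(r"(.)\1{2,}", r"\1", text)  # "stuuuupid" -> "stupid"

def naive_flag(comment: str) -> bool:
    """Flag a comment if any blacklisted word appears after normalization."""
    words = re.findall(r"[a-z]+", normalize(comment))
    return any(word in BLACKLIST for word in words)

print(naive_flag("You are an 1d10t"))           # True: the substitution trick is undone
print(naive_flag("an idiosyncratic argument"))  # False: whole-word matching avoids false positives
```

Even with this normalization, such a filter still misses offensive expressions built out of perfectly innocent words, which is precisely why context-aware linguistic analysis is needed.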

In addition to this problem, related to the Variety of the texts, we find the other two features recurrent in “big data” applications (which together form the famous 3 Vs): the Volume of comments to be processed and the Velocity of response required.

At Daedalus, we have been addressing these problems for the media industry for years and lately also for other sectors, like banking and insurance.

As regards integration architecture, we currently offer these solutions in SaaS (Software as a Service) mode through Textalytics, our new cloud API platform, as well as through traditional licensing for on-premises deployment.
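From the integrator's point of view, the SaaS mode boils down to an HTTP call per comment. The sketch below uses a hypothetical endpoint, key and parameter names, for illustration only; the real API contract is the one described in the Textalytics documentation:

```python
import requests  # third-party HTTP client (pip install requests)

# Hypothetical endpoint and parameters, for illustration only.
API_URL = "https://api.example.com/comment-moderation/check"
API_KEY = "YOUR_API_KEY"

def check_comment(text: str, lang: str = "en") -> dict:
    """Send a reader comment to a cloud moderation service and return its verdict."""
    response = requests.post(
        API_URL,
        data={"key": API_KEY, "lang": lang, "txt": text},
        timeout=5,  # moderation has to keep up with the velocity of incoming comments
    )
    response.raise_for_status()
    return response.json()  # e.g. {"severity": ..., "flagged_expressions": [...]}

print(check_comment("This columnist is a complete..."))
```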

No automatic filtering system can guarantee 100% accuracy. Different companies or media outlets, and different sections within the same outlet, require different strategies: it clearly makes no sense to apply the same filtering criteria to the comments on a brilliant feature article and to the interventions that pour in during the live broadcast of a football match or a reality show. For this reason, our systems assess the severity of each expression, allowing our customers to set their acceptability threshold flexibly. We also provide customization tools that make it easy to add new problematic expressions. Finally, for customers who wish it, we permanently monitor the operation of these systems as part of their continuous quality assurance and improvement plans.
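As a rough illustration of how a per-section acceptability threshold might be wired into a publishing workflow (the severity scale, the section names and the policies below are assumptions, not our product's actual configuration):

```python
from dataclasses import dataclass

# Illustrative 0-4 severity scale and per-section thresholds; in practice each
# medium defines its own scale, sections and acceptability policy.
SECTION_THRESHOLDS = {
    "feature_article": 1,  # strict: almost any problematic expression is held back
    "live_sports": 3,      # lenient: only clearly serious expressions are blocked
}

@dataclass
class Analysis:
    severity: int          # 0 = clean ... 4 = very serious, as scored by the filter

def moderate(analysis: Analysis, section: str) -> str:
    """Publish, hold for review, or reject a comment according to the section's threshold."""
    threshold = SECTION_THRESHOLDS.get(section, 2)  # default for unlisted sections
    if analysis.severity < threshold:
        return "publish"
    if analysis.severity == threshold:
        return "hold_for_review"
    return "reject"

print(moderate(Analysis(severity=2), "feature_article"))  # "reject"
print(moderate(Analysis(severity=2), "live_sports"))      # "publish"
```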

Are you interested? Feel free to contact Daedalus.

Discover our solutions for the media industry.


Jose C. Gonzalez
