Category Archives: European Law

European regulatory perspectives: Less is more!

Maxime Bablon, 9 September 2016

Post originally published on SAB


Jonathan Hill, former European Commissioner, took stock of his achievements on July 12, during a speech at the Bruegel Institute. He reviewed the work carried out during his mandate as Commissioner for Financial Stability, Financial Services and Capital Markets Union (FISMA) and detailed upcoming challenges for the institution.

A strong supporter of ‘smart regulation’ that takes into account the financial sector’s specificities, the former Commissioner stressed the possibility of enhancing regulation by stepping back and harmonizing the multiple texts issued in recent years.

Here are five key take-home messages from his speech, providing a good overview of ongoing challenges regarding EU financial regulations:

  • Growth and risk dilemma: In holding that ‘without risk there is no growth’, the Briton put the cat among the pigeons. His main target: the aggregation of individual risk aversion, which may create market-wide risk and negatively affect financial stability as a whole. This argument correlates with the analysis carried out by a consulting firm, which estimated the decrease in net banking income at between -1.64% and -1.93%[1] for major banks subject to the Tax on Systemic Risk over the next decade. For the Commission, the decrease in European gross domestic product (GDP) should reach around -0.15% for each percentage-point increase in the Common Equity Tier 1 (CET1) capital ratio. The negative impact should remain cyclical and be cleared after 2019[2].
  • Keep it simple to rule: For the former Commissioner, the current regulation is so complicated that only a handful of lawyers and compliance officers can fully understand it. This constitutes a serious challenge for the long-term sustainability of the banking union. The lack of clarity in textual references feeds the reluctance of the national compliance officers in charge of implementing these regulations (for example, financial reporting (FINREP) refers to a whole variety of texts to fill out the templates, some of which go back as far as 1978[3]).
  • Streamline and create synergies: Several regulations can conflict in their scopes and objectives. For instance, the leverage ratio has increased the cost of clearing, in contradiction with the European Market Infrastructure Regulation (EMIR) requirements, which aim to… increase the number of transactions going through central counterparty clearing houses (CCPs)! On these points, Jonathan Hill called for capitalizing on direct consultations to avoid crossfire between regulations and to seize the opportunity to review existing ones. For further flexibility, he also proposed exempting certain players from clearing obligations (non-financial counterparties, pension funds, some small non-systemic financial companies, etc.).
  • Differentiation and proportionality principles: Regulations in force do not sufficiently take into account the diversity of players, whether in terms of business model, risk profile or entity size. The capital requirements regulation review should focus on this point, especially regarding prudential requirements. The former Commissioner mentioned the standard approach used to define credit risk and the margin for systemic risk, implying that these factors weigh on the competitive position of small and medium-sized banks. Simplifying the capital requirement calculation, or introducing a specific exemption for smaller players such as credit unions, could better take into account each actor’s specific characteristics.
  • Reduce the reporting burden: In the consultation carried out last year by DG FISMA, several entities complained about information overlapping between the required reports (e.g. under EMIR, the Markets in Financial Instruments Directive II (MiFID II) and the Securities Financing Transactions Regulation). For instance, Jonathan Hill mentioned the possibility of reviewing EMIR to “avoid ‘dual reporting’ obligation, at least for non-financial firms”. Moreover, while the Commissioner welcomed the increase in data exchange between national and European regulators, he wondered whether “it [data exchanged] is all essential”. This quote also reflects the need to clarify tasks across the multiple regulatory layers (national authorities, the European Central Bank (ECB), the European Banking Authority (EBA), etc.).
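The Commission’s rule of thumb quoted in the first bullet (roughly -0.15% of GDP per percentage point of additional CET1 capital) lends itself to a quick back-of-the-envelope calculation. The sketch below is purely illustrative; the 2-percentage-point increase is a hypothetical figure, not one taken from the speech.

```python
# Illustrative only: applies the Commission's stated rule of thumb
# (about -0.15% of GDP per percentage point of extra CET1 capital)
# to a hypothetical increase in required capital ratios.

def gdp_impact(cet1_increase_pp: float, impact_per_pp: float = -0.15) -> float:
    """Estimated cyclical GDP impact (in %) for a CET1 increase given in percentage points."""
    return cet1_increase_pp * impact_per_pp

# A hypothetical 2-percentage-point rise in required CET1 ratios:
print(gdp_impact(2.0))  # → -0.3
```

On the Commission’s own figures this drag is cyclical and expected to wash out after 2019, so the number represents a transition cost rather than a permanent loss.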

In the wake of the crisis, the banking union was set up within a very short time. It made it possible to harmonize prudential and resolution standards across a set of heterogeneous countries, which was anything but easy. Now that the framework is defined, regulators can start focusing on quality, in particular by taking into account the feedback given by financial services professionals (inter-text synergy, proportionality, optimization of the reporting scope, the subsidiarity principle, etc.).

Valdis Dombrovskis (the new Vice-President in charge of DG FISMA) has declared his intention to pursue the work, building upon the guidelines set by his predecessor. However, other challenges are looming ahead, in particular the complex issues of the European deposit insurance scheme and the implementation of the capital markets union. Ultimately, the former Commissioner’s main recommendation is to regulate less but better.

Further reading:

[1] Banks reviewed are BNPP, CASA, SG and BPCE (see here)

[2] Capital Requirements – CRD IV/CRR – Frequently Asked Questions (see here)

[3] This is the case in particular with the 4th Council Directive (78/660/EEC)



Maxime Bablon – Marie Sklodowska-Curie promotion (2012) – works as a banking regulatory compliance consultant between Paris and the Maghreb at a French FinTech (Sab IT).

Shedding light on the first EU-wide legislation on cybersecurity


Alejandro Sanchez Garcia and Andrea Barona Valladolid, 5 August 2016

More and more business value and personal information is migrating into digital form on worldwide interconnected platforms, bringing with it an equally large risk of cyberattacks. You need only go back to October 2015, when the telecom group TalkTalk suffered a cyberattack that cost the company over €41m. Nearly 157,000 of its customers’ personal details were accessed, along with 28,000 obscured credit and debit card numbers. The cyberattack caused TalkTalk shares to lose one third of their value. Conscious of the real cost of cybercrime, on 6 July 2016 the European Parliament adopted the Network and Information Security Directive (Directive (EU) 2016/1148) (the “NIS Directive”), which will enter into force on 8 August 2016. EU member states have until 9 May 2018 to adopt national measures transposing the requirements of the Directive.

The NIS Directive responds to the threat posed by cyberattacks against critical infrastructure and the need to strengthen Europe’s cyber resilience. It has a significant impact on businesses supplying essential services and operating critical infrastructure in the field of energy, transport, banking, health or digital services. In addition, at member state level, it will require the adoption of domestic structures and cooperation mechanisms. Faced with a Directive with implications for both public bodies and businesses, it is essential to understand the key aspects and controversies around it.

Five key features

  1. National frameworks: The Directive requires national strategies that allow for concrete policy and regulatory measures to safeguard a minimum level of network and information security. This will imply the designation of a national competent authority responsible for managing incidents and risks.
  2. Cooperation networks: The European Commission, member states and the European Network and Information Security Agency will establish a cooperation group with the objective of collaborating to counter cybersecurity threats.
  3. Notification requirements: Operators of essential services have to put in place procedures to assess the significance of network and information security incidents. An operator of essential services is not required to notify other parties. However, a national competent authority may decide to inform the public.
  4. Use of standards: To encourage convergent implementation, member states must use European or internationally accepted standards relevant to the security of networks and information systems. Such standards have not been expressly defined in the NIS Directive.
  5. Enforcement: Competent authorities at a national level are given the license to investigate cases of non-compliance. They may report criminal incidents to law enforcement agencies and collaborate with data protection authorities when incidents involve personal data.

Potential legal vacuum

There are two aspects that remain ambiguous and could potentially lead to legal uncertainty. One is the system of penalties (Article 21), as the Directive simply requires member states to put in place “effective, proportionate and dissuasive” sanctions. It remains to be seen what sanction regimes member states develop before the transposition deadline of 9 May 2018.

The other delicate issue concerns the mechanism of incident notification (Article 14.3 and 5, and Article 16.3 and 6). The directive falls silent on the terms for the public disclosure of an incident. Publishing this information could have a great impact on the company’s economic and corporate reputation.

Three years after its initial proposal, we should acknowledge and acclaim the joint institutional effort to create a more secure and trusted online environment. However, it is notable that the NIS Directive is a minimum harmonisation instrument, which means all eyes will now be on member states’ ability to make the Directive a reality at national level.


Alejandro Sánchez García, Senior Director, FTI Strategic Consulting, Brussels. Simon Stevin Promotion.

Andrea Barona Valladolid, Consultant, FTI Strategic Consulting, Brussels. Vaclav Havel Promotion.


Brexit yesterday, but what Europe tomorrow? Operational capacities of its own for an effective and accountable Union

Damien Gerard, 11 July 2016

For the past ten days, many commentators have speculated on the causes of the wish expressed by a majority of British citizens to leave the Union. It would be, notably, the fault of the recurrent inaccuracies conveyed by the press across the Channel, or even of large-scale disinformation organised by the ‘Leave’ camp. That is no doubt partly true, just as it is obvious that the United Kingdom’s relationship to the European integration project has historically differed from that of the states of continental Europe. Nevertheless, it is hard to believe that the referendum result would have been the same if the European Union were genuinely a model of efficiency in managing the great challenges of our time. Would the European citizen, in the United Kingdom or elsewhere, be wrong to be tired of the permanent state of crisis in which the Union seems to have been plunged for years? Is it really so unreasonable that some regard the Union as a factor of instability before seeing it as a project bearing hope? Should the causes of Brexit not also be sought in a lack of effectiveness of European policies?

The EU’s lack of effectiveness: a cause of Brexit and a consequence of choices inspired by the United Kingdom?

What if Brexit were in fact the result of disinformation concealing the fact that the European Union’s apparent ineffectiveness is also the consequence of directions the United Kingdom itself gave to the Union’s internal functioning? Shocking? For the European lawyer, it is nonetheless striking that the picture of how the European Union works painted by the main Brexit protagonists proved so anachronistic and at odds with the facts, whether it be David Cameron calling on the Union to stop acting with the rigidity of a bloc and behave instead with the flexibility of a network, or Boris Johnson demanding more cooperation between states and less supranational content. In fact, for almost twenty years the European Union has operated precisely on the principle of cooperation between states and through networks of national authorities. Likewise, the Union has produced far less supranational content over the past twenty years, preferring soft modes of convergence based on the sharing of experience between member states. In other words, the European laws adopted by the European Parliament and/or the Council have for many years already embodied the ideal advocated by the two figureheads of the British referendum.

Curious? Doubt grows when one recalls that this evolution in the Union’s mode of functioning, giving priority to cooperation between member states, is itself partly the result of impulses given by the United Kingdom, for example in the fight against cross-border crime. Incredulity increases when one realises that cooperation between national authorities brings advantages in terms of mutual learning (besides sometimes being the only available route) but also has shortcomings in terms of effectiveness, and contributes to the European Union’s inability to respond effectively to the crises of recent years, yesterday in financial matters and today on immigration and terrorism. Consternation looms when one considers that this inability helps cast doubt, in citizens’ eyes, on the Union’s added value as an autonomous level of government, and feeds the temptation of a retreat inwards that could erase the gains of 65 years of a common European project.

Is it really conceivable that, after throwing its full weight behind the creation of the single market, the United Kingdom could have dragged the entire Union into a vicious circle of deference to the national level, ineffective European policies and loss of legitimacy? Would Brexit, in turn, allow the escape of a member state that helped create the complex and contested situation in which the European Union finds itself today, in the image of the spectacle British political leaders have offered since the referendum result came in? If fear is a poor counsellor, paranoia is a worse one. These questions certainly call for nuanced answers, but asking them helps sketch interesting avenues for the European Union of tomorrow.

The Europe of tomorrow: meeting citizens’ expectations by investing in effectiveness?

To begin with, the European Union is solid: an international organisation unique of its kind, its functioning shows remarkable stability. Created and governed by treaties concluded between sovereign states, the Union enjoys unprecedented autonomy thanks to its capacity to adopt itself the norms implementing the competences conferred upon it, to interpret those norms and to rule on their validity. Over time, national legal systems have accommodated the existence and demands of this autonomous organisation, to the point that the rules governing interactions between national and European law have today reached a surprising degree of maturity, marked by mutual respect and trust. Recent proof came when the German Constitutional Court accepted the European Central Bank’s recourse to outright monetary transactions to stabilise the euro, provided it complies with the limits set by the Court of Justice of the European Union.

Despite its great autonomy, the European Union nevertheless remains dependent on member states when it comes to applying the norms it enacts. In other words, the Union has no operational capacities of its own to ensure directly the effective application of the rules it adopts. The enforcement of competition rules between undertakings has historically been the sole exception to this principle. For a few months now, however, the European Central Bank has also had the power to supervise directly the most significant credit institutions of the euro area, in collaboration with national supervisory authorities but with autonomous decision-making power, within what is known as the Banking Union. The establishment of the Banking Union is a remarkable step forward in the European integration process: it rests on the recognition that the European Union needs an autonomous capacity for action, beyond the mere coordination of national authorities, in order to supervise banks effectively and consistently and thus ensure the financial stability of the euro area. It took an unprecedented monetary crisis to realise that bilateral or multilateral cooperation between national authorities alone, even on the basis of a common rulebook, was insufficient to achieve that objective.

In the aftermath of Brexit, reforming the European Union to regain citizens’ trust requires, among other things, developing operational capacities of the Union’s own, in partnership with national authorities but with genuine powers of intervention, along the lines of what has been achieved within the Banking Union and what exists in competition matters. Such a reform holds enormous potential to strengthen the European Union’s effectiveness, without any significant transfer of new competences, without direct redistribution of resources between member states, without major institutional transformation, and without substantial treaty change liable to generate further instability. The European Commission’s recent proposal to create a European Border and Coast Guard is an important step in this direction, at a time when the migration crisis we are living through has already cost the lives of more than ten thousand people since 2014 (UNHCR data). In the fight against cross-border crime and terrorism, a strengthening of the operational capacities of Europol and Eurojust is equally urgent, in order to close the gap between the free movement of persons and the territorial compartmentalisation of national police and criminal justice systems, and to achieve the effective coordination between member states whose shortcomings recent events have unfortunately exposed. In this field, the aim would be to establish a genuine “European Bureau of Investigation” supervised by European prosecutors (who could simultaneously be national prosecutors), capable of coordinating effectively the competent services of the member states but also endowed with the powers and means necessary to conduct operations on the ground.


The coming months will tell us whether Brexit becomes a reality. What is increasingly clear, however, is that European integration through cooperation between member states alone, without a genuine capacity for action at European level, has shown its limits. Faced with this observation, we must confront reality and adjust the strategy, without indulging in a bidding war against nationalists of every stripe that is lost in advance. One of the major challenges for the Europe of tomorrow is to respond effectively to citizens’ legitimate expectations; to do so, is it not time to let the European Union take up its responsibilities by endowing it with operational capacities of its own? Because we do not necessarily need more Europe, but better Europe.


Damien Gerard is Director of the Global Competition Law Center at the College of Europe and teaches European law at the University of Louvain (UCL, Belgium).

End of bailing out of banks, but how accountable will the Single Resolution Board be?

Phedon Nicolaides, 4 January 2016

One of the benefits of the Christmas break is that you can catch up with the episodes of your favourite series that you have missed. In our case, we watched three seasons (about 45 episodes) of “Breaking Bad”, the hit TV series about the chemistry teacher who became a drug dealer. Yes, we too got addicted, thankfully in a different way.



But as we were watching back-to-back episodes of Breaking Bad, I realised that the show had managed to solve the principal-agent problem that has bedevilled the new economic governance of the European Union. As of 1 January 2016, the Single Resolution Mechanism became operational. A week earlier, on 24 December 2015, the Official Journal of the EU published the text of an “Agreement between the European Parliament and the Single Resolution Board on the Practical Modalities of the Exercise of Democratic Accountability and Oversight”. Will the European Parliament succeed in exercising effective oversight over the SRB? Before I answer this question, I want to explain how the principal-agent problem was solved in Breaking Bad.

This content is not available in your location – Copyright and intellectual monopoly

Gil STEIN, 24 September 2015

Andrus Ansip, Commission Vice-President for the Digital Single Market, is famous for promising to end “geo-blocking”[1], allegedly because he wants to watch soccer matches when travelling across borders. Geo-blocking is the technical term for the practice of blocking access to media content available on the internet based on the geographic location of the viewer.
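The blocking decision described above usually boils down to a simple lookup: the country inferred from the viewer’s IP address is checked against the territories for which the content is licensed. The sketch below is a minimal illustration of that logic; the content IDs and country lists are hypothetical, not drawn from any real broadcaster.

```python
# Purely illustrative geo-blocking logic: a title is served only if the
# country inferred from the viewer's IP address is in its licensed set.
# Content IDs and territory lists are hypothetical.

LICENSED_TERRITORIES = {
    "uk-drama-01": {"GB", "IE"},
    "us-film-02": {"US", "CA"},
}

def is_blocked(content_id: str, viewer_country: str) -> bool:
    """True if the viewer's country falls outside the licensed territories."""
    allowed = LICENSED_TERRITORIES.get(content_id, set())
    return viewer_country not in allowed

print(is_blocked("uk-drama-01", "FR"))  # → True ("not available in your location")
print(is_blocked("uk-drama-01", "GB"))  # → False
```

In practice the country lookup is done against commercial IP-geolocation databases, which is also why the restriction can often be sidestepped with a VPN.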

Commissioner Ansip is on the right track, but I would like to suggest he might be missing the big picture. The big picture is the enormous database that is the internet, and all the content it holds. We are not just talking soccer matches. Server networks are constantly active, holding unfathomable amounts of information. A considerable amount of that information is popular creative content, scientific and academic knowledge, public media communications, and of course also soccer matches. However, despite the undoubted value of this content, the obvious benefits of its frictionless exchange and the relatively low costs required to disseminate it widely under existing network infrastructures (theoretically even globally), much of it is blocked based on user location. Indeed, according to a Commission report[2], 35% of broadcasters in the EU have used geo-localization to restrict access to content online (the data refer to 2012). The content that is blocked, absent or inaccessible is typically international creative media content (e.g. US films, BBC shows, etc.), while access to the best-developed services is often the most restricted. Further evidence of the trend towards limiting accessibility can be seen in the ever-increasing number of claims brought before Google to remove links to content which is (allegedly) infringing copyright (see Figure 1).

Figure 1: Requests for removal of content due to copyright claims 2011-2015 (weekly)[3]


Why do creators and distributors of interesting content use geo-blocking on the internet to limit access to their content? Politico suggests that geo-blocking is a marketing strategy adopted “… in order to protect an air of exclusivity” by major production studios and distributors[4]. I would argue that categorizing geo-blocking simply as a marketing strategy is too narrow a frame. A better answer is that geo-blocking strategies are used to maintain power over the supply of copies (of the original content) circulating in the market. Copyright, which governs the legal realm of this market, allows content producers and their related agents to control the distribution methods, timeframes and capacity by which they supply their content to the public, as well as the legal right to enforce their exclusivity. This has led some to argue that the existing copyright regime amounts to “intellectual monopoly” rather than intellectual property[5].

However, the use of copyright to maintain market power by established content producers and distribution chains in the creative industry (including, among others, global TV broadcasters and large production studios for film or music) is seriously challenged by increasing connectivity and the rise of digitization. These major actors in the creative industry are indeed cultural giants whose key role in history should not be forgotten. They pioneered the creative audiovisual sector and invested in the distribution networks and technologies that brought the whole world the wonders of Pink Floyd and David Bowie (EMI), Mickey Mouse (Disney) and Terminator 2 (Sony Pictures Motion Picture Group). Not forgetting, of course, the broadcasters of World Cups and Olympic Games. We should all be grateful.

Nonetheless, we should perhaps look again at the big picture. While much respect is due to the great investments these production studios made to pioneer the creative audiovisual industry roughly a century ago, it is important to note that their capacity to maintain considerable power over the distribution of information (some might even argue, to exercise control over the content itself) has allowed them to reap astronomical profits, a fact that has gone largely unnoticed by most users. Perhaps it goes unnoticed because the amount of information and content available to common users is staggering as it is. Have you ever paused to ask yourself why you are being “redirected to a local website” when shopping at Amazon online? Or why Netflix makes different content available in different territories? The abundance of information available online allows producers of quality content to “work in the shadows”. Since a lot of content is still available through traditional channels (which are strongly guarded), most users never wonder why the newest Hollywood movies are not released online on a global scale directly upon their completion. Instead, users accept without question that such content is mostly distributed through old-fashioned “release windows” built on geographical and market segmentation. These distribution strategies focus on profit maximization, giving little or no importance to universal consumer accessibility.

But limited accessibility is starting to draw the public’s attention. The EU has recently become very aware of the need to seriously address its digital handicap[6], and is now tackling head on the issue of geo-blocking. Indeed, two months ago the Commission launched an anti-trust investigation against six major Hollywood studios: Disney, NBC Universal, Paramount Pictures, Sony, Twentieth Century Fox and Warner Bros, as well as Sky TV in Ireland and UK[7]. This development resonates with the public’s increasing demands to have access to any published content, anywhere, at any time, and the rising awareness of citizens, businesses and institutions to the costs of geo-blocking.

And there is more good news, particularly when considering the amazing advances in information and communication technologies (ICT) and the adaptability of the economy to the digital age. Never in history have individuals had a tool to access such deep databases of content and information. Moreover, the digitization of cultural and media works, and the development of innovative business models such as Spotify, Netflix and similar digital service providers, are a huge step forward and arguably constitute the best content distribution models possible under existing legal constraints online. They are the best because they enjoy legal certainty and business legitimacy, providing access to quite a large variety of content for which copyright usage has been cleared in advance. Nevertheless, Spotify and Netflix build their content repertoires from the ground up, not relying on the information available (“illegally”) online. This is due to the legal requirement to clear the use of copyright-protected content with the original creator (or his agent) before making copies of it available for distribution. This approach will always yield a smaller variety of content (compared to all the “legal” and “illegal” content available online) at a higher price, and could create a bias towards the distribution of content holding mainly commercial value (as opposed to artistic value, for example).

It is important to stress that access should not come at the expense of content creators’ right to be rewarded for their contributions. Good content producers should be well paid and have good incentives to create beneficial and interesting content. Nevertheless, the protection of the economic interest creators have in their creations has nothing to do with market segmentation strategies. Geo-blocking, often implemented in the name of copyright protection, could actually reduce the total revenue creators and distributors can collect (given universal distribution capacity and users’ ability to pay for access to content). Put differently, in a perfect world all published information would be catalogued and made available to the public, together with the option to pay for access to the content itself, allowing universal access to valuable data while providing sufficient incentives for creation. What is mind-blowing is that today’s content producers prefer to grant only limited access to their creations, despite their potential capacity to distribute them on a global scale online. In excluded markets, users do not even have the option of paying for access to content that is valuable to them, since it is blocked on a purely geographical basis. This is exactly where users encounter the famous disclaimer: “This content is not available in your location”.

Framing the issue in this way, one might argue that content producers and creators are essentially “leaving money on the table” by refusing to supply cross-border demand. However, we cannot assume that these major creative-industry actors behave in what seems to be an irrational way. A more plausible reading is that, while creators and distributors do reduce their total sales by refusing to supply demand, their control over the quantity supplied in the market enables them to raise prices, allowing them to maximize profits and extract high rents (e.g. by providing access only to high-demand areas). Thought of in this way, the removal of geo-localization-based market segmentation in the online creative market would increase overall content consumption, eliminate monopoly prices, and still allow content creators and distributors to cover their investment costs, thus preserving sufficient incentives for creation. In other words, the removal of geo-blocking would make everyone better off without undermining incentives for the production of content.
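The “money on the table” intuition is easy to make concrete with a toy calculation. The figures below are entirely invented for illustration: a distributor compares serving only a high-willingness-to-pay home market at a high price with serving every territory at a low one, under the assumption (made in the text) that digital reproduction costs are close to zero, so revenue approximates profit.

```python
# Toy comparison with invented demand figures; reproduction cost ~ 0,
# so revenue is used as a stand-in for profit.

def revenue(price: float, buyers: int) -> float:
    return price * buyers

# Hypothetical demand: 1M home-market buyers at EUR 10,
# or 5M worldwide buyers at EUR 3 under universal access.
restricted = revenue(10.0, 1_000_000)   # geo-blocked release
universal = revenue(3.0, 5_000_000)     # universal release

print(universal > restricted)  # → True with these numbers
```

Whether universal access really dominates depends entirely on the demand curve in the excluded territories, which is why the argument is a conditional one rather than a certainty.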

We may find it hard to imagine a world without the restrictions of copyright, but ample examples exist in today’s markets. Simply consider the fashion, gastronomy and pornography sectors, which have essentially no copyright protection at all. I do not think anyone can argue that creativity in these sectors has been brought to a standstill by little or no protection of intellectual property. On the contrary, the ubiquity of imitation and “knock-offs” seems to push creative motivation to extremely high levels, while allowing fringe competitors to play a major part in the sector, thus increasing the variety of creations available in the market. Examining a radical approach that envisions a world with no copyright protection on the internet might also help us understand the forces at play. If tomorrow everyone were allowed to access all the creative content available online today (much of which is, by and large, illegal in today’s world due to copyright infringement), content producers and distributors would find themselves in direct competition with these formerly illegal distributors. Established content distributors and creators would then face two options: either stop producing and distributing, or find a way to supply all demand at a price cheap enough to kill the competition. In this theoretical scenario, legitimate creators and distributors would push for universal accessibility, since that would be the only way they could survive: by providing universal access to their creations at a cheap price, benefiting from global scale and negligible reproduction costs, and thus eliminating any incentive illegitimate distributors might have to copy their creations and supply them themselves.

I would like to conclude by saying that the disruptive nature of the internet, developments in ICT, and revolutionary ideas such as the sharing economy and the information society do indeed loom as a destructive threat over the value chains established by the big production studios and distribution chains in the creative sector. Resistance to similar developments has been common throughout the ages, yet has proved futile in the long run. If we ask Schumpeter,[8] however, this may not be all bad news.


[1] see

Commissioner Ansip is also famously quoted as saying that EU copyright laws are “pushing people to steal”, see

[2] see

[3] see

[4] see

[5] Boldrin, M. & Levine, D.K., 2008. Against Intellectual Monopoly. Cambridge University Press.

[6] For examples see EU policies such as the Digital Agenda, the Roaming Regulation, etc.

[7] see, and

[8] Schumpeter, J.A., 1911. The Theory of Economic Development: An Inquiry into Profits, Capital, Credit, Interest and the Business Cycle.

The answer to the Greek crisis is in the Treaty

Olivier Colin, Voltaire promotion, 3 July 2015

The European Union has been founded on a set of values: “respect for human dignity, freedom, democracy, equality, the rule of law and respect for human rights, including the rights of persons belonging to minorities”. These values are not only common to the Member States but are also considered “universal”, meaning that they should apply to all human beings. As Europeans, we would interpret this as a willingness to promote and apply these values to all human societies in the world. Unfortunately, when it comes to money and power, European decision makers are not even able to apply such values at the European level. In that context, how can they claim that our model of society is universal if we cannot even apply it within our own borders?

The EU’s ‘Bio economy’. Utopian, realistic, protectionist. Or all of these?

I have just recently stumbled across the EU’s Bioeconomy strategy, classified in the administrative organogram, at least, under ‘Research and innovation’. It could also be DG Industry. Or DG Trade. Or DG Env. Or indeed DG Agri. Tucking it away under Research and innovation was a good idea, I believe: best to keep it safely away from daily policy concerns and ditto lobbying. The Bioeconomy – which is defined as encompassing the sustainable production of renewable resources from land, fisheries and aquaculture environments and their conversion into food, feed, fibre, bio-based products and bio-energy, as well as the related public goods – is seen by the EC as a successor to the EU’s Biosociety program, which however was more scientific in outlook (lots of talk of new technologies).

A big gap in its approach, to me at least, is its lack of discussion of reduced consumption and ‘need’ (the Club of Rome has some powerful insights into this), which is a pity. It talks mostly about increasing and diversifying ‘output’, rather than about reducing it or matching it to true need. In its current outlook, the Bioeconomy feels more like a poster site for EU ‘innovative’ technologies than one for foresight in development priorities. And no, that is not properly done elsewhere in the EC.