EARE’s reflections on the challenges facing Collective Management Organizations in the EU
In light of the recent questionnaire circulated by the Polish Council Presidency on the challenges facing Collective Management Organizations (CMOs) in EU Member States, the European Alliance for Research Excellence (EARE) would like to share some thoughts regarding Section 1 and the role CMOs could play in managing copyright and ensuring fair compensation for creators in the context of AI.
EARE members believe that the scope of any CMO-focused remuneration frameworks should be limited to remuneration schemes for non-public data or correctly opted-out data under Article 4 of the Directive on copyright and related rights in the Digital Single Market (DSM Directive), also known as the Copyright Directive. Drawing on the expertise of our members active in research and innovation, we have developed several recommendations to support your contributions to the consultation process launched by the Polish Presidency.
Challenges of AI to the Collective Management Ecosystem in the EU
EARE does not have a position on the identification of AI-generated or AI-assisted output or on the use of AI-powered technologies to support CMOs’ daily functions (questions 1.1 and 1.2). However, we acknowledge that AI benefits society and rights holders, and we therefore believe its adoption should be widespread.
On questions 1.3, 1.4, and 1.5, concerning licences of the CMOs’ repertoire for generative AI training, EARE members believe that the scope of CMO-focused remuneration frameworks should be limited to remuneration schemes for non-public data or correctly opted-out data under Article 4 of the DSM Directive – especially when it comes to the management of collections of content for the purpose of training AI models. We recognize that the current Text and Data Mining (TDM) exception in Article 4 endorses voluntary controls for strictly commercial uses, allowing rights holders to express a choice regarding the use of their data for AI training. However, interpreting these voluntary controls as applying to other TDM applications beyond AI training could have unintended consequences. A well-functioning internet – one that is safe, informative, useful, productive, and able to foster collaboration and connections – depends on text and data mining.
TDM techniques underpin AI development and, arguably, all forms of modern data-driven analytics that rely on training with data. The TDM exception in Article 4 of the DSM Directive does not prejudice copyright owners’ legitimate interest in exploiting or enforcing rights in their works, and it provides safeguards for copyright holders in the form of conditions that must legally be met before copyrighted data can be accessed. Additionally, rights holders are free to opt out of TDM practices.
The TDM exception should not be repurposed as a framework for licensing. Article 4 of the Copyright Directive, which covers TDM beyond the area of research, was added to contribute to the development of data analytics and artificial intelligence in the EU. It does so by permitting the use of publicly available copyrighted works for AI training purposes without the need for a licence from the rights holder.
EARE members believe that mandatory and extended collective licensing frameworks would undermine the original purpose of the TDM exception. Permitting CMOs to opt out on behalf of members and non-members alike would also allow them to seek licensing revenues for activities already permitted under the TDM exception. In turn, this would limit access to data, increase the cost of accessing data that is already legally available, and thereby deter investment in AI research and development. Such a policy would disproportionately affect SMEs and startups and exacerbate biases in AI models developed in Europe, precisely when policymakers aim to enhance European competitiveness.
We also believe that mandatory and extended collective licensing would be inconsistent with current EU law and Article 4 of the DSM Directive by presupposing that a CMO can exercise a reservation of rights. Article 4 of the DSM Directive clearly states that rights holders, not CMOs, are responsible for deciding whether to exercise the reservation of rights relating to TDM. Shifting this power to CMOs would prejudice individual authors’ right to choose how they want to participate in the AI ecosystem. It would also add confusion, inefficiency and administrative burdens by creating multiple layers of opt-outs.
It is also important to highlight the potential economic impact of mandatory and extended collective licensing frameworks on Europe’s technology sector. By imposing restrictive licensing requirements at national or EU level, the European Union and EU Member States risk deterring investment in AI research and development and creating legal fragmentation in the single market. They also risk putting European companies at a significant competitive disadvantage in the global market and slowing down the adoption of new technologies. This impact would be particularly severe for small businesses, which may lack the resources to comply with new regulations, limiting their ability to compete and innovate.
Today, the global race to build AI capacity has intensified. The new US administration has made AI a top priority and revoked President Biden’s AI Executive Order, potentially widening the gap between the US and the EU. China is similarly making rapid advances, launching open-source AI models trained at a fraction of the cost of US models. Meanwhile, countries such as Japan and Singapore have passed science- and innovation-friendly copyright laws that support machine learning and minimise bias. All these developments place even more pressure on the European Union to keep pace in the AI race.
To prevent the EU from falling behind in AI development, European and national policymakers should promote open data policies and flexible options for researchers and innovators, instead of putting up barriers. CMOs and licensing policies should not add further regulatory fragmentation across Member States, which would create unnecessary complexity for researchers and startups and pose a barrier to research and development.
In the AI ecosystem, CMOs could play a role in voluntary collective licensing frameworks that provide access to restricted data or licensing for AI training on properly opted-out data. In this scenario, rights holders would have to expressly consent and grant an authorization to the CMO. However, before considering this option, several unresolved challenges must be addressed. These include the complexity of valuing data when large-scale datasets matter more for AI training than individual pieces of content. There are also significant issues such as the difficulty of allocating revenues, the administrative burden of aggregating data or creating a digital repertoire, identifying anonymized data, developing sector-specific solutions, managing rights for works with multiple owners or authors, and ensuring adequate transparency.
Finally, it is important to note that CMOs are traditionally designed to manage rights and royalties within national borders. Their infrastructure and legal grounding are based on country-specific mandates and national licensing schemes. While the EU has tried to promote coordination through the Collective Rights Management Directive (CRM Directive), the reality is that implementation remains highly fragmented. Each country has different rules on which rights CMOs manage, what repertoire is covered, and how licences are granted. This fragmentation is not aligned with the reality of AI developers and researchers, who train their models on large, cross-border datasets. Requiring licensing through uncoordinated national CMOs could slow down or block research and innovation without solving the underlying issue. With AI, it is often impossible to identify which exact work influenced a particular output, in turn creating a black-box redistribution system in which most rights holders never see meaningful compensation. Furthermore, the transaction costs of negotiating with multiple CMOs in multiple jurisdictions would fall heavily on startups and SMEs.
When Public-Private Partnerships (PPPs) meet fragmented CMO systems, the situation becomes even more complex. In PPPs, AI models or datasets may be developed in public institutions but then used by private partners for commercial purposes. What starts as open research is reclassified as commercial use, which can invalidate the TDM exception in Article 3 and trigger licensing obligations. In such a context, CMOs are likely to err on the side of licensing, pushing for fees and controls that were not intended in the original research context. This could lead to restricted access even when the data was publicly available, generated with public money and intended for open research. The lack of harmonization can also lead to delays in dataset access, confusion over legal compliance and risk aversion. Due to these risks and the legal complexity, researchers and private partners could be reluctant to engage in PPPs.
The TDM exceptions included in the DSM Directive were meant to support research and AI development; if CMOs can override or overcomplicate them with licensing demands, the entire objective is undermined. Instead, the EU should focus on granting and ensuring access to large and diverse datasets to promote science, research and innovation.
EARE’s Key Recommendations
In summary, EARE recommends that the EU and its Member States should:
- Preserve the current opt-out system established by the Copyright Directive, allowing rights holders to signal their preferences through machine-readable means (a simple illustration follows these recommendations).
- Reject the idea that CMOs may exercise a reservation of rights or conduct licensing for TDM activity on behalf of members or non-members without explicit authorization.
- Reject any proposal – whether at national or EU level – that creates a de facto opt-in system. This would impede all forms of machine learning (including AI innovation), increase biases, and contravene EU law.
- Instead of limiting the training of AI models and machine learning applications, democratize AI development and spur investment and wider economic growth by creating legal clarity around access to publicly available data for AI training purposes.
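For readers less familiar with what "machine-readable means" can look like in practice, the short Python sketch below illustrates one long-established convention – a robots.txt check – that a developer could consult before mining a page. It is a minimal illustration only: the crawler name ExampleAIBot and the URLs are hypothetical, and rights holders may instead rely on other signalling mechanisms (such as the W3C TDM Reservation Protocol), whose details differ.

```python
# Illustrative sketch only. Checks whether a site's robots.txt allows a
# hypothetical AI crawler ("ExampleAIBot") to fetch a given page before any
# text and data mining takes place. Real Article 4 reservation signals may
# use other mechanisms; this is not a complete compliance check.
from urllib import robotparser


def may_mine(page_url: str, robots_url: str, user_agent: str = "ExampleAIBot") -> bool:
    """Return True if robots.txt does not reserve the page against this crawler."""
    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # fetch and parse the site's robots.txt
    return parser.can_fetch(user_agent, page_url)


if __name__ == "__main__":
    # Hypothetical URLs for illustration; a real crawler would derive the
    # robots.txt location from the target domain.
    allowed = may_mine("https://example.com/article.html",
                       "https://example.com/robots.txt")
    print("Mining permitted by robots.txt:", allowed)
```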
You can download EARE’s full reflections here.