PROJET AUTOBLOG


Creative Commons

source: Creative Commons


CC Supports a new Digital Knowledge Act for Europe

Monday 12 February 2024 at 05:57
A medieval manuscript representing three richly-clad women in front of a green, hilly landscape with castles in the background.
Anonymous, “Prudence, Wisdom and Knowledge”, National Library of the Netherlands, Public Domain Mark. 

In December 2023, the Communia Association for the Public Domain, of which Creative Commons (CC) is a member, asked the European Commission and European Parliament to consider the development of a Digital Knowledge Act. In this blog post, we offer some background on the proposal and explain why CC fully supports it.

Rationale for a Digital Knowledge Act

European knowledge institutions (libraries, universities, schools, etc.) as well as researchers face numerous copyright challenges in the digital environment. Access to academic publications, their reproduction for research purposes, text and data mining, and similar activities are all necessary to conduct serious research, yet they are hampered by misaligned copyright rules, especially where cross-border collaboration is key.

As top EU institutions are gearing up for a new mandate for the next five years, a Digital Knowledge Act would enable knowledge institutions to fulfill their mission and offer the same services online as offline. Such a regulation could improve copyright law by introducing the following for the benefit of knowledge institutions: 

CC’s work on policy and open knowledge

CC recognizes that equitable policy which enables and promotes open access (OA) is pivotal to making knowledge open. For example, in 2022 CC, in partnership with SPARC and EIFL, launched the Open Climate Campaign, a four-year project working to make the open sharing of research the norm in climate science. At the center of this work is partnering with national governments, private funders, and environmental organizations to develop open access policies for their grantees. Another project aims to identify recommended best practices for better sharing of climate data and yet another strives to promote open licensing for life sciences preprints. Through these OA policies and best practices we believe we can change the culture of sharing and promote the adoption of open practices for knowledge to grow and help solve the greatest challenges of our times.  

Why we support this initiative

But discrete open access policies and best practices are not enough. Knowledge institutions need to be able to rely on a clear, harmonized, and supportive legal system that operates across borders. That is why CC’s policy work centers on promoting better sharing of knowledge and culture through global copyright reform. Knowledge institutions are pivotal actors in the fight against climate change and hold many of the keys to unlock knowledge. If we are going to solve the world’s biggest problems, the knowledge about them must be open, and institutions, which hold that knowledge in trust for the public, must be able to operate within a legal framework that is conducive to their core mission and purpose. A Digital Knowledge Act would provide such a structure at an EU-wide scale and would contribute to accelerating research, boosting scientific progress, and spurring knowledge-based innovation for a sustainable future.

For additional guidance on open knowledge policy, contact us at info@creativecommons.org


An Invitation for Creators, Activists, and Stewards of the Open Movement

Sunday 11 February 2024 at 13:00

Dear Open Movement Creators, Activists, and Stewards, 

A key question facing Creative Commons as an organization, and the open movement in general, is how we will respond to the challenge of shaping artificial intelligence (AI) toward the public interest while growing and sustaining a thriving commons of shared knowledge and culture.

So much of generative AI is built on the digital infrastructure of the commons and uses the vast quantity of images, text, video, and rich data resources of the internet. Organizations train their models with trillions of tokens from publicly available datasets like CommonCrawl, GitHub open source projects, Wikipedia, and arXiv.

Access to the commons has enabled incredible innovations while creating the conditions for the concentration of power in entities that are able to amass the immense energy and data needed to train AI models. Community consultations at conferences like MozFest, RightsCon, Wikimania, and the CC Global Summit have also revealed concerns about transparency, bias, fairness, and attribution in AI.

Alignment Assembly

To start addressing some of these challenges, between 13 February and 15 March, Open Future will host an asynchronous, virtual alignment assembly for the open movement to explore principles and considerations for regulating generative AI. We hope to reach participants spread across different fields of the open movement and coming from different regions of the world. We are organizing the assembly in partnership with Open Future and Fundación Karisma.

We want to bring to the conversation the perspectives of:

We will use the process of an alignment assembly, an experiment in collective deliberation and decision-making. This model was pioneered by the Collective Intelligence Project (CIP), led by Divya Siddarth and Saffron Huang, and has been used by OpenAI, Anthropic, and the government of Taiwan.

You can sign up to take part in the process by registering your interest here (we will only use your contact information to invite you to the assembly and to provide updates, and we will delete it once the assembly process is complete).

Background

Creative Commons has long been considering the intersection of copyright and AI. CC submitted comments to the World Intellectual Property Organization’s consultations on copyright and AI in 2020, and in 2021 the organization explored the question “Should CC-licensed work be used to train AI?” More recently, CC carried out consultations at MozFest, RightsCon, Wikimania, and the CC Global Summit, while publishing ongoing analysis of the AI landscape.

Ahead of the Creative Commons Global Summit last year, Creative Commons and Open Future hosted a workshop on generative AI and its impact on the commons. The group agreed and released a set of principles on “Making AI work for Creators and the Commons.” Now, we would like to test and expand this work. 

Outcome

The Alignment Assembly on AI and the Commons builds on and continues all of this work.

We treat the principles as a starting point. We are using the alignment assembly methodology and the pol.is tool to understand where there is consensus, which principles generate controversy, and, in particular, how much alignment there is between the perspectives of activists, creators, and stewards of the commons.

At the end of the process, we will produce a report with the outcomes of the assembly and a proposal for a refined set of principles. As the policy debate about the commons and AI develops, we hope the assembly will provide insights into better regulation of generative AI.

Sign up here to share your thoughts on regulating generative AI.


What does the CC Community Think about Regulating Generative AI?

Thursday 8 February 2024 at 13:00

In the past year, Creative Commons, alongside other members of the Movement for a Better Internet, hosted workshops and sessions at community conferences like MozFest, RightsCon, and Wikimania, to hear from attendees regarding their views on artificial intelligence (AI). In these sessions, community members raised concerns about how AI is utilizing CC-licensed content, and discussions touched on issues like transparency, bias, fairness, and proper attribution. Some creators worry that their work is being used to train AI systems without proper credit or consent, and some have asked for clearer guidelines around public benefit and reciprocity. 

In 2023, the theme of the CC Global Summit was AI and the Commons, focused on supporting better sharing in a world with artificial intelligence — sharing that is contextual, inclusive, just, equitable, reciprocal, and sustainable. A team including CC General Counsel Kat Walsh, Director of Communications & Community Nate Angell, Director of Technology Timid Robot, and Tech Ethics Consultant Shannon Hong collaborated to use alignment assembly practices to engage the Summit community in thinking through a complex question: how should Creative Commons respond to the use of CC-licensed work in AI training? The team identified concerns CC should consider in relation to works used in AI training and mapped out possible practical interventions CC might pursue to ensure a thriving commons in a world with AI.

At the Summit, we engaged participants in an Alignment Assembly using Pol.is, an open-source, real-time survey platform, for input and voting. Twenty-five people voted in Pol.is, casting 604 votes in total on over 33 statements, an average of about 24 votes per voter. The statements included both pre-written seed statements and ideas suggested by participants.

The one thing everyone agreed on wholeheartedly: CC should NOT stay out of the AI debate. All attendees disagreed with the statement: “CC should not engage with AI or AI policy.” 

Pol.is aggregates the votes and divides participants into opinion groups. Opinion groups are made of participants who voted similarly to each other, and differently from other groups. There were three opinion groups that resulted from this conversation.
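For readers curious about the mechanics, the sketch below (in Python) illustrates the general idea behind this kind of grouping: encode each participant’s votes as a vector and cluster voters whose vectors are similar. It is a simplified, hypothetical stand-in using k-means on made-up votes, not Pol.is’s actual pipeline, which is considerably more elaborate.

    # Illustrative sketch only: group participants by vote similarity with k-means.
    # Hypothetical data; this is not the actual Pol.is clustering pipeline.
    import numpy as np
    from sklearn.cluster import KMeans

    # Rows = participants, columns = statements.
    # Encoding: agree = 1, disagree = -1, unsure or skipped = 0.
    votes = np.array([
        [ 1,  1, -1,  0,  1],
        [ 1,  1, -1, -1,  1],
        [-1, -1,  1,  1,  0],
        [-1,  0,  1,  1, -1],
        [ 1, -1,  0,  1,  1],
        [ 1, -1, -1,  1,  1],
    ])

    # Split voters into three opinion groups, as at the Summit.
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(votes)

    for group in range(3):
        members = np.where(kmeans.labels_ == group)[0]
        # The mean vote per statement is what characterizes each opinion group.
        print(f"Group {chr(65 + group)}: participants {members.tolist()}, "
              f"mean votes {votes[members].mean(axis=0).round(2).tolist()}")

Each group’s mean vote per statement is what makes it possible to describe a group as, say, uniquely supporting or strongly opposing a particular set of statements.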

Group A: Moat Protectors

Group A comprises 16% of participants and is characterized by a desire to focus on Creative Commons’ current expertise, specifically some relevant advocacy and the development of preference signaling. Unlike Groups B and C, they support noncommercial public interest AI training. This group is uniquely against additional changes like model licenses and strongly against political lobbying in the US.

Group B: AI Oversight Maximalists

Group B, the largest group with 36% of participants, strongly supports Creative Commons taking all actions possible to create oversight in AI, including new political lobbying actions or collaborations, AI teaching resources, model licenses, attribution laws, and preference signaling. This group uniquely supports political lobbying and new regulatory bodies.

Group C: Equitable Benefit Seekers

Group C, containing 32% of participants, is focused on protecting traditional knowledge, preserving the ability to choose where works can be used, and prioritizing equitable benefit from AI. This group strongly supports requiring authorization for using traditional knowledge in AI training and sharing the benefits of profits derived from the commons. Like Group A, this group is against political lobbying in the US.

Want to learn more about the specific takeaways? Read the full report.

We invite CC members to participate in the next alignment assembly, hosted by Open Future.  Sign up and learn more here. 


Dispatches from Wikimania: Values for Shaping AI Towards a Better Internet

Wednesday 7 February 2024 at 23:12
Isolated Araneiform Topography, from UAHiRISE Collection on Flickr. Public Domain Mark.

AI is deeply connected to networked digital technologies — from the bazillions of works harvested from the internet to train AI to all the ways AI is shaping our online experience, from generative content to recommendation algorithms and simultaneous translation. Creative Commons engaged participants at Wikimania on August 15, 2023, to shape how AI fits into the people-powered policy agenda of the Movement for a Better Internet.

The session at Wikimania was one of a series of community consultations hosted by Creative Commons in 2023. 

The goal of this session was to brainstorm and prioritize challenges that AI brings to the public interest commons and imagine ways we can meet those challenges. In order to better understand participant perspectives, we used Pol.is, a “real-time survey system, that helps identify the different ways a large group of people think about a divisive or complicated issue.” This system is a powerful way to aggregate and understand people’s opinions through written expression and voting. 

Nate Angell and I both joined the conference virtually, two talking heads on a screen, while the majority of approximately 30 participants joined in-person in Singapore. After introducing the Movement for a Better Internet and asking folks to briefly introduce themselves, we immediately started our first Pol.is with the question: “What are your concerns about AI?” If you’re curious, you can pause here, and try out Pol.is for yourself. 

In Pol.is, participants voted on a set of ten seed statements (statements we wrote based on previous community conversations), added their own concern statements, and then voted on the concern statements written by their peers in the room. Participants could choose “Agree,” “Disagree,” or “Unsure.” Overall, 31 people voted and 532 votes were cast, an average of 17.16 votes per person.

96% of participants agreed that “Verification of accuracy, truthfulness and provenance of AI-produced content is difficult.” This statement drove the most consensus among all participants in the group. Consensus indicates that people from different opinion groups have a common position, or in other words, people who do not usually agree with each other agree on this topic. The other two most consensus-driving concerns were: “Large-scale use of AI may have a negative impact on the environment” and “I suspect a push for greater copyright control would eventually be appropriated and exploited by big companies. E.g. Apple and privacy.”  

The most divisive statement was: “AI is developing too fast and its impact is unclear.” Here, “divisive” means the statements that drew the most differing opinions, rather than the most disagreement; widespread disagreement is itself a form of consensus. The other three most divisive statements were also the most unclear ones, with more than 30% voting “Unsure”: “AI can negatively impact the education of students,” “AI can use an artist’s work without explicit permission or knowledge,” and “AI and the companies behind them steal human labor without credit and without pay.”
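As a rough illustration of how agreement and divisiveness can be ranked from raw vote counts, here is a minimal Python sketch; the statement labels and tallies are hypothetical, and Pol.is’s own consensus and divisiveness metrics are more involved. It treats a statement as consensus-driving when its agree rate is high overall, as divisive when the decided votes split evenly, and as unclear when the “Unsure” share is large.

    # Illustrative sketch: rank statements by agreement and a simple divisiveness score.
    # Hypothetical tallies; Pol.is's own consensus/divisiveness metrics are more involved.

    # statement label -> (agree, disagree, unsure) vote counts
    tallies = {
        "Statement A": (30, 1, 0),   # near-universal agreement -> consensus-driving
        "Statement B": (14, 13, 4),  # decided votes split evenly -> divisive
        "Statement C": (9, 8, 14),   # large "Unsure" share -> unclear
    }

    for label, (agree, disagree, unsure) in tallies.items():
        total = agree + disagree + unsure
        decided = agree + disagree
        agree_rate = agree / total
        # Divisiveness proxy: 1.0 when decided votes split 50/50, 0.0 when unanimous.
        divisiveness = 0.0 if decided == 0 else 1 - abs(agree - disagree) / decided
        print(f"{label}: {agree_rate:.0%} agree, "
              f"{unsure / total:.0%} unsure, divisiveness {divisiveness:.2f}")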

Back in our workshop room, we viewed the data report live, which was somewhat difficult due to limitations in text size. Participants in the room elaborated on their concerns, highlighting why they agreed or disagreed on particular points.

In the second half of the workshop, we asked participants to imagine ways we can meet one particular challenge. We focused our discussion on the only statement with 100% agreement: “AI makes it easier to create disinformation at scale.” 

Participants were asked to write down their ideas in a shared document, and stand up to share their thoughts in front of the audience. The three major buckets for innovation in this space were education, technical advancement, and cultural advocacy. In education, participants brought up the need for critical thinking education to reinforce the ability to identify reliable sources and AI tools education to allow more people to understand how misinformation is created. Technical projects included developing AI to tackle disinformation, building a framework for evaluating AI tools during development, and creating better monitoring systems for misinformation. Participants also highlighted the need for cultural advocacy, from building the culture of citations and human-generated reference work to policy advocacy to maintain the openness of the commons. 

Creative Commons will continue community consultations with the Open Future Foundation in the next month. Sign up and learn more here.

 


Recap & Recording: “Whose Open Culture? Decolonization, Indigenization, and Restitution”

Wednesday 31 January 2024 at 17:21
A woven textile with black, red, blue, brown, and tan shapes emulating birds and fish.
“Andean Textile Fragment” by Peruvian, 1500, Walters Art Museum, here slightly cropped, is released into the public domain under CC0.

In January we hosted a webinar titled “Whose Open Culture? Decolonization, Indigenization, and Restitution” discussing the intersection of indigenous knowledge and open sharing. Our conversation spanned a variety of topics, including indigenous sovereignty over culture, respectful terminology, and the legacy of colonialism and how it persists today. While we strive for more open sharing, it is important to recognize the cases where culture should not be open to all.

The United Nations Declaration on the Rights of Indigenous Peoples had a significant impact on the ability of indigenous people to advocate for their rights, and on institutions’ access to clearer guidance on the treatment of indigenous cultural expressions. But there is much more to be done. Institutions stewarding indigenous cultural expressions must be patient and take the time needed to build relationships with the communities whose culture is in their collections, in order to establish ways of sharing with consideration and consent.

In this webinar, we were joined by:

Watch the recording. 

 

Learn More 

We shared a reading list in our announcement post; here are some more links shared by the panelists and by some audience members during the conversation:

What is Open Culture Live?

In this series, we tackle some of the more complex challenges that face the open culture movement, bringing in speakers with personal and professional expertise on various topics. Watch past webinars:

Save the date for our next webinar “Maximizing the Value(s) of Open Access in Cultural Heritage Institutions” on 28 Feb at 2 PM UTC. 

CC is a non-profit that relies on contributions to sustain our work. Support CC in our efforts to promote better sharing at creativecommons.org/donate
