PROJET AUTOBLOG


Creative Commons

source: Creative Commons


CC Needs Assessment Report on Public Domain Tools in Cultural Heritage Sector Unveils Key Insights

Thursday 23 February 2023 at 14:48

Today Creative Commons is proud to release our Needs Assessment report, Are the Creative Commons Public Domain Tools Fit-For-Purpose in the Cultural Heritage Sector?

From 1 January (Public Domain Day) to 15 February 2022, we ran a multilingual online survey using Google Forms to share a 50-question questionnaire in English, French and Spanish. We received responses from 133 field practitioners — working in libraries, museums, archives, and other areas of open culture — from 44 different countries on five continents.

This report showcases one of the many ways in which we at CC strive to support our global Open Culture community in realizing a vision for better sharing of cultural heritage: we develop and steward a legal, social, and technical infrastructure that supports open sharing that is impactful, generative, equitable, and resilient. The insights gained from this report are crucial in guiding our efforts to improve the accessibility and usability of our public domain tools for the cultural heritage community. At CC, we are committed to responding to the needs of our global community and supporting better sharing, and the maintenance of our licenses and tools, in service of the communities they support, takes center stage.

With this report, we gain valuable insight into the unique needs and challenges of the cultural heritage community with regard to our public domain tools: the Public Domain Mark (PDM) and the CC0 Public Domain Dedication.

Key findings include: 

We also define pathways to address those needs, with strategic recommendations to guide future actions in four steps:

With the recommendations outlined in this report, we are well-positioned to address the unique needs and challenges of the cultural heritage community, and to further our mission of promoting better sharing and equitable access to cultural heritage.


Read the full document →

 

Do you also want to get involved? Don’t hesitate!

The post CC Needs Assessment Report on Public Domain Tools in Cultural Heritage Sector Unveils Key Insights appeared first on Creative Commons.

CC Open Education Platform Lightning Talks February 2023: Recordings and Slides

Wednesday 22 February 2023 at 17:27

On 2 February 2023, the Creative Commons Open Education Platform community held Lightning Talks, where presenters shared innovative ideas and technologies in the field of Open Education. Each speaker brought unique expertise to the table, sparking conversations and inspiring new ideas. You can watch the replay below.

[Embedded video: recording of the February 2023 Lightning Talks]


The Lightning Talk Presenters:

Reimagining Open Education as Social Justice

Ravon Ruffin, Educational Programs Manager at MHz Foundation, and Amanda Figueroa, Community Director at MHz Foundation, showcased the Curationist platform for decolonial methodologies in curation, education, and art.

LibreTexts 101: Building the Textbook of the Future

Delmar Larsen, Professor of Chemistry at the University of California, Davis, and Founder and Director of the LibreTexts project, provided an overview of LibreVerse, a suite of tools and technologies to advance OER textbooks and assessments.

Using Machine Translation Algorithms to Effectively Generate Non-English Language OER Textbooks

Delmar Larsen also discussed machine translation algorithms for non-English language OER textbooks.

Integration of Values and Ethics in OER for Climate Change and the SDGs

Dr. Suma Parahakaran, Head of the Faculty of Education at Manipal Globalnxt University in Malaysia, highlighted options for collaborative OER for learning communities focused on climate change and the UN Sustainable Development Goals.

OER as a Social Justice Tool, the Case of Digital Accessibility

Nicolas Simon, Associate Professor of Sociology in the Department of Sociology, Anthropology, Criminology, and Social Work at Eastern Connecticut State University, discussed the use of OER for digital accessibility and promoting inclusion, diversity, equity, and social justice.

Get the Balance Right: Using Mindfulness OER for Intentional Work and Life Practices

Dr. Carolyn Stevenson, Full-time faculty member and Faculty Advisor for Purdue University Global, School of General Education, Department of Professional Studies, provided resources for using mindfulness OER for a healthy work/life balance.

 

Keep track of what the CC Open Education Platform is doing by subscribing to our calendar, and learn more about the CC Open Education Platform on our website.

The post CC Open Education Platform Lightning Talks February 2023: Recordings and Slides appeared first on Creative Commons.

This Is Not a Bicycle: Human Creativity and Generative AI

Tuesday 21 February 2023 at 23:42

Like the rest of the world, CC has been watching generative AI and trying to understand the many complex issues raised by these amazing new tools. We are especially focused on the intersection of copyright law and generative AI. How can CC’s strategy for better sharing support the development of this technology while also respecting the work of human creators? How can we ensure AI operates in a better internet for everyone? We are exploring these issues in a series of blog posts by the CC team and invited guests that look at concerns related to AI inputs (training data), AI outputs (works created by AI tools), and the ways that people use AI. Read our overview on generative AI or see all our posts on AI.

Join our community meetings on AI: 22 Feb 2023

“Generative AI” has been the subject of much online conversation over the past few months. “Generative AI” refers to artificial intelligence (AI) models that can create different kinds of content by following user input and instructions. These models are trained on massive datasets of content — images, audio, text — that is harvested from the internet, and they use this content to produce new material. They can create all kinds of things, including images, music, speech, computer programs, and text, and can either work as stand-alone tools or can be incorporated into other creative tools.

The rapid development of this technology has caught the attention of many, offering the promise of revolutionizing how we create art, conduct work, and even live our daily lives. At the same time, these impressive new tools have also raised questions about the nature of art and creativity and what role law and policy should play in both fostering the development of AI and protecting individuals from possible harms that can come from AI.

At Creative Commons, we have been paying attention to generative AI for several years. We recently hosted a pair of panel discussions on AI inputs and outputs, and we have been working with lawmakers in the EU and elsewhere as jurisdictions debate the best approach to legal regulation of artificial intelligence. With our focus on better sharing, we have been particularly interested in how intellectual property policy intersects with AI. In this post, we will explore some of the challenges with applying copyright laws to the works created by generative AI.

Recently, text-to-image models like DALL-E 2, Midjourney, and Stable Diffusion have received significant attention because of their ability to create complex pieces of visual art just by following simple user text prompts. These systems essentially work by connecting user keywords to elements of the images in their training datasets in order to create entirely new images. The length and complexity of the prompts that users input can vary dramatically, and the models are able to quickly create works across seemingly endless styles and genres.

An image generated by the DALL-E 2 AI platform showing a slightly distorted yellowish-white bicycle with a basket, rear rack, and orange chain guard leaning against a brick building with a white stucco base near a gray standpipe.
“Bicycle” by Stephen Wolfson for Creative Commons was generated by the DALL-E 2 AI platform with the text prompt “bicycle.” CC dedicates any rights it holds to the image to the public domain via CC0.

To better understand the process of creating a piece of visual art using a text-to-image model, and how it raises complex intellectual property issues, let’s look at an example. If I ask DALL-E to create an image of a “bicycle”, the AI takes that prompt, compares it to images and text descriptions in its training data, and creates a few examples of what it thinks I mean. I am mostly a spectator in this process, and not meaningfully in control of the end product. Without further instructions, the model produces what it thinks a “bicycle” should look like, and not necessarily what I had in mind when I started the process.

This image from my single prompt may or may not be what I was looking for, and in some circumstances, it may serve my needs. But if I have a particular vision for what I want my bicycle image to look like, I need to work more with DALL-E to bring that to life. I can add more prompts into the system, and the more specific I am, the closer DALL-E may get to what I want. That is, the more I describe what I want the image to look like, and the more thought I give to what I want the end product to be, the more material the AI has to work with to find elements from its training data that realize my artistic vision. I’m never entirely in control in this process, since the model does the physical creation of the work. And this doesn’t always work as planned — more specifics can lead to unexpected (and sometimes very strange) new elements added to the image. But ideally, the more I work with the tool, the closer I may get to my vision.
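To make the prompt-and-refine loop described above more concrete, here is a minimal sketch using the open-source Hugging Face diffusers library with Stable Diffusion, one of the text-to-image models mentioned earlier. The checkpoint name, prompts, and sampling parameters are illustrative assumptions; this is not a description of how DALL-E 2 itself works.

```python
# A minimal sketch (not DALL-E 2) of the text-to-image workflow described above,
# using the open-source diffusers library and a Stable Diffusion checkpoint.
# The model name, prompts, and parameters below are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed publicly available checkpoint
    torch_dtype=torch.float16,
).to("cuda")                            # requires a CUDA-capable GPU

# A bare prompt: the model decides everything I did not specify.
vague = pipe("bicycle", num_inference_steps=30).images[0]
vague.save("bicycle_vague.png")

# A more specific prompt steers the output closer to a particular vision,
# though the model still makes the final rendering choices.
specific = pipe(
    "a yellow bicycle with a wicker basket leaning against a brick wall, "
    "photorealistic, soft morning light",
    num_inference_steps=50,
    guidance_scale=7.5,   # how closely the image should follow the prompt
).images[0]
specific.save("bicycle_specific.png")
```

In practice, the workflow is exactly this loop: generate, inspect the result, adjust the wording of the prompt, and generate again; that iteration is where the human creative choices discussed below enter the process.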

The process of creating content using something like DALL-E 2 takes a considerable amount of trial and error, and over time you can develop skills in prompting the AI to generate what you want it to produce. Indeed, there are entire “prompt books” available to give people shortcuts to get the most out of DALL-E 2 without having to get over the learning curve. Yet even without these books, you can learn to use the system to create content that fits your artistic vision, given enough time and experience.

This simple example begins to illustrate how generative AI can blur the line between what is the work of a human artist and what is the work of a machine, and when it involves both, it reveals the difficulty of applying classic copyright laws to AI-generated content. Creative Commons has argued for several years that, absent significant and direct human creative input, AI outputs should not qualify for copyright protection. In part, this is because we believe that copyright law’s fundamental purpose is to foster human creativity. Where human creativity is not involved, the default should be no copyright protection. Freedom from copyright protection offers important benefits, most notably that it enables downstream users to build on, share, and create new works without worrying about infringing on anyone’s rights. Simply stated, autonomously created content produced by AI doesn’t involve human creative expression and isn’t within the subject matter of copyright. It should be free for all to use.

But what happens when human creativity is more deeply involved in the generative AI process? What then? It would be difficult to argue that adding minimal inputs into DALL-E 2, like my “bicycle” example, is “creative” in any substantial way. However, the more I manipulate the tool by adding more substantive and creative prompts to get it to produce the work that I have in mind, the more creative input I have in the process and the less that is left to the AI alone. In this way, DALL-E begins to look more like an artist’s tool and less like an autonomous or semi-autonomous content generator.

In fact, these generative AI models can be powerful tools to encourage and enhance human creativity. Over the summer, an artist named Jason Allen won the Colorado State Fair digital arts competition with an image generated by Midjourney. Allen spent nearly 80 hours creating his work, adjusting text prompts to create hundreds of images, from which he selected three and manipulated them with other digital tools, until finally printing the works on canvas — certainly this goes beyond simply entering a few keywords into the tool. What is more, AI can give non-artists the ability to create new works. I, for example, do not have much talent or training in the visual arts. I have never been able to draw what I have in my head. But with these generative AI tools, I can make my artistic vision a reality in a way that I have not been able to do in the past. Imagine what someone with deep artistic vision but physical or visual challenges may be able to do with these tools — the possibilities are amazing!

If this is true — if generative AI can be an engine for human creativity instead of a substitute for it — then perhaps we need to consider if, when, and how copyright protection should attach to parts of some AI outputs, separating the unprotectable elements from what may be, on a case-by-case basis, protectable. On the other hand, rights restrictions come with potential downsides. While copyright can help incentivize creation in some circumstances, legal protection should not be granted where it disproportionately harms the public’s right to access information, culture, and knowledge, as well as freedom of expression. The questions for us, then, are: when does the creativity that a user puts into a work based on a generative model rise to a level where rights protections should attach, and when do the benefits of protection outweigh the costs?

In a blog post from 2020, P. Bernt Hugenholtz, João Pedro Quintais, and Daniel Gervais offered an interesting way to look at the generative AI creation process. They divided the process into three parts: conception (designing and specifying the final output), execution (producing draft versions of the output), and redaction (refining the output). Humans are primarily involved at the conception and redaction phases — coming up with the idea for the output, entering prompts into the system, and iterating and refining the final product. AI, on the other hand, is essential to the execution phase — assembling the output following human inputs. The authors wrote that whether copyright should protect a piece of AI-generated content should be a case-by-case determination. By breaking down the process into these parts, we can evaluate what kind and how much human creative input goes into the production of AI outputs. Where human creative choices are expressed in a final output, that output should qualify for copyright protection; where an AI creates without creative choices of the human author in the final product, the output should not qualify for copyright.

There are no easy answers here. While we believe that AI-generated content should not be protected by copyright by default, the line between what are the works of human artists and what are the productions of AI algorithms will only become more complex as AI technologies continue to develop and are incorporated into other creative tools.

For over twenty years, Creative Commons has argued for a copyright system that encourages more sharing and a freer use of creative works, because we believe that an open approach to intellectual property rights benefits us all. The law should support and foster human creativity, and right now it is at best unclear how AI-generated content fits into this system. However copyright law applies to this new technology, it is essential for the law to strike a balance between the rights of people to use, share, and express themselves using creative works and incentivizing creativity through exclusive rights.

The post This Is Not a Bicycle: Human Creativity and Generative AI appeared first on Creative Commons.

Félix Nartey — Open Culture VOICES, Season 2 Episode 3

Tuesday 21 February 2023 at 09:00

Félix feels that “sharing is something that is invited in human nature”, and this is something that has inspired part of his own journey within Open Culture and the Open Movement. In this episode, we learn about digital and media rights in Ghana and Liberia, as well as the ways local initiatives for open access are being fostered by Félix’s work with Wikimedia.

Open Culture VOICES is a series of short videos that highlights the benefits and barriers of open culture, as well as inspiration and advice on the subject of opening up cultural heritage. Félix is a Senior Program Officer at the Wikimedia Foundation and also volunteers with Creative Commons and Mozilla to promote the Commons and Open Access.

Félix responds to the following questions:

  1. What are the main benefits of open GLAM?
  2. What are the barriers?
  3. Could you share something someone else told you that opened up your eyes and mind about open GLAM?
  4. Do you have a personal message to those hesitating to open up collections?

Closed captions are available for this video; you can turn them on by clicking the CC icon at the bottom of the video. A red line will appear under the icon when closed captions have been enabled. Closed captions may be affected by Internet connectivity — if you experience a lag, we recommend watching the videos directly on YouTube.

Want to hear more insights from Open Culture experts from around the world? Watch more episodes of Open Culture VOICES here >>

The post Félix Nartey — Open Culture VOICES, Season 2 Episode 3 appeared first on Creative Commons.

Panel Recap: 3D Scanning for Cultural Heritage Preservation, Access and Revitalization

Monday 20 February 2023 at 10:25

On 7 February 2023, Creative Commons hosted a panel discussion on 3D scanning for the preservation, access, and revitalization of cultural heritage. Missed it? Not to worry: it was recorded.

[Embedded video: recording of the panel discussion]

 

Here are some of our top takeaways from the discussion:

We at Creative Commons look forward to further engaging in these discussions, and including more views, voices and perspectives in the future. If you are interested in the topics mentioned above, please contact us at info@creativecommons.org. We hope to work with you in the future. 

The post Panel Recap: 3D Scanning for Cultural Heritage Preservation, Access and Revitalization appeared first on Creative Commons.
