source: Creative Commons

Inching Towards Open at California College of the Arts

Thursday, December 8, 2016 at 18:40

Eric Phetteplace is a fellow from our first Institute for Open Leadership, held in San Francisco in January 2015. He is a librarian at California College of the Arts. 


I was a member of the inaugural Institute for Open Leadership in 2015. I’m the Systems Librarian at California College of the Arts (CCA), and my IOL project centered around VAULT, the school’s digital archive, which I maintain. To quote from my original proposal:

While the college has an excellent resource in its digital repository, VAULT, items are often visible only within a department; faculty hesitate to share even with the college community as a whole. What is more, this protective attitude trickles down to students in the form of assignment instructions. Faculty train the next generation of artists to lock down their creations rather than embrace sharing and remixing via the Creative Commons suite of licenses.

It has been two years since the IOL, and it is an ideal time to reflect on my project, what was accomplished, and what work remains to be done.

Progress, Slow but Sure

My prime takeaway from the Institute for Open Leadership related to increasing understanding of open licenses and their benefits. As a librarian, I work among colleagues who are not only aware of how important open content is but actively advocate for it in the form of open educational resources and open access research. Librarians are inclined to sympathize with “open” as a concept, and are liable to have produced content licensed under a Creative Commons license or deposited materials in an open repository. But not everyone shares a nuanced knowledge of and appreciation for openness. The IOL, first and foremost, taught me how to clearly articulate what open means and the tangible benefits it offers. This is evident in a presentation I gave to college administrators, in which I devoted a great deal of time to contrasting the Creative Commons licenses and dispelling myths about them. Providing examples and a space to clear up confusion was invaluable, even if my instincts told me to focus on the implementation and policy details at hand.

Secondly, the IOL helped me to identify allies and stakeholders. Obviously, as mentioned above, the libraries were a natural place to build support. However, I was also able to identify individual faculty members who were already utilizing and promoting Creative Commons content within their classes. I could better articulate the benefits of permissive licensing by putting myself in administrators’, students’, or faculty members’ shoes, and framing the conversation around their needs. While my own stance tends towards “a healthy intellectual commons produces a healthy society”, that’s often not the most persuasive point to others. Students, for instance, are intrigued by the prospect of selling their Creative Commons works, or exploring the ways in which readily shareable works are better for self-promotion.

While the IOL helped me develop a strategy for spreading open content on campus, the progress has been rather slow. When I listed the barriers to my IOL project, a few stood out: limited awareness of Creative Commons on campus, concerns that FERPA prohibits sharing student work, and the difficulty of finding time to advocate for open policies.

We’ve made substantial progress on the first two items. People I talk to on campus are likely to have a positive opinion of Creative Commons and similar efforts, and view FERPA more as a limitation than an insurmountable obstacle. However, the final hurdle has been the most challenging: on top of my other obligations, which recently included a couple of major system implementations, I’ve struggled to find time to advocate for open policies. Seeing improvement takes continual effort, yet since my initial presentation there’s been relatively little change to the student and faculty work in our repository.

CC-Licensed Archives

Screenshot of the library’s collection within VAULT. The assets are licensed CC BY-NC.

One of the areas where the library has more control over the licensing and distribution of content is the college’s archives. The archives include both physical documents and digital objects inside VAULT. The Libraries’ collections in VAULT total over a thousand records, representing everything from historical photos of students from the 1960s to an accreditation report from this October. Since we are in charge of the acquisition and preservation of this content, it was relatively easy to discuss the implications of CC licenses with my library director, who was incredibly supportive of my IOL attendance. We promptly designed a licensing policy and added CC BY-NC licenses to all our public archives content.

Furthermore, I worked to increase the exposure of our archival content by ensuring our metadata records could be harvested and reused by other platforms. By configuring VAULT to publish data using the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) standard, our content now appears in Calisphere—a digital library culled from many archives, libraries, and museums across the state of California—as well as in OCLC’s tremendous WorldCat database, which collects the holdings of libraries worldwide. These external search engines increase the exposure of our resources and make it easier for people who have never heard of California College of the Arts, much less VAULT, to discover our archives. After all, open content isn’t really that valuable if no one knows of its existence. Even if all of VAULT were openly licensed, we would still need to increase its exposure so that it’s not merely a school-wide secret.
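
For readers curious what harvesting looks like in practice, here is a minimal sketch in Python using only the standard library. The endpoint URL is hypothetical (each repository publishes its own), but the verb, parameters, resumption-token paging, and namespaces come from the OAI-PMH specification:

    # Minimal OAI-PMH harvesting sketch (Python 3, standard library only).
    # The endpoint below is hypothetical; a real repository documents its own.
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    OAI_ENDPOINT = "https://vault.example.edu/oai"  # hypothetical endpoint URL
    NS = {
        "oai": "http://www.openarchives.org/OAI/2.0/",
        "dc": "http://purl.org/dc/elements/1.1/",
    }

    def list_records(endpoint, metadata_prefix="oai_dc"):
        """Yield (identifier, title) pairs, following resumption tokens."""
        params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
        while True:
            url = endpoint + "?" + urllib.parse.urlencode(params)
            with urllib.request.urlopen(url) as resp:
                root = ET.fromstring(resp.read())
            for record in root.iter("{http://www.openarchives.org/OAI/2.0/}record"):
                yield (
                    record.findtext("oai:header/oai:identifier", default="", namespaces=NS),
                    record.findtext(".//dc:title", default="", namespaces=NS),
                )
            # An absent or empty resumptionToken means the list is complete.
            token = root.findtext(".//oai:resumptionToken", namespaces=NS)
            if not token:
                break
            params = {"verb": "ListRecords", "resumptionToken": token}

    for identifier, title in list_records(OAI_ENDPOINT):
        print(identifier, "-", title)

Aggregators like Calisphere and WorldCat run far more robust versions of essentially this loop, which is why exposing a standard OAI-PMH feed is often all a repository needs to do to be harvested.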

A Culture of Open

Art+Feminism edit-a-thon at California College of the Arts, by Eric Phetteplace, CC BY

While my primary project of loosening the licensing restrictions of VAULT has only affected archival works, as an institution California College of the Arts has been improving its understanding of and support for openness. The libraries have started a few projects which relate to Creative Commons and open access to research. For instance, in each of the past two years we hosted an Art+Feminism Wikipedia edit-a-thon at a campus library. Wikipedia is one of the most prominent examples of Creative Commons content; it’s ubiquitous, a great topic for information literacy discussions, and the perfect place for students to add to the commons for the very first time. Unfortunately, Wikipedia also suffers from the topical biases that plague other encyclopedias and much of the Western academic canon: male historical figures and Anglo-European subjects are disproportionately represented. Thankfully, events like Art+Feminism exist to train new editors and to encourage people to contribute Wikipedia articles relating to female artists and feminism. We’re not only helping to balance Wikipedia’s topical coverage, but we’re also developing new, open content.

On a related note, the CCA Libraries have made a habit of participating in Open Access Week, a week devoted to raising awareness of the open access movement, which promotes free access to scholarly content online. As part of our participation, we’ve posted informative notices around campus and on our website, created a study guide, and run small contests to encourage faculty to engage with open access. What’s more, we’ve gone beyond simply advocating for open access content and informing our stakeholders about its value; we’re actually distributing open content ourselves. We now have a small but growing Faculty Research collection in VAULT, which provides a place for faculty and staff to post open access versions of their research works.

While these feats are modest, they signal a wider change around campus. Open has moved from being a subject of conversation to a subject of action. Each year, the CCA libraries’ activism expands gradually: we license more archival objects under Creative Commons licenses, we host more events surrounding open access scholarship and open content, and we see growing attendance and awareness. It will continue to be difficult to find the time the cause deserves, but we’re committed to participating in the open movement and improving our institution one step at a time.

A simple and versatile resource for refugees: an interview with Refugee Phrasebook

Wednesday, December 7, 2016 at 18:54

Last year, after a series of attacks on refugee centers in Berlin, I saw a Facebook post circulating from my friend Paul Feigelfeld, an academic in Berlin. The post called on his community – academics, artists, translators, and activists – to take action to stop the continued attacks on asylum seekers in Europe. That post and others like it were the spark for Refugee Phrasebook, a CC0 open data project with hundreds of contributors. Taking shape over the past year, the book has spread all over Europe, attracting global press in Wired, Newsweek, STERN, Die Zeit, and Der Spiegel, as well as winning the Prix Ars Electronica “Award of Distinction” for Digital Communities.

With 1.1 million asylum seekers in Germany, the need for language and education support is acute, and the Refugee Phrasebook helps meet that need. As both a physical and digital resource for refugees, the project has spread rapidly across Europe, and contributors are adding more languages, data, and phrases to continue to support an increasing number of refugees.

Refugee Phrasebook is accepting contributors who want to get involved and collaborate with organizations like the P2P Foundation, CC, and Wikimedia. Visit their website to contribute to their global knowledge community.

What was the impetus for the Refugee Phrasebook? How did the project come about?
The urgency of the refugee situation in the summer of 2015 made it immediately clear that language aids were needed for both refugees and helpers. Infrastructure was poor, and people had smartphones with short battery life (if they had one at all) or no data plans for translation apps. So, alongside many other small, simple projects, a shared document with frequently used phrases for basic communication and central questions spread across several Facebook groups.
It grew exponentially and was quickly transformed into a Google spreadsheet to make the data easier to expand and maintain. Hundreds of people contributed: some anonymously, some briefly, and others on a more long-term basis. People started creating the first print versions after only a few days and distributed them at train stations in Vienna and Berlin. The phrasebook soon found its way to Lesbos, Idomeni, and even Norway. It spread very quickly and received a lot of positive feedback and support from private individuals and from institutions like universities, art schools, and art institutions, which provided printers, design expertise, and more.

How did the Refugee Phrasebook evolve from nascency to a global project of this size? What kinds of tools, both digital and physical, helped you scale the project so rapidly? How do you organize the data and translations?
The Google spreadsheets were shared only on Facebook at first, but soon it became clear that we needed more translators beyond the initial group based in Berlin. We created a website as a contact point for contributors, and thanks to Open Knowledge Foundation Deutschland e.V., we could also handle donations and provide an official donation receipt. As the media woke up to the topic, our website also received a lot of coverage, which helped us reach other initiatives.
To coordinate the project, we mostly use Skype or Zoom calls, Etherpads, Slack, and Trello. In several hackathons, developers helped to improve the structure of the tables and how to display them on the site. But the main factors were a strong sense of urgency, the network effect of personal recommendations, and the data being open, which is still exceptional in this field.

How do the physical and virtual interact in this project as both a virtual collection of data as well as a collection of physical booklets?
It continues to be interesting how the virtual and physical realms overlap in this project. First, of course, very practical problems arise: how can we create a layout that fits as much information as possible onto as few pages as possible, to keep costs down? Where can we print? This is always done differently: everyone can use the data to create and print their own booklets, or download existing print versions from our website, but often we collaborate with the people who need booklets, help raise money for printing costs, locate affordable printers, and organize transport and distribution. Creating the phrasebook, from data to print to distribution and use, is already an act of conversion and conversation.
All these aspects give us valuable feedback on how to make things easier, where the data is inconsistent, what is lacking in terms of phrases, languages, information and so on. At the moment, we are working on making the conversion from language data in the spreadsheets to printed booklets easier, so people can simply select what they want and don’t have to go through tedious conversion processes. Refugee Phrasebook is a tool for places with the worst infrastructure, which is why it has also been used as insulation during cold weather and to kindle fires. It’s a simple and versatile resource. In this case, it can be helpful to burn books.
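
As a rough illustration of that kind of conversion, the sketch below selects a few language columns from a CSV export of the phrase data and writes a simple printable HTML table. The file name and column headers are hypothetical, not the project's actual spreadsheet layout:

    # Hypothetical sketch: select language columns from a phrasebook CSV and
    # write a simple printable HTML table. The file name and column headers
    # are assumptions, not the actual Refugee Phrasebook spreadsheet layout.
    import csv
    import html

    SELECTED = ["English", "Arabic", "Farsi"]  # columns a helper wants to print

    def build_booklet(csv_path, out_path, columns):
        with open(csv_path, newline="", encoding="utf-8") as f:
            rows = list(csv.DictReader(f))
        parts = ["<meta charset='utf-8'><table border='1'>"]
        parts.append("<tr>" + "".join(f"<th>{html.escape(c)}</th>" for c in columns) + "</tr>")
        for row in rows:
            cells = "".join(f"<td>{html.escape(row.get(c, ''))}</td>" for c in columns)
            parts.append(f"<tr>{cells}</tr>")
        parts.append("</table>")
        with open(out_path, "w", encoding="utf-8") as f:
            f.write("\n".join(parts))

    build_booklet("phrases.csv", "booklet.html", SELECTED)

The resulting HTML page can be printed directly from a browser, which is one simple way around the tedious manual conversion the interview describes.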

What kinds of outcomes have you seen from sharing the data under CC0? Has the use of CC been particularly relevant in terms of your work and what you are setting out to do?
Thanks to the open CC0 dedication, the translations could be used in several apps and other translation projects. Designers used the data to create signs and other communication aids, and local initiatives created custom print versions.

Being able to adapt the content to a specific use was especially important for the camps, as refugees encountered different languages across Europe and often could only stay for a few days. In a situation of such urgency, we wanted everyone to be able to adapt and share the translations freely. We will continue to share content under a CC license.

The next step is an automatic solution to create custom PDF files, as well as more icons.

What kinds of outcomes have you seen from this project, more generally? How have you balanced the project’s growth in terms of the usage of the physical and virtual assets as well as the ever expanding scope of a project such as this one?
The demand for print versions was a surprise at first, but electricity and wifi are often not available in refugee camps and shelters. A decentralized structure with independent, connected regional projects helped develop our community and supports the project’s growth. The printing of the phrasebooks is often organized locally. Updating the tables is a time-consuming task, so feedback from helpers is an important motivation. In the last month, we saw demand shift to the south of Europe, where current policies have moved refugees out of sight without providing substantial support. The need for shelter and a welcoming culture has not diminished, and language support is only a very small part of what is necessary.

What’s next for the Refugee Phrasebook? How do you see it evolving?
Though it might be easy to start projects like this, it can be harder to sustain them. At the moment we are focusing on consolidating the project while expanding the target language base to improve translations of prominent but lesser-known African languages like Tigrinya.
The growing and changing data set has been included in various other projects from the beginning, often without us even knowing about it. The evening before Refugee Phrasebook received the Ars Electronica Award of Distinction in the category Digital Communities, we received an email that someone in Washington had used it in an app.

It lives on in apps, language learning cards, other aid websites, and more, and is a great example of open data and peer-to-peer collaboration.

We do not see it simply growing and growing; rather, we are looking to build it into a sustainable, stable resource that is easy to use, expand, and adapt into other projects. We hope to continue to develop the global community we have established through the phrasebook.

Solving some of the world’s toughest problems with the Global Open Policy Report

Tuesday, December 6, 2016 at 15:24

Read the Global Open Policy Report


Open policy is when governments, institutions, and non-profits enact policies and legislation that make the content, knowledge, or data they produce or fund available under a permissive license, allowing reuse, revision, remix, retention, and redistribution. This promotes innovation, access, and equity in education, data, software, heritage, cultural content, science, and academia.

For several years, Creative Commons has been tracking the spread of open policies around the world. And now, with the new Global Open Policy Report (PDF) by the Open Policy Network, we’re able to provide a systematic overview of open policy development.

This first-of-its-kind report gives an overview of open policies in 38 countries across four sectors: education, science, data, and heritage. The report includes an Open Policy Index, regional impact summaries, and local case studies from Africa, the Middle East, Asia, Australia, Latin America, Europe, and North America. The index measures open policies on two scales: strength and scope of the policy, and level of implementation. It was developed by researchers from CommonSphere, a partner organization of CC Japan.

The Open Policy Index scores were used to classify countries as either Leading, Mid-Way, or Delayed in open policy development. The ten countries with the highest scores are Argentina, Bolivia, Chile, France, Kyrgyzstan, New Zealand, Poland, South Korea, Tanzania, and Uruguay.

The Index scores show that open data policies are the most common, while the rarest open policies are in the heritage sector. Our data also shows a clear correlation between the scope of policy and the level of its implementation. “The Open Policy Index is the first measurement tool that aims at cross-sector comparison of policies, at global scale. The 2016 edition is a prototype which we will be developing further in coming years. We would like to double the number of indexed countries to cover all those in which Creative Commons is active,” says CC Poland’s Alek Tarkowski, one of the leaders of the project.

In his introduction, Creative Commons Public Policy Lead Timothy Vollmer calls us to action, saying that with open policies we have the opportunity, the infrastructure, and the ability to “improve educational opportunities and help solve some of the world’s toughest scientific challenges.”

This report documents global achievements from teams all over the world. Each section was written by experts in open policy in their region. Kelsey Wiens, Project Manager for the Global Open Policy Report, emphasizes the importance of communities and open policy: “We need to leverage effective open policies with vibrant, active communities to embrace, embed, and enhance policies in addition to written statements. Without communities like Creative Commons and OPN, policies are simply paper, not actions.”

Our partners in collaboration are:

Carolina Botero – Karisma Foundation (Colombia)
María Juliana Soto – Karisma Foundation (Colombia)
Laura Mora – Karisma Foundation (Colombia)
Tomohiro Nagashima – CommonSphere (Japan)
Tomoaki Watanabe – CommonSphere (Japan)
Alek Tarkowski – Centrum Cyfrowe (Poland)
Kelsey Wiens – Currently CC Canada, formerly CC South Africa (Canada)
Nicole Allen – SPARC (United States)
Delia Browne – Australia National Copyright Unit (Australia)
Baden M Appleyard – AusGOAL (Australia)
Jessica Smith – Australia National Copyright Unit (Australia)
Nancy Salem – Access to Knowledge for Development Center (Egypt)
Editor: Isla Haddow-Flood (South Africa)
Graphics: Atramento.pl (Poland)
Survey Partner: CommonSphere (Japan)

We would not have been successful without the participation and support of the Creative Commons Affiliate Network. We thank all who participated in the survey, were interviewed for the case studies, or provided research support.

This project was part of the Open Policy Network grants, made possible by a generous donation from the Hewlett Foundation; the Open Policy Network is supported by Creative Commons.

Open Practices and Policies for Research Data in the Marine Community

Thursday, December 1, 2016 at 21:07

In March we hosted the second Institute for Open Leadership. In our summary of the event we mentioned that the Institute fellows would take turns writing about their open policy projects. This week’s post is by Alessandro Sarretta of the Institute of Marine Sciences (ISMAR), part of the Italian National Research Council.


2016 has been a great year for me, both personally and professionally, in understanding, embracing, and disseminating the culture of sharing open knowledge. One thing that really helped was my participation in the second Institute for Open Leadership (IOL), held in March in Cape Town, South Africa. Creative Commons brought together 15 fellows from 14 different countries to learn about and discuss open knowledge, and to propose a specific open policy project to be improved and supported by the contributions of the other fellows and mentors.

Centenary Tree Canopy Walkway by Alessandro Sarretta, CC BY 2.0.

As a researcher in the field(s) of Coastal and Marine Environment and Geospatial Information, I’m constantly dealing with data. Data are the core of science, and research has to be based on sound and reliable data.

Since at least 2002, there’s been a strong movement to allow online research outputs (referring principally to scholarly papers) to be published “free of all restrictions on access (e.g. access tolls) and free of many restrictions on use” (Open Access movement).

When talking about data, things usually get more complicated, and the open access community is still working to find the best way to allow open access to research data. One part of this requires working to convince both researchers and funders that this is the way to support better science.

In the marine community there is already a solid history of common standards for metadata and formats. There are also various portals (e.g. EMODnet, Jerico) and projects (e.g. SeaDataNet) for accessing a great variety of research data related to seas and oceans.

However, data policies and licences that regulate access to data are, when available, usually custom-made, requiring the filing of specific forms before use. Oftentimes these custom licenses do not clearly address the reuse of data and information.

The use of common, standard, open licences would help users to understand what they can do with the data. It would also ensure that the data providers would be able to easily share their products, with easy to understand conditions for reuse.

My goal as an IOL fellow is to inform relevant marine communities of the benefits of an open research data policy and, more specifically, to apply these principles to the practices within my institute—the Institute of Marine Sciences (ISMAR), part of the Italian National Research Council.

One of the deliverables for the Italian flagship project RITMARE (Italian research for the Sea) was to clearly define a data policy for the initiative. The document (written in Italian) defines categories of data to which different moratorium periods apply before release; for all data in the project, it requires that an open license be applied, mentioning Creative Commons licences as one of the standard options, with CC BY as the recommended first choice.

Fig. 1: Data Policy rules for the RITMARE project (from Paola Carrara et al., Facing data sharing in a heterogeneous research community: lights and shadows in the RITMARE project. https://dx.doi.org/10.6084/m9.figshare.4244375.v2)

Another way to help RITMARE researchers share their data—and also ensure they receive recognition for their scientific outputs—is by launching a grant program that will provide funding for researchers who wish to publish data papers. These grants will be provided to support the payment of the article processing fees required by open access data journals. The main requirement of the funding is that researchers must deposit their data in an open access data repository under a CC BY or CC0 license.

We are working on other initiatives that represent a bottom-up, collaborative research approach. Two of them are well established and almost finalised. First, a repository is being developed that includes digital images of both historical and recent materials of multiple types: a historical library that includes books, photographs, manuscripts, and more, from the end of the 16th century onward; a collection of maps from the 16th century, mostly devoted to the Adriatic Sea and the Lagoon of Venice; and an algal collection comprising a historical section, assembled during the Second World War, of more than a thousand vouchers, plus a modern collection in progress. All these materials will be released under a CC BY license through two main interoperable data portals based on open source infrastructure. Second, the data from six meteo-oceanographic buoys in the Adriatic Sea has recently been organised in a common database containing time series for various parameters. This data will be made available under a CC BY license and published as open data in a research data repository.
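
As a toy illustration of such a time-series database, the sketch below uses Python’s built-in sqlite3 module and stores one row per buoy, timestamp, and parameter. The table layout, buoy identifier, and parameter names are assumptions, not ISMAR’s actual schema:

    # A minimal sketch of one way to organise buoy time series in a common
    # database, using sqlite3 from the standard library. Table layout and
    # parameter names are assumptions, not the actual ISMAR schema.
    import sqlite3

    conn = sqlite3.connect("buoys.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS observations (
            buoy_id     TEXT NOT NULL,  -- one of the six Adriatic buoys
            observed_at TEXT NOT NULL,  -- ISO 8601 timestamp (UTC)
            parameter   TEXT NOT NULL,  -- e.g. 'sea_temperature_c'
            value       REAL,
            PRIMARY KEY (buoy_id, observed_at, parameter)
        )
    """)
    conn.execute(
        "INSERT OR IGNORE INTO observations VALUES (?, ?, ?, ?)",
        ("ADRI-01", "2016-11-30T12:00:00Z", "sea_temperature_c", 14.2),
    )
    conn.commit()

    # Retrieve a single parameter's time series for one buoy.
    for row in conn.execute(
        "SELECT observed_at, value FROM observations "
        "WHERE buoy_id = ? AND parameter = ? ORDER BY observed_at",
        ("ADRI-01", "sea_temperature_c"),
    ):
        print(row)

One row per (buoy, timestamp, parameter) keeps the schema stable even as new sensors and parameters are added, which suits a heterogeneous, growing collection like this one.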

Carlos Moedas, European Commissioner for Research, Science and Innovation, said in a speech on “Open Innovation, Open Science, Open to the World” that researchers should be able to rely on free access to research data, and that data needs to be “Findable, Accessible, Interoperable, and Reusable” (the “FAIR Guiding Principles for scientific data management and stewardship”). The Institute of Marine Sciences is embracing these principles by opening its data to the marine community and wider society. While a comprehensive open data policy for the institute has not yet been adopted, various initiatives fully support this vision. We are making valuable data open and reusable through advances in technical infrastructure, standard formats, interoperable services, and Creative Commons licenses.

How fast is your internet? How M-Lab uses CC0 data for the public interest

Wednesday, November 30, 2016 at 22:46

Though the idea of the internet as infrastructure may have seemed radical only a short while ago, many technologists now take it as a given: as a vital part of modern life, access to reliable internet is essential to the development of a just and equitable society. Built in response to proprietary measurement datasets, M-Lab has assembled the world’s largest collection of open internet measurement data, all under a CC0 license.

A collaborative project from New America’s Open Technology Institute, Google Open Source Research, Princeton University’s PlanetLab, and many others, M-Lab owes its success to an insistence on open data and an open web, maintaining the tests that help keep the web free and open. From researchers to consumers, M-Lab’s data informs our understanding of the internet’s health: an example of open collaboration that benefits consumers, researchers, and the future of the web.

To read M-Lab’s reports and try their tools, visit their website. Thanks to Chris Ritzo, Georgia Bullen, Alison Yost, Collin Anderson, and Stephen Stuart for taking the time to answer these questions.

Why does Internet measurement matter? What is the ultimate goal of this project?

Measurement Lab’s goal is to provide an open, publicly available dataset and the platform on which to gather it. There have always been proprietary data sources about the quality of consumer broadband connections, but those were and are the intellectual property of companies like Ookla, Akamai, Google, and network operators themselves. New America’s Open Technology Institute, Google, and Princeton University’s PlanetLab formed a consortium to build a data collection platform that could host a common base of internet measurement experiments developed and vetted by the academic research community, be deployed globally, and over time provide what is now the largest open, publicly available internet measurement dataset in the world. Today we run over 100 measurement points around the world and collect an average of over 9 million tests per month worldwide.

From a consumer perspective, are you getting the speed and quality of service you purchased from an ISP? Using a speed test or internet health test provides data to help answer that question. For regulatory agencies, measurement is a means of keeping tabs on broadband speeds, network health, consumer protections, anti-competitive practices, and more. For network operators, measurement is paramount to understanding how to provision infrastructure and services. For civil society groups and human rights advocates, it is a means of assessing disparities in access to the internet and in the quality of available internet services, whether internet traffic is surveilled by state actors or others, and whether and where the internet is censored or blocked. The research community is also keenly interested in openly available internet measurement data, in order to understand and answer many of these questions, and in many cases to devise ways to make the internet function better.

How did you make the decision to use CC0 data? How does your organization support the commons?

M-Lab uses a CC0 license on the data for experiments that we maintain or contribute to: NDT, Paris Traceroute and Sidestream. We don’t require researchers hosting other experiments to use the same license, but we do require data to be provided openly. In some cases M-Lab will agree to embargo data for an agreed upon period of time such that the researcher can be the first to publish on the data their test collects. But the most popular tests we maintain on our platform are licensed with CC0 because we think that this data should be in the public domain, and using a CC0 license allows anyone to freely use it without restriction, particularly those in the academic community.

The choice to use a CC0 license goes back to our beginning. The academic community interested in researching the internet needed a data source and couldn’t get that from private companies. Providing that data would have violated companies’ terms of service with their users, and even if it had been legally possible, anonymizing it had proven questionable, if not ineffective. Initiatives like PlanetLab at Princeton University had made some progress toward the idea of a research platform that could be used to collect such data, but didn’t necessarily measure at the scale of the consumer internet. Instead, the M-Lab core team engaged with academics, company representatives, and others to map out an internet measurement platform that would support the work of the research community, situate infrastructure to measure the consumer internet, and provide open data in the service of the public interest. This was the genesis of M-Lab. So from the very beginning we’ve always supported the commons.

On your “About” page, you write that “transparency and review are key to good science.” Can you elaborate on that? How do you feel that your project participates in the scientific process to make the Web better for everyone?

M-Lab was created as a platform to produce open data about the health of consumer internet connections. Everything from the submission of proposed tests to the hosting of resulting data mirrors the process of submitting a paper to an academic journal. M-Lab defines the parameters that an experiment must adhere to, and academic or regulatory researchers apply to host their tests with us. Applications are reviewed by an experiment review committee, which confirms that the researcher has ethical approval from their Institutional Review Board and that the proposed test conforms to M-Lab’s data privacy policy, determines whether the test overlaps with existing tests, and assesses the researcher’s capacity for long-term support of the test. M-Lab wants to encourage ongoing longitudinal research, not one-off projects, and make the data available openly for broad analysis and research.

We regularly support researchers interested in secondary data analysis with documentation, sample queries and tools to access, visualize and use M-Lab data, and where possible we produce our own analysis and research. This support varies from individual researchers and graduate students, to civil society and research organizations, to national regulatory agencies. In the United States, the FCC’s contractor, SamKnows, uses the M-Lab platform to host a portion of the tests for the annual Measuring Broadband America program. In Canada, the Canadian Internet Registration Authority (CIRA) hosts three M-Lab sites throughout Canada and has built their own national data portal using M-Lab’s data which also integrates our test.

Additionally, because our tests are open source, we support their integration into other websites, software, and other platforms. These developer integrations are key to our expansion and impact in new areas of the world and among new audiences. Most recently, Google’s Search team integrated Internet2’s Network Diagnostic Tool (NDT) as a top-level answer in their Search product. When you search for “how fast is my internet” or similar, the Google version of our test can be run immediately in your browser.

What kinds of results have you seen that are particularly exciting, surprising, or troubling from this project? What steps can people take to improve the Web? How can they use your project to do so?

M-Lab initially focused on providing the platform and data, leaving analysis to the research or regulatory community. As we’ve grown in size and interest, we have focused on building more accessible tools to run tests, visualize and download our data as well as support individuals and groups interested in using our data in their work.

The M-Lab team is also now working on our own research as well as supporting new inquiries into our data. In October 2014 the M-Lab research and operations team published a technical research report: ISP Interconnection and its Impact on Consumer Internet Performance. The data in this report helped to inform the FCC and supported its historic ruling in favor of Net Neutrality in 2015. Our data and analysis showed clear indicators of congestion and bad performance at the Interconnection points between consumer ISPs and Transit providers. We’ve since presented it to the FCC, NANOG, and at numerous international network operator gatherings. Before the M-Lab report, interconnection wasn’t even on the FCC’s radar. We’ve also supported individual researchers interested in using M-Lab data, through our support email, but also directly. In 2015, M-Lab hosted two research fellows who examined our network performance data in new ways. One fellow examined the economic geography of access by using M-Lab data and US Census data. Another worked on a machine learning algorithm that identifies anomalies in normalized M-Lab data, attempting to identify patterns in our data where known internet shutdowns had occurred.

Anyone can use M-Lab’s public data, tools and open visualizations for free.

M-Lab operates in the public interest, providing open data, open source tools, visualizations, and documentation to support our own research, and yours.

People can test the speed and latency of their connection using our site: https://speed.measurementlab.net/. We also have an extension for Google’s Chrome browser, M-Lab Measure, that allows you to schedule tests to be run regularly.

Because M-Lab data is open and all of our tests are open source, developers can integrate our data or our tests into their own applications, services, web mashups, and more. We provide source code, documentation, and implementation examples to enable you to leverage our data, tests, and infrastructure. Learn more about the project and how to get involved on our website, and contact us for more information.
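
As one hypothetical example of such an integration, M-Lab’s data is published through Google BigQuery, so a few lines of Python can aggregate test counts over a date range. The table and column names below are assumptions based on M-Lab’s published schemas, which have changed over time, so consult the documentation on their site for current names:

    # Hypothetical sketch of querying M-Lab's open NDT data in BigQuery with
    # the google-cloud-bigquery client (pip install google-cloud-bigquery).
    # Table and column names are assumptions and may differ from the current
    # schema; see https://www.measurementlab.net/data/ for details.
    from google.cloud import bigquery

    client = bigquery.Client()  # requires Google Cloud credentials

    QUERY = """
        SELECT date, COUNT(*) AS tests
        FROM `measurement-lab.ndt.unified_downloads`  -- assumed table name
        WHERE date BETWEEN '2016-11-01' AND '2016-11-30'
        GROUP BY date
        ORDER BY date
    """

    for row in client.query(QUERY):  # runs the query and iterates result rows
        print(row["date"], row["tests"])

Because the dataset is CC0, a query like this can feed a dashboard, a research paper, or a national data portal like CIRA’s without any licensing negotiation.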
