OFE statement on the release of OSI Open Source AI Definition 1.0

29 October 2024

Author: Sivan Pätsch

OpenForum Europe (OFE) welcomes the Open Source Initiative’s (OSI) stewardship of the process that has now led to the release of version 1.0 of the Open Source AI Definition. In line with our vision statement, OFE promotes openness as a means to enable user centricity, competition, flexibility, sustainability and community. To ensure that “open” remains clear, and to fend off attempts at “open-washing”, OSI has been crucial and will remain so as technology progresses.

We wish to thank OSI for its work on the development of version 1.0 of the Open Source AI Definition, including the countless hours spent on community outreach and discussions. Although some aspects of the version 1.0 text will require further work, in OFE’s view its spirit is clear and sensible, and it is already sufficient to cover the main real-world cases that exist today.

OSI has been the steward of the Open Source Definition since 1998, and the Open Source AI Definition continues that work to address certain technical issues raised by AI. Specifically, AI models differ from software in that they have no human-written form (“source code”) and are created from training data, which brings legal and technical challenges that are almost always outside our control.

Addressing this difference is complex because of its technical and social aspects, but OFE particularly welcomes the most crucial aspect of the Open Source AI Definition: Open Source is not a gradient; it is binary. Where that binary line is drawn may occasionally need to be clarified for new contexts, but attempts to blur the line by introducing gradients of openness would undermine the actual test: do we have the freedoms to use, study, modify and distribute modified versions? This has always been a question to which we give a yes or no answer. Blurring it would disrupt the open innovation cycle.

The Open Source Definition, applied to software licences, exists to ensure that anyone who receives the software has those four freedoms. The Open Source AI Definition ensures the same freedoms and the same binary categorisation via two requirements. The first is access to, and permission to modify, the AI model. That part is logical but not sufficient because, unlike software’s source code, a model is not inherently understandable to a human. The second, more innovative requirement is sufficient “data information” about the AI System so that a human can understand the AI model. This should enable a user, for example, to study the AI model for unwanted bias and to understand how to modify the AI model to remove or modify that bias.
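
As an illustration only (the Open Source AI Definition does not prescribe any format, and every field name below is our own hypothetical choice), “data information” could in practice take the form of a machine-readable record accompanying a model’s parameters, which a user could consult when auditing for bias:

    # Hypothetical sketch only: the Open Source AI Definition does not define
    # a schema for "data information"; this merely illustrates the kind of
    # record a user might rely on when studying a model for unwanted bias.
    from dataclasses import dataclass, field

    @dataclass
    class DataInformation:
        """Illustrative description of how a model's training data was assembled."""
        datasets: list                   # names or sources of the corpora used
        collection_methods: list         # how the data was gathered
        filtering_steps: list            # cleaning, deduplication, exclusions applied
        documented_skews: list = field(default_factory=list)  # known imbalances

    def audit_gaps(info):
        """List the questions a bias audit still cannot answer from this record."""
        gaps = []
        if not info.documented_skews:
            gaps.append("No documented demographic or topical skews to start from.")
        if not info.filtering_steps:
            gaps.append("Filtering pipeline undocumented; exclusions cannot be assessed.")
        return gaps

    example = DataInformation(
        datasets=["example-web-crawl-2023"],   # hypothetical dataset name
        collection_methods=["public web crawl"],
        filtering_steps=[],
    )
    print(audit_gaps(example))

The point of such a sketch is only that, given a sufficiently complete record of this kind, the question “can a human understand and meaningfully modify this model?” can be answered yes or no, rather than by degrees.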

We are still in the early days of using AI models, and the Open Source AI Definition is a version 1.0. Our understanding of exactly what “data information” is necessary will evolve along with the technology and our societal experience, but the approach is sensible and the current level of detail is already sufficient to generally yield the expected result for current AI Systems.

OFE looks forward to continuing its work to increase understanding of these issues in our ecosystem, and to working with OSI to review the suitability of the requirements in the Open Source AI Definition.