GitHub is one of the largest code-hosting platforms on the Internet. It hosts billions of lines of code, creating an unparalleled dataset on which to train a coding AI. And that is exactly what OpenAI, working with GitHub and its owner, Microsoft, has done: training Copilot on public repositories.
The chances are you haven’t tried Copilot yet, because it’s still invite-only via a VSCode plugin. Those who have report that it’s a stunning tool, albeit with a few limitations; it transforms coders from writers into editors, because when code is inserted for you, you still have to read it to make sure it does what you intended.
Some developers have cried “foul” at what they see as overreach by a corporation unafraid of copyright infringement when long-term profits are on offer. There have also been reports of Copilot spilling private data, such as API keys. If, however, as GitHub states, the tool has been trained only on publicly available code, the real question is: which genius saved an API key to a public repository?
GitHub’s defense has been that it has only trained Copilot on public code, and that training AI on public datasets is considered “fair use” in the industry because any other approach would be prohibitively expensive. However, as reported by The Verge, there is a growing debate over what constitutes “fair use”; the TL;DR being that if an application is commercial, then anything it produces is potentially derivative.
If a judge rules that Copilot’s code is derivative, then any code created with the tool is, by definition, derivative. Thus, we could conceivably reach the point at which a humans.txt file is required to credit everyone who deserves kudos for a site or app. It seems far-fetched, but we’re talking about a world in which restaurants serve tepid coffee for fear of litigation.
There are plenty of idealists (a group to which I could easily be accused of belonging) who nurture a soft spot for the open-source, community-driven web. And of course, many who walk the halls (or at least log into the Slack) of Microsoft, OpenAI, and GitHub are of the same inclination, contributing generously to open-source projects, mentoring, blogging, and offering a leg-up to other coders.
When I first learnt to code HTML, step one, before <p>Hello, World!</p>, was View > Developer > View Source. Most human developers have been actively encouraged to read other people’s code to understand the best way to achieve something; after all, that’s how web standards emerged.
Some individuals are perhaps owed credit for their work. One example is Robert Penner, whose work on easing functions inspired a generation of ActionScript/JavaScript coders. Penner published his functions online for free, under the MIT license; he also wrote a book that taught me, among other things, that a while loop beats a for loop, a lesson I use every day. I’d like to think the royalties bought him a small Caribbean island (or at least a vacation on one).
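For readers who never ran into them, Penner’s equations simply map elapsed time to an animated value. The quadratic ease-in-out below is a minimal JavaScript sketch of the idea, following his time/begin/change/duration parameter convention; it’s illustrative rather than a verbatim copy of his published code.

```js
// Quadratic ease-in-out: accelerate through the first half of the
// animation, decelerate through the second half.
// t = elapsed time, b = start value, c = total change in value, d = duration.
function easeInOutQuad(t, b, c, d) {
  t /= d / 2;
  if (t < 1) return (c / 2) * t * t + b;
  t -= 1;
  return (-c / 2) * (t * (t - 2) - 1) + b;
}

// Animate a value from 0 to 100 over 1000ms, sampled a quarter of the way in.
console.log(easeInOutQuad(250, 0, 100, 1000)); // 12.5
```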
There is an important distinction between posting code online and publishing code examples in a book, namely that the latter is expected to be protected. Where Copilot is on questionable ground is that the AI is not a searchable database of functions; it generates code for specific problems, derived from the code it was trained on. On the surface, it appears that anything Copilot produces must be derivative.
I don’t have a public GitHub repository, so OpenAI learned nothing from me. But let’s say I did. Let’s say I had posted a JavaScript-powered animation from which Copilot garnered some of its understanding. Does Microsoft owe me a fraction of its profits? Do I in turn owe Penner a fraction of mine? Does Penner owe Adobe (which bought Macromedia)? Does Adobe owe Brendan Eich (the creator of JavaScript)? Does Eich owe James Gosling (the creator of Java), if not for the syntax, then for the name? And while we’re at it, which OS was Gosling using back in the mid-90s to compile his code? I doubt it was named after a fruit.
If this seems farcical, it’s because it is. But it’s a real problem, created by the fact that technology is moving faster than the law. Intellectual property rights defined before the advent of the home computer cannot possibly govern an AI-driven future.