It’s difficult to reflect on the past year—or forecast the next—without a sense of wonder regarding the sheer magnitude of innovation taking place across the AI landscape. On a weekly basis, researchers across industry and academia have published work advancing the state of the art in nearly every domain of AI, topping benchmark leaderboards and accomplishing feats beyond what we could have imagined even a few years ago.
In large part, this progress is due to the rapid advancements we’ve seen in large AI models. Recent progress in supercomputing techniques and new applications of neural network architectures have allowed us to train massive, centralized models that can accomplish a wide variety of tasks using natural language inputs—from summarizing and generating text with unprecedented levels of sophistication, to even generating complex code for developers.
The combination of large language models and coding resulted in two of the most powerful AI developments we witnessed in 2022: the introduction of the OpenAI Codex Model—a large AI model that can translate natural language inputs into more than a dozen programming languages—and the launch of GitHub Copilot, a programming assistant based on Codex.
Historically, computer programming has been all about translation: Humans must learn the language of machines to communicate with them. But now, Codex lets us use natural language to express our intentions, and the machine takes on the responsibility of translating those intentions into code. It’s basically a translator between the human imagination and any piece of software with an API.
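To make that translation concrete, here’s a small, purely illustrative sketch: a plain-English request at the top, followed by the kind of Python a code-generation model like Codex might return. The CSV file, column names, and function are hypothetical examples, not actual Codex output.

```python
# Prompt (natural language): "Read sales.csv and print the total revenue,
# where revenue is price times quantity for each row."

# The kind of code a model like Codex might generate from that request
# (a hypothetical illustration, not real model output):
import csv

def total_revenue(path: str) -> float:
    """Sum price * quantity across every row of a CSV file."""
    total = 0.0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            total += float(row["price"]) * float(row["quantity"])
    return total

if __name__ == "__main__":
    print(total_revenue("sales.csv"))
```

The point isn’t the specific code; it’s that the person only had to describe the outcome they wanted, and the machine handled the translation into a language the computer understands.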
Codex has enabled the creation of GitHub Copilot, a virtual programming partner that, on average, generates more than 40 percent of the code for developers who use it. Over the coming months and years, as large AI models reliably scale up in size and become more powerful, GitHub Copilot will become increasingly useful for the developers relying on it, freeing up their time for more engaging and creative work and enhancing their efficiency.
In and of itself, that’s a remarkable step forward in productivity for developers, a community of knowledge workers wrestling with extraordinary complexity and unprecedented demand for their talents. But it’s just the first of many steps we’ll see taken in 2023, as this pattern is repeated across other kinds of knowledge work.
In 2023, we will see Codex and other large AI models used to create new “copilots” for other types of intellectual labor. The applications are potentially endless, limited only by one’s ability to imagine scenarios in which such productivity-assisting software could be applied to other types of complex, cognitive work—whether that be editing videos, writing scripts, designing new molecules for medicines, or creating manufacturing recipes from 3D models.
By applying the same underlying technology used to create GitHub Copilot, it will be possible to build copilots for virtually any complex, repetitive aspect of knowledge work. That would free knowledge workers to spend their time on higher-order cognitive tasks and transform how a great many of us interact with technology to get things done.
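As a rough sketch of what that pattern could look like in practice, the recipe is much the same in any domain: wrap a large language model behind a prompt that encodes a repetitive drafting task, then hand its output to a person for review. The example below assumes the pre-1.0 OpenAI Python client and an OPENAI_API_KEY environment variable; the model name, prompt, and release-notes task are illustrative placeholders, not a description of how GitHub Copilot or any particular product is built.

```python
# A minimal sketch of a domain "copilot": delegate a repetitive drafting task
# to a large language model, then return the draft for a human to review.
# Assumes the pre-1.0 OpenAI Python client; the model name and prompt are
# illustrative placeholders.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def draft_release_notes(commit_messages: list[str]) -> str:
    """Turn raw commit messages into a first draft of release notes."""
    prompt = (
        "Summarize the following commit messages as user-facing release notes:\n"
        + "\n".join(f"- {msg}" for msg in commit_messages)
        + "\nRelease notes:\n"
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # placeholder model name
        prompt=prompt,
        max_tokens=300,
        temperature=0.3,
    )
    return response["choices"][0]["text"].strip()

if __name__ == "__main__":
    print(draft_release_notes([
        "Fix crash when saving empty files",
        "Add dark mode toggle to settings",
    ]))
```

The division of labor is the important design choice: the model drafts, the person reviews and decides, which is the same relationship GitHub Copilot establishes between a developer and their code.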
Our increasingly complicated and information-dense world requires more knowledge work every year, imposing ever-greater demands on knowledge workers in every field and industry. Copilots for Everything could offer a genuine revolution for the kinds of work where productivity gains have been few and far between since the invention of the personal computer and the internet.