Large Language Model Performance Raises Stakes

Benchmarking large language models presents some unusual challenges. For one, the main purpose of many LLMs is to produce compelling text that’s indistinguishable from human writing. And success at that task may not correlate with the metrics traditionally used to judge processor performance, such as instruction execution rate.

But there are solid reasons to keep trying to gauge the performance of LLMs. Otherwise, it’s impossible to know quantitatively how much better LLMs are becoming over time—and to estimate when they might be capable of completing substantial and useful projects by themselves.

[Chart: Scatter plot showing a negative correlation between success rate and task-messiness score. Large language models are more challenged by tasks that have a high “messiness” score. Source: Model Evaluation & Threat Research]

That was a key motivation behind work at Model Evaluation & Threat Research (METR). The organization, based in Berkeley, Calif., “researches, develops, and runs evaluations of frontier AI systems’ ability to complete complex tasks without human input.” In March, the group released a paper called “Measuring AI Ability to Complete Long Tasks,” which reached a startling conclusion: According to a metric it devised, the capabilities of key LLMs are doubling every seven months. This realization leads to a second conclusion, equally stunning: By 2030, the most advanced LLMs should be able to complete, with 50 percent reliability, a software-based task that takes humans a full month of 40-hour workweeks. And the LLMs would likely be able to do many of these tasks much more quickly than humans, taking only days, or even just hours.
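The arithmetic behind that 2030 estimate is straightforward to check. Below is a back-of-the-envelope sketch in Python; the one-hour starting horizon is an assumption chosen purely for illustration, not a figure taken from the METR paper.

```python
from math import log2

# Back-of-the-envelope extrapolation of METR's seven-month doubling trend.
# ASSUMPTION: a current 50%-reliability time horizon of 1 hour, chosen
# purely for illustration (not a figure reported in the METR paper).
current_horizon_hours = 1.0
doubling_period_months = 7         # METR's reported doubling period
target_hours = 40 * 4.33           # one month of 40-hour workweeks, ~173 hours

doublings_needed = log2(target_hours / current_horizon_hours)
months_needed = doublings_needed * doubling_period_months

print(f"{doublings_needed:.1f} doublings, about {months_needed:.0f} months "
      f"({months_needed / 12:.1f} years) out")
```

Under that assumed starting point, a roughly 173-hour horizon arrives about 4.3 years out, consistent with the article’s 2030 timeline.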

An LLM Might Write a Decent Novel by 2030

Such tasks might include starting up a company, writing a novel, or greatly improving an existing LLM. The availability of LLMs with that kind of capability “would come with enormous stakes, both in terms of potential benefits and potential risks,” AI researcher Zach Stein-Perlman wrote in a blog post.

At the heart of the METR work is a metric the researchers devised called “task-completion time horizon.” It’s the amount of time human programmers would take, on average, to do a task that an LLM can complete with some specified degree of reliability, such as 50 percent. A plot of this metric for some general-purpose LLMs going back several years [main illustration at top] shows clear exponential growth, with a doubling period of about seven months. The researchers also considered the “messiness” factor of the tasks, with “messy” tasks being those that more resembled ones in the “real world,” according to METR researcher Megan Kinniment. Messier tasks were more challenging for LLMs [smaller chart, above].
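A doubling period like METR’s falls out of a log-linear fit: regress the base-2 logarithm of the time horizon against the date, and the inverse of the slope gives the months per doubling. The sketch below shows the idea with made-up data points, not METR’s measurements.

```python
import numpy as np

# Minimal sketch of extracting a doubling period from a log-linear fit.
# ASSUMPTION: the (months, hours) points below are made-up illustrative
# values, not METR's measured time horizons.
months = np.array([0, 12, 24, 36, 48])          # months since first model
horizon_hours = np.array([0.1, 0.33, 1.1, 3.6, 12.0])

# Fit log2(horizon) = slope * months + intercept; exponential growth
# is a straight line on this log scale.
slope, intercept = np.polyfit(months, np.log2(horizon_hours), 1)

doubling_period_months = 1 / slope
print(f"Doubling period: about {doubling_period_months:.0f} months")
```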

If the idea of LLMs improving themselves strikes you as having a certain singularity-robocalypse quality to it, Kinniment wouldn’t disagree with you. But she does add a caveat: “You could get acceleration that is quite intense and does make things meaningfully more difficult to control without it necessarily resulting in this massively explosive growth,” she says. It’s quite possible, she adds, that various factors could slow things down in practice. “Even if it were the case that we had very, very clever AIs, this pace of progress could still end up bottlenecked on things like hardware and robotics.”
