Startup is building the first data centers to use human brain cells


A small number of companies are working on biological computers
Floriana/Getty Images
Data centers consume enormous amounts of power and chips are in high demand. Could brain cells be the answer? Australian start-up Cortical Labs has announced the construction of two “biological” data centers in Melbourne and Singapore, equipped with the same neuron-laden chips it has demonstrated can play the video game Pong.
Cortical Labs is one of a small number of companies developing biological computers, made up of neuronal cells connected to microelectrode arrays that can stimulate the cells and measure their response when fed data. Earlier this month, the company demonstrated that its flagship computer, the CL1, could learn to play Pong within a week.
Today, Cortical Labs revealed two data centers it plans to build. The first, in Melbourne, will contain around 120 CL1 units. The second, built in collaboration with the National University of Singapore, will initially house 20 CL1s, but the company hopes eventually to run 1,000 units in a larger data center there, subject to regulatory approval. Cortical Labs says this will allow it to expand its cloud-based brain computing service.
Biological computers like the CL1 are built and tested by research groups around the world, but they are often difficult to build and difficult for others to use, says Michael Barros of the University of Essex, UK. “We spend a lot of money and sweat to build these [systems].”
“What [Cortical Labs] is doing is basically enabling their biocomputer to be accessible at scale,” says Barros, who already uses Cortical Labs’ cloud services in his research. “They’ll be the first to do it.”
Although these systems can be trained for relatively simple tasks, like playing Pong, exactly how these neurons work and how best to train them to perform tasks such as machine learning is still unclear, says Reinhold Scherer, also at the University of Essex. “Having access to that allows you to explore learning, training and programming,” he says. “We don’t program neurons like standard computers.”
Cortical Labs says its data centers will also require significantly less power than typical computing systems, with each CL1 needing about 30 watts, rather than the thousands of watts drawn by a conventional cutting-edge AI chip.
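To put those numbers side by side, here is a rough back-of-envelope sketch in Python. The 30-watt figure per CL1 and the 120-unit Melbourne count come from the article; the roughly 1,000 watts assumed per cutting-edge AI chip is an illustrative assumption, not a figure from Cortical Labs.

```python
# Back-of-envelope power comparison using the figures quoted in the article.
CL1_WATTS = 30          # Cortical Labs' stated draw per CL1 unit
GPU_WATTS = 1000        # assumed draw of one cutting-edge AI chip (illustrative)
MELBOURNE_UNITS = 120   # planned CL1 count for the Melbourne site

# Total draw of the whole planned Melbourne deployment
total_cl1 = CL1_WATTS * MELBOURNE_UNITS

print(f"120 CL1 units draw about {total_cl1} W in total")
print(f"That power budget covers only {total_cl1 // GPU_WATTS} such AI chips")
```

On these assumptions, the entire 120-unit site would draw on the order of a few kilowatts, roughly the budget of a handful of conventional AI chips, though as noted below the two are not doing comparable work.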
“When we scale them up and turn them into entire rooms, as is currently the case with data servers, then we could achieve huge energy savings,” says Paul Roach of Loughborough University, UK. Biological data centers will need other resources, such as nutrients to power the neural chips and keep them alive, but they should require far less cooling than conventional computing, he says. “The amount of energy saved based on [Cortical Labs’s] figures is quite conservative.”
However, the technology is still in its early stages, says Tjeerd olde Scheper of Oxford Brookes University, UK, who has worked with a rival biological computing company, FinalSpark. “Is this going to work like people might think? No, we’re still in the early days of this development.”
It’s difficult to make a direct size comparison because the CL1 chips can’t perform conventional calculations the way a regular silicon-based AI chip can, but the proposed biological data center will feature hundreds of biochips, compared with the hundreds of thousands of graphics processing units (GPUs) found in the largest AI data centers.
“I think there’s a very long way to go before we’re production ready. It’s a very big step from a small network playing a video game to an LLM,” says Steve Furber of the University of Manchester, UK.
One of the remaining problems is that it is still unclear how to store the results of training these neurons in some form of memory, or how to run real computational algorithms on them, rather than training them for specific uses like video games.
Another challenge is how to retrain neurons once they have completed a particular task. “Everything they are trained on is lost when the culture ends, so there needs to be proper retraining,” says Scherer. “It’s not an optimal solution to keep a technology running if you have to retrain it every 30 days.”