Tuesday, 13 May 2025

The biocomputing revolution

IEC Tech


The announcement of the world’s first commercialized “biological computer” earlier this year is the latest breakthrough in the emerging field of biocomputing, which, it is claimed, will not only dramatically reduce energy consumption for artificial intelligence (AI) processing but could also give rise to a form of intelligence superior to that of silicon-based machines.

Biocomputing fuses living biological matter, grown in a lab from proteins or DNA, with electronics hardware. This creates processing systems “that use less energy and learn from smaller datasets than conventional computers”, explains the Australian company behind the breakthrough.

“We are going to create a new industry,” says Dr Fred Jordan, co-CEO of a Swiss-based biocomputing developer that rents its electrophysiological system to private companies for experiments. “It is going to totally revolutionize the way we perform computation.”

Back to the origins of biocomputing

A key stepping stone on this path was the discovery (by John B. Gurdon and Shinya Yamanaka, who were jointly awarded the 2012 Nobel Prize in Physiology or Medicine) that mature human cells can be reprogrammed to develop tissues for any other part of the body, including the brain. Since then, neuroscientists have grown human cells to study brain diseases and to better understand how the brain works. Now, researchers are able to apply the same technology to computation.

“[Gurdon and Yamanaka] invented a way to create neural stem cells directly from your skin,” explains Jordan. “Because of that, we are able to create as many human brain neurons as we want. Our objective is non-medical, purely engineering, which is to construct a new type of computer processor.” The company’s biocomputing platform is composed of brain organoids about 0,5 mm in diameter built from 10 000 living neurons, but it says scalability is potentially unlimited.

Jordan says, “When we create neurons in the lab, we have to stop the multiplication [so as] not to get too many of them. Potentially, we could create nervous tissue of one billion neurons, 100 billion neurons or 1 000 billion neurons without major issues and also without major cost.” In a paper he co-wrote, Jordan explains that the core concept involves using living neurons to perform computations, similar to how artificial neural networks (ANNs) are used today.

If the training process can be perfected, organoids could eventually mimic silicon-based AI and serve as processing units with functions similar to today’s CPUs (central processing units) and GPUs (graphics processing units), the company says. “If you want to do biocomputing, you first have to solve the problem of training,” Jordan explains. “Training means modifying the synaptic connections [between cells] in order to get the output that you need. It is exactly the same problem we had with ANNs 30 years ago. However, the solution that was found for ANNs that gave birth to modern AI and ChatGPT is not applicable to biocomputers. We have to find a new way to tune these synaptic connections.”

Scientists think they have achieved this by stimulating living lab-grown cells with electrical pulses or chemicals (such as dopamine) in order to switch them on or off. This sends signals to the next cell in line, forming new pathways and triggering responses in much the same way that the human brain learns. “You can reward neurons when they do what you want and punish them when they don't do what you expect. This reconfigures the neural network to produce the right behaviour,” Jordan explains.
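This reward/punish approach has a rough software analogue in reward-modulated Hebbian learning. The Python sketch below is a purely in-silico toy illustration of the idea, assuming a simple two-output network; the update rule, variable names and parameters are illustrative assumptions, not the company’s actual stimulation protocol.

```python
import numpy as np

# Toy illustration of reward-modulated learning: strengthen the synapses
# behind a correct response ("reward"), weaken those behind an incorrect
# one ("punishment"). Entirely simulated; not a wetware protocol.

rng = np.random.default_rng(0)
n_in, n_out = 4, 2
weights = rng.normal(0.0, 0.1, size=(n_out, n_in))  # stand-in for synaptic strengths

def respond(stimulus):
    """Return the output unit with the strongest drive for a stimulus."""
    return int(np.argmax(weights @ stimulus))

def train_step(stimulus, target, lr=0.05):
    """One reward-gated, Hebbian-style update on the chosen unit's synapses."""
    choice = respond(stimulus)
    reward = 1.0 if choice == target else -1.0   # global reward/punish signal
    weights[choice] += lr * reward * stimulus    # local, activity-dependent change
    return reward

# Teach the network to map two stimulus patterns to two outputs.
patterns = {0: np.array([1.0, 1.0, 0.0, 0.0]),
            1: np.array([0.0, 0.0, 1.0, 1.0])}

for _ in range(200):
    for target, stimulus in patterns.items():
        train_step(stimulus, target)

for target, stimulus in patterns.items():
    print(target, "->", respond(stimulus))  # both should match after training
```

Unlike backpropagation, an update of this kind needs only a global reward signal and locally observed activity, which is closer to the sort of feedback an electrode array or a chemical pulse could plausibly deliver to living tissue.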

Environmentally sustainable AI

But why invest in biocomputing at all? The principal area of interest is the potentially huge gain in energy efficiency compared with existing hardware-based AI processing. The Swiss company claims biocomputing is one billion times more energy efficient than current computing hardware and states that its principal goal is to develop a biocomputing system that would use 100 000 times less energy than is required to train generative AI today.

“The reason is that the [brain’s] nervous tissues are the result of 300 million years of evolution in order to optimize the use of energy for processing information,” Jordan explains. “[Our technology] is the result of this optimization. So, from an energy efficient perspective, biocomputing is going to be superior because it's been developed over millennia to work like this.”

Instead of powering and cooling stacks of GPUs in data centres, companies would eventually build data centres made up of nervous tissue, Jordan suggests. “10 centimetres by 100 metres by 100 metres and millions of electrodes that do exactly the same thing but with thousands of times less energy,” he describes. “At this point computation becomes totally cheap [because of lower energy costs] and without environmental impact.”
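To put the 100 000 times target in rough perspective, a back-of-the-envelope calculation is shown below. The baseline figure is a purely illustrative assumption, since published energy estimates for training large generative models vary widely.

```python
# Illustrative arithmetic only: the baseline is an assumed round number,
# not a measured figure for any particular model.
BASELINE_TRAINING_KWH = 1_000_000   # assume ~1 GWh for one large training run
CLAIMED_REDUCTION = 100_000         # the stated 100 000x efficiency target

biocomputer_kwh = BASELINE_TRAINING_KWH / CLAIMED_REDUCTION
print(f"Implied energy for the same run: {biocomputer_kwh:.0f} kWh")  # -> 10 kWh
```

Under that assumption, a training run that once consumed a gigawatt-hour would need only about 10 kWh, less than a typical electric-car battery holds.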

This points to a possible way out of a familiar conundrum: how can AI help to reduce emissions when it uses so much energy simply to function? Biocomputing could be one of the solutions.

Smarter than classical computing

Since it uses the same neurons as the human brain, networked in a similar manner, a synthetic biological intelligence could also be smarter at solving certain types of problems than traditional silicon-based AI. “Synthetic biological intelligence is inherently more natural than AI [with] the potential to create systems that exhibit more organic and natural forms of intelligence,” states the Australian developer. It cites learning capabilities, such as generalization and learning efficiency, “that AI can only simulate”.

Some researchers believe that biocomputers built using human cells can better replicate the function of the human brain and could therefore accelerate the discovery of cures for neurological diseases such as Alzheimer's. A Forbes columnist suggests: “In the future, these cells could be programmed to scan for biomarkers that indicate the presence of disease. The same cells could mass-produce proteins that could help treat the disease.”

Others theorize that forms of cellular computers, perhaps built from bacteria or mycelium, could be used to test changes in the natural environment and provide better solutions to restoring damaged ecosystems. “That’s a domain where conventional computers can basically do nothing,” says a researcher at Spain’s National Centre for Biotechnology. “You can’t just throw a computer into a lake and have it tell you the state of the environment.”

Jordan stresses that none of this is proven. “Any computation related to cryptography and big data number crunching is not good with biocomputers,” he says. “However, many of the things which are done with AI can be done as well, if not better, with biocomputers, for the very simple reason that these are neurons, and AI is only a simulation of neurons. Biocomputers can do the same, and probably much better.”

Limits and challenges, including ethical

Of course, the fact that biological intelligence systems are alive (sustained on life-support systems that control temperature and other factors) means they can also decay and die. In theory, brain matter could survive as long as the brain does in the human body, which maxes out at around 120 years, but in practice its lifespan is much shorter. The Swiss company has managed to keep its organoids alive for around 200 days and says even a few hours are enough for most experiments. The Australian developer claims its units survive around six months.

Another challenge confronting early developers of biocomputing systems is an ethical one. To the extent that the systems are alive and learning, questions of consciousness arise. Scientists tend to take an agnostic view. While acknowledging that the technology raises concerns about the need for regulation, the Australian company also questions whether it can ever be sentient and takes the view that biocomputing creates a life form different from that of animals or humans. “We think of it as a mechanical and engineering approach to intelligence,” says the company’s Chief Scientific Officer. “We're using the substrate of intelligence, which is biological neurons, but we're assembling them in a new way.”

Jordan says his company is open to discussing the concept with ethics committees and academic experts, and holds a similar view. “We present the science and they think about the ethical impact in the long run. How is [a biocomputer] with a 1 000 billion neuron nervous tissue going to behave? Can it develop consciousness? I have absolutely no clue because I don't know exactly what consciousness means.”

What standards can help?

Existing standards for AI and machine learning (ML) in IT are useful starting points. The ISO/IEC 5259 series addresses data quality for analytics and ML. ISO/IEC TS 4213 specifies a methodology for assessing machine learning classification performance. ISO/IEC 15938-18 covers conformance and reference software for the compression of neural networks. The ISO/IEC 24029 series addresses the assessment of the robustness of neural networks.

In the future, ethical considerations could also be taken on board. The IEC and ISO have recently formed a joint systems committee on bio-digital convergence, IEC/ISO JSyC BDC, which will be tackling such issues. It is working on a series of landscape standards on bio-digital convergence, dealing with life systems and bioengineering, including bioprocessing.

Biocomputing is still at a very early stage, but huge progress is anticipated. “The next big leap we predict is to succeed in training,” says Jordan. “We bet it will take 10 years before the ability to train [biocomputers] is cracked. It is going to happen and once we succeed it will change a lot of things. It is going to be an alternative way of computing,” he concludes.
