In 1996, a young graduate student called Richard Watson sat down to read a paper on evolution. The article was provocative. It tackled a long-running problem in evolutionary biology: we do not fully understand how organisms can adapt so successfully to their environments.

Creatures do not seem to be merely at the mercy of random changes, or mutations, in their genes over time. Instead, they actually seem to "improve" their ability to adapt. It seemed this ability was not explained solely by the process of natural selection, in which the best traits are passed on by the most successful organisms.

So the paper's authors, Gunter Wagner at Yale University and Lee Altenberg at the Hawaii Institute of Geophysics and Planetology in Honolulu, decided to look for answers in a surprising place: computer science.

Watson, a computer scientist, was hooked. In the 20 years since he read that paper, he has been developing a theory based on the ideas it contained. His ideas could help explain why animals are so good at evolving: a trait called their "evolvability". What's more, they might even help to solve some long-running curiosities in evolutionary biology.

Many people will be familiar with the idea that genes are passed from parent to offspring, and those genes that help their hosts survive and reproduce have a better chance of getting passed on. This is the essence of evolution and natural selection.

But there is more to it, because genes often work together. They form "gene networks", and those gene networks can also sometimes be passed intact down the generations.

The connections between genes can be strengthened or weakened as a species evolves

"The fact that organisms have gene networks and they are inherited from one generation to the next, that's not new information," says Watson, now at the University of Southampton in the UK. His contribution is largely to do with the way natural selection acts on those networks.

He believes it does not act merely as a partial barrier, letting some adaptations through and blocking others. Instead, the impact of this filtering allows gene networks in animals to actually "learn" what works and what does not over time. This way, they can improve their performance – in much the same way that the artificial neural networks used by computer scientists can "learn" to solve problems.

"Gene networks evolve like neural networks learn," he says. "That's the thing that's novel."

Watson's basis for this claim is the idea that the connections between genes can be strengthened or weakened as a species evolves and changes – and it is the strength of those connections in gene networks that allow organisms to adapt.

This process is similar to how human-made artificial neural networks on computers work.

Today, these systems are used for all kinds of tasks. For example, they can recognise people's faces in photographs or videos, and even analyse footage of football games to see which teams' tactics performed better and hint at why. How do computers manage to figure things like that out?

Artificial neural networks are inspired by biological networks – chiefly, the brain. Each network is a collection of simulated "neurons", which are linked up in some way; a bit like the stations and lines on the London Underground.

Networks like this are able to take an input – say, the word "hello" written on a page – and match it to an output – in this case, the word "hello" held in the computer's memory. This is something children do when they learn to read and write.

Neurons that fire together, wire together

Like a child, a neural network cannot make the connection instantly, but rather must be trained over time. That training is complicated, but in essence it involves changing the strengths of the connections between the virtual "neurons". Each adjustment improves the result a little, until the whole network can reliably output the desired answer: in our example, that the funny symbols on the page ("hello") equal the word "hello". Now the computer "knows" what you have written.
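The training loop described above can be sketched in a few lines of code. This is a deliberately minimal, illustrative example – a single artificial "neuron" rather than a real handwriting recogniser – showing how repeatedly nudging connection weights closes the gap between the network's output and the desired answer:

```python
import random

def train(inputs, target, epochs=100, rate=0.1):
    """Nudge connection weights until the output matches the target."""
    weights = [random.uniform(-0.5, 0.5) for _ in inputs]
    for _ in range(epochs):
        output = sum(w * x for w, x in zip(weights, inputs))
        error = target - output
        # Strengthen or weaken each connection in proportion to the error
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
    return weights

pattern = [1, 0, 1, 1, 0]          # a stand-in for the marks on the page
weights = train(pattern, target=1.0)
output = sum(w * x for w, x in zip(weights, pattern))
print(round(output, 2))             # very close to the desired answer, 1.0
```

Each pass shrinks the error, so after enough repetitions the network reliably produces the target – the essence of what "training" means here.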

A similar thing happens in nature, Watson believes. An evolvable species would "output" a trait just right for a given environment.

There are different ways to get neural networks to learn. One that Watson has focused on, as a good example for what appears to be happening in biological gene networks, is "Hebbian learning".

In Hebbian learning, the connections between adjacent neurons that have similar outputs are strengthened over time. In short: "neurons that fire together, wire together". The network "learns" by creating strong links within itself.
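The Hebbian rule itself is strikingly simple. Here is a toy illustration (an assumed sketch, not Watson's actual model): three units stand in for genes, and the connection between any pair is strengthened by the product of their activities, so units that are repeatedly active together end up tightly linked:

```python
def hebbian_step(weights, activities, rate=0.1):
    """Strengthen the link between every pair of co-active units."""
    n = len(activities)
    for i in range(n):
        for j in range(n):
            if i != j:
                # "Fire together, wire together": the update is the
                # product of the two units' activity levels.
                weights[i][j] += rate * activities[i] * activities[j]
    return weights

# Three "genes"; the first two are repeatedly expressed together.
weights = [[0.0] * 3 for _ in range(3)]
for pattern in ([1, 1, 0], [1, 1, 0], [0, 0, 1]):
    weights = hebbian_step(weights, pattern)

print(weights[0][1] > weights[0][2])  # True: genes 0 and 1 are now more tightly linked
```

After training, the network has "learned" the correlation in its inputs: the strong 0–1 link is exactly the kind of inherited connectivity the next paragraph describes.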

If an organism has certain genes firing together in this way, and that organism proves successful enough to reproduce, then its offspring will not simply inherit its beneficial genes, argues Watson. They will also inherit the connectivity between those genes.

A particular advantage of Hebbian learning is that the networks can develop "modular" features. For instance, one group of genes might define whether or not an animal has hind legs, or eyes, or fingers. Similarly, a handful of related adaptations – like a fish's ability to cope both with higher water temperatures and also saltier water – could get bundled and inherited together in a single gene network.

"If there is an individual that has a slightly stronger regulatory connection between those genes than some other individual does, then they'll be preferred," says Watson. "They'll be favoured by natural selection. That means over evolutionary time, the strength of the connections between those genes will be increased."

The ability of natural organisms to evolve to new selective environments or challenges is awesome

For Watson, this helps to get around a sticky problem in the theory of evolution.

Imagine for a moment that an organism's genome is a piece of computer code. A novice computer programmer might gradually update their code now and again, in an effort to make improvements. They might explore whether a different string of commands might make the program work a little bit more efficiently.

To begin with, this process of trial-and-error updating might work reasonably well. But over time, updating the code this way would become ever more cumbersome. The code would begin to look messy, making it difficult to work out what impact a particular addition might have. This does sometimes happen in programming and there is a term for the result: "spaghetti code".

If organisms actually evolved this way, says Watson, "their evolvability – their ability to adapt to new stresses or environments – would be rubbish." But in fact, "the ability of natural organisms to evolve to new selective environments or challenges is awesome."

Watson has also suggested that gene networks can contain "memories" of past adaptations, which can be expressed when required by the environment.

For instance, perhaps certain groups of organisms could rapidly evolve to eat a food that is harmful to other members of the same species – because their ancestors had already endured such a diet. In the past, the gene-regulatory structure would have been changed, making some gene-expression patterns easier to trigger than others. This "bias" would ultimately help their descendants to digest a tricky meal.

The big challenge for Watson's hypothesis is whether any empirical evidence for it can be found in nature

One real example Watson gives is sticklebacks. These fish have evolved tolerances for fresh- and salt-water habitats at different times – and back again – depending on what the current environment demands.

Watson's idea means that organisms would be imbued with multiple options for adapting.

It also implies that gene networks have evolved – in all animals – to be adaptable to Earth's natural world. That is why organisms are so good at responding to the environment: the stresses and strains of living in Earth's environments have been imprinted in the regulatory connections between genes, over the course of millions of years.

"There's always been the potential, I think, to explore [the parallels between computational learning and evolution] in a little bit more depth, but nobody's really done it with the rigour that Richard Watson's done now," says Kevin Laland at the University of St Andrews in the UK, who is involved in a large-scale research project that also includes Watson.

However, the big challenge for Watson's hypothesis is whether any empirical evidence for it can be found in nature.

So far, all of Watson's ideas have been based on computational experiments in the lab. These experiments may seem to produce results analogous to real-life organisms, but the specific processes have not yet been directly observed in living organisms.

Hens are genetically capable of growing teeth, but do not normally do so

"It's the $64 million question," admits Watson.

But both Watson and Laland think there are ways in which this theory of evolvability could be tested. Watson suggests analysing how gene networks change in microbes that evolve in the lab. Because microbes like bacteria reproduce so quickly, it is possible to observe several generations of adaptation in a matter of days.

"If you want a tougher test for the theory, [you could ask,] 'can you make novel predictions not already in the literature?'," says Laland.

For example, it might be possible to develop a computer system, based on Watson's ideas, that can predict how organisms will evolve out in the wild, given certain known conditions. Should such a system prove accurate, that would certainly help to reinforce the theory.

There are already some existing features of gene networks that support Watson's approach. A mini-network of genes defining a particular adaptation – like one of the modules mentioned above – can sometimes be turned on or off by just one other activator gene.

It might be easier to build certain kinds of fish than other kinds

We can see examples of this in nature, says Watson. They include "evolutionary throwbacks": organisms with adaptations that were thought to have disappeared with their ancestors. This is known as "atavism".

A famous example of this is hens' teeth. Hens are genetically capable of growing teeth, but do not normally do so in the wild or in captivity. However, tooth growth can be enabled in the lab by tweaking hens' molecular biology.

Plus, sometimes atavistic traits do appear in wild populations. One recent possible case is a beaked whale found on a beach in Australia in February 2016. It had fang-like teeth not normally seen in beaked whales. It may be that it descended from ancestors that also had the fang-like teeth millions of years ago.

Another relevant phenomenon is "convergent evolution", in which unrelated species living in completely separate habitats somehow evolve almost exactly the same adaptations. Examples include the particular patterns on butterfly wings and highly similar fish living in separate lakes in Africa, says Laland.

Is evolution functioning like a big, natural computer?

"The same patterns, the same forms appear again and again," Laland says. "It might be easier to build certain kinds of fish than other kinds. Some forms might be generated more often by the generational process."

Evolvability of the kind described by Watson could account for this. The gene networks, he argues, have gradually learned to respond in similar ways in similar situations. Those modular features, such as a butterfly's wing pattern, might be more likely solutions for the learning system than others.

In other words, when given a few necessary conditions, evolution will perform the same tricks again and again.

All of this raises some rather philosophical questions. For one thing, is evolution functioning like a big, natural computer? And does "evolvability" suggest that life is in some sense programmed to improve – at the genetic level at least?

Some biologists flinch at the idea, but if the capacity of organisms to adapt is getting better and better over time, if evolution is learning as it goes, then might it just as well be described this way?

Watson, for one, is sticking firmly by his guns.

"It's only when you assume a system that has appropriate variability, selection and inheritance, that you can get anything out of evolution at all," he says. "And you don't get those things without thinking about the evolution of evolvability."
