
AI and computer vision could make very good chemistry

Why can't we teach our lab robots instead of programming them?

By David Lloyd

David is enginzyme's Senior Lab Automation Engineer

My job is all about accelerating enginzyme's research through automation, so I spend a lot of time working with robots. We have a robot that runs our target reactions for us so the catalyst development team can screen novel materials faster and with great precision. Another robot does sample preparation so that our process engineering crew can perform their analyses with greater reliability and minimal effort.

How bad are current lab robots?

The problem with today's robots is that you have to program them. You must tell them exactly what to do, using a formalized language such as a Python API. As much as I like to code, programming a robot can be tedious. Watching a robot do its thing once you've programmed it is satisfying, until you realize how utterly mindless it is: unable to self-correct, or even to notice that it has made a mistake.
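To give a feel for what this kind of programming looks like, here is a minimal sketch. The Robot class and its methods are hypothetical stand-ins for a vendor SDK, not any real API we use; the point is that every mechanical step must be spelled out explicitly, because the robot has no notion of intent.

```python
class Robot:
    """Toy stand-in for a liquid-handling robot's Python API (hypothetical)."""

    def __init__(self):
        self.log = []  # record of every low-level action issued

    def pick_up_tip(self, rack, position):
        self.log.append(f"pick_up_tip {rack}:{position}")

    def aspirate(self, volume_ul, well):
        self.log.append(f"aspirate {volume_ul}uL from {well}")

    def dispense(self, volume_ul, well):
        self.log.append(f"dispense {volume_ul}uL into {well}")

    def drop_tip(self):
        self.log.append("drop_tip")


robot = Robot()

# Filling a 96-well plate means 96 explicit transfers of 4 actions each --
# the robot cannot infer "fill the plate" from a single high-level request.
for i in range(96):
    well = f"plate1:{chr(65 + i // 12)}{i % 12 + 1}"  # A1 .. H12
    robot.pick_up_tip("tiprack1", i)
    robot.aspirate(300, "reservoir:A1")
    robot.dispense(300, well)
    robot.drop_tip()

print(len(robot.log))  # 96 wells x 4 actions = 384 commands
```

And this sketch leaves out all the error handling, labware calibration, and deck-layout bookkeeping that a real protocol needs.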

Working with a lab assistant the other day got me thinking: humans have five senses, and we integrate all this sensory input and use it to fine-tune what we're doing. Humans learn by doing, not by reading a detailed recipe of instructions and mindlessly following every word and comma. If I show a new lab assistant how to prepare a plate of samples and she practices it a few times, she's got it — she doesn't just see and do, she understands what she is seeing and doing. Knowing why she is doing it is even more valuable because that motivates her to do it well and allows her to self-correct and figure out even better ways of doing the same task.

Robots have sensors too. A colleague of mine just gave one of our robots eyes so that it can see whether the pipette it uses is aligned correctly when dispensing samples. The robot sees, but it does not have vision. It can only make a very basic decision about how well-aligned it is with the target location. Currently, programming a robot is an extremely complex and laborious task in its own right. If the practical lab work to be performed requires less effort than training the robot, why invest time in automating the task?
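The "very basic decision" such a system makes can be sketched in a few lines. The example below is purely illustrative, not our actual vision code: it assumes a grayscale camera frame as a 2-D grid of pixel intensities, finds the centroid of the bright pixels (standing in for the pipette tip), and checks whether that centroid falls within a pixel tolerance of the target well's centre. A real system would use a proper vision library and calibrated optics.

```python
def tip_centroid(frame, threshold=200):
    """Centroid (x, y) of bright pixels in a 2-D list of intensities."""
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            if value >= threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # tip not visible in the frame
    return (sum(xs) / len(xs), sum(ys) / len(ys))


def is_aligned(frame, target, tolerance_px=2.0):
    """True if the detected tip centroid lies within tolerance of target."""
    centroid = tip_centroid(frame)
    if centroid is None:
        return False
    dx = centroid[0] - target[0]
    dy = centroid[1] - target[1]
    return (dx * dx + dy * dy) ** 0.5 <= tolerance_px


# Synthetic 8x8 frame with a bright 2x2 blob near the centre.
frame = [[0] * 8 for _ in range(8)]
for y in (3, 4):
    for x in (3, 4):
        frame[y][x] = 255

print(is_aligned(frame, target=(3.5, 3.5)))  # True: centroid sits on target
```

This is the whole extent of the robot's "understanding": a yes/no answer about pixel distance. It can refuse to dispense when misaligned, but it cannot reason about why the tip drifted or how to prevent it.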

Taking lab automation to cybernetic perfection

There is a lot of talk about Large Language Models and Artificial Intelligence that can teach itself, but these tools are not yet fit for purpose in a cutting-edge enzymatic chemistry lab. We can't have our equipment hallucinating as it whirs through its chores. For the time being, the human needs to stay firmly in the loop. But how can we use recent innovations to do this better?

Here's the scenario I would love to see: a human needs to formally describe a task and then supervise the robot as it performs the task. But instead of writing code, the person responsible for training the robot can use equipment, such as a VR-headset and controllers, to quickly and directly perform the actions using the robot in the most natural way possible. The sophisticated cameras inside the robot let the human and robot see exactly what is being done, and the robot would encode the exact mechanical actions taken by the human. This kind of approach replicates how self-driving cars apply computer vision and machine learning algorithms to mimic human driving skills.

So now we have taught the robot a set of physical actions and allowed it to see what perfection looks like. The next step is to integrate natural language processing, so we can explain the meaning of our actions to the robot as we show it what to do. This would allow a robot equipped with AI to "understand" and replicate plain English instructions, just like LLMs do today.

Instead of writing hundreds of lines of code to get the robot to squirt 3 ml of liquid into dozens of vials, you could just go to the lab and do it, with the robot looking over your shoulder and listening as you explain the actions being performed, where things can go wrong and how to correct errors. There would be no need to learn a new language and user interface. The more you explained to the robot, the more it would learn. It would be an immersive experience for both the robot and the person teaching it. It’s worth noting that the researcher would also simultaneously create a detailed record of their work in the lab.

If you wanted the robot to replicate a task, you would only need to show it once, then say, "Now repeat that with these 50 other samples." And if you bought a new robot, you could have it just copy the knowledge from the one you taught.

Self-teaching robots could hone their skills

Robots that could combine AI and computer vision, and perhaps some other sensors and online analytics, would transform the laboratory. Once you teach them, they could make decisions based on sensory input and "experience". A lab environment also creates the possibility for radical self-teaching of a robot. Once it has learnt a basic set of skills, it can use all the powerful analytical equipment in the lab to hone those skills further. For example, it could practice weighing out a difficult liquid on a balance until it can do this better than any human.

Will a lab robot ever be able to describe how it is motivated by the purpose behind the work it does? We’ll see — it might be a while.

In the meantime, we'll keep tinkering with our equipment and its software to speed up and improve our lab processes. Why? Because we're proud of our mission to help the chemical industry transition to greener, cleaner manufacturing.

Join enginzyme and impact tomorrow today

We are looking for the best protein engineers, automation experts, process engineers, enzymologists and software engineers to join us on our mission to change chemistry for good.
