Using automated setups is a pain in the arse. That’s right, I said it. What click-bait!
It is, though, at least if you care about your chemistry and do not want your product to go to waste (literally: there is a button for that in every automated column setup and automated synthesizer). What I mean is that the synthesis is only as good as the code behind it. We surely have to test our code before we send it to production, but often we are literally testing it in the production of molecules. When we started the projects and managed to get our devices working, we quickly ran into another problem. You programmed your device layer and started to work on the command layer. The “prime all tubing” function seems to run alright, which is a good start at least, but then suddenly: baaam, everything to waste! Or something is not responding. The heating plate is set to 1 °C, but the pump is delivering 80 mL of reagent into the beaker. That can’t be right.
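To make the device-layer/command-layer split mentioned above concrete, here is a minimal sketch of how such code is often structured. All names here (PumpDriver, prime_all_tubing, the flow rate) are made-up illustrations, not the actual project code: the device layer speaks the raw wire protocol of one physical device, while the command layer composes those calls into chemistry-level steps.

```python
# Hypothetical two-layer structure for an automated setup:
# the device layer talks to hardware, the command layer talks chemistry.

class PumpDriver:
    """Device layer: one class per physical device, raw commands only."""

    def set_rate(self, ml_per_min: float) -> None:
        # Real code would write something like b"RATE 5.0\r" to a serial port;
        # here we only record the state for illustration.
        self._last_rate = ml_per_min

    def start(self) -> None:
        self._running = True


def prime_all_tubing(pumps, rate: float = 2.0) -> None:
    """Command layer: a chemistry-level step built from device calls."""
    for pump in pumps:
        pump.set_rate(rate)
        pump.start()


pumps = [PumpDriver(), PumpDriver()]
prime_all_tubing(pumps, rate=3.0)
```

The point of the split: a bug like “heating plate at 1 °C but pump delivering 80 mL” usually lives in the command layer, and you want to be able to test that layer without touching the hardware underneath.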
Well, this one is a quick fix. A couple of swear words, a change in the text-column reader (because you didn’t know how Python reads arrays), and you are good to go. But what about mistakes you can only make once? Set the GC to 300 °C before purging the oxygen out of a wax column? Well, that’s one month’s worth of your salary down the drain. Or errors that only occur eight hours into the synthesis, like a random disconnection of one of the switchers or pumps, rendering the whole setup unreliable and basically unusable outside of a “look, we do automation as a real science in chemistry, we are therefore novel, isn’t that marvelous” setting.
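For the “random disconnection eight hours in” class of error, one common defence is to retry transient failures instead of letting the whole run die. This is only a sketch under assumptions: the wrapper name and the idea that the device raises ConnectionError on a dropped link are hypothetical, not taken from our actual setup.

```python
# Hypothetical guard against transient disconnections: retry a device
# command a few times before giving up, instead of killing an 8-hour run.
import time


def with_retries(send, attempts=3, delay_s=1.0):
    """Wrap a device's send() so transient ConnectionErrors are retried."""

    def robust_send(command):
        for attempt in range(attempts):
            try:
                return send(command)
            except ConnectionError:
                if attempt == attempts - 1:
                    raise  # give up after the last attempt
                time.sleep(delay_s)

    return robust_send


# A fake flaky device: fails twice, then answers normally.
calls = {"n": 0}

def flaky_send(command):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("pump dropped off the bus")
    return "OK"

robust = with_retries(flaky_send, attempts=3, delay_s=0.0)
print(robust("START"))  # → OK
```

Whether a retry is safe depends on the command, of course: re-sending “start pumping” is very different from re-sending “dispense 80 mL”.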
How can we combat the problem?
Well, the book I have read (The Unicorn Project by Gene Kim) and the one I am currently reading (The Pragmatic Programmer, 20th Anniversary Edition, by David Thomas and Andrew Hunt) both talk about automated testing and Test-Driven Development (TDD). These are rather advanced techniques, though, because first of all you need to know what you want and know a little bit about the programming language you are using. That is certainly not the state I am in when struggling to force a new feature in by any means necessary, completely disregarding the rest of the code and the convolution my half-baked ideas cause. I would not bring up this topic at all, were it not for the possible time savings of modelling devices so that a simple line of text is returned, instead of waiting for the actual parts to run their duties. Heck, we could even chase down the random problems we get by modelling the faulty behaviour, instead of saying: “oh well, I haven’t seen the problem for two days straight, it seems to be gone.” The relevant technique is called “mocking” and is usually used to deal with outside services that you are sure will work with your program, but do not want to call during routine tests, or even want to fail on purpose in tests. After all, the devices we use respond to a handful of simple commands and return simple strings as notifications. This should not be hard, right? And along the way, maybe we can implement the other good programming principles as well, who knows.
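Since our devices really do just take simple commands and return simple strings, a mock can be a plain Python class that answers like the real thing. A minimal sketch, assuming a hypothetical pump protocol (the command strings, class names, and the prime_tubing function are all invented for illustration):

```python
# Minimal sketch of "mocking" a lab device: a fake pump that mimics
# the real serial protocol (simple command in, simple string out),
# so command-layer code can be tested without any hardware attached.

class FakePump:
    """Stand-in for a syringe pump that answers like the real device."""

    def __init__(self):
        self.flow_rate = 0.0
        self.running = False

    def send(self, command: str) -> str:
        # The real device would receive this over a serial port;
        # here we just pattern-match the command string.
        if command.startswith("RATE "):
            self.flow_rate = float(command.split()[1])
            return "OK"
        if command == "START":
            self.running = True
            return "OK"
        if command == "STOP":
            self.running = False
            return "OK"
        return "ERR: unknown command"


def prime_tubing(pump) -> str:
    """Command-layer function under test: prime at 5 mL/min."""
    if pump.send("RATE 5.0") != "OK":
        return "prime failed"
    pump.send("START")
    return "priming"


pump = FakePump()
print(prime_tubing(pump))  # → priming
print(pump.flow_rate)      # → 5.0
```

Because prime_tubing only ever calls send(), it cannot tell the fake from the real pump, and a whole eight-hour sequence can be dry-run in milliseconds. Making the fake misbehave on purpose (return "ERR", raise an exception mid-run) is then just a few extra lines, which is exactly how you would model those random failures instead of waiting for them to reappear.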
We will discuss this topic in our next online meeting, so prepare some examples, maybe with the pumps and switchers we took home. We will see what this approach brings to the table and document the whole process as a guide for the next post.