Our guest author Prof. Günter Roth asks the question of how much trust we should put in “established” methods. Can “seeing more” help us to better evaluate our methods and the data generated by them?


“To see, or not to see, that is the question:

Whether ’tis overtrustful to the mind

to suffer numbers and diagrams of automated sensors blind,

Or to drop harm against established methods size

And by opposing end them. To question—to open eyes

See more; and by an insight to say we end

Disbelieving, fixed numbers so many sent”


With this somewhat twisted Hamlet quotation, I would simply like to draw your attention to some philosophical but nonetheless scientific questions.

How much do we need to see? And if seeing is believing, how much faithful belief should we place in “well-established” but possibly blind methods? Or should we rather open our eyes as wide as possible, pay attention to the details, and take a deeper look into our systems, devices and algorithms?


None of these questions is new, because that is science at its heart – “question everything”. But I would like to focus on label-free measurements of binding kinetics. On several occasions, when people saw our label-free imaging systems, their first reaction was “Wow, that looks nice” and their second “Oh, but some spots look like donuts, and some look as if a bite has been taken out of them. How can you ever take such dots/donuts into account for reliable kinetic measurements?” (This video shows some examples of different spot geometries.)

You see, seeing here led to disbelieving. But why? Simply because with label-free imaging systems it is now possible to see “more details”. You no longer have a closed black-box sensor. You have a field of view; you can see down to the micron if needed. In contrast, not seeing means blind believing, as no one asks of the standards in the field of label-free (but non-imaging) kinetic measurements (mainly SPR or biolayer interferometry systems): “How homogeneously are the biomolecules distributed on the sensor? How can I test this? How can I see or control this?” Here, researchers have blind faith because so many other people are using these devices: surely such questions must already have been asked and answered. They simply see a diagram delivered by the shiny device and trust it blindly. They believe these devices deliver THE – one and only correct – binding constant. Telling them that it is “a” binding constant and not “the” binding constant (which may not exist at all) makes them disbelieve even more.

Some history

At this stage, I am always happy to be able to reference “the good olde times”, namely 2008, when label-free biosensors were young and hip and close to becoming a standard in everyday kinetic measurement routine. At that time David G. Myszka and colleagues organised a worldwide comparison study with a staggering 150 participating laboratories, all measuring one and the same binding event from a defined aliquot, in defined buffers and under defined temperature conditions (1). A worldwide effort, so to say, to find THE binding constant of this interaction pair. The results were disillusioning, but not unexpected: some labs were as much as TWO orders of magnitude away from the worldwide average for the KD, kass and/or kdis value. Each one generated “a” binding constant. Therefore, you should be aware that you, too, always measure “a” binding constant – in exactly this experiment, under these conditions – and that it is probably strongly influenced by the surface and the sensor you used (gold/no gold, and binding via amino, epoxy, NHS, SCN, streptavidin or DNA etc.). All you can hope for is that other scientists with other devices, and probably different methods, find a similar result. Therefore, to modify a molecule, measure a KD of 102.2 nM, and reference it against a paper where the unmodified molecule shows a KD of 188.1 nM is pointless – even if it is the same device. As such, it is essential and obligatory to at least compare the modified and the unmodified molecule for their binding constants – in your lab, with your device – and ideally to perform a competition experiment (which should show which one binds better). And if you would like to compare many molecules with each other, the best way is to use a label-free imaging system with a suitable microarray as sensor for high throughput… or use several hundred non-imaging chips.
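The arithmetic behind these numbers is simple: the equilibrium dissociation constant is the ratio of the two kinetic rates, KD = kdis / kass. A minimal Python sketch (with invented example rates, not data from the benchmark study) shows how two labs can honestly report KD values for the same aliquot that lie two orders of magnitude apart:

```python
# Hypothetical illustration: "a" binding constant, not "the" binding constant.
# KD = kdis / kass; all rate values below are invented for illustration.

def kd_from_rates(k_ass: float, k_dis: float) -> float:
    """Equilibrium dissociation constant KD (in M) from the association
    rate k_ass (1/(M*s)) and the dissociation rate k_dis (1/s)."""
    return k_dis / k_ass

def fold_difference(kd_a: float, kd_b: float) -> float:
    """Fold difference between two measured KD values (always >= 1)."""
    return max(kd_a, kd_b) / min(kd_a, kd_b)

# Lab 1: k_ass = 1e5 1/(M*s), k_dis = 1e-3 1/s  ->  KD = 1e-8 M = 10 nM
kd_lab1 = kd_from_rates(1e5, 1e-3)

# Lab 2, same aliquot but a different surface chemistry and device:
# k_ass = 2e4 1/(M*s), k_dis = 2e-2 1/s  ->  KD = 1e-6 M = 1000 nM
kd_lab2 = kd_from_rates(2e4, 2e-2)

# A 100-fold spread: two orders of magnitude, as seen in the study.
print(fold_difference(kd_lab1, kd_lab2))  # -> 100.0
```

The point of the sketch is not the numbers themselves but the comparison: a fold difference of this size between honestly measured values is exactly why a KD is only meaningful together with the surface, sensor and conditions under which it was obtained.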

Some old knowledge

David G. Myszka provided another fundamental publication (and some follow-ups) together with Rebecca L. Rich, “Grading the commercial optical biosensor literature—Class of 2008: ‘The Mighty Binders’” (2). Reading this lengthy report, which reviews 10 label-free publications in detail and grades more than 1,000 others with school marks from A to F, shows clearly: “…you have much to learn, you can make many mistakes, and you have to take a very close look at your device and your biochemistry”. This Rich and Myszka report stirred things up and sparked much discussion. Many researchers (mostly those with low grades) felt annoyed at being “graded like a high school student”. Even the choice of wording was worth a discussion, as “In this context their capricious writing appears to leave behind an impression of hauteur that outshines whatever serious message can be found in the review” (3).

None of us likes to be graded, but as scientists we should not be annoyed at being graded, corrected or questioned. That is science – question everything and rethink every conclusion to be sure that no mistake was made. And if new information arises, verify whether the old conclusion still holds. Therefore, our measurements should be so clearly documented, reproducible and “good” that anyone looking at them, reproducing them and recalculating them should (hopefully) say at the end: “Well, that was fine, I got a similar result.”

As such, providing all data and explaining all the assumptions and restrictions applied is essential to being truly scientific. Everything needed to recalculate and evaluate the data must be provided. And if the label-free real-time imaging video contains a spot that looks like a donut with a bite taken out of it, then that is also fine. It is a fact that not all measurements are shiny and perfect. To see is to know more, and therefore imaging is to be preferred to non-imaging (for the same reason many stick to fluorescence even if other methods may be better). Label-free imaging also allows you to say, “Yes, it is a donut, and we used it anyway.”

Continue reading…

Part two of Prof. Günter Roth’s article continues with the question of how to deal with the donut phenomenon, and a wish for the future of label-free imaging technology…


The author

My name is Dr. Günter Roth and I like to describe myself as a questioning and unorthodox thinker (Querdenker). On my quest for wisdom I studied physics (diploma) as the sole pure science (the one that explains the cosmos and everything in it) and, in parallel, biochemistry (to at least get an idea of how the unexplainable rest works). We built a biomolecule copy machine to generate DNA, RNA and proteins, and we intend to use Biametrics’ label-free imaging system as “contrast control” for the copy process. Feel free to have a look at our website. I am happy to take any questions and discussion. And you will also find arrays without donuts there!


(1) Rich, R.L., et al. (2009) A global benchmark study using affinity-based biosensors. Anal Biochem. 386: 194-216.

(2) Rich, R.L., and Myszka, D.G. (2009) Grading the commercial optical biosensor literature – Class of 2008: ‘The Mighty Binders’. J Mol Recognit. 22: 1-64.

(3) Vorup-Jensen, T. (2010) Coping with complexity (in macromolecular interactions) – a comment on Rebecca L. Rich’s and David G. Myszka’s “Grading the commercial optical biosensor literature – Class of 2008: ‘The Mighty Binders’”. J Mol Recognit. 23: 389-391.

This article was written by
Experts from different fields in the area of biomolecular interaction studies are invited regularly to share their thoughts on latest trends and controversial topics.