In part one of his article, Prof. Günter Roth demonstrated how “blind belief” in established methods for kinetic analysis might be misplaced trust. In this second part he explains how label-free imaging technology quite literally allows us a fresh look at binding kinetics.

The donut paradox and some philosophy

Coming back to the donut from part one: even if it looks like fun at first glance, it poses a serious question at second. How do you calculate with a donut?

In a perfect world, the data from our sensor is perfect. The spot is 100 % homogeneous and has perfectly determinable rims and edges, the concentrations used are exact to an iota, and there are neither disturbance signals nor noise of any kind.

But in the real world, such a sensor does not exist. Already the light source needed to generate the signal carries shot noise (a quantum-mechanical noise caused by the simple fact that electric current is made of quantised particles, electrons, each with a discrete electrical charge), and the sensor chip carries thermal noise, as we cannot operate at a temperature of 0 Kelvin. There is also backscattering, stray light, device and chip drift, buffer effects, surface effects, charges and so on. Many of these disturbances are neglected, not taken into account, or “somehow” implemented in the analysis software of the commercial non-imaging systems. As this software is valuable intellectual property, the user is not allowed to see the details and therefore has to blindly believe that there was never a mistake, that the assumptions made are still valid, or that they have been corrected with the latest update.

But none of us can easily open such a chip and look at the sensor to see whether the molecules are distributed uniformly on the surface. None of us typically checks whether the analysis software does the same fitting and generates the same results as self-written software. We have to believe, and we do not really know whether everything is taken into account. But we should be very aware that there may have been “a donut” in the label-free chip all along, even if we cannot see it.
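To make this tangible, here is a minimal sketch in Python (all names and numbers are my own assumptions, not taken from any instrument) that simulates how shot noise and thermal noise alone distort an ideal 1:1 binding curve:

```python
import numpy as np

# A hypothetical illustration (all numbers are assumptions): how shot
# noise and thermal noise alone distort an ideal 1:1 binding signal
# R(t) = Req * (1 - exp(-kobs * t)).

rng = np.random.default_rng(seed=1)

t = np.linspace(0.0, 300.0, 600)        # time axis in seconds
ka, kd = 1e5, 1e-2                      # assumed rate constants (1/(M*s), 1/s)
conc = 100e-9                           # assumed analyte concentration (M)
Rmax = 100.0                            # assumed saturation response (RU)

kobs = ka * conc + kd
Req = Rmax * conc / (conc + kd / ka)    # equilibrium response at this conc
ideal = Req * (1.0 - np.exp(-kobs * t))

# Shot noise: photon counting is Poisson-distributed around the mean flux.
photons_per_RU = 50.0                   # assumed detector gain
baseline = 20.0                         # assumed optical background (RU)
counts = rng.poisson((ideal + baseline) * photons_per_RU)
shot_noisy = counts / photons_per_RU - baseline

# Thermal (Johnson) noise of chip and electronics: additive Gaussian.
measured = shot_noisy + rng.normal(0.0, 0.5, size=t.size)   # 0.5 RU rms

print(f"ideal plateau: {Req:.2f} RU, "
      f"noisy plateau estimate: {measured[-50:].mean():.2f} RU")
```

Even in this toy model, the “perfect” signal is gone before any surface chemistry or buffer effect has entered the picture.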

But once illuminated by an imaging system, we are not allowed to shut our eyes when the donut occurs. So how should we proceed here? This is more of a philosophical question.

The “take all” philosophy says: “On a non-imaging label-free chip, you simply use the whole signal from the whole sensor (you never care about the spatial resolution and distribution). Therefore, use the total area that belongs to the molecule of interest. That is the most, and probably the best, you can get.”

The “be the best” philosophy says: “You look for the highest signal and take only the area which generates the highest signal into account, because a high signal can only be gained if the surface chemistry was good and efficient (many assume that more is better). Therefore, the highest signal is the best and most reliable.”

The “as well as” philosophy may be the most complex, but in my eyes it is the most honest approach. It says: “As we do not know which part of the donut represents the molecule of interest in its most natural state, we have to calculate each area separately. We should also compute an average. This way we get a hint of what may be the best, the worst and eventually the normal case. But fundamentally, we cannot know which one is THE truth.”
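As a rough sketch of how this “as well as” philosophy could be put into code: assuming `stack` is a (time, y, x) image series of one spot from an imaging system and `t` the matching time axis (both names hypothetical, as is the whole segmentation recipe), one could split the spot into rim and centre and fit each region separately:

```python
import numpy as np
from scipy.optimize import curve_fit

def assoc(t, Req, kobs):
    """Ideal 1:1 association phase."""
    return Req * (1.0 - np.exp(-kobs * t))

def fit_region(t, trace):
    """Fit Req and kobs to one spatially averaged region trace."""
    params, _ = curve_fit(assoc, t, trace, p0=(trace.max(), 0.01))
    return params  # (Req, kobs)

def analyse_donut(stack, t, threshold=0.1):
    """Fit rim, centre and whole spot separately ("as well as")."""
    mean_img = stack.mean(axis=0)
    spot = mean_img > threshold * mean_img.max()   # crude spot mask
    yy, xx = np.nonzero(spot)
    cy, cx = yy.mean(), xx.mean()                  # spot centroid
    r = np.hypot(yy - cy, xx - cx)                 # pixel distance to centre
    rim = r > np.median(r)                         # outer half = "rim"
    traces = {
        "rim":    stack[:, yy[rim],  xx[rim]].mean(axis=1),
        "centre": stack[:, yy[~rim], xx[~rim]].mean(axis=1),
        "whole":  stack[:, yy,       xx].mean(axis=1),
    }
    return {name: fit_region(t, tr) for name, tr in traces.items()}
```

Each region then yields its own (Req, kobs) pair, i.e. its own candidate kinetics. The spread between rim, centre and whole-spot fits is exactly the doubt the donut forces us to acknowledge.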

The donut is a paradox. It is a clear signal, but it is undeterminable which part of the donut shows the “truth”. Still, I think all of us agree that it is better to see it (and have doubts about the results) than to blindly believe that the (non-imaged) sensor is fine beyond any doubt or control.

A wish for the future

The benchmark comparison of label-free technologies by Myszka (1), described in part one of this article, is now a decade old. As more and more label-free imaging systems enter the market, the feeling arises that someone should make the effort to provide several hundred or even a thousand binders against one or several distinct targets and organise a world-wide comparison study. Just to provide data about how reliable and comparable the measurements are: between devices, between methods and, above all, in comparison to “the good olde” non-imaging systems. To date, we are not aware of such a high-throughput comparison plan. Maybe you have the time and interest to organise or participate?

Opened minds?

There is the saying “believing is not knowing”. Unfortunately, the inversion “knowing is not believing” is not really true. Because we so often assume to know something, we quite often believe things without really knowing whether they are true. We believe that old established methods are beyond doubt and have been tested so often that they have to be correct. For example, we simply use a label-free kinetic measurement chip and quite often do not question the influence of the surfaces, the surface chemistry, the blocking, the buffers, the flow cell geometry and so on. We take an algorithm from the device's implemented software and press the “go” button. And in the end, we believe that we derived THE binding constant, even if we cannot reproduce the aforementioned KD of 188.1 nM. If we were expecting a better binding constant, we believe that the biomolecules may have been denatured or harmed in some step. And if we find a better binding, we feel assured that our expectations were correct and typically do not take a closer look into the chip, where e.g. multimerisation may have produced a wrong, but better-looking, signal.
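One antidote is to re-fit exported data ourselves instead of trusting the “go” button alone. A hedged sketch, assuming a simple 1:1 model and arrays `t_on`/`r_on` (association) and `t_off`/`r_off` (dissociation) exported from the instrument (all names hypothetical), with the analyte concentration `conc` in molar:

```python
import numpy as np
from scipy.optimize import curve_fit

def association(t, Rmax, ka, kd, conc):
    """1:1 association phase at a fixed analyte concentration."""
    kobs = ka * conc + kd
    Req = Rmax * conc / (conc + kd / ka)
    return Req * (1.0 - np.exp(-kobs * t))

def dissociation(t, R0, kd):
    """1:1 dissociation phase, starting from response R0."""
    return R0 * np.exp(-kd * t)

def fit_kd(t_on, r_on, t_off, r_off, conc):
    # Dissociation first: it pins down kd independently of concentration.
    (R0, kd), _ = curve_fit(dissociation, t_off, r_off, p0=(r_off[0], 1e-3))
    # Then fit Rmax and ka to the association phase with kd held fixed.
    (Rmax, ka), _ = curve_fit(
        lambda t, Rmax, ka: association(t, Rmax, ka, kd, conc),
        t_on, r_on, p0=(r_on.max(), 1e5))
    return kd / ka   # KD in molar
```

If such a self-written fit disagrees with the number the vendor software prints, the discrepancy itself is information worth investigating.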

Another paradox is that as soon as label-free imaging systems open our eyes and we see how inhomogeneous donuts can be, we start to disbelieve. Especially if we see that two spots with the identical molecule behave differently and therefore deliver, for example, a factor-of-five difference in KD value. Strangely, many then believe that “the olde systems” (without that imaging hocus-pocus) have to be better, simply ignoring that Myszka and 150 others (two pages of authors and affiliations) showed a decade ago that differences of two orders of magnitude in results can occur.

As such we have to conclude, for then and now: “results may vary”. Therefore, it is recommended to compare binders preferably in one system, all at the same time, with the same reagents, on the same chip. And best would be an imaging mode, because then you can see and have to believe: donuts exist. Many thanks for your time, and if you have a little more of it, you may read the opening lyrics with other eyes:

“To see, or not to see, that is the question:

Whether ’tis overtrustful to the mind

to suffer numbers and diagrams of automated sensors blind,

Or to drop harm against established methods size

And by opposing end them. To question—to open eyes

See more; and by an insight to say we end

Disbelieving, fixed numbers so many sent”


The author

My name is Dr. Günter Roth and I like to describe myself as a questioning and unorthodox thinker (a “Querdenker”). On my quest for wisdom I studied physics (diploma) as the sole pure science (the one that explains the cosmos and everything in it) and, in parallel, biochemistry (to at least get an idea of how the unexplainable rest works). We are building a biomolecule copy machine to generate DNA, RNA and proteins, and we intend to use Biametrics' label-free imaging system as “contrast control” for the copy process. Feel free to have a look at our website. I am happy to take any questions and discussion. And you will also find arrays without donuts there!


Literature

(1) Rich, R.L., et al. (2009) A global benchmark study using affinity-based biosensors. Anal. Biochem. 386(2): 194-216.
