More from less
Ice melts and that is easy to see. But suppose you cut the ice cube in half, then in quarters, and so on until only one water molecule was left? What does it mean to melt one molecule? Remarkably, it is possible to query individual molecules! We have been able to measure biological receptors and ion channels since the 1970s, microscopic quantum objects since the 1980s, and molecular motors since the 1990s. Individual molecules can be detected and measured with a variety of tools, including spectroscopy and mechanical and electrical probes. Looking at single molecules provides the most detailed view you can get of their life cycle, but life at that resolution is complicated by the random motion of the molecules and of their whole environment. Regardless of the type of molecule, this motion is dictated by the temperature, which is proportional to the average kinetic energy. Studying single molecules is thus a bit like disentangling one conversation from the background noise of a cocktail party. We can do it, but we make mistakes, and the same is true when studying single molecules. So how can we separate the life cycle of an individual molecule from the noise of its neighbors? Software is the key.
We calculate the probability that a given movement of a molecule, or a change in current or brightness, came from the molecule under study, and how often it might instead be an accident of measurement noise. We then look for the parameters of the molecule under study that best separate its behavior from that of the surrounding noise. As one might imagine, you can never do this without making some mistakes; there is no perfect analysis that tells you what your pet molecule is doing.
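The false-alarm side of that calculation can be sketched in a few lines. This is not the authors' code, and it assumes, purely for illustration, that the measurement noise is Gaussian: the chance that noise alone crosses a detection threshold then follows directly from the error function.

```python
import math

def false_alarm_prob(threshold, sigma):
    """Probability that a pure-noise sample (Gaussian, std dev sigma)
    exceeds the detection threshold by chance."""
    return 0.5 * (1.0 - math.erf(threshold / (sigma * math.sqrt(2.0))))

def detection_prob(threshold, amplitude, sigma):
    """Probability that a real event of the given amplitude, buried in
    the same noise, still lands above the threshold."""
    return 0.5 * (1.0 - math.erf((threshold - amplitude)
                                 / (sigma * math.sqrt(2.0))))

# Raising the threshold cuts false alarms but also misses real events:
# the same tension the text describes between noise and mistakes.
fa = false_alarm_prob(1.0, 0.5)      # noise alone crossing a 2-sigma bar
hit = detection_prob(1.0, 1.2, 0.5)  # a genuine event clearing the same bar
```

The threshold, amplitude, and noise level here are invented numbers; the point is only that both error rates fall out of the same probability model.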
There are a variety of tricks we have developed to reduce the errors, but despite the aura of statistics, underneath all the computation is a subjective statement of what is important. The easiest tool for limiting errors is the choice of bandwidth. Just like the tone control on an audio amplifier, we can filter out the high-frequency hissing caused by the random motion of small objects and the rumbling caused by passing trucks. The penalty is that you also erase events generated by the object under study. This is a universal problem in experimental science: you make a subjective choice between the amount of noise you will tolerate and the fidelity of the data.
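That penalty is easy to demonstrate with a toy record (all numbers invented): a simple moving-average filter quiets simulated Gaussian noise, but it also flattens a brief two-sample event toward the noise floor.

```python
import random
import statistics

random.seed(0)  # fixed seed so the toy record is reproducible

# Invented record: flat baseline plus one brief two-sample event at index 50.
signal = [0.0] * 100
signal[50] = signal[51] = 1.0
noisy = [s + random.gauss(0.0, 0.5) for s in signal]

def moving_average(x, width):
    """Crude low-pass filter: replace each sample with the mean of a
    sliding window of the given width."""
    half = width // 2
    return [statistics.fmean(x[max(0, i - half):i + half + 1])
            for i in range(len(x))]

filtered = moving_average(noisy, 11)

# The baseline is much quieter after filtering...
baseline_before = statistics.pstdev(noisy[:40])
baseline_after = statistics.pstdev(filtered[:40])
# ...but the brief event has been smeared out along with the noise.
event_before = max(noisy[48:54])
event_after = max(filtered[48:54])
```

Narrower windows keep more of the event and more of the hiss; wider windows do the opposite. Choosing the width is exactly the subjective tradeoff described above.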
Another software tool is to guess roughly what you think the object is doing and, based on that guess, minimize the probability of making a mistake. The software calculates the probability that the observed behavior came from our chosen model, and we then tweak the parameters of the guess until that probability is as high as possible. But what is the best guess? If you have a collection of possible guesses, you can reasonably ask which of them is most likely. But there will always be an infinite number of possible guesses, and this is where the scientist meets the data head on. The scientist chooses the friendliest version, the one with a probability better than most and the fewest parameters, and then decides which feels best; the heartland of real science!
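A bare-bones sketch of that contest between guesses (Gaussian noise assumed, every number invented): fit each candidate model by maximum likelihood, then charge the one with more parameters a penalty, in the spirit of criteria such as BIC.

```python
import math
import random

random.seed(1)
# Invented record: 200 samples from a molecule sitting at a single level.
data = [random.gauss(2.0, 0.3) for _ in range(200)]
n = len(data)

def gauss_loglik(xs, mu, sigma):
    """Log-likelihood of samples xs under a Gaussian with mean mu, std sigma."""
    return sum(-0.5 * math.log(2.0 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2.0 * sigma ** 2) for x in xs)

# Guess A: one level (2 parameters). The sample mean and std deviation
# are the maximum-likelihood estimates for a Gaussian.
mu = sum(data) / n
sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / n)
ll_one = gauss_loglik(data, mu, sigma)

# Guess B: two levels, one per half of the record (3 parameters).
half = n // 2
mu1 = sum(data[:half]) / half
mu2 = sum(data[half:]) / (n - half)
resid = [x - mu1 for x in data[:half]] + [x - mu2 for x in data[half:]]
sigma_b = math.sqrt(sum(r ** 2 for r in resid) / n)
ll_two = (gauss_loglik(data[:half], mu1, sigma_b)
          + gauss_loglik(data[half:], mu2, sigma_b))

# More parameters never fit worse, so raw likelihood alone cannot choose.
# A BIC-style penalty (k * ln n) charges each extra parameter; the lower
# score wins, which is the "fewest parameters" preference in the text.
bic_one = 2 * math.log(n) - 2 * ll_one
bic_two = 3 * math.log(n) - 2 * ll_two
```

Swapping in AIC or a likelihood-ratio test changes the arithmetic but not the spirit: extra parameters must buy a likelihood gain large enough to earn their keep.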
Needless to say, all this trial and error is too much work for pencil and paper, but thanks to the development of computers we can ask them to do the work (with thanks to the gamers who drove the development of high-speed graphics processors). Creating algorithms and writing programs to execute them has its own noise level. There is no perfect solution. But there are good alternatives, many of which are available for free on the web. The price is right!
Ophir Flomenbom and Frederick Sachs
Sachs F, Flomenbom O. How to get more from less: Comments on “Extracting physics of life at the molecular level: A review of single-molecule data analyses” by W. Colomb and S.K. Sarkar. Phys Life Rev. 2015 Jun.