Sunday, March 25, 2018

European Mass Spectrometry Conference 2018

Last week I had the chance to take part in the European Mass Spectrometry Conference, hosted jointly by the DGMS (German Society for Mass Spectrometry) and the SFSM (French Society for Mass Spectrometry). Below I share a few key ideas from this nice conference, which took place in Saarbrücken over five days.

The conference was opened with a plenary lecture by Prof. Alain van Dorsselaer, who summarized the main work he and his group have done on mass spectrometry over the last 30 years. One of the key ideas that came up several times in his talk was that the endless possibilities are accompanied by an extreme data load. The amount of data in LC/MS/MS is huge, and analysing these massive data sets is very complicated. Several other scientists, including Prof. Andreas Roempp and his group, also stressed the importance of transparent and open-source data analysis and storage, which could eventually simplify data treatment. These ideas strongly resonate with my own ideas about applying more data science tools to primary data treatment in mass spectrometry, as today data processing is by far the factor limiting progress in several fields of analytical mass spectrometry. This is mostly the case for fields where the science is still in the "discovery" stage, meaning that the scientists aim to find the important compounds but do not yet know which compounds these could be. Such fields include metabolomics, proteomics, environmental science, etc.

Prof. Philippe Schmitt-Kopplin stressed the importance of high throughput in metabolomic sample analyses and explained why the dissolve-and-shoot approach (flow injection or infusion combined with MS) is often the most practical. He also showed several case studies where marker compounds could be reliably identified with this simple approach if it is accompanied by efficient and accurate data processing. A particularly interesting example was a case study of a 170-year-old wine from the bottom of the Baltic Sea.

Prof. Carsten Engelhard showed an extremely clever, almost brilliant, method to analyse nanoparticles with a simple dilution-and-infusion experiment. Infusing a homogeneous solution into an ICP-MS instrument produces an almost constant signal with small random variations. However, if a solution of nanoparticles is infused into the ICP-MS, most of the time there is no signal (only noise). When one of the nanoparticles enters the plasma, a signal suddenly occurs, causing a peak in the chronogram. The height of the signal reflects the size of the nanoparticle, and the number of peaks per volume indicates the concentration of the nanoparticles.
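
Out of curiosity, here is a minimal Python sketch of how such a single-particle trace could be processed: count the spikes that rise above the background noise and record their heights. This is my own generic illustration, not Prof. Engelhard's actual pipeline; the threshold rule and the synthetic data are assumptions for illustration only.

```python
import numpy as np

def detect_particle_events(signal, n_sigma=5):
    """Find spikes (particle events) in a time-resolved ICP-MS trace."""
    background = np.median(signal)
    noise = np.median(np.abs(signal - background)) * 1.4826   # robust sigma estimate (MAD)
    threshold = background + n_sigma * noise
    above = signal > threshold
    # an event starts where the trace crosses the threshold upwards and ends where it drops back
    starts = np.flatnonzero(np.diff(above.astype(int)) == 1) + 1
    ends = np.flatnonzero(np.diff(above.astype(int)) == -1) + 1
    heights = [signal[s:e].max() - background for s, e in zip(starts, ends)]
    return np.array(heights)

# synthetic example: mostly background noise with a few particle spikes
rng = np.random.default_rng(0)
trace = rng.normal(10, 2, 10_000)                       # dissolved/background counts
spike_positions = rng.choice(10_000, 30, replace=False)
trace[spike_positions] += rng.uniform(50, 300, 30)      # particle events

heights = detect_particle_events(trace)
# number of events per infused volume -> particle number concentration
# spike height (after calibration) -> particle mass, hence size
print(f"{len(heights)} particle events detected")
```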

Prof. Thomas Kraemer introduced us to the world of forensic analyses. In particular, he focused on MALDI imaging techniques that allow revealing drug intake or exposure to toxic compounds. For this purpose his lab uses two types of samples, the traditional hair and lately also toenails, to overcome the problem arising with hairless people. Interestingly, single-hair analysis also reveals time-resolved information with high precision, allowing one to distinguish between one-time and long-term exposures.

Giving my talk at EMSC 2018

With friends from FU
Visiting the mining museum before the gala dinner

At the gala dinner



Tuesday, January 30, 2018

The p-value that we call statistically significant


This week the p-value made it to the Science news feed. The whole story started in the middle of last year, when Benjamin and coworkers published a paper in Nature Human Behaviour titled "Redefine Statistical Significance". This group of some 80 scientists proposes that most research studies should be conducted not at a significance level of 0.05 but at 0.005. They claim that this could be an efficient measure to increase the reproducibility of scientific results. Now, Lakens together with ca. 100 coworkers has published a response in defence of keeping p = 0.05 as the threshold for statistical significance. They argue that the stricter threshold would tremendously increase the size of the studies: they would need far more data points or samples, would take much longer and would of course be much more expensive. The latter may result in a decrease in confirmatory studies.
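To get a feeling for the sample-size argument, here is a rough power calculation of my own (not taken from either paper), assuming a two-sample t-test, 80% power and a modest standardized effect size of 0.3:

```python
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
for alpha in (0.05, 0.005):
    n = power_analysis.solve_power(effect_size=0.3, alpha=alpha, power=0.8)
    print(f"alpha = {alpha}: about {n:.0f} subjects per group")
# with these assumptions, moving from 0.05 to 0.005 requires roughly 70% more subjects per group
```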
So why all this fuss? Obviously, the p-value is somewhat complicated to understand, students hate it, and scientists just like to state that the effects observed were significant. I personally was quite surprised that this topic popped up in the first place, as it somewhat goes against my fundamental understanding of statistics.
So let me try to explain the concept and the problem very briefly. First, when we do science we are usually looking for differences or similarities. Unfortunately, statistics only allows us to test whether two or more things are the same; it does not allow us to directly test whether our observations are different. Therefore, the whole concept relies on two hypotheses. The first is the so-called null hypothesis: that our observation is indistinguishable from something else. Let's look at an example. Say we are investigating whether the temperature in our hometown during the last ten years was higher than it was between 1950 and 1960. Our null hypothesis is that the average temperature from 2007 to 2017 is indistinguishable from the average temperature between 1950 and 1960. The other possibility, logically called the alternative hypothesis, is that the average temperatures are not the same.
Now the p-value comes into play. Because of random variations in our measurements, caused by very different factors ranging from fluctuations in instruments to sudden geological processes such as volcanic eruptions, the averages cannot be exactly the same even if there is no real increase in temperature. Therefore, we take into account the variation in temperature, described by the standard deviation, and conduct a suitable statistical test to reach a conclusion. However, due to these variations we are also not 100% sure of our average values and can only say with some certainty whether the temperatures really are the same. The p-value represents this certainty, or better yet, uncertainty. For example, if we conclude that the temperatures are significantly different with a p-value of 0.05, we actually mean that even though we state that T1 does not equal T2, there is a 5% chance that they actually are the same. From this perspective it would be better to use a lower p-value as an indication of significance. If we state that something is significant at a p-value of 0.005 (or lower), it means that we would assign a significant difference to only 0.5% of the cases where there actually is no difference.
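As a minimal worked example with made-up yearly mean temperatures, the whole test boils down to a few lines:

```python
from scipy import stats

temps_1950s = [5.8, 6.1, 5.9, 6.3, 5.7, 6.0, 6.2, 5.6, 6.1, 5.9]   # °C, hypothetical
temps_2010s = [6.4, 6.8, 6.5, 7.0, 6.3, 6.7, 6.9, 6.6, 6.4, 6.8]   # °C, hypothetical

t_stat, p_value = stats.ttest_ind(temps_1950s, temps_2010s)
print(f"p = {p_value:.4f}")
# if p is below the chosen threshold (0.05 or 0.005), we reject the null hypothesis
# that the two periods have the same mean temperature
```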
However, the picture is not so simple; it also has a completely different pole. The p-value we talked about above is associated with so-called false positive results. This means that in reality there is no effect, but based on our test we would be claiming an effect. Still, it could also be the other way round: in reality the temperature is different between now and then, but we do not reach this conclusion with our tests. This is called a false negative result. Unfortunately, there is basically no easy way to assign a probability to such a false negative result. But one thing we know for sure: if we decrease the probability of one type of error, the probability of the other type of error will increase if the sample size is constant. Therefore, if we really, really do not want to allow ourselves to be mistaken when claiming that the temperature now and then is different even though it is not, we have to accept that it becomes more likely that we conclude the temperature is the same even when it actually is not. The only way to reduce both types of error is to increase the sample size; this is the main point of Lakens and coworkers in the paper published this week.
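A small simulation of my own (illustrative numbers only) shows this trade-off directly: at a fixed sample size, tightening the threshold from 0.05 to 0.005 lowers the false positive rate but raises the false negative rate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def error_rates(alpha, n, true_diff, n_sim=5000):
    false_pos = false_neg = 0
    for _ in range(n_sim):
        # case 1: no real difference -> any "significant" result is a false positive
        a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)
        if stats.ttest_ind(a, b).pvalue < alpha:
            false_pos += 1
        # case 2: a real difference exists -> a non-significant result is a false negative
        a, b = rng.normal(0, 1, n), rng.normal(true_diff, 1, n)
        if stats.ttest_ind(a, b).pvalue >= alpha:
            false_neg += 1
    return false_pos / n_sim, false_neg / n_sim

for alpha in (0.05, 0.005):
    fp, fn = error_rates(alpha, n=30, true_diff=0.5)
    print(f"alpha = {alpha}: false positive ≈ {fp:.3f}, false negative ≈ {fn:.3f}")
```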
But let's look at another example. Say that we are trying to find a new drug for a disease. The null hypothesis here is that the drug is as efficient as a placebo, and the alternative hypothesis is that the drug has an effect compared to the placebo. In reality there are two possibilities: the drug actually has no effect, or it has an effect. The result of our analysis could also be either that the drug has an effect or that it does not. There is no problem if the drug really has no effect and we also say that there is no effect. It is also very much desired that our analysis concludes that the drug has an effect if it really does. However, there are also two undesired possibilities. First, we may say that the drug has an effect when it actually does not. This is exactly the case we can describe with a p-value. So if we do not wish to label something as a drug when it is not, we should lower the p-value we assign as statistically significant and gain more confidence in this way. However, it is also possible that we state that the drug has no effect even though it actually does. This means that we have lost a possible drug candidate due to our rigidity. As stated before, when one error is reduced the other increases at the same number of studies. So in one case we assign something as a possible drug that is not, and in the other case we lose a drug that could be of use.
As we cannot reduce both, the question now is: which is the worse possibility? Obviously there is no easy answer, and in my personal opinion it highly depends on the circumstances. For example, if you are at the beginning of the process of finding a suitable drug, you most probably start with some computational chemistry approach to investigate which compounds could be reasonable at all. It is already well known that in the next steps you will lose most of the possible candidates due to various problems. It is very likely that you do not want to lose the best possible drug already in the first step. So in this case you do not want to make a false negative decision. As we cannot adjust the probability of a false negative result itself, we need to adjust the false positive probability. So, to lose as few useful drug candidates as possible but still not have to test every single compound in the next stages, we should set the p-value we call statistically significant reasonably high, for example to 0.10.
Now, let's look at the situation where one of the drugs we selected in this first step has made it through a bunch of other trials and we are about to undertake some of the last trials. The last trials usually require human experiments, take an extremely long time and come with enormous costs. This means that we do not want to put all this time, effort and money into testing a compound that is not very promising. So now we need to be very sure that the drug really is going to work, meaning that we want to reduce the false positive risk and this time should require as small a p-value as possible to be certain that the yet-to-be drug actually has an effect.
These are the main reasons why there cannot and should not be a fixed p-value that we call statistically significant; it should be adjusted each time depending on the aim of the study. Even better, the original data and the probability of a false positive calculated from the data should be reported in addition to the authors' conclusion. This would allow readers to compare with their own data and to evaluate the data independently.


Saturday, January 6, 2018

What can we learn from mass spectrometry about charged droplets?

Charged droplets occur everywhere in the world. They are created by the oceans (known as sea spray aerosols), near waterfalls and in thunderstorm clouds. Such droplets are expected to play a significant role in environmental processes. Similar droplets are also created in the electrospray ionization (ESI) source.
Mari Ojakivi joined our group three years ago to conduct her bachelor's thesis with us. At the time we were very strongly interested in the way different additives influence ionization efficiencies in ESI. So she started studying how different acids, salts and bases influence the ionization of some amines in charged water droplets. Soon, some extremely interesting results were revealed that allowed us to draw much wider conclusions about charged droplets.
We were able to pinpoint that the protonation of the amines is strongly dependent on the type of additives present in the droplets and is virtually independent of the pH of the solution used for "preparing" the droplets. In "normal" solutions the protonation is determined solely by the pH of the solution. This led us to conclude that some of the additives change something about the droplets that other additives do not affect. It turned out that the factor determining the protonation is the cation present near the surface of the charged droplets, as the protonation mostly takes place at the surface. Cations that are strong acids, such as the hydronium ion (droplet A in the picture), protonate the compounds, while weak acids, such as the ammonium cation (droplet B in the picture), do not. If both types of cations are present in the solution, the protonation is determined by the ion that has the higher affinity for the droplet surface. We were also able to find support for our model from molecular dynamics simulations carried out in Prof. Konermann's group.

Why is protonation in charged droplets important at all? Protonation is one of the fundamental properties of compounds; it may catalyze reactions, break up or induce complexation, change the conformation of macromolecules, etc. Therefore, it can be assumed that the reactions and processes taking place in charged droplets also depend on protonation.

Modifying the Acidity of Charged Droplets. Mari Ojakivi, Jaanus Liigand, Anneli Kruve. ChemistrySelect, doi.org/10.1002/slct.201702269

Illustration of the droplet surface for droplets that do not contain ammonium ions (A) and that do contain ammonium ions (B). The latter significantly decreases the protonation of the compounds in the droplet. Picture by M. and K. Ojakivi.


Wednesday, February 1, 2017

IsrAnalytica 2017

Last week there were two awesome analytical chemistry events in Tel Aviv. On Monday, the 23rd of January, there was an international workshop on validation of test methods, human errors and measurement uncertainty of results, organized by Dr. Ilya Kuselman. At this event I had an amazing chance to present my experience with validation. My lecture was about handling different matrices during validation of an LC/MS method. It was such a pleasure to share my academic knowledge with practitioners and to receive their feedback.

From the 24th to the 25th the IsrAnalytica conference was held. The conference hosted four parallel sessions and around a thousand visitors from universities, industry, regulatory bodies, etc. The topics of the presentations ranged from forensics to food to innovative apparatus. The conference also hosted a massive exhibition area with nearly a hundred companies. For example, check out this massive rotary evaporator instrument for factories.

Tuesday, November 1, 2016

Choosing analysis conditions to get matrix-effect-free results?

Recently my first single-authored paper became public. It concerns the effect of solvent on electrospray ionization efficiency and on matrix effects. As we have talked about ionization efficiency and its solvent effects previously, let's focus on matrix effects this time.
LC/ESI/MS is the tool for various fields, from routine analyses (quality control of pesticides and also mycotoxins in the food we eat, drug monitoring in patients' bodies, etc.) to solving highly demanding scientific questions (finding biomarkers, decoding nervous signalling, etc.). However, one thing haunting quantitation in these analyses is the matrix effect: the influence of sample compounds (called matrix compounds; for example the flavor compounds in tomato) on the ionization of our target analyte (e.g. the monitored mycotoxin in tomato). This effect may give results that are either higher or lower than the actual content, though underestimation is more common. This means that the mycotoxin may actually exceed the allowed limits, but the matrix effect may hinder our ability to detect this. This makes the matrix effect very important, and the need to remove its origin becomes obvious.
This is how matrix effect looks on a chromatogram. 
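For readers less familiar with the field, the matrix effect is commonly quantified by comparing the analyte signal in a matrix extract with the signal at the same concentration in pure solvent; a minimal sketch with hypothetical peak areas:

```python
def matrix_effect_percent(signal_in_matrix, signal_in_solvent):
    """ME% = 100 * (signal in matrix extract) / (signal in neat solvent)."""
    return 100 * signal_in_matrix / signal_in_solvent

# hypothetical peak areas for a mycotoxin spiked at the same concentration
area_in_tomato_extract = 38_000
area_in_solvent = 95_000
me = matrix_effect_percent(area_in_tomato_extract, area_in_solvent)
print(f"matrix effect = {me:.0f}%  ({100 - me:.0f}% ionization suppression)")
```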
Since I started interacting with this field at the beginning of my Master's thesis, one thing has definitely improved: awareness of the presence of matrix effects. Still, much remains unclear at the fundamental level. Numerous mechanisms have been proposed for the origin of the matrix effect; however, some of them are contradictory or have not been reproduced later. There is, however, one general mathematical model, proposed by Christie Enke in 1997, which states that the matrix effect is caused by two or more compounds competing for the surface charge in the ESI droplets and that the extent of the matrix effect is related to the magnitude of these affinities. It is unknown, though, what these affinities depend on. In one of my previous works I discovered that this affinity correlates well with ionization efficiencies for ionisable compounds¹.
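To illustrate the competition idea (this is my own simplified reading of the model, not the full published treatment), one can write the analyte response as its share of the available excess charge; all affinity constants and concentrations below are made up:

```python
def analyte_response(c_analyte, k_analyte, c_matrix=0.0, k_matrix=0.0,
                     c_background=1e-5, k_background=10, charge_conc=1.0):
    """Fraction of excess droplet charge carried by the analyte, scaled by the charge available."""
    competition = k_analyte * c_analyte + k_matrix * c_matrix + k_background * c_background
    return charge_conc * k_analyte * c_analyte / competition

clean = analyte_response(c_analyte=1e-6, k_analyte=100)
dirty = analyte_response(c_analyte=1e-6, k_analyte=100, c_matrix=1e-5, k_matrix=300)
print(f"signal in matrix ≈ {100 * dirty / clean:.0f}% of the matrix-free signal")
```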
While investigating the dependence of ionization efficiencies on solvent composition³, the question naturally arose: if compounds' ionization efficiencies depend on the solvent composition, why shouldn't the matrix effect do so too²? And it does! It was observed that solvent compositions providing higher ionization efficiencies also produce more matrix effect. Higher ionization efficiencies are usually considered beneficial because they allow analysing smaller analyte quantities (lower limit of quantitation, LoQ) and increase sensitivity. So if you tune your method for lower LoDs, it will automatically be less robust towards matrix effects. It turns out that in ESI/MS you have to balance between low LoDs and little matrix effect when choosing your solvent.
With this study I was able to uncover the true dual nature of the processes occurring in the charged nanodroplets in ESI. Though the practical conclusions are not every analyst's dream⁴, the study effectively demonstrates the dual side of the processes occurring in micro-compartments such as nanodroplets. Therefore, I believe these nanodroplets are capable of much more, and we shall hear about them again.


1 It is known that compounds that do not give an ESI/MS signal also produce matrix effects; however, this process is expected to follow a different mechanism.
2 Previously, different solvent conditions have also been used to reduce or eliminate matrix effects, but with the aim of providing better separation of the analyte and matrix components.
3 Solvents with higher organic content tend to yield higher ionization efficiencies.
4 It would be best to have the lowest LoDs, no matrix effect, high sensitivity, etc. all at once.

Tuesday, September 13, 2016

Our publications

Electrospray ionization studies



The Evolution of Electrospray Generated Droplets is Not Affected by Ionization Mode
Piia Liigand, Agnes Heering (Suu), Karl Kaupmees, Ivo Leito, Marion Girod, Rodolphe Antoine, Anneli Kruve
DOI: 10.1007/s13361-017-1737-5
Ionization efficiency and mechanism in ESI are strongly affected by the properties of the mobile phase. The use of mobile-phase properties to accurately describe droplets in the ESI source is convenient but may be inadequate, as the composition of the droplets changes in the plume due to electrochemical reactions occurring at the needle tip as well as continuous drying and fission of droplets. Presently, there is a paucity of research on the effect of the polarity of the ESI mode on the mobile phase composition in the droplets. In this paper, the change in the organic solvent content, pH, and droplet size is studied in the ESI plume in both ESI+ and ESI– ionization modes. We introduce a rigorous way – the absolute pH (pHabsH2O) – to describe the pH change in the plume that takes into account the organic solvent content in the mobile phase. pHabsH2O enables comparing the acidities of ESI droplets with different organic solvent contents. The results are surprisingly similar for both ionization modes, indicating that the dynamics of the change of mobile-phase properties is independent of the ESI mode used. This allows us to conclude that the evolution of ESI droplets proceeds first of all via evaporation of the organic modifier and to a lesser extent via fission of smaller droplets from parent droplets. Secondly, our study shows that qualitative findings related to the ESI process obtained in ESI+ mode can almost directly be applied also in ESI– mode.

Think Negative: Finding the Best Electrospray Ionization/MS Mode for Your Analyte
Piia Liigand, Karl Kaupmees, Kristjan Haav, Jaanus Liigand, Ivo Leito, Marion Girod, Rodolphe Antoine, and Anneli Kruve
DOI: 10.1021/acs.analchem.7b00096
For the first time, the electrospray ionization efficiency (IE) scales in positive and negative mode are united into a single system enabling direct comparison of IE values across ionization modes. This is made possible by the use of a reference compound that ionizes to a similar extent in both positive and negative modes. Thus, choosing the optimal (i.e., most sensitive) ionization conditions for a given set of analytes is enabled. Ionization efficiencies of 33 compounds ionizing in both modes demonstrate that, contrary to general practice, negative mode allows better sensitivity for 46% of such compounds whereas the positive mode is preferred for only 18%, and for 36%, the results for both modes are comparable.


Predicting ESI/MS Signal Change for Anions in Different Solvents
Anneli Kruve, Karl Kaupmees
DOI: 10.1021/acs.analchem.7b00595
LC/ESI/MS is a technique widely used for qualitative and quantitative analysis in various fields. However, quantification is currently possible only for compounds for which the standard substances are available, as the ionization efficiency of different compounds in ESI source differs by orders of magnitude. In this paper we present an approach for quantitative LC/ESI/MS analysis without standard substances. This approach relies on accurately predicting the ionization efficiencies in ESI source based on a model, which uses physicochemical parameters of analytes. Furthermore, the model has been made transferable between different mobile phases and instrument setups by using a suitable set of calibration compounds. This approach has been validated both in flow injection and chromatographic mode with gradient elution.

Adduct Formation in ESI/MS by Mobile Phase Additives
Anneli Kruve, Karl Kaupmees
DOI: 10.1007/s13361-017-1626-y
Adduct formation is a common ionization method in electrospray ionization mass spectrometry (ESI/MS). However, this process is poorly understood and complicated to control. We demonstrate possibilities to control adduct formation via mobile phase additives in ESI positive mode for 17 oxygen and nitrogen bases. Mobile phase additives were found to be a very effective measure for manipulating the formation efficiencies of adducts. An appropriate choice of additive may increase sensitivity by up to three orders of magnitude. In general, sodium adduct [M + Na]+ and protonated molecule [M + H]+ formation efficiencies were found to be in good correlation; however, the former were significantly more influenced by mobile phase properties. Although the highest formation efficiencies for both species were observed in water/acetonitrile mixtures not containing additives, the repeatability of the formation efficiencies was found to be improved by additives. It is concluded that mobile phase additives are powerful, yet not limiting factors, for altering adduct formation.

pH Effects on Electrospray Ionization Efficiency
Jaanus Liigand, Asko Laaniste, Anneli Kruve
DOI: 10.1007/s13361-016-1563-1
Electrospray ionization efficiency is known to be affected by mobile phase composition. In this paper, a detailed study of analyte ionization efficiency dependence on mobile phase pH is presented. The pH effect was studied on 28 compounds with different chemical properties. Neither pKa nor solution phase ionization degree by itself was observed to be sufficient at describing how aqueous phase pH affects the ionization efficiency of the analyte. Therefore, the analyte behavior was related to various physicochemical properties via linear discriminant analyses. Distinction between pH-dependent and pH-independent compounds was achieved using two parameters: number of potential charge centers and hydrogen bonding acceptor capacity (in the case of 80% acetonitrile) or polarity of neutral form of analyte and pKa (in the case of 20% acetonitrile). It was also observed that decreasing pH may increase ionization efficiency of a compound by more than two orders of magnitude.

Influence of mobile phase, source parameters and source type on electrospray ionization efficiency in negative ion mode
Anneli Kruve
The effect of organic solvent content on ionization efficiency (sensitivity) is compared for different ESI sources in negative mode. It was observed that ionization efficiency in ESI sources with thermal focusing (such as the Jet Stream source) is little affected by the organic solvent content, while in conventional ESI the ionization efficiency can be increased significantly (by an order of magnitude) by increasing the organic solvent content (both acetonitrile and methanol). However, pure acetonitrile is not recommended for such measurements as it yields poor repeatability. Additionally, although increasing the organic solvent content results in higher ionization efficiency, it unfortunately also increases ionization suppression.


Piia Liigand, Karl Kaupmees, Anneli Kruve
The factors influencing the observation of doubly charged ions in mass spectra were studied using small acidic compounds (carboxylic acids, phenols, sulphonic acids) with at least two ionisable sites in ESI negative mode. It was observed that being a strong acid (meaning that the compound is present in solution as a divalent anion) is not by itself sufficient for observing a doubly charged ion in the mass spectrum; additionally, the compound needs to be sufficiently hydrophobic. Also, sufficiently hydrophobic compounds that are not expected to be present as divalent anions in solution may give doubly charged ions in mass spectra.


Riin Rebane, Anneli Kruve, Piia Liigand, Jaanus Liigand, Koit Herodes, Ivo Leito

Jaanus Liigand, Anneli Kruve, Piia Liigand, Asko Laaniste, Marion Girod, Rodolphe Antoine, Ivo Leito
The influence of source and mass analyser type on ionization efficiency scales was studied using ESI positive mode as an example. It is demonstrated that ionization efficiency scales can be successfully transferred between different instruments.

Jaanus Liigand, Anneli Kruve, Ivo Leito, Marion Girod, Rodolphe Antoine
The effect of mobile phase pH and organic solvent content on the ionization efficiency (sensitivity) of basic and neutral compounds in ESI positive mode was studied. It was observed that ionization efficiency changes with pH for compounds whose pKa is in the range of the varied pH. Compounds that are permanently charged in solution, or are not charged at all in solution, do not change their ionization efficiency in ESI. The organic solvent content also influences ionization efficiency.

Anneli Kruve, Karl Kaupmees, Jaanus Liigand, Ivo Leito
An ionization efficiency scale for electrospray negative ionization is introduced. Altogether, the ionization efficiencies (related to sensitivity) of 64 acidic compounds (carboxylic acids, phenols, imides, sulphonic acids, sulphonamides, etc.) were measured and modelled. It was observed that ionization efficiency can be explained by the degree of ionization in solution and the charge delocalization of the formed anion. The misprediction of the proposed model was about 3-fold, while the measured ionization efficiencies spanned over six orders of magnitude.

Anneli Kruve, Karl Kaupmees, Jaanus Liigand, Merit Oss, Ivo Leito
The sodium adduct formation efficiency (SAFE) scale was introduced, and it was shown that the order of compounds is unaffected by the sodium content in the mobile phase. It was also observed that SAFE cannot be explained by the solvent/vacuum distribution coefficient of the formed species, nor by the strength of the formed adduct (described by the partial charge on the heteroatom associated with sodium and the bond length between the heteroatom and sodium).

Merit Oss, Anneli Kruve, Koit Herodes, Ivo Leito

Ivo Leito, Koit Herodes, Merito Huopolainen, Kristina Viroo, Allan Künnapas, Anneli Kruve, Risto Tanner


Saturday, September 10, 2016

Defences in our group: Asko Laaniste and Hanno Evard

We are happily back from IMSC in Toronto, and a very important celebration has taken place in our group! On the last day of August two PhD theses were defended in our group, by Asko Laaniste and Hanno Evard.
Asko wrote his thesis on ionization sources, focusing specifically on the comparison of different sources for pesticides. The main conclusion of his work is that conventional electrospray ionization (ESI) is better than generally expected. Additionally, Asko studied the possibilities of the 3R nebulizer developed in our lab for the ESI source. Prof. Risto Kostiainen from the University of Helsinki served as the opponent.
Hanno, on the other hand, tackled the very crucial question of LoD determination. It has often bothered our lab that the methodology used for LoD determination in various papers cannot be clearly understood. So Hanno compared all the common methods used to determine LoD and observed that the approach using the calibration function residuals to estimate the noise of the method (suggested by the International Conference on Harmonisation, ICH) gives results closest to the VIM (International Vocabulary of Metrology) definition. The opponent at the defense was Dr. Emilia Vasileva-Veleva from the Marine Environmental Studies Laboratory, International Atomic Energy Agency, Monaco.
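For the curious, here is a minimal, generic sketch of the calibration-residuals approach (ICH-style LoD = 3.3·s/slope) with made-up calibration data; it is a textbook illustration, not the exact procedure from Hanno's thesis.

```python
import numpy as np

conc = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0])          # spiked concentrations
signal = np.array([120, 1050, 2080, 3950, 10100, 19900])  # hypothetical peak areas

slope, intercept = np.polyfit(conc, signal, 1)
residuals = signal - (slope * conc + intercept)
s_residual = np.sqrt(np.sum(residuals**2) / (len(conc) - 2))  # regression standard error

lod = 3.3 * s_residual / slope
print(f"LoD ≈ {lod:.2f} concentration units")
```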
Both theses can be accessed from here:
Asko Laaniste, Comparison and optimisation of novel mass spectrometry ionisation sources
Hanno Evard, Estimating limit of detection for mass spectrometric analysis methods
Asko and Hanno celebrating the defense.
Happy supervisor with a defense gift.