Thursday, 11 December 2014

INVOLVEMENT OF THE INFERIOR FRONTAL CORTEX IN THE LEVODOPA-INDUCED DYSKINESIAS

Parkinson’s disease (PD) is one of the most prevalent neurodegenerative disorders worldwide. An estimated 4.1 to 4.6 million patients suffering from PD live in the fifteen most populous countries, which are home to two thirds of the world's population. The average age of PD onset is 55. Incidence grows with age: from 20 per 100,000 in the overall population to 120 per 100,000 among people over 70 years old.

This number is expected to reach between 8.7 and 9.3 million by 2030; in China alone, cases are projected to rise from two million to five million by 2030 (E. R. Dorsey et al., Neurology 2007).

Many epidemiological studies have reported a gender difference in the incidence of PD, which is 1.5 to 2 times higher in men than in women. In Japan, however, there is a higher incidence in women.

Environmental factors are critical in determining the incidence of the disease. Some studies report that, among cases attributable to environmental exposure, 10% are associated with occupational use of herbicides. US epidemiological studies show a significant association between parkinsonism mortality rates for the years 1986-1988 and the presence of the chemical in the territory. Only 5% of cases are clearly hereditary.

It is a disease that affects functions such as movement control and balance. But what exactly happens in the brain? The substantia nigra is a structure of the basal ganglia [see here the basal ganglia circuit] that is rich in dopaminergic neurons. In Parkinson's these neurons sicken and die, and when the cell loss reaches 80% the symptoms of the disease appear: tremor, slowness of movement and rigidity.

The main pharmacological therapy involves the administration of levodopa, a drug that is converted to dopamine in the brain. By increasing the concentration of this substance the symptoms diminish: slow movements and tremors disappear.

For the first five to six years the patient returns to a near-normal condition and a good quality of life. Then, however, new problems begin. The major complications related to the intake of levodopa or dopamine agonists include dyskinesias such as motor fluctuations and very fast, tic-like involuntary movements affecting the face, mouth, tongue, upper limbs and sometimes the lower limbs. These side effects of the drug interfere heavily with normal daily activities.

An Italian research team has investigated what happens in the brains of patients suffering from severe dyskinesia. The studies began a few years ago and were conducted on hundreds of patients.

The main findings showed that the structure involved is the right inferior frontal gyrus, which modulates voluntary movements. With prolonged intake of levodopa, anatomical and functional alterations were observed in this area, and this would cause the onset of the motor disorders.
The researchers applied repetitive transcranial magnetic stimulation (rTMS) to PD patients. The stimulation of the inferior frontal gyrus yielded promising results: the brain area in question returned to work and the involuntary movements disappeared.

See more on this research here

Monday, 1 September 2014

A Two-Layered Diffusion Model Traces the Dynamics of Information Processing in the Valuation-and-Choice Circuit of Decision Making

A circuit of valuation and selection among the alternatives (see here) is considered a reliable model in neurobiology.

In this published study, valuation and choice in a decisional process during a Two-Alternative Forced-Choice (TAFC) task are represented as a two-layered network of computational cells, where information accrual and processing progress with nonlinear diffusion dynamics.
The evolution of the response-to-stimulus map is thus modeled by two linked diffusive modules (2LDM) representing the neuronal populations involved in the valuation-and-decision circuit of decision making (Figure 1). Diffusion models are naturally appropriate for describing the accumulation of evidence over time [see here]. This allows the computation of the response times (RTs) in valuation and choice, under the hypothesis of an ex-Wald distribution. A nonlinear transfer function integrates the activities of the two layers. The input-output map based on the infomax principle makes the 2LDM consistent with the reinforcement learning approach. Results from simulated likelihood time series indicate that the 2LDM may account for the activity-dependent modulatory component of effective connectivity between the neuronal populations. Rhythmic fluctuations of the estimated gain functions in the delta-beta bands also support the compatibility of the 2LDM with the neurobiology of DM.

Figure 1. The two-layered diffusion model (2LDM) for decision making. 


Both stages (valuation and choice) are affected by noise. In the valuation stage the critical threshold indicates the firing rate of the neuronal populations involved, to which the expected reward would correspond. The outputs of this stage are then the differences between the observed neuronal responses to the stimuli provided by the alternatives and the target. These measurements enter the next stage, where the decision is taken so as to optimize some utility criterion (reward). Hence, the attainment of the threshold in the decision stage indicates the preferred alternative. Feedback information flows back from the decision stage to elicit the adaptation of the boundary in the valuation layer. In this way, a reinforcement mechanism governs the competition between the alternatives and the valuation is biased toward the alternative most likely to be rewarded.
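The two-stage dynamics described above can be sketched as a pair of linked diffusion processes. This is a minimal illustrative simulation, not the authors' implementation: the thresholds, noise level, and the simple drift coupling between the layers are placeholder assumptions standing in for the paper's nonlinear transfer function and feedback rule.

```python
import random

def first_passage(drift, noise, threshold, dt=0.001, max_t=10.0, rng=None):
    """Simulate a drift-diffusion accumulator until it first hits +threshold.
    Returns the first-passage time (capped at max_t if the bound is missed)."""
    rng = rng or random
    x, t = 0.0, 0.0
    while x < threshold and t < max_t:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return t

def two_layer_dm(evidence_a, evidence_b, v_threshold=1.0, d_threshold=1.0,
                 noise=0.5, seed=0):
    """Hypothetical 2LDM sketch: a valuation layer produces first-passage
    times for each alternative; their difference drives the decision layer."""
    rng = random.Random(seed)
    # Valuation layer: one noisy accumulator per alternative.
    rt_a = first_passage(evidence_a, noise, v_threshold, rng=rng)
    rt_b = first_passage(evidence_b, noise, v_threshold, rng=rng)
    # The valuation output biases the drift of the decision layer:
    # a faster valuation of A pushes the decision accumulator toward A.
    decision_drift = rt_b - rt_a
    # Decision layer: a single signed accumulator between two bounds.
    x, t, dt = 0.0, 0.0, 0.001
    while abs(x) < d_threshold and t < 10.0:
        x += decision_drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    choice = 'A' if x >= 0 else 'B'
    total_rt = max(rt_a, rt_b) + t
    return choice, total_rt

choice, rt = two_layer_dm(2.0, 0.5, seed=1)
```

With stronger evidence for one alternative the valuation layer reaches its bound sooner, which biases the decision layer toward that alternative, mimicking the reinforcement bias discussed above.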




The exploration of the neurobiological bases of the cognitive processes that underlie decision-making (DM) has been the object of many studies in neurophysiology and computational neuroscience [1-8]. By tracing the neuronal circuits involved in DM it is possible to build biophysically reliable models linking the dynamics of neuronal activity to decisional behavior. Indeed, DM is a process that involves different areas of the brain. These regions include the cortical areas that are supposed to integrate evidence supporting alternative actions, and the basal ganglia (BG, see here the basal ganglia circuit), which are hypothesized to act as a central switch in gating behavioral requests [9-15]. In natural environments several sensory stimuli produce different alternatives and hence demand the evaluation of different possible responses, i.e. a variety of behaviors. In other terms, a selection problem also arises [12], whereby the (probability) distribution of the correct response has to take control of the individual's motor plant [16]. Action selection would then resolve a conflict among decisional centers throughout the brain. A central switch that weighs the urgency and opportunity of specific responses to the stimuli is an optimal solution in computational terms, and is physiologically plausible if the BG are taken as the neural basis for that switch. Accordingly, the BG gather input from all over the brain and, by sending tonic inhibition to midbrain and brain stem targets involved in motor actions, block cortical control over these actions [9,10,17]. Therefore, the inhibition of the neurons in the output nuclei, caused by BG activity, disinhibits their targets and the corresponding actions are selected.
This model ultimately explains that, in DM among alternative options, the cortical areas associated with the alternatives integrate their corresponding evidence, whilst the BG, acting as a central switch, evaluate the evidence and facilitate the best supported responses (behaviors) [16]. Many studies have also reported a significant increase in the firing rate of the neurons of the cortical areas representing the alternative choices during DM in visual tasks. The increase in firing rates would thus reflect the accumulation of evidence (i.e., information) related to the alternatives [13,14]. Physiologically grounded models of DM consider connections from neurons representing stimuli to the appropriate cortical neurons representing decisions (e.g., motor actions).


There is a theoretical linkage between the 2LDM and the well-recognized integrate-and-fire attractor network model [18-21], since both models rely on nonlinear diffusive dynamics. The major difference rests in the expected dynamics of the basal ganglia during the decision-making process, which we considered driven by nonlinear rather than linear patterns. Furthermore, the characterization of the input-output map in terms of the infomax principle ultimately makes the 2LDM an entropy-thresholding algorithm, where the model's parameters (threshold, diffusion noise, and drift) should be tuned to maximize the mutual information between the representations they engender and the inputs that feed the layers. This is consistent with Q-learning adaptation, since learning the "best" action on the two thresholds to maximize the cumulative entropy is equivalent to learning the optimal behavior which maximizes the reward [22,23]. Nonlinearity in the 2LDM is given by static linear-nonlinear functions that express the gain of the input-output map, so overcoming the theoretical weakness inherent in the canonical diffusion models, which assume that momentary evidence is accumulated continuously and at constant rate, that is, linearly, until a decision threshold is reached. This way of modeling nonlinear dynamics is not a novelty in neuroscience, because it fits the Volterra series representation which, through the first- and second-order kernels, estimates the driving and modulatory influence that one population exerts on the other. The slope of the sigmoidal transfer function yields information about the effective connectivity between the neuronal populations, because it is a proxy of the Volterra kernels [24].

REFERENCES


  1. Platt, M.L. & Glimcher, P.W. (1999). Neural correlates of decision variables in parietal cortex. Nature 400, 233–238.
  2. Sugrue, L.P., Corrado, G.S. & Newsome, W.T. (2004). Matching behavior and the representation of value in the parietal cortex. Science 304, 1782–1787.
  3. Tom, S.M., Fox, C.R., Trepel, C. & Poldrack, R.A. (2007).The neural basis of loss aversion in decision-making under risk. Science 315, 515–518.
  4. Plassmann, H., O’Doherty, J.P. & Rangel, A. (2007). Orbitofrontal cortex encodes willingness to pay in everyday economic transactions. J Neurosci 27, 9984–9988.
  5. Knutson, B., Taylor, J., Kaufman, M., Peterson, R. & Glover, G. (2005). Distributed neural representation of expected value. J Neurosci 25, 4806–4812.
  6. Boorman, E.D., Behrens, T.E.J., Woolrich, M.W. & Rushworth, M.S.F. (2009). How green is the grass on the other side? Frontopolar cortex and the evidence in favor of alternative courses of action. Neuron 62, 733–743.
  7. Blair, K., Marsh, A., Morton, J., Vythilingam, M., Jones, M. ,Mondillo, K., Pine, D.C., Drevets, W.C., Blair, J.R. (2006). Choosing the lesser of two evils, the better of two goods: specifying the roles of ventromedial prefrontal cortex and dorsal anterior cingulate in object choice. J Neurosci 26, 11379–11386.
  8. Kable, J.W. & Glimcher, P.W. (2007). The neural correlates of subjective value during intertemporal choice. Nat Neurosci 10: 1625-1633.
  9. Chevalier, G., Vacher, S., Deniau, J. M., Desban, M. (1985). Disinhibition as a basic process in the expression of striatal functions. I. The striato-nigral influence on tectospinal/tecto-diencephalic neurons. Brain Res, 334(2), 215-226.
  10. Deniau, J. M., Chevalier, G. (1985). Disinhibition as a basic process in the expression of striatal functions. II. The striato-nigral influence on thalamocortical cells of the ventromedial thalamic nucleus. Brain Res, 334(2), 227-233.
  11. Medina, L., Reiner, A. (1995). Neurotransmitter organization and connectivity of the basal ganglia in vertebrates: implications for the evolution of basal ganglia. Brain Behav Evol, 46(4-5), 235-258.
  12. Redgrave, P., Prescott, T. J., Gurney, K. (1999). The basal ganglia: a vertebrate solution to the selection problem? Neurosci 89(4), 1009-1023.
  13. Schall, J. D. (2001). Neural basis of deciding, choosing and acting. Nat Rev Neurosci, 2(1), 33-42.
  14. Shadlen, M. N., Newsome, W. T. (2001). Neural basis of a perceptual decision in the parietal cortex (area LIP) of the rhesus monkey. J Neurophysiol, 86(4), 1916-1936.
  15. Smith, Y., Bevan, M. D., Shink, E., Bolam, J. P. (1998). Microcircuitry of the direct and indirect pathways of the basal ganglia. Neurosci, 86(2), 353-387.
  16. Bogacz, R., Gurney, K. (2007). The basal ganglia and cortex implement optimal decision making between alternative actions. Neural Comput 19(2):442-477.
  17. Parent, A., Hazrati, L. N. (1995). Functional anatomy of the basal ganglia. I. The cortico-basal ganglia thalamocortical loop. Brain Res Brain Res Rev, 20(1), 91-127.
  18. Deco, G., Rolls, E.T. & Romo, R. (2009). Stochastic dynamics as a principle of brain function. Prog Neurobiol, 88(1), 1–16.
  19. Rolls, E.T. (2014). Emotions and Decision-Making Explained. Oxford University Press.
  20. Deco, G., Rolls, E.T., Albantakis, L. & Romo, R. (2013). Brain mechanisms for perceptual and reward-related decision-making. Prog Neurobiol, 103, 194–213.
  21. Insabato, A., Pannunzi, M., Rolls, E.T. & Deco, G. (2010). Confidence-related decision making. J Neurophysiol, 104(1), 539–547.
  22. Yin, P. (2002). Maximum entropy-based optimal threshold selection using deterministic reinforcement learning with controlled randomization. Signal Processing, 82(7), 993–1006.
  23. Kapur, J.N., Sahoo, P.K. & Wong, A.K.C. (1985). A new method for gray-level picture thresholding using the entropy of the histogram. Computer Vision, Graphics, and Image Processing, 29(3), 273–285.
  24. Ostojic, S. & Brunel, N. (2011). From spiking neuron models to linear-nonlinear models. PLoS Comput Biol, 7(1), e1001056.

Friday, 8 August 2014

Automatic eye fixations identification based on analysis of variance and covariance

Abstract:

Eye movement is the simplest and most repetitive movement that enables humans to interact with the environment. Common daily activities, such as reading a book or watching television, involve this natural activity, which consists of rapidly shifting our gaze from one region to another. In clinical applications, the identification of the main components of eye movement during visual exploration, such as fixations and saccades, is the objective of eye-movement analysis; however, in patients affected by motor control disorders the identification of fixations is not trivial.

This work [download] presents a new fixation identification algorithm based on the analysis of variance and covariance: the main idea was to use bivariate statistical analysis comparing the variance over x and y to identify fixations. We describe the new algorithm and compare it with the common fixation algorithm based on dispersion. To demonstrate the performance of our approach, we tested the algorithm on a group of healthy subjects and on patients affected by motor control disorders.



Comments:

In the last decade a large effort has been made to identify fixations [1-4]; however, it is not yet easy to provide a formal mathematical definition of fixation: some authors have demonstrated that fixation parameters depend strictly on the type of task [5-8].

We suggested a formal definition of fixations based on the analysis of variance between the x axis and the y axis; the implemented algorithm builds on the dispersion algorithm I-DT developed by Salvucci and Goldberg [9], integrating it with a statistical test (F-test) and covariance.

The main advantage of the proposed technique is to provide a new definition of fixation which does not require the setting of any critical parameter or threshold, and provides a probability value of correctness.
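The variance-comparison idea can be sketched as follows. This is an illustrative simplification, not the published algorithm: the critical values below are placeholders, whereas the published method derives its critical value from the F distribution rather than fixing it by hand.

```python
from statistics import mean, variance

def f_ratio(xs, ys):
    """F statistic comparing the gaze-coordinate variances along x and y."""
    vx, vy = variance(xs), variance(ys)
    return max(vx, vy) / max(min(vx, vy), 1e-12)

def covariance(xs, ys):
    """Sample covariance between the x and y gaze coordinates."""
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

def is_fixation(xs, ys, f_crit=3.0, cov_max=5.0):
    """Hypothetical windowed test: label a window of gaze samples a fixation
    when the x and y variances are statistically comparable (F below a
    critical value) and the x-y covariance is small, i.e. the samples form
    a compact cloud with no preferred direction of drift."""
    return f_ratio(xs, ys) < f_crit and abs(covariance(xs, ys)) < cov_max

# A compact cloud of samples (fixation-like) vs a linear sweep (saccade-like).
fix_window = is_fixation([100, 101, 99, 100, 102, 98],
                         [200, 199, 201, 200, 198, 202])
sweep_window = is_fixation([100, 120, 140, 160, 180, 200], [200] * 6)
```

A saccade produces a strongly anisotropic sample cloud, so the F ratio between the two axes explodes and the window is rejected as a fixation.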



Relationship between the modified Rankin Scale and the Barthel Index in the process of functional recovery after stroke

The modified Rankin Scale (mRS) and the Barthel Index (BI) are the most common clinimetric instruments for measuring disability after stroke.

This study [here] investigated the relationship between the BI and the mRS at multiple time points after stroke. The BI, which is a widely used instrument for longitudinal follow-up post-stroke, was used as the reference to determine the effect of time on the sensitivity of the mRS in differentiating functional recovery.

Methods. Ninety-two patients with first stroke and hemispheric brain lesion were evaluated using the BI and mRS at 10 days, 3 and 6 months. The Kruskal-Wallis test was applied to examine median differences in BI among the mRS levels at 10 days, 3 and 6 months, with Dunn's correction for multigroup comparison. The Mann-Whitney test was used to compare median differences in BI scores between two aggregations of mRS grades (mRS=0-2, mRS=3-5) at the same time periods after stroke.

Results. BI score distributions amongst mRS grades overlapped at 10 days, differentiating only between extreme grades (no disability vs severe disability). At 3 months, independent patients with slight disability could be distinguished from dependent patients with marked disability. At 6 months, grades 2 and 3 no longer overlapped, differentiating independence (class 0-2) from dependence (class 3-5). The largest transition to an independent functional status occurred from grade 4, at 3 months.

Conclusion. Maximum sensitivity of the mRS in differentiating functional recovery is reached at six months post-stroke.
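The Mann-Whitney comparison used in the study's Methods can be illustrated with a direct pair-counting implementation; the BI scores below are hypothetical, not the study's data.

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample a versus sample b, computed from
    the direct pair-counting definition: U is the number of pairs (a_i, b_j)
    with a_i > b_j, counting ties as one half."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical BI scores for the two mRS aggregations at one time point.
bi_independent = [95, 100, 90, 85, 100]   # mRS 0-2
bi_dependent = [40, 55, 20, 60, 35]       # mRS 3-5

u_stat = mann_whitney_u(bi_independent, bi_dependent)
```

When the two aggregations separate completely, as in this toy example, U reaches its maximum of n·m (here 5·5 = 25), which is what "no overlap" between grade classes means in terms of the test statistic.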





Sunday, 23 March 2014

MULTICOLLINEARITY IN LINEAR MODELS

1. Introduction

One of the fundamental assumptions of the general linear model

1) y=Xβ + u

where u is a vector of zero-mean, uncorrelated stochastic errors, is that the data matrix X of order n × k has rank k, that is, the explanatory variables are not linearly dependent. This is because the least squares solution

2) b = (X'X)^(-1) X'y

requires the inversion of X'X, which would not be possible if the rank of X were < k (because X'X becomes singular). If some or all of the explanatory variables are perfectly collinear, then system 1) is said to be affected by "extreme" multicollinearity.

Problems in the calculation of the solution 2) can emerge even when the collinearity among the variables is not perfect. The main effects of collinearity are [1]:


  • Lowered precision of the estimates, which makes it difficult, if not impossible, to separate the relative influences of the different variables. Large errors not only may spoil the estimates of the coefficients, but the estimates may also become correlated with each other.
  • Increased standard errors of the coefficients, which causes adverse selection of the variables. That is, the variables of the model may be non-significant even in the presence of a high R2 and of a significant overall regression.
  • The coefficients can be either "wrong" in sign or implausible by an order of magnitude.
  • Ill-conditioning. Small changes in the data produce large differences in the estimated parameters. Hence, the estimates of the coefficients become more sensitive to particular sample subsets, such that a few additional observations may drastically change some coefficients.


2. Diagnostics of multicollinearity

a) Examination of the correlation matrix of the regressors: high correlations (say, >0.9) may indicate the presence of collinearity. However, with this method one can identify problems only for pairs of variables, whilst the doubt remains about what to do if more than two variables create the multicollinearity.

b) An alternative strategy is to run "auxiliary regressions" between a "suspect" variable (say, Xj) and the other k-1 explanatory variables. If the resulting coefficient of determination (Rj2) is close to 1, the regression coefficient of that variable in the original regression is affected by multicollinearity.
An indicator that immediately flags the variables that generate multicollinearity is the VIF (Variance Inflation Factor):

3) VIFj=1/(1-Rj2)

So, if VIFj > 10, then Rj2 > 0.9, therefore the variable Xj is strongly correlated with one or more of the other explanatory variables. On the contrary, if Xj is not linearly dependent on the other k-1 variables, then Rj2 = 0 and VIFj = 1. In the presence of non-perfect multicollinearity, VIFj measures to what extent the increase in the variance of the estimated coefficient bj is due to multicollinearity.
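Equation 3) can be computed directly from the auxiliary regression described above. A sketch with NumPy follows; the simulated regressors are illustrative (one column is built to be nearly collinear with another).

```python
import numpy as np

def vif(X, j):
    """Variance inflation factor of column j of the regressor matrix X,
    computed from the auxiliary regression of X[:, j] on the other columns:
    VIF_j = 1 / (1 - R_j^2), as in equation 3)."""
    y = X[:, j]
    Z = np.delete(X, j, axis=1)
    Z = np.column_stack([np.ones(len(Z)), Z])      # add an intercept
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)   # auxiliary OLS fit
    resid = y - Z @ beta
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
    return 1.0 / (1.0 - r2)

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)          # independent of x1
x3 = x1 + 0.01 * rng.normal(size=200)  # nearly collinear with x1
X = np.column_stack([x1, x2, x3])
```

Here vif(X, 1) stays close to 1 because x2 is independent of the others, while vif(X, 0) is very large because x3 almost reproduces x1, matching the VIFj > 10 rule of thumb above.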

c) Method of Eigenvalues. The determinant of the matrix X'X equals the product of its eigenvalues:

4) det(X'X) = λ1·λ2·…·λk

A det(X'X) close to 0 means that one or more eigenvalues are close to zero. Therefore we can calculate the condition index:

5) K=(λmax/ λmin)

When the columns of X are orthogonal, K = 1, and K increases with the collinearity between the variables; experimental studies have shown that K > 20 is a symptom of multicollinearity.
Furthermore, to identify the variable(s) affected by collinearity we can calculate the condition number for each regressor as:


6) Kj=√(λmax/λj ) 

d) Contradiction between the t-tests and the F-test of joint significance. This is not a necessary condition for the existence of the problem of multicollinearity, but it is a symptom: there is a high value of the coefficient of determination (and hence the regression as a whole is significant) but non-significant t-test values for the individual regression coefficients. In addition, the partial correlations between the regressors are low.
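The eigenvalue method of point c) can be sketched in a few lines: compute the eigenvalues of X'X and return each regressor's condition number Kj = √(λmax/λj) from equation 6).

```python
import numpy as np

def condition_indices(X):
    """Condition numbers K_j = sqrt(lambda_max / lambda_j) from equation 6),
    computed from the eigenvalues of X'X (ascending order from eigvalsh)."""
    eigvals = np.linalg.eigvalsh(X.T @ X)
    lam_max = eigvals[-1]
    return np.sqrt(lam_max / eigvals)
```

For an orthonormal X the eigenvalues of X'X are all equal and every index is 1; with a nearly collinear column one eigenvalue collapses toward zero and the largest index blows up far past the K > 20 warning level.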

3. How to fix multicollinearity


To address multicollinearity, several methods may be suitable:

  • adding new observations that make X a full-rank matrix (even if this remedy is not always applicable);
  • excluding from the model either the correlated variables or those for which the estimated variance of the regression coefficient is high;
  • transforming the variables that cause multicollinearity. This technique is particularly appropriate in the case of exact multicollinearity: one can substitute variables and estimate the new parameters by Ordinary Least Squares, obviously abandoning the idea of estimating the original parameters;
  • using principal component regression (PCR): the principal components are extracted from the original regressors (these new variables are by definition orthogonal to each other) and the response variable is regressed on them;
  • using ridge regression.

3.1. Rescaling the regressors

An easy way to prevent or at least reduce the effect of multicollinearity is to rescale the variables with respect to their means. A regression equation with an intercept is often misinterpreted in the context of multicollinearity [2]. Mean-centering facilitates the interpretation of the intercept term, which becomes the expected value of the outcome y when the explanatory variables are set to their mean values. Once the variables have been centered, the intercept has no effect on the collinearity of the other variables [3].
Applying other transformations, introducing additive constants or using uncentered variables can have large effects [4], especially in regressions with higher-order terms, where the mean levels of the predictors may shift the covariance between the interaction terms and each component. Rescaling changes the means so that the regressors' covariances also change, yielding different regression weights for the predictors in the higher-order function [2].
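The effect of mean-centering on higher-order terms can be seen numerically. The simulated predictor below is hypothetical: a variable far from zero is almost perfectly collinear with its own square, while its centred copy is not.

```python
import numpy as np

def corr(a, b):
    """Pearson correlation between two 1-D arrays."""
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(42)
x = rng.normal(loc=10.0, scale=1.0, size=500)  # predictor with mean far from zero

raw = corr(x, x ** 2)        # x and x^2 are almost perfectly collinear
xc = x - x.mean()            # mean-centred copy of the same predictor
centred = corr(xc, xc ** 2)  # collinearity with the squared term collapses
```

Because x ranges around 10, x² is locally an almost linear function of x, so the raw correlation is near 1; after centering, the symmetric spread around zero removes that linear component, which is exactly why centering tames collinearity in models with interaction or polynomial terms.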




Friday, 14 February 2014

Global cooling. The crazy winter 2013-2014

Another wave of frost is hitting the United States. The bad weather has caused paralysis of road traffic (with at least 13 deaths from traffic accidents) and the cancellation of about 6,000 flights in the last two days.
The most dramatic situation is reported in North Carolina, where snow and ice have created an immense traffic jam in both directions of the highway, with thousands of people trapped and hundreds of cars abandoned in the middle of the roadway.

Just as had happened two weeks ago near Atlanta, Georgia. Over half a million homes and businesses were left without electricity due to outages caused by snow and ice. President Barack Obama has declared a state of natural disaster in Georgia and South Carolina, sending federal aid. And the spate of bad weather is moving northeast, where heavy snowfalls are expected in Boston, the capital Washington, and New York.

The scenario is rather catastrophic. In the capital all federal offices have been closed. Schools have been closed everywhere. At least 800 thousand people were left in the dark and cold by the blackout. New York woke up blanketed in white, and the snow has caused delays mainly on subway lines, although some schools and many offices opened regularly. In Washington, completely paralyzed, the bus service has also been suspended for what is expected to be the worst snowfall of the year. The governors of the states under emergency have urged citizens to avoid traveling unless absolutely necessary.

At the same time, prolonged and violent storm fronts are flooding Europe. Between storms and flash floods, the UK has to deal with the most disastrous winter in modern history. Tens of thousands of hectares are flooded and thousands of people displaced. The damage is huge and likely to rise hour by hour. The problem is that other storms are forming and will strike in the next few days, so we have to prepare for other disasters. The usual suspect is always the same, namely the Canadian lobe of the polar vortex, which in addition to flooding half of Europe (including Italy) has deprived the continent of the winter season.
Only the North African anticyclonic comebacks have spared us the worst, but in the rest of western Europe the nightmare continues (none of the respondents on British TV remembers such a rainy winter).

We have already talked about the ruinous effects of the widespread, totally uncritical approach to the climate change issue (here), and we have also analyzed and commented on some air temperature, ice sheet temperature and atmospheric carbon dioxide data (here), which unveil the fallacy of the global warming religion.
The current events should lead us to think that the dreaded global warming scenario is unrealistic. Rather, we should begin to consider the opposite hypothesis.
Dr. J. Kirkby [head of the CLOUD Experiment at CERN in Geneva] warns that another Maunder Minimum is possible within a short time (here).




After all, there are plenty of clues about a possible global cooling.

1.Gulf Stream and Atlantic meridional overturning circulation (AMOC). 

Dr. D.A. Smeed and colleagues have recently published (here) a study on the decline of the Atlantic Meridional Overturning Circulation (AMOC).
The AMOC has been observed at a latitude of 26°N since April 2004.
The AMOC and its components are constantly monitored by combining a transatlantic array of moored instruments with submarine-cable-based measurements of the Gulf Stream and satellite-derived Ekman transport. The series has recently been extended to October 2012 and the results show a downward trend since 2004. From April 2008 to March 2012, the AMOC was on average 2.7 Sv (1 Sv = 10^6 m^3 s^-1) weaker than in the first four years of observation (at 95% confidence the reduction is 0.3 Sv or more). The Ekman transport reduced by about 0.2 Sv and the Gulf Stream by 0.5 Sv, but most of the change (2.0 Sv) is due to the mid-ocean geostrophic flow.
The change in the average geostrophic flow represents a strengthening of the southward oceanic flow above the thermocline. The increased southward flow of warm waters is balanced by a decrease in the southward flow of North Atlantic deep water below 3000 m.
The southward transport of lower North Atlantic Deep Water slowed by 7% per year (at 95% confidence the slowdown is greater than 2.5% per year).
The authors have interpreted the AMOC changes in relation to the climate of the North Atlantic. Although model simulations relate the forecast AMOC decrease to the increase of greenhouse gases, on the order of 0.5 Sv per decade in the 21st century, their findings show that the actual change is by far greater. The amplitude of the AMOC change suggests that it mainly depends on some periodic variation rather than on anthropogenic factors.

2.Solar magnetic field.

The solar magnetic field is now close to flipping, and this would lead the Sun into a long hibernation. Several main points make us understand clearly that we are at the point of no return:

2.1. The migration of the coronal holes towards the Sun's poles.
Coronal holes (CHs) are always located in the polar regions during the solar minimum. As the solar cycle progresses, the latitude of the CHs runs down towards the equator in the phase of maximum, and then back up again when the star returns to the minimum.

Figure 1. Coronal holes and active region map. 
Looking at the current location of the CHs (Figure 1), we can see how they are moving back to their respective geographic poles (blue spot of positive polarity in the northern hemisphere and red spot of negative polarity in the southern hemisphere). So here we are: the coronal holes migrate towards the poles, and with them they bring their polarized magnetic fields. The moving apart of the two polarities will decrease the overall electromagnetic interaction, which slowly but progressively will run out, thus freeing our star of spots.


2.2. The Global Magnetic Field (GMF) has stabilized, while that of the northern hemisphere is regressing.
This is a summary of what appears from a reading of the magnetic poles, where the North Field (NF) has fallen back from +11 to +7 in 6 months (remember that after the reversal it should progress steadily instead of regressing). It is a very serious signal, confirmed by the near absence of magnetic activity in the northern hemisphere.
The regression of the northern hemisphere has, in turn, brought the whole GMF to a stall (+12). The stalling of the entire magnetic field is visible to everyone: since June 2013, i.e. since the reversal, it has risen by just 12 points, only one third compared to other solar cycles, with a heavy stop in the last 2 months.
At this point only the southern hemisphere remains active. But we must keep in mind that the activity in this hemisphere runs with a 2-year delay compared to the northern one.

3. The contraction of the heliosphere.
The solar wind is the thermometer that measures the (magnetic) health of the star.

In recent months there has been a sharp contraction of the heliosphere. Except in the presence of Earth-facing coronal holes, the solar wind has remained well below 400 km/s, with daily minima of 250-260 km/s (typical speeds of the deep minima). This also makes us understand the real intensity of this solar cycle.



Pieter Bruegel. New York Harbor, 1780 




