Computational_Biology
Scooped by Gabriele Scheler

Neuromodulation Influences Synchronization and Intrinsic Read-out

The roles of neuromodulation in a neural network, such as a cortical microcolumn, are still incompletely understood. Neuromodulation influences neural processing through presynaptic and postsynaptic regulation of synaptic efficacy. Modulating synaptic efficacy is an effective way to rapidly alter network density and topology, and we show that altering topology together with density affects network synchronization. Fast synaptic efficacy modulation may therefore influence the amount of correlated spiking in a network. Neuromodulation also regulates the ion channels underlying intrinsic excitability, which alters a neuron's activation function. We show that the degree of synchronization in a network determines how these intrinsic properties are read out: highly synchronous input drives neurons such that differences in intrinsic properties disappear, while asynchronous input lets intrinsic properties determine output behavior. Altering network topology can therefore shift the balance between intrinsically and synaptically driven network activity. We conclude that neuromodulation may allow a network to shift between a more synchronized transmission mode and a more asynchronous intrinsic read-out mode.
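The synchrony-dependent read-out claim lends itself to a toy simulation. The sketch below is not the authors' network model: it uses simple threshold units with heterogeneous lognormal gains, a shared pulsed input for the synchronous mode, and independent exponential input for the asynchronous mode; the threshold, input statistics, and all parameter values are assumptions made for illustration.

```python
# Toy illustration (assumed parameters, not the paper's model): under
# strong synchronous drive, every neuron crosses threshold together and
# intrinsic gain differences vanish from the output rates; under
# asynchronous drive, each neuron's rate reflects its intrinsic gain.
import numpy as np

rng = np.random.default_rng(0)
n, T = 200, 5000
gains = rng.lognormal(0.0, 0.5, size=n)   # heterogeneous intrinsic gains
theta = 1.0                               # spike threshold (assumed)

# Synchronous mode: shared strong pulses drive every neuron past threshold.
x_sync = rng.choice([0.0, 5.0], p=[0.8, 0.2], size=(T, 1))
rates_sync = ((gains * x_sync) > theta).mean(axis=0)

# Asynchronous mode: independent, moderate input; crossing threshold
# now depends on each neuron's intrinsic gain.
x_async = rng.exponential(1.0, size=(T, n))
rates_async = ((gains * x_async) > theta).mean(axis=0)

print("rate spread (sync): ", rates_sync.std())   # ~0: gains are masked
print("rate spread (async):", rates_async.std())  # larger: gains are read out
print("gain-rate corr (async):", np.corrcoef(gains, rates_async)[0, 1])
```

In this caricature the across-neuron spread of firing rates collapses under synchronous drive and tracks the intrinsic gains under asynchronous drive, mirroring the transmission vs. intrinsic read-out modes described above.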
Scooped by Gabriele Scheler

PLOS Computational Biology: Ribosome Traffic on mRNAs Maps to Gene Ontology: Genome-wide Quantification of Translation Initiation Rates and Polysome Size Regulation

Author Summary

Gene expression regulation is central to all living systems. Here we introduce a new framework and methodology to study the last stage of protein production in cells, where the genetic information encoded in the mRNAs is translated from the language of nucleotides into functional proteins. The process, on each mRNA, is carried out concurrently by several ribosomes; like cars on a small countryside road, they cannot overtake each other, and can form queues. By integrating experimental data with genome-wide simulations of our model, we analyse ribosome traffic across the entire Saccharomyces cerevisiae genome, and for the first time estimate mRNA-specific translation initiation rates for each transcript. Crucially, we identify different classes of mRNAs characterised by different ribosome traffic dynamics. Remarkably, this classification based on translational dynamics, and the evaluation of mRNA-specific initiation rates, map onto key gene ontological classifications, revealing evolutionary optimisation of translation responses to be strongly influenced by gene function.
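The queueing picture above, ribosomes as cars that cannot overtake, is commonly formalised as a totally asymmetric simple exclusion process (TASEP). Below is a minimal TASEP sketch for a single mRNA, assuming unit-size ribosomes, a single initiation rate alpha, uniform elongation, and a termination rate beta; these are illustrative choices, not the paper's fitted genome-wide parameters, and a realistic model would also need the ribosome footprint size and codon-specific elongation rates.

```python
# Minimal TASEP sketch of ribosome traffic on one mRNA (illustrative
# parameters, not the paper's fitted model): ribosomes initiate at rate
# alpha, hop forward one codon at a time without overtaking, and
# terminate at rate beta.
import numpy as np

def simulate_tasep(length=200, alpha=0.1, beta=1.0, n_steps=200_000, seed=0):
    """Random-sequential-update TASEP; returns the mean occupancy per codon."""
    rng = np.random.default_rng(seed)
    lattice = np.zeros(length, dtype=bool)  # True = codon occupied by a ribosome
    density = np.zeros(length)
    for _ in range(n_steps):
        i = rng.integers(-1, length)        # pick a random site; -1 = initiation
        if i == -1:
            if not lattice[0] and rng.random() < alpha:
                lattice[0] = True           # a new ribosome loads onto codon 0
        elif i == length - 1:
            if lattice[-1] and rng.random() < beta:
                lattice[-1] = False         # the leading ribosome terminates
        elif lattice[i] and not lattice[i + 1]:
            lattice[i], lattice[i + 1] = False, True  # elongation step
        density += lattice
    return density / n_steps

# Sweeping alpha shows the initiation-limited regime (sparse traffic,
# polysome size grows with alpha) giving way to elongation-limited queues.
for alpha in (0.05, 0.2, 0.8):
    print(f"alpha={alpha}: mean polysome size =", simulate_tasep(alpha=alpha).sum().round(1))
```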

Gabriele Scheler's comment, February 23, 2013 6:37 PM
Ribosome Translation Initiation Rates
Rescooped by Gabriele Scheler from Papers

Logarithmic distributions prove that intrinsic learning is Hebbian.

In this paper, we document lognormal distributions of spike rates, synaptic weights, and intrinsic excitability (gain) for neurons in various brain areas, including auditory and visual cortex, hippocampus, cerebellum, striatum, and midbrain nuclei. We find a remarkable consistency of heavy-tailed, specifically lognormal, distributions for rates, weights, and gains across all of these areas.
Differences in connectivity (strongly recurrent in cortex vs. feed-forward in striatum and cerebellum), in neurotransmitter (GABA in striatum, glutamate in cortex), and in activation level (low in cortex, high in Purkinje cells and midbrain nuclei) turn out to be irrelevant to this feature. Lognormal distribution of weights and gains thus appears as a functional property that is present everywhere.
Secondly, we created a generic neural model to show that Hebbian learning will create and maintain lognormal distributions.
With this model we show that not only synaptic weights but also intrinsic gains must undergo strong Hebbian learning in order to produce and maintain the experimentally observed distributions. This settles a long-standing question about the type of plasticity exhibited by intrinsic excitability.
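One way to see why Hebbian plasticity produces lognormal statistics: if each plasticity event multiplies a weight or gain by a factor close to one, its logarithm performs a random walk, and by the central limit theorem the accumulated log-factors are approximately Gaussian, i.e. the weight itself is lognormal. The sketch below illustrates only this multiplicative-growth argument; the update rule, the homeostatic decay term, and all parameters are assumptions, not the paper's generic neural model.

```python
# Toy illustration of multiplicative (weight-proportional) Hebbian updates:
# w <- w * exp(eta * coactivity - decay * log w). In log-space this is an
# Ornstein-Uhlenbeck-like random walk, so the stationary distribution of w
# is lognormal. All parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_synapses, n_updates = 10_000, 2_000
eta, decay = 0.05, 0.01

w = np.full(n_synapses, 1.0)  # identical initial weights (or gains)
for _ in range(n_updates):
    coactivity = rng.normal(0.0, 1.0, n_synapses)  # fluctuating pre/post correlation
    w *= np.exp(eta * coactivity - decay * np.log(w))

log_w = np.log(w)
print("std(log w):", log_w.std())  # log-space is ~Gaussian
print("skew(w):   ", ((w - w.mean()) ** 3).mean() / w.std() ** 3)  # heavy right tail
```

Under an additive rule (w += eta * coactivity) the same experiment yields a Gaussian rather than a lognormal, which matches the abstract's reasoning that lognormal weight and gain distributions are a signature of Hebbian-style multiplicative plasticity.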


Via Complexity Digest