Editorial - (2021) Volume 10, Issue 1

Neural Networks and Probabilistic Methods
Xiaochen H*
 
School of Public Policy and Administration, Center for Administration and Complexity Science of Xi'an Jiaotong University, Xi'an, Shaanxi Province, 710049, China
 
*Correspondence: Xiaochen H, School of Public Policy and Administration, Center for Administration and Complexity Science of Xi'an Jiaotong University, Xi'an, Shaanxi Province, 710049, China, Email:

Received: 01-Jan-2021; Published: 28-Jan-2021, DOI: 10.35248/2090-4908.21.10.e201

Description

Unsupervised learning (UL) is a type of algorithm that learns patterns from untagged data. The hope is that, through mimicry, the machine is forced to build a compact internal representation of its world. In contrast to Supervised Learning (SL), where data is tagged by a human, e.g. as "car" or "fish", UL exhibits self-organization that captures patterns as neuronal predilections or probability densities. The other levels in the supervision spectrum are Reinforcement Learning, where the machine is given only a numerical performance score as its guidance, and Semi-supervised Learning, where a smaller portion of the data is tagged. Two broad families of UL methods are neural networks and probabilistic methods.

Two of the main methods used in unsupervised learning are principal component analysis and cluster analysis. Cluster analysis is used in unsupervised learning to group, or segment, datasets with shared attributes in order to extrapolate algorithmic relationships. Cluster analysis is a branch of machine learning that groups data that has not been labelled, classified or categorized. Rather than responding to feedback, cluster analysis identifies commonalities in the data and reacts based on the presence or absence of such commonalities in each new piece of data. This approach helps detect anomalous data points that do not fit into either group.
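As a concrete illustration, the sketch below groups untagged two-dimensional points with k-means clustering; the synthetic data, the choice of scikit-learn's KMeans, and the number of clusters are illustrative assumptions rather than prescriptions from the text.

```python
# A minimal clustering sketch: no labels are supplied, only raw points.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Untagged data: two loose groups of 2-D points (illustrative).
data = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(50, 2)),
    rng.normal(loc=[4, 4], scale=0.5, size=(50, 2)),
])

# Segment the dataset by shared attributes alone.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.labels_[:10])       # cluster assignment of each point
print(kmeans.cluster_centers_)   # the discovered group centroids

# A point far from every centroid can be flagged as anomalous.
dist = np.min(kmeans.transform([[10.0, -10.0]]), axis=1)
print(dist)                      # a large distance suggests an outlier
```

The last two lines show the anomaly-detection use mentioned above: a new point is compared against the commonalities (centroids) the algorithm found, with no feedback involved.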

The only requirement to be called an unsupervised learning strategy is to learn a new feature space that captures the characteristics of the original space by maximizing some objective function or minimizing some loss function. Therefore, generating a covariance matrix is not unsupervised learning, but taking the eigenvectors of the covariance matrix is, because the linear-algebra eigendecomposition operation maximizes the variance; this is known as principal component analysis. Similarly, taking the log-transform of a dataset is not unsupervised learning, but passing input data through multiple sigmoid functions while minimizing a distance function between the generated and resulting data is, and is known as an autoencoder.
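This contrast can be made concrete in a few lines of NumPy: the covariance matrix alone is just a summary statistic, while its eigendecomposition yields the variance-maximizing feature space. A minimal sketch on illustrative data follows; the variable names are assumptions.

```python
# Principal component analysis via eigendecomposition of the covariance matrix.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))       # raw data, one row per sample (illustrative)
X = X - X.mean(axis=0)              # center the data first

# Generating the covariance matrix alone is not yet learning...
cov = np.cov(X, rowvar=False)

# ...but taking its eigenvectors is: the eigendecomposition finds the
# directions that maximize variance (the principal components).
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]   # sort components by explained variance
components = eigvecs[:, order]

Z = X @ components                  # project data onto the new feature space
print(eigvals[order])               # variance captured by each component
```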

In particular, the method of moments is shown to be effective in learning the parameters of latent variable models. Latent variable models are statistical models where, in addition to the observed variables, a set of latent variables also exists that is not observed. A highly practical example of latent variable models in machine learning is topic modeling, a statistical model for generating the words (observed variables) in a document based on the topic (latent variable) of the document. In topic modeling, the words in the document are generated according to different statistical parameters when the topic of the document is changed. It has been shown that the method of moments (tensor decomposition techniques) consistently recovers the parameters of a large class of latent variable models under some assumptions.
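Topic models themselves need third-moment (tensor) machinery, but the flavor of the method of moments can be shown on a smaller latent variable model: an equal-weight mixture of two unit-variance Gaussians, whose unobserved component is the latent variable and whose means are recovered by matching the first two empirical moments. The model and numbers below are illustrative assumptions.

```python
# Method of moments for a simple latent variable model (toy example).
import numpy as np

rng = np.random.default_rng(2)
mu1, mu2 = -2.0, 3.0
z = rng.integers(0, 2, size=100_000)             # latent component, never observed
x = np.where(z == 0, rng.normal(mu1, 1.0, z.size),
                     rng.normal(mu2, 1.0, z.size))

m1 = x.mean()                # E[X]   = (mu1 + mu2) / 2
m2 = (x ** 2).mean()         # E[X^2] = (mu1^2 + mu2^2) / 2 + 1

s = 2.0 * m1                 # mu1 + mu2
q = 2.0 * (m2 - 1.0)         # mu1^2 + mu2^2
p = (s ** 2 - q) / 2.0       # mu1 * mu2, since s^2 = q + 2*mu1*mu2

# The two means are the roots of t^2 - s*t + p = 0.
print(np.sort(np.roots([1.0, -s, p])))           # approximately [-2, 3]
```

No labels for the latent component are ever used; the parameters are recovered purely by matching observed moments to their model-implied expressions, which is the essence of the approach.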

The classical example of unsupervised learning in the study of neural networks is Donald Hebb's principle, that is, neurons that fire together wire together. In Hebbian learning, the connection is reinforced irrespective of an error; it is exclusively a function of the coincidence of action potentials between the two neurons. A similar version that modifies synaptic weights takes into account the time between the action potentials (spike-timing-dependent plasticity, or STDP). Hebbian learning has been hypothesized to underlie a range of cognitive functions, such as pattern recognition and experiential learning.
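A minimal sketch of the plain Hebbian rule for a single linear neuron follows; the learning rate, data, and step count are illustrative assumptions. Note that the unmodified rule is unstable (weights grow without bound), which is why practical variants such as Oja's rule add normalization.

```python
# Plain Hebbian learning: the weight change depends only on the coincidence
# of pre- and post-synaptic activity, with no error signal anywhere.
import numpy as np

rng = np.random.default_rng(3)
eta = 0.01                             # learning rate (assumed)
w = rng.normal(scale=0.1, size=4)      # synaptic weights of one linear neuron

for _ in range(100):
    x = rng.normal(size=4)             # pre-synaptic activity
    y = w @ x                          # post-synaptic activity
    w += eta * y * x                   # Hebb: strengthen coincident firing

print(w)                               # weights shaped purely by coincidence
```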

Citation: Xiaochen H (2021) Neural Networks and Probabilistic Methods. Int J Swarm Evol Comput. 10:e201.

Copyright: © 2021 Xiaochen H. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.