Backprop (BP) uses detailed, unit-specific feedback to train DNNs with remarkable success. Can DNNs be trained as efficiently by broadcasting a global, non-unit-specific learning signal to all units and applying local, Hebbian-like updates to the weights? We show that the answer is yes! ...with some tricks: vector-valued units and nonnegative weights.
Dynamical components analysis (DCA) is a linear dimensionality reduction method for high-dimensional time-series data that extracts dynamical structure by maximizing an information-theoretic objective. DCA outperforms methods such as Principal Components Analysis and Slow Feature Analysis at recovering dynamical structure across several datasets.
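A common information-theoretic objective for this kind of method is the predictive information of the projected time series, i.e. the mutual information between a window of its past and a window of its future, computed in closed form under a Gaussian assumption. The sketch below evaluates that objective for a fixed projection `V`; it is a simplified illustration of the objective being maximized, not the DCA implementation, and the window length `T` and function name are assumptions.

```python
import numpy as np

def gaussian_predictive_info(X, V, T=3):
    """Predictive information I(past; future) of the projection X @ V
    under a Gaussian stationarity assumption.

    X: (time, features) data matrix
    V: (features, d) linear projection
    T: length of the past and future windows
    """
    Y = X @ V
    d = Y.shape[1]
    # Stack each T-step past window with the T-step future that follows it.
    windows = np.array([Y[t:t + 2 * T].ravel()
                        for t in range(len(Y) - 2 * T + 1)])
    cov = np.cov(windows, rowvar=False)
    half = T * d
    # I = 0.5 * (log det C_past + log det C_future - log det C_joint)
    _, ld_past = np.linalg.slogdet(cov[:half, :half])
    _, ld_future = np.linalg.slogdet(cov[half:, half:])
    _, ld_joint = np.linalg.slogdet(cov)
    return 0.5 * (ld_past + ld_future - ld_joint)
```

Maximizing this quantity over projections `V` favors subspaces whose past is informative about their future, i.e. subspaces carrying dynamics, rather than the high-variance (PCA) or merely slow (SFA) directions.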
As computation enters the post-Moore's-law era, novel architectures continue to emerge. With composite, multi-million-connection neuromorphic chips like IBM's TrueNorth, neural engineering has now become a feasible technology in this …