Lucd Announces Groundbreaking Advancements in Reservoir Computing
At SC18, Lucd unveils a next-generation AI approach based on massively scalable Reservoir Computing, with up to six orders of magnitude greater accuracy.
Lucd has implemented a key approach to Reservoir Computing, Echo State Networks, using its patent-pending Distributed Optimistic System. Reservoir computing (RC) is an alternative to Deep Learning Recurrent Neural Networks (RNNs), which are critical to breakthroughs in applications such as natural language processing. RC greatly reduces the computation required in the hidden layers of the network, because the recurrent weights are fixed and only the output layer is trained. Lucd's distributed implementation has been tested on tens of thousands of processors, demonstrating the scalability of RNNs containing millions of neurons.
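For readers unfamiliar with the echo state approach named above, the core idea can be sketched in a few lines of NumPy. This is a minimal illustration of a generic echo state reservoir, not Lucd's distributed implementation; all sizes, weight scales, and names here are assumptions for the example.

```python
import numpy as np

# A minimal echo state reservoir: input and recurrent weights are random
# and fixed; only the readout (not shown here) would ever be trained.
rng = np.random.default_rng(0)

n_inputs, n_reservoir = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))   # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))   # fixed recurrent weights

# Scale the recurrent weights so their spectral radius is below 1,
# a common condition used to obtain the echo state property.
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run_reservoir(inputs):
    """Drive the fixed reservoir with an input sequence and collect states."""
    x = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Example: drive the reservoir with a sine wave for 200 timesteps.
states = run_reservoir(np.sin(np.linspace(0, 8 * np.pi, 200)))
```

Because `W_in` and `W` are never updated, scaling the reservoir up is an inference-style computation rather than a gradient-descent loop, which is what makes large distributed reservoirs tractable.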
As an alternative to Deep Learning, Reservoir Computing is transforming artificial intelligence (AI) because highly accurate models can be trained in far less time. Today, practitioners are hampered by the long training times of RNNs and the need to refresh training on a regular basis. Organizations with tens or hundreds of existing models require large-scale compute resources just to maintain the models they have. Lucd's distributed approach solves this problem by reducing training times to minutes, while also enabling models of nearly unbounded width and depth. Neural networks millions of inputs wide and thousands of layers deep can now be trained in minutes rather than months, with orders of magnitude greater accuracy.
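The training-time advantage described above comes from the fact that, in reservoir computing, only the linear readout is trained — typically as a single regularized least-squares (ridge) solve rather than backpropagation through time. The sketch below illustrates that solve on synthetic data; the matrix sizes and the stand-in state matrix are assumptions for the example, not Lucd's system.

```python
import numpy as np

# Train only the linear readout of a reservoir via ridge regression:
# one closed-form solve, which is why reservoir training is fast.
rng = np.random.default_rng(1)
T, n_res = 500, 200                                # timesteps, reservoir size

X = np.tanh(rng.standard_normal((T, n_res)))       # stand-in for collected reservoir states
w_true = rng.standard_normal(n_res)                # synthetic "true" readout weights
y = X @ w_true + 0.01 * rng.standard_normal(T)     # noisy target signal

lam = 1e-6                                         # ridge regularization strength
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y)
pred = X @ W_out                                   # readout predictions
```

The entire "training" step is the one `np.linalg.solve` call, whose cost is independent of how long gradient descent would have taken on a comparable RNN.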
"Training hidden layers is just too slow. Today's systems were developed with parallel processing as an afterthought. In our experience, attempting to parallelize existing libraries almost never works well. We believe parallelization must be considered at the outset of the development of any new library. By starting with our scalable distributed optimistic system, we rapidly developed a large-scale echo state model that, out of the box was able to scale to thousands of processors. Our approach to neural network modeling requires a fraction of the time of classic training algorithms,"
The anticipated impact of this approach is the further democratization of machine learning: reducing the need for large amounts of highly specialized computing resources, enabling models of nearly any size to be trained and retrained within minutes, and supporting online training of models in production.
"At Lucd, we continue to push the envelope so more businesses can easily integrate AI into their processes. The compute resource challenge of training deep neural networks risks bifurcating Enterprise AI into the haves and have nots. Our approach to reservoir computing offers to change all that. By exploiting the massive reduction in computational effort needed for model development, Lucd empowers all industries with faster training and greater accuracy on any infrastructure,"
"Artificial Intelligence needs computational breakthroughs to continue a successful trajectory. Demonstrating scalability of large-scale Reservoir Computing models is a game changer in the field," noted Chris Carothers, Ph.D., Director, Center for Computational Innovations and Professor, Computer Science at Rensselaer Polytechnic Institute and Lucd Board of Advisors member.
Lucd is at booth #3775 and will be showcasing its Reservoir Computing results.
About Lucd. By unleashing the power of data, the Lucd Enterprise end-to-end AI platform allows all businesses to conduct machine learning in a responsible way. Lucd builds competitive digital advantage by leveraging data assets, delivering digital ROI, and providing the ability to exploit market knowledge. Lucd develops pioneering capabilities in AI, Big Data, Data Fusion and Machine Learning. Visit Lucd online at: https://www.lucd.ai/
SC18, the International Conference for High-Performance Computing (sc18.supercomputing.org)
JoAnn M Stadtmueller
SR Director, Marketing