Any answer to this question will be moot. Some people these days are very afraid that AI will grow out of its toddler pants and take our wives and children away, and for such people this is the main problem. We, however, would like to stay more down to Earth and closer to real technology. Of the main problems of traditional neural networks listed in the recently published report by the IEEE (Institute of Electrical and Electronics Engineers), I would subjectively put their enormous energy consumption in first place. Data volumes are growing exponentially, and processing them requires enormous computing power. Supercomputers are already being built just to train large neural networks.
Our network is far more efficient. This claim may sound like magic to you (or, much worse, like fraud), especially since we're not talking about '2 times' or '10 times', but several orders of magnitude. Admittedly, much depends on the specific task, and we haven't yet been able to verify the energy savings on very large datasets. But our theoretical estimate is that training our network costs about 100,000 times less energy than any other existing neural network. How is that possible? Due to a higher level of intelligence: where a traditional network needs hundreds of thousands of epochs to assimilate a piece of information (each epoch involves many elementary operations, and each operation consumes energy), our network learns in 10 epochs or less.
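The arithmetic behind that estimate can be sketched in a few lines. Note that the specific numbers below are illustrative assumptions chosen to match the article's rough figures ("hundreds of thousands of epochs" at the upper end, equal energy cost per epoch), not measurements from the text:

```python
# Back-of-envelope estimate of training-energy savings from fewer epochs.
# All inputs are illustrative assumptions, not measured values.

traditional_epochs = 1_000_000    # upper end of "hundreds of thousands" (assumption)
new_epochs = 10                   # "10 epochs or less", per the article
energy_per_epoch_ratio = 1.0      # assume both networks spend equal energy per epoch

savings = (traditional_epochs / new_epochs) * energy_per_epoch_ratio
print(f"Estimated energy savings: {savings:,.0f}x")  # → Estimated energy savings: 100,000x
```

If the traditional network instead needs only 100,000 epochs, the epoch ratio alone gives a 10,000-fold saving, and the remaining factor of 10 would have to come from cheaper per-epoch operations; the claim is a theoretical estimate, so the exact factor depends on these assumptions.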
Note 1: IEEE is the world’s largest technical association with 423,000 members in 160 countries.
Note 2: we will prepare a detailed commentary on the IEEE report later.
Answered by Anatoly Guin, TRIZ specialist, member of Omega Server’s board of directors.