
Accelerating heuristics

As computing has become more complex, we humans have offloaded a set of functions from the main processing unit. By creating interfaces between specialized implementations and the core of the computing experience, we have gained better graphics (this is terribly obvious), faster data transfer (USB, for example) and some nimble composite applications such as peer-to-peer clients, which pull huge bit streams from the Internet and place them on an oversized hard drive without swamping the CPU.

But we are no longer in the 90s. Now is the time of Big Data, face recognition, procedurality and bad-ass artificial intelligence in videogames. What all these applications have in common is not fancy graphics, incompatible peripherals or the forced cohabitation of hardware; all these fields can implement artificial neural networks in their algorithms.

Today's computers do not include a way of externalizing a simulated brain function, so all the work falls to the CPU. But what if we could just supply an external set of chips to calculate heuristics? What if a specific module were built in for the sake of approximations, such as «this may be an apple» or «route C seems less congested»? That could have applications in forms, automated updates, procedural settings that adapt as a given terminal is used, or human-like content prediction; we could even sketch image formats based on trained networks, or guess what sort of diet we should follow in order to avoid certain kinds of diseases.
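To make the idea concrete, here is a minimal sketch of what the interface to such a heuristics module might look like. Everything here is hypothetical (the `HeuristicUnit` class, the feature names, the weights); the point is only that a tiny weighted network answers with a confidence, «this may be an apple», rather than a hard yes or no.

```python
import math

def sigmoid(x):
    # Squashes any weighted sum into (0, 1), so the answer is always
    # a bounded confidence, never a certainty and never an overflow.
    return 1.0 / (1.0 + math.exp(-x))

class HeuristicUnit:
    """A single hypothetical cell of an approximation coprocessor."""

    def __init__(self, weights, bias):
        self.weights = weights
        self.bias = bias

    def guess(self, features):
        # Weighted sum of the input features plus a bias term.
        total = sum(w * f for w, f in zip(self.weights, features))
        return sigmoid(total + self.bias)

# Made-up "apple detector": features are (redness, roundness, size).
apple = HeuristicUnit(weights=[2.0, 1.5, -0.5], bias=-1.0)
confidence = apple.guess([0.9, 0.8, 0.3])  # a red, round, smallish object
print(f"this may be an apple: {confidence:.2f}")
```

A real neural board would run millions of these units in parallel, but the contract with the CPU could stay this simple: features in, confidence out.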

To me, this is a vital point for the next generation of computers: today they act mostly as action-reaction devices, while synaptic networks could give them a sort of more human, controlled entropy that they currently lack. And this is also good, because approximation is an outside-in way of facing problems without the burden of calculating every single possibility, and without even thinking about sampling depth or buffer scalability (because of the control provided by weights, I think neural networks are good at not overflowing their own capacity). In short, a synaptic processor inside every machine would put it in a vantage position towards NP problems. In human terms, machines would become a little less able to do what is hard for us, and decently competent at solving the kinds of problems that come naturally to us.
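The action-reaction versus approximation contrast above can be illustrated with a toy example (mine, not the author's design): an exhaustive solver for a tiny travelling-salesman instance checks every ordering, while a cheap greedy heuristic settles for a good-enough tour, which is the mode of operation a synaptic processor would favor. The city coordinates are made up.

```python
from itertools import permutations
import math

cities = {"A": (0, 0), "B": (1, 5), "C": (5, 2), "D": (6, 6), "E": (8, 1)}

def dist(a, b):
    (x1, y1), (x2, y2) = cities[a], cities[b]
    return math.hypot(x2 - x1, y2 - y1)

def tour_length(tour):
    # Closed tour: return to the starting city at the end.
    return sum(dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

def exhaustive(start="A"):
    # Action-reaction style: enumerate every (n-1)! ordering. Exact,
    # but the cost explodes as cities are added.
    rest = [c for c in cities if c != start]
    return min(((start,) + p for p in permutations(rest)), key=tour_length)

def greedy(start="A"):
    # Heuristic style: always hop to the nearest unvisited city.
    # Cheap and bounded, but only approximately right.
    tour, unvisited = [start], set(cities) - {start}
    while unvisited:
        tour.append(min(unvisited, key=lambda c: dist(tour[-1], c)))
        unvisited.discard(tour[-1])
    return tuple(tour)

best, guess = exhaustive(), greedy()
print(tour_length(best), tour_length(guess))
```

On this instance the greedy tour comes out a few percent longer than the optimum while doing a small fraction of the work, which is exactly the trade that approximation buys.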

Ten years from now, we might have deprecated all the lingo about old-fashioned, flat-screen 3D rendering, and could instead be asking about the gestalt IQ of the latest, trendiest neural board.
