With Great Hubris & Folly
Let’s talk about neural networks.
The basis for existing neural networks is to first break data apart into numeric values (usually floating-point numbers rather than integers). Those values are fed through a series of layers of nodes, and the calculation made at each node, weighting its inputs, determines how strongly its value contributes to the nodes in the next layer. Once the data has passed through the full series of layers, an output is obtained and compared to the expected result. The difference between the two is used to adjust the weights, and, adding a bit more complication, more data is passed through the same system until it seems fit for making predictions on unknown data.
Take this model and look at where it's topping out right now. We might be close to self-driving cars. We've taught neural networks to play Go and chess successfully. But considering the limits we're reaching in semiconductor technology, this design already seems to be approaching a ceiling. Soon we'll be making transistor gates only a few atoms thick, which is more or less the limit for that kind of design. There's hope we can stretch Moore's law along a bit further by transitioning to quantum computing, but given that we're already near the limits of materials science, that seems like borrowed time to me.
The remainder of these blog posts will detail a different kind of design that I think might allow for more growth and flexibility. The target, I suppose my target, is to create human-like, human-level artificial intelligence that runs on a personal computer. That sounds like both a claim of hubris and a pursuit of folly. In all likelihood I'm wrong. But it's never a waste of time to explore a new design.