It’s presumptuous of me to think I know about this stuff. Despite that, hopefully I’m bringing a unique (even if wrong) perspective to the table. I started life in code by learning VBA to build Excel spreadsheets. One of the coolest things I made while working at Hynix Semiconductor was a spreadsheet that sucked in CSV files from the WIP database. It brought in our QA scans and mapped and compared the positions of defects. By the time I got it usable, a software engineering team in Korea had already done the same thing, and their software was installed at our wafer FAB. A bit disappointing, but I was still pretty proud of the accomplishment. I forget how many records of data I was processing in that workbook: tens of thousands, hundreds of thousands maybe, with all the inspection points. I even got it to draw images in the shape of wafers, side by side at each inspection layer. It initially took over an hour to run, but I was able to whittle it down to ten minutes. Not bad for someone new to code and just using VBA. That was the first time I was mesmerized by what could be accomplished with programming.
A big part of my experience as a software engineer has been around databases. I started with Microsoft SQL Server, and my company has jumped from that to MySQL (briefly), then to PostgreSQL. We've been using that quite a bit in our new BI system. I got used to the database structures and code, and the speed at which you could ask for vast amounts of information. The fact that thousands of records take only milliseconds to return should inspire at least a little awe.
Anyway, all that to say: the angle from which I first started trying to figure out how the brain processes information was a database-y kind of way. While a relational database just can’t perform well enough, it at least grounded my understanding of information and its relationships. I also got a neat little book for Christmas one year on mathematical ideas. I loved the chapter on set theory, and I think I finally understood how database relationships work. At the very least it made joins a lot easier to understand. Every time I picked up some new idea, I tried to use it as a model for figuring out how information could be constructed in such a way that it looks like a “brain”. Nothing was a very good fit, so I decided I had to focus on the logic rather than the physical structure. Since our brains are a product of evolution, it must be that they’ve taken this physical form because it’s the most efficient in nature. But the job is to break down and process information, in some other form than water and protein. So my task was to do the same thing, physically agnostic.
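To make the set-theory connection concrete: an inner join is essentially a set intersection on key values. Here's a minimal Python sketch (the tables and values are made up for illustration):

```python
# Two toy "tables" as sets of rows (key, value) -- names hypothetical.
employees = {(1, "Ada"), (2, "Boole"), (3, "Church")}
salaries = {(1, 90000), (3, 80000)}

# The keys present in BOTH tables: a set-theoretic intersection.
emp_keys = {key for key, _ in employees}
sal_keys = {key for key, _ in salaries}
matched = emp_keys & sal_keys  # {1, 3}

# Pairing up the rows that share a key is what SQL's
# INNER JOIN ... ON does under the hood.
joined = sorted(
    (key, name, pay)
    for key, name in employees
    for sal_key, pay in salaries
    if key == sal_key
)
print(joined)  # [(1, 'Ada', 90000), (3, 'Church', 80000)]
```

Row 2 ("Boole") drops out because its key isn't in the intersection, which is exactly why an inner join can return fewer rows than either table.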
I recall seeing a talk on TED where the speaker was part of a group of neuroscientists trying to discover where and how the brain stores information. It got me thinking about how it processes information. All of those connections firing like lightning in real time, massive swarms of electrical activity. A bit of research turned up the characteristics and function of the action potential. As it turns out, our brains don’t stream or encode information in the signal (the way we do for TV or radio). The synapse requires enough activity to overcome the gate between the dendrite receptor and the axon terminal. It’s a switch. A transistor. So how have we designed transistors to take information and make something meaningful from it? We store electrons in a capacitor, and when the gate is accessed, the data is read from or written to the capacitor. It bugged me trying to figure out what the relationship was. How does it know? Of course the dendrites and axons define where, but that still doesn't answer the question. In semiconductors we order the physical structures sequentially, in a pattern, a more defined form of what neurons do. In neither case can we point to the structure and say, "There! That is the data!" While there are regions of the brain for certain kinds of processing, the basis for neurons isn't like that. It’s a fuzzy mesh where placement isn't strictly defined by the data it holds. Everyone can and does know different things, and the exact position of a given neuron relative to the skull is likely different between human beings. We're not built like semiconductors. So what difference does it make to any other neuron whether a particular one fires? (In terms of making data relate together.)
Asking myself that question is when it hit me. The activity we observe isn't data at all. Data doesn't move. It doesn't shift. Well, it may shift in the brain, but no one has been able to identify that quite yet. We've got some good theories around sleep based on it, but no proof. I suppose we can’t rule it out, but for now it’s unknown. Since the signal isn't data, what a neuron is really saying when it reaches a high enough polarity to bridge the gate is, “I’m related”. It’s saying it has a relationship, and the next neuron that gets asked the question has to decide whether it’s related too. The action potential must overcome a relationship threshold. This is often modeled in AI and in network-type databases as "weights" or "vectors". Below a certain amount, the signal dies out; above it, it's carried on to link other neurons, other points of data. Linking them together helps build a model of what we’re thinking, perceiving. This isn't a new concept in neuroscience, but take a moment to think about it in terms of software. How could we model these relationships and their activity in code?
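One way to sketch the idea above in code: treat each "neuron" as a node that passes activation to its neighbors only when the weighted signal clears a threshold, so weak relationships die out and strong ones chain together. This is just a toy spreading-activation sketch; the threshold, the link weights, and the concepts in the graph are all made-up values for illustration:

```python
THRESHOLD = 0.5  # hypothetical "relationship threshold"

# Weighted links between points of data: "how related are these?"
links = {
    "cat": {"fur": 0.9, "whiskers": 0.8, "car": 0.1},
    "fur": {"mammal": 0.7},
    "whiskers": {"mammal": 0.6},
    "car": {"engine": 0.9},
}

def spread(start, activation=1.0, seen=None):
    """Propagate activation outward; signals below THRESHOLD die out."""
    if seen is None:
        seen = set()
    seen.add(start)
    for neighbor, weight in links.get(start, {}).items():
        signal = activation * weight  # signal decays across each link
        if signal >= THRESHOLD and neighbor not in seen:
            spread(neighbor, signal, seen)
    return seen

related = spread("cat")
print(related)  # {'cat', 'fur', 'whiskers', 'mammal'}
```

Starting from "cat", the strong links pull in "fur", "whiskers", and (through them) "mammal", while the weak "car" link falls below the threshold and never fires, so "engine" is never reached. The set of nodes that end up linked is a crude model of what the activity is "about".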
Neural nets take a stab at it, but they’re too structured a model and too granular to do any good. The hardware required to go from a neural network to human-level intelligence is immense. Which, I think, is what I’ll write about next time. There’s a fundamental problem we’re dealing with when trying to model the logical processing of the brain. And I think I've figured out a few ways around it.