Monday, November 26, 2018

A.I. Get It

     I've spent quite a lot of time reading, thinking, reading, thinking, doing some writing, reading... thinking... And I can never get unstuck from some of the most pressing issues. I won't bathe you in gloom or idealisms of the future, but I will offer a way to think about it before I continue muddling through life.

     Okay, so artificial intelligence. Current solutions, at least at the billion-dollar investment level, are mostly just trained layers of feed-forward statistical point clouds provided a priori ground truths (e.g. game boards or road rules). There are skip connections and other loopy, recursive, signal-analysis-based tricks too. It's not the newest way to do things, but improvements track our progress in understanding the animal neocortex and the ability of processors to handle that kind of data. Newer methods involve tensor networks, which mostly translates to stacks of n-dimensional statistical point-cloud matrices. These are still provided a priori rule sets to form statistical clouds with, but they offer orders-of-magnitude better compression and speed with little loss. They're the proof in the pudding of some of the biggest scientific advances of recent decades.
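
To make the "trained layers of feed-forward statistical point clouds" idea concrete, here's a toy sketch of my own (not any production system): a tiny feed-forward net with one hidden layer plus a skip connection, trained by plain gradient descent against a fixed a-priori ground truth (the XOR table). All sizes and learning rates are arbitrary illustration values.

```python
import numpy as np

# Fixed "a-priori ground truth": the XOR table.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 8))   # input -> hidden
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output
Ws = rng.normal(scale=0.5, size=(2, 1))   # skip connection: input -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = np.tanh(X @ W1)
    out = sigmoid(h @ W2 + X @ Ws)        # hidden path + skip path
    err = out - y                         # cross-entropy gradient wrt logits
    gW2 = h.T @ err                       # backprop through each path
    gWs = X.T @ err
    gW1 = X.T @ ((err @ W2.T) * (1 - h ** 2))
    W1 -= 0.1 * gW1
    W2 -= 0.1 * gW2
    Ws -= 0.1 * gWs

print(np.round(out.ravel(), 2))
```

The skip connection here is the same trick that lets deep residual networks train stably: the output sees the raw input directly, not just the hidden layer's transformation of it.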

*Whispers* "Ryu and Takayanagi are legit."
     Another interesting idea is that tensor networks are also the keystone to solving some of the hardest problems in quantum physics, which is really the math of complex systems that can be broken down into unit "nodes" in a physical/natural network. Uncertainty in this realm owes more to limited assumptions and abstractions than to untraceable randomness in nature. This opens up another important ability: applying the same high-level abstract solvers we use in quantum mechanics and quantum field theory to these neural nets - the method I've read up on most being entanglement. Not only can broader systems be recognized by the computer, but the a priori assumptions baked into data (e.g. the limitations of the measurement instruments or categorical assumptions) can actually be found exactly with numerical representations (e.g. linear correlations, jet bundles, diagonalized density matrices), to which the computer can then add another layer of learning and exploration, in the most computationally efficient and physics-grounded manner modern mathematics allows.
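
As a minimal worked example of the "diagonalized density matrix" and entanglement machinery mentioned above: for a two-qubit Bell state, an SVD of the reshaped state vector gives the Schmidt decomposition, whose squared singular values are the eigenvalues of the reduced density matrix, and from those you get the entanglement entropy. This is textbook quantum information, scripted by me as a sketch.

```python
import numpy as np

# A Bell-like state (|00> + |11>) / sqrt(2), written as a length-4 vector.
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)

# Reshape into a 2x2 matrix: rows index subsystem A, columns subsystem B.
# Its singular values are the Schmidt coefficients; squared, they are the
# eigenvalues of the (diagonalized) reduced density matrix rho_A.
M = psi.reshape(2, 2)
schmidt = np.linalg.svd(M, compute_uv=False)
probs = schmidt ** 2

# Von Neumann entanglement entropy S = -sum p log2 p.
S = -np.sum(probs * np.log2(probs))
print(S)  # ~1.0 bit for a maximally entangled pair
```

The point is that "how entangled are these two subsystems" reduces to a linear-algebra operation a computer can run on any state it can represent, which is exactly why tensor-network methods port so naturally to machine learning.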

     The difficulty in all this is data acquisition and the ability of a computer to form its own categories. I outlined how the latter might be overcome with wisdom from quantum mechanics, but then how will this be stored and iterated on? What is the memory system, and how is it modified? This is where employing genetics may be key; nature seems to have provided maximally performing models everywhere else. A DNA strand provides explicit memory of a species, which is modified with each offspring through the elements and through recombination via sexual or asexual reproduction. It's dangerous and error-prone, but as long as the species survives it builds long-term resiliency - though not necessarily homeostatic ability, adaptability, or even individual life-span (which are not always necessary to the reproductive cycle). Those are evolved traits that seem to assist in maturing biological intelligence.
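
The DNA-as-explicit-memory loop above can be sketched as a minimal genetic algorithm of my own devising: a population of bit-string "genomes" evolves toward a target through recombination, error-prone copying (mutation), and a survival filter, with the target, rates, and population sizes all arbitrary toy values.

```python
import random

random.seed(1)
TARGET = [1] * 32  # arbitrary "fit" genome for the toy environment

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    cut = random.randrange(1, len(a))          # recombination point
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.02):
    return [g ^ (random.random() < rate) for g in genome]  # copying errors

pop = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        break
    parents = pop[:10]                          # the survival filter
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(49)]
    pop = [pop[0]] + children                   # carry the current best forward

best = max(pop, key=fitness)
print(fitness(best))
```

Note the trade-off the paragraph describes: mutation is dangerous and error-prone for any individual genome, but across the surviving population it's what supplies the novelty that selection can lock in.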

     While a very large data set is analogous to the accumulated experiences of many generations of a species, it does not capture the filtering process. New generations of living animals do not have access to the same information, because direct experience is lost with each generation and history can only be inferred through behavioral and physiological changes. It's the invention of language that has allowed experience to truly endure beyond a few concurrently living generations, something only humans have been able to expand on. Genetic algorithms have had a lot of success, as have adversarial networks. StackGAN was a pretty cool experiment, and it's getting cooler with 3D model generation and self-scoring; one demonstration used this to generate drone frames from physical constraints and a few example models. This could easily extend to aesthetics generators. There are also great memory models like LSTM networks, loosely based on our long- and short-term memory structure. They're all pieces of a bigger puzzle here.
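
For the LSTM piece of that puzzle, here's the gating math of a single cell step, written out by hand as an illustration (random placeholder weights; in a real model they'd be learned): a persistent cell state plays the long-term memory, a gated hidden state the short-term one.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4  # arbitrary toy sizes

# One weight matrix holding all four gates, applied to [input; hidden].
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c):
    z = W @ np.concatenate([x, h]) + b
    f, i, o, g = np.split(z, 4)
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)  # forget/input/output gates
    c_new = f * c + i * np.tanh(g)   # long-term cell state: decay + write
    h_new = o * np.tanh(c_new)       # short-term hidden state: gated read
    return h_new, c_new

h = c = np.zeros(n_hid)
for x in np.eye(n_in):               # feed a short dummy sequence
    h, c = lstm_step(x, h, c)
print(h, c)
```

The forget gate is the key design choice: it lets the cell state carry information across many steps without the vanishing gradients that plague plain recurrent nets.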

     Human intelligence is based in a structure formed a priori through generations, yet utterly ignorant and amnesiac of itself at the upper waking levels of consciousness. The reason we even think in "upper" or "lower" levels is the assumption that a higher-order model will successfully encapsulate all the "things" in a way that they can all be discovered to be the same (or at least communicable) by independent observers, thus establishing ground truths through recursive collective filtering. The human neocortex, mainly the frontal lobe, also assumes a lot of control as it develops, meaning the body goes from a "bottom-up", wind-directed sort of development to an at least fractionally "top-down", self-controlled, self-transforming development. Premature infants are often born blind because parts of their brain are undeveloped; they may have no visual function whatsoever for weeks until their brains develop and cells differentiate.

Weasel baby brain development.
Diverse types of neurons and support cells differentiate from singular root columns.

     Humans develop at a high level through social networks; the body does this at a lower level through unfathomable cellular inter-connectedness and layered information/energy processing, from genes to electrical impulses. The only way to really begin finding out which models can do what the human brain is doing is by starting with a more primitive structure that can eventually do what humans can do. This means there are more systems to create that can handle wider sets of problems, whether ending up as a series of specific taught solutions (e.g. 100 different board-game-playing neural nets linked as one network) or one natural-law-understanding mega-solver that only needs sensory inputs to start building its own patterns, assumptions, and languages. That means it has to be grounded in structural reality in a way that can fully quantify it - and become it - while still obeying the laws of physics. We have all the tools now to do it, while computers continue to get beefier. And people are doing it, to be sure.

     Google Brain is an example of the slow, modular approach. This does resonate with the fact that the neocortex is divided into hundreds of thousands of columns that do isolated as well as parallel processing. There are some keen blockchain versions of this, which might be the best-suited architecture for turning a computer network into a controlled genetic brain structure. There is still tons of mimicry, though: one of the more recent Nvidia conferences had some dopey VR real-time car driving that was "A.I. powered" horseshit from a decade ago, while the chips and algorithms they're sitting on are a real fucking beauty of electrical engineering and physics. I don't know, we'll get there, but focusing on a few conglomerates' work is actually slowing things down; they should focus their resources on maximizing the quality and accessibility of their hardware - not shareholder gimmicks. And really, most of the world just isn't suitable for honest, optimistic, spirited science of this magnitude, but that underscores the responsibility they're carrying - not just the amount of money they're making.

     My fix? A TensorFlow plugin for Unreal Engine - now that's the [free] sauce for properly engaging with this loopy world.

    And just in case you forget in this goopy mess, this is all tongue-in-cheek creative non-fiction. I am nowhere near an expert in systems and A.I., but I do get a kick out of the airy metaphors and some of the functional progress in computing that goes with them, mainly for my electronic hallucinations - I mean video games. Not enough people understand how down-to-earth most of the engineering is, either. The only way some silly computer networks are going to create an all-consuming black hole of optimization (or, in the real world, speculator-generated inflation) or be used for mind control (again, it's about who's using this stuff for market manipulation) through predictive mechanisms outclassing humans is if we mystify ourselves, just like others mystify their dear leaders. Luckily, most nations aren't dumb enough to keep allowing data to be exploited willy-nilly - the U.S. not being one of them, however.

     New technologies invite new perspectives, just as they invite new cautions and new abilities, and it's our duty to be educated enough to participate meaningfully in those discussions. What are the new creative standards going to be when a kid can create a Pixar-quality film or a Naughty Dog-quality game just by talking to a digital cloud for a few minutes? What about the standards for education, health, and individual empowerment? Modern computing has the potential to assist in all those realms, as it already is doing; it might even fold enough proteins to solve major illnesses. It's all there and it's happening; how fast depends on how many properly screwed-on minds can get into it. I ain't counting myself until I can actually code one of these damn things.

In other news, my HEG biofeedback project (https://github.com/moothyknight/HEG_Arduino) got picked up by Crowd Supply and it might actually go mainstream after all the contacts and potential partners I'm meeting. I fucking love science.

Edits: forgot to give specific examples of genetic algorithms, they're nothing new either.