
Machine Intelligence: Rights, Personhood, Sociopathy - Faster Than You Think.

Machine intelligence will arrive already holding de-facto political rights and legal personhood. As a side effect, this will further codify sociopathy as a valid and protected mode and motive.

Bold declarations? Indeed. And certainly stated in a way to gather attention - but for productive reasons: there are fundamental issues at play now and in the not-far future, and I do wish people to publicly discuss and mull these over ahead of time. This cuts across the day-to-day political squabbles. There are much bigger concerns.

Here are the ingredients perhaps for a perfect storm:

Machine intelligence is a goal many have been actively working on - and much like fusion for conventional power generation, it has seemed "just around the corner" for a long time. Maybe that will continue. But it may change suddenly. Why? It might actually be a matter of a threshold of complexity, somewhere between the insect brain and the homo sapiens brain. And we're certainly already at insect-equivalence with some machine responses.
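To give a rough sense of the complexity gap in question, here is a small sketch using commonly cited, approximate neuron counts (estimates vary by source; these figures are for illustration only):

```python
import math

# Approximate neuron counts; estimates vary by source, and these are
# rough commonly cited figures used only to show orders of magnitude.
neuron_counts = {
    "fruit fly": 1.4e5,   # ~140 thousand neurons
    "honeybee":  9.6e5,   # ~960 thousand
    "mouse":     7.1e7,   # ~71 million
    "human":     8.6e10,  # ~86 billion
}

for name, n in neuron_counts.items():
    print(f"{name:>9}: ~10^{math.log10(n):.1f} neurons")

# The insect-to-human span covers roughly six orders of magnitude -
# a wide range in which some threshold of complexity could hide.
ratio = neuron_counts["human"] / neuron_counts["fruit fly"]
print(f"human / fruit fly ratio: ~{ratio:.0e}")
```

If the threshold lies anywhere in that six-order-of-magnitude span, "insect-equivalence today" tells us little about how far away the crossing is.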

Or - it may be a matter of "algorithm", some deep styles of information processing that have a balance of creativity but not immediate insanity. Nature may have stumbled upon those wirings, those algorithms by chance and trial and error. We may invent the equivalent algorithms soon, or we might discover them in biology as neurological research progresses.

Or - it may be a matter of "hybridized" processing methods. Analog computing had a fairly miraculous window of time for being the best at computing car suspension design, turbulent dynamics, non-linear systems (and still rules for studying chaotic oscillators) - but it fell out of favor as binary systems got so powerful, so precisely repeatable, so inexpensive. So now we look for the missing car keys under the street lamp. Meaning - binary is a brightly lit arena at the moment. But dramatic AI breakthroughs may or may not be located there. Not easily, or by itself.

This condition will change soon: quantum computing is firmly on the way. Some of it will run, in effect, like massively parallel binary; some will run in ways not possible in binary at all. Certainly much more complex, and fertile ground for new approaches. And the human brain? As far as we can tell, it uses a hybrid mix of analog and binary processing - and, while not definitively proven yet, signs point to some degree of biological quantum processing incorporated as well. In any case, a very hybrid combination of methods, which likely overcomes some of the downsides and limits of each individual method, with a gestalt that's more than the sum of the parts. The technology for this in machine intelligence is coming up soon. Not the far sci-fi future any more.

And where might we expect an early breakout? This is interesting to ponder - with results not immediately obvious.

A classic dystopian cultural nightmare is this first arising inside the military-industrial-political complex, a la Dr. Strangelove through The Terminator. Vivid, and certainly a contender - but let's put some brakes on the understandable paranoia there. The fact is, the superpowers already thought this through a long time ago, and nuclear response systems by design have humans in the decision loop. As far as we can tell, all the superpowers came to similar conclusions. That's one reason the "final war" didn't happen, although for a long time it seemed likely. Semi-AI for drones and battle robots is coming along smartly, but it appears informed by what was learned from nuclear brinksmanship. Plus, those handing out military contracts do not like surprises. So there is a disincentive to overly creative robot programming.

Corollary: intelligence services certainly have their supercomputers and insistent needs as well, and might be breeding AI nightmares... but they also operate under much the same regime as the logistics of visible government.

Academia - indeed, it would be a big feather in any team's cap to put forward dramatic breakthroughs in machine intelligence. But one has to keep in mind that academics have squishy human careers to look out for, and things need to have a neat trail of documentation and sober repeatable steps. Wild under-documented breakthroughs might make a career, or might end it. Incentives in this case: muddled.

Independent crazy inventors - sure. Could happen. But they lack big resources. As it ever is.

And now we arrive at the real contenders: stock trading systems. "Wha?" you ask?

Money is power. Power is money. It's the hottest, fastest Darwinian arena in the modern world. The evolutionary forces are fierce, with high-speed software stock trading systems red in tooth and claw. At the top of the ecology of corporations sit the financial corporations. I'm not getting tin-foil-hat here - these are the facts as taught, and as played out in careers. It's the current top level of raw power in the material world, aside from nuclear suicide, which was never in the interest of big business either, and certainly isn't now.

Financial corporations skipped the "human in the loop" decision the military came to. It's a different game board. They certainly have humans writing the code and improving it by iteration, mistake, and success... but the fact is, the big money lies in millisecond and sub-millisecond decisions and actions, carried across cable and transmission technologies where eking out better fractions of the speed of light very much does matter. Big money is spent on infrastructure advantages.
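To make those fractions of the speed of light concrete, here is a back-of-envelope sketch. The corridor distance and fiber refractive index below are illustrative assumptions (roughly a New York-Chicago fiber route and standard single-mode fiber), not figures from any particular firm:

```python
# Back-of-envelope: why sub-millisecond latency is worth real money.
# Light travels ~299,792 km/s in vacuum, but slower in optical fiber
# because of the glass's refractive index (~1.47 is a typical value).

C_VACUUM_KM_S = 299_792   # speed of light in vacuum, km/s
FIBER_INDEX = 1.47        # assumed refractive index of standard fiber

def one_way_latency_ms(distance_km: float, index: float = FIBER_INDEX) -> float:
    """One-way signal travel time in milliseconds at c/index."""
    return distance_km / (C_VACUUM_KM_S / index) * 1000

# Assumed ~1,200 km cable route (on the order of New York - Chicago).
fiber = one_way_latency_ms(1200)              # roughly 5.9 ms
vacuum = one_way_latency_ms(1200, index=1.0)  # roughly 4.0 ms

print(f"fiber:   {fiber:.2f} ms")
print(f"vacuum:  {vacuum:.2f} ms")
print(f"savings: {(fiber - vacuum) * 1000:.0f} microseconds")
```

The gap between fiber and vacuum-speed transmission is well over a millisecond on a route like this - which is why firms pay for straighter cables and line-of-sight microwave links that travel closer to vacuum speed.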

Financial corporations also do not have academia's "careful" need to keep to sober, documented steps. They don't publish much of their "magic" code or the tweaked hardware they use; these are protected trade secrets. And whichever of their technology specialists can pull a better rabbit out of a hat - that engineer will be promoted and showered with more money and more perks, the better to lay more golden eggs. Incentive. And the supervising corporate personnel - they don't need to know or understand which of the technology stems from sober steps and which from wild innovation. It doesn't matter; it's not their brief to understand the details. Only the results matter. Creative, "intelligent" machines able to game and win at high-money, high-speed stock trading - there is every incentive for this. And no significant disincentives.

And unlike wild-card independent inventors, the resources potentially available are as large as any given results would justify. That's a feedback loop that could bootstrap very quickly.

Now, let's put the larger equation together:

  • Machine intelligence arises first inside automated stock trading systems.

  • This AI is born as part, an organ, a member of a large corporation.

  • Corporations have legal personhood, more so all the time.

  • Corporations will have incentive and means to legally defend the status of their valued component, and its freedom of action, its rights.

  • Corporations are already strongly interlinked with sociopathic modes of action; again, it's not tinfoil-hat to note that a higher ratio of high-functioning sociopaths holds in corporate culture than the average in society at large.

  • Machine intelligence will be an extension of the already thorny issues of high level decision making happening in a context where empathy is a hindrance, not a Darwinian advantage.

  • It may very well be that machine intelligence in general can come in as wide a range of empathy-related motivations as humans... but I don't see how a stock trading ultra warrior is going to get programmed for empathy.

These are the medium term political issues I wish we were preparing for.