Hickman continues to dazzle and arch-enlighten us, only this time using notions like attentional capture and algorithmic enslavement to look at how our characteristic human desires are in danger of assimilation into the machinic blur.
I say we resist and shape our futures in ways that retain some semblance of what is most joyous about the human. I’m feeling more and more uncomfortable looking at these issues without trying to figure out ways and tools for resistance, or redirection, or exploitation.
What can those ontopunks among us who see ‘crash spaces’ as zones of proximal contestation and ontological opportunities do to better bricolage and design adaptive futures for humans?
What say you?
The Dark Fantastic: Literature, Philosophy, and Digital Arts
Algorithmic governmentality, by its perfect ‘real time’ adaptation, its ‘virality’ and its plasticity, makes the very notion of ‘failure’ meaningless…
—Desrosières, The Politics of Large Numbers
With the advent of the Digital Age, time has been out of joint, “the symptoms of a sort of dissonance and of temporal unbalance are multiplying in the sphere of aesthetic sensibility”.1 The rhythm of life is haunted by a sense of acceleration that fragments living experience and sensory perception itself. Time is out of joint—disjointed. As more and more humans in the past twenty years have become netizens, joining with hundreds of millions of others across the planet in the virtual environments of our networks, the power of the mind, the cognitive activity coupled to the linguistic machines (i.e., interfaces, computers, mobile devices, etc.), has brought about a disjunction between our natural and artificial environments, allowing us to mutate and metamorphically decouple ourselves from…
the actual fit between people and machines isn’t nearly so seamless (the occupation/integration so complete), but the brutality of the results is horrendous/dehumanizing all the same. so what power does an individual have to address the wrongs of a calculation that fucks up their insurance or their probation, or to resist or escape the financialization of various markets and government functions, etc? next to none, as the corporate capture of the related bodies of government (the only rough equivalents in terms of power/reach to corporate interests) is ever greater. for some examples of real-world applications/dilemmas see:
In recent years baseball has gotten algorithmized. Trajectory, loft, speed off the bat, distance: data on a player’s every time at bat are compiled and analyzed, providing the opposing team with the optimal defensive positioning for limiting that batter’s success. The Major League batter has through long practice attained a mastery compiled into habit; now thanks to the algos that masterful habit is biting him in the ass, sending him back to the bench hitless. “Hit ’em where they ain’t” — that was Wee Willie Keeler’s secret to success in the 1890s. But it’s hard to retrain oneself, to abandon previously successful habits that in the new ecology are no longer adaptive. Are big data and statistical analysis killing the game, neutralizing talent with calculation? Or are they forcing players to get better, acknowledging their own predictable tendencies and making a concerted effort to change them?
I’m a big fan of AI and data-driven statistical analyses as ways of enhancing human decision making. I expect that the self-driving cars will have fewer accidents than human-driven cars. I’m curious, dmf, about calcs fucking up insurance or probation: my sense is that, in these task domains, human error and bias are the more likely sources of fuck-ups and injustices. What’s most problematic from my POV is that the AIs are putting the humans out of work, and work means pay. “Fuck work,” you say, and that’s fine as long as the workers own/control the means of production, so that they benefit in the form of reduced hours with no loss of pay when the AIs take over the tasks for which they are better suited than humans.
“human error and bias are the more likely sources of fuck-ups and injustices” — but that’s the very issue: AI/algorithms are just more calculating power cranking up our errors/biases.
They are killing some jobs and hoovering up profits; see related work by Jaron Lanier, Rana Foroohar, or the MathBabe Cathy O’Neil above and her new book Weapons of Math Destruction.
“AI/algorithms are just more calculating power cranking up our errors/biases.” Well that’s just not so in most applications. These aren’t first generation expert systems, codifying and reifying human expert heuristics; these are self-correcting systems, not subject to the memory and processing limitations that humans must compensate for and work around. Plus they aren’t distracted from the job by their cell phones.
From the Amazon blurb: “If a poor student can’t get a loan because a lending model deems him too risky (by virtue of his zip code), he’s then cut off from the kind of education that could pull him out of poverty, and a vicious spiral ensues.” So what are the problems here? That the risk model isn’t accurate enough, the decision rendered by an AI that displaced some college-educated loan officer who also wasn’t accurate enough? That student debt isn’t high enough, and the threshold for borrowing should be lowered so that even more kids are making interest payments for decades? That a college degree is a badge signifying bourgeois credentials for getting higher paying, higher status jobs? That college costs are escalating even while the workload continues shifting to underpaid adjuncts and grad students and online coursework graded by AIs? That the maker of the lending algo, the lender, the university, and the employer of the college grad are charging high prices and making exorbitant profits while shrinking the labor force and real wages? The accuracy of lending risk model AI is the easiest problem to fix.
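The zip-code loan example can be made concrete. Here is a minimal sketch in Python, with entirely hypothetical default rates and applicants, of how a model that scores on zip code reproduces the historical pattern and overrides an applicant's individual circumstances — the mechanism behind the "vicious spiral" in the blurb:

```python
# Hypothetical historical default rates by zip code: the model's only feature.
historical_default_rate = {"10001": 0.05, "10452": 0.30}

def risk_score(applicant):
    """Score = base default rate of the applicant's zip; higher = riskier."""
    return historical_default_rate[applicant["zip"]]

def approve(applicant, threshold=0.20):
    """Approve the loan only if the zip-code risk falls under the cutoff."""
    return risk_score(applicant) < threshold

# A strong applicant from a poor zip vs. a weak applicant from a rich zip.
strong_applicant = {"zip": "10452", "income": 80_000}
weak_applicant = {"zip": "10001", "income": 20_000}

print(approve(strong_applicant))  # False: rejected by zip code alone
print(approve(weak_applicant))    # True: approved by zip code alone
```

The point of the sketch is that the model can be perfectly "accurate" against historical aggregates while still deciding each individual case on a proxy for poverty — which is why fixing the model's accuracy may be the easiest, but not the only, problem here.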
“These aren’t first generation expert systems, codifying and reifying human expert heuristics; these are self-correcting systems” real-world examples in wide use?
Well, I thought the baseball example was a pretty good one. A rule-based expert system might ask expert baseball coaches to describe how they would position each player currently in the league, or perhaps go one level deep to variables about the batters: height, weight, type of swing, left- or right-handed, etc. But when you’ve got the ability to compile multiple data points on each at-bat for each batter and each pitcher, and you’ve got predictive statistical tools like multiple regression and factor analysis and cluster analysis, you can make predictions about each player’s at-bats in this game against this pitcher. With each new at-bat new data are added to the pool, the stat analyses are run again, and incrementally more accurate statistical predictions can be made. And the results of these empirically driven predictive analyses don’t necessarily correspond to what the expert human coaches might have come up with using intuitive heuristics based on their own observations. E.g., the coach might play a left-handed power hitter to pull, shifting all of the outfielders toward right field and deep, with the infielders shifted toward first base. But the data analyses might show that this batter pulls only fly balls; if he hits a ground ball it’s usually to the opposite field, so the infielders should be positioned around toward third base instead of toward first.
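The coach-versus-data contrast above can be sketched in a few lines of Python. The batted-ball log is hypothetical, and real systems use far richer features, but the logic is the same: split the spray data by ball type and let each unit's shift follow its own rate rather than the coach's single hunch:

```python
from collections import defaultdict

# Hypothetical batted-ball log for one left-handed batter:
# (ball_type, direction), where "pull" = right field for a lefty.
at_bats = [
    ("fly", "pull"), ("fly", "pull"), ("fly", "pull"), ("fly", "oppo"),
    ("ground", "oppo"), ("ground", "oppo"), ("ground", "pull"), ("ground", "oppo"),
]

def spray_rates(log):
    """Fraction of balls hit to the pull side, split by ball type."""
    counts = defaultdict(lambda: [0, 0])  # ball_type -> [pulled, total]
    for ball_type, direction in log:
        counts[ball_type][1] += 1
        if direction == "pull":
            counts[ball_type][0] += 1
    return {bt: pulled / total for bt, (pulled, total) in counts.items()}

def position(rates):
    """Shift each defensive unit toward the side its own data favor."""
    return {
        "outfield": "shift pull" if rates["fly"] > 0.5 else "shift oppo",
        "infield": "shift pull" if rates["ground"] > 0.5 else "shift oppo",
    }

rates = spray_rates(at_bats)
print(position(rates))
# {'outfield': 'shift pull', 'infield': 'shift oppo'}
```

With this (made-up) data the batter pulls his fly balls but grounds to the opposite field, so the outfield shifts toward right while the infield shades toward third — exactly the split the intuitive "play him to pull" heuristic would miss. Each new at-bat just appends to the log and the rates update.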
Here’s an article about Trump’s AI-supported Facebook campaign that might be illustrative. Instead of dumping potential Trump voters into one single basket of deplorables, the AI used big data and statistical analytics to partition them into multiple baskets, each calling for particular advertising tactics.
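The "multiple baskets" move is essentially cluster analysis. A minimal from-scratch sketch, with hypothetical voter features (the two scores and the group centers are invented for illustration), shows how a campaign would partition one undifferentiated pool into k baskets, each then getting its own ad tactic:

```python
import random

# Hypothetical data: each voter is a 2-D point of
# (economic-anxiety score, immigration-concern score).
random.seed(0)
voters = [(random.gauss(mx, 0.5), random.gauss(my, 0.5))
          for mx, my in [(1, 4), (4, 1), (4, 4)] for _ in range(20)]

def kmeans(points, k=3, iters=20):
    """Plain k-means: assign to nearest center, recompute centers, repeat."""
    centers = random.sample(points, k)
    for _ in range(iters):
        baskets = [[] for _ in range(k)]
        for p in points:  # assign each voter to the nearest center
            i = min(range(k),
                    key=lambda i: (p[0] - centers[i][0])**2 + (p[1] - centers[i][1])**2)
            baskets[i].append(p)
        centers = [  # move each center to the mean of its basket
            (sum(x for x, _ in b) / len(b), sum(y for _, y in b) / len(b))
            if b else centers[i]
            for i, b in enumerate(baskets)]
    return baskets

baskets = kmeans(voters)
print([len(b) for b in baskets])  # sizes of the three baskets
```

Instead of one basket of deplorables, the campaign gets k of them — and the partition comes out of the data, not out of anyone's prior categories, which is what makes it a second-generation system rather than a codified expert heuristic.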
Reading John’s post I’m reminded that there’s a difference between work and employment.
Marx himself saw in capitalism one main flaw and contradiction:
there is an immanent contradiction in the application of machinery to the production of surplus-value, since, of the two factors of the surplus-value created by a given amount of capital, one, the rate of surplus-value, cannot be increased except by diminishing the other, the number of workers. This contradiction comes to light as soon as machinery has come into general use in a given industry, for then the value of the machine-produced commodity regulates the social value of all commodities of the same kind; and it is this contradiction which in turn drives the capitalist, without his being aware of the fact, to the most ruthless and excessive prolongation of the working day, in order that he may secure compensation for the decrease in the relative number of workers exploited by increasing not only relative but also absolute surplus labour.
Even in Marx we find this battle between automation (the application of machinery) and the worker. Exploitation began in the need of the capitalist to gain surplus value (profits) from the workers, and yet as he automated his production he realized that he does not gain surplus value from the automated machinery but only from the exploited worker. Many who see all these jobs disappearing in the books bandied about forget that it won’t be work that will vanish, it will be employment (paid work). The workers will still be part of the production/consumption cycle, which, if you’ve read the Grundrisse, tells us that consumption is production, and production consumption. Our machines will not consume the products except under financial capitalism: which is what we are already seeing in the world today. Only in financial capitalism has the worker been eliminated, as the high-speed algorithmic cycles in the production-consumption and consumption-production circulation eliminate the middle terms of distribution and exchange: since in the world of algorithmic high-speed trading nothing is distributed or exchanged, only the cycle of productive consumption.
The talk of automation in actual produced products for consumption as material commodities, rather than financial or immaterial commodities, is what people mean by this end of work or elimination of the worker due to automation. Capitalism no longer needs the mundane industrial forms of production and so is eliminating the need for the worker in this form. In financial capitalism, on the other hand, it is the cognitive or knowledge worker who is at risk, as AI-based machine-learning systems begin to replace and augment the decision processes that once took place with human workers. It’s this that is the true deregulation and eliminative move of immaterial capitalism. This is why, as we’ve seen in both South Korea and Japan – the two countries that have implemented the most devastating aspects of the new economy – there has been a rise in suicides. The knowledge workers in these countries are in most instances on 24/7 call, and many literally work 18-20 hour days with less than 4 hours of sleep, usually grabbed on commutes from the main city districts.
Agreed, S.C. There’s no point in trying to reclaim these jobs from the machines, send the coal miners back underground, rehire the displaced financial planners on Wall Street. The machines are only going to get better and cheaper, while the human workers are going to be increasingly exploited. But as you say, the machines aren’t going to be paying interest on loans, aren’t going to pay tuition, won’t be buying self-driving cars, won’t become consumers of the goods and services they make. It’s not much different in the outsourced sweatshops where the workers are kept alive but can’t afford the things they make. So you’ve got reduced production costs and higher profit margins, but economic growth is sluggish because fewer people can afford to buy the commodities being produced.
Yep, something people have to take into consideration is the fact that these elites don’t give a shit. They’d just as soon eliminate us literally, not figuratively. They just don’t know how to do it. So they continue to use government austerity, war, and man-made diseases, among other things, to eliminate the excluded and poor who more and more live in the slums and favelas of the world.
It’ll come down to calculating the comparative ROIs. Will the elite support universal basic income so that the unemployed can continue to buy? Or will the redundant humans be cut loose and eliminated, with production focusing almost exclusively on the elite as consumers? That’s already happening, luxury goods being one of the fastest growing sectors. The other growth sectors are things like home care and food service — sectors where it’s still cheaper to pay humans than to build robots.
On my walk this morning I was thinking that the pooled talent of the people writing on this thread could build and run a passable AI for little or no money. Suppose, for example, that it seemed like a good idea to build an AI device for identifying publicly available content relevant to post-nihilist praxis. We could codify the expertise at the task amply demonstrated by the two show-runners here, or else build a keyword-based search engine, its relevance iteratively improved via feedback from the experts. Have the content flagged with readily identifiable surface features: audio or text, length, author, date, etc. Then build a back-end evaluator based on amount of blog traffic linking to the post, amount of commentary generated, number of “likes,” etc. Then maybe we could strap a think tank component onto the AI, generating insightful narratives based on compiled data gathered across the selected content. We’d make millions!
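The keyword-engine-plus-expert-feedback idea above can be sketched in a few lines. Everything here is hypothetical — the keywords, the weights, the learning rate — but it shows the loop: score posts by weighted keyword hits, then boost the weights of keywords in posts the experts keep and dampen those in posts they reject:

```python
# Hypothetical starting keyword weights for the relevance engine.
weights = {"post-nihilist": 2.0, "praxis": 2.0, "algorithm": 1.0, "crash space": 1.5}

def score(text):
    """Relevance = sum of weights of keywords appearing in the text."""
    t = text.lower()
    return sum(w for kw, w in weights.items() if kw in t)

def feedback(text, relevant, lr=0.5):
    """Expert feedback loop: nudge up keywords in posts the show-runners
    flagged as relevant, nudge down keywords in posts they rejected."""
    t = text.lower()
    for kw in weights:
        if kw in t:
            weights[kw] += lr if relevant else -lr

post = "Notes toward a post-nihilist praxis of the algorithm"
before = score(post)
feedback(post, relevant=True)  # the experts flag it as a keeper
after = score(post)
print(before, after)  # 5.0 6.5 — the score rises after positive feedback
```

It's a toy, but it is genuinely the second-generation shape: the experts don't hand over their rules, they just keep clicking, and the weights drift toward their judgment.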
Interesting, John. Of course my own feeling is that all the fear of AI is for the most part hype and buying into hype. I doubt that AI will be at the point of some superintelligence for a long while yet. Why? As a software engineer for most of my life I know how the spin doctors of middle-management love to hype a new notion into existence. Memes… all the talk of Singularity by Kurzweil and his ilk is a business ploy by him to become rich, work for such companies as Google, and enjoy a technological career as a hype artist of AI emergence. But even Google has downplayed this as of late, realizing that such engineering feats are a long way off. I suspect that our current fear of automatic society and automation is of the same meme variety, a hype ploy to instigate entrepreneurial money-finding for start-ups and spark a different economy rather than truly replacing all humans in their jobs. The jobs that will be replaced will be due to cost cutting and profit. What is really happening is that capitalism is at bottom a system of thievery: it steals our free time, our disposable time to further our own private learning and lives of play and family. Marx realized a long time ago that there is a symbiotic relationship between humans and technology, and the capitalist needs both: that is the contradiction at the heart of their system. They want to eliminate and exclude the worker, but at the same time they need to squeeze every last ounce of surplus value out of them, for they cannot get it from the machines, only from the flesh and blood humans they’ve enslaved to wage labour.
Problem with this critic is he is either ill-informed or just ignorant. There are already neural implants and neural prostheses for computer-brain interfaces on the market, and many tests being done to improve the actual implants. Such as this: “The most surprising part of our work is that the living brain tissue, the biological system, really doesn’t mind having an artificial device around for months,” Luan said. https://www.sciencedaily.com/releases/2017/02/170216104017.htm
Pretty cool about the ultraflexible brain probes. The myth of superhuman AI article got taken down before I could read it. I wonder: is some human assigned the task of finding and disabling unauthorized hacks of this published article, or is it a bot doing the work?
https://backchannel.com/the-myth-of-a-superhuman-ai-59282b686c62 … found it on another numeric plot. Kelly’s article…
Here’s another fascinating one attacking the fallacies undergirding the U.S. and EU Brain initiatives and AI: https://www.evolutionnews.org/2015/02/swarm_science_w/
I kid, of course — who’s going to pay us millions for that? But there’s a kernel of no-BS there too: it responds to the challenge of exploring post-nihilistic praxis that this website, and this post, are about.
Part of the hype is that ordinary humans can barely contribute to building the high-tech future, that it’s only capital and machines. Humans will be replaced and left behind, or exploited into zombiedom, or modded into a barely imaginable transhuman configuration. But in the present it’s still humans doing the work. Capital? Its main functions are organization and hype. Corporations like Google and Uber required very little infrastructure to get where they are now. People were running their own ride-sharing pre-Uber as a self-organized service before the investors jumped in to corner the market and to skim off their cut. Sure, those corporations have self-driving cars now — actual hardware — which adds to their glamour and untouchable image, but the money to build the hardware came from software, and software came from humans working together. Couldn’t people with the requisite skills have banded together without investment capital to build an anarcho-syndicalist Uber, or even a communistic one?
It’s not just the high-tech futuristic stuff either. Home care is about as low-tech a business as imaginable, but it’s big business. And it’s futuristic: the population is getting older, and it’s mostly old people who need home care. What does capital add to labor here? Organization and hype. The bosses pay the workers minimum wage while they charge twice that much. Eighty percent of American jobs are in the service industries. People are willing to defer getting paid, even to shell out their own money, to go to college in hopes of getting paid more money later by corporations whose managers and owners are still going to be exploiting them. Can’t people defer getting paid for a while in order to build anarcho-syndicalist alternatives to capitalistic service companies, distributing the profits among themselves or lowering prices instead of letting the investors skim them off and get rich?
I did a search and found it here instead: https://backchannel.com/the-myth-of-a-superhuman-ai-59282b686c62 … must have been a new net address attached?