4 – Cybernetic-utopianism and the tale of two laws: Moore’s Law and Amdahl’s Law

mandrake

As we have seen through our long Anabasis to this point, I have used interpretations and propagandizations of Moore’s Law and Amdahl’s Law as backstory and context for the explication of a movement I call cybernetic-utopianism [from our first installment: a movement of the “possessed” in the Dostoevskian sense to establish a self-sustaining social feedback system (both physical and virtual) based on the meta-model of the hive, with a cohort of technological panjandrums ruling sovereign from the apex. Key concepts of this insurgency are “technological autonomy” and “inevitablism”, the former a peculiar form of teleology appropriated from the more (but not quite) legitimate field of cybernetics, and the latter a bald-faced canard to cover the tracks of the unilateral appropriation (pilfering and stealing) necessary to achieve the ends of the utopians]. I have gone to great lengths to show that on virtually all fronts, cybernetic-utopianism is a carefully crafted propaganda campaign/psyop to cloak all manner of hegemonic villainy, and to stress that these efforts have been spectacularly successful and probably irreversible in their effects. All of this has been at the service of framing my own experience at the hands of the “false consciousness” of cybernetic-utopianism, and of documenting how this consciousness has deformed software engineering into something unrecognizable from even 15 years ago. As remarkable as it may seem coming from a software engineer of many years, this deformation makes me ponder whether we, the epsilons of the cybernetic-utopian brave new world, may be better off without it; i.e., software engineering, or whatever you wish to call the thing software engineering is decomposing into, may very well now pose a very real existential threat to those unfortunate enough not to be a panjandrum or one of their courtiers.

Note carefully that this is not the same argument as that of the AI Cassandras, who are, in my opinion, actually salesmen for the technology. One way to boost interest and hype in your technology wares is to create a bogus opposition movement who are “standing in the way of technological advance”. These slicksters gain credibility with the legitimate opposition and sabotage those organizations from within, or use this as an angle to offload their half-baked wares, or both. In CIA parlance this is known as “sheep dipping”. Lee Harvey Oswald, an obvious agency asset, was sheep-dipped by sending him out wandering around New Orleans handing out pro-Castro literature. Elon Musk advertising AI as an existential threat to humanity and then releasing his own “brain enhancement” technology also immediately comes to mind. Musk effectively sheep-dipped himself. Distinct from this, I am arguing that technologies such as artificial intelligence, just one of a multitude being debased, are only potentially lethal in weaponized form (the specialty of the cybernetic-utopians). The difference can be seen in something from Joseph Weizenbaum’s Computer Power and Human Reason. Weizenbaum, the progenitor of the first real interactive AI computer program (Eliza), at MIT in 1966, was shocked that most people actually thought the program was a real psychological therapist and would treat it as such even after Weizenbaum told them otherwise; even, remarkably, his own secretary, who must have known. The program simply and cleverly fed slightly modified versions of a subject’s own responses to canned inquiries back to them. To Weizenbaum’s amazement, even psychiatrists thought that with a few tweaks it could be substituted for the real thing; so did Carl Sagan. The “weaponization” of Eliza would have been if someone had taken the program and marketed it as “a device that does anything a real psychiatrist can do, faster and better, at a fraction of the cost and no diploma required!” and concealed the fact that it was essentially just a laugh track cooked up by a geezer at MIT with a grocery bag of parts from Radio Shack. As an engineer, I suspect a good deal of the AI mishegas is of this variety, and that if you opened up the back of a great many AI “engines” you would find a monkey wearing a fez sitting on a stool playing a xylophone. And since another prerequisite attribute of the products of cybernetic-utopia is opacity, how would we ever know whether all or some or most of the extravagant boasts and un-vetted claims of AI are founded upon the technological largess of a monkey wearing a fez sitting on a stool playing a xylophone, or not?

One of the most important points to remember is that I have tried to follow a labyrinth of breadcrumbs explicitly from the perspective of a software engineer whose specialized domain is distributed computing (thus my musings on the history and intent of the hive grid may strike some as bizarre, but only because I am seeing what a distributed engineer who is not adhering to prearranged “topics” would see). This has by necessity made the journey a little ragged and prolix; I am not following the narrative trail of technological determinism emitted by the Silicon Valley purple smoke machine, but I believe the segues and roundabouts have been more than worth it for me, and I hope for you readers (I had never given much thought to what the H-1B visa program was actually all about until I took this perspective in a previous Breadcrumbs post, for example).

Metastasis of a Meme…..

One does not have to be paranoid or a JFK assassination enthusiast to recognize the reality of predictive programming. Just think of all the new memes, jargon and mumbo-jumbo carefully placed and repetitively reinforced to shore up the ersatz pandemic currently raging inside television sets across America (a “pandemic” so virulent that no more people have died in the United States this year, adjusted for birth expansion, than would have died in any other year of this century). Putting ideas into people’s heads and having them believe they were there all the time, or were “new” thoughts of their own volition, is an entire industry and is being taught in universities as a curriculum. Thus about a year ago I started to become suspicious of a “meme” which “spontaneously” began to frequent many of the publications/technical blogs I followed, as if it were planted or disseminated from some central authority. The “meme” concerned an extremely important but still borderline arcane subject, which made me doubly suspicious (depending on what kind of programming you were doing or specializing in; in my domain specialty, distributed network programming, this would be a fairly frequent topic; web engineering, say, not so much): the efficacy, or lack thereof, of using locks, mutexes and semaphores in the context of multicore threaded programming. Don’t worry if that is meaningless to you; just take note it’s not a subject that most engineers would be found chewing the rag over in the cafeteria very often, except in some specialized domains.
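
For the uninitiated, a minimal sketch of my own (not taken from any of the sources discussed here) of what one of these concurrency objects looks like in C++: a mutex forces threads to take turns through a piece of shared state. Keep it in mind, because that taking of turns is exactly the serial bottleneck that Amdahl’s Law will later be invoked against.

#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

int counter = 0;             // shared state
std::mutex counter_lock;     // the "concurrency object"

void work() {
    for (int i = 0; i < 100000; ++i) {
        // Only one thread at a time may pass this point; the rest
        // queue up and wait. However many cores you own, the guarded
        // region executes serially.
        std::lock_guard<std::mutex> guard(counter_lock);
        ++counter;
    }
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) threads.emplace_back(work);
    for (auto& t : threads) t.join();
    std::cout << counter << '\n';   // 400000: correct, but the four
                                    // cores took turns to get there
}

The answer comes out right; the complaint, as we will see, is that for the duration of every increment, three of your four cores are standing around with their hands in their pockets.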

But over time I began to bump into it everywhere, and its content was always strangely uniform. Essentially, as I mentioned in the opening part of this effort, it consisted of a narrative fragment:

Once upon a time there was Moore’s Law. Moore’s Law spread jellybeans and marshmallows everywhere it went for over thirty years. In the late 1990’s when people began to worry that Moore’s Law was about to fall down and not get up, Intel and other chip manufacturers came to the rescue by creating multicore chips. But the software mens who wrote apps did not know how to write for the new architectures and everything started to slow down. The software mens used concurrency objects (mutexes, locks and semaphores) to synchronize the threads (units of work) on the multicores, which messed everything up. Everybody said that Moore’s Law was at fault. An increase in chip density didn’t produce the gains as in the past, they yelled. But it wasn’t Moore’s Law, it was the software mens! Moore’s Law will still work! The software mens broke it! It took a long time for companies like Intel to figure this out. People blamed them for pushing the multicore threading like crack to a junkie to sell the chips they had already manufactured and amortize the cost of others already in the pipeline. No! said Intel. It was the software mens! They didn’t understand Amdahl’s Law and locked the gear train of the technology express. After a while, because Intel had lots of money and bought Senators and people with plastic hair that the peoples watched on the television, everybody agreed with them. Many of the software mens, who were poor and didn’t know any Senators or people with plastic hair and couldn’t buy them if they did, got fired. Now, to get back to where we were so Moore’s Law starts handing out jellybeans again, don’t use concurrency objects; break your programs into little itty bitty pieces and feed them to the L caches of the chips like worms to baby sparrows so the cores don’t choke! And if you don’t want to be a bum, chew gum. How come the software mens 15 years ago didn’t think of this?

As I mentioned up top, this was essentially the cover story used circa 2004-2006 or so to paper over the lack of productivity gains from the “multicore revolution”, except for the last couple of lines, which have been appended to this narrative in its latest update. This, again, was and is central to the cybernetic-utopians’ mantra of technological autonomy. Multicores were unprecedented technological artifacts (see part 3C). Under the rules laid down by the utopians, this made them intrinsically progressive and immune to technological assessment. Following from this, they can only be proven not to work (a weak form of inevitablism). As logicians will no doubt tell you, this is the superior defensible position to have. In any practical situation the side taking the contrary position will have two cases to make: that the artifact does not work as prescribed, and (because the artifact working is the default) that it is not the case that it does not work because your side doesn’t know how to make it work (or you broke it). And how are the software mens to make the second case for technology that is new and unprecedented to them as well? They of course cannot, and are open to two accusations they cannot defend themselves against: incompetence and malice, the stab in the back. Reflexively, one responds that it is almost as if this entire thing has been ingeniously framed. Don’t be deluded that the sanctity of Occam’s Razor precludes the possibility that it has been: a cunning foe can easily turn the tables and transform a razor into a gaslight, as I have commented before.

In part 3B, I talked of this and of how the MIT Technology Review resurrected this strategy almost blow for blow from what I remembered of it from back in the not-so-distant day. Others were not so explicit, but the gist was the same. Also remember the very important matter of Intel/Microsoft disingenuity in forgetting that they were the forces behind disseminating and flogging the new engineering paradigm, even to those who were skeptical. The general acceptance of the Intel/Microsoft/Google rewrite of history is clear from several instances of this meme I will discuss below. The particular import of this acceptance is the notion that software engineers came up with the multithreading/locking model on their own (as the characters in the MIT article seem to imply that engineers weren’t and aren’t thinking hard enough) instead of being coerced into executing the new model by the market weight of Microtel/Google, which I can attest was the case personally (I vas der).

I will first note a not altogether exemplary article of what I mention above, but an extremely relevant one to our topic nonetheless, which leans close to my own interpretation and comes from someone who seems to be a contemporary. This article lays some blame on software engineers for the failure of the multithreading/multicore paradigm between say 2000-2005, but from a purely technical/collegial standpoint, differentiating it from the “you lazy boneheads” vibe I get from pretty much all other source articles relevant to this topic I’ve read; it also pretty much insinuates who was actually responsible. Sergey Ignatchenko, in the February 2019 issue of the ACCU’s Overload magazine article 5 Big Fat Reasons Why Mutexes Suck Big Time, begins first with some historical context for his comments on the general failure of the multithreaded/multicore paradigm to live up to its outsized ambitions. It should be noted that this is rare; it seems one must have actually been working at the time as an engineer to have any inkling these days of how contrived all this actually was (is?) and not some “technologically autonomous” happening. In doing so he annotates a Moore’s Law blooper I had never heard of up to that time and should have included in my list of Moore’s Law tchotchkes in a previous post:

Some dinosaur hares like me can even remember an interpretation of Moore’s Law saying that not only the number of transistors but also CPU frequencies will double every two years; in 2000, Intel promised 10GHz CPUs by 2005

Ignatchenko also concurs generally with my timeline of when Moore’s Law began to turn sclerotic (coincidentally, as noted before, at pretty much exactly the time Gordon Moore said it would):

However, after approximately 2002, all this explosive growth in CPU frequencies (and in per-core performance) abruptly slowed down; frequencies even dropped (reaching 4GHz again only after ~10 years with the 4GHz but hugely inefficient Netburst), and per-core performance improvements have slowed down very significantly…

And who was behind it:

As a result of this sudden stop in GHz growth, Intel gave up on GHz-based marketing, and switched to marketing based on the number of cores. But there was an obstacle to this – the lack of programs able to use multi-core processors. And Intel has started to promote multi-cored programs.

And like myself, he does not appear to hedge his bets:

...but as a developer who cares about my customers and not about Intel’s profits, I have absolutely zero reason to use this kind of multithreading (and I contend that the whole premise of utilizing more cores is deadly wrong …

Ignatchenko goes on from there to describe a laughable “multicore” workshop he attended during this era; I attended a couple of those in New York at about the same time, when I worked for the large corporation which is a household name, with similarly risible outcomes. The rest of the article lays down some detailed technical arguments (with which I concur) which aren’t directly relevant to what we’re discussing here. The takeaways from this article are:

  • A contemporary of apparently similar skill sets reaches pretty much the same conclusions on these matters, and our timelines roughly match. Being that he seems to be an engineer who still receives money for his efforts, he is less contumelious than myself (which is OK).
  • That not everyone by a long shot believes the multicore propaganda, and many have simply ignored it.
  • The previous comment probably explains the “spontaneity” of the proliferation of the meme I describe above…somebody has kicked the drones in the pants and told them to get busy making with the deep purple smoke.
  • Which segues into how Amdahl’s Law gets dragged into this…..

 

Bait and switch….multicore edition…..

Another quite notable comment from Ignatchenko’s article is the rejoinder:

..the whole multi-coring thing is only about decreasing response time to an incoming request, and there is absolutely no other reason to use more than one CPU core to perform a given task….

This has also been my experience. Instead of being a general programming paradigm, which as Ignatchenko and I both remember was the claim 15 or so years ago, multicores are now glibly advertised as delivering turbo-charged performance only when your application logic is decomposed into bite-sized niblets of runtime logic; multicores, it seems, are persnickety as to their diet (whence the facetious comment in my notes about the recurring meme of lock incontinence above). Repeatedly, the literature on multicore programming states that applications and frameworks will have to be not just modified but rewritten completely, an astonishing claim to make in 2020, since Microsoft said 15 years ago all we had to do was refactor, insert a handful of critical sections and spin locks, and et voila! But it turns out that multicore magic is a great deal more esoteric than first claimed. Let’s recap the historical claims:

  • Fifteen or so years ago multicore programming was advertised as a panacea for general/business/enterprise computing needs. I do not remember any talk of multicore as a niche technology at the time. It was presented as a replacement technology which would require modifications of existing code, but modifications so trivial they were given a new name: tinkering with non-performing or poorly implemented code up until this time was referred to as, well, tinkering with non-performing or poorly implemented code; it was henceforth to be known as refactoring, a term lifted from the rantings of the then red-hot business guru Tom Peters.
  • A common thread running through these posts is of course the absolute compulsion of software and hardware churn to emplace and extend the agenda of the cybernetic-utopians. A churn that is deemed an existential necessity even in the absence of any demand or compulsion, and even beyond any conception of conventional profit/loss/investment cycles. This casts dark shadows over the motivations of the multicore vanguard.
  • So, in the span of 15 years, the exact same technological paradigm requires complete refactoring of existing code bases twice, with no assurances that any of this will result in an increase in productivity (a word once ubiquitous in these discussions which has since mysteriously vanished). Why twice? Two reasons, says the new narrative:
    • The software mens insisted on using concurrency locking objects to serialize non-parallelizable portions of their code, as they were told to do by Microtel. This of course resulted in a mess which the software mens should have foreseen, being that they were supposed to be really smart; hadn’t they ever heard of caveat emptor?
    • After the engineers remove all the locks and kernel objects, they must now turn their attention to munchkinizing the code that’s left so it will fit through the narrow aperture of the multicore niche that Microtel didn’t mention existed fifteen years ago. And don’t forget our old friend Dave Rotman from part 3B when you’re getting dizzy from an attack of mass confusion thinking about all this.

If none of this makes any sense to you, that makes you a smart person. I am simply following the breadcrumbs dropped by Intel and the other chip makers, along with Google and Microsoft. Often, when faced with the breakdown and exposure of an obviously false narrative, it is a wise and effective tactic to simply drop the broken narrative and replace it with a new one, using bald-faced stonewalling to assert that the new one always existed and counting on the tiny attention spans of the media and the people who absorb it to absolve you of the discredited prior one. This was raised to an art in Vietnam. In 1963 it was stated by Military Assistance Command Vietnam (MACV) that the South Vietnamese were on the verge of taking over for American personnel in the fight against the North. After two years of routs, debacles and self-inflicted blood baths, it was obvious that the only people the South Vietnamese army could beat were themselves. The old narrative was replaced by a new one: hearts and minds. The communists would be defeated by essentially bribing the South Vietnamese populace with good intentions, hard cash and commodities. This did not work either. The largess meant for the populace was appropriated by villains in Saigon and along the Potomac, who fattened themselves like frogs, while most of what they didn’t get the communists did, easily despoiling the essentially defenseless rural populace with terror and propaganda. This too was disposed of as a narrative and replaced with an apocalyptic one of Rolling Thunder (carpet bombing of North Vietnam) and Operation Phoenix (a country-wide terror/murder campaign conceived of and operationally run by U.S. central intelligence). And when there was no more money to steal, it all ended with shocking abruptness (sort of like its doppelganger, the Apollo space program); the new narrative was “Vietnamization” (Vietnamese taking over for American personnel), which was exactly the same narrative as 10 years before. Even with all of this, on the day the last Americans scrambled ignominiously into helicopters on the roof of the American embassy in Saigon and fled, the majority of Americans still supported the war. The South Vietnamese army, which had received more American weapons and materiel than any foreign force in our history up to that time (remember that U.S. assistance to South Vietnam began almost the day the Japanese surrendered in 1945), completely disintegrated and ceased to exist in the face of a two-week shock assault by the North Vietnamese. Lucky survivors were flown to Guam and stripped of their weapons and uniforms. It was as if the army of South Vietnam had never existed from one day to the next. The prescient ones, who had stolen and cached sufficient funds during the debacle, put on blue polo shirts and crisply creased chinos and bought restaurants and strip malls in West Los Angeles or Hawaii. The less visionary ex-brigadiers and former sergeant-majors ended up as busboys and barbacks in said restaurants, or as criminals preying on their own countrymen in a new land of illicit opportunity. [I like to interject relevant ruminations such as these; talking of nothing but technology without context starts to feel like you’re imprisoned in a white room with 150-watt light bulbs turned on 24/7. The sterility of all writing on the tech industry in relation to the world outside of itself is intentional and purposeful. And for those, especially software engineers, who don’t think any of this can be of any concern to them: go back and think some more.]

The old narrative remembered by graybeards (15 years is an ice age in software-engineer years) is simply gone and forgotten. Long live the new narrative. The multicore putschists have fashioned the new one thusly: multicore nirvana can now only be reached by wizardry. The pedestrian software men foolishly used locks and barrier objects to try to increase execution speed and throughput. Everybody has always known that this is a fool’s errand due to the constraints of Amdahl’s Law</sarcasm>. But there’s hope! The same people who told you back in the day that refactoring would be a breeze; who didn’t mention Amdahl’s Law as a hard constraint; and who didn’t reveal that multicore programming may well be a niche paradigm of only limited use to the general engineering diaspora, have now come up with some sure-fire tricksterization to get those slothful bits moving in your multicore application (rendered as code just after this list):

  • no locks
  • itty-bitty, teeny-tiny, itsy-bitsy, teensy-weensy, really small, yea-big units of work
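
Rendered as code, the prescription amounts to something like the following, a sketch of mine under the charitable assumption that the work is trivially partitionable: no shared lock anywhere, each core chews its own private slice, and the slices are merged at the end.

#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<int> data(1'000'000, 1);
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 1;              // the call may report 0

    // One private slot per worker: nothing shared, no locks.
    std::vector<long long> partial(cores, 0);
    std::vector<std::thread> workers;
    std::size_t chunk = data.size() / cores;

    for (unsigned c = 0; c < cores; ++c) {
        std::size_t begin = c * chunk;
        std::size_t end = (c + 1 == cores) ? data.size() : begin + chunk;
        // Each thread sums its own itty-bitty piece into its own slot;
        // no concurrency objects in sight.
        workers.emplace_back([&partial, &data, c, begin, end] {
            partial[c] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0LL);
        });
    }
    for (auto& w : workers) w.join();
    std::cout << std::accumulate(partial.begin(), partial.end(), 0LL) << '\n';
}

Which works beautifully, so long as your problem happens to be summing a million ones; the sleight of hand is in the claim that the average enterprise code base decomposes this politely.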

Which segues into:

 

Rope-a-Dope…multicore style…

Another article from the same publication as above lays out the new narrative I just mentioned. However, “lays out” is stretching things more than a bit. The article is a hodge-podge and farrago of partially masticated techno bits, patently false statements and irradiant disingenuity. Some of the statements were so stupid from the standpoint of software engineering that they made me laugh. But how dare you make such statements, Mr. Threadbare-Engineer-Without-Portfolio, about Lucian Radu Teodorescu, “…who has a PhD in programming languages and is a Software Architect at Garmin” and wrote Refocusing Amdahl’s Law in the June 2020 issue of Overload, the magazine of the ACCU! I dare.

Firstly, I have gone to didactic lengths to illuminate the implications of technology without history (technological autonomy) being one of the central cons of our age. Everything constrained by the experience of men has a beginning and an end. There is no such thing as technological abiogenesis or parthenogenesis; technology is an artifact of human society. When someone takes up a subject which cannot be separated from history and proceeds without qualification, I consider that there must be a game afoot. The first article we talked about begins with history, as Ignatchenko knows that his comments will not make a lot of sense without it. We get none of that from Dr. Teodorescu. Well, we do get some history, but not the kind you were probably expecting:

…Amdahl’s law seems to be a universal law that prevents us taking advantage of increasing parallelism in our hardware. It’s similar to how the speed of light constant is limiting progress in microprocessor design…

Stating that the speed of light is a bummer for chip designers is like saying the Atlantic Ocean poses a problem for anybody who wants to walk from New York to London. It’s true, but pointless. The article is full of such goofy statements, which seem out of place in a magazine dedicated for the most part to serious and advanced programming issues. Teodorescu immediately takes care of the history problem by throwing an old dusty rug over everything that’s happened in the last 25 years:

But, similar to how the hardware industry learned to avoid the restrictions imposed by physics by focusing on other aspects (like multithreading, multicores, caching, etc), the software industry can improve parallelism by changing the focus.

This statement is simply false (and pretty ignorant for a guy with a PhD, to boot). The hardware industry (I assume he means the chip makers and not people who make RJ45 cables) has not “learned to avoid the restrictions imposed by physics”. Nobody has ever “avoided the restrictions imposed by physics”, and nobody ever will. The chip makers have developed artifacts which appear to avoid the restrictions of physics by presenting interfaces which approximate real-time or lossless effects (instruction pipelining and vectorization, for example). The statement is also incoherent: “…focusing on other aspects” of what? Physics? Caching is an aspect of physics? I am not being facetious; I read this 20 times and it’s still meaningless. The “software industry”? Referring to the agglutination of Silicon Valley megalomaniacs as the “software industry” sounds straight out of 1996. Is there even such a thing anymore? Has there ever been? Where has Doc Teodorescu been for the last 20 years? Microsoft and Oracle haven’t “sold” software since the 1990s, if then. They make money from politics. Legal fiat (politically obtained and enforced) allows them to create a binding one-way licensing scam. The scam: you must pay; you do not own the product you pay for (you can’t transfer your ownership to someone else, and Microsoft or Oracle or whoever may rescind your license anytime for any reason); and the seller makes absolutely no promises that what you have purchased will actually work, which explains its quality and also means that you can never get your money back if it doesn’t. Read the 28 pages of fine print and boilerplate on your license. What other commodity for public sale has these attributes (do not underestimate what having no product liability at all for your product means for the bottom line; it may account for most of a putative software merchant’s entire margin)? More politics (extra-economic coercion) allows them to use the bandwidth you paid for to continually update and modify your equipment with software you haven’t asked for, and to extract your metadata, transparently or covertly, which they sell and traffic without giving you any of the proceeds, while linking your equipment into an ever-expanding real-time node system that you can’t even see. And if not shaken down personally in the wash, you get fleeced in the rinse by companies such as Oracle jacking your government for hundreds of millions of dollars in phony contract expenses and greasy licenses (the DOJ cases have been grinding on like Jarndyce vs. Jarndyce for decades). But it’s their technology, which they’ve gifted to the world.

The “software” which is “sold” is just a prop and a con, like the stuffed animal you win at the carnival which costs 5 dollars but you spent 40 dollars winning by trying to hit a monkey with a tennis ball at 8 dollars a throw. The “enforcement”/coercion of the licensing and the liability jubilee are what pays: hard cash; no liability (release and forget: if it doesn’t work, we have their money and they can’t get it back); no real competitors (since no conventional business model can compete with the coercive extraction one, except a peer scammer such as Google); plus the fringe benefits of data stealing and info (virtual human) trafficking. This for a product whose cost is whimsical (as is the price of virtually all software), bearing no relation to its production or material costs. Microsoft Word is 37 years old and predates Windows, and if I know Microsoft (and I do), the core of the product (not the interface) probably hasn’t changed significantly in decades, and its non-maintenance engineering costs were probably amortized in the last century; yet it still costs almost $200. All of this is similar to the fact that McDonald’s doesn’t make its loot from selling french fries and bush-meat burgers, but from the real estate (another “direct extraction”, i.e., political-fiat industry) their franchises sit on; the ersatz eats are the “con” here. So are we talking about that “software industry”, Doc?

What I’m trying to point out here is that in just a couple of statements, the author has already handed out a couple of rusty muskets: the trends he is criticizing were not made to happen by anyone, for any reason, at any time, for any gain, or for the aggrandizement of any greedy or selfish ambitions (the flag-waving of “the hardware industry” and “the software industry” is shorthand overt signaling that whatever follows will have no context; no need to resort to cover words, Doc, all the major players (no, “playas”, to use the hip-hop term) in this drama could be addressed by name). And the “software industry” (the playas) is going to change “focus” for what reason? The “software industry” will change focus for its own purposes; the rest of us can drop dead. Like they “focused” on shilling multicore threading performance boosts which they had to know multicore chips couldn’t actually deliver without complete rewrites of existing code bases, even though they specified the hardware (like being “surprised” that a car you designed and built only goes 35 miles an hour on the freeway)? And why didn’t the playas change focus 15 years ago? Everything you’re telling us now, the playas knew then (I knew it then, and so did Ignatchenko; but neither I nor he, from what I gather, was in much of a position to do anything about it). Focus on that, Doc. And I ask again: where you been?

An overreaction? No. The hour is getting late and the time for this sort of faux objectivity/obeisance is long past. Intel has 80,000 engineers. Their prime customer was and is Microsoft. You’d think maybe one or possibly two of these jabronis would have informed Microsoft that their claims concerning multicore capabilities were exaggerated and misleading, when they weren’t busy avoiding the restrictions imposed by physics? And almost twenty years after the first commercial introduction of multicores, Dr. Teodorescu decides he’s had enough and rolls up his sleeves to do something about it. Got it. With our long backstory as source, it can be seen just how phony all of this is before the technical discussion even begins. But let’s play along with the gag just for laughs.

It is instructive that a book the author lists as a reference states essentially exactly what I said in the bullets above recapping the historical claims.

In summary, in order to achieve increasing performance over time for each new processor generation, you cannot depend on rising clock rates, due to the power wall. You also cannot depend on automatic mechanisms to find (more) parallelism in naive serial code, due to the ILP [instruction-level parallelism-editor] wall. To achieve higher performance, you now have to write explicitly parallel programs [italics mine]. And finally, when you write these parallel programs, the memory wall means that you also have to seriously consider communication and memory access costs and may even have to use additional parallelism to hide latency. Instead of using the growing number of transistors predicted by Moore’s Law for ways to maintain the ‘‘serial processor illusion,” architects of modern processor designs now provide multiple mechanisms for explicit parallelism [italics mine]. However, you must use them, and use them well, in order to achieve performance that will continue to scale over time.

from  Michael McCool, Arch D. Robison, James Reinders, Structured Parallel Programming: Patterns for Efficient Computation, Morgan Kaufmann, 2012

The first highlighted section in the previous paragraph is the poison pill that was conveniently left out of the first wave of multicore propaganda, and continues to be even now (Big Dave Rotman from MIT Review peeps not a word of this). All of this was known then as it is known now (the only valuable thing I got out of this article was that it prompted me to read this book, which I actually owned but had never read, and which is pretty good; it’s one of those books engineers have many of, where you have no idea when or where you acquired it and it sits forlorn until one day you look up on the shelf and see it and say “I’m going to finally read that”. This may take years. I’ve got many books that have been fermenting for a decade stacked on the floor). The second highlighted passage is even more interesting. The way I read the meaning of this is thusly: Moore’s Law in its late stages was essentially being used as a ruse, that simply adding more density and more cores would auto-magically result in more performance; this was known not to be true, since for hard architectural reasons increases in the processing speed of serialized code had dead-ended; but the chips were in the bins and the pipeline and needed to be moved. Now that the multicore chips from a generation ago have been completely amortized, the big ugly bomb is just now being dropped discreetly all over the place by articles such as the one we’re currently gnawing on: that in order to achieve the gains promised years ago, most infrastructures and code bases will have to be completely rewritten, essentially from scratch. And remember, this is from a graduate-level textbook published 8 years ago. I guess my conclusions aren’t as eccentric as they would first appear. To cover up the blast crater from the aforementioned big ugly bomb, we have the canard, of which this article is representative, that “if only what I’m about to tell you now about Amdahl’s Law had been known then, imagine all the fun we could have had!” We also see how the con (using multicore processors and multithreading paradigms = instant turbo boost) dovetails suspiciously with the existential compulsion (hardware and software churn, code refactoring and forced obsolescence, and the metastasis of the hive). It is also notable that the volume quoted immediately above does present a great deal of interesting historical context, which again moves the needle on my suspicion meter, to Dr. Teodorescu’s detriment.

Teodorescu proceeds to read from his cue cards with the expected indignation at the use of locks and mutexes and the not-so-veiled inference that anybody who did was/is a doofus. He also gives a pretty pedestrian rundown of Amdahl’s Law, although without making clear to readers who may not be aware of it that the theorem is over 50 years old. We then get to a section headed “A Change of Perspective”, where it seems we’re going to get down to some serious business of how to “refocus” on multicore parallelism, or something. He states that:

To obtain significant speedups, we need to combat the negative effects of Amdahl’s law (which provides an upper bound) and to take more advantage on Brent’s lemma (which guarantees us a lower bound). That is, we need to reduce the amount of serial code, and increase the work/span ratio.

We would want to ensure that there is no contention between two units of work (tasks) that run in parallel, and moreover, at any given point in time, we have enough (better: the right amount) such tasks to execute. This is achievable by following a relatively simple set of rules:

  • we break the algorithm into a set of tasks; a task is an independent unit of work
  • we add constraints between tasks, modelling dependencies and possible race conditions
  • we use a dynamic execution strategy for tasks; one worker thread per core
  • we ensure that the algorithm is decomposed in enough tasks at every given point in time

I reproduce this completely here for a reason. When presented with the problems of the lock-driven multithreaded paradigm, Sergey Ignatchenko offers his alternatives and makes a good case for them:

Fortunately, there is a well-known approach that solves all the issues raised above (though not without some cost): it is using message-passing shared-nothing architectures. They have been known for ages; in modern computing, the oldest living relative of the shared-nothing message-passing stuff is probably Erlang; however, recently many more technologies have emerged which are operating more or less along the same lines (though they’re mostly event-driven which is a subset of more generic message-passing): Akka Actors, Python Twisted, Go (at least as its idiomatic version), and of course, Node.js; also there is an ongoing development in which I am involved too [Node.cpp].
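
To make the alternative concrete, here is a minimal shared-nothing sketch of my own devising (not lifted from Node.cpp or any of the frameworks he names): all the mutable state lives in one owner thread, and everybody else communicates with it only by posting messages. Note the honest caveat buried in his “not without some cost”: the mailbox itself still hides a lock, but application code never touches shared state directly.

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A toy mailbox: the lock lives here, and only here.
class Mailbox {
    std::queue<int> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void post(int msg) {
        { std::lock_guard<std::mutex> g(m_); q_.push(msg); }
        cv_.notify_one();
    }
    int receive() {
        std::unique_lock<std::mutex> g(m_);
        cv_.wait(g, [this] { return !q_.empty(); });
        int msg = q_.front();
        q_.pop();
        return msg;
    }
};

int main() {
    Mailbox box;
    // The owner thread: the only place the running total exists,
    // so there is nothing for anybody to contend over.
    std::thread owner([&box] {
        long long total = 0;
        for (int received = 0; received < 8; ++received)
            total += box.receive();
        std::cout << total << '\n';   // 0+1+...+7 = 28
    });
    std::vector<std::thread> producers;
    for (int p = 0; p < 8; ++p)
        producers.emplace_back([&box, p] { box.post(p); });
    for (auto& t : producers) t.join();
    owner.join();
}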

The Structured Parallel Programming: Patterns for Efficient Computation volume offers a great many design patterns for parallel/multicore programming, but more importantly implements them with several mature and well-known frameworks: Threading Building Blocks (TBB), a library implementation from Intel, and OpenMP, an open standard for directive-based shared-memory parallelism. (I’ve used both of them and they’re both very good.)
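
For flavor, this is roughly what the mature frameworks buy you; a minimal OpenMP sketch of my own (not an example from the book; compile with -fopenmp), in which the parallelism is declared rather than hand-built:

#include <cstdio>

int main() {
    const long long n = 1'000'000;
    long long sum = 0;
    // OpenMP splits the iterations across the cores and combines the
    // per-thread partial sums itself: no visible locks, no visible
    // threads, just a declared intent to parallelize.
    #pragma omp parallel for reduction(+ : sum)
    for (long long i = 0; i < n; ++i)
        sum += i;
    std::printf("%lld\n", sum);   // 499999500000
}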

For his part, Teodorescu offers us the bulleted list above. I’m aware that a great deal of what’s going on here may be stupefying to non-engineers or casual readers, but I ask you to stay with it. For the software engineers: take a long look at those four bullets and ask yourself what makes them so familiar. I bet most of the engineers got it right away: that list is essentially a bare-bones sketch of the Linux kernel work queue and thread scheduler, with the complication that the author uses a lot of terms that sound convincing but doesn’t explain what they mean. What are “constraints between tasks”, and how do they preclude race conditions if they’re not locks or semaphores (a lock with state)? What kind of constraint are they, then? The Linux kernel also uses a “dynamic execution strategy” with one worker thread per core; what’s different about what you’re doing that makes it parallelism? How do we know that “the algorithm is decomposed in enough tasks at every given point in time”? What does that even mean (it means nothing to me; maybe others are more perceptive)?
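
Lest you think I’m exaggerating, here is roughly what those four bullets describe, in forty-odd lines of my own caricature C++ (a sketch, not anyone’s production scheduler); savor the irony that the task system’s own queue is guarded by, of all things, a mutex:

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// "A dynamic execution strategy for tasks; one worker thread per
// core." The tasks are assumed independent (the hard part the
// article hand-waves) and the queue itself is lock-protected.
class WorkQueue {
    std::queue<std::function<void()>> tasks_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
    std::vector<std::thread> workers_;
public:
    WorkQueue() {
        unsigned cores = std::thread::hardware_concurrency();
        if (cores == 0) cores = 1;
        for (unsigned i = 0; i < cores; ++i)
            workers_.emplace_back([this] {
                for (;;) {
                    std::function<void()> task;
                    {
                        std::unique_lock<std::mutex> g(m_);
                        cv_.wait(g, [this] { return done_ || !tasks_.empty(); });
                        if (tasks_.empty()) return;   // done and drained
                        task = std::move(tasks_.front());
                        tasks_.pop();
                    }
                    task();   // run outside the lock
                }
            });
    }
    void submit(std::function<void()> task) {
        { std::lock_guard<std::mutex> g(m_); tasks_.push(std::move(task)); }
        cv_.notify_one();
    }
    ~WorkQueue() {
        { std::lock_guard<std::mutex> g(m_); done_ = true; }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }
};

Submit independent units of work and the per-core workers drain them dynamically; everything actually difficult (the “constraints”, the guarantee of “enough tasks at every given point in time”) is left exactly as unexplained here as it is in the article.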

I am taking the approach here that I am trying to understand all this so I may refactor some existing code for parallel processing. So far I’m still at square one with a puzzled expression.

The key here is the definition of tasks: the unit of work is independent. That is, two active tasks should never block each other. If two tasks are conflicting, they cannot be active at the same time. That is why we need to add constraints between tasks (second bullet). And, as the constraints between the tasks would have to be dynamically set (as we want our worker threads to always work on the next available task), we need to have a dynamic execution of our tasks…

…And with that, we roughly solved the contention part between tasks. At this point I would ask the reader to accept that it is feasible, in practice, to implement such a system; I will not try to prove that we can always restructure the algorithms this way – let’s leave this to a follow-up article.

That follow-up article had better be bodacious. I especially like how the author gives himself two snaps up after leaving us dry-mouthed with confusion. And in case you didn’t notice (I didn’t), we solved the problem of contention between tasks without using locks. By just saying so. Boy, if I had known it was that easy I’d be dressed in sable instead of the rags I’m currently wearing. Thanks, Dr. Teodorescu!

After this we get a page and a half of algebra wizardry resulting in a magic formula; he proceeds for the rest of the article to make declamatory statements which in the next breath he qualifies as probably not being true, beginning with the magic formula:

This formula is important, as it gives us a guaranteed speedup (under all the assumptions we considered). For example, if N = 1000, we have a minimum speedup of 500.25 for 1000 cores, and a minimum speedup of 9.91 for 10 cores.
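
For the record, the two quoted numbers are consistent with the bound you get from the refined form of Brent’s lemma when an algorithm is cut into N equal, independent unit tasks running on P cores. This is my reconstruction, not a derivation quoted from the article, so treat it as a conjecture about what the magic formula is:

T_P \le \frac{T_1 - T_\infty}{P} + T_\infty \qquad \text{(Brent's lemma, refined form)}

With T_1 = N (total work: N unit tasks) and T_\infty = 1 (no task waits on another), the speedup is bounded from below:

S = \frac{T_1}{T_P} \ge \frac{N}{\frac{N-1}{P} + 1} = \frac{NP}{N + P - 1}

For N = 1000 this gives 10^6/1999 ≈ 500.25 at P = 1000 and 10^4/1009 ≈ 9.91 at P = 10, exactly the figures quoted. Amdahl’s Law, for comparison, caps the speedup from above at 1/(s + (1-s)/P) for serial fraction s; the article’s whole game is pairing that ceiling with this floor.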

But in the previous line he states:

Again, this formula ignores the fact that, in practice, it is hard to find perfect parallelisation, so it must be used with care.

And then:

I believe that these numbers will make most readers want to drop lock-based multithreaded programming and embrace task-based programming. If that’s the case, I am personally very happy.

But, wait a minute:

But before we directly jump on task-based programming, a contextualization is needed.

More:

A key assumption we’ve made when deriving the above formula is that we have enough tasks to run at any given time; to be more precise, more than the number of available cores. [overdecomposition-editor]

But:

In multithreaded contexts, we should always aim for overdecomposition, but this is not always possible. [so if a key assumption of the formula may or may not be true, shouldn’t that ambiguity be made part of the formula?-editor]

Again:

From the point of view of this article, overdecomposition is highly encouraged.

But:

…applications may also want to limit the amount of decomposition. [again, shouldn’t this be part of the formula, or some coefficient or constant to weight the result?-editor]

It continues:

Our formula shows that the more tasks we have, the better the speedup will be.

Not so fast:

But this may degrade the overall performance.

The fun never stops:

Most of the formulas in this article revolve around the total work that an algorithm needs to do, but they are not very explicit.

So….:

I left this discussion outside the article. [The Marx Brothers in Duck Soup. Groucho: “Did you follow the man whose picture I gave you?” Chico: “You bet, boss. In only one hour, no, in only thirty minutes, no, only ten minutes after you giva us this man’s picture and tella us to follow him… we looza the picture.”-editor]

Make it stop…:

If we remove locks from our multithreading programming, indirect contention might become visible. [the mess we were trying to avoid by using locks now reappears because…we are not using locks-editor]

Oy vey:

And the more parallelism we add to our programs, the more this will be a problem. [so the more we parallelize our code, the worse the problems get that we were trying to solve with lock serialization; the same problems that were the reason we were told we needed to parallelize the code in the first place, which it doesn’t address in any way; so it appears we are in a worse de facto position than we were before we started. As the excerpt from the textbook intimates, we’ll have to parallelize the contention bits too. <begin facetious bitter comment from person who will actually have to do this with people yelling at him/her and is not a professor and will probably get fired for not being able to pull this off to the satisfaction of the scrum master, just like 15 years ago> Why don’t we just rewrite the whole OS kernel ourselves and get it over with? <end facetious bitter comment/> -editor]

I remember giggling to myself at the last one. I seriously began to think at this point that this was some sort of gag piece, but I could find no evidence of it (usually the next issue or episode of something makes a reveal and everybody can have a laugh; I scoured the next issue for this something. Nothing). And that was the point at which I began to write this whole jeremiad. Why would a respected technical publication, one of the few left in the software engineering space, publish, with what appears to be no editing, such a ludicrous article? Why now? And then I began to stumble over similar efforts in other places. Dr. Teodorescu, for his part, seems to realize he’s made a bugger-all of it by going out with a whimper:

The one big question that the article doesn’t explore is whether we can move to tasks for any type of problems. It only previews the main principles that would make the system work, but doesn’t provide solid arguments on how this can be done[tee hee-editor]. I can only assume that the reader is not fully convinced that such a system would be feasible for most applications. And that just binds me to write another article on this topic.

Brother, that sounds like a threat if I’ve ever heard one. You spent six pages of a 16-page bi-monthly journal (that’s 6.25% of the magazine’s total output for the year; see, I can do math too, Doc) telling us how to refactor Amdahl’s Law for parallel computing, only to tell us at the end that everything you said is probably pants and sawdust and maybe you’ll try to do better next time. So ends a tortuous effort to bend Amdahl’s Law to the will of the cybernetic-utopians. Incredibly, the others I have read are even more unconvincing than this one, if that can be believed. If this guy has been charged with selling parallelism to software engineers, parallelism can kiss its ass goodbye (I’ll let you read for yourself the magnificent “grand finale” of the article; I don’t want to spoil the laughs for you). I had planned a couple more deconstructions from other sources, but Dr. Teodorescu has taken the heart right out of me. I think I’ve hit all my bases, and wading through these obviously contrived and duplicitous articles is making my soul hurt. Thanks again for everything, Dr. Teodorescu!

The key thing to remember here is that ranting about lock-based programming and trying to sell parallelism are not actually all that related. As the textbook referenced above demonstrates, it is perfectly possible to make a case for parallelism as a straightforward evolutionary step necessitated by hitting the wall of certain engineering factors concerning serialization of instructions and memory by microchips, and equally straightforward physical side effects, mainly heat dissipation. These are historical developments; framing them in some sort of evangelical context (locking-object Jacobins vs. the heroic centurions of parallelism) demands explanation. One such explanation is what I have tried to give here: that this is a weaponized narrative by the cybernetic-utopians, twisting Amdahl’s Law into knots to further the greedy and seemingly boundless ambitions mentioned at the top and to whitewash some shady marketing from the chip manufacturers (which may all wash out at the river’s mouth as the same thing). I leave it to the reader to look into the attempts to “disprove” Amdahl’s Law (essentially, in our context, a hard engineering constraint requiring either redesign or refactoring) as another front in this weaponized narrative. The most well-known efforts of this kind came from prominent staff of the Sandia National Laboratories, notably John Gustafson. I’ll only note that it was later pointed out formally that in massaging Amdahl’s Law to allege that it was inapplicable to supersized data sets, he had actually “mistakenly” reversed the context of the equation’s key variable, meaning that Gustafson’s Law and Amdahl’s Law are actually mathematically equivalent; there is no incongruity in applying the latter to large data sets. But that’s a fight I don’t have a dog in.
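
For readers who want the algebra behind that last claim, here it is in standard textbook form (my summary of the usual argument, commonly credited to Yuan Shi’s 1996 note, not something taken from the articles under discussion). Amdahl’s Law, with s the serial fraction measured on a single processor:

S_A(P) = \frac{1}{s + \frac{1 - s}{P}}

Gustafson’s Law, with s' the serial fraction measured on the P-processor run:

S_G(P) = s' + (1 - s')P

The two fractions describe the same execution clocked on different machines, and are related by

s = \frac{s'}{s' + (1 - s')P}

Substituting this into Amdahl’s formula reproduces Gustafson’s exactly. Same law, different coordinates; no loophole for large data sets.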

This whole ball of wax with the parallel propaganda could also be a more familiar tactic of sweatshops: just keep the slaves working, no matter what it takes; carrots, sticks, whips, hostages, hypnotism, anything goes. Just keep the product moving. In this case it may just be to keep the gag going a little longer, until none of it matters anymore. I’ve worked for a number of start-ups (3 to be exact, in an era when exactly what defined a “startup” was not as well defined as it is today). All broke down instead of starting up. In every case, just before the lights went out and everybody was marched out by security clutching their pitiful cardboard boxes full of souvenir coffee mugs and framed “Employee of the Month” awards, there would be some wild rumor (separate from the daily wild rumor that this was the week we wouldn’t be able to make payroll, which always turned out to be the only wild rumor that actually came true), usually that the company, which was probably worthless, would be sold to some Mr. Moneybags at the eleventh hour. These included (and this is far from exhaustive): we were going to be bought out by Microsoft, so everybody keep working and look sharp!; the Defense Department was really interested in our software and a general was coming in person tomorrow to hand us a check (never mind that everybody knows that’s not how Pentagon procurement works, but when you’re one step from the street, credulity soars to the heights of a 4-year-old’s), so everybody keep working and look sharp!; the 25 people shitcanned the week before weren’t actually fired but had been “redeployed” to brainstorm for a month at a resort in Colorado and would return to present their reorganization plans to the new owner (who was going to honor everybody’s stock options in full after the sale!), so everybody keep working and look sharp! And yes, these are all actual sordid, pathetic pipe dreams bandied about in hushed tones by fellow employees, which I witnessed just before the guillotine dropped to lop everybody’s head off. I of course never believed a word of any of it (..ahem…). It’s hilarious now, but I assure you nobody was laughing then.

All of the don’t-use-locks, parallel-programming-is-da-bomb talk may thus be just a management tactic (and the “management”, quite probably, are, directly or by proxy, the cybernetic-utopians) to keep the software engineering automatons putting one foot in front of the other (I suspect all of the hoaxes I just mentioned were started by management) while the last planks of the hive are nailed into place. What I’m trying to say is that the software engineering future for those not on the team may be all used up, and this particular gambit may be the last. Once in complete control, the cybernetic-utopians will not require obfuscation or indirection; the fist won’t need a glove. I don’t think this could qualify as the complete answer (the premeditated propaganda meme is far more likely), but it certainly could be helping out. And if characters like Dr. Teodorescu have been charged with “working the crowd”, the cybernetic-utopians having a backup plan (naked coercion, deceit and chicanery, the dreaded NCDC) is probably wise.

A final note on Amdahl’s Law follows, applicable to some matters we have discussed elsewhere in this effort-from a completely unexpected source.

Concerning Amdahl’s Law and human productivity

While trying to follow due diligence on the topics I’ve been discussing, I had to do a great deal of wrangling with an industrial complex not familiar to most people: the racket of scholarly journals, citations and professional publications. The big dogs in this arena are Elsevier and JSTOR, whose names may have some resonance for people familiar with the tragic case of Aaron Swartz, who was probably suicided for threatening these organizations’ cash flow (or is still alive and watching Netflix in Tel Aviv, his “death” hoaxed to encourage the others, if you know what I mean). These mobs have essentially locked just about every citation, article, journal entry and scrap of paper generated by the academic and research communities behind a paywall, accessible for large princely sums by institutions and at extortionate piece rates by dusty, disreputable free agents such as myself. The IEEE is also in on this racket, which is doubly unfortunate for me, the archives of Computer magazine being stashed in their attic. The prices are exorbitant. A single article from JSTOR costs $15-33.00. And you may only be able to read it; downloading it may require some extra cash. There is also obvious collusion: the prices for all of these combines seem to vary only between say $33 and $35 or so, which you pay while being able to judge the quality and content only through a brief “teaser” abstract. You may pay the vig for what seems to be a relevant journal article and discover it’s actually only two paragraphs long with 4 pages of white space and 6 pages of boilerplate. This is essentially the same kind of bold, unilateral and clearly illegal seizure that Google got clean away with years ago with Google Books.

I doubt if any academic or researcher is in a position to say anything now that this expropriation is a fait accompli, without getting a scarlet letter sewn on their tweed sport jacket and their snack room privileges downgraded (“Coffee mugs for teaching staff only! Visitors and students use the Styrofoam ones in the box under the sink”). The takeaway here is that it was hit or miss for me to gain access to a lot of information that I really thought would be useful. I apologize to the readers for being poor, and assure you I was as resourceful and persistent as was humanly possible (once upon a time, when I was a teenager, they had a place where you could go and walk around and find and read this stuff for free without anybody bothering you, all day. I forget what they called them). It seemed to be whimsical as to what was publicly accessible and what required cash (probably some AI algorithm designed to gyp researchers out of the maximum). I stumbled across some irrelevant gems (Spatial response of American black bears to prescribed fire in northwest Florida was a favorite: where do black bears go when “burn off” fires are set in Florida? Answer: the other way. Strangely, this came up when I searched for “multicore programming” through Elsevier; I wonder what subterranean inode links black bears and multicores?). Similarly, I just happened to run across an accessible article which touched on a subject we have brushed up against repeatedly in our journey, the marginal productivity resulting from increased investment in computer technology, and does so in a novel and unique way (at least novel and unique to me).

The article is preciously named The tortoise and the (soft)ware: Moore’s law, Amdahl’s law, and performance trends for human-machine systems, but don’t judge; the article is quite interesting and relevant. It comes from the Journal of Usability Studies, which sounds bogus but isn’t, and even if it were, couldn’t be any more bogus than Dr. Teodorescu. The professional appellation seems to be “human-factors specialist”. This seems to be a field that actually does quantitative data studies and mappings of human-computer interactions. The abstract’s summation is quite good:

Amdahl’s Law demonstrates, algebraically, that increasingly the (non-parallelizable) human performance becomes the determining factor of speed and success in most any human-computer system. Whereas engineered products improve daily, and the amount of information for us to potentially process is growing at an ever quickening pace, the fundamental building blocks of human-information processing (e.g., reaction time, short-term memory capacity) have the same speed and capacity as they did for our grandparents. Or, likely, for the ancient Greeks. This implies much for human-computer interaction design; rather than hoping our users to read or to type faster, we must look for optimally chosen human channels and maximally matched human and machine functions.

The implications are pretty clear. Human interaction with computer systems (the serial, non-parallelizable portion of the process) will constitute an ever greater part of any human-machine interface’s execution time, even as the computer (parallelizable) portion increases in speed and efficiency. This insinuates that optimizing the serial portion of the system (us) would be the more logical approach. It also implies some rate of “diminishing returns” for any further parallel efficiency increases. This is as I have discussed concerning the decline in marginal productivity of a great many computer automation implementations, intimating that the best days of computer productivity, as it has been understood up until now, are behind us. This does not seem to pose much of an existential threat, as constant barrages of techno-propaganda would have us believe (so you get that gas and electric “pay up or it’s lights out” letter 29 microseconds later than otherwise; big deal), where the converse, expending vast and possibly nonrenewable resources to reorganize and reorder social and industrial relations to effectuate an ever incrementally faster responding “cyber-society”, may in fact do so. This also explains somewhat the trajectory of the “trans-humanist” movement: the only way humans can break even with this accelerated cyber-society (which seems to be justified only by fiat, not human compulsion) is to become less serial and more parallel, something desirable only in the fevered brains and cult-like bug-eyed rantings of the “possessed” (or “possessors”; there is some squabbling over the literal meaning of the title of the Dostoevsky novel to which I’m referring, but strangely, the meaning in this context is pretty much the same).
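
To put one worked number on the implication (my example, not the paper’s): treat the human as the serial fraction s in Amdahl’s formula,

S(N) = \frac{1}{s + \frac{1 - s}{N}} \longrightarrow \frac{1}{s} \quad \text{as } N \to \infty

If the human consumes 20% of a task’s total time (s = 0.2), then even an infinitely fast machine yields at most a 5x speedup on that task; every further dollar of silicon is spent chasing the residue of that ceiling.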

It is indeed a strange quirk that it did not occur to me until I read this monograph that my commentary about marginal productivity can also be framed by Amdahl’s Law. The rest of the paper, although not directly applicable to our subject, is well worth reading. It makes me wonder just how much I’m missing, not having a wad of Benjamins to peel off and feed into the maws of Elsevier and JSTOR.


….fini….

I am aware that the tone and trajectory of my prose may seem eccentric and a little too hostile for the subject matter. But in my stern defense, I know that most of the people who will read this are younger and less jaded by the experiences of software development than I am. The apparent eccentricity is, I believe, only due to the fact that virtually no non-captive accounts exist of the events of the last 25-30 years of software development and microprocessor-derived technological expansion from the standpoint of the people who actually created the artifacts. Only recently has a central member of Bell Laboratories’ seminal development teams (Brian Kernighan) written expansively, in his own words, about events that took place 50 years ago. Bjarne Stroustrup and Stanley Lippman (key progenitors of the programming language C++) are somewhat known only because they are good writers and the seminal books they have written remain in print, but they have written nothing that is not strictly technical. There have been some pretty nice volumes written about the golden age of Digital Equipment Corporation and Wang, but these are the glaring exceptions. And focusing only on the brightest stars ignores the fact that the universe is composed mostly of matter you can’t see-like software engineers working. Even when some narrative scrap does emerge, it’s always the principal engineer or project leader doing the talking, with the narrative always framed and massaged. Short of Bolivian tin and silver miners during the age of the conquistadors, or unfortunates banished to slave in the Numidian salt mines during the reign of Tiberius, there probably isn’t any group of workers I can think of who have left as desiccated an historical record of themselves as software engineers.

But many will protest: there are thousands of programming and engineering blogs! This is true. But virtually none provide any context or qualification or dimension as to what their authors are doing. Essentially all venues of this type strike me as, consciously or unconsciously, attempts to ingratiate their authors with prospective validators, like small children proudly showing their finger paintings to their parents. I see that you’re writing and developing C++ metaprogramming interfaces-but to what purpose? For whom? Is this good? Are you an engineer, a co-conspirator, or a perpetrator-or some combination of the three? Whatever you’re doing has to be about something, even if that something is nothing. What’s it about? You don’t know? You spend twelve hours a day in an 8×8-foot cubicle like you’re on some sort of Devil’s Island run by Howard Johnson’s and you have no idea what your doing is about? Is what you’re doing illegal or criminal? Many, many software engineers would be deeply shocked to find how often the answer would be yes, if the question were ever even broached. Quite a few things that go on glibly at MIT’s Media Lab and Stanford are clear violations of United Nations conventions and other international accords against the experimental use of non-consensual subjects (most internet “crowd” experiments are explicit violations of these, and of the 1st and 4th Amendments of the Constitution), and ignorance of these prohibitions is no defense against prosecution. It’s about money? Do you get any besides your measly pay? Do you know how much money this place actually makes? Where does this money go? Does the money go to a good place or a bad place? Has it ever occurred to you that this place you work at is actually a money laundering operation and you’re just a prop to keep the law, such as it is, at bay? Do you have any documentation of the scope of knowledge you have of the operation-documentation which might be useful in your defense in case of prosecution for what goes on here, about which, outside of the area inside your cubicle walls and the Jira tickets which appear in your inbox, you seem to know absolutely nothing? [I worked for a couple of weeks on a contract for a company building web spiders for a search engine (this was when there were many). I looked at the engineering documents and it took me about 30 minutes to determine that what they were doing was illegal. They were intercepting server VoIP (Voice over IP-the protocol Skype uses). I told them it was a violation of the 1934 Communications Act (superseded by the Telecommunications Act of 1996, but as far as I know, these proscriptions remain). Voice calls over internet protocol are considered the same as regular voice transmissions (this is also why surveillance cameras don’t have audio). They were technically wiretapping and could be thrown in prison. They laughed, but the manager said he would “ping” corporate for an opinion just to “cover all the bases”. The manager must have gotten reamed by corporate and he never spoke a word to me again. This assignment was so short I neglected to add it to my CV of Shame.] I have asked many software engineers some combination of the preceding questions, among others, and the response is always the same-blank stares and nervous giggles. You can guess the response of the managers.

Huge chunks of technological history have also simply vanished from popular scope. Borland Software Corporation, once a peer and rival of Microsoft, does not merit a single volume of any kind that I can dig up, despite pioneering the integrated development environment with Turbo Pascal, object-oriented graphical interface templates with ObjectWindows (OWL) in the late 80’s and early 90’s, and a spreadsheet product vastly superior to Microsoft Excel (Quattro Pro). Virtually all of Borland’s products were superior to Microsoft’s circa 1992, and as late as 1995 the company still had enough left to release a Pascal-based Windows integrated development environment-Delphi-one of whose later iterations, Delphi 6, in my opinion, remains the best ever developed (in further proof that technological development can move both ways, not always forward as the cybernetic-utopians would have us believe, integrated development environments for engineers are significantly more terrible today than they were 25 years ago, with a couple of notable exceptions-to the point that I was forced to stop using them about 6 years ago). Remarkably, a search of the interstellar depths of info mobsters Elsevier and JSTOR produced only one citation for Borland Software Corporation. One. An obscure citation from 1994 about a Borland conference in Florida. I didn’t expect a tsunami of straight dope-but one? Oy. Yet the number of books published in the last 10 years alone about Zuckerberg, Gates and Steve Jobs (who’s been dead for 10 years) could form the source material of a more than modest pyramid. And I still haven’t found anything about that Sybase deal.

What most don’t realize is that the oceans of virtual ink spilled over the technology sector and its artifacts on a daily basis are what Daniel Boorstin, 60 years ago in The Image, called “pseudo-events”-contrived interviews, announcements, talks, podcasts, bulletins, you name it, whose sole function is to be “reported on or reproduced”. If we view this as information theory would frame it, there is a galaxy full of data on these subjects, but virtually no information. Thus, if one searches for “Apple watch”, one will be drawn into a maelstrom of data which may take days to wade through-but only a tiny fraction would actually be informative (data that would tell you something you didn’t already know). And a search about a software company that existed 25 years ago turns up virtually nothing-pseudo or otherwise. The example I usually like to use is to point out to people that Google dominates huge swaths of the tech landscape and controls vast oceans of treasure and unimaginable hoards of data-mostly illicit. What’s not to know about Google? Well, for one, where did it come from? Remarkably, despite what I have just stated about Google, its origins remain mysterious, contentious and fungible. All data and no information. This in itself could consume a lengthy monograph.

If finding information (not data) concerning what amounts to five minutes ago in the history of technology is like scouring a barren wasteland, looking around for anything at all concerning the moral and social implications of what software engineers are actually doing-written by actual working software engineers who do not see themselves as prisoners of their employers and are not suffering from either Stockholm or Smile Mask Syndrome-would be completely fruitless. There is essentially nothing, as I indicated above. An extraordinary circumstance that seems to elicit no curiosity or comment from anyone besides myself-which is even more extraordinary. Every technological wave since and during the Second World War has had its apostates, dissenters and inside jokers-except for software engineering, with Joseph Weizenbaum and Richard Stallman being the only exceptions that come to mind. One of the prime movers behind the early scientific mobilization which eventually became the Manhattan Project, Leo Szilard, soured quickly on the monomania of the project and, unlike most of his peers, was not surprised that the bomb was dropped on Japan and not Germany-which stunned the others, especially the other Hungarians. Szilard, who had advocated a demonstration of the bomb before its use, switched to biology. Norbert Wiener, primogenitor of cybernetics (the good bits) with Ross Ashby (the bad bits), saw the import of the union of technological hubris and greedy military ambition, as well as the long-term implications of factory automation (he even tried to talk some sense into the United Auto Workers, who had other fish to fry and ended up getting burned by the grease). Wiener was marginalized for his insolence; his only peer, Claude Shannon, was lionized by all. Biologist Richard Lewontin, who has done seminal work in population genetics and other fields, has been a cogent critic of the dominant mainstream paradigm in his field, evolutionary biology, as well as of race as a valid biological category-both positions making many people very unhappy. Linus Pauling, one of the greatest scientists in U.S. history and now known, incredibly, only as a crank who was obsessed with Vitamin C, soured on the military gravy train and jumped. He became a highly influential peace activist and even went on television to beef with Edward Teller (Dr. Strangelove) about radioactive fallout.

[Addendum: I am cynical and wise enough to know that some or all of the above personages may have been, or are, operating as “controlled opposition”, i.e., assets manipulated by the villains to constrain the scope of protest against, and criticism of, some line of action to some manageable subset of the actual possibilities-or conversely, to inflate them far beyond what they actually are, for advantage. The assets are of course in on, and part of, the gag. Szilard, for example, following this line of thinking, may have been tasked with inflating the menace of nuclear weapons by his withdrawal from the operational program (“If a guy of Szilard’s stature pulled out because he feared for the future of mankind, that atomic bomb mishegas must really be something! And who would know better than him? He came up with the idea of the bomb in the first place!”). Similarly, Lewontin made many people unhappy, but it does not seem to have affected his professional career negatively very much, which is suspicious. Surprisingly, this does not really make much difference to the point I’m trying to make above. The field of software engineering is so completely captured and ideologically docile that such tricksterization is unnecessary. There is no internal opposition in software engineering, nor external restraint or supervision or validation of its artifacts.

We have no way of knowing, for example, if any of the biometric gizmos and gadgets actually work as advertised, since there is no public independent framework or mechanism to test them-it is simply stated, for example, that face recognition works flawlessly; we (the public) have no real way of determining whether this is true or not. But getting the population to believe that it is true amounts to the same thing for the perpetrators raking in the cash. Just like atomic bombs. If you believe that there are bombs that can destroy cities, you can be easily manipulated to do things you would not do if you did not believe this. And if I’m the one putatively tasked with making the bombs at unimaginably exorbitant cost, this belief exonerates me from having to actually manufacture them-provided the erstwhile “enemy” has flummoxed his constituents in the same way-but does not exonerate me from cashing the checks I receive as payment (I have long suspected that there is not now, and never has been, anything at all in any of those ICBM missile silos in the Midwest but cunning furry prairie vermin). But unlike atomic bombs, the need to create ideological storefronts, double agents and limited hangouts to peddle software artifacts is obviated for the most part, since everybody is in on the gag from the get-go, and happy to be there. The laughably tissue-thin cover of Langley cutout Google (which a 9-year-old could penetrate in an hour with a high-speed internet connection) is the prime example of this.]

Can anyone imagine a prominent software engineer or architect going on television today and having contrary words with Bill Gates or Larry Ellison, for all to see, about anything-anything at all, even something as trivial as whether the bacon was too crispy at the hotel buffet-much less taking a position, yea or nay, as to whether there is a single software engineer alive who loses a wink of sleep over supplying the Gestapo with everything it needs to despoil the liberty and human rights of citizens? I could extend the list of contrarians above pretty far, but you get the gist. I know of no figure in the software development arena of stature comparable to the figures mentioned above who has voiced any contrition or doubt at all as to what software engineering is actually doing-that is, reached exactly that point where someone like Pauling or Lewontin or Wiener would say “see here, old chaps, things are getting out of hand with <fill in the blank>; we need to regroup and reconsider”. What is common to the apostates above is that they separated from their colleagues exactly at the point where they felt the direction their profession was taking posed an existential threat to the world outside that profession. No such firewall or DMZ of decency or discretion seems to exist for software engineers. No detail (in the military sense) seems too dirty that it can’t be done with a clean conscience (“git er done”, as hillbillies would say). Or at least a bifurcated or doubled conscience.

[Addendum: But this is a two-way street, however hard that is to see with the technocracy in complete ascendance. History tells us that catastrophic collapses almost always happen when the star is at its zenith, and systems theory indicates that the kind of closed feedback control grid the cybernetic-utopians are instantiating is quite possible-but it won’t last long. Instead of the Singularity lasting forever, its half-life may turn out to be less than a single lifetime (or less than how long the “Space Age” lasted. Remember the Space Age? It started in the late 1950’s and was effectively dead by 1972. “Space” now means low-earth orbit-the distance from Baltimore to New York. We can’t even get out of our own magnetosphere. Cybernetic-utopia may suffer a similar fate). And with the kind of frenetic, hysterical delusion which seems to be the utopian leadership’s base energy state, Ice-9 is also a possibility, as I have previously stated. When castles made of sand (or cache memory) finally crumble, there may be no more water to put out the fire. And the arch-perpetrators will be long gone. Why do you think Eric Schmidt is heading for Cyprus and Peter Thiel is hedging his bets with a New Zealand Ponderosa? I can’t remember where I read that you know it’s time to go when the secret police head for the exits; stay a minute longer and the overconfident villains, blinded by their own hubris, will be surprised by their victims, who walked unopposed through the abandoned security cordon-you can ask Ceauşescu or Najibullah or Samuel K. Doe; bring your Ouija board. What’s this got to do with us; we’re software engineers? Nothing, brother, absolutely nothing. Don’t mind me. Sometimes my mind wanders. Keep rattling those keyboards and watching those anime movies; it’s all giggles and guffaws and memes and Netflix. Don’t want to be a bum?-chew gum. That’s all the advice I can afford. All is well-just hope nobody tunnels out of the cyber San Quentin you’re building for the sub-betas and opens the floodgates, letting the others out. And keep taking the gaff every time the racket goes sour. This time the gaff might be final. Back to the regular programming.]

The fact that all of the personages mentioned were top dogs in their fields (a few of them invented their fields) was a huge factor in lending credibility to movements up to that point viewed as the province of marginal nuts or reactionaries-views largely due to the efforts of some people who may work in Langley, Virginia-then again, they may not. I know absolutely that even if a prominent software engineer or developer agreed with every syllable I have written, they would never admit it or say a peep. The popular notion of computer scientists and software engineers as “mavericks” seems to be largely a self-propagated delusion reinforced by the intentionally enforced seclusion of our work environments. Unlike biological science or mathematics or physics, software engineering is not a profession, and its members have the status, among people who are professionals, of courtiers who perform card tricks and handsprings and somersaults at the behest of their lords (this is pretty much exactly how it was stated to me by doctors I had to work with in developing software at the incredibly large and influential medical institution). Nothing says so more clearly than the fact that even the top dogs are quiet as mice on any issue that even hints at controversy or contention (in most cases they see no controversy or contention, being in the grips of one or both of the syndromes mentioned above, or actually being cybernetic-utopians).

I don’t want to get carried away with this here, since it will essentially be the topic I am working up to with the Breadcrumbs. I just wanted to close with these comments to clarify the conclusion that cybernetic-utopians are able to architect and stage the narratives around Moore’s Law and Amdahl’s Law any way they wish, because the only people who could frame an effective and convincing counterargument-software engineers-are mute. And this makes me sound more eccentric or anomalous than I actually am, because nobody else is talking (like speaking in a normal voice in church sounds like shouting). Nobody. And they haven’t been talking for a long time.

I was going to do a comprehensive review, but Dr. Teodorescu disabused me of that (“I know everything that I said is useless and contradictory and I’ve wasted hours of your life that you’ll never get back, but I might do better next time if I can stay sober. - Regards, Your Friend, Dr. Teodorescu”). I’ll let everything stand as it is. With that said, I really wish I’d done a better job with part 3C. There were a lot of points I wanted to make, but it would’ve gone too far off topic (says the man who dragged North Vietnam into a discussion about Amdahl’s Law). But as a matter of fact, I’ll touch on a few of them in the next Breadcrumbs.

The last things I would like to say, keeping all the depredations of the techno-heels I have described in mind, are these: everybody needs to wake up, immediately!-especially software engineers; the field of software engineering has a lot to answer for-maybe as much as or more than the atomic scientists; it is much, much later than most people think, even the smart ones; and finally, the agenda of the cybernetic-utopians is much further advanced and longer-lived than all but a very few can possibly know: the very, very few who are actually running this game-and the far fewer of their victims who have lived long enough to see and understand, and who still live to tell the tale. I fit in the latter category.