3C – Cybernetic-utopianism and the tale of two laws: Moore’s Law and Amdahl’s Law

deus lo volt

Events seem to be moving faster than I can type. Even as I leave liberal room for doubt and delusion on my part, reality impinges and confirms my guesstimations and anecdotal meanderings. Remember back in Part 2:

  …I essentially believe that the “App” ecosystems are and always have been smokescreens for the expansion, syncretism and command and control of a real-time unbounded node hive network which can operate in either point-to-point or peer-to-peer mode-or even both simultaneously (this is possible). This is an existential sine qua non compulsion for the cyber-utopians I believe, since the previous continuous hardware and infrastructure churn was drawing down. This conflict is framed as a scuffle between two “App” providers. What it really seems to be is a much more weighty battle: the Chinese (or whoever the Chinese are fronting for) are aware of how this hive is being built (they’ve built one themselves the same way) and want to expand into new territory. It is also conceivable that this “conflict” is a carefully planned subterfuge, a nerf pillow fight or sissy slap exchange between two faux enemies to achieve the end game of software and hardware ferment to cloak the deployment of the hive systems. It is even more conceivable that, following from this, there is actually only one side and the faux enemies take turns kicking each other in the pants in public to thimblerig the rubes.

And thus we have the “TikTok” phony war with the extraordinary spectacle of the President of the United States involving himself publicly in what would seem on the surface to be something far beneath the eminence of his office. A petty scuffle over a social media App? Or an ingenious subterfuge to import China’s largely complete and field-tested hive control system to the U.S. disguised as a “viral” social media platform with “secret sauce”? As the saying goes, you can’t make this stuff up. And the proposed beneficiary of this largess? Why, our old friends at Oracle (led by “former” Langley big man on campus Larry Ellison), who were gifted a lion’s share of the database market in the 90’s by the inexplicable licensing of Sybase’s database core to Microsoft (killing off Oracle’s most serious rival) and exchanged subsequent mickey shakes and masonic knuckle rubs with Microsoft to pretend to compete against each other in the database market. But that’s like, so ancient history, Johnson-you have to learn to let go.

“TikTok” appears to be nothing else than the network protocol I have mentioned previously running on top of the real-time duplex hive grid (or the stalking horse for that protocol). This protocol is laughably referred to in mainstream sources as “a proprietary recommendation algorithm” instead of the node routing engine it almost certainly is (the comparatively vast bandwidth of video streaming can hide a multitude of protocol packet gremlins and control codes). Also remember the admonitions concerning how the purveyors of the end game (the hive) are fully prepared to utilize ersatz capitalism as a con to achieve their long-term interests. They’re not even trying to hide it anymore-and why should they? Nobody’s paying any attention but myself and a few other ragged dead-enders of little or no consequence. I will of course have more to say on this, but let’s return to our regularly scheduled programming for now.

On the compulsion that cybernetic-utopianism and its stalking horses must appear to be both unprecedented and teleological…

Haven’t we squeezed Moore’s Law dry with laundry lists of he said/she said/who shot John and a bitter proxy fight with a man named Dave Rotman, who may actually be a computer-generated holographic avatar from the MIT AI bunker? I’m afraid the answer is no. Before we close the book by backtracking along our long convoluted trail to Amdahl’s Law, a couple more stops have to be made.

In our ramblings (rumblings?) with Dave Rotman and MIT in the previous post, we ridiculed the assertion that Moore’s Law “defined progress itself”. Rotman of course is a disseminator of propaganda-that’s what he gets paid for. But as a propagandist, he isn’t concerned with what he himself thinks is true, but with what he thinks you think is true. He wouldn’t have used the phrase, in other words, if he did not think it had resonance with his expected audience. It reveals the significant modification the word “progress” has undergone in the space of a single lifetime (mine), and this has much to say about the subjects we have been obsessed with.

To someone “a little too young to be a baby boomer,” progress was sold as meaning essentially “improvement over the present”. It was viewed as incremental, although radical in spots. The amount of change and dislocation which was deemed acceptable was directly proportional to the benefits, and those selling change knew this. The progenitors of the Tennessee Valley Authority had to promise jubilee-level rewards for the complete physical and political transformation of several states. Americans’ willingness to go along with the space program was directly proportional to its rewards-in this case, national prestige, as real to many as electricity. Enthusiasm for the Apollo program rapidly declined after the Apollo 11 landing for the reason that most people (logically) thought it had achieved its aims-walking around on the moon for the fourth or fifth time seemed superfluous and non-progressive. The lunar program ended with astounding abruptness for many reasons we won’t go into here. But one of the main ones was that the public at large still maintained the idea that “progress” was constrained by some sort of investment logic: how much “progress” could be a selling point in whatever scheme you were running depended on how much treasure would be invested and how great the proposed benefits the “progress” would bring.

However, the lunar program did introduce the seeds of a new concept of progress. As interest in the program waned after the first few moon shots, an element of the efforts to keep the gravy train rolling which had always existed was now made its primary focus-the moon shots were no longer primarily “progressive” but now “unprecedented”. Something never seen before. Like a carnival trick-a geek biting the head off a chicken. The explorers had to keep coming up with gimmicks (unprecedented, if pointless, exercises): space melodrama (Apollo 13); technological sleight of hand (the moon buggy); nighttime launches; etc. This planted the seeds for a rather remarkable transition of “progress”. In the coming decades anything that was technologically unprecedented was opaquely declared “progressive”, a radically different formulation from what transpired before. Like most conceptual transitions of great import, the change was subtle and gradual. How much of this may have been diabolical we’ll leave alone for now.

What we won’t leave alone is how the new “progress” impinges on Moore’s Law and the Inevitabilists. Rotman rather half-heartedly and absurdly tries to play both sides of the street in the MIT article by throwing out a laundry list of gizmos and contrivances putatively the progeny of Gordon Moore and thus beholden to the older “progress”: “We did what Gordon Moore said and look at all the presents we got!” But this is only a sham. What is really implied is “Moore’s Law (and thus the doubling of transistor density) is progressive because it is unprecedented-something useful may come of this; we’ll keep you posted”. This is progressive in and of itself because each iteration is unprecedented by the new definition (and as is obvious, this approaches sophistry). Aware of the older and still threatening concept of progress, the new technological mandarins must constantly come up with post hoc benefits to justify the new “progress”-a complete reversal. Often these benefits are, in fact, contrivances-like “computer power”. Also note the disingenuous identification of technological/research progress with social progress.

Progress now precedes its benefits (or to put it another way, progress and its side effects are now uncorrelated or decoupled), a remarkable inversion which goes largely uncommented upon. This also implicitly absolves the artifact or program from having any benefits at all in the wider social context. Some vaccine is “progress” whether it works or not, as long as it is technologically unprecedented. It also logically offers absolution from any liabilities (again, the case of vaccines is instructive-or alarming, depending on whether you have a seat on the gravy train or not). Thus, the TVA would have simply destroyed and transmogrified vast swaths of the country, announced that this was progress, and as an afterthought mentioned that the transmission of electricity may be a beneficial side effect-we’ll have to wait and see. Back in the day, whether the TVA was progress or not remained on the table until we actually saw the electricity. The new paradigm effectively identifies progress and technological novelty as the same thing-and never has to actually deliver the goods, since the novelty already has the positive attribute of being progressive (the sophistry mentioned above). Again, we won’t go into any potential villainy here-although it is implied.

The new paradigm also effectively removes the entire concept of technological assessment; that which is unprecedented already possesses, under the new thinking, a positive default attribute which must be forfeited or abrogated to be removed. Technology assessment usually involves comparing an artifact to those of a similar type or class and determining whether the new artifact actually is fit for the job it’s supposed to do. But as we have stated, the new paradigm has already pronounced the thing “progressive” because it is novel. Again, this decouples progress from assessment and in a practical sense makes assessment redundant, or at least ambiguous. So new things which don’t work (Theranos) have equal technological standing with things which do (air conditioning) until proven otherwise-another inversion of previous logic. Under the new regime they must both be seen as progress, which of course creates a whole new nest of problems-as can be seen in the difficulty in actually prosecuting the perpetrators of the Theranos con. The Clever Hanses among us may also see that this identifies “progress” and “research” as synonyms. If you’ve discovered Sarin or Tabun, this is progress, although the victims may hold the contrary position.

This tangent is significant but not what I’m trying to get at here. What is relevant is that the old “progress” had a social and historical context-the assessment of the benefits and costs of something before we called it progress (and did not identify research and progress as the same thing). Those contexts no longer exist-technology is supposedly autonomous of history, directly implying that all research is progressive by definition. And we come full circle back to our old friends the cybernetic-utopians. Mention has been made of the circular logic of the new progress. This explains the motivation of the cybernetic-utopians in adopting it with such enthusiasm: Why should cars drive themselves? Because they can. And the fact that they can is unprecedented. Which makes them progressive. Are you against progress? No? Well then, we’ve settled that. Wanna buy one?

Thus one front moves forward-the new can always be justified in place as progress or social benefit, and foot-draggers vilified as the worst that any man can be in this era: non-progressive or reactionary, lacking in awe of the novel and profane. In a religious context we find similarities: the most violent opponents of a particular religion are often treated as if, by opposing it, they were taking a positive position of their own-in this case identifying profanity (instead of novelty) with progress. This precludes criticism of the critics (a false binary: either you support the utopians or be labeled a Luddite and machine breaker). The preclusion usually allows all manner of charlatans and grifters to move into the wake of the legitimate opposition-like it or not, the cybernetic-utopians have made themselves comfortable in the backwash of radical scientific rationalism, like lizards propagating to far-flung Pacific islands by floating on the battered flora and detritus of typhoons.

But this is one front. Cybernetic-utopians exploit the new paradigm-that all technology is progressive if it is unprecedented-to foment the metastasis and replacement of existing technologies by new ones even where no real compulsion exists to do so, for reasons we have gone to excessive lengths to outline. But we have stated previously that “technological autonomy”-technology without context, which effectively makes “new” and “progressive” synonyms-somewhat implies that it can itself be used as a synonym for inevitability. This is not quite the whole story, which actually is the motivation for this post. To make cybernetic-utopians into Inevitabilists requires a second front. Saying that a piece of technology is de facto progressive is not exactly the same thing as saying that technological progress is inevitable.

To spearhead the second front, Inevitabilists exploit the conclusion derived from the question: if technology cannot be assessed without ingrained ambiguity, for reasons we outlined above, how can we deduce what the technology is for? This is admittedly confusing. Let’s use the previous example of the autonomous vehicle. Why should cars drive themselves? Because they can. And the fact that they can is unprecedented. Which makes them progressive. But what are self-driving cars for? What is their function? What is their purpose? Without social context or assessment these questions cannot be answered. Saying they’re “progressive” doesn’t get us anywhere. Saying the function of self-driving cars is to drive themselves is a tautology that would embarrass even the cybernetic-utopians. I can’t assess the technology impartially since it already has the positive attribute of progress, which cannot be revoked. Alright, I’ll accept its being an artifact which is progressive by nature. But what kind of progress differentiates it from any other technological artifact? What is the type of progress? What is a self-driving car-which I have to know in order to know whether its effects will be positive or negative? And does the auto-attribute of being progressive cancel or overrule any negative assessments? In the real world which we live in now, the answer is most certainly yes.

The observation “Saying that a self-driving car is a car that drives itself is a tautology…” should hint to alert readers where this is going-and where it comes from. I couldn’t have been the only one who asked exactly that question (“What for?”) when first presented with the idea of autonomous vehicles. The question actually means: I know self-driving cars drive themselves, but to what purpose? The semantic meaning of “What for?” in this case is actually “What is a self-driving car?”, existentially speaking. This stems from the fact that the answer is not as self-evident as its proponents would maintain, for reasons we have just stated. I have heard these boosters (grifters?) actually answer publicly, verbatim: “…it’s progressive technology”. Which, to repeat, says nothing. The alert readers have already seen through all of this as the new incarnation of the teleological argument of Final Cause in a non-religious context. An autonomous vehicle is a vehicle that operates autonomously-this is its nature.

As critics as far back as Bacon and Hume have pointed out, the Final Cause argument is fundamentally disingenuous-who or what establishes what the nature of things is, and by what warrant? I can imagine nothing that would upset our technological heroes more than pointing out that certain immutable planks of their platform share common cause with arguments from design, but there you have it. We most certainly don’t want to fall into that particular stew pot at this point. So let me just pose the question in a more localized fashion: who or what resolves the tautology which the new “progress” presents? How can we assess one way or another the technology of autonomous vehicles if we can’t even clearly state what they are, due to the circular logic just presented?

This may appear to be metaphysical hair-splitting, but it is not. Exactly this problem has vexed object-oriented programming since its inception. Trying to compare categories of things by their functions runs into often surprising dead ends, because subtype relations are not symmetric-A can be a type of B, which doesn’t necessarily mean B can be a type of A-and because not all functional objects are discrete and thus composable or decomposable. A bicycle is a vehicle with two wheels that rolls. If I add two bicycles, does that make a car-a vehicle that rolls with 4 wheels? And can I split a car into two bicycles? Even at the most intuitive level there exists an understanding that an object cannot be completely defined by just explicating what it does. Saying that a chicken is an animal that lays eggs is true, but it doesn’t actually tell you what a chicken is. The “is a” relationship, as it is known in object-oriented programming, depends on context and semantics, as does defining a chicken. What a chicken is is defined by a scientific and naturally derived taxonomical hierarchy (phylum). It is the only way to differentiate a chicken from a platypus-which also lays eggs-in a formal, consistent sense. Trying to differentiate a chicken from a platypus by a simple calculation of functional similarities or differences is of course ludicrous.
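The chicken/platypus point above can be made concrete in a few lines of Python. This is a minimal sketch-all the class names are illustrative inventions, not any real library-showing that a functional test ("lays eggs") cannot distinguish what a taxonomic "is a" test distinguishes immediately, and that "is a" is not a symmetric relation:

```python
# Hypothetical taxonomy: functional similarity vs. "is a" identity.

class Animal:
    pass

class Bird(Animal):
    def lay_egg(self):
        return "egg"

class Chicken(Bird):
    pass

class Monotreme(Animal):        # egg-laying mammals
    def lay_egg(self):
        return "egg"

class Platypus(Monotreme):
    pass

hen, platypus = Chicken(), Platypus()

# Both pass the *functional* test identically...
assert hen.lay_egg() == platypus.lay_egg() == "egg"

# ...but only the taxonomic ("is a") test tells them apart.
assert isinstance(hen, Bird)
assert not isinstance(platypus, Bird)

# And "is a" is not symmetric: every Chicken is a Bird,
# but a plain Bird is not a Chicken.
assert issubclass(Chicken, Bird)
assert not issubclass(Bird, Chicken)
```

The `isinstance`/`issubclass` checks are exactly the formal, consistent sense of differentiation the paragraph describes: membership in a declared hierarchy, not a tally of shared behaviors.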

This does not mean an artifact’s definition cannot be deduced by observation of its functionality. William Harvey did just that, revolutionizing medicine in the 17th century by elucidating how blood circulation in the human body actually worked-largely by figuring out what heart valves were actually for. The problems arise when you try to compare two objects which have some superficial similarity. Is a heart a kind of liver? In an emergency can a liver be substituted for a heart? Of course the answers are no-livers and hearts are distinct classes of organs-but how would you systematically and formally express that? And this is especially critical in the case of comparisons. If a donkey and a horse are derived in some way from a common source, it would be logical (but not foolproof) to assume that if I can ride a donkey, I can ride a horse-which may be an improvement. This is a primitive “technological assessment”. More sophisticated ones require more rigorous taxonomy than anecdote alone.

Technological artifacts also have a taxonomy-and here is where the trouble begins. For the most part technological taxonomy is anecdotal, which is still another way of saying something is true in and of itself. There are a vast number of ways to categorize the natural world-by domain, genotype, phenotype, etc. The classifications of technology appear to be derivative, unrigorous and in a great many cases purposely disingenuous. The NASA Technology Taxonomy is an example. Self-driving vehicles are often characterized as synonymous with “autonomous” vehicles, which the NASA taxonomy glosses as “…technologies that (in the context of robotics, spacecraft, or aircraft) enable the system to operate in a dynamic environment independent of external control.” But if we are to believe that this is the phylum of self-driving vehicles, it is, to say the least, unconvincing. “Autonomous” cars are not in any sense, well, autonomous. The vehicles are receiving positioning data (GPS), interferometric inputs from sophisticated satellite feeds (distinct from GPS) and probably data radiated/broadcast from other near-field nodes (other recently manufactured cars, i.e., V2V-Vehicle to Vehicle communication). It appears that “self-driving cars” are not the first stage of fully autonomous motor travel (jump in your car, punch in the destination and the car will wake you upon arrival), but the first stage of the direct opposite-centrally controlled virtual “Black Marias” which follow a trail of breadcrumbs sprinkled by others-a scenario as if someone had just miraculously discovered a heretofore unknown lost reel of Fritz Lang’s Metropolis. The more proper phylum would seem to be NASA’s Guidance, Navigation, and Control (GN&C).
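The classification argument above can be sketched mechanically: classify a vehicle by the inputs it actually consumes rather than by its marketing label. Everything here is a toy-the feed names and category strings are my own illustrative shorthand, not NASA’s actual schema:

```python
# Toy classifier: a system that depends on external positioning/control
# feeds belongs under guidance, navigation & control, whatever its label.
EXTERNAL_FEEDS = {"gps", "satellite_interferometry", "v2v"}

def classify(inputs: set) -> str:
    """Assign a 'phylum' by dependence on external control inputs."""
    if inputs & EXTERNAL_FEEDS:
        return "Guidance, Navigation & Control (GN&C)"
    return "Autonomous Systems"

# A "self-driving" car as described in the text: externally fed.
self_driving_car = {"gps", "satellite_interferometry", "v2v", "lidar"}
# A system operating with no external feed at all (hypothetical).
isolated_rover = {"lidar", "inertial"}

print(classify(self_driving_car))   # GN&C: not autonomous by this test
print(classify(isolated_rover))     # genuinely autonomous by this test
```

The point of the sketch is only that the test is operational, not nominal: the category follows from what the system receives, which is the kind of rigor the anecdotal taxonomy lacks.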

The point of this for our discussion is that, as mentioned, this “autonomous” business seems to be a deliberate ruse. The cybernetic-utopians are undoubtedly aware of the obvious sophistry of a great many of their technological claims, and saying a self-driving vehicle is a thing that drives itself is a final-cause tautology straight from Thomas Aquinas. One way to get out of this box is to borrow legitimacy from nature. Natural phyla are at root determined not by whim but by natural selection (and thus are mutable). Although mutable, at any one point in time a natural phylum and its members are the result of an inclusive set of selected attributes. This of course is the bugaboo which has haunted the lay acceptance of evolution. Evolution and natural selection are two distinct concepts-and their symbiosis is not particularly intuitive. Many early adherents of evolution used it naively or maliciously to establish concepts such as “survival of the fittest”-i.e., inevitable outcomes derived from intrinsic attributes.

This is the reading the Inevitabilists are flogging. Create a taxonomic phylum (“autonomous”, “smart”, whatever); place the bogus entry-self-driving cars in this case-in this category even if they don’t fit and are not, in fact, autonomous; then crow from the rooftops when, after vast hoards of treasure and time have been expended, the artifact (the self-driving vehicle) becomes a success-“naturally” displacing (like natural fauna) its sad-sack non-autonomous cousins. Even more important, when the “self-driving vehicles” transmogrify magically into Stolypin wagons, this is of course inevitable/teleological-as defined above-as well. The autonomous vehicle which was never actually autonomous “evolves” into the Stolypin wagon-and everybody knows “evolution” is inevitable. Technological assessment is unnecessary (whether autonomous cars are actually better than the sad sacks)-“natural selection” has given us the winner. However, the process of natural selection of the cybernetic-utopians is…wait for it…whether it is more “progressive” or not, which of course it is, since everything in the new phylum is…unprecedented. And the snake eating its own tail becomes the coat of arms of the Inevitabilists. Sophistry at its finest.

So after all of this, what is a self-driving car? A “self-driving car” is (or will be when the scam has proliferated to a point of no return) a portable (not autonomous) node in a physical sub-system of a real-time hive full-duplex control grid-which is not directly controlled from a central authority (read-compute-react-post), but by an emulation of that hive from data streams and sensors in real time, using heuristics manipulated by a supercomputer or an array of such. This is key and we will return to this (blatantly non-intuitive) point repeatedly down the road: the progenitors of this grid seem to have discovered that it is in fact faster and more fail-safe to substitute a control engine which in effect heuristically guesses (read-compute-post; “reacting” is superfluous) what to do from the emulation, instead of reacting in a feedback sense to the “real data”. The “real data” in this scenario could be “fired and forgotten”-the inputs (data) and outputs (guesses) are uncorrelated, and the latency of a response could be reduced by magnitudes (the supercomputing engine could “guess”, for example, that any further data would not enhance the effectiveness of the response, simply ignore it, and post whatever action it deems appropriate-it could anticipate entire processes by several iterations. We have essentially “feedback” without a loop, so to speak, if that can be so phrased). The real hive and its doppelganger will also probably be operated in parallel-in some cases it may be more efficient or logical to let the “real” network respond, and in others the heuristic doppelganger. This is what is generically called “artificial intelligence” but is actually just machine and deep learning with a few other information-theoretical bits garnished in for flavor.
None of this has anything really to do with the popular idea of computers turning into humans, i.e., what is known as artificial general intelligence-which has barely progressed at all since Alan Turing’s work in the 1950’s. What we have here is a simulacrum of artificial intelligence, which is more than good enough it seems to rake in that fat investor cash and those DOD cashier’s checks while keeping the real people running this show happy (forget them and you’ll be sorry). Or in a less prolix shorthand: “self-driving” cars are the winners, grasshopper, the winners. And in a world where those who make the winners are becoming remorselessly savage in the pursuit of their appalling agendas, you can either leave it at that or prepare for a fight. I am not sanguine about the outcome of the latter.
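The “feedback without a loop” idea above (read-compute-post) can be sketched as a controller that commits to several actions from a model’s guesses without ever re-reading the sensors in between. This is a deliberately toy linear predictor under my own assumptions-no claim about any deployed system:

```python
# Read-compute-post sketch: one sensor reading, then a run of
# pre-committed actions advanced against the *emulated* plant only.

def predictive_controller(model, state, horizon=3):
    """Post `horizon` actions using only the model's predictions,
    never waiting on real feedback between steps."""
    actions = []
    for _ in range(horizon):
        action = -0.5 * state          # simple proportional "guess"
        state = model(state, action)   # advance the emulation, not reality
        actions.append(action)
    return actions

# Toy plant model assumed learned offline: next_state = state + action.
model = lambda s, a: s + a

print(predictive_controller(model, state=8.0))   # [-4.0, -2.0, -1.0]
```

The inputs and outputs are uncorrelated in exactly the sense described: once the initial reading is taken, further “real data” could arrive and be ignored, since the three actions are already posted.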

And for those of you who think (again) that Mr. Johnson should put the cork back in that jug of corn squeezins he’s been nibbling off of, I will refer to the excellent work of technology writer Evgeny Morozov, particularly his book To Save Everything, Click Here. Although it is not transparently obvious that what I have just described and Morozov’s idea of “technological (specifically internet) Solutionism” amount to the same thing, I believe a close reading of his work and many of the things I have just stated will show the common threads. As a working academic, Morozov has to be less apocalyptic by several degrees than myself, but his points are salient-especially that his “Solutionism” is largely concerned with transforming the contingent into the inevitable. An excerpt from the mentioned book:

I call the ideology that legitimizes and sanctions such aspirations “solutionism.” I borrow this unabashedly pejorative term from the world of architecture and urban planning, where it has come to refer to an unhealthy preoccupation with sexy, monumental, and narrow-minded solutions—the kind of stuff that wows audiences at TED Conferences—to problems that are extremely complex, fluid, and contentious. These are the kinds of problems that, on careful examination, do not have to be defined in the singular and all-encompassing ways that “solutionists” have defined them; what’s contentious, then, is not their proposed solution but their very definition of the problem itself. Design theorist Michael Dobbins has it right: solutionism presumes rather than investigates the problems that it is trying to solve, reaching “for the answer before the questions have been fully asked.” How problems are composed matters every bit as much as how problems are resolved.

Is not the Inevitabilists’ two-headed hydra of a desiccated progress sans assessment and Rube Goldberg natural selection pretty much the same as ‘reaching “for the answer before the questions have been fully asked”’? What I think Morozov is not grasping (or has to pretend not to grasp for professional reasons) is that this “answer before the question” business is actually exactly what one would expect from the control system/theoretical framework I just described above-this is essentially what machine and deep learning are. The matter that Morozov and Mr. Dobbins above are not stressing is that the ideology they are criticizing is dictated by the technological framework (simulacrum of artificial intelligence) which is being implemented-“Solutionism” is not a philosophy or world view; it’s an artifact of this framework. And one of the main intents of “Solutionism” (and cybernetic-utopianism) is not only to obscure the symbiosis of the chicken and the egg-but to eliminate the chicken and egg altogether. Completely guaranteed outcomes. A seamless feedback loop/pseudo-loop (or something approximating this).


In relation to my comments about the emulated hive above, I would like to comment on this theory in relation to something I hear more and more knowledgeable commentators discussing: the so-called “end of the internet as we know it”. Ignoring the melodramatic bits, this is usually proposed as a natural progression developing from the de facto martial law declared worldwide over the “virus”. Portents of an internet lights-out can be seen clearly in the rantings of old-school eugenics Falangists like Klaus Schwab, so these fears are not without foundation. And when the boys from Brazil start talking up something as an existential threat (terrorism, bio-terrorism, etc.), you know they’ve already got a working group burning the midnight lamp to make it so.

Previously I have commented that the bottom line of the cybernetic-utopians could be expressed as the desire to completely obscure and erase the difference between the coercions of the cyber-state and the putative desires of the subjected: the oppressed will not be able to differentiate or articulate their own needs as being distinct from the hegemonic demands of their overlords-brave new world. In light of this, could this have been taking place already per our emulator? Note that we’re talking of emulation, not simulation. The emulated hive doppelganger is almost certainly a different beast both physically and logically-but is capable of imitating the IP (Internet Protocol) based packet-switched network like the creature from the old horror movie The Thing-a virtual machine on a worldwide scale. I also believe that the IP network is already in the process of being replaced by absorption (the “cloud”-essentially universal Network Address Translation (NAT) redirection, where it will be virtually impossible for the unprivileged to resolve their network endpoints to any physical address).

Instead of “blacking out” the internet, the emulator would merely substitute a copy of itself at some point in time. A snapshot. This would be transparent. The interesting thing to note is this: we analogically speak of the emulator as a virtual machine. I have intimated that this supercomputer/array of supercomputers probably has been in operation for some time. It probably has multiple snapshots of the entire internet stored-just as I can store a snapshot of the entire state of the computer I’m working on at any minute. The “end of the internet” would thus appear as essentially reverting to an archived copy-say, from 2 years ago. The plan may well be to trap those using the network in a perpetual “Groundhog Day”-with the “internet” being whimsically reset at will, or reset so subtly that it would be impossible to notice. The emulated copy of the “internet” offline is just a binary copy-an incredibly large one, but still a copy. Is this what those enormous subterranean data stores are really for? To store the virtual images of the internet?
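The snapshot-and-revert mechanics are the same ones any virtual machine user knows. A toy sketch, treating “the network” as nothing more than a key-value state store (names and structure are purely illustrative):

```python
# Snapshot/revert sketch: archive deep copies of state, edit the
# archived copy "offline", then restore it transparently.
import copy

class SnapshotStore:
    def __init__(self):
        self.state = {}
        self.snapshots = {}

    def snapshot(self, label):
        self.snapshots[label] = copy.deepcopy(self.state)

    def revert(self, label):
        self.state = copy.deepcopy(self.snapshots[label])

net = SnapshotStore()
net.state["page"] = "original content"
net.snapshot("2019")                               # archive everything

net.state["page"] = "inconvenient content"         # the "live" network
net.snapshots["2019"]["page"] = "edited content"   # edit the copy offline
net.revert("2019")                                 # "outage"...and restore

print(net.state["page"])   # edited content
```

The `deepcopy` matters: the archive must be fully independent of the live state, exactly as a VM image is independent of the running machine-otherwise edits to one would leak into the other.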

[Note: I began to think about this emulator from some things out of my own past. I worked with several engineers back in the day who worked on the Hubble Space Telescope. They all told me that there were/are actually two Hubbles. One, the well-known scientific observation unit, and another twin secret lens whose purpose can be guessed. If you remember, the Hubble telescope was supposed to have been manufactured with a flaw that required a repair expedition using the Space Shuttle. My colleagues told me this was a ruse and a cover for the launch of the second telescope.

I essentially see the “App” and the “Cloud” as impostures of this sort. It should also be kept in mind that any intelligence technology whose existence and capabilities are “leaked” to the public is at least two or three generations behind what actually exists and has been deployed. The “leaks” are misdirection to spread FUD (fear, uncertainty and doubt). Edward Snowden is essentially a FUD merchant using a true narrative (incontinent surveillance) to sell a bogus solution-use encryption to fight it, despite 99.9% of said encryption being created by the NSA/GCHQ. His real assignment: get people to use encryption that the NSA/GCHQ can easily compromise (since they wrote it) and not develop any of their own, as well as stupefying the public as to the real capabilities of these organizations, which are probably closer to what I’m describing. His revelations about PRISM and ECHELON almost certainly mean these programs are close to being decommissioned or deprecated, if they haven’t been already, and replaced by vastly superior ones. I also know personally that the NSA was capable of, and actually doing, the things Snowden has “revealed” in the early 80’s-as does anyone who worked at Fort Meade at that time.

In relation to our topic, simulacrum-of-AI communication networks have been used by the military for years (Self-Organizing/Self-Optimizing Networks (SON) being probably the best known; these are commercially available, as are Software Defined Networks (SDN)), mostly in aircraft and satellite applications which have been demonstrated to the public. We see this recurring repeatedly: advanced technological artifacts are developed and funded by the DOD; stripped-down and defanged versions are then “released” by commercial concerns who reap the benefits, while the “weaponized” versions remain behind the firewall (Alexa, developed by DARPA and gifted to Google, is a prime example). This means almost certainly that several generations or iterations of these products have already gone by behind the DOD firewall, with the latest possibly fully capable of doing what I’m describing. These sorts of tricksterizations also have a long-term, well-planned befuddling effect on the public-they essentially program this public as to what is possible and what is not. The pièce de résistance of this was, of course, 9/11. The alternative explanations for this event proposed by engineers and well-connected intelligence veterans have been laughed off for 20 years, mainly due to this sort of programming. This is despite everything they propose being well within the capabilities of a well-funded, strategically positioned and ruthless perpetrator contemporary with those events.]

Being just a copy, the images could be edited “offline”. To excise or implant any information or content, or to completely “disappear” malcontents and knuckleheads-simply edit the snapshot, bring down the network (outage, meteor strike, cosmic rays, sunspots, something…) and bring up the edited “internet”. I won’t go into why, but if this scenario is correct, the complete “reboot” would be required only once-subsequent “reboots” would just be in-process memory overlays, which high-end network computers you can buy yourself are capable of now. Also note, extending our analogy of virtual machines, that multiple virtual machines may be run on a single host (the emulator), just as I’m running 3 different ones on the computer I’m using at the moment. The hive emulator could theoretically “localize” or “specialize” its “snapshots” in real time to be unique for different regions. This could be done the same way I share the internet connection on my computer with the “guests” (the virtual machines): by way of a network bridge (or NAT). The “guests” have no knowledge of the host and believe themselves to be operating as if they were hosts. Thus, the master host-our hive super array-could bridge any number of “internets” to different parts of the world. The virtual internets would not be aware they are running inside the master host, and each would envision itself the one and only “internet”. Note that VPNs (virtual private networks) would get you nowhere, since a VPN would merely be a virtualized network inside a virtualized network. The entire IP packet-switched network that you would see inside the virtual environment would actually be merely a localized fishbowl with a dark greasy towel thrown over it, routed to the master host (the super array) by a super-scaled network bridge or cloaked NAT. Remove the towel and the faces of the cybernetic-utopians would be peering through the glass at you.
Notice how this “coincidentally” segues into the “cloud”-a universal NAT (Network Address Translation) jail.
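The bridged-“internets” idea above can likewise be reduced to a few lines. In this sketch one master host answers every request from a regionally specialized snapshot through a NAT-like mapping; the guest object has no reference to anything outside its own fishbowl. Every class and name here is an illustrative assumption of mine.

```python
# Toy sketch: one master host serving several "virtual internets",
# each localized per region, each unaware of the host or its peers.

class MasterHost:
    def __init__(self):
        # one specialized snapshot per region
        self.regional_snapshots = {
            "region-a": {"news.example": "version A"},
            "region-b": {"news.example": "version B"},
        }

    def resolve(self, region, url):
        # the NAT-like bridge: requests are answered only from the
        # requesting guest's own snapshot
        return self.regional_snapshots[region].get(url, "404")

class GuestInternet:
    """What a user inside one virtual internet can observe."""
    def __init__(self, host, region):
        self._host, self._region = host, region

    def fetch(self, url):
        return self._host.resolve(self._region, url)

host = MasterHost()
a = GuestInternet(host, "region-a")
b = GuestInternet(host, "region-b")
# the same URL yields a different reality in each fishbowl:
print(a.fetch("news.example"))  # -> "version A"
print(b.fetch("news.example"))  # -> "version B"
```

Nothing in `GuestInternet`’s interface exposes the existence of the host or the other region-which is precisely the towel over the fishbowl.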

This would also explain the new wave of “internet” censorship from an entirely new angle: the forces of reaction may be preparing a snapshot for future use in the way I have just outlined; a template to be used to clone future internets and to restore ones which have become corrupted-by malcontents and knuckleheads-to a pristine state. “Firewalls” would be obviated and made superfluous-unlike the present internet, the virtual emulated internet you happen to be in would be bounded: everything about it would be known before it became operable. It could therefore be maintained at steady state (no malcontents or knuckleheads) by minimal maintenance or by shortening the reset interval. The “Groundhog Day” analogy appears again. Simply resetting the emulator, say, every 5 minutes would place anyone using it into effective suspended animation. Even if you were inside the emulator and realized this, you would have only 5 minutes to do anything that would not be erased, or to perform any transaction that would not be rolled back. And whatever you did, you would have only 5 minutes to enjoy it. 5 minutes would thus become the “window” of reality-not your lifetime or eternity. Even time would start to be measured by this interval (since the coronavirus insanity has limited all your interactions with the world to digitally mediated interfaces). Some system would be devised to keep track of the resets-“I am 200 recycles old”. The best and the brightest would be anyone who could make anything persist longer than the interval-but how? If all of this sounds like a Twilight Zone episode-don’t blame me, I just work here. All jokes aside, I’m just trying to communicate how bad things could get-and very soon.
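The reset-interval scheme sketches just as easily. Here any change a user makes survives only until the next rollback to the pristine template, and a counter tracks the “recycles”. The interval is simulated by ticks rather than wall-clock minutes; all names are my own illustrative assumptions.

```python
# Toy sketch: a "Groundhog Day" emulator that rolls every transaction
# back to a pristine snapshot at a fixed interval.
import copy

class GroundhogEmulator:
    def __init__(self, state, interval):
        self.baseline = copy.deepcopy(state)  # the pristine template
        self.state = state
        self.interval = interval              # ticks between resets
        self.ticks = 0
        self.recycles = 0                     # "I am N recycles old"

    def tick(self):
        self.ticks += 1
        if self.ticks % self.interval == 0:
            # erase everything done inside the window
            self.state = copy.deepcopy(self.baseline)
            self.recycles += 1

em = GroundhogEmulator({"posts": []}, interval=5)
em.state["posts"].append("malcontent manifesto")  # a user's change
for _ in range(5):
    em.tick()                                     # the window closes
print(em.state["posts"], em.recycles)  # -> [] 1
```

Note that “maintenance” here costs nothing: the steady state is enforced not by policing the content but by the mere passage of the interval.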

Finally, wouldn’t this obviate and obsolete the methods used to expand the very network I’ve been raving about since the first paragraph of this effort? Yes. And this is as it should be. The metastasis of the hive by hardware and software syncretism has, I think, always been a transitional stage (just as I’ve mentioned that cell phones are a transitional stage leading to human beings themselves becoming the hive nodes (trans-humanism))-in engineering you can only work with what you’ve got. Since the utopians are going to be using a snapshot, it is only logical that they had to wait until the snapshot contained everything they wanted-including the control grid protocol, which is only now being completed. This gives even more weight to the template suggestion above. Now the new versions of the grid can be “released” in situ. No more backdoor gremlins and rootkits. And one more thing: I asked above whether my emulator could be the cybernetic-utopians’ knockout punch to elide their coercions and compulsions into our preferences and desires seamlessly. In light of what I’m proposing with the hive emulator, doesn’t this agree eerily with what I see as the real strategy behind the ersatz pandemic, with its lockdowns and isolation? These compulsions seem to serve only one purpose: that the entire population will have only a single path to communicate with the outside world and each other-the hive grid and its personal representative, the “internet”. An “internet” which may well be a virtual machine whose veracity, origin and essence will be completely opaque to us all, and which will offer only guaranteed outcomes (everything that is sold, you will be compelled to buy, because you can no longer differentiate what you want from what they want). We will all be reduced essentially to a child-like state as seen by a parent: what the parent demands, the child is made to want by socially engineered coercion, i.e., discipline (stay six feet apart and smile with your eyes).
A state which a great many of my countrymen have embraced with unbridled enthusiasm; a state which, if it becomes pedestrian, will mark the real triumph of the cybernetic-utopians and pretty much the end of civil society as it has been known since what Johan Huizinga called “The Waning of the Middle Ages”. When I read that book in college a lifetime ago, who would have thought I might live to see those ages return? This is not bombast. I have consistently used concepts such as “extinction event” and “Ice9” as possible outcomes of the triumph of the cybernetic-utopians. And I have done so without irony. As should you.

What really differentiates my views as an engineer from those of other critics is that they see the current hijinks and exploits of the cybernetic-utopians as the “end” of something-“the internet as we know it”, “net neutrality”, “privacy”, etc. What I see is only the very early stages of the beginning. In engineering terms, the hive development and beta phase has ended and we are now in integration testing-the stage before final product release. The hive grid is actually in its infancy (when I speak of the hive being “almost finished” I mean its birth, not its future; the technologies which compose it are, however, quite advanced in my opinion) and is making halting baby steps. It only appears to be a golem now because most observers are thinking “It can’t get any more serious than this”, owing to their technical grasp being limited. They’re wrong. The very, very worst is yet to come. The technologies in question are probably in their second or third development iterations, but they have not yet been fully deployed within the hive-and look at the situation we’re in already. And I do think it takes someone with an engineering background to see this clearly. Many of my assertions here are only what an engineering team would do if told to perform a technological assessment of some artifact or paradigm-they would walk it through to its logical conclusion. Some of the conclusions would be more improbable than others, but none would be illogical. And some would be unthinkable-except by engineers, who are trained specifically to “see where things are going” in a rigorous manner. And if engineers aren’t sounding the alarms-which they’re not-then the first line of defense has already fallen.

But all that is impossible, Johnson. No. I know my business if nothing else. Everything I’ve mentioned is completely technically possible-even at the scale we’re talking about. These people have stolen trillions; they have unlimited resources, no oversight and nothing but time. This is a blog about seeing the world as a distributed software engineer would see it, without artifice. That’s what I see. And that’s the way I would do it if I were Dr. Evil. Most people under the age of 35 are no longer neurologically capable of viscerally differentiating “reality” from video, due to immersive habituation since childhood. Would this demographic even notice the modifications through their centrifugal bumble puppies (cell phones)-or even care?

I will try to slip into future posts, as often as possible, some technical information that may convince doubters that what I have just outlined is not only not fantasy, but may already be some much-greater-than-zero percentage of fact as we speak.

Addendum [3/15/2021]: One possible catastrophic scenario for hoaxing the network-the sleight of hand I hint at above-is outlined here.

Addendum [4/7/2021]: I came across an article dealing with the same subjects as above in a fashion that at first glance seems orthogonal to my own. The unexpected location of the artifact was a blog which presents original papers in “social theory”, an analytical field that I haven’t found a whole lot of use for up to the present. The article, however, is not saturated with the usual metaphysical mumbo jumbo typical of the genre (Derrida, Foucault, et al.) and actually revealed to me that many people, approaching from a wide range of directions, are reaching the same conclusions as I have about the meaning and import of the technological narrative currently in vogue.

The article, No Future! Cybernetics and the Genealogy of Time Governance by Ceci Nelson, approaches what we have called inevitablism as a similar process formulated as “defuturization”:

…the sciences now concern themselves with the attempt to control a menacing and essentially open-ended future. Despite the evident failures of the linear conception of time, rather than calling our teleological beliefs into question, postwar epistemology spliced and shortened the duration of time into apparently measurable steps. In order to plan and decide, diversity and complexity had to be reduced to factors that could be controlled here and now; the focus consequently shifted away from reason, towards regulation and feedback control. The war[World War 2-editor] created new opportunities for those aspiring to actively reshape society. Scientists and engineers volunteered to help by reducing uncertainties, effectively policing the future through a technique the German systems theorist Niklas Luhmann would refer to as defuturization.

Our discussion of the “self-driving” car above resonates here. The automated vehicle triumphs not by offering features different from any present vehicle (a self-driving car is still just a car) but by reducing its capabilities to a cybernetic subset of possible outcomes (uncertainties). The future has been rotated 180 degrees: it does not produce more or better, but less and certain. The artifacts of cybernetic utopia aren’t “better” as that term was understood when I was a child, but merely more repeatable or predictable-which seems to be all the nasty version of the future offered by the utopians can offer us. In light of the latter-this:

In their hopes of stabilizing power structures and the economy, rulers rely on such methods of calculation to eliminate undesirable possible futures. The goals they set for the future operate like a feedback loop, meaning that in the final analysis, political prognoses are not deemed true because they are correct, but rather because massive resources are deployed in the service of making them correct[italics mine-editor]. Thus, aren’t prognoses merely rhetorically-clad formulations of a pre-existing goal designed to steer politics toward a desired future?

And as if to stress that the end game of all this is as hegemonic and sinister as we have outlined, Nelson gives us a balefully prescient caveat from John von Neumann, patron saint of the cybernetic-utopians:

John von Neumann illustrates the cybernetic interpretation of the problem concerning nonlinear systems by the maxim, “all stable processes we shall predict; all unstable processes we shall control.” The theory of self-regulated systems no longer aims to determine the course of action and eliminate possibilities for change. Instead, these possibilities are included in the system itself, and only technocratic authorities on a higher level are able (or “legitimized”) to observe and regulate them[italics mine-editor].

Well, it seems I’ve squeezed Moore’s Law dry-but if not me, then who? The finale ensues.