“A reluctance to grapple with the often grim reality of an ongoing geopolitical struggle for power poses its own danger. Our adversaries will not pause to indulge in theatrical debates about the merits of developing technologies with critical military and national security applications. They will proceed.
This is an arms race of a different kind, and it has begun.”
– Alex Karp, Palantir CEO
These were the recent words of Palantir CEO Alex Karp, proclaiming in the New York Times that the world has entered a new era of warfare with the rapid acceleration of Artificial Intelligence (AI) technologies. Playing on the recent release of the “Oppenheimer” movie by comparing the dawn of AI with the development of the atomic bomb, Karp argued that the growing role of AI in weapons systems has become “our Oppenheimer moment.”
In his op-ed, Karp states bluntly that this era is a new kind of arms race where inaction equals defeat, positing that a “more intimate collaboration between the state and the technology sector, and a closer alignment of vision between the two” is required if the West is to maintain a long-term edge over its adversaries.
Karp’s words are timely within the context of the ongoing conflict in Ukraine, which – from the beginning – has been a tech-fueled war, as well as a catalyst for further blurring the lines between nation states and the companies that own and operate such technologies. From Microsoft “literally mov[ing] the government and much of the country of Ukraine from on-premises servers to [its] cloud,” to Boston Dynamics’ robot dog, Spot, sweeping mines on the battlefield, as I recently reported for Unlimited Hangout, “much of Ukraine’s war effort, save for the actual dying, has been usurped by the private sector.”
But, as Karp’s words suggest, the longer the conflict goes on, the more technologically advanced the weapons, and the weapons operating systems and software behind them, will become. Indeed, the US Military is testing Large-Language Models’ (LLMs) capacity to perform military tasks and exercises, including completing information requests that once took days in mere minutes, as well as extensive crisis response planning. Ukrainian Minister of Digital Transformation Mykhailo Fedorov, who commands Ukraine’s “Army of Drones” program in a made-for-film collaboration with Star Wars actor Mark Hamill, even recently proclaimed that the proliferation of fully autonomous, lethal drones is “a logical and inevitable next step” in warfare and weapons development.
Indeed, AI tech and other major technologies are coming to the forefront of the war’s front lines. For instance, “kamikaze” naval drones equipped with explosives dealt heavy damage to the Crimean bridge in July, with the Washington Post also reporting that over 200 Ukrainian companies involved in drone production are working with Ukrainian military units to “tweak and augment drones to improve their ability to kill and spy on the enemy.”
As the conflict continues, corporations and controversial defense contractors, like data firm and effective CIA front Palantir, Anduril, and facial recognition service Clearview AI, are taking advantage of the war to develop controversial AI-driven weapons systems and facial recognition technologies, perhaps transforming both warfare and AI forever.
Critically, these organizations all receive support from PayPal co-founder and early Facebook investor Peter Thiel, a prominent yet controversial venture capitalist intimately involved in the founding and expansion of a bevy of today’s prominent tech corporations and adjacent organizations. Despite Thiel’s professed libertarian political beliefs, these groups’ work, often co-developed or otherwise advanced by governments and the intelligence community, includes bolstering the State’s mass surveillance, data-collection, and data-synthesis capacities.
As such, these Thiel-backed groups’ involvement in the war serves not only to develop problematic and unpredictable weapons technologies and systems, but also, apparently, to advance and further interconnect a larger surveillance apparatus formed by the collective efforts of Thiel and his elite allies across the public and private sectors, efforts that arguably amount to the entrenchment of a growing technocratic panopticon aimed at capturing public and private life. Within the context of Thiel’s growing domination over large swaths of the tech industry, his apparent efforts to influence, bypass or otherwise undermine modern policymaking processes, and his anti-democratic sentiments, Thiel-linked organizations’ activities in Ukraine can only signal a willingness to shape the course of current events and the affairs of sovereign nations alike. They also herald the unsettling possibility that this tech, currently being honed on Ukraine’s battlefields, will later be implemented domestically.
In other words, a high-stakes conflict, where victory comes before ethical considerations, facilitates the perfect opportunity for Silicon Valley and the larger US military industrial complex to publicly deepen their relationship and strive towards shared goals in wartime and beyond.
The Thiel Connection
Importantly, what links Anduril, Palantir, and Clearview AI, the corporations spotlit in this piece, is their common support from Peter Thiel. Indeed, Thiel has had long-term relationships with the companies that are the focus of this article, and with the tech start-up industry in general. Thiel, a Palantir co-founder, and Palantir CEO Alex Karp, for example, attended law school together at Stanford University before co-founding Palantir in 2004. Further, Thiel invested $200,000 in Clearview AI, then called Smartcheckr, in 2017, making him Clearview AI’s first major financial backer. And Anduril founder Palmer Luckey says he was “19 years old, maybe 20” when he met Peter Thiel, one of the first investors in Luckey’s Oculus virtual reality headset company, which Luckey later sold to Facebook for almost $3 billion in 2014. Luckey then founded Anduril in 2017, which Thiel also supports through his venture capital firm Founders Fund. The fund has participated in several funding rounds that have altogether helped boost Anduril’s valuation to around $8.5 billion as of late 2022.
Frequent investments made by Thiel’s venture capital firm Founders Fund, which describes itself as a “venture capital firm investing in companies building revolutionary technologies,” and even the Thiel Fellowship, which gives $100,000 to elite university students to drop out of school and create tech start-ups, elucidate Thiel’s affinity for fiscal dominance of, and functional influence over, the tech start-up space now and in the future. Notably, Thiel has funded or otherwise facilitated the rise of many of today’s most prominent corporations through Founders Fund, including LinkedIn, Yelp, Airbnb, and Elon Musk’s SpaceX.
What’s more, Thiel’s funding efforts signal an interest in developing expansive surveillance technologies, especially in the name of combatting “pre-crime” through “predictive policing”-style surveillance. As an example, Thiel has provided significant funds to Israeli intelligence-linked startup Carbyne911 (as did Jeffrey Epstein), which develops call-handling and call-identification capacities for emergency services and has been specifically marketed as a way to detect and stop prospective mass shooters in the US. As UH contributor Jeremy Loffredo and UH founder Whitney Webb reported in late 2020, Carbyne911’s prospects for mass surveillance are considerable: the service has a “predictive-policing component that is eerily similar [to] Palantir’s,” even obtaining “complete surveillance” over New Orleans’ emergency-services system and its users in a deal with the city. In fact, as Loffredo and Webb note, as Palantir’s predictive-policing programs have come under public scrutiny in the US, its services have simply been “baton-passed” to other Thiel-backed groups, with Carbyne911’s 911 and even COVID-response services effectively carrying out what Palantir had been doing previously in New Orleans and beyond. The episode showcases the “networked” nature of Thiel-linked groups, which are apparently able to seamlessly take over one another’s initiatives.
As I will expand upon later, Thiel also assisted in the development, and the subsequent privatized spinoffs, of the US government’s Defense Advanced Research Projects Agency (DARPA) Total Information Awareness project, which had aimed to create an “all-seeing” military surveillance apparatus in the wake of 9/11.
Altogether, Thiel’s massive investments in Silicon Valley, the defense industry, and other intelligence- and government-adjacent projects showcase a chronic interest in influencing the future of tech innovation, especially in the areas of data collection and surveillance, and bolstering intelligence capacities and presence in and across daily life.
And, as we shall see, the extensive and high-tech wartime activities of Thiel-linked Clearview AI, Palantir, and Anduril are now advancing Thiel’s long-term goals on Ukraine’s battlefields.
Clearview AI’s Redemption on the Battlefield
Controversial facial recognition service Clearview AI has been banned from corporate use in much of the US due to widespread privacy concerns. Now, the Thiel-backed organization is providing its services to Ukraine’s war effort, free of charge.
At the time of writing, Clearview AI’s capacity to recognize human faces is quickly approaching an unsettling new milestone. A 2020 New York Times article elucidated that Clearview AI’s “tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.” According to the Washington Post, Clearview AI told investors in 2022 that “almost everyone in the world will be identifiable” through its system in 2023, with about 14 photos collected per person on earth.
Since the start of the conflict, Clearview AI has been used to identify Russian spies and military personnel in Ukraine, combat “misinformation,” and identify the deceased. As Clearview AI’s website explains, the company “has amassed over 2 billion images from public images on the Russian social media site, VKontakte, and [has been] immediately useful in identifying potential Russian soldiers and infiltrators at checkpoints.”
Naturally, the Ukraine war has also provided perfect PR for Clearview AI, which in fact wrote to Ukraine directly to offer its services in March 2022, not long after the onset of Russia’s Special Military Operation. Clearview AI’s subsequent operations in the country have scored the company high-level news segments on NBC and CNN. In one such segment, Ukraine’s Minister of Digital Transformation, Mykhailo Fedorov, boasted on CNN about Clearview AI’s facial recognition services and, more gruesomely, about how Russian families would be sent photos of their deceased relatives’ bodies after Clearview AI’s tech identified those killed.
The CNN report on Clearview AI’s wartime efforts elucidates its use of social media to obtain information about individuals or faces in question, reporting that “they upload a picture of a face [to Clearview AI]; the technology scrubs all the social networks really fast,” revealing social media’s infrastructure as a boon for law enforcement, intelligence agencies, and adjacent powers looking for information about members of the general public.
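For readers unfamiliar with how such reverse face searches work in general terms, the underlying technique is typically embedding-based retrieval: each scraped photo is converted into a numerical vector, and a query face is ranked against the stored vectors by similarity. The sketch below is purely illustrative and assumes a hypothetical embed_face() placeholder standing in for a proprietary face-recognition model; it is not Clearview AI’s actual code, model, or API.

```python
# Minimal, illustrative sketch of embedding-based face search.
# embed_face() is a hypothetical stand-in for a proprietary face-recognition
# model that maps a photo to a fixed-length vector; real systems use their own
# undisclosed models and far larger, indexed databases.
import numpy as np


def embed_face(image_pixels: np.ndarray) -> np.ndarray:
    """Placeholder: deterministically map an image to a unit-length embedding."""
    seed = abs(hash(image_pixels.tobytes())) % (2**32)
    vec = np.random.default_rng(seed).normal(size=128)  # 128-dim embedding
    return vec / np.linalg.norm(vec)  # unit length, so cosine similarity = dot product


# A toy "scraped" database: profile URLs mapped to precomputed embeddings.
database = {
    "https://example-social-site.test/profile/123": embed_face(np.ones((8, 8))),
    "https://example-social-site.test/profile/456": embed_face(np.zeros((8, 8))),
}


def search(query_image: np.ndarray, top_k: int = 5):
    """Rank stored profiles by cosine similarity to the query face."""
    q = embed_face(query_image)
    scores = {url: float(np.dot(q, emb)) for url, emb in database.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]


if __name__ == "__main__":
    for url, score in search(np.ones((8, 8))):
        print(f"{score:+.3f}  {url}")
```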
When asked in an NBC interview about Clearview AI’s possible negative ramifications for society, the company’s CEO, Hoan Ton-That, said “[a] lot of people’s minds on facial recognition technology were changed around Jan. 6th, when the insurrection happened [at the United States Capitol Building]. It was very instrumental in being able to make identifications quickly.”
Indeed, Clearview AI saw a 26 percent uptick in the use of its services by law enforcement after January 6th, 2021. As per a December 2022 tweet by Mykhailo Fedorov, “over 900 people from 7 Ukrainian government agencies used Clearview AI conducting over 125,000 searches. The newest global tech & our bravery — the win[ning] strategy for Ukraine.”
In short, while surveillance and privacy concerns seem to have dampened Clearview AI’s success in the private sector, the company has made the most of wartime to ensure both its long-term success and the further normalization of facial recognition technologies, risking the larger population’s privacy in the process.
Palantir’s Software Gives Ukraine a War-Time Boost
Secretive data mining firm Palantir, which Bloomberg declared in 2018 as “know[ing] everything about you,” has collaborated with the national security state on controversial projects since its inception, serving supporting roles in US military operations in Iraq and Afghanistan, as well as in Operation Warp Speed. In the case of the latter, as covered by Unlimited Hangout contributor Jeremy Loffredo and founder Whitney Webb in late 2020, Palantir helped facilitate COVID-19 vaccine development and roll-out in the United States.
Ultimately, there are sound reasons to suspect that Palantir, which Thiel co-founded, is a CIA front. As Loffredo and Webb explain: “Palantir was created to be the privatized panopticon of the national-security state, the newest rebranding of the big data approach of intelligence agencies to quash dissent and instill obedience in the population.” As noted elsewhere in that report, Palantir was created by Thiel and Karp with CIA-linked funds and with the clear intention to resurrect the military’s Total Information Awareness (TIA) project after TIA was scuttled due to public pushback over its goal to essentially eliminate constitutional guarantees of privacy for all American citizens.
In recent years, Palantir has continued to secure a number of defense contracts. As noted in previous UH reports, these included an $800 million contract with the US Army for an AI-powered “battlefield intelligence system,” a $91 million contract to develop the US Army Research Laboratory’s AI and machine learning capabilities, and Project Maven, a US Department of Defense AI-powered imagery project for improving drone footage and striking capabilities.
This bevy of US military contracts suggests Palantir’s apparent hyper-involvement in the war in Ukraine is par for the course. After all, a quick look at the company’s wartime actions shows heavy involvement in a range of Ukrainian civil affairs. Like BlackRock and other elite groups, Palantir is slated to assist with Ukraine’s reconstruction efforts, where it appears likely to contribute to the political class’s efforts to remold the war-torn country to its own ends.
According to its LinkedIn page, Palantir has also signed the UK Foreign, Commonwealth and Development Office’s Ukraine Business Compact. Palantir has “documented war crimes,” and its Foundry platform, which Palantir describes as an “Ontology-Powered Operating System for the Modern Enterprise,” has helped find homes for about 100,000 Ukrainian refugees in the UK. Collaborating with the Prosecutor General’s Office of Ukraine (OPG), Palantir is slated to assist the prosecution of Russian war crimes, using its software, satellite imagery, open source intelligence and other data collection processes to build a “map” of relevant evidence and even establish attribution for crimes and other relevant activities, which investigators can then utilize as needed.
And on the battlefield, Palantir looks to do some heavy lifting with its programming, data collection and synthesis capacities. For starters, the company admitted in early 2023 that it was assisting Ukraine with military targeting efforts (i.e., selecting battlefield targets to strike or otherwise neutralize). Namely, MetaConstellation, a satellite-powered tool that can extract and synthesize the commercial data available about any given area, has given Ukraine “targeting [capabilities] with like a factor of 20 better” than its prior capacities, according to CEO Alex Karp. An Asahi Shimbun article further details Palantir’s data-related involvement in the Ukraine war, explaining that its software can “schedule image collection from hundreds of satellites orbiting the Earth to deliver critical information to decision makers,” enabling Palantir’s clients to track military movements for time-sensitive analysis and decision-making purposes alike.
Palantir is forthcoming about its efforts towards its Artificial Intelligence Platform for Defense (AIP), an AI platform that can utilize Large-Language Models (LLMs), like OpenAI’s Generative Pre-trained Transformer 4 (GPT-4), to help fight wars, picking targets and suggesting and executing actions based on the platform’s data-based calculations and proposals and the human “operator’s” permissions.
Certainly, AIP’s capacity for danger is clear from the start. VICE contributor Matthew Gault offers perhaps the most succinct assessment of the platform, writing that “Palantir’s [AIP for Defense] pitch is, of course, incredibly dangerous and weird.” Gault also notes that the LLMs that would help plan and execute an operator’s desired actions are, at least at the time of writing, prone to “hallucinating” (i.e., presenting false information as if it were real), which could be especially dangerous in conflict-related scenarios. Furthermore, “[w]hile there is a ‘human in the loop’ in the AIP demo, they seem to do little more than ask the chatbot what to do and then approve its actions.” He notes that “[d]rone warfare has already abstracted warfare, making it easier for people to kill [at] vast distances with the push of a button,” a dangerous state of affairs considering that, “[i]n Palantir’s vision of the military’s future, more systems would be automated and abstracted.”
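To make the structure of such a “human in the loop” gate concrete, the sketch below shows, in generic terms, why critics argue the arrangement can amount to rubber-stamping: a model proposes an action and the only safeguard is a yes/no prompt to the operator. This is a hypothetical illustration, not Palantir’s AIP; propose_action() is an invented placeholder for any model (an LLM, for instance) that turns intelligence data into a suggested course of action.

```python
# Generic, hypothetical sketch of a "human in the loop" approval gate.
# This is NOT Palantir's AIP; propose_action() stands in for any model
# (e.g., an LLM) that converts intelligence data into a suggested action.
from dataclasses import dataclass


@dataclass
class ProposedAction:
    description: str
    confidence: float  # the model's self-reported confidence, which may be miscalibrated


def propose_action(intel_summary: str) -> ProposedAction:
    """Placeholder for a model that suggests a course of action from raw intel."""
    return ProposedAction(
        description=f"Recommended response to: {intel_summary}",
        confidence=0.62,
    )


def human_in_the_loop(intel_summary: str) -> bool:
    """The entire 'safeguard': show the proposal and ask the operator to approve."""
    proposal = propose_action(intel_summary)
    print(f"Model proposes: {proposal.description} (confidence {proposal.confidence:.0%})")
    approved = input("Approve? [y/N] ").strip().lower() == "y"
    # Nothing here checks the proposal against ground truth, so a "hallucinated"
    # premise passes straight through if the operator simply approves it.
    return approved


if __name__ == "__main__":
    human_in_the_loop("unidentified vehicle column detected in sector 4")
```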
In short, Palantir’s involvement in Ukraine spans both local and international politics, and the company is using the front lines of NATO’s latest proxy war to develop and expand its AI technologies’ lethal and surveillance capabilities. And if the conflict were to evolve from a proxy war into a broader regional or world war, it appears Palantir stands at the ready with a variety of lethal, AI-backed operating platforms and tools.
Anduril’s New Operating System — For War
Like its fellow Thiel-linked counterparts Clearview AI and Palantir, another secretive defense contractor, Anduril, is also utilizing the Ukraine war to roll out new technologies. Founded by serial Hawaiian shirt wearer and Oculus Virtual Reality founder Palmer Luckey, Anduril claims its competitive advantage as a defense contractor is that, instead of receiving clients’ funding to research and develop new technologies upfront, it develops its products at its own financial risk before selling the new tech. The company came under fire soon after its establishment due to its government contracts to develop and build surveillance systems at the US-Mexico border, where Anduril’s autonomous towers “can detect a human from 2.8km away.”
In recent months, Ukraine has been given Altius-600 drones from Anduril’s recent Altius series, which boast an extended loitering time and can carry heavy munitions. Anduril posits that the series, and especially the Altius-600 model, allows for a new dimension of drone warfare, where drones have demonstrated an ability to carry out extended surveillance, “autonomous coordinated strike[s], target recognition and collaborative teaming.”
Notably, Anduril and Palantir are currently collaborating on a new product — an AI-powered ground station — designed for the US Army’s Tactical Intelligence Targeting Access Node (TITAN) prototype effort. TITAN, a “tactical ground station that finds and tracks threats to support long-range precision targeting,” can, according to FedScoop, “integrate various types of data from numerous platforms to help commanders make sense of an increasingly dynamic and complex battlefield.” In 2022, Palantir received $36 million from the US Army for TITAN system prototypes, as did Raytheon Technologies, to develop competing candidate systems; Anduril is to assist with Palantir’s second phase of prototype development. The US Army plans to allocate $1.5 billion towards TITAN systems over the next five years in a larger bid to revamp its military capacities and technologies.
In early May, Anduril also unveiled its “Lattice for Mission Autonomy” technology, which it describes as “a hardware-agnostic, end-to-end software platform that enables teams of diverse robotic assets to work together under human supervision to dynamically perform complex missions in any domain.”
In a Twitter announcement, Anduril described the AI-powered Lattice as a “fundamental paradigm shift in how we conduct military operations.” According to Anduril Industries Co-Founder and Chairman Trae Stephens, Lattice, the operating system present in everything Anduril now builds, “does all of the taking [of data], the sensors, fusing that data and then helping the system make decisions about that data with a human kind of guiding that interaction over time.”
Anduril’s claims of a “paradigm shift” with Lattice cannot be dismissed: the software can allegedly help human operators command hundreds of autonomous drones for a variety of missions and needs, which Anduril says will help clients bypass previous financial barriers to commanding such sizable forces.
While Luckey is emphatic that Anduril’s products will always have a “human in the loop” as a safeguard, researcher Julia Scott-Stevenson notes in The Conversation that Luckey’s own depictions of the future do not match such promises. Luckey stated at a 2022 talk in Australia that “You’re going to see much larger numbers of systems [in future conflicts] … you can’t have, let’s say, billions of robots that are all acting together, if they all have to be individually piloted directly by a person, it’s just not going to work, so autonomy is going to be critical for that.”
In other words, Luckey is proposing the development of lethal, autonomous weapons systems that could eventually operate at a scale (“billions of robots”) at which human attempts to interact with, direct, or intervene in them will become increasingly meaningless.
Silicon Valley Deepens its Military Industrial Complex Ties
In the midst of a harrowing war in Ukraine, a new war unfolding in the Middle East, and even emerging signs of a possible war between the US and China, defense industry leaders, especially those backed by Thiel, are taking to the press to reinforce the Silicon Valley-Pentagon relationship. While the relationship never actually dissolved, rank-and-file tech workers have contested Big Tech’s previous defense industry collaborations, including Google employees’ 2018 uproar against the company’s signing onto Project Maven, a Pentagon pilot program to improve AI-powered drone warfare, which Google punted after bad press (and Palantir subsequently picked up). A year later, in 2019, Microsoft faced internal opposition to accepting a defense contract with the U.S. Army, with staff writing in an open letter that “[w]e did not sign up to develop weapons, and we demand a say in how our work is used.”
Bucking the wishes of some in the tech workforce, defense contractors’ leadership has increasingly appealed to Silicon Valley and to the public for support of their operations, collaborations, and larger worldview, which is in perfect alignment with the goals of the political class. Palantir CEO Alex Karp, for example, has repeatedly advocated for Big Tech to re-embrace its ties to the defense industries and effectively to Western hegemony over international affairs, calling for closer collaboration and alignment of “vision” between the state and tech sectors in a late July NYT op-ed, and saying in early 2023 that “[w]e want [employees] who want to be on the side of the West… you may not agree with that and, bless you, don’t work here.”
Similarly, Anduril co-founders Palmer Luckey and Trae Stephens co-wrote a Washington Post opinion piece entitled “Silicon Valley Should Stop Ostracizing the Military” back in 2018, where they wrote that “[i]f tech companies want to promote peace, they should stand with, not against, the United States’ defense community.” Anduril’s Luckey expressed an explicit pro-Western opinion in an April 2023 Maritime Executive Op-Ed entitled “It’s Time to Accelerate Defense Tech to Deter War,” writing that today’s “dangerous world is why Anduril exists: because the West must have critical tools to preserve our way of life, uphold the values of free and fair societies and deter powerful adversaries from dominating weaker nations.”
To these defense industry leaders, in other words, the tech industry must pick a “side,” and picking a side means supporting it with all possible assistance, especially military assistance. Such executive perspectives may sound novel within the context of industry workers’ infighting over Big Tech’s collective relationship with the military industrial complex. In fact, the tech corporations now expanding their involvement in the Ukraine conflict are merely deepening their defense industry roots, which exist because the military industrial complex created and shaped much of today’s Silicon Valley in its service.
Unlimited Hangout has covered this relationship extensively, with Unlimited Hangout founder Whitney Webb writing in her 2021 article, “The Military Origins of Facebook,” that “most of the large Silicon Valley companies of today have been closely linked to the US national-security state establishment since their inception.” She explains that, today, “these companies are [now] more openly collaborating with the military-intelligence agencies that guided their development and/or provided early funding.”
Further, in 2020 and 2021 reporting, Unlimited Hangout highlighted the significance of the US Defense Advanced Research Projects Agency’s (DARPA) former Total Information Awareness (TIA) project, which was designed as an “all-seeing” military surveillance apparatus and which jump-started analogous private-sector companies now playing critical roles in today’s Big Tech ecosystem and the larger private sector. While TIA apparently went defunct in 2003, it never fully dissolved. Instead, it manifested as a series of private sector entities such as Palantir, while companies like Facebook resurrected other shuttered DARPA programs of the period. TIA’s “Bio-Surveillance” program, for instance, has apparently been re-tooled into the AI-powered US Department of Health and Human Services (HHS) Protect system, which surveilled wastewater and used advanced data modeling to predict COVID outbreaks up to 11 days in advance during the coronavirus crisis. Notably, all of the HHS’ COVID data, including that generated by its Protect system, has been managed by Palantir. Ultimately, all of the entities mentioned became more palatable to the public due to their apparent, yet false, separation from the defense sector. (As of 2022, the management of HHS Protect has moved to the Centers for Disease Control and Prevention (CDC).)
Naturally, this larger surveillance-intelligence apparatus, apparently spanning much of society, and the corporations, governmental and non-governmental bodies, and individuals shaping it are connected to ongoing developments in wartime Ukraine. As previously stated, Anduril, Palantir, and Clearview AI, all corporations perfecting the art of mass surveillance through means including mass data collection and synthesis, have all been financially supported, and in Palantir’s case co-founded, by Peter Thiel, who was, as an early Facebook investor and former Meta board member, involved in the creation of privatized TIA spinoffs.
And the hyper-networked nature of these relationships becomes all too clear upon scrutiny. For example, Anduril co-founder Trae Stephens is part of the management team of Carbyne911, which, as previously mentioned, is a Thiel-backed and Israeli intelligence-linked start-up that provides heavily surveilled emergency response services to the public sector. Stephens is also a partner at Peter Thiel’s venture capital firm Founders Fund, where his bio highlights his role as an early Palantir employee who “led teams focused on growth in the intelligence/defense space.”
Meanwhile, SpaceX, the Thiel-linked company owned by current “X” owner Elon Musk, has donated about 40,000 Starlink satellite terminals (which receive a signal from the Starlink satellites) to Ukraine in an effort to provide Ukrainians with an internet connection. Musk characterizes Starlink as a “civilian system” and says Starlink’s terms of service prohibit its use in “offensive military action”; in practice, Ukraine’s spy chief says that Starlink is used “on all front lines,” though Musk has at times partially rescinded or limited internet coverage to prevent Starlink’s use in a major act of war or conflict escalation. Ultimately, through such efforts, SpaceX is winning what UnHerd writer Thomas Fazi describes as a brewing satellite “arms race,” with SpaceX now operating over 4,500 of the estimated 7,500 satellites orbiting Earth.
Similarly, Eric Schmidt, the former CEO of Google (which also has major intelligence ties), former chair of the US National Security Commission on Artificial Intelligence (NSCAI), and World Economic Forum Agenda Contributor, has invested heavily in the Dare to Defend Democracy (D3) Military Tech Accelerator, an accelerator for Ukraine-based defense start-ups that describes itself as helping “Ukraine defende [sic] democracy and win through tech, turn the winning solutions into global success stories.” Despite his proximity to AI lawmaking processes via his NSCAI post, Schmidt is also a serial investor in AI technology, thus holding significant, simultaneous power over both the companies driving AI and military tech and their respective regulation.
Altogether, these serial and simultaneous elite investments in such corporations and Ukrainian war and data collection technologies alike do not simply suggest people like Eric Schmidt and Peter Thiel hope Ukraine’s war efforts are successful. Rather, these investments suggest further efforts towards influencing and controlling technologies that can facilitate, among other things, the advancement of a new class of questionable AI war technology that is unaccountable to the public and the furthering of a society-wide surveillance panopticon all but impossible for the average person to escape.
After all, the industries’ close-knit relationship means that Silicon Valley’s product developments and technological breakthroughs ultimately advance the defense industry’s aims, and vice-versa. For example, precise surveillance and mass data collection are key to drones’ success as a military tool that can reliably help human beings plan and execute military missions. Thus, it follows that the data collection, processing, and surveillance advancements made to improve AI warfare could also be used to better surveil or even control the public in times of “peace.”
These amped-up, ever-advancing surveillance, data collection, and processing capacities are being developed within the context of a growing war on “domestic terrorism,” in which, in the US for example, civilians are increasingly being probed for “pre-crime” tendencies. As Webb has noted in previous Unlimited Hangout reporting, the tech industry’s mass data collection and algorithm-building are critical to military-intelligence agencies’ efforts to surveil and scrutinize the general public for “problematic” behaviors.
Beyond the crackdown on “problematic” civilians and the constant efforts to influence or create surveillance-, data-collection-, and AI-powered technologies in Ukraine and beyond, Thiel and his allies’ cynical views of democracy perhaps suggest intentions to influence and even undermine traditional policymaking processes, especially in favor of unaccountable public-private partnerships and governance structures that Thiel himself either invests in or participates in directly. Max Read, framing Thiel’s donations to Senate Republicans as relationships of “convenience,” posits in Intelligencer that “Thiel has wed himself to state power not in an effort to participate in the political process but as an end run around it.” Certainly, Thiel’s own actions and statements signal this “run around” of political processes is ongoing. For example, in his 2014 book Zero to One, as journalist and Thiel biographer Max Chafkin puts it, Thiel “talks about how companies are better run than governments because they have a single decision maker — a dictator, basically. He is hostile to the idea of democracy.” Notably, Chafkin supposes that Thiel may be the most powerful person, not just in tech, but on earth. According to the New York Times, moreover, Thiel has “argued that democracy and economic freedom are incompatible and suggested that giving women the vote had undermined the latter.” And Palantir’s slated efforts to assist Ukraine’s prospective post-war reconstruction, as previously mentioned, do not simply signal a desire to “help” Ukraine; rather, they reflect Thiel’s, and the intelligence community’s, interest in influencing the affairs of (theoretically) sovereign nations and geopolitics in general. In the event that Thiel is indeed working to subvert or even undermine traditional political processes in the US, Ukraine, and elsewhere, the various technologies and organizations he has invested in over the years appear perfectly positioned to assist.
Thiel’s exact worldview and end-goals remain opaque. What is clear, however, is that Big Tech and the defense industry’s inextricable relationship means that Silicon Valley’s decades of tech advances, propelled by people like Thiel, are easily adapted to the needs of the battlefield, if not made primarily with war in mind. Considered altogether, this larger intelligence-funded and -supported apparatus and its ever-growing appendages’ collective ability to establish and expand a technocratic panopticon appears infinite. Wartime only expands its reach.
An AI Race to Oblivion
As the war in Ukraine drags on, controversial Thiel-linked companies and organizations are going all-out in efforts to revamp AI-driven weapons systems, facial recognition technologies, and other high-tech instruments that may radically transform warfare, as well as peacetime, forever. In the process, Thiel’s influence over not only the tech and defense industries, but also, increasingly, the course and outcomes of current events, only deepens.
Critically, these companies repeatedly emphasize the war’s “pressure-cooker” effect on the need to develop the most advanced defense technologies possible, or risk getting beaten out by an enemy during another high-stakes conflict, such as the one currently unfolding in the Middle East. Such fears are being leveraged to obtain government funding, with the leadership of 13 prominent tech companies and adjacent investors, including Palantir, Anduril, and Thiel-funded venture capital firm Founders Fund, signing an open letter in late June requesting that the US government reform its defense procurement processes so as to make money from its defense budget, likely to total a whopping $886 billion in 2024, more accessible to defense start-ups. The letter warns that, if its funding recommendations are “neglected,” the US’ “competitors will continue to gain ground on the technological battlefield, and we will squander the advantages that accrue from the freest and most innovative marketplace on earth.”
Ultimately, the technologies I’ve described in this piece are controversial at best, and they are created or supported by, or otherwise connected to, unsavory characters and organizations at the heart of the power elites’, and especially Peter Thiel’s, larger strides towards a technocratic nightmare. The poor and one-sided quality of Ukraine conflict coverage within the mainstream press, however, ensures the public hears little of defense contractors’ and adjacent groups’ never-ending field day in Ukraine, as it appears that no expense, and no Ukrainian soldiers or civilians, will be spared in NATO’s proxy war.
But as the war settles into an apparent, yet unpredictable, stalemate, it’s clear Ukrainians are the ones worst off as the conscription of much of the country’s male population takes effect, and as the Ukrainian government, which has long since banned political opposition and even put off elections until after the war, actively boasts about opportunities for the world’s wealthiest to privatize its economy for their benefit, not Ukraine’s. Here, the war is one that only the power elite can win.