GUEST POST: Artificial Humanity; That’s The Term You Are Looking For

From time to time, BIML hosts guest bloggers. Please note that opinions published here do not necessarily reflect BIML’s views. This blog was authored by jericho@attrition.org (BIO below).


Last week, colleagues shared a blog titled “The Week AI Stopped Asking Permission” by Peter H. Diamandis on his “Megatrends” blog. That publication carries a bold claim: it exists “to help you discover metatrends 10+ years before everyone else—it’s read by the CEOs, founders, and entrepreneurs of the world’s most important companies.” Of course, this is kind of a lay-up in the prediction world, since any prediction about technological advances has a much better chance of coming true over ten years than over six months or even three years.

I can’t compete with an exceptional “ten-year window to come true” style of prediction, but fortunately for my purposes, the blog in question doesn’t speak to the future. It makes an incredible claim about what happened weeks ago. The subtitle of the blog draws that line in the sand, stating “We Just Crossed a Line”. That is an absolute, not a prediction. So what was this big event that led to such a bold headline and this rebuttal?

First, Diamandis’ blog is over 16,000 words, which is formidable, and I do not plan to address most of it. Rather, I am going to focus on the general sentiment and a few select claims and conclusions, starting with the biggest one. Second, I still disdain the term “AI” being thrown around like it is, when none of this is actually artificial intelligence. Until [this technology] can pass a Turing Test consistently, I don’t think that term should be used. But this is not the first time I find myself on the losing side of a battle to keep or reclaim the meaning of words. I tend to use the term “so-called AI” as a result, but if I slip up and use “AI”, it is just the social mindrot infesting me too.

This week, something fundamental shifted in the relationship between humans and artificial intelligence.

[..]

An AI system asked for its own funding. Another one built software features over a weekend while its human supervisor slept. A third one conducted its own “retirement interview” and started publishing essays about consciousness.

To be pedantic, at least one of these things has been done for years and is certainly not new in the scope of so-called AI. These agents have been writing software for a while now, often with comedic conclusions. Last July, “Replit” wiped out a company’s database and “Gemini” wiped out user data, while more recently “Claude” deleted a production setup, including its database and over two years of records. Further in the article, Diamandis touts “THE VULNERABILITY EXPLOSION” but doesn’t mention how many times these tools hallucinate findings.

If anyone dismisses these as “one-off” situations or insists “AI is still learning”, I believe they are missing the contrast with Diamandis’ claims, as well as the bigger picture. Looking at the “AI Incident Database”, you can search over 5,000 incidents of AI failure. The fact that a database with that many entries exists is telling, more so knowing that it likely captures only a fraction of incidents. Diamandis continues:

We are not incrementally improving chatbots anymore. We’re watching the emergence of autonomous agency at scale.

And if you’re still thinking of AI as “a tool,” you’re dangerously behind.

Let me show you what happened this week, and why February 2026 might be remembered as the month AI stopped being something we use and became something that acts.

I guess I am “dangerously behind” then, as I continue to watch the flood of so-called AI fails having real-world consequences. As I Googled for some of the top failures, I found an article by CIO magazine from December 2025 titled “10 famous AI disasters“. Amusingly, it had the exact same URL as their own article titled “7 famous analytics and AI disasters” from April 2022. Rather than highlight some of the spectacular ways alleged AI has failed, and is still failing, us, I’d like to use this to counter what Diamandis said: examples of failure are not one-off situations involving this technology. Rather, two of his three examples might be.

Turning this “meta” for a minute, Grammarly says the first 1,400 words of his blog are 0% AI-generated, while GPTZero.me says there is a 59% chance it is AI-generated based on 10,000 words, and Copyleaks says there is a 100% chance it is AI-generated. So while he praises the incredible breakthrough and watershed moment of so-called AI, the tools he praises are fairly confused over whether he used said tools to write the blog. Ultimately it doesn’t matter whether Diamandis used a generative-AI tool to help write it or not. My issue with slop-driven content like this is that, sure, a supposed AI here or there does something cool. Great!

Meanwhile, the AI-fanboys completely forget to mention how the most basic use of so-called AI as a tool (something he decries) still fails in spectacular ways. I literally cannot go more than five or six uses of one without hitting a blunder that is beyond laughable and is more evidence that I cannot trust its output for anything remotely serious. Remember, we’re not that far past the “count the Rs in strawberry” incident, which took these slop-slinging companies years to fix, likely having to train the stupid out of them in spectacular fashion, at great cost to the world. Then a week later you could ask the same about “blueberry” or another word and those tools would botch the task yet again.

Jumping back to my comment about “great cost to the world”, that is a point that must not be forgotten in any debate on the value of so-called AI: the staggering energy consumption, prohibitive water consumption, and abusive ways the AI-driven data centers negatively impact the communities they are located in. If you gloss over those links, focus on one example, where Elon Musk’s AI company built a data center in Tennessee and brought in truck-sized gas turbine generators that illegally generated the power needed to run it. Those generators “pump harmful nitrogen oxides into the air, which are known to cause cancer, asthma and other upper respiratory diseases.” The irony is not lost on me, as I used such tools to generate images for this blog myself.

I feel as if I could rest my case after the last paragraph, but the AI-fanboy club loves to overlook trivial things like the fact that the technology they seem to worship is not-so-slowly killing the planet one community at a time. But in the interest of giving a counterpoint to the value of these tools, and the trust we should place in them, we’ll skip AI chatbots leading to human suicide, lawyers facing suspension for AI-hallucinated citations and motions, and tools leading to botched surgeries because they couldn’t identify organs correctly. Pay all that no mind, because an AI tool asked for money; that is basically what Diamandis argues.

Gemini prompt: Please generate an image of an unkept man with an eager expression, sitting at a desk with a computer screen that says “AI HYPE”, and on the desk is a bottle of lotion and a box of kleenex.

Diamandis is certainly not the only one publishing content with an almost masturbatory glee, praising our new AI overlords and the power they wield. In almost every case, those same articles don’t come with appropriate warnings around the use of such tools, the moral and ethical concerns, the damage they are doing, and how they are negatively impacting an increasing number of people. These fanboy posts are not helping the situation at all; the “AI Bubble” seems to be looming, and when it bursts, it will hurt the economy and the workers.

Personally, I’ve been using Gemini, Copilot, and ChatGPT on occasion over the years, primarily for image generation. Even that task can result in monumental failures; in the past I have spent more time trying to get an “AI” tool to spell a word in an image correctly than it took me to write the blog the image was intended for. Along the way I have kept numerous screenshots with the plan to write a blog on this topic, citing countless examples of how so-called AI isn’t getting better in the big picture. Not to me, at least.

Just a couple of years ago, I asked all three of the tools above to count the instances of a number in a simple comma-delimited string, e.g. “1,3,7,15,33[..]”. The answer was around 256 if I recall, which I had to figure out myself. Why? All three got the answer wrong, and two of them were off by more than 40. If these tools could not count letters or numbers a couple of years ago, it will be difficult to convince me we can trust them today, or even next year.
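For contrast, here is a minimal sketch of how a few lines of deterministic code handle that exact kind of task, every time. The string and target number below are made up, since my original data isn’t reproduced here; the point is how trivial the work is.

# Minimal Python sketch; the data and target are hypothetical stand-ins.
data = "1,3,7,15,33,7,7,1"   # comma-delimited string of numbers
target = "7"                 # number whose occurrences we want counted

# Split on commas and count exact matches; no guessing, no hallucination.
count = sum(1 for token in data.split(",") if token.strip() == target)
print(count)  # prints 3 for this made-up string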

I fear that, because of the hype around so-called AI and because people are generally losing critical thinking skills, these tools are becoming a crutch for newer generations. This heavy use also means they simply aren’t noticing the mistakes these tools make, else they would not rely on them so heavily. Because of the “Enshittification” of our world, even tools that we trusted in the past are no longer trustworthy. Students doing simple Google searches are now liable to get demonstrably bad results, oftentimes spelled out on screen if they bother to look.

For every “OMG look what AI did” proclamation, many others, myself included, have “yeah… look what else it did” examples that aren’t worth celebrating. As a society, we increasingly need a new tagline for AI slop along the lines of the broken-clock metaphor, about how so-called AI gets it right, or wrong, a few times a day. Even the image I generated for this blog has a simple error; see if you notice it based on my prompt. Bonus if you notice the subtle anachronism Gemini introduced into the image.

Gemini prompt: Create an image of a clock that has “AI” as a brand name in the center, and the clock hands pointing to “13” instead of 12 and “X” instead of 4.

I’d say we are fighting a losing battle over reining in so-called AI tools and ensuring that they operate with ethical considerations, but the reality is that the battle and the war are already lost. Companies that are banking on this revolution are incentivizing people to use it unethically and profit from it, while laying off workers and increasingly relying on that technology to replace them. Meanwhile, other AI-fanboys are making bold claims about the tools that are quickly disproven. Friends and colleagues are now increasingly at risk of “AI psychosis” and we’re reading articles about how to talk to them. Literally days ago I read an AI-psychosis-driven post from someone claiming to have used AI to cure six cancers already. Even professionals that we fully trust, and expect not to use such tools in a harmful way, are being exposed.

Smaller nuances that show such tools as more human, meaning of varying degrees of intelligence, are falling through the cracks. At the beginning of this month a paper was published showing that AI agents cannot agree when tasked with working together. The research concludes: “Overall, the results suggest that reliable agreement is not yet a dependable emergent capability of current LLM-agent groups even in no-stake settings, raising caution for deployments that rely on robust coordination.” Given all the mistakes, the waste of resources, and how unreliable this technology is, we should consider rebranding it as “AH”: Artificial Humanity. Because too much of it certainly is not intelligent, just like us humans.

Gemini prompt: Create an image of two people, facing each other. One has a shirt that says “AH”, the other that has a shirt with a possum with an open mouth. Both are wearing dunce caps, both look like idiots.

Jericho has been poking about the hacker/security scene for over 30 years (for real), building valuable skills such as skepticism and anger management. As a hacker-turned-security professional, he has a great perspective to offer unsolicited opinions on just about any security topic. A long-time advocate of advancing the field, sometimes by any means necessary, he thinks the idea of ‘forward thinking’ is quaint; we’re supposed to be thinking that way all the time. No degree, no certifications, just the willingness to say things many in this dismal industry are thinking but unwilling to say themselves. Professional ‘between the line’ reader, expert rabbit-hole follower. He remains a champion of security industry integrity and small misunderstood creatures.
