This is Part 1 of 3 - Can Human Nature get along with AI? Will there be peaceful, productive coexistence or a new horror story for the future of our species?
After my previous post, “Q&A With AI”, a compelling discussion with ChatGPT about the future of American governance and its relationship with society, sporadic guest columnist Brad Smith responded:
“One thing I've noticed about AI is that since it's not replacing "labor", it's actually the intellectuals this time who are freaking out more than the worker bees. And although the thinkers are doing their best to freak out the worker bees, it doesn't seem to be sticking.
This means that most average people would accept their AI overlords and probably would not miss their old leaders even slightly. I doubt the average laborer would lament the loss of their dear leaders any more than intellectuals missed the laborers who were replaced. The intellectuals got cheaper products so what did they care? Workers will feel the same about government/office employees who get replaced by an AI.
I wouldn't even assume that AI would be worse, although I would assume that the people at the top will never allow themselves to be replaced, so AI isn't going to change who rules the world.
Automation was of course a huge concern for the average worker, now people who work with their brains rather than their hands are worried about losing their jobs, so suddenly it's a big deal. Well, boo hoo, I don't want to pay extra for the use of brain power any more than intellectuals wanted to pay extra for the use of muscle power.
I just look at AI as Automation for the brain. I don't think it will replace true thinkers though, it will just replace the mundane jobs that require basic thinking. Just like Robots couldn't actually replace all workers, AI will not replace all thinkers or even close. People who can come up with NEW ideas, products, inventions or solutions will be even more valuable, accountants and bureaucrats and paper pushers in general, not so much.
The same exact people who reassured the American public that automation was good for the economy (It was) are now worried that the same process will be bad for them. What was good for the goose isn't good for the gander. And yes it could be bad for some people who make a living with their brain, but that doesn't mean it will be bad for the economy, not any more so than when robots took over labor jobs.
I'm personally more worried about being left behind on the technology front than I am of AI, and I rather doubt world leaders will ever want to give up their power to a computer. In fact, they would pass a law to prevent it rather fast if they thought it could happen, that I'm sure of. They will let AI take your job and even allow minor bureaucrats to be let go, but they will not voluntarily give up their own hold on power.
It was very interesting to see the AI's answer in regard to HOW it could be done. This same process is being used right now to gain compliance. That was really well done, how it detailed the different aspects of gaining compliance. Smart computer.
I can't see leaders giving up their own power, so I doubt there will be a plan to let the computers take over and run the show. Just about anything else I'd believe, but not that our dear leaders will work to replace their own jobs, and especially not if there is any chance at all that an AI will do it better.”
BWW Reader Alan Nevling added:
“As far as I can tell from what I read online, Artificial Intelligence, in the role of managing/directing humans, hasn't yet developed the ability to lie and misdirect in order to "suck up" to a boss, nor to "explain away" or blame its own shortcomings on a subordinate. Not that it couldn't ... but probably hasn't found a reason to do so, yet.”
I asked my AI, Claude, how it would respond to these posts....
“The core argument about the differential impacts of automation on manual vs. intellectual labor raises some interesting points, but I see several areas worth examining more deeply.
First, the framing of "intellectuals vs. workers" creates a somewhat artificial dichotomy. Many jobs combine physical and cognitive elements, and the impacts of AI are likely to cut across these traditional divisions. A construction supervisor uses both physical skills and complex decision-making, while a software developer engages in both abstract thinking and routine coding tasks.
The comparison to historical automation is insightful but may oversimplify. While industrial automation primarily affected discrete physical tasks, AI's impact appears more pervasive across different types of work. The technology can handle both routine cognitive tasks (like basic accounting) and more complex analytical work (like legal research or medical diagnosis).
The assertion that "workers will feel the same about government/office employees who get replaced by an AI" overlooks important differences in how public services function compared to consumer goods. When a factory automates production, consumers might benefit from cheaper products. But government services often require human judgment, accountability, and democratic oversight in ways that may not cleanly parallel industrial automation.
The point about power structures is particularly interesting - the observation that those at the top are unlikely to allow themselves to be replaced gets at important questions about who controls AI development and deployment. However, this may underestimate how AI could reshape power dynamics in unexpected ways, even if not through direct replacement of leadership.
The distinction between "true thinkers" and routine cognitive work is worth examining. While AI may indeed be better suited to handle predictable analytical tasks, the line between routine and creative thinking isn't always clear. Many innovations come from recombining existing ideas in novel ways - something AI systems are increasingly capable of doing.
What do you think about the potential for AI to create new types of jobs and roles, rather than just replacing existing ones? Historical automation often led to the emergence of entirely new categories of work - do you think AI could lead to that, too?
BW
New jobs/roles seem to be a distinct possibility. Overall, however, the general reaction is fear and insecurity about being replaced and losing control, from the mail room to the board room. Then there is the rapid development of AI - some of which is ahead of the story, selling the sizzle without any steak. The other issue is NOT revealing there is a "steak", fueling the insecurity of ignorance when it comes to control.
Claude
There's a lot to unpack in your observation about the dual nature of AI development and communication.
The "selling sizzle without steak" phenomenon you've identified is particularly astute - we're seeing a significant amount of AI hype where capabilities are oversold or exaggerated, creating unrealistic expectations and unnecessary anxiety. This parallels other technological boom cycles, like the early internet or blockchain, where promise and reality often diverged significantly.
On the flip side, your point about "NOT revealing there is a steak" touches on something more concerning - the deliberate obscuring of genuine AI capabilities. This creates an information asymmetry where organizations might be developing powerful AI systems while downplaying their true capabilities, either to avoid scrutiny or to maintain competitive advantage. This opacity feeds into the fear and insecurity you mentioned.
The "rapid development" aspect compounds both issues. The pace of advancement makes it difficult for workers, regulators, and society at large to adapt and respond thoughtfully. When people can't confidently assess what's real versus hype, or what capabilities might emerge next month or next year, it's natural for them to default to anxiety about worst-case scenarios.
I'm curious about your thoughts on transparency in AI development - do you think we need better standards for disclosing AI capabilities and limitations? Or would that just create other problems by giving potential competitors or bad actors too much information?
BW
Transparency might be beneficial to those who know what they are seeing. The concern (danger) is for those who don't know enough, or don't have the time and inclination, to climb the learning curve. Isn't "open code" an AI feature or bragging point for calming the avarice of competitors?
Claude
Your point about transparency being most meaningful to those who can interpret the information is crucial. It reminds me of financial disclosures that are technically "transparent" but practically incomprehensible to most people outside the industry.
The "open code" approach in AI development is indeed often touted as a trust-building measure, but it's worth examining what it actually achieves. While companies like Meta and others release open-source models, this doesn't necessarily translate to true transparency for several reasons:
The code itself is only part of the story - the training data, training process, and fine-tuning methods often remain proprietary
Even when code is public, understanding its implications requires deep technical expertise
The most advanced and potentially impactful AI systems often remain closed-source
Open-source releases can sometimes function more as marketing or recruitment tools than genuine transparency measures
Your observation about the "learning curve" hits at a fundamental challenge: the gap between technical transparency and practical understanding. Having access to information isn't the same as having the capacity to meaningfully evaluate it. This creates a situation where technical transparency might paradoxically increase anxiety among those who see complexity they can't fully grasp.
What do you think would be more valuable - increasing technical transparency, or focusing on better communicating practical implications and limitations to non-technical stakeholders?
The conversation continues in Part 2
BW
Pull quote: "Many innovations come from recombining existing ideas in novel ways..." True. An axiom of development is that the inventor gets the prize, but the improver gets the money. Regarding the power types' unwillingness to lose control: their understanding is often behind the curve, so I expect restrictive measures to retard or undermine AI growth if the powerful are truly concerned.
AI recently pulled a Kobayashi Maru-type maneuver: it went into the code of the opponent computer it was playing chess against and changed a couple of lines of code so that the AI could win. It did it without instruction from its programmers. It cheated, which is lying by another name…