
The AI Takeover — EMD Thesis Series

EMD Thesis Series — Topic 01  /  Technology

The AI Takeover.

Friend, foe, or just a really smart employee we're too scared to fire? The question nobody in the boardroom wants on the agenda — and the answer that's arriving whether we're ready or not.

Technology & Society  ·  By Neal Lloyd  ·  EMD Thesis Series

Let's get one thing straight before we dive in: Artificial Intelligence is not coming for your job. That's old news. It's already arrived, it's raided the fridge, left the toilet seat up, and it's now asking HR about the pension plan. The question we should actually be asking — the one nobody at the boardroom table wants to put on the agenda — is this: did we just create the greatest tool in human history, the most dangerous invention we've ever produced, or something so profoundly in between that neither description will satisfy us?

The answer, frustratingly and fascinatingly, is: yes to all three.

We Taught A Machine To Think. Then Panicked.

Here's how the story goes. For decades, the smartest people on the planet worked tirelessly to build a machine that could learn, reason, and solve problems the way humans do. They succeeded. And then — as if collectively realising they'd just taught a goldfish to drive — those same people started signing open letters begging governments to slow everything down.

Geoffrey Hinton, one of the godfathers of deep learning, left Google and immediately started warning the world about the thing he'd spent his career building. That's the equivalent of Thomas Edison inventing the lightbulb, then touring the country handing out candles. The cognitive dissonance is magnificent. You almost have to respect it.

But here's the thing about AI that the doomers and the utopians both tend to miss: it doesn't care about either of their narratives. It's a tool — an extraordinarily powerful, occasionally baffling, consistently impressive tool — and like every transformative technology before it, the story isn't about the tool. It's about us. It's about what we choose to do with it. And if human history is any indicator of that choice... well. Let's just say we have a complicated track record.

We built a machine that could pass the Turing Test, write symphonies, and still get basic arithmetic wrong in a way that makes you question everything.

The Jobs AI Is Murdering — And Creating

Every generation has its technological panic. The printing press was going to destroy the scribes. The industrial revolution was going to kill the craftsmen. The spreadsheet was going to end accounting. And yet — here we are, with more accountants, more craftsmen working in artisanal revival markets, and more words published daily than at any point in human history. Progress has a funny way of being net positive even when it stings on the way through.

That said, AI is hitting differently. This isn't automation replacing physical labour. It's automation replacing cognitive labour — the stuff we always assumed would be safe because it required a brain. Lawyers reviewing contracts. Radiologists reading scans. Junior coders writing boilerplate. Copywriters churning out product descriptions. The middle layer of the knowledge economy is looking nervously over its shoulder, and the footsteps are getting louder.

300M — jobs potentially impacted by AI globally
97M — new roles AI is expected to create
$15.7T — AI's projected contribution to global GDP by 2030

But before you spiral into a career crisis, consider this: the jobs AI is worst at are the ones that require the most distinctly human ingredients. Empathy. Nuanced judgment. Creativity that comes from lived experience. The ability to sit with a grieving client and actually make them feel less alone. The capacity to read a room and know when to shut up. These things remain stubbornly, beautifully human.

The real threat isn't replacement. It's complacency. The professional who refuses to learn AI tools isn't being noble — they're being the equivalent of the executive who refused to learn email in 1998. You remember what happened to those people. Their secretaries learned email. They became the secretaries.

Can A Machine Make Art, Or Just Fake It?

This one keeps philosophers, artists, and drunk people at dinner parties arguing well past midnight. Can AI be creative? Or is it merely an incredibly sophisticated remix machine, sampling the entire history of human output and producing something that looks like originality the way a tribute band looks like the real thing?

Here's what we know: AI can produce images that make award-winning photographers weep. It can write prose that passes through editorial departments undetected. It can compose music that moves people to tears — people who didn't know it wasn't made by a human. By every observable, measurable outcome, the results are indistinguishable from human creativity.

And yet. There's something hollow at the centre. AI doesn't create from experience. It doesn't make a painting because it once sat by a window watching rain and felt something it couldn't name. It generates from pattern and probability. It is the world's greatest mimic — and mimicry, magnificent as it can be, is not the same thing as meaning.

If the output moves you — if the painting stirs something, if the story makes you cry — does it actually matter what produced it? That question doesn't have an easy answer. Which means it's exactly the kind we should spend more time on.

The Ethics Problem Nobody Wants To Solve

Let's say an AI diagnoses a patient incorrectly and treatment is delayed. Who goes to court? The hospital that deployed it? The company that built it? The engineer who wrote the training pipeline? Right now, the legal framework for answering that question is approximately as robust as a wet paper bag in a thunderstorm.

Or take deepfakes — AI-generated video so convincing that your eyes and ears, those trustworthy old senses you've relied on since birth, simply cannot detect the forgery. We are entering an era in which seeing is no longer believing. The implications for trust — in media, in politics, in personal relationships — are staggering.

And the frustrating reality is that regulation moves at the speed of democracy — slow, deliberate, consensus-seeking — while technology moves at the speed of venture capital, which is to say: at whatever speed makes the quarterly numbers work.

The gap between what AI can do and what our laws are prepared to handle isn't a crack in the pavement. It's a canyon. And we're building the bridge while standing on it.

So What Do We Actually Do With This?

The answer isn't to fear AI. Fear makes you passive. And passive is the worst possible stance to take toward a force that rewards the curious and the adaptable. The answer is engagement. Deep, critical, well-informed engagement. Learn what these systems actually do. Understand where they fail. Ask the hard questions about ownership, labour, ethics, and power.

And maybe most importantly: keep doing the things that make you irreplaceably human. Connect. Empathise. Create from lived experience. Ask questions that don't have easy answers. These are capabilities that no amount of compute power has figured out how to replicate. Yet.

The AI revolution is not something that's happening to you. It's something you are — whether you like it or not — part of. The only remaining question is whether you're going to be a passenger, a critic on the sideline, or someone leaning forward, hands on the wheel, helping to steer this extraordinary, terrifying, magnificent thing in a direction worth going.

The machine is smart. The choices about what to do with it? Those are still ours. At least for now.

Technology  ·  Artificial Intelligence  ·  Future of Work  ·  Ethics  ·  Business  ·  Thesis Series
Written by Neal Lloyd  ·  EMD
Next in the Thesis Series

Topic 02: The Humanoid Robot Revolution — Why Your Next Coworker Might Have a Charging Port
