UBI and the Age of Automation - 33coders
AI Strategy

UBI and the Age of Automation

An AI practitioner wrestles with Universal Basic Income -- what changes when you are personally building the automation that displaces workers, and why the standard narratives about retraining and market adjustment are insufficient.

Damian Krawcewicz

January 24, 2025

I have been thinking about Universal Basic Income for years, and my position has shifted more than I expected. When I first encountered the idea, I dismissed it as utopian nonsense -- the kind of policy proposal that sounds good at conferences but ignores how economies actually work. Then I spent the last several years watching AI reshape organizations from the inside, and my certainty started to crack.

Here is the thing that changed my mind: I have personally helped organizations automate work that used to require dozens of people. Not in theory. In practice. I have sat in rooms where we discussed which roles would be "optimized" and watched the math happen in real time. The productivity gains are real. The human cost is also real. And nobody has a good answer for what happens next.

The Automation Nobody Talks About Honestly

The public conversation about AI and jobs is stuck in two equally useless modes. Mode one: "AI will take all the jobs and we are doomed." Mode two: "AI will create new jobs and everything will be fine." Both are wrong because both treat automation as a single event rather than what it actually is -- a slow, uneven process that hits different people at different times.

I have watched automation arrive in organizations. It does not come with an announcement. It comes as a "process improvement" or a "digital transformation initiative." A claims processing team of 30 becomes a team of 12 with an AI system doing the initial triage. A content team of 15 becomes a team of 6 with AI generating first drafts. The work gets done. The people leave quietly.

What the optimists miss is the transition cost. Yes, new jobs emerge. But the person who spent 15 years processing insurance claims cannot become a machine learning engineer in six months. The retraining narrative assumes a level of labor market fluidity that does not exist in most European economies. In Poland, where I work, the social safety nets are better than in the US but still not designed for the kind of structural displacement that AI is starting to cause.

From Fringe Idea to Real Experiments

For years, Universal Basic Income was a fringe concept -- the kind of thing discussed at policy seminars and think-tank conferences. That has changed. Finland ran a two-year experiment with 2,000 unemployed citizens receiving 560 euros per month with no conditions. The results were mixed but interesting: participants did not find jobs faster, but they reported better wellbeing, more trust in institutions, and more confidence in their ability to find work. Alaska has been distributing oil dividends to every resident for decades -- roughly $1,000-$2,000 per year. Communities in Kenya have tested direct cash transfers with measurable improvements in health, education, and local entrepreneurship.

None of these experiments are conclusive. But they are real data, and real data is better than the theoretical arguments that dominate most UBI debates. I find it telling that the most common objection to UBI -- "people will stop working" -- is not supported by any of the actual experiments. People mostly keep working. They just worry less.
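The scale question behind these experiments is easy to sketch. A back-of-envelope calculation, using the Finnish payment level and an approximate Polish population of 38 million (my assumption for illustration, not a policy proposal), shows why UBI debates so quickly become debates about taxation:

```python
# Back-of-envelope cost of a Finnish-style UBI scaled to a whole country.
# All inputs are illustrative assumptions, not proposals.

population = 38_000_000      # approximate population of Poland (assumption)
monthly_payment_eur = 560    # the payment used in the Finnish experiment
months_per_year = 12

annual_cost_eur = population * monthly_payment_eur * months_per_year
print(f"Annual cost: {annual_cost_eur / 1e9:.1f} billion EUR")
# Annual cost: 255.4 billion EUR
```

Even this crude figure, before netting out existing benefits or clawing anything back through taxes, is one reason serious proposals treat UBI as a restructuring of the transfer system rather than a pure add-on line item.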

The Tension That Nobody Resolves

Here is where I get uncomfortable, and I think the discomfort is the point. I help organizations become more efficient using AI. That efficiency often means fewer people doing the same work. I am, in a very direct sense, part of the automation problem that UBI is meant to address. I do not think stopping AI adoption is realistic or even desirable. The productivity gains are genuine. Organizations that do not adopt AI will be outcompeted by those that do. But the social consequences of widespread adoption are also genuine, and pretending they will sort themselves out through "market forces" is either naive or dishonest.

Capitalism is not designed to solve this problem. It is designed to optimize for profit, and AI is very good at helping with that optimization. The question is whether we can build social systems that absorb the displacement without letting millions of people fall through the gaps. UBI is one proposed answer. It is not the only one, and it may not even be the best one. But at least it takes the problem seriously.

What the Corporate AI Narrative Misses

I sit in corporate meetings where executives talk about AI "freeing people to do higher-value work." I have said this myself. And sometimes it is true -- AI does eliminate tedious tasks and lets skilled people focus on judgment, creativity, and relationship-building. But this narrative has a blind spot the size of a continent.

It assumes that everyone displaced by AI has the skills, resources, and opportunity to move to "higher-value work." That assumption is false for a significant portion of the workforce. A customer service representative whose job is automated does not automatically become an AI prompt designer. A data entry clerk does not become a data scientist. The transition requires training, time, financial support during the transition, and -- critically -- actual job openings in the new category.

I worked with one organization that automated 60% of their back-office operations over two years. They offered retraining programs. About 30% of the displaced employees successfully transitioned to new roles. The other 70% eventually left. The company celebrated the 30% in their CSR report. Nobody talked about the 70%.

The Political Reality in Europe

I work primarily in Poland and the broader European market, where the labor politics around AI are different from the US. Europe has stronger social safety nets, more unionization, and the EU AI Act is creating regulatory pressure that does not exist elsewhere. But Europe is also dealing with its own version of this tension.

Poland's economy has grown rapidly partly by being a cost-competitive labor market for technology and business services. Hundreds of thousands of Polish workers are employed in shared service centers and outsourcing operations that exist precisely because Polish labor is cheaper than Western European labor. AI threatens that competitive advantage directly. If work can be automated, the cost of human labor in Warsaw versus San Francisco matters less. Polish companies that positioned themselves on labor cost arbitrage are going to face a reckoning, and it is coming faster than most people in the industry acknowledge.

I talk to CTOs at Polish companies regularly, and the awareness gap is striking. Some are actively preparing -- reskilling teams, repositioning their offerings, building AI-native capabilities. Others are still operating as if the current model will last another decade. It will not.

The EU AI Act addresses some risks -- high-risk AI systems, transparency requirements, and prohibited practices. But it does not address the economic displacement question at all. It regulates how AI is built and deployed. It says nothing about what happens to the people whose jobs the AI replaces. That gap is where the UBI conversation becomes relevant.

The Deeper Question: Purpose Beyond Employment

Even if we solve the financial problem -- even if UBI or something like it provides a baseline income -- there is a harder question underneath. Work provides more than money. It provides structure, social connection, identity, and purpose. When I talk to engineers and managers in organizations undergoing AI transformation, the anxiety is not primarily about money. It is about relevance.

"If the AI can do my job, what am I for?" That question comes up in almost every workshop I run, usually indirectly. People do not phrase it that way, but it is what they are asking. And I do not have a good answer. UBI addresses the income floor. It does not address the meaning floor. A society where basic needs are met but purpose is scarce is not a solved problem. It is a different kind of problem.

Where I Land (For Now)

I am not a UBI evangelist. I am an AI practitioner who sees the displacement happening in real time and finds the standard narratives insufficient. "The market will adjust" is a bet that I am not willing to make with other people's livelihoods. "Ban AI" is not realistic. "Retrain everyone" underestimates the scale and speed of the transition.

UBI, or something structurally similar, might be necessary as a bridge. Not because it solves everything -- it does not. But because it buys time. Time for people to adapt, for institutions to evolve, for society to figure out what "work" and "value" mean when machines can do most of what humans used to do for wages.

I do not know if UBI is the right answer. But I know that the current answer -- hoping the transition will be gentle and gradual -- is not working. And as someone who builds the systems that drive this transition, I owe it to the people affected to be honest about that.

The productivity gains I help create are real. The displacement they cause is also real. Holding both truths simultaneously is uncomfortable, but comfort is not the standard we should be optimizing for. The honest conversation about AI and work has barely started. We owe it to the people affected to have it properly, with real data, real humility, and real proposals -- not just platitudes about retraining and market adjustment.

Damian Krawcewicz

AI strategy consultant and practitioner. 20 years in engineering, currently leading AI adoption for 100+ engineers.
