Will AI replace jobs?
I have been reading a lot about whether AI will replace jobs, and in particular whether programming will become obsolete. Here are my two cents.
I'm not necessarily advocating for one side but rather reflecting on the patterns of thinking I’ve noticed on both sides.
Humans, no matter how intelligent, cannot escape fallibility. This seems largely due to our biases and wishful thinking. For instance, I can see why a software engineer who has spent years mastering coding might feel furious at the idea of being replaced. That naturally fosters a desire to seek out arguments that reinforce their stance. Consequently, they tend to upvote ideas that match their views and dismiss those that don't; the reverse holds for those on the other side of the debate.
Take, for example, a post I recently saw claiming that it took over 50 years from the inception of machine translation for translators to be displaced by AI, suggesting it would likely take programmers a long time to face the same fate. This reasoning, however, is flawed. With advances in technology, its accessibility, and its widespread adoption, progress isn't linear; it's more likely exponential. That said, I'm not trying to argue that programmers will be replaced either; I'm just pointing out that we need a good explanation for the conjectures on either side.
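To make the linear-vs-exponential point concrete, here is a minimal sketch with entirely invented numbers: if a capability metric grows by a fixed amount each year versus doubling each year, the time to reach the same target differs by orders of magnitude, which is why extrapolating from how long translation took can mislead.

```python
import math

# Hypothetical illustration (all numbers invented): how long does it take
# a capability score to climb from 1 to 1000?
target = 1000.0
linear_rate = 1.0      # linear growth: +1 unit per year
doubling_time = 1.0    # exponential growth: doubles every year

linear_years = (target - 1.0) / linear_rate        # fixed gain per year
exp_years = math.log2(target) * doubling_time      # years = log2(target)

print(round(linear_years))      # 999
print(round(exp_years, 1))      # 10.0
```

The gap is the whole argument: under exponential growth the final doublings close most of the distance, so past timelines are a poor guide to future ones.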
What seems to happen is that when you're at the forefront of a technology, there's a lot of noise, bias, and conjecture posing as insight. This is why I believe forming our own conjectures is crucial.
Why does having a better conjecture matter? Because we can't deceive the market. My wishful thinking and personal desires won't bend the universe to my will. In ten years, we will know how things turned out, and the better our conjectures and judgment, the better the market will reward us. It's not necessarily about how many people agree with you.
Let's consider one possible outcome among many, especially at this early stage. If AI is indeed going to replace the workforce, anticipating that shift ahead of time would be far more valuable than holding onto the opposite belief. And since I'm human and fallible too, I must continually challenge my assumptions with criticism. What matters isn't being right; it's holding a conjecture that sits closer to the objective truth.
With that said, one metric that does seem intriguing is the second derivative of AI progress over time…
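For a discrete series of measurements, the "second derivative" is just the second difference: the change in the rate of change. A tiny sketch with invented benchmark scores shows what a positive second difference, i.e. accelerating progress, looks like:

```python
# Hypothetical sketch with invented yearly capability scores.
scores = [1, 2, 4, 8, 16]

# First differences: how much progress was made each year (velocity).
first_diff = [b - a for a, b in zip(scores, scores[1:])]

# Second differences: how much the yearly progress itself grew (acceleration).
second_diff = [b - a for a, b in zip(first_diff, first_diff[1:])]

print(first_diff)    # [1, 2, 4, 8]
print(second_diff)   # [1, 2, 4] — positive throughout: progress is accelerating
```

If the second differences stay positive, each year's gain exceeds the last, which is the signature of the exponential rather than linear trajectory discussed above.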