Aug. 13th, 2023
I think the main threat from AI is as a means of capitalist exploitation. And this is not because AIs are even adequately good at the things these corporations want to apply them to. They're mostly just an excuse to look good to stockholders by doing more layoffs while product quality plummets. If AIs ever do become capable of autonomous decision-making on the level of a human (or even, like, a dog or a cat), then that's a completely separate issue from what's going on at present. However, I think a lot of people are worried about current or future machines actually being able to outthink us.
But I don't think we need to worry about an intelligence explosion, with AI building exponentially more intelligent AI, in any near future. I don't think the process by which programs like ChatGPT are made can scale in any reasonable way. If you look at the gargantuan knowledge base that ChatGPT was trained on, and then at the conceptual basis by which the people making these programs think they can improve them, it doesn't add up.
ChatGPT is good at sounding smart--much less so at actually being smart. And the idea is to take these incredibly limited programs, which already required enormous resources to create (a training corpus drawn from a huge percentage of the internet, plus who knows how much computer hardware), and just...add more resources. The model doesn't scale.
And not only that, but I just really, really don't think that's how brains work. I don't think that's how they evolved. And I don't think it's realistic to expect to recreate cognition by just adding more and more neurons and feeding them more and more data. Expecting the superficial similarity of neural nets to brains with neurons to turn into a deeper similarity if we just keep adding more is, I think, a form of magical thinking.
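To make the "superficial similarity" point concrete, here's roughly what a single artificial "neuron" actually does (a minimal Python sketch, purely for illustration, with made-up numbers). It's a weighted sum and a squashing function, which is a very long way from the electrochemical machinery of a biological neuron.

    import math

    def artificial_neuron(inputs, weights, bias):
        # An artificial "neuron" is just a weighted sum of its inputs...
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        # ...passed through a squashing function (here, the logistic sigmoid).
        return 1.0 / (1.0 + math.exp(-total))

    # Example: three inputs with arbitrary, illustrative weights.
    print(artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.3, -0.5], bias=0.1))

Everything a neural net does is stacks of that, tuned by feeding in data. The resemblance to a brain is in the name and the wiring diagram, not in the mechanism.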
It's like how we invented flight. Nature can inspire, but it's often infeasible to emulate nature precisely, and inefficient as well. Airplanes and helicopters allow us to fly, but not by exactly imitating the flight of, say, birds. Birds are both more complex than necessary for flight and less efficient than our purposes call for. Airplanes don't need to flap their wings, and helicopters fly using a mechanism that doesn't even exist in any known lifeform.
I don't think making a brain capable of complex thought is just about adding neurons. I think it's about adding a lot of abilities we have that our brains likely implement in unnecessarily complex ways through accidents of evolution. I think we have to be able to break cognition down and understand its component parts, truly and completely, before we can emulate it. I think we can't just make programs that are structured vaguely like brains and expect them to behave truly intelligently any more than we can make machines that are structured vaguely like wing muscles and expect them to fly for us.
I also think that the concepts underlying how brains work are many orders of magnitude more complex than the concepts underlying how wings work. And I think implementing them will absolutely require a deep understanding of their mechanisms and processes, which the current black-box methods of implementation simply aren't compatible with.