AI is just a tool
Who is using AI & how you should approach it
Talking about AI doesn’t feel productive. Every conversation I have seems to split into two camps: those who see AI as humanity’s doom and those who see it as our future. But AI is neither good nor evil by itself. It is just a tool, like a hammer, a computer, or the internet. The problem is not the tool itself; the problem is who gets to use it.
The question is not whether we should have AI; it is already here, and tech conglomerates won’t let it go. The question is how we make AI safer and who gets to shape it. Algorithms have been around for ages (Siri, social media feeds, etc.); what LLMs do differently is process information faster. Here is what worries me most: while some people debate whether AI is good or bad, others are already using it to gain power. They use it to make decisions more quickly. They use it to spot patterns that others cannot see.
Right now, AI is being used in terrible ways. Tech startups are slapping ‘AI’ onto their digital products and services for the hype, without real technical innovation. In places like Palestine, AI is being used by Israeli forces to target Palestinians with deadly precision, leveraging data in immoral ways. Many companies are undervaluing tech professionals and laying them off rather than upskilling their staff, which is exhausting for people who work in the field. AI data centers (run by Google, xAI, and others) are causing negative health and environmental impacts in Uruguay, Kenya, Ireland, Chile, and the USA (Northern Virginia; Phoenix, Arizona). These cases are causing real harm to real people.
However, AI is very good at working with large amounts of information. It can go through thousands of documents in minutes. It can find patterns in data that would take humans days to spot. It can translate languages and summarize complex reports. These are powerful abilities and they can help ordinary people in extraordinary ways. A mother trying to understand her child’s medical condition can use AI to make sense of confusing medical papers. A small business owner can use AI to understand the market. A student can use AI to learn about topics that no local teacher knows well.
Another point you should focus on is whether you understand this technology well enough to know where it is useful for you and where it is not. This also means demystifying AI: understanding that AI currently needs quality data and a lot of personalisation to be useful, that AI is not magic, and that AI won’t and does not need to become conscious (unless we explore alternatives to LLMs). I have been using AI for a while, and the more frequently you use it, the more clearly you see its flaws.
So crossing our arms and refusing to learn about AI will not solve these problems; it just keeps us uninformed. It will only make sure that when decisions about AI are made, ordinary people have no voice in them. The people who will shape AI’s future are the people who understand how it works. If we want AI to serve humanity instead of harming it, we need people from all backgrounds to be part of that conversation. We need indigenous voices and the voices of working people everywhere. AI and automation are becoming one of the forces that will shape our lives. We can choose to understand it and shape it, or we can let it be shaped by others.