Of course, I’m referring to AI (LLMs, specifically).
A few weeks ago, I downloaded Cursor, an AI-based code editor, and I’m astonished at how good it is. I asked it a few questions about this blog’s codebase, and it responded quickly with detailed and accurate answers. I then prompted it to make a few changes, and it did what I wanted with minimal effort.
I’ve since deleted Cursor because I already pay for GitHub Copilot. I primarily use Copilot as an advanced auto-complete tool, and it’s particularly helpful for handling repetitive tasks.
There’s no doubt in my mind that AI can boost a developer’s productivity. It’s not perfect, and letting it write code for you has its pitfalls, but if you know your stuff, it can really speed things up.
It’s easy to see how AI is going to make game-changing advances in all kinds of industries. It really is an impressive technology.
But there are huge downsides.
Firstly, there’s the environmental impact. Models require massive amounts of electricity and water (to cool servers). A single query to an AI chatbot contributes significantly more to the climate crisis than a Google search.
What’s more concerning is the incredible amount companies like Alphabet, Microsoft and Amazon are spending on new data centres to keep up with expected AI growth. As Noman Bashir at MIT points out:
“The demand for new data centers cannot be met in a sustainable way. The pace at which companies are building new data centers means the bulk of the electricity to power them must come from fossil fuel-based power plants”.
My naive optimism led me to believe that technology would help us fight climate change. I was wrong: AI and crypto are net negatives in this regard.
There are also ethical concerns regarding the methods used to obtain data for training AI models. Most models scrape the web (and many other forms of media) without permission or regard for licensing. The sources used to train these models are kept secret, so content creators can’t even determine which parts of their work were used, as Mac Stories reports:
“Publishers and creators whose content was appropriated for training or crawled for generative responses (or both) can’t even ask AI companies to be transparent about which parts of their content was used. It’s a black box where original content goes in and derivative slop comes out.”
These AI scraping bots also profoundly impact the resource usage of web servers, leading to increased costs and performance issues. The Wikimedia Foundation explains:
“Since January 2024, we have seen the bandwidth used for downloading multimedia content grow by 50%. This increase is not coming from human readers, but largely from automated programs that scrape the Wikimedia Commons image catalog of openly licensed images to feed images to AI models. Our infrastructure is built to sustain sudden traffic spikes from humans during high-interest events, but the amount of traffic generated by scraper bots is unprecedented and presents growing risks and costs.”
LLMs are also contaminating the web with generic content produced without care or thought, a phenomenon that has been dubbed “AI slop”:
“AI slop is a derogatory term for low-quality media, including writing and images, made using generative artificial intelligence technology, characterised by an inherent lack of effort, logic, or purpose.”
This mass-produced slop is flooding blogs and social media, often replacing original and helpful content in search results. It’s actively making the web worse.
LLMs are well known for making shit up. The inconsistent or made-up things that LLMs return are known as “hallucinations” and are a fundamental flaw in the underlying architecture. It’s easy to forget that LLMs aren’t intelligent and don’t understand what is right or wrong. Misinformation and fake news have proliferated over the past decade, primarily thanks to social media, and sadly LLMs are only contributing to this trend.
I’ve spoken to dozens of people who use AI and either appear oblivious to these issues or bury their heads in the sand and use it anyway (I count myself in this latter group).
There’s a fear that you’ll get left behind if you don’t use AI. Or that AI is coming to replace your job.
It’s easy to rely on AI because of its utility while ignoring the damage it causes.
But the more I think about AI and the companies behind it, the less comfortable I feel using it, and the more gross it seems. I can’t help but feel the web would be a better place if the technology had never existed in the first place.
Jeremy Keith summed it up perfectly:
“If you’re going to use generative tools powered by large language models, don’t pretend you don’t know how your sausage is made.”