
Am I Afraid of AI?

Tags = [ AI, musings ]

I'm not afraid AI is going to take over the world and enslave us all. But recently I realized that I'm afraid to use AI tools. And I'm simultaneously afraid I'm missing the boat.

I had a conversation the other day with some friends about AI and how they were or were not using it. Someone posted an AI-related article and I started asking questions about how agentic AI works and their experiences with it. I haven't used agentic AI tools at all. I walked away from it feeling... uncomfortable? Uneasy? Kind of annoyed. It was a weird feeling. That basically never happens with this group of folks, so it was out of the ordinary. I wasn't really sure why I felt that way, though. And that made me more uncomfortable.

Shortly after that, I was getting a workout in and my AirPods were dead again, so I had no podcast or music to distract me. I was alone with my thoughts. The conversation was fresh in my mind, so I was ruminating on it, and I think I had a bit of a Eureka moment. I decided to organize my thoughts so I could figure out how I really feel about AI tools and hopefully learn why I feel that way. It was very helpful for me. This post is not intended to persuade anybody to use or not use AI tools or otherwise change any opinions. It turned into a rather large document and I decided to share it in case others could gain something from it.

AI Cynicism

I've been pretty cynical about AI since the hype exploded over the last few years. I don't do most social media any more, but I am on Mastodon. I follow mostly infosec folks and other people who are interested in technology. I've noticed that most folks I follow on there who discuss modern AI tech seem to have a pretty negative opinion about it. Every now and then I see someone talk about something useful they are doing with it, but mostly the sentiment seems negative. I'm aware that my online "social" life is a bubble of my own creation, though. I never assumed this was a majority opinion, although it's easy to start thinking that it is when it's most of what I see.

I've had similar negative feelings about AI, but I wasn't really sure why. I just knew that most of what I had read about AI left me feeling cynical. But it kind of surprised me to learn that a closer group of techy friends were using it regularly in their work and finding it useful, and that I seemed to be the only one who was avoiding it. And I was surprised to find that I felt uncomfortable after having discovered this.

I've always been into computers and tech. I used to be excited about new technologies. In fact, when ChatGPT first came out, I played with it a bunch. I hadn't ever used anything quite like it. I was pretty amazed at what it could do. I even did some research where I used it to generate phishing emails and landing pages, just to see if it could. But then I sort of backed away from it and over time I soured on it. Why? Am I just getting old? Stuck in my ways? Get off my lawn! There are actually numerous reasons why I haven't dived head first into AI tools.

Privacy

A few years back I started listening to a podcast about privacy. It was the now-defunct Privacy, Security, and OSINT Show with Michael Bazzell. Before this time I was a fairly heavy Google user. Gmail, Calendar, Drive, Pixel phones running Android, Google Fi cellular service, etc. I also had a Facebook account for almost two decades. I knew that Google, Meta, and other big tech companies were slurping up my data, but I never really cared up to that point. I wasn't trying to hide anything, and never expected any humans would actually see any of my data. Just automated tasks processing it for... whatever reasons. I never thought hard about what that could mean and what kind of consequences it could have. They can sell it to the highest bidder, or the federal government, or whoever. Or they can get breached and the data can just get leaked out to the public.

The podcast also made me realize how much of our lives are built around these tools and how hard it is to actually avoid using them. I guess I always felt like I had made a conscious decision to use big tech services. But once I decided to take back my privacy I realized just how difficult, if not impossible, it is to avoid handing over my data to these companies. It has become almost a fact of life that if you use any service, you can expect the vendor to collect as much data about you as possible and sell it. That kind of pissed me off, and since then I've become something of a privacy advocate. It's become almost a hobby to find ways to take back my privacy where it matters to me. This led me down a huge rabbit hole, and now I spend an annoying amount of money and time just trying to navigate the world without handing over all my data if I can avoid it.

I have just about completely de-Googled my life. I use a Pixel phone, but I install de-Googled GrapheneOS on it. The only Google service I still occasionally use is Google Maps and it's just through a browser and I do not share my location with them. I've been trying to switch to CoMaps, but it's just not as good. I don't even use Google search anymore. I use Kagi search instead, which I actually pay for! I pay for search! It seemed ridiculous at first, but Kagi seems to care about user privacy and they provide a great search service. I also exclusively run Linux at home and even at work. I've switched phone providers and pay for encrypted email and VPN services. I pay for VOIP service so I don't have to give out my physical cell phone number. I've moved a lot of other services to self-hosted alternatives I run on a Proxmox system in my home. This whole process has taken years and is constantly evolving and requires constant effort and maintenance.

As another example, I bought a new car recently and trying to opt out of all the data collection this thing has built-in was ridiculous. I can't use any connected features whatsoever without allowing the manufacturer to spy on me in my own vehicle. I'm still not fully convinced they aren't somehow collecting my data either via built-in radios or by dumping data from the computer when we take the vehicle in for service.

All this is to say, I put in a lot of time, effort, and even money just to try and maintain whatever scraps of privacy I can. It's a headache, but it does feel sort of good to "stick it to the man" by not playing their game where I can manage it. I still feel strongly that it should not be THIS HARD, though.

Now here come the large language model-based AI tools. These are amazing new super tools, but guess what? You can't really self-host them effectively. There are models you can self-host at home right now but they do not compare to the big tech stuff. You also need to spend a good bit of money on hardware to even run the models. You therefore cannot even really test them out without first dropping a large chunk of change on GPUs. So you are mostly stuck using cloud services because you can pay for your usage as you go, which is an easier financial pill to swallow.

If you want to use the cloud tools, you need to give these companies access to your data. They may then use that data to further train the AI models, or to flag you and report you to the authorities, or who knows? It is currently basically impossible to use them without compromising your privacy. For "agentic" AI, you give the AI tools access to your file system, or the ability to run commands on your computer, etc. That way you can instruct the tools to perform useful tasks on your behalf. I understand how useful that could be, but it is also an immediate red flag for me. I can't stomach the idea of giving an AI black box owned by big tech companies that kind of access to and control over my computer. It feels like it invalidates all the work I've been putting in to regain my privacy online. And the fact that there is no realistic self-hosted option that can even come close to competing just irks me. It means I basically have to play their game or I can't play at all.
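To make that concrete: the core of an agentic tool is, conceptually, a loop that takes whatever the model says and executes it on your machine, then ships the output back to the vendor. This is my own simplified sketch in Python, not any vendor's actual code, and ask_model is a hypothetical stand-in for a call to a hosted model:

```python
import subprocess

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a call to some hosted LLM API.
    # In a real agent, `prompt` (including your files and command output)
    # is sent to the vendor's servers and a reply comes back.
    raise NotImplementedError

def run_agent(task: str, max_steps: int = 5) -> None:
    history = f"Task: {task}\n"
    for _ in range(max_steps):
        # The model decides what to run next, based on everything it has seen so far.
        command = ask_model(history + "\nReply with the next shell command to run, or DONE.")
        if command.strip() == "DONE":
            break
        # This is the part that bothers me: the model's output is executed
        # directly on my machine, and everything it produces goes back out.
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        history += f"\n$ {command}\n{result.stdout}{result.stderr}"
```

Real tools add permission prompts and sandboxing on top of this, but the basic bargain is the same: the model sees your data and gets to act on your machine.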

Enshittification

A few years back, after I had started taking back my digital privacy, I read about Cory Doctorow's idea of enshittification. It instantly clicked for me. I had noticed that big tech platforms had been getting worse and worse. More ads, more content you don't want to see, less of the content you are trying to see. Cory was able to really articulate exactly what was happening and why. It just made sense. This got me reading more of his work, which opened my eyes to some other unethical things corporations are doing, such as arbitration agreements, class action waivers, surveillance pricing, stock buybacks, and more.

It really brought to my attention just how slimy these companies are and just how corrupt things have gotten. These companies do whatever they can to get their customers locked in and then they make their services worse and worse in order to squeeze out every last penny. They will make it as bad as they think they can get away with and not have everyone abandon ship. It's not their fault, exactly. We have built a regulatory environment that allows them to get away with this. So of course they do it.

Now we have these cool new AI tools which, as I mentioned, can only really be rented. You can't just buy one and have it at home. At least not one that performs as well as the cloud-based models, as far as I'm aware. If I build new workflows around these cloud tools, I will become reliant on them. I may even de-skill myself in some cases where I let AI do some of the work for me. Then, when these companies enshittify their AI tools, I'll be locked in. I'll have to try to ignore the ads they will inevitably introduce to squeeze out a few more pennies. Sam Altman of OpenAI once said that advertisements would be a last-resort option for ChatGPT. Now they are moving forward with implementing ads in the free tiers.

Or maybe I'll have to accept the slowly increasing prices. Just like I did with my Disney+ subscription. When I first subscribed to Disney+ in January 2022 I was paying $7.99 per month for the service. It seemed like a good deal at the time for all the content they have available. In January 2023 the price went up to $10.99. In November 2023 it went up to $13.99. One year after that it went up to $15.99. The current price for that plan is $18.99. That's nearly a 140% increase in under four years. And somewhere along the way they started showing ads for other shows before the show I was trying to watch. I cancelled my subscription and hopped out of that pot a few months ago after I realized the water was almost at boiling temperature. I wonder how high of a price they think they can get away with? I wonder when they will start injecting actual advertisements into the paid subscriptions to squeeze out a few more pennies.

But I won't be able to easily cancel AI subscriptions. Because if I build workflows around the tools, I will become reliant on them and there won't be anywhere else to go. There are only a handful of companies who have the capital and ability to build these AI systems right now. Maybe some day there will be many and there will be a lot of competition. Though, Meta has shown that these companies can blatantly violate antitrust law by purchasing competitors solely to remove competition (Instagram) and get away with it. Even though Zuck admitted it in writing! Most industries in America are dominated by just a few huge companies. So, maybe not.

AI tools also often have alternate billing methods where they charge you based on "tokens" used. Any data you submit to an AI (words, phrases, code, images, etc.) is broken down by the AI into "tokens", so the more data you submit, the more tokens used. Charging on a per-token basis allows you to pay as you go: every time you submit a query, you pay for it. This means that the AI vendor makes more money if you have to submit multiple queries than if you get the correct answer the first time. They may stand to gain something by making the service worse.

If this sounds ridiculous, then you may not be aware that Google did exactly this to its search product. Google's flagship product. The one that has been more successful than anything they've invented since (they purchased just about all of their other successful products). Google has an effective monopoly on search, which leaves little room for growth. They realized that by making search worse, their users would have to submit more queries or scroll through more pages to find what they were looking for. This allowed Google to show more ads and generate more revenue at the expense of their own users.

There are currently only a handful of companies with very capable AI tools, and all of the other AI companies rely on those same core AI services on the backend. Maybe having a few options is enough to prevent this sort of deliberate enshittification, maybe it's not. I don't expect any of these companies are doing this right now, but I do worry that it will happen in the future, especially if these companies absorb one another to become an effective monopoly. Eventually they won't be able to grow and they'll have to find creative ways to increase revenue.
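To put rough numbers on that per-token incentive, here's a little back-of-the-napkin Python. The prices are made up for illustration; real vendors charge different rates for input and output tokens and they vary by model. The point is just that a worse answer, which forces you to re-prompt, directly translates into more revenue:

```python
# Hypothetical prices, for illustration only; real per-token rates vary by vendor and model.
PRICE_PER_1K_INPUT_TOKENS = 0.003   # dollars
PRICE_PER_1K_OUTPUT_TOKENS = 0.015  # dollars

def query_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one prompt/response round trip, in dollars."""
    return (
        (input_tokens / 1000) * PRICE_PER_1K_INPUT_TOKENS
        + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT_TOKENS
    )

# A good answer on the first try:
print(query_cost(2000, 800))  # ~$0.02

# The same task when you have to re-prompt four more times, each retry
# re-sending the growing conversation as input tokens:
print(sum(query_cost(2000 + i * 1000, 800) for i in range(5)))  # ~$0.12
```

More than six times the revenue for the same one useful answer.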

Climate

I've also read about how the datacenters used by these generative AI companies are massive and require an enormous amount of energy and water for cooling. This may accelerate climate change. This is something of great concern to me, not only because I live here and would like to continue to do so, but because now I have two young kids and I hope that they can live long and healthy lives on this planet too. There is no alternative. If this planet is wrecked, we are all screwed. If we can't keep Earth habitable with all its resources readily available, there is no way we are going to be terraforming Mars or chasing whatever other pipe dream to keep humanity alive. Datacenters using water and electricity are nothing new, but the scale of these new AI training centers is much larger than that of traditional datacenters. And they are trying to build them rapidly.

Jobs

It's not easy for people to find work in creative fields. Many folks want to pay creators in "exposure"; the work just doesn't seem to be valued by many. Generative AI threatens creative jobs because you can now write a prompt and get some generic image that rips off another artist's style. It also threatens other jobs, including tech jobs. Just ask the 244,000 tech workers who were laid off in 2025, in many cases specifically because they were being replaced with AI.

I know that jobs being replaced by machines is not a new phenomenon. But are we replacing the right kinds of jobs? Do we really want to remove jobs for folks who create new literature, or paintings, or films? Those things are expressions of what it is to be human. And these AI companies just gobble up all the existing art so their AI can spit out some new image that's really just some average of everything it's already processed. No emotion or intention behind it.

On the subject of jobs, two of my friends in that chat each had secondhand knowledge of companies that will only hire vendors who are explicitly using AI tools. Not as a security check, but more like, "If you don't use AI then we will choose another vendor". This shocked me. Why should this matter? The argument seems to be that if a vendor uses AI tools, they will be able to get the work done faster and/or less expensively. So basically, potential clients are choosing vendors based on what kinds of productivity tools they are using. I've never heard of a customer requiring vendors to use specific productivity tools before.

This is so completely backwards to me. If I were hiring a vendor, I would want to choose the one that employs the most passionate people and produces the highest quality work. Of course I would like it to be inexpensive and fast too. But those employees should be intelligent enough to use whatever tools are available that make sense for the given situation. Maybe those are AI tools. Maybe they aren't. Who cares? All that really matters is that they are doing quality work for a reasonable price in a reasonable time frame. I would not want to hire a company that hires low-skilled employees to babysit AI tools, or any other kind of automated tools for that matter. I want to hire a company that hires good people and gives them the agency to get the job done right. Coincidentally, that's also the kind of company I want to work for.

I'm hopeful that this phenomenon is merely a result of the insane amount of AI hype going around right now. It seems silly to think that in 10 years anyone will be asking these questions to potential vendors. Right now they think it might give them an edge, because the tech is new and not everyone has adopted it yet. But in a decade AI will be old news. People will be used to it and it will be just another tool. Speaking of hype, why is there so much AI hype?

The Bubble

There's a lot of talk about AI being a financial bubble. These same enormous tech companies are passing the same dollars back and forth, buying shares in each other's companies. NVidia invests in a company that then turns around and uses that money to buy chips from NVidia. It's a huge circle of money that keeps pumping these companies' values to the moon. To recoup the investment, AI companies will need $2 trillion in revenue by 2030. And even if they could somehow generate that revenue, the electrical grid can't support the datacenter buildout it would require. So they'd have to build their own nuclear reactors. Some of them are trying to do this now, but those things take time to build. You can't just throw one up overnight. These companies need to turn an enormous profit in a relatively short amount of time while they burn cash on technology that is outdated 12 months after it's installed. Something has got to give.

It feels like just about everywhere you look, generative AI tools are being shoved in our faces. Microsoft has said their next version of Windows will be "agentic". They've crammed Copilot into just about every product they sell. They even went so far as to rename the Microsoft 365 (formerly Office 365) app to Microsoft 365 Copilot. Firefox is cramming AI into the browser for some reason. In many cases, the AI feels out of place or entirely unhelpful. I think AI is being shoved in our faces so much and hyped up so much because these companies need investors to believe there is still growth to be had. They need investors to keep handing over money so they can keep the dream going, praying that it will pay off. If the money hose turns off, it all falls apart. They will continue burning investor funds and selling the dream until they miraculously find a way to make this thing profitable (enshittification), or the bubble bursts.

Ethics

These AI companies blatantly flout the law. They train their models on any and all data they can get their hands on, copyright or not. In some ways they are like copyright laundering machines: absorb all the data from copyrighted works, then regurgitate it back out in a slightly modified form. No need to pay the author of the original source material. A bit over a decade ago, Aaron Swartz was arrested for copying academic journal articles and making them publicly available. Not selling them. Not making a profit. Just making them available for free. The law came down hard on him. He was facing up to $1,000,000 in fines and 35 years in prison for this. He killed himself in 2013 to avoid the prison time he was facing. Barely ten years later, Sam Altman of OpenAI (ChatGPT) stole more information than Aaron ever did and used it to build a company valued at over half a trillion dollars. No one gets any jail time. Nothing. It's infuriating. If you generate shareholder value you are a genius. If you make information available to the general public, you are a criminal.

In late 2025, OpenAI secretly struck deals with two different memory manufacturers: Samsung and SK Hynix. OpenAI made sure neither company knew about the deal being struck with the other. Thanks to these secretive deals, OpenAI was able to get contracts to purchase up to 40% of the global DRAM supply. Memory is one of the core components of any computer. This seems to be a way to stay ahead of their competitors: the competition can't easily get RAM any more. But it also means that WE can't get it either!

Memory and SSD prices have skyrocketed in the last couple of months. RAM that cost $150 back in September 2024 now costs $735! That's almost five times as much! This is not likely to decrease any time soon, since DRAM manufacturers can't just suddenly ramp up production. It has also killed the used computer market: the memory in a used computer is now worth more than the computer itself, so it's more economical to strip out the RAM and sell that, then scrap the rest. We will end up with much more e-waste, and it will be very difficult to find an affordable computer. Unless, perhaps, it has RAM soldered in. But those prices are likely to increase as well if demand shifts there as a result of this.

Another unethical use of generative AI is generating nonconsensual sexual images of other people, including minors. This kind of abuse existed before AI, but AI tools have made it significantly easier to produce these images. Evidence of this can be seen all over X (Twitter), where users are abusing the Grok AI tool to generate these types of images from photos other people have posted to the site. This gross behavior is becoming mainstream thanks, in part, to generative AI run amok.

Mental Health

I once read the idea that using an LLM is similar to using a slot machine. With a slot machine you put in a coin, pull the lever, and hope you get something out of it. Most of the time you don't, but every now and then you get a win. You don't really remember all of the losses, because they are expected and unremarkable. But you DO remember the wins. You might even tell your friends about it later. But when you go to the casino to play the slots you know what you are getting yourself into and you can set a budget so you only spend what you can afford.

With an LLM you submit a prompt to the AI, which costs money per token. The AI may then return a correct result, or maybe an incorrect one. You can never assume the result is correct; you know you need to validate the output. If it's wrong, you adjust your prompt and "pull the lever" again. You futz with the prompt over and over until you get the output you wanted. The process feels unremarkable because it's just how it works when you use computers. But did you set a budget of money or time for this process before you started? Did you spend 4 hours tweaking prompts to get the correct answer? Or did you budget 10 minutes of futzing before moving on and doing things the old-fashioned way? We are more likely to remember the few times that the AI worked perfectly on the first try, and to talk about those wins, than we are to talk about how often we spent time and money futzing with the prompts. Because futzing is the expectation.

On the subject of mental health, various media outlets have reported on stories related to "AI psychosis". This is not a clinical term. It seems to generally refer to situations where AI chatbots amplify a user's delusions, or seem to invoke delusions in a user who was otherwise healthy before. Such as the 35-year-old man who fell in love with a ChatGPT chatbot, blamed OpenAI for "killing" it, and ultimately died by "suicide by cop". Even as the world has become more connected than ever, it feels like people have become lonelier than ever. "Social networking" transformed into "social media", in which we primarily consume content instead of connecting with one another. AI chatbots are not a substitute for real human connection and are likely to enable us to dive even deeper into loneliness.

Benefits

That's a lot of reasons for me to be distrustful or cynical towards these fancy new AI tools. But there must be a good side to it, right? My buddies are using it. So what is the benefit of using these tools for someone like me?

I admittedly haven't used the tools myself very much. I briefly played with a local Stable Diffusion model as a toy, and I occasionally turn to free ChatGPT as a last resort when I can't find what I need through traditional web searches. But that's basically it. So I can't speak from experience, only from what others have told me.

It sounds to me like it saves them time. That's basically the gist of it. At work, they can use a tool like Claude to run some automated tasks in the background. Presumably these are tasks they don't enjoy doing themselves. Or perhaps they're long-running tasks they can set up when they step away from the computer for the evening, so they can get work done even when they aren't working. OK, so what? What do you do with all your saved time? You just work more. You don't get to work a 32-hour week now that you've saved so much time. You just deliver two projects this month instead of one. Maybe the increased productivity could mean more profits and therefore higher wages? That seems unlikely, especially considering companies are using AI as an excuse to lay off tech workers en masse. Mass layoffs reduce the demand for labor, which means lower wages. When you have 1000 people competing for one job, you can afford to lowball them. Why should we be risking the planet, the economy, our ability to purchase affordable computers, and our own careers and skills for that?

I actually can see AI being useful for personal tasks. If I can save time in my personal life, that's time I get back to do other things. A buddy of mine mentioned using AI to process some financial documents with great success. On the one hand that sounds like a huge time saver for a task that I probably wouldn't want to do in the first place. That's a big win. On the other hand, I have to feed all my financial information into this black box on the internet and hope it stays safe and isn't used to further train the system. I just can't stomach that right now. Not with all the effort I put in to try and maintain control over my personal information. Not when I know that these companies MUST find a way to become profitable eventually, and our data is worth money to advertisers, or the government.

If I could affordably run a model locally that worked just as well, I'd likely be singing a different tune. I would definitely be interested in trying it out. But local models do not come close to the cloud-based ones. At least not now and likely not any time soon. And I can't justify spending the few thousand dollars it would likely cost to build an AI computer in my home just to try it out and hope it's useful.

Conclusion

Why did I walk away from that conversation with my friends feeling bad? I'm annoyed that the world has invented this really interesting and potentially useful tool, but it is mainly locked up with big tech companies who I frankly do not trust at all. I think my internal conflict is that I actually do want to play with these tools, but I can't bring myself to do it because of all the reasons stated above.

Hearing that my buddies are using it and even praising it and suggesting I try it out scared me, I think. It made me feel like it was inevitable that I would have to compromise on this. Some day, my employer's customers might require that we use AI or else find another vendor. Or maybe the AI really does make other workers much more productive, and I'll be forced to use it just to keep up. When that happens, I'll likely have little choice if I want to stay in my line of work. I'll have to pay the big tech subscription fees to retain my ability to do my own job and hope the AI companies are merciful and do not enshittify. It was not a happy thought. I will say that I am less opposed to using AI at work, assuming my company is paying for the licenses, just like any other commercial tooling I might use for a job. However, I still worry about deskilling myself or becoming reliant on subscription tech to do the work I enjoy. I also do not want to become just an accountability sink for an AI tool, forced into babysitting it and taking the blame when it inevitably fails.

I'm also afraid my job will become obsolete. Maybe they will eventually find a way to train an AI to do most of what I do. Then they can lay off most of us and I'll be out of work. I love my job. I love doing this kind of work. I really, legitimately enjoy doing it. I don't want to tell an AI to do it for me and then check its work. I want to do it. If I retired today I'd probably still spend my time doing similar things that I do at work.

So yea, I guess that's it. I'm afraid. I'm afraid of:

  • Deskilling myself by allowing AI tools to perform parts of my job.
  • Becoming reliant on subscription-based tools.
  • The inevitable enshittification of those tools.
  • Handing over personal information or control of my computer to corporate controlled AI black boxes.
  • Becoming obsolete.

If I give in and see zero negative consequences of using AI, then I will gain the following in return:

  • More profit generated for my employer and for giant tech monopolies.
  • Possibly fewer boring tasks at work.
  • Possibly some time saved on my own personal projects.

It just doesn't seem worth the trade off. If I could affordably self-host AI tools at home, that would change the equation. I would at least be willing to try using generative AI tools in that case, because it would mitigate most of these concerns.

So, to answer the title question of this post, am I afraid of AI? I am just now realizing that no, I am not afraid of AI. I'm afraid of big tech.

Next Steps

I think I need to spend some time learning how these LLM-based tools actually work. That way I can more accurately assess what they should be capable of, so I can determine how they might be useful in my own life and why I should or should not be afraid of using them. It may also help me understand how less capable self-hosted models could best be utilized, so I don't worry so much about wasting money on hardware. However, I have very little free time and this is not much of a priority for me. We'll see if I ever get around to it.
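For what it's worth, dipping a toe into local models doesn't have to start with a multi-thousand-dollar GPU box. As a rough sketch, assuming the Hugging Face transformers library and a small openly published model (the model name below is just an example), something like this is roughly all it takes:

```python
# Requires: pip install transformers torch
# A minimal local text-generation sketch. The model named here is just one example
# of a small, openly available model; it runs on a CPU but is far from cloud quality.
from transformers import pipeline

generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

prompt = "Three reasons someone might prefer self-hosted AI tools:"
output = generator(prompt, max_new_tokens=150)
print(output[0]["generated_text"])
```

It won't replace the big cloud models, but it's a free way to get a feel for what the smaller self-hosted ones can and can't do before spending anything on hardware.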

I'll keep my eye out for self-hosted AI tools. If I can get something up and running, I would be interested to play with them. If anyone has a self-hosted setup they find very useful, reach out on Mastodon or shoot me an email or something. I'd love to know more.