I have written about AI before, and described my sentiment: useful, but to be treated carefully. This, I recognize, is a very neutral view. It also has little practical use: it doesn’t really give me any basis to decide for or against AI in any given situation. I know I want to explore in what regards AI can improve my life, and that of others. But I am also very sure that I don’t want to mindlessly use it anywhere it is an option.

So, like many, I’m trying to figure out what all of this means. It is a complex topic. But it all circles back to value hierarchies.


A value hierarchy is our personal ranking of the things we consider important as human beings. In essence, most disagreements are a matter of values in confrontation with each other. This goes for the conflicts we have with ourselves, with someone else, all the way up to geopolitical conflicts.

But because any decision implies different values, we use relative value: how much one value weighs compared to another. This is what, I believe, the trolley problem is about. It takes two values we hold (wanting to save lives and not wanting to be complicit in a killing) and plays around with them. It puts them in confrontation, to see where we prioritize one or the other.

A more common (and currently hot) example is when people decide to quit their corporate job to pursue their own thing. There are many conflicting interests (or things we value), but to keep it simple, I’ll take two: self-fulfillment vs. security. When leaving your job, you might lose the security of knowing you’ll have a steady income, but you might gain a sense of self-fulfillment. For many, the decision goes one way because self-fulfillment ranks higher than security for them. For others, security moves up the hierarchy: because they have children or sick parents to take care of, or because they are anxious and struggle with uncertainty.

Usually it’s not only two values in conflict with each other, but a whole set of them, each carrying more or less weight depending on a person’s particular situation, experiences, and beliefs.
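If it helps to see that weighting spelled out, here is a deliberately naive sketch in Python. Everything in it – the values, the weights, the scores – is made up for illustration; real hierarchies are messier and mostly implicit. But it shows how the same two options can come out differently for two people purely because their weights differ:

```python
# A toy model of a value hierarchy. All names, weights, and scores here
# are invented for illustration; nothing below is a real decision method.

def total(option_scores, weights):
    # Weighted sum: how well an option serves each value,
    # times how much that value weighs for this person.
    return sum(weights[value] * score for value, score in option_scores.items())

options = {
    "quit the job": {"self_fulfillment": 0.9, "security": 0.2},
    "stay":         {"self_fulfillment": 0.3, "security": 0.9},
}

people = {
    "ranks self-fulfillment highest":      {"self_fulfillment": 0.8, "security": 0.4},
    "has children or parents to care for": {"self_fulfillment": 0.5, "security": 0.9},
}

for person, weights in people.items():
    choice = max(options, key=lambda name: total(options[name], weights))
    print(f"Someone who {person} leans towards: {choice}")
```

The point isn’t the arithmetic; it’s that the disagreement lives entirely in the weights.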


For AI, that means that people have to decide which values are more important. And this becomes a problem in an age in which we don’t know what we value – both individually and as a society.

One value associated with AI is progress. Progress, the story goes, is what we need to strive for. So, naturally, “progressive” technologies are something good, desirable. We just accept this as is, because we’re not used to having an open debate about values. But we don’t think about what progress really means. Progress is not inherently good; it just describes the act of moving forward.

Progress can be desirable. In the context of preventive medicine, for example, AI might be a positive game changer. We should explore it, and I hope that in the future my doctors will have AI tools at their disposal to do a better job.

But progress can also be undesirable. In the context of social media, AI is probably a negative addition, because it will be used to move forward in the wrong direction: making platforms more addictive.

Basically, it’s about taking something we tag as “progressive” and assigning it a value depending on the direction it progresses towards. This helps us decide what we want to endorse, and what not.

In the same way, we need to look at the other things AI implies: speed, comfort, access, understanding… to name a few. These are all things we want at certain times. But they always come at a cost, which may or may not be worth it. Googling something is quicker, more comfortable, and more accessible than looking it up in an encyclopedia. The cost is relatively low: mainly the disappearance of the printed encyclopedia industry, and some of your personal tolerance for boring work. For me, the positives outweigh the negatives. And that’s why, in certain instances, I replaced Google with ChatGPT.1

On the other hand, I also value accuracy. And I value accuracy higher than speed or comfort. So relying on AI for answers I can’t double-check myself is not an option, because I know it is often wrong. Unless I can check the source and validate the findings, I won’t use it.


Now, here is the big question: how can we decide when there are so many unknowns?

I can easily decide whether it makes sense to use ChatGPT or Google to find the answer to a question I have, because the upsides and downsides are relatively clear.

I cannot do that, however, when we don’t have information about how this will affect me, or us, in the long term.

And this is where I prefer to err on the side of caution. There are already signs of how dangerous it is to rely heavily on AI, not only because of its tendency to make mistakes, but also because it impairs cognitive function when overused. Some months ago a Microsoft study came out citing reduced brain activity, critical thinking, memory, and creativity. And this is just the start; I believe we can prepare ourselves for a wave of people dealing with unintended negative consequences of AI use.

I value my brain. I want to stay as sharp as possible, to think critically, and to understand complex topics myself. I sometimes forget that, and have to remind myself that the easier way is not always the best one. This applies not only to my usage of AI, but to technologies in general. I am still figuring out exactly what I want them for, and what not.

There are also moral concerns around how these models have been trained, what they mean for the workforce, and even for the people behind them. OpenAI itself seems to have completely given up the fight for the moral high ground, having now created Sora 2 and hinted at the possibility of erotic content. I do have concerns about what this means for the world and the future, and I want to be more conscious of how my decisions now will shape it.


And this brings us back to my point: these are complex decisions. There are a lot of factors to consider, and as an individual, the only person who can tell you what to do is yourself – based on what you value most. And that’s why the way to go is to examine our values and be honest about which tradeoffs we’re willing to make, where we want to keep AI out, and where we want to tread with caution.


  1. The main reason for this is that I can search in full sentences, using my own argumentation and train of thought, and narrow the results down to what I specifically want. This was never possible before. One tradeoff, as far as I know, is that I’ll get worse at looking up information in older systems like Google. Another is a certain bias in what I get shown, but for polarized topics and opinions I wouldn’t use it anyway – and that was an issue with previous options too. What I don’t have to sacrifice is accuracy, because I can always check the sources (which is essentially what Google was for: finding sources). ↩︎
