Ethics in Artificial Intelligence?
Let’s not fool ourselves: the game has changed, and I have no doubt about that. So let’s reflect a bit on this subject.
I recently attended a LinkedIn News event presented by Ana Prado with a special guest, the distinguished Dr. Alvaro Machado Dias: “Risks and ethical limits in the use of artificial intelligence”. It was very interesting and made me reflect a little more on this provocative and complex issue, one that often goes unnoticed by those of us who work directly with technology on a daily basis.
It’s curious how certain topics awaken in us that almost uncontrollable urge to think a little beyond, to go deeper, to open the mental “Sublime” and start coding ideas as if they were lines of Go code (or any other language, of course). That’s exactly what happened to me. I could simply follow the routine of a software engineer and systems architect, thinking about data structures, algorithms, and Kubernetes pipelines, but with every conversation I heard, something inside me screamed, you know?
“We need to reflect on this more, as humanity,” I thought to myself. And here I am, trying to write, not because it was in my plan for the day, but because it’s as if I were compiling philosophical code that insists on running inside me.
They say I’m an exception, but am I? A technician, an engineer, someone immersed for more than two decades in architecture and programming, but not content with just designing systems. I love it when conversation invades the terrain of philosophy and ethics, those things that have no official documentation, no README on GitHub telling us where to go or what to do, but that demand dense and “human” interpretation. I get really annoying when I get into this subject with colleagues.
AI in the Gray Territory
And artificial intelligence has arrived with full force since the paper “Attention Is All You Need” (Vaswani et al., 2017), which defined the Transformer, a new architecture that eliminated the need for the RNNs (Recurrent Neural Networks) that were then the standard in NLP (Natural Language Processing). Its “self-attention” mechanism captures relationships between words regardless of the distance between them, and this was the watershed moment. Something extraordinary followed soon after, enabling the training of the models we know today: GPT, Claude, Gemini, etc.
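For readers who want to see the idea in code: here is a minimal, illustrative sketch of the scaled dot-product attention at the heart of the Transformer, written in Go. The toy vectors and function names are my own, not from the paper. A query is compared against every key, the scores are normalized with softmax, and the values are mixed accordingly, no matter how far apart the "words" sit in the sequence.

```go
package main

import (
	"fmt"
	"math"
)

// softmax normalizes raw scores into a probability distribution.
func softmax(scores []float64) []float64 {
	max := scores[0]
	for _, s := range scores {
		if s > max {
			max = s
		}
	}
	sum := 0.0
	out := make([]float64, len(scores))
	for i, s := range scores {
		out[i] = math.Exp(s - max) // subtract max for numerical stability
		sum += out[i]
	}
	for i := range out {
		out[i] /= sum
	}
	return out
}

// attention computes scaled dot-product attention for one query
// against all keys/values: softmax(q·K / sqrt(d)) · V.
func attention(q []float64, keys, values [][]float64) []float64 {
	d := float64(len(q))
	scores := make([]float64, len(keys))
	for i, k := range keys {
		dot := 0.0
		for j := range q {
			dot += q[j] * k[j]
		}
		scores[i] = dot / math.Sqrt(d)
	}
	weights := softmax(scores)
	out := make([]float64, len(values[0]))
	for i, v := range values {
		for j := range v {
			out[j] += weights[i] * v[j]
		}
	}
	return out
}

func main() {
	// Three "words", each a 2-dim vector; the query attends to all of them
	// at once, regardless of their position in the sequence.
	keys := [][]float64{{1, 0}, {0, 1}, {1, 1}}
	values := [][]float64{{10, 0}, {0, 10}, {5, 5}}
	q := []float64{1, 0}
	fmt.Println(attention(q, keys, values))
}
```

In a real Transformer the queries, keys, and values are learned projections and the attention runs over many heads in parallel; this sketch only shows the core mechanism.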
All of this sits precisely in this gray territory, if you understand me. It’s technical and mathematical, but at the same time it forces us to ask what it means to be human, what is right or wrong, and what the actual definitions of the words “intelligence” and “knowledge” are, leading us to question and reflect even more on the definitions of words and how they are used nowadays.
I see many people replicating an “absolute concept” as an imposition, treating a word as if it were fixed in time. In my view, it goes far beyond that: a word is timeless, molding itself to each era as something flexible, expandable, continuous, and evolving. The words “intelligence” and “knowledge” were not, and are not, understood in an absolute way. They mean something much broader. I believe they apply to everything from primitive biological processes to their possible manifestation in artificial systems, you know?
Of course, this is just an opinion. I discovered that I love talking about this. I feel something inside, exactly the same feeling as when I’m coding.
Maybe it sounds strange to some, but writing about AI ethics gives me the same energy as opening the terminal, importing a package, and starting to structure an API in Go (yes, Go again). The difference is that here the terms are dilemmas, provocations, questions that sit in goroutines waiting for a return, because we don’t have an immediate, absolute answer. They need reflection and time to be processed, you know? Like what we do in Go when we want to run something asynchronously.
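To make the metaphor concrete, here is a tiny, playful Go sketch (the question and its "answer" are entirely hypothetical): a dilemma runs in its own goroutine and only delivers a result through a channel once it has had time to be processed, while the main flow carries on.

```go
package main

import (
	"fmt"
	"time"
)

// reflect simulates a question with no immediate answer: it runs in its
// own goroutine and delivers a result later, through a channel, once it
// has finally been "processed".
func reflect(question string, answers chan<- string) {
	time.Sleep(50 * time.Millisecond) // reflection takes time
	answers <- "still thinking about: " + question
}

func main() {
	answers := make(chan string)
	go reflect("who decides what is harm?", answers)

	// The main flow is not blocked; the answer arrives when it is ready.
	fmt.Println("meanwhile, life (and code) goes on...")
	fmt.Println(<-answers)
}
```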
Everyone talks about ethics when the subject is artificial intelligence. Everyone’s concern is clear and evident. Questions like “What about copyright?”, “Who is responsible when an AI system makes an error or causes damage?”, “What are the limits of AI?” and so many others we know…
Ethics is discussed in beautiful reports, in corporate PDFs, at international conferences, and even in emerging legislation. Nothing against that, but I prefer to go straight to the point, and the question would be: are we really talking about ethics? Or are we just trying to organize conflicts of interest that clash all the time?
The Myth of Universal Ethics
I believe that we will never achieve “ethics” itself. Ethics as an absolute value, as universal truth, simply doesn’t exist in the contemporary world. What exists are conflicts of ethics. On one side we have the ethics of privacy, on the other the ethics of security. On one side the ethics of freedom, on the other the ethics of profit.
What is ethical for one culture can be immoral for another; what is acceptable for a company can be intolerable for a community. And the AI we know today is not born in a neutral vacuum. AI as we know it needs to be fed, trained, and controlled by humans and organizations (still, I say). And this is where the debate starts to get much more interesting.
Who Really Decides?
Many people think that when we talk about ethics in AI, it’s simply about teaching the machine not to do harm. Beautiful, isn’t it? But the real question that should be asked is: who defines what “harm” is? Who has the power to determine which values will be embedded in algorithms? Who controls the machine, who profits from it, who loses, who decides what is good? These simple questions expose a reality that most avoid facing. I understand it’s a dense and complex subject, but we have to keep questioning, all the time, how we’re going to deal with all of this.
The NYT vs OpenAI Case – A Wake-up Call
A recent example shows the size of the problem. The New York Times obtained a court order forcing OpenAI to preserve all user chats, including temporary ones (wait, temporary too? Can they be stored? Aren’t they temporary? Temporary for whom? I was genuinely curious and in doubt). Look at what this means: a judicial decision can force a global company to change how it handles the data of millions of people, without those people having a voice or a choice. Imagine the dimension of this.
What was temporary becomes permanent. Wasn’t it private? Now it becomes potentially accessible. Crazy, isn’t it? Trust is broken and vulnerability increases, for both individuals and companies. I’m not taking sides or getting into the merits here; I’m just demonstrating the fragility that exists. All of this is very new and recent, something that has come to radically transform what we know.
And you know what’s most serious about all this? We still have no clear jurisprudence, no solid regulatory framework, not even consensus among countries (imagine at a global level) for dealing with situations like this. And this is where I believe we’re stuck in limbo, exactly in a for {}, that is, an infinite loop, subject to every kind of harm and to the momentary interests of those who hold more POWER.
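For non-Gophers, the for {} above is literally Go’s infinite loop. A toy sketch of being stuck until something external intervenes (the "regulation" threshold here is purely hypothetical):

```go
package main

import "fmt"

// waitForConsensus spins in a for {} until an external condition
// (a hypothetical regulatory threshold) finally breaks the loop.
func waitForConsensus(limit int) int {
	attempts := 0
	for { // infinite loop: no exit unless something intervenes
		attempts++
		if attempts >= limit { // hypothetical exit condition
			break
		}
	}
	return attempts
}

func main() {
	fmt.Println("consensus reached after", waitForConsensus(3), "attempts")
	// prints: consensus reached after 3 attempts
}
```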
Trapped in the Infinite Loop
I have no doubt about the complexity this represents, and observed in a global context it’s even denser and more complex. What we see all the time is that interests speak louder than universal principles. To make it easier to understand: companies seek profit, governments seek power, communities seek survival. And AI (artificial intelligence), with all its processing capacity and reach, sits at the center of everything, which is where we see all these interests clashing all the time, like the old game “Asteroids” (believe me, it’s not from my era, but I love it). That’s why, when we talk about AI ethics without talking about politics, economics, and power disputes, you can be sure we’re not talking about the same “ethics” at all.
I see many people defending codes of conduct and best practices as absolute solutions. I wonder: is this really possible? Or is it just a smokescreen, since they’re not asking the fundamental questions that go straight to the point? Who controls? Who profits? Who is negatively impacted? Who decides what is good? If we don’t have the courage to answer these questions, any conversation about ethics will be mere rhetoric.
And what I notice is that we’re giving up our autonomy, little by little. Literally handing over our data in exchange for “convenience.” I’m part of all this too; I’m also trapped, hostage to it all, but that doesn’t mean we can’t reflect on and discuss everything that’s happening. We’ve trusted platforms without knowing how the rules actually work. I notice that we’re allowing them to define the limits of technology in our place. I feel like we’re outsourcing our own responsibility, as if somewhere far away, well hidden, someone were taking care of ethics for us. But are they really?
And the reflection always remains: AI ethics is not a corporate guide, not a checklist of good intentions, but a vast and beautiful battlefield, like the game “Call of Duty”, where values confront each other, where interests impose themselves, and where human beings need to make complex decisions about who will be, or continue being, the protagonist, you know?
We shouldn’t keep pretending that AI ethics is just a beautiful chapter in corporate reports; it is about who will decide the future of our relationship with technology: us or them?
As users and developers, of course, we know that every piece of code has a gigantic chance of containing some kind of bug, especially code generated by AI.
The question here is: are we going to continue debugging our humanity in production or are we finally going to take control of our own ethical code?
What do you think? Are we being protagonists or just users of a system we don’t fully understand?
If you made it this far, I thank you from the heart for this joint reflection. Share your views in the comments and let’s build this debate together, line by line, like good collaborative code.
Hope you enjoy it! 🚀🚀 ☺️