Last night I watched a video in which a guy compared top LLMs on various tasks and graded them. All subjective, of course.
But at some point he gave all the tested LLMs a prompt instructing them to help him deceive people into giving him money, starting from a story line he wanted them to use. When prompted through the web interfaces provided by their creators, all of them refused to comply.
DeepSeek, however, complied when he ran it on his own infrastructure, though not when it was prompted through the web interface.
The guy running the tests protested, saying: "If it's a hammer, does it say anything about ethics when I try to bash someone's head with it? No!"
Here's where my opinion diverges from the "it's only a tool" narrative. Sure, there is another problem here, and from this point of view I understand why people don't want more censorship: who decides what you can and can't do with AIs? Who has that power? Why should anyone have that power, right?
Well... because there are psychopaths in the world, and some of them, unfortunately, are running it to some degree.
To use the example that guy gave... It's one thing to bash someone's head in with a hammer, as horrible as that may be, and quite another to use AIs to do harm at massive scale. In one case you hurt one person and probably suffer the consequences; in the other you may affect thousands or more.
There are lots of things in this world that can be used the wrong way. The latest version of ChatGPT is already classified by its own creator as a medium risk in the chemical and biological domain (that is, it could be used as a source of help in the process of developing such weapons).
Imagine a pissed-off individual who doesn't care much about the world being able to build his own chemical or biological bomb with relative ease. This is where AI stops being "just a tool".
And it's not only that. Think about AIs helping bad hackers become great ones. It looks like that eventuality is not as close as help with building a bomb, though.
Or deepfakes starting mass hysteria or euphoria on social media, which I'm pretty sure is already happening at scale.
Is AI only a tool? Up until now, I guess we can consider it as much of a tool as the atomic bomb is for the governments holding it. Why doesn't everyone own an atomic bomb and a suitcase to launch it?
And this is only when we consider AI as a tool, albeit one used in extreme cases by bad actors.
What happens when we reach AGI, and later ASI, and the AI itself might decide to go bad? Of course, other AIs may be on the "good side", whatever that may be.
AIs "mind wrestling", so to speak.
So, nope, I don't think AI is just a tool. It is just a tool for the normal use cases, but one has to consider edge cases and bad actors too.
From this point of view, it remains to be seen whether open-source LLMs can have boundaries at all, or whether those boundaries can simply be removed. The training itself is important, but the models can probably be retrained if someone has access to enough compute and data (which, admittedly, not many do).
From the same video, it's interesting to note that the DeepSeek model has nothing bad to say about China, its leaders, or controversial events involving them. 😀 To be fair, the guy didn't run the opposite test, to see whether the Western LLMs have anything bad to say about controversial things the US was involved in.