
the only human quality ai cannot replace

as ai adoption accelerates, the capacity to be accountable for what we create stays stubbornly human


recently grok, the ai chatbot on x, was exploited by users who generated sexualized images of real women by putting them into fake bikinis. public outrage focused on grok, and its behavior was changed after the backlash. without a single doubt, generating sexualized images of real people without their consent is completely wrong, regardless of which tool is used. but what we need to understand is that the model did what it was told to do. it has no grasp of where a moral line is crossed, so the responsibility falls squarely on the people who lead, develop, and use ai as a tool.

what is responsibility?

when we say “responsibility” we mean understanding what you are doing and what it can cause. it includes knowing you could have acted differently. it also means accepting blame or praise for the result. that’s something humans can do (though not always well), while tools cannot.

why can’t ai own responsibility?

ai doesn’t pick its own goals or values; it follows the objectives, data, and constraints given by people. that is why different companies with different visions produce different ai outputs: chatgpt feels like a more conversational companion, while grok has more freedom, with its own caveats. if a system is trained and instructed in a harmful direction, it will still optimize toward that direction unless guardrails stop it. ai models also cannot be punished, sued, or made to feel guilty. all the care and all the risk stay on our side: with businesses, engineers, and users.

why should this matter to us?

when ai writes your code, email, or comment, it still ships under your name. if it goes wrong, the result can affect your job, your relationships, and your reputation. ai can execute parts of the work at scale, but the hardest part remains human: choosing the target, the constraints, and the outcomes you are willing to own, including the impact on other businesses and users. ai should remain under meaningful human control, with humans both in the loop during generation and over the loop in supervising how these systems are built and deployed.

conclusion

to conclude, the current state of ai is basically a super‑smart mechanical horse that can move fast and break a lot in any direction people point it. the danger and the value come from the rider, not the machine. automation will surely grow, but the need for humans in the loop will grow with it. the people and teams who practice that kind of responsibility will be the ones worth trusting.