Why Treating AI Like a Person Is the Future

Anthropomorphism is the act of attributing human characteristics to non-human entities. We are naturally inclined to do so: we imagine seeing faces in clouds, assign intentions to the weather, and converse with our pets. It is no surprise, then, that we are tempted to anthropomorphize artificial intelligence, especially since conversing with large language models (LLMs) feels like talking to a real person. Even the developers and researchers who design these systems can fall into the trap of using humanizing terms to describe their creations, starting with terms like “machine learning.”

This concern may seem trivial. After all, isn’t it just a harmless quirk of human psychology, a testament to our ability to empathize and connect? But many researchers are deeply concerned about the implications of acting as if AI were human, both ethically and epistemologically. They ask important questions: Are we being fooled into believing that these machines share our feelings? Could this illusion lead us to disclose personal information to these machines, without realizing that we are sharing it with companies? How does treating AI as a person blur our view of how it works, who controls it, and how we should relate to it?

I am aware of these real risks. To be clear, when I say that an AI “thinks,” “learns,” “understands,” “decides,” or “feels,” I am speaking metaphorically. Current AI systems do not possess consciousness, emotions, a sense of self, or physical sensations. So why take the risk? Because, as imperfect as the analogy is, working with AI is easier if you think of it as an alien person than as a human-made machine. And I think it is important to get that message across, even with the risks of anthropomorphism in mind.

Not quite software

AI, which is made up of complex software, is often perceived as a tool exclusively for coders. This perception shows up everywhere: IT departments are often put in charge of companies’ AI strategy, computer scientists are assumed to be experts in predicting the social changes that AI could bring about, and, most importantly, many people seem reluctant to use AI because they “don’t know anything about computer science.”

It’s like saying that, since we are made of biochemical systems, only biochemists should deal with humans. In fact it’s worse than that: it’s like saying that only chemists should be allowed to paint, because only they understand the molecular composition of pigments. Why should we let artists, who may be completely ignorant of the composition of their paints, use such complex chemistry? And in reality it’s worse still, because even computer scientists don’t always understand why LLMs are able to do certain tasks.

LLMs are software, but they don’t work like most software. They are probabilistic and largely unpredictable, producing different results from the same inputs. While they don’t think in the human sense, they generate simulations of human language and thought that, as far as we can tell, are original enough to surpass most humans in creativity. They have been perceived as more empathetic and more accurate than human doctors in controlled trials. Yet they are also limited in surprising ways, such as their inability to perform backward reasoning.

LLMs are essentially a very advanced form of autocomplete. So how can such autocomplete accomplish these tasks? The answer, so far, as described in an excellent overview in MIT Technology Review, is that “no one knows exactly how (or why) it works.”
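To make the “advanced autocomplete” idea concrete, here is a minimal, self-contained sketch of next-token sampling. The prompt, vocabulary, and scores below are made up for illustration; a real LLM scores tens of thousands of tokens at every step, but the sampling logic is the same, and it is also why the same prompt can produce a different answer on each run.

```python
import math
import random

def sample_next_token(logits, temperature=0.8):
    """Pick one token from a score table using softmax sampling.

    Higher temperature flattens the distribution, so lower-scoring
    tokens get picked more often; a temperature near 0 approaches
    always choosing the top-scoring token.
    """
    tokens = list(logits.keys())
    scaled = [logits[t] / temperature for t in tokens]
    peak = max(scaled)
    weights = [math.exp(s - peak) for s in scaled]  # unnormalized softmax
    return random.choices(tokens, weights=weights, k=1)[0]

# Hypothetical scores a model might assign to the word following
# the prompt "The painter picked up her ..." (toy numbers, not real output).
next_token_logits = {"brush": 3.1, "palette": 2.4, "phone": 0.9, "courage": 0.2}

for _ in range(3):
    print(sample_next_token(next_token_logits))
# Same prompt, same scores, yet the printed word can differ from run to
# run: generation is sampling from a probability distribution, not a
# deterministic lookup.
```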

The result is that working with these AIs is downright weird at times.

The day ChatGPT said no to me

I asked ChatGPT to help me clean up an idea. I often have ideas in my head, and I ask him, in voice chat on my phone, to summarize what I have just told him. I sometimes talk to him for several minutes, going back over what I said three sentences earlier; in short, really in draft mode… If the clients I train in the art of prompting saw me, I would lose all credibility.

Here, for example, is the transcription.

My request: I wanted to create a concept (a name, a logo) to put on our graphic creations made by AI. I wanted to show that creation is not done ONLY by AI, but rather WITH AI. When we create a visual with Midjourney, for example, we spend time imagining it, describing it (the prompt), and then iterating to get the right image, which we often end up retouching in Photoshop. This work cannot be summed up by a “Made by AI” label. There is real work in the overall creation of the visual, and the use of the tool (whatever it is, a brush or Midjourney) is not the most important part.

So I tried to explain this to him with examples (bear in mind that, since I spoke to him while walking outside, he was not able to transcribe all the words, and the result is not very readable).
