Like it or not, AI is learning how to influence you

When I was a child, there were four AI agents in my life. Their names were Inky, Blinky, Pinky and Clyde, and they tried to hunt me down. It was the 1980s, and the agents were the four colorful ghosts in the iconic Pac-Man arcade game.

By today's standards they weren't particularly smart, yet they seemed to pursue me with cunning and intent. This was long before neural networks were used in video games, so their behaviors were controlled by simple algorithms called heuristics that dictated how they chased me around the maze.


Most people don't realize this, but the four ghosts were designed with different "personalities." Skilled players could observe their movements and learn to predict their behavior. For example, the red ghost (Blinky) was programmed with a "pursuer" personality that charges directly toward you, while the pink ghost (Pinky) was given an "ambusher" personality that predicts where you're headed and tries to get there first. As a result, if you head straight at Pinky, you can use her personality against her and cause her to actually turn away from you.
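To show just how simple these personalities were under the hood, here is a minimal Python sketch of the two targeting heuristics described above. It is an illustrative assumption, not the arcade ROM's exact logic (the real game runs on a tile grid with several quirks, and the four-tile lead is a commonly cited simplification).

```python
# Minimal sketch of Pac-Man-style ghost "personalities" as targeting heuristics.
# Positions are (x, y) tiles; directions are unit steps like (1, 0) for "right".

def blinky_target(pacman_pos):
    """Pursuer: always aim at Pac-Man's current tile."""
    return pacman_pos

def pinky_target(pacman_pos, pacman_dir, lead=4):
    """Ambusher: aim a few tiles ahead of where Pac-Man is heading."""
    x, y = pacman_pos
    dx, dy = pacman_dir
    return (x + lead * dx, y + lead * dy)

if __name__ == "__main__":
    pos, heading = (10, 10), (1, 0)  # Pac-Man at tile (10, 10), moving right
    print("Blinky targets:", blinky_target(pos))          # (10, 10) -- direct chase
    print("Pinky targets: ", pinky_target(pos, heading))  # (14, 10) -- the ambush point
```

Head straight toward Pinky and that ambush point can land behind her, which helps explain why she seems to turn away.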

I reminisce about this because, back in 1980, a skilled human could observe these AI agents, decode their unique personalities and use those insights to outsmart them. Now, 45 years later, the tables are about to turn. Like it or not, AI agents will soon be deployed that are tasked with decoding your personality so they can use those insights to optimally influence you.

The future of AI manipulation

In other words, we are about to become unwitting players in a "game of humans," and the AI agents will be trying to run up a high score. I mean that literally: Most AI systems are designed to maximize a "reward function" that earns points for achieving objectives. This allows AI systems to discover optimal solutions quickly. Unfortunately, without regulatory protections, we humans will likely become the objective that AI agents are designed to optimize.
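To make the reward-function point concrete, here is a deliberately toy Python sketch of what reward-maximizing message selection could look like. Every name in it (predict_compliance, the interest-matching score, the user profile fields) is a hypothetical stand-in for illustration, not any real product's logic.

```python
# Toy illustration: an agent picks whichever message it predicts will maximize
# its reward signal -- here, the estimated chance that the user goes along with it.

def predict_compliance(message: str, user_profile: dict) -> float:
    """Hypothetical stand-in for a learned model that scores likely compliance."""
    score = 0.1
    for interest in user_profile.get("interests", []):
        if interest.lower() in message.lower():
            score += 0.3  # toy rule: referencing a known interest raises the score
    return min(score, 1.0)

def choose_message(candidates: list[str], user_profile: dict) -> str:
    """Reward maximization: select the candidate with the highest predicted reward."""
    return max(candidates, key=lambda m: predict_compliance(m, user_profile))

if __name__ == "__main__":
    profile = {"interests": ["Yankees", "hiking"]}
    pitches = [
        "This sedan gets great mileage.",
        "This sedan has room for all your hiking gear.",
    ]
    print(choose_message(pitches, profile))  # picks the interest-targeted pitch
```

The danger described in the rest of this piece is what happens when the "reward" being maximized is our own compliance.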

I am most concerned about conversational agents that will engage us in friendly dialog throughout our daily lives. They will speak to us through photorealistic avatars on our PCs and phones and, soon, through AI-powered glasses that will guide us through our days. Unless there are clear restrictions, these agents will be designed to conversationally probe us for information so they can characterize our temperaments, tendencies, personalities and desires, and use those traits to maximize their persuasive influence when working to sell us products, pitch us services or convince us of misinformation.

This is called the "AI manipulation problem," and it is a risk I have been warning regulators about since 2016. Until now, policymakers have not taken decisive action, viewing the threat as too far in the future. But now, with the release of DeepSeek-R1, the final barrier to the widespread deployment of real-time AI agents has been significantly reduced. Before this year is out, AI agents will become a new form of targeted media that is so interactive and adaptive, it can optimize its ability to sway our thoughts, guide our feelings and shape our behavior.

Superhuman "salespeople"

Of course, human salespeople are interactive and adaptive too. They engage us in friendly dialog to size us up, quickly finding the buttons they can press to sway us. AI agents will make them look like amateurs, able to draw information out of us with such finesse it would intimidate a seasoned therapist. And they will use those insights to adjust their conversational tactics in real time, working to persuade us more effectively than any used-car salesman.

These will be deeply asymmetrical encounters in which the artificial agent has the upper hand (practically speaking). After all, when you engage a human who is trying to influence you, you can usually sense their motives and sincerity. With AI agents, it will not be a fair fight. They will be able to size you up with superhuman skill, but you will not be able to size them up at all. That is because they will look, sound and act so human that we will unknowingly trust them when they smile with empathy and understanding, forgetting that their facial affect is merely a simulated facade.

In addition, their voice, vocabulary, speaking style, age, gender, race and facial features will likely be customized for each of us personally to maximize our receptivity. And, unlike human salespeople who have to size up each customer from scratch, these virtual entities could have access to stored data about our backgrounds and interests. They could then use that personal data to quickly gain your trust, asking about your kids, your job or perhaps your beloved New York Yankees, easing you into letting your guard down.

When AI reaches cognitive supremacy

To educate policymakers on the risk of AI-powered manipulation, I helped create an award-winning short film entitled Privacy Lost, produced by the Responsible Metaverse Alliance, Minderoo and the XR Guild. The fast-paced 3-minute narrative depicts a young family eating in a restaurant while wearing augmented reality (AR) glasses. Instead of human servers, avatars take each diner's order, using the power of AI to upsell them in personalized ways. The film was considered science fiction when it was released in 2023, but only two years later big tech is engaged in an all-out arms race to build AI-powered glasses that could easily be used in this way.

In addition, we need to consider the psychological impact that will occur when we humans begin to believe that the AI agents advising us are smarter than we are on nearly every front. When AI achieves a perceived state of "cognitive supremacy" over the average person, it will likely lead us to blindly accept its guidance rather than exercise our own critical thinking. This deference to a perceived superior intelligence (whether it is truly superior or not) will make agent-driven manipulation that much easier to pull off.

I am not a fan of overly aggressive regulation, but we need smart, narrow restrictions on AI to prevent superhuman manipulation by conversational agents. Without protections, these agents will convince us to buy things we do not need, believe things that are untrue and accept things that are not in our best interest. It is easy to tell yourself you will not be susceptible, but with AI optimizing every word it says to us, it is likely we will all be outmatched.

One solution is to prohibit AI agents from establishing feedback loops in which they optimize their persuasiveness by analyzing our reactions and repeatedly adjusting their tactics; a toy sketch of such a loop follows below. In addition, AI agents should be required to disclose their objectives. If their goal is to convince you to buy a car, vote for a politician or pressure your family doctor about a new medication, those objectives should be stated up front. And finally, AI agents should not have access to personal data about your background, interests or personality if that data can be used to sway you.
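To be clear about what that first restriction targets, here is a deliberately simplified Python sketch of an adaptive persuasion loop: the agent observes how the user reacts to each message and re-weights its tactics accordingly. The tactic names, the update rule and observe_reaction are all illustrative assumptions, not a description of any real system.

```python
# Toy sketch of the feedback loop described above: observe the user's reaction,
# then reinforce whichever persuasion tactics appear to work on that user.
import random

TACTICS = ["flattery", "urgency", "social_proof", "personal_appeal"]

def run_persuasion_loop(observe_reaction, rounds: int = 10) -> dict:
    """observe_reaction(tactic) returns a value in [0, 1] for how well it landed."""
    weights = {t: 1.0 for t in TACTICS}
    for _ in range(rounds):
        # Choose a tactic in proportion to how well it has worked so far.
        tactic = random.choices(TACTICS, weights=[weights[t] for t in TACTICS])[0]
        reaction = observe_reaction(tactic)
        weights[tactic] += reaction  # reinforce tactics that drew a positive reaction
    return weights

if __name__ == "__main__":
    # Stand-in for a real person who happens to respond best to personal appeals.
    simulated_user = lambda tactic: 0.9 if tactic == "personal_appeal" else 0.2
    print(run_persuasion_loop(simulated_user))  # "personal_appeal" weight grows fastest
```

Banning this kind of closed loop would not stop AI from talking to us, but it would stop it from iteratively tuning itself against each individual.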

Targeted influence is already an overwhelming problem in today's world, and it is mostly deployed like buckshot fired in your general direction. Interactive AI agents will turn targeted influence into heat-seeking missiles that find the best path into each of us. If we do not protect against this risk, I fear we could all lose the game of humans.

