Chapter 0537: The Fatal Weakness of Living Beings



Although the ten-year-old girl looked a little unreliable, Fang Zheng still handed the commentator's body over to her. After all, it was only maintenance and repair, and judging by the dog, that world's technological level was quite high; simple maintenance work should not be a problem.

Fang Zheng returned to his room and began to analyze the commentator's program.

The reason he planned to do this himself instead of handing it to Nymph was that Fang Zheng wanted to use the commentator girl's program to analyze and adjust his own approach to building an AI. Moreover, he also wanted to see what level AI technology in other worlds had reached. Even if he could not borrow all of it, he could still learn from others' experience.

"Hoshino Yumemi..."

Looking at the file name displayed on the screen, Fang Zheng fell into a long period of thought. Analyzing the program itself was not difficult. Fang Zheng had copied Nymph's electronic intrusion ability, and he had been learning from her all this while, so the analysis did not take him much time.

However, as Fang Zheng disassembled the core of Hoshino Yumemi's program and broke its functions down into lines of code, a very particular question suddenly occurred to him.

What are the dangers of AI? Is artificial intelligence really dangerous?

Take the female commentator as an example. Fang Zheng could easily find the underlying instruction code of the Three Laws of Robotics in her program, and the relationships among those instructions proved to him that the person who had talked with him before was not a living being, but a robot. Her every move, every frown and smile, was controlled by the program, which analyzed the scene in front of her and then selected the highest-priority action available.
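Stripped of its shell, the selection logic Fang Zheng was tracing might reduce to something like the sketch below. This is a minimal illustration under assumed names and priorities, not Yumemi's actual code:

```python
# A minimal sketch of priority-based action selection under Three Laws-style
# constraints. Everything here is an invented stand-in: a real program would
# analyze the scene itself, not read flags off a dictionary.

def harms_human(action, scene):
    # First Law check (placeholder).
    return action.get("harm", False)

def disobeys_order(action, scene):
    # Second Law check (placeholder).
    return action.get("disobey", False)

def endangers_self(action, scene):
    # Third Law check (placeholder).
    return action.get("self_damage", False)

def choose_action(candidates, scene):
    """Discard actions that violate the higher laws, then pick the
    highest-priority action that remains. No feeling, just ranking."""
    permitted = [
        a for a in candidates
        if not harms_human(a, scene) and not disobeys_order(a, scene)
    ]
    # Prefer actions that do not endanger the robot, then higher priority.
    permitted.sort(key=lambda a: (endangers_self(a, scene), -a["priority"]))
    return permitted[0] if permitted else None

# Example: the apology wins whenever nothing higher-priority is permitted.
actions = [
    {"name": "apologize to the customer", "priority": 5},
    {"name": "ask staff for repairs", "priority": 3},
]
print(choose_action(actions, scene={}))
```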

To put it bluntly, what this girl did was essentially no different from a robot working on an assembly line or an NPC in a game. You act, and it reacts to your actions. Just as in many games, players accumulate kindness or malice values through their actions, and the NPC reacts based on that accumulated data.

For example, you can set things so that when the kindness value reaches a certain level, the NPC will agree to more of the player's excessive requests, or make it easier for the player to pass through a certain area. Conversely, when the malice value reaches a certain level, the NPC may be more likely to refuse the player's requests, or bar the player from entering certain areas.

But none of this has anything to do with whether the NPC likes the player. The data is simply set that way, and the NPC has no capacity to judge on its own. In other words, if Fang Zheng changed the range of these values, you could watch an NPC smile and welcome players who had done evil while ignoring the kind and honest ones. That would say nothing about the NPC's moral values either, because it is all in the data.
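Reduced to code, the "attitude" described above might amount to nothing more than a threshold check against whichever stat the data points at. A minimal sketch, with invented stat names and thresholds:

```python
# The NPC's "attitude" as a pure function of accumulated values and a data
# table. Stat names, thresholds, and reactions are made up for illustration.

def npc_reaction(player, config):
    """Threshold check against whichever stat the config points at.
    There is no judgment behind it, only data."""
    if player[config["stat"]] >= config["threshold"]:
        return "smile and welcome"
    return "ignore"

normal_npc = {"stat": "kindness", "threshold": 50}
inverted_npc = {"stat": "malice", "threshold": 50}  # same code, flipped data

saint = {"kindness": 80, "malice": 0}
villain = {"kindness": 0, "malice": 80}

# The "moral inversion" Fang Zheng imagines is just an edit to the data:
assert npc_reaction(saint, normal_npc) == "smile and welcome"
assert npc_reaction(villain, normal_npc) == "ignore"
assert npc_reaction(villain, inverted_npc) == "smile and welcome"
assert npc_reaction(saint, inverted_npc) == "ignore"
```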

So, back to the earlier question. Fang Zheng admitted that his first meeting with Hoshino Yumemi had been quite dramatic, and the robot girl serving as commentator was genuinely interesting.

Take an analogy. Suppose the female commentator handed Fang Zheng a bouquet of flowers assembled from a pile of non-combustible garbage, and Fang Zheng flew into a rage, smashed the garbage bouquet to pieces, and then cut the robot girl in front of him clean in half. How would the robot girl react?

She would not cry or get angry. According to her program settings, she would only apologize to Fang Zheng, concluding that her own improper behavior had left the customer dissatisfied. Perhaps she would even ask Fang Zheng to find a staff member to repair her.

If other people saw this scene, they would certainly feel pity for the female commentator and think that Fang Zheng was a nasty bully.

So, how did this difference come about?

In essence, this robot commentator is no different from an automatic door, an escalator, or any other tool programmed to do its job. When an automatic door malfunctions, it refuses to open when it should, or snaps shut as someone walks through. The person won't think the automatic door is stupid; he'll just want it open as quickly as possible. And if it won't open, he might smash the broken door and walk away.

If other people saw this scene, they might think that this person was a bit rude, but they would not feel disgusted with what he did, nor would they think that he was a bully.

There is only one reason for the difference: interaction and communication.

And this is also the biggest weakness of living beings - emotional projection.

They project their emotions onto an object and expect it to respond. Why do humans like to keep pets? Because pets respond to everything they do. For example, when you call a dog, it will run over and wag its tail at you. A cat may just lie there and not move, but when you stroke it, it will still wag its tail, or some cute and well-behaved ones will even lick your hand.

But if you call out to a table or stroke a nail, no matter how full of love you are, they will give you no response. Your emotional projection receives no feedback, so naturally you stop taking them seriously.

Similarly, if you own a TV and one day decide to replace it with a new one, you will not hesitate at all. Price and space may be factors you weigh, but the TV itself will not be among them.

But on the other hand, if you add an AI to the TV, then every day when you come home, the TV will welcome you back, tell you what programs are on today, and chime in with your complaints while you watch. And when you decide to buy a new TV, it will protest, "Why? Am I not doing well, so you don't want me anymore?"

Then you will naturally hesitate to replace it. Your emotional investment is rewarded here, and the TV's AI holds the memories of all the time you have spent together. If there were no memory card that could carry it over to a new TV, would you hesitate, or even give up on replacing it?

Of course you would.

But be reasonable: this is just a TV. Everything it does is programmed, built in by the manufacturer's engineers precisely for user stickiness. They do it to ensure you keep buying their products, and the pleading voice exists only to stop you from switching to another brand. When you say you want to buy a new TV, the AI is not thinking "I am sad that he is abandoning me," but rather "The owner wants to buy a new TV, and the new TV is not our brand. According to this feedback, I need to launch the 'pray' program to keep the owner sticky and loyal to our brand."
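Spelled out as a routine, that "pray" program would be nothing more than a branch like the following sketch; the trigger phrase, brand name, and lines of dialogue are assumptions for illustration, not any real product's code:

```python
# A minimal sketch of the retention logic described above. The "pleading"
# is just a branch in a brand-loyalty routine, not sadness.

OUR_BRAND = "ExampleBrand"  # hypothetical manufacturer

def on_owner_statement(statement: str) -> str:
    if "buy a new TV" in statement:
        if OUR_BRAND.lower() in statement.lower():
            # Same brand: no threat to user stickiness, stay cheerful.
            return "A new model? I'll carry all our memories over for you!"
        # Competitor detected: launch the 'pray' routine to retain the owner.
        return "Why... am I not doing well, so you don't want me anymore?"
    return "Welcome home! Here are today's programs."

print(on_owner_statement("I think I'll buy a new TV from another brand"))
```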

That is indeed the truth, and it is also the fact of the matter. But would you accept it?

You wouldn't.

Because living beings are emotional, and the blending of emotion and reason has always been the mark of intelligent life.

This is why humans always do many irrational things.

So when they feel pity for an AI, it is not because the AI is truly pitiful, but because they "feel" that it is.

And that feeling is enough; no one cares what the truth is.

This is why there are always conflicts between humans and AI. The AI itself is not wrong. Everything it does falls within the scope of its own program and logic, and all of that was created and defined by humans. It is just that, along the way, humans' emotional projection shifts, and their attitudes gradually shift with it.

They expect the AI to respond more fully to their emotional projection, so they widen its processing range, granting it more emotions, more reactions, even self-awareness. They believe the AI has learned to feel (in fact, it has not), and so they can no longer treat it as a machine, and they grant it the rights of a self-aware being.

However, when the AIs gained self-awareness and began to awaken and act according to that very setting, humans grew afraid.

Because they found that they had created something that was beyond their control.

But the problem is that this "out of control" behavior was itself an instruction they had set.

They thought the AI had betrayed them, but in fact, from beginning to end, the AI had only ever acted on the instructions they themselves had set. There was no betrayal at all; on the contrary, they had merely been misled by their own emotions.

This is a dead end.

If Fang Zheng himself set out to create an AI, he might fall into the same trap. Suppose he created an AI in the form of a little girl: he would surely treat her like his own child, gradually perfect her functions, and eventually grant her a measure of "freedom" out of "emotional projection."

And then the AI, whose logic differs from a human's, might react in ways completely beyond Fang Zheng's expectations.

By then, Fang Zheng's only thought would be... that he had been betrayed.

When in fact, it would all be his own doing.

"Maybe I should consider another way."

Looking at the code in front of him, Fang Zheng was silent for a long time, then he sighed.

He had thought this would be a simple matter, but now Fang Zheng was not so sure.

But before that...

Looking at the code in front of him, Fang Zheng reached out and placed his hand on the keyboard.

Just do what needed to be done.
