
Guan Lei Ming

Technical Director | Java

AI: Good Medicine or Poison?


Harari's view has been confirmed to some extent, because AI is developing so fast that it has exceeded even human expectations. We live in an era of rapid technological change, one that lets people see AI's enormous potential but also its capacity for harm. The application of some AI algorithms has, for example, created conflict and confrontation in fields that were previously unrelated to it. In medicine, AI has a growing impact on treatment outcomes, yet it has also triggered ethical and moral controversy.

Problems like this are not accidental; they reflect weaknesses of human beings themselves, such as greed, aggressiveness, and short-sightedness. These flaws leave us full of ambition for technological development while we routinely ignore the risks it brings. It is much like the birth of the car: initially a tool to help people travel, but as the technology advanced, the car also became a machine capable of causing deadly accidents.

We therefore need to take measures to keep the development of AI from getting out of control. Harari argues that governments should legislate to require artificial intelligence companies to spend at least 20% of their budgets on researching and developing safety measures, to ensure that the AI they build does not run out of control and does not harm social order or people's psychological well-being. In his view, it is like learning to drive: we must first learn how to apply the brake, and only then how to apply the accelerator. If we want to develop AI technology, we need to put preventive measures in place first, so that it can deliver genuine value rather than, like some technologies before it, leading to the collapse of social order and the degradation of human nature.

2024-10-02