When it comes to AI, there has always been a huge debate. Some people question whether strong AI will ever be achieved, or ever be available to the general public, while others insist that superintelligent AI should be created as soon as possible because it is bound to be beneficial. Developers understand this, but they also recognise that an artificial intelligence system can cause great harm, intentionally or unintentionally, if it is not developed correctly. Research is what we should put our faith in, because it alone will help us prepare for such consequences, letting us enjoy AI in ecommerce while avoiding the pitfalls around it.
How can AI be so dangerous?
Many researchers agree that a superintelligent AI is unlikely to exhibit human emotions such as love or hate, and there is no reason to expect an AI to be intentionally benevolent or malevolent. Instead, when experts consider how AI might become a risk, they point to two likely scenarios.
- AI can be programmed to do something devastating at scale. Autonomous weapons are AI systems designed to kill, and in the wrong hands they could cause mass casualties. Moreover, with countries diving ever deeper into an AI arms race, many fear that the next step could be an AI war. To prevent a weapon being disabled or seized by an enemy, such systems may be designed to be extremely difficult to simply switch off, so humans could lose control of a situation because there is no easy way to stop the system. This risk is present even with today's narrow AI, but it grows as the levels of intelligence and autonomy increase.
- Even if an AI is programmed to benefit a large number of people, it can develop a destructive method for achieving the goal it is pursuing. This happens whenever the programmer fails to align the AI's goals with the goals of the community. Take the example of a superintelligent car: if you order a dedicated, obedient car (programmed to be superintelligent) to take you to your destination as fast as possible, it may well get you there in record time, but covered in broken glass and chased by helicopters, leaving destruction in its wake. It will have done literally what you asked, not what you wanted. Likewise, if a superintelligent system is given an ambitious geoengineering project, it might wreak havoc along the way and view humans who try to control or stop it as a threat.
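The misalignment idea in the car example can be sketched in a few lines of code. This is a toy illustration, not a real planning system: the plans, numbers and function names are all invented. It shows how an optimizer that scores plans only by the stated objective (speed) picks a destructive plan, while the same optimizer behaves sensibly once the harm we care about is written into the objective.

```python
# Toy illustration of goal misalignment. Each candidate plan is
# (name, travel_minutes, harm_caused) — all values are made up.
plans = [
    ("obey traffic laws", 30, 0),
    ("run red lights",    18, 40),
    ("drive on sidewalk", 12, 90),
]

def misaligned_score(plan):
    # "Get me there as fast as possible" — only time counts.
    _, minutes, _ = plan
    return minutes

def aligned_score(plan, harm_weight=10):
    # Same goal, but the harm a plan causes is part of the objective.
    _, minutes, harm = plan
    return minutes + harm_weight * harm

fastest = min(plans, key=misaligned_score)
safest = min(plans, key=aligned_score)

print(fastest[0])  # the literal-minded optimum: "drive on sidewalk"
print(safest[0])   # with harm in the objective: "obey traffic laws"
```

The point is not the arithmetic but the pattern: the optimizer is equally competent in both cases; the only thing that changes is whether human values were encoded in the goal it was given.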
These examples show that the concern about advanced AI is not without reason. A superintelligent AI will be extremely good at accomplishing its goals, but if those goals are not aligned with humanity's, there is a serious issue to be addressed. Prominent figures in technology and science such as Elon Musk, Bill Gates, Stephen Hawking and Steve Wozniak have recently expressed concern about the risks that AI carries, while also noting that artificial intelligence in the ecommerce industry can bring a great deal of benefit.