Elon Musk says that AI, artificial intelligence, is more dangerous than nuclear bombs. A few days ago, Sam Altman, the CEO of OpenAI, the company behind ChatGPT, met Narendra Modi to discuss AI. He is travelling the world because he has built a company that even world leaders see as a potential threat. Today, even a company like Google fears ChatGPT. Whenever we watch movies like The Matrix or Terminator, we imagine that robots and artificial intelligence will enslave us and start controlling the world. Is this science fiction, or is it our real future?
Calculators did not replace mathematicians. Similarly, the argument goes, AI has not been created to replace humans. The line sounds reassuring, but how true is it? IBM has announced that it will pause hiring for roles where AI can do the work that used to be done by humans. And in May of this year alone, AI reportedly replaced around 80,000 jobs.
The logic is simple. AI can handle many entry-level jobs. Instead of hiring a new employee, training them, and paying a salary, it is better for a company to invest in the new technology.
Take Tryiton, a generative AI model where you just upload a few selfies, and the AI generates professional-looking photos and sends them back to you. Compare that with going to a studio for a shoot with a professional photographer. The AI service costs just $17, which is around ₹1,400. Work that would cost you a few thousand rupees in a studio can now be done for about ₹1,400.
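The price comparison above is simple currency arithmetic. A quick sketch, assuming an exchange rate of about ₹82 per US dollar and a ₹3,000 studio bill (both figures are my assumptions for illustration; only the $17 price comes from the text):

```python
# Rough cost comparison from the text.
usd_price = 17          # price of the AI photo service (from the text)
inr_per_usd = 82        # assumed exchange rate, not stated in the text
studio_cost_inr = 3000  # "a few thousand" rupees, assumed for illustration

ai_cost_inr = usd_price * inr_per_usd
print(ai_cost_inr)                    # 1394, i.e. roughly Rs 1,400
print(studio_cost_inr > ai_cost_inr)  # True: the studio costs more
```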
Companies use it for business cards, ID photos, websites, and content marketing. Basically, all the traditional use cases that required a photographer can now be done online, ten times faster and ten times cheaper.
Whenever a new technology arrives, blue-collar jobs are at risk first, then white-collar jobs, and finally creative jobs, because we have always believed that a machine cannot do creative work better than a human. But AI is already creating better art. Photographers, studios, graphic designers, copywriters: all of these jobs are now at risk of being replaced.
But when we ask the question "Is AI dangerous?", the discussion often stops here, at how AI will eat our jobs.
Think of the bullock-cart drivers displaced by the automobile. Those who abandoned their bullocks but never learned to drive were left jobless; the jobs were not simply destroyed, they were replaced, and the required skills changed with them. If the Industrial Revolution destroyed 100,000 old jobs, it also created 100,000 new ones. But AI's danger is not just job replacement. Let's take a step further. About a year ago, while Google was developing an AI, one of its engineers went to the media with a claim, and Google removed him from his job for it. That engineer said: AI is conscious. AI has a soul. AI is risky. Why is that risky? Because of the trolley problem. What is the trolley problem? Consider self-driving cars; they are already a reality. You may have seen videos online of a Tesla-like smart car cruising on a highway at 100 km/h while the driver barely pays attention to the road. The decisions a human driver makes today are the decisions AI will take tomorrow. Now suppose the car is moving, and at a corner of the road two children chasing their ball run into its path.
There is only one way to save these two children: swerve the car hard across the road, where a truck is coming from the other side. In a collision with that truck, the person sitting in the car will not survive. In such a situation, what should the AI do? This is the dilemma that the philosopher Philippa Foot named the trolley problem in 1967.
Now you may be thinking there is no difference between an AI and a human driver here. But there is a difference: responsibility. If a human driver is at the wheel, whatever he does, he is responsible for it. But if the AI decides, who is responsible? And most importantly, who decides what the AI should do?
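To see why "who decides?" is the hard part, here is a minimal sketch of what encoding such a choice would look like. Every name and number below is invented for illustration; no real self-driving system reduces ethics to a lookup table like this:

```python
# Hypothetical sketch of a purely utilitarian crash policy:
# pick the action whose predicted outcome loses the fewest lives.
# Someone still had to decide that "fewest lives" is the rule --
# that decision is the trolley problem.

def choose_action(outcomes):
    """Return the action with the lowest predicted loss of life.

    `outcomes` maps an action name to the number of lives
    predicted to be lost if that action is taken.
    """
    return min(outcomes, key=outcomes.get)

# The dilemma from the text, reduced to numbers:
dilemma = {
    "stay_on_course": 2,    # the two children are hit
    "swerve_into_truck": 1  # the passenger is lost
}

print(choose_action(dilemma))  # prints "swerve_into_truck"
```

A different rule, say "protect the passenger at all costs," would pick the opposite action from the very same data, which is exactly why the question of who writes this rule matters.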
This is just one ethical dilemma. There are many practical problems that every AI company has to confront. And it is not just me saying this; many of the greatest tech minds say the same.
The Future of Life Institute has published an open letter titled "Pause Giant AI Experiments". Even Sam Altman, whose company created ChatGPT, believes we should pause building more powerful AI until we can agree on regulations.
Here, the key phrase is "all companies". Because today, apart from Google and OpenAI, many different companies are developing AI for different purposes, and every company's intentions are different.
This is a xenobot, a robot about a millimetre in size. Its specialty is that it is the world's first living robot. These robots can reproduce themselves: a Pac-Man-shaped xenobot finds loose stem cells, collects them in its "mouth", and a few days later new xenobots are born.
It was developed for medical purposes. With the help of AI, xenobots could deliver medicine to exactly the part of your body where the problem is. Similarly, they could be released into the ocean to collect microplastics and help clean our seas.
But the same technology can be used to develop advanced weapons. With COVID, we saw that a single virus can bring the world to a halt, destroy supply chains, and wreck the economy. If AI were used for biological warfare, targeted viruses could be made to attack only specific people.
It is possible that a country could develop a virus that eliminates only one race, or build robots designed to destroy an enemy's military infrastructure.
Take China, a techno-nationalist nation that spends $450 billion every year on research and development alone. On top of that, in the last year China has attracted $17 billion in foreign investment earmarked solely for AI development. Most of us have only been hearing about AI for the past year or so, but China has been developing AI since the 1980s.
In the 1980s, when companies like Infosys had to wait months for the computers they ordered, China had already established the Chinese Association for Artificial Intelligence. By 2006, China had already planned what it would do for the next 15 years. China's plan is for its AI technology output to reach $22 billion by 2025 and $147 billion by 2030.
China takes high-risk, high-reward bets, and trusts them. NetDragon is a company that has made an AI its CEO, which means every decision ultimately goes through the AI.
This has another benefit: no one can sue an AI. If the company collapses tomorrow, investors cannot take an AI to court.
In the future, many companies may adopt this model, putting AI on the board of directors, so that the company can take high-risk decisions and focus on increasing profit.
Conclusion:
Every coin has two sides, positive and negative. Likewise, the innovation of AI has two sides: AI is useful, but it is dangerous too.