Massive tech company Microsoft just called on Congress to pass a law that would make it illegal to use artificial intelligence (AI)-generated voices and images to defraud people.
The proposed law would also require AI companies to build tools into their own products that can easily identify fake AI images. It seems the technological monster these companies have created is running amok and needs to be reined in.
Governments and AI
On Tuesday morning, Microsoft released a 50-page document outlining its vision of how governments should approach the new issues AI is raising.
At first, AI platforms that generate audio and video were largely unregulated, allowing users to make videos of whatever they wanted. Before long, people were using these tools to fabricate footage of high-profile figures and political leaders intended to defraud viewers.
How to Regulate AI
Lawmakers and regulators have been having tough discussions about how to regulate AI. The companies driving the new technology boom have released suggestions for how they think politicians should treat the industry.
Microsoft has a long history of lobbying government on issues that affect its business. Now, it is trying to shape how legislation for AI technology is written.
Small Companies Accuse Microsoft of Underhanded Dealings
Smaller technology companies and venture capitalists have often been sceptical of big companies lobbying governments over legislation. Microsoft, Google, and OpenAI might be trying to make it more difficult for new competition to enter the market.
This wouldn’t be the first time a big company used its money and power to push legislation through government and eliminate the smaller companies nipping at its heels.
Massive Issues With AI
It’s unclear why the companies that created AI didn’t introduce their own platform-based rules and regulations that would have eliminated these issues.
Now, however, the big companies are blaming governments for not jumping on the issues earlier and regulating problems like cyberbullying and the spread of disinformation.
Deepfake Fraud Statute
Microsoft called for a “deepfake fraud statute” that would make it illegal to use AI to defraud people. AI-generated voices and images have allowed fraudsters to impersonate family members and trick victims into sending them money.
Tech lobbyists have argued that established anti-fraud laws are enough to police AI and that government does not need any extra legislation to control the new issues.
AI Can Also Be Helpful
AI has already been used for plenty of breakthroughs that have advanced civilization or benefited people in some way.
For instance, a new AI tool has achieved 84% accuracy in detecting prostate cancer. Early diagnosis has been linked to better patient outcomes.
Microsoft Has Long-Running Disagreements With AI Firms
In the past year, Microsoft split with other companies over how it thinks AI should be regulated.
The big tech firm suggested that the government should create a stand-alone agency to regulate the new technology. Other businesses think that the FTC and the DOJ are capable of controlling AI.
Should AI Companies Build Provenance Tools?
Since deepfake detection is incredibly difficult and unreliable, some experts question whether it will be possible to separate AI content from real images and audio.
To resolve the issue, Microsoft also called on Congress to force AI companies to build “provenance” tools into their products that would attach a signature to computer-generated content. This would allow consumers to immediately recognize whether they are looking at something real or AI-made.
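In practice, provenance systems such as the C2PA standard work by cryptographically signing content or its metadata at the moment of creation. The sketch below is a rough illustration of that underlying idea only, not Microsoft's proposal or any vendor's actual implementation; the key handling and function names are hypothetical. A generator signs the bytes it produces, and anyone holding the published public key can check whether a piece of content carries a valid signature:

```python
# Minimal sketch of a provenance signature, assuming the AI provider
# publishes a verification key. Hypothetical names; not a real product API.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The generator holds the private key; the public half is published.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def sign_content(content: bytes) -> bytes:
    """Attach a provenance signature when content is generated."""
    return private_key.sign(content)

def is_signed_by_generator(content: bytes, signature: bytes) -> bool:
    """Check whether content still matches the generator's signature."""
    try:
        public_key.verify(signature, content)
        return True
    except InvalidSignature:
        return False

generated = b"bytes of a synthetic image"
tag = sign_content(generated)
print(is_signed_by_generator(generated, tag))        # True: labeled AI-made
print(is_signed_by_generator(b"edited bytes", tag))  # False: altered content
```

One caveat the sketch itself makes clear: a signature carried alongside a file can simply be stripped out, so a scheme like this can positively label content that still carries its tag, but the absence of a signature proves nothing on its own.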
No Technology Currently Exists
When fraudsters and scammers use AI to try to steal money from someone, the person being scammed has no way of knowing whether the content they are looking at is real.
This was one of the main issues surrounding the writer and actor strikes in 2023. The film and TV industry was concerned that consumers would have no way of knowing whether they were looking at real people.
AI Content Is Already Tricking Seniors
Many of the people most susceptible to AI scams are seniors and those less familiar with technology.
Many experts note that technology is like a language: to understand it fluently, people need to learn it before a certain age and be immersed in its intricacies, both the upsides and the downsides. Seniors often have a trusting nature and can fall prey to scam artists because they aren’t aware of the dangers.
Congress Should Update Laws that Address Child Exploitation
Another major concern that Microsoft brought up is that AI tools can often be used to create child sexual exploitation imagery.
Microsoft called on lawmakers to update the laws surrounding this disturbing reality. AI tools have already been used for this devious behaviour, and the company hopes updated legislation will help eliminate it in the future.