Tech Giants' Hidden Agenda: The AI Industry Calls for "Regulation" but Wants to Set Its Own Rules

Wallstreetcn
2023.06.27 17:36

Whether it is OpenAI or other technology companies, the firms that have recently been vigorously advocating stronger regulation have their own agenda: winning more support from politicians, becoming the rule makers of this emerging industry, and cementing their lead.

Since last year, breakthroughs in artificial intelligence have arrived one after another. In just the first four months of this year, the industry raised more than $1 billion in venture capital, and the AI label has spread to products as varied as toothbrushes and drones. How far and how fast the technology develops, however, will depend largely on how it is regulated.

Large technology companies say they want to be regulated, but the reality is more complicated: the rules governing technology differ sharply between Europe and the United States.

In the United States, Alphabet, Microsoft, IBM, and OpenAI have asked legislators to oversee artificial intelligence, arguing that oversight is necessary to ensure safety. In the European Union, however, politicians recently voted to approve a legislative proposal that places limits on generative AI. Lobbyists for these companies opposed the legislation, arguing that it would put Europe at a disadvantage in the global AI race. The EU has had comprehensive data protection laws for more than five years and enforces strict competition and content moderation rules, whereas the United States has passed almost no technology regulation in more than twenty years.

Technology companies know they cannot ignore the European Union, not least because its social media and data protection rules have become de facto global standards. The EU's Artificial Intelligence Act, which could take effect within the next two to three years, would be the first attempt in Western markets to regulate AI, and it carries severe penalties: a company that violates the law could face fines of up to 6% of its global annual revenue and see its product banned from operating in the EU. The EU is estimated to account for 20% to 25% of a global AI market expected to be worth more than $1.3 trillion within ten years.

This puts the industry in a delicate position. Gry Hasselbalch, co-founder of the think tank DataEthics, said that if the law passes, the largest AI providers "will have to fundamentally change how they handle transparency, manage risk, and deploy models."

Compared with the rapid pace of AI development, the EU's legislative progress has been slow.

As early as 2021, the European Commission released a draft of the Artificial Intelligence Act, kicking off a lengthy negotiation process. The proposal takes a "risk-based" approach, banning AI outright only in extreme cases, with most of the draft devoted to rules for "high-risk" applications. The draft also imposes transparency requirements on deepfakes and chatbots: people must be notified when they are chatting with an AI system, and AI-generated or manipulated content must be labeled.

Large technology companies generally welcomed this risk-based approach at the time.

IBM said it hoped to ensure that "general-purpose artificial intelligence" - a broad category spanning image and speech recognition, audio and video generation, pattern detection, question answering, and translation - would be excluded from regulatory oversight. Jean-Marc Leclerc, IBM's head of EU affairs, said:

We urge continued reliance on a risk-based approach rather than attempts to regulate the technology as a whole.

However, some industry observers argue that risk-based regulation would leave some of the most powerful AI systems outside regulatory oversight. Max Tegmark, president of the Future of Life Institute, a non-profit initially funded in part by Elon Musk, believes that:

Future AI systems will need clear regulation. Regulatory bills should not classify them according to a limited set of predetermined uses, as the EU draft does, and regulate only a subset of them.

At the time, it seemed that the large technology companies would get what they wanted - the EU even considered excluding general-purpose AI from the text of the bill entirely - until spring 2022, when politicians began to worry that they had underestimated its risks. At France's urging, EU member states began to consider regulating all general-purpose AI, regardless of its purpose.

At this point, OpenAI, which had until then stayed out of the European legislative process, decided to intervene. In June 2022, the company said it was "concerned that some proposals may inadvertently result in all general-purpose AI systems being regulated by default."

After the launch of ChatGPT, the European Parliament was jolted by the chatbot's capabilities and unprecedented popularity, and lawmakers took a tougher stance in the next draft. The latest version, approved two weeks ago, requires developers of "foundation models" such as OpenAI to disclose summaries of the copyrighted material used to train their large language models, in the name of transparency.

Dragos Tudorache, one of the two lead authors of the Artificial Intelligence Act, explained:

Ultimately, what we ask of generative AI models is greater transparency. In addition, where there is a risk of "exposing algorithms to bad actors," developers must work to provide safeguards.

While Meta, Apple, and Amazon have remained largely silent on the issue, other major technology companies have pushed back. Alphabet said the European Parliament's measures would treat general-purpose AI as high-risk when it is not. Alphabet also protested that the new rules could conflict with existing ones, and several companies said they have already implemented internal controls.

Actively calling for regulation: what are AI companies really after?

As the public uncovers more and more flaws in generative AI, technology companies have become increasingly vocal about the need for oversight - and some officials say the companies are now more willing to negotiate on regulatory issues. Margrethe Vestager, the EU's competition chief, who has been in contact with Altman and Alphabet CEO Sundar Pichai since late May, acknowledged that the large tech companies are coming around to the EU's requirements on transparency and risk.

However, some critics argue that these companies are seeking regulation now in order to lock in their market dominance. Joanna Bryson, professor of ethics and technology at the Hertie School in Berlin, said:

If governments step in now, these big companies can consolidate their lead and hold off the competitors closing in on them.

A deeper worry is that by involving themselves in how regulation is written, these star companies will become the rule makers - and the rules will ultimately be written in their favor.

OpenAI CEO Sam Altman has been advocating government regulation of AI, even proposing a body modeled on the International Atomic Energy Agency; he believes such an organization should focus on overseeing AI and issuing licenses to entities that use the technology.

Alphabet, by contrast, does not want AI regulation dominated by government; the company prefers a "multi-layered, multi-stakeholder approach to AI governance." Others in the field, including researchers, have voiced views similar to Alphabet's, saying a multi-stakeholder approach may better protect marginalized communities - though OpenAI believes the technology is advancing too quickly for such an approach to work.

Analysts say the dispute between OpenAI and Alphabet over regulation underscores that both want to write the rules of the game - and to write them in their own favor.