

Lawmakers who helped shape the European Union's landmark AI Act are concerned that the 27-member bloc is watering down the law's requirements in response to lobbying by technology companies.
The AI Act was approved more than a year ago, but its rules for general-purpose AI models such as OpenAI's GPT-4o only take effect in August. Before then, the European Commission, the EU's executive arm, has tasked its new AI Office with preparing a code of practice for large AI companies, spelling out how they will need to follow the law.
But now a group of European lawmakers, who helped refine the law's language as it passed through the legislative process, is alarmed that the code is being watered down in ways they describe as "dangerous" and "undemocratic." Leading American AI companies have stepped up their lobbying against parts of the EU AI Act, and the lawmakers are also concerned about pressure to favor AI innovation and to avoid antagonizing the U.S. administration.
The EU lawmakers say the third draft of the code, published by the AI Office last month, makes necessary obligations under the AI Act "entirely voluntary." These obligations include testing models to see whether they enable things such as widespread discrimination and the spread of disinformation.
In a letter sent Tuesday to European Commission Vice President and tech chief Henna Virkkunen, previously reported on but published in full for the first time below, current and former lawmakers said that making these model tests voluntary could potentially allow AI providers that "adopt more extreme political positions" to warp European elections, restrict freedom of information, and disrupt the EU economy.
"In the current geopolitical situation, it is more important than ever that the EU rises to the challenge and stands strong on fundamental rights and democracy," they wrote.
Brando Benifei, one of the European Parliament's lead negotiators on the AI Act's text and the first signatory of the letter, told Fortune on Wednesday that the political climate may have played a role in the watering down of the code of practice. The second Trump administration is hostile toward European tech regulation; Vice President JD Vance warned in fiery language at the Paris AI Action Summit in February that "tightening the screws" on tech companies could be a "terrible mistake" for European countries.
"I think there is pressure from the United States, but it would be a mistake [to think] that we can appease the Trump administration by going in this direction, because it will never be enough," said Benifei, who now sits on the Parliament's delegation for relations with the U.S.
Benifei said he and the other AI Act negotiators met on Tuesday with the Commission experts who drafted the code of practice. Based on that meeting, he said he is hopeful that the offending changes can be rolled back before the code is finalized.
"I think the issues we raised were taken into consideration, and therefore there is space for improvement," he said. "We will see that in the next few weeks."
Virkkunen had not provided a response to the letter, nor to Benifei's comments about U.S. pressure, at the time of publication. However, she has previously stressed that the EU's tech rules are applied fairly and consistently to companies from any country. Competition Commissioner Teresa Ribera has also maintained that the EU "cannot transact on human rights [or] democracy and values" to placate the U.S.
The key part of the AI Act here is Article 55, which places significant obligations on providers of general-purpose AI models that come with "systemic risk": a term the law defines as meaning the model could have a major impact on the EU economy, or have "actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale."
The Act says a model can be presumed to carry systemic risk if the computational power used in its training, "measured in floating point operations [FLOPs]," is greater than 10^25. This threshold is likely to capture most of the most powerful AI models, although the European Commission can also designate any general-purpose model as carrying systemic risk if its scientific advisors recommend doing so.
Under the law, providers of such models must evaluate them "with a view to identifying and mitigating" any systemic risks. This evaluation must include adversarial testing: in other words, trying to get the model to do bad things, to figure out what it needs to be protected against. Providers must then tell the European Commission's AI Office about the evaluation and what it found.
This is where the third version of the draft code of practice becomes a problem.
The first version of the code was clear that AI companies need to treat large-scale disinformation or misinformation as systemic risks when evaluating their models, because of their threat to democratic values and their potential for election interference. The second version no longer specifically mentioned disinformation or misinformation, but still referred to large-scale "manipulation with risks to fundamental rights or democratic" processes.
The first and second versions were also clear that model providers should consider the possibility of large-scale discrimination as a systemic risk.
But the third version lists risks to democratic processes, and to fundamental rights under EU law such as non-discrimination, only as being "for potential consideration in the selection of systemic risks." The official summary of the third draft's changes describes these as "additional risks that may be selected by providers for assessment and mitigation in the future."
In their letter this week, the lawmakers who negotiated the final text of the law with the Commission insist that this was never the intention.
"Risks to fundamental rights and democracy are systemic risks that the most impactful AI providers must assess and mitigate," the letter reads. "It is dangerous, undemocratic and creates legal uncertainty to fully reinterpret and narrow down a legal text that co-legislators agreed upon, through a code of practice."
This story was originally featured on Fortune.com