Tech-News
A ‘vast paedophile network’ connected by Instagram's algorithms, says WSJ report
Instagram's recommendation algorithms linked and encouraged a "vast network of paedophiles" seeking illicit underage sexual content and conduct, according to the Wall Street Journal (WSJ).
These algorithms also marketed the sale of unlawful "child-sex material" on the network, it said.
The report is based on a joint investigation by the Wall Street Journal and researchers at Stanford University and the University of Massachusetts Amherst into child sexual abuse material on Meta's platform. Some accounts even let buyers "commission specific acts" or arrange "meet-ups," the investigation found.
"Pedophiles have long used the internet, but unlike the forums and file-transfer services that cater to people who have interest in illicit content, Instagram doesn't merely host these activities. Its algorithms promote them," the WSJ report said. "Instagram connects pedophiles and guides them to content sellers via recommendation systems that excel at linking those who share niche interests."
According to the investigation, Instagram allowed users to search by hashtags explicitly associated with child-sex abuse.
The researchers found that these hashtags directed users to accounts offering to sell child sexual abuse material, some of which even included footage of minors harming themselves.
Anti-paedophile campaigners alerted the company to accounts purporting to belong to a girl selling underage sexual content.
The activists received automated replies stating, "Because of the high volume of reports we receive, our team hasn't been able to review this post." In another case, the reply advised the user to hide the account in order to avoid seeing its material, the report said.
A Meta spokesperson confirmed that the company had received the reports but failed to act on them, attributing the lapse to a software glitch, the report added.
The company told the WSJ that it has repaired the flaw in its reporting system and is offering fresh training to its content moderators.
"Child exploitation is a horrific crime. We're continuously investigating ways to actively defend against this behaviour," the spokesperson said.
Meta claims to have taken down 27 paedophile networks in the last two years and is planning more removals. It also said it had blocked hundreds of hashtags that sexualize minors, some with millions of posts, the report concluded.
Microsoft will pay $20M to settle U.S. charges of illegally collecting children's data
Microsoft will pay a fine of $20 million to settle Federal Trade Commission charges that it illegally collected and retained the data of children who signed up to use its Xbox video game console.
The agency charged that Microsoft gathered the data without notifying parents or obtaining their consent, and that it also illegally held onto the data. Those actions violated the Children’s Online Privacy Protection Act, the FTC stated.
In a blog post, Microsoft corporate vice president for Xbox Dave McCarthy outlined additional steps the company is taking to ensure parents are involved in the creation of child accounts for the service. These mostly concern efforts to improve its age-verification technology and to educate children and parents about privacy issues.
McCarthy also said the company had identified and fixed a technical glitch that failed to delete child accounts in cases where the account creation process never finished. Microsoft policy was to hold that data no longer than 14 days in order to allow players to pick up account creation where they left off if they were interrupted.
The settlement must be approved by a federal court before it can go into effect, the FTC said.
Apple is expected to unveil sleek headset aimed at thrusting the masses into alternate realities
Apple appears poised to unveil a long-rumored headset that will place its users between the virtual and real world, while also testing the technology trendsetter's ability to popularize new-fangled devices after others failed to capture the public's imagination.
After years of speculation, the stage is set for the widely anticipated announcement to be made Monday at Apple's annual developers conference in a Cupertino, California, theater named after the company's late co-founder Steve Jobs. Apple is also likely to use the event to show off its latest Mac computer, preview the next operating system for the iPhone and discuss its strategy for artificial intelligence.
But the star of the show is expected to be a pair of goggles — perhaps called “Reality Pro,” according to media leaks — that could become another milestone in Apple's lore of releasing game-changing technology, even though the company hasn't always been the first to try its hand at making a particular device.
Apple's lineage of breakthroughs dates back to a bow-tied Jobs peddling the first Mac in 1984 — a tradition that continued with the iPod in 2001, the iPhone in 2007, the iPad in 2010, the Apple Watch in 2014 and AirPods in 2016.
But with a hefty price tag that could be in the $3,000 range, Apple's new headset may also be greeted with a lukewarm reception from all but affluent technophiles.
If the new device turns out to be a niche product, it would leave Apple in the same bind as other major tech companies and startups that have tried selling headsets or glasses equipped with technology that either thrusts people into artificial worlds or projects digital images onto the scenery and objects actually in front of them — a format known as "augmented reality."
Apple's goggles are expected to be sleekly designed and capable of toggling between fully virtual and augmented modes, a blend sometimes known as "mixed reality." That flexibility is also sometimes called extended reality, or XR for short.
Facebook founder Mark Zuckerberg has been describing these alternate three-dimensional realities as the “metaverse.” It's a geeky concept that he tried to push into the mainstream by changing the name of his social networking company to Meta Platforms in 2021 and then pouring billions of dollars into improving the virtual technology.
But the metaverse largely remains a digital ghost town, although Meta's virtual reality headset, the Quest, remains the top-selling device in a category that so far has mostly appealed to video game players looking for even more immersive experiences.
Apple executives seem likely to avoid referring to the metaverse, given the skepticism that has quickly developed around that term, when they discuss the potential of the company's new headset.
In recent years, Apple CEO Tim Cook has periodically touted augmented reality as technology's next quantum leap, while not setting a specific timeline for when it will gain mass appeal.
“If you look back in a point in time, you know, zoom out to the future and look back, you’ll wonder how you led your life without augmented reality,” Cook, who is 62, said last September while speaking to an audience of students in Italy. “Just like today you wonder how did people like me grow up without the internet. You know, so I think it could be that profound. And it’s not going to be profound overnight.”
The response to virtual, augmented and mixed reality has been decidedly ho-hum so far. Some of the gadgets deploying the technology have even been derisively mocked, with the most notable example being Google's internet-connected glasses released more than a decade ago.
After Google co-founder Sergey Brin initially drummed up excitement about the device by demonstrating an early model's potential “wow factor” with a skydiving stunt staged during a San Francisco tech conference, consumers quickly became turned off to a product that allowed its users to surreptitiously take pictures and video. The backlash became so intense that people who wore the gear became known as “Glassholes,” leading Google to withdraw the product a few years after its debut.
Microsoft also has had limited success with HoloLens, a mixed-reality headset released in 2016, although the software maker earlier this year insisted it remains committed to the technology.
Magic Leap, a startup that stirred excitement with previews of a mixed-reality technology that could conjure the spectacle of a whale breaching through a gymnasium floor, had so much trouble marketing its first headset to consumers in 2018 that it has since shifted its focus to industrial, healthcare and emergency uses.
Daniel Diez, Magic Leap's chief transformation officer, said there are four major questions Apple's goggles will have to answer: “What can people do with it? What does this thing look and feel like? Is it comfortable to wear? And how much is it going to cost?”
The anticipation that Apple's goggles are going to sell for several thousand dollars already has dampened expectations for the product. Although he expects Apple's goggles to boast "jaw dropping" technology, Wedbush Securities analyst Dan Ives said he expects the company to sell just 150,000 units during the device's first year on the market — a mere speck in the company's portfolio. By comparison, Apple sells more than 200 million iPhones, its marquee product, each year. But the iPhone wasn't an immediate sensation, with sales of fewer than 12 million units in its first full year on the market.
In a move apparently aimed at underscoring the expected price gap with Apple's goggles, Zuckerberg made a point of saying last week that the next Quest headset will sell for $500, an announcement made four months before Meta Platforms plans to showcase the device at its own tech conference.
Annual shipments of virtual- and augmented-reality devices have averaged 8.6 million units since 2016, according to the research firm CCS Insight. The firm expects sales to remain sluggish this year, projecting about 11 million devices sold, before gradually climbing to 67 million in 2026.
But those forecasts were made before it was known whether Apple would release a product that alters the landscape.
“I would never count out Apple, especially with the consumer market and especially when it comes to finding those killer applications and solutions,” Magic Leap's Diez said. “If someone is going to crack the consumer market early, I wouldn’t be surprised it would be Apple.”
OPPO launches MR Glass developer edition
OPPO released its latest breakthrough in the XR field, the OPPO MR Glass Developer Edition, during the Augmented World Expo (AWE) 2023.
This state-of-the-art mixed reality (MR) device is designed to offer an optimal environment for advanced developers to create and present exciting MR experiences.
OPPO anticipates a surge in XR technology adoption in the near future, with MR as one of the most viable modalities. To drive innovation in MR applications, the OPPO MR Glass will be made available as an official Snapdragon Spaces developer kit in China to help attract more developers to the field and push the boundaries of XR technology.
During his keynote speech at AWE 2023, Yi Xu, Director of XR Technology at OPPO, said, "OPPO MR Glass represents our latest breakthrough in this exploration, equipped with the advanced capabilities of Snapdragon Spaces to empower developers."
Xu described the device as a breakthrough product that, powered by Snapdragon Spaces, empowers developers to unlock boundless possibilities for XR innovation.
OPPO and Qualcomm Technologies share a long-standing relationship and a common vision of an open ecosystem that empowers developers and unlocks the potential for XR innovation.
Said Bakadir, Senior Director of XR Product Management at Qualcomm Technologies, Inc., said, "We recognize OPPO's long-standing efforts in exploring technologies, products, content, and services for XR, which make OPPO an ideal partner in this field. Through potential solutions improving productivity, creativity, and gaming experiences on OPPO MR Glass, we are glad to see growing vitality among developer groups and hope to find more MR content to enliven the platform, which is significant for creating innovative experiences and bringing breakthroughs for the industry. In the future, we look forward to deepening our collaboration with OPPO to stimulate more innovations in the MR ecosystem."
OPPO MR Glass is built to provide developers with the best platform to create and test the latest MR experiences. Powered by the Snapdragon XR2+ platform, the MR Glass features OPPO’s proprietary SUPERVOOC fast charging and heart rate detection function, enabling a wide range of new applications.
The device is crafted with skin-friendly material and incorporates Binocular VPT (Video Pass Through) technologies, dual front RGB cameras, pancake lenses, and a 120Hz high refresh rate.
Brazil: UN regional group has endorsed Amazon city to host 2025 climate conference
Brazil’s government announced Friday that a U.N. Latin America regional group has endorsed a Brazilian city in the Amazon region to host the 2025 U.N. climate change conference, though the world body has not yet publicly confirmed the venue.
President Luiz Inácio Lula da Silva initially said Brazil would hold the conference, known as COP 30, in the city of Belem, in the state of Para, in the heart of the Brazilian rainforest, reflecting his intention to bring attention to the Amazon.
A statement from the Brazilian government later clarified that the region's support was merely a step in the selection process. The “support for the Brazilian candidacy demonstrates the region’s confidence in Brazil’s capacity to advance the agenda in the fight against climate change,” the statement read.
The latest U.N. climate conference was hosted by Egypt in Sharm el-Sheikh, and this year’s will take place in Dubai.
The U.N. has not yet announced the 2024 venue, let alone the 2025 one, but the locations tend to rotate among regions. The Brazilian government statement Friday indicated that a Latin American working group was choosing the 2025 venue and had endorsed Belem. The final decision won't be made until COP 29 next year.
“It will be an honor for Brazil to welcome representatives from all over the world in a state in our Amazon,” Lula said in a video posted on his social media channels. “I went to COPs in Egypt, in Paris, in Copenhagen, and all people talk about is the Amazon. So I said, ‘Why don’t we go there so you see what the Amazon is like?'”
Brazil's foreign minister, Mauro Vieira, said in the video that the decision was made at the U.N. on May 18. The U.N. has yet to confirm the venue.
Brazil's announcement comes in a week in which Lula's administration's environmental agenda faced headwinds in Brazil's Congress. Lawmakers approved, by a large majority, a measure eroding the environment ministry's authority over construction in forested and coastal areas, as well as other development.
Also this week, Congress is debating whether the state-run oil giant should be allowed to drill off the coast of the Amazon states of Amapa and Para.
EU official says Twitter abandons bloc's voluntary pact against disinformation
Twitter has dropped out of a voluntary European Union agreement to combat online disinformation, a top EU official said Friday.
European Commissioner Thierry Breton tweeted that Twitter had pulled out of the EU's disinformation “code of practice” that other major social media platforms have pledged to support. But he added that Twitter's “obligation” remained, referring to the EU's tough new digital rules taking effect in August.
“You can run but you can’t hide,” Breton said.
San Francisco-based Twitter responded with an automated reply, as it does to most press inquiries, and did not comment.
The decision to abandon the commitment to fighting false information appears to be the latest move by billionaire owner Elon Musk to loosen the reins on the social media company after he bought it last year. He has rolled back previous anti-misinformation rules, and has thrown its verification system and content-moderation policies into chaos as he pursues his goal of turning Twitter into a digital town square.
Google, TikTok, Microsoft and Facebook and Instagram parent Meta are among those that have signed up to the EU code, which requires companies to measure their work on combating disinformation and issue regular reports on their progress.
There were already signs Twitter wasn't prepared to live up to its commitments. The European Commission, the 27-nation bloc's executive arm, blasted Twitter earlier this year for failing to provide a full first report under the code, saying it provided little specific information and no targeted data.
Breton said that under the new digital rules that incorporate the code of practice, fighting disinformation will become a “legal obligation.”
“Our teams will be ready for enforcement,” he said.
ChatGPT-4: All you need to know
OpenAI’s ChatGPT-4 is the latest iteration of the groundbreaking Generative Pre-trained Transformer (GPT) series. Building on the success of its predecessors, GPT-4 offers enhanced capabilities, improved performance, and a more user-friendly experience. GPT-4 was publicly released on March 14, 2023, making it accessible to users worldwide. Let’s explore how to use ChatGPT-4, its new features, and more.
New Features of OpenAI's ChatGPT-4
OpenAI highlights three significant advancements in this next-generation language model: creativity, visual input, and longer context. According to OpenAI, GPT-4 demonstrates substantial improvements in creativity, excelling in both generating and collaborating with users on creative endeavors. Let’s see some of the top new features of ChatGPT-4.
Can Understand More Advanced Inputs
One of the major breakthroughs of GPT-4 lies in its enhanced capacity to comprehend intricate and nuanced prompts. OpenAI reports that GPT-4 demonstrates human-level performance on a variety of professional and academic benchmarks.
This was demonstrated by subjecting GPT-4 to numerous human-level exams and standardized tests, including the SAT, the bar exam, and the GRE, without any specific training. GPT-4 not only completed these tests successfully but also consistently outperformed its predecessor, GPT-3.5.
GPT-4 supports more than 26 languages, including less widely spoken ones such as Latvian, Welsh, and Swahili. When assessed on three-shot accuracy using the MMLU benchmark, GPT-4 surpassed the English-language performance of GPT-3.5 and of other prominent LLMs such as PaLM and Chinchilla in 24 of the 26 languages tested.
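For context, "three-shot" evaluation means the model is shown three worked examples before the question it must answer, so it can imitate the pattern without any fine-tuning. A minimal sketch of how such a prompt is assembled (the questions and answers below are invented for illustration, not actual MMLU items):

```python
# Build a three-shot prompt: three solved examples precede the real question,
# giving the model a pattern to imitate without any fine-tuning.
def build_three_shot_prompt(examples, question):
    parts = []
    for q, a in examples:
        parts.append(f"Q: {q}\nA: {a}")
    parts.append(f"Q: {question}\nA:")  # model completes after the final "A:"
    return "\n\n".join(parts)

# Hypothetical multiple-choice items in the style of an MMLU question.
examples = [
    ("What is 2 + 2? (A) 3 (B) 4", "B"),
    ("Which planet is closest to the Sun? (A) Venus (B) Mercury", "B"),
    ("What is the capital of France? (A) Paris (B) Rome", "A"),
]

prompt = build_three_shot_prompt(
    examples, "Which gas do plants absorb? (A) CO2 (B) O2"
)
print(prompt)
```

An answer is scored correct if the model's completion after the final "A:" matches the reference choice.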
Multimodal Functionality
In contrast to its predecessors, GPT-4 introduces a remarkable advancement: multimodal capability. The model can process not only text prompts but also image prompts.
This groundbreaking feature enables the AI to accept an image as input, interpret it, and explain it as effectively as a text prompt. The model seamlessly handles images of varying sizes and types, including documents that combine text and images, hand-drawn sketches, and even screenshots.
Enhanced Steerability
OpenAI further claims that GPT-4 exhibits a remarkable level of steerability. Notably, it has become stronger in staying true to its assigned character, reducing the likelihood of deviations when deployed in character-based applications.
Developers now have the ability to prescribe the AI’s style and task by providing specific instructions within the system message. These messages enable API users to customize the user experience extensively while operating within defined parameters. To ensure model integrity, OpenAI is also actively working on enhancing the security of these messages, as they represent the most common method for potential misuse.
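In practice, this steering happens through the first entry in the conversation sent to the chat API. A minimal sketch using the OpenAI chat-completion interface as it existed in 2023 (the model name and the persona text here are illustrative assumptions, not prescribed values):

```python
# Sketch: steering GPT-4 with a system message. The system message sits
# first in the conversation and constrains how the model responds to
# every subsequent user message.
system_message = (
    "You are a tutor who only ever answers with guiding questions, "
    "never with direct solutions."
)

messages = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": "Help me debug this Python function."},
]

# With the openai package installed and an API key configured, the call
# would look like this (commented out so the sketch runs offline):
# import openai
# response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
# print(response.choices[0].message.content)

print(messages[0]["role"])  # the system message always leads the list
```

Because the system message outranks user instructions, hardening it against override attempts is exactly the security work on these messages that OpenAI describes.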
ChatGPT's chief to testify before US Congress as concerns grow about artificial intelligence's risks
The head of the artificial intelligence company that makes ChatGPT is set to testify before US Congress as lawmakers call for new rules to guide the rapid development of AI technology.
OpenAI CEO Sam Altman is scheduled to speak at a Senate hearing Tuesday.
His San Francisco-based startup rocketed to public attention after its release late last year of ChatGPT, a free chatbot tool that answers questions with convincingly human-like responses.
What started out as a panic among educators about ChatGPT's use to cheat on homework assignments has expanded to broader concerns about the ability of the latest crop of “generative AI” tools to mislead people, spread falsehoods, violate copyright protections and upend some jobs.
And while there's no immediate sign that Congress will craft sweeping new AI rules, as European lawmakers are doing, the societal concerns brought Altman and other tech CEOs to the White House earlier this month and have led U.S. agencies to promise to crack down on harmful AI products that break existing civil rights and consumer protection laws.
“Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls,” said a prepared statement from Sen. Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology and the law.
Founded in 2015, OpenAI is also known for other AI products including the image-maker DALL-E. Microsoft has invested billions of dollars into the startup and has integrated its technology into its own products, including its search engine Bing.
Also testifying will be IBM's chief privacy and trust officer, Christina Montgomery, and Gary Marcus, a professor emeritus at New York University who was among a group of AI experts who called on OpenAI and other tech firms to pause their development of more powerful AI models for six months to give society more time to consider the risks. The letter was a response to the March release of OpenAI's latest model, GPT-4, described as more powerful than ChatGPT.
“Artificial intelligence will be transformative in ways we can’t even imagine, with implications for Americans’ elections, jobs, and security,” said the panel's ranking Republican, Sen. Josh Hawley of Missouri. “This hearing marks a critical first step towards understanding what Congress should do.”
Altman and other tech industry leaders have said they welcome some form of AI oversight but have cautioned against what they see as overly heavy-handed rules. In a copy of her prepared remarks, IBM's Montgomery asks Congress to take a “precision regulation" approach.
"This means establishing rules to govern the deployment of AI in specific use-cases, not regulating the technology itself,” Montgomery said.
Musk, new Twitter CEO Linda Yaccarino spar over content moderation during on-stage interview
On Friday, Elon Musk announced that NBC Universal's Linda Yaccarino will serve as the new CEO of Twitter. Yaccarino is a longtime advertising executive credited with integrating and digitizing ad sales at NBCU. Her challenge now will be to woo back advertisers that have fled Twitter since Musk acquired it last year for $44 billion.
Since taking ownership, Musk has fired thousands of Twitter employees, largely scrapped the trust-and-safety team responsible for keeping the site free of hate speech, harassment and misinformation, and blamed others — particularly mainstream media organizations, which he views as untrustworthy “competitors” to Twitter for ad dollars — for exaggerating Twitter's problems.
In April, the two met for an on-stage conversation at a marketing convention in Miami Beach, Florida. Here are some highlights of their conversation:
MUSK AND YACCARINO SPAR OVER CONTENT MODERATION
The Miami discussion was cordial, although both participants drew some distinct lines in the sand. On a few occasions, Yaccarino steered the conversation toward issues of content moderation and the apparent proliferation of hate speech and extremism since Musk took over the platform. She couched her questions in the context of whether Musk could help advertisers feel more welcome on the platform.
At one point, she asked if Musk was willing to let advertisers “influence” his vision for Twitter, explaining that it would help them get more excited about investing more money — "product development, ad safety, content moderation — that's what the influence is."
Musk shut her down. “It’s totally cool to say that you want to have your advertising appear in certain places in Twitter and not in other places, but it is not cool to try to say what Twitter will do," he said. “And if that means losing advertising dollars, we lose it. But freedom of speech is paramount.”
MUSK REPEATS: NO SPECIAL INFLUENCE FOR ADVERTISERS
Yaccarino returned to the issue a few moments later when she asked Musk if he planned to reinstate the company's “influence council,” a once-regular meeting with marketing executives from several of Twitter's major advertisers. Musk again demurred.
“I would be worried about creating a backlash among the public,” he said. “Because if the public thinks that their views are being determined by, you know, a small number of (marketing executives) in America, they will be, I think, upset about that."
Musk went on to acknowledge that feedback is important, and suggested Twitter should aim for a “sensible middle ground” that ensures the public “has a voice” while advertisers focus on the ordinary work of improving sales and the perception of their brands.
PRESSING ELON ON HIS OWN TWEETS
Musk didn't pass up the opportunity to sell the assembled marketers a new plan to solve Twitter's problems with objectionable tweets, which the company had announced the day before. Musk called the policy “freedom of speech but not freedom of reach," describing it as a way to limit the visibility of hate speech and similar problems without actually removing rule-breaking tweets.
Yaccarino took a swing. “Does it apply to your tweets?” Musk has a history of posting misinformation and occasionally offensive tweets, often in the early morning hours.
Musk acknowledged that it does, adding that his tweets can also be tagged with “community notes” that provide additional context to tweets. He added that his tweets receive no special boosts from Twitter.
“Will you agree to be more specific and not tweet after 3 a.m.?" Yaccarino asked.
“I will aspire to tweet less after 3 a.m.,” Musk replied.
How Europe is building artificial intelligence guardrails
Authorities around the world are racing to draw up rules for artificial intelligence, including in the European Union, where draft legislation faces a pivotal moment on Thursday.
A European Parliament committee is set to vote on the proposed rules, part of a yearslong effort to draw up guardrails for artificial intelligence. Those efforts have taken on more urgency as the rapid advance of ChatGPT highlights benefits the emerging technology can bring — and the new perils it poses.
Here's a look at the EU's Artificial Intelligence Act:
HOW DO THE RULES WORK?
The AI Act, first proposed in 2021, will govern any product or service that uses an artificial intelligence system. The act will classify AI systems according to four levels of risk, from minimal to unacceptable. Riskier applications will face tougher requirements, including being more transparent and using accurate data. Think about it as a "risk management system for AI,” said Johann Laux, an expert at the Oxford Internet Institute.
WHAT ARE THE RISKS?
One of the EU's main goals is to guard against any AI threats to health and safety and protect fundamental rights and values.
That means some AI uses are an absolute no-no, such as “social scoring” systems that judge people based on their behavior or interactive talking toys that encourage dangerous behavior.
Predictive policing tools, which crunch data to forecast where crimes will happen and who will commit them, are expected to be banned. So is remote facial recognition, except for some narrow exceptions like preventing a specific terrorist threat. The technology scans passers-by and uses AI to match their faces to a database. Thursday's vote is set to decide how extensive the prohibition will be.
The aim is “to avoid a controlled society based on AI,” Brando Benifei, the Italian lawmaker helping lead the European Parliament's AI efforts, told reporters Wednesday. “We think that these technologies could be used instead of the good also for the bad, and we consider the risks to be too high.”
AI systems used in high-risk categories like employment and education, which would affect the course of a person's life, face tough requirements such as being transparent with users and putting in place risk assessment and mitigation measures.
The EU's executive arm says most AI systems, such as video games or spam filters, fall into the low- or no-risk category.
WHAT ABOUT CHATGPT?
The original 108-page proposal barely mentioned chatbots, merely requiring them to be labeled so users know they’re interacting with a machine. Negotiators later added provisions to cover general purpose AI like ChatGPT, subjecting them to some of the same requirements as high-risk systems.
One key addition is a requirement to thoroughly document any copyright material used to teach AI systems how to generate text, images, video or music that resembles human work. That would let content creators know if their blog posts, digital books, scientific articles or pop songs have been used to train algorithms that power systems like ChatGPT. Then they could decide whether their work has been copied and seek redress.
WHY ARE THE EU RULES SO IMPORTANT?
The European Union isn't a big player in cutting-edge AI development. That role is taken by the U.S. and China. But Brussels often plays a trendsetting role with regulations that tend to become de facto global standards.
"Europeans are, globally speaking, fairly wealthy and there’s a lot of them," so companies and organizations often decide that the sheer size of the bloc’s single market with 450 million consumers makes it easier to comply than develop different products for different regions, Laux said.
But it's not just a matter of cracking down. By laying down common rules for AI, Brussels is also trying to develop the market by instilling confidence among users, Laux said.
“The thinking behind it is if you can induce people to place trust in AI and in applications, they will also use it more,” Laux said. “And when they use it more, they will unlock the economic and social potential of AI.”
WHAT IF YOU BREAK THE RULES?
Violations will draw fines of up to 30 million euros ($33 million) or 6% of a company's annual global revenue, which in the case of tech companies like Google and Microsoft could amount to billions.
WHAT’S NEXT?
It could be years before the rules fully take effect. The flagship legislative proposal faces a joint European Parliament committee vote on Thursday. The draft legislation then moves into three-way negotiations involving the bloc’s 27 member states, the Parliament and the executive Commission, where it faces further wrangling over the details. Final approval is expected by the end of the year, or early 2024 at the latest, followed by a grace period, often around two years, for companies and organizations to adapt.