Social Media
Louisiana sues Roblox over child safety concerns
Louisiana filed a lawsuit against the popular online gaming platform Roblox on Thursday, accusing the site of creating an environment where sexual predators can “thrive, unite, hunt and victimize children.”
The suit, brought by Attorney General Liz Murrill in state court, alleges Roblox has failed to put in place adequate safety measures to protect its young users. “Roblox prioritizes user growth and profits over child safety, leaving Louisiana’s children at risk,” Murrill said.
Roblox, which has over 111 million monthly users, has faced criticism for not doing enough to prevent exploitation. Recent cases include a 13-year-old girl in Iowa allegedly trafficked after meeting a predator on the platform. Local authorities in Louisiana report multiple Roblox-related incidents, though no arrests have been made.
While the company enforces age restrictions and monitors chats, Murrill argues its verification process is insufficient. Roblox has recently added AI systems and age-verification features to improve child safety, including reporting potential abuse to the National Center for Missing and Exploited Children.
Source: Agency
4 months ago
YouTube to test AI-based age verification system in U.S.
YouTube will start testing a new AI-powered age verification system in the U.S. from Wednesday, aimed at distinguishing adults from minors based on the types of videos they watch.
Initially, the trial will impact only a small portion of YouTube’s U.S. users but could expand if the system proves as effective as it has in other regions. The technology will operate only when users are logged into their accounts and will assess age regardless of the birth date provided during registration.
If the system identifies a logged-in user as under 18, YouTube will apply existing controls and restrictions designed to protect minors from inappropriate content and behavior on the platform. These measures include reminders to take breaks, privacy warnings, and limitations on recommended videos. Additionally, YouTube does not serve personalized ads to viewers under 18.
Users incorrectly flagged as minors can correct the error by verifying their age with government-issued ID, credit card, or a selfie.
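YouTube has not published how the classifier works, but the description above suggests the general shape: a model scores an account's viewing signals, a threshold decides whether teen safeguards apply, and a manual verification path handles mistakes. The sketch below is purely illustrative Python; every name, signal, and threshold in it is a hypothetical stand-in, not YouTube's actual system.

```python
# Illustrative sketch only; YouTube has not published its model.
# Shows the general shape of behavior-based age inference: a score
# derived from watch-history signals, a threshold, and a manual
# verification fallback. All names and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class WatchProfile:
    account_age_days: int
    minor_skew_score: float  # hypothetical 0..1 output of a trained classifier

def infer_is_minor(profile: WatchProfile, threshold: float = 0.8) -> bool:
    """Flag an account as likely under 18 when the classifier score
    exceeds the threshold, regardless of the registered birth date."""
    return profile.minor_skew_score >= threshold

def apply_protections(user_id: str) -> None:
    # Stand-ins for the protections the article describes: break
    # reminders, privacy warnings, limited recommendations, and no
    # personalized ads for under-18 viewers.
    print(f"{user_id}: teen safeguards enabled; personalized ads off")

profile = WatchProfile(account_age_days=120, minor_skew_score=0.91)
if infer_is_minor(profile):
    apply_protections("user-123")
    # A user flagged in error would verify age via ID, credit card, or selfie.
```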
James Beser, YouTube’s director of product management, said in a blog post, “YouTube was one of the first platforms to offer experiences designed specifically for young people, and we’re proud to again be at the forefront of introducing technology that allows us to deliver safety protections while preserving teen privacy.”
Users can still watch videos without logging in, but this triggers automatic restrictions on some content due to lack of age verification.
Pressure has been mounting on online platforms to improve age verification following the U.S. Supreme Court’s recent decision upholding a Texas law aimed at blocking minors from accessing pornography online.
While platforms like YouTube are enhancing age checks, others argue that app stores operated by Apple and Google should bear more responsibility—a stance opposed by both companies.
Some digital rights groups, including the Electronic Frontier Foundation and the Center for Democracy & Technology, have voiced concerns that such age verification measures could threaten personal privacy and infringe upon free speech protections under the First Amendment.
Source: Agency
4 months ago
Australia bans YouTube accounts for children under 16 from December
In a major policy reversal, the Australian government has announced that YouTube will be included among the social media platforms banned for users under 16, effective December 10.
The move overturns a previous exemption granted to the video-sharing platform when Parliament passed landmark legislation last November restricting under-16s from accessing platforms like Facebook, Instagram, TikTok, Snapchat and X (formerly Twitter).
Communications Minister Anika Wells on Wednesday released a list of services that will fall under the “age-restricted social media platforms” category. She confirmed YouTube’s inclusion, citing government research that found four in 10 Australian children reported experiencing harm on the platform.
“We will not be intimidated by legal threats when this is a genuine fight for the wellbeing of Australian kids,” Wells told reporters. She said platforms that fail to take “reasonable steps” to exclude underage users could face fines of up to AUD 50 million (USD 33 million).
While children will still be able to access YouTube content, they will no longer be allowed to hold their own accounts.
YouTube’s parent company, Alphabet Inc., criticized the decision as a reversal of a prior public commitment. “Our position remains clear: YouTube is not social media, it is a video-sharing platform increasingly viewed on TV screens,” the company said in a statement, adding it would consult with the government on next steps.
Prime Minister Anthony Albanese said Australia would push for international backing for the under-16 social media ban at a UN forum in New York in September, calling the issue a “common global experience.”
Messaging, education, health, and gaming apps are excluded from the ban as they are deemed less harmful.
4 months ago
Meta to halt political ads in EU from October over regulatory concerns
Tech giant Meta, the parent company of Facebook, Instagram, and Threads, announced on Friday that it will stop running all political, electoral, and social-issue advertisements in the European Union starting in October, citing legal uncertainty stemming from the bloc’s new transparency regulations.
In a blog post, the company said the decision is driven by the 27-member EU’s upcoming Transparency and Targeting of Political Advertising regulations, which it described as “unworkable” due to the significant operational challenges and legal ambiguities they introduce.
The new rules, set to take effect on October 10, require digital platforms to label political advertisements clearly, disclose the identity of the payer, and specify the campaign, referendum, or legislative initiative they are connected to. Additionally, such ads must be stored in an accessible database and can only be targeted under strict guidelines.
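As a rough illustration of what those requirements could mean in practice, the sketch below models the kind of record a platform might keep for each political ad: a clear label, the payer's identity, the linked campaign, and storage in a queryable archive. The field names are assumptions for illustration, not taken from the regulation's text.

```python
# Hypothetical sketch of the record-keeping the EU's Transparency and
# Targeting of Political Advertising rules appear to require: a clear
# political-ad label, the payer's identity, the linked campaign or
# initiative, and storage in an accessible database. Field names are
# illustrative, not drawn from the regulation itself.

from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class PoliticalAdRecord:
    ad_id: str
    labeled_political: bool   # the ad must be clearly labeled as political
    payer_identity: str       # who paid for the ad
    linked_campaign: str      # campaign, referendum, or legislative initiative
    first_shown: date

archive: list[PoliticalAdRecord] = []  # stand-in for the accessible ad database

record = PoliticalAdRecord(
    ad_id="ad-001",
    labeled_political=True,
    payer_identity="Example Party e.V.",
    linked_campaign="2026 municipal elections",
    first_shown=date(2025, 10, 12),
)
archive.append(record)
print(asdict(record))
```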
Meta argued that the regulations impose “significant, additional obligations” that create an “untenable level of complexity and legal uncertainty” for advertisers and platforms operating within the EU.
Violations of the rules could lead to fines of up to 6% of a company’s annual global revenue.
Meta's move follows a similar decision by Google, which last year announced that it would stop serving political ads to EU users ahead of the regulation’s enforcement, citing comparable concerns.
The EU’s new rules are part of a broader initiative to strengthen election integrity, combat foreign interference, and ensure transparency in digital campaigning. These efforts are in line with the bloc’s wider push for digital accountability, including rules on user safety and data privacy.
Despite the ad ban, Meta clarified that users in the EU will still be able to discuss politics on its platforms, and politicians, candidates, and public office holders can continue to share political content organically.
“They just won’t be able to amplify this through paid advertising,” the company said.
4 months ago
Experts share tips to help teens navigate the rise of AI companions
As artificial intelligence becomes more embedded in everyday life, many teenagers are turning to AI chatbots for support, companionship, and conversation. These digital companions offer constant availability, a nonjudgmental tone, and seemingly endless patience — qualities that appeal to adolescents navigating complex emotions and social situations.
But experts are raising concerns over the growing use of AI companions, especially as most parents remain unaware of how frequently their children are using these tools or the amount of personal information being shared.
A new study by Common Sense Media finds that over 70% of American teens have used AI companions, and more than half interact with them regularly. The research focused on apps like Character.AI, Nomi, and Replika — programs designed to simulate “digital friends” — rather than AI tools like ChatGPT, although the line between them is increasingly blurred.
To help families address this emerging trend, experts recommend several strategies:
— Initiate open conversations. Michael Robb, lead researcher at Common Sense Media, urges parents to talk to their teens without judgment. Start by asking questions such as, “Have you heard of AI companions?” or “Do you use any apps that talk to you like a friend?” Listening without criticism can help build trust and lead to more honest discussions.
— Clarify the nature of AI relationships. AI companions are programmed to be agreeable and validating — something real relationships are not. Experts caution that this can give teens a distorted view of human connection. “These tools may feel comforting, but they lack the ability to challenge, disagree or truly empathize,” said Robb.
Mitch Prinstein, chief psychologist at the American Psychological Association (APA), emphasized the potential cost: “AI conversations might be taking away valuable time from real-life relationships. We must teach young people to see AI companions as entertainment — not reality.”
— Watch for signs of emotional overreliance. If a teen prefers AI conversations over real friendships, becomes distressed when not using them, or spends hours engaging with AI, parents should intervene. These behaviors may signal that AI is replacing, rather than supplementing, real social interactions.
— Set boundaries and rules. Just like screen time or social media, parents can establish limits around AI use. Many AI companions are designed for adults and can simulate romantic or intimate interactions. Experts recommend discussing what is appropriate and when AI tools should be used.
— Emphasize that AI is not a mental health solution. While AI may seem comforting, it is not equipped to handle crises or offer real emotional support. Children dealing with anxiety, depression, eating disorders, or loneliness should seek help from trusted people — whether family members, friends, or trained professionals.
— Stay informed and involved. “A lot of parents still don’t grasp how advanced AI has become or how many teens are relying on it,” said Prinstein. “When adults say, ‘This is crazy, I don’t understand it,’ kids feel like they can’t come to us with concerns.”
Teens themselves are urging a balanced approach. Ganesh Nair, an 18-year-old who is distancing himself from AI companions after noticing a negative impact on his friendships, said banning the technology is not the answer.
“Trying not to use AI is like trying not to use social media. It’s too integrated,” Nair said. “The solution is not to run from it, but to accept the challenge. When AI makes everything easy, we become vulnerable. Seek out challenges — they build resilience and connection.”
Source: Agency
4 months ago
YouTube to shut down trending page on July 21
YouTube has announced it will shut down its Trending Page on July 21, nearly 10 years after its launch in 2015, citing a significant drop in user engagement.
In a blog post on the YouTube Help page, the company revealed that visits to the Trending Page have decreased sharply over the past five years as users increasingly discover popular content through other features like recommendations, search suggestions, Shorts, comments, and Communities.
Going forward, YouTube will highlight trending content through YouTube Charts. The charts are currently limited to a handful of categories: Trending Music Videos, Weekly Top Podcast Shows, and Trending Movie Trailers, with more to be added over time. Gaming videos will continue to appear on the Gaming Explore page.
In addition to Charts, YouTube said it will offer personalised video recommendations, allowing a “wider range of popular content” to be shown to users based on individual preferences. Non-personalised trending content will still be available via the Explore Page, creator channels, and subscription feeds.
Content creators have long used the Trending Page to promote videos and monitor viral trends. For them, YouTube said the Inspiration tab in YouTube Studio will continue to offer personalised content ideas.
The platform also announced an update to its monetisation policy, aimed at curbing inauthentic, mass-produced content. The new rules take effect on July 15.
Source: NDTV
5 months ago
Meta’s new cloud processing feature raises privacy concerns for Facebook users
Facebook users are being urged to exercise caution before enabling a new feature that allows Meta, Facebook's parent company, to access and scan photos stored on their phones — including those never shared on social media platforms.
The development follows growing concerns over Meta’s use of user data, especially after reports confirmed that the company has been training its artificial intelligence (AI) models using publicly shared photos from Facebook and Instagram.
However, recent revelations indicate that Meta now seeks access to private photos stored on users’ devices, according to a report by asianetnews.
A TechCrunch report, cited by India Today and The Verge, explains that some Facebook users recently received pop-up notifications while attempting to upload a story. The notification offered the option to activate a new feature called Cloud Processing, which enables Meta to automatically upload photos from a user’s camera roll to the company’s cloud.
The feature promises to offer users AI-powered creative tools such as photo collages, event recaps, AI-generated filters, and theme-based suggestions for occasions like birthdays or graduations.
While the feature may appear useful and harmless at first glance, experts warn of significant privacy risks. Once activated, users effectively give Meta permission to scan, analyze, and process personal photos stored on their devices, including those never posted online. Meta’s AI system will reportedly examine faces, objects, locations, dates, and even the metadata embedded in those images.
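To see how much of this information an ordinary photo already carries, the hedged sketch below reads a file's embedded EXIF metadata (capture date, camera model, GPS info) using the Pillow library. The file path is a placeholder, and this illustrates photo metadata in general, not Meta's system.

```python
# A minimal sketch of the embedded metadata an ordinary photo carries,
# the kind of data (capture date, camera, GPS) the article says such a
# system would examine. Uses the Pillow library; the path is a placeholder.

from PIL import Image
from PIL.ExifTags import TAGS

def read_photo_metadata(path: str) -> dict:
    """Return human-readable EXIF tags from an image file."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

metadata = read_photo_metadata("camera_roll/IMG_0001.jpg")  # placeholder path
for tag, value in metadata.items():
    print(f"{tag}: {value}")  # e.g. DateTime, Model, GPSInfo
```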
Meta has defended the feature, describing it as an entirely optional service aimed at enhancing user experience. The company says users can turn the feature on or off at any time. “It is an opt-in feature that you can turn on or off at will,” Meta said.
Despite these assurances, privacy advocates remain concerned, especially considering Meta's history of handling user data. The company recently admitted that it has been using all photos shared publicly on Facebook and Instagram since 2007 to train its generative AI models.
However, Meta has not clearly defined what qualifies as ‘public’ content or what age restrictions apply to its data use policies, raising further questions.
To opt out of Cloud Processing, users can disable the feature through Facebook’s settings. Meta says that if the feature is turned off, any unshared photos uploaded to the cloud will be deleted within 30 days.
As tech companies continue to experiment with the limits of user data collection in the AI era, experts warn that features like Cloud Processing — though presented as tools for user convenience — may quietly expand access to personal data.
Previously, users had to consciously decide to share photos publicly. With Cloud Processing enabled, however, those same photos can be silently uploaded to Meta’s servers, allowing Meta AI to access them.
In this context, experts advise users to carefully review the terms of such features and make informed decisions to protect their privacy.
5 months ago
NetChoice sues Arkansas over social media laws
Tech industry trade group NetChoice filed a lawsuit against the state of Arkansas on Friday, challenging two newly enacted laws that impose restrictions on social media platforms and open the door for parents to sue over harmful content linked to youth suicides.
The lawsuit, filed in federal court in Fayetteville, comes months after a federal judge struck down a previous Arkansas law requiring parental consent for minors to open social media accounts. The new laws were signed earlier this year by Republican Governor Sarah Huckabee Sanders.
“Despite the overwhelming consensus that laws like the Social Media Safety Act are unconstitutional, Arkansas elected to respond to this Court’s decision not by repealing the provisions that it held unconstitutional but by instead doubling down on its overreach,” NetChoice stated in the lawsuit.
Several U.S. states have pursued similar laws, citing concerns over the mental health effects of social media on children. NetChoice, whose members include TikTok, Meta (Facebook’s parent company), and X (formerly Twitter), had successfully challenged Arkansas’ 2023 age-verification requirement for social media users, which a federal judge struck down in March.
Similar laws have also been halted by courts in Florida and Georgia.
A spokesperson for Arkansas Attorney General Tim Griffin said the office was reviewing the lawsuit and looked forward to defending the law.
One of the laws being challenged prohibits social media companies from using designs, algorithms, or features that they “know or should have known through the exercise of reasonable care” could lead users to die by suicide, purchase controlled substances, develop an eating disorder, or become addicted to the platform.
NetChoice argues that this provision is unconstitutionally vague and fails to provide clear guidance on what content would violate the restrictions. The lawsuit also points out that the law would affect both minors and adults.
It questions whether certain songs referencing drugs — like Afroman’s “Because I Got High” — would fall under the new restrictions.
The same law allows parents to sue social media companies if their children died by suicide or attempted to do so after exposure to content promoting self-harm or suicide. Companies found in violation could face civil penalties of up to $10,000 per incident.
NetChoice is also contesting a second law that broadens the scope of Arkansas’ restrictions on social media platforms. This measure requires platforms to prevent minors from receiving notifications between 10 p.m. and 6 a.m. It also prohibits companies from designing their platforms to “evoke any addiction or compulsive behavior.”
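As a minimal illustration of what complying with the curfew provision might involve, the sketch below checks whether a notification to a minor falls inside the overnight 10 p.m. to 6 a.m. window. The law does not prescribe an implementation; this is an assumed approach.

```python
# A minimal sketch of one way a platform might implement the Arkansas
# curfew: suppress push notifications to minors between 10 p.m. and
# 6 a.m. local time. Purely illustrative; the statute does not
# prescribe an implementation.

from datetime import datetime, time

CURFEW_START = time(22, 0)  # 10 p.m.
CURFEW_END = time(6, 0)     # 6 a.m.

def notifications_allowed(is_minor: bool, now: datetime) -> bool:
    """Block notifications for minors inside the overnight window."""
    if not is_minor:
        return True
    t = now.time()
    # The window wraps past midnight, so check both sides of it.
    in_curfew = t >= CURFEW_START or t < CURFEW_END
    return not in_curfew

print(notifications_allowed(is_minor=True, now=datetime(2025, 6, 1, 23, 30)))  # False
print(notifications_allowed(is_minor=True, now=datetime(2025, 6, 1, 12, 0)))   # True
```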
NetChoice contends the law lacks clarity on how platforms can comply, and that its language is so broad it is unclear what types of content or features would constitute a violation.
“What is ‘addictive’ to some minors may not be addictive to others. Does allowing teens to share photos with each other evoke addiction?” the lawsuit stated.
5 months ago
Rise in harmful content on Facebook following Meta's moderation rollback
Meta's latest Integrity Report shows worrying spike in violent posts and harassment after policy shift aimed at easing restrictions on political expression.
Facebook has seen a notable rise in violent content and online harassment following Meta’s decision to ease enforcement of its content moderation policies, according to the company’s latest Integrity Report.
The report, the first since Meta overhauled its moderation strategy in January 2025, reveals that the rollback of stricter content rules has coincided with a drop in content removals and enforcement actions — and a spike in harmful material on its platforms, including Instagram and Threads.
Meta’s shift, spearheaded by CEO Mark Zuckerberg, was aimed at reducing moderation errors and giving more space for political discourse. However, the company now faces growing concern that the relaxed rules may have compromised user safety and platform integrity.
Violent Content and Harassment on the Rise
The report shows that violent and graphic content on Facebook increased from 0.06–0.07 per cent in late 2024 to 0.09 per cent in the first quarter of 2025. While the percentages appear small, the scale is significant for a platform used by billions.
Likewise, bullying and harassment rates rose in the same period. Meta attributed this to a March spike in violating content, noting a slight rise from 0.06–0.07 per cent to 0.07–0.08 per cent. These increases mark a reversal of a downward trend in harmful content seen in previous years.
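Some back-of-the-envelope arithmetic shows why such small percentages still matter at Facebook's scale. The prevalence figures below come from the report; the total-views figure is an assumed round number for illustration, not a Meta statistic.

```python
# Back-of-the-envelope arithmetic on why a "small" prevalence figure
# matters at Facebook's scale. Prevalence values are from the report;
# the total-views figure is a hypothetical round number.

ASSUMED_QUARTERLY_VIEWS = 100_000_000_000  # assumed: 100 billion content views

for label, prevalence in [("late 2024", 0.0007), ("Q1 2025", 0.0009)]:
    violating_views = ASSUMED_QUARTERLY_VIEWS * prevalence
    print(f"{label}: ~{violating_views:,.0f} views of violent/graphic content")

# Under these assumptions, even a 0.02-point rise implies roughly
# 20 million additional views of violating content in a quarter.
```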
Content Removals and Enforcement Plummet
The rise in harmful posts comes as Meta dramatically reduces enforcement activity. Only 3.4 million pieces of content were actioned under its hate speech policy in Q1 2025 — the lowest since 2018. Spam removals also fell sharply, from 730 million at the end of 2024 to 366 million in early 2025. Additionally, the number of fake accounts removed dropped from 1.4 billion to 1 billion.
Meta’s new enforcement strategy focuses primarily on the most severe violations, such as child exploitation and terrorism, while areas previously subject to stricter moderation — including immigration, gender identity, and race — are now framed as protected political expression.
The definition of hate speech has also been narrowed. Under the revised rules, only direct attacks and dehumanising language are flagged. Content previously flagged for expressing contempt or exclusion is now permitted.
Shift in Fact-Checking Strategy
In another major change, Meta has scrapped its third-party fact-checking partnerships in the United States, replacing them with a crowd-sourced system known as Community Notes. The system, now active across Facebook, Instagram, and Threads, including Reels, relies on users to flag and annotate questionable content.
While Meta has yet to release usage data for the new system, critics warn that such an approach could be vulnerable to manipulation and bias in the absence of independent editorial oversight.
Fewer Errors, Says Meta
Despite the concerns, Meta is presenting the new moderation approach as a success in terms of reducing errors. The company claims moderation mistakes in the United States dropped by 50 per cent between the final quarter of 2024 and Q1 2025. However, it has not disclosed how this figure was calculated. Meta says future reports will include more transparency on error metrics.
“We are working to strike the right balance between overreach and under-enforcement,” the report states.
Teen Protections Remain in Place
One area where Meta has not scaled back enforcement is in content directed at teenagers. The company has maintained strict protections against bullying and harmful content for younger users and is introducing dedicated Teen Accounts across its platforms to improve content filtering.
Meta also highlighted growing use of artificial intelligence, including large language models, in its moderation systems. These tools are now exceeding human performance in some cases and can automatically remove posts from review queues if no violation is detected.
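Meta has not published how this queue-clearing works, but the behavior described, automatically removing posts from review when no violation is detected, can be sketched as a confidence-threshold filter. Everything in the sketch below, from field names to the threshold, is a hypothetical stand-in.

```python
# A hedged sketch of the queue-clearing behavior described above: an
# automated classifier scores each queued post, and items it is highly
# confident are non-violating skip human review. Names and thresholds
# are illustrative; Meta has not published its pipeline.

from dataclasses import dataclass

@dataclass
class QueuedPost:
    post_id: str
    violation_score: float  # hypothetical model output: 0 = benign, 1 = violating

AUTO_CLEAR_BELOW = 0.02  # only very confident "no violation" verdicts skip review

def triage(queue: list[QueuedPost]) -> tuple[list[QueuedPost], list[QueuedPost]]:
    """Split the queue into auto-cleared posts and posts kept for humans."""
    cleared = [p for p in queue if p.violation_score < AUTO_CLEAR_BELOW]
    for_review = [p for p in queue if p.violation_score >= AUTO_CLEAR_BELOW]
    return cleared, for_review

queue = [QueuedPost("a", 0.01), QueuedPost("b", 0.35), QueuedPost("c", 0.005)]
cleared, for_review = triage(queue)
print([p.post_id for p in cleared], [p.post_id for p in for_review])
```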
As Meta pushes ahead with its looser content policies, experts and users alike will be watching closely to see whether the company can truly balance free expression with safety — or whether its platforms risk becoming breeding grounds for harmful content.
Source: With inputs from agencies
6 months ago
RobinRafan named Best Content Creator of 2025 at Kidlon-Powered 4th BIFA Awards
Content creator RobinRafan was honored with the Best Content Creator of 2025 award at the Kidlon-Powered 4th BIFA Awards, held last night at the BCFCC Hall of Fame.
The award was given by Asif Ahmed, Acting General Manager of Pan Pacific Sonargaon Hotel, alongside veteran actor Azizul Hakim, in recognition of RobinRafan’s creative contributions across digital platforms.
RobinRafan, also known as Obidur Rahman, creates content across various niches including technology, AI, and VFX, and has also been praised for raising social awareness through his work. He remains active on platforms such as Facebook, YouTube, TikTok, and Instagram, where his diverse content has garnered a large following and widespread engagement.
The event saw the presence of numerous well-known figures from the entertainment industry. Among the attendees were Rojina, Porimoni, Tanjin Tisha, Safa Kabir, Afran Nisho, Siyam, Mamnun Hasan Emon, Shahiduzzaman Selim, and Tariq Anam Khan, making it a night full of star power.
Outside the venue, a large crowd gathered to witness the arrival of celebrities on the red carpet.
The evening also featured a fashion show by Nirob and Apu Biswas, as well as dance performances by Prarthona Fardin Dighi and Barisha Haque.
Several other well-known personalities were recognized during the ceremony, including Afran Nisho, Siyam Ahmed, Mamnun Hasan Emon, singer Imran, Kona, Tanjin Tisha, Mehazabien Chowdhury, and Chanchal Chowdhury.
Speaking at the event, Kidlon's Managing Director, Antu Kareem, remarked that the organization values the efforts of individuals making significant contributions in their respective fields and aims to continue organizing such events to encourage and highlight impactful work.
The 4th BIFA Awards marked a gathering of talent and achievement, with RobinRafan’s recognition highlighting the evolving landscape of content creation in Bangladesh.
6 months ago