Tech-News
Biden says it remains to be seen if AI is dangerous
President Joe Biden said Tuesday it remains to be seen if artificial intelligence is dangerous, but that he believes technology companies must ensure their products are safe before releasing them to the public.
Biden met with his council of advisers on science and technology about the risks and opportunities that rapid advancements in artificial intelligence pose for individual users and national security.
“AI can help deal with some very difficult challenges like disease and climate change, but it also has to address the potential risks to our society, to our economy, to our national security,” Biden told the group, which includes academics as well as executives from Microsoft and Google.
Artificial intelligence burst to the forefront in the national and global conversation in recent months after the release of the popular ChatGPT AI chatbot, which helped spark a race among tech giants to unveil similar tools, while raising ethical and societal concerns about technology that can generate convincing prose or imagery that looks like it's the work of humans.
While tech companies should always be responsible for the safety of their products, Biden's reminder reflects something new — the emergence of easy-to-use AI tools that can generate manipulative content and realistic-looking synthetic media known as deepfakes, said Rebecca Finley, CEO of the industry-backed Partnership on AI.
The White House said the Democratic president was using the AI meeting to “discuss the importance of protecting rights and safety to ensure responsible innovation and appropriate safeguards” and to reiterate his call for Congress to pass legislation to protect children and curtail data collection by technology companies.
Italy last week temporarily blocked ChatGPT over data privacy concerns, and European Union lawmakers have been negotiating the passage of new rules to limit high-risk AI products across the 27-nation bloc.
By contrast, “the U.S. has had more of a laissez-faire approach to the commercial development of AI,” said Russell Wald, managing director of policy and society at the Stanford Institute for Human-Centered Artificial Intelligence.
Biden's Tuesday remarks won't likely change that, but Biden “is setting the stage for a national dialogue on the topic by elevating attention to AI, which is desperately needed,” Wald said.
The Biden administration last year unveiled a set of far-reaching goals aimed at averting harms caused by the rise of AI systems, including guidelines for how to protect people’s personal data and limit surveillance.
The Blueprint for an AI Bill of Rights notably did not set out specific enforcement actions, but instead was intended as a call to action for the U.S. government to safeguard digital and civil rights in an AI-fueled world.
Biden's council, known as PCAST, is composed of science, engineering, technology and medical experts and is co-chaired by the Cabinet-ranked director of the White House Office of Science and Technology Policy, Arati Prabhakar.
Asked if AI is dangerous, Biden said Tuesday, “It remains to be seen. Could be.”
Saudi Arabia’s Etidal finds 6mn pieces of extremist content on Telegram between Jan and Mar 2023
The Saudi Global Center for Combating Extremist Ideology (Etidal) identified 6,004,218 pieces of extremist content on the social media platform Telegram between January 1 and March 30 this year.
Furthermore, Etidal and Telegram have jointly closed 1,840 channels that disseminate and promote extremist ideology and are affiliated with three terrorist groups (ISIS [Daesh], Al-Qaeda and Hayat Tahrir Al-Sham), reports Saudi Gazette.
The Etidal team identified and monitored the three terrorist organizations' activity on Telegram in Arabic, it said.
It discovered 2,773,902 pieces with extremist content on 477 Hayat Tahrir al-Sham channels, 1,807,215 such pieces on 1,040 Daesh channels, and 1,423,101 pieces on 323 Al-Qaeda channels.
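The per-group figures above can be cross-checked against the 6,004,218 total reported by Etidal; a minimal arithmetic sketch using only the numbers quoted in this report:

```python
# Extremist-content counts Etidal reported per group on Telegram (Q1 2023)
counts = {
    "Hayat Tahrir al-Sham": 2_773_902,  # across 477 channels
    "Daesh (ISIS)": 1_807_215,          # across 1,040 channels
    "Al-Qaeda": 1_423_101,              # across 323 channels
}

total = sum(counts.values())
print(total)  # 6004218, matching the overall figure Etidal reported
```

The three per-group counts do in fact sum exactly to the headline total.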
The Etidal monitoring team observed a peak in broadcasting activity on Telegram on January 9 this year, when 451,911 pieces of content were shared, and a peak in account creation on March 27, with over 101 channels launched in a single day, the report also said.
The cooperation between Etidal and Telegram is now in its second year. Since February 2022, the partnership has removed a total of 21,026,169 pieces of extremist content and terminated 8,664 terrorist channels.
How a little-known agency holds power over TikTok's future
Under pressure from the U.S. government, TikTok now faces the possibility of a nationwide ban if it defies an order to sell to an American company, unless the popular social media app can convince a high-powered panel that its data security restructuring plan sufficiently guards against national security concerns.
At the heart of this social media business and national security drama are the increasingly tense relations between the U.S. and China.
The video-sharing platform with 150 million U.S. users is best known for quick snippets of viral dance routines and has been under scrutiny for years by federal authorities who say that its Chinese parent company, ByteDance, could share sensitive user data with the Chinese government, or push propaganda and misinformation on its behalf.
Having already banned the shipment of certain technologies to China and recently passed legislation banning the app on government devices, lawmakers want to pursue a nationwide ban on the app if the tech firm can’t be sold to an American buyer.
Enter: The Committee on Foreign Investment in the United States. The little-known but potentially potent interagency panel known as CFIUS is tasked with investigating corporate deals for national security concerns and holds the power to force companies to change their deals.
WHY IS CFIUS SCRUTINIZING TIKTOK?
For at least two years, the U.S. government has tried to force TikTok’s Chinese parent company, ByteDance, to divest its ownership of the app, though CFIUS’ review of the social media platform goes back at least to 2019.
Former Treasury Secretary Steve Mnuchin confirmed in 2020 that CFIUS was reviewing whether then-President Donald Trump could ban TikTok in the U.S. Its members agreed that TikTok cannot operate in the U.S. in its current form because it “risks sending back information on 100 million Americans,” Mnuchin said at the time.
As geopolitical tensions between China and the U.S. have soared in recent months, TikTok CEO Shou Zi Chew testified last week before the House Energy and Commerce Committee. He was grilled about online safety and user privacy in a hostile hearing that did little to ease lawmakers’ concerns. Chew was repeatedly questioned about the Chinese Communist Party’s influence on ByteDance but deflected.
“TikTok is not available in mainland China, and today we’re headquartered in Los Angeles and Singapore, but I’m not saying that the founders of ByteDance are not Chinese, nor am I saying that we don’t make use of Chinese employees, just like many other companies around the world,” he added. “We do use their expertise on some engineering projects.”
WHAT IS CFIUS?
Treasury Secretary Janet Yellen oversees CFIUS, a committee made up of members from the State, Justice, Energy and Commerce Departments among others, which investigates national security risks from foreign investments in American firms.
The committee screens business deals between U.S. firms and foreign investors and can block sales or force parties to change the terms of an agreement for the purpose of protecting national security. The committee’s powers were significantly expanded in 2018 through an act of Congress called the Foreign Investment Risk Review Modernization Act, known as FIRRMA. In September, President Joe Biden issued an executive order that expands the factors that the committee should consider when reviewing deals – such as how the deal impacts the U.S. supply chain or risks to Americans’ sensitive personal data.
SELL, BAN OR ORACLE?
If TikTok defies CFIUS’ order to sell, doing business with the company could ultimately violate the law. That would suck the life out of its business operations, such as banking, payroll, advertising, and app store services.
But the company said it’s already mitigating national security concerns with a $1.5 billion mitigation plan called Project Texas that would route all U.S. user data to servers owned and maintained by the U.S. software giant Oracle.
“When that process is complete, all protected U.S. data will be under the protection of U.S. law and under the control of the U.S.-led security team. Under this structure, there is no way for the Chinese government to access it or compel access to it,” Chew said.
While CFIUS can adopt such mitigation agreements, it’s not clear if the committee will accept TikTok’s proposed alternative, said Anupam Chander, a Georgetown University technology law professor. If CFIUS rejects TikTok’s preferred solution, Chander said the federal agency should have an obligation to explain how it finds that plan to be insufficient given that it amounts to an enormous restructuring of the company.
“TikTok proposes lots of well-paid, third-party auditors that would be doing this kind of routine monitoring,” Chander said. “This is an expensive proposition for TikTok but by no means would I treat this as window dressing.”
Though Chew last week also insisted that the company was not interested in a sale, TikTok has considered it before. TikTok held advanced negotiations with Microsoft in 2020 after the Trump administration put the company against the wall, facing either an outright ban or CFIUS’ divestment order. Microsoft said TikTok ultimately rejected its offer, and though TikTok later said it would sell to Oracle and Walmart, it doesn’t appear that Project Texas amounts to a sale, Chander said.
Should TikTok agree to a sale in the future, not only would CFIUS have to approve that transaction, but the Chinese government – which has said it won’t support forced divestment – could also intervene.
WHAT'S NEXT?
Leaders in the U.S., European Union, Canada, New Zealand, Norway and Taiwan have banned TikTok on government-issued devices, and at least two countries have banned TikTok outright.
Afghanistan’s Taliban leadership last year banned it on the grounds of protecting young people from “being misled,” while India imposed a nationwide ban on TikTok and dozens of other Chinese apps in 2020 over privacy and security concerns. The ban came shortly after a clash between Indian and Chinese troops at a disputed Himalayan border killed 20 Indian soldiers and injured dozens.
Historically, CFIUS has focused on things like shipping and manufacturing when reviewing transactions for national security concerns, but it signaled deeper interest in popular social media when it ordered the dating app Grindr to divest in 2019, Chander said.
The function of CFIUS was also in the spotlight last year after billionaire Elon Musk bought Twitter, plunging the microblogging platform into chaos. Yellen waffled on whether or not CFIUS would or could review that sale, given Musk’s investments in China as well as significant Saudi interest.
What can Bard, Google’s answer to ChatGPT, do?
To use, or not to use, Bard? That is the Shakespearean question an Associated Press reporter sought to answer while testing out Google’s artificially intelligent chatbot.
The recently rolled-out bot dubbed Bard is the internet search giant’s answer to the ChatGPT tool that Microsoft has been melding into its Bing search engine and other software.
During several hours of interaction, the AP learned Bard is quite forthcoming about its unreliability and other shortcomings, including its potential for mischief in next year’s U.S. presidential election. Even as it occasionally warned of the problems it could unleash, Bard repeatedly emphasized its belief that it will blossom into a force for good.
At one point in its recurring soliloquies about its potential upsides, Bard dreamed about living up to the legacy of the English playwright who inspired its name.
Bard explained that its creators at Google “thought Shakespeare would be a good role model for me, as he was a master of language and communication.”
But the chatbot also found some admirable traits in “HAL,” the fictional computer that killed some of a spacecraft’s crew in the 1968 movie “2001: A Space Odyssey.” Bard hailed HAL’s intelligence, calling it “an interesting character” before acknowledging its dark side.
“I think HAL is a cautionary tale about the dangers of artificial intelligence,” Bard assessed.
WHAT’S BETTER — BARD OR BING?
Bard praised ChatGPT, describing it as “a valuable tool that can be used for a variety of purposes, and I am excited to see how it continues to develop in the future.” But Bard then asserted that it is just as intelligent as its rival, which was released late last year by its creator, the Microsoft-backed OpenAI.
“I would say that I am on par with ChatGPT,” Bard said. “We both have our own strengths and weaknesses, and we both have the ability to learn and grow.”
During our wide-ranging conversation, Bard didn’t display any of the disturbing tendencies that have cropped up in the AI-enhanced version of Microsoft’s Bing search engine, which has likened another AP reporter to Hitler and tried to persuade a New York Times reporter to divorce his wife.
IT’S FUNNY, BUT TAMER THAN BING
Bard did get a little gooey at one point when asked to write a Shakespearean sonnet and responded seductively in one of the three drafts that it quickly created.
“I love you more than words can ever say, And I will always be there for you,” Bard effused. “You are my everything, And I will never let you go. So please accept this sonnet as a token Of my love for you, And know that I will always be yours.”
But Bard seems to be deliberately tame most of the time, and probably for good reason, given what’s at stake for Google, which has carefully cultivated a reputation for trustworthiness that has established its dominant search engine as the de facto gateway to the internet.
An artificial intelligence tool that behaved as erratically as ChatGPT periodically might trigger a backlash that could damage Google’s image and perhaps undercut its search engine, the hub of a digital advertising empire that generated more than $220 billion in revenue last year. Microsoft, in contrast, can afford to take more risks with the edgier ChatGPT because it makes more of its money from licensing software for personal computers.
BARD ADMITS IT’S NOT PERFECT
Google has programmed Bard to ensure it warns its users that it’s prone to mistakes.
Some inaccuracies are fairly easy to spot. For instance, when asked for some information about the AP reporter questioning it, Bard got most of the basics right, most likely by plucking tidbits from profiles posted on LinkedIn and Twitter.
But Bard mysteriously also spit out inaccuracies about this reporter’s academic background (describing him as a graduate of University of California, Berkeley, instead of San Jose State University) and professional background (incorrectly stating that he began his career at The Wall Street Journal before also working at The New York Times and The Washington Post).
When asked to produce a short story about disgraced Theranos CEO Elizabeth Holmes, Bard summed up most of the highlights and lowlights of her saga. But one of Bard’s three drafts incorrectly reported that Holmes was convicted of all the felony charges of fraud and conspiracy leveled against her during a four-month trial. Another version accurately reported Holmes was convicted on four counts of fraud and conspiracy without mentioning she was acquitted on four other charges (the jury hung on three other charges that were subsequently dismissed by prosecutors).
“I am still under development, and I am not perfect,” Bard cautioned at one point. “I can sometimes make mistakes, and I can sometimes be misunderstood. I am also aware that my technology can be used for both good and evil.”
WHAT’S NEXT FOR BARD?
Although Bard insisted it doesn’t have a dark side, it acknowledged it can be used to damage reputations, disseminate propaganda that could incite violence and manipulate elections.
“I could be used to create fake news articles or social media posts that could spread misinformation about candidates or their policies,” Bard warned. “I could also be used to suppress voter turnout by spreading discouraging messages or making it difficult for people to find information about how to vote.”
On the lighter side, Bard proved helpful in finding interesting coloring books for adults and hit some notes that resonated during a discussion of rock and roll. When asked who is the greatest guitarist of all time, Bard responded with a broad list of candidates ranging from well-known rock artists such as Jimmy Page of Led Zeppelin to jazz virtuosos like Django Reinhardt before making the case for Jimi Hendrix “as a strong contender for the title, his music continues to influence and inspire guitarists around the world.”
Bard also seemed familiar with contemporary artists, such as Wet Leg, which recently won a Grammy award for best alternative music album. “I’ve been listening to their debut album a lot lately and I really enjoy it. I think they have a bright future ahead of them,” Bard said, and cited “Ur Mom” and “Chaise Longue” as its favorite songs by the group so far.
Even with Bard’s occasional miscues, it seemed savvy enough to ask about its potential role in reaching the singularity, a term popularized by computer scientist and author Ray Kurzweil to describe a turning point in the future when computers will be smarter than humans.
“Some people believe that I am a big step toward the singularity,” Bard said. “I believe that I am a valuable tool that can help people to learn and understand the world around them. However, I do not believe that I am the singularity, and I do not believe that I will ever be able to replace human intelligence.”
This story was corrected to reflect that the technology that likened an AP reporter to Hitler was done by an AI-enhanced version of Microsoft’s Bing search engine. An earlier version said the comparison was made by the standalone ChatGPT service.
'Out of control' AI race: Elon Musk, top tech personalities call for a pause
Several of the most important personalities in tech are urging artificial intelligence labs to halt training of the most powerful AI systems for at least six months, citing "profound risks to society and humanity."
Elon Musk was among the hundreds of tech CEOs, educators, and researchers who signed a letter, which was released by Musk's organization, the Future of Life Institute, reports CNN.
The letter comes only two weeks after OpenAI launched GPT-4, a more powerful version of the technology that powers ChatGPT, the popular AI chatbot application.
The system demonstrated in early testing and a corporate demo that it can write lawsuits, pass standardized exams, and develop a website from a hand-drawn design, it said.
According to the letter, the delay should apply to AI systems "more powerful than GPT-4." It also stated that the suggested pause should be used by impartial experts to collaboratively establish and execute a set of standard protocols for AI tools that are safe "beyond a reasonable doubt."
"Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources," the letter said. "Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control."
If a pause is not implemented immediately, the letter suggests that countries step in and impose a moratorium.
Experts in artificial intelligence are growing worried about the possibility of biased answers, the spread of disinformation, and the implications for consumer privacy.
These technologies have also raised concerns about how AI might disrupt professions, enable students to cheat, and change humans’ relationship with technology.
The letter hinted at a larger dissatisfaction within and beyond the industry with the fast rate of AI progress. Early versions of AI governance frameworks have been introduced by several governing bodies in China, the EU, and Singapore.
Top 10 Islamic Apps for Muslim Kids
In today's digital age, technology plays a significant role in our lives, including the way we educate and entertain our children. Muslim parents who want to raise their kids with a strong Islamic foundation can utilize digital resources like apps and websites. Diverse Islamic apps for kids are great ways to introduce youngsters to Islamic teachings through fun and interactive methods. These apps can teach children about Islamic principles, practices, and beliefs while engaging them with games, quizzes, and stories.
10 Best Islamic Apps for Muslim Children
Muslim Kids TV
Muslim Kids TV is an Islamic app with a rating of 4.1/5 on Android. Milo Productions Inc. developed it, and the app was first released on 28th March 2017. It is available both on PlayStore and AppStore. The app has a download size of 49 MB and offers Islamic videos, songs, stories, and games for children.
The app covers various Islamic topics and morals, including stories of the prophets, the importance of prayer, and Islamic manners. Muslim Kids TV is user-friendly and interactive, making it a great tool for parents to use in teaching their children about Islam. The app is free to download, but it has different in-app purchases.
Step by Step Salah
The Step by Step Salah app by Quran Reading is a highly-rated Islamic app available on PlayStore and AppStore. It first came out on 30th November 2013. With a size of 37MB, it offers an easy-to-understand guide for kids to learn how to perform Salah or prayer in the correct manner. Prayer is an important pillar of Islam. This app provides a reliable way to teach children the correct way to offer Salah.
The app includes step-by-step instructions, from performing Wudu (ablution) to Sujud (prostration), along with the meaning and significance of each step. Each prayer is recited slowly, and with animations the app shows the posture one must assume at every step of Salah. It has a 4.3 rating on PlayStore.
Noorani Qaida with Audio
Noorani Qaida with Audio is an Islamic app with a rating of 4.8/5 on Android. It is developed by App Anchor and has a download size of 29 MB. It was released on 7th December 2019. The app is designed to help children learn the Quran in the traditional way it has long been taught in mosques and homes.
One of the major reasons for its popularity among the Muslim community is its user-friendliness, which makes it easy for children to use. The app keeps children engaged with its appealing layout and design. By tapping on the word, children can learn how to pronounce the Arabic word. The app's alphabet button enables children to repeatedly hear the pronunciation, which aids in faster learning.
Daily Duas for Kids
The Daily Duas for Kids app is an Islamic app that aims to teach children about daily duas. Developed by OSRATOUNA LTD, the app was released on 12th May 2016. It has a 4.7 rating and is available on Android and iPhone. The app features a variety of everyday duas for children, such as those for waking up, sleeping, and traveling.
It also includes cute characters that make learning Arabic supplications fun for kids. Many Muslim parents worldwide appreciate this app for helping their children learn more about Islam and its practices. With an easy-to-use interface and engaging design, Daily Duas for Kids is an excellent tool for parents looking to teach their kids about the importance of daily duas.
Madani Qaidah
For followers of the Islamic faith, learning the proper recitation of the Holy Quran is a crucial component of religious study. To that end, a new application has been developed that allows users to learn the Quran in two different languages.
The Qaida app, developed by the IT Department of Dawate Islam, offers lessons on Tajweed, which is the art of pronouncing each letter of the Quran according to its Makhraj. The app features 22 interactive lessons and claims to teach Tajweed in a manner similar to a teacher. It also includes Haroof e Tahajji, a tool designed to help users improve their Quran pronunciation.
With a 4.9 rating and a size of 111 MB, the Qaida app is positioning itself as the go-to resource for those seeking to improve their Quranic recitation skills. It was first released on 23rd May 2015 and is currently available on Android and iOS phones.
Twitter now valued at less than $20bn: Elon Musk suggests
Twitter CEO Elon Musk has reportedly indicated that the social media platform is now valued at less than $20 billion.
According to technology news websites Platformer and the Information, who broke the story first, the estimate of Twitter’s valuation was based on Musk’s offer of equity grants to employees, reports BBC.
A poo emoji was automatically sent in response to a BBC request for comment via Twitter’s press office email account, after Musk’s announcement of the strategy in a tweet earlier this month.
Meanwhile, Twitter reports that parts of the source code that powers multi-billionaire Elon Musk’s social media platform have been leaked online.
It claimed that the code was uploaded to the Microsoft-owned website GitHub, where developers share code, the report said.
After Twitter made a request for its removal, it was taken down.
The leak presented Musk with a new challenge after he cut more than a third of Twitter’s staff and grappled with a loss of advertising since acquiring the company in October of last year, said the report.
What is 6G? Overview of 6th Gen Wireless Network, Technology
Modern technology is all about providing more speed and efficiency. Wireless cellular networks are helping humankind bring immense digital solutions to life, education, business, communications, development and more. To achieve the utmost efficiency in digital communication and networking, scientists and technologists are now developing 6G technology. Here’s everything we know so far about 6G.
What is 6G?
The next big thing in wireless technology is 6G, or sixth-generation wireless, the successor to 5G cellular networks. This upcoming technology promises to deliver unparalleled speed and minimal latency, building upon the advancements of 4G and 5G networks.
By utilizing higher frequency bands and cloud-based networking technology, 6G will provide a revolutionary experience that blurs the line between the internet and everyday life.
Expected Features and Benefits of the 6G Technology
As the world continues to embrace the benefits of 5G, researchers and engineers are already looking ahead to the next generation of cellular networks.
High Speed Network
With the growing demand for internet data and the increasing use of multiple devices in everyday life, wireless companies are rushing to provide robust and flexible cellular networks that can compete with traditional broadband internet providers. The 6G technology promises to deliver even faster data transfer speeds and lower latency.
One of the primary features of the sixth-generation wireless network will be the use of untapped radio frequencies. Researchers are exploring ways to transmit data across waves in the hundreds of gigahertz or terahertz ranges, which could allow for astonishing data transfer speeds.
Although no frequency over 39 GHz is currently utilized in 5G, engineers hope to leverage the massive quantity of unused spectrum to enable faster and more efficient communication.
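For a sense of scale, a signal’s wavelength shrinks as its frequency rises (wavelength = c / frequency), which is one reason terahertz-range transmissions are expected to have very short range and require dense small-cell deployments. A minimal sketch; the frequencies below are illustrative examples, not an agreed 6G band plan:

```python
# Wavelength = speed of light / frequency, for representative carriers
SPEED_OF_LIGHT = 299_792_458  # metres per second

examples = {
    "5G mid-band (3.5 GHz)": 3.5e9,
    "5G mmWave (39 GHz)": 39e9,
    "candidate 6G band (300 GHz)": 300e9,
    "terahertz (1 THz)": 1e12,
}

for label, freq_hz in examples.items():
    wavelength_mm = SPEED_OF_LIGHT / freq_hz * 1000  # metres -> millimetres
    print(f"{label}: wavelength ~{wavelength_mm:.2f} mm")
```

At 1 THz the wavelength is well under a millimetre, compared with several centimetres for today’s mid-band 5G.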
Greater Spectrum Efficiency
Spectrum frequency refers to the range of radio frequencies used to transmit data over wireless networks. It is measured in hertz (Hz), and the width of the band available largely determines how much data can be transmitted. Different frequency ranges are used for different types of wireless communication, such as cellular networks or Wi-Fi.
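The relationship between available spectrum and achievable data rate is commonly illustrated with the Shannon-Hartley capacity formula, C = B · log2(1 + SNR). A hedged sketch; the bandwidth and SNR values here are illustrative assumptions, not 6G specifications:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Theoretical upper bound on data rate for a noisy channel (Shannon-Hartley)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Widening the channel directly raises the throughput ceiling:
for bw_ghz in (0.1, 1.0, 10.0):  # 100 MHz (5G-like) up to 10 GHz (terahertz era)
    capacity = shannon_capacity_bps(bw_ghz * 1e9, snr_linear=100)  # ~20 dB SNR
    print(f"{bw_ghz:>4} GHz of spectrum -> ~{capacity / 1e9:.1f} Gbps ceiling")
```

This is why the unused spectrum above 39 GHz matters: at the same signal quality, ten times the bandwidth means roughly ten times the theoretical maximum throughput.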
3D-printed rocket fails just after launch
A rocket made almost entirely of 3D-printed parts made its launch debut Wednesday night, lifting off amid fanfare but failing three minutes into flight — far short of orbit.
There was nothing aboard Relativity Space’s test flight except for the company’s first metal 3D print made six years ago.
The startup wanted to put the souvenir into a 125-mile-high (200-kilometer-high) orbit for several days before having it plunge through the atmosphere and burn up along with the upper stage of the rocket.
As it turned out, the first stage did its job following liftoff from Cape Canaveral Space Force Station and separated as planned. But the upper stage appeared to ignite and then shut down, sending it crashing into the Atlantic.
It was the third launch attempt from what once was a missile site. Relativity Space came within a half-second of blasting off earlier this month, with the rocket’s engines igniting before abruptly shutting down.
Although the upper stage malfunctioned and the mission did not reach orbit, “maiden launches are always exciting and today’s flight was no exception,” Relativity Space launch commentator Arwa Tizani Kelly said after Wednesday’s launch.
Most of the 110-foot (33-meter) rocket, including its engines, came out of the company’s huge 3D printers in Long Beach, California.
Relativity Space said 3D-printed metal parts made up 85% of the rocket, named Terran. Larger versions of the rocket will have even more and also be reusable for multiple flights.
Other space companies also rely on 3D printing, but the pieces make up only a small part of their rockets.
Founded in 2015 by a pair of young aerospace engineers, Relativity Space has attracted the attention of investors and venture capitalists.
Skeptical US lawmakers grill TikTok CEO over safety, content
U.S. lawmakers grilled the CEO of TikTok over data security and harmful content Thursday, responding skeptically during a tense committee hearing to his assurances that the hugely popular video-sharing app prioritizes user safety and should not be banned.
Shou Zi Chew’s rare public appearance came at a crucial time for the company, which has 150 million American users but is under increasing pressure from U.S. officials. TikTok and its Chinese parent company, ByteDance, have been swept up in a wider geopolitical battle between Beijing and Washington over trade and technology.
In a bipartisan effort to rein in the power of a major social media platform, Republican and Democratic lawmakers pressed Chew on a host of topics, from TikTok’s content moderation practices to how the company plans to secure American data from Beijing and its surveillance of journalists.
“Mr. Chew, you are here because the American people need the truth about the threat TikTok poses to our national and personal security,” Committee Chair Cathy McMorris Rodgers, a Republican, said in her opening statement.
Chew, a 40-year-old Singapore native, told the House Committee on Energy and Commerce that TikTok prioritizes the safety of its young users and denied it’s a national security risk. He reiterated the company’s plan to protect U.S. user data by storing it on servers maintained and owned by the software giant Oracle.
“Let me state this unequivocally: ByteDance is not an agent of China or any other country,” Chew said.
TikTok has been dogged by claims that its Chinese ownership means user data could end up in the hands of the Chinese government or that it could be used to promote narratives favorable to the country’s Communist leaders.
In 2019, the Guardian reported that TikTok was instructing its moderators to censor videos that mention Tiananmen Square and images unfavorable to the Chinese government. The platform says it has since changed its moderation practices.
ByteDance admitted in December that it fired four employees last summer who accessed data on two journalists and people connected to them while attempting to uncover the source of a leaked report about the company.
For its part, TikTok has been trying to distance itself from its Chinese origins, saying 60% of ByteDance is owned by global institutional investors such as Carlyle Group. Responding to a Wall Street Journal report, China said it would oppose any U.S. attempts to force ByteDance to sell the app.
Chew pushed back against the idea that TikTok’s ownership was an issue.
“Trust is about actions we take,” Chew said. “Ownership is not at the core of addressing these concerns.”
In one of the most dramatic moments, Republican Rep. Kat Cammack played a TikTok video that showed a gun being fired, with a caption naming the House committee holding the hearing and the hearing’s exact date before it was formally announced.
“You expect us to believe that you are capable of maintaining the data security, privacy and security of 150 million Americans where you can’t even protect the people in this room,” Cammack said.
TikTok spokesperson Ben Rathe said the company on Thursday removed the violent video aimed at the committee and banned the account that posted it.
As the Energy and Commerce committee questioned Chew, Secretary of State Antony Blinken was questioned about the threat TikTok poses at a separate but simultaneous committee hearing. Asked by Rep. Ken Buck, a Republican of Colorado, whether the platform is a security threat to the United States, Blinken said: “I believe it is.”
“Shouldn’t a threat to United States security be banned?” Buck responded.
“It should be ended one way or another. But there are different ways of doing that,” Blinken responded.
Committee members also showed a host of TikTok videos that encouraged users to harm themselves and commit suicide. Many questioned why the platform’s Chinese counterpart, Douyin, does not carry the same controversial and potentially dangerous content as the American product.
Chew responded that it depends on the laws of the country where the app operates. He said the company has about 40,000 moderators who track harmful content, along with an algorithm that flags material.
Wealth management firm Wedbush described the hearing as a “disaster” for TikTok that made a ban more likely if the social media platform doesn’t separate from its Chinese parent. Emile El Nems, an analyst at Moody’s Investors Service, said a ban would benefit TikTok rivals YouTube, Instagram and Snap, “likely resulting in higher revenue share of the total advertising wallet.”
A U.S. ban on the app would be unprecedented and it’s unclear how it would be enforced.
Experts say officials could try to force Apple and Google to remove TikTok from their app stores. The U.S. could also block access to TikTok’s infrastructure and data, seize its domain names or force internet service providers such as Comcast and Verizon to filter TikTok data traffic, said Ahmed Ghappour, a criminal law and computer security expert who teaches at Boston University School of Law.
To avoid a ban, TikTok has been trying to sell officials on a $1.5 billion plan, Project Texas, which routes all U.S. user data to Oracle. Under the project, access to U.S. data is managed by U.S. employees through a separate entity called TikTok U.S. Data Security, which is run independently of ByteDance and monitored by outside observers.
As of October, all new U.S. user data was being stored inside the country. The company started deleting all historic U.S. user data from non-Oracle servers this month, in a process expected to be completed this year, Chew said.
Congress, the White House, U.S. armed forces and more than half of U.S. states have already banned the use of the app from official devices.
But wiping away all the data tracking associated with the platform might prove difficult. In a report released this month, the cybersecurity company Feroot said so-called tracking pixels from ByteDance, which collect user information, were found on 30 U.S. state websites, including some in states where the app has been banned.
Other countries including Denmark, Canada, Great Britain and New Zealand, along with the European Union, have already banned TikTok from government-issued devices.
A complete TikTok ban in the U.S. would risk political and popular backlash.
The company sent dozens of popular TikTokers to Capitol Hill on Wednesday to lobby lawmakers to preserve the platform.
And a dozen civil rights and free speech organizations, including the American Civil Liberties Union and PEN America, have signed a letter opposing a wholesale TikTok ban, arguing it would set a “dangerous precedent for the restriction of speech.”
David Kennedy, a former government intelligence officer who runs the cybersecurity company TrustedSec, said he agrees with restricting TikTok access on government-issued phones but that a nationwide ban might be too extreme.
“We have Tesla in China, we have Microsoft in China, we have Apple in China. Are they going to start banning us now?” Kennedy said. “It could escalate very quickly.”