Tech-News
How to get around Instagram’s new limits on political content
Instagram has begun an automatic clampdown on the amount of political content appearing in its users' feeds, but there is a relatively quick and easy way to turn off the controls if you don't want to keep the limitations in place.
As part of an initiative Instagram announced last month, the popular social media service owned by Meta Platforms has stopped “proactively” recommending political content posted on accounts that users don't choose to follow. To do that, Instagram has automatically set the “political content” control to “limit” on user accounts.
The limits also affect users with Threads accounts tied to their Instagram accounts.
The change has triggered an uproar among some users who feel Instagram is unnecessarily limiting political discourse in a year when pivotal elections are being held in the U.S. and other countries.
Here's how to get around Instagram's political curbs in just a few steps.
1. To open the political spigot again on Instagram, open the app on your smartphone. Then tap the three-dash menu at the top right.
2. Navigate to “Settings and privacy,” then choose “Content preferences,” then open the “Political content” menu.
3. Find and turn on the “Don’t limit” option.
Once that is done, you should once again start to see posts relating to government, elections and other political matters shared from accounts that you don't follow flowing through your feed.
1 year ago
How to spot AI-generated deepfake images
AI fakery is quickly becoming one of the biggest problems confronting us online. Deceptive pictures, videos and audio are proliferating as a result of the rise and misuse of generative artificial intelligence tools.
With AI deepfakes cropping up almost every day, depicting everyone from Taylor Swift to Donald Trump, it’s getting harder to tell what’s real from what’s not. Video and image generators like DALL-E, Midjourney and OpenAI’s Sora make it easy for people without any technical skills to create deepfakes — just type a request and the system spits it out.
These fake images might seem harmless. But they can be used to carry out scams and identity theft or propaganda and election manipulation.
Here is how to avoid being duped by deepfakes:
HOW TO SPOT A DEEPFAKE
In the early days of deepfakes, the technology was far from perfect and often left telltale signs of manipulation. Fact-checkers have pointed out images with obvious errors, like hands with six fingers or eyeglasses that have differently shaped lenses.
But as AI has improved, it has become a lot harder. Some widely shared advice — such as looking for unnatural blinking patterns among people in deepfake videos — no longer holds, said Henry Ajder, founder of consulting firm Latent Space Advisory and a leading expert in generative AI.
Still, there are some things to look for, he said.
A lot of AI deepfake photos, especially of people, have an electronic sheen to them, “an aesthetic sort of smoothing effect” that leaves skin “looking incredibly polished,” Ajder said.
He warned, however, that creative prompting can sometimes eliminate this and many other signs of AI manipulation.
Check the consistency of shadows and lighting. Often the subject is in clear focus and appears convincingly lifelike but elements in the backdrop might not be so realistic or polished.
LOOK AT THE FACES
Face-swapping is one of the most common deepfake methods. Experts advise looking closely at the edges of the face. Does the facial skin tone match the rest of the head or the body? Are the edges of the face sharp or blurry?
If you suspect video of a person speaking has been doctored, look at their mouth. Do their lip movements match the audio perfectly?
Ajder suggests looking at the teeth. Are they clear, or are they blurry and somehow not consistent with how they look in real life?
Cybersecurity company Norton says algorithms might not be sophisticated enough yet to generate individual teeth, so a lack of outlines for individual teeth could be a clue.
THINK ABOUT THE BIGGER PICTURE
Sometimes the context matters. Take a beat to consider whether what you’re seeing is plausible.
The Poynter journalism website advises that if you see a public figure doing something that seems “exaggerated, unrealistic or not in character,” it could be a deepfake.
For example, would the pope really be wearing a luxury puffer jacket, as depicted by a notorious fake photo? If he did, wouldn’t there be additional photos or videos published by legitimate sources?
USING AI TO FIND THE FAKES
Another approach is to use AI to fight AI.
Microsoft has developed an authenticator tool that can analyze photos or videos and give a confidence score on whether they have been manipulated. Chipmaker Intel’s FakeCatcher uses algorithms to analyze an image’s pixels to determine if it’s real or fake.
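For readers curious what "analyzing an image's pixels" can look like in practice, here is a minimal sketch of error level analysis (ELA), a classic image-forensics heuristic. It is not how FakeCatcher or Microsoft's authenticator actually work, as those are proprietary; it simply shows one way software can surface regions of a photo with an inconsistent compression history. The file names are placeholders.

```python
# Error level analysis (ELA): recompress a JPEG and diff it against the
# original. Regions edited or pasted in after the original compression
# tend to stand out in the amplified difference map. This is only an
# illustration of pixel-level analysis, not any vendor's actual method.
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90, scale=15):
    original = Image.open(path).convert("RGB")
    original.save("_resaved.jpg", "JPEG", quality=quality)  # recompress once
    resaved = Image.open("_resaved.jpg")
    diff = ImageChops.difference(original, resaved)  # per-pixel delta
    return diff.point(lambda px: min(255, px * scale))  # amplify for viewing

# error_level_analysis("suspect.jpg").save("ela_map.png")  # hypothetical files
```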
There are tools online that promise to sniff out fakes if you upload a file or paste a link to the suspicious material. But some, like Microsoft’s authenticator, are only available to selected partners and not the public. That’s because researchers don’t want to tip off bad actors and give them a bigger edge in the deepfake arms race.
Open access to detection tools could also give people the impression they are “godlike technologies that can outsource the critical thinking for us” when instead we need to be aware of their limitations, Ajder said.
THE HURDLES TO FINDING FAKES
All this being said, artificial intelligence has been advancing at breakneck speed, and AI models are being trained on internet data to produce increasingly high-quality content with fewer flaws.
That means there’s no guarantee this advice will still be valid even a year from now.
Experts say it might even be dangerous to put the burden on ordinary people to become digital Sherlocks because it could give them a false sense of confidence as it becomes increasingly difficult, even for trained eyes, to spot deepfakes.
1 year ago
AI supercharges threat of disinformation in a big year for elections globally
Artificial intelligence is supercharging the threat of election disinformation worldwide, making it easy for anyone with a smartphone and a devious imagination to create fake – but convincing – content aimed at fooling voters.
It marks a quantum leap from a few years ago, when creating phony photos, videos or audio clips required teams of people with time, technical skill and money. Now, using free and low-cost generative artificial intelligence services from companies like Google and OpenAI, anyone can create high-quality “deepfakes” with just a simple text prompt.
A wave of AI deepfakes tied to elections in Europe and Asia has coursed through social media for months, serving as a warning for more than 50 countries heading to the polls this year.
“You don’t need to look far to see some people ... being clearly confused as to whether something is real or not,” said Henry Ajder, a leading expert in generative AI based in Cambridge, England.
The question is no longer whether AI deepfakes could affect elections, but how influential they will be, said Ajder, who runs a consulting firm called Latent Space Advisory.
As the U.S. presidential race heats up, FBI Director Christopher Wray recently warned about the growing threat, saying generative AI makes it easy for "foreign adversaries to engage in malign influence.”
With AI deepfakes, a candidate’s image can be smeared, or softened. Voters can be steered toward or away from candidates — or even to avoid the polls altogether. But perhaps the greatest threat to democracy, experts say, is that a surge of AI deepfakes could erode the public’s trust in what they see and hear.
Some recent examples of AI deepfakes include:
— A video of Moldova's pro-Western president throwing her support behind a political party friendly to Russia.
— Audio clips of Slovakia's liberal party leader discussing vote rigging and raising the price of beer.
— A video of an opposition lawmaker in Bangladesh — a conservative Muslim majority nation — wearing a bikini.
The novelty and sophistication of the technology make it hard to track who is behind AI deepfakes. Experts say governments and companies are not yet capable of stopping the deluge, nor are they moving fast enough to solve the problem.
As the technology improves, “definitive answers about a lot of the fake content are going to be hard to come by,” Ajder said.
ERODING TRUST
Some AI deepfakes aim to sow doubt about candidates' allegiances.
In Moldova, an Eastern European country bordering Ukraine, pro-Western President Maia Sandu has been a frequent target. One AI deepfake that circulated shortly before local elections depicted her endorsing a Russian-friendly party and announcing plans to resign.
Officials in Moldova believe the Russian government is behind the activity. With presidential elections this year, the deepfakes aim “to erode trust in our electoral process, candidates and institutions — but also to erode trust between people,” said Olga Rosca, an adviser to Sandu. The Russian government declined to comment for this story.
China has also been accused of weaponizing generative AI for political purposes.
In Taiwan, a self-ruled island that China claims as its own, an AI deepfake gained attention earlier this year by stirring concerns about U.S. interference in local politics.
The fake clip circulating on TikTok showed U.S. Rep. Rob Wittman, vice chairman of the U.S. House Armed Services Committee, promising stronger U.S. military support for Taiwan if the incumbent party's candidates were elected in January.
Wittman blamed the Chinese Communist Party for trying to meddle in Taiwanese politics, saying it uses TikTok — a Chinese-owned company — to spread “propaganda.”
A spokesperson for the Chinese foreign ministry, Wang Wenbin, said his government doesn't comment on fake videos and that it opposes interference in other countries' internal affairs. The Taiwan election, he stressed, “is a local affair of China.”
BLURRING REALITY
Audio-only deepfakes are especially hard to verify because, unlike photos and videos, they lack telltale signs of manipulated content.
In Slovakia, another country overshadowed by Russian influence, audio clips resembling the voice of the liberal party chief were shared widely on social media just days before parliamentary elections. The clips purportedly captured him talking about hiking beer prices and rigging the vote.
It's understandable that voters might fall for the deception, Ajder said, because humans are “much more used to judging with our eyes than with our ears.”
In the U.S., robocalls impersonating U.S. President Joe Biden urged voters in New Hampshire to abstain from voting in January's primary election. The calls were later traced to a political consultant who said he was trying to publicize the dangers of AI deepfakes.
In poorer countries, where media literacy lags, even low-quality AI fakes can be effective.
Such was the case last year in Bangladesh, where opposition lawmaker Rumeen Farhana — a vocal critic of the ruling party — was falsely depicted wearing a bikini. The viral video sparked outrage in the conservative, majority-Muslim nation.
“They trust whatever they see on Facebook,” Farhana said.
Experts are particularly concerned about upcoming elections in India, the world’s largest democracy and where social media platforms are breeding grounds for disinformation.
A CHALLENGE TO DEMOCRACY
Some political campaigns are using generative AI to bolster their candidate’s image.
In Indonesia, the team that ran the presidential campaign of Prabowo Subianto deployed a simple mobile app to build a deeper connection with supporters across the vast island nation. The app enabled voters to upload photos and make AI-generated images of themselves with Subianto.
As the types of AI deepfakes multiply, authorities around the world are scrambling to come up with guardrails.
The European Union already requires social media platforms to cut the risk of spreading disinformation or “election manipulation.” It will mandate special labeling of AI deepfakes starting next year, too late for the EU's parliamentary elections in June. Still, the rest of the world is a lot further behind.
The world's biggest tech companies recently — and voluntarily — signed a pact to prevent AI tools from disrupting elections. For example, the company that owns Instagram and Facebook has said it will start labeling deepfakes that appear on its platforms.
But deepfakes are harder to rein in on apps like the Telegram chat service, which did not sign the voluntary pact and uses encrypted chats that can be difficult to monitor.
Some experts worry that efforts to rein in AI deepfakes could have unintended consequences.
Well-meaning governments or companies might trample on the sometimes “very thin” line between political commentary and an “illegitimate attempt to smear a candidate,” said Tim Harper, a senior policy analyst at the Center for Democracy and Technology in Washington.
Major generative AI services have rules to limit political disinformation. But experts say it remains too easy to outwit the platforms' restrictions or use alternative services that don't have the same safeguards.
Even without bad intentions, the rising use of AI is problematic. Many popular AI-powered chatbots are still spitting out false and misleading information that threatens to disenfranchise voters.
And software isn't the only threat. Candidates could try to deceive voters by claiming that real events portraying them in an unfavorable light were manufactured by AI.
“A world in which everything is suspect — and so everyone gets to choose what they believe — is also a world that’s really challenging for a flourishing democracy,” said Lisa Reppell, a researcher at the International Foundation for Electoral Systems in Arlington, Virginia.
1 year ago
World’s First Transparent Laptop by Lenovo: A Peek into The Future
Lenovo is one of the most well-known brands when it comes to design and innovation. It’s the same company that brought the world’s first foldable laptop with the ThinkPad X1 Fold. This year, Lenovo took things a step further by unveiling the world’s first transparent laptop at the MWC (Mobile World Congress) 2024. Here’s what we know so far.
The Story So Far
This year, the trend seems to be transparency. Earlier, at CES 2024, both Samsung and LG showcased transparent televisions as the future of in-home entertainment. LG went a notch further with a retractable black screen for a more ‘normalized’ viewing experience.
Keeping up with that trend, Lenovo brought a proof-of-concept product, Project Crystal, to CES 2024: a concept laptop with a transparent display.
Read more: Samsung Galaxy Ring: Specs, Features, and Probable Release Date
The fully transparent laptop, called the ThinkBook, was finally displayed by Lenovo at MWC 2024 in Barcelona.
Making Transparent Laptop a Reality
Tom Butler, executive director of Lenovo’s Worldwide Product Management division, talked in detail about the new transparent laptop. The 17.3-inch ThinkBook uses micro-LED technology to achieve its transparent look. Lenovo settled on 55% transparency to keep the display optimally visible both indoors and outdoors. The panel can reach up to 1,000 nits of brightness at a resolution of 720p.
Lenovo also explained the limitations of OLED and why it had to opt for a micro-LED solution. With currently available technology, the R&D team couldn’t push beyond 480p with an OLED panel. Transparency brings its own limitation, too: the laptop cannot render a black screen for absolute contrast.
Read more: OPPO Air Glass 3: What's Special About It
A Paradigm Shift For the Future
With its current set of limitations, it is clear that Lenovo isn’t planning to release the ThinkBook as a mainstream laptop. Instead, the proof-of-concept product should see its features trickle down to the mainstream product lineup over the next five years.
1 year ago
New AI tools can record your medical appointment or draft a message from your doctor
Don’t be surprised if your doctors start writing you overly friendly messages. They could be getting some help from artificial intelligence.
New AI tools are helping doctors communicate with their patients, some by answering messages and others by taking notes during exams. It's been 15 months since OpenAI released ChatGPT. Already thousands of doctors are using similar products based on large language models. One company says its tool works in 14 languages.
AI saves doctors time and prevents burnout, enthusiasts say. It also shakes up the doctor-patient relationship, raising questions of trust, transparency, privacy and the future of human connection.
A look at how new AI tools affect patients:
IS MY DOCTOR USING AI?
In recent years, medical devices with machine learning have been doing things like reading mammograms, diagnosing eye disease and detecting heart problems. What's new is generative AI's ability to respond to complex instructions by predicting language.
Your next check-up could be recorded by an AI-powered smartphone app that listens, documents and instantly organizes everything into a note you can read later. The tool also can mean more money for the doctor’s employer because it won’t forget details that legitimately could be billed to insurance.
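As a rough sketch of that "listen, document, organize" pattern (not any vendor's actual pipeline, which is proprietary), the transcription step can be done with the open-source openai-whisper library; the note-structuring step is left as a placeholder, since real products run the transcript through their own large language models. The audio file name is hypothetical.

```python
import whisper  # open-source speech-to-text: pip install openai-whisper

def draft_visit_note(audio_path: str) -> str:
    model = whisper.load_model("base")             # small general-purpose model
    transcript = model.transcribe(audio_path)["text"]
    # Placeholder: commercial tools prompt an LLM to structure the transcript
    # into sections (history, exam, assessment, plan) before a clinician review.
    return "DRAFT NOTE (auto-generated, needs clinician review):\n" + transcript

# print(draft_visit_note("visit_recording.wav"))  # hypothetical file name
```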
Your doctor should ask for your consent before using the tool. You might also see some new wording in the forms you sign at the doctor’s office.
Other AI tools could be helping your doctor draft a message, but you might never know it.
“Your physician might tell you that they’re using it, or they might not tell you,” said Cait DesRoches, director of OpenNotes, a Boston-based group working for transparent communication between doctors and patients. Some health systems encourage disclosure, and some don’t.
Doctors or nurses must approve the AI-generated messages before sending them. In one Colorado health system, such messages contain a sentence disclosing they were automatically generated. But doctors can delete that line.
“It sounded exactly like him. It was remarkable,” said patient Tom Detner, 70, of Denver, who recently received an AI-generated message that began: “Hello, Tom, I’m glad to hear that your neck pain is improving. It’s important to listen to your body.” The message ended with “Take care” and a disclosure that it had been automatically generated and edited by his doctor.
Detner said he was glad for the transparency. “Full disclosure is very important,” he said.
WILL AI MAKE MISTAKES?
Large language models can misinterpret input or even fabricate inaccurate responses, an effect called hallucination. The new tools have internal guardrails to try to prevent inaccuracies from reaching patients — or landing in electronic health records.
“You don’t want those fake things entering the clinical notes,” said Dr. Alistair Erskine, who leads digital innovations for Georgia-based Emory Healthcare, where hundreds of doctors are using a product from Abridge to document patient visits.
The tool runs the doctor-patient conversation across several large language models and eliminates weird ideas, Erskine said. “It’s a way of engineering out hallucinations.”
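Abridge has not published how its multi-model check works, but one simple illustration of the general idea is to keep only the claims that several independent models agree on. In this hypothetical sketch, query_model() is a stand-in for calls to real LLM APIs.

```python
# Toy illustration of cross-model consistency checking, one way to
# "engineer out" hallucinations: a claim survives only if a majority of
# independent models extract it from the same conversation.
from collections import Counter

def query_model(model_name: str, transcript: str) -> set[str]:
    """Hypothetical stand-in: ask one LLM to list the factual claims it hears."""
    raise NotImplementedError("wire up a real LLM API here")

def consensus_claims(transcript: str, models: list[str], threshold: int = 2) -> set[str]:
    counts: Counter[str] = Counter()
    for name in models:
        counts.update(query_model(name, transcript))
    # Keep only claims asserted by at least `threshold` models; one-off
    # "weird ideas" from a single model are dropped.
    return {claim for claim, n in counts.items() if n >= threshold}
```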
Ultimately, “the doctor is the most important guardrail,” said Abridge CEO Dr. Shiv Rao. As doctors review AI-generated notes, they can click on any word and listen to the specific segment of the patient’s visit to check accuracy.
In Buffalo, New York, a different AI tool misheard Dr. Lauren Bruckner when she told a teenage cancer patient it was a good thing she didn't have an allergy to sulfa drugs. The AI-generated note said, “Allergies: Sulfa.”
The tool “totally misunderstood the conversation,” Bruckner said. “That doesn’t happen often, but clearly that's a problem.”
WHAT ABOUT THE HUMAN TOUCH?
AI tools can be prompted to be friendly, empathetic and informative.
But they can get carried away. In Colorado, a patient with a runny nose was alarmed to learn from an AI-generated message that the problem could be a brain fluid leak. (It wasn’t.) A nurse hadn’t proofread carefully and mistakenly sent the message.
“At times, it’s an astounding help and at times it’s of no help at all,” said Dr. C.T. Lin, who leads technology innovations at Colorado-based UC Health, where about 250 doctors and staff use a Microsoft AI tool to write the first draft of messages to patients. The messages are delivered through Epic’s patient portal.
The tool had to be taught about a new RSV vaccine because it was drafting messages saying there was no such thing. But with routine advice — like rest, ice, compression and elevation for an ankle sprain — “it’s beautiful for that,” Lin said.
Also on the plus side, doctors using AI are no longer tied to their computers during medical appointments. They can make eye contact with their patients because the AI tool records the exam.
The tool needs audible words, so doctors are learning to explain things aloud, said Dr. Robert Bart, chief medical information officer at Pittsburgh-based UPMC. A doctor might say: “I am currently examining the right elbow. It is quite swollen. It feels like there’s fluid in the right elbow.”
Talking through the exam for the benefit of the AI tool can also help patients understand what's going on, Bart said. “I’ve been in an examination where you hear the hemming and hawing while the physician is doing it. And I’m always wondering, ‘Well, what does that mean?’”
WHAT ABOUT PRIVACY?
U.S. law requires health care systems to get assurances from business associates that they will safeguard protected health information, and the companies could face investigation and fines from the Department of Health and Human Services if they mess up.
Doctors interviewed for this article said they feel confident in the data security of the new products and that the information will not be sold.
Still, information shared with the new tools is used to improve them, which could add to the risk of a health care data breach.
Dr. Lance Owens is chief medical information officer at the University of Michigan Health-West, where 265 doctors, physician assistants and nurse practitioners are using a Microsoft tool to document patient exams. He believes patient data is being protected.
“When they tell us that our data is safe and secure and segregated, we believe that,” Owens said.
1 year ago
Samsung Galaxy Ring: Specs, Features, and Probable Release Date
Samsung unexpectedly revealed a new product, called the Galaxy Ring, alongside its Galaxy S24 smartphone series. However, there was very little information available about this “smart ring”, except for the expected release later this year. Recently, at the Mobile World Congress in Barcelona, the company hosted a discussion to provide some additional details about this new product category.
What is the Samsung Galaxy Ring?
Samsung is marketing the Galaxy Ring as a device suited for those who desire the perks of a smartwatch without being overwhelmed by excessive data. Since the smart ring doesn't have a screen for interaction, it's believed that users will find it easier to adopt and wear one. They can still enjoy health tracking features without having to learn a new interface.
However, it's important to note that the ring isn't intended to replace Samsung's Galaxy Watch series. According to Hon Pak, vice president of Samsung's Digital Health Team, individuals will have the option to wear both a Galaxy Watch6 and the Galaxy Ring simultaneously, allowing them to access more comprehensive health data.
Read more: Best and Worst Android OS Considering Bloatware in 2024
Invention
The Galaxy Ring from Samsung represents a significant innovation in wearable technology, particularly in the realm of digital health. Unlike traditional smartwatches, the Galaxy Ring focuses on essential health tracking without the need for screen interaction, offering a minimalist and user-friendly experience. With its lightweight design and customizable options, including various sizes and finishes, it sets a new standard for comfort and personalization.
Equipped with advanced sensors for heart rate monitoring and step counting, the Galaxy Smart Ring promises to deliver comprehensive health insights in a sleek and convenient form factor. As anticipation grows for its release, it's poised to revolutionise how we approach personal health monitoring.
Read more: Android TV Box: Buyer’s Guide and Price Ranges in Bangladesh
1 year ago
10 Best Free Apps for Ramadan on Android and iOS
Fasting for the entire holy month of Ramadan is essentially a hallmark of the Muslim lifestyle. A systematic plan of action is necessary to maintain health while effectively carrying out religious activities like fasting, prayers and recitation of the holy Quran. Nowadays, various Ramadan apps can help track these religious activities. Let's delve into the top 10 free mobile apps for Ramadan available on both Android and iOS platforms.
10 Useful Ramadan Related Apps for Free Downloads on Smartphones
Al Quran (Tafsir and By Word)
Launched by Greentech Apps Foundation on August 26, 2016, this app serves as a dedicated platform for studying and memorizing the Quran. It includes word-by-word translations in more than 35 languages, including Bengali, and offers access to over 70 tafsirs. Users can listen to Quran recitations by more than 30 renowned reciters.
The app keeps statistics on verses completed during each recitation, and its self-contained library is accessible through the verse bookmark feature.
All these features are available without any cost and free from all sorts of in-app advertising.
The current version is 1.26.1, last updated on February 22, 2024. It requires Android 5 or later to run, and the Android app is 21 MB in size. It has been downloaded over 5 million times and holds an average rating of 4.8 stars from 28,400 reviewers.
The iOS version is 26.2 MB in size and holds a 4.9-star rating from 348 reviews. It is compatible with iOS 14.2 or later.
Read more: Best Quran Apps for Android: Read the Holy Book Online
Tarteel: Quran Memorization
When it comes to reciting the Quran directly from a mobile screen, Tarteel is the best. It uses AI to track the reciter's voice, displaying the corresponding verse on the screen as it is recited. It is free to use and does not contain any ads, though a premium version offers advanced features such as correction of recitation errors.
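Tarteel's recognition system is proprietary, but the follow-along idea can be illustrated with a toy sketch: transcribe the recitation with any speech-to-text model, then fuzzy-match the words against the verse text to find the reciter's position. The verse list and transcript below are placeholders.

```python
# Toy "follow along" matcher: given a transcribed snippet of recitation,
# find the verse in a corpus that it most closely resembles. A real app
# would use a purpose-trained recognition model, not generic fuzzy matching.
from difflib import SequenceMatcher

def locate_verse(transcribed: str, verses: list[str]) -> tuple[int, float]:
    """Return (index, similarity score) of the verse best matching the audio."""
    scores = [SequenceMatcher(None, transcribed, v).ratio() for v in verses]
    best = max(range(len(verses)), key=lambda i: scores[i])
    return best, scores[best]

# verses = ["...verse text 1...", "...verse text 2..."]   # placeholder corpus
# idx, score = locate_verse("transcribed recitation here", verses)
```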
Released on January 31, 2016, Tarteel is crafted by Tarteel Inc. The mobile app requires 108 to 120 MB of storage, depending on the Android phone, and needs Android 5 or higher. The latest version is 5.38.1, updated on March 6, 2024. With over 1 million downloads, it holds a 4.7-star rating from 38,000 reviews.
The iOS version’s size registers at 198.7 MB and requires at least iOS 13.4 to run. It has reached a 4.7-star rating from more than 2,500 reviewers.
Read more: Free English-Speaking Mobile Apps for the Non-native Speakers
Quran Majeed – Ramadan 2024
The most appealing aspect of this app, developed by Pakdata, is its user-friendly interface. Users can freely repeat verses, pause, and adjust speed while reading or memorizing the Quran. Additionally, the app features a Qibla direction compass, live streaming from Makkah and Madinah, a Hijri calendar, and a Quranic engagement meter.
Created on August 18, 2014, and last updated on March 8, 2024, the app's current version is 7.3.2, requiring Android 5 or higher. With a download size ranging from 54 MB to 120 MB, it has been downloaded over 10 million times and holds a score of 4.7 stars from 793,000 reviews.
The iOS version holds a score of 4.8 stars from over 200,000 ratings. It consumes 137.5 MB of space and is compatible with iOS 12 or later versions.
Read more: 10 Best Audiobook Apps for Android, iOS
1 year ago
Rampant NID data trade: Mobile banking institutions alert law enforcement
In a concerning development last year, personal information of individuals registered with the country's Smart National Identity Card (NID) system was reportedly available on several Telegram channels, with instances of buying and selling such data coming to light.
A group with vested interests has recently emerged, engaging in the illicit trade of customer data. To lend credibility to this trade, they have attempted to associate it with the names of reputable mobile banking institutions.
Sources indicate that what began with the exploitation of Telegram bots has evolved into a more organized operation, with perpetrators setting up a dedicated website alongside Telegram channels to facilitate this illicit trade. These individuals are also actively advertising on social media platforms, falsely claiming to hold customer information from reputable Mobile Financial Services (MFS), thus aiming to mislead and deceive customers through various tactics.
The modus operandi involves circulating specific links through Telegram channels that purportedly allow access to personal details of individuals by inputting their NID numbers and birthdates. Despite skepticism about the authenticity of such claims due to the absence of verification mechanisms, the alleged data breaches have stirred concern among a public already wary from previous incidents involving the national ID card database.
ICT experts emphasize the vulnerability of the populace to misinformation regarding new leaks, amidst conflicting statements and opportunities for fraudulent activities. They note that while opening an account in any financial institution requires national ID information and a photo, the allegedly leaked information on digital platforms is deemed non-exploitable for fraudulent purposes within Bangladesh.
The national ID database, a critical repository of personal information for approximately 120 million citizens, remains a target for cybercriminals. IT expert Tanvir Hassan Zoha warns, "Those selling information online and those purchasing it are both committing offenses and could face legal consequences under the vigilant oversight of law enforcement agencies. Interestingly, there's no real profitability in buying such information in Bangladesh."
A significant data breach in July last year exposed sensitive information of millions through the website of the Office of the Registrar General, Birth and Death Registration, searchable via Google. Subsequent leaks involving smart card data have seen such information circulated as belonging to customers of mobile banking institutions and banks. However, verification efforts have unveiled inconsistencies, with varying pieces of information available in different groups.
This proliferation of customer information on digital platforms has sown discomfort and fear among citizens, potentially eroding trust in financial institutions and fostering a climate of insecurity. Mobile banking entities, while yet to issue formal statements, have reportedly notified law enforcement agencies about these breaches.
A senior official from a leading mobile banking institution highlighted the challenges of combating digital platform propaganda related to data breaches, emphasizing their prompt communication with law enforcement upon becoming aware of the recent incidents.
Md Hassan Shahriar Fahim, managing director of Octagram Limited, a firm specializing in cybersecurity, underscores the risks to individuals enticed into purchasing such data. Collaborative analysis with law enforcement has revealed that information thieves also retain data on buyers, exposing them to potential hacking, blackmail and other complications. Victims, in turn, find themselves in a predicament when seeking legal assistance, often unable to disclose the circumstances of their online harassment.
1 year ago
Former Twitter executives sue Elon Musk over firings, seek more than $128 million in severance
Former senior executives of Twitter are suing Elon Musk and X Corp., saying they are entitled to a total of more than $128 million in unpaid severance payments.
Twitter’s former CEO Parag Agrawal, Chief Financial Officer Ned Segal, Chief Legal Counsel Vijaya Gadde and General Counsel Sean Edgett claim in the lawsuit filed Monday that they were fired without a reason on the day in 2022 that Musk completed his acquisition of Twitter, which he later rebranded X.
Because he didn’t want to pay their severance, the executives say Musk “made up fake cause and appointed employees of his various companies to uphold his decision.”
The lawsuit says not paying severance and bills is part of a pattern for Musk, who's been sued by “droves" of former rank-and-file Twitter employees who didn't receive severance after Musk terminated them by the thousands.
“Under Musk’s control, Twitter has become a scofflaw, stiffing employees, landlords, vendors, and others,” says the lawsuit, filed in federal court in the Northern District of California. “Musk doesn’t pay his bills, believes the rules don’t apply to him, and uses his wealth and power to run roughshod over anyone who disagrees with him.”
Representatives for Musk and San Francisco-based X did not immediately respond to messages for comment Monday.
The former executives claim their severance plans entitled them to one year's salary plus unvested stock awards valued at the acquisition price of Twitter. Musk bought the company for $44 billion, or $54.20 per share, taking control in October 2022.
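As a worked illustration of the claimed formula, the $54.20 share price comes from the article, while the salary and unvested share count below are hypothetical.

```python
# Severance as described in the lawsuit: one year's salary plus unvested
# stock valued at the acquisition price. Only the share price is sourced
# from the article; the example inputs are made up for illustration.
ACQUISITION_PRICE = 54.20  # dollars per share, per the deal terms

def claimed_severance(annual_salary: float, unvested_shares: int) -> float:
    return annual_salary + unvested_shares * ACQUISITION_PRICE

# Hypothetical executive: $1M salary, 500,000 unvested shares
# claimed_severance(1_000_000, 500_000) -> 28_100_000.0 ($28.1 million)
```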
They say they were all fired without cause. Under the severance plans, “cause” was narrowly defined, such as being convicted of a felony, “gross negligence” or “willful misconduct.”
According to the lawsuit, the only cause Musk gave for the firings was “gross negligence and willful misconduct,” in part because Twitter paid fees to outside attorneys for their work closing the acquisition. The executives say they were required to pay the fees to comply with their fiduciary duties to the company.
“If Musk felt that the attorneys’ fees payments, or any other payments, were improper, his remedy was to seek to terminate the deal — not to withhold executives’ severance payments after the deal closed,” the lawsuit says.
X faces a “staggering” number of lawsuits over unpaid bills, the lawsuit says. “Consistent with the cavalier attitude he has demonstrated towards his financial obligations, Musk’s attitude in response to these mounting lawsuits has reportedly been to ‘let them sue.’”
1 year ago
Water Battery: What's Special About It
Lithium-ion batteries have ruled electricity storage since they were invented. From simple charger lights to giant electric cars, most modern devices use lithium-ion batteries as their powerhouse. However, their reputation has suffered because of their explosive nature and the safety concerns they raise for large-scale grid storage. To overcome the risk, a multinational team of scientists, researchers and industry collaborators has come up with the world’s first water batteries. These batteries are claimed to be less toxic, recyclable and incombustible.
What is a Water Battery?
The concept of a water battery is not new. However, experts have long been trying to build a version stable enough for industrial use, and they have finally succeeded. A traditional battery uses organic electrolytes to carry charge between the negative and positive electrodes. In a water battery, plain water serves as the electrolyte instead.
It took a good deal of effort to make water-based batteries acceptable, stable and usable across the digital industry. In the manufacturers' terms, these are called aqueous metal-ion batteries.
Read more: Most Anticipated Smartphones Coming in March 2024
Invention of Water Batteries
A team led by RMIT University and headed by Distinguished Professor Tianyi Ma developed recyclable 'water batteries.' These batteries are safer than lithium-ion ones, as they don't catch fire or explode. They utilize water instead of volatile materials. The team's breakthroughs in aqueous energy storage devices significantly enhance performance and lifespan. Their manufacturing simplicity allows mass production, utilizing abundant, inexpensive materials like magnesium and zinc.
Unlike traditional batteries, water batteries use water as an electrolyte, replacing hazardous substances. They work similarly to lithium-ion batteries but without the associated risks. Additionally, managing water levels in these batteries is crucial for longevity and requires periodic replenishment to maintain efficiency. This innovation offers a safer, more environmentally friendly energy storage solution for various applications.
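As a rough illustration of the chemistry involved (the article does not detail the exact cell design), a zinc-based aqueous cell stores and releases energy by plating and stripping zinc metal at one electrode, a standard half-reaction:

$$\mathrm{Zn^{2+}(aq) + 2\,e^{-} \;\rightleftharpoons\; Zn(s)}, \qquad E^{\circ} \approx -0.76\ \mathrm{V\ (vs.\ SHE)}$$

Because the electrolyte is water-based rather than a flammable organic solvent, a fault in such a cell cannot ignite the way a damaged lithium-ion cell can.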
1 year ago