New York, Sep 4 (AP/UNB) — When Stephen Dennis was raising his two sons in the 1980s, he never heard the phrase "screen time," nor did he worry much about the hours his kids spent with technology. When he bought an Apple II Plus computer, he considered it an investment in their future and encouraged them to use it as much as possible.
Boy, have things changed with his grandkids and their phones and their Snapchat, Instagram and Twitter.
"It almost seems like an addiction," said Dennis, a retired homebuilder who lives in Bellevue, Washington. "In the old days you had a computer and you had a TV and you had a phone but none of them were linked to the outside world but the phone. You didn't have this omnipresence of technology."
Today's grandparents may have fond memories of the "good old days," but history tells us that adults have worried about their kids' fascination with new-fangled entertainment and technology since the days of dime novels, radio, the first comic books and rock 'n' roll.
"This whole idea that we even worry about what kids are doing is pretty much a 20th century thing," said Katie Foss, a media studies professor at Middle Tennessee State University. But when it comes to screen time, she added, "all we are doing is reinventing the same concern we were having back in the '50s."
True, the anxieties these days seem particularly acute — as, of course, they always have. Smartphones have a highly customized, 24/7 presence in our lives that feeds parental fears of antisocial behavior and stranger danger.
What hasn't changed, though, is a general parental dread of what kids are doing out of sight. In previous generations, this often meant kids wandering around on their own or sneaking out at night to drink. These days, it might mean hiding in their bedroom, chatting with strangers online.
Less than a century ago, the radio sparked similar fears.
"The radio seems to find parents more helpless than did the funnies, the automobile, the movies and other earlier invaders of the home, because it can not be locked out or the children locked in," Sidonie Matsner Gruenberg, director of the Child Study Association of America, told The Washington Post in 1931. She added that the biggest worry radio gave parents was how it interfered with other interests — conversation, music practice, group games and reading.
In the early 1930s a group of mothers from Scarsdale, New York, pushed radio broadcasters to change programs they thought were too "overstimulating, frightening and emotionally overwhelming" for kids, said Margaret Cassidy, a media historian at Adelphi University in New York who authored a chronicle of American kids and media.
The group, dubbed the Scarsdale Moms, pushed the National Association of Broadcasters to come up with a code of ethics around children's programming in which broadcasters pledged not to portray criminals as heroes and to refrain from glorifying greed, selfishness and disrespect for authority.
Then television burst into the public consciousness with unrivaled speed. By 1955, more than half of all U.S. homes had a black and white set, according to Mitchell Stephens, a media historian at New York University.
The hand-wringing started almost as quickly. A 1961 Stanford University study of 6,000 children, 2,000 parents and 100 teachers found that more than half of the kids studied watched "adult" programs such as Westerns, crime shows and shows that featured "emotional problems." Researchers were aghast at the TV violence present even in children's programming.
By the end of that decade, Congress had authorized $1 million (about $7 million today) to study the effects of TV violence, prompting "literally thousands of projects" in subsequent years, Cassidy said.
That eventually led the American Academy of Pediatrics to adopt, in 1984, its first recommendation that parents limit their kids' exposure to technology. The medical association argued that television sent unrealistic messages around drugs and alcohol, could lead to obesity and might fuel violence. Fifteen years later, in 1999, it issued its now-infamous edict that kids under 2 should not watch any television at all.
The spark for that decision was the British kids' show "Teletubbies," which featured cavorting humanoids with TVs embedded in their abdomens. But the odd TV-within-the-TV-beings conceit of the show wasn't the problem — it was the "gibberish" the Teletubbies directed at preverbal kids whom doctors thought should be learning to speak from their parents, said Donald Shifrin, a University of Washington pediatrician and former chair of the AAP committee that pushed for the recommendation.
Video games presented a different challenge. Decades of study have failed to validate the most prevalent fear, that violent games encourage violent behavior. But from the moment the games emerged as a cultural force in the early 1980s, parents fretted about the way kids could lose themselves in games as simple and repetitive as "Pac-Man," "Asteroids" and "Space Invaders."
Some cities sought to restrict the spread of arcades; Mesquite, Texas, for instance, insisted that the under-17 set required parental supervision. Many parents imagined the arcades where teenagers played video games "as dens of vice, of illicit trade in drugs and sex," Michael Z. Newman, a University of Wisconsin-Milwaukee media historian, wrote recently in Smithsonian.
This time, some experts were more sympathetic to kids. Games could relieve anxiety and feed the age-old desire of kids to "be totally absorbed in an activity where they are out on an edge and can't think of anything else," Robert Millman, an addiction specialist at the New York Hospital-Cornell University Medical Center, told the New York Times in 1981. He cast them as benign alternatives to gambling and "glue sniffing."
Initially, the internet — touted as an "information superhighway" that could connect kids to the world's knowledge — got a similar pass for helping with homework and research. Yet as the internet began connecting people, often ones who had previously been isolated from each other, familiar concerns soon resurfaced.
Sheila Azzara, a grandmother of 12 in Fallbrook, California, remembers learning about AOL chatrooms in the early 1990s and finding them "kind of a hostile place." Teens with more permissive parents who came of age in the '90s might remember these chatrooms as places a 17-year-old girl could pretend to be a 40-year-old man (and vice versa), and talk about sex, drugs and rock 'n' roll (or more mundane topics such as current events).
Azzara still didn't worry too much about technology's effects on her children. Cellphones weren't in common use, and computers — if families had them — were usually set up in the living room. But she, too, worries about her grandkids.
"They don't interact with you," she said. "They either have their head in a screen or in a game."
Anchorage, Sep 3 (AP/UNB) — Britt'Nee Brower grew up in a largely Inupiat Eskimo town in Alaska's far north, but English was the only language spoken at home.
Today, she knows a smattering of Inupiaq from childhood language classes at school in the community of Utqiagvik. Brower even published an Inupiaq coloring book last year featuring the names of common animals of the region. But she hopes to someday speak fluently by practicing her ancestral language in a daily, modern setting.
The 29-year-old Anchorage woman has started to do just that with a new Inupiaq language option that recently went live on Facebook for those who employ the social media giant's community translation tool. Launched a decade ago, the tool has allowed users to translate bookmarks, action buttons and other functions in more than 100 languages around the globe.
For now, Facebook is being translated into Inupiaq only on its website, not its app.
"I was excited," Brower says of her first time trying the feature, still a work in progress as Inupiaq words are slowly added. "I was thinking, 'I'm going to have to bring out my Inupiaq dictionary so I can learn.' So I did."
Facebook users can submit requests to translate the site's vast interface workings — the buttons that allow users to like, comment and navigate the site — into any language through crowdsourcing. With the interface tool, it's the Facebook users who do the translating of words and short phrases. Words are confirmed through crowd up-and-down voting.
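The crowd-voting step described above — users propose translations for an interface string, and other users confirm a candidate by voting it up or down — can be sketched roughly as follows. This is an illustrative model only; the class name, threshold and logic here are assumptions for the sketch, not Facebook's actual implementation.

```python
# Hypothetical sketch of crowdsourced translation confirmation:
# users suggest candidate translations for a source string, vote
# them up or down, and the top candidate is "confirmed" once its
# net score clears a threshold. All names/values are illustrative.
from collections import defaultdict

CONFIRM_THRESHOLD = 5  # assumed net-vote threshold


class TranslationPoll:
    def __init__(self):
        # source string -> {candidate translation: net votes}
        self.votes = defaultdict(lambda: defaultdict(int))

    def suggest(self, source, candidate):
        # Register a candidate translation with zero votes.
        self.votes[source].setdefault(candidate, 0)

    def vote(self, source, candidate, up=True):
        # An up vote adds 1 to the candidate's net score; a down vote subtracts 1.
        self.votes[source][candidate] += 1 if up else -1

    def confirmed(self, source):
        """Return the top candidate if it clears the threshold, else None."""
        candidates = self.votes.get(source)
        if not candidates:
            return None
        best, score = max(candidates.items(), key=lambda kv: kv[1])
        return best if score >= CONFIRM_THRESHOLD else None


poll = TranslationPoll()
poll.suggest("Home", "Aimaagvik")
for _ in range(5):
    poll.vote("Home", "Aimaagvik", up=True)
print(poll.confirmed("Home"))  # -> Aimaagvik
```

The point of the threshold is that a single user's submission is never trusted on its own; the community's agreement, not any individual translator, determines what ships.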
Besides the Inupiaq option, Cherokee and Canada's Inuktut are other indigenous languages in the process of being translated, according to Facebook spokeswoman Arielle Argyres.
"It's important to have these indigenous languages on the internet. Oftentimes they're nowhere to be found," she said. "So much is carried through language — tradition, culture — and so in the digital world, being able to translate from that environment is really important."
The Inupiaq language is spoken in northern Alaska and the Seward Peninsula. According to the University of Alaska Fairbanks, about 13,500 Inupiat live in the state, with about 3,000 speaking the language.
Myles Creed, who grew up in the Inupiat community of Kotzebue, was the driving force in getting Inupiaq added. After researching ways to possibly link an external translation app with Facebook, he reached out to Grant Magdanz, a hometown friend who works as a software engineer in San Francisco. Neither one of them knew about the translation tool when Magdanz contacted Facebook in late 2016 about setting up an Inupiatun option.
Facebook opened a translation portal for the language in March 2017. It was then up to users to provide the translations through crowdsourcing.
Creed, 29, a linguistics graduate student at the University of Victoria in British Columbia, is not Inupiat, and neither is Magdanz, 24. But they grew up around the language and its people, and wanted to promote its use for today's world.
"I've been given so much by the community I grew up in, and I want to be able to give back in some way," said Creed, who is learning Inupiaq.
Both see the Facebook option as a small step against predictions that Alaska's Native languages are heading toward extinction under their present rate of decline.
"It has to be part of everyone's daily life. It can't be this separate thing," Magdanz said. "People need the ability to speak it in any medium that they use, like they would English or Spanish."
Initially, Creed relied on volunteer translators, but that didn't go fast enough. In January, he won a $2,000 mini grant from the Alaska Humanities Forum to hire two fluent Inupiat translators. While a language is in the process of being translated, only those who use the translation tool are able to see it.
Creed changed his translation settings last year. But it was only weeks ago that his home button finally said "Aimaagvik," Inupiaq for home.
"I was really ecstatic," he said.
So far, only a fraction of the vast interface is in Inupiaq. Part of the holdup is the complexity of finding exact translations, according to the Inupiaq translators who were hired with the grant money.
Take the comment button, which is still in English. There's no one-word-fits-all in Inupiaq for "comment," according to translator Pausauraq Jana Harcharek, who heads Inupiaq education for Alaska's North Slope Borough. Is the word being presented in the form of a question, or a statement or an exclamatory sentence?
"Sometimes it's so difficult to go from concepts that don't exist in the language to arriving at a translation that communicates what that particular English word might mean," Harcharek said.
Translator Muriel Hopson said finding the right translation ultimately could require two or three Inupiaq words.
The 58-year-old Anchorage woman grew up in the village of Wainwright, where she was raised by her grandparents. Inupiaq was spoken in the home, but it was strictly prohibited at the village school run by the federal Bureau of Indian Affairs, Hopson said.
She wonders if she's among the last generation of Inupiaq speakers. But she welcomes the new Facebook option as a promising way for young people to see the value Inupiaq brings as a living language.
"Who doesn't have a Facebook account when you're a millennial?" she said. "It can only help."
Dhaka, Sep 2 (AP/UNB) - Nearly a year after Russian government hackers meddled in the 2016 U.S. election, researchers at cybersecurity firm Trend Micro zeroed in on a new sign of trouble: a group of suspect websites.
The sites mimicked a portal used by U.S. senators and their staffs, with easy-to-miss discrepancies. Emails to Senate users urged them to reset their passwords — an apparent attempt to steal them.
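The "easy-to-miss discrepancies" in such lookalike portals are exactly what defensive tooling screens for: a host that is suspiciously close to, but not exactly, a legitimate one. A minimal sketch of that check, using edit distance, might look like the following — the host names and distance threshold are hypothetical examples, not the actual Senate domains or any vendor's detection logic.

```python
# Defensive sketch: flag a URL whose host is close to, but not
# exactly, a known-legitimate host (the classic typosquat pattern).
# LEGIT_HOSTS and the threshold are illustrative assumptions.
from urllib.parse import urlparse

LEGIT_HOSTS = {"adfs.senate.gov", "login.senate.gov"}  # hypothetical


def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]


def looks_like_phish(url):
    host = urlparse(url).hostname or ""
    if host in LEGIT_HOSTS:
        return False  # exact match is fine
    # Close-but-not-equal hosts are the lookalike signature.
    return any(edit_distance(host, legit) <= 2 for legit in LEGIT_HOSTS)


print(looks_like_phish("https://adfs.senate.qov/password-reset"))  # True
print(looks_like_phish("https://adfs.senate.gov/"))                # False
```

Real-world detection also checks homoglyphs (Cyrillic lookalike characters), certificate details and domain registration age, but the near-miss host comparison is the core idea.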
Once again, hackers on the outside of the American political system were probing for a way in.
"Their attack methods continue to take advantage of human nature and when you get into an election cycle the targets are very public," said Mark Nunnikhoven, vice president of cloud research at Trend Micro.
Now the U.S. has entered a new election cycle. And the attempt to infiltrate the Senate network, linked to hackers aligned with Russia and brought to public attention in July, is a reminder of the risks, and the difficulty of assessing them.
Newly reported attempts at infiltration and social media manipulation — which Moscow officially denies — point to Russia's continued interest in meddling in U.S. politics. There is no clear evidence, experts said, of efforts by the Kremlin specifically designed to disrupt elections in November. But it wouldn't take much to cause turmoil.
"It's not a question of whether somebody is going to try to breach the system, to manipulate the system, to influence the system," said Robby Mook, who managed Hillary Clinton's presidential campaign and co-directs a Harvard University project to protect democracy from cyberattacks, in an interview earlier this year. "The question is: Are we prepared for it?"
Online targeting of the U.S. political system has come on three fronts — efforts to get inside political campaigns and institutions and expose damaging information; probes of electoral systems, potentially to alter voter data and results; and fake ads and accounts on social media used to spread disinformation and fan divisions among Americans.
In recent weeks, Microsoft reported that it had disabled six Russian-launched websites masquerading as U.S. think tanks and Senate sites. Facebook and the security firm FireEye revealed influence campaigns, originating in Iran and Russia, that led the social network to remove 652 impostor accounts, some targeted at Americans. The office of Republican Sen. Pat Toomey of Pennsylvania said hackers tied to a "nation-state" had sent phishing emails to old campaign email accounts.
U.S. officials said they have not detected any attempts to corrupt election systems or leak information rivaling Kremlin hacking before President Donald Trump's surprise 2016 victory.
Still, "we fully realize that we are just one click of the keyboard away from a similar situation repeating itself," Dan Coats, the director of national intelligence, said in July.
Michael McFaul, the architect of the Obama administration's Russia policy, has said he believes Russian President Vladimir Putin perceives little benefit in a major disruption effort this year, preferring to keep his powder dry for the 2020 presidential contest.
But even if the upcoming elections escape disruption, that hardly means the U.S. is in the clear.
Trump's decision in May to eliminate the post of White House cybersecurity coordinator confirmed his lack of interest in countering Russian meddling, critics say. Congress has not delivered any legislation to combat election interference or disinformation. Last week, a review of the bipartisan "Secure Elections Act" was canceled after Republican leaders registered objections, congressional staffers said.
The risks extend beyond the midterms.
"The biggest question is going to be how are you going to make sure that people actually trust the results, because democracy relies on credibility," said Ben Nimmo, a researcher at the Atlantic Council. "It's not over after November."
Experts said it is too late to safeguard U.S. voting systems and campaigns this election cycle. But with two months to go, there is time enough to take stock of the Russian-sponsored interference that has come to light so far — and to assess the risks of what we don't know.
In mid-2016, hackers found a way into the voter registration database at the Illinois State Board of Elections and spent three weeks poking around. After the breach was discovered, officials said the infiltrators had downloaded the records of up to 90,000 voters.
It's not clear that anything nefarious was done with those records. But when special counsel Robert Mueller charged a dozen Russian intelligence agents with hacking this July, the indictment clarified the potential for damage. The hackers had, in fact, stolen information on 500,000 voters, including dates of birth and partial Social Security numbers.
"The internet allows foreign adversaries to attack Americans in new and unexpected ways," Deputy Attorney General Rod Rosenstein said, in announcing the indictments.
The Illinois hack is the most notable case of foreign tampering with U.S. election systems to come to light. There has been no evidence of efforts to change voter information or tamper with voting machines, though experts caution hackers might have planted unseen malware in far-flung election systems that could be triggered later.
Potential problems are not limited to Illinois.
A week before the 2016 general election, Russian intelligence agents sent spear-phishing emails to 122 local elections officials who were customers of VR Systems, a Tallahassee, Florida-based election software vendor.
In addition to Illinois, at least 20 other state systems were probed by the same Russian military unit that targeted VR's customers, federal officials said.
"My unofficial opinion is that we're kind of fooling ourselves if we don't think that they tried to at least make a pass at all 50 states," said Christopher Krebs, the undersecretary for critical infrastructure at the Department of Homeland Security.
In June 2017, the federal Election Assistance Commission informed dozens of local voting officials that hackers had attempted to penetrate the systems of a voting system manufacturer, presumed by many to be VR.
"Attempts have been made to obtain voting equipment, security information and in general to probe for vulnerabilities," the EAC wrote officials. Despite those concerns, federal officials have moved slowly to share intelligence with officials who supervise elections. As of mid-August, 92 state officials had been given clearances.
Much of the machinery used to collect and tabulate votes is antiquated, built by a handful of unregulated and secretive vendors, with outdated software that makes them highly vulnerable to attacks, researchers said.
"If someone was able to compromise even a handful of voting machines I think that would be sufficient to cause people to not trust the system," said Sherri Ramsay, a former National Security Agency senior executive.
This spring, a website used by Knox County, Tennessee, officials to display election-night results was knocked offline by an unidentified perpetrator. While the attack was little noticed, it would not be hard to replicate, experts said. Combined with a social media campaign alleging vote tampering, such mischief could cast a shadow over an election, they said.
Election officials have been gaming out such scenarios for weeks as they prepare for November's balloting.
There's already a Russian playbook for thwarting an election: In Ukraine in 2014, the presidential contest was disrupted by a virus that scrambled election-management software, followed by a media disinformation campaign claiming a pro-Moscow candidate had won.
Democratic Sen. Claire McCaskill of Missouri is plenty busy this fall as she seeks re-election in a state that voted overwhelmingly for Trump. So when an attempt by Russian hackers to infiltrate her campaign came to light in July, she acknowledged it only briefly.
"While this attack was not successful, it is outrageous that they think they can get away with this," McCaskill said. "I will not be intimidated. I've said it before and I will say it again, Putin is a thug and a bully."
The failed hack, which included an attempt to steal the password of at least one McCaskill staffer through a fake Senate login website identified by Microsoft, is the most notable instance of attempted campaign meddling by Russia made public this year.
Microsoft executives said recently that the company had detected attempts by Russia's GRU military intelligence agency to hack two senators. One was presumably McCaskill; the other has not been identified.
The group behind that attempt, Fancy Bear, is the same one indicted July 13 and identified by Microsoft as the creator of fake websites targeting the Hudson Institute and the International Republican Institute, frequent critics of the Kremlin. Since the summer of 2017, Fancy Bear has aggressively targeted political groups, universities, law enforcement agencies and anti-corruption nonprofits in the U.S. and elsewhere, according to Trend Micro.
"Russian hackers appear to be broadening their target set, but I think tying it to the midterm elections is pure speculation at this point," said Michael Connell, an analyst at the federally funded Center for Naval Analyses in Arlington, Virginia.
There have been other recent reports of U.S. congressional campaign websites targeted by hackers, but that doesn't mean Russian agents are to blame. Experts said most are likely run-of-the-mill criminal cyberattacks seeking financial gain rather than political change.
But Eric Rosenbach, who served as assistant secretary of defense for global security during President Barack Obama's administration and is now at Harvard, said the limited examples of Russian intrusion that have come to light may be only the tip of more significant, still-hidden schemes.
"There probably have already been compromises of important campaigns in places where it could sway the outcome or undermine trust in the election," Rosenbach said. "We might not see that until the very last moment."
The risk is magnified by poor efforts to protect many campaign sites, said Josh Franklin, until last month the lead National Institute of Standards and Technology researcher on voting systems security.
Nearly a third of the 527 House of Representatives campaigns examined by Franklin and fellow researchers had such poor cybersecurity they were graded worse than failing.
"We couldn't go any further with our scan," he said. "We were told that we would be in danger of being sued by the candidate campaigns."
By the time a group called "ReSisters" began organizing a rally against white nationalism for Aug. 10, it had spent more than a year sharing left-wing posts about feminism, immigration and other hot-button topics.
"Confront + Resist Fascism," the group urged on a Facebook event page for its "No Unite the Right 2" protest in Washington, D.C. Like-minded Facebook users posted information about transportation, materials and location so those interested could attend.
In late July, Facebook short-circuited the effort, shutting down the pages and accounts of ReSisters and 31 others. Despite appearing to speak for Americans, the company said, the accounts were planted by unidentified outsiders to fuel divisions among U.S. voters. Researchers at the Atlantic Council who examined the accounts said they acted in ways echoing Russian troll operations before the 2016 election, pointing to English on the pages speckled with grammatical mistakes typical of native Russian speakers.
"We face determined, well-funded adversaries who will never give up and are constantly changing tactics," Facebook said. The outing of the sites is a reminder as November approaches that Russians and other foreign actors continue to use social media to try to influence U.S. politics.
Since the 2016 election, officials and researchers have learned much more about such infiltration. The May release by House Democrats of more than 3,500 ads placed on Facebook by Russian agents from 2015 to 2017 revealed a deliberate campaign to inflame racial divisions in the U.S. Facebook and other tech companies say they are working hard to combat such behavior. But it is not nearly enough, experts said.
The companies must be forced to act faster against Russian and other disinformation campaigns and be made more accountable, said Dipayan Ghosh, a fellow at Harvard's Kennedy School of Government who has worked at both the White House and Facebook on tech policy including social media manipulation.
Ghosh said quantifying Russian disinformation on social media is difficult because they "are operating behind a commercial veil" of for-profit networks that are not subject to public scrutiny.
"The industry is currently accountable to nobody," Ghosh said.
After Facebook was criticized for allowing a data-mining firm to collect information about millions of its users, CEO Mark Zuckerberg said he was open to regulation. But the "Honest Ads Act," which would require online political ads to be identified as they are in traditional media, has stalled in Congress.
The bill's sponsors include the late John McCain and Sen. Mark Warner, the Virginia Democrat who has pressed Facebook for change since the 2016 elections. Executives from Facebook, Twitter and Google are expected to testify before Warner and other members of the Senate Intelligence Committee this week.
Experts said they are uncertain of the effectiveness of Russian disinformation, complicating assessment of the threat it might now pose.
In 2016, Russian actors likely did the greatest damage by hacking and leaking emails from Hillary Clinton's campaign and Democrats' national organization, which were widely reported by the news media. But comparatively few American voters saw individual pieces of misinformation on social media, making it unlikely that it swayed votes, said Brendan Nyhan, a University of Michigan political scientist who has analyzed the scope and impact of the Russian operations.
"There's still too much simplistic thinking about all-powerful propaganda that doesn't correspond to what we know from social science about how hard it is to change people's minds. I'm more concerned about the threat of intensifying polarization and calling the legitimacy of elections into question than I am about massive swings in vote choice," he said.
Still, it is clear that Russian intelligence views its efforts as successful, and its example has already stirred others, such as Iran, to try similar strategies. Such efforts are bent on coloring U.S. politics even if they are not tied to a specific election, said Lee Foster, FireEye's manager of information operations analysis.
"Where do you draw the line between efforts to influence the election or an election or efforts to influence U.S. domestic politics in general?" Foster said. "We can't just think in the context of the next election. It's not like this goes away after the midterms."
New York, July 28 (AP/UNB) — Cracking down on hate, abuse and online trolls is also hurting Twitter's standing with investors.
The company's stock plunged Friday after it reported a decline in its monthly users and warned that the number could fall further in the coming months. The 20.5 percent plunge comes one day after Facebook lost 19 percent of its value in a single day.
Twitter says it's putting the long-term stability of its platform above user growth. That leaves investors struggling to gauge what the sector's biggest companies, whose valuations rest on their potential user reach, are actually worth.
Twitter had 335 million monthly users in the quarter, below the 339 million Wall Street was expecting and down slightly from 336 million in the first quarter. That overshadowed otherwise solid monthly user growth of 3 percent from a year earlier.
The company said its monthly user number could continue to fall in the "mid-single-digit millions" in the third quarter.
While Friday was Twitter's second-worst loss since it went public in November 2013, the stock has still doubled in value over the last 12 months.
Long criticized for allowing bad behavior to run rampant on its platform, Twitter has begun to crack down, banning accounts that violate its terms and making others less visible.
Twitter is now attempting to rein in the worst offenders after years as one of the Wild West corners of the internet.
At the same time, it must convince people it's the go-to platform in social media, even though it is dwarfed right now by Facebook.
Facebook has more than 2.23 billion users while its apps WhatsApp, Instagram and Messenger each have over 1 billion.
Twitter on Friday reiterated its efforts "to invest in improving the health of the public conversation" on its platform, making the "long-term health" of its service a priority over short-term metrics such as user numbers.
As part of these efforts, Twitter said that as of May, its systems identified and challenged more than 9 million accounts per week that are potentially spam or automated, up from 6.4 million in December 2017. The company has previously disclosed these numbers.
A Washington Post report put the total number of suspended accounts in May and June at 70 million. The Associated Press also found that Twitter suspended 56 million such accounts in the last quarter of 2017. While Twitter maintains that most of these accounts were dormant and thus not counted in the monthly user figure, the company also warned that its cleanup efforts could affect its counted user base without giving specific numbers.
"We want people to feel safe freely expressing themselves and have launched new tools to address problem behaviors that distort and distract from the public conversation," CEO Jack Dorsey said in a prepared statement.
Twitter's market value dropped by more than $6 billion Friday, to around $26 billion. Investors still value Facebook at $503 billion. Facebook lost $119 billion in value on Thursday.
Twitter's second-quarter net income hit $100.1 million, after a loss during the same period last year. It was Twitter's third straight quarterly profit; the company had never been profitable before the streak began.
The San Francisco company's net income was 13 cents per share, or 17 cents adjusted, in line with expectations, according to a poll by Zacks Investment Research.
Revenue was $710.5 million, up 24 percent and edging out expectations of $696 million.
New York, Jul 25 (AP/UNB) — Facebook is blocked in China but it's still setting up a subsidiary in the world's most populous country.
The company says it wants to set up an "innovation hub" in Zhejiang to support Chinese developers, innovators and startups. It has done the same elsewhere, including in France, Brazil, South Korea and India. But it is not blocked in those countries.
Facebook said on Tuesday that the subsidiary will focus on training and workshops for developers and entrepreneurs.
According to The Washington Post, a filing published on China's National Enterprise Credit Information Publicity System listed the company as Facebook Technology (Hangzhou) Co. The filing, which is no longer accessible, noted that the company is owned by Facebook Hong Kong Ltd. It has registered capital of $30 million.