Move over, Vantablack. MIT researchers have accidentally created a material that captures more than 99.995 percent of light. It's made of vertically aligned carbon nanotubes that were originally intended to be applied to electrically conductive materials in order to improve their electrical and thermal properties.
New research in Psychological Science has found that voters form "false memories after seeing fabricated news stories, especially if those stories align with their political beliefs."
“In highly emotional, partisan political contests, such as the 2020 US Presidential election, voters may ‘remember’ entirely fabricated news stories,” said lead author Gillian Murphy of University College Cork. “In particular, they are likely to ‘remember’ scandals that reflect poorly on the opposing candidate.”
From the release:
Nearly half of the respondents reported a memory for at least one of the made-up events; many of them recalled rich details about a fabricated news story. The individuals in favor of legalizing abortion were more likely to remember a falsehood about the referendum opponents; those against legalization were more likely to remember a falsehood about the proponents. Many participants failed to reconsider their memory even after learning that some of the information could be fictitious. And several participants recounted details that the false news reports did not include.
“This demonstrates the ease with which we can plant these entirely fabricated memories, despite this voter suspicion and even despite an explicit warning that they may have been shown fake news,” Murphy said.
The worst part? The viewer's cognitive ability affects how fake news takes hold.
“Participants who scored lower on the cognitive test were no more prone to forming false memories than were higher scorers, but low scorers were more likely to remember false stories that aligned with their opinions,” wrote the researchers.
A new study led by University of Waterloo Faculty of Mathematics student Alexandra Vtyurina, in collaboration with Microsoft researchers, has created a way to merge a voice assistant with a web browser to allow the visually impaired to surf the web. Called Voice Exploration, Retrieval, and Search, or VERSE, the system understands web context and can help folks traverse web pages fluidly.
"People with visual impairments often rely on screen readers, and increasingly voice-based virtual assistants, when interacting with computer systems," said Vtyurina. "Virtual assistants are convenient and accessible but lack the ability to deeply engage with content, such as reading beyond the first few sentences of an article, or listing alternative search results and suggestions. In contrast, screen readers allow for deep engagement with accessible content, and provide fine-grained navigation and control, but at the cost of reduced walk-up-and-use convenience."
"Our prototype, VERSE, adds screen reader-like capabilities to virtual assistants, and allows other devices, such as smartwatches, to serve as input accelerators to smart speakers."
From the release:
The primary input method for VERSE is voice, so users can say "next", "previous", "go back" or "go forward". VERSE can also be paired with an app, which runs on a smartphone or a smartwatch. These devices can serve as input accelerators, similar to keyboard shortcuts. For example, rotating the crown on a smartwatch advances VERSE to the next search result, section, or paragraph, depending on the navigation mode.
“At the outset, VERSE resembles other virtual assistants, as the tool allows people to ask a question and have it answered verbally with a word, phrase or passage,” said Vtyurina. “VERSE is differentiated by what happens next. If people need more information, they can use VERSE to access other search verticals, for example, news, facts, and related searches, and can visit any article that appears as a search result.
“For articles, VERSE showcases its screen reader superpowers by allowing people to navigate along words, sentences, paragraphs, or sections.”
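The interaction model described above, with voice commands as the primary input and a smartwatch crown acting as an accelerator onto the same navigation actions, could be sketched roughly like this. This is a hypothetical illustration rather than the actual VERSE code; the class and method names are invented for the example.

```python
# Hypothetical sketch of VERSE-style navigation: voice commands and
# smartwatch-crown rotations both drive one shared navigation state.

class VerseNavigator:
    """Tracks a position within a list of items (search results,
    sections, or paragraphs, depending on the navigation mode)."""

    def __init__(self, items):
        self.items = items
        self.index = 0
        self.history = []  # previously visited indices, for "go back"

    def current(self):
        return self.items[self.index]

    def next(self):
        if self.index < len(self.items) - 1:
            self.history.append(self.index)
            self.index += 1
        return self.current()

    def previous(self):
        if self.index > 0:
            self.history.append(self.index)
            self.index -= 1
        return self.current()

    def go_back(self):
        if self.history:
            self.index = self.history.pop()
        return self.current()

    def handle_voice(self, command):
        """Primary input method: spoken commands like 'next'."""
        actions = {"next": self.next,
                   "previous": self.previous,
                   "go back": self.go_back}
        return actions[command.lower()]()

    def handle_crown(self, clicks):
        """Input accelerator: each crown click advances (or, for
        negative clicks, rewinds) by one item, like a shortcut key."""
        for _ in range(abs(clicks)):
            self.next() if clicks > 0 else self.previous()
        return self.current()


nav = VerseNavigator(["Result 1", "Result 2", "Result 3"])
print(nav.handle_voice("next"))     # Result 2
print(nav.handle_crown(1))          # Result 3
print(nav.handle_voice("go back"))  # Result 2
```

The point of routing both input channels through the same navigator is that a crown click and the spoken word "next" stay interchangeable, which is how the release describes the accelerators behaving.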