
      NASA really wants you to know that 3I/ATLAS is an interstellar comet

      news.movim.eu / ArsTechnica • 19 November • 1 minute

    Since early July, telescopes around the world have been tracking just our third confirmed interstellar visitor, the comet 3I/ATLAS—3I, for third interstellar, and ATLAS (Asteroid Terrestrial-impact Last Alert System) for the telescope network that first spotted it. But the object’s closest approach to the Sun came in late October during the US government shutdown. So, while enough people went to work to ensure that the hardware continued to do its job, nobody was available at NASA to make the images available to the public or discuss their implications.

    So today, NASA held a press conference to discuss everything that we now know about 3I/ATLAS, and how NASA’s hardware contributed to that knowledge. And to say one more time that the object is a fairly typical comet and not some spaceship doing its best to appear like one.

    Extrasolar comet

    3I/ATLAS is an extrasolar comet and the third visitor from another star that we’ve detected. We know the comet part because it looks like one, forming a coma of gas and dust, as well as a tail, as the Sun heats up its materials. That hasn’t stopped the usual suspect (Avi Loeb) from speculating that it might be a spacecraft, as he had for the earlier visitors. NASA doesn’t want to hear it. “This object is a comet,” said Associate Administrator Amit Kshatriya. “It looks and behaves like a comet, and all evidence points to it being a comet.”

      Massive Cloudflare outage was triggered by file that suddenly doubled in size

      news.movim.eu / ArsTechnica • 19 November

    When a Cloudflare outage disrupted large numbers of websites and online services yesterday, the company initially thought it was hit by a “hyper-scale” DDoS (distributed denial-of-service) attack.

    “I worry this is the big botnet flexing,” Cloudflare co-founder and CEO Matthew Prince wrote in an internal chat room yesterday, while he and others discussed whether Cloudflare was being hit by attacks from the prolific Aisuru botnet. But upon further investigation, Cloudflare staff realized the problem had an internal cause: an important file had unexpectedly doubled in size and propagated across the network.

    The bloated file caused trouble for the software that reads it to keep Cloudflare’s bot management system up to date; that system uses a machine learning model to protect against security threats. Cloudflare’s core CDN, security services, and several other services were affected.
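    The failure pattern described here, a consumer choking on an input file that silently doubled, can be guarded against with an explicit size check that fails fast with a clear error instead of propagating a broken state. A minimal sketch, with hypothetical names and limits (this is not Cloudflare’s actual code):

```python
# Hypothetical sketch: a loader that refuses a feature file which has
# grown past the capacity its consumer preallocated, so the error
# surfaces at load time rather than crashing downstream services.

MAX_FEATURES = 200  # assumed capacity preallocated by the consumer


def load_feature_file(lines):
    """Parse one feature per line; reject files over the limit."""
    features = [ln.strip() for ln in lines if ln.strip()]
    if len(features) > MAX_FEATURES:
        raise ValueError(
            f"feature file has {len(features)} entries, "
            f"exceeding the limit of {MAX_FEATURES}"
        )
    return features


normal = [f"feature_{i}" for i in range(150)]
doubled = normal + normal  # duplicated rows double the file's size

print(len(load_feature_file(normal)))  # the normal file loads fine
try:
    load_feature_file(doubled)
except ValueError as err:
    print("rejected:", err)  # the doubled file is refused up front
```

    The design point is simply that a hard limit plus a descriptive exception turns a mysterious network-wide failure into a localized, diagnosable one.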

      Rocket Lab Electron among first artifacts installed in CA Science Center space gallery

      news.movim.eu / ArsTechnica • 19 November

    It took the California Science Center more than three years to erect its new Samuel Oschin Air and Space Center, including stacking NASA’s space shuttle Endeavour for its launch pad-like display.

    Now the big work begins.

    “That’s completing the artifact installation and then installing the exhibits,” said Jeffrey Rudolph, president and CEO of the California Science Center in Los Angeles, in an interview. “Most of the exhibits are in fabrication in shops around the country and audio-visual production is underway. We’re full-on focused on exhibits now.”

      He got sued for sharing public YouTube videos; nightmare ended in settlement

      news.movim.eu / ArsTechnica • 19 November

    Nobody expects to get sued for re-posting a YouTube video on social media by using the “share” button, but librarian Ian Linkletter spent the past five years embroiled in a copyright fight after doing just that.

    Now that a settlement has been reached, Linkletter told Ars why he thinks his 2020 tweets sharing public YouTube videos put a target on his back.

    Linkletter’s legal nightmare started in 2020 after an education technology company, Proctorio, began monitoring student backlash on Reddit over its AI tool used to remotely scan rooms, identify students, and prevent cheating on exams. On Reddit, students echoed serious concerns raised by researchers, warning of privacy issues, racist and sexist biases, and barriers to students with disabilities.

      Critics scoff after Microsoft warns AI feature can infect machines and pilfer data

      news.movim.eu / ArsTechnica • 19 November

    Microsoft’s warning on Tuesday that an experimental AI Agent integrated into Windows can infect devices and pilfer sensitive user data has set off a familiar response from security-minded critics: Why is Big Tech so intent on pushing new features before their dangerous behaviors can be fully understood and contained?

    As reported Tuesday, Microsoft introduced Copilot Actions, a new set of “experimental agentic features” that, when enabled, perform “everyday tasks like organizing files, scheduling meetings, or sending emails,” and provide “an active digital collaborator that can carry out complex tasks for you to enhance efficiency and productivity.”

    Hallucinations and prompt injections apply

    The fanfare, however, came with a significant caveat. Microsoft recommended users enable Copilot Actions only “if you understand the security implications outlined.”

      Testing shows Apple N1 Wi-Fi chip improves on older Broadcom chips in every way

      news.movim.eu / ArsTechnica • 19 November • 1 minute

    This year’s newest iPhones included one momentous change that marked a new phase in the evolution of Apple Silicon: the Apple N1, Apple’s first in-house chip made to handle local wireless connections. The N1 supports Wi-Fi 7, Bluetooth 6, and the Thread smart home communication protocol, and it replaces the third-party wireless chips (mostly made by Broadcom) that Apple used in older iPhones.

    Apple claimed that the N1 would enable more reliable connectivity for local communication features like AirPlay and AirDrop but didn’t say anything about how users could expect it to perform. But Ookla, the folks behind the Speedtest app and website, have analyzed about five weeks’ worth of users’ testing data to get an idea of how the iPhone 17 lineup stacks up to the iPhone 16, as well as Android phones with Wi-Fi chips from Qualcomm, MediaTek, and others.

    While the N1 isn’t at the top of the charts, Ookla says Apple’s Wi-Fi chip “delivered higher download and upload speeds on Wi-Fi compared to the iPhone 16 across every studied percentile and virtually every region.” The median download speed for the iPhone 17 series was 329.56Mbps, compared to 236.46Mbps for the iPhone 16; the upload speed also jumped from 73.68Mbps to 103.26Mbps.
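    The comparison Ookla describes is a straightforward one over order statistics: compare the two devices’ speed distributions at the median and at each studied percentile. A sketch with Python’s statistics module, using invented per-device samples (only the method is illustrated; the real dataset is Ookla’s):

```python
# Compare two speed distributions at the median and across deciles,
# the shape of the claim "higher speeds across every studied percentile."
from statistics import median, quantiles

# Hypothetical Mbps samples, chosen so the medians match the article.
iphone17_mbps = [120, 250, 329.56, 410, 580]
iphone16_mbps = [90, 180, 236.46, 300, 450]

print(median(iphone17_mbps))  # 329.56
print(median(iphone16_mbps))  # 236.46

# Deciles (9 cut points); check the newer device wins at every one.
p17 = quantiles(iphone17_mbps, n=10)
p16 = quantiles(iphone16_mbps, n=10)
print(all(a > b for a, b in zip(p17, p16)))  # True
```

    Percentile-by-percentile dominance is a stronger claim than a higher average, since a fat slow tail can hide behind a good mean.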

      Celebrated game developer Rebecca Heineman dies at age 62

      news.movim.eu / ArsTechnica • 19 November

    On Monday, veteran game developer Rebecca Ann Heineman died in Rockwall, Texas, at age 62 after a battle with adenocarcinoma. Apogee founder Scott Miller first shared the news publicly on social media, and her son William confirmed her death with Ars Technica. Heineman’s GoFundMe campaign, whose page displayed a final message she had posted about entering palliative care, will now help her family with funeral costs.

    Rebecca “Burger Becky” Heineman was born in October 1963 and grew up in Whittier, California. She first gained national recognition in 1980 when she won the national Atari 2600 Space Invaders championship in New York at age 16, becoming the first formally recognized US video game champion. That victory launched a career spanning more than four decades and 67 credited games, according to MobyGames.

    Among many achievements in her life, Heineman was perhaps best known for co-founding Interplay Productions with Brian Fargo, Jay Patel, and Troy Worrell in 1983. The company created franchises like Wasteland , Fallout , and Baldur’s Gate . At Interplay, Heineman designed The Bard’s Tale III: Thief of Fate and Dragon Wars while also programming ports of classics like Wolfenstein 3D and Battle Chess .

      DeepMind’s latest: An AI for handling mathematical proofs

      news.movim.eu / ArsTechnica • 19 November

    Computers are extremely good with numbers, but they haven’t gotten many human mathematicians fired. Until recently, they could barely hold their own in high school-level math competitions.

    But now Google’s DeepMind team has built AlphaProof, an AI system that matched silver medalists’ performance at the 2024 International Mathematical Olympiad, scoring just one point short of gold at the world’s most prestigious math competition for pre-university students. And that’s kind of a big deal.

    True understanding

    The reason computers fared poorly in math competitions is that, while they far surpass humanity’s ability to perform calculations, they are not really that good at the logic and reasoning that is needed for advanced math. Put differently, they are good at performing calculations really quickly, but they usually suck at understanding why they’re doing them. While something like addition seems simple, humans can do semi-formal proofs based on definitions of addition or go for fully formal Peano arithmetic that defines the properties of natural numbers and operations like addition through axioms.
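    The “fully formal” route the paragraph mentions can be made concrete. In Peano-style arithmetic, the naturals are built from zero and a successor function, and addition is defined by two recursion equations; facts like 1 + 1 = 2 then follow by pure computation. A minimal sketch in Lean (names are illustrative, not DeepMind’s code):

```lean
-- Peano-style natural numbers: zero, and a successor for each number.
inductive PNat where
  | zero : PNat
  | succ : PNat → PNat

-- Addition by recursion on the second argument,
-- mirroring the two Peano axioms for +.
def add : PNat → PNat → PNat
  | m, .zero   => m
  | m, .succ n => .succ (add m n)

-- A semi-formal fact like 1 + 1 = 2 becomes a checkable theorem.
example : add (.succ .zero) (.succ .zero) = .succ (.succ .zero) := rfl
```

    Systems like AlphaProof work against formalizations of this kind, where a proof either checks or it doesn’t, rather than against informal prose arguments.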

      How Louvre thieves exploited human psychology to avoid suspicion—and what it reveals about AI

      news.movim.eu / ArsTechnica • 19 November • 4 minutes

    On a sunny morning on October 19, 2025, four men allegedly walked into the world’s most-visited museum and left, minutes later, with crown jewels worth 88 million euros ($101 million). The theft from Paris’ Louvre Museum—one of the world’s most surveilled cultural institutions—took just under eight minutes.

    Visitors kept browsing. Security didn’t react (until alarms were triggered). The men disappeared into the city’s traffic before anyone realized what had happened.

    Investigators later revealed that the thieves wore hi-vis vests, disguising themselves as construction workers. They arrived with a furniture lift, a common sight in Paris’s narrow streets, and used it to reach a balcony overlooking the Seine. Dressed as workers, they looked as if they belonged.

    This strategy worked because we don’t see the world objectively. We see it through categories—through what we expect to see. The thieves understood the social categories that we perceive as “normal” and exploited them to avoid suspicion. Many artificial intelligence (AI) systems work in the same way and are vulnerable to the same kinds of mistakes as a result.

    The sociologist Erving Goffman would describe what happened at the Louvre using his concept of the presentation of self: people “perform” social roles by adopting the cues others expect. Here, the performance of normality became the perfect camouflage.

    The sociology of sight

    Humans carry out mental categorization all the time to make sense of people and places. When something fits the category of “ordinary,” it slips from notice.

    AI systems used for tasks such as facial recognition and detecting suspicious activity in a public area operate in a similar way. For humans, categorization is cultural. For AI, it is mathematical.

    But both systems rely on learned patterns rather than objective reality. Because AI learns from data about who looks “normal” and who looks “suspicious,” it absorbs the categories embedded in its training data. And this makes it susceptible to bias.

    The Louvre robbers weren’t seen as dangerous because they fit a trusted category. In AI, the same process can have the opposite effect: people who don’t fit the statistical norm become more visible and over-scrutinized.

    In practice, that can mean a facial recognition system disproportionately flagging certain racial or gendered groups as potential threats while letting others pass unnoticed.

    A sociological lens helps us see that these aren’t separate issues. AI doesn’t invent its categories; it learns ours. When a computer vision system is trained on security footage where “normal” is defined by particular bodies, clothing, or behavior, it reproduces those assumptions.
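    That dynamic, a system reproducing whatever counted as “normal” in its training data, can be shown with a deliberately crude sketch. Everything here is invented for illustration: a detector that simply flags any appearance that was rare in the footage it learned from.

```python
# Toy illustration of learned "normality": rarity in the training data
# becomes suspicion at inference time, regardless of actual intent.
from collections import Counter

# Invented "training footage" labels the system saw as ordinary.
training = ["hi-vis vest"] * 40 + ["suit"] * 55 + ["hoodie"] * 5

freq = Counter(training)
total = len(training)


def looks_suspicious(appearance, threshold=0.10):
    """Flag anything seen in less than 10% of training examples."""
    return freq[appearance] / total < threshold


# The heist dynamic in miniature: the common category passes
# unquestioned, while the rare one draws scrutiny.
print(looks_suspicious("hi-vis vest"))  # False: hi-vis was common
print(looks_suspicious("hoodie"))       # True: hoodies were rare
```

    No notion of danger appears anywhere in the code; the only thing being measured is conformity to the training distribution, which is exactly the blind spot the thieves exploited.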

    Just as the museum’s guards looked past the thieves because they appeared to belong, AI can look past certain patterns while overreacting to others.

    Categorization, whether human or algorithmic, is a double-edged sword. It helps us process information quickly, but it also encodes our cultural assumptions. Both people and machines rely on pattern recognition, which is an efficient but imperfect strategy.

    A sociological view of AI treats algorithms as mirrors: They reflect back our social categories and hierarchies. In the Louvre case, the mirror is turned toward us. The robbers succeeded not because they were invisible, but because they were seen through the lens of normality. In AI terms, they passed the classification test.

    From museum halls to machine learning

    This link between perception and categorization reveals something important about our increasingly algorithmic world. Whether it’s a guard deciding who looks suspicious or an AI deciding who looks like a “shoplifter,” the underlying process is the same: assigning people to categories based on cues that feel objective but are culturally learned.

    When an AI system is described as “biased,” this often means that it reflects those social categories too faithfully. The Louvre heist reminds us that these categories don’t just shape our attitudes, they shape what gets noticed at all.

    After the theft, France’s culture minister promised new cameras and tighter security. But no matter how advanced those systems become, they will still rely on categorization. Someone, or something, must decide what counts as “suspicious behavior.” If that decision rests on assumptions, the same blind spots will persist.

    The Louvre robbery will be remembered as one of Europe’s most spectacular museum thefts. The thieves succeeded because they mastered the sociology of appearance: They understood the categories of normality and used them as tools.

    And in doing so, they showed how both people and machines can mistake conformity for safety. Their success in broad daylight wasn’t only a triumph of planning. It was a triumph of categorical thinking, the same logic that underlies both human perception and artificial intelligence.

    The lesson is clear: Before we teach machines to see better, we must first learn to question how we see.

    Vincent Charles, Reader in AI for Business and Management Science, Queen’s University Belfast, and Tatiana Gherman, Associate Professor of AI for Business and Strategy, University of Northampton.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.
