Mar 7, 2023

I Shouldn't, But I Will


I would normally avoid quoting or boosting a Never-Trumper in any way, but sometimes even a pimp like Charlie Sykes should get a look-see.


Retribution, Eradication, and the Coming Storm
The words of CPAC


Former U.S. President Donald Trump speaks to reporters before his speech at the annual Conservative Political Action Conference (CPAC) (Photo by Anna Moneymaker/Getty Images)
Hello darkness, my old friend.

If you want a preview of what’s coming our way, take a look at the vocabulary of CPAC, including the former president’s promise of retribution, obliteration, and war.

Attention, perhaps, should be paid.

“Retribution”

“In 2016, I declared, ‘I am your voice,’” Donald Trump told his acolytes at CPAC. “Today, I add: I am your warrior. I am your justice. And for those who have been wronged and betrayed: I am your retribution.”

In case anyone missed it, Trump repeated the phrase: “I am your retribution.”

It was probably the strongest line of his speech, and the threat was intentionally unsubtle and unmistakable. He would “totally obliterate the ‘deep state,’” and wreak vengeance on the sinister scum who opposed him.

Ronald Reagan proclaimed “It’s Morning in America”; Trump declared, I am Nemesis.

This is not, to put it mildly, normal political rhetoric, at least in the English language. But it gives a taste of the bleak storm to come. The Atlantic’s John Hendrickson writes:


For much of the speech, Trump’s voice took on more of a soft and haggard whisper than the booming, throaty scream that characterized his campaign rallies. His language, by contrast, was bellicose.

Tonight’s address was among the darkest speeches he has given since his “American carnage” inaugural address. Trump warned that the United States is becoming “a nation in decline” and a “crime-ridden, filthy communist nightmare.”

He spoke of an “epic battle” against “sinister forces” on the left.

He repeatedly painted himself as a martyr, a tragic hero still hoping for redemption. “They’re not coming after me; they’re coming after you, and I’m just standing in their way,” Trump told the room. He pulled out his best, half-hearted Patton: “We are going to finish what we started. We’re going to complete the mission. We’re going to see this battle through to ultimate victory.”

He was heavy on adjectives, devastating with nouns. “We will liberate America from these villains and scoundrels once and for all,” he said…

And he is all-in on the Insurrection:


After seven mind-bending, soul-crushing years, it’s easy to get numbed by this sort of thing. But, as former congressman Joe Walsh writes in this morning’s Bulwark, we ought to take this sort of language seriously. Tom Nichols agrees, writing yesterday, “We need to stop treating support for Trump as if it’s just another political choice and instead work to isolate his renewed threat to our democracy and our national security.”

“Eradicated”

ICYMI, Michael Knowles, a commentator for the Daily Wire and BFF of Ted Cruz, declared at CPAC:


“There can be no middle way in dealing with transgenderism. It is all or nothing. If transgenderism is true, if men can become women, then it’s true for everybody of all ages.

“If it is false, then for the good of society, and especially for the good of the poor people who have fallen prey to this confusion, transgenderism must be eradicated from public life entirely — the whole preposterous ideology.”
After Knowles was accused of using genocidal language, he threatened lawsuits and indignantly insisted that he was not talking about eradicating transgender people, only transgender-ism. This was enough of a distinction that the Daily Beast changed its original headline “to more literally reflect the words Knowles used.” Insisted Knowles:

“Nobody’s calling to exterminate anybody because the other problem with that statement is that transgender people is not a real ontological category,” he added. “It’s not a legitimate category of being.”

But “eradicated” is a distinctive word with distinctive connotations and associations, and we should pay Knowles the compliment of thinking that he chose it carefully, specifically, and specially for the occasion.

He could have said that we should “challenge,” “confront,” “oppose,” “resist,” or “push back” against transgenderism.

Instead, he chose to use the word “eradicate.” Oxford Languages offers a few synonyms for his word choice: eliminate, exterminate, destroy, annihilate, extirpate, obliterate, kill, wipe out, liquidate, decimate, abolish, extinguish.

So let’s try an experiment here. Imagine that we had a speaker — at an event that is definitely not meant to be CPAC — who said, “Jew-ishness must be eradicated from public life.” Or how about “Juda-ism must be eradicated.” Or, how about saying “Zion-ism must be eradicated from the Middle East.”

He (or she) might deny that this in any way suggested the eradication, elimination, or extermination of any actual Jews or Zionists. But it seems unlikely that anyone except the most determined denialists and rationalizers would swallow that explanation.

“Knowles may or may not be smart enough to realize that a word like eradicate inherently carries a hint of physical menace,” writes Jonathan Chait.


The most generous account of his argument is that he lacks the intelligence to grasp the implications of his own position. The least generous account is that he is making a winking nod to ugly and hateful forces he has no intention of holding back.

But Michael Knowles is not dumb.

In fact, he is a very smart guy, who understands the language of the right: its nuances, and its various dog whistles. Back in the Before Times (2016) he actually wrote a valuable guide to the Alt-Right, which included “8 Things You Need To Know.” (It was so good, I included it in a footnote in my book, “How The Right Lost Its Mind,” page 169.)

The first thing Knowles said we should know about the New Right?

  • Racism is not a fringe element of the Alt-Right; it’s the movement’s central premise.
And he offered some examples of the language and signals they used:

  • Sam Francis, the late syndicated columnist who famously called for a “white racial consciousness”
  • Theodore Robert Beale, the white nationalist blogger better known by his pen name Vox Day, who counts as a central tenet of the Alt-Right that “we must secure the existence of white people and a future for white children,” which represents one half of the white nationalist, neo-Nazi numerical symbol 1488. (That phrase contains 14 words, while 8 refers to the eighth letter of the alphabet, H, which doubled represents “Heil Hitler.”)
  • Paul Ramsey, a white nationalist who produced a video titled “Is it wrong not to feel sad about the Holocaust?” and who seeks to revise historical accounts of the Holocaust, asking, “Do you mean that six million figure? You know that six million figure has been used many times before World War II, did you know that?”
Some other things Knowles said we needed to know about the Alt-Right:
  • It’s also explicitly anti-Semitic….
  • The Alt-Right loves Christendom but rejects Christianity. The Alt-Right admires Christendom primarily for uniting the continent and forging white European identity.
  • The Alt-Right wants to burn American politics to the ground. The Alt-Right most immediately opposes conservatism.
  • The Alternative Right asks conservatives to trade God for racial identity, liberty for strongman statism, and the unique American idea that “all men are created equal and endowed by their Creator with certain unalienable rights” for a cartoon Nazi frog.
The point is that Knowles knows the nomenclature, the memes, and the signals of the movement that now seems to have blended into CPAC/MAGAism. So, while he may not have been advocating actual genocide, or even the harm of any individuals, he knew perfectly well that he was using the sort of eliminationist rhetoric that fires up the fever swamps.

He knew his audience and the words they wanted to hear.


“When I said, like, I don’t know, it’s sort of weird that Pennsylvania managed to elect a vegetable, they criticize me as being ableist. I didn’t know what that was. But there’s always an ‘ist’. It doesn’t matter what you’re talking about. And apparently an ableist is someone who discriminates against those with disabilities.”

“I said: ‘Well, I’m not discriminating against any... ‘ I’d love for John Fetterman to have, like, good gainful employment. Maybe he could be, like, a bag guy at a grocery store. But, like, is it unreasonable for me to expect, as a citizen of the United States of America, to have a United States senator have basic cognitive function?”

Nota bene: A reminder that Trumpism is not merely post-truth and post-shame, but also post-even-a-shred-of-decency.



Today's Reddit


And one random rabbit.

In Other Bad News

Map of global average surface temperature in 2022 compared to the 1991-2020 average, with places that were warmer than average colored red, and places that were cooler than average colored blue. The bars on the graph show global temperatures compared to the 20th-century average each year, from 2022 (right) back to 1976 (left).

From over a year ago:


Scientists solve a major climate mystery, confirming Earth is hotter than it's been in at least 120 centuries

Scientists have resolved a controversial but key climate change mystery, bolstering climate models and confirming that Earth is hotter than it's been in at least 12,000 years, and perhaps even the last 128,000 years, according to the most recent annual global temperature data.

This mystery is known as the "Holocene temperature conundrum," and it describes a debate that has gone on over how temperatures have changed during the Holocene, an epoch that describes the last 11,700 years of our planet's history. While some previous proxy reconstructions suggest that average Holocene temperatures peaked between 6,000 and 10,000 years ago and the planet cooled after this, climate models suggest that global temperatures have actually risen over the past 12,000 years, with the help of factors like rising greenhouse gas emissions and climate change.

This "conundrum" has "cast doubts among skeptics about the efficacy of current climate models to accurately predict our future," lead author Samantha Bova, a postdoctoral research associate at Rutgers University, told Space.com in an email.

The new research puts this uncertainty to rest, however, demonstrating that current climate projections are right on the money.

The study "eliminates any doubts about the key role of carbon dioxide in global warming and confirms climate model simulations that show global mean annual temperature warming, rather than cooling, across the Holocene period," Bova said.

Specifically, the team demonstrated "that late Holocene cooling as reconstructed by proxies is a seasonal signal," Bova told Space.com.

To do this, the team developed a new method that allowed them to "use seasonal temperatures to come up with annual averages. Using our new method, we demonstrate that Holocene mean annual temperatures have been steadily rising," Bova added.

The scientists analyzed previously published sea surface temperature data, which used information about the fossils of foraminifera — single-celled organisms that live on the surface of the ocean — and other biomarkers from marine algae. This allowed them to reconstruct temperatures through history.

With this data, "we show that the post-industrial increase in global temperature rose from the warmest mean annual temperature recorded over the past 12,000 years," Bova said, adding that this is contrary to recent research.
"Earth’s global temperatures have therefore reached uncharted territory that has not been observed over at least the past 12,000 and perhaps the past 128,000 years."

"Given that 2020 is tied for the warmest year on record based on the new NASA/NOAA data release, our results demonstrate that average annual temperatures in 2020 were the warmest of the last 12,000 years and possibly the last 128,000 years," Bova concluded. (NOAA is the U.S. National Oceanic and Atmospheric Administration.)

By confirming temperature records throughout this time period, the team didn't just provide additional evidence for "the efficacy of current climate models in accurately simulating climate over the past 12,000 years," Bova said. The work also "gives confidence in their ability to predict the future."

This work was published Jan. 27 in the journal Nature.

Today's Pix

click

Mar 6, 2023

Today's "Conservative" Thing


I honestly don't understand what "conservatives" find so terribly wrong about that kinda thing - unless it's all part of some weird self-hatred syndrome, which makes it a projection of their own inner demons (?)

Is that why it seems so important for them to impose limits on people? Are they saying they can't possibly control their own potentially vile behavior, so they need to build a societal mechanism that imposes limits - by proxy - on themselves?

In the immortal words of Stan Marsh:

Ukraine


A day or two ago, Prigozhin was making noise about "if Wagner is withdrawn from Bakhmut, the whole line will collapse." At the time, I thought he was issuing a not-so-vague threat - ie: give me what I want or I'm outa here, and you can kiss the Donbas good-bye.

Now I'm thinking Shoigu and Putin may be having to consider stripping their western front in order to shore up the defense of Crimea.

Of course, I don't know much about such things, but something's cooking, and it's starting to smell like several more "tragic accidents" are about to befall some policy makers at the Kremlin.




Слава Україні 🌎🌏🌍❤️🇺🇦
... and stay lucky, you magnificent bastards.

Today's Reddit


I got questions:
When white people talk shit and threaten to throw down on somebody, why does it seem like they feel the need to sound like black people?
 
Is it just another manifestation of their racism?
 
More cultural appropriation?


It is a wonderment. 

Checking In


We were live at CPAC - and we were just about the only ones.



Mar 5, 2023

On Corporate Evolution

Once upon a time, there was an automobile industry that was so hung up on playing PR games trying to maintain its image as Numero Uno that it missed about a hundred signs that it was having its lunch money pilfered either by the American company across town, or by foreign (ie: Japanese) car makers.

Enter Lee Iacocca, Carroll Shelby, and the Ghost Of Harley Earl.

Those companies had become ossified, mostly because top-down plutocrats couldn't figure out how to listen to the guys on the line - the guys who were actually doing the work.

Things have and haven't changed a bit.


A 120-Year-Old Company Is Leaving Tesla in the Dust

Tesla had me convinced, for a while, that it was a cool company.

It made cars that performed animatronic holiday shows using their lights and power-operated doors. It came up with dog mode (a climate control system that stays running for dogs in a parked car), a GPS-linked air suspension that remembers where the speed bumps are and raises the car automatically, and “fart mode” (where the car makes fart sounds).

And, fundamentally, its cars had no competition. If you wanted an electric car that could go more than 250 miles between charges, Tesla was your only choice for the better part of a decade. The company’s C.E.O., Elon Musk, came across as goofy and eccentric: You could build great cars and name each model such that the lineup spells “SEXY.”

Or you would, if not for the party-killers over at boring old Ford. Ford thwarted Mr. Musk’s “SEXY” gambit by preventing Tesla from naming its small sedan the Model E, since that sounds a bit too much like a certain famous Ford, the Model T. So Mr. Musk went with Model 3, which either ruins the joke or elevates it, depending on how much you venerate Tesla and Elon Musk. I count myself as a former admirer of Mr. Musk and Tesla, and in fact put a deposit on a Model 3 after my first drive of one.

But the more I dealt with Tesla as a reporter — this was before Mr. Musk fired all the P.R. people who worked there — the more skeptical I became. Any time I spoke to anyone at Tesla, there was a sense that they were terrified to say the wrong thing, or anything at all. I wanted to know the horsepower of the Model 3 I was driving, and the result was like one of those oblique Mafia conversations where nothing’s stated explicitly, in case the Feds are listening. I ended up saying, “Well, I read that this car has 271 horsepower,” and the Tesla person replied, “I wouldn’t disagree with that.” This is not how healthy, functional companies answer simple factual questions.

That was back in 2017. In the years since, Tesla’s become even crankier, while its competition has loosened up. Public perception hasn’t yet caught up with the reality of the situation. If you want to work for a flexible, modern company, you don’t apply to Tesla. You apply to 120-year-old Ford.

Tesla’s veneer of irreverence conceals an inflexible core, an old-fashioned corporate autocracy. Consider Tesla’s remote work policy, or lack thereof. Last year, Mr. Musk issued a decree that Tesla employees log 40 hours per week in an office — and not a home office — if they expected to keep their jobs. On Indeed.com, the question, “Can you work remotely at Tesla?” includes answers like, “No,” and “Absolutely not, they won’t let it happen under any circumstances,” and “No, Tesla will work you until you lose everything.”

But on the other hand, the cars make fart noises. What a zany and carefree company!

Ford’s work-from-home rules for white-collar employees, meanwhile, sound straight out of Silicon Valley, in that the official corporate policy is that there is no official corporate policy — it’s up to the leaders of individual units to require in-person collaboration, or not, as situations dictate. There are new “collaboration centers” in lieu of cubicle farms, complete with food service and concierges. That’s not the reality of daily work life for every person at Ford — you can’t exactly bolt together an F-150 from home — but it’s an attempt to provide some flexibility for as many people as possible.

Ford also tends to make good on its promises, an area that’s become increasingly fraught for Tesla. Ford said it would offer a hands-free driver assist system, and now it does, with BlueCruise; you can take your hands off the steering wheel when it is engaged on premapped sections of highway. Tesla’s Full Self-Driving system is not hands-free in any situation, despite its name, and Tesla charges customers $15,000 for the feature on the promise that someday it will make the huge leap to full autonomous driving.

If you want to pay $15,000 for a feature that’s currently subject to a National Highway Traffic Safety Administration recall whose filing is titled “Full Self-Driving Software May Cause Crash,” don’t let me stop you, but a Tesla engineer also recently testified that a company video purporting to show the system in flawless action was faked. This makes sense, given all the other very real videos of Full Self-Driving doing things like steering into oncoming traffic or braking to a complete stop on a busy street for no reason. Tesla’s own website warns, “The currently enabled features require a fully attentive driver, who has their hands on the wheel and is prepared to take over at any moment.” So, full self-driving, except for that.

Tesla’s long-promised new vehicles, like the Cybertruck and a new version of its Roadster, also keep getting delayed. The Cybertruck was unveiled in 2019, and on Tesla’s most recent earnings call Mr. Musk admitted that it won’t be in production this year, which is becoming an annual refrain. Sure, Ford sold only 15,617 electric F-150 Lightning pickups in 2022, but that beats the Cybertruck’s sales by, let’s see, 15,617. Besides stealing Tesla’s market share on trucks, Ford’s stealing its corporate impishness, too — when the electric Mustang Mach-E was unveiled, Ford demonstrated its tailgating possibilities by filling its drainable front trunk (or “frunk”) with shrimp. “Frunk shrimp” became a meme, which surely tormented the emperor of try-hard social media posting, Elon Musk.

Speaking of which: Twitter. I will hazard the opinion that Mr. Musk’s $44 billion purchase of Twitter has not exactly burnished Tesla’s reputation. Besides showcasing the questionable decision-making inherent in paying that much for Twitter, Mr. Musk’s heightened profile on the platform hasn’t really done him any favors. For instance, when the bulk of your car company’s sales are in blue states, is it helpful to tweet, “My pronouns are Prosecute/Fauci”? Moreover, you’d think that the self-appointed class clown of corporate America would at least strive for a joke that eschews the hacky “my pronouns are/I identify as” construction. Maybe just go with “Fauci makes me grouchy”? Elon, let’s workshop this next time.

Maybe predictability isn’t trendy, but if you buy a new car you’d probably like to think that its manufacturer won’t cut the price by $13,000 the next week, thus destroying your car’s resale value. And you might hope that features you pay for work on the day you pay for them, and not at some unspecified future date. Maybe you want a car from a company whose C.E.O. isn’t indelibly associated with the product.

I just bought a Jeep and I have no idea who the C.E.O. is there. That’s cool with me.

BTW - when was the last time you saw a 4-door sedan stomp a tricked out Corvette in a drag race?


This is not your grandma's Insight


Today's Nerdy Stuff

 
Note: "Nerdy" doesn't (necessarily) mean a little light comedy at the smart guys' expense, or too densely complicated to register in a normal person's brain.

Onward -

On the surface, information like this piece in NYT leads me to think:
"Great - while others are applying it in ways that could solve some pretty big problems, what're we doing with AI here in USAmerica Inc? ChatBots that get pissy if an interviewer asks a challenging question."

But that sells us a bit short. New tech often starts out in a kind of game form. We play with it to see what all we can get it to do. That gives the base product a good and thorough workout, and gathers important user-supplied feedback so the thing can either become much more robust, or be exposed as too deeply flawed to pursue outside of the lab.

It does bother me that a dog-ass dictatorship like Orbán's Hungary is out front making some pretty amazing advances with it, even though it's been on the radar here for years.

All that said, I really don't care where it comes from, I'll take what sounds like a win on the good side of things, as I'm sure DARPA (and the Hungarian Ministry of Defense, et al) are very busily trying to co-opt it as the next logical step towards Skynet.


Using A.I. to Detect Breast Cancer That Doctors Miss

Hungary has become a major testing ground for A.I. software to spot cancer, as doctors debate whether the technology will replace them in medical jobs.

Two radiologists had previously said the X-ray did not show any signs that the patient had breast cancer. But Dr. Ambrózay was looking closely at several areas of the scan circled in red, which artificial intelligence software had flagged as potentially cancerous.

“This is something,” she said. She soon ordered the woman to be called back for a biopsy, which is taking place within the next week.

Advancements in A.I. are beginning to deliver breakthroughs in breast cancer screening by detecting the signs that doctors miss. So far, the technology is showing an impressive ability to spot cancer at least as well as human radiologists, according to early results and radiologists, in what is one of the most tangible signs to date of how A.I. can improve public health.

Hungary, which has a robust breast cancer screening program, is one of the largest testing grounds for the technology on real patients. At five hospitals and clinics that perform more than 35,000 screenings a year, A.I. systems were rolled out starting in 2021 and now help to check for signs of cancer that a radiologist may have overlooked. Clinics and hospitals in the United States, Britain and the European Union are also beginning to test or provide data to help develop the systems.

A.I. usage is growing as the technology has become the center of a Silicon Valley boom, with the release of chatbots like ChatGPT showing how A.I. has a remarkable ability to communicate in humanlike prose — sometimes with worrying results. Built on a form of A.I. similar to that used by chatbots and modeled on the human brain, the breast cancer screening technology shows other ways that A.I. is seeping into everyday life.

Widespread use of the cancer detection technology still faces many hurdles, doctors and A.I. developers said. Additional clinical trials are needed before the systems can be more widely adopted as an automated second or third reader of breast cancer screens, beyond the limited number of places now using the technology. The tool must also show it can produce accurate results on women of all ages, ethnicities and body types. And the technology must prove it can recognize more complex forms of breast cancer and cut down on false-positives that are not cancerous, radiologists said.

The A.I. tools have also prompted a debate about whether they will replace human radiologists, with makers of the technology facing regulatory scrutiny and resistance from some doctors and health institutions. For now, those fears appear overblown, with many experts saying the technology will be effective and trusted by patients only if it is used in partnership with trained doctors.

And ultimately, A.I. could be lifesaving, said Dr. László Tabár, a leading mammography educator in Europe who said he was won over by the technology after reviewing its performance in breast cancer screening from several vendors.

“I’m dreaming about the day when women are going to a breast cancer center and they are asking, ‘Do you have A.I. or not?’” he said.

Hundreds of images a day

In 2016, Geoff Hinton, one of the world’s leading A.I. researchers, argued the technology would eclipse the skills of a radiologist within five years.

“I think that if you work as a radiologist, you are like Wile E. Coyote in the cartoon,” he told The New Yorker in 2017. “You’re already over the edge of the cliff, but you haven’t yet looked down. There’s no ground underneath.”

Mr. Hinton and two of his students at the University of Toronto built an image recognition system that could accurately identify common objects like flowers, dogs and cars. The technology at the heart of their system — called a neural network — is modeled on how the human brain processes information from different sources. It is what is used to identify people and animals in images posted to apps like Google Photos, and allows Siri and Alexa to recognize the words people speak. Neural networks also drove the new wave of chatbots like ChatGPT.

Many A.I. evangelists believed such technology could easily be applied to detect illness and disease, like breast cancer in a mammogram. In 2020, there were 2.3 million breast cancer diagnoses and 685,000 deaths from the disease, according to the World Health Organization.

But not everyone felt replacing radiologists would be as easy as Mr. Hinton predicted. Peter Kecskemethy, a computer scientist who co-founded Kheiron Medical Technologies, a software company that develops A.I. tools to help radiologists detect early signs of cancer, knew the reality would be more complicated.

Mr. Kecskemethy grew up in Hungary spending time at one of Budapest’s largest hospitals. His mother was a radiologist, which gave him a firsthand look at the difficulties of finding a small malignancy within an image. Radiologists often spend hours every day in a dark room looking at hundreds of images and making life-altering decisions for patients.

“It’s so easy to miss tiny lesions,” said Dr. Edith Karpati, Mr. Kecskemethy’s mother, who is now a medical product director at Kheiron. “It’s not possible to stay focused.”

Mr. Kecskemethy, along with Kheiron’s co-founder, Tobias Rijken, an expert in machine learning, said A.I. should assist doctors. To train their A.I. systems, they collected more than five million historical mammograms of patients whose diagnoses were already known, provided by clinics in Hungary and Argentina, as well as academic institutions, such as Emory University. The company, which is in London, also pays 12 radiologists to label images using special software that teaches the A.I. to spot a cancerous growth by its shape, density, location and other factors.

From the millions of cases the system is fed, the technology creates a mathematical representation of normal mammograms and those with cancers. With the ability to look at each image in a more granular way than the human eye, it then compares that baseline to find abnormalities in each mammogram.

Last year, after a test on more than 275,000 breast cancer cases, Kheiron reported that its A.I. software matched the performance of human radiologists when acting as the second reader of mammography scans. It also cut down on radiologists’ workloads by at least 30 percent because it reduced the number of X-rays they needed to read. In other results from a Hungarian clinic last year, the technology increased the cancer detection rate by 13 percent because more malignancies were identified.

Dr. Tabár, whose techniques for reading a mammogram are commonly used by radiologists, tried the software in 2021 by retrieving several of the most challenging cases of his career in which radiologists missed the signs of a developing cancer. In every instance, the A.I. spotted it.

“I was shockingly surprised at how good it was,” Dr. Tabár said. He said that he did not have any financial connections to Kheiron when he first tested the technology and has since received an advisory fee for feedback to improve the systems. Systems he tested from other A.I. companies, including Lunit Insight from South Korea and Vara from Germany, have also delivered encouraging detection results, he said.

Proof in Hungary

Kheiron’s technology was first used on patients in 2021 in a small clinic in Budapest called MaMMa Klinika. After a mammogram is completed, two radiologists review it for signs of cancer. Then the A.I. either agrees with the doctors or flags areas to check again.

Across five MaMMa Klinika sites in Hungary, 22 cases have been documented since 2021 in which the A.I. identified a cancer missed by radiologists, with about 40 more under review.

“It’s a huge breakthrough,” said Dr. András Vadászy, the director of MaMMa Klinika, who was introduced to Kheiron through Dr. Karpati, Mr. Kecskemethy’s mother. “If this process will save one or two lives, it will be worth it.”

Kheiron said the technology worked best alongside doctors, not in lieu of them. Scotland’s National Health Service will use it as an additional reader of mammography scans at six sites, and it will be in about 30 breast cancer screening sites operated by England’s National Health Service by the end of the year. Oulu University Hospital in Finland plans to use the technology as well, and a bus will travel around Oman this year to perform breast cancer screenings using A.I.

“An A.I.-plus-doctor should replace doctor alone, but an A.I. should not replace the doctor,” Mr. Kecskemethy said.

The National Cancer Institute has estimated that about 20 percent of breast cancers are missed during screening mammograms.

Constance Lehman, a professor of radiology at Harvard Medical School and chief of breast imaging and radiology at Massachusetts General Hospital, urged doctors to keep an open mind.

“We are not irrelevant,” she said, “but there are tasks that are better done with computers.”

At Bács-Kiskun County Hospital outside Budapest, Dr. Ambrózay said she had initially been skeptical of the technology — but was quickly won over. She pulled up the X-ray of a 58-year-old woman with a tiny tumor spotted by the A.I. that Dr. Ambrózay had a hard time seeing.

The A.I. saw something, she said, “that seemed to appear out of nowhere.”