Victoria Fox's Blog, page 224
June 27, 2023
Chinese government mouthpiece vows Beijing will ramp up drive for AI global supremacy
China is fully embracing the potential transformative power of artificial intelligence and determined to emerge as the world’s leading AI power, according to experts and Chinese state media.
The People’s Daily, the mouthpiece newspaper of the ruling Chinese Communist Party (CCP), on Monday published its second commentary in two weeks vowing to intensify efforts to unleash the potential of AI.
“[AI] will become an important driving force in the new wave of technological revolution and industrial transformation, with a major impact on people’s production and life,” the commentary says.
The article, first flagged by the South China Morning Post, lists several areas where China could benefit from AI, such as daily office work, pharmaceuticals, and meteorology.

Experts have weighed American and Chinese military and civilian investments in artificial intelligence; while some believe the U.S. currently holds a slight edge in developing the technology, others worry China has already surpassed U.S. capability. (Getty Images)
SENATE URGED TO PUNISH US COMPANIES THAT HELP CHINA BUILD ITS AI-DRIVEN ‘SURVEILLANCE STATE’
The People’s Daily’s focus on AI comes as experts in the U.S. are warning of China’s tech ambitions.
In a new report, the Center for Strategic and Budgetary Assessments concludes that, while the U.S. currently leads China in industrial might and national security technology, Beijing is on the offensive in areas like AI and believes it can be a global leader in the next decade.
“The United States has a powerful advantage because it played a central role in establishing the existing global techno-security order,” the report states. “But the current revolution in global technology affairs offers a window of opportunity for China to stake a leadership claim on emerging domains such as 5G, AI, quantum technology, cybersecurity, clean energy, and biotechnology.”
The term “techno-security” describes various innovations that can be applied to national security requirements.
According to the report, the U.S. needs to act now to secure its stronger position or risk China catching up in the near future as Beijing continues to close the technological gap between the two countries.

The ChatGPT logo and the words “AI Artificial Intelligence” are seen in this illustration taken May 4, 2023. (REUTERS/Dado Ruvic/Illustration)
AI PROGRAM FLAGS CHINESE PRODUCTS ALLEGEDLY LINKED TO UYGHUR FORCED LABOR: ‘NOT COINCIDENCE, IT’S A STRATEGY’
“The U.S. techno-security system remains better organized and structured for the long-term techno-security competition than China, but it cannot be complacent and needs to urgently address a raft of structural flaws in its system,” the authors conclude. “As China ramps up its efforts to transform its techno-security capabilities and sets deadlines to achieve its goals over the next 5–10 years, the U.S. has only a limited window of opportunity to act.”
Tai Ming Cheung, a co-author of the report and a professor at the University of California, San Diego, noted that China “is now doubling down” on AI, telling the U.S. Naval Institute’s news website that the Chinese “think they have a real chance to lead” in this sector.
Since the AI tool ChatGPT was released in November, many observers both in and out of government have highlighted the strategic importance of tracking China’s focus on developing AI to boost industrial productivity and its economy, the world’s second largest.
In April, the Politburo, the CCP’s decision-making body, said China should prioritize the “development of artificial general intelligence” and “create an ecosystem for innovation” while simultaneously trying to mitigate the risks of AI, according to a statement issued by state media outlet Xinhua summarizing the Politburo’s quarterly meeting on the country’s social and economic development.
According to the accounting and consulting firm PwC, China will benefit significantly from AI, which is set to contribute to a 26% increase in China’s gross domestic product by 2030.

U.S. President Joe Biden, left, and Chinese President Xi Jinping, right. (Getty Images)
HOUSE DEMANDS AI UPDATE FROM PENTAGON AS THREATS FROM CHINA, OTHER ADVERSARIES PILE UP
However, the People’s Daily noted that China still faces challenges, such as a lack of compute-in-memory chips, tough ethical questions, intellectual property concerns, and potential risks involving personal privacy and online fraud.
“The evolving nature of AI also poses certain risks,” the commentary read.
The Chinese government mouthpiece also called on governments and industry leaders to address the risks associated with AI.
Chinese leaders are also deliberating a new law targeting telecom and online fraud that uses AI face-swapping technology.
Meanwhile, the Biden administration has reportedly reached out to China about working together on international norms for AI in weapons systems — a potential new area of both cooperation and competition amid tensions between the two countries.
Aaron Kliegman is a politics reporter for Fox News Digital.
Former Trump Cabinet member Betsy DeVos undecided on 2024 endorsement
FIRST ON FOX: Betsy DeVos – who served as head of the Education Department under former President Donald Trump – remains undecided on whom she will support in the 2024 race for the White House.
“She’s watching the race closely but has not yet made a decision on an endorsement,” Nate Bailey, DeVos’ chief of staff, told Fox News Digital. “She’s very encouraged to see all of the candidates talking seriously about expanding education freedom and empowering parents.”
DeVos, the 11th U.S. secretary of education, served from 2017 to 2021 and was one of the few Trump-era Cabinet members to maintain her post for his entire term in office.
As education secretary, the Michigan native championed school choice, arguing that parents should have the power to take tax dollars allocated for their child to different schools if their local public school doesn’t meet their needs.
‘STOP THE INVASION’: DESANTIS 2024 CAMPAIGN VIDEO PREVIEWS MAJOR BORDER POLICY ROLLOUT

Secretary of Education Betsy DeVos testifies during a Senate hearing on March 28, 2019, in Washington, D.C. (Zach Gibson/Getty Images)
DeVos touted Trump’s 1776 Commission as an alternative to the historically inaccurate 1619 Project, which pegs slavery as the foundation of American history.
But the former education secretary, now 65, has shown warmth to a number of other 2024 Republican candidates.
On May 31, DeVos appeared with former Vice President Mike Pence in Grand Rapids, Michigan, for a conversation on what conservatives believe.

Betsy DeVos, Donald Trump and Mike Pence pose for a photo at Trump International Golf Club in Bedminster, New Jersey, on Nov. 19, 2016. (Drew Angerer)
The DeVos family financially backed Florida Gov. Ron DeSantis’ gubernatorial campaigns. According to state campaign finance records, DeVos personally contributed $5,500 to a super PAC that backed DeSantis’ reelection bid in April 2022.
DeVos continues to be an influential voice about American education. Last year, she released her best-selling book, “Hostages No More: The Fight for Education Freedom and the Future of the American Child,” which covers critical race theory in education, COVID-19 pandemic school lockdowns and how to fix America’s schools.
Before serving as education secretary for the Trump administration, DeVos advocated for school choice, charter schools and free speech on campuses.

Former Secretary of Education Betsy DeVos arrives for the 45th Kennedy Center Honors at the John F. Kennedy Center for the Performing Arts on Dec. 4, 2022. (Stefani Reynolds)
She and her husband started All Children Matter in 2003 in support of voucher programs. In 2010, she helped found the school choice advocacy organization American Federation for Children.
DeVos and her husband founded the Dick and Betsy DeVos Family Foundation in 1989, which donated to charter and Christian schools, organizations supporting school choice, and various universities and arts foundations.
Elizabeth Troutman is a College Associate at Fox News Digital.
Follow Elizabeth on Twitter @ElizTroutman
Israel embraces cutting-edge AI to thwart cyberattacks, foil terrorism
Israel continues to explore innovative uses for artificial intelligence (AI) in various aspects of security and law enforcement, helping to foil numerous threats.
“AI technology has been incorporated quite naturally into the Shin Bet’s interdiction machine,” Shin Bet Director Ronen Bar said in a speech to the Cyber Week conference in Israel. “Using AI, we have spotted a not-inconsequential number of threats.”
Shin Bet, the Israeli counterpart to the FBI or Britain’s MI5, has created its own generative AI platform, akin to ChatGPT, Bar revealed. He explained that the platform has allowed the intelligence service to streamline its work by flagging surveillance anomalies and sorting “endless” amounts of intelligence.
“Since the beginning of 2022, ISA handled 600 ISIS-related cases, many of them consumed similar violent and dangerous content on social media and on the web. Some were even arrested just before attacking,” Bar said. “They are added to roughly 800 major attacks we have foiled since January 2022.”
NEW TECHNOLOGY SAVES FARMERS TIME IN FIELD BY TACKLING MOST ‘TEDIOUS, TIME-CONSUMING’ PROBLEM
“An alarming number of them have a strong basis on the web – posts, inspiration, knowledge or social groups,” he added. “The trend is clear. Traditional security organizations must adapt to the new situation, where any angry person with access to the Internet may become a threat.”
“Already today, with AI, we have identified a significant number of threats,” he said. “The machine and its ability to detect anomalies create a protective wall against our enemies, alongside our traditional capabilities. … Since we have understood we can’t fight this war with sticks and stones, we recognize the threats but also see opportunities using AI.”

The head of the Shin Bet domestic security agency, Ronen Bar, speaks at the Cyber Week conference in Tel Aviv, Israel.
Retired Maj. Gen. Isaac Ben-Israel, director of the Blavatnik Interdisciplinary Cyber Research Center at Tel Aviv University, argued that the “accelerating rise in the use of artificial intelligence has a drastic impact on the cybersecurity arena, cyberdefense and the nature of malicious cyberattacks.”
“As the use of AI increases, our society becomes more and more dependent on computers, leading to a greater need for strong cyberdefense measures,” he said.
UN CALLS FOR AI WATCHDOG AGENCY DUE TO ‘TREMENDOUS’ POTENTIAL
Gaby Portnoy, director-general of the Israel National Cyber Directorate, told the Cyber Week conference that “anyone who carries out cyberattacks against Israeli citizens must take into account the price he will pay.”
“In the past year, we have been working hard to develop our resilience and expand our capabilities to detect cyberattacks, raise our shields and expose malicious activities, specifically Iranian,” Portnoy said, adding that the vast majority of attacks are thwarted.

Officers utilize a range of information to identify and locate targets. (IDF Spokesperson unit)
Portnoy described some of the projects the Cyber Directorate has pursued over the past year, saying that Israel is working with “our partner from the UAE [United Arab Emirates], His Excellency Dr. [Mohamed] Al Kuwaiti” to build “a multinational cybercollaboration platform for cyberinvestigation and knowledge building.”
Rafael Advanced Defense Systems Ltd., a global leader in defense technology, helped develop a new system called Puzzle, which uses AI to combine and analyze visual data, communications information and other information to create a “comprehensive and filtered dataset,” according to a company press release.
AI REVEALS CHEMICALS THAT COULD STOP AGING IN ITS TRACKS
Puzzle seamlessly interfaces with existing command and control systems, helping to make sense of incoming data and prioritize necessary targets within tight time frames, improving the efficacy of AI targeting.
Essentially, the Puzzle system acts as a filter for the enormous amount of information AI can process, as many analysts and officials look to keep a human in the loop in any AI-powered process.

Puzzle seamlessly interfaces with existing command and control systems, helping to make sense of the incoming data to prioritize necessary targets within tight time frames, helping improve the efficacy of AI targeting. (IDF Spokesperson Unit)
Israel has remained on the cutting edge of AI and its uses across various security fields: The Israel Defense Forces (IDF) has invested in AI, which officials have argued presents “a leap forward” even as researchers raise concerns about the potential escalation it would create.
Col. Uri, head of the data and AI department, Digital Transformation Division, previously told Fox News Digital that “Anyone who wants to make such a change faces a huge challenge.”
The IDF used AI in a 2021 operation to successfully target at least two Hamas commanders, producing “200 new target assets” by using the new digital methods to create likely targets and locations to hit.
“Because we don’t have a lot of manpower, we need to find creative ways to compensate,” Ram Ben Tzion, founder and CEO of tech firm Ultra, previously told Fox News Digital. “So, when it comes to data and intelligence, many times we’ve had to rely on innovation and technology to compensate for lack of resources, human or other.”
Peter Aitken is a Fox News Digital reporter with a focus on national and global news.
Google quietly removes drag show from Pride promo section after Christian employees file petition: report
Google quietly removed a “Pride and Drag Show” from a list of company-promoted LGBTQ+ events in California after hundreds of Christian employees and others signed a petition labeling it as a direct attack on “religious beliefs and sensitivities,” according to a report.
The petition’s supporters said the event at the LGBTQ+ bar Beaux in San Francisco Tuesday disrespected the Christian faith and specifically described the performance of a drag artist who goes by the name “Peaches Christ” as “provocative and inflammatory,” CNBC reported.
“Their provocative and inflammatory artistry is considered a direct affront to the religious beliefs and sensitivities of Christians,” the petition said of the headliner, per the report.
The tech company subsequently removed the event, which was initially promoted to “wrap up this amazing month,” from the list of annually sponsored activities on its website.
NYC DRAG MARCHERS CHANT ‘WE’RE COMING FOR YOUR CHILDREN’ DURING PRIDE EVENT

The performance of drag artist “Peaches Christ” was specifically labeled as a “provocative and inflammatory” attack on the “religious beliefs and sensitivities of Christians.” (Steve Jennings/Getty Images)
Google spokesperson Chris Pappas told the outlet the event was removed as it was initially posted “without going through our standard events process.”
“We’ve long been very proud to celebrate and support the LGBTQ+ community. Our Pride celebrations have regularly featured drag artists for many years, including several this year,” he said.
Fox News Digital reached out to Google for comment, but a response was not immediately received.
DEFENSE SECRETARY’S NEW GUIDANCE ON DRAG SHOWS ON MILITARY BASES HAS IMMEDIATE IMPACT: REPORT
Pappas did not say whether the petition prompted the company’s decision to remove the public event from the list.

Google spokesperson Chris Pappas said the tech giant has “long been very proud to celebrate and support the LGBTQ+ community,” but the event was removed from the site because it did not go through the standard events process. (ANGELA WEISS/AFP)
“While the event organizers have shifted the official team event onsite, the performance will go on at the planned venue — and it’s open to the public, so employees can still attend,” Pappas said.
GOP SENATORS MOVE TO BAN DRAG SHOWS FROM MILITARY BASES: ‘GROSS MISUSE OF TAXPAYER FUNDS’
Google sponsors a series of Pride events in San Francisco every June, which include various drag shows and a Pride parade.

Google said it sponsors Pride events, like drag shows and a parade, each June in San Francisco. (Steve Jennings/Getty Images)
Just four years ago, the tech company was forced to deal with a petition from LGBTQ+ employees who spoke out against the company’s participation in the San Francisco Pride parade. The petitioners argued that Google’s float should have been removed because the company did not go far enough to protect its LGBTQ+ employees.
Several companies across the U.S. have sought to balance the promotion of their Pride materials with the wishes of their employees and customers.
Companies seen as failing to strike that balance, such as Bud Light with its promotion featuring a transgender influencer and Target with its controversial LGBTQ+ displays, have faced criticism and boycotts.
The trans-sport two-step dishonors female athlete pioneers like my mom

The Senate Judiciary Committee recently held a hearing entitled, “Protecting Pride: Defending the Civil Rights of LGBTQ+ Americans,” where former University of Kentucky women’s swimmer Riley Gaines gave emotional testimony about her experience losing an NCAA championship to a male transgender swimmer, and having to change in a locker room with the 6’4″ 22-year-old biological man exposing his male genitalia in front of her and other female swimmers.
Senate Democrats, led by Dick Durbin of Illinois, downplayed the elite swimmer’s powerful message and delivered general bromides about discrimination against transgender people and the “divisive and hateful rhetoric” putting “children in danger.” Durbin said, “LGBTQ+ Americans are asking for no more—and no less—than the full freedom to live as who they are.”
Actually, on this score, transgender activists are indeed asking for a lot more, and their demands discriminate against women athletes who train hard for the right to compete and win on a level playing field with other women. Such efforts also dishonor real advocates of gay rights, as well as the pioneers of women’s sports who worked so hard to let females enjoy and compete in sports previously restricted to men.
Rights for gay Americans like me have made incredible strides in a few short decades, when we faced true discrimination in the areas of military service, marriage, employment, and – in many quarters – general social acceptance. In my own case more than 30 years ago, I had to mask my sexuality to serve my country in the Marine Corps, before the advent of “don’t ask, don’t tell,” and long before gay Americans were accepted fully in the military. That was real discrimination, and thankfully it was resolved fully across the uniformed services a decade ago. Now, thanks to the efforts of so many trailblazers over the past 60 years, gays enjoy full equality in the military, as well as in each of these other important societal realms.
RILEY GAINES: REWRITE OF TITLE IX IS AN ABOMINATION
The same is true for women’s sports, where decades ago women were not allowed to compete in many athletic events open to men because they were not considered physically up for the stress of participating, and it took true pioneers to change that for the benefit of society.
My late mother was a world-class marathon runner and a member of the U.S. women’s national team in the mid-1970s, a time when women were not allowed to compete in that event in the Olympics or in other marquee long-distance races. In fact, another top woman runner of that era, Kathrine Switzer, was attacked and nearly pushed to the ground by a race official when she was discovered running covertly in the 1967 Boston Marathon, when women were barred from that storied event. Surrounded by a few burly male friends, she continued the race and made it across the finish line. Her story spread nationally and led to the event admitting women five years later.
Several years after that, my mother and other top female and male long-distance runners formed the International Runners’ Committee, which worked hard to break down barriers for women in long-distance running competitions. Thanks to their efforts and those of advocates in other countries, the Olympic Games added a women’s marathon in 1984, which American Joan Benoit won that year, along with a women’s 3,000-meter race. Over the next decade-plus, the Olympics added the three remaining long-distance events, bringing women to full parity with men in track-and-field racing worldwide.
In 1984, the women’s marathon record stood at 2 hours 24 minutes, and the men’s record was a full 16 minutes faster, at 2 hours 8 minutes. How would Joan Benoit, my mother, and other pioneers like Kathrine Switzer feel if they achieved their hard-fought goal of allowing women to compete in the Olympic marathon, only to have several biological male second-stringers limber up at the starting line with the best women runners in the world, intending to run ten-plus minutes faster than the rest of the field simply by declaring themselves to be women and taking a few hormone treatments? It would be absurd on its face, and true pioneers of women’s equality in sports like my mother would oppose it 100 percent.
The transparent unfairness of this trans-sport two-step is not hard for Americans to grasp – no matter their gender or sexual identity. Asking transgender biological male athletes to stick to competing in men’s sports instead of women’s sports, and to shower in men’s locker rooms, not women’s, does not represent discrimination.
Transgender advocates who say otherwise dishonor those who have made monumental strides in recent decades both on true gay rights and on forging a level playing field for women in competitive sports.
John Ullyot served for seven years as a senior staff member in the U.S. Senate and is a former deputy assistant to the president.
Tech experts outline the four ways AI could spiral into worldwide catastrophes
Tech experts, Silicon Valley billionaires and everyday Americans have voiced their concerns that artificial intelligence could spiral out of control and lead to the downfall of humanity. Now, researchers at the Center for AI Safety have detailed exactly what “catastrophic” risks AI poses to the world.
“The world as we know it is not normal,” researchers with the Center for AI Safety (CAIS) wrote in a recent paper titled “An Overview of Catastrophic AI Risks.” “We take for granted that we can talk instantaneously with people thousands of miles away, fly to the other side of the world in less than a day, and access vast mountains of accumulated knowledge on devices we carry around in our pockets.”
That reality would’ve been “inconceivable” to people centuries ago and remained far-fetched even a few decades back, the paper stated. A pattern in history has emerged of “accelerating development,” the researchers noted.
“Hundreds of thousands of years elapsed between the time Homo sapiens appeared on Earth and the agricultural revolution,” the researchers continued. “Then, thousands of years passed before the industrial revolution. Now, just centuries later, the artificial intelligence (AI) revolution is beginning. The march of history is not constant—it is rapidly accelerating.”
CAIS is a tech nonprofit that works to reduce “societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards,” while also acknowledging artificial intelligence has the power to benefit the world.
EXPERTS WARN ARTIFICIAL INTELLIGENCE COULD LEAD TO ‘EXTINCTION’

Experts argue the key difference between AI investment in China and the U.S. is that the American model is driven by private companies, whereas China takes a government-led approach. (JOSEP LAGO/AFP via Getty Images)
The CAIS leaders behind the study, including the nonprofit’s director Dan Hendrycks, broke down the main sources of catastrophic AI risk into four categories: malicious use, the AI race, organizational risks and rogue AIs.
“As with all powerful technologies, AI must be handled with great responsibility to manage the risks and harness its potential for the betterment of society,” Hendrycks and his colleagues Mantas Mazeika and Thomas Woodside wrote. “However, there is limited accessible information on how catastrophic or existential AI risks might transpire or be addressed.”
NEXT GENERATION ARMS RACE COULD CAUSE ‘EXTINCTION’ EVENT: TECH EXECUTIVE
Malicious Use
Artificial intelligence hacker behind computer. (Fox News)
The study from the CAIS experts defines malicious use of AI as when a bad actor uses the technology to cause “widespread harm,” such as through bioterrorism, misinformation and propaganda, or the “deliberate dissemination of uncontrolled AI agents.”
The researchers pointed to an incident in Japan in 1995, when the doomsday cult Aum Shinrikyo spread sarin, an odorless and colorless nerve agent, on subway cars in Tokyo. The attack ultimately killed 13 people and injured about 5,800 others in the cult’s effort to jumpstart the end of the world.
Fast-forward nearly 30 years, and AI could potentially be used to create a bioweapon with devastating effects on humanity if a bad actor gets hold of the technology. The CAIS researchers floated a hypothetical in which a research team open-sources an “AI system with biological research capabilities” intended to save lives, only for bad actors to repurpose it to create a bioweapon.
AI COULD GO ‘TERMINATOR,’ GAIN UPPER HAND OVER HUMANS IN DARWINIAN RULES OF EVOLUTION, REPORT WARNS
“In situations like this, the outcome may be determined by the least risk-averse research group. If only one research group thinks the benefits outweigh the risks, it could act unilaterally, deciding the outcome even if most others don’t agree. And if they are wrong and someone does decide to develop a bioweapon, it would be too late to reverse course,” the study states.
Malicious use could entail bad actors creating bioengineered pandemics, using AI to create new and more powerful chemical and biological weapons, or even unleashing “rogue AI” systems trained to upend life.
“To reduce these risks, we suggest improving biosecurity, restricting access to the most dangerous AI models, and holding AI developers legally liable for damages caused by their AI systems,” the researchers wrote.
AI Race
A woman chats with an artificial intelligence chatbot. (Getty Images)
The researchers define the AI race as competition potentially spurring governments and corporations to “rush the development of AIs and cede control to AI systems,” comparing the race to the Cold War when the U.S. and Soviet Union sprinted to build nuclear weapons.
“The immense potential of AIs has created competitive pressures among global players contending for power and influence. This ‘AI race’ is driven by nations and corporations who feel they must rapidly build and deploy AIs to secure their positions and survive. By failing to properly prioritize global risks, this dynamic makes it more likely that AI development will produce dangerous outcomes,” the research paper outlines.
In the military, the AI race could translate to “more destructive wars, the possibility of accidental usage or loss of control, and the prospect of malicious actors co-opting these technologies for their own purpose” as AI gains traction as a useful military weapon.
WHAT ARE THE DANGERS OF AI? FIND OUT WHY PEOPLE ARE AFRAID OF ARTIFICIAL INTELLIGENCE
Lethal autonomous weapons, for example, can kill a target without human intervention while streamlining accuracy and decision-making time. The weapons could become superior to humans and militaries could delegate life-or-death situations to the AI systems, according to the researchers, which could escalate the likelihood of war.
“Although walking, shooting robots have yet to replace soldiers on the battlefield, technologies are converging in ways that may make this possible in the near future,” the researchers explained.
“Sending troops into battle is a grave decision that leaders do not make lightly. But autonomous weapons would allow an aggressive nation to launch attacks without endangering the lives of its own soldiers and thus face less domestic scrutiny,” they added, arguing that if political leaders no longer need to take responsibility for human soldiers returning home in body bags, nations could see an increase in the likelihood of war.
Artificial intelligence could also open the floodgates to more accurate and fast cyberattacks that could decimate infrastructure or even spark a war between nations.
“To reduce risks from an AI race, we suggest implementing safety regulations, international coordination, and public control of general-purpose AIs,” the paper states.
Organizational Risks
An illustration of artificial intelligence hacking data. (iStock)
The researchers behind the paper say labs and research teams building AI systems “could suffer catastrophic accidents, particularly if they do not have a strong safety culture.”
“AIs could be accidentally leaked to the public or stolen by malicious actors. Organizations could fail to invest in safety research, lack understanding of how to reliably improve AI safety faster than general AI capabilities, or suppress internal concerns about AI risks,” researchers wrote.
They compared the risks facing AI organizations to disasters throughout history, such as Chernobyl, Three Mile Island and the fatal Challenger space shuttle explosion.
AI TECH ‘MORE DANGEROUS THAN AN AR-15,’ CAN BE TWISTED FOR ‘MALEVOLENT POWER,’ EXPERT WARNS
“As we progress in developing advanced AI systems, it is crucial to remember that these systems are not immune to catastrophic accidents. An essential factor in preventing accidents and maintaining low levels of risk lies in the organizations responsible for these technologies,” the researchers wrote.
The researchers argue that even in the absence of bad actors or competitive pressure, AI could have catastrophic effects on humanity due to human error alone. In the case of the Challenger or Chernobyl, there was already well established knowledge on rocketry and nuclear reactors when chaos struck, but AI in comparison is far less understood.
“AI lacks a comprehensive theoretical understanding, and its inner workings remain a mystery even to those who create it. This presents an added challenge of controlling and ensuring the safety of a technology that we do not yet fully comprehend,” the researchers argued.
AI accidents would not only be potentially catastrophic, but also hard to avoid.
The researchers pointed to an incident at OpenAI, the AI lab behind ChatGPT, where an AI system was trained to produce uplifting responses to users, but human error led the system to produce “hate-filled and sexually explicit text overnight.” Bad actors who hack a system or a leak of an AI system could also pave the way for catastrophe as malicious entities reconfigure the systems beyond the original creator’s intentions.
History has also shown that inventors and scientists often underestimate how quickly technological advances become reality; the Wright brothers, for example, predicted powered flight was 50 years down the road, then achieved it within two years of that prediction.
“Rapid and unpredictable evolution of AI capabilities presents a significant challenge for preventing accidents. After all, it is difficult to control something if we don’t even know what it can do or how far it may exceed our expectations,” the researchers explained.
The researchers suggest that organizations establish better cultures and structures to reduce such risks, such as through “internal and external audits, multiple layers of defense against risks, and military-grade information security.”
Rogue AIs
Artificial Intelligence words are seen in this illustration taken March 31, 2023. (REUTERS/Dado Ruvic/Illustration)
One of the most common concerns about artificial intelligence since the technology’s recent proliferation is that humans could lose control as computers surpass human intelligence.
AI ‘KILL SWITCH’ WILL MAKE HUMANITY LESS SAFE, COULD SPAWN ‘HOSTILE’ SUPERINTELLIGENCE: AI FOUNDATION
“If an AI system is more intelligent than we are, and if we are unable to steer it in a beneficial direction, this would constitute a loss of control that could have severe consequences. AI control is a more technical problem than those presented in the previous sections,” the researchers wrote.
Humans could lose control through “proxy gaming,” in which humans give an AI system an approximate goal that “initially seems to correlate with the ideal goal,” but the AI systems “end up exploiting this proxy in ways that diverge from the idealized goal or even lead to negative outcomes.”
Researchers cited an example from the Soviet Union when authorities began measuring nail factories’ performances based on how many nails a factory was able to produce. To exceed or meet expectations, factories began mass producing tiny nails that were essentially useless due to their size.
“The authorities tried to remedy the situation by shifting focus to the weight of nails produced. Yet, soon after, the factories began to produce giant nails that were just as useless, but gave them a good score on paper. In both cases, the factories learned to game the proxy goal they were given, while completely failing to fulfill their intended purpose,” the researchers explained.
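The nail-factory story is the textbook illustration of proxy gaming: optimize the measurement instead of the goal, and the measurement stops meaning anything. As a rough, hypothetical sketch in Python (the nail lengths, material budget and functions below are invented for illustration and are not drawn from the CAIS paper), a toy optimizer that maximizes a count proxy ends up producing nails that are useless for the intended purpose:

# Toy illustration of proxy gaming: maximizing a proxy metric (nail count)
# diverges from the intended goal (useful nails). Purely hypothetical.

def useful_nails(length_cm: float, budget_cm: float) -> int:
    """Intended goal: only nails between 4 and 10 cm count as useful."""
    if not 4 <= length_cm <= 10:
        return 0
    return int(budget_cm // length_cm)

def proxy_count(length_cm: float, budget_cm: float) -> int:
    """Proxy metric: total number of nails produced, regardless of size."""
    return int(budget_cm // length_cm)

budget = 1000.0  # centimeters of steel available
candidate_lengths = [0.5, 1, 2, 4, 6, 10, 50, 100]

best_for_proxy = max(candidate_lengths, key=lambda L: proxy_count(L, budget))
best_for_goal = max(candidate_lengths, key=lambda L: useful_nails(L, budget))

print(f"Proxy optimizer picks {best_for_proxy} cm nails: "
      f"{proxy_count(best_for_proxy, budget)} produced, "
      f"{useful_nails(best_for_proxy, budget)} useful.")
print(f"Goal optimizer picks {best_for_goal} cm nails: "
      f"{useful_nails(best_for_goal, budget)} useful.")

Run as written, the proxy optimizer chooses the tiniest nails and produces none that are useful, while the goal optimizer chooses a usable length; the Soviet factories gamed their metrics in exactly this way.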
The researchers suggest that companies avoid deploying AI systems with open-ended goals like “make as much money as possible,” and that they support AI safety research that can do the in-the-weeds work needed to prevent catastrophes.
“These dangers warrant serious concern. Currently, very few people are working on AI risk reduction. We do not yet know how to control highly advanced AI systems, and existing control methods are already proving inadequate… As AI capabilities continue to grow at an unprecedented rate, they could surpass human intelligence in nearly all respects relatively soon, creating a pressing need to manage the potential risks,” the researchers wrote in their conclusion.
There are, however, “many courses of action we can take to substantially reduce these risks,” as outlined in the report.
First AI-generated drug enters human clinical trials, targeting chronic lung disease patients
The first-ever drug generated by artificial intelligence has entered Phase 2 clinical trials, with the first dose successfully administered to a human, Insilico Medicine announced yesterday.
The drug, currently referred to as INS018_055, is being tested to treat idiopathic pulmonary fibrosis (IPF), a rare, progressive type of chronic lung disease.
The 12-week trial will include participants diagnosed with IPF.
“This drug, which will be given orally, will undergo the same rigorous testing to ensure its effectiveness and safety, like traditionally discovered drugs, but the process of its discovery and design are incredibly new,” said Insilico Medicine’s CEO Alex Zhavoronkov, PhD, in a statement to Fox News Digital.
FIRST NEW ‘QUIT-SMOKING’ DRUG IN 20 YEARS SHOWS PROMISING RESULTS IN US TRIAL: ‘HOPE AND EXCITEMENT’
“However, with the latest advances in artificial intelligence, it was developed much faster than traditional drugs.”
How AI is transforming drug discovery
For any new drug, there are four steps, explained Zhavoronkov, who is based in Dubai.
“First, scientists have to find a ‘target,’ a biological mechanism that is driving the disease, usually because it is not functioning as intended,” he said.

Insilico Medicine’s CEO Alex Zhavoronkov, PhD (left), is pictured in the company’s AI-run robotics lab in Suzhou, China, which Insilico opened in January 2023. (Insilico Medicine)
“Second, they need to create a new drug for that target, similar to a puzzle piece, that would block the progression of the disease without harming the patient.”
The third step is to conduct studies — first in animals, then in clinical trials in healthy human volunteers, and finally in patients.
RESEARCHERS USE AI TO UNDERSTAND ALZHEIMER’S DISEASE, IDENTIFY DRUG TARGETS
“If those tests show positive results in helping patients, the drug reaches its fourth and final step — approval by the regulatory agencies for use as a treatment for that disease,” said Zhavoronkov.
In the traditional process, he said, scientists find targets by combing through scientific literature and public health databases to look for pathways or genes linked to diseases.

CEO Alex Zhavoronkov, PhD (left), in the company’s AI-run robotics lab in Suzhou, China. “AI allows us to analyze massive quantities of data and find connections that human scientists might miss,” he said, “and then ‘imagine’ entirely new molecules that can be turned into drugs.” (Insilico Medicine)
“AI allows us to analyze massive quantities of data and find connections that human scientists might miss, and then ‘imagine’ entirely new molecules that can be turned into drugs,” Zhavoronkov said.
In this case, Insilico used AI both to discover a new target for IPF and then to generate a new molecule that could act on that target.
AI TECH AIMS TO HELP PATIENTS CATCH DISEASE EARLY, EVEN ‘REVERSE THEIR BIOLOGICAL AGE’
The company uses a program called PandaOmics to detect disease-causing targets by analyzing scientific data from clinical trials and public databases.
Once the target was discovered, researchers entered it into Insilico’s other tool, Chemistry42, which uses generative AI to design new molecules.

The first drug generated by artificial intelligence has entered Phase 2 clinical trials, with the first dose successfully administered to a human, Insilico Medicine announced. (Insilico Medicine)
“Essentially, our scientists provided Chemistry42 with the specific characteristics they were looking for and the system generated a series of possible molecules, ranked based on their likelihood of success,” Zhavoronkov said.
The chosen molecule, INS018_055, is so named because it was the 55th molecule in the series and showed the most promising activity, he said.
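As a rough illustration of the “generate candidates, then rank them by likelihood of success” pattern Zhavoronkov describes, a toy Python sketch might look like the following; the molecule names, properties, weights and scoring function are invented for illustration and are not Insilico’s actual Chemistry42 software.

# Hypothetical sketch of ranking generated candidate molecules by a
# predicted-success score. Not Insilico's software; all values are invented.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    potency: float           # predicted activity against the target (0-1)
    toxicity_risk: float     # predicted risk of harming the patient (0-1)
    synthesizability: float  # how easily chemists could make it (0-1)

def score(c: Candidate) -> float:
    # Weighted combination of the desired characteristics; weights are arbitrary.
    return 0.5 * c.potency + 0.3 * c.synthesizability - 0.2 * c.toxicity_risk

candidates = [
    Candidate("molecule_001", potency=0.62, toxicity_risk=0.40, synthesizability=0.80),
    Candidate("molecule_042", potency=0.81, toxicity_risk=0.35, synthesizability=0.55),
    Candidate("molecule_055", potency=0.88, toxicity_risk=0.15, synthesizability=0.70),
]

# Rank candidates from most to least promising, as the article describes.
for c in sorted(candidates, key=score, reverse=True):
    print(f"{c.name}: score={score(c):.2f}")

In a real pipeline the scores would come from trained models and laboratory assays rather than hand-picked numbers, but the ranking step works the same way.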
AI-DISCOVERED DRUG SHOWS ‘ENORMOUS POTENTIAL’ TO TREAT SCHIZOPHRENIA: ‘REAL NEED FOR BETTER TREATMENT’
The current treatments for idiopathic pulmonary fibrosis are pirfenidone and nintedanib.
While these drugs may provide some relief or slow the worsening of symptoms, they do not reverse the damage or stop progression, Zhavoronkov said.

The Insilico team is hopeful the data from this newly launched clinical trial will confirm their drug’s safety and effectiveness. (Insilico Medicine)
They also have unpleasant side effects, most notably nausea, diarrhea, weight loss and loss of appetite.
“There are very few options for people with this terrible condition, and the prognosis is poor — most will die within two to five years of diagnosis,” Zhavoronkov explained.
“Our initial studies have indicated that INS018_055 has the potential to address some of the limitations of current therapies.”
Next steps
The Insilico team is hopeful the data from this newly launched clinical trial will confirm the drug’s safety and effectiveness.
“If our Phase IIa study is successful, the drug will then go to Phase IIb with a larger cohort of participants,” said Hong Kong-based Sujata Rao, M.D., Insilico’s chief medical officer, in a statement to Fox News Digital.
During Phase IIb, the primary objective will be to determine whether there is significant response to the drug, Rao said.

In this case, Insilico used AI to discover a new target for IPF — and then to generate a new molecule that could act on that target. (Insilico Medicine)
“Then, the drug will go on to be evaluated in a much larger group of patients — typically hundreds — in Phase III studies to confirm the safety and effectiveness before it can be approved by the FDA as a new treatment for patients with that condition,” Rao explained.
One of the biggest challenges with these trials is recruiting patients, Rao said, particularly for a rare disease like idiopathic pulmonary fibrosis.
“Patients need to fulfill certain criteria in order to be considered for trial enrollment,” Rao noted.
Despite the challenges, Rao said the research team is optimistic that this drug will be ready to go to market — and reach the patients who may benefit from it — in the next few years.
Melissa Rudy is health editor and a member of the lifestyle team at Fox News Digital.
June 24, 2023
Adele Pauses Concert to Survey Audience on Titanic Sub After Tragedy

On June 18, 2023, the deep-sea submersible Titan, operated by the U.S.-based company OceanGate Expeditions and carrying five people on a voyage to the wreck of the Titanic, was declared missing. Following a five-day search, the U.S. Coast Guard announced at a June 22 press conference that the vessel had suffered a “catastrophic implosion” that killed all five passengers on board.
Pakistani-born businessman Shahzada Dawood and his 19-year-old son Suleman Dawood, both British citizens, were also among the victims.
Their family is one of the wealthiest in Pakistan, with Shahzada Dawood serving as the vice chairman of Engro Corporation, per The New York Times. His son was studying at the University of Strathclyde in Glasgow, Scotland.
Shahzada’s sister Azmeh Dawood told NBC News that Suleman had expressed reluctance about going on the voyage, informing a relative that he “wasn’t very up for it” and felt “terrified” about the trip to explore the wreckage of the Titanic, but ultimately went to please his father, a Titanic fan, for Father’s Day.
The Dawood Foundation mourned their deaths in a statement to the website, saying, “It is with profound grief that we announce the passing of Shahzada and Suleman Dawood. Our beloved sons were aboard OceanGate’s Titan submersible that perished underwater. Please continue to keep the departed souls and our family in your prayers during this difficult period of mourning.”
Foo Fighters’ Dave Grohl’s daughter Violet, 17, surprises fans with special Glastonbury Festival performance
A family affair! Dave Grohl brought his eldest daughter Violet out on stage at the 2023 Glastonbury Festival on Friday, June 23, 2023, to sing one of the Foo Fighters’ new tracks from their latest album, But Here We Are.
“My favorite singer in the world,” the frontman told the cheering crowds as his 17-year-old daughter walked out on stage. “This is a song I wrote for my mother, Violet’s grandmother. This is ‘Show Me How.’”

He later yelled: “That’s my girl!” as they finished performing the song, and quipped: “I love it when you’re on stage with your daughter and you hit a bad note.”
The band were a surprise act, although many fans had previously guessed that The Churnups – who had an afternoon slot on the Pyramid Stage on day one – were the iconic rock band.
“All right [expletive], let’s dance!” Dave told the crowds as they launched directly into ‘All My Life’ for their hour-long set. “We only have one hour, so we’re going to try and fit in as many songs as we can,” he said, before joking: “You guys knew it was us this whole time. You knew it was us, right?”

2023 marks the 25th anniversary of the band’s first appearance at the British festival; in 1998 they played on the same day and in the same time slot.
The performance was also their first in the UK since they returned to the stage on May 26 and were joined by late drummer Taylor Hawkins’ 17-year-old son Shane.
“How about we do a song with one of my favorite drummers in the world? Ladies and gentlemen, Shane Hawkins!” Dave said to the crowds as the band, and Shane, performed ‘I’ll Stick Around’.

Speaking of the decision to return to live performances after the tragic death of their drummer Taylor Hawkins in 2022, Dave shared with the crowd at the Boston Calling music festival: “I’m gonna do it for Taylor’s family, and I’m gonna do it for Taylor, because we used to sing it together.”
The performance was their second with full-time new drummer Josh Freese, whose addition to the band was confirmed in a new video that saw friends including Mötley Crüe’s Tommy Lee, Tool’s Danny Carey, and Red Hot Chili Peppers’ Chad Smith all make appearances before Josh interrupts their chit-chat with the request to play some songs.
Taylor died in March 2022 at the age of 50. Taking to the band’s official Twitter account, they wrote: “The Foo Fighters family is devastated by the tragic and untimely loss of our beloved Taylor Hawkins. His musical spirit and infectious laughter will live on with all of us forever. Our hearts go out to his wife, children and family, and we ask that their privacy be treated with the utmost respect in this unimaginably difficult time.”
Amber Heard puts on flirty display with dashing co-star at Taormina Film Festival in Italy
Amber Heard is back! The Aquaman actress was pictured with her dashing co-star as she attended the 69th Taormina Film Festival in Italy to support the premiere of her film In The Fire.
Amber was pictured leaving the premiere with cast and crew including co-star Luca Calvini, with the pair giggling and keeping a close hold on each other. The 37-year-old wore a gorgeous calf-length wrap skirt with a matching tee and high black stilettos – which she was later caught on camera removing as she made her way down the cobblestone streets.

Luca wore a pale green suit with a casual buttoned-up tee as in other pictures Amber – who wore a bold red lip and had her blonde hair in loose waves – kept a tight hold of his arm.
The appearance marks Amber’s first official promotional appearance in any capacity since the June 1, 2022 verdict of a headline-making defamation trial, in which her ex-husband Johnny Depp largely prevailed.
Johnny was awarded more than $10 million in damages when a jury determined that she had defamed the actor in her Washington Post op-ed about domestic violence. She appealed the verdict and, at the end of 2022, decided to settle out of court. She paid him $1 million, and the money was recently split equally among the Make-A-Film Foundation, The Painted Turtle, Red Feather, Tetiaroa Society, and Amazonia Fund Alliance, with each receiving $200,000.

The Make-A-Film Foundation grants film wishes to children and teenagers who have serious or life-threatening medical conditions, helping them to create short film legacies by teaming them with noted actors, writers and directors. The Painted Turtle, in Santa Monica, provides a year-round, life-changing environment and authentic camp experience for children with chronic and life-threatening illnesses.
Red Feather is a development group that offers housing assistance for Native American communities, while the Tetiaroa Society aims to ensure island and coastal communities “have a future as rich as their past”. The Amazonia Fund Alliance is an international fundraising program for projects of preservation, reforestation, and help to indigenous tribes in the Amazon Rainforest.

In The Fire, directed by Conor Allyn, will be Amber’s first film since Zack Snyder’s Justice League in 2021; she appears alongside co-star Eduardo Noriega.
The flick is a supernatural thriller set in 1899 Colombia, and Amber plays an American psychiatrist who’s arrived to psychoanalyze an emotionally disturbed young boy — who locals believe is haunted by otherworldly forces, if not the devil.
Amber, who has nearly 50 films to her credit, has another film in post-production as well: Aquaman and the Lost Kingdom, in which she reprises her role as Mera alongside such stars as Jason Momoa, Nicole Kidman and Ben Affleck. The big-budget DC Comics sequel is set to hit theaters this December.