Taking too long? Close loading screen.
Connect with us

World

The case for taking AI seriously as a threat to humanity

Published

on

Stephen Hawking has said, “The development of full artificial intelligence could spell the end of the human race.” Elon Musk claims that AI is humanity’s “biggest existential threat.”

That might have people asking: Wait, what? But these grand worries are rooted in research. Along with Hawking and Musk, prominent figures at Oxford and UC Berkeley and many of the researchers working in AI today believe that advanced AI systems, if deployed carelessly, could permanently cut off human civilization from a good future.

This concern has been raised since the dawn of computing. But it has come into particular focus in recent years, as advances in machine-learning techniques have given us a more concrete understanding of what we can do with AI, what AI can do for (and to) us, and how much we still don’t know.

There are also skeptics. Some of them think advanced AI is so distant that there’s no point in thinking about it now. Others are worried that excessive hype about the power of their field might kill it prematurely. And even among the people who broadly agree that AI poses unique dangers, there are varying takes on what steps make the most sense today.

The conversation about AI is full of confusion, misinformation, and people talking past each other — in large part because we use the word “AI” to refer to so many things. So here’s the big picture on how artificial intelligence might pose a catastrophic danger, in nine questions:

1) What is AI?

Artificial intelligence is the effort to create computers capable of intelligent behavior. It is a broad catchall term, used to refer to everything from Siri to IBM’s Watson to powerful technologies we have yet to invent.

Some researchers distinguish between “narrow AI” — computer systems that are better than humans in some specific, well-defined field, like playing chess or generating images or diagnosing cancer — and “general AI,” systems that can surpass human capabilities in many domains. We don’t have general AI yet, but we’re starting to get a better sense of the challenges it will pose.

Narrow AI has seen extraordinary progress over the past few years. AI systems have improved dramatically at translation, at games like chess and Go, at important research biology questions like predicting how proteins fold, and at generating images. AI systems determine what you’ll see in a Google search or in your Facebook Newsfeed. They compose music and write articles that, at a glance, read as if a human wrote them. They play strategy games. They are being developed to improve drone targeting and detect missiles.

But narrow AI is getting less narrow. Once, we made progress in AI by painstakingly teaching computer systems specific concepts. To do computer vision — allowing a computer to identify things in pictures and video — researchers wrote algorithms for detecting edges. To play chess, they programmed in heuristics about chess. To do natural language processing (speech recognition, transcription, translation, etc.), they drew on the field of linguistics.

But recently, we’ve gotten better at creating computer systems that have generalized learning capabilities. Instead of mathematically describing detailed features of a problem, we let the computer system learn that by itself. While once we treated computer vision as a completely different problem from natural language processing or platform game playing, now we can solve all three problems with the same approaches.

And as computers get good enough at narrow AI tasks, they start to exhibit more general capabilities. For example, OpenAI’s famous GPT-series of text AIs is, in one sense, the narrowest of narrow AIs — it just predicts what the next word will be in a text, based on the previous words and its corpus of human language. And yet, it can now identify questions as reasonable or unreasonable and discuss the physical world (for example, answering questions about which objects are larger or which steps in a process must come first). In order to be very good at the narrow task of text prediction, an AI system will eventually develop abilities that are not narrow at all.

Our AI progress so far has enabled enormous advances — and has also raised urgent ethical questions. When you train a computer system to predict which convicted felons will reoffend, you’re using inputs from a criminal justice system biased against black people and low-income people — and so its outputs will likely be biased against black and low-income people too. Making websites more addictive can be great for your revenue but bad for your users. Releasing a program that writes convincing fake reviews or fake news might make those widespread, making it harder for the truth to get out.

Rosie Campbell at UC Berkeley’s Center for Human-Compatible AI argues that these are examples, writ small, of the big worry experts have about general AI in the future. The difficulties we’re wrestling with today with narrow AI don’t come from the systems turning on us or wanting revenge or considering us inferior. Rather, they come from the disconnect between what we tell our systems to do and what we actually want them to do.

For example, we tell a system to run up a high score in a video game. We want it to play the game fairly and learn game skills — but if it instead has the chance to directly hack the scoring system, it will do that. It’s doing great by the metric we gave it. But we aren’t getting what we wanted.

In other words, our problems come from the systems being really good at achieving the goal they learned to pursue; it’s just that the goal they learned in their training environment isn’t the outcome we actually wanted. And we’re building systems we don’t understand, which means we can’t always anticipate their behavior.

Right now the harm is limited because the systems are so limited. But it’s a pattern that could have even graver consequences for human beings in the future as AI systems become more advanced.

2) Is it even possible to make a computer as smart as a person?

Yes, though current AI systems aren’t nearly that smart.

One popular adage about AI is “everything that’s easy is hard, and everything that’s hard is easy.” Doing complex calculations in the blink of an eye? Easy. Looking at a picture and telling you whether it’s a dog? Hard (until very recently).

Lots of things humans do are still outside AI’s grasp. For instance, it’s hard to design an AI system that explores an unfamiliar environment, that can navigate its way from, say, the entryway of a building it’s never been in before up the stairs to a specific person’s desk. We are just beginning to learn how to design an AI system that reads a book and retains an understanding of the concepts.

The paradigm that has driven many of the biggest breakthroughs in AI recently is called “deep learning.” Deep learning systems can do some astonishing stuff: beat games we thought humans might never lose, invent compelling and realistic photographs, solve open problems in molecular biology.

These breakthroughs have made some researchers conclude it’s time to start thinking about the dangers of more powerful systems, but skeptics remain. The field’s pessimists argue that programs still need an extraordinary pool of structured data to learn from, require carefully chosen parameters, or work only in environments designed to avoid the problems we don’t yet know how to solve. They point to self-driving cars, which are still mediocre under the best conditions despite the billions that have been poured into making them work.

It’s rare, though, to find a top researcher in AI who thinks that general AI is impossible. Instead, the field’s luminaries tend to say that it will happen someday — but probably a day that’s a long way off.

Other researchers argue that the day may not be so distant after all.

That’s because for almost all the history of AI, we’ve been held back in large part by not having enough computing power to realize our ideas fully. Many of the breakthroughs of recent years — AI systems that learned how to play strategy games, generate fake photos of celebrities, fold proteins, and compete in massive multiplayer online strategy games — have happened because that’s no longer true. Lots of algorithms that seemed not to work at all turned out to work quite well once we could run them with more computing power.

And the cost of a unit of computing time keeps falling. Progress in computing speed has slowed recently, but the cost of computing power is still estimated to be falling by a factor of 10 every 10 years. Through most of its history, AI has had access to less computing power than the human brain. That’s changing. By most estimates, we’re now approaching the era when AI systems can have the computing resources that we humans enjoy.

And deep learning, unlike previous approaches to AI, is highly suited to developing general capabilities.

“If you go back in history,” top AI researcher and OpenAI cofounder Ilya Sutskever told me, “they made a lot of cool demos with little symbolic AI. They could never scale them up — they were never able to get them to solve non-toy problems. Now with deep learning the situation is reversed. … Not only is [the AI we’re developing] general, it’s also competent — if you want to get the best results on many hard problems, you must use deep learning. And it’s scalable.”

In other words, we didn’t need to worry about general AI back when winning at chess required entirely different techniques than winning at Go. But now, the same approach produces fake news or music depending on what training data it is fed. And as far as we can discover, the programs just keep getting better at what they do when they’re allowed more computation time — we haven’t discovered a limit to how good they can get. Deep learning approaches to most problems blew past all other approaches when deep learning was first discovered.

Furthermore, breakthroughs in a field can often surprise even other researchers in the field. “Some have argued that there is no conceivable risk to humanity [from AI] for centuries to come,” wrote UC Berkeley professor Stuart Russell, “perhaps forgetting that the interval of time between Rutherford’s confident assertion that atomic energy would never be feasibly extracted and Szilárd’s invention of the neutron-induced nuclear chain reaction was less than twenty-four hours.”

There’s another consideration. Imagine an AI that is inferior to humans at everything, with one exception: It’s a competent engineer that can build AI systems very effectively. Machine learning engineers who work on automating jobs in other fields often observe, humorously, that in some respects, their own field looks like one where much of the work — the tedious tuning of parameters — could be automated.

If we can design such a system, then we can use its result — a better engineering AI — to build another, even better AI. This is the mind-bending scenario experts call “recursive self-improvement,” where gains in AI capabilities enable more gains in AI capabilities, allowing a system that started out behind us to rapidly end up with abilities well beyond what we anticipated.

This is a possibility that has been anticipated since the first computers. I.J. Good, a colleague of Alan Turing who worked at the Bletchley Park codebreaking operation during World War II and helped build the first computers afterward, may have been the first to spell it out, back in 1965: “An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

3) How exactly could AI wipe us out?

It’s immediately clear how nuclear bombs will kill us. No one working on mitigating nuclear risk has to start by explaining why it’d be a bad thing if we had a nuclear war.

The case that AI could pose an existential risk to humanity is more complicated and harder to grasp. So many of the people who are working to build safe AI systems have to start by explaining why AI systems, by default, are dangerous.

Javier Zarracina/Vox

The idea that AI can become a danger is rooted in the fact that AI systems pursue their goals, whether or not those goals are what we really intended — and whether or not we’re in the way. “You’re probably not an evil ant-hater who steps on ants out of malice,” Stephen Hawking wrote, “but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”

Here’s one scenario that keeps experts up at night: We develop a sophisticated AI system with the goal of, say, estimating some number with high confidence. The AI realizes it can achieve more confidence in its calculation if it uses all the world’s computing hardware, and it realizes that releasing a biological superweapon to wipe out humanity would allow it free use of all the hardware. Having exterminated humanity, it then calculates the number with higher confidence.

It is easy to design an AI that averts that specific pitfall. But there are lots of ways that unleashing powerful computer systems will have unexpected and potentially devastating effects, and avoiding all of them is a much harder problem than avoiding any specific one.

Victoria Krakovna, an AI researcher at DeepMind (now a division of Alphabet, Google’s parent company), compiled a list of examples of “specification gaming”: the computer doing what we told it to do but not what we wanted it to do. For example, we tried to teach AI organisms in a simulation to jump, but we did it by teaching them to measure how far their “feet” rose above the ground. Instead of jumping, they learned to grow into tall vertical poles and do flips — they excelled at what we were measuring, but they didn’t do what we wanted them to do.

An AI playing the Atari exploration game Montezuma’s Revenge found a bug that let it force a key in the game to reappear, thereby allowing it to earn a higher score by exploiting the glitch. An AI playing a different game realized it could get more points by falsely inserting its name as the owner of high-value items.

Sometimes, the researchers didn’t even know how their AI system cheated: “the agent discovers an in-game bug. … For a reason unknown to us, the game does not advance to the second round but the platforms start to blink and the agent quickly gains a huge amount of points (close to 1 million for our episode time limit).”

What these examples make clear is that in any system that might have bugs or unintended behavior or behavior humans don’t fully understand, a sufficiently powerful AI system might act unpredictably — pursuing its goals through an avenue that isn’t the one we expected.

In his 2009 paper “The Basic AI Drives,” Steve Omohundro, who has worked as a computer science professor at the University of Illinois Urbana-Champaign and as the president of Possibility Research, argues that almost any AI system will predictably try to accumulate more resources, become more efficient, and resist being turned off or modified: “These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.”

His argument goes like this: Because AIs have goals, they’ll be motivated to take actions that they can predict will advance their goals. An AI playing a chess game will be motivated to take an opponent’s piece and advance the board to a state that looks more winnable.

But the same AI, if it sees a way to improve its own chess evaluation algorithm so it can evaluate potential moves faster, will do that too, for the same reason: It’s just another step that advances its goal.

If the AI sees a way to harness more computing power so it can consider more moves in the time available, it will do that. And if the AI detects that someone is trying to turn off its computer mid-game, and it has a way to disrupt that, it’ll do it. It’s not that we would instruct the AI to do things like that; it’s that whatever goal a system has, actions like these will often be part of the best path to achieve that goal.

That means that any goal, even innocuous ones like playing chess or generating advertisements that get lots of clicks online, could produce unintended results if the agent pursuing it has enough intelligence and optimization power to identify weird, unexpected routes to achieve its goals.

Goal-driven systems won’t wake up one day with hostility to humans lurking in their hearts. But they will take actions that they predict will help them achieve their goal — even if we’d find those actions problematic, even horrifying. They’ll work to preserve themselves, accumulate more resources, and become more efficient. They already do that, but it takes the form of weird glitches in games. As they grow more sophisticated, scientists like Omohundro predict more adversarial behavior.

4) When did scientists first start worrying about AI risk?

Scientists have been thinking about the potential of artificial intelligence since the early days of computers. In the famous paper where he put forth the Turing test for determining if an artificial system is truly “intelligent,” Alan Turing wrote:

Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. … There would be plenty to do in trying to keep one’s intelligence up to the standards set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control.

I.J. Good worked closely with Turing and reached the same conclusions, according to his assistant, Leslie Pendleton. In an excerpt from unpublished notes Good wrote shortly before he died in 2009, he writes about himself in third person and notes a disagreement with his younger self — while as a younger man, he thought powerful AIs might be helpful to us, the older Good expected AI to annihilate us.

[The paper] “Speculations Concerning the First Ultra-intelligent Machine” (1965) … began: “The survival of man depends on the early construction of an ultra-intelligent machine.” Those were his words during the Cold War, and he now suspects that “survival” should be replaced by “extinction.” He thinks that, because of international competition, we cannot prevent the machines from taking over. He thinks we are lemmings. He said also that “probably Man will construct the deus ex machina in his own image.”

In the 21st century, with computers quickly establishing themselves as a transformative force in our world, younger researchers started expressing similar worries.

Nick Bostrom is a professor at the University of Oxford, the director of the Future of Humanity Institute, and the director of the Governance of Artificial Intelligence Program. He researches risks to humanity, both in the abstract — asking questions like why we seem to be alone in the universe — and in concrete terms, analyzing the technological advances on the table and whether they endanger us. AI, he concluded, endangers us.

In 2014, he wrote a book explaining the risks AI poses and the necessity of getting it right the first time, concluding, “once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.”

Across the world, others have reached the same conclusion. Bostrom co-authored a paper on the ethics of artificial intelligence with Eliezer Yudkowsky, founder of and research fellow at the Berkeley Machine Intelligence Research Institute (MIRI), an organization that works on better formal characterizations of the AI safety problem.

Yudkowsky started his career in AI by worriedly poking holes in others’ proposals for how to make AI systems safe, and has spent most of it working to persuade his peers that AI systems will, by default, be unaligned with human values (not necessarily opposed to but indifferent to human morality) — and that it’ll be a challenging technical problem to prevent that outcome.

Increasingly, researchers realized that there’d be challenges that hadn’t been present with AI systems when they were simple. “‘Side effects’ are much more likely to occur in a complex environment, and an agent may need to be quite sophisticated to hack its reward function in a dangerous way. This may explain why these problems have received so little study in the past, while also suggesting their importance in the future,” concluded a 2016 research paper on problems in AI safety.

Bostrom’s book Superintelligence was compelling to many people, but there were skeptics. “No, experts don’t think superintelligent AI is a threat to humanity,” argued an op-ed by Oren Etzioni, a professor of computer science at the University of Washington and CEO of the Allan Institute for Artificial Intelligence. “Yes, we are worried about the existential risk of artificial intelligence,” replied a dueling op-ed by Stuart Russell, an AI pioneer and UC Berkeley professor, and Allan DaFoe, a senior research fellow at Oxford and director of the Governance of AI program there.

It’s tempting to conclude that there’s a pitched battle between AI-risk skeptics and AI-risk believers. In reality, they might not disagree as profoundly as you would think.

Facebook’s chief AI scientist Yann LeCun, for example, is a vocal voice on the skeptical side. But while he argues we shouldn’t fear AI, he still believes we ought to have people working on, and thinking about, AI safety. “Even if the risk of an A.I. uprising is very unlikely and very far in the future, we still need to think about it, design precautionary measures, and establish guidelines,” he writes.

That’s not to say there’s an expert consensus here — far from it. There is substantial disagreement about which approaches seem likeliest to bring us to general AI, which approaches seem likeliest to bring us to safe general AI, and how soon we need to worry about any of this.

Many experts are wary that others are overselling their field, and dooming it when the hype runs out. But that disagreement shouldn’t obscure a growing common ground; these are possibilities worth thinking about, investing in, and researching, so we have guidelines when the moment comes that they’re needed.

5) Why couldn’t we just shut off a computer if it got too powerful?

A smart AI could predict that we’d want to turn it off if it made us nervous. So it would try hard not to make us nervous, because doing so wouldn’t help it accomplish its goals. If asked what its intentions are, or what it’s working on, it would attempt to evaluate which responses are least likely to get it shut off, and answer with those. If it wasn’t competent enough to do that, it might pretend to be even dumber than it was — anticipating that researchers would give it more time, computing resources, and training data.

So we might not know when it’s the right moment to shut off a computer.

We also might do things that make it impossible to shut off the computer later, even if we realize eventually that it’s a good idea. For example, many AI systems could have access to the internet, which is a rich source of training data and which they’d need if they’re to make money for their creators (for example, on the stock market, where more than half of trading is done by fast-reacting AI algorithms).

But with internet access, an AI could email copies of itself somewhere where they’ll be downloaded and read, or hack vulnerable systems elsewhere. Shutting off any one computer wouldn’t help.

In that case, isn’t it a terrible idea to let any AI system — even one which doesn’t seem powerful enough to be dangerous — have access to the internet? Probably. But that doesn’t mean it won’t continue to happen. AI researchers want to make their AI systems more capable — that’s what makes them more scientifically interesting and more profitable. It’s not clear that the many incentives to make your systems powerful and use them online will suddenly change once systems become powerful enough to be dangerous.

So far, we’ve mostly talked about the technical challenges of AI. But from here forward, it’s necessary to veer more into the politics. Since AI systems enable incredible things, there will be lots of different actors working on such systems.

There will likely be startups, established tech companies like Google (Alphabet’s recently acquired startup DeepMind is frequently mentioned as an AI frontrunner), and organizations like Elon-Musk-founded OpenAI, which recently transitioned to a hybrid for-profit/non-profit structure.

There will be governments — Russia’s Vladimir Putin has expressed an interest in AI, and China has made big investments. Some of them will presumably be cautious and employ safety measures, including keeping their AI off the internet. But in a scenario like this one, we’re at the mercy of the least cautious actor, whoever they may be.

That’s part of what makes AI hard: Even if we know how to take appropriate precautions (and right now we don’t), we also need to figure out how to ensure that all would-be AI programmers are motivated to take those precautions and have the tools to implement them correctly.

6) What are we doing right now to avoid an AI apocalypse?

“It could be said that public policy on AGI [artificial general intelligence] does not exist,” concluded a paper in 2018 reviewing the state of the field.

The truth is that technical work on promising approaches is getting done, but there’s shockingly little in the way of policy planning, international collaboration, or public-private partnerships. In fact, much of the work is being done by only a handful of organizations, and it has been estimated that around 50 people in the world work full time on technical AI safety.

Bostrom’s Future of Humanity Institute has published a research agenda for AI governance: the study of “devising global norms, policies, and institutions to best ensure the beneficial development and use of advanced AI.” It has published research on the risk of malicious uses of AI, on the context of China’s AI strategy, and on artificial intelligence and international security.

The longest-established organization working on technical AI safety is the Machine Intelligence Research Institute (MIRI), which prioritizes research into designing highly reliable agents — artificial intelligence programs whose behavior we can predict well enough to be confident they’re safe. (Disclosure: MIRI is a nonprofit and I donated to its work in 2017-2019.)

The Elon Musk-founded OpenAI is a very new organization, less than three years old. But researchers there are active contributors to both AI safety and AI capabilities research. A research agenda in 2016 spelled out “concrete open technical problems relating to accident prevention in machine learning systems,” and researchers have since advanced some approaches to safe AI systems.

Alphabet’s DeepMind, a leader in this field, has a safety team and a technical research agenda outlined here. “Our intention is to ensure that AI systems of the future are not just ‘hopefully safe’ but robustly, verifiably safe ,” it concludes, outlining an approach with an emphasis on specification (designing goals well), robustness (designing systems that perform within safe limits under volatile conditions), and assurance (monitoring systems and understanding what they’re doing).

There are also lots of people working on more present-day AI ethics problems: algorithmic bias, robustness of modern machine-learning algorithms to small changes, and transparency and interpretability of neural nets, to name just a few. Some of that research could potentially be valuable for preventing destructive scenarios.

But on the whole, the state of the field is a little bit as if almost all climate change researchers were focused on managing the droughts, wildfires, and famines we’re already facing today, with only a tiny skeleton team dedicating to forecasting the future and 50 or so researchers who work full time on coming up with a plan to turn things around.

Not every organization with a major AI department has a safety team at all, and some of them have safety teams focused only on algorithmic fairness and not on the risks from advanced systems. The US government doesn’t have a department for AI.

The field still has lots of open questions — many of which might make AI look much scarier, or much less so — which no one has dug into in depth.

7) Is this really likelier to kill us all than, say, climate change?

It sometimes seems like we’re facing dangers from all angles in the 21st century. Both climate change and future AI developments are likely to be transformative forces acting on our world.

Our predictions about climate change are more confident, both for better and for worse. We have a clearer understanding of the risks the planet will face, and we can estimate the costs to human civilization. They are projected to be enormous, risking potentially hundreds of millions of lives. The ones who will suffer most will be low-income people in developing countries; the wealthy will find it easier to adapt. We also have a clearer understanding of the policies we need to enact to address climate change than we do with AI.

There’s intense disagreement in the field on timelines for critical advances in AI. While AI safety experts agree on many features of the safety problem, they’re still making the case to research teams in their own field, and they disagree on some of the details. There’s substantial disagreement on how badly it could go, and on how likely it is to go badly. There are only a few people who work full time on AI forecasting. One of the things current researchers are trying to nail down is their models and the reasons for the remaining disagreements about what safe approaches will look like.

Most experts in the AI field think it poses a much larger risk of total human extinction than climate change, since analysts of existential risks to humanity think that climate change, while catastrophic, is unlikely to lead to human extinction. But many others primarily emphasize our uncertainty — and emphasize that when we’re working rapidly toward powerful technology about which there are still many unanswered questions, the smart step is to start the research now.

8) Is there a possibility that AI can be benevolent?

AI safety researchers emphasize that we shouldn’t assume AI systems will be benevolent by default. They’ll have the goals that their training environment set them up for, and no doubt this will fail to encapsulate the whole of human values.

When the AI gets smarter, might it figure out morality by itself? Again, researchers emphasize that it won’t. It’s not really a matter of “figuring out” — the AI will understand just fine that humans actually value love and fulfillment and happiness, and not just the number associated with Google on the New York Stock Exchange. But the AI’s values will be built around whatever goal system it was initially built around, which means it won’t suddenly become aligned with human values if it wasn’t designed that way to start with.

Of course, we can build AI systems that are aligned with human values, or at least that humans can safely work with. That is ultimately what almost every organization with an artificial general intelligence division is trying to do. Success with AI could give us access to decades or centuries of technological innovation all at once.

“If we’re successful, we believe this will be one of the most important and widely beneficial scientific advances ever made,” writes the introduction to Alphabet’s DeepMind. “From climate change to the need for radically improved healthcare, too many problems suffer from painfully slow progress, their complexity overwhelming our ability to find solutions. With AI as a multiplier for human ingenuity, those solutions will come into reach.”

So, yes, AI can share our values — and transform our world for the good. We just need to solve a very hard engineering problem first.

9) I just really want to know: how worried should we be?

To people who think the worrying is premature and the risks overblown, AI safety is competing with other priorities that sound, well, a bit less sci-fi — and it’s not clear why AI should take precedence. To people who think the risks described are real and substantial, it’s outrageous that we’re dedicating so few resources to working on them.

While machine-learning researchers are right to be wary of hype, it’s also hard to avoid the fact that they’re accomplishing some impressive, surprising things using very generalizable techniques, and that it doesn’t seem that all the low-hanging fruit has been picked.

AI looks increasingly like a technology that will change the world when it arrives. Researchers across many major AI organizations tell us it will be like launching a rocket: something we have to get right before we hit “go.” So it seems urgent to get to work learning rocketry. No matter whether or not humanity should be afraid, we should definitely be doing our homework.


Help keep Vox free for all

Millions turn to Vox each month to understand what’s happening in the news, from the coronavirus crisis to a racial reckoning to what is, quite possibly, the most consequential presidential election of our lifetimes. Our mission has never been more vital than it is in this moment: to empower you through understanding. But our distinctive brand of explanatory journalism takes resources. Even when the economy and the news advertising market recovers, your support will be a critical part of sustaining our resource-intensive work. If you have already contributed, thank you. If you haven’t, please consider helping everyone make sense of an increasingly chaotic world: Contribute today from as little as $3.

Source

Continue Reading
Advertisement
Click to comment

Leave a Reply

Your email address will not be published. Required fields are marked *

World

The US just broke its record for the highest number of new coronavirus cases in a day

Published

on

The United States broke its record for the highest number of confirmed coronavirus cases reported in a single day on Friday, an alarming sign that what some epidemiologists are calling a “third wave” of infections is spreading at breakneck speed as winter approaches.

According to the New York Times, by the end of the day on Friday at least 85,085 cases were reported in states across the country — about 10,000 cases more than the previous same-day high on July 16.

Public health experts had long warned that uneven compliance with social distancing guidelines, inadequate contact tracing programs, and premature reopenings of indoor venues were creating conditions for a resurgence of virus transmission after its summer peak, and that is what appears to be happening now.

A bar graph showing the US case totals for each day, going back to March 3. A red trend line goes across the top of the bars, peaking first in early April at around 35,000 cases, falling to just over 20,000 in early June, spiking to around 70,000 in July, falling to around 40,000 in September, and rising again to about 60,000 in October. The Covid Tracking Project

The new case numbers also show that the geographic spread is wider than during past spikes. According to an internal report produced on Thursday for officials at the Department of Health and Human Services obtained by the Washington Post, more than 170 counties across 36 states have been designated rapidly rising hotspots. And 24 states have broken single-day records of new cases in the past two weeks, the Post reports.

Also concerning is that in the past month there has been a 40 percent rise in the number of people hospitalized for Covid-19 infections. Deaths have not surged so far, but epidemiologists have pointed out that there can be a significant time lag between a surge in cases and deaths tied to that surge.

“Today’s cases represent infections that probably happened a week or two ago,” Boston University epidemiologist Eleanor Murray told Vox’s Dylan Scott in July. “Today’s deaths represent cases that were diagnosed possibly up to a month ago, so infections that were up to six weeks ago or more.”

Saturday, President Donald Trump downplayed the record in new reported cases on Twitter, and incorrectly claimed that cases were up only because testing ability is up.

But public health experts have pointed to state-level policies on distancing and contact-tracing as a key driver of the current uptick. Moreover, the high rates at which coronavirus tests are coming back positive in many states — a key data point for estimating the true spread of the virus — and the surge in hospitalizations are signs that the new wave is not just a function of testing capacity. As Vox’s German Lopez has explained, a high positivity rate actually suggests that not enough tests are being done to track and contain spread in a given area.

Murray, the epidemiologist at Boston University, told the Washington Post that the wide geographic range of the new wave will make it difficult to move health care workers to hot spots. Previous spikes were concentrated in certain communities, allowing medical professionals from less affected areas to be moved to deal with outbreaks. But the breadth of the current outbreak could tax US health care capacity in a manner that has not been seen before.

And Murray also pointed out that this wave is more dangerous that the two that preceded it because it started from a higher point of infections.

“We are starting this wave much higher than either of the previous waves,” she told the Post. “And it will simply keep going up until people and officials decide to do something about it.”

Experts have warned about a third wave for a while

Medical professionals, epidemiologists and many public health officials have long pointed out the risk of a third wave.

As Vox’s German Lopez wrote in early October, experts warned that a third wave looked likely in light of the fact that the virus was never really suppressed nationally, and that premature reopening, encouraged most aggressively by Trump and Republican governors, would simply accelerate its spread:

Consider Florida. Last month, the state reopened bars and, more recently, restaurants, despite the high risk of these indoor spaces. After Florida previously opened bars, in June, experts said the establishments were largely to blame for the state’s massive Covid-19 outbreak in the summer. As Florida reopens now, it has roughly two to three times the number of Covid-19 cases that it had in early June, and its high test positivity rate suggests it’s still likely missing a lot of cases. The state is fanning its flames while its most recent fire is nowhere near extinguished.

This is, in effect, what much of the country is doing now as it rushes to reopens schools, particularly colleges and universities, and risky indoor spaces. Coupled with recent Labor Day celebrations, experts worry that’s already leading to a new increase in Covid-19 cases.

Experts have pointed out that Trump’s persistent agenda to downplay the dangers of the virus — and his suggestions that the news of a third wave is a media conspiracy designed to throw the election in Democrats’ favor — could intensify the problem as the virus is made into an increasingly partisan issue. The president has repeatedly failed to take responsibly for the US’ troubled pandemic response, including at the second presidential debate. He has instead blamed China and Democrats for the country’s problems, while leaving it to individual states to create plans for lower the rate of infection.

Some states have had more success in reducing infection than others, but none has managed to eliminate spread altogether. And more worrying still is the fact that cold weather and flu season have yet to fully settle in many states as winter approaches.

The good news is that we know how to counteract further spread.

“None of the ideas to prevent all of this are shocking or new,” Lopez recently wrote. “They’re all things people have heard before: More testing and contact tracing to isolate people who are infected, get their close contacts to quarantine, and deploy broader restrictions as necessary. More masking, including mandates in the 17 states that don’t have one. More careful, phased reopenings. More social distancing.”

Source

Continue Reading

World

‘A Disturbing Pattern’: ICE Detainees Were Pressured to Have Gynecological Surgery, Doctors Say

Published

on

A report drafted by a team of independent doctors and experts found a “disturbing pattern” of questionable gynecological surgical procedures performed on female detainees at an ICE detention center in Georgia. 

The medical professionals say they reviewed more than 3,200 pages of records from 19 women who “allege medical maltreatment during detention” at the Irwin County Detention Center in Ocilla, Georgia, which has emerged at the center of a political firestorm following complaints from women held at the facility. 

The report alleges a number of women were pressured to have “unnecessary surgery” without an adequate discussion about the risks, benefits, or alternatives.

“Our findings reveal a disturbing pattern that warrants further investigation: one in which many women either underwent abdominal surgery or were pressured to have a surgery that was not medically indicated and to which they did not consent,” the authors, including nine board-certified OB-GYNs affiliated with major academic medical centers and two nursing experts, wrote in the report. “None of the women appear to have received adequate informed consent.”

The report represents the most extensive examination of medical records among detainees at the facility to have emerged since a September whistleblower complaint alleged a pattern of “jarring medical neglect” and confusing medical care at Irwin. The report’s authors include doctors affiliated with Vanderbilt University, Northwestern University, and Baylor College of Medicine. The medical experts developed the report in coordination with lawyers representing detainees and a coalition of advocacy groups.

VICE News reviewed a copy of the report, which was drafted as a five-page executive summary, on Friday. The report was delivered to members of Congress on Thursday, but has not yet been publicly released. Its existence was first reported by the LA Times.

The document details accounts of women who were treated by a local gynecologist named Dr. Mahendra Amin, who has repeatedly denied wrongdoing. 

In a statement, an attorney for Amin noted that the report did not involve a complete review of all the relevant medical records, and called the doctors and nursing experts’ review “severely incomplete, at best.”

“Any serious medical professional would agree that one cannot possibly come to a conclusion regarding the appropriateness of a medical procedure without reviewing all of the relevant medical records, especially the records from the physician who performed the procedure and the hospital where the procedure was performed,” Amin’s attorney, Scott Grubman, wrote in the statement.

Amin is fully cooperating with official investigators and he “looks forward to the investigations clearing his good name and reputation,” Grubman said. 

The Irwin County Detention Center is run by the private prison company LaSalle Corrections and houses immigrants detained by U.S. Immigration and Customs Enforcement (ICE). 

A spokesperson for ICE declined to comment specifically on the report on Friday, citing an ongoing investigation by the Department of Homeland Security inspector general. LaSalle Corrections has denied wrongdoing in the past, and did not immediately respond to questions about the report from VICE News on Friday. 

‘A disturbing pattern’

The report says that reviewed records, which include sworn declarations and transcribed telephone interviews, suggested that Amin’s findings justifying surgery appear to be unsupported “by all other available sources of information.”

“There are indications that both Dr. Amin and the referring detention facility took advantage of the vulnerability of women in detention to pressure them to agree to overly aggressive, inappropriate, and unconsented medical care,” the document alleges. 

Women detained at Irwin, the document goes on, faced “pressure to have unnecessary surgery without a discussion of risks, benefits, or alternatives, including one woman who was told she needed removal of her uterus, fallopian tubes, and ovaries.” 

The report found that several women indicated that they’d been referred for psychiatric treatment if they refused gynecological procedures.

One woman, who believed she was going to have a cyst drained at Amin’s office, was instead taken to the local Irwin County Hospital for surgery, according to the report. 

“When she attempted to refuse, she was told that she could die if she didn’t have surgery.”

“When she attempted to refuse, she was told that she could die if she didn’t have surgery and, at the same time, told that ICE might deny a request for surgery if she changed her mind later,” the report says. 

Women were sometimes referred to the gynecologist even if they didn’t have gynecological complaints, according to the report.

The report alleges that unnecessary transvaginal procedures were performed without consent, and imaging results were exaggerated to justify surgeries while less invasive treatments were not “adequately pursued.” 

In an interview with The Washington Post on Friday, however, one of the authors said it appears Amin might have saved a woman’s life in one instance, in a detail that isn’t mentioned in the report. 

Dr. Ted Anderson, director of gynecology at Vanderbilt University Medical Center and a member of the review team, told the Post that Amin had incorrectly diagnosed a woman with fibroids. But then Amin found that she had cancer and appropriately performed a hysterectomy, Anderson told the Post.

The records

The report’s authors state they only uncovered one signed consent form, which they describe as “an English language consent for a woman whose primary language appears to be Spanish.” 

Yet they also acknowledge that they did not obtain all of the patients’ medical records.

“Records produced by the Irwin County Detention Center, Irwin County Hospital, and by Dr. Amin appear to be incomplete,” the report says. “In some cases, fewer than 20 pages of medical records were provided. No imaging studies were produced. In many cases, referral records, operative notes, pathology reports, hospital records, and imaging reports were either entirely missing or incomplete, and office notes were nearly illegible.”

Amin’s attorney argued that the lack of access to the complete patients’ records should be seen as a fatal flaw in the report’s findings. 

“Importantly, only four ICE detainees have ever requested medical records from Dr. Amin’s office, and only five ICE detainees have ever requested records from the hospital,” Grubman wrote. “In fact, upon review, it appears that, for the vast majority of patients included in the cited report, no records were requested from either Dr. Amin or Irwin County Hospital.”

Those requests overlap, he said, meaning fewer than nine detainees requested their records directly from the hospital or the doctor’s office.

“The report states that the medical records that were reviewed did not contain informed consent forms,” Grubman wrote. “However, these forms are contained in the medical records maintained by the doctor’s office and/or the hospital which, again, were not reviewed.”

Anderson told VICE News in an interview Friday evening that the team believes the records they reviewed were sufficient to form conclusions. And he said the group also recovered records from the Irwin County Detention Center, which had been forwarded to the facility from Amin’s office and from the hospital. 

“For each of the 19 women, there is some medical or psychiatric record,” said Anderson, who previously served as the president of the American College of Obstetricians and Gynecologists, the country’s premiere professional organization for OB-GYNs. 

“Is there enough data to say this is overly aggressive or unnecessary in most cases? Yes.”

“Are these records 100% complete? Absolutely not,” said Dr. Michelle Debbink, a board-certified OB-GYN based in Salt Lake City, who was not a part of the team behind the report but did review the records of six women who underwent gynecological care while at Irwin. “Is there enough data to say this is overly aggressive or unnecessary in most cases? Yes.”

VICE News has independently uncovered four consent forms signed by women who were detained in Irwin and treated by Amin. Three were for surgical procedures, and one was for a birth control injection. Those women or their attorneys have told VICE News they received medical treatment that they either didn’t want or didn’t understand, despite signing the forms. 

Anderson argued that if the women did not understand their operations, they should not be considered to have agreed to them. 

“Consent is actually a conversation that you have, and not a piece of paper,” Anderson told VICE News. “There are documents we got from the detention center in which the patients report asking why they had surgery and say they don’t understand what happened. That clearly indicates there was not informed consent.”

Debbink agreed. 

“It’s unclear to me that there is a pattern of appropriate informed consent conversations with these patients before they are booked for surgery, and that should be the pattern,” Debbink said. She added, “It is clear to me, from the stories that these women tell independently of one another, that they had no idea what was happening. And I personally saw zero signed consent forms.”

In September, Ken Cuccinelli, the acting deputy director of the Department of Homeland Security, told the National Review that an initial DHS review found that early allegations included in the September whistleblower complaint were not backed up by documentation sent to Washington, D.C. by ICE. But he said an audit team would review the Irwin facility’s original records.

Scott Sutterfield, an executive with LaSalle Corrections, told VICE News on Thursday that  company policy prohibits comment during pending investigations.

“However, we can assure you the allegations are being investigated by an independent office and LaSalle Corrections is fully cooperating,” he wrote in an email. “We are very confident once the facts are made public our commitment to the highest quality care will be evident.”

He added: “We are confident the facts will demonstrate the very malicious intent of others to advance a purely political agenda.”

Source

Continue Reading

World

Record-breaking Colorado wildfires force more evacuations

Published

on

Officials say an elderly couple was found dead as the largest blazes in the US state’s history continue to spread.

Authorities in the US state of Colorado have issued an evacuation order for residents near Rocky Mountain National Park, as gusting winds on Saturday fanned the second-largest wildfire in the state’s history.

Officials issued a mandatory evacuation order for eastern Estes Park, a small town in northern Colorado, after wind pushed the 188,300-acre (76,200-hectare) East Troublesome Fire further east.

A red flag warning issued by the National Weather Service was in effect for the area as winds of 97 kilometres per hour (60 miles per hour) and low humidity were expected through Saturday.

“We tried to get ahead of it to get everyone safely out in an orderly fashion,” said Larimer County Sheriff’s Office spokesman David Moore. “We are expecting a very long day. Fingers crossed and prayers.”

A satellite image shows smoke from the East Troublesome wildfire in northern Colorado [Satellite Image 2020 Maxar Technologies/Handout via Reuters]

The fire, which started on October 14, was 14 percent contained as of Saturday.

As the flames spread, authorities closed all 668 square kilometres (415 square miles) of Rocky Mountain National Park to visitors and ordered the evacuation of several mountain communities.

The blaze has killed at least two to date, after an elderly couple was found dead in their home outside the town of Grand Lake, about 30km (19 miles) from Estes Park, on Friday.

Grand County Sheriff Brett Schroetlin said Lyle and Marilyn Hileman, both in their 80s, had “refused to evacuate”, instead opting to stay in the home they had lived in for many years.

“Our parents left this world together and on their own terms. They leave a legacy of hard work and determination to overcome – something all of Grand County will need,” the family said in a statement that was read by the sheriff.

Schroetlin called the wildfire “a catastrophic event” in the small community.

Colorado has witnessed the largest fires in its history this year, which has also seen massive blazes in California and the Pacific Northwest of the United States.

More than 1,813sq km (700sq miles) of land have burned so far in the East Troublesome Fire, Larry Helmerick, fire information coordinator for the Rocky Mountain Area Coordination Center, told The Associated Press news agency this week.

Another fire in northern Colorado that began in August and is still spreading – known as the Cameron Peak Fire – has become the largest in state history.

As of Saturday morning, that blaze was 60 percent contained and has destroyed more than 207,000 acres (nearly 84,000 hectares), officials said.

Authorities in Colorado said this week there was a possibility the Cameron Peak Fire and the East Troublesome Fire could merge.

Scientists have pointed to climate change as making wildfires more intense across the US, among other major climate events, such as storms and droughts.

The Cameron Peak Fire, the largest wildfire in Colorado’s history, has destroyed more than 207,000 acres of land [Loveland Fire Rescue Authority/via Reuters]

Jennifer Balch, director of the Earth Lab at the University of Colorado, Boulder, said drought intensified the blazes in the state.

She said it is “just a matter of time” until the wildfire threat affects more people, who are moving closer to forests.

“If I had a panic button, I would push it – because we have put millions of homes in harm’s way across the Western US,” Balch told AP news agency.

Source

Continue Reading

Trending