
SpaceX successfully launches another Starlink mission, with over 700 satellites launched to date


SpaceX has launched yet another batch of 60 of its Starlink broadband internet satellites. The rocket lifted off from Kennedy Space Center in Florida at 7:29 AM EDT (4:29 AM PDT) this morning, after having been scrubbed three times earlier – twice because of weather, and once because of an unusual sensor reading. This is the 12th Starlink mission to date, which means that over 700 of the SpaceX satellites have now been launched.

The mission reused a Falcon 9 booster stage that had previously flown on two separate missions, including the Crew Dragon Demo-2 launch that carried SpaceX’s first human crew – NASA astronauts Bob Behnken and Doug Hurley. SpaceX successfully recovered the booster for this mission, too, with a controlled landing on its ‘Of Course I Still Love You’ drone ship at sea. The company will also attempt to catch the fairing halves using separate recovery ships, and we’ll post results when they’re available.

SpaceX is currently running a private beta test of Starlink, optimizing for latency and connection quality. The company says that it has achieved downlink speeds of up to 100 megabits per second, with very low latency as well. It intends to open the beta to the public later this year.

The Starlink satellites are set to deploy a little later in the mission, at which point we’ll confirm a good orbital insertion and update this post.



TikTok puts Fleetwood Mac’s Rumours back in Billboard’s top 10


Fleetwood Mac’s Rumours is a top 10 album this week — more than four decades after its release — thanks to a viral TikTok video that’s had everyone vibing along to “Dreams.” Rumours now ranks seventh on the Billboard 200 chart, the publication announced last night; it’s the album’s first appearance in the top 10 since 1978, the year after it debuted.

Rumours’ newfound popularity is thanks to a viral video from Nathan Apodaca, who goes by 420doggface208 on TikTok, that shows him skateboarding down a road and sipping cran-raspberry juice straight from the jug while “Dreams” plays over the top. It’s been viewed more than 60 million times since being posted at the end of September, and it’s even inspired both Mick Fleetwood and Stevie Nicks to sign up for TikTok over the past two weeks. Fleetwood recreated the viral video himself, while Nicks posted a video of herself lacing up roller skates and singing along. The two videos have a combined 35 million views.

Billboard’s chart measures album “units” moved each week in terms of sales, track purchases, and streams. Rumours made it into the top 10 with 33,000 units, nearly 70 percent of which came from streaming. Billboard says it counted 30.6 million streams from the album for the week ending October 15th. Rumours had already been climbing the chart over the past few weeks as the song went viral, though even before the boost, the classic album had been hovering in the 50s. “Dreams” itself soared on Spotify, becoming the week’s 20th most popular song.
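
As a rough sanity check of those figures, here’s a minimal back-of-the-envelope calculation — a sketch in Python using only the numbers reported above. Billboard’s actual stream-to-unit conversion rates aren’t given in the article, so this only recovers the blended rate implied by the reported totals.

```python
# Back-of-the-envelope check using only the figures reported above:
# 33,000 total units, roughly 70 percent from streaming, and 30.6 million
# streams for the week. Billboard's real stream-to-unit conversion rates
# are not stated here; this just derives the implied blended rate.
total_units = 33_000
streaming_share = 0.70
weekly_streams = 30_600_000

streaming_units = total_units * streaming_share            # ~23,100 units
implied_streams_per_unit = weekly_streams / streaming_units

print(f"streaming units: {streaming_units:,.0f}")            # 23,100
print(f"implied streams per unit: {implied_streams_per_unit:,.0f}")  # ~1,325
```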

Rumours’ resurgence is another example of the power TikTok has for musicians. The app sent Lil Nas X to stardom, and since then, other artists have tried — somewhat desperately, at times — to harness its power to give them a top hit. Rumours’ success shows that there’s even room for new audiences to discover a classic.


How to Make Grits From Fresh Hominy



True grits are usually made with dried, ground hominy—corn that has been soaked in an alkaline solution to render it more nutritious and delicious. This is harder to find in the Pacific Northwest than one might think. Even though Portland practically has a fetish for southern food, most of the “grits” you find in area grocery stores are sold as “grits aka polenta,” which is not what I’m looking for. Not at all.


That extra soaking step is what makes grits taste like grits. Without calcium hydroxide (or some similar caustic substance), your ground corn porridge is bland and blah. Nixtamalization—which you can learn all about here—is the key to giving it the toasty, sweet, aromatic notes that make this particular bowl of breakfast mush better than the others.


There aren’t, however, many recipes for turning freshly nixtamalized corn into grits. There are a few that show you how to dry and grind your corn and then turn that into grits, but I simply don’t want to do all of that. Luckily, if you can make risotto (and have a food processor), you can make grits from fresh hominy. I was only able to find one recipe for doing so (on the Anson Mills site), so I used that as a template and—because I cannot help myself—made a few adjustments.

But, before we get to that, you should familiarize yourself with the process of transforming corn into hominy, so go do that if you haven’t already. Once you’ve made at least a cup of the stuff, all you’ll need is water, salt, and a little butter for flavor. You will not need cream or milk. The creaminess in grits comes from their own, naturally occurring starch, not dairy, so please save the milk for your cereal and the cream for your coffee.

Grits made with freshly nixtamalized corn are—depending on how finely you break them down—a little more toothsome than the kind you buy pre-ground. They retain some of the hominy’s chewy texture, which I enjoy, but the more finely you process them in the food processor, the closer to “regular” grits they will be.

Those larger bits will come out chewy, so pulse a little past this point. Photo: Claire Lower


Once you’ve pulsed your hominy into something that looks like grits, it’s almost exactly like cooking risotto, the only difference being that you do not toast them in fat before you add liquid, as doing so will coat the little corn bits and prevent them from absorbing water. Toast them in a dry pan until they are hot and fragrant and start to stick to the end of a wooden spoon, then gradually add salted water, stirring with each addition, until they soften and swell and release their starch. Then—and only then—should you add butter to taste.

Fresh Hominy Grits

Ingredients:

  • At least 1 cup of freshly prepared hominy (not dried or canned)
  • At least 2 cups of water per cup of hominy
  • Salt
  • 2 tablespoons of butter per cup of hominy, divided

Add the hominy to your food processor and pulse until it is broken down into fine, grit-sized pieces. The smaller your bits, the quicker the grits will cook, and the creamier they will be (though those little toothsome bits can be fun). Add the grits to a dry stainless steel pot and cook over medium-low heat, stirring constantly with a wooden spoon, until they are hot, fragrant, and steaming. While the grits are heating, lightly salt the water and bring it to a boil in a separate saucepan (or kettle).


Once the grits are hot and starting to stick to the end of your wooden spoon, start adding water, about 1/3 cup at a time, stirring with each addition until it is absorbed. The grits will be quite tight and firm looking at first; just keep adding water and stirring until they soften and swell. Eventually, they will loosen up and release their starch, which is what will make them creamy.

Taste as you go. It’s possible your grits will look the part before they are cooked enough, so keep cooking until they are soft on your teeth, adding more water as needed to keep them from drying out. Once they look and taste right, add one tablespoon of butter (per cup of hominy you started with), stir, taste, and adjust with more butter or salt if needed. Serve with hot sauce, cheese, shrimp, and/or more butter.



The ambitious effort to piece together America’s fragmented health data


From the early days of the COVID-19 pandemic, epidemiologist Melissa Haendel knew that the United States was going to have a data problem. There didn’t seem to be a national strategy to control the virus, and cases were springing up in sporadic hotspots around the country. With such a patchwork response, nationwide information about the people who got sick would probably be hard to come by.

Other researchers around the country were pinpointing similar problems. In Seattle, Adam Wilcox, the chief analytics officer at UW Medicine, was reaching out to colleagues. The city was the first US COVID-19 hotspot. “We had 10 times the data, in terms of just raw testing, than other areas,” he says. He wanted to share that data with other hospitals, so they would have that information on hand before COVID-19 cases started to climb in their area. Everyone wanted to get as much data as possible in the hands of as many people as possible, so they could start to understand the virus.

Haendel was in a good position to help make that happen. She’s the chair of the National Center for Data to Health (CD2H), a National Institutes of Health program that works to improve collaboration and data sharing within the medical research community. So one week in March, just after she’d started working from home and pulled her 10th grader out of school, she started trying to figure out how to use existing data-sharing projects to help fight this new disease.

The solution Haendel and CD2H landed on sounds simple: a centralized, anonymous database of health records from people who tested positive for COVID-19. Researchers could use the data to figure out why some people get very sick and others don’t, how conditions like cancer and asthma interact with the disease, and which treatments end up being effective.

But in the United States, building that type of resource isn’t easy. “The US healthcare system is very fragmented,” Haendel says. “And because we have no centralized healthcare, that makes it also the case that we have no centralized healthcare data.” Hospitals, citing privacy concerns, don’t like to give out their patients’ health data. Even if hospitals agree to share, they all use different ways of storing information. At one institution, the classification “female” could go into a record as one, and “male” could go in as two — and at the next, they’d be reversed.
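
To make that problem concrete, here’s a minimal sketch in Python — with invented site names and code mappings rather than any real institution’s scheme — of what reconciling even a single field like recorded sex across two hospitals can look like.

```python
# Hypothetical example: two institutions encode "sex" with opposite numeric
# codes, as described above. Harmonizing means translating each site's local
# codes into one shared vocabulary before records can be pooled.
SITE_CODEBOOKS = {
    "hospital_a": {1: "female", 2: "male"},   # invented mapping
    "hospital_b": {1: "male", 2: "female"},   # invented, reversed mapping
}

def harmonize_sex(site: str, raw_code: int) -> str:
    """Translate a site-specific numeric code into a shared label."""
    return SITE_CODEBOOKS.get(site, {}).get(raw_code, "unknown")

records = [
    {"site": "hospital_a", "sex_code": 1},
    {"site": "hospital_b", "sex_code": 1},
]
for r in records:
    print(r["site"], harmonize_sex(r["site"], r["sex_code"]))
# hospital_a female
# hospital_b male
```

Multiply that by thousands of fields, value sets, and units, and the scale of the harmonization work the project faced becomes clearer.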

Emergencies, though, have a way of busting through norms. “Nothing like a pandemic to bring out the best in an institution,” Haendel says. And after only a few months of breakneck work from CD2H and collaborators around the country, the National COVID Cohort Collaborative Data Enclave, or N3C, opened to researchers at the start of September. Now that it’s in place, it could help bolster pandemic responses in the future. It’s unique from anything that’s come before it, in size and scope, Haendel says. “No other resource has ever tried to do this before.”

Institutional silos

Patient health records are fairly accessible to scientists — under health privacy laws, the records can be used for research as long as identifying information (like names and locations) is removed. The catch is that researchers are usually limited to records of patients at the places where they work. The dataset can only include as many patients as that institution treats, and it’s geographically restricted. Researchers can’t be sure that patient data in New York City would be equivalent to patient data in Alabama. Using information from multiple places would help make sure the results were as representative as possible.

But it can be risky for institutions to share and combine their data, Wilcox says. Moving data outside of the control of an organization risks a data breach, which could lead to patient mistrust, open the institution up to legal issues, or create other competitive disadvantages, he says. They need to balance all those concerns against the potential benefits. “The organization needs to approve it. Is this a good idea? Do we want to participate in it?” Wilcox says.

Institutions often answer those questions with a “no.” They want to maintain ownership and control over their own data, says Anita Walden, assistant director at CD2H. The pandemic changed that culture. People who may typically be reluctant to participate in programs like this one were suddenly all-in, she says. “Because of COVID-19, people just want to do what they can.”

Getting institutions to send in their data was only the first step. Next, experts had to transform that data into something useful. Medical institutions all collect and record health information in slightly different ways, and there haven’t been incentives for them to standardize their methods. Many institutions spent hundreds of millions of dollars to set up their electronic medical records — they don’t want to change things unless they absolutely have to.

“It’s like turning the Titanic at this point,” says Emily Pfaff, who leads the team at N3C merging different institutions’ data. The companies that make the software for electronic health records, like Epic, also don’t make their strategies for storing data available to outside researchers. “If you want to practice open science with clinical data, which I think many of us do, you’re not going to be able to do that with the data formatted in the way that the electronic health record does it,” she says. “You have to transform that data.”

Countries like the United Kingdom, which have centralized health care systems, don’t have to deal with the same problems: data from every patient in the country’s National Health Service is already in one place. In May, researchers published a study that analyzed records from over 17 million people to find risk factors for death from COVID-19.

But in the US, for N3C, it’s not as simple. Instead of a COVID-19 patient’s data heading directly into a national database, the new process is far more involved. Let’s say a pregnant woman goes to her doctor with symptoms of what she thinks could be COVID-19. She gets tested, and the test comes back positive. That result shows up in her health record. If her health care provider is participating in the N3C database, that record gets flagged. “Then her health record has a chance to get caught by our net, because what our net is looking for, among other things, is a positive COVID test,” Pfaff says.

Her data then travels into a database, where a program (which had to be created from scratch) transforms information about the patient’s treatments and preexisting conditions into a standardized format. Then, it’ll get pushed into the N3C data enclave, undergo a quality check, and then — without her name or the name of the institution the record came from — be available for researchers.
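
At its core, the process described above is a filter, transform, and de-identify pipeline. The sketch below is a loose illustration of that flow — a hypothetical Python outline with made-up field names and a trivial standardization step, not N3C’s actual ingestion code.

```python
# Loose illustration of the flow described above: flag records with a
# positive COVID-19 test, map them into a standardized shape, strip direct
# identifiers, and run a basic quality check. All field names are invented.
def is_covid_positive(record: dict) -> bool:
    return record.get("covid_test_result") == "positive"

def standardize(record: dict) -> dict:
    # Stand-in for the real mapping into a common data model.
    return {
        "name": record.get("name"),
        "site": record.get("site"),
        "conditions": sorted(record.get("preexisting_conditions", [])),
        "treatments": sorted(record.get("treatments", [])),
        "covid_positive": True,
    }

def deidentify(record: dict) -> dict:
    # Drop direct identifiers before the record heads to the enclave.
    return {k: v for k, v in record.items() if k not in {"name", "site"}}

def quality_check(record: dict) -> bool:
    return {"conditions", "treatments", "covid_positive"} <= record.keys()

def ingest(records: list[dict]) -> list[dict]:
    enclave = []
    for rec in filter(is_covid_positive, records):
        clean = deidentify(standardize(rec))
        if quality_check(clean):
            enclave.append(clean)
    return enclave
```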

Nearly 70 institutions have started the process of contributing data to the enclave. Data from 20 sites has passed through the full process and is accessible to researchers. At the end of September, the database held around 65,000 COVID-19 cases, Pfaff says, and around 650,000 non-COVID-19 cases (which can be used as controls). There’s no specific numerical goal, she says. “We would take as many as possible.”

Using the data

As some experts were working to get medical institutions on board with the project and others were figuring out how to harmonize a crush of data, still others were organizing to figure out what, exactly, they wanted to do with the resulting information. They sorted into a handful of working groups, each focused on a different area: there’s one focused on the intersection of diabetes and COVID-19, for example, and another on kidney injuries.

Elaine Hill, a health economist at the University of Rochester, is heading up a group focused on pregnancy and COVID-19. The first thing they’re hoping to do, she says, is figure out just how many people had the virus when they gave birth — only a few hospitals have published that data so far. “Then, we’re interested in understanding how COVID-19 infection affects pregnancy-related outcomes for both mother and baby,” she says. Thanks to the database, they’ll be able to do that with nationwide information, not just data from patients in a handful of places.

That wide view of the problem is one key benefit of a large, national database. Different places across the US had different COVID-19 prevention policies, different regulations around lockdowns, and have different demographics. Combining them gives a more complete picture of how the virus hit the country. “It makes it possible to shed light on things we wouldn’t be able to with just my Rochester cohort,” Hill says.

Some symptoms or complications from COVID-19 are also rare, and one hospital might only see one or two total patients who have them. “When you’re gathering data across the nation, you have a bigger population, and can look at trends in those rarer conditions,” Walden says. Larger datasets can make it possible for analysts to use more complicated machine learning techniques, as well.

If all goes well with N3C, the project could offer a blueprint for better data sharing in the future. More than that, it can offer a concrete tool to future projects — the code needed to clean, transform, and merge data from multiple hospitals now exists. “I almost feel like it’s building pandemic-ready infrastructure for the future,” Pfaff says. And now that research institutions have shared data once — even though it’s under unique circumstances — they may be more willing to do it again in the future.

“Five years from now, the greatest value of this data set won’t be the data,” Wilcox says. “It’ll have been the methods that we learned trying to get it working.”
