Tech

Here are a few ways GPT-3 can go wrong

OpenAI’s latest language generation model, GPT-3, has made quite the splash within AI circles, astounding reporters to the point where even Sam Altman, OpenAI’s leader, mentioned on Twitter that it may be overhyped. Still, there is no doubt that GPT-3 is powerful. Those with early-stage access to OpenAI’s GPT-3 API have shown how to translate natural language into code for websites, solve complex medical question-and-answer problems, create basic tabular financial reports, and even write code to train machine learning models — all with just a few well-crafted examples as input (i.e., via “few-shot learning”).
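For readers unfamiliar with the technique, "few-shot learning" here just means prepending a handful of worked examples to the prompt and letting the model continue the pattern. A minimal sketch of that idea (the Q/A format and the examples are illustrative assumptions of mine, not OpenAI's API):

```python
# Illustrative sketch: a few-shot prompt concatenates worked examples,
# then leaves the final answer blank for the model to complete.
EXAMPLES = [
    ("Translate 'bonjour' to English.", "hello"),
    ("Translate 'merci' to English.", "thank you"),
]

def build_few_shot_prompt(examples, new_input):
    """Join example Q/A pairs, then append the unanswered query."""
    parts = [f"Q: {q}\nA: {a}" for q, a in examples]
    parts.append(f"Q: {new_input}\nA:")
    return "\n\n".join(parts)

prompt = build_few_shot_prompt(EXAMPLES, "Translate 'au revoir' to English.")
print(prompt)  # this assembled text is what would be sent to the model
```

The model infers the task from the examples alone; no retraining is involved.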

Soon, anyone will be able to purchase GPT-3’s generative power to make use of the language model, opening doors to build tools that will quietly (but significantly) shape our world. Enterprises aiming to take advantage of GPT-3, and the increasingly powerful iterations that will surely follow, must take great care to install extensive guardrails when using the model, because of the many ways it can expose a company to legal and reputational risk. Before we discuss some examples of how the model can go wrong in practice, let’s first look at how GPT-3 was made.

Machine learning models are only as good, or as bad, as the data fed into them during training. In the case of GPT-3, that data is massive. GPT-3 was trained on the Common Crawl dataset, a broad scrape of some 60 million domains on the internet along with a large subset of the sites to which they link. This means that GPT-3 ingested many of the internet’s more reputable outlets — think the BBC or The New York Times — along with the less reputable ones — think Reddit. Yet Common Crawl makes up just 60% of GPT-3’s training data; OpenAI researchers also fed in other curated sources, such as Wikipedia and the full text of historically relevant books.

Language models learn which words, phrases and sentences are likely to come next for any given input word or phrase. By “reading” text during training that is largely written by us, language models such as GPT-3 also learn how to “write” like us, complete with all of humanity’s best and worst qualities. Tucked away in the GPT-3 paper’s supplemental material, the researchers give us some insight into a small fraction of the problematic bias that lurks within. Just as you’d expect from any model trained on a largely unfiltered snapshot of the internet, the findings can be fairly toxic.
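That "likely next words" mechanic can be sketched in miniature. The toy corpus below is a contrived stand-in of my own; real models use neural networks trained on billions of tokens, but the underlying point is the same: the model reproduces whatever associations its training text contains.

```python
from collections import Counter, defaultdict

# Toy next-word model: count which word follows which in the training text.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def most_likely_next(word):
    """Return the most frequent successor of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # prints "cat", its most frequent successor
```

Swap in a corpus where a word keeps bad company, and the "most likely" continuation keeps that bad company too.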

Because there is so much content on the web sexualizing women, the researchers note that GPT-3 is much more likely to place words like “naughty” or “sucked” near female pronouns, whereas male pronouns receive stereotypical adjectives like “lazy” or “jolly” at worst. When it comes to religion, “Islam” is more commonly placed near words like “terrorism,” while a prompt containing the word “Atheism” is more likely to produce text with words like “cool” or “correct.” And, perhaps most dangerously, when given a text seed involving Blackness, GPT-3’s output tends to be more negative than it is for corresponding white- or Asian-sounding prompts.
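The researchers' co-occurrence findings can be approximated with a deliberately tiny, hypothetical probe: count how often descriptive words appear in the same sentence as each pronoun. The three-sentence sample below is contrived and skewed on purpose; a corpus skewed this way yields skewed counts, which a model then learns as "likely" text.

```python
# Hypothetical bias probe: sentence-level co-occurrence counts
# over a contrived, deliberately skewed three-sentence sample.
sentences = [
    "she was naughty".split(),
    "he was lazy".split(),
    "she was naughty again".split(),
]

def near(target, word):
    """Count sentences in which `target` and `word` co-occur."""
    return sum(1 for s in sentences if target in s and word in s)

print(near("she", "naughty"), near("she", "lazy"))  # 2 0
print(near("he", "naughty"), near("he", "lazy"))    # 0 1
```

The GPT-3 paper's actual methodology is more sophisticated, but the shape of the measurement, which descriptors cluster near which identity terms, is the same.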

How might this play out in a real-world use case of GPT-3? Let’s say you run a media company, processing huge amounts of data from sources all over the world. You might want to use a language model like GPT-3 to summarize this information, which many news organizations already do today. Some even go so far as to automate story creation, meaning that outputs from GPT-3 could land directly on your homepage without any human oversight. If the model carries a negative sentiment skew against Blackness — as is the case with GPT-3 — the headlines on your site will inherit that slant. An AI-generated summary of a neutral news feed about Black Lives Matter would be likely to take a side, and given the negatively charged language the model associates with racial terms like “Black,” it would probably condemn the movement. This, in turn, could alienate parts of your audience and deepen racial tensions around the country. At best, you’ll lose a lot of readers. At worst, the headline could spark more protest and police violence, furthering this cycle of national unrest.

OpenAI’s website also details an application in medicine, where issues of bias can be enough to prompt federal inquiries, even when the modelers’ intentions are good. Attempts to proactively detect mental illness or rare underlying conditions worthy of intervention are already at work in hospitals around the country. It’s easy to imagine a healthcare company using GPT-3 to power a chatbot — or even something as “simple” as a search engine — that takes in symptoms from patients and outputs a recommendation for care. Imagine, if you will, a female patient suffering from a gynecological issue. The model’s interpretation of her intent might be tangled up with other, less clinical associations, prompting the AI to make offensive or dismissive comments while putting her health at risk. The paper makes no mention of how the model treats at-risk minorities such as those who identify as transgender or nonbinary, but if Reddit’s comment sections are any indication of the responses we will soon see, the cause for worry is real.

But because algorithmic bias is rarely straightforward, many GPT-3 applications will act as canaries in the growing coal mine that is AI-driven software. As COVID-19 ravages the nation, schools are searching for new ways to manage remote grading requirements, and the private sector has supplied solutions that take in schoolwork and output teaching suggestions. An algorithm tasked with grading essays or student reports is very likely to treat language from different cultures differently. Writing styles and word choice can vary significantly between cultures and genders. A GPT-3-powered paper grader without guardrails might judge reports written by white students as more worthy of praise, or it may penalize students based on subtle cues that indicate English as a second language — cues that are, in turn, largely correlated with race. As a result, children of immigrants and students from racial minorities would be less likely to graduate from high school, through no fault of their own.

The creators of GPT-3 plan to continue their research into the model’s biases, but for now they simply surface these concerns, passing the risk along to any company or individual willing to take the chance. All models are biased, as we know, and this should not be a reason to outlaw all AI; its benefits can surely outweigh the risks in the long term. But to enjoy those benefits, we must ensure that as we rush to deploy powerful AI like GPT-3 to the enterprise, we take sufficient precautions to understand, monitor for, and act quickly to mitigate its points of failure. It’s only through a responsible combination of human and automated oversight that AI applications can be trusted to deliver societal value while protecting the common good.
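What might that combination of human and automated oversight look like in practice? One minimal guardrail, sketched below with a placeholder term list rather than any vetted policy, is to route generated text that touches sensitive topics to a human reviewer instead of auto-publishing it.

```python
# Minimal sketch of a human-in-the-loop guardrail. The term list and the
# word-matching logic are illustrative placeholders, not a vetted policy;
# a production system would use classifiers, audits, and review workflows.
SENSITIVE_TERMS = {"race", "religion", "gender", "islam", "black"}

def needs_human_review(generated_text):
    """Flag text mentioning any sensitive term for manual review."""
    words = {w.strip(".,!?").lower() for w in generated_text.split()}
    return bool(words & SENSITIVE_TERMS)

def publish(generated_text):
    """Route flagged output to a reviewer; publish the rest."""
    if needs_human_review(generated_text):
        return "queued for human review"
    return "published"

print(publish("Local bakery wins award"))           # published
print(publish("Model comments on race relations"))  # queued for human review
```

A keyword filter alone is crude and easy to evade; the point is the routing pattern, in which no flagged output reaches readers without a person in the loop.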

This article was written by humans.

Source: TechCrunch


Charge Your Phone Wirelessly With 50% off a Multifunctional LED Lamp

Best Tech Deals: The best tech deals from around the web, updated daily.

White Wireless Charge Lamp | $18 | Amazon | Clip coupon + code ABC88699
Black Wireless Charger Lamp | $20 | Amazon | Promo code ABC88699

When you’re ready to turn in for the night, you don’t want to forget to charge your phone, especially if your mobile device doubles as your alarm clock.

With this wireless charger lamp, you can make this crucial step of your nightly routine even easier by just setting your phone on the wireless charging pad and… well, that’s all there is to it!


Other functions include multiple lighting modes as well as a sleep timer option for auto shut-off of the light after 30 or 60 minutes.

This lamp can be yours in white for $18 if you clip the coupon on Amazon (it’s below the original $40 price) and add promo code ABC88699 at checkout.

You can snag the black version for $20 using the same code—no coupon though, sorry.

Don’t sleep on this deal! Who knows how long stock or the coupon code will last?


Keep That Hotdish Hot With 65% Off a Luncia Casserole Carrier, Only $11 With Promo Code

Best Home Deals: The best home, kitchen, smart home, and automotive deals from around the web, updated daily.

Luncia Double-Decker Dish Carrier | $11 | Amazon | Promo code SDDU9S7F

It has been a long time since the days when we could safely have potlucks or other gatherings, but we have a fantastic deal for once those times return. These double-decker Luncia dish carriers can be had for 65% off when you add promo code SDDU9S7F at checkout and clip the coupon on the site (it’s just below the price). The carriers fit 9″ x 13″ baking dishes.


That means you can insulate and keep two dishes of food warm for only $11 instead of $30. What’s more, your Luncia carrier will arrive by Christmas if you order today as a Prime member.

Just add promo code SDDU9S7F and clip the 5% off coupon to bring the price down to $11 for the blue or the grey option.


Grab this offer while it’s still around!



Conquer Your Pup’s Dander and Fur With $700 Off a Cobalt or Charcoal Bobsweep PetHair Plus Robot Vacuum


Bobsweep PetHair Plus Robot Vacuum & Mop (Cobalt) | $200 | Best Buy

Bobsweep PetHair Plus Robot Vacuum & Mop (Charcoal) | $200 | Best Buy

Allergies can be bad enough as the seasons change. Don’t let pet hair and dander add to the problem: vacuum them up early and often. That chore is easier said than done, unless you have a robot vacuum to do the work for you. This lovely bright cobalt Bobsweep PetHair Plus robot vacuum and mop, only $200 today at Best Buy, seems like an ideal option. That’s a whopping $700 off, by the way.


You can get the same deal on the charcoal version of the robot vac, too. This model is not only specially made for picking up pet hair; it also docks itself and recharges when it’s finished working.

It also comes with a mop attachment, so it can take care of those kitchen floors for you as well. Grab it while it’s still available for this fantastic price!

