
472 points
Discussion [D] GPT-3, The $4,600,000 Language Model (self.MachineLearning)
submitted 5 years, 7 months ago* (edited 18 hours, 40 minutes after) by mippie_moe to /r/MachineLearning
217 comments

OpenAI’s GPT-3 Language Model Explained

Some int...


[–] djc1000 3 points, 5 years, 7 months ago

AGI isn’t the issue. I think a lot of folks who’ve responded to me are confused about that.

The issue is performance on basic language understanding tasks like anaphoricity. They made essentially no progress there.

The performance on question-answering tasks isn’t meaningful. We know, from the many times results like these have been reported before, that they actually come from extremely carefully prepared test datasets that won’t carry over to real-world data.

An example is their reported results on simple arithmetic. The model doesn’t know how to do arithmetic. It just happened that its training dataset included texts with arithmetic examples that matched the test corpus. Inferring the answer to “2 + 2 =” from the statistically most probable word to follow in a sentence is not the same as understanding how to add 2 and 2.
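The memorization point can be sketched with a toy lookup "model" (everything here is illustrative, not GPT-3's actual mechanism or data): it answers an arithmetic prompt only if that exact string appeared in its training text, and fails on anything unseen.

```python
# Toy illustration of memorization vs. arithmetic: a "model" that only
# recalls completions it saw verbatim in training.

training_corpus = [
    "2 + 2 = 4",
    "3 + 5 = 8",
    "10 + 7 = 17",
]

# Build a prompt -> completion table from the "training data".
memorized = {}
for line in training_corpus:
    prompt, answer = line.rsplit("= ", 1)
    memorized[prompt + "="] = answer

def predict(prompt: str) -> str:
    """Return the continuation seen in training, or fail on unseen prompts."""
    return memorized.get(prompt, "<no idea>")

print(predict("2 + 2 ="))  # seen in training -> "4"
print(predict("6 + 9 ="))  # unseen -> "<no idea>", despite trivial arithmetic
```

A model that had actually learned addition would handle the unseen prompt; a lookup over training text cannot, which is the distinction the comment is drawing.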

[–] [deleted] 4 points, 5 years, 7 months ago* (edited 7 minutes after)
[deleted] by user
[–] djc1000 3 points, 5 years, 7 months ago

Very little progress. It doesn’t “understand” language at all. It isn’t a “few shot learner,” but it’s able to infer the answers to some questions because they’re textually similar to material in its training set.

(I’ve seen so many claims about few shot learning and the like - it always turns out not to really be true.)

You’re right that it could be fine tuned.

But it’s important to keep in mind that this was a model trained and tested on very clean, prepared text. The history of models like this shows that performance drops 20-30% on real-world text. So where they’re saying 83% on anaphoricity, or whatever, I’m reading 60%.
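A quick back-of-envelope check of that discount (assuming the 20-30% drop is relative to the reported score; the 83% figure is the commenter's example, not a verified benchmark number):

```python
# Apply a 20-30% relative performance drop to a reported clean-text score.
reported = 0.83

for drop in (0.20, 0.30):
    adjusted = reported * (1 - drop)
    print(f"{drop:.0%} drop -> {adjusted:.0%}")
```

That yields roughly 58-66%, so reading a reported 83% as "about 60%" is consistent with the pessimistic end of the 20-30% range.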

I appreciate that my brain reference caused a great deal of confusion, sorry about that.
