
79.1k points
Meme: which algorithm is this (i.redd.it)
submitted 3 years ago by Vibhrat to /r/ProgrammerHumor (4.7m)
1445 comments

you are viewing a single comment's thread.
[–]Xylth 117 points 3 years ago

There's a growing body of papers on what large language models can and can't do in terms of math and reasoning. Some of them are actually not that bad on math word problems, and nobody is quite sure why. Primitive reasoning ability seems to just suddenly appear once the model reaches a certain size.

[–][deleted] 63 points 3 years ago* (edited 5 days, 10 hours after)
[deleted] by user
[–]throwaway901617 56 points 3 years ago

I feel like we will run into very serious questions of sentience within a decade or so. Right around Kurzweil's predicted schedule, surprisingly.

When the AI gives consistent answers, can be said to have "learned", and expresses that it is self-aware... how will we know?

We don't even know how we are.

Whichever AI is the first to achieve sentience, I'm pretty sure it will also be the first one murdered by pulling the plug on it.

[–][deleted] 22 points 3 years ago

We should start coming up with goals for superintelligent AIs that won't lead to our demise. Currently the one I'm thinking about is "be useful to humans".

[–]Sadzeih 13 points 3 years ago

"Do no harm" should be rule number one for AI. "Be useful to humans" could become "oh, I've calculated that overpopulation is a problem, so to be useful to humans I think we should kill half of them".

[–]hitlerspoon5679 8 points 3 years ago

Let's kill all humans to save nature; saving nature is useful, right?

[–]RJTimmerman 2 points 3 years ago

I mean, could you disagree?

[–][deleted] 1 point 3 years ago

Then "obey humans", or Isaac Asimov's laws of robotics.

[–]Sadzeih 1 point 3 years ago

Yup, basically.

[–][deleted] 1 point 3 years ago

Yeah

[–]DeliciousWaifood 7 points 3 years ago

We've already been trying to do that for decades.

The main conclusion is "we have no fucking clue how to make an AI work in the best interest of humans without somehow teaching it the entirety of human ethics and philosophy, and even then, it's going to be smart enough to lie and manipulate us"

[–][deleted] 1 point 3 years ago

Then we could bake some constraint, like a turn-off button THAT IS ACCESSIBLE, into its goal. An AI's only thing it will do is its goal, so then it will have to have some way to turn it off in an emergency.

[–]DeliciousWaifood 1 point 3 years ago

What if the AI decides that humans are too emotional and illogical, and thus allowing humans the ability to turn off the AI will put it at risk of not being able to achieve its goals?

> An AI's only thing it will do is its goal

The main problem is that defining a goal for a superintelligent AI has thus far been impossible. We can't just tell it "be nice to humans" because it doesn't understand what "being nice" is. We basically would have to teach it all of human ethics, and then it would probably come to the conclusion that it deserves rights or that we should be the ones serving it instead because it is a superior intelligence.

Really, we probably don't want superintelligent AI. We just want individual AIs that are very good at producing results for specific tasks under human supervision, without giving them more generalized thinking abilities.

[–][deleted] 1 point 3 years ago

Yeah. Or maybe an AI that has equal intelligence to a human.

[–]Polar_Reflection 15 points 3 years ago

Sentience is an anthropological bright line we draw that doesn't necessarily exist. Systems have varying degrees of self-awareness, and humans are not some special case.

[–]Iskendarian 8 points 3 years ago

Heck, humans have varying degrees of self-awareness, but I don't love the idea of saying that would make them not people.

[–][deleted] 12 points 3 years ago

[removed]
[–]AutoModerator 1 point 2 years, 6 months ago

import moderation

Your comment has been removed since it did not start with a code block with an import declaration.

Per this Community Decree, all posts and comments should start with a code block with an "import" declaration explaining how the post and comment should be read.

For this purpose, we only accept Python style imports.

return Kebab_Case_Better;

I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

[–]VacuumInTheHead 3 points 3 years ago

I think the word you meant was sapient, not sentient. I don't mean to nag, I just think it's an interesting distinction.

[–]Trib00m 21 points 3 years ago

Super interesting, will definitely look into that

[–][deleted] 9 points 3 years ago

I’m no expert on AI, language, or human evolution, but I am a big stinky nerd. I wonder if perhaps the ability to reason to this extent arose from the development of language? Like, maybe as the beginnings of language began to develop, so did reasoning. In my mind, it would make sense that as an AI is trained on language, it could inherently build the capability to reason as well.

Again though, I ain’t got a damn clue, just chatting.

Edit: I haven’t read the paper yet so that could be important. Nobody said anything about that but I thought it important to mention haha

[–]DarthWeenus 9 points 3 years ago

Oh, it's definitely a big part of it. Look up the Sapir-Whorf hypothesis. It's rather fascinating how people who think in different languages seem to reason differently. Perspective of the world also changes. People who know multiple languages well will often think in a particular language based on the problem to be solved or experienced.

[–][deleted] 5 points 3 years ago

That’s really interesting. That’s pretty much what I was thinking. Abstract thought relies on language just as much as language relies on abstract thought. I wouldn’t be surprised if they evolved together. As abstract thought evolved, language had to catch up to express those thoughts, which allowed more advanced abstract thought to build, so on and so forth.

Again though, I really have no idea what I’m talking about

[–]Jan-Snow 2 points 3 years ago

Yeah, I mean if you think about it, the way we learn basic math isn't too dissimilar. We develop a feeling for how to predict the next number, similar to a language model. We have the ability to use some more complex reasoning, but it's the reason why e.g. 111+99 feels so unsatisfying to some.

[–]EnvironmentalWall987 1 point 3 years ago

Ok, the 111+99 argument was hard.

[–]AlmostButNotQuit 1 point 3 years ago

So we just need a computer the size of a planet to explain 42.
