this post was submitted on 01 Oct 2023

TechTakes


After several months of reflection, I’ve come to only one conclusion: a cryptographically secure, decentralized ledger is the only solution to making AI safer.

Quelle surprise

There also needs to be an incentive to contribute training data. People should be rewarded when they choose to contribute their data (DeSo is doing this) and even more so for labeling their data.

Get pennies for enabling the systems that will put you out of work. Sounds like a great deal!

All of this may sound a little ridiculous but it’s not. In fact, the work has already begun by the former CTO of OpenSea.

I dunno, that does make it sound ridiculous.

[–] [email protected] 17 points 1 year ago (1 children)

This comment from the HN discussion is too funny

https://news.ycombinator.com/item?id=37725746

The number of AI safety sessions I’ve joined where the speakers have no real AI experience talking about potentially bad futures, based on zero CS experience and little ‘evidence’ beyond existing sci-fi books and anecdotes, have left me very jaded on the subject as a ‘discipline’.

[–] [email protected] 6 points 1 year ago (1 children)

"who needs to listen to the poet/writers/painters/sculptors/.... anyway? they're just there to make things that look good in my palazzo garden!"

[–] [email protected] 5 points 1 year ago

Yes, there is a lot of bunk AI safety discussions. But there are legitimate concerns as well.

Hey, don't worry, someone's standing up for--

AI is close to human level.

Uh, never mind