TechTakes

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

426

Basically: use GPT to help copy an entire web site, then jack their search results and profit, aided by the fact that search engines are shit. This is something you could do before; generative AI just made it faster.

Web2 is going great!

427
428

from the linked github thread:

Your project is in violation of the AGPL, and you have stated this is intentional and you have no plans to open source it. This is breaking the law, and as such I've began to help you with the first steps of re-open sourcing the plugin.

the project author (who gets paid for violating the AGPL via patreon) responds like a mediocre crypto grifter and insists their violation of the law be debated on the discord they control (where their shitty community can shout down the reporter):

While keeping code private doesn't guarantee security, it does make it harder for bad actors to keep up with changes. You are welcome to debate this matter in the MakePlace discord: https://discord.com/invite/YuvcPzCuhq If you are able to convince the MakePlace community that keeping the code open-source is better, I will respect the wishes of the community.

aaaand the smackdown:

Respectfully, I won't attempt to "debate" or "convince" anyone; I'm leaving this pull request and my fork here for others to see and use. It is not a matter of "better"; you are violating a software license and the law. It does not "make it harder" for anyone; Harmony hooking exists, IL modification exists, you can modify plugins from other plugins.

429

and too many on HN fail to realise

430

yes it's a bunch of text images. yes it's worth it

431

Just a lil tidbit of news about some web3-based company going away.

432

[open scene]

background: a brightly lit airy Social Gathering space with multicoloured furniture (meeting rooms are so 2010). people have been arriving in clumps of 2~5 for over 30 minutes, and the presentation can start soon

sundar: I want to thank you all for coming. this one should be quick today.

* sundar briefly sweeps his eyes across the room before continuing *

sundar: guys! GUYS! we made the prompt VIDEO CAPABLE! it can follow A STREAMING SEQUENCE OF IMAGES!! you can immediately start testing this from your corporate account (whispers if you're in the right orgs). for the public scoff, we'll start with Ask Us pricing in a few months, and we'll force it on the usual product avenues. the office and mail suites stand ready to roll out the integration updates before anyone can ask. you know how the riffraff gets....

* some motion and noise in the back *

sundar: ... sorry melanie, what's that? speak up melanie I can't hear your question. you know how much that mask muffles your voice...

* a game of broken telephone for moving a handheld microphone to the back of the room ensues *

melanie: hi sundar, congratulations to the team for their achievement. I wanted to ask: how does gemini pro solve the issues other models have faced? what new innovations have been accomplished? how is it dealing with the usual issues of correctness, energy consumption, cultural contexts? how is it trained on areas where no datasets exist? were any results sourced from cooperation with the AI ethics and responsibility workgroups that have found so many holes in our previous models?

sundar: * smiles brightly, stares directly into middle of crowd. moves hand to the electronic shutter control, and starts pressing the increase button multiple times until shutter is entirely opaque *

[sundar walks off into the fake sunset, breaks open the boardroom whiskey]

[inside the private exec room]

sundar: FUCK! that was too close. didn't we fire those types already in the last layoffs...? someone get me HR, we need to do something

[end scene]

433

from the “i'll drm your arse” and “industrial sabotage r us” department, a true scandal: a polish train manufacturer used firmware to lock up trains that had been serviced at third-party depots, disrupting operations for the railways that chose not to send their trains back to the manufacturer for servicing, all while blaming those same third parties for being unable to service the trains properly.

further reading in polish (but translates well via google): more technical and less technical, but with more political/economic details.

434

archive

"There's absolutely no probability that you're going to see this so-called AGI, where computers are more powerful than people, in the next 12 months. It's going to take years, if not many decades, but I still think the time to focus on safety is now," he said.

just days after poor lil sammyboi and co went out and ran their mouths! the horror!

Sources told Reuters that the warning to OpenAI's board was one factor among a longer list of grievances that led to Altman's firing, as well as concerns over commercializing advances before assessing their risks.

Asked if such a discovery contributed..., but it wasn't fundamentally about a concern like that.

god I want to see the boardroom leaks so bad. STOP TEASING!

“What we really need are safety brakes. Just like you have a safety brake in an elevator, a circuit breaker for electricity, an emergency brake for a bus – there ought to be safety brakes in AI systems that control critical infrastructure, so that they always remain under human control,” Smith added.

this appears to be a vaguely good statement, but I'm gonna (cynically) guess that it's more steered by the fact that MS has now repeatedly burned its fingers on human-interaction AI shit, and is reaaaaal reticent about the impending exposure

wonder if they'll release a business policy update about usage suitability for *GPT and friends

435

In a since deleted thread on another site, I wrote

For the OG effective altruists, it’s imperative to rebrand the kooky ultra-utilitarianists as something else. TESCREAL is the term adopted by their opponents.

Looks like great minds think alike! The EAs need to up their google juice so people searching for the term find malaria nets, not FTX. Good luck on that, Scott!

The HN comments are ok, with this hilarious sentence

I go to LessWrong, ACX, and sometimes EA meetups. Why? Mainly because it's like the HackerNews comment section but in person.

What's the German term for a recommendation that's the exact opposite?

436

Anatoly Karlin @powerfultakes

Replying to @RichardHanania

I'm against legalizing bestiality because the animal consent problem hasn't been solved, but probably actually will be quite soon thanks to AI (at least for the higher animals with complex languages). So why not wait a few more years. I don't see disgust as a good reason. It was an evolutionary adaptation of the agricultural era against the spread of zoonotic illnesses, but technology will soon make that entirely irrelevant as well.

437

With the OpenAI clownshow, there's been renewed media attention on the xrisk/"AI safety"/doomer nonsense. Personally, I've had a fresh wave of reporters asking me naive questions (as well as some contacts from old hands who are on top of how to handle ultra-rich man-children with god complexes).

438
from iucounu on bsky (awful.systems)

439

Sam Altman, the recently fired (and rehired) chief executive of Open AI, was asked earlier this year by his fellow tech billionaire Patrick Collison what he thought of the risks of synthetic biology. ‘I would like to not have another synthetic pathogen cause a global pandemic. I think we can all agree that wasn’t a great experience,’ he replied. ‘Wasn’t that bad compared to what it could have been, but I’m surprised there has not been more global coordination and I think we should have more of that.’

440

Well, that didn't take long lmao

441

archive (e: twitter [archive] too, archive for nitter seems a bit funky)

it'd be nice if these dipshits, like, came off a factory line somewhere. then you could bin them right at the QC failure

442

443

in spite of popular belief, maybe lying your ass off on the orange site is actually a fucking stupid career move

for those who don’t know about Kyle, see our last thread about Cruise. the company also popped up a bit recently when we discussed general orange site nonsense — Paully G was doing his best to make Cruise look like an absolute success after the safety failings of their awful self-driving tech became too obvious to ignore last month

444

I don't know what's going on but I'm loving it.

445

Mr. Altman’s departure follows a deliberative review process [by the board]

"god, he's really cost us... how much can we get back?"

which concluded that he was not consistently candid in his communications with the board

not only with the board, kids

hindering its ability to exercise its responsibilities

you and me both, brother

446

this article is incredibly long and rambly, but please enjoy as this asshole struggles to select random items from an array in presumably JavaScript for what sounds like a basic crossword app:

At one point, we wanted a command that would print a hundred random lines from a dictionary file. I thought about the problem for a few minutes, and, when thinking failed, tried Googling. I made some false starts using what I could gather, and while I did my thing—programming—Ben told GPT-4 what he wanted and got code that ran perfectly.

Fine: commands like those are notoriously fussy, and everybody looks them up anyway.

ah, the NP-complete problem of just fucking pulling the file into memory (there’s no way this clown was burning a rainforest asking ChatGPT for a memory-optimized way to do this), selecting a random index between 0 and the array’s length minus 1, and maybe storing that index in a second array if you want to guarantee uniqueness. there’s definitely not literally thousands of libraries for this if you seriously can’t figure it out yourself, hackerman
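
(for the curious: here's a minimal sketch of that exact naive approach in plain Node.js. the dictionary path and the randomLines name are invented for illustration, and shuf -n 100 on the file would do the same job in one line.)

// illustrative sketch only: read the whole dictionary into memory, pick a
// random index between 0 and length minus 1, and keep already-used indices
// in a second array so the hundred lines are unique
const fs = require("fs");

function randomLines(path, count) {
  const lines = fs.readFileSync(path, "utf8").split("\n").filter(Boolean);
  const used = [];   // the "second array" of already-chosen indices
  const picked = [];
  while (picked.length < count && used.length < lines.length) {
    const i = Math.floor(Math.random() * lines.length);
    if (!used.includes(i)) {
      used.push(i);
      picked.push(lines[i]);
    }
  }
  return picked;
}

console.log(randomLines("/usr/share/dict/words", 100).join("\n"));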

I returned to the crossword project. Our puzzle generator printed its output in an ugly text format, with lines like "s""c""a""r""*""k""u""n""i""s""*" "a""r""e""a". I wanted to turn output like that into a pretty Web page that allowed me to explore the words in the grid, showing scoring information at a glance. But I knew the task would be tricky: each letter had to be tagged with the words it belonged to, both the across and the down. This was a detailed problem, one that could easily consume the better part of an evening.

fuck it’s convenient that every example this chucklefuck gives of ChatGPT helping is for incredibly well-treaded toy and example code. wonder why that is? (check out the author’s other articles for a hint)
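
(and for reference, the "tricky" grid-tagging step is thirty-odd lines. this is only a sketch, under the assumption that the generator output has already been parsed into an array of row strings with * marking black squares; tagGrid and the toy grid below are invented for illustration.)

// illustrative sketch only: tag each letter with the across and down word it belongs to
function tagGrid(rows) {
  const height = rows.length;
  const width = rows[0].length;
  const cellAt = (r, c) => rows[r][c];

  // collect maximal runs of letters in one direction (across: dr=0,dc=1; down: dr=1,dc=0)
  function words(dr, dc) {
    const found = [];
    for (let r = 0; r < height; r++) {
      for (let c = 0; c < width; c++) {
        const startsWord =
          cellAt(r, c) !== "*" &&
          (r - dr < 0 || c - dc < 0 || cellAt(r - dr, c - dc) === "*");
        if (!startsWord) continue;
        let word = "";
        const cells = [];
        let rr = r, cc = c;
        while (rr < height && cc < width && cellAt(rr, cc) !== "*") {
          word += cellAt(rr, cc);
          cells.push([rr, cc]);
          rr += dr;
          cc += dc;
        }
        found.push({ word, cells });
      }
    }
    return found;
  }

  // every cell starts as { letter }, then gets .across and .down filled in
  const tags = rows.map(row => [...row].map(ch => ({ letter: ch })));
  for (const { word, cells } of words(0, 1))
    for (const [r, c] of cells) tags[r][c].across = word;
  for (const { word, cells } of words(1, 0))
    for (const [r, c] of cells) tags[r][c].down = word;
  return tags;
}

// toy usage (made-up 3x5 grid, so the down entries are gibberish):
// tagGrid(["scar*", "kunis", "*area"])[0][2]
//   -> { letter: "a", across: "scar", down: "anr" }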

I thought that my brother was a hacker. Like many programmers, I dreamed of breaking into and controlling remote systems. The point wasn’t to cause mayhem—it was to find hidden places and learn hidden things. “My crime is that of curiosity,” goes “The Hacker’s Manifesto,” written in 1986 by Loyd Blankenship. My favorite scene from the 1995 movie “Hackers” is

most of this article is this type of fluffy cringe, almost like it’s written by a shitty advertiser trying and failing to pass themselves off as a relatable techy

447
448

[this is probably off-topic for this forum, but I found it on HN so...]

Edit "enjoy" the discussion: https://news.ycombinator.com/item?id=38233810

449

nitter archive

just in case you haven't done your daily eye stretches yet, here's a workout challenge! remember to count your reps, and to take a break between paragraphs! duet your score!

oh and, uh.. you may want to hide any loose keyboards before you read this. because you may find yourself wanting to throw something.

450

replaced with essay of lament by creator.

My only hot take: a thing being x amount of good for y amount of people is not justification enough for it to exist despite it being z amount of bad for var amount of people.
