TechTakes

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

well this sure is gonna go well :sarcmark:

it almost feels like when Google+ got shoved into every google product because someone had a bee in their bonnet

flipside, I guess, is that we'll soon (at scale!) get to start seeing just how far those ideas can and can't scale

Title is ... editorialized.

Don't mind me I'm just here to silently scream into the void

Edit: I'm no good at linking to HN apparently, made link more stable.

submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]

Title quote stolen from JWZ: https://www.jwz.org/blog/2023/10/the-best-way-to-profit-from-ai/

Yet again, the best way to profit from a gold rush is to sell shovels.

Non-paywalled link: https://archive.ph/9Hihf

In his latest NYT column, Ezra Klein identifies the neoreactionary philosophy at the core of Marc Andreessen's recent excrescence on so-called "techno-optimism". It wasn't exactly a difficult analysis, given the way Andreessen outright lists a gaggle of neoreactionaries as the inspiration for his screed.

But when Andreessen included "existential risk" and transhumanism on his list of enemy ideas, I'm sure the rationalists and EAs were feeling at least a little bit offended. Klein, as a co-founder of Vox and of its EA-promoting "Future Perfect" vertical, was probably among those who felt targeted. He has certainly bought into the rationalist AI doomer bullshit, so you know where he stands.

So have at it, Marc and Ezra. Fight. And maybe take each other out.

One reason that, three and a half years later, Andreessen is reiterating that “it’s time to build” instead of writing posts called “Here’s What I Built During the Building Time I Previously Announced Was Commencing” is that Marc Andreessen has not really built much of anything.

submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]

I don’t really have much to say… it kind of speaks for itself. I do appreciate the table of contents so you don’t get lost in the short paragraphs though

archive.org | archive.is

this is almost NSFW? some choice snippets:

more than 1.5 million people have used it and it is helping build nearly half of Copilot users’ code

Individuals pay $10 a month for the AI assistant. In the first few months of this year, the company was losing on average more than $20 a month per user, according to a person familiar with the figures, who said some users were costing the company as much as $80 a month.

good thing it's so good that everyone will use it amirite

starting around $13 for the basic Microsoft 365 office-software suite for business customers—the company will charge an additional $30 a month for the AI-infused version.

Google, ..., will also be charging $30 a month on top of the regular subscription fee, which starts at $6 a month

I wonder how long they'll try that, until they try forcing it on everyone (and raise all prices by some n%)
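Back-of-envelope, using only the figures quoted above (a sketch, not anyone's actual accounting):

```python
# All inputs are the reported figures quoted above; the rest is arithmetic.
copilot_price = 10     # $/user/month for the individual Copilot plan
avg_monthly_loss = 20  # reported average loss per user per month
heavy_user_cost = 80   # reported cost to serve the heaviest users

avg_cost_to_serve = copilot_price + avg_monthly_loss  # ~$30/user/month
heavy_user_loss = heavy_user_cost - copilot_price     # ~$70/user/month lost

print(avg_cost_to_serve, heavy_user_loss)  # 30 70
```

so the $30/month office-suite add-ons land at roughly the average cost of serving a code-Copilot user today, with nothing left over for the heavy tail, assuming those costs generalize at all.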

Carole Piovesan (formerly of McCarthy Tétrault, now at INQ Law) describes this as a "step in the process to introducing some more sort of enforceable measures".

In this case the code of conduct has some fairly innocuous things. Managing risk, curating to avoid biases, safeguarding against malicious use. It's your basic industrial safety government boilerplate as applied to AI. Here, read it for yourself:

https://ised-isde.canada.ca/site/ised/en/voluntary-code-conduct-responsible-development-and-management-advanced-generative-ai-systems

Now of course our country's captains of industry have certain reservations. One CEO of a prominent Canadian firm writes that "We don’t need more referees in Canada. We need more builders."

https://twitter.com/tobi/status/1707017494844547161

Another who you will recognize from my prior post (https://awful.systems/post/298283) is noted in the CBC article as concerned about "the ability to put a stifling growth in the industry". I am of course puzzled about this concern. Surely companies building these products are trivially capable of complying with such a basic code of conduct?

For my part I have difficulty seeing exactly how "testing methods and measures to assess and mitigate risk of biased output" and "creating safeguards against malicious use" would stifle industry and reduce building. My lack of foresight in this regard could be why I am a scrub behind a desk instead of a CEO.

Oh, and for bonus Canadian content, the name Desmarais from the photo (next to the Minister of Industry) tweaked my memory. Oh right, those Desmarais. Canada will keep on Canada'ing to the end.

https://dailynews.mcmaster.ca/articles/helene-and-paul-desmarais-change-agents-and-business-titans/

https://en.wikipedia.org/wiki/Power_Corporation_of_Canada#Politics

Representative take:

If you ask Stable Diffusion for a picture of a cat it always seems to produce images of healthy looking domestic cats. For the prompt "cat" to be unbiased Stable Diffusion would need to occasionally generate images of dead white tigers since this would also fit under the label of "cat".

Source: nitter, twitter

Transcribed:

Max Tegmark (@tegmark):
No, LLM's aren't mere stochastic parrots: Llama-2 contains a detailed model of the world, quite literally! We even discover a "longitude neuron"

Wes Gurnee (@wesg52):
Do language models have an internal world model? A sense of time? At multiple spatiotemporal scales?
In a new paper with @tegmark we provide evidence that they do by finding a literal map of the world inside the activations of Llama-2! [image with colorful dots on a map]


With this dastardly deliberate simplification of what it means to have a world model, we've been struck a mortal blow in our skepticism towards LLMs; we have no choice but to convert, surely!

(*) Asterisk:
Not an actual literal map. What they really mean is that they've trained "linear probes" (each its own mini-model) on the activation layers for a bunch of inputs, minimizing loss against latitude and longitude (and/or time, blah blah).

And yes, from the activations you can get a fuzzy distribution of lat,long on a map, and yes, they've been able to isolate individual "neurons" that seem to correlate in activation with latitude and longitude. (Frankly, not being able to find one would have surprised me; this doesn't mean LLMs aren't just big statistical machines, in this case trained on data containing literal lat,long tuples for cities in particular.)

It's a neat visualization and result, but it is sort of comically missing the point.
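For a sense of how little machinery is involved, here's a minimal sketch of that probe setup, with random arrays standing in for real Llama-2 activations and real coordinates (every name, shape, and number here is illustrative; this is not the paper's code):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

n_places, d_model = 5000, 4096  # stand-ins for dataset size and model width
rng = np.random.default_rng(0)

# In the real setup these would be hidden-layer activations captured while
# the model reads place names, paired with each place's known coordinates.
activations = rng.normal(size=(n_places, d_model))
coords = rng.uniform([-90.0, -180.0], [90.0, 180.0], size=(n_places, 2))

X_tr, X_te, y_tr, y_te = train_test_split(
    activations, coords, test_size=0.2, random_state=0
)

# The probe itself: a single regularized linear map from activation space
# to (lat, long). That's the whole "literal map of the world".
probe = Ridge(alpha=1.0).fit(X_tr, y_tr)

# On real activations, a high held-out R^2 is the paper's result:
# location is linearly decodable from one layer's activations.
print("held-out R^2:", probe.score(X_te, y_te))
```

Which, again: finding that a model trained on text full of city coordinates linearly encodes coordinates is neat, but it isn't a refutation of "big statistical machine".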


Bonus sneers from @emilymbender:

  • You know what's most striking about this graphic? It's not that mentions of people/cities/etc from different continents cluster together in terms of word co-occurrences. It's just how sparse the data from the Global South are. -- Also, no, that's not what "world model" means if you're talking about the relevance of world models to language understanding. (source)
  • "We can overlay it on a map" != "world model" (source)

Direct link to the video

B-b-but he didn't cite his sources!!

After several months of reflection, I’ve come to only one conclusion: a cryptographically secure, decentralized ledger is the only solution to making AI safer.

Quelle surprise

There also needs to be an incentive to contribute training data. People should be rewarded when they choose to contribute their data (DeSo is doing this) and even more so for labeling their data.

Get pennies for enabling the systems that will put you out of work. Sounds like a great deal!

All of this may sound a little ridiculous but it’s not. In fact, the work has already begun by the former CTO of OpenSea.

I dunno, that does make it sound ridiculous.

The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanism. We’re looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

“Whoops, it’s done now, oh well, guess we’ll have to do it later”

Go fucking directly to jail

These experts on AI are here to help us understand important things about AI.

Who are these generous, helpful experts that the CBC found, you ask?

"Dr. Muhammad Mamdani, vice-president of data science and advanced analytics at Unity Health Toronto", per LinkedIn a PharmD, who also serves in various AI-associated centres and institutes.

"(Jeff) Macpherson is a director and co-founder at Xagency.AI", a tech startup which does, uh, lots of stuff with AI (see their wild services page) that appears to have been announced on LinkedIn two months ago. The founders section lists other details apart from J.M.'s "over 7 years in the tech sector" which are interesting to read in light of J.M.'s own LinkedIn page.

Other people making points in this article:

C. L. Polk, award-winning author (of Witchmark).

"Illustrator Martin Deschatelets" whose employment prospects are dimming this year (and who knows a bunch of people in this situation), who per LinkedIn has worked on some nifty things.

"Ottawa economist Armine Yalnizyan", per LinkedIn a fellow at the Atkinson Foundation who used to work at the Canadian Centre for Policy Alternatives.

Could the CBC actually seriously not find anybody willing to discuss the actual technology and how it gets its results? This is archetypal hood-welded-shut sort of stuff.

Things I picked out, from article and round table (before the video stopped playing):

Does that Unity Health doctor go back later and check these emergency room intake predictions against actual cases appearing there?

Who is the "we" who have to adapt here?

AI is apparently "something that can tell you how many cows are in the world" (J.M.). Detecting a lack of results validation here again.

"At the end of the day that's what it's all for. The efficiency, the productivity, to put profit in all of our pockets", from J.M.

"You now have the opportunity to become a Prompt Engineer", from J.M. to the author and illustrator. (It's worth watching the video to listen to this person.)

Me about the article:

I'm feeling that same underwhelming "is this it" bewilderment again.

Me about the video:

Critical thinking and ethics and "how software products work in practice" classes for everybody in this industry please.

I found this searching for information on how to program for the old Commodore Amiga’s HAM (Hold And Modify) video mode and you gotta touch and feel this one to sneer at it, cause I haven’t seen a website this aggressively shitty since Flash died. the content isn’t even worth quoting as it’s just LLM-generated bullshit meant to SEO this shit site into the top result for an existing term (which worked), but just clicking around and scrolling on this site will expose you to an incredible density of laggy, broken full screen animations that take way too long to complete and block reading content until they’re done, alongside a long list of other good design sense violations (find your favorites!)

bonus sneer: arguably I’m finally taking up Amiga programming as an escape from all this AI bullshit. well fuck me I guess cause here’s one of the vultures in the retrocomputing space selling an enshittified (and very ugly) version of AmigaOS with a ChatGPT app and an AI art generator, cause not even operating on a 30 year old computer will spare me this bullshit:

like fuck man, all I want to do is trick a video chipset from 1985 into making pretty colors. am I seriously gonna have to barge screaming into another German demoscene IRC channel?
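(for anyone else who washes up here looking for the actual HAM answer instead of SEO sludge: the gist, as a sketch in Python rather than 68k assembly. the control-bit scheme is the standard OCS HAM6 one; everything else, names and data included, is illustrative)

```python
# HAM6: each pixel is 6 bits. The top 2 control bits pick the operation and
# the low 4 data bits either index the 16-color palette or replace exactly
# one channel of the previous pixel's color (hence "Hold And Modify").

def decode_ham6_row(pixels, palette, border=(0, 0, 0)):
    """pixels: 6-bit values; palette: 16 (r, g, b) tuples, 4 bits/channel."""
    out, prev = [], border  # each row starts from the border/background color
    for p in pixels:
        ctrl, data = (p >> 4) & 0b11, p & 0b1111
        r, g, b = prev
        if ctrl == 0b00:      # set: take a full color from the palette
            prev = palette[data]
        elif ctrl == 0b01:    # hold red and green, modify blue
            prev = (r, g, data)
        elif ctrl == 0b10:    # hold green and blue, modify red
            prev = (data, g, b)
        else:                 # 0b11: hold red and blue, modify green
            prev = (r, data, b)
        out.append(prev)
    return out

palette = [(i, i, i) for i in range(16)]  # toy 16-entry greyscale palette
print(decode_ham6_row([0b000001, 0b011111, 0b101000], palette))
# [(1, 1, 1), (1, 1, 15), (8, 1, 15)]
```

that's it: 4096 colors out of 6 bitplanes, at the price of fringing artifacts whenever you need to change more than one channel between neighboring pixels.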
