rook

joined 1 year ago
[–] [email protected] 7 points 2 months ago (6 children)

Valsorda was on Mastodon for a bit (in ‘22 maybe?) and was quite keen on it, but left after a bunch of people got really pissy at him over one of his projects. I can’t actually recall what it even was, but his argument was that people posted stuff publicly on Mastodon, so he should be able to do what he liked with those posts even if they asked him not to. I can see why he might not have a problem with LLMs.

Anyone remember what he was actually doing? Text search or network tracing or something else?

[–] [email protected] 16 points 2 months ago

One to keep an eye on… you might all know this already, but apparently Mozilla has an “add AI chatbot to sidebar” feature in Firefox Labs (https://blog.nightly.mozilla.org/2024/06/24/experimenting-with-ai-services-in-nightly/ and available in at least v130). You can currently choose from a selection of public LLM providers, similar to the search provider choice.

Clearly, Mozilla has its share of AI boosters, given that they forced “ai help” onto MDN against a significant amount of protest (see https://github.com/mdn/yari/issues/9230 from last July for example) so I expect this stuff to proceed apace.

This is fine, because Mozilla clearly has time and money to spare with nothing else useful they could be doing, alternative browsers are readily available and there has never been any anti-ai backlash to adding this sort of stuff to any other project.

[–] [email protected] 7 points 2 months ago (1 children)

Looking at both cohost and tumblr, I don’t think the funder has an asset that’s worth very much.

[–] [email protected] 10 points 2 months ago (9 children)

Cohost going readonly at the end of this month, and shutting down at the end of the year: https://cohost.org/staff/post/7611443-cohost-to-shut-down

Their radical idea of building a social network that did not require either VC funding or large amounts of volunteer labour has come to a disappointing, if not entirely surprising, end. Going in without a great idea of how to monetise the thing was probably not the best strategy, as it turns out.

[–] [email protected] 7 points 2 months ago

One or more of the following:

  • they don’t bother with ai at all, but pretending they do helps with sales and marketing to the gullible
  • they have ai but it is totally shit, and they have to mechanical turk everything to have a functioning system at all
  • they have shit ai, but they’re trying to make it better and the humans are there to generate test and training data annotations

[–] [email protected] 3 points 2 months ago

When we hit AGI, if we can continue to keep open source models, it will truly take the power of the rich and put it in the hands of the common person.

Setting aside the “and then a miracle occurs” bit, this basically seems to be “rich people get to have servants and slaves… what if we democratised that?”. Maybe AGI will invent a new kind of ethics for us.

But the rich can multiply that effort by however many people they can afford.

If the hardware to train and run what currently passes for AI were cheap and trivially replicable, Jensen Huang wouldn’t be out there signing boobs.

[–] [email protected] 12 points 2 months ago (4 children)

Sounds like he’s been huffing too much of whatever the neoreactionaries offgas. Seems to be the inevitable end result of a certain kind of techbro refusing to learn from history, and imagining themselves to be some sort of future grand vizier in the new regime…

[–] [email protected] 11 points 2 months ago (2 children)

Interview with the president of the signal foundation: https://www.wired.com/story/meredith-whittaker-signal/

There’s a bunch of interesting stuff in there: the observation that LLMs and the broader “ai” “industry” were made possible thanks to surveillance capitalism, but also the link between advertising and the algorithmic selection of human targets for military action, which seems obvious in retrospect but I hadn’t spotted before.

But in 2017, I found out about the DOD contract to build AI-based drone targeting and surveillance for the US military, in the context of a war that had pioneered the signature strike.

What’s a signature strike?

A signature strike is effectively ad targeting but for death. So I don’t actually know who you are as a human being. All I know is that there’s a data profile that has been identified by my system that matches whatever the example data profile we could sort and compile, that we assume to be Taliban related or it’s terrorist related.

[–] [email protected] 6 points 2 months ago

You should try what these folk are selling.

https://h2o4u.ca/

[–] [email protected] 1 points 2 months ago

WONTFIX: system working as designed.

[–] [email protected] 1 points 2 months ago

To my limited knowledge, no, for various values of “someone”. It is just a sort of malign beige juggernaut that’s shitty all by itself without needing external direction.

[–] [email protected] 1 points 2 months ago* (last edited 2 months ago) (2 children)

I have faith in the ability of the UK public sector (or rather, the relentlessly incompetent outsourcers they hire) to catastrophically fuck up the delivery of any software project.

For example, Capita has already lined up at the trough: https://www.capita.co.uk/news/capita-advances-approach-next-generation-ai-microsoft

If you’re unfamiliar with Capita, that’s probably a good thing. I’m not aware that they’ve ever been successful at anything, other than their continued ability to fleece the government. They’re basically too big to fail in the UK, because HMG’s procurement processes mean it basically can’t stop giving them money.
