It's from today's acx, apparently they offered free samples to the siskind household.
edit: this is not sarcasm.
I thought this looked kind of interesting and not especially grifty apart from the usual anti-FDA narrative up until
The plan is:
Phase 1: (January 2024) Sell to biohackers in Prospera for $20,000.
By good luck, Prospera will soon be hosting a two month super-conference of biotech and crypto entrepreneurs/enthusiasts.
whence my eyes did an uzumaki. And also Aella is somehow involved, apparently.
Give them the benefit of the doubt, sure, maybe they are pandering this hard to an audience that self-selects for gullibility and cultbrain and against consumer protections strictly for the greater good, merely accelerating what should have already happened if it weren't for those meddlin' kids at the FDA.
Siskind's always been pretty explicit about it, you get banned if you broach so-called culture war topics in the open threads. They ended up with two subreddits for similar reasons.
Between this and EY seemingly being a huge fan of employing gullibility filters, one might argue that many of the most prominent rationalists aren't necessarily the most candid of characters.
Late to the party and never having done advents before, I liked how this problem reminded me that tree traversal is a thing, almost as much as I don't like that so much of my career involves powershell now.
I'm putting everything up at https://github.com/SpaceAntelope/advent-of-code-2023 except the input files.
I'm using command abbreviations like % and ? to keep the horizontal length friendly to lemmy post areas; they are expanded in the git version.
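For anyone unfamiliar with the shorthand, % and ? are built-in PowerShell aliases for ForEach-Object and Where-Object; a quick illustration:

```powershell
# % is an alias for ForEach-Object, ? for Where-Object
1..5 | ? { $_ -gt 2 } | % { $_ * 10 }   # 30 40 50

# the same pipeline fully expanded, as it appears in the repo
1..5 | Where-Object { $_ -gt 2 } | ForEach-Object { $_ * 10 }
```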
Part 2 in PowerShell
function calculate([string]$data) {
# code for parsing data and calculating matches from pt1 here, check the github link if you like banal regexps
# returns objects with the relevant fields being the card index and the match count
}
function calculateAccumulatedCards($data) {
$cards = calculate $data # do pt1 calculations
$cards
| ? MatchCount -gt 0 # otherwise the losing card becomes its own child and the search cycles to overflow
| % {
$children = ($_.Index + 1) .. ($_.Index + $_.MatchCount) # range of numbers corresponding to indices of cards won
| % { $cards[$_ - 1] } # map to the actual cards
| ? { $null -ne $_ } # filter out overflow when index exceeds input length
$_ | Add-Member -NotePropertyName Children -NotePropertyValue $children # add cards gained as children property
}
# do depth first search on every card and its branching children while counting every node
# the recursive function is inlined in the foreach block because it's simpler than referencing it
# from outside the parallel scope
$cards | % -Parallel {
function traverse($card) {
$script:count++
foreach ($c in $card.Children) { traverse($c) }
}
        $script:count = 0 # each -Parallel runspace gets its own script: scope, so the counters don't collide across threads
traverse $_
$script:count # pass node count to pipeline
}
| measure -sum
| % sum
}
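Incidentally, since cards only ever win cards below them, the same total can be computed in one forward pass with no recursion or parallelism. A hypothetical sketch (countCopiesIteratively is my name, not from the repo), assuming the same Index (1-based) and MatchCount fields that calculate produces:

```powershell
# Hypothetical alternative: tally copies iteratively instead of doing DFS per card.
# Assumes $cards objects carry Index (1-based) and MatchCount, as in calculate.
function countCopiesIteratively($cards) {
    $copies = @(1) * $cards.Count # start with one copy of every card
    foreach ($card in $cards) {
        $i = $card.Index - 1 # 0-based position of this card
        # every copy of card i wins one copy of each of the next MatchCount cards
        for ($j = $i + 1; $j -le $i + $card.MatchCount -and $j -lt $cards.Count; $j++) {
            $copies[$j] += $copies[$i]
        }
    }
    ($copies | measure -Sum).Sum
}
```

Each card's copy count is final before any card it wins is visited, so a single pass suffices and overflow past the end of the input is handled by the loop bound instead of a null filter.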
Must be another one of those hundred hostile bloggers who inexplicably have it in for EA, according to siskind.
To be clear, it's because he played Edward Snowden in a movie. That's the conspiracy.
'We are the sole custodians of this godlike technology that we can barely control but that we will let you access for a fee' has been a mainstay of OpenAI marketing as long as Altman has been CEO, it's really no surprise this was 'leaked' as soon as he was back in charge.
It works, too! Anthropic just announced they are giving chat access to a 200k token context model (chatgpt4 is <10k I think) where they supposedly cut the rate of hallucinations in half, and it barely made headlines.
So far my favorite theory is that he was fired for not being enough of an AI doomer, i.e. for deprioritizing AI safety and diverting too many resources from research to product, all the while only paying lip service to the ea/rationalist orthodoxy about heading off the impending birth of AI Cthulhu.
Any remaining weirdness can be explained away by the OpenAI board being sheltered oddballs who honestly thought they could boot the CEO and face of the company on a dime and without repercussions, in order to bring in an ideologically purer replacement.
It certainly looks like more of the same. Maybe the negative publicity on EA is a silver lining.
This certainly looks like both venture and established capital saying that while it was fun pretending to take EA concerns about AI seriously, it's time to move on.
Also, the increasing number of anti-EA effortposts cropping up in the OpenAI subreddit over the last few days is delightful.
Every ends-justify-the-means worldview has a defense for terrorism readily baked in.
/r/buttcoin seems to have it handled, but it might be worth it to have an alternative just in case, especially if @[email protected] is going to be contributing.