this post was submitted on 25 Nov 2024

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 1 year ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week's thread

(Semi-obligatory thanks to @dgerard for starting this - this one was a bit late, I got distracted)

[–] ibt3321@lemmy.blahaj.zone 23 points 3 weeks ago (2 children)

My organic chemistry professor used ChatGPT to write a lab procedure. My other chemistry professor's daughter is VP of AI at Microsoft. AAAAA

[–] istewart@awful.systems 15 points 3 weeks ago (2 children)

Two posts in two weeks about professors using ChatGPT has me questioning my desire to go back to school

[–] skillissuer@discuss.tchncs.de 12 points 3 weeks ago (1 children)

do you have it? i'm in this field and i wonder how badly it fucked up

[–] ibt3321@lemmy.blahaj.zone 12 points 3 weeks ago (2 children)
[–] skillissuer@discuss.tchncs.de 29 points 3 weeks ago* (last edited 3 weeks ago) (2 children)

alright, i'm gonna start by noting that this synthesis - the real one - is not particularly hard or dangerous, and if you know what you are doing it can even be semi-safely done in a shed. that said, it's also not exactly a normal compound, and in a few aspects it's a corner case. importantly for spicy autocomplete, this procedure is well disseminated over the internet

whatever chatgpt cooked up there is not workable at all, because it's three unrelated procedures mashed together in a way that makes little sense. i think i've been able to pick up where some of the weird, disastrous numbers came from. here's some random synthesis: https://web.fscj.edu/milczanowski/eleven/luminol.pdf. it sounds like everyone copies this procedure from everyone else, with small changes if any

the overall synthesis of luminol goes like this: phthalic anhydride (or the acid) gets nitrated to 3-nitrophthalic acid (+ 4-nitrophthalic acid). this step has been omitted and i don't know why, because it's a nice, if a little bit dangerous, synthesis for the first organic lab ever (after basics of purification methods are covered). it requires some care, but more importantly requires an understanding of crystallization, because that's how these two isomers are separated. it's not particularly hard either. anyway, it's not shown there. that would be step zero. step one is synthesis of the hydrazide from the acid, at rather high temperature (200C) in a glycol-type solvent (triethylene glycol or straight-up polyethylene glycol). it's a variant of this method https://repository.ukim.mk/bitstream/20.500.12188/11328/1/XIII_0540.pdf which is rarely discussed, because not many compounds of interest survive these conditions. here, however, it works fine because of a combination of factors, one being the unusual nucleophilicity of hydrazine, the other being the additional driving force coming from the fact that the newly formed ring is aromatic (which is covered in the probable training material, but chatgpt can't infer, so it doesn't matter)

then the second step is reduction of the nitro group to the amine. there are many ways to do this, but if i were to choose, i'd pick some modern method like catalytic hydrogenation on Pd/C, or transfer hydrogenation with formic acid. what can be found out there uses either tin(II) salts or sodium dithionite, or some method using hydrogen sulfide. then again some purification is necessary, for which a good choice would be crystallization. unusually, the hydrochloride salt of luminol has low solubility in water, and this gets useful during workup.

chatgpt instead cooked up something that won't work under any conditions. like i said previously, the first step would be synthesis of the phthalhydrazide directly from the acid, but spicy autocomplete had little info about this type of synthesis in its probable training material, so it doesn't come up. what was there, however, was the actual synthesis of luminol and a variety of amide synthesis methods. these syntheses use some kind of activated acyl group equivalents, starting with anhydrides, acyl halides, isourea derivatives etc. the important bit is anhydrides. anhydrides do form amides, and cyclic anhydrides can be formed on heating. spicy autocomplete conflated the heating in the synthesis of phthalhydrazide with making 3-nitrophthalic anhydride, a step that does not exist in the real synthesis

so let's start with good synthesis, that looks for example like this:

Combine 0.300 g of 3-nitrophthalic acid and 0.4 mL of 10% aqueous hydrazine in a side arm test tube. Heat the test tube over a microburner until the solid dissolves. Add 0.8 mL of triethylene glycol to the test tube. Clamp the test tube to a ring stand, insert a thermometer into the test tube (use a two holed rubber stopper, one hole for the thermometer and the other to make certain that the system is not sealed), and connect the side arm to a vacuum source. Heat the solution to 200 °C and keep the solution at 210–220 °C for two minutes. Allow the test tube to cool to 100 °C and add 4 mL of boiling water to the test tube. Cool the suspension to room temperature by running tap water along the outside of the test tube. Collect the brown solid on a Hirsch funnel.

botslop version:

i'm just gonna highlight worst bits because it's ridiculously, irritatingly verbose

Hydrazine hydrate (80% aqueous solution)

this is 50% hydrazine (the hydrate itself is 64%), more concentrated than the normal grade (32%). the real procedure uses 8% or 10% hydrazine, or something of similar concentration. more dilute hydrazine is a bit safer
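(editorial aside: the concentration arithmetic above is easy to verify; a quick sketch, using molar masses from standard tables)

```python
# Check the commenter's concentration math: "80% aqueous hydrazine hydrate"
# is roughly 50% hydrazine by mass, since the hydrate itself is only ~64% hydrazine.
M_N2H4 = 32.05     # g/mol, hydrazine (N2H4)
M_HYDRATE = 50.06  # g/mol, hydrazine monohydrate (N2H4*H2O)

hydrazine_in_hydrate = M_N2H4 / M_HYDRATE          # ~0.64: the hydrate is ~64% hydrazine
hydrazine_in_80pct = 0.80 * hydrazine_in_hydrate   # ~0.51: an "80%" solution is ~51% hydrazine

print(f"hydrate is {hydrazine_in_hydrate:.0%} hydrazine")
print(f"80% hydrate solution is {hydrazine_in_80pct:.0%} hydrazine")
```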

Step 1: Formation of 3-Nitrophthalic Anhydride

  1. Dehydration of 3-Nitrophthalic Acid:

Place 3-nitrophthalic acid (e.g., 10 g) in a dry round-bottom flask. Heat the flask gradually to 200–220°C under reduced pressure to remove water and form the anhydride. Maintain the temperature until no more water distills off.

zeroth of all, there's no motherfucking "for example" in procedures. a procedure is written for a strictly defined amount of whatever was put in, and i'd also like to see it recalculated into mmols and equivs, so that the student can show they know what they're doing and it's easier to spot mistakes
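(editorial aside: the mmol/equiv bookkeeping being asked for is trivial; a sketch for the real procedure quoted above, where the molar masses and the assumed ~1 g/mL density of the dilute solution are ours, not the source's)

```python
# mmol and equivalents for the real procedure above:
# 0.300 g 3-nitrophthalic acid + 0.4 mL of 10% aqueous hydrazine.
M_ACID = 211.13  # g/mol, 3-nitrophthalic acid (C8H5NO6)
M_N2H4 = 32.05   # g/mol, hydrazine

acid_mmol = 0.300 / M_ACID * 1000                   # ~1.42 mmol acid
# assume ~1.00 g/mL for the dilute aqueous hydrazine solution
hydrazine_mmol = 0.4 * 1.00 * 0.10 / M_N2H4 * 1000  # ~1.25 mmol hydrazine
equivs = hydrazine_mmol / acid_mmol                 # ~0.88 equiv vs the acid

print(f"{acid_mmol:.2f} mmol acid, {hydrazine_mmol:.2f} mmol hydrazine "
      f"({equivs:.2f} equiv)")
```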

this step is completely made up and unnecessary. the 200C temperature was taken from the real synthesis of phthalhydrazide. a distillation manifold appears magically out of thin air when it's necessary to notice that no further water is distilled off, and depending on how hard that vacuum is, this could result in loss of material by evaporation. 3-nitrophthalic acid is a solid, and remains a solid (sigma lists mp as 205C with decomposition), so where would you even put a thermometer? some solvent is necessary for good heat transfer, but none is shown here. in the real procedure one is used. this entire step is footgun number 1

entire substep for cooling? come on

Step 2: Synthesis of 3-Nitrophthalhydrazide

Procedure:

  1. Reaction with Hydrazine:

Dissolve the 3-nitrophthalic anhydride in 50 mL of ethanol in a round-bottom flask.

after this point, what you get instead is ethyl and diethyl 3-nitrophthalate. this is wrong and bad form. again, no scale is provided

Add hydrazine hydrate (excess, e.g., 5 mL) to the solution. Attach a reflux condenser to the flask.

in real life, hydrazinolysis of esters works under much milder conditions, at room temperature. this probably is workable, but for a lab exercise like this it would probably be better to make the ester directly from the acid through Fischer esterification, maybe with a Dean-Stark adapter. this is a very typical preparation. anyway, there's an easier option here, so it's not used. moving on

Pour the mixture into 200 mL of ice-cold water with stirring. Collect the precipitated 3-nitrophthalhydrazide by filtration. Wash the solid with cold water to remove impurities.

when the crude product is used in the next step without purification you are excused, but here, after a purification, even one as rudimentary as this, i'd like to see at minimum the expected yield and melting point (after crystallization). how are you even supposed to grade this

footgun number 2 approaching. reduction of nitro group

Suspend the dried 3-nitrophthalhydrazide in 100 mL of aqueous ethanol (50% ethanol in water) in a round-bottom flask. Add iron powder (excess, e.g., 15 g) to the suspension. Add concentrated hydrochloric acid (e.g., 20 mL) carefully to the mixture.

this, like previously, is not a real procedure. it's badly ripped from the synthesis of aniline from nitrobenzene. while that is a real undergrad-level preparation, i really wish it wasn't. why on god's green earth are you teaching this century-old procedure when we have catalytic hydrogenation? they won't ever compete with the bavarian coal tar dye industry from the year 1904. it's annoying, it's not as high-yielding as it could be, it's wet and dirty, rather harsh, and not green at all, but at least it's kinda real. the real procedure looks like this:

In a two-necked round-bottom flask with a short reflux condenser, put 18.5 g (150 mmol) of nitrobenzene and 30 g (550 mmol) of iron turnings. Add conc. hydrochloric acid in 2 ml portions with shaking, 80 ml total. The reaction mixture heats up strongly and starts boiling. If the reaction is too vigorous, cool it down externally with a water bath. After adding 30 ml of acid, bigger portions can be used. After adding all 80 ml of acid, heat the flask in a boiling water bath for 1 h. When the reduction is done, cool down the flask and alkalinize the reaction mixture with 45 g of NaOH dissolved in 90 ml of water. Aniline is then distilled off with water vapor.

then the aniline is extracted from the distillate (two-phase) with ether, then ultimately distilled on its own. Yield 12 g (84%), bp 182-183C.

the last bit is absolutely critical here. aniline can be distilled off like this; luminol can't. this is important because adding sodium hydroxide to a mixture like this would result in a miserable metal hydroxide yoghurt-like emulsion that under no pretext can be extracted or filtered. this is exactly what spicy autocomplete ended up suggesting:

While still hot, filter the mixture to remove the iron salts (iron oxides). Wash the residue with hot water to extract any remaining product.

these iron salts are more precisely in the form of iron chloride, entirely soluble in the reaction mixture. there's nothing to filter. luminol hydrochloride could probably be precipitated, but only when cold, and only if the reaction mixture is concentrated enough in the first place

Combine the filtrates and cool them in an ice bath.

here luminol hydrochloride can drop out of solution. this is used in the real synthesis, where tin(II) chloride is used precisely to avoid the emulsion

Slowly add a solution of sodium hydroxide (10% NaOH) to the filtrate until the pH reaches around 8–9. Luminol will precipitate out of the solution.

yeah, maybe, along with multiple times its weight of iron hydroxide that is now practically impossible to separate. this is footgun number 2.

none of this shit is workable or even real, unless the point is setting students up to fail

[–] V0ldek@awful.systems 17 points 3 weeks ago (1 children)

none of this shit is workable or even real, unless the point is setting students up to fail

This conclusion applies to literally every single ChatGPT "solution" to a nontrivial problem from any domain I've seen attempted at an undergrad level.

[–] skillissuer@discuss.tchncs.de 15 points 3 weeks ago

this one comes with bonus option of maybe killing whoever relies on this advice

[–] o7___o7@awful.systems 12 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

There should be some kind of mic-drop Hall of Fame to put this in, or maybe just nail it to the chemistry building's front door Martin Luther-style. Holy shit.

[–] skillissuer@discuss.tchncs.de 13 points 3 weeks ago* (last edited 3 weeks ago)

there's more, but i hit character limit

for example, lol, i completely missed this the first time around. notice the bit where you're supposed to add acid in small portions in the aniline synthesis, and how it can get too energetic? (you can also do it the other way: add all the acid at once and pour in the iron powder in small portions. but that's more annoying, because iron powder sticks to the wet condenser, which is the part where it's easiest to pour it in.) well, spicy autocomplete tells you to dump everything in at once. if you get burns from boiling concentrated hydrochloric acid after following a chatgpt procedure, it's on you, i guess; i don't know what else to expect from it. another footgun

and it doesn't even have enough detail in the first place. where are the expected yields, melting points, accurate amounts of everything incl. solvents? how are you supposed to grade it? usually yield and purity are taken into account (purity inferred from the melting point range or boiling point range)
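(editorial aside: as a sanity check on the kind of yield grading being described, here's a quick percent-yield calculation for the real aniline prep quoted earlier; the molar masses are ours, not the source's)

```python
# Yield check for the aniline preparation: 18.5 g nitrobenzene -> 12 g aniline.
M_PHNO2 = 123.11   # g/mol, nitrobenzene (C6H5NO2)
M_ANILINE = 93.13  # g/mol, aniline (C6H5NH2)

n_start = 18.5 / M_PHNO2           # ~0.150 mol, matching the quoted 150 mmol
theoretical = n_start * M_ANILINE  # ~14.0 g aniline at 100% conversion
pct = 12.0 / theoretical * 100     # mid-80s %, in line with the quoted 84%
                                   # (small gap is rounding and workup losses)

print(f"{n_start*1000:.0f} mmol nitrobenzene, theoretical {theoretical:.1f} g, "
      f"yield {pct:.0f}%")
```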

[–] skillissuer@discuss.tchncs.de 10 points 3 weeks ago (2 children)

there are two incredible footguns in there, both of which can be trivially avoided

[–] swlabr@awful.systems 22 points 2 weeks ago (2 children)

presented without comment.

[–] self@awful.systems 16 points 2 weeks ago

1.2 thousand upvotes for the LLM equivalent of adding a little astrology to your holistic medicine. reddit ain’t ok

[–] mii@awful.systems 14 points 2 weeks ago

Promptfondlers too lazy to even fondle prompts anymore. I’m sure this is the prime target demographic for Elon’s brain chips.

[–] maol@awful.systems 22 points 3 weeks ago* (last edited 3 weeks ago) (2 children)

Eugenics in action:

Danish parenting tests under fire after baby removed from Greenlandic mother

Psychometric tests are widely used in Denmark as part of child protection investigations into new parents, and have long been criticised by human rights bodies as culturally unsuitable for Greenlandic people and other minorities.

In a 2022 report, the institute said that because the tests were not adapted to take cultural differences into account, Greenlandic parents ran “the risk of obtaining low test scores, so that it is concluded, for example, that they have reduced cognitive abilities, without there being actual evidence for this."

Psychological assessments of her were made by a Danish-speaking psychologist. Kronvold, whose first language is Kalaallisut (West Greenlandic), is not fluent in Danish.

[–] slopjockey@awful.systems 17 points 3 weeks ago (1 children)

Oh man that is so grim

Kronvold, 38, was given an FKU test in 2014 before the birth of her second child, a boy, and again recently while pregnant with her third child. Speaking through an intermediary, she told the Guardian that on this last occasion she was told it was to see if she was “civilised enough”.

[–] swlabr@awful.systems 10 points 3 weeks ago

billy spears got it right. Something is rotten in the state of Denmark

[–] BlueMonday1984@awful.systems 19 points 3 weeks ago (1 children)

Starting things off with a fresh post from Brian Merchant: Tech under Trump, part 1

[–] Soyweiser@awful.systems 12 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Sidenote: Love how the tech VCs all grew up in the media landscape of tech workers going 'the management of this company is a group of idiots' and then didn't think that would apply to themselves.

[–] self@awful.systems 19 points 3 weeks ago (2 children)

the richest boy in the world sued to stop The Onion from turning infowars into a parody of itself on the grounds that he thinks infowars’ twitter accounts shouldn’t be transferred as part of the bankruptcy even though that’s something that happens constantly and also wouldn’t impact the rest of the bankruptcy proceedings even if it were grounded in anything resembling fact

Musk has also tweeted occasionally that he believes The Onion is not funny.

it’s getting really hard to adequately describe how funny musk isn’t. it’s not just try-hard shit like the weird sink thing, the soul-sucking cameos, or the fact that he’s literally throwing his money into stopping a comedy site from existing — it’s everything taken as a whole. I’d call him anti-comedy, but he’s so much less interesting than that implies

[–] self@awful.systems 15 points 3 weeks ago (2 children)

after going closed-source, redis is now doing a matt and trying to use trademark to take control over community-run projects. stay tuned to the end of the linked github thread where somebody spots their endgame

this is becoming a real pattern, and it might deserve a longer analysis in the form of a blog post

[–] wizardbeard@lemmy.dbzer0.com 15 points 2 weeks ago (4 children)

Police are openly admitting to using chatGPT to hallucinate reports. I'm sure they were before, but now they're comfortable enough to admit to it.

Nothing could possibly go wrong with this. Nothing at all.

[–] froztbyte@awful.systems 14 points 3 weeks ago (3 children)

currently in vc delusion, the public just doesn’t understand how to move about efficiently

the levels of not-even-wrong from these dipshits continue to be astounding

[–] istewart@awful.systems 16 points 3 weeks ago (1 children)

If you asked people what they wanted, they would say a car that drives itself

[–] Soyweiser@awful.systems 14 points 3 weeks ago

Also, this plan very much has a 'fuck disabled people and old people' factor. And what a lonely world they live in.

[–] sailor_sega_saturn@awful.systems 11 points 3 weeks ago (10 children)

If I mathed right that'd be one waymo every 350 feet of road on average. Is that a lot? It sounds like it might be a lot. Especially since self-driving cars greatest weakness appears to be driving in the vicinity of other self-driving cars.

[–] Architeuthis@awful.systems 14 points 2 weeks ago (1 children)

NASB does anybody else think the sudden influx of articles (from kurzgesagt to recent wapo) pushing the idea that you can't lose weight by exercise has anything to do with Ozempic being aggressively marketed at the same time?

[–] sailor_sega_saturn@awful.systems 13 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

I woke up and immediately read about something called "Defense Llama". The horrors are never ceasing: https://theintercept.com/2024/11/24/defense-llama-meta-military/

Scale AI advertised their chatbot as being able to:

apply the power of generative AI to their unique use cases, such as planning military or intelligence operations and understanding adversary vulnerabilities

However their marketing material, as is tradition, includes an example of terrible advice. Which is not great, given it's about blowing up a building "while minimizing collateral damage".

Scale AI's response to the news pointing this out -- complaining that everyone took their murderbot marketing material seriously:

The claim that a response from a hypothetical website example represents what actually comes from a deployed, fine-tuned LLM that is trained on relevant materials for an end user is ridiculous.

[–] BlueMonday1984@awful.systems 12 points 3 weeks ago (6 children)

On the one hand, that spectacular failure could potentially dissuade the military from buying in and prolonging this bubble. On the other hand, having an accountability sink for war crimes would be a tempting offer to your average army.

[–] istewart@awful.systems 11 points 3 weeks ago

The eventual war crimes trials will very likely reveal that "AI targeting" has already been used as an accountability sink for a premeditated ethnic cleansing policy in Gaza.

[–] maol@awful.systems 13 points 2 weeks ago

Some anti-AI propaganda via spellingmistakescostlives

[–] Architeuthis@awful.systems 12 points 2 weeks ago (5 children)

The old place on reddit has a tweet up by aella where she goes on a small evo-psych tirade about how, since there's been an enormous amount of raid-related kidnapping and rape in prehistory, it stands to reason that women who enjoyed that sort of thing had an evolutionary advantage, and so that's why most women today... eugh.

I wonder where the superforecasters stand on aella being outed as a ghislaine maxwell-type fixer for the tescreal high priesthood.

[–] wizardbeard@lemmy.dbzer0.com 12 points 2 weeks ago

Nothing huge, but some wonderful examples of people trying to rules lawyer fucking dictionary definitions of plagiarism in this HackerNews thread about the parents that sued the school because their kid got in trouble for copy pasting from an LLM.

Thankfully the case was ruled in the school's favor. Just got a laugh out of some of the comments that are just unintentional satire of stereotypical HN comments.

[–] JFranek@awful.systems 11 points 3 weeks ago (1 children)

The promptfans testing OpenAI Sora have gotten mad that it's happening to them and (temporarily) leaked access to the API.

https://techcrunch.com/2024/11/26/artists-appears-to-have-leaked-access-to-openais-sora/

“Hundreds of artists provide unpaid labor through bug testing, feedback and experimental work for the [Sora early access] program for a $150B valued [sic] company,” the group, which calls itself “Sora PR Puppets,” wrote in a post ...

"Well, they didn't compensate actual artists, but surely they will compensate us."

“This early access program appears to be less about creative expression and critique, and more about PR and advertisement.”

OK, I could give them the benefit of the doubt: maybe they're new to the GenAI space, or general ML Space ... or IT.

But I'm not going to. Of course it's about PR hype.

[–] V0ldek@awful.systems 11 points 3 weeks ago (15 children)

This is completely off topic I think but I need you all to see this, it's important on a spiritual level

This map is infinitely sneerable, every region you look at is somehow worse than the previous one, regardless of the order in which you do that.

Tag yourself, I'm Cracked Coast, population 17.

[–] Amoeba_Girl@awful.systems 13 points 3 weeks ago (5 children)

someone needs to stop paradox gamers from interfacing with the real world

[–] Soyweiser@awful.systems 13 points 3 weeks ago (1 children)

Unifying belgium and the Netherlands makes me think really bad things about the map designers. People who want that are either fools who don't know much about the region, or white nationalist fascists. (They often also want SA included)

[–] skillissuer@discuss.tchncs.de 10 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

caucasus would be an even bigger shitshow than it is now. no chechnya or ingushetia, but azerbaijan now has half of current armenian land + the iranian province of the same name. that's weirdly specific and suspicious

[–] sailor_sega_saturn@awful.systems 11 points 3 weeks ago

Ah yes Africa, the small country on the northern coast of Africa.

[–] dgerard@awful.systems 11 points 3 weeks ago (1 children)
[–] V0ldek@awful.systems 14 points 3 weeks ago (2 children)

Software licensing is notoriously labyrinthine, so resources like the site Microsoft will close – Get Licensing Ready – can be very handy. Today, the site offers over 50 training modules plus documentation.

I'm sorry, mister MSFT, why did you cause there to be more educational content about your stupid licenses than there is for theoretical physics in an undergrad programme, have you ever considered that it's time to stop? Get some help?

[–] corbin@awful.systems 10 points 3 weeks ago (1 children)

John "Animats" Nagle choosing the most racist angle possible to respond to problems in education. The topic is giftedness and yet Nagle needs to start with "Ashkenazi Jews".

[–] swlabr@awful.systems 10 points 2 weeks ago (2 children)

I listen to a podcast about black/african american conspiracy theories, and I got a podcast ad from Ed Zitron, so there’s that.

[–] BlueMonday1984@awful.systems 10 points 2 weeks ago

New post from Brian Merchant: No thanks to generative AI, which is about AI-run publisher Spines and their attempt to enshittify the literature world. Pulling a paragraph near the end here:

For another, the needle can move here; if the noise is loud enough, AI publishing can get slapped with a stigma that can at least help slow the erosion of the industry. Public shame can be a powerful tool, when warranted! So yeah: This is why I’m thankful that we’re building this community, and that there are people out there willing to go to the mat to oppose things like the AI-enabled automation of book production. (I fully resent that ‘AI enabled automation of book production’ is a phrase I had to write in 2024.)

Giving my thoughts, I feel Merchant and co. have a headstart when it comes to moving the needle here, for two main reasons:

  • AI has been thoroughly stripped of whatever "wow factor" it once had - showing off that your gen-AI system can make books isn't gonna impress Joe Public the way it would've back in '22 or '23.

  • The one-two punch of the slop-nami and the plagiarism lawsuits has indelibly associated "AI" as a concept with "zero effort garbage made of stolen shit" - as a consequence, using or supporting it will immediately disgust a good portion of the crowd right out of the gate
