this post was submitted on 17 Nov 2023
424 points (99.5% liked)

Technology

Google is embedding inaudible watermarks right into its AI generated music::Audio created using Google DeepMind’s AI Lyria model will be watermarked with SynthID to let people identify its AI-generated origins after the fact.

top 50 comments
[–] [email protected] 91 points 1 year ago (9 children)

People are listening to AI-generated music? Someone on Bluesky put it best (paraphrased slightly):

If they couldn't put time into creating it, I'm not going to put time into listening to it.

[–] [email protected] 15 points 1 year ago (1 children)

People are using AI tools to do crazy stuff with music right now. It's pretty great

Human performance but AI voice: https://www.youtube.com/watch?v=gbbUWU-0GGE

Carl Wheezer covers: https://www.youtube.com/watch?v=65BrEZxZIVQ

[–] [email protected] 9 points 1 year ago (1 children)

Here is an alternative Piped link(s):

https://www.piped.video/watch?v=gbbUWU-0GGE

https://www.piped.video/watch?v=65BrEZxZIVQ

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.

[–] [email protected] 4 points 1 year ago

You tell 'em, bot. 🙌🏽

[–] [email protected] 11 points 1 year ago (1 children)

Can it be much different from the mass-market auto-tuned pap that gets put out today?

[–] [email protected] 9 points 1 year ago (2 children)

My own feelings on the matter aside (fuck Google and all that), this has been chased after for a long time. The famous composer Raymond Scott dedicated the latter part of his life to trying to create a machine that did exactly this. Many famous musical creators, such as Michael Jackson, were fascinated by the machine and wanted to use it. The problem was that it was never "finished". The machine worked and it could generate music; it's immensely fascinating in my opinion.

If you want more information in podcast format, check out episode 542 of 99% Invisible, or here: https://www.thelastarchive.com/season-4/episode-one-piano-player

They go into the people who opposed Scott and why, and also talk about the emotion behind music and the artists, and whether it would even work. The most fascinating part of it all is that the machine was more or less forgotten and it no longer works. Some currently famous musicians are working together to restore it.

The question then is: if someone made this machine their life's work, and modern musicians spend an immense amount of time restoring it, then when the machine creates music, does that mean no one spent time on it? I enjoy debating the philosophy behind the idea in my head, especially since I have a much more negative view when a modern version of this is done by Google.

[–] [email protected] 9 points 1 year ago (1 children)

I feel like the machine itself would be the art in that case, not necessarily what it creates. Like if someone spent a decade making a machine that could cook FLAWLESS BEEF WELLINGTON, the machine would be far more impressive and artistic than the products it made

[–] [email protected] 3 points 1 year ago

I mean, where do you draw the line between the machine and what it creates? The machine itself is totally useless without inputs and outputs (not to say art needs utility). The beef wellington machine is only notable for its ability to conjure beef wellington; otherwise it's just a nothing machine. Which is still kind of cool, I guess, but the beef wellington machine not making beef wellington is kind of a disregard for the core part of the machine, no?

[–] [email protected] 3 points 1 year ago (1 children)

That was a great episode of 99PI. Would love the machine restored.

IIRC, it's not so much that it made music, but that it would create loops through iteration to inspire people. He wanted it to make full music, but it was never close to that.

[–] [email protected] 3 points 1 year ago

Yeah I think you're right, and it was apparently actually random. The longer it would play a loop the more it would iterate. Such a cool thing to exist

[–] [email protected] 4 points 1 year ago

You will still listen to it: watching movies, in advertisements, playing video games...

[–] [email protected] 4 points 1 year ago (2 children)

This is the worst timeline

[–] [email protected] 7 points 1 year ago

This is the worst timeline so far.

[–] [email protected] 3 points 1 year ago (9 children)

Yikes. TIL you think music sounds good based on how much time went into making it, not how it actually sounds.

Can't wait for you to hear something you like then pretend it's bad when you find out it was made by AI.

[–] [email protected] 6 points 1 year ago (32 children)

This assumes music is made and enjoyed in a void. It's entirely reasonable to like music much more if it's personal to the artist. If an AI writes a song about a very intense and human experience it will never carry the weight of the same song written by a human.

This isn't like food, where snobs suddenly dislike something as soon as they find out it's not expensive. Listening to music often has the listener feel a deep connection with the artist, and that connection is entirely void if an algorithm created the entire work in 2 seconds.

[–] [email protected] 4 points 1 year ago (1 children)

I don't think that's OP's point, but it's interesting how many classic songs were written in less than 30 minutes.

[–] [email protected] 53 points 1 year ago (1 children)

A spectrum analysis and bandpass filter should take care of that.
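
Roughly the idea, as a sketch (numpy/scipy/soundfile assumed, and purely for illustration I'm pretending the watermark sits in one narrow 17-19 kHz band, which is not something DeepMind has said; they actually claim SynthID survives this kind of edit):

```python
# Sketch of the brute-force approach: inspect the spectrum, then notch out a
# suspicious band. "track.wav" and the 17-19 kHz band are made-up examples.
import soundfile as sf
from scipy import signal

audio, sr = sf.read("track.wav")

# 1. Spectrum analysis: average power spectrum (Welch's method), to eyeball
#    for narrow peaks that don't look like part of the music.
freqs, psd = signal.welch(audio, fs=sr, nperseg=4096, axis=0)

# 2. Band-stop whatever looks suspicious (here, the hypothetical 17-19 kHz band).
sos = signal.butter(8, [17_000, 19_000], btype="bandstop", fs=sr, output="sos")
cleaned = signal.sosfiltfilt(sos, audio, axis=0)

sf.write("track_filtered.wav", cleaned, sr)
```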

[–] [email protected] 7 points 1 year ago

chuckles contemptuously in Audacity

[–] [email protected] 30 points 1 year ago (1 children)

So we'll just need another AI to remove the watermarks... which I think already exists.

[–] [email protected] 25 points 1 year ago

Don't even need AI. Basic audio editing works.

[–] [email protected] 21 points 1 year ago (4 children)

Lately on YouTube I've constantly been bombarded with AI garbage music passed off as normal unknown bands, and it's getting really annoying. What will happen when there's an actual new band but everyone ignores them because you would think it's just AI?

[–] [email protected] 19 points 1 year ago (2 children)

What will happen when there's an actual new band but everyone ignores them because you would think it's just AI?

Their music will speak for itself and elevate them above the AI that is making worse music.

You're asking the wrong question. What happens when you hear something you like, then find out it's made by AI and all of a sudden you have to pretend you never liked it?

[–] [email protected] 11 points 1 year ago (2 children)

A needle in a haystack is much harder to find if the haystack is the size of a truck. People don't have infinite time to listen to music, and if it's almost all the same, they'll stop trying to find upcoming artists, ai or not.

[–] [email protected] 4 points 1 year ago

We need an ai to listen to music and tell us what to like by playing it on repeat till we do. Just like the radio stations.

[–] [email protected] 8 points 1 year ago (1 children)

Music snobs have been doing this for decades: pretending to like the shittiest Pink Floyd B-side because the normies don't get it, and acting like ABBA's entire catalogue isn't solid bangers because disco isn't cool, until it was again, at which point they'd always loved it.

It'll be just like it always is: Pete Seeger with an axe trying to stop Bob Dylan playing an electric guitar. I remember when people hated D&B and said it wasn't real music and all that; now they're all telling bullshit stories about how they were OG junglist massive.

People will use AI to make really cool things, and a loud portion of the population will act superior by pretending it's bad. Time will pass, and when the next thing comes along all those people will point at the AI music and say 'your new music will never be as good as real music like that', but the people listening to atonal arithmic echolocation beats to study to, or whatever the next trend is, won't pay them any attention.

[–] [email protected] 10 points 1 year ago

AI garbage music

actual new band but everyone ignores them because you would think it's just AI

I think you answered your own question.

[–] [email protected] 6 points 1 year ago

Omg the AI voice describing a short is infuriating.

"This man was minding his own business not knowing he was about to change this child's life...Watch how his interaction is measured..."

*clicks the three dots* Do not recommend this channel again.

[–] [email protected] 13 points 1 year ago

The Audacity!

Hehe.

[–] [email protected] 8 points 1 year ago (1 children)

This is the best summary I could come up with:


Audio created using Google DeepMind’s AI Lyria model, such as tracks made with YouTube’s new audio generation features, will be watermarked with SynthID to let people identify their AI-generated origins after the fact.

In a blog post, DeepMind said the watermark shouldn’t be detectable by the human ear and “doesn’t compromise the listening experience,” and added that it should still be detectable even if an audio track is compressed, sped up or down, or has extra noise added.

President Joe Biden’s executive order on artificial intelligence, for example, calls for a new set of government-led standards for watermarking AI-generated content.

According to DeepMind, SynthID’s audio implementation works by “converting the audio wave into a two-dimensional visualization that shows how the spectrum of frequencies in a sound evolves over time.” It claims the approach is “unlike anything that exists today.”

The news that Google is embedding the watermarking feature into AI-generated audio comes just a few short months after the company released SynthID in beta for images created by Imagen on Google Cloud’s Vertex AI.

The watermark is resistant to editing like cropping or resizing, although DeepMind cautioned that it’s not foolproof against “extreme image manipulations.”


The original article contains 230 words, the summary contains 195 words. Saved 15%. I'm a bot and I'm open source!

[–] [email protected] 12 points 1 year ago* (last edited 1 year ago) (2 children)

it does this by converting the audio into a 2d visualisation that shows how the spectrum of frequencies evolves in a sound over time

Old-school Windows Media Player has entered the chat

Seriously fuck off with this jargon, it doesn’t explain anything

[–] [email protected] 22 points 1 year ago (2 children)

That's actually an accurate description of what is happening: the audio file is turned into a 2D image, with the x-axis being time, the y-axis being frequency, and color being amplitude.
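
For anyone who'd rather see it than read about it, that picture is just an ordinary spectrogram. A minimal sketch, assuming scipy, soundfile and matplotlib are installed and using a made-up "track.wav" as input:

```python
# Plain spectrogram: x axis = time, y axis = frequency, colour = amplitude.
# This is the generic visualisation being described, not SynthID itself.
import numpy as np
import soundfile as sf
import matplotlib.pyplot as plt
from scipy import signal

audio, sr = sf.read("track.wav")
if audio.ndim > 1:                       # mix stereo down to mono for plotting
    audio = audio.mean(axis=1)

f, t, Sxx = signal.spectrogram(audio, fs=sr, nperseg=2048, noverlap=1536)

plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="gouraud")
plt.xlabel("Time [s]")
plt.ylabel("Frequency [Hz]")
plt.colorbar(label="Power [dB]")
plt.show()
```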

[–] [email protected] 10 points 1 year ago (2 children)

That's literally a spectrograph

[–] [email protected] 8 points 1 year ago

Spectrogram*

[–] [email protected] 6 points 1 year ago

Your mom's literally a spectrograph.

[–] [email protected] 13 points 1 year ago (1 children)

Sounds like a bad journalist hasn't understood the explanation. A spectrogram contains all the same data as was originally encoded. I guess all it means is that the watermark is applied in the frequency domain.

[–] [email protected] 10 points 1 year ago* (last edited 1 year ago) (1 children)
[–] [email protected] 8 points 1 year ago (1 children)

Well, encoding stuff in the spectrogram isn't new, sure. But encoding stuff into an audio file that is inaudible but robust to incidental modifications to the file is much harder. Aphex Twin's stuff is audible!
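
To make "watermark in the frequency domain" a bit more concrete: the textbook version is a spread-spectrum scheme that nudges the magnitudes of key-selected STFT bins and later checks how well the audio correlates with that key. Below is a toy sketch of that idea (numpy/scipy assumed; the function names are invented). It is emphatically not SynthID, whose method DeepMind hasn't published, and a naive version like this is exactly the kind that struggles to survive heavy editing, which is the hard part.

```python
# Toy spread-spectrum-style watermark in the STFT domain. Illustrative only.
import numpy as np
from scipy.signal import stft, istft

def embed_watermark(audio, sr, key=1234, strength=0.02):
    """Nudge STFT magnitudes of a mono signal by +/- strength, keyed by `key`."""
    _, _, Z = stft(audio, fs=sr, nperseg=1024)
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=Z.shape)   # pseudo-random +/-1 grid
    Z_marked = Z * (1.0 + strength * pattern)         # ~0.2 dB change per bin
    _, marked = istft(Z_marked, fs=sr, nperseg=1024)
    return marked[: len(audio)]

def detect_watermark(audio, sr, key=1234):
    """Correlate log-magnitudes with the key's pattern: ~strength if marked, ~0 if not."""
    _, _, Z = stft(audio, fs=sr, nperseg=1024)
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=Z.shape)
    return float(np.mean(pattern * np.log(np.abs(Z) + 1e-12)))

if __name__ == "__main__":
    sr = 44_100
    audio = np.random.default_rng(0).standard_normal(5 * sr) * 0.1  # stand-in "music"
    marked = embed_watermark(audio, sr)
    print(detect_watermark(marked, sr))   # clearly above zero
    print(detect_watermark(audio, sr))    # hovers around zero
```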

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (1 children)

I would like to know what it is that makes it so robust; the article explains very little. Is it in the high frequencies, higher than the human ear can hear? Compression will affect that, plus that's going to piss dogs off. It could be something with the phasing too. Filters and effects might be able to get rid of the watermark.

[–] [email protected] 4 points 1 year ago

I don't know what frequencies are annoying for dogs, but I'm guessing it's above 24 kHz, so no standard sound file or sound system is going to be able to store or reproduce it anyway (a 48 kHz recording can only represent content up to 24 kHz, per the Nyquist limit).

There will certainly be some way to get rid of the watermark. But it might nevertheless persist through common filters.
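
On the "nothing above roughly 22-24 kHz even survives in a normal file" point, it's easy to convince yourself: resample a tone that sits above the target Nyquist frequency and watch it vanish. A quick sketch assuming scipy; the 30 kHz tone and the 96 kHz to 44.1 kHz conversion are just example numbers:

```python
# Content above the target Nyquist frequency (fs/2) cannot survive resampling;
# the anti-aliasing filter inside resample_poly removes it.
import numpy as np
from scipy.signal import resample_poly

fs_hi = 96_000
t = np.arange(fs_hi) / fs_hi                   # one second of samples at 96 kHz
tone = np.sin(2 * np.pi * 30_000 * t)          # 30 kHz tone, inaudible to humans

# 96 kHz -> 44.1 kHz (ratio 44100/96000 = 147/320); Nyquist drops to 22.05 kHz.
down = resample_poly(tone, up=147, down=320)

print("RMS before:", np.sqrt(np.mean(tone ** 2)))   # ~0.707
print("RMS after: ", np.sqrt(np.mean(down ** 2)))   # ~0: the tone is gone
```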

[–] [email protected] 7 points 1 year ago

That's like putting a watermark beside the bill rather than in it. If it is inaudible then you can just delete it.

[–] [email protected] 6 points 1 year ago (3 children)

I wonder if being able to generate music will make people less interested in actually bothering to learn how to do it themselves. Having AI tools makes many things so much easier, and you only need a rudimentary understanding of the subject.

[–] [email protected] 12 points 1 year ago (4 children)

Yeah, like, most people don't realise, but until about 1900 most piano music was played by humans; of course there were no pianists after the invention of the pianola, with its perforated rolls of notes and mechanical keys.

It's sad, drums were things you hit with a stick once, but Mr Theremin ensured you never see a drummer anymore, while Mr Moog effectively ended bass and rhythm guitars with the synthesizer....

It's a shame; it would be fun to go see a four-piece band performing live, but that's impossible now that no one plays instruments anymore.

People are never going to stop learning to play instruments. If anything, they'll get inspired by using AI to make music and it'll get them interested in learning to play; they'll then use AI tools to help them learn, and when they get to be truly skilled with their instrument they'll meet up with some awesomely talented friends to form a band which creates painfully boring and indulgent branded rock.

[–] [email protected] 4 points 1 year ago

I believe it will depend on a couple of different factors. Putting keywords into a generator isn't the same as laying your hands on an instrument and being able to physically play it yourself. However, if the result is so perfect and beautiful that a person could never possibly have come up with it on their own, it might be discouraging (but I can't really see that happening).
