this post was submitted on 20 Jul 2024

Ask Lemmy

 

If a recording of someone's very rare voice is representable as an mp4 or whatever, could monkeys typing out code randomly reproduce their exact timbre+tone+overall sound?

I don't get how we can get rocks to think + exactly transcribe reality in the ways they do!

Edit: I don't get how audio can be fossilized/reified into plaintext

top 10 comments
[–] [email protected] 1 points 4 months ago* (last edited 4 months ago)

Basically, sound is a change in air pressure, and we record that pressure value thousands of times a second. That gives you a bunch of numbers, and rocks/electricity represent those numbers as ones and zeroes (binary).

Usually that data then gets compressed by using lots of smart maths. When you play that sound file, all that work is done backwards and your speakers produce the necessary pressure changes to make the sound.
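To make the "bunch of numbers" idea concrete, here is a minimal sketch in Python (standard library only; the 440 Hz tone, 16-bit mono format and file name are just illustrative choices) that samples a sine wave 44,100 times a second and writes the numbers out as an uncompressed WAV file:

import math
import struct
import wave

SAMPLE_RATE = 44_100      # pressure snapshots per second
FREQ_HZ = 440             # an example tone (concert A)
DURATION_S = 1

with wave.open("tone.wav", "wb") as f:
    f.setnchannels(1)     # mono
    f.setsampwidth(2)     # 16 bits (2 bytes) per sample
    f.setframerate(SAMPLE_RATE)
    for n in range(SAMPLE_RATE * DURATION_S):
        # each sample is just a signed integer: the pressure at time n / SAMPLE_RATE
        value = int(32767 * 0.5 * math.sin(2 * math.pi * FREQ_HZ * n / SAMPLE_RATE))
        f.writeframes(struct.pack("<h", value))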

Monkeys could randomly produce a perfect human sentence if they typed random stuff into a text file and it got converted appropriately. It's just insanely unlikely.

[–] [email protected] 1 points 4 months ago (1 children)

Yes, monkeys could type out the zeros and ones. In fact we (not the monkeys) kind of did. There is a library of babel for audio, named the sound library of babel, which contains every 15-second audio recording you can imagine. Every single one. Almost all of them are white noise, but there are still recordings of every human saying any words that fit in 15 seconds.

[–] [email protected] 1 points 4 months ago (1 children)

I call bullshit on that. At 44,100 samples per second and 8 bits per sample, every second of sound is 44,100 bytes, or about 44 kB. It is impossible to generate all possibilities of even 1 second of audio.

To put this in perspective, there's something called a Universally Unique Identifier (UUID for short); one of them is 128 bits, or 16 bytes, of which 122 bits are random in a version-4 UUID. Let's imagine IDs were only 1 bit long: the second ID you generate has a 50% chance of repeating the first. If we extend this to 1 byte (i.e. 256 possibilities), the second ID has a 1/256 chance of colliding with the first, the third has a 2/256 chance of colliding with one of the first two, and so on. This is the classic birthday problem, and it means that by roughly the 20th ID you generate, the chance that you have already produced a duplicate passes 50%. Why did I do those examples? Because when you run the same birthday math on the 122 random bits of a UUID, it turns out that if you generated a billion UUIDs per second, it would take you roughly 86 years to reach a 50% chance of a single repeat, and by that time you would need about 43 EB of storage (that's exabytes: an exabyte is a million terabytes, and a terabyte is the first measure people are likely to be familiar with).

Let me again try to put this in perspective: if Google, Amazon, Microsoft and Facebook emptied all of their storage just for this, they would have around 2 exabytes between them, so you would need a company more than 20x larger than that conglomerate to have enough space to store the UUIDs you'd generate before reaching a 50% chance of a single repeat.

Another way of thinking about this: the number of possible combinations of n bits is 2^n — 2 for 1 bit, 4 for 2 bits, 8 for 3 bits — and it grows exponentially. For a 128-bit UUID that is about 3.4E38 possible values, and storing each of them as 16 bytes would take on the order of 10^15 YB (again, not a typo: that's yottabytes, each one 1024 zettabytes). I could go up a few more orders of magnitude, but I think I made my point. And this is for 128 bits; every extra bit doubles that amount.

So again, I call bullshit on the claim that they have all possible sounds for even 1 second: one second of samples is 44,100 bytes, over 2,700 times as many bytes as a single UUID, so the number of possible clips is 2^352800 rather than 2^128.
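If anyone wants to check the arithmetic, here is a rough back-of-the-envelope sketch in Python, using the standard birthday-problem approximation (the 8-bit / 44.1 kHz format is the same simplification as above):

import math

# Birthday bound: you can draw roughly sqrt(2 * ln(2) * N) random values from a
# space of N possibilities before the chance of at least one repeat reaches ~50%.
def birthday_bound(bits: int) -> float:
    return math.sqrt(2 * math.log(2) * 2.0 ** bits)

n = birthday_bound(122)                  # 122 random bits in a version-4 UUID
print(f"{n:.2e} UUIDs")                                               # ~2.7e18
print(f"{n / 1e9 / 86400 / 365:.0f} years at a billion per second")   # ~86
print(f"{n * 16 / 1e18:.0f} EB to store them all")                    # ~43

# One second of 8-bit audio at 44.1 kHz is 352,800 bits, so there are
# 2**352800 distinct possible one-second clips.
clip_bits = 44_100 * 8
print(f"2**{clip_bits}, a number with about {clip_bits * math.log10(2):,.0f} digits")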

[–] [email protected] 0 points 4 months ago (1 children)

I appreciate the interest in doing all the math, and I am not specifically familiar with audio or the audio library either, but I believe you could use a similar argument against the OG library of babel, and I happen to know (confidently believe?) that they don't actually have a stored copy of every individual text file "in the library"; rather, each page is algorithmically generated, and they have proven that the algorithm will generate every possible text.

I'd wager it's the same thing here, they have just written the code to generate a random audio file from a unique input, and proven that for all possible audio files (within some defined constraints, like exactly 15 seconds long), there exists an input to the algorithm which will produce said audio file.
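Something like this toy sketch, perhaps (clip_at and index_of are made-up names, not the site's actual code; the point is just that the "library" index can be the recording's own bytes read as one huge number):

SAMPLE_RATE = 44_100             # assumed format: 8-bit mono at 44.1 kHz
CLIP_SECONDS = 15
CLIP_BYTES = SAMPLE_RATE * CLIP_SECONDS

def clip_at(index: int) -> bytes:
    # Return the 15-second clip "stored" at this position in the library.
    if index < 0 or index.bit_length() > CLIP_BYTES * 8:
        raise ValueError("index outside the library")
    # Every possible clip appears at exactly one index, because the index
    # is simply the clip's own sample bytes interpreted as an integer.
    return index.to_bytes(CLIP_BYTES, "big")

def index_of(clip: bytes) -> int:
    # Inverse lookup: where in the library does this clip live?
    return int.from_bytes(clip, "big")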

Determining whether or not an algorithm with infrastructure backing it counts as a library is an exercise left to the reader, I suppose.

[–] [email protected] 1 points 3 months ago

The claim was that it "contains every 15-second audio recording you can imagine. Every single one." Which is bullshit; that's like saying this program contains every single literary work:

import sys

print(sys.argv[1])

It's just adding a layer of encoding on top so it feels less bullshitty, something like:

import string

def decode(number: int) -> str:
    # Read the number as base-len(string.printable) digits, least significant first.
    out = ""
    while number:
        number, letter_index = divmod(number, len(string.printable))
        out += string.printable[letter_index]
    return out

That also does not contain every possible (ASCII) book; it can decode any number into a text, and some numbers happen to decode to texts that are readable.
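For completeness, here is a sketch of the inverse (encode is a hypothetical helper using the same base-len(string.printable) scheme, and it assumes the decode above is in scope), which makes the point explicit: every text corresponds to exactly one number, so the "library" is just the texts themselves under a different name:

import string

def encode(text: str) -> int:
    # Inverse of decode(): map a printable string back to its number.
    base = len(string.printable)
    number = 0
    for char in reversed(text):
        number = number * base + string.printable.index(char)
    return number

assert decode(encode("hello world")) == "hello world"  # round trip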

[–] [email protected] 1 points 4 months ago

Short answer: to record a sound, take samples of the sound "really really often" and store them as a sequence of numbers. Then to play the sound, create an electrical signal by converting those digital numbers to a voltage "really really often", then smooth it, and send it to a speaker.

Slightly longer answer: you can actually take a class on this, typically called Digital Signal Processing, so I'm skipping over a lot of details. Like a lot a lot. Like hundreds of pages of dense mathematics a lot.

First, you need something to convert the sound (pressure variation) into an electrical signal. Basically, you want the electrical signal to look like how the audio sounds, but bigger and in units of voltage. You basically need a microphone.

So as humans, the range of pitches of sounds we can hear is limited. We typically classify sounds by frequency, or how often the sound wave "goes back and forth". We can think of only sine waves for simplicity because any wave can be broken up into sine waves of different frequencies and offsets. (This is not a trivial assertion, and there are some caveats. Honestly, this warrants its own class.)

So each sine wave has a frequency, i.e. how many times per second the wave oscillates ("goes back and forth").

I can guarantee that you as a human cannot hear any pitch with a frequency higher than 20000 Hz. It's not important to memorize that number if you don't intend to do technical audio stuff, it's just important to know that number exists.

So if I recorded any information above that frequency, it would be a waste of storage. So let's cap the frequency that gets recorded at something. The listener literally cannot tell the difference.

Then, since we have a maximum frequency, it turns out that, once you do the math, you only need to sample at a frequency of at least twice the maximum you expect to find. So for an audio track, 2 times 20000 Hz = 40000 times per second that we sample the sound. In practice it is a bit higher for various technical reasons, hence why 44100 Hz and 48000 Hz sample frequencies are common.
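To illustrate why the factor of two matters (the specific frequencies below are just illustrative), here is a tiny sketch showing that a tone above half the sample rate produces exactly the same samples as a lower tone; this is the aliasing that sampling fast enough avoids:

import math

SAMPLE_RATE = 44_100

def sample(freq_hz: float, n: int) -> float:
    # n-th sample of a cosine at freq_hz, sampled at SAMPLE_RATE
    return math.cos(2 * math.pi * freq_hz * n / SAMPLE_RATE)

# A 43,000 Hz tone is above half the sample rate, so its samples come out
# identical to those of a 1,100 Hz tone (44,100 - 43,000): aliasing.
for n in range(5):
    print(f"{sample(43_000, n):+.6f}  {sample(1_100, n):+.6f}")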

So if you want to record exactly 69 seconds of audio, you need 69 seconds × 44100 [samples / second] = 3,042,900 samples. Assuming space is not at a premium and you store the file with zero compression, each sample is stored as a number in your computer's memory. The samples need to be stored in order.
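As a quick sanity check of that arithmetic (the 2 bytes per sample is an assumption; the bit depth isn't pinned down above):

SAMPLE_RATE = 44_100                    # samples per second
DURATION_S = 69

n_samples = SAMPLE_RATE * DURATION_S    # 3,042,900 samples
size_bytes = n_samples * 2              # 16-bit (2-byte) mono samples, no compression
print(n_samples, size_bytes)            # 3042900 samples, about 6 MB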

To reproduce the sound in the real world, we feed the numbers in the order at the same frequency (the sample frequency) that we recorded them at into a device that works as follows: for each number it receives, the device outputs a voltage that is proportional to the number it is fed, until the next number comes in. This is called a Digital-to-Analog Converter (DAC).

Now at this point you do have a sound, but it generally has wasteful high frequency content that can disrupt other devices. So it needs to get smoothed out with a filter. Send this voltage to your speakers (to convert it to pressure variations that vibrate your ears which converts the signal to an electrical signal that is sent to your brain) and you got sound.

Easy peazy, hundreds of pages of calculus squeezy!

could monkeys typing out code randomly exactly reproduce their exact timbre+tone+overall sound

Yes, but it is astronomically unlikely to happen before you or the monkeys die.

If you have any further questions about audio signal processing, I would be literally thrilled to answer them.

[–] [email protected] 1 points 4 months ago

A microphone is a membrane attached to a means of generating electricity (like shaking wires around a magnet). When you make sound near a mic, you shake the membrane, and it in turn generates a small amount of electricity.

This electricity is an analog signal (it's continuous, and the exact amount changes over time). We can take that signal and digitize it (literally chop it up into distinct digits) by using an ADC or analog to digital converter. Essentially an ADC takes a snapshot of the analog signal at a specific point in time, and repeats that snapshot process very quickly. If you take enough snapshots fast enough you can have a reasonable approximation of the original signal (like following a dotted line).
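A toy sketch of that snapshot idea (the numbers are made up for illustration and have nothing to do with real ADC/DAC hardware): sample a stand-in "analog" signal, then "follow the dotted line" back by interpolating between the snapshots:

import math

RATE = 100                               # snapshots per second (tiny, for illustration)

def analog(t: float) -> float:
    # stand-in for the continuous voltage coming off the microphone
    return math.sin(2 * math.pi * 3 * t)

# The "ADC": grab the voltage at evenly spaced instants.
snapshots = [analog(n / RATE) for n in range(RATE)]

# The "DAC": follow the dotted line back, here with straight lines between snapshots.
def reconstruct(t: float) -> float:
    i = int(t * RATE)
    frac = t * RATE - i
    return snapshots[i] * (1 - frac) + snapshots[i + 1] * frac

print(analog(0.123), reconstruct(0.123))  # close, though not identical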

Now we have a digital signal and we can store those series of snapshots in a file.

But how do we turn that back into sound? We literally just follow the process in reverse.

We open the file and get the list of snapshots. We pass those to a DAC or digital-to-analog converter that generates a continuous analog signal passing through every original point. We pass that signal to a thin wire wrapped around a magnet and attached to a membrane. This mechanism takes the small signal generated by the DAC and causes the membrane to shake in the same pattern that the mic originally shook in.

In practice there are often other steps in line such as amps to increase the strength of a signal or compression to minimize how much space the snapshots take up.

[–] [email protected] 1 points 4 months ago (1 children)

Long list of numbers in sequence. Each represents how far away from equilibrium the speaker cone should be, at each point in time, as it vibrates back and forth.

[–] [email protected] 0 points 4 months ago* (last edited 4 months ago) (1 children)

I just think it's crazy that I can record a random recording of me speaking right now, and that can be stored in what must ultimately be good old-fashioned plaintext or whatever.

Like, that's a rock thinking and turning sound right into stone, wayyyyy more impressive and beneficial than alchemy turning lead into gold

[–] [email protected] 1 points 4 months ago

Yes, digital media, and computers in general, are miracles of science and engineering. Is there some reason digital audio in particular inspires you in this way, as opposed to digital images?