Eyron

joined 1 year ago
[–] [email protected] 1 points 1 month ago

Time isn't the only factor for adoption. Between the adoption of IPv4 and IPv6, the networking stack shifted away from networking companies like Novell to the OSes themselves, like Windows, which delayed IPv6 support until Vista.

When IPv4 was adopted, networking was a competitive space. When IPv6 came around, that space had grown stagnant, much like Internet Explorer. It wasn't until Windows Vista that IPv6 became an option, Windows 7 before professionals would consider it, and another few years before it was actually deployable in a secure manner (and that's still questionable).

Most IT support staff and developers couldn't even play with IPv6 during the early 2000s because our operating systems and network stacks didn't support it. Meanwhile, there was a boom of Internet-connected devices that only supported IPv4. A few other things affected adoption too, but it really was a pretty bad time for IPv6 migration. It's a little better now, but "better" still isn't very good.

[–] [email protected] 2 points 1 month ago* (last edited 1 month ago)

You should probably read/know the actual law, rather than just getting it close. You're probably referring to 18 USC 922(d)(10), which covers any felony--not just shooting someone. That's one of 11 listed criteria in that subsection, and it assumes the first requirement, (a)(1), is met: that it's not an interstate or foreign transaction. There's a lot more to it than just "as long as you don't have good evidence they're going to go shoot someone".

Even after the sale, ownership is still illegal under section (g)-- it just isn't the seller's fault anymore.

This is basic information that should be known to any gun safety advocate. "Responsible" gun owners must know those laws, plus others backward and forward. One small slip-up is a felony, jail, and permanent loss of gun ownership/use. Are they really supposed to listen to those who can't even talk about current law correctly?

The law can be better, but you won't do yourself any favors by misrepresenting it.

[–] [email protected] 6 points 1 month ago (2 children)

> Voyager - if I didn’t love Voyager Janeway would kick my ass.

No need for threats. Voyager is good.

Blink twice if you need help.

[–] [email protected] 4 points 1 month ago* (last edited 1 month ago)

It seems you are mixing up the concepts of voting systems and candidate selection. Neither FPP nor FPTP should sound scary. As a voting system, FPP works well enough more often than many want to admit. The name just describes it in more detail: First Preference Plurality.

Every voting system is as bottom-up or top-down as its candidate selection process. The voting system itself doesn't really affect whether it's top-down or bottom-up. Requiring approval/voting from the current rulers would be top-down. Requiring only ten signatures on a community petition is more bottom-up.

The voting systems don't care about the candidate selection process. Some require pre-coordination for a "party", but that could also be a party of one. A party of one might not get as much representation as one with more people, but that's the case for every voting system that selects the same number of candidates.

Voting systems don't even need to be used for representation. If a group of friends is voting on where to eat, one problem might be selecting the places to vote on, but that happens before the vote. In the vote itself, FPP might have 70% prefer pizza over Indian food, yet Indian food can still win because the pizza voters split their first choices. Having more candidates often leads to minority rule/choice, and that's not very good for food choices or community representation.
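
A minimal tally sketch of that effect (hypothetical restaurants and made-up vote counts, purely illustrative):

```python
from collections import Counter

# Hypothetical ballots: 70% of the group prefers pizza over Indian,
# but the pizza vote splits across three pizza places.
ballots = (["Pizza A"] * 25 + ["Pizza B"] * 25 +
           ["Pizza C"] * 20 + ["Indian"] * 30)

tally = Counter(ballots)
winner, votes = tally.most_common(1)[0]
print(tally)
print(f"{winner} wins with {votes}/100 first-choice votes")
# Indian wins with 30/100: a 30% minority beats a 70% pizza majority
```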

[–] [email protected] 2 points 4 months ago* (last edited 4 months ago) (1 children)

> I’m fully aware how rirs allocate ipv6. The smallest allocation is a /64, that’s 65535 /64’s. There are 2^32 /32’s available, and a /20 is the minimum allocatable now. These aren’t /8’s from IPv4, let’s look at it from a /56, there are 10^16 /56 networks, roughly 17 million times more network ranges than IPv4 addresses.
>
> /48s are basically pop level allocations, few end users will be getting them. In fact comcast which used to give me /48s is down to /60 now.
>
> I’ll repeat, we aren’t running out any time soon, even with default allocations in the /3 currently existing for ipv6.

Sorry, but your reply suggests otherwise.

The RIRs (currently) never allocate a /64 nor a /56. A /48 is (currently) their smallest allocation. For example, of the ~800,000 /32s ARIN has, only ~47k are "fragmented" (allocated smaller than a /32), and fewer than 4,000 of those are /48s. If /32s were the average, we'd be fine, but in our infinite wisdom, we assign larger subnets (like Comcast's 2601::/20 and 2603:2000::/20).

> These aren’t /8’s from IPv4. let’s look at it from a /56, there are 10^16 /56 networks, roughly 17 million times more network ranges than IPv4 addresses.

Taking into account the RIR allocations noted above, the closer equivalent to the /8 is the 1.048M /20s available. Yes, that's more than the 256 possible 8-bit class-A blocks, but does 1 million really sound like the scale you were talking about? "enough addresses in ipv6 to address every known atom on earth"
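
For scale, a quick sketch of the raw block counts behind those numbers (counted over the full 128-bit space; the active global-unicast range, 2000::/3, is one eighth of this):

```python
# Raw block counts per prefix length over the full 128-bit space.
for prefix in (8, 20, 32, 48):
    print(f"/{prefix}: {2 ** prefix:,} possible blocks")
# /8: 256
# /20: 1,048,576  (the "closer equivalent to /8" above)
# /32: 4,294,967,296
# /48: 281,474,976,710,656
```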

The situation for /48s is better, but still not as significant as one would think. Take Cloudflare as an extreme example: they have 6,639 IPv4 /24 blocks, but 670,550 IPv6 /48 blocks. The same kind of network boundary in theory, but that's growing from needing 13 bits of network space in IPv4 to 20 bits in IPv6: 7 extra bits of usage driven just by availability.
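
That bit growth is easy to check from the block counts quoted above (a quick sketch):

```python
import math

# Network bits needed to cover Cloudflare's block counts (quoted above).
ipv4_blocks = 6_639     # IPv4 /24 blocks
ipv6_blocks = 670_550   # IPv6 /48 blocks

bits_v4 = math.ceil(math.log2(ipv4_blocks))  # 13 bits covers 8,192 blocks
bits_v6 = math.ceil(math.log2(ipv6_blocks))  # 20 bits covers 1,048,576 blocks
print(bits_v4, bits_v6, bits_v6 - bits_v4)   # 13 20 7
```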

That sort of increase in network counts is likely--especially in high-density data centers, where one server is likely to have multiple IPv6 networks assigned to it. What do you think the assignments will look like as we expand to extraterrestrial endpoints like satellites, moons, planets, and spacecraft?

> I’ll repeat, we aren’t running out any time soon

Soon vs never. The OP I replied to said "never". Your post implied similarly--that these numbers are far too big for humans to imagine or ever reach. The IPv6 address space is large enough for that, yes. But our allocations still aren't. The number of bits we're actually allocating (which is the metric that matters for running out) is significantly smaller than most think. In the post above, you're suggesting 56-64 bits, but the reality is currently 20-32 bits: 1M-4B allocations.

If everyone keeps treating IPv6 as infinite, the current allocation sizes would take longer than IPv4 did to run out, but it isn't really an unfathomable number like the number of atoms on Earth. 281T /48s works more sanely: likely enough for our planet--but the RIRs seem to avoid allocating subnets that small.

IPv4-style policy shifts could happen: requirements for address blocks rise, allocation sizes shrink, older holders keep their /20 blocks (instead of 8-bit class-A blocks), and newer organizations are limited to /48 blocks or smaller with proper justification. The longer we keep giving away /20s and /32s like candy, the more likely the allocations run out sooner (especially compared to never). My initial message tried to convey that it depends on how fast we grow and how aggressively we pursue network growth goals:

> 30 years? Optimistically, including interstellar networks and continued exponential growth in IP-connected devices? Yes.
>
> . . .
>
> Realistically, it’s probably more than 100 years away, maybe outside our lifetimes

[–] [email protected] 4 points 4 months ago* (last edited 4 months ago) (3 children)

That wasn't what I said. 2^56 was NOT a reference to bits, but to how many IPs we could assign to every visible star, if it weren't for subnet limitations. IPv6 isn't classless like IPv4. There will be a lot of wasted/unused/unroutable addresses due to the reserved 64 bits.

The problem isn't the number of addresses, but the number of allocations. Our smallest allocation today, for a 128-bit address, is only 48 bits. Allocation-wise, we effectively have only 48 bits to work with, not 128. To run out like we did with IPv4, we only need to assign 48 bits worth of networks, rather than IPv4's 24 bits. Go read up on how ARIN/RIPE/APNIC allocate IPs. It's pretty wasteful.

[–] [email protected] 0 points 4 months ago* (last edited 4 months ago) (5 children)

There's a large possibility we'll run out of IPv6 addresses sooner than we think.

Theoretically, 128 bits should be enough for anything: IPv6 could give 2^52 IPs to every star in the universe, which would be a 76-bit subnet for each star rather than the required minimum of 64. Today, though, the RIRs (like ARIN) hand out 32-48-bit allocations for IPv6.
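
A rough order-of-magnitude check on the star math (the star count is a ballpark estimate, not a precise figure):

```python
import math

stars = 10 ** 23    # ballpark count of stars in the observable universe
bits_for_stars = math.ceil(math.log2(stars))  # ~77 bits to number every star
bits_per_star = 128 - bits_for_stars          # address bits left over per star
print(bits_for_stars, bits_per_star)
# 77 51: roughly a /77 per star with ~2^51 IPs each (the /76 and 2^52
# above use a slightly smaller star count), but a usable subnet still
# needs 64 host bits, so per-star subnets this small aren't routable.
```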

With IPv4, we did 8-24-bit allocations. IPv6 gives us only 24 extra allocation bits, which may last longer than IPv4 did. We basically filled up IPv4's 24 bits of allocations in 30 years, and 281 trillion (2^48) allocations is fairly reachable. There doesn't seem to be any slowdown in Internet or IP growth, and Docker and similar tools keep creating new reasons to allocate IPs (per container). We're also still in the early years of interstellar communications. With IPv4, we could adopt classless subnetting early to delay the problem; IPv6's slow adoption probably makes a similar shift in subnetting unlikely.

If we continue the current allocation trend, can we run out of those 281 trillion allocations in 30 years? Optimistically, including interstellar networks and continued exponential growth in IP-connected devices? Yes. Realistically, it's probably more than 100 years away, maybe outside our lifetimes--but that still sounds low for a protocol with enough IPs to assign one to every blade of grass, even if every visible star had an Earth. We're basically allocating a 32-48-bit subnet to every patch of grass, and there aren't really enough allocations for that.
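
A crude growth model shows why both answers are plausible (the starting point and doubling periods are made-up assumptions, purely illustrative):

```python
import math

TOTAL_48S = 2 ** 48    # ~281 trillion possible /48 allocations
START = 2 ** 32        # assume we begin from IPv4-scale usage (~4B allocations)

def years_to_exhaust(doubling_years: float) -> float:
    """Years until allocations double their way from START to TOTAL_48S."""
    doublings = math.log2(TOTAL_48S / START)   # 16 doublings
    return doublings * doubling_years

for dbl in (2, 5, 10):
    print(f"doubling every {dbl:>2} years: ~{years_to_exhaust(dbl):.0f} years")
# doubling every  2 years: ~32 years   (the optimistic case)
# doubling every  5 years: ~80 years
# doubling every 10 years: ~160 years  (the "more than 100 years" case)
```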

[–] [email protected] 2 points 5 months ago

I'm still rocking a Galaxy Watch 4: one of the first Samsung watches with WearOS. It has a true always-on screen, as most should. The always-on display was essential to me: I generally notice within 60 minutes if an update or some "feature" tries to turn it off. Unfortunately, that's the only thing off about your comment.

It's a pretty rough experience. The battery is hit or miss. At good times, I could get 3 days. Keeping it locked (like after charging) used to kill it within 60 minutes (thankfully, that was fixed after a year). Bad updates can kill the battery life, even when the watch was new: from 3 days of life down to 10 hours, then back to 3 days. Now, after almost 3 years, it's more like 30 hours than 3 days.

In general, the battery lasts more than 24 hours with the always-on display. That'd be pretty acceptable for a smartwatch--but is it a smartwatch?

It can't play music on its own without overheating. It can't hold a phone call on its own without overheating. App support is limited, and the processor seems to struggle most of the time. Actually-smart features seem rare, especially for something that needs charging this often.

Most would be better off with a Pebble or less "smart" watch: better water resistance, better battery, longer support, 90% of the usable features, and other features to help make up for any differences.

[–] [email protected] 4 points 9 months ago (1 children)

Vote. Seriously. (If practical: get involved, too). The U.S. is currently in the middle of a large shift of generational power.

Many of these changes are fairly recent:

  • 2020 was the first federal election where the Baby Boomers didn't make up the largest voting generation.
  • It was only in 2016 that the Gen X and younger voting numbers grew larger than the Boomer and older numbers.
  • Those numbers had been possible since 2010: despite having more eligible voters (135M vs 93M), "Gen X and younger" cast only ~36M votes that year, compared to ~57M from older generations.

Looking forward, the numbers only get better for younger voters. There hasn't been a demographic shift like this in the U.S. in a long time (ever?). The current power structures cannot be maintained for much longer. It is still possible for that shift to be peaceful. Please encourage the peaceful transfer: vote. Vote in the primaries. Maybe even vote for better voting systems. This time is unique, but change takes time. Don't let them fool you otherwise: that's just them trying to hold on to their power.

[–] [email protected] 1 points 10 months ago (1 children)

To me, your attempt at defending it by calling it a retcon is an awkward characterization. Even in your last reply, you're now calling it an approximation. Dividing by 1024 is an approximation? Did computers have trouble dividing by 1000? Did it lead to a benefit in the 640KB/384KB split of the conventional memory model? Does it lead to a benefit today?

Somehow, every other computer measurement avoids this binary-prefix problem. Some people, like you, seem to defend it as the more practical choice compared to the "standard" usage every other unit follows (e.g., the 1.536 Mbps T1 or "54" Mbps 802.11g).

The confusion this continues to cause wastes quite a bit of time and money today. Vendors still show both units on the same spec sheets (open up a page to buy a computer/server). News outlets still report the difference as bloat. Customers still complain to customer support, which escalates up to management and back down to project management and development. It'd be one thing if this didn't waste time or cause confusion, but we're still doing it today. It's long past time to move on.

The standard for "kilo" was 1000 centuries before computer science existed. Things that need binary units have an option to use, but its probably not needed: even in computer science. Trying to call kilo/kibi a retcon just seems to be trying to defend the use of the 1024 usage today: despite the fact that nearly nothing else (even in computers) uses the binary prefixes.

[–] [email protected] 1 points 10 months ago

209GB? That probably doesn't include all of the RAM: there's more in the SSD, GPU, NIC, and similar. Ironically, I'd probably round it to 200GB if that were the standard, but it isn't. It wouldn't be much of a downgrade to go to 200GB from 192GiB. Are 192 and 209 really that different? It's not much different from remembering the numbers for a 1.44MB floppy, 1.544Mbps T1 lines, or the ~3.14159 approximation of pi. Numbers generally end up getting weird: keeping them in binary prefixes doesn't really change that.
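
The base conversions behind those examples (a minimal sketch; note 192GiB lands at ~206GB, with any gap to the quoted 209 presumably being that extra RAM):

```python
GIB = 2 ** 30    # gibibyte: 1,073,741,824 bytes
GB = 10 ** 9     # gigabyte: 1,000,000,000 bytes

print(f"192 GiB = {192 * GIB / GB:.1f} GB")        # 192 GiB = 206.2 GB

floppy = 1.44 * 1000 * 1024                        # the floppy "MB" mixes both bases
print(f"1.44 'MB' floppy = {floppy:,.0f} bytes = "
      f"{floppy / 10**6:.2f} MB = {floppy / 2**20:.2f} MiB")
# 1,474,560 bytes = 1.47 MB = 1.41 MiB
```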

The definition of kilo as 1000 was standard before computer science existed. If early computing used it in a non-standard way, that may have been common or a decent approximation at the time, but it was never the standard. Does that justify the situation today, where many vendors show both definitions on the same page when you buy a computer or a server? Does that justify the development time and confusion from people still not understanding the difference? Was it worth the PR response Samsung had to give to, yet again, point out the difference?

It'd be one thing if this confusion had stopped years ago and everyone understood the difference today, but we're not there: and we're probably not going to get there. We have binary prefixes; it's long past time to use them when appropriate--but even the appropriate uses are far fewer than they appear. It's not like you have a practical 640KiB/2GiB limit per program anymore. Even in the cases where you do: is it worth confusing millions or billions of people on consumer spec sheets?

[–] [email protected] -2 points 10 months ago* (last edited 10 months ago) (8 children)

This is all explained in the post we're commenting on. The standard "kilo" prefix, from the metric system, predates modern computing and even the definition of the byte: the 1700s vs the 1900s. It seems very odd to argue that the older definition is the one doing the retconning.

The binary usage in software was/is common, but it's definitely more recent, and it causes a lot of confusion because it doesn't match the older and bigger standard. Computers are very good at numbers; they never should have tried to hijack an existing prefix, especially one already defined by international standards. One might argue that the US hadn't really adopted the metric system at that point, but the use of 1000 for kilo is clearly older than the use of 1024 for the kilobyte. The genuinely new thing here (within the last 100 years) is that 1024 bytes is a kibibyte.

Kibi is the retcon. Not kilo.
