Irisos

joined 1 year ago
[–] [email protected] 0 points 10 months ago* (last edited 10 months ago)

Solar and wind are not cheap enough

Solar on its own works for somewhere between a bit under 8 hours and 16 hours a day, depending on which solstice you are closest to.

And that's the theoretical best.

In reality, efficiency drops during summer because of the record temperatures every year, and while winters are sunnier (haven't seen snow in 7-8 years, by the way), production is still relatively low.

If you want it to run 24/7, you need to build batteries, which adds more carbon and cost. And that's on top of the maintenance cost of the panels themselves.

Wind can work 24/7, but you cannot predict it long term.

Wind too strong? The plant gets stopped. Wind too weak? Subpar production. And with climate change, expectations built on just a few years of data can change very rapidly.

So how do you make sure you produce the same amount of energy with certainty? You build oversized farms, more expensive than what you theoretically predicted.
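Rough back-of-envelope, assuming a solar capacity factor somewhere around 15-25% (illustrative numbers only): to deliver 1 GW of firm power you already need several GW of panels, before even counting storage losses.

\[
P_{\text{installed}} \approx \frac{P_{\text{firm}}}{\text{capacity factor}} = \frac{1\ \text{GW}}{0.15 \text{ to } 0.25} \approx 4 \text{ to } 7\ \text{GW}
\]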

There is also the problem of land.

A wind or solar farm requires a lot of land compared to nuclear if you want to approach the same power production.

That land could instead be used for housing, farming or anything else.

By comparison, a nuclear plant can easily be walked around in a few minutes and produces over a gigawatt of power.
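Very rough numbers, assuming solar farms average somewhere around 5-10 W per m² of land over the year and that a ~1 GW nuclear site occupies a few km²:

\[
A_{\text{solar}} \approx \frac{1\ \text{GW}}{5 \text{ to } 10\ \text{W/m}^2} \approx 100 \text{ to } 200\ \text{km}^2
\qquad \text{vs.} \qquad
A_{\text{nuclear}} \approx 1 \text{ to } 4\ \text{km}^2
\]

That's on the order of a 50-100x difference in land footprint for the same average output.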

Once you compound everything, nuclear is the best solution we have at our current technology level, but ridiculous anti-nuclear propaganda treats it like the work of the devil. (My green party almost closed several nuclear power plants, at the start of the Russian war, to open gas power plants instead. Like, WTF?)

So what will the rich people do?

Refuse to build nuclear because their fearmongering to push gas/oil backfired on humanity, and refuse to build solar/wind because you could build 50 Disneylands on the same land.

I would love for them to eat their shit and still choose either solution. But it's only a dream.

[–] [email protected] 34 points 10 months ago* (last edited 10 months ago) (2 children)

I just accept our fate.

Humanity will probably realize we seriously fucked up around 2050, and near the end of the century mass migration will lead to a death count much bigger than WW2 or the Chinese civil wars.

The only saving grace is that most of us reading this thread will die from various other causes before the second stage.

I will still do my part by reducing my CO2 footprint, but unless we find some miracle technology producing nuclear-plant levels of energy for the cost of a coal plant, shitty world leaders and corporations will ruin everything for fake wealth.

[–] [email protected] 6 points 11 months ago* (last edited 11 months ago)

Let's not forget the unmentioned income from gathering all that user data.

The real value of YouTube (and social media in general) is not the raw revenue it generates.

It's being able to predict what will trend in advance and sell ads to anyone, anywhere. Which is proven by their 200+ billion in ad revenue across all their services.

It's extremely likely that in an alternate universe where Google doesn't own YouTube, their profit today would be lower than it currently is.

But like you said, poor YouTube isn't explicitly making money on its own, so they'll use that to justify any price increase attempt even though they already know what the real money maker is.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

Generating a large number of utterances to train my cloud service's language model for a bot, because I'm sure not writing hundreds of utterances that all ask the same thing.

[–] [email protected] 1 points 1 year ago

For development, I have a single image per project tagged "dev" running locally in WSL that I overwrite over and over again.

For real builds, I use pipelines on my Azure DevOps server to build the image on an agent through a remote BuildKit container and push it to my internal registry. All three components are hosted in the same Kubernetes cluster.
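Stripped down, the build step is just a buildctl call against the remote BuildKit daemon. This is a sketch: the service address and registry name below are placeholders, not my real ones.

```
# Azure DevOps script step: build via the remote BuildKit daemon and push.
# BUILD_BUILDNUMBER is the predefined Azure DevOps variable exposed as an env var.
buildctl --addr tcp://buildkitd.buildkit.svc.cluster.local:1234 \
  build \
  --frontend dockerfile.v0 \
  --local context=. \
  --local dockerfile=. \
  --output type=image,name=registry.internal.example/myapp:"$BUILD_BUILDNUMBER",push=true
```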

[–] [email protected] 21 points 1 year ago* (last edited 1 year ago) (2 children)

They could implement restrictions to block VPN traffic. But that would be repealed as fast as it came, the moment those very congressmen angrily phone their district asking why they can't work from their million-dollar homes anymore.

Support: Sorry, VPNs are now blocked and you cannot work remotely without them

Congressman: Who are the idiots that voted for these laws?

Support: Well, you and your friends

 

Hello everyone,

Recently I returned to managing a Kubernetes cluster in my homelab with Ansible on RHEL distros. Since I hadn't touched the installation stages in quite a long time, I started looking for tutorials covering everything from the base installation to the CNI configuration, MetalLB setup and metrics-server installation.

In every single tutorial, I have seen major issues that made me pull my hair out:

  • First and worst, most tutorials obviously have the firewall disabled or tell you to deactivate it. Just. No. I know deactivating it makes everything much easier and many issues disappear as soon as you run a systemctl stop firewalld. But if you want to teach correctly, you wouldn't recommend something that would get you fired on the spot. (See the firewalld sketch after this list for what they should show instead.)

  • CNI installations are straightforward but miss important information for troubleshooting. Stuff like putting the flannel interfaces in the internal zone or adding some direct forwarding rules to firewalld can be necessary, but again, everyone and their mother has their firewall off, so they never talk about it.

  • With MetalLB, the ConfigMap used by the speakers is not created automatically by the official manifest. It's impossible to miss, since the speakers straight up do not start and the logs are clear. Yet I have never seen a single tutorial mention it (example after this list).

  • Again with MetalLB, if the controller lands on a worker node, the webhooks are not reachable and you cannot configure the load balancer. It's rare-ish and easy to fix, but again, I've never seen any mention of it.

  • While Flannel, MetalLB, Weave, ... clearly state which ports you need to open for their solutions, tutorials never do (firewall? Anyone?). The firewalld sketch after this list covers the usual ones.

  • The metrics server has some... particularities (like the need to modify the startup arguments or the dnsPolicy). Those are easily found in the GitHub issues because of how frequent they are, but I can never seem to find a tutorial mentioning that extra configuration (see the patch example after this list).

  • Various basic stuff, like a worker node plus a CNI being needed for CoreDNS and the master node to become Ready. Or how to verify that your deployment of ingress/CNI/MetalLB is working correctly. If you are familiar with Kubernetes, it's not too hard to find the solution to those, but when most of your audience isn't, it should at least be explicit enough to share a random nginx manifest to test that everything is good (like the smoke test after this list).
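To make the firewall point concrete, here is roughly what the firewalld setup looks like instead of just turning it off. This is a sketch for a kubeadm + Flannel + MetalLB (L2) cluster; the ports come from those projects' docs, while the zone and interface names are from my own setup, so adapt as needed.

```
# control-plane node: ports kubeadm actually needs
sudo firewall-cmd --permanent --add-port=6443/tcp                        # kube-apiserver
sudo firewall-cmd --permanent --add-port=2379-2380/tcp                   # etcd
sudo firewall-cmd --permanent --add-port=10250/tcp                       # kubelet
sudo firewall-cmd --permanent --add-port=10257/tcp --add-port=10259/tcp  # controller-manager, scheduler

# Flannel VXLAN overlay and MetalLB L2 memberlist (all nodes)
sudo firewall-cmd --permanent --add-port=8472/udp
sudo firewall-cmd --permanent --add-port=7946/tcp --add-port=7946/udp

# put the CNI interfaces in a more permissive zone instead of killing the firewall
sudo firewall-cmd --permanent --zone=internal --add-interface=flannel.1
sudo firewall-cmd --permanent --zone=internal --add-interface=cni0

# masquerading / direct forward rules that some CNI + firewalld combinations need
sudo firewall-cmd --permanent --add-masquerade
sudo firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -i cni0 -j ACCEPT

sudo firewall-cmd --reload
```

Worker nodes additionally need 10250/tcp and the NodePort range 30000-32767/tcp open.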
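And the MetalLB configuration piece that tutorials skip. This is the legacy ConfigMap format used by the pre-0.13 manifests (newer releases replaced it with IPAddressPool and L2Advertisement CRDs); the address range is just an example pool on a home LAN.

```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
EOF
```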
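For the metrics server, the "particularities" boil down to something like this in a homelab where kubelets use self-signed certificates (flags straight from the metrics-server issues; the dnsPolicy patch is only needed when node names don't resolve from inside the cluster).

```
# allow scraping kubelets that have self-signed certificates
kubectl -n kube-system patch deployment metrics-server --type=json -p='[
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"},
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-preferred-address-types=InternalIP"}
]'

# optional: resolve node names through the host resolver instead of CoreDNS
kubectl -n kube-system patch deployment metrics-server \
  -p '{"spec": {"template": {"spec": {"dnsPolicy": "Default"}}}}'
```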
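And finally the kind of smoke test every tutorial should end with: a throwaway nginx deployment exposed through MetalLB, which exercises the CNI, the firewall rules and the load balancer in one go.

```
kubectl create deployment nginx-smoke-test --image=nginx
kubectl expose deployment nginx-smoke-test --port=80 --type=LoadBalancer
kubectl get svc nginx-smoke-test                 # EXTERNAL-IP should come from the MetalLB pool
curl http://<EXTERNAL-IP>                        # should return the default nginx welcome page
kubectl delete svc,deployment nginx-smoke-test   # clean up afterwards
```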

This is mainly a rant, because it is crazy to see that a tutorial that is supposed to explain the documentation, only faster, ends up utterly useless because of course you won't get any forwarding issues between interfaces if your device is an open bar.

And most of them are like this.

So, to everyone who also tried to follow tutorials to set up their cluster: what was your experience with them? Were they also useless, or did you find a gem that didn't simply copy-paste the documentation and take screenshots of a working cluster without ever trying its own guide?

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

ReVanced Extended ...

Not ReVanced

[–] [email protected] 22 points 1 year ago* (last edited 1 year ago)

Even if they updated to a new version, the hacked-client developers would probably just update their custom clients to ignore Mojang's server ban list and call it a day.

The new EULA is a joke, and it's easier to tell everyone to use a third-party client (which is most of the time better than the official one) than to try to abide by it.

[–] [email protected] 20 points 1 year ago* (last edited 1 year ago) (2 children)

It's fine. He just exchanged trademarks with Egosoft https://twitter.com/EGOSOFT/status/1683477783584858115 so he is in the clear 👍

[–] [email protected] 7 points 1 year ago* (last edited 1 year ago) (1 children)

In addition to what the other commenter said, Mozilla doesn't have the will to improve Firefox into a market contender.

They get a lot of free money from their competitor to keep legislation from going after Chromium's market monopoly, which makes them prioritize making Google happy over making their users happy.

They also have very controversial stances on actually useful features such as progressive web apps (where support was only added on Android, and only after a lot of complaints). You can't make your browser a market contender if you act like Safari on PC.

10 years ago when we had a 3 way market, Mozilla actually cared about making a good product.

Nowadays, they are just Google's shell company for keeping Chrome's dominance away from anti-competition lawsuits.

[–] [email protected] 2 points 1 year ago

If it is using ChatGPT as a backend, my guess is that they are using Azure OpenAI and know what they are doing.

Azure OpenAI allows you to turn off abuse monitoring and content filtering if you have legitimate reasons to do so.

It would be very hard for a malicious actor to get approval to turn off both through a front company. But if one managed to do it, they could create such a malicious ChatGPT service with little to no chance of being found out.

[–] [email protected] 1 points 1 year ago

If you are not using any HA features and only put servers in the same cluster for ease of management, you could use the same command but with a value of 1.

The reason quorum exists is to prevent any server from arbitrarily failing over VMs when it believes the other node(s) are down, which would create a split-brain situation.

But if that risk doesn't exist to begin with, neither does the need for quorum.
