0
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]

Lemmy implements a scoring system allowing people to upvote or downvote posts. You know that since you are using Lemmy :)

Scores can be used to raise or lower the visibility of posts, in particular under some sorting algorithms (active, hot, top).

This can increase the visibility of good-quality posts and lower that of low-quality or irrelevant ones.
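To make the mechanism concrete, here is a sketch of a "hot"-style ranking function of the kind such sites use: score boosts rank logarithmically while age decays it. This is an illustration, not Lemmy's actual implementation; the constants and exact formula are assumptions.

```go
package main

import (
	"fmt"
	"math"
)

// hotRank sketches a "hot" sorting score: upvotes raise the rank
// logarithmically (diminishing returns), while age decays it
// polynomially. The constants here are illustrative assumptions,
// not Lemmy's actual values.
func hotRank(score int, ageHours float64) float64 {
	return 10000 * math.Log10(math.Max(1, float64(3+score))) /
		math.Pow(ageHours+2, 1.8)
}

func main() {
	fmt.Printf("fresh:   %.1f\n", hotRank(50, 1))  // recent and upvoted: ranks high
	fmt.Printf("two days: %.1f\n", hotRank(50, 48)) // same score, older: ranks low
}
```

The takeaway is that under such a sort, a burst of early votes largely decides which posts anyone ever sees.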

Yet, from what I observe, the tool is mostly used by communities to self-administer a filter bubble. Some communities seem to behave like a hive mind, massively upvoting or downvoting until the dissident is either assimilated, in a very Borg way, or excommunicated.

Scores also seem to be used often to convey cheap moral judgement, without the need to expose oneself to criticism by providing arguments to support one's opinion.

Overall, I think scores are more toxic than useful, and I would be in favor of hiding them by default, so that newcomers are not put off by them.

What is your opinion about this? What are the advantages of having the score visible by default?

Just a clarification: the question is not "should scores exist or not?". If people find value in scores, good for them. I'm not one to dictate other people's preferences. :)

1
Book: Ethics Into Action (www.goodreads.com)
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]

An excellent book for all vegan activists to improve their strategy, their communication, and their actual impact on animal rights and well-being.

2
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]

cross-posted from: https://lemmy.ml/post/171118

On the grounds that "we are better equipped", Go will now ignore the order of the CipherSuites option, starting with Go 1.18, due this month.

The sorting logic is detailed in the code.

Several choices seem strange to me:

  • "SHA-256 variants of the CBC ciphersuites don't implement any Lucky13 countermeasures." leading to CBC-SHA1 being favored over CBC-SHA256.
  • "AES comes before ChaCha20", on the account that AES-NI is faster. They use heuristics to determine whether both ends support AES-NI and whether to prefer ChaCha20 over AES.
  • "AES-128 comes before AES-256", on the account that AES-256 is slower.

The static nature of the sorting algorithm also leads to security conundrums: if a vulnerability is found in an algorithm implementation (e.g. Lucky13 against Go's CBC-SHA256 implementation), mitigating it will require updating the Go library and recompiling every program; you won't be able to simply lower that suite's priority by updating a config file.

What's your take on this? Can you explain some of the choices that feel strange to me?

[-] [email protected] 1 points 2 years ago

Good article. Thank you. You make some excellent points.

I agree that source access is not sufficient to get secure software, and that the many-eyes argument is often wrong. However, I am convinced that transparency is a requirement for secure software. As a consequence, I disagree with some points, especially this one:

It is certainly possible to notice a vulnerability in source code. Excluding low-hanging fruit, it’s just not the main way they’re found nowadays.

In my experience as a developer, the vast majority of vulnerabilities are caught by linters, static source-code analysis, source-aware fuzzers, and peer reviews. What blackbox testing (dynamic, static, and negative) and scanners catch are the remaining bugs and vulnerabilities that were not caught during the development process. When using closed-source software, you have no idea whether the developers used these tools (software and internal validation), so yes: you may get excellent results with blackbox testing. But that may just be a sign that they did not do their due diligence during the development phase.

As an ex-pentester, I can assure you that blackbox security tools returning no findings is not at all a sign that the software is secure. They may fail to spot flawed logic leading to disaster, for instance.

And yeah, I agree that static analysis has its limits and that running the damn code is necessary, because unit tests, integration tests, and load tests can only get you so far. That's why big companies also do blue/green deployments, etc.

But I believe this is not an argument for saying that closed-source software may be secure if tested that way. Dynamic analysis is just one tool in a defense-in-depth strategy: a required one, but certainly not a sufficient one.

Again, great article, but I believe that you may not be paranoid enough 😁 Which might be a good thing for you 😆 Working in security is bad for one's mental health 😂

1
Décision n° 2022-835 DC du 21 janvier 2022 (www.conseil-constitutionnel.fr)
submitted 2 years ago by [email protected] to c/[email protected]
