this post was submitted on 08 Feb 2024
20 points (100.0% liked)

Firefox


The latest news and developments on Firefox and Mozilla, a global non-profit that strives to promote openness, innovation and opportunity on the web.


While we are not an official Mozilla community, we have adopted the Mozilla Community Participation Guidelines as far as they can be applied here.

Rules

  1. Always be civil and respectful
    Don't be toxic, hostile, or a troll, especially towards Mozilla employees. This includes gratuitous use of profanity.

  2. Don't be a bigot
    No form of bigotry will be tolerated.

  3. Don't post security compromising suggestions
    If you do, include an obvious and clear warning.

  4. Don't post conspiracy theories
    Especially ones about nefarious intentions or funding. If you're concerned: Ask. Please don’t fuel conspiracy thinking here. Don’t try to spread FUD, especially against reliable privacy-enhancing software. Extraordinary claims require extraordinary evidence. Show credible sources.

  5. Don't accuse others of shilling
    Send honest concerns to the moderators and/or admins, and we will investigate.

  6. Do not remove your help posts after they receive replies
    Half the point of asking questions in a public sub is so that everyone can benefit from the answers, which is impossible if you delete everything behind yourself once you've gotten yours.

founded 1 year ago

Today marks a significant moment in our journey, and I am thrilled to share some important news with you. After much thoughtful consideration, I have decid…

top 2 comments
[–] [email protected] 1 points 8 months ago (1 children)

After reading this article, I predict a ChatGPT-style AI feature will be built into Firefox in the future. This could be very useful if done right. I just hope there is an OFF switch for anyone who doesn't want to use it.

[–] [email protected] 3 points 8 months ago

@[email protected] @[email protected] Local LLM execution can be very fast on recent consumer hardware. There's no need to send anything anywhere; just like their translation feature, it can all run on-device.
As an example, with no optimization or GPU support, my @[email protected] (AMD) machine generates around 5 characters/sec from a 4-gigabyte pre-quantized model.
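
The throughput figure above translates into rough response-time estimates. A minimal sketch of that arithmetic follows; the ~4 characters-per-token ratio is an assumption for illustration, not a number from the comment:

```python
# Back-of-envelope estimates for on-device text generation.
# Known figure from the comment: ~5 characters/sec on an unoptimized CPU.
# Assumption (not from the comment): ~4 characters per token on average.

def generation_time_seconds(n_chars: int, chars_per_sec: float = 5.0) -> float:
    """Estimated wall-clock time to generate n_chars characters."""
    return n_chars / chars_per_sec

def tokens_per_sec(chars_per_sec: float = 5.0, chars_per_token: float = 4.0) -> float:
    """Convert character throughput to approximate token throughput."""
    return chars_per_sec / chars_per_token

# A 400-character reply would take about 80 seconds at 5 chars/sec,
# which corresponds to roughly 1.25 tokens/sec under the assumed ratio.
print(generation_time_seconds(400))  # 80.0
print(tokens_per_sec())              # 1.25
```

At that rate a short paragraph arrives in about a minute, which is slow for chat but workable for background tasks, and GPU support or quantization-aware optimization would raise it considerably.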