Technology

34728 readers
183 users here now

This is the official technology community of Lemmy.ml for all news related to the creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask in a DM before posting product reviews or ads; otherwise, such posts are subject to removal.


Rules:

1: All Lemmy rules apply

2: No low-effort posts

3: NEVER post naziped*gore stuff

4: Always post article URLs or their archived-version URLs as sources, NOT screenshots. This helps blind users.

5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)

6: No advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: Crypto-related posts, unless essential, are disallowed

founded 5 years ago

first there was digg, then there was reddit, but now...


cross-posted from: https://beehaw.org/post/719121

This blog post by Ploum, who was part of the original XMPP efforts long ago, describes how Google killed one great federated service, and shows why the Fediverse must not give Meta the same chance.


I’m currently using Eero (https://eero.com/) for my home network. It mostly works well and makes it easy for my partner to enable and disable our kids’ devices at bedtime, etc. But the interface is quite slow, and I worry about being so cloud- and Amazon-dependent.

I’m wondering if there’s a local-only, ideally open, alternative? Most alternatives, e.g. Ubiquiti, seem to be becoming cloud-based, and the likes of OpenWrt aren’t very partner-friendly.

Is there a middle ground? My requirements are modest, just a few wireless access points plus a handful of wired devices. Internet is max 1 gigabit.


Huffman has said, "We are not in the business of giving that [Reddit's content] away for free." That stance makes sense. But it also ignores the reality that all of Reddit's content has been given to it for free by its millions of users. Further, it leaves aside the fact that the content has been orchestrated by its thousands of volunteer moderators.

touché


Now, it appears that Yaccarino's management tactics were so effective that Google is weighing the benefits of forming "a broader partnership" with Twitter, possibly investing more in Twitter ads or paying to access Twitter data.

Is someone testing the waters for offloading a depreciating asset to Google?


You may be able to pay for purchases and get into train stations without having to physically touch your phone to an NFC terminal in the future. The NFC Forum, which defines the standards for NFC, has revealed a roadmap for key research and plans for near field communication through 2028. Apparently, one of the main priorities for the future of the technology is to increase its range. At the moment, NFC only works if two enabled devices are within 5 millimeters of each other, but the group says it's currently examining ranges that are "four to six times the current operating distance."

That's 30 millimeters or 1.18 inches at most, but it could enable faster transactions and fewer failed ones overall, seeing as a longer range also means there's a lower precision requirement for antenna alignment. In addition, the forum is looking to improve the current NFC wireless charging specification of 1 watt to 3 watts. The capability will bring wireless charging to "new and smaller form factors," the forum said, but didn't give examples of what those form factors could look like.
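The range figures quoted above can be sanity-checked with a couple of lines of arithmetic (assuming the 5 mm baseline the article states):

```python
# Quick check of the article's figures (assumed baseline: 5 mm NFC operating range).
current_range_mm = 5
extended_mm = [current_range_mm * k for k in (4, 5, 6)]  # "four to six times"
max_inches = round(max(extended_mm) / 25.4, 2)           # convert mm to inches
print(extended_mm, max_inches)  # [20, 25, 30] 1.18
```

This matches the article's "30 millimeters or 1.18 inches at most".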

Another potential future NFC capability will support several actions with a single tap. Based on the sample use cases the forum listed — point-to-point receipt delivery, loyalty identification and total-journey ticketing — we could be looking at the possibility of being able to validate transit tickets or venue tickets for the whole family with just one tap or a single device. NFC-enabled smartphones could have the power to serve as point-of-sale devices in the future, as well. Apple's Tap to Pay feature already lets iPhone owners use their phones as payment terminals. But a standardized capability would allow more people, especially in developing countries where Android is more prevalent, to use their devices to offer payments for their small businesses and shops.

These plans are in varying stages of development right now, with some further along than others. The forum doesn't have a clear timeline for their debut yet, but it said that the timeframe for its plans spans two to five years.

NFC tech could get faster and go fully contactless within the next five years


cross-posted from: https://lemmy.world/post/440237

The latest update of Koboldcpp v1.32 brings significant performance boosts to AI computations at home, enabling faster generation speeds and improved memory management for several AI models like MPT, GPT-2, GPT-J and GPT-NeoX, plus upgraded K-Quant matmul kernels for OpenCL.

By leveraging GPU power via OpenCL and implementing optimized programming techniques, it allows hobbyist and enthusiast users to run these advanced models more efficiently on home hardware.

LostRuins' Koboldcpp v1.32 GitHub Patch Notes

  • Ported the optimized K-Quant CUDA kernels to OpenCL! This speeds up K-Quant generation by about 15% with CL (Special thanks: @0cc4m)
  • Implemented basic GPU offloading for MPT, GPT-2, GPT-J and GPT-NeoX via OpenCL! It still keeps a copy of the weights in RAM, but generation speed for these models should now be much faster! (50% speedup for GPT-J, and even WizardCoder is now 30% faster for me.)
  • Implemented scratch buffers for the latest versions of all non-llama architectures except RWKV (MPT, GPT-2, NeoX, GPT-J). BLAS memory usage should be much lower on average, and larger BLAS batch sizes will be usable on these models.
  • Merged GPT-tokenizer improvements for non-llama models and added support for StarCoder's special tokens. Coherence for non-llama models should be improved.
  • Updated Lite, pulled updates from upstream, and made various minor bugfixes.

To use it, download and run koboldcpp.exe, which is a one-file PyInstaller build. Alternatively, drag and drop a compatible ggml model onto the .exe, or run it and manually select the model in the popup dialog.

Once loaded, you can connect like this (or use the full KoboldAI client):

http://localhost:5001
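You can also talk to the server programmatically. As a minimal sketch (assuming koboldcpp's KoboldAI-compatible HTTP API at its default port, with the `/api/v1/generate` endpoint; the helper names here are illustrative, and field names beyond `prompt`/`max_length` are omitted):

```python
import json
from urllib import request

# Assumed default koboldcpp address and KoboldAI-compatible generate endpoint.
API_URL = "http://localhost:5001/api/v1/generate"

def build_generate_request(prompt: str, max_length: int = 80) -> request.Request:
    """Build a JSON POST request for the /api/v1/generate endpoint."""
    payload = json.dumps({"prompt": prompt, "max_length": max_length}).encode("utf-8")
    return request.Request(API_URL, data=payload,
                           headers={"Content-Type": "application/json"})

def generate(prompt: str, max_length: int = 80) -> str:
    """Send the prompt to a locally running koboldcpp server and return the completion."""
    with request.urlopen(build_generate_request(prompt, max_length)) as resp:
        return json.load(resp)["results"][0]["text"]

# Requires a running server, e.g.:
# print(generate("Once upon a time"))
```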

For more information, be sure to run the program with the --help flag.

Want to run AI models at home? Check out Koboldcpp on GitHub, an inference engine made by LostRuins, or any of the many other options you can download and run at home for free.


there goes the ear licking ASMR 😔

submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 
 

I still subscribe to YouTube TV and don’t care for an alternative, but losing yet another sports channel sucks considering I’m a big baseball fan.


I wonder if higher quality datasets are the future rather than using tons of internet scraped texts. Either way, neat model!


Hi! I just got a new computer recently, and I'm concerned about browser security and security in general. Any recommendations for a good secure browser? Preferably open source.
