Technology

34728 readers
91 users here now

This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask in DM before posting product reviews or ads. All such posts otherwise are subject to removal.


Rules:

1: All Lemmy rules apply

2: Do not post low effort posts

3: NEVER post naziped*gore stuff

4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.

5: personal rants of Big Tech CEOs like Elon Musk are unwelcome (does not include posts about their companies affecting wide range of people)

6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: crypto related posts, unless essential, are disallowed

founded 5 years ago
MODERATORS
2026
2027
2028
2029
2030
 
 

cross-posted from: https://lemmy.ml/post/1809408

Education technology, or EdTech, is increasingly shaping and influencing the day-to-day experiences of students, teachers, and administrators. Recognizing the importance of education to the digital economy, corporations are capturing emerging markets in schools and higher education institutions through the process of digital colonialism.

2031
2032
 
 

Twitter is threatening legal action against Meta over its new text-based “Twitter killer” platform, accusing the social media giant of poaching former employees to create a “copycat” application.

2033
 
 

Meta’s Twitter alternative promises that it will work with decentralized platforms, giving you greater control of your data. You can hold the company to that—if you don't sign up.

2034
2035
2036
2037
 
 

cross-posted from: https://lemmy.world/post/1102882

On 07/05/23, OpenAI Has Announced a New Initiative:

Superalignment

Here are a few notes from their article, which you should read in its entirety.

Introducing Superalignment

We need scientific and technical breakthroughs to steer and control AI systems much smarter than us. To solve this problem within four years, we’re starting a new team, co-led by Ilya Sutskever and Jan Leike, and dedicating 20% of the compute we’ve secured to date to this effort. We’re looking for excellent ML researchers and engineers to join us.

Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.

While superintelligence seems far off now, we believe it could arrive this decade.

Here we focus on superintelligence rather than AGI to stress a much higher capability level. We have a lot of uncertainty over the speed of development of the technology over the next few years, so we choose to aim for the more difficult target to align a much more capable system.

Managing these risks will require, among other things, new institutions for governance and solving the problem of superintelligence alignment:

How do we ensure AI systems much smarter than humans follow human intent?

Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue. Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs.

Other assumptions could also break down in the future, like favorable generalization properties during deployment or our models’ inability to successfully detect and undermine supervision during training.

Our approach

Our goal is to build a roughly human-level automated alignment researcher. We can then use vast amounts of compute to scale our efforts, and iteratively align superintelligence.

To align the first automated alignment researcher, we will need to 1) develop a scalable training method, 2) validate the resulting model, and 3) stress test our entire alignment pipeline:

  • 1.) To provide a training signal on tasks that are difficult for humans to evaluate, we can leverage AI systems to assist evaluation of other AI systems (scalable oversight). In addition, we want to understand and control how our models generalize our oversight to tasks we can’t supervise (generalization).
  • 2.) To validate the alignment of our systems, we automate search for problematic behavior (robustness) and problematic internals (automated interpretability).
  • 3.) Finally, we can test our entire pipeline by deliberately training misaligned models, and confirming that our techniques detect the worst kinds of misalignments (adversarial testing).

We expect our research priorities will evolve substantially as we learn more about the problem and we’ll likely add entirely new research areas. We are planning to share more on our roadmap in the future.

The new team

We are assembling a team of top machine learning researchers and engineers to work on this problem.

We are dedicating 20% of the compute we’ve secured to date over the next four years to solving the problem of superintelligence alignment. Our chief basic research bet is our new Superalignment team, but getting this right is critical to achieve our mission and we expect many teams to contribute, from developing new methods to scaling them up to deployment.

Click Here Read More.

I believe this is an important notch in the timeline to AGI and Synthetic Superintelligence. I find it very interesting OpenAI is ready to admit the proximity of breakthroughs we are quickly encroaching as a species. I hope we can all benefit from this bright future together.

If you found any of this interesting, please consider subscribing to /c/FOSAI!

Thank you for reading!

2038
2039
 
 

Received this in my email what do you all think?

2040
 
 

cross-posted from: https://lemmy.ca/post/1185025

Meta can introduce their signature rage farming to the Fediverse. They don't need to control Mastodon. All they have to do is introduce it in their app. Show every Threads user algorithmically filtered content from the Fediverse precisely tailored for maximum rage. When the rage inducing content came from Mastodon, the enraged Thread users will flood that Mastodon threads with the familiar rage-filled Facebook comment section vomit. This in turn will enrage Mastodon users, driving them to engage, at least in the short to mid term. All the while Meta sells ads in-between posts. And that's how they rage farm the Fediverse without EEE-ing the technology. Meta can effectively EEE the userbase. The last E is something Meta may not intend but would likely happen. It consists of a subset of the Fediverse users leaving the network or segregating themselves in a small vomit-free bubble.

2041
 
 

I am actually looking forward to threads taking off. I, as a mastodon user, will be able to follow my friends, celebrities, artists and interact with them when federation is activated. It is hard to get friends on to mastodon. The software is great and is better than Twitter, but the people are not on Mastodon but on Twitter, Instagram, etc.

Now, I know platforms by Meta (Facebook) are terrible and spy and leech out pretty much all the data from users. I am also aware that Meta has some hidden agenda behind the launch of threads. Yes, I also read about EEE(Embrace, Extend and Extinguish). Even if Threads decides to drop federation/activitypub, I don't think the fediverse will be harmed. I quote the founder of Mastodon

There are comparisons to be made between Meta adopting ActivityPub for its new social media platform and Meta adopting XMPP for its Messenger service a decade ago. There was a time when users of Facebook and users of Google Talk were able to chat with each other and with people from self-hosted XMPP servers, before each platform was locked down into the silos we know today. What would stop that from repeating? Well, even if Threads abandoned ActivityPub down the line, where we would end up is exactly where we are now. XMPP did not exist on its own outside of nerd circles, while ActivityPub enjoys the support and brand recognition of Mastodon.

I think many instance admins are all ready to defederate with threads. It just doesn't feel right. Imo, we should welcome users from threads and see how it goes.

What are your thoughts?

2042
2043
2044
2045
 
 

Today, Meta is launching its new microblogging platform called Threads. What is noteworthy about this launch is that Threads intends to become part of the decentralized social web by using the same standard protocol as Mastodon, ActivityPub. There’s been a lot of speculation around what Threads will be and what it means for Mastodon. We’ve put together some of the most common questions and our responses based on what was launched today....

2046
 
 

So it begins.

2047
2048
 
 

Original announcement on Hacker News

There was a thread on HN asking for people to post their personal blogs. Another HNer scraped the thread to create a website with a list of blogs with an intro in the words of the blog owner.

It reminds me of the olden days, pre Altavista when you depended on curated lists and indices to find stuff on the internet. Pre that we had WAIS and Gopher. I started out "browsing" with telnet.

2049
2050
view more: ‹ prev next ›