226
 
 
The original post: /r/linux by /u/Melioodaas on 2024-12-21 00:15:29.

I've been wanting to get into Linux for a few weeks and have been thinking about putting Mint on my laptop. The only thing stopping me is that my laptop is a 360° convertible with a touchscreen. Is there any way to keep the touchscreen functionality and possibly the screen rotation?

227
 
 
The original post: /r/linux by /u/FunWithSkooma on 2024-12-20 23:49:47.

Thanks to everyone on r/linux4noobs for all the help. I've been exploring Linux since the introduction of the Steam Deck, watching the amazing evolution of gaming on Linux, first with Wine and similar programs, and now with Proton, which has made it the ultimate seamless experience. I'm using Bazzite as my gaming distro, and so far, everything has been amazing. I have little to no experience with Linux, but so far, nothing has been a barrier.

screw you Windows LOOOL

228
 
 
The original post: /r/linux by /u/IntensiveVocoder on 2024-12-20 22:53:02.

The NUC 14 Pro is the first of the NUCs released after Intel licensed that business to ASUS. The design still mirrors traditional Intel NUCs, and ASUS validates NUCs for Linux just as Intel did prior to the transition. Within Intel's naming scheme, the NUC 14 Pro is "Revel Canyon," and ASUS continues to offer units as complete mini PCs (with Windows, memory, and storage), or as barebones kits for users to add their own components.

The Linux validation and barebones availability are why NUCs are my go-to system for running Linux on the desktop, as they often just work, eliminating the need for manual configuration of graphics, sound, or networking on modern distributions. For day-to-day use, a full-size PC is a bit overkill, as I've got a NAS for bulk storage and an external DVD drive for the rare occasion that I need one. I use a standing desk, so a PC that fits below my monitor is more convenient than routing cables for a full-size tower on the floor.

The internals of the NUC 14 Pro.

I've taken a few more photos of the setup process, and they're in this Imgur album as r/Linux is set to allow only one photo per post.

Being upfront, ASUS sent the NUC 14 Pro for this review, and Patriot provided the RAM and SSD. Neither company read the review prior to posting. I'm striving to be objective, though as the lead moderator of r/IntelNUC, I'm clearly enthusiastic about NUCs and SFF PCs generally. From a personal perspective, I've used Linux for a decade—for half of that time, as my only OS, though I use Windows, Mac, and Linux about equally now—and I've been a NUC user since 2018.

Introduction

Fortunately, ASUS retained the design and strategy that made the NUCs useful: like previous NUCs, the NUC 14 Pro is available as "slim" units, which support two M.2 SSDs, or "tall" units, which also support a 2.5" SATA HDD or SSD up to 15mm tall, and NUCs are still primarily sold as barebones "kit" systems for the user to add their own memory and storage.

There are five processor options: the Core Ultra 7 155H, Core Ultra 5 125H, and Core 3 100U, plus the vPro-enabled Core Ultra 7 165H and Core Ultra 5 135H. Generally, vPro is only used by businesses for fleet management. These CPUs are nearly identical to the non-vPro versions, so there's no advantage for consumers in buying the comparatively expensive vPro versions.

While the NUC 14 Pro is the standard 4×4" square, there are other NUCs available. The NUC 14 Pro+ is slightly larger and adds a Core 9 185H option (but has no 2.5" drive bay), and the NUC 14 Pro AI uses Intel's Lunar Lake SoC, which uses on-package memory, so only the SSD can be replaced. The NUC 14 Performance includes an NVIDIA RTX 40 Series Laptop GPU, and is marketed for gamers as the ROG NUC.

Unboxing

I'm using the tall NUC 14 Pro with an Intel Core Ultra 7 165H, which is a Meteor Lake-H processor with 6 performance cores (two threads per core), 8 efficiency cores, and 2 low-power efficiency cores, for a total of 16 cores and 22 threads. The maximum turbo clock speed is 5 GHz, and Intel's website provides full details; figures for the base and turbo speeds are as ungratifying to write as they are to read. On the NUC 14 Pro, ASUS configures the power (cTDP) at 40W. My unit is 117 × 112 × 54 mm and 600 g (4.6 × 4.4 × 2.1 in. and 21 oz., in freedom units); the slim version is 37 mm tall and 500 g (1.1 in. and 17.6 oz.), before adding memory and storage.

The front features one 20 Gbps USB Type-C port, two 10 Gbps USB ports, and the power button. There's no ASUS logo on the barebones kit, and I'm reasonably certain that the HDMI logo is a sticker, but I haven't tried to remove it yet. ASUS removed the headset jack on the NUC 14 Pro (and Pro+), making this the first mainline NUC without one. There's no integrated SD card reader, but the last mainline NUC with one was the 10th-generation (Frost Canyon) NUC from 2019.

The back has two Thunderbolt 4 / USB Type C ports (which support DisplayPort 1.4) and two HDMI 2.1 ports (which support TMDS), allowing up to four monitors to be connected. There is also one 10 Gbps USB port and one USB 2.0 port on the back, as well as an RJ-45 port for 2.5 Gb Ethernet (using Intel's I226-V/LM controller), and the barrel connector for power. The PSU included with my NUC 14 Pro is a FSP120-ABBU3, a 120W / 19V / 6.32A unit measuring 98 × 64.5 × 22.3 mm, which is quite compact. (For comparison, my 140W MacBook Pro charger is 96 × 75 × 29 mm.)

The spacer held in by two screws on the back can be used to add additional ports through an expansion kit from GoRite, for either one RS-232 port, two USB 2.0 ports, or two USB 2.0 ports and SMA RF (Wi-Fi) antennas. Similar to previous Intel NUCs, GoRite designs expansions that replace the top lid of the NUC to add items like an additional 2.5 GbE port or a full assembly for an LTE modem, which could be helpful if you’re using a NUC as an edge server.

Other than a Kensington security slot on the right side of the NUC—to protect against theft—the sides are reserved for ventilation, though the back of the NUC (above the I/O ports) has larger ventilation holes. There is a VESA bracket in the box for mounting the NUC to a monitor. On the back, there's a slotted hole for an optional security screw (included in the box) to secure the power cord from being unplugged accidentally.

Disassembly & Hardware

Disassembling the NUC 14 Pro is reasonably easy—the bottom cover locks in using a sliding mechanism on the right. (You can also lock the case with the captive screw near the slider.) Slide it upward, and gently remove the bottom cover. If you're using the tall version of the NUC 14 Pro, there is a ribbon cable that connects the SATA port on the bottom assembly to the mainboard—the cable is not too short as to be actively frustrating, but not too long as to get in the way when closing things back up. Open the plastic lock on the mainboard connector to release the cable—I used nylon tweezers to open it—and detach the ribbon cable from the mainboard, setting the bottom assembly aside.

On the mainboard, there are two SODIMM RAM slots and two SSD slots: one M.2 2280, and one M.2 2242. Both M.2 SSD slots are wired for PCIe 4.0 x4 signaling. This is an improvement over the NUC 13 Pro, which only supported SATA on the M.2 2242 slot. The Wi-Fi module (Intel AX211 / Wi-Fi 6E, Bluetooth 5.3) is soldered to the mainboard, so it is not upgradable. The NUC 14 Pro supports up to 96 GB DDR5-5600 RAM, if you use two 48 GB modules. I'm using this for web browsing, code editing, and light gaming, so 32 GB (2 × 16 GB) is sufficient. I'm using Patriot Signature DDR5-5600 SODIMMs (PSD516G560081S) in the NUC 14 Pro.

Inserting the RAM is just like in any other system: insert the module in the slot at a 45-degree angle and press down on the top edge until the latches on both sides click into place. If, for some reason, you've only got one RAM module, put it in the bottom slot. I strongly recommend using two RAM modules on the NUC, as using only one will significantly reduce application and graphics performance. (ASUS indicates that Intel's Arc GPU functionality requires two RAM modules; otherwise it's just "Intel Graphics". Trademark quibbles aside, the implication is lower performance.)

The M.2 slots are tool-less; a little plastic plunger holds the drive in place. Oddly, the NUC 14 Pro (and Pro+) is rather opinionated about which M.2 drives are used. ASUS posted an advisory indicating that some M.2 drives will result in the system not powering on, and advising the use of SSDs on the qualified vendor list (QVL), which are tested for the system. I'm using a 2TB Patriot Viper VP4300 SSD; this works as expected, despite not being on the QVL. Conversely, the VP4300 Lite did not work in the NUC 14 Pro, but worked in other computers. Patriot and ASUS are in communication to troubleshoot and resolve the issue.

The bottom cover (of the tall version) of the NUC 14 Pro integrates a mounting bracket for a 2.5" SATA drive, up to 15mm thick. This isn't new—the NUC 12 and 13 Pro also support 15mm SATA drives (or port expansion on the back panel), but other mini PCs typically do n...


Content cut off. Read original on https://old.reddit.com/r/linux/comments/1hiw0mn/using_an_asus_nuc_14_pro_with_fedora_workstation/

229
 
 
The original post: /r/linux by /u/vko- on 2024-12-20 21:46:09.

I guess there are alternatives, but this service was super easy to set up (just install it and start the systemd service) and it just works. My desktop now never freezes. Some tabs die, and VSCode dies when I debug some ungodly Node.js app, but my Linux memory-management problems (which were significant) are over.

I know installing it by default would pose problems, but freeze-ups cause more problems for the regular user, IMO. So I hope distros adopt a service like this by default at some point.

And no, swap does not really solve that problem. Yes, if my computer were running a Mars rover, it would be better to have it slow down instead of die. But in practice, having your desktop run into swap renders the machine unusable anyway. And most modern apps save their state often enough not to lose valuable work.
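The post doesn't name the service (the title is cut off in this digest), but systemd-oomd is one userspace OOM killer that matches the description: install it, enable the unit, and you're done. It can also be tuned through a small config file; the sketch below assumes systemd-oomd and the option names from oomd.conf(5):

```
# /etc/systemd/oomd.conf -- a sketch; option names assume systemd-oomd (see oomd.conf(5))
[OOM]
# Start acting when swap is nearly exhausted
SwapUsedLimit=90%
# Kill the heaviest cgroup when memory pressure stays above this threshold...
DefaultMemoryPressureLimit=60%
# ...for longer than this duration
DefaultMemoryPressureDurationSec=30s
```

Enable it with `systemctl enable --now systemd-oomd`; on several recent distributions (Fedora, recent Ubuntu releases) it already ships enabled by default.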

230
 
 
The original post: /r/linux by /u/Open_Engineering8855 on 2024-12-20 19:16:00.
231
 
 
The original post: /r/linux by /u/mfilion on 2024-12-20 19:11:21.
232
 
 
The original post: /r/linux by /u/Tiny-Independent273 on 2024-12-20 15:20:18.
233
 
 
The original post: /r/linux by /u/Zery12 on 2024-12-20 15:15:00.

many people love immutable/atomic distros, and many people also hate them.

currently fedora atomic (and its ublue variants) is the only major immutable/atomic distro.

manjaro, ubuntu and kde (with their brand new kde linux distro) are already planning to release immutable variants, and the ubuntu one is likely to make a big impact in the world of immutable distros.

imo, while immutable is becoming more common, the regular ones will still be common for many years. at some point they might become niche distros, though.

what is your opinion about this?

234
 
 
The original post: /r/linux by /u/oshunluvr on 2024-12-20 15:14:16.

Context: KDE/Plasma 6 "Discover" application will often require you to reboot twice to complete a package update, forcing a "reboot - install - reboot" even when ZERO reboots are required. I assume other package installers may behave this way as this functionality comes from "systemd.offline-updates".

As an example, I launched Discover on Kubuntu 24.04 and only had a single package that needed updating: brave-browser. An update to brave-browser does not require updates to any other packages. Yet still, I was presented with "reboot - install - reboot".

So I closed Discover and used apt at the command line to update Brave and lo and behold - exactly zero reboots required.

The reality is that a large number of updates do not require even a single reboot, much less two. I applaud the idea that some users may not reboot when they should and need guidance, but does it have to be so dumbed down? Are we headed down the road to Linux being as base as MS Windows?

235
 
 
The original post: /r/linux by /u/Unprotectedtxt on 2024-12-20 13:59:26.
236
 
 
The original post: /r/linux by /u/T_Jamess on 2024-12-20 09:22:04.

In other words, what would you change if you could travel back in time and alter anything about Linux that isn't possible/feasible to do now? For example something like changing the names of directories, changing some file structure, altering syntax of commands, giving a certain app a different name *cough*gimp*cough*, or maybe even a core aspect of the identity of Linux.

237
 
 
The original post: /r/linux by /u/No-Purple6360 on 2024-12-20 08:47:22.
238
 
 
The original post: /r/linux by /u/Time-Bowler-2130 on 2024-12-20 06:37:36.
239
 
 
The original post: /r/linux by /u/WraientDaemon on 2024-12-20 04:21:42.

https://i.redd.it/8x9nw73wjx7e1.gif

GitHub link: https://github.com/wraient/curd

Features:

  • Stream anime online
  • Update anime in Anilist after completion
  • Skip anime Intro and Outro
  • Skip Filler and Recap episodes
  • Discord RPC about the anime
  • Rofi support
  • Image preview in rofi
  • Local anime history to continue from where you left off last time
  • Save mpv speed for next episode
  • Configurable through config file
240
 
 
The original post: /r/linux by /u/frozencreed on 2024-12-20 04:17:32.

This is meant to take AppImage programs and turn them into regular apps that can be opened in the regular launcher and pinned to the dash like normal apps in Ubuntu 24.04. This should work with any AppImage program that can be normally run in Ubuntu 24.04.

I'm gonna get right to the point: I recently had to add Bambu Studio to my new Ubuntu laptop (screw you, Windows 11) and I was not impressed with the process. They only had an AppImage to download, and it took some extra steps to even get it to work (libfuse2, looking at you). Then I was left with this ugly icon that I had to run from a directory to get to work. Not the end of the world, but it annoyed me for a few reasons:

  1. I couldn't pin it to the dash, meaning it wasn't as easy to access as I wanted
  2. It had the ugly settings cog icon, and wasn't easy to find in a folder with other files.
  3. It looked ugly if I left it on my desktop.
  4. Did I mention it was ugly?

So I found a way to convert it into a regular app that can be launched from the menu and added an icon file of it to make it nicer to work with, and as a bonus, I can now pin it to my dash!

It took some troubleshooting, but after I got it working I realized that it should have been way easier to do this. It frustrated me to the point that I said screw it, and coded a script to automate the whole process, like pretty much completely hands off.

https://github.com/bl4ckj4ck777/install-appimage

I'm gonna try to keep this relatively short, but basically: download the zip, extract the files into a new folder, add your AppImage and an SVG icon file (or just use the default one I included, I completely support laziness), and run the script with sudo. It will then ask you a couple of questions to make the app work correctly in Ubuntu (like what the name/description/category should be).

It automatically makes all the directory and permission changes needed to make it executable, so you don't have to do anything other than run the script.

There's probably already something like this out there, I'm not under any illusions that there aren't. I honestly don't care if there is, I just wanted something to do this afternoon, and after I finished it, I decided to upload it to github and make it open source.

Anyway, if you try it, let me know if it works for you and your setup and if it doesn't, then make an issue, that's what github is for right?
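For context on what a script like this has to generate, the core artifact is a .desktop entry in ~/.local/share/applications. Here's a minimal sketch; every path and name below is a hypothetical example, not taken from the repo:

```
# ~/.local/share/applications/bambu-studio.desktop  (hypothetical paths/names)
[Desktop Entry]
Type=Application
Name=Bambu Studio
Comment=Slicer for Bambu Lab 3D printers
Exec=/opt/appimages/BambuStudio.AppImage %U
Icon=/opt/appimages/bambu-studio.svg
Terminal=false
Categories=Graphics;Engineering;
```

Once the file is in place (and the AppImage itself is marked executable with chmod +x), the app shows up in the regular launcher and can be pinned to the dash like any other.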

241
 
 
The original post: /r/linux by /u/hydro10s on 2024-12-20 01:30:06.

So I have 2 monitors. The first one is 165Hz and the second one 144Hz. The thing is, in Windows they run with no problem at 165 and 144, but in Ubuntu they bug out a lot. The 165Hz one shows a black horizontal bar in the middle of the screen... like what the hell? Does anyone know what the problem is? Or have any suspicions?

I really do like Linux and wanted to slowly make the transition, but this thing is killing me

Thank you

Edit: Thank you so much guys for the help. The problem was the drivers were not running. Thank u so much to all!!

242
 
 
The original post: /r/linux by /u/Melab on 2024-12-20 00:44:49.

There used to be a piece of software called LINA. It was, I think, a Linux emulator for Windows. Is there anything like that now? VirtualBox, VMware, Hyper-V, and Cygwin don't count. Maybe Windows Subsystem for Linux, but let's set that aside for now.

243
 
 
The original post: /r/linux by /u/poynnnnn on 2024-12-20 00:27:37.

I’m working on a setup where I run multiple VPN clients inside Linux-based containers (e.g., Docker/LXC) on a single VM, each providing a unique external IP address. I’d then direct traffic from a Windows VM’s Python script through these container proxies to achieve multiple unique IP endpoints simultaneously.

Has anyone here tried a similar approach or have suggestions on streamlining the setup, improving performance, or other best practices?

-----------------------

I asked ChatGPT, and it suggested this. I'm unsure if it's the best approach or if there's a better one. I've never used Linux before, which is why I'm asking here. I really want to learn if it solves my issue:

  1. Host and VM Setup:
    • You have your main Windows Server host running Hyper-V.
    • Create one Linux VM (for efficiency) or multiple Linux VMs (for isolation and simplicity) inside Hyper-V.
  2. Inside the Linux VM:
    • Use either Docker or LXC containers. Each container will run:
      • A VPN client (e.g., OpenVPN, WireGuard, etc.)
      • A small proxy server (e.g., SOCKS5 via dante-server, or an HTTP proxy like tinyproxy)
    • Why a proxy? Because it simplifies routing. Each container’s VPN client will give that container a unique external IP. Running a proxy in that container allows external machines (like your Windows VM) to access the network over that VPN tunnel.
  3. Network Configuration:
    • Make sure the Linux VM’s network is set to a mode where the Windows VM can reach it. Typically, if both VMs are on the same virtual switch (either internal or external), they’ll be able to communicate via the Linux VM’s IP address.
    • Make sure the firewall rules on your Linux VM allow inbound traffic to these proxy ports from your Windows VM’s network.
    • Each container will have a unique listening port for its proxy. For example:
      • Container 1: Proxy at LinuxVM_IP:1080 (SOCKS5)
      • Container 2: Proxy at LinuxVM_IP:1081
      • Container 3: Proxy at LinuxVM_IP:1082, and so forth.
  4. Use in Windows VM:
    • On your Windows VM, your Python code can connect through these proxies. Each thread you run in Python can use a different proxy endpoint corresponding to a different container, and thus a different VPN IP.
    • For example, if you’re using Python’s requests module with SOCKS5 proxies via requests[socks]:

        import requests

        # Thread 1 uses container 1’s proxy
        session1 = requests.Session()
        session1.proxies = {
            'http': 'socks5://LinuxVM_IP:1080',
            'https': 'socks5://LinuxVM_IP:1080',
        }

        # Thread 2 uses container 2’s proxy
        session2 = requests.Session()
        session2.proxies = {
            'http': 'socks5://LinuxVM_IP:1081',
            'https': 'socks5://LinuxVM_IP:1081',
        }

        # and so forth...
  5. Scaling:
    • If you need more IPs, just spin up more containers inside the Linux VM, each with its own VPN client and proxy.
    • If a single Linux VM becomes too complex, you can create multiple Linux VMs, each handling a subset of VPN containers.

In Summary:

  • The Linux VM acts as a “router” or “hub” for multiple VPN connections.
  • Each container inside it provides a unique VPN-based IP address and a proxy endpoint.
  • The Windows VM’s Python code uses these proxies to route each thread’s traffic through a different VPN tunnel.

This approach gives you a clean separation between the environment that manages multiple VPN connections (the Linux VM with containers) and the environment where you run your main application logic (the Windows VM), all while ensuring each thread in your Python script gets a distinct IP address.
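The per-thread proxy mapping in step 4 is simple enough to factor into a helper. A sketch in Python; the host address, port numbering, and function name are illustrative assumptions, not from the post:

```python
# Map a worker/thread index to the SOCKS5 proxy exposed by its VPN container.
# Assumes the containers listen on consecutive ports (1080, 1081, 1082, ...).

def proxy_for(worker: int, host: str = "192.168.1.50", base_port: int = 1080) -> dict:
    """Return a requests-style proxies dict for the given worker index."""
    url = f"socks5://{host}:{base_port + worker}"
    return {"http": url, "https": url}

# Usage with requests (needs `pip install requests[socks]`):
#   session = requests.Session()
#   session.proxies = proxy_for(2)  # routes through container 3's VPN tunnel
```

This keeps the container/port bookkeeping in one place, so adding more containers only changes the range of worker indices you spawn.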

https://preview.redd.it/zxc2mb92ew7e1.png?width=1387&format=png&auto=webp&s=dd8dc0fa30dc445b92b6a07781973e8f561fc793

244
 
 
The original post: /r/linux by /u/brand_momentum on 2024-12-20 00:22:24.
245
 
 
The original post: /r/linux by /u/gabriel_3 on 2024-12-19 22:20:16.
246
 
 
The original post: /r/linux by /u/Krowatko on 2024-12-19 21:35:46.
247
 
 
The original post: /r/linux by /u/CubicleNate on 2024-12-19 15:36:50.

In your opinion, how has 2024 been for Linux? This will be a big part of our discussion for the last LinuxSaloon (https://tuxdigital.com/podcasts/linux-saloon/) for 2024, this Saturday!

https://strawpoll.com/XOgOVkl04n3/

248
 
 
The original post: /r/linux by /u/BrageFuglseth on 2024-12-19 08:00:52.
249
 
 
The original post: /r/linux by /u/BinkReddit on 2024-12-19 00:30:44.

So, I try to make certain I document stuff. Why? In case I need to reference something, reconfigure something, understand why I did something and whatnot.

I thought I might be taking too many notes, and today I noticed I now have 58 pages in total, so I think I agree.

What's in all these notes? Everything. Everything from commands for how to do some minor things to changes I made to account for different distributions to Plasma/Firefox configuration settings to LibreOffice tweaks, steps for doing certain things in Kdenlive, BIOS changes, and, well, you name it! It's there!

Let's just say my foray into Linux has been fun!

250
 
 
The original post: /r/linux by /u/Laserspeeddemon on 2024-12-18 23:00:48.

I was hired as a DBA back in July for a government contractor. I am not a DBA. They hired me for my Linux experience. The DBA role isn't really a DBA position; it's more Linux work than DBA work, and the last few DBAs they hired didn't last because they refused to work outside of databases. I, on the other hand, have plenty of data analysis/management experience from my 20+ years as a Linux Admin/Engineer. From the get-go, I found that the Linux team could really use my help, and I started fixing small things at the OS level. When I came on board, I only had 20 database servers to manage.

The contract was up for recompete/re-bid, and my company lost the contract. For reasons I'm not privy to, the new company did NOT bring the Linux team lead back on board, nor the project manager; but they did bring me back. The transition was messy; for two weeks, literally no one was in the office. The temporary on-site project manager was just a network engineer. The government client was hit with an ACAS scan and found that patches hadn't been done for two months. There were also multiple issues in multiple in-house developed applications.

The customer/temp PM initially went to the Linux team and asked them to address the updates/patching. The team members told him that "that was David's job" (David was the old team lead who didn't return). The PM learned that I had over 20 years of experience in Linux/Unix and asked if I could manage their Red Hat Satellite server and handle the patching. I told him that I have very limited exposure to Satellite, but that it was something I was really excited about learning, and I said as much. In fact, David was part of the group for my tech interview, and when he mentioned Satellite, I was really excited about learning it, and he was excited to hear me say that.

The next thing I know, I am being handed ALL of David's responsibilities. I had to change the admin password on the command line for Satellite just to get in. Aaaaand that's when I was hit with the tsunami. There are 1500 hosts registered in Satellite. I thought at best there would've been 100-150 servers.

I fixed a lot of the issues, got Satellite patched, synced the repos, and started to sift through the mountain of registered hosts. Most of them are offline, but I don't know if they were just powered off or are no longer in use. The more I dug, the worse it got. I am literally going into this completely blind.

I asked to see the environment architecture. They have none. I asked for documentation. They had none. I started to look through what files they had on the file share server, and it's just aaaall over the place. None of our processes have been documented. What little documentation I found that may have been useful was no longer being adhered to. For example, there is no discernible naming convention; it's really whatever the person who built a server wanted at the moment. Without having an idea of what is Production vs. Pre-Prod vs. Dev vs. Test, I'm very hesitant to power off or apply anything more than minor releases, because some of their servers are actively used worldwide. Some servers have ZERO activity and are literally just bare-bones installs. One had only two logins in the past year. I asked the government, and they literally said they have no idea. They also have NO documentation and never enforced the contractual requirement that there be any. I was literally shrugged at when I asked them what a set of servers do.

I'm now being tasked with patching all servers, but I literally have no idea what these servers do or whether they're in regular or periodic use. During one scheduled outage for an application suite (Tomcat server, web server, and DB server), I was halfway through running just your standard update/patching when two very frustrated government customers and a contractor came running up and asked me what the heck was going on, as FOUR applications went down. Despite having scheduled the outage in advance and informing my counterparts (network engineer, application POC, and the senior DBA), literally not a single one of them informed me that the database server I scheduled to go offline also hosted the databases of four other applications.

I'm being pressured to apply critical patches to these 1500 servers within 24 hours, but the government also requires a minimum of 48 hours' notice before any production servers are taken offline. I'm told that we can't work during off hours (everyone must be in the office during core hours), and I'm being told that I need to patch these during the day, but also that production servers are not allowed to be taken down during the day.....

How am I supposed to proceed?

Oh, and the Satellite server isn't being used for anything more than an over-glorified file server. It's not configured to automate anything. It literally does nothing beyond holding RPMs; it's just a very expensive, VERY large repository... All patches on all servers are manually applied on each server... one at a time.

And to make matters worse, all but one other Linux admin quit, so there are two of us: one who won't touch David's old work/responsibilities, and myself, who has only been here for 3-4 months.... Just the two of us... for 1500 servers.
