You should check out NixOS. You write a config file that you can just copy over to as many machines as you want.
That or Ansible, if you will have a machine to deploy from
> if you will have a machine to deploy from
You can run ansible against localhost, so you don't even need that.
You don't need a machine to deploy from. You just need a git repo and ansible-pull. It will pull down and run playbooks against the host. (Use the self target to run it on the local machine.)
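A minimal sketch of that pull-based setup, assuming the repo's default playbook is named local.yml (which is ansible-pull's default) and that the playbook targets localhost; the repo URL is a placeholder:

```shell
# Install ansible and git on the fresh machine (Debian-family shown).
sudo apt install ansible git

# ansible-pull clones the repo and runs local.yml against the local host.
# The URL is a placeholder; point it at your own config repo.
ansible-pull -U https://git.example.com/you/machine-config.git local.yml
```

The playbook itself should use `hosts: localhost` with `connection: local` so it applies directly to the machine it was pulled onto. You can also drop the same command into a cron job or systemd timer so machines keep themselves converged.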
that workflow seems fine if it works for you. seems overkill for debian but if it works i don't see anything wrong with it.
One way I do it is dpkg -l > package.txt to get a list of all installed packages to feed into apt on the new machine. Then I set up two stow directories: one for global configs and one for dotfiles in my home directory. When a change is made, I commit and push to a personal git server.
Then when you want to set up a new system, do a minimal install, then run apt install git stow,
then clone your repos, grab the package.txt, feed the package list back into apt, then run stow on each stow directory and you are back up and running after a reboot.
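A sketch of that package-list round trip. One caveat: dpkg -l output includes headers and version columns, so apt can't consume it directly; a cleaner list comes from apt-mark showmanual. This assumes a Debian-family system, and the stow subdirectory names are hypothetical:

```shell
# On the old machine: save the list of manually installed packages.
apt-mark showmanual > package.txt

# On the new machine, after a minimal install:
sudo apt install git stow
# Reinstall everything from the saved list (-a reads arguments from the file).
xargs -a package.txt sudo apt install -y

# Re-link configs from the cloned stow directories
# (bash, git, vim are placeholder subdirectory names).
cd ~/dotfiles && stow bash git vim
```

apt-mark showmanual only lists packages you explicitly asked for, so dependencies get resolved fresh on the new machine instead of being pinned to stale versions.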
Sounds like you need NixOS
Use configuration tooling such as Ansible.
You could also build an image builder for your system. You could use things like Docker and/or Ansible to repeatedly get to the same result.
You might be able to script something with debootstrap. I tested bcachefs on a spare device once and couldn't get through the standard Debian install process, so I ended up using a live image to debootstrap the drive. You should be able to give it a list of packages to install and copy configs over to the new partition.
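A rough sketch of that debootstrap approach from a live image. The target device, mount point, and package list are placeholders, and this needs root plus network access:

```shell
# Assumes the target partition is already formatted; /dev/sdX1 is a placeholder.
mount /dev/sdX1 /mnt

# Bootstrap a minimal Debian system into the mount point.
# --include takes a comma-separated list of extra packages to install up front.
debootstrap --include=git,stow,openssh-server stable /mnt http://deb.debian.org/debian

# Copy your saved configs into the new root before chrooting in
# to install a bootloader, set passwords, and write /etc/fstab.
cp -r ./configs/etc/. /mnt/etc/
```

The chroot steps afterwards (bootloader, fstab, users) are the fiddly part, but they script just as well, which is what makes this repeatable across machines.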
Ansible and Docker would work nicely for this
Just put your system configuration in an Ansible playbook. When your distro has a new release, go through your changes and remove the ones that are no longer relevant.
For home, I recommend a dotfiles repository with subdirectories for each tool, like bash, git, vim, etc. Use GNU Stow to symlink the required files into place on each machine.
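A sketch of that layout and the stow invocation, assuming a repo at ~/dotfiles where each tool's subdirectory (the names here are examples) mirrors paths relative to $HOME:

```shell
# Layout: each package directory mirrors the target tree.
#   ~/dotfiles/bash/.bashrc  ->  symlinked to ~/.bashrc
#   ~/dotfiles/vim/.vimrc    ->  symlinked to ~/.vimrc

cd ~/dotfiles
# stow links into the parent directory by default; -t makes the target explicit.
stow -t ~ bash vim
```

On a new machine it's just clone-and-stow, and because everything is symlinks, edits made in place land straight back in the repo ready to commit.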
I have the exact same workflow except I have two images: one for legacy/MBR and another for EFI/GPT -- once I read your post I was glad to see I'm not alone haha!
I did the same, exactly the way you did, but my "zygote" isn't as advanced.
I should make a raw ISO too, but currently I just use Clonezilla (which shrinks and resizes automatically) and have a small SSD with a nearly vanilla system.
Just because the Fedora ISO didn't boot
I believe that Proxmox does this because I have installed/created containers from their available images. I wonder how they create those container images?
There are many ways to make an image
This is designed for Gentoo but I've used it for Ubuntu before: https://github.com/TheChymera/mkstage4/