DevOps


DevOps integrates and automates the work of software development (Dev) and IT operations (Ops) as a means for improving and shortening the systems development life cycle.

Icon base by Lorc under CC BY 3.0 with modifications to add a gradient

founded 1 year ago
26

We need to deploy a Kubernetes cluster at v1.27. We need that version because it ships a particular feature gate we depend on, which was moved to beta and enabled by default starting with that release.

Is there any way to check which feature gates are enabled/disabled for a particular GKE or EKS cluster version without having to check the kubelet configuration on a deployed cluster node? I don't want to deploy a cluster just to check this.

I've checked both the GKE and EKS changelogs and docs, but I couldn't find a list of enabled/disabled feature gates.
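For what it's worth, once a cluster does exist you can read the kubelet's effective config through the API server's node proxy (`kubectl get --raw "/api/v1/nodes/<node>/proxy/configz"`) instead of shelling into a node. A sketch of parsing that response, with a hypothetical payload — note the kubelet only lists gates that were explicitly set, so defaults still have to be looked up in the upstream feature-gates table:

```python
import json

def explicit_feature_gates(configz_json: str) -> dict:
    """Pull explicitly-set feature gates out of a /configz response.

    The kubelet only reports gates that were explicitly set; gates that
    are merely enabled by default will NOT appear here.
    """
    doc = json.loads(configz_json)
    return doc.get("kubeletconfig", {}).get("featureGates", {})

# Hypothetical response payload, for illustration only:
sample = '{"kubeletconfig": {"featureGates": {"SomeGate": true}}}'
print(explicit_feature_gates(sample))  # {'SomeGate': True}
```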

Thanks in advance!

27

"Inform users that we might disable this formula one day given there will be no more version updates in homebrew-core due to the license change"

29

We're using Terraform to manage our AWS infrastructure and the state itself is also in AWS. We've got 2 separate accounts for test and prod and each has an S3 bucket with the state files for those accounts.

We're not setting up alternate regions for disaster recovery, and it's got me wondering: if the region the Terraform state bucket is in goes down, we won't be able to deploy anything with Terraform.

So what's the best practice here? Should we have a bucket in every region holding the state files for the projects in that region? But then that doesn't work for multi-region deployments.
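One common mitigation — sketched here with every name and ARN as a placeholder — is cross-region replication on the state bucket, so that during a regional outage you can re-point the `backend "s3"` block at the replica. Replication requires versioning on both buckets:

```hcl
# Placeholders: aws_s3_bucket.state, aws_s3_bucket.state_replica (in the
# DR region) and aws_iam_role.replication are assumed to exist elsewhere.
resource "aws_s3_bucket_versioning" "state" {
  bucket = aws_s3_bucket.state.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_replication_configuration" "state_dr" {
  bucket = aws_s3_bucket.state.id
  role   = aws_iam_role.replication.arn

  rule {
    id     = "state-dr"
    status = "Enabled"
    destination {
      bucket = aws_s3_bucket.state_replica.arn
    }
  }
}
```

Replication is asynchronous, so the replica can lag slightly behind; writes still only happen in one region at a time.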

30
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]

OpenTF project has been renamed to OpenTofu.

Personally, I'm happy to see this project taking shape, and I can't wait to see where it ends up.

31

Hoping you folks might be able to point me to the right things to Google.

Our project has developed a very "business-led" (to put it politely) requirement to monitor and allow/block outgoing connections to other parts of the business. We live in a dedicated AWS account and have reasonable autonomy over our networking setup (NACLs, route tables, etc.), but less freedom in which AWS services we can use and in deploying things from the Marketplace.

The basic requirements are as follows:

  • Default blocking for certain CIDRs.
  • Exceptions for certain IP/Host and port combos within those CIDRs.
  • Authentication and authorisation to use said exceptions (i.e. user tracking).
  • Detailed logging on connections; source, dest, request and response sizes, ports, protocols, whatever we can get our hands on.
  • All of the above for all (?) kinds of TCP connections (HTTPS, Postgres, Oracle DB, MongoDB, as examples).

The security aspect of this is fairly minimal as it's mainly for usage tracking and making sure our users sign their life away before they access their services from our platform. As such, I was hoping to have something that could be rolled out fairly simply; a couple of EC2 instances, yum install foo, and some routing rules, but it looks like the feature set we want requires something more robust, like OPNsense or similar.
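For the "yum install foo" shape of solution, a forward proxy like Squid might cover at least the HTTP(S)/CONNECT portion — a purely hypothetical config fragment (addresses, ports, and the auth helper path are made up and distro-dependent):

```
# Default-deny toward the business ranges, explicit exceptions,
# basic auth for user tracking, and full access logging.
auth_param basic program /usr/lib64/squid/basic_ncsa_auth /etc/squid/passwd
acl authed proxy_auth REQUIRED

acl business_ranges dst 10.20.0.0/16   # blocked by default
acl allowed_dst     dst 10.20.5.10/32  # approved exception
acl allowed_ports   port 443 5432      # HTTPS, Postgres

http_access allow authed allowed_dst allowed_ports
http_access deny business_ranges
access_log /var/log/squid/access.log
```

The catch is that non-HTTP clients (Oracle, MongoDB drivers, etc.) have to be proxy-aware or tunnelled, which may push things back toward a routed firewall like OPNsense anyway.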

Am I missing an obvious solution here, a forward proxy of some sort, any "light" firewalls that don't require a whole separate AMI?

Thanks in advance!

32

I recently stumbled upon a problem: I wanted the stdout of a command task to be printed after execution, so I toggled the global -v flag. However, the service module is apparently verbose as shit and printed like 100 lines, and uhh... that's a costly tradeoff O_o

Seems like a PR for a task-level verbosity keyword has been proposed, yet rejected.

I'm aware it's possible to just register the stdout of the command and print it in a following debug task, but I wonder if there's a prettier solution.
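For reference, the register-and-debug workaround I mean (command and task names made up):

```yaml
- name: Run the command
  ansible.builtin.command: /usr/bin/some-command
  register: cmd_out

- name: Print just its stdout
  ansible.builtin.debug:
    msg: "{{ cmd_out.stdout_lines }}"
```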

How would you go about this? Ever run into this yourselves?

33
  1. How much extra do you get paid for being on an on-call rotation?
  2. Are the salary/benefits the same for the inconvenience of being on call as for actually working an incident?
  3. What other rules do you have? E.g. a max time working on an incident, or a rota for highly unsociable hours?
  4. How many people are on the same schedule with you?
  5. Where are you based, EU/US/UK/Canada?
34

A true story about how Arch Linux migrated its packaging infrastructure and tooling to GitLab.

35

GitHub only supports a single CODEOWNERS file per repository, which is fairly limiting. This tool allows OWNERS files to be distributed throughout the code base, providing more localized semantic meaning.

Benefits of distributed files:

The primary benefit, in my view, is around ownership of the CODEOWNERS file itself. With a single file, either a small number of people own it and all updates must pass through them, or it is open broadly, possibly to anyone. In the former case you have a bottleneck, and people approving changes whose implications they may not be familiar with, especially around cross-team ownership. In the latter, people could add themselves as an owner without the current owners being aware. With distributed OWNERS files, the teams/people who own the code also own the OWNERS file, so the right people have to approve changes.

It's also easier to find who the experts on an area of code are, which is helpful when people have questions or are otherwise looking to engage more with it.

It's generally better practice to have many smaller scoped files, rather than monolithic ones. This applies to code, of course, but it also applies to metadata, such as ownership.
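As a rough sketch of the idea — not the tool's actual file format or algorithm — merging nested OWNERS files into CODEOWNERS-style rules could look something like this (paths and handles invented):

```python
import tempfile
from pathlib import Path

def build_codeowners(root: Path) -> str:
    """Merge per-directory OWNERS files into CODEOWNERS-style lines.

    Assumes each OWNERS file simply lists one @handle per line; this is
    an illustration, not github-distributed-owners' real behavior.
    """
    lines = []
    for owners_file in sorted(root.rglob("OWNERS")):
        owners = owners_file.read_text().split()
        rel = owners_file.parent.relative_to(root)
        pattern = "*" if rel == Path(".") else f"/{rel.as_posix()}/"
        lines.append(f"{pattern} {' '.join(owners)}")
    return "\n".join(lines)

# Tiny demo with a throwaway tree:
root = Path(tempfile.mkdtemp())
(root / "OWNERS").write_text("@org-admins\n")
(root / "backend").mkdir()
(root / "backend" / "OWNERS").write_text("@backend-team\n")
print(build_codeowners(root))
```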

Feedback is welcome, I hope some find this helpful. :)

https://github.com/andrewring/github-distributed-owners

Note this includes support for pre-commit.

36

cross-posted from: https://lemmy.ml/post/4593804

Originally discussed on Matrix.


TLDR; Ansible handlers are added to the global namespace.


Suppose you've got a role which defines a handler MyHandler:

- name: MyHandler
  ...
  listen: "some-topic"

Each time you import/include your role, a new reference to MyHandler is added to the global namespace.

As a result, when you notify your handler via the topics it listens to (i.e. notify: "some-topic"), all the references to MyHandler will be executed by Ansible.

If that's not what you want, notify the handler by name (i.e. notify: MyHandler), in which case Ansible stops searching for other references as soon as it finds the first occurrence of MyHandler. That means MyHandler will be executed only once.
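A minimal reproduction sketch (role name hypothetical): including the role twice registers two references to MyHandler, so notifying the topic runs it twice.

```yaml
- hosts: localhost
  gather_facts: false
  tasks:
    - ansible.builtin.include_role:
        name: myrole
    - ansible.builtin.include_role:
        name: myrole
    - name: Trigger the handlers
      ansible.builtin.command: /bin/true
      changed_when: true
      notify: "some-topic"
```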

38

The OpenTF fork (preparing for alpha) is now available at the GitHub repository here:

https://github.com/opentffoundation/opentf

Take a look at the issues tab to see some of the live RFCs and discussions happening — lots of things like the use of "tf" in the binary/name and bringing your own registry.

40

TLDR: terraform bad, pulumi good

42
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]

Is your organization doing anything to ensure new devs are productive from day one? How do you guys handle local environments for the code you are working on? I am trying to get my company to enable teams to create their own workstation image that contains all the dev tools and local application-related infrastructure needed for that team to be productive. Has anyone done something similar?
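To make it concrete, what I'm picturing is something like a versioned, team-owned image (contents entirely hypothetical):

```dockerfile
# Hypothetical team workstation image: pin the toolchain in one place.
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y --no-install-recommends \
        git make curl docker.io postgresql-client \
    && rm -rf /var/lib/apt/lists/*
# Team-specific dev tools and local service config would layer on here.
```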

43

Both are valuable, but which should be the priority? Assuming finite hours in a day, how much time should be spent working on your actual work vs. learning about the customer vs. learning about technology you haven't worked with before?

44

Here's a hypothetical scenario at a company: we have 2 repos that build and deploy code as tools and libraries for other apps at the company. Let's call them lib1 and lib2.

There's a third repo, let's call it app, that is application code that depends on lib1 and lib2.

The hard part right now is keeping track of which version of lib1 and lib2 are packaged for app at any point in time.

I'd like to know at a glance, for say 1 month ago, which version of app was deployed and which versions of lib1 and lib2 it was using. Ideally, I'm looking for a software solution that would be agnostic to any CI/CD build system, and doubly ideally, an open source one. Maybe a simple web service you call with some metadata, and it displays it in a nice UI.

Right now, we accomplish this by looking at logs and git commit history and piecing things together. I know I could build a custom solution pretty easily, but I'm looking for something more out-of-the-box.
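The kind of record such a service would store can be sketched in a few lines (schema invented purely for illustration — append-only JSON lines keyed by app, version, and dependency versions, with a timestamp for the "1 month ago" lookup):

```python
import json
import tempfile
from datetime import datetime, timezone

def record_deploy(app, version, deps, path):
    """Append one deploy record; deps maps library name -> version."""
    entry = {
        "app": app,
        "version": version,
        "deps": deps,
        "at": datetime.now(timezone.utc).isoformat(),
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Demo with throwaway storage:
log = tempfile.NamedTemporaryFile(suffix=".jsonl", delete=False).name
record_deploy("app", "1.4.2", {"lib1": "2.0.1", "lib2": "0.9.3"}, log)
```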

45

Won't impact most users apparently.
