rhymepurple

joined 3 years ago
[–] [email protected] 1 points 2 months ago

That's true, but how often have you heard a finance team member ask for a CSV file so they can more easily process the data with Pandas or visualize it with Matplotlib? How many accountants or finance people (especially those who ask for everything in Excel) do you know who are comfortable writing even a single line of Python code? How many of the finance team's Excel-based tools will Python integrate well with? What feature(s) does Python in Excel provide that Excel itself (formulas, pivot tables, VBA, Power Query, Power Pivot, etc.) does not, that someone on the finance team would actually need? What advanced charting/dashboarding functionality does Python in Excel provide that isn't better accomplished in Power BI (if not already handled by standard Excel charts/graphs)?

Don't get me wrong - Microsoft's implementation of Python in Excel has its merits, will solve some problems that would otherwise be impossible in Excel, and will make some people happy. However, this is not the solution most people were expecting, asking for, or will find useful.

[–] [email protected] 9 points 2 months ago

I agree with everything you said, but (in Microsoft's eyes) this is a feature - not a bug.

Without this cloud component, how could:

  • Microsoft make sure that the accounting team does not introduce a malicious/old Python library into the Excel file?
  • Microsoft protect its users from writing/running inefficient, buggy, or malicious Python code?
  • Microsoft provide a Python runtime to users who do not know how to install Python?
  • Microsoft charge to run code that you wrote in a free, open source software programming language on a device that you own?
[–] [email protected] 20 points 2 months ago (11 children)

Over a year later and I still do not understand what the use case for this is.

A lot of the examples/documentation that Microsoft made for this seems to focus on data analysis and data visualization. Anyone in those fields would probably prefer to get the data out of Excel and into their tool/pipeline of choice instead of running their Python code in Excel. That also makes the big assumption that the data being used is fully contained within the Excel file and that the libraries used within the code are available in Excel (including the specific library versions).

For anyone looking to learn/use Excel better, I doubt the best use of their time is learning a new programming language and how Excel implements that programming language. They would likely be better off learning Excel's formulas, pivot tables, charts, etc. They could even learn Power Query to take things to another level.

For anyone looking to learn Python, this is an absolutely terrible way to do so. For example, it abstracts away library maintenance, may present modified error messages, and makes the developer feedback loop more complicated.

If you want to automate Excel, this realistically adds very little functionality that did not exist prior to this feature. Other Python libraries like OpenPyxl and xlWings will still be required to automate Excel.
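For example, the kind of automation people actually ask about still happens outside of Excel. A minimal openpyxl sketch (the file name and cell references are placeholders):

```python
from openpyxl import load_workbook

# Open an existing workbook, write a value and a formula, and save it back.
wb = load_workbook("report.xlsx")
ws = wb.active
ws["C1"] = "Reviewed"
ws["B2"] = "=SUM(A2:A100)"  # formulas are written as plain strings
wb.save("report.xlsx")
```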

I am sure there are edge cases where this iteration of Python in Excel is perfect. However, this feels more like a checkbox filler ("yeah, Excel supports Python now") than an implementation of an actually useful feature. A fully featured and supported Python library that manipulates Excel/Excel files would have been a much more exciting and useful feature - even if it had to be executed outside of Excel, like OpenPyxl.

[–] [email protected] 1 points 2 months ago

Take a look at QuickWeather if you want a map.

[–] [email protected] 1 points 2 months ago

The improvements sound great.

I did not look through the details, but it's strange that one of the features is using Cloudflare R2 to improve download speeds and reduce API calls to GitHub while at the same time introducing a new requirement for a personal GitHub API token.

Hopefully one day the GitHub requirement will be removed. It would be nice if projects/code stored on GitLab, Codeberg, or other Git services like Gitea or Forgejo could be used without having to mirror/fork the project onto GitHub.

[–] [email protected] 2 points 3 months ago

In terms of privacy, you are giving your identity provider insight into each of the third-party services that you use. It may seem like there isn't much of a difference between using Google's SSO and using your Gmail address to register your third-party account. However, one big distinction is that Google would be able to see how often and when you use each of those third-party services.

Also, it may be impossible to restrict the sharing of certain information from your identity provider with the third party service. For example, maybe you don't want to share a picture of yourself with a service, but that service uses user profile pictures or avatars. That service may ask (and require) that you give it access to your Google account's profile picture in order to authenticate using Google's SSO. You may be able to overwrite that picture, but you also may not be able to revoke the service's ability to retrieve it. If you used a "regular" local account, that Google profile picture would never be shared with the third party service if you did not upload it directly. The same is true for other information like email, first/last/full name, birthday, etc.
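To make that concrete: Google's SSO is built on OpenID Connect, where the third party requests bundles of claims via scopes - the standard "profile" scope alone covers name, picture, birthdate, locale, and more. A rough sketch of what such an authorization request looks like (the client ID and redirect URI are made-up placeholders):

```python
from urllib.parse import urlencode

# Hypothetical client registration values, for illustration only.
params = {
    "client_id": "example-client-id.apps.googleusercontent.com",
    "redirect_uri": "https://thirdparty.example/callback",
    "response_type": "code",
    # Each scope grants a bundle of claims; "profile" includes
    # name, picture, birthdate, and more per the OIDC spec.
    "scope": "openid profile email",
}
print("https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params))
```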

There are other security and operational concerns with using SSO options. With the variety of password managers available, introduction of passkeys, and increased adoption of multi-factor authentication, many of the security benefits associated with SSO aren't as prevalent as they were 10 years ago. The biggest benefit is likely the convenience that SSO still brings compared to other authentication methods.

Ultimately it's up to you to determine if these concerns are worth the benefits of using SSO (or of the third-party service provider at all, if it requires SSO). I have a feeling the common advice will be to avoid SSO unless it's an identity provider that you trust (or even better - one that you host yourself) - especially if you're already using unique emails/usernames along with strong, unique passwords plus multi-factor authentication and/or passkeys.

[–] [email protected] 8 points 3 months ago

There are a few performance issues that you may experience. For example, if you're into online gaming then your latency will likely increase. Your internet connection bandwidth could also be limited by either Mullvad's servers, your router, or any of the additional hops necessary due to the VPN. There's also the situation where you have no internet connection at all due to an issue with the VPN connection.

There are also some user experience issues that users on the network may experience. For example, any location-based services that rely on IP address will either not work at all or require manual updates by the user. The same is true for other settings like locale, though those are hopefully handled better via browser/system settings. What's more likely is running into content restrictions based on the geographic location of your IP address. Additionally, some accounts/activity could be flagged as suspicious, suspended, or blocked/deleted if you change servers too frequently.

I'm sure you are either aware of or thought through most of that, but you may want to make sure everyone on the network is fine with that too.

In terms of privacy and security, it really comes down to your threat model. For example, if you're logged into Facebook, Google, etc. 24/7, use Chrome, Windows, etc., and never change the outbound Mullvad server, you're not doing too much more than removing your ISP's ability to log your activity (and maybe that's all you want/need).
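If you do set it up, it's worth verifying that every machine on the network actually egresses through the VPN. Mullvad runs a connection-check service at am.i.mullvad.net; a quick sketch of a check script:

```python
import requests

# Mullvad's connection-check endpoint returns a plain-text verdict on
# whether the request egressed through a Mullvad server.
resp = requests.get("https://am.i.mullvad.net/connected", timeout=10)
print(resp.text.strip())
```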

[–] [email protected] 14 points 3 months ago

Ultra-wideband

In addition to other use cases, it is used to precisely identify where a device is in relation to another one.

[–] [email protected] 2 points 3 months ago

I think there may be an issue where F-Droid is not properly recognizing the 64-bit version of Findroid. Maybe Droid-ify and/or the version of Android you are using won't allow 32-bit apps to be installed.

[–] [email protected] 11 points 3 months ago (1 children)

Just to clarify - this is just an update that (I believe) is only available on IzzyOnDroid's F-Droid Repo, which previously had prior Findroid versions available. This new v0.15.0 is not available on the main F-Droid Repo.

Is anyone only able to download the 32-bit version of this app via F-Droid? It looks like a 64-bit version has been made available starting with v0.3.0 and is included in this new version as well.

[–] [email protected] 2 points 3 months ago (1 children)

Really not sure why you got downvoted so hard, and it's a shame your comment was deleted. Your comment was relevant, accurate, and focused on an issue that others aren't talking about in here (and apparently don't want to). You were also the only person in this thread who provided any sources.

I'm not sure what argument can be made against what you said. Just because a piece of information "is public" doesn't mean everyone wants that public information collected and shared with little (if any) control/input by you. If that were the case, doxxing wouldn't be an issue.

[–] [email protected] 6 points 3 months ago

I did not watch the mentioned video so I am not sure if what I am about to mention is discussed there or not. Also, sorry for the really long reply!

I am not aware of any available truly privacy-respecting, modern cars. However, even assuming that you obtain one, or you do things like physically disconnect/remove all wireless connectivity from the car to make it as private/secure as possible, there is still little you can do to be truly anonymous.

Your car likely has a VIN and license plate as well as a vehicle registration. Assuming you legally obtained the vehicle and did not take any preventative measures prior to purchasing it, those pieces of information will be tied back to you and your home address (or at least to someone closely connected to you). You would need to initially obtain the vehicle via a company/LLC/partnership/etc. as the owner/renter/lessee of the vehicle, using an address not associated with you. Additionally, you would need to find some means of avoiding or limiting the additional information connected to you that is likely required to obtain the vehicle, like car insurance and your driver's license.

Additionally, any work that certain mechanics perform may be shared (either directly or indirectly) with data brokers - even just routine maintenance like an oil change or alignment. Hopefully you didn't use your credit card, loyalty rewards program, etc. when you had any work done!

There are also CCTV, security cameras, and other video recorders that are nearly impossible to avoid. Given enough time/resources and maybe a little bit of information, your car could be tracked from origin to destination. This location history can be used to identify you as the owner (or at least the driver/passenger) of the car. Unless your car never leaves your garage, you can almost guarantee that it appears on some Ring camera, street camera, etc.

Furthermore, anything special or different about your car (custom decal, unusual window tinting, funny bumper sticker, uncommon color, uncommon trim/package, dented bumper, fancy rims, replaced tires, specific location of the toll reader on the windshield, something hanging from your rearview mirror, etc.) helps identify your car. The make, model, and year of your car can also be used to identify it if it's not a common car in the area. These identifiers can be used to help track your car via the video feeds mentioned above.

Then there are license plate readers which are only slightly easier to avoid than the video recordings. Permanent, stationary license plate readers can be found on various public roads and parking lots. There are also people who drive around with license plate readers as part of their job for insurance/repossession purposes. You may be able to use some sort of cover over your license plate(s) to hinder the ability of license plate readers to capture your plate number, but that could be used to help identify your car in video feeds/recordings.

 

cross-posted from: https://lemmy.ml/post/16693054

Is there a feature in a CI/CD pipeline that creates a snapshot or backup of a service's data prior to running a deployment? The steps of an ideal workflow that I am searching for are similar to the following (a rough sketch of what I mean follows the list):

  1. CI tool identifies new version of service and creates a pull request
  2. Manually merge pull request
  3. CD tool identifies changes to Git repo
    1. CD tool creates data snapshot and/or data backup
    2. CD tool deploys update
  4. Issue with deployment identified that requires rollback
    1. Git repo reverted to prior commit and/or Git repo manually modified to prior version of service
    2. CD tool identifies the rolled back version
      1. (OPTIONAL) CD tool creates data snapshot and/or data backup
      2. CD tool reverts to snapshot taken prior to upgrade
      3. CD tool deploys service to prior version per the Git repo
  5. (OPTIONAL) CD tool prunes data snapshots and/or data backups based on provided parameters (e.g. delete snapshots after _ days, only keep the 3 most recently deployed snapshots, only keep snapshots for major version releases, only keep one snapshot for each latest major, minor, and patch version, etc.)
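To illustrate steps 3 and 4, here is a rough sketch of the pre-deploy hook I have in mind, assuming the service's data lives on a ZFS dataset and the service is deployed with Docker Compose (the dataset name and commands are illustrative placeholders):

```python
#!/usr/bin/env python3
"""Hypothetical pre-deploy snapshot hook - names are illustrative."""
import subprocess
import sys
from datetime import datetime, timezone

DATASET = "tank/myservice"  # assumed ZFS dataset holding the service's data

def snapshot() -> str:
    # Step 3.1: snapshot the data before touching anything.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    name = f"{DATASET}@pre-deploy-{stamp}"
    subprocess.run(["zfs", "snapshot", name], check=True)
    return name

def deploy() -> None:
    # Step 3.2: deploy whatever the Git repo now describes.
    subprocess.run(["docker", "compose", "pull"], check=True)
    subprocess.run(["docker", "compose", "up", "-d"], check=True)

def rollback(snap: str) -> None:
    # Step 4: stop the service, restore the pre-deploy snapshot, restart.
    # Plain `zfs rollback` works here because the pre-deploy snapshot
    # is the most recent one.
    subprocess.run(["docker", "compose", "down"], check=True)
    subprocess.run(["zfs", "rollback", snap], check=True)
    subprocess.run(["docker", "compose", "up", "-d"], check=True)

if __name__ == "__main__":
    snap = snapshot()
    try:
        deploy()
    except subprocess.CalledProcessError:
        rollback(snap)
        sys.exit(1)
```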
8
submitted 5 months ago* (last edited 5 months ago) by [email protected] to c/[email protected]
 

 

I'm trying to find a video that demonstrated automated container image updates for Kubernetes, similar to Watchtower for Docker. I believe the video was by @[email protected] but I can't seem to find it. The closest functionality that I can find to what I recall from the video is k8s-digester. Some key features that were discussed include:

  • Automatically update tagged version numbers (e.g. Image:v1.1.0 -> Image:v1.2.0)
  • Automatically update image based on tagged image's digest for tags like "latest" or "stable"
  • Track container updates through modified configuration files
    • Ability to manage deploying updates through Git workflows to prevent unwanted updates
  • Minimal (if any) downtime
  • This may not have been in the video, but I believe it also discussed managing backups and rollback functionality as part of the upgrade process

While this tool may be used in a CI/CD pipeline, it's not limited exclusively to Git repositories, as it could be used to monitor container registries from various people or organizations. The tool/process may have also incorporated Ansible.

If you don't know which video I'm referring to, do you have any suggestions on how to achieve this functionality?

EDIT: For anyone stumbling on this thread, the video was Meet Renovate - Your Update Automation Bot for Kubernetes and More! by @[email protected], which discusses the Kubernetes tool Renovate.
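For example, pointing Renovate's Kubernetes manager at your manifests only takes a small config file. A rough sketch of a renovate.json (the fileMatch pattern is an assumption about where your manifests live):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "kubernetes": {
    "fileMatch": ["^manifests/.+\\.yaml$"]
  }
}
```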

 

I've been looking for something "official" from the Librewolf team regarding running Librewolf in Docker, but I haven't found much. There are a few initiatives that seem to support Librewolf Docker containers (e.g. on GitHub and Docker Hub), but they don't seem to be referenced much or heavily used. However, maybe the reason I don't see this much is that there are better ways to achieve what I'm looking for:

  • Better separation between my daily OS environment and my regular browsing environment
  • Ability to run multiple instances of a privacy-friendly browser and isolate each instance for particular use cases
  • Ability to configure each instance to run over a different VPN (or no VPN at all)

What is the best way to achieve this? A rough sketch of the kind of setup I'm imagining is below.
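The shape I have in mind is one container per use case, each sharing the network namespace of a separate VPN client container (e.g. a gluetun instance). The image name, container names, and volume mount path below are all placeholders, since I haven't found an official image:

```python
import subprocess

# Placeholder image - there is no official LibreWolf image that I know of.
IMAGE = "example/librewolf:latest"

# One browser container per use case; `--network container:<name>` routes
# all of its traffic through the named VPN container's network namespace,
# and a dedicated volume keeps each instance's profile isolated.
subprocess.run([
    "docker", "run", "-d",
    "--name", "librewolf-work",
    "--network", "container:vpn-work",
    "-v", "librewolf-work-profile:/config",
    IMAGE,
], check=True)
```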
