nibblebit

joined 1 year ago
[–] [email protected] 2 points 1 year ago

Sorry, I didn't mean to be dismissive. I wholeheartedly agree with you. What I meant was that it's a shame that I, as an engineer in the year 2023, would have a hard time pitching a blockchain solution for a non-crypto problem to paying customers, no matter how fitting the solution might be. I don't think that's very disputable. This attitude is entirely driven by the last decade of unsubstantiated crypto hype and the associated bad-faith actors. It has nothing to do with the technology as it is.

[–] [email protected] 1 points 1 year ago

Sorry, it was not my intention to be vague. I admit I don't have a complete implementation in mind. My point is that linking each log entry as a block in a chain of hashes forces an order that is more difficult to tamper with than a timestamp or an auto-incremented integer id. You have to alter more data to inject or purge records from a chain than you would with a table of timestamped records. I admit I can't make my case better than that.
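To make that a bit more concrete, here's a minimal sketch in plain C# (SHA-256, .NET 5 or later; the record shape and field names are just placeholders I made up, not any particular product's format) of what linking and verifying such a chain could look like:

```csharp
using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;

// Each entry commits to the hash of the previous one, so injecting, purging,
// or reordering records breaks every hash that comes after the tampered spot.
public record LogEntry(string PreviousHash, string Payload, string Hash);

public static class HashChain
{
    private static readonly string Genesis = new string('0', 64);

    public static LogEntry Append(LogEntry? previous, string payload)
    {
        string previousHash = previous?.Hash ?? Genesis;
        return new LogEntry(previousHash, payload, Sha256Hex(previousHash + payload));
    }

    public static bool Verify(IReadOnlyList<LogEntry> chain)
    {
        for (int i = 0; i < chain.Count; i++)
        {
            string expectedPrevious = i == 0 ? Genesis : chain[i - 1].Hash;
            if (chain[i].PreviousHash != expectedPrevious) return false;
            if (chain[i].Hash != Sha256Hex(chain[i].PreviousHash + chain[i].Payload)) return false;
        }
        return true;
    }

    private static string Sha256Hex(string input) =>
        Convert.ToHexString(SHA256.HashData(Encoding.UTF8.GetBytes(input)));
}
```

Compare that to a table of timestamped rows, where you only have to edit the rows you want to hide.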

As for the simplicity factor: I think your suggestion of serving logs to peers from a server like an RSS feed is a fine solution.

But I can set up a MultiChain instance in a few hours and start issuing tokens. I can send the same link out to my peers and auditors for them to connect and propagate the shared state. The community can shrink and grow without the members having to change anything. It's mostly a hands-off venture that scales relatively well. I'm an okay programmer, but coordinating an effort to build, test, and verify a system that does the same with RSS feeds across multiple companies would take me months. Something like MultiChain or Hyperledger is comparatively turnkey.

I'm not here to say this is the best way to do it. I'm just saying there's some merit to leveraging these technologies.

If you ask me, audit logs should just be posted to Twitter, the only true write-only database.

[–] [email protected] 1 points 1 year ago (2 children)

I mean, you would need the hashing and consensus stuff to figure out exactly how the chain diverged. Just pooling the events would in theory be enough to prove that shenanigans were afoot when the ledgers don't align, but that's a bit too brittle to base a bi-annual evaluation on. You could close those gaps and set up some eventual consistency across peers, sure, but now you're talking about some complicated proprietary software. It's also not clear how a system like that would scale.

There are plenty of convenient self-hosted blockchain solutions out there already that can be used to accomplish this, and there are a ton of tools to do analysis and tracing on these chains. That makes it not unreasonable when compared to a dedicated solution.

[–] [email protected] 2 points 1 year ago (1 children)

Aah yes, naming collisions. I hadn't considered that. I'm curious what kind of packages you want to make available to the public that are also prone to naming collisions solvable by a namespace convention. Are you publishing libraries, applications, VS extensions, or templates?

[–] [email protected] 1 points 1 year ago (1 children)

Ooh, I hadn't heard of Sleet, that looks so neat.

10 publishes a day? Is that slow? I'm on a 20-person team and we run up to about 10 a month. Is your final product a suite of packages, or are you only using package references in your projects?

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (4 children)

I'm sure the hardcore variant would have its uses. But the goal isn't necessarily to make fraud impossible, just evident. So probably more towards the latter option. And you are correct that you don't need a blockchain to create a distributed database that enforces consensus. It's just a neat tool you could use that scales pretty well, is relatively low maintenance (SWE hours, not GPU hours), can adapt to a lot of cases, and is affordable for small and mid-sized companies. You could do the same by broadcasting your events to all your peers and having each peer save everyone's events to compare notes later, but that would be a hassle to set up and keep consistent.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

Most auditing and insurance companies don't have a webhook you can arbitrarily send your logs to. They have humans with eyes and fingers holding risk management and law degrees, called auditors, whom you need to convince of your process integrity with words and arguments. And what happens if you switch insurer or certifier? You probably have to do a ton of IT work to change the format and destination of your logs. And how do you prove that your process was not manipulated during the transition?

What you describe are digital notary services, and it's a billion-dollar industry. All they do is act as a trusted third party that records process integrity. IAM, change logs, RFCs, financial transactions, and incident detection and response are all sent in real time so you are ready for certification or M&A. Most small and mid-sized enterprises can't afford that kind of service and are often locked out of certain certifications or insurances, or take a huge price cut when acquired.

Something like pooling resources into a provably immutable log trail isn't unreasonable.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

Let's say a country mandates that its telecom sector audit its transactions. The idea would be to share the network with several peers, your telecoms. In this case, "mining" would be verifying the integrity of the chain, and it can be done by any of the peers. The government or auditing authority could also be a peer in the network, and they are all capable of verifying the integrity of the chain through "mining". You are right that it's easier to have a small group of peers conspire to manipulate the chain, but it's a lot harder for several telecoms to conspire than for one rogue CFO to cook the books.

In this application you're not generating 'valuable' tokens in the sense Bitcoin does; the value is the integrity of the chain. People value the proof that no one has redacted or injected any transactions.

[–] [email protected] 1 points 1 year ago (6 children)

The security comes from consensus. Everyone needs to agree about what the truth is. The burden of proof is proportional to the number of peers that need to agree. Public chains require a lot of work to create consensus amongst hundreds of thousands of peers. Let's say your network consists of 12 companies all using the same chain to validate and verify each other's transactions so they are ready for an audit.

Yes, it's easier to have 12 peers conspire to manipulate the chain than 200,000 peers. But getting 12 businesses to conspire to cook the books is already several orders of magnitude more difficult than defeating the checks and balances we have in place now.
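As a rough sketch of what that peer agreement could look like in C# (the quorum threshold and the idea of comparing chain-tip hashes are just illustrative choices of mine, not any specific product's protocol):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Each of the 12 peers reports the hash of its latest block. If a sufficient
// majority agrees, that tip is accepted as the shared truth; dissenting peers
// are flagged for investigation.
public static class Consensus
{
    public static (string AgreedTip, IReadOnlyList<string> Dissenters) Evaluate(
        IReadOnlyDictionary<string, string> tipHashByPeer, double quorum = 2.0 / 3.0)
    {
        var majority = tipHashByPeer
            .GroupBy(kv => kv.Value)
            .OrderByDescending(g => g.Count())
            .First();

        if (majority.Count() < tipHashByPeer.Count * quorum)
            throw new InvalidOperationException("No quorum: the ledgers have diverged.");

        var dissenters = tipHashByPeer
            .Where(kv => kv.Value != majority.Key)
            .Select(kv => kv.Key)
            .ToList();

        return (majority.Key, dissenters);
    }
}
```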

[–] [email protected] 3 points 1 year ago (2 children)

This right here is really the spirit of the post. Yes, there are many impractical applications, much like there are many impractical applications for RDBMSs. But the tech has such a stank on it that it's important to remember it's just a tool that can be useful despite the hype cycle.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (2 children)

Not every log needs that kind of security and a chain does not need to be public. You download blocks from peers and do your own accounting.

Nothing is preventing you from only giving access to your chain to a trusted circle of peers.

Something you could do is encrypt your logs and push them to a chain shared by a number of peers who do the same with their own keys. Now you have a pool of accountability buddies, because if someone tries to tamper with the logs, you all hang together.

If you're doing some spooky stuff and need to prove a high degree of integrity, you could push encrypted logs to a chain. The auditor can then appoint several independent parties whose only job is to continuously prove the integrity of your logs. Once that is proven, you can release your keys to the auditor, who can inspect your logs knowing that they have remained complete and untampered with during the audit period.
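A rough sketch of that flow with plain .NET crypto (AES for the ciphertext, SHA-256 for what actually gets anchored on the chain; the method names are mine, not any real notary API):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Encrypt a log entry locally, push only a commitment to the shared chain,
// and hand the key to the auditor when the audit period closes.
public static class SealedLogs
{
    public static (byte[] Ciphertext, byte[] Iv) Encrypt(string logLine, byte[] key)
    {
        using var aes = Aes.Create();
        aes.Key = key;                          // e.g. a 256-bit key the company keeps to itself
        aes.GenerateIV();
        using var encryptor = aes.CreateEncryptor();
        byte[] plaintext = Encoding.UTF8.GetBytes(logLine);
        byte[] ciphertext = encryptor.TransformFinalBlock(plaintext, 0, plaintext.Length);
        return (ciphertext, aes.IV);
    }

    // What actually goes on the chain: a hash the peers can verify without reading the logs.
    public static string Commitment(byte[] ciphertext) =>
        Convert.ToHexString(SHA256.HashData(ciphertext));
}
```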

Again, I understand it's not the most efficient system, but there are less efficient and less flexible systems out there in enterprise land haha

[–] [email protected] 2 points 1 year ago (1 children)

As others have said, if you have zero experience, what you are aiming for is pretty complicated.

  • You need pathfinding. Godot's navigation mesh will do great, but you could implement waypoints and A* yourself if you want more control and want to learn.
  • You need some placeholder models. Prisms or Sprite3D nodes work well because you can more easily see which way they are facing.
  • You need some agent behaviour. What does "move randomly but also towards the player" mean? Are you thinking of a Pac-Man-like situation? You might want to look into state machines (see the sketch after this list).
  • If you want the levels to be procedurally generated, you open a whole new can of worms.
  • Depending on your use case, you might want to spend time getting comfortable with the UI framework and Control nodes to create buttons and widgets to start and reset levels.
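For the agent behaviour point, here's a minimal sketch of an enum-based state machine, assuming a recent Godot 4 release with C#, a CharacterBody3D with a NavigationAgent3D child, and a player node you point at in the inspector (the node names, speeds, and ranges are arbitrary placeholders):

```csharp
using Godot;

// Minimal enum-based state machine: the agent wanders to random points
// and switches to chasing once the player comes within detection range.
public partial class Agent : CharacterBody3D
{
    private enum State { Wander, Chase }

    [Export] public NodePath PlayerPath;        // set this to the player node in the editor
    [Export] public float Speed = 3.0f;
    [Export] public float DetectionRange = 8.0f;

    private State _state = State.Wander;
    private Node3D _player;
    private NavigationAgent3D _nav;

    public override void _Ready()
    {
        _player = GetNode<Node3D>(PlayerPath);
        _nav = GetNode<NavigationAgent3D>("NavigationAgent3D");
        PickRandomWanderTarget();
    }

    public override void _PhysicsProcess(double delta)
    {
        // Transition logic: chase when the player is near, otherwise wander.
        bool playerNear = GlobalPosition.DistanceTo(_player.GlobalPosition) < DetectionRange;
        _state = playerNear ? State.Chase : State.Wander;

        if (_state == State.Chase)
            _nav.TargetPosition = _player.GlobalPosition;
        else if (_nav.IsNavigationFinished())
            PickRandomWanderTarget();

        // Walk towards the next point on the nav mesh path.
        Vector3 next = _nav.GetNextPathPosition();
        Velocity = (next - GlobalPosition).Normalized() * Speed;
        MoveAndSlide();
    }

    private void PickRandomWanderTarget()
    {
        var offset = new Vector3((float)GD.RandRange(-10.0, 10.0), 0f, (float)GD.RandRange(-10.0, 10.0));
        _nav.TargetPosition = GlobalPosition + offset;
    }
}
```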
 

Hey, I've been wondering what you all use to create and manage dialogue trees for your games. I've come across many tools for the different engines. Most fall into the low-code node-graph category, which I find frustrating and finicky to work with. I never got the hang of the different plugins for Godot, and it's tiring to just spam and duplicate if statements in huge globs.

I made a C# package that lets me map out dialogue trees and fire events, all in neat little YAML files that live happily in version control. It was made abstract enough to work for MUDs and text adventures, but I recently started using it in my Godot games and it works pretty well.
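For illustration, a node in such a file could look something like the commented-out example below, loaded with YamlDotNet. This is a hypothetical schema I'm sketching here to show the idea, not the actual format my package uses:

```csharp
using System.Collections.Generic;
using YamlDotNet.Serialization;
using YamlDotNet.Serialization.NamingConventions;

// Hypothetical node shape for illustration only.
public class DialogueNode
{
    public string Id { get; set; } = "";
    public string Text { get; set; } = "";
    public List<DialogueChoice> Choices { get; set; } = new();
}

public class DialogueChoice
{
    public string Text { get; set; } = "";
    public string Next { get; set; } = "";
    public List<string> Events { get; set; } = new();   // event names to fire when picked
}

public static class DialogueLoader
{
    public static DialogueNode Load(string yaml) =>
        new DeserializerBuilder()
            .WithNamingConvention(CamelCaseNamingConvention.Instance)
            .Build()
            .Deserialize<DialogueNode>(yaml);
}

// Example usage with a hand-written node:
// var node = DialogueLoader.Load(
//     "id: guard_intro\n" +
//     "text: Halt! Who goes there?\n" +
//     "choices:\n" +
//     "  - text: A friend.\n" +
//     "    next: guard_friendly\n" +
//     "  - text: None of your business.\n" +
//     "    next: guard_hostile\n" +
//     "    events: [guard_alerted]\n");
```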

I don't believe I'm the only one who prefers to work this way. I'm curious what you all use for branching dialogue mechanics, reacting to events during dialogue, SFX, etc.

Do you like the plugins? Do you have bottomless branches of flow control? Let me know!

 

Hi! I'm a software guy and would like to start doing some robotics. Before I go out and get a bunch of hardware, I'd like to practice the fundamentals.

I'm most comfortable with C++, C#, and .NET, and I'm pretty comfortable with game engines like Unity, Unreal, and Godot.

I've started out modeling a three-joint articulated robot arm that I can control through signals to the individual joints, like controlling a stepper motor.

My goal is to figure out a system where I can declare the shape of a robot like this (armature size, number of joints, offsets, etc.) to create a virtual model of the robot. I want to be able to send target coordinates and a basis rotation to that model and receive a series of signals back that will move the head of the robot to that 3D coordinate and rotation.
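To give an idea of the math involved, here's a simplified sketch of analytic inverse kinematics for a two-link planar arm. It's a flattened version of the three-joint, full-3D problem with a target orientation, which is considerably harder, but it shows the core step of turning a target point into joint angles:

```csharp
using System;

// Analytic IK for a two-link arm in a plane ("elbow-down" solution).
public static class TwoLinkIk
{
    // Returns (shoulder, elbow) angles in radians for target (x, y),
    // given link lengths l1 and l2.
    public static (double Shoulder, double Elbow) Solve(double x, double y, double l1, double l2)
    {
        double d = Math.Sqrt(x * x + y * y);
        // Clamp unreachable targets to the edge of the workspace.
        d = Math.Clamp(d, Math.Abs(l1 - l2), l1 + l2);

        // Law of cosines gives the elbow angle.
        double cosElbow = (d * d - l1 * l1 - l2 * l2) / (2 * l1 * l2);
        double elbow = Math.Acos(Math.Clamp(cosElbow, -1.0, 1.0));

        // Shoulder angle: direction to the target minus the offset caused by the bent elbow.
        double shoulder = Math.Atan2(y, x)
                        - Math.Atan2(l2 * Math.Sin(elbow), l1 + l2 * Math.Cos(elbow));
        return (shoulder, elbow);
    }
}
```

The resulting joint angles could then be converted into step counts for the virtual stepper motors.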

Now, I'm sure there are systems and packages that do all the math for this already, so what tools/libraries do you guys use to do modeling like this?

I want to see if I can simulate it in a game engine, and if that works out, maybe I'll try it on a toy :D

Thanks!

 

The guys from NoClip dug up a bunch of old videogame archival footage and are slowly uploading it to archive.org.

One of them is this documentary about a GBA game that I found to be so heartwarming.

Hope you like it!

1
Lemmy on native Azure? (raw.githubusercontent.com)
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

Hey! Last week I tried, off and on, to get Lemmy running on an Azure subscription; it's been tricky.

I still haven't gotten it working correctly. So far I've tried to run the docker-compose setup on an ACI and a Container App, but I've had the most success on a Web App for Containers of all things, with the configs uploaded directly to the app service through FTP (yeah...).

I'm running Postgres as a separate Flexible Server instance (set to v15; the default is v13), and I'm running the pict-rs container as a separate ACI with a mounted storage account.

Right now the backend doesn't want to run the db migrations fully, and I'm not sure why. Otherwise, the rest seems to work as intended and can scale independently, running up to a projected $52/month with everything on the lowest possible SKU.

I will publish a Bicep template once I get the whole thing to run reproducibly.

Have you guys tried it out? What other approaches have you tried or would you try?

 

Hey, I started making an Azure Functions bot, so I made a quick Lemmy HTTP client and decided to push it to NuGet.

 
 

Hey there! 👋

Welcome to our C# community on Lemmy! We're a group of programmers, hobbyists, and learners all keen on C#. Whether you're a pro or just getting started, we're excited to have you here.

Our goal? To learn, share, and collaborate on everything C#. Got questions, projects, or resources to share? Or simply want to discuss a feature you love (or not) about C#? This is your space!

Here are a few ground rules:

  1. Be respectful and considerate: Remember, we're all at different stages in our C# journey.

  2. Stay on topic: Let's keep discussions C# focused.

  3. No spamming or self-promotion: Share your projects, but don't overdo the self-promotion.

  4. Use appropriate language: No offensive language. Let's keep it positive!

So, let's dotnet build and Nuget Unable to resolve dependency

Cheers!

 

Hey guys! I thought it would be fun to set up a public anonymous survey of our users, just to see what kinds of cloud adopters we have around. Results are public and entry is anonymous. It's only to be used for the community.

For now it's as simple as taking a look at what you guys are using and what you are curious about, but in the future we can expand it to answer some interesting questions :)

 

Hey everyone,

I thought it would be good to set up a repository of learning materials beneficial for both newcomers and seasoned professionals.

The aim is to curate content that ranges from beginner to advanced levels, either focused on specific cloud platforms like AWS, Google Cloud, Azure, IBM Cloud, etc., or general insights applicable across multiple platforms.

The three main categories for suggestions are:

  1. Books: What are some introductory and advanced-level books that have deepened your understanding of cloud computing? This could include architecture, best practices, security, scalability, serverless computing, cookbooks and others.

  2. Blogs: We'd love to know which blogs you trust and follow for the latest news, trends, and innovations in cloud computing. Technical blogs offering how-to guides, problem-solving techniques, project logs and tutorials, or sharing personal experiences in the field would also be great.

  3. Videos: Are there YouTube channels, online course platforms, or websites that have provided you with insightful video tutorials, webinars, or talks on cloud technology?

Cloud computing is a big field, so here are some suggestions for interesting topics:

  • IaaS, PaaS and SaaS offerings of different providers
  • comparisons and cross-platform mappings (e.g. Azure for AWS engineers)
  • IaC solutions
  • Authentication, Security and Access control
  • Architecture
  • Big(ish) Data management
  • Governance, compliance and Monitoring
  • Fun personal projects

Thank you so much!
