submitted 4 days ago* (last edited 4 days ago) by [email protected] to c/[email protected]
[-] [email protected] 0 points 4 days ago

Plain text is slow and cumbersome for large amounts of logs. It would have had a decent performance penalty for little added value.

If you like text, you can pipe journalctl's output to the usual text tools.
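
For example, here is a rough sketch of driving that pipe from Python: it only assumes journalctl is on PATH and uses its standard --since/--no-pager/-o flags; the "error" substring filter is just an illustration, not anything journald-specific.

```python
import subprocess

# Dump the last week of the journal as plain text and filter it like any
# other text stream; the substring match below is only an example filter.
proc = subprocess.Popen(
    ["journalctl", "--since", "1 week ago", "--no-pager", "-o", "short-iso"],
    stdout=subprocess.PIPE,
    text=True,
)
for line in proc.stdout:
    if "error" in line.lower():
        print(line, end="")
proc.wait()
```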

[-] [email protected] 3 points 4 days ago

But if journalctl itself is slow, piping doesn't help.

We have only one week of very sparse logs in it, yet it takes several seconds... grepping tens of gigabytes of plain-text logs can sometimes be faster. That is insane.

[-] [email protected] 2 points 4 days ago

Strange.

Probably worth asking on a more technical forum.

[-] [email protected] 1 point 3 days ago

As I said, I've dealt with logging where the variable-length text was kept as plain text, with the external metadata/index kept as binary. You get the best of both worlds there. Plus it's easier to have very predictable entry alignment, since the messy variable-length data stays outside the binary file and the binary file can use fixed record sizes. You may end up with some duplicate data (e.g. the text file has a text version of a timestamp that's also stored as the binary timestamp in the metadata), but overall it's not too bad.
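
Something like this rough sketch of that layout (the 20-byte record format and the function names are made up for illustration, not any real system's on-disk format):

```python
import bisect
import struct
import time

# Hypothetical hybrid layout: log messages go into a plain-text file that
# grep/less can read directly, while a side-car binary index holds one
# fixed-size record per entry. Assumed record layout (little-endian):
#   8-byte unix timestamp, 8-byte byte offset, 4-byte length  -> 20 bytes.
RECORD = struct.Struct("<QQI")

def append_entry(text_path: str, index_path: str, message: str) -> None:
    ts = int(time.time())
    line = f"{time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime(ts))} {message}\n"
    data = line.encode("utf-8")
    with open(text_path, "ab") as txt, open(index_path, "ab") as idx:
        offset = txt.tell()                            # current end of the text file
        txt.write(data)                                # human-readable entry
        idx.write(RECORD.pack(ts, offset, len(data)))  # fixed-size metadata record

def read_since(text_path: str, index_path: str, since_ts: int) -> str:
    """Return all entries at or after since_ts without scanning the text file."""
    with open(index_path, "rb") as idx:
        raw = idx.read()
    records = [RECORD.unpack_from(raw, i) for i in range(0, len(raw), RECORD.size)]
    timestamps = [r[0] for r in records]
    first = bisect.bisect_left(timestamps, since_ts)   # binary search on fixed records
    if first == len(records):
        return ""
    with open(text_path, "rb") as txt:
        txt.seek(records[first][1])                    # jump straight to the entry
        return txt.read().decode("utf-8")
```

The nice part is that ordinary text tools still work directly on the log file, while time-range lookups only touch the small fixed-size index.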
