Programming Languages


This subreddit is dedicated to the theory, design and implementation of programming languages.

1
 
 
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/programminglanguages by /u/FurCollarCriminal on 2024-04-04 22:51:36.


Compilers seem like a good use-case for formal verification at first glance: they're complex deterministic programs where correctness is of the utmost importance.

While there is the canonical example of the "verified C compiler" (CompCert), I don't really understand how formal verification fits into the compiler world at large.

Has anyone here applied formal verification to their work? Perhaps for one part of their compiler -- verifying the type checker, a specific optimization, etc? What did it look like? What tools did you use?
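One lightweight relative of full verification that does show up in practice is translation validation: rather than proving a pass correct once and for all, each run checks that the optimized program agrees with the original. A toy sketch of the idea (the AST encoding and function names are invented for illustration):

```python
# Translation validation in miniature: a constant-folding pass over a
# tiny expression AST, validated by evaluating both versions.
# Expressions are ('+', a, b), ('*', a, b), ints, or variable names.

def fold(e):
    """Constant-fold: an operator node with two int leaves becomes an int."""
    if isinstance(e, tuple):
        op, a, b = e
        a, b = fold(a), fold(b)
        if isinstance(a, int) and isinstance(b, int):
            return a + b if op == '+' else a * b
        return (op, a, b)
    return e

def eval_expr(e, env):
    """Reference semantics used to check the pass."""
    if isinstance(e, tuple):
        op, a, b = e
        x, y = eval_expr(a, env), eval_expr(b, env)
        return x + y if op == '+' else x * y
    return env[e] if isinstance(e, str) else e

# Validate the pass on one concrete program and input.
prog = ('+', ('*', 2, 3), 'x')
assert eval_expr(prog, {'x': 5}) == eval_expr(fold(prog), {'x': 5})
```

This only checks individual runs, of course; proving the pass correct for all inputs is where proof assistants come in.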

2
Lambda Screen (text.marvinborner.de)
 
 

The original was posted on /r/programminglanguages by /u/marvinborner on 2024-04-08 13:58:55.

3
 
 

The original was posted on /r/programminglanguages by /u/Fibreman on 2024-04-06 06:13:34.


Like the JVM or the CLR? It seems like you get a lot of the same benefits provided by a VM if you develop/use languages based on LLVM, but with a leaner, faster language at the end.

4
 
 

The original was posted on /r/programminglanguages by /u/GeroSchorsch on 2024-04-04 19:17:45.


I wrote a C99 compiler (https://github.com/PhilippRados/wrecc) targeting x86-64 for macOS and Linux.

It has a built-in preprocessor (missing only function-like macros) and supports all types (except short, float, and double) and most keywords (except some storage-class specifiers/qualifiers).

Currently it can only compile a single .c file at a time.

The self-written backend emits x86-64, which is then assembled and linked using the host's as and ld.

Since this is my first compiler (it went through a lot of rewrites), I would appreciate some feedback from people who have more knowledge in the field, as I just learned as I went (especially for the typechecker -> codegen -> register-allocation phases).

It has 0 dependencies and everything is self-contained so it _should_ be easy to follow 😄
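The post doesn't describe which register-allocation strategy wrecc uses, so purely as a point of comparison, here is a minimal linear-scan allocator (Poletto & Sarkar), a common choice for small backends. The interval format and register names are invented for the sketch:

```python
# Linear-scan register allocation over precomputed live intervals.
# intervals: list of (name, start, end); registers: available names.

def linear_scan(intervals, registers):
    """Return a mapping name -> register (or 'spill')."""
    alloc, active, free = {}, [], list(registers)
    for name, start, end in sorted(intervals, key=lambda iv: iv[1]):
        # Expire intervals that ended before this one starts,
        # returning their registers to the free pool.
        for iv in [iv for iv in active if iv[2] < start]:
            active.remove(iv)
            free.append(alloc[iv[0]])
        if free:
            alloc[name] = free.pop()
            active.append((name, start, end))
        else:
            # A real allocator would spill the interval ending furthest away.
            alloc[name] = 'spill'
    return alloc
```

With two registers, three pairwise-overlapping intervals force a spill, while non-overlapping ones can reuse a register.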

5
 
 

The original was posted on /r/programminglanguages by /u/Obj3ctDisoriented on 2024-04-03 22:02:37.


As I am sure many of you are aware, Donald Knuth proposed a test program in the early 1960s to verify proper scope resolution in Algol compilers, which he called "the man or boy test". At the time, many new compilers were coming out, and not all of them had properly implemented recursion and nested procedure scope resolution. The problem with the test is that the test itself is difficult to verify, as the nature of the program involves multiple, very deep recursive calls across various scopes.

I've been able to test my interpreter up to k=3, and the output matches the table on Wikipedia. However, for values of k > 3, no matter how much I increase stack/memory space, one of them runs out before the program terminates, which is really frustrating, and it got me thinking.

Is there some kind of newer "man or boy" test that is actually realistically computable? And if not, how could we go about developing one? It should simultaneously test name resolution across scopes, recursion, and the proper implementation of either call by name or closures.
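For reference, the original test transliterates compactly into any language with closures, with Algol's call-by-name parameters rendered as zero-argument thunks. This Python rendition (the classic formulation, not a newer test) gets well past k=3 once the recursion limit is raised:

```python
# Knuth's man-or-boy test, transliterated to Python closures.
import sys
sys.setrecursionlimit(100_000)  # the mutual recursion gets deep quickly

def A(k, x1, x2, x3, x4, x5):
    def B():
        nonlocal k          # B decrements the k of its defining A
        k -= 1
        return A(k, B, x1, x2, x3, x4)
    return x4() + x5() if k <= 0 else B()

def man_or_boy(k):
    return A(k, lambda: 1, lambda: -1, lambda: -1, lambda: 1, lambda: 0)

print([man_or_boy(k) for k in range(8)])
# → [1, 0, -2, 0, 1, 0, 1, -1]  (matches Knuth's table for k = 0..7)
```

The expected value for k=10 is -67; interpreters that get the wrong answer (or crash far earlier than the host language does here) usually mishandle the shared mutable k captured by B.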

6
 
 

The original was posted on /r/programminglanguages by /u/JGN1722 on 2024-04-03 18:55:41.


I wrote my own compiler for Windows. However, the code it produces is flagged as a virus by Windows Defender. I'm pretty sure I'm not the only one who has experienced this problem, so I came here to ask: do you have any advice on how to structure the output assembly code so that it doesn't trigger the AV, or any past experience with this kind of thing? Thanks in advance!

7
 
 

The original was posted on /r/programminglanguages by /u/Smallpaul on 2024-04-03 17:53:10.


Extremely sophisticated people or entities are starting to attack open-source infrastructure in difficult-to-detect ways.

Should programming languages start to treat dependencies as potentially untrustworthy?

I believe that the SPECIFIC attack was through the build system, not the programming language, so maybe build systems deserve a larger share of our attention?

More broadly though, if we can't trust our dependencies, maybe we need capability-based languages that embody a principle of least privilege?

Or do we need tools to statically analyze what's going on in our dependencies?

Or do you think that we should treat it 100% as a social, not technical, problem?
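The capability idea above can be sketched even in an ordinary language: the dependency receives no ambient authority, only objects granting exactly the permissions it needs. All names below (ReadCap, untrusted_word_count) are invented; a real capability language would enforce this discipline, whereas Python can only model it:

```python
# Principle of least privilege via explicit capability objects.

class ReadCap:
    """Grants read access to exactly one entry of a store, nothing else."""
    def __init__(self, store, key):
        self._store, self._key = store, key
    def read(self):
        return self._store[self._key]

def untrusted_word_count(cap):
    # This "dependency" can only call what it was handed; it cannot
    # reach the rest of `files`, the filesystem, or the network.
    return len(cap.read().split())

files = {"notes.txt": "least privilege in action", "secrets.txt": "hunter2"}
assert untrusted_word_count(ReadCap(files, "notes.txt")) == 4
```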

8
 
 

The original was posted on /r/programminglanguages by /u/7Geordi on 2024-04-01 16:42:30.


I'm thinking about this syntax:

average is Array of T -> T
with (T has [Add of T, Multiply of T, Divide of T])

What do you think of the proposed "of" operator as the denotation for a parameterized type?
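The proposed signature reads much like a constrained generic. For comparison, roughly the same contract expressed with Python's typing module (the Arith protocol is my own name, standing in for "T has [Add, Multiply, Divide]"):

```python
from typing import Protocol, Sequence, TypeVar

class Arith(Protocol):
    # The three capabilities the proposal's `with` clause requires.
    def __add__(self, other): ...
    def __mul__(self, other): ...
    def __truediv__(self, other): ...

T = TypeVar("T", bound=Arith)

def average(xs: Sequence[T]) -> T:
    """Array of T -> T, usable for any T satisfying the constraint."""
    total = xs[0]
    for x in xs[1:]:
        total = total + x
    return total / len(xs)

print(average([1.0, 2.0, 3.0]))  # → 2.0
```

One observation: "Array of T" and "Add of T" use "of" for two different things (type application vs. a constraint's parameter), which is worth keeping an eye on.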

9
 
 

The original was posted on /r/programminglanguages by /u/AutoModerator on 2024-04-01 02:01:49.


How much progress have you made since last time? What new ideas have you stumbled upon, what old ideas have you abandoned? What new projects have you started? What are you working on?

Once again, feel free to share anything you've been working on, old or new, simple or complex, tiny or huge, whether you want to share and discuss it, or simply brag about it - or just about anything you feel like sharing!

The monthly thread is the place for you to engage /r/ProgrammingLanguages on things that you might not have wanted to put up a post for - progress, ideas, maybe even a slick new chair you built in your garage. Share your projects and thoughts on other redditors' ideas, and most importantly, have a great and productive month!

10
 
 

The original was posted on /r/programminglanguages by /u/kraileth on 2024-04-01 09:49:57.


Now and then new posts are created in this sub where people are asking for more original ideas to implement in their PL project. I'd like to see a little discussion / collecting ideas for an important scenario: Our future.

I'd argue that in early CS a lot of the developments in language design were influenced by a key factor that seems to have been largely forgotten about: extreme resource constraints. Those days are gone, but we live in an age where we will likely be forced to accept that resources we have become accustomed to treating as basically unlimited are not in fact endless. Doomsday prophecies about the world ending next Thursday as well as hard-core denial of any environmental problems aside, we can anticipate new laws being passed in the not too distant future that will massively impact the IT sector.

How could language design help to get to a more sustainable future? What emerging problems do you see that should be discussed? And what can you imagine programming with new languages could be like in 2050?

Here's just a couple of points that come to mind when I think of this:

  • Solving problems at the right place. A lot of people love Rust and the idea of memory safety, and it's not hard to see why. We could, however, have computers which solve a lot of the common issues using capability hardware (Arm's Morello, for example, is a thing: the hardware actually exists, and people have e.g. ported software as complex as KDE5 for demonstration purposes). What would a modern language look like that does not have to take all the problems relating to pointers into account?
  • Good caching. Thanks to CI and such we're building software over and over again. Programs like ccache help a lot with that but it seems like most languages don't have such tooling available. What can be done when designing a language to make such a feature more feasible?
  • Resource monitoring. Let's assume a rather extreme case where law would require a company selling a new device to support it for 15 years. What could language design do to help write programs that can be maintained for a very long time and to ensure that hardware requirements do not increase to a level where the old device cannot run the software anymore?
  • Dependency management. Quite a few languages today come with packages, crates, etc. This is obviously a controversial topic. Using such means gives more fine-grained control but can also cause harm when people pin old "known working" versions and stick to those. What could be done to create a language well suited to creating libraries which can be relied on without just freezing the dependencies?

We're still / only a quarter century away from that point. Far enough away that a lot can happen in the meantime, but not so far away that we cannot make any remotely sensible predictions at all. Looking forward to reading what people think about this and what other topics come up in that context.

11
 
 

The original was posted on /r/programminglanguages by /u/Lucretia9 on 2024-03-27 12:12:42.

12
 
 

The original was posted on /r/programminglanguages by /u/thepoluboy on 2024-03-29 14:56:21.


I have seen many mainstream programming languages with similar taglines: the X programming language, an interpreted language...; Y, a compiled systems language.

As far as I understand, a programming language is just a specification, some fixed set of rules. It is the implementation of the language that is compiled or interpreted; thus, in theory, someone could write a compiled Python or an interpreted C. Isn't that right?
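The point above in miniature: one tiny language (arithmetic ASTs), two implementations, an interpreter and a compiler (here compiling to a Python function). Nothing about the language itself dictates which strategy is used; all names are invented for the sketch:

```python
def interpret(e):
    """e is ('num', n) or ('add', left, right); walk the tree each time."""
    return e[1] if e[0] == 'num' else interpret(e[1]) + interpret(e[2])

def to_src(e):
    """Translate the same AST to Python source once."""
    if e[0] == 'num':
        return str(e[1])
    return f"({to_src(e[1])} + {to_src(e[2])})"

def compile_expr(e):
    # Compile once, run many times.
    return eval(f"lambda: {to_src(e)}")

prog = ('add', ('num', 2), ('add', ('num', 3), ('num', 4)))
assert interpret(prog) == compile_expr(prog)() == 9
```

In practice the taglines stick because most languages have one dominant implementation, but nothing in the specification forces it.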

13
 
 

The original was posted on /r/programminglanguages by /u/dibs45 on 2024-03-28 08:57:23.


If you have one, what's the end goal of your programming language? What motivates you to keep building it?

I feel like I've invested a stupid amount of time in Vortex. The language has come together quite nicely and I've learned a lot working on it. But I do feel slightly burnt out, and it's probably because I don't really have an end goal for the language.

I'm not sure where to go with it now. Sure I could keep improving it, fixing bugs, adding features and all the good stuff. But I'm also feeling a little lost and not sure where to go from here. Anyone else feel the same way?

14
 
 

The original was posted on /r/programminglanguages by /u/open_source_guava on 2024-03-25 02:13:56.


15
 
 

The original was posted on /r/programminglanguages by /u/gallais on 2024-03-26 17:49:23.

16
 
 

The original was posted on /r/programminglanguages by /u/Pleasant-Form-1093 on 2024-03-25 18:52:05.


Forgive me if this sounds stupid, but I have this idea of a "function-based" programming language, i.e. one where everything is a function.

Everything in the language is a function, and doing absolutely anything looks like calling a function.

Say you wanted to declare an integer variable 'a' with the value 90 and then add the number 10 to it; you would go about it like:

declare(int, a, 90)
add(a, 10)

and so on and so forth

From my perspective (as a beginner myself), this would make the language pretty easy to parse, but it's also quite obvious that it would make the language too verbose and tedious to work with.
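The intuition about easy parsing holds up: since every construct is a call, the evaluator can be little more than a table of built-in functions over an environment. A toy sketch (declare and add are invented builtins; variable names become strings here, since Python would otherwise evaluate bare names):

```python
# "Everything is a function" in miniature: the language is just an
# environment plus builtins that mutate it.

env = {}

def declare(_type, name, value):
    env[name] = value
    return value

def add(name, amount):
    env[name] += amount
    return env[name]

# The post's example program, written directly as two calls:
declare(int, 'a', 90)
add('a', 10)
assert env['a'] == 100
```

Lisp is the closest existing design: everything is a uniform call-shaped form, and it is indeed famously easy to parse, with macros usually recruited to fight the verbosity.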

So what are your opinions on this approach? And please do point out any part where I might have gone wrong.

Thanks in advance.

17
 
 

The original was posted on /r/programminglanguages by /u/celeritasCelery on 2024-03-25 14:08:22.

18
 
 

The original was posted on /r/programminglanguages by /u/ebingdom on 2024-03-23 10:53:51.


What is the state of the art in combining dependent types and side effects like divergence, partiality, mutable state, etc.? I recall coming across some papers over the years with various ways of tracking which code is "pure" in types or contexts (or universes or kinds or ...), and allowing impure programs to occur in types only in controlled situations (so that you can still write proofs about impure programs, but you can't use impure programs to produce bogus proofs). Another challenge is writing polymorphic code that can be reused in both proofs and programs to avoid duplication (such as pairing/projections and other general-purpose facilities). Is there a satisfying solution, or is more research needed? Does call-by-push-value help with this somehow?

(To clarify, I am asking about side effects, not computational effects in general.)

19
 
 

The original was posted on /r/programminglanguages by /u/DoomCrystal on 2024-03-25 03:14:00.


I'm new to this type theory business, so bear with me :) Questions are at the bottom of the post.

I've been trying to learn about how different languages do things, having come from mostly a C background (and more recently, Zig). I just have a few questions about how languages do optionals differently from something like Zig, and what approaches might be best.

Here is the reference for Zig's optionals if you're unfamiliar:

From what I've seen, there are sort of two paths for an 'optional' type: a true optional, like Rust's "Some(x) | None", or a "nullable" type, like Java's Nullable. Normally I see the downsides being that optional types can be verbose (needing to write a variant of Some() everywhere), whereas nullable types can't be nested well (nullable nullable x == nullable x). I was surprised to find out in my investigation that Zig appears to kind of solve both of these problems?

A lot of times when talking about the problem of nesting nullable types, a "get" function for a hashmap is brought up, where the "value" of that map is itself nullable. This is what that might look like in Zig:

const std = @import("std");

fn get(x: u32) ??u32 {
    if (x == 0) {
        return null;
    } else if (x == 1) {
        return @as(?u32, null);   
    } else {
        return x;
    }
}

pub fn main() void {
    std.debug.print(
        "{?d} {?d} {?d}\n",
        .{get(0) orelse 17, get(1) orelse 17, get(2) orelse 17},
    );
}

  1. We return "null" on the value 0. This means the map does not contain a value at key 0.
  2. We cast "null" to ?u32 on value 1. This means the map does contain a value at key 1; the value null.
  3. Otherwise, give the normal value.

The output printed is "17 null 2\n". So, we printed the "default" value of 17 on the ??u32 null case, and we printed the null directly in the ?u32 null case. We were able to disambiguate them! And in this case, the some() case is not annotated at all.
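For contrast, here is the same lookup in a language with a single collapsing null. In Python, dict.get cannot distinguish a missing key from a stored None, which is exactly the information loss the Zig version avoids; the usual workaround is a private sentinel playing the role of the "outer" null:

```python
m = {1: None, 2: 2}

print(m.get(0))  # None: missing key
print(m.get(1))  # None: present key whose value is None -- same answer!

# The sentinel workaround: an object only this module knows about.
_MISSING = object()

def get(m, key, default):
    v = m.get(key, _MISSING)
    return default if v is _MISSING else v

assert get(m, 0, 17) == 17    # truly missing -> default
assert get(m, 1, 17) is None  # present None survives
assert get(m, 2, 17) == 2
```

The sentinel is essentially a hand-rolled, non-composable version of the outer optional layer that Zig's ??u32 gives you in the type system.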

Okay, questions about this.

  1. Does this really "solve" the common problems with nullable types losing information and optional types being verbose, or am I missing something? I suppose the middle case where a cast is necessary is a bit verbose, but for single-layer optionals (the common case), this is never necessary.
  2. The only downside I can see with this system is that an optional of type @TypeOf(null) is disallowed, and will result in a compiler error. In Zig, the type of null is a special type which is rarely directly used, so this doesn't really come up. However, if I understand correctly, because null is the only value that a variable of the type @TypeOf(null) can take, this functions essentially like a Unit type, correct? In languages where the unit type is more commonly used (I'm not sure if it even is), could this become a problem?
  3. Are there any other major downsides you can see with this kind of system besides #2?
  4. Are there any other languages I'm just not familiar with that already use this system?

Thanks for your help!

20
 
 

The original was posted on /r/programminglanguages by /u/mttd on 2024-03-25 00:11:51.

21
 
 

The original was posted on /r/programminglanguages by /u/perecastor on 2024-03-23 18:33:36.


I find it incredibly strange how popular languages keep errors from the past in their specs just to spare their users a simple search-and-replace across their code base …

22
 
 

The original was posted on /r/programminglanguages by /u/Cloundx01 on 2024-03-23 07:14:49.


Users could use the AST in code editors for syntax coloring, building a symbol-outline view, and helping with autocompletion.

Why do we have to use separate parsers, like LSP servers, ctags, or tree-sitter, which are inaccurate or resource-intensive?

In fact, I'm not really sure if there are even any languages that do that, but I think it should be the norm for language designers.
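One existing data point: Python ships its own parser as a standard-library module, and that is enough to build a symbol outline of the kind described, using the same grammar the interpreter itself uses:

```python
import ast

source = """
class Point:
    def norm(self): ...

def main(): ...
"""

tree = ast.parse(source)

# Collect (kind, name, line) for every class and function definition.
outline = [(type(node).__name__, node.name, node.lineno)
           for node in ast.walk(tree)
           if isinstance(node, (ast.ClassDef, ast.FunctionDef))]
print(outline)
```

Other examples exist (Roslyn for C#, rustc's libraries via rust-analyzer's ecosystem), though editors often still prefer error-tolerant parsers like tree-sitter because official parsers tend to give up on syntactically broken in-progress code.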

23
 
 

The original was posted on /r/programminglanguages by /u/ThyringerBratwurst on 2024-03-22 03:58:37.


Originally, I planned custom operators like in Haskell, even prefix and postfix variants. But from a lexical point of view it is a horror to take all of this into account and then distinguish it from conventional punctuation. I also find the many bizarre operators in some Haskell libraries annoying, and I think operators as special symbols are too syntactic and should be developed more carefully within the overall picture of the language design. That's why I'm thinking about handling it like Python, where you can't define operators yourself but can reimplement the existing ones; or, as in Rust, making them easily overloadable via traits (corresponding to type classes in Haskell).
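For concreteness, the Python-style model described above looks like this in practice: a fixed operator set, re-implementable per type through special methods (much as Rust routes the fixed set through traits like Add). Vec2 is an invented example type:

```python
class Vec2:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __add__(self, other):   # reimplements the existing `+` for Vec2
        return Vec2(self.x + other.x, self.y + other.y)
    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)

assert Vec2(1, 2) + Vec2(3, 4) == Vec2(4, 6)
# There is no way to introduce a brand-new operator symbol, which is
# precisely the lexical simplification being considered.
```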

What do you think?

24
 
 

The original was posted on /r/programminglanguages by /u/lpil on 2024-03-21 13:13:16.

25
 
 

The original was posted on /r/programminglanguages by /u/XDracam on 2024-03-20 17:18:27.


First off: I know prolog and derivative languages. I am not looking for a query language. I also know of proof languages like Idris, Agda, Coq and F*, although to a lesser extent. I don't want to compute things, I just want static validation. If there are IDEs with great validating tooling for any of those languages, then feel free to tell me.

I've recently been writing a lot of mathematical logic, mostly set theory and predicate logic. In TeX of course. It's nice, but I keep making stupid errors. Like using a set when I'd need to use an element of that set instead. Or I change a statement and then other statements become invalid. This is annoying, and a solved problem in strongly typed programming languages.

What I am looking for is:

  • an IDE or something similar that lets me write set theory and predicate logic, or something equivalent
  • it should validate the "types" of my expressions, or at least detect when an object is used both as a set and as an element of that same set
  • it should also validate notation, or the syntax of my statements
  • and it should find logical contradictions and inconsistencies between my statements

I basically want the IntelliJ experience, but for maths.

Do you know of anything like this? Or know of any other subreddits where I could ask this? If there's nothing out there, then I might start this as a personal project.
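Proof assistants come close to this wish list even without asking them to compute anything. As one illustration (assuming Lean with Mathlib for Set and the ℕ notation; the example is mine, not from the post), the set-versus-element confusion becomes a static type error:

```lean
import Mathlib.Data.Set.Basic

-- `x ∈ S` typechecks only when `x : ℕ` and `S : Set ℕ`, so using a
-- set where an element is required is rejected before any proof work.
example (S : Set ℕ) (x : ℕ) (h : x ∈ S) : x ∈ S := h

-- By contrast, writing `S ∈ S` here would not even elaborate:
-- `S : Set ℕ`, but the left side of `∈` must be a `ℕ`.
```

Checking notation and well-typedness this way needs no proofs at all; detecting genuine logical contradictions between statements is the part that still requires actual proving (or a model finder).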
