• 0 Posts
  • 6 Comments
Joined 11 months ago
Cake day: September 20th, 2023

  • That’s horrible for muscle memory, every time I switch desk/keyboard I have to re-learn the position of the home/end/delete/PgUp/PgDn keys.

    I got used to Ctrl-a / Ctrl-e and it became second nature; my hands don’t have to fish for extra keys, to the point that it’s annoying when a program doesn’t support them. Some programs map Ctrl-a to “Select all”, but in input fields where the selection is a single line I’d still rather press Ctrl-a and then Left/Right to get to the beginning/end than fish for Home/End, wherever they are.
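
    For what it’s worth, those shortcuts come from readline’s default emacs keymap, and you can also make Home/End behave consistently there. A hypothetical ~/.inputrc sketch (the escape sequences vary per terminal, so treat these as illustrative):

    ```
    # ~/.inputrc — readline configuration (illustrative example)
    set editing-mode emacs        # keeps Ctrl-a / Ctrl-e for beginning/end of line
    "\e[H": beginning-of-line     # Home (if your terminal sends this sequence)
    "\e[F": end-of-line           # End
    ```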


  • That quote was in the context of simply separating values with newlines (and the list also included “your language’s split or lines function”).

    Technically you don’t even need awk/sed/fzf, just a loop in bash doing read would allow you to parse the input one line at a time.

    while IFS= read -r line; do
       echo "$line" # or whatever other operation
    done < whateverfile
    

    Also, those manpages are a lot less complex than the documentation for C# or Nushell (or bash itself), although maybe working with C#/nushell/bash is “easy when you’re already intuitively familiar with them”. I think the point was precisely the fact that doing that is easy in many different contexts because it’s a relatively simple way to separate values.
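
    For what it’s worth, bash even has a built-in equivalent of a “lines” function, so you don’t always need the explicit loop; a small sketch (the sample strings are made up):

    ```shell
    # mapfile (a.k.a. readarray) splits its input on newlines into an array,
    # no explicit loop needed
    mapfile -t items <<< $'apple\nbanana\ncherry'
    echo "${#items[@]}"   # 3
    echo "${items[1]}"    # banana
    ```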


  • For the record, you mention “the limitations of the number of inodes in Unix-like systems”, but that’s not a limit of Unix itself; it’s a limit of the filesystem format (and it applies to Windows and other systems just the same).

    So it depends more on the filesystem than on the OS. A FAT32 directory can hold at most 65,535 entries (2^16 − 1), while both ext4 and NTFS can have up to 4,294,967,295 files (2^32 − 1). With Btrfs the limit jumps to 2^64 − 1, i.e. 18,446,744,073,709,551,615.
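
    Those maxima are just the width of the on-disk inode/entry counter (2^n − 1 for an n-bit field), which you can sanity-check in the shell:

    ```shell
    # n-bit counters top out at 2^n - 1:
    echo $(( (1 << 16) - 1 ))   # 65535       (FAT32 directory entries)
    echo $(( (1 << 32) - 1 ))   # 4294967295  (ext4 / NTFS)
    ```

    On a live Linux system, `df -i` shows the actual inode total and usage for each mounted filesystem.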




  • What C does depends on the platform, the CPU, etc.

    If the result actually differs because compilers behave differently on different architectures, then what we can say is that the language/code is less portable. But I don’t think this implies there are no denotational semantics.

    And if the end result doesn’t really differ (despite executing different instructions on different architectures) then… well, aren’t compilers for all languages (including Rust) supposed to emit different instructions on different architectures, as appropriate, to produce the same result?

    “who’s to say what are the denotational semantics? Right? What is a ‘function’ in C? Well most C compilers translate it to an Assembly subroutine, but what if our target does not support labels, or subroutines?”

    Maybe I’m misunderstanding here, but my impression was that interpreting “what a function is in C” by looking at the instructions the compiler translates it into is more in line with an operational interpretation (you end up looking at the sequential steps the machine executes one after the other), not a denotational one.

    For a denotational interpretation of the meaning of that expression, shouldn’t you look at the inputs/outputs of the “factorial” operation to understand its mathematical meaning? The denotational semantics should be the same in all cases if they are all denotationally equivalent (i.e. referentially transparent), even if they might not be operationally equivalent.
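
    To illustrate with a sketch (in bash, since that’s what came up above): two implementations that execute very different steps can still share one denotation, i.e. the same input→output mapping:

    ```shell
    # Two factorials: a loop and a recursion. Operationally different
    # (different instructions, different call patterns), but both denote
    # the same mathematical function n -> n!.
    fact_loop() {
       local n=$1 r=1
       while (( n > 1 )); do r=$(( r * n-- )); done
       echo "$r"
    }
    fact_rec() {
       if (( $1 <= 1 )); then echo 1
       else echo $(( $1 * $(fact_rec $(( $1 - 1 ))) ))
       fi
    }
    fact_loop 5   # 120
    fact_rec 5    # 120
    ```

    A compiler is free to lower each one to whatever instructions the target supports; the denotation (5 ↦ 120) is unchanged either way.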