
Left to Right Programming

Technology
  • People don’t, in fact, read code from top to bottom, left to right

    100% this.

    This false premise is also why a few (objectively wrong) people defend writing long essays: functions with hundreds of lines and files with thousands, saying "then you don't have to go back and forth to read it", when in fact no one should be reading it like a novel in the first place.

    Once you get used to list and dict comprehensions, they read just fine. The functional approach isn't really that readable for a newcomer either.

    The blog post wasn't about reading, but about writing. And people usually do write top-to-bottom, left-to-right.

    The whole point of the blog post was to write code that the IDE can help you with when writing. It didn't go into readability even once.

  • This post did not contain any content.

    I'll agree that list comprehensions can be a bit annoying to write because your IDE can't help you until the basic loop is done, but you solve that by just doing [thing for thing in things] and then add whatever conditions and attr access/function calls you need.
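    The incremental approach described above can be sketched step by step (hypothetical names, just for illustration):

    ```python
    things = ["alpha", "beta", "gamma"]

    # Step 1: the bare skeleton -- already valid syntax, so the IDE
    # can infer the type of `thing` from `things` right away.
    result = [thing for thing in things]

    # Step 2: add the attribute access / function calls.
    result = [thing.upper() for thing in things]

    # Step 3: add the condition last.
    result = [thing.upper() for thing in things if thing.startswith("a")]
    ```

    At every step the expression parses, so autocomplete and type checking keep working while you refine it.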

  • Did we read the same blog post?

    Not a single time did the OP talk about readability. That was not a point at all, so I don't know why you are all about readability.

    It was all about having a language that the IDE can help you write in because it knows what you are talking about from the beginning of the line.

    The issue with the horrible one-liner (and with your nicely split-up version) is that the IDE has no idea what object you are talking about until the second-to-last non-whitespace character. The only thing it can autocomplete is "diffs". Until you've typed that word, it has no idea whether sum(), all(), abs(), <, >, or for-in even exist for the data type you are using.

    If you did the same in Java, you'd start with diffs and from then on the IDE knows what you are talking about, can help you with suggesting functions/methods, can highlight typos and so on.

    That was the whole point of the blog post.

    I dunno, did we?

    Screenshot from the post

    I think Rust's iterator chains are nice, and IDE auto-complete is part of that niceness. But comprehension expressions read very naturally to me, more so than iterator chains.

    I mean, how many Python programmers don't even type hint their code, and so won't get (accurate) auto-complete anyway? Auto-completion is nice, but it's just not the be-all and end-all.
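    To make the autocomplete point above concrete, here is a hypothetical stand-in for the post's one-liner (the `Chain` class is invented for illustration, not from the post):

    ```python
    diffs = [1, -3, 2, -1]

    # Expression style: `diffs` appears last, so while typing everything
    # before it the IDE has no concrete type to complete against.
    ok = all(abs(d) < 5 for d in diffs)

    # Left-to-right sketch: starting from the data, each dot gives the
    # IDE a known type whose methods it can suggest, Java-style.
    class Chain:
        def __init__(self, items):
            self.items = list(items)

        def map(self, f):
            return Chain(f(x) for x in self.items)

        def all(self, pred):
            return all(pred(x) for x in self.items)

    ok_ltr = Chain(diffs).map(abs).all(lambda d: d < 5)
    ```

    Both forms compute the same thing; the difference is only in what the IDE knows while you are still mid-line.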

  • The blog post wasn't about reading, but about writing. And people usually do write top-to-bottom, left-to-right.

    The whole point of the blog post was to write code that the IDE can help you with when writing. It didn't go into readability even once.

    the last section before the conclusion only mentions readability

    The argument is not silly; it totally makes sense, and your point even proves it.

    A lot of libraries use module-level globals, and if you use from-imports (especially from X import *) you get exactly that issue.

    Yes, many more modern APIs use an object-oriented approach, which is left-to-right, and that's exactly what the OP is arguing for. If you notice, he didn't end the post with "Make good languages" but with "Make good APIs". He highlights a common problem using well-known examples and generalizes it to all APIs.

    The author knows full well that this blog post will not cause Python to drop list comprehension syntax or built-in functions. What he's trying to do is get people to avoid non-LTR approaches when designing APIs. All the points he made are correct, and many are even more pressing in other languages.

    For example, for a hobby project of mine I have to use C/C++ (microcontrollers), and this problem is huge in C libraries. Every function is just dumped into the global namespace and there's no easy way to find the right one. Often I have to search Google for external documentation or open up a project's header files to find a function that does what I want, instead of just being able to follow the IDE's autocomplete on an object.

    And sure, if I knew every library and framework I use inside out and had memorized all the functions, methods, objects, variables and fields, then it would be easy; but unless you've worked 30 years at a bank maintaining the same old COBOL script, that's not going to happen.

    from X import *

    That's malpractice in most cases, and thankfully, becoming more rare to find in the wild. Any decent linter will shout at you for using star imports.

    What he’s trying to do is get people to avoid non-LTR approaches when designing APIs.

    Then he should have picked examples of APIs that break this, not the built-in functions. Because as it reads now, it seems he is just against established conventions out of purism.

    this problem is huge in C libraries

    yeah, one of my favorite things about Python is that everything not in the language itself is either defined in the file or explicitly imported. Unless, like mentioned, you have anti-patterns like star imports and scripts messing with globals().
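    The star-import hazard mentioned above can be shown in a few lines (a minimal sketch; the local `join` helper is made up):

    ```python
    # Our own helper, defined first.
    def join(parts):
        return "-".join(parts)

    # Star import at module level: os.path lists `join` in its
    # __all__, so our helper is silently shadowed from here on.
    from os.path import *  # noqa: F403

    joined = join("a", "b")  # this is now os.path.join, not our helper
    ```

    No error, no warning at runtime: the old one-argument helper is simply gone, which is exactly why linters flag star imports.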

  • This post did not contain any content.

    I'm kinda surprised that pretty much nobody who commented here seems to have understood the point of the post.

    It wasn't about readability at all.

    It was about designing APIs that the IDE can help you with.

    With RTL syntax the IDE doesn't know what you are talking about until the end of the line, because the most important thing, the root object, the main context, comes last. So you write out your full statement, and the IDE has no idea what you are on about until the very end.

    Take a procedural-style statement:

    len(str(myvar))

    When you type it out, the IDE has no idea what you want to do, so it begins suggesting everything in the global namespace starting with l, and when you finish writing len(, all it can do is point out a syntax error for the rest of the line. Rinse and repeat for str and myvar.

    Object-oriented, the IDE can help out much more:

    myvar.tostring().length()

    With each dot the IDE knows which methods you could mean, the autocomplete is much more focused, and after each () there are no open syntax errors, so the IDE can verify that what you did was correct. And if you have a typo or reference a non-existent method, it can show you that instantly, instead of having to wait until the end of the whole thing.
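    Since `myvar.tostring().length()` isn't actual Python, here is a runnable sketch of the same idea, using a made-up `Fluent` wrapper around the procedural `len(str(...))`:

    ```python
    class Fluent:
        """Hypothetical wrapper that turns len(str(x)) into a left-to-right chain."""

        def __init__(self, value):
            self.value = value

        def tostring(self) -> "Fluent":
            # Analogous to str(...): after typing the dot, the IDE can
            # already list this method and verify the call.
            return Fluent(str(self.value))

        def length(self) -> int:
            return len(self.value)

    myvar = 12345
    n_procedural = len(str(myvar))
    n_fluent = Fluent(myvar).tostring().length()
    ```

    Both give the same result; the chained version just hands the IDE a concrete type at every dot.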

  • I dunno, did we?

    Screenshot from the post

    I think Rust's iterator chains are nice, and IDE auto-complete is part of that niceness. But comprehension expressions read very naturally to me, more so than iterator chains.

    I mean, how many Python programmers don't even type hint their code, and so won't get (accurate) auto-complete anyway? Auto-completion is nice, but it's just not the be-all and end-all.

    Fair, I missed one word. You missed the whole blog post.

    There's a big difference between writing code and writing APIs, tbh. If you write crap code, that's your problem. If you write crap APIs, it's a problem for everyone using your API.
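    On the type-hint point quoted above: annotations are what give the IDE something to complete against in the first place (hypothetical helper, for illustration):

    ```python
    # With the annotation, the IDE knows `w` is a str inside the
    # comprehension, so .upper(), .strip(), etc. autocomplete; without
    # it, `w` is opaque and completion falls back to guessing.
    def shout(words: list[str]) -> list[str]:
        return [w.upper() for w in words]

    shouted = shout(["hi", "there"])
    ```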

  • the last section before the conclusion only mentions readability

    What about all the other sections?

  • from X import *

    That's malpractice in most cases, and thankfully, becoming more rare to find in the wild. Any decent linter will shout at you for using star imports.

    What he’s trying to do is get people to avoid non-LTR approaches when designing APIs.

    Then he should have picked examples of APIs that break this, not the built-in functions. Because as it reads now, it seems he is just against established conventions out of purism.

    this problem is huge in C libraries

    yeah, one of my favorite things about Python is that everything not in the language itself is either defined in the file or explicitly imported. Unless, like mentioned, you have anti-patterns like star imports and scripts messing with globals().

    He's using simple examples that everyone knows and understands instantly. It's like using a minimal test case to report a bug. In most cases a minimal test case is also nonsensical on its own, but it's used to show an issue that occurred in a more complex context without overloading the reader with useless garbage info that doesn't contribute to the point at hand.

  • Fair, I missed one word. You missed the whole blog post.

    There's a big difference between writing code and writing APIs, tbh. If you write crap code, that's your problem. If you write crap APIs, it's a problem for everyone using your API.

    The blog post is really about language design, because you definitely should not write a filter method for your custom iterable class in Python; you should make it use the language's interfaces for "being an iterable". Language design involves the APIs offered by the language, but it isn't really the purview of most people who write APIs.

    If a suggestion on language design would gain something at the cost of readability, everyone should be very skeptical of it.

    Those things together explain why I am evaluating the post mostly in terms of readability.
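    The point above about using the language's interfaces rather than a custom filter method can be sketched like this (the `Basket` class is made up):

    ```python
    class Basket:
        # Implementing __iter__ makes Basket a normal iterable, so
        # comprehensions and built-ins work on it directly -- no
        # bespoke .filter()/.map() methods needed.
        def __init__(self, items):
            self._items = list(items)

        def __iter__(self):
            return iter(self._items)

    b = Basket([1, 2, 3, 4])
    evens = [x for x in b if x % 2 == 0]
    total = sum(b)
    ```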
