• spoonbill@programming.dev
    2 days ago

    How does HATEOAS deal with endpoints that take arguments? E.g. I have an endpoint that merges the currently viewed resource with another one. Does it require a new (argumentless) endpoint showing a form where one can enter the second resource? Wouldn’t it be quite inefficient if you now have to do two (or more) requests instead of just one?
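    For what it’s worth, the usual HATEOAS answer is that the form for supplying the argument is embedded in the representation of the resource you are already viewing, so no extra discovery request is needed. A minimal sketch (the merge endpoint and all names are made up for illustration):

    ```python
    # Sketch of the HATEOAS answer to argument-taking endpoints: the "merge"
    # form ships inside the representation of the resource you already have,
    # so entering the second resource costs no additional request.
    # All paths and field names here are invented.

    def render_account(account_id: str) -> str:
        """Return an HTML representation that embeds the available actions."""
        return f"""
        <div id="account-{account_id}">
          <h1>Account {account_id}</h1>
          <!-- the 'merge' affordance arrives with the resource itself -->
          <form action="/accounts/{account_id}/merge" method="post">
            <label>Merge with: <input name="other_account_id"></label>
            <button type="submit">Merge</button>
          </form>
        </div>
        """
    ```

    The client then does one GET (which already contains the form) and one POST to perform the merge, rather than a separate request just to learn how to merge.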

  • 7EP6vuI@feddit.org
    3 days ago

    Why is he not mentioning Restful Objects? This is exactly what he describes, just encapsulated in JSON instead of HTML, and in a way that can actually be consumed automatically, since there are guidelines for how to structure the documents.

    We use it at work, and I don’t like it. It’s overly complicated and adds a lot of overhead (at least in the way we implemented it). A simple HTTP+JSON RPC with a good URL structure and OpenAPI documentation would be easier to understand and to consume.
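    To illustrate the overhead complaint: below is the same record as a plain JSON payload versus a Restful Objects-style envelope. The envelope is only loosely modeled on the spec’s shape, not an exact reproduction:

    ```python
    # Rough illustration of the envelope overhead: the same data as plain
    # JSON vs. a Restful Objects-style wrapper. The wrapper shape below is
    # loosely modeled on the spec, not spec-exact.

    plain = {"name": "Alice", "balance": 100}

    envelope = {
        "instanceId": "1",
        "links": [
            {"rel": "self", "href": "/objects/customer/1",
             "method": "GET", "type": "application/json"},
        ],
        "members": {
            "name": {"memberType": "property", "value": "Alice"},
            "balance": {"memberType": "property", "value": 100},
        },
    }

    # A consumer of the envelope has to unwrap every member to reach the data:
    unwrapped = {k: v["value"] for k, v in envelope["members"].items()}
    ```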

    • Kissaki@programming.devOP
      3 days ago

      Doesn’t help that it’s a multi-page document

      Persistent domain entity, Proto-persistent domain entity, View model, …

      What the heck… Yeah, I wouldn’t want to use that either. While it may be a formalization, it seems like it would significantly increase complexity and overhead. That can’t be worth it unless it’s a huge enterprise system that has to work with generalized object types across teams or something.

      I hadn’t heard of Restful Objects before.

  • The Bard in Green@lemmy.starlightkel.xyz
    3 days ago

    As a security and DevOps engineer, HTMX has been such a pain in my butt lately.

    Something are broken? -> Web devs blame WAF -> Me debugs and researches for hours when I has better stuff to do -> Finally me: WAF is fine. Is your broken JavaScript. Wut do? -> Web devs: Not know, write in HTMX, JS is abstracted, now we fix. -> 15 minutes later web devs: We fix! We do basic thing wrong! Now learn something new about HTMX. -> Me: Great. Thanks so much for that.

    • Kissaki@programming.devOP
      3 days ago

      I didn’t quite follow.

      They’re using htmx, making errors, and learning something new about using it?

      That’s like using any new tech though, right? Or - depending on the devs - happens even with established tech.

      I’ve never seen htmx in production. I find it interesting though and want to explore using it. That won’t be at work though. :)

  • spoonbill@programming.dev
    3 days ago

    I really struggle to see where HATEOAS can be used. Obviously not for machine to machine uses as others have pointed out. But even for humans it would lead to terrible interfaces.

    If the state of the resource changes such that the allowable actions available on that resource change (for example, if the account goes into overdraft) then the HTML response would change to show the new set of actions available.

    So if I’m in overdraft, some actions are not available? Which means they are not shown at all? How can a user easily know that there are things they could do, if it wasn’t for the fact that they are in a specific state? Instead of having disabled buttons and menus, with help text explaining why they are not usable, we just hide them? That can’t be right, can it? So how do we actually deliver a usable UX using HATEOAS?
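    Nothing in the idea forces hiding, though: the server can still include the affordance, just marked as disabled with a reason, and the UI renders it greyed out with help text. A sketch with invented field names:

    ```python
    # Sketch: a hypermedia response can carry currently-unavailable actions
    # as disabled controls with an explanation, instead of omitting them.
    # The field names ("enabled", "reason") are invented for illustration.

    def actions_for(account: dict) -> list[dict]:
        overdrawn = account["balance"] < 0
        return [
            {"name": "deposit",
             "href": f"/accounts/{account['id']}/deposit",
             "enabled": True, "reason": None},
            {"name": "withdraw",
             "href": f"/accounts/{account['id']}/withdraw",
             "enabled": not overdrawn,
             "reason": ("Unavailable while the account is overdrawn"
                        if overdrawn else None)},
        ]
    ```

    The state-dependent part is still driven entirely by the server’s response; the client just renders whatever controls (enabled or not) it is handed.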

    Or is it just meant for “exploration”, and real clients would not rely on the returned links? But how is that better than actual docs telling you the same but much more clearly?

  • arendjr@programming.dev
    3 days ago

    Opinionated summary: Developers saw REST, picked the good parts and ignored the rest (no pun intended). They still called it REST, for lack of a better word, even though things like HATEOAS were overkill for most of the applications.

  • Fred@programming.dev
    3 days ago

    Maybe I’m wildly misunderstanding something, not helped by the fact that I work very little with Web technologies, but…

    So, in a RESTful system, you should be able to enter the system through a single URL and, from that point on, all navigation and actions taken within the system should be entirely provided through self-describing hypermedia: through links and forms in HTML, for example. Beyond the entry point, in a proper RESTful system, the API client shouldn’t need any additional information about your API.

    This is the source of the incredible flexibility of RESTful systems: since all responses are self-describing and encode all the currently available actions, there is no need to worry about, for example, versioning your API! In fact, you don’t even need to document it!

    If things change, the hypermedia responses change, and that’s it.

    It’s an incredibly flexible and innovative concept for building distributed systems.

    Does that mean only humans can interact with a REST system? But then it doesn’t really deserve the qualifier of “application programming interface”.
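    A machine client is possible in principle; it just has to be written against the hypermedia vocabulary (link relations) rather than against hardcoded URLs. A toy sketch, with an in-memory dict standing in for HTTP responses:

    ```python
    # Toy sketch of a non-human REST client: it enters at one URL and
    # navigates purely by link relations, never hardcoding any other path.
    # The RESPONSES dict stands in for a server; all URLs are made up.

    RESPONSES = {
        "/": {"links": {"orders": "/orders"}},
        "/orders": {"links": {"latest": "/orders/7"}},
        "/orders/7": {"status": "shipped", "links": {}},
    }

    def follow(entry: str, *rels: str) -> dict:
        """Start at the entry point and follow the given link relations."""
        doc = RESPONSES[entry]
        for rel in rels:
            doc = RESPONSES[doc["links"][rel]]
        return doc

    # If the server moves /orders/7, only the link values change; this
    # client keeps working because it only knows the relation names.
    ```

    The catch, as others in the thread point out, is that the client still needs out-of-band agreement on what the relation names and payload fields mean, which is why in practice this rarely replaces documentation.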

    • ramble81@lemm.ee
      3 days ago

      It feels like he’s trying to say something like Swagger should always be required. One of the things about SOAP, for example, was that it always had a self-generating WSDL you could consume to get everything. Quite a few REST endpoints were missing this when first developed.

      But I do agree that “forms” and “html” are quite the opposite of an API.
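      The WSDL analogy in JSON land would be an OpenAPI document: a machine-readable contract published by the service that tools can consume to enumerate the operations. A minimal sketch as a Python dict (the path and summaries are a made-up example, not a complete OpenAPI document):

      ```python
      # Minimal sketch of the "Swagger as the WSDL of REST" point: a
      # machine-readable contract that a consumer can walk to list the
      # operations. The path and summaries are invented for illustration.

      openapi = {
          "openapi": "3.0.0",
          "info": {"title": "Example API", "version": "1.0.0"},
          "paths": {
              "/accounts/{id}": {
                  "get": {"summary": "Fetch one account"},
                  "delete": {"summary": "Close an account"},
              },
          },
      }

      def operations(doc: dict) -> list[str]:
          """List 'METHOD path' pairs, as a WSDL-style consumer would."""
          return [f"{method.upper()} {path}"
                  for path, ops in doc["paths"].items()
                  for method in ops]
      ```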