• superguy@lemm.ee · 11 months ago

      Yeah. I’m really annoyed by this trend of programs that could function offline being made to require a connection to a server.

    • boonhet@lemm.ee · 11 months ago (edited)

      Imagine a standardized API for an optional AI assistant that you could easily disable, where you plug in either your own LLM running locally, your own LLM running on your server (for enthusiasts or companies), or a 3rd-party LLM service over the Internet.

      Regardless of your DE, you could choose if you want an AI assistant and where you want the model to run.
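
      Something close to this already half-exists in practice: local runtimes like llama.cpp's server and Ollama expose an OpenAI-compatible HTTP endpoint, so "where the model runs" can reduce to a base URL in settings. A minimal sketch of the idea in Python; the AssistantConfig name and its fields are hypothetical, but the request/response shape matches the de facto OpenAI-compatible chat API:

      ```python
      import json
      import urllib.request
      from dataclasses import dataclass

      @dataclass
      class AssistantConfig:
          """Hypothetical DE setting: one on/off switch, one endpoint URL."""
          enabled: bool = False  # assistant is opt-in and trivially disabled
          # Endpoint may be local (llama.cpp/Ollama), a box on your LAN,
          # or a hosted 3rd-party service; all speak the same protocol.
          endpoint: str = "http://localhost:8080/v1/chat/completions"
          model: str = "my-local-model"
          api_key: str | None = None  # only hosted services need this

      def ask(cfg: AssistantConfig, prompt: str) -> str:
          if not cfg.enabled:
              raise RuntimeError("AI assistant is disabled in settings")
          body = json.dumps({
              "model": cfg.model,
              "messages": [{"role": "user", "content": prompt}],
          }).encode()
          headers = {"Content-Type": "application/json"}
          if cfg.api_key:
              headers["Authorization"] = f"Bearer {cfg.api_key}"
          req = urllib.request.Request(cfg.endpoint, data=body, headers=headers)
          with urllib.request.urlopen(req) as resp:
              return json.load(resp)["choices"][0]["message"]["content"]
      ```

      Switching from a local model to a self-hosted or 3rd-party one is then just a settings change, not a different program.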