TL;DW:

  • FSR 3 is frame generation, similar to DLSS 3. It can greatly increase FPS, typically by 2-3x.

  • FSR 3 can run on any GPU, including the ones in consoles. They made a point about how it would be dumb to limit it to only the newest generation of cards.

  • Every DX11 & DX12 game can take advantage of this tech via HYPR-RX, which is AMD’s software for boosting frames and decreasing latency.

  • Games will start using it by early fall, and the public launch will be by Q1 2024.

It remains to be seen how good or noticeable FSR 3 will be, but if it actually runs well, I think we can expect tons of games (especially on console) to make use of it.
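
To put the 2-3x claim into concrete numbers, here’s a rough sketch (illustrative figures of my own, not AMD’s) of what frame generation does to displayed frame rate when interpolated frames are inserted between rendered frames:

```python
# Rough frame generation arithmetic (illustrative numbers only).
# With interpolation, one generated frame is inserted between each pair of
# rendered frames, so the displayed frame rate roughly doubles while the
# game still only renders (and samples input) at the base rate.

def frame_gen_estimate(base_fps, generated_per_rendered=1):
    displayed_fps = base_fps * (1 + generated_per_rendered)
    return {
        "rendered_fps": base_fps,
        "displayed_fps": displayed_fps,
        "rendered_frame_time_ms": round(1000 / base_fps, 1),
        "displayed_frame_time_ms": round(1000 / displayed_fps, 1),
    }

for fps in (45, 60, 90):
    print(frame_gen_estimate(fps))
# e.g. 60 rendered FPS -> 120 displayed FPS (16.7 ms vs 8.3 ms per frame)
```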

  • kadu@lemmy.world · 1 year ago

    The reason people care more about Reflex is that in every single game tested, Reflex is significantly ahead, to the point that the difference in latency actually matters rather than being merely theoretical.

      • kadu@lemmy.world · 1 year ago

        They give a measured latency improvement, but it’s so small it isn’t enough to affect the perceived input latency, which means yes… the feature is there but… that’s about it, really. Doesn’t really matter.

        Reflex is famous in eSports precisely because it actually improves latency just enough to be advantageous in the real world.

        eSports players do not care about brand affiliation, style, software - if it works, they’ll use it.

        EDIT: Though I have to say, I’m talking about Nvidia Reflex vs AMD Anti-Lag. I don’t know Intel’s equivalent, so I can’t say much about it, other than that playing eSports and never hearing about it isn’t a good sign…

        • Dudewitbow@lemmy.ml · 1 year ago

          I’m not saying Reflex is bad or that it isn’t used by eSports pros. It’s just that “theoretical” isn’t the best choice of word for the situation, because Anti-Lag does make a difference; it’s just much harder to detect. It’s similar to comparing latency at framerates that are close but not identical, or the experience of using refresh rates that are close to each other: especially at the high end, framerate stops being the dominant input-latency factor and you become bottlenecked by screen characteristics instead (which is why OLEDs are better than traditional IPS, but can be beaten by high-refresh-rate IPS/TN with BFI).
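
          To make the diminishing-returns point concrete, here’s a quick sketch (my own illustrative arithmetic, not from any benchmark) of how little frame time changes between already-high refresh rates, which is why screen characteristics start to dominate:

          ```python
          # Frame time shrinks non-linearly with refresh rate, so the latency gap
          # between two already-high refresh rates is tiny compared to the gap at
          # the low end. Illustrative arithmetic only.

          def frame_time_ms(hz):
              return 1000.0 / hz

          for low, high in [(60, 120), (144, 240), (240, 360)]:
              delta = frame_time_ms(low) - frame_time_ms(high)
              print(f"{low} Hz -> {high} Hz saves {delta:.2f} ms per frame")

          # 60 -> 120 Hz saves ~8.3 ms, but 240 -> 360 Hz saves only ~1.4 ms,
          # roughly the scale of a panel's pixel response time, so the display
          # itself becomes the bottleneck.
          ```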

          Regardless, the point is less about the tech and more about the idea that AMD doesn’t innovate. It does, but it takes longer for people to see it, because they either choose not to use a specific feature or are completely unaware of it, whether because they don’t use AMD or because they have a fixed channel where they get their news.

          • kadu@lemmy.world · 1 year ago

            But I don’t think you’ve chosen the best example here. AMD’s “innovation” of reorganizing the display pipeline to reduce latency is nothing compared to being left absolutely behind for 3 years by not making GPUs capable of accelerating AI models in real time.

            Using just this core concept, Nvidia is delivering a lot of features that have very real effects - AMD is trying to catch up by using the good old shader units and waaaay less effective algorithms.

            • Dudewitbow@lemmy.ml · 1 year ago

              Because AMD’s GPU division is a much smaller division in an overall larger company. They physically can’t push out as many features because of that, and when they decide to make a drastic change to their hardware, it’s rarely noticed until it’s considered old news. Take Maxwell and Pascal, for example. You don’t see a performance loss at the start, because games are designed for the hardware of the time, in particular whatever’s the most popular.

              Maxwell and Pascal had a notable trait that allowed lower power consumption: the lack of a hardware scheduler, as Nvidia moved scheduling onto the driver. This gave Nvidia more manual control of the GPU pipeline and let their GPUs handle smaller pipelines better, compared to AMD, whose hardware-based scheduler had multiple pipelines that an application needed to use properly to maximize performance. It led to Maxwell/Pascal cards having better performance… until it didn’t, as devs started to thread games better, and what used to be a good change for power consumption evolved into a CPU overhead problem (something Nvidia still has to this day relative to AMD). AMD’s innovations tend to be more on the hardware side of things, which makes them pretty hard to market.
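
              As a toy model of that overhead point (the per-call costs below are made up purely to show the shape of the problem, not measured): when scheduling work lives in the driver on the CPU, its cost scales with draw calls and competes with game logic, whereas a hardware scheduler keeps most of that work off the CPU:

              ```python
              # Toy model of CPU frame time with driver-side vs hardware scheduling.
              # Costs are invented; only the trend matters.

              def cpu_frame_time_ms(draw_calls, game_logic_ms, driver_us_per_call):
                  return game_logic_ms + draw_calls * driver_us_per_call / 1000.0

              for draw_calls in (1_000, 5_000, 20_000):
                  sw = cpu_frame_time_ms(draw_calls, game_logic_ms=6.0, driver_us_per_call=0.5)
                  hw = cpu_frame_time_ms(draw_calls, game_logic_ms=6.0, driver_us_per_call=0.1)
                  print(f"{draw_calls:>6} draw calls: driver-scheduled {sw:.1f} ms, "
                        f"hardware-scheduled {hw:.1f} ms CPU frame time")

              # When the game is CPU-bound, that extra per-call driver work caps the
              # framerate; when it is GPU-bound, it mostly goes unnoticed.
              ```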

              It was like AMD’s marketing for Smart Access Memory (again, a feature AMD got to first, and to this day it works slightly better on AMD systems than on others). It was a feature that was hard to market because there isn’t much of a wow factor to it, but it is an innovation.