TL;DW:

  • FSR 3 is frame generation, similar to DLSS 3. It can greatly increase FPS, by roughly 2-3x.

  • FSR 3 can run on any GPU, including those in consoles. They made a point about how it would be dumb to limit it to only the newest generation of cards.

  • Every DX11 & DX12 game can take advantage of this tech via HYPR-RX, which is AMD’s software for boosting frames and decreasing latency.

  • Games will start using it by early fall; the public launch will come by Q1 2024.

It remains to be seen how good or noticeable FSR 3 will be, but if it actually runs well, I think we can expect tons of games (especially on consoles) to make use of it.
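
For context on how frame generation works at a high level, here's a rough sketch of the interpolation idea in Python. It's purely illustrative and not AMD's or Nvidia's actual algorithm (real implementations use motion vectors / optical flow rather than a naive blend):

```python
import numpy as np

def interpolate_frame(prev_frame: np.ndarray, next_frame: np.ndarray, t: float = 0.5) -> np.ndarray:
    """Naive linear blend between two frames; real frame generation uses
    motion vectors / optical flow to avoid ghosting artifacts."""
    return (1.0 - t) * prev_frame + t * next_frame

def present_with_generation(rendered_frames):
    """Insert one generated frame between each pair of rendered frames,
    turning N rendered frames into roughly 2N presented frames."""
    prev = None
    for frame in rendered_frames:
        if prev is not None:
            yield interpolate_frame(prev, frame)  # generated ("fake") frame
        yield frame                               # real rendered frame
        prev = frame
```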

      • Dudewitbow@lemmy.ml

        AMD has had features in years past before Nvidia did; it’s just that fewer people paid attention to them until they became a hot topic after Nvidia implemented them.

        An example was anti-lag, which AMD and Intel implemented before Nvidia:

        https://www.pcgamesn.com/nvidia/geforce-driver-low-latency-integer-scaling

        But people didn’t care about it until Nvidia’s Ultra Low Latency (ULL) mode turned into Reflex.

        AMD still holds onto Radeon Chill, which basically keeps the GPU running slower when idling in game, when not a lot is happening on the screen. The end result is lower power consumption when AFK, as well as relatively lower fan speeds and better acoustics, because the GPU doesn’t constantly have to work as hard.
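
        Conceptually (a rough sketch of the idea, not AMD’s actual driver logic; the FPS numbers are made up), Chill behaves like a dynamic frame cap that drops the target when no input has come in for a while:

        ```python
        import time

        # Hypothetical caps, purely for illustration.
        FPS_ACTIVE = 144    # target while the player is actively providing input
        FPS_IDLE = 40       # target while nothing much is happening / player is AFK
        IDLE_AFTER_S = 2.0  # seconds without input before falling back to the idle cap

        def frame_budget(seconds_since_last_input: float) -> float:
            """Pick a per-frame time budget based on recent input activity."""
            target = FPS_ACTIVE if seconds_since_last_input < IDLE_AFTER_S else FPS_IDLE
            return 1.0 / target

        def throttle(frame_start: float, seconds_since_last_input: float) -> None:
            """Sleep off the unused part of the frame budget so the GPU can clock down."""
            budget = frame_budget(seconds_since_last_input)
            elapsed = time.perf_counter() - frame_start
            if elapsed < budget:
                time.sleep(budget - elapsed)
        ```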

        • kadu@lemmy.world

          The reason people care more about Reflex is that in every single game tested, Reflex is significantly ahead, to the point that the difference in latency actually matters rather than being merely theoretical.

            • kadu@lemmy.world

              They give a measured latency improvement, but it’s so small it’s not enough to affect perception of input latency per frame, which means yes… the feature is there but… that’s about it, really. Doesn’t really matter.

              Reflex is famous in eSports precisely because it actually improves latency just enough to be advantageous in the real world.

              eSports players do not care about brand affiliation, style, software - if it works, they’ll use it.

              EDIT: Though I have to say, I’m talking about Nvidia Reflex vs AMD anti lag. I don’t know Intel’s equivalent, so I can’t talk much about it, other than the fact that playing eSports and never hearing about it isn’t a good sign…

              • Dudewitbow@lemmy.ml

                I’m not saying Reflex is bad or that it isn’t used by eSports pros. It’s just that “theoretical” isn’t the best choice of word for the situation, as anti-lag does make a difference; it’s just much harder to detect, similar to the latency difference between two close-but-not-identical framerates, or the experience of refresh rates that are close to each other. Especially at the high end, you move out of the realm of framerate/input properties and become bottlenecked by screen characteristics (which is why OLEDs are better than traditional IPS, but can be beaten by high-refresh-rate IPS/TN with BFI).
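
                To put rough numbers on how small those differences get at the high end (my own back-of-the-envelope math):

                ```python
                # Frame times (ms) for nearby high refresh rates, back-of-the-envelope:
                for hz in (144, 240, 360):
                    print(f"{hz} Hz -> {1000 / hz:.2f} ms per frame")
                # 144 Hz -> 6.94 ms, 240 Hz -> 4.17 ms, 360 Hz -> 2.78 ms
                # The per-frame gaps shrink to a millisecond or two, which is why the
                # display itself (pixel response, BFI, etc.) starts to dominate what you feel.
                ```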

                Regardless, the point is less about the tech and more about the idea that AMD doesn’t innovate. It does, but it takes longer for people to see it, because they either choose not to use a specific feature or are completely unaware of it, either because they don’t use AMD or because they have a fixed channel where they get their news.

                • kadu@lemmy.world

                  But I don’t think you’ve chosen the best example here. AMD’s “innovation” of reorganizing the display pipeline to reduce latency is nothing compared to being left absolutely behind for three years by not making GPUs capable of accelerating AI models in real time.

                  Using just this core concept, Nvidia is delivering a lot of features that have very real effects - AMD is trying to catch up by using the good old shader units and waaaay less effective algorithms.

  • Carlos Solís@communities.azkware.net

    Given that it will eventually be open-source: I hope somebody hooks this up to a capture card, to get relatively lag-less motion smoothing for console games locked to 30 FPS.

  • kadu@lemmy.world

    Jesus Christ, doing frame generation on shader units… this will look horrible, considering that FSR 2 only needs to upscale and is already far behind DLSS or XeSS (the Intel version, not the generic fallback).

    • Hypx@kbin.social

      People made the same claim about DLSS 3. But those generated frames are barely perceptible and certainly less noticeable than frame stutter. As long as FSR 3 works half-decently, it should be fine.

      And the fact that it works on older GPUs, including those from Nvidia, really shows that Nvidia was just blocking the feature in order to sell more 4000-series GPUs.

      • kadu@lemmy.world

        That’s because DLSS 3 uses dedicated hardware that sits idle and isn’t fighting for cycles during normal rendering, as well as actual neural networks.

        This is different from having shader units that are already extremely busy share resources with FSR. That’s why FSR can’t handle things like occlusion nearly as well as DLSS.

        So scale that up to an entire frame generation, rather than upscaling, and you can expect some ugly results.

        And no - when the hardware is capable, Nvidia backports features. Video upscaling is available for older GPUs, the newly announced DLSS Ray Reconstruction is also available. DLSS 3 is restricted because it actually does require extra hardware to allow the tensor cores to read the framebuffer, generate an image in VRAM, and deliver it without disrupting the normal flow.

        • Hypx@kbin.social

          You aren’t going to use these features on extremely old GPUs anyways. Most newer GPUs will have spare shader compute capacity that can be used for this purpose.

          Also, all performance is based on compromise. It is often better to render at a lower resolution with all of the rendering features turned on, then use upscaling & frame generation to get back to the same resolution and FPS, than it is to render natively at the intended resolution and FPS. This is often a better use of existing resources even if you don’t have extra power to spare.
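
          As a rough illustration of that trade-off, here's the pixel-count arithmetic (my numbers, assuming a 1440p internal resolution):

          ```python
          # Shading cost scales roughly with pixel count, hence the headroom:
          native_4k = 3840 * 2160        # 8,294,400 pixels
          internal_1440p = 2560 * 1440   # 3,686,400 pixels
          print(f"Native 4K shades ~{native_4k / internal_1440p:.2f}x the pixels of 1440p")
          # ~2.25x, which is headroom you can spend on heavier effects plus the
          # upscaling / frame-generation passes themselves.
          ```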

        • Dudewitbow@lemmy.ml

          Because I think the post assumes that the GPU is always using all of its resources during computation, when it isn’t. There’s a reason benchmarks can make a GPU hotter than a game can, and not all games pin GPU utilization at 100%. If a GPU is not pinned at 100%, there is a bottleneck in the presentation chain somewhere (which means unused resources on the GPU).

          • kadu@lemmy.world

            You’re correct, and if AMD is announcing the feature, that does mean there’s enough shader compute available for this to work.

            However, it also means the algorithm must be light enough to generate the frame within that very limited resource budget. This is already what we see with FSR: it works well, but it can’t fix some of the issues DLSS can, because DLSS can use far more complex algorithms since it isn’t fighting for resources.

    • Edgelord_Of_Tomorrow@lemmy.world

      You’re getting downvoted, but this will be correct. DLSS frame generation looks dubious enough on dedicated hardware; doing this on shader cores means it will be competing with the 3D rendering, so it will need to be extremely lightweight to actually offer any advantage.

      • Dudewitbow@lemmy.ml

        I wouldn’t say “compete,” since the whole concept of frame generation is that it generates more frames when GPU resources are idle or underused because another part of the chain is holding the GPU back from rendering more frames. It’s sort of like how I view hyperthreads on a CPU: they aren’t a full core, but they’re a thread that gets utilized when there are points in a CPU calculation that leave a resource unused (e.g., if a core is using the AVX2 unit to do some math, a hyperthread can, for example, use the ALU that might not be in use to do something else, because it’s free).

        It would only compete if the time it takes to generate one additional frame is longer than the time the GPU sits idle due to some bottleneck in the chain.
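
        A toy example of that condition, with made-up numbers just to show the comparison:

        ```python
        # Hypothetical numbers: a CPU-bound game feeds the GPU a frame every 16.7 ms
        # (60 FPS), but the GPU only needs 10 ms to render it, leaving ~6.7 ms idle.
        cpu_frame_time_ms = 16.7   # time between frames the CPU can prepare
        gpu_render_time_ms = 10.0  # time the GPU spends rendering each frame
        fg_cost_ms = 3.0           # assumed cost of generating one extra frame

        idle_ms = cpu_frame_time_ms - gpu_render_time_ms
        if fg_cost_ms <= idle_ms:
            print("Generation fits in the idle gap -> extra frames are effectively free")
        else:
            print("Generation overruns the idle gap -> it starts competing with rendering")
        ```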

    • hark@lemmy.world

      The hit will be less than the hit of trying to run native 4k.

  • cordlesslamp@lemmy.today

    Guys, what would be a better purchase?

    1. Used 6700xt for $200

    2. Used 3060 12GB for $220

    3. Neither of the used ones; get a new $300 card for the 2-year warranty.

    4. Other recommendations?

    • simple@lemm.eeOP

      $200 for the 6700 XT is a pretty good deal. It’s up to you whether you’d prefer buying used or getting something with a warranty.

  • DarkThoughts@kbin.social

    Every DX11 & DX12 game can take advantage of this tech via HYPR-RX, which is AMD’s software for boosting frames and decreasing latency.

    So, no Vulkan?

    • Ranvier@sopuli.xyz

      I’m not sure; I’ve been trying to find the answer. But they’ve stated FSR 3 will continue to be open source, and prior versions have supported Vulkan on the developer end. It sounds like this (HYPR-RX) is a solution for using it in games that didn’t necessarily integrate it, though? So it might be separate. Unclear.