WereCat

joined 1 year ago
[–] WereCat@lemmy.world 3 points 5 hours ago

You dense... You're not very bright, are you?

[–] WereCat@lemmy.world 59 points 1 day ago (5 children)

All I understand is that your wife is 35 feet and you're 30 meters

[–] WereCat@lemmy.world 25 points 2 days ago (1 children)

If only it were so easy

[–] WereCat@lemmy.world 1 points 3 days ago

Thanks!

1.) Will definitely give it a try.

3.) I have set the amdgpu feature mask (otherwise I wouldn't even have access to the power limit, voltages, etc.), but VRAM overclocking just does not work. Everything else seems to work fine.
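For anyone landing here later: the feature mask is a kernel boot parameter, and on Fedora it's typically set with grubby, roughly like this (a sketch; 0xffffffff enables every power-play feature bit, which is the blunt-but-common choice):

```
# enable amdgpu OverDrive/power-play features at boot (takes effect after reboot)
sudo grubby --update-kernel=ALL --args='amdgpu.ppfeaturemask=0xffffffff'
```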

[–] WereCat@lemmy.world 1 points 4 days ago (1 children)

I'm 100% sure it's not a cable issue, for many valid reasons; one of the main ones is that the same cable can drive a higher-res monitor at a higher refresh rate without issue.

Also, if I swap the cables between my main monitor and the 2nd one, the same issue still happens with the 2nd monitor, but only on Linux, never on Windows.

[–] WereCat@lemmy.world 1 points 4 days ago (3 children)
[–] WereCat@lemmy.world 4 points 5 days ago

No problem! I was interested in the performance, so I may as well share my findings :)

Also important to note: on Windows the game runs Lumen all the time under DX12, and the only way to disable it is to run the game under DX11. I'm assuming that under Vulkan via Proton the game doesn't run Lumen at all. The game also seems to support Frame Generation, but I haven't tried it yet.

 

For those who are interested.

DISCLAIMER:

I DON'T KNOW IF THE GAME HAS DRM THAT WILL PREVENT YOU FROM PLAYING ON LINUX OR NOT. I DON'T OWN THE GAME, THE BENCHMARK TOOL IS FREELY AVAILABLE THOUGH AND THAT'S WHAT I'VE TESTED.

  • Fedora 40
  • Ryzen 7 5800X3D
  • RX 6800 XT Sapphire Pulse
  • 4x16GB DDR4 3600MT/s (Quad Rank with manual tune)

I run a minor OC on the GPU: 2600MHz core, no VRAM OC (because it's broken on Linux), -50mV on the core, and the power limit set to 312W.
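For reference, here's roughly what applying that tune by hand through the amdgpu sysfs interface looks like; a sketch only, assuming an RDNA2 card at card0 with the OverDrive bit enabled in amdgpu.ppfeaturemask (exact OverDrive syntax varies by GPU generation and kernel):

```
cd /sys/class/drm/card0/device

echo 's 1 2600' | sudo tee pp_od_clk_voltage   # max core clock, MHz
echo 'vo -50'   | sudo tee pp_od_clk_voltage   # core voltage offset, mV
echo 'c'        | sudo tee pp_od_clk_voltage   # commit the staged values

# the power limit goes through hwmon and is in microwatts (312 W here)
echo 312000000 | sudo tee hwmon/hwmon*/power1_cap
```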

Results (Motion Blur OFF in all runs):

  • 1440p High - Native
  • AVG = 65 FPS
  • Max = 75 FPS
  • Min = 55 FPS
  • Low 5th = 58 FPS

  • 1440p High - FSR 75% scaling
  • AVG = 87 FPS
  • Max = 106 FPS
  • Min = 73 FPS
  • Low 5th = 78 FPS

  • 1440p High - TSR 75% scaling
  • AVG = 85 FPS
  • Max = 100 FPS
  • Min = 70 FPS
  • Low 5th = 76 FPS

I found TSR more pleasing to my eyes even though it's a bit more blurry, as I find the shimmer of FSR more distracting in motion. In static scenes FSR definitely pulls ahead visually.

The game looks well optimized. You can probably run most settings on Very High if you're targeting just 60FPS with some upscaling (assuming the game performs like the benchmark). The benchmark is also quite GPU-heavy and barely puts any load on the CPU; my 5800X3D was drawing less than 20W for the entire run. It's possible the actual game is quite a bit more CPU-heavy than that.

You can definitely set Textures to Cinematic quality with barely any performance hit if you have a card with enough VRAM; the textures do look quite nice on Cinematic.

[–] WereCat@lemmy.world 1 points 6 days ago

Please bear in mind that custom tuning isn't guaranteed to carry over between driver versions; the voltage floor can shift with power-management firmware changes delivered in driver packages (this doesn't overwrite the board VBIOS, it's loaded at OS runtime, and the pmfw is also included in linux-firmware). I'd recommend re-testing with a Vulkan memory test after each Adrenalin update, and every now and then on Fedora too.

I'm aware. For now it seems to behave consistently. I observed higher average clocks on Linux vs Windows with the same OC, but then again that may be down to differences in monitoring software or just the polling rate.
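For what it's worth, re-testing after updates can be as simple as running a standalone VRAM stress tool like memtest_vulkan; a sketch, assuming its no-argument default run:

```
# loops VRAM write/read passes until interrupted; any reported
# errors mean the memory tune isn't stable
./memtest_vulkan
```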

[–] WereCat@lemmy.world 1 points 6 days ago

To be fair, when it's time to upgrade, the Linux support will probably be even worse, since I'd be upgrading to even newer hardware than what I have now.

[–] WereCat@lemmy.world 2 points 1 week ago

I would hold off on that conclusion for now. Steve from Hardware Unboxed tested both Zen 4 and Zen 5 with the "supposed" fix, and both saw improved performance, so the rough difference between Zen 4 and Zen 5 stayed almost the same, as the issue was affecting both. We'll need to see more tests to draw a reasonable conclusion, though. We don't yet know whether this affects older Zen 3 at all.

[–] WereCat@lemmy.world 2 points 1 week ago (1 children)

The monitors coming up flipped happened to me as well. I fixed that by swapping the order of the DP cables on the graphics card.

[–] WereCat@lemmy.world 2 points 1 week ago* (last edited 1 week ago) (2 children)

1.) IDK, this issue sometimes shows up for me on other distros as well. Forgot to mention that it also happens if the monitors go to sleep when inactive: on wake-up the 2nd screen sometimes doesn't come back. That's why I disabled sleep for the monitors.

2.) So far it works fine after disabling HW acceleration.

4.) No need to waste both of our time. The script works fine now, but thanks for the offer. I don't even know what half of your sentence means :D

3.) On Windows I use MPT (MorePowerTool) to further modify the card's behaviour: SoC voltages and clocks, FCLK, TDC limits, power limits, etc. Basically I can easily squeeze an extra 10% on top of the typical overclocking available via MSI Afterburner or AMD's Adrenalin software. #73rd place in Time Spy for GPU score, which is kinda ridiculous for an air-cooled card:

https://www.3dmark.com/search#advanced?test=spy%20P&cpuId=&gpuId=1348&gpuCount=1&gpuType=ALL&deviceType=ALL&storageModel=ALL&showRamDisks=false&memoryChannels=0&country=&scoreType=graphicsScore&hofMode=true&showInvalidResults=false&freeParams=&minGpuCoreClock=&maxGpuCoreClock=&minGpuMemClock=&maxGpuMemClock=&minCpuClock=&maxCpuClock=

Cyberpunk likes to draw a lot of current, so my tweaks help alleviate the throttling this causes in typical OC scenarios: in the CP2077 benchmark the card hits the current (TDC) limit more often than the actual power limit. That's why the lows on Linux are worse; it's not the CPU underperforming. I suspect the lows would actually be better if I could raise the GPU TDC limit on Linux. Averages would likely still be lower than Windows due to the lack of VRAM OC.

This isn't really comparable, and I'd have to do a proper test on both Windows and Linux with the same game version, but I tested with the same settings: FOV = 100, SSR = Low (because it performs like crap on higher settings for no visual benefit), everything else maxed out.

This is a screenshot from the run I did in February with my Windows OC (I also had a worse CPU and memory tune back then vs what I run now, so those results would be slightly better today as well).

And this is from right now on Fedora 40... not sure why CP2077 detects it as Windows 10.

It would be interesting to keep the same game version and GPU/CPU/DRAM tune and do a direct comparison, but I can't really be bothered to mess with that right now. What's important to me is that it's roughly in the same ballpark and there are no massive swings in performance; unless I keep a close eye on monitoring, I can hardly tell the difference when playing.

 

I've tried to switch multiple times and always ran into some issue that sent me back to Windows (on my desktop PC).

Last year it was after 2 months on Fedora 38 KDE: between the KDE window manager acting weird, broken and unusable VRR on the desktop, and some other smaller but daily issues, I'd had enough and went back to W11 on my PC.

I prefer GNOME over KDE, and back then there was no VRR support on GNOME, so I had to stick with KDE; now it's a different story.

I still have some minor annoyances, which are probably solvable, but I don't know how, as I haven't put enough effort into finding solutions.

Namely:

1.) Sometimes my 2nd monitor remains blank after boot and I have to unplug the DP cable from the graphics card and plug it back in. This typically happens after a kernel update or a restart, rarely on a cold boot. I've seen others with this issue on Fedora 40, but I haven't seen any solution mentioned.

2.) The Steam UI sometimes hangs for several seconds when navigating through it quickly, especially if it needs to pop open a different window.

3.) GPU VRAM OC is completely busted; even a ±1MHz change results in massive artifacting, even on the desktop. Not a big deal, but I would take the extra 5% boost I can get from VRAM OC on Windows :)

4.) After every kernel update I have to run two commands to get my GPU overclock working again. I haven't figured out yet how to make a script that reads the output of the 1st command and feeds it into the 2nd (see the sketch after this list), so I just do it manually every time, which is roughly once a week.

5.) Free scrolling does not work in Chromium-based browsers :( Luckily Vivaldi has a nice workaround with mouse gestures, but I would still like free scrolling like on Windows.
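Regarding 4.), chaining the two commands is a bash one-liner; a sketch with placeholder names, since the actual commands aren't spelled out here:

```
#!/usr/bin/env bash
# first-command / second-command are stand-ins for the real pair
value="$(first-command)"   # capture the 1st command's output
second-command "$value"    # feed it to the 2nd as an argument
# if the 2nd command reads stdin instead: first-command | second-command
```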

And those are about the only annoyances I found worth mentioning.

Gaming works fine.

The apps I use typically work fine on Linux as well. MangoHud is amazing. No issues with audio, unlike my last attempt. Heck, even Discord streams video and audio without issues now, despite me just using the web app. VRR, despite being experimental, works flawlessly for me on GNOME. I'm happy.

0
submitted 3 months ago* (last edited 3 months ago) by WereCat@lemmy.world to c/linux@lemmy.ml
 

SOLUTION:

I was missing this package: sudo dnf install rocm-hip-devel, as per the instructions here: https://fedoraproject.org/wiki/SIGs/HC
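A quick sanity check that ROCm now enumerates the card (an RX 6800 XT should report gfx1030):

```
rocminfo | grep -iE 'marketing name|gfx'
```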


Hi, I'm trying to get GPU acceleration on AMD working in Blender 4.1, but I can't seem to manage it. From what I've seen it should work fine with ROCm, but I've had no luck so far.

I'm using Fedora 40 GNOME with Wayland and my GPU is RX 6800 XT.

System is up to date. I've also installed all these packages:

sudo dnf install rocminfo

sudo dnf install rocm-opencl

sudo dnf install rocm-clinfo

sudo dnf install rocm-hip

and restarted the system afterwards.

rocminfo gives me this

rocm-clinfo gives me this


 

When they are already finishing each other's sentences.

 

Can anyone help me? I wasn't able to find any solution for this. The controller works fine via USB-C, but I only have a very short cable; I borrowed the controller from a friend to try it and don't have the original cable, but I was intending to play via Bluetooth anyway.

Basically, I can find the controller in pairing mode, but when I try to pair it I get this error:

The Setup of Dual Sense wireless Controller has failed.

After that I can see it among the available wireless devices, but when I try to connect to it, it immediately disconnects again (checked with bluetoothctl).

I'm using the on-board Intel AX200 wireless adapter.
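For context, the usual bluetoothctl re-pair sequence looks like this (a sketch; AA:BB:CC:DD:EE:FF is a placeholder for whatever 'devices' lists):

```
bluetoothctl
remove AA:BB:CC:DD:EE:FF    # drop the stale pairing first
scan on                     # controller must be in pairing mode
pair AA:BB:CC:DD:EE:FF
trust AA:BB:CC:DD:EE:FF     # let it reconnect automatically later
connect AA:BB:CC:DD:EE:FF
```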

 

I switched from Windows to Fedora last week and I'm monitoring stats with MangoHud while playing games. I used to run HWiNFO on the 2nd monitor when using Windows 11.

I have an RX 6800 XT (default voltage). The card maintains higher clocks at lower power most of the time. I've set the same OC as on Windows, with a 2700MHz max clock, and in games on Linux I'm pinned at 2670-2700MHz almost all the time when I don't hit the power limit (312W), while on Windows the actual clock barely went over 2600MHz and the card was almost always bouncing off the power limit, with massive clock drops to 2300-2400MHz. On Linux the drops are only about 100-130MHz at most in the same scenarios.

Unfortunately I'd need to install Windows again and do proper testing to compare, but I wonder if anyone else can confirm/deny this.

At least at idle I can confirm for a fact that the card uses less power: usually around 30-35W, while on W11 it was more like 40-50W.
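For anyone wanting to cross-check without MangoHud or HWiNFO, the driver exposes both readings directly in sysfs; a sketch, assuming card0 (indices vary per system):

```
# the active clock state is marked with '*'
cat /sys/class/drm/card0/device/pp_dpm_sclk
# board power draw, in microwatts
cat /sys/class/drm/card0/device/hwmon/hwmon*/power1_average
```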

 

I got a 2nd-hand RX 6800 XT about 3 months ago and immediately had to repaste it, as thermals went well above 100C even at 255W. I used a new tube of MX-4 which I kept as a backup, and temps went down significantly for about 2 weeks, but then the hot spot started creeping up until the difference between GPU temp and TJ Max was 30C+ at times; with my OC at 300W it was often hitting 105-109C on TJ Max in Time Spy (and even in games the hot spot would sometimes randomly jump over 100C, even when the game was only drawing 200W).

So my theory was that thermal-paste pump-out due to thermal cycling explained why the temps rose so fast after the repaste, and it took me until now to try the Carbonaut pad, which I assumed could fix the issue.

I used Time Spy GPU Test 2 on 5 loops to get these results for comparison. The GPU was set to 300W and 2600MHz at the stock 1150mV. GPU fan speed was fixed, with the side panel on the case.

I started in the morning, so room temp only climbed while I gathered the results, which means the pad results are actually slightly better than what I measured.

After replacing the paste with the pad, TJ Max did go down by about 6-9C and I was only hitting about 100C at most, BUT the core temp went up significantly, by almost 20C, from around 78C to 95C.

That was definitely disappointing, as it affected the GPU clocks quite significantly and resulted in around a 250MHz drop.

But because the hot spot went down, I figured there must be insufficient contact or cooler pressure, so I found some rubber washers (or O-rings, or whatever they're called) in the garage, took off the retention plate, and installed them. I screwed the plate back on as evenly as possible with just a normal screwdriver and hoped I wouldn't crack the die by using too much pressure.

The results are absolutely stellar: almost a 20C drop on the hot spot vs paste (around a 9C improvement vs the pad without washers), which caps my Time Spy runs at 91C on the hot spot. The GPU temp also went down by more than 20C vs the pad without washers, and about 5C vs paste, to around 73C in Time Spy.

So all in all I'm quite happy with the results. The washers probably did the most; I think washers + paste would get me similar or maybe even better results, but I'm not going to try.

If you decide to go for the pad, I recommend getting one larger than the 32x32mm pad I used, as it's only just big enough, with almost no room for error if it moves during installation.
