Thread 'Intel GPU'

Message boards : GPUs : Intel GPU
boboviz
Help desk expert

Joined: 12 Feb 11
Posts: 419
Italy
Message 105404 - Posted: 13 Sep 2021, 8:22:49 UTC

Intel GPUs are coming.
Is BOINC ready to support them?
robsmith
Volunteer tester
Help desk expert

Joined: 25 May 09
Posts: 1299
United Kingdom
Message 105405 - Posted: 13 Sep 2021, 8:58:21 UTC - in response to Message 105404.  

This is more an issue for the projects than for BOINC. After all, BOINC has been able to identify Intel "GPUs" for a good few years, but not all projects have developed applications that run well on them, as the iGPU has been seen as having very low performance when it comes to computational work.
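(For reference, not part of the original post:) BOINC's client configuration has indeed treated Intel GPUs as their own coprocessor type for some time; for example, the documented cc_config.xml option <ignore_intel_dev> excludes a specific Intel device from use, mirroring the long-standing NVIDIA and ATI equivalents. A minimal sketch:

```xml
<cc_config>
  <options>
    <!-- Don't use Intel GPU device number 0 (the first Intel GPU). -->
    <ignore_intel_dev>0</ignore_intel_dev>
  </options>
</cc_config>
```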
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5124
United Kingdom
Message 105406 - Posted: 13 Sep 2021, 9:24:00 UTC

Several projects do already have iGPU applications - SETI, Einstein, WCG/Covid-19. Most discussion has focused on efficiency: there is a balance to be struck between high-performance applications, which draw so much power and generate so much heat that the rest of the CPU has to slow down, and low-performance applications which - well, don't add much science.

But I'm more concerned about scientific accuracy. Most iGPU applications are built on the back of OpenCL applications originally developed for discrete GPUs - these applications require little or no extra programming effort. But most applications that I've seen are built with the OpenCL compiler option '-cl-mad-enable'. The documentation for mad says:

mad approximates a * b + c. Whether or how the product of a * b is rounded and how supernormal or subnormal intermediate products are handled is not defined. mad is intended to be used where speed is preferred over accuracy.
From what I've seen over the years, the divergence caused by mad has increased with each new iteration of the silicon and each new generation of runtime support in the drivers.

My advice would be to proceed with caution until these effects are more widely understood.
Jord
Volunteer tester
Help desk expert

Joined: 29 Aug 05
Posts: 15552
Netherlands
Message 105408 - Posted: 13 Sep 2021, 11:37:39 UTC - in response to Message 105406.  

Several projects do already have iGPU applications
I think boboviz means the discrete Intel GPU, or the Intel Xe architecture. Let's first wait and see if someone has a problem with getting it detected before we get the devs to look at it.
Richard Haselgrove
Volunteer tester
Help desk expert

Joined: 5 Oct 06
Posts: 5124
United Kingdom
Message 105410 - Posted: 13 Sep 2021, 12:22:50 UTC - in response to Message 105408.  

Fair enough.
Ian&Steve C.

Joined: 24 Dec 19
Posts: 229
United States
Message 105411 - Posted: 13 Sep 2021, 16:48:08 UTC - in response to Message 105408.  

Several projects do already have iGPU applications
I think boboviz means the discrete Intel GPU, or the Intel Xe architecture. Let's first wait and see if someone has a problem with getting it detected before we get the devs to look at it.


I actually have a slight issue with Intel Iris Xe on my new laptop (i7-1165G7). BOINC recognizes it (as "Intel Gen12LP HD Graphics"), but Einstein won't send it tasks. I suspect this is a problem on the Einstein scheduler side, though: they have Intel iGPU tasks, and I was trying to see if they worked on Xe, but I can't test if the app never gets sent to me.

Running Linux, BTW.
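For anyone debugging detection (a general BOINC diagnostic, not something Einstein-specific): the <coproc_debug> log flag in cc_config.xml makes the client log extra detail about GPU/coprocessor detection at startup. A minimal sketch:

```xml
<cc_config>
  <log_flags>
    <!-- Log details of coprocessor (GPU) detection. -->
    <coproc_debug>1</coproc_debug>
  </log_flags>
</cc_config>
```

Restart the client (or use "Read config files") for the flag to take effect.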
ProDigit

Joined: 8 Nov 19
Posts: 718
United States
Message 105439 - Posted: 17 Sep 2021, 19:30:55 UTC

The older Intel CPUs already run IGP workloads just fine.
For most desktops, it's not an issue to run IGP and CPU WUs at the same time.
In fact, I run a 2-core Celeron G CPU (with Hyper-Threading) that feeds 2 Nvidia GPUs and the Intel GPU, and then runs 2 CPU WUs.
It runs at ~80 °C, which is pretty hot but still within tolerance; sometimes 85 °C.

I use the Beignet drivers.

I think with more modern versions (10th and 11th gen) you could run into heating issues, especially on mobile devices - either that, or throttling issues.
Since the 10th gen, the IGPs have become more powerful, but they should run just fine alongside 2 CPU threads or fewer.
Depending on the CPU, you could disable HT to reduce power draw and heat somewhat, or underclock the CPU to reach lower temperatures.
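One concrete way to do the "fewer CPU threads" part (standard BOINC preferences, not specific to this thread) is a global_prefs_override.xml that caps how many cores BOINC uses and how busy it keeps them. A sketch:

```xml
<global_preferences>
  <!-- Use at most 50% of the CPU cores... -->
  <max_ncpus_pct>50</max_ncpus_pct>
  <!-- ...and keep CPU duty cycle at or below 80% to shed heat. -->
  <cpu_usage_limit>80</cpu_usage_limit>
</global_preferences>
```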

With the Beignet driver, the IGP will always run at full speed (be it 500 MHz or 1000 MHz) under full load.
It doesn't throttle down (only when there's little to no graphics load); only the CPU part throttles under high temperatures.
Not sure if the same is true for the newer-gen CPUs.


Copyright © 2024 University of California.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.