Message boards : GPUs : Changing GPU and CPU allocation per task? (ex. 0.5 CPU + 0.5 GPU per task)
Joined: 12 Dec 17 Posts: 2
I had a friend who altered my Milkyway@home setup so that it GPU-boosted CPU tasks. I want to know how to do this again, as I have found that GPU boosting extraordinarily useful for completing tasks on the projects I run (Asteroids@home, SETI@home, and Milkyway@home). I can't seem to get any information through Google or find the relevant settings to do this, though.

Since I run a quad core (i5-6600K) and two GPUs (GTX 980 Ti and GTX 970), I was thinking of this allocation:

    1 CPU + 0.5 GPU for each task

This would allow me to run 4 boosted tasks in what I would expect to be slightly less time than 6 tasks (4 CPU, 2 GPU). Is it possible for me to do this, and if so, how? Is this even recommended?

Thanks for the help,
Purphoros
Joined: 2 Jul 14 Posts: 186
You're probably talking about GPU apps and tasks, which are not actually "boosting" any CPU tasks but are rather a separate thing of their own.

First, there has to be a proper GPU driver installed on your system. Otherwise Boinc won't recognize your GPUs and won't be able to use them. The start-up messages in the Boinc Event Log show what Boinc finds out about the current GPUs. If the Activity menu in Boinc Manager doesn't have anything about "GPU", then the cards definitely have not been recognized properly. Major OS version updates on Windows 10, for example, tend to screw up the Nvidia GPU driver, and it needs to be reinstalled.

Every project site has a page for project preferences, for example http://asteroidsathome.net/boinc/prefs.php?subset=project If you log in there, you can find settings to enable GPU computing for that project. You might also need to check (tick) any GPU apps if the project's preferences page has a list of available apps. Not every project has a GPU app available for all types of GPU (Nvidia / AMD / Intel). Make sure that your computer's 'location' is the same as the preferences set that you are saving the settings for. Then click Update for the project in Boinc Manager (or use the command-line Boinc client). GPU computing for that project should then come alive.

You also need to see how this option is set: Boinc Manager... Options... Computing preferences... "Suspend GPU computing when computer is in use". That could keep suspending GPU computing if you use your computer while Boinc is running.

You can adjust GPU and CPU usage per task by saving an app_config.xml in the project directory. Here is basic info about the structure of that file: https://boinc.berkeley.edu/trac/wiki/ClientAppConfig

You need to know the name of the GPU app. It can be different from the name of the CPU app. I don't know them for SETI or Milkyway. For Asteroids@home, app_config.xml could look something like this:

    <app_config>
       <app>
          <name>period_search</name>
          <max_concurrent>4</max_concurrent>
          <gpu_versions>
             <gpu_usage>.5</gpu_usage>
             <cpu_usage>1</cpu_usage>
          </gpu_versions>
       </app>
    </app_config>

Ps. Out of curiosity... how long do the GPU tasks on Asteroids run on your cards?
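Pps. If you're unsure of the exact app name for a project, one place to look (assuming a standard Boinc installation) is client_state.xml in the Boinc data directory. Each project's apps are listed there in blocks like the one below; the <name> value is the one app_config.xml expects. The user-friendly name shown here is only illustrative, and the file should be treated as read-only, never edited:

    <app>
       <name>period_search</name>
       <user_friendly_name>Period Search Application</user_friendly_name>
    </app>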
Joined: 12 Dec 17 Posts: 2
Thanks for the reply, you hit it right on the nose with your response.

Asteroids tasks run in about 17 minutes on the 980 Ti alone and about 20 minutes on the 970 (OC @ 1600 MHz). I feel like these times used to be longer, but it might vary by task, and there may have been optimization done at some point.

For the name of the application, Asteroids worked perfectly. For Milkyway they are milkyway_nbody and milkyway, but I can't seem to get SETI working. Their project is down for maintenance right now, so I can't look through their forums.

Thanks for your help.
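Edit: for reference, the Milkyway file I'm trying looks roughly like this. It's just a sketch following the Asteroids example above, assuming both apps have GPU versions; the usage values are what I'm experimenting with, not recommendations:

    <app_config>
       <app>
          <name>milkyway</name>
          <gpu_versions>
             <gpu_usage>.5</gpu_usage>
             <cpu_usage>1</cpu_usage>
          </gpu_versions>
       </app>
       <app>
          <name>milkyway_nbody</name>
          <gpu_versions>
             <gpu_usage>.5</gpu_usage>
             <cpu_usage>1</cpu_usage>
          </gpu_versions>
       </app>
    </app_config>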
Joined: 22 Nov 17 Posts: 11
I put this in my Asteroids project directory and it doesn't work; GPU usage is still 99%, even though Boinc shows 1 CPU + 0.95 NVIDIA GPU per task.

    <app_config>
       <app>
          <name>period_search</name>
          <max_concurrent>8</max_concurrent>
          <gpu_versions>
             <gpu_usage>.95</gpu_usage>
             <cpu_usage>1</cpu_usage>
          </gpu_versions>
       </app>
    </app_config>
Joined: 25 May 09 Posts: 1295
Because GPUs don't work quite the way you think they do....

The <gpu_usage> value is not a throttle; it only tells the Boinc scheduler how many tasks fit on one GPU. A value like 0.95 will not get you a task that uses 95% of the card. The useful values are reciprocals of whole numbers: half (0.5), a third (0.3333) etc. work well and result in multiple tasks running.

Likewise, the CPU fraction should be seen as a target, not an absolute value, and it is a target that may be missed in either direction....
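For example, an illustrative fragment (not specific to any one project):

    <gpu_versions>
       <gpu_usage>.33</gpu_usage>
       <cpu_usage>1</cpu_usage>
    </gpu_versions>

Boinc reads this as "three of these tasks fit on one GPU" and schedules accordingly; it does not slow each task down to a third of the card.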
Joined: 22 Nov 17 Posts: 11
Then I can't use the GPU for crunching, because I can't watch TV shows at the same time.
Joined: 2 Jul 14 Posts: 186
Try setting these values:

    <max_concurrent>1</max_concurrent>
    <gpu_usage>.5</gpu_usage>

Then your GPU should be running only 1 GPU task, and it should try to use only half of the card. You need to click Boinc Manager... Options... Read config files to apply the new settings, or restart Boinc.
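For clarity, here is the complete file with those values in place (same structure as earlier in the thread):

    <app_config>
       <app>
          <name>period_search</name>
          <max_concurrent>1</max_concurrent>
          <gpu_versions>
             <gpu_usage>.5</gpu_usage>
             <cpu_usage>1</cpu_usage>
          </gpu_versions>
       </app>
    </app_config>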
Joined: 22 Nov 17 Posts: 11
I already tried that, and unfortunately GPU usage is still 99%, even though Boinc shows (1 CPU + 0.5 NVIDIA GPUs). Limiting cpu_usage to 0.005 didn't work either, since Asteroids GPU tasks barely use the CPU at all.

GPUGRID utilizes about 80% of the GPU per task plus one CPU core, but there is currently an issue that prevents clients on Windows 10 from getting any tasks. So I will use the GPU for that project; I am getting a GeForce 1060 from MSI, which has a TDP of only 120 W.
Joined: 5 Oct 06 Posts: 5121
"Try setting those values"

Unfortunately, not true. The <gpu_usage> tag is not used in any way to control the behaviour of the application running on the GPU.

So far as I know from talking with GPU developers, the GPU runtime support tools (installed as part of the GPU driver bundle) don't provide any API hooks to support throttling (though admittedly the developers I talk to are all motivated to provide the maximum possible performance, and probably haven't even looked for any other way of programming). And beyond that, perhaps because there are no known tools, BOINC doesn't supply any way of passing such a 'slow down' instruction to GPUs.

Your app_config fragment will operate as you describe to limit the number of tasks running, but the single running task - once launched - will be free to grab every available resource, and we're back to square one.

Some project applications may interfere less than others with use of the GPU for its primary purpose of rendering images on screen: testing that to find an acceptable compromise will have to be done on the particular machine in question. I'm a user of NVidia cards: in general I find that applications programmed using the CUDA programming system are less intrusive than those coded using the rival OpenCL system.

Solving this problem partly depends on how empl watches television. If he has a specific computer program which is used only for viewing TV, he can use

    <exclusive_gpu_app>important.exe</exclusive_gpu_app>

to keep crunching and viewing separate; but if he uses a generic tool like a web browser, that may switch off BOINC's GPU computation too readily.
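For reference, that tag belongs in cc_config.xml in the BOINC data directory, inside the <options> section. A minimal sketch, with important.exe standing in for whatever the TV-viewing program's executable is actually called:

    <cc_config>
       <options>
          <exclusive_gpu_app>important.exe</exclusive_gpu_app>
       </options>
    </cc_config>

After saving it, use Boinc Manager... Options... Read config files, as above.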
Joined: 22 Nov 17 Posts: 11
I could use the Intel HD graphics to watch TV while crunching on my main GPU.
Joined: 5 Oct 06 Posts: 5121
Yes, that's worth a try. Let us know how you get on. |
Joined: 22 Nov 17 Posts: 11
Unfortunately, my monitor doesn't support the analog VGA input from the motherboard :(
Joined: 22 Nov 17 Posts: 11
Never mind :D DVI works. EDIT: you can see some pixelation in dark areas and the colours are a bit off and darker, but it is watchable - not bad.