InflatableMouse-temp


This is InflatableMouse's scratch page.


Note: this page is not done yet!

This article covers installing and configuring NVIDIA's proprietary graphics card driver. For information about the open-source drivers, see Nouveau. If you have a laptop with NVIDIA Optimus technology, see NVIDIA Optimus instead.


Installing

These instructions are for those using the stock linux or linux-lts kernel packages. For a custom kernel setup, skip to the next subsection.


1. If you do not know what graphics card you have, find out by issuing:

$ lspci -k | grep -A 2 -E "(VGA|3D)"

2. Determine the necessary driver version for your card by visiting NVIDIA's driver download site, looking up the name in NVIDIA's legacy card list, or finding the code name on nouveau wiki's code names page.

3. Install the appropriate driver for your card:

For the very latest GPU models, it may be required to install nvidia-beta from the Arch User Repository, since the stable drivers may not support the newly introduced features.
If you are on 64-bit and also need 32-bit OpenGL support, you must also install the equivalent lib32 package from the multilib repository (e.g. lib32-nvidia-utils or lib32-nvidia-{304xx,173xx,96xx}-utils); see the example below.
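
For example, on a 64-bit system with a current card and the multilib repository enabled, the stock driver and its 32-bit libraries could be installed with:

# pacman -S nvidia lib32-nvidia-utils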

4. Reboot. The nvidia package contains a file which blacklists the nouveau module, so rebooting is necessary.

Once the driver has been installed, continue to the Configuring section below.

Alternate install: custom kernel

First of all, it is good to know how the Arch Build System (ABS) works by reading some of the other articles about it.


The following is a short tutorial for making a custom NVIDIA driver package using ABS:

Install abs from the official repositories and generate the ABS tree with:

# abs

As a standard user, make a temporary directory for creating the new package:

$ mkdir -p ~/abs

Make a copy of the nvidia package directory:

$ cp -r /var/abs/extra/nvidia/ ~/abs/

Go into the temporary nvidia build directory:

$ cd ~/abs/nvidia

It is required to edit the files nvidia.install and PKGBUILD so that they contain the right kernel version variables.

While running the custom kernel, get the appropriate kernel and local version names:

$ uname -r
  1. In nvidia.install, replace the EXTRAMODULES variable with the custom kernel version, such as EXTRAMODULES='extramodules-3.13-custom', depending on what the kernel's version is and the local version's text/numbers. Do this for all instances of the version number within this file (see the sketch after this list).
  2. In PKGBUILD, change the _extramodules variable to match the appropriate version, as above.
  3. If there are multiple kernels installed in parallel (such as a custom kernel alongside the default -ARCH kernel), change the pkgname variable in the PKGBUILD to a unique identifier, such as nvidia-344 or nvidia-custom. This will allow both kernels to use the nvidia module, since the custom nvidia module has a different package name and will not overwrite the original. You will also need to comment out the line that blacklists the nouveau module, since the stock nvidia package already does this (no need to do it again).
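
As a sketch, for a custom kernel whose uname -r reports 3.13.4-1-custom, the edited lines might read (all values here are illustrative):

# in nvidia.install
EXTRAMODULES='extramodules-3.13.4-1-custom'

# in PKGBUILD
_extramodules=extramodules-3.13.4-1-custom
pkgname=nvidia-custom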

Then do:

$ makepkg -ci

The -c option tells makepkg to clean leftover files after building the package, whereas -i specifies that makepkg should automatically run pacman to install the resulting package.

Automatic re-compilation of the NVIDIA module with every update of any kernel

This is possible thanks to nvidia-hook from the AUR. You will also need to install the driver's DKMS module sources (available in the AUR). In nvidia-hook, the 'automatic re-compilation' functionality is done by an nvidia hook on mkinitcpio after forcing an update of the linux-headers package. You will need to add 'nvidia' to the HOOKS array in /etc/mkinitcpio.conf.
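
For example, the HOOKS line in /etc/mkinitcpio.conf might then read (the other hooks shown are common defaults and may differ on your system):

HOOKS="base udev autodetect modconf block filesystems keyboard fsck nvidia"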

The hook will call the dkms command to update the NVIDIA module for the version of your new kernel.


Configuring

After installing the driver, it may not be necessary to create an Xorg server configuration file; you can run a test to see if the Xorg server functions correctly without one. However, it may be required to create a configuration file (prefer a file under /etc/X11/xorg.conf.d/, such as 20-nvidia.conf, over the deprecated /etc/X11/xorg.conf) in order to adjust various settings. This configuration can be generated by the NVIDIA Xorg configuration tool, or it can be created manually. If created manually, it can be a minimal configuration (in the sense that it only passes the basic options to the Xorg server), or it can include a number of settings that bypass Xorg's auto-discovered or pre-configured options.

Minimal configuration

A basic configuration block can be placed in /etc/X11/xorg.conf.d/20-nvidia.conf (or in the deprecated /etc/X11/xorg.conf).
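
A minimal sketch of such a block (the Identifier string is arbitrary):

Section "Device"
        Identifier "Nvidia Card"
        Driver "nvidia"
EndSection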


Automatic configuration

The NVIDIA package includes an automatic configuration tool to create an Xorg server configuration file (/etc/X11/xorg.conf). It can be run by:

# nvidia-xconfig

This command will auto-detect and create (or edit, if already present) the /etc/X11/xorg.conf configuration according to present hardware.

If there are instances of DRI, ensure they are commented out:

#    Load        "dri"

Double check your /etc/X11/xorg.conf to make sure your default depth, horizontal sync, vertical refresh, and resolutions are acceptable.

Multiple monitors

See Multihead for more general information.

To activate dual screen support, you just need to edit the /etc/X11/xorg.conf file which you made before.

For each physical monitor, add one Monitor, Device, and Screen section entry, and then a ServerLayout section to manage them. Be advised that when Xinerama is enabled, the NVIDIA proprietary driver automatically disables compositing. If you desire compositing, you should comment out the Xinerama line in the ServerLayout section and use TwinView (see below) instead.

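A sketch of what such a configuration can look like for two monitors on one card, joined with Xinerama (all identifiers are illustrative, and the BusID must match your card):

Section "Device"
    Identifier "Device0"
    Driver     "nvidia"
    BusID      "PCI:1:0:0"
    Screen     0
EndSection

Section "Device"
    Identifier "Device1"
    Driver     "nvidia"
    BusID      "PCI:1:0:0"
    Screen     1
EndSection

Section "Monitor"
    Identifier "Monitor0"
EndSection

Section "Monitor"
    Identifier "Monitor1"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device     "Device0"
    Monitor    "Monitor0"
EndSection

Section "Screen"
    Identifier "Screen1"
    Device     "Device1"
    Monitor    "Monitor1"
EndSection

Section "ServerLayout"
    Identifier "Layout0"
    Screen 0 "Screen0" 0 0
    Screen 1 "Screen1" RightOf "Screen0"
    Option "Xinerama" "1"
EndSection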

TwinView

If you want only one big screen instead of two, set the TwinView argument to 1. This option should be used instead of Xinerama (see above) if you desire compositing.

Option "TwinView" "1"

TwinView only works on a per-card basis: if you have multiple cards, you will have to use Xinerama or zaphod mode (multiple X screens). You can combine TwinView with zaphod mode, ending up, for example, with two X screens covering two monitors each. Most window managers fail miserably in zaphod mode. Awesome is the shining exception, and KDE almost works.

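An example configuration might look like this (the MetaModes resolutions are illustrative):

Section "Device"
    Identifier "Device0"
    Driver     "nvidia"
    Option     "TwinView" "1"
    Option     "MetaModes" "1920x1080,1920x1080; 1280x1024,1280x1024"
EndSection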

See the NVIDIA driver README for Device option information.

If you have multiple cards that are SLI capable, it is possible to run more than one monitor attached to separate cards (for example: two cards in SLI with one monitor attached to each). The "MetaModes" option in conjunction with SLI Mosaic mode enables this. A configuration along these lines has been reported to run GNOME flawlessly.
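
A sketch of what such a configuration can look like (the MetaModes values are illustrative; adjust the outputs and offsets to your monitors):

Section "Device"
    Identifier "Device0"
    Driver     "nvidia"
    Option     "SLI" "Mosaic"
    Option     "MetaModes" "GPU-0.DFP-0: 1920x1080+0+0, GPU-1.DFP-0: 1920x1080+1920+0"
EndSection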

Manual CLI configuration with xrandr

If the previous solutions do not work for you, you can use your window manager's autostart facility to run an xrandr command like one of these:

xrandr --output DVI-I-0 --auto --primary --left-of DVI-I-1

or:

xrandr --output DVI-I-1 --pos 1440x0 --mode 1440x900 --rate 75.0

Where:

  • --output is used to indicate the "monitor" to which the options apply.
  • DVI-I-1 is the name of the second monitor.
  • --pos 1440x0 is the position of the second monitor with respect to the first.
  • --mode 1440x900 is the resolution of the second monitor.
  • --rate 75.0 is the refresh rate (in Hz).

You must adapt the xrandr options with the help of the output of the command xrandr run alone in a terminal.

Using NVIDIA Settings

You can also use the nvidia-settings tool provided by nvidia-utils. With this method, you will use the proprietary software NVIDIA provides with their drivers. Simply run nvidia-settings as root, then configure as you wish, and then save the configuration to /etc/X11/xorg.conf.

ConnectedMonitor

If the driver doesn't properly detect a second monitor, you can force it to do so with ConnectedMonitor.

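A sketch of what such a configuration can look like, forcing one CRT and one DFP on a single card (identifiers and monitor names are illustrative):

Section "Device"
    Identifier "Device0"
    Driver     "nvidia"
    Screen     0
    Option     "ConnectedMonitor" "CRT-0"
EndSection

Section "Device"
    Identifier "Device1"
    Driver     "nvidia"
    Screen     1
    Option     "ConnectedMonitor" "DFP-0"
EndSection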

The duplicated device with Screen is how you get X to use two monitors on one card without TwinView. Note that nvidia-settings will strip out any ConnectedMonitor options you have added.

Mosaic mode

Mosaic mode is the only way to use more than 2 monitors across multiple graphics cards with compositing. Your window manager may or may not recognize the distinction between each monitor.

Base mosaic

Base mosaic mode works on any set of Geforce 8000 series or higher GPUs. It cannot be enabled from within the nvidia-settings GUI. You must either use the nvidia-xconfig command line program or edit xorg.conf by hand. Metamodes must be specified. The following is an example for four DFPs in a 2x2 configuration, each running at 1920x1024, with two DFPs connected to each of two cards:

$ nvidia-xconfig --base-mosaic --metamodes="GPU-0.DFP-0: 1920x1024+0+0, GPU-0.DFP-1: 1920x1024+1920+0, GPU-1.DFP-0: 1920x1024+0+1024, GPU-1.DFP-1: 1920x1024+1920+1024"
SLI Mosaic

If you have an SLI configuration and each GPU is a Quadro FX 5800, Quadro Fermi or newer then you can use SLI Mosaic mode. It can be enabled from within the nvidia-settings GUI or from the command line with:

$ nvidia-xconfig --sli=Mosaic --metamodes="GPU-0.DFP-0: 1920x1024+0+0, GPU-0.DFP-1: 1920x1024+1920+0, GPU-1.DFP-0: 1920x1024+0+1024, GPU-1.DFP-1: 1920x1024+1920+1024"

Tweaking

GUI: nvidia-settings

The NVIDIA package includes the nvidia-settings program that allows adjustment of several additional settings.

For the settings to be loaded on login, run this command from the terminal:

$ nvidia-settings --load-config-only

The desktop environment's auto-startup method may not work for loading nvidia-settings properly (KDE, for example). To be sure that the settings are really loaded, put the command in your ~/.xinitrc file (create it if not present).

For a dramatic 2D graphics performance increase in pixmap-intensive applications, e.g. Firefox, set the InitialPixmapPlacement parameter to 2:

$ nvidia-settings -a InitialPixmapPlacement=2

This is documented in the nvidia-settings source code. For this setting to persist, this command needs to be run on every startup. You can add it to your ~/.xinitrc file for auto-startup with X.



Advanced: 20-nvidia.conf

Edit /etc/X11/xorg.conf.d/20-nvidia.conf, and add the option to the correct section. The Xorg server will need to be restarted before any changes are applied.

See NVIDIA Accelerated Linux Graphics Driver README and Installation Guide for additional details and options.

Enabling desktop composition

As of NVIDIA driver version 180.44, support for GLX with the Damage and Composite X extensions is enabled by default. Refer to the Xorg page for detailed instructions.

Disabling the logo on startup

Add the NoLogo option under the Device section:

Option "NoLogo" "1"

Enabling hardware acceleration

Add the RenderAccel option under the Device section:

Option "RenderAccel" "1"

Overriding monitor detection

The ConnectedMonitor option under the Device section allows you to override monitor detection when the X server starts, which may save a significant amount of time at start up. The available options are: CRT for analog connections, DFP for digital monitors, and TV for televisions.

The following statement forces the NVIDIA driver to bypass startup checks and recognize the monitor as DFP:

Option "ConnectedMonitor" "DFP"


Enabling triple buffering

Enable the use of triple buffering by adding the TripleBuffer option under the Device section:

Option "TripleBuffer" "1"

Use this option if the graphics card has plenty of RAM (128 MB or more). The setting only takes effect when syncing to vblank is enabled, one of the options featured in nvidia-settings.


Using OS-level events

Taken from the NVIDIA driver's README file: "[...] Use OS-level events to efficiently notify X when a client has performed direct rendering to a window that needs to be composited." It may help improve performance, but it is currently incompatible with SLI and Multi-GPU modes.

Add under the Device section:

Option "DamageEvents" "1"


Enabling power saving

Add under the Monitor section:

Option "DPMS" "1"

Enabling brightness control

Add under the Device section:

Option "RegistryDwords" "EnableBrightnessControl=1"


Enabling SLI

Taken from the NVIDIA driver's README appendix: This option controls the configuration of SLI rendering in supported configurations. A "supported configuration" is a computer equipped with an SLI-Certified Motherboard and 2 or 3 SLI-Certified GeForce GPUs. See NVIDIA's SLI Zone for more information.

Find the first GPU's PCI Bus ID using lspci:
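
The output will contain a line similar to this hypothetical example (here the bus ID is 3):

$ lspci | grep VGA
03:00.0 VGA compatible controller: NVIDIA Corporation G92 [GeForce 8800 GTS 512] (rev a2)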

Add the BusID (3 in the previous example) under the Device section:

BusID "PCI:3:0:0"


Add the desired SLI rendering mode value under the Device section:

Option "SLI" "AA"

The following table presents the available rendering modes.

Value                       Behavior
0, no, off, false, Single   Use only a single GPU when rendering.
1, yes, on, true, Auto      Enable SLI and allow the driver to automatically select the appropriate rendering mode.
AFR                         Enable SLI and use the alternate frame rendering mode.
SFR                         Enable SLI and use the split frame rendering mode.
AA                          Enable SLI and use SLI antialiasing. Use this in conjunction with full scene antialiasing to improve visual quality.

Alternatively, you can use the nvidia-xconfig utility to insert these changes into /etc/X11/xorg.conf with a single command:

# nvidia-xconfig --busid=PCI:3:0:0 --sli=AA

To verify that SLI mode is enabled from a shell, query the driver:
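
For example, you can grep the driver's attribute list (attribute names may vary between driver versions):

$ nvidia-settings -q all | grep -i sli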

Forcing Powermizer performance level (for laptops)

Add under the Device section:

# Force Powermizer to a certain level at all times
# level 0x0=adaptive (driver default)
# level 0x1=highest
# level 0x2=medium
# level 0x3=lowest

# AC settings:
Option "RegistryDwords" "PowerMizerLevelAC=0x3"
# Battery settings:
Option "RegistryDwords" "PowerMizerLevel=0x3"
# (Optional) AC power in adaptive mode and battery power forced to lowest mode:
Option "RegistryDwords" "PowerMizerLevelAC=0x0; PowerMizerLevel=0x3"
Letting the GPU set its own performance level based on temperature

Add under the Device section:

Option "RegistryDwords" "PerfLevelSrc=0x3333"

Disable vblank interrupts (for laptops)

When running the interrupt detection utility powertop, it can be observed that the NVIDIA driver will generate an interrupt for every vblank. To disable this, place in the Device section:

Option "OnDemandVBlankInterrupts" "1"

This will reduce interrupts to about one or two per second.

Enabling overclocking

To enable GPU and memory overclocking, place the following line in the Device section:

Option "Coolbits" "1"

This will enable on-the-fly overclocking within an X session by running:

$ nvidia-settings


Setting static 2D/3D clocks

Set the following string in the Device section to enable PowerMizer at its maximum performance level:

Option "RegistryDwords" "PerfLevelSrc=0x2222"

Set one of the following two strings in the Device section to enable manual GPU fan control within nvidia-settings:

Option "Coolbits" "4"
Option "Coolbits" "5"

Tips and tricks

Fixing terminal resolution

Transitioning from nouveau may cause your startup terminal to display at a lower resolution. A possible solution (if you are using GRUB) is to edit the GRUB_GFXMODE line of /etc/default/grub with the desired display resolutions. Multiple resolutions can be specified, including the default auto, so it is recommended that you edit the line to resemble GRUB_GFXMODE=<desired resolution>,<fallback such as 1024x768>,auto. See http://www.gnu.org/software/grub/manual/html_node/gfxmode.html#gfxmode for more information.

Enabling Pure Video HD (VDPAU/VAAPI)

Hardware required:

At least a video card with second generation PureVideo HD [1].

Software required:

NVIDIA video cards with the proprietary driver installed provide video decoding capabilities via the VDPAU interface, at different levels according to the PureVideo generation.

You can also add support for the VA-API interface with libva-vdpau-driver.

Check VA-API support with:

$ vainfo

To take full advantage of the hardware decoding capability of your video card you will need a media player that supports VDPAU or VA-API.

To enable hardware acceleration in MPlayer, edit ~/.mplayer/config:

vo=vdpau
vc=ffmpeg12vdpau,ffwmv3vdpau,ffvc1vdpau,ffh264vdpau,ffodivxvdpau,

To enable hardware acceleration in VLC:

Tools > Preferences > Input & Codecs, then check Use GPU accelerated decoding.

To enable hardware acceleration in smplayer:

Options > Preferences > General > Video tab, then select vdpau as the output driver.

To enable hardware acceleration in gnome-mplayer:

Edit > Preferences, then set the video output to vdpau.

Playing HD movies on cards with low memory:

If your graphics card does not have a lot of memory (less than 512 MB?), you can experience glitches when watching 1080p or even 720p movies. To avoid that, start a simple window manager like TWM or MWM.

Additionally, increasing MPlayer's cache size in ~/.mplayer/config can help when your hard drive spins down while watching HD movies.

Avoid screen tearing in KDE (KWin)

Try setting the __GL_YIELD environment variable for KWin.
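
A sketch, assuming a system-wide profile script such as /etc/profile.d/kwin.sh:

export __GL_YIELD="USLEEP"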

If the above does not help, try enabling triple buffering in KWin instead.
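
Again as a sketch in /etc/profile.d/kwin.sh:

export KWIN_TRIPLE_BUFFER=1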

Do not have both of the above enabled at the same time. Also, if you enable triple buffering, make sure to enable TripleBuffer for the driver itself (see Enabling triple buffering above). Source: https://bugs.kde.org/show_bug.cgi?id=322060

Hardware accelerated video decoding with XvMC

Accelerated decoding of MPEG-1 and MPEG-2 videos via XvMC is supported on GeForce4, GeForce 5 FX, GeForce 6 and GeForce 7 series cards. To use it, create a new file /etc/X11/XvMCConfig with the following content:

libXvMCNVIDIA_dynamic.so.1

See how to configure supported software.

Using TV-out

A good article on the subject can be found here.

X with a TV (DFP) as the only display

The X server falls back to CRT-0 if no monitor is automatically detected. This can be a problem when using a DVI connected TV as the main display, and X is started while the TV is turned off or otherwise disconnected.

To force NVIDIA to use DFP, store a copy of the EDID somewhere in the filesystem so that X can parse the file instead of reading EDID from the TV/DFP.

To acquire the EDID, start nvidia-settings. It will show some information in tree format; ignore the rest of the settings for now and select the GPU (the corresponding entry should be titled "GPU-0" or similar), click the DFP section (again, "DFP-0" or similar), click on the Acquire EDID button and store the file somewhere, for example, /etc/X11/dfp0.edid.

Edit xorg.conf by adding to the Device section:

Option "ConnectedMonitor" "DFP"
Option "CustomEDID" "DFP-0:/etc/X11/dfp0.edid"

The ConnectedMonitor option forces the driver to recognize the DFP as if it were connected. The CustomEDID option provides EDID data for the device, meaning that it will start up just as if the TV/DFP were connected during the X startup process.

This way, one can automatically start a display manager at boot time and still have a working and properly configured X screen by the time the TV gets powered on.

Check the power source

The NVIDIA X.org driver can also be used to detect the GPU's current source of power. To see the current power source, check the 'GPUPowerSource' read-only parameter (0 - AC, 1 - battery):

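For example (the -t flag prints only the value; here 0 means the GPU is running on AC power):

$ nvidia-settings -q GPUPowerSource -t
0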

If you are seeing an error message similar to the one below, then you either need to install acpid or start the systemd service with systemctl start acpid.service:

ACPI: failed to connect to the ACPI event daemon; the daemon
may not be running or the "AcpidSocketPath" X
configuration option may not be set correctly. When the
ACPI event daemon is available, the NVIDIA X driver will
try to use it to receive ACPI event notifications. For
details, please see the "ConnectToAcpid" and
"AcpidSocketPath" X configuration options in Appendix B: X
Config Options in the README.

(If you are not seeing this error, it is not necessary to install/run acpid solely for this purpose. My current power source is correctly reported without acpid even installed.)

Displaying GPU temperature in the shell

Method 1 - nvidia-settings


To display the GPU temperature in the shell, use nvidia-settings as follows:

$ nvidia-settings -q gpucoretemp

This will output something similar to the following:

Attribute 'GPUCoreTemp' (hostname:0.0): 41.
'GPUCoreTemp' is an integer attribute.
'GPUCoreTemp' is a read-only attribute.
'GPUCoreTemp' can use the following target types: X Screen, GPU.

The GPU temperature of this board is 41 C.

In order to get just the temperature for use in utilities such as rrdtool or conky, among others:
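
For example, the -t (terse) flag prints only the value:

$ nvidia-settings -q gpucoretemp -t
41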

Method 2 - nvidia-smi

Use nvidia-smi, which can read temperatures directly from the GPU without the need to use X at all. This is important for the small group of users who do not have X running on their boxes, perhaps because the box is headless and running server apps. To display the GPU temperature in the shell, use nvidia-smi as follows:

$ nvidia-smi

This outputs a summary table that includes, among other things, the GPU temperature.

Only for temperature:
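
For example:

$ nvidia-smi -q -d TEMPERATURE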

In order to get just the temperature for use in utils such as rrdtool or conky, among others:

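A sketch of one such pipeline (the exact field layout depends on the nvidia-smi version, so adjust the expression to your output):

$ nvidia-smi -q -d TEMPERATURE | grep Gpu | awk '{print $(NF-1)}'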

Reference: http://www.question-defense.com/2010/03/22/gpu-linux-shell-temp-get-nvidia-gpu-temperatures-via-linux-cli.

Method 3 - nvclock

Use nvclock, which is available from the AUR.

There can be significant differences between the temperatures reported by nvclock and nvidia-settings/nv-control. According to this post by the author (thunderbird) of nvclock, the nvclock values should be more accurate.

Set fan speed at login

You can adjust the fan speed on your graphics card with the nvidia-settings console interface. First ensure that your Xorg configuration sets the Coolbits option to 4 or 5 in your Device section to enable fan control.

Option "Coolbits" "4"


Place the following line in your ~/.xinitrc file to adjust the fan when you launch Xorg. Replace n with the fan speed percentage you want to set.

nvidia-settings -a "[gpu:0]/GPUFanControlState=1" -a "[fan:0]/GPUCurrentFanSpeed=n"

You can also configure a second GPU by incrementing the GPU and fan number.

nvidia-settings -a "[gpu:0]/GPUFanControlState=1" \ 
-a "[gpu:1]/GPUFanControlState=1" \
-a "[fan:0]/GPUCurrentFanSpeed=n" \
-a  [fan:1]/GPUCurrentFanSpeed=n" &

If you use a login manager such as GDM or KDM, you can create a desktop entry file to process this setting. Create ~/.config/autostart/nvidia-fan-speed.desktop and place this text inside it. Again, change n to the speed percentage you want.

[Desktop Entry]
Type=Application
Exec=nvidia-settings -a "[gpu:0]/GPUFanControlState=1" -a "[fan:0]/GPUCurrentFanSpeed=n"
X-GNOME-Autostart-enabled=true
Name=nvidia-fan-speed

Order of install/deinstall for changing drivers

Where the old driver is nvidiaO and the new driver is nvidiaN.

remove nvidiaO
install nvidia-libglN
install nvidiaN
install lib32-nvidia-libgl-N (if required)

Switching between NVIDIA and nouveau drivers

Note: this section may be out of date. If you are switching between the NVIDIA and nouveau driver often, you can use these two scripts to make it easier (both need to be run as root):

 #!/bin/bash
 # nouveau -> nvidia
 
 set -e
 
 # check if root
 if [[ $EUID -ne 0 ]]; then
    echo "You must be root to run this script. Aborting...";
    exit 1;
 fi
 
 sed -i 's/MODULES="nouveau"/#MODULES="nouveau"/' /etc/mkinitcpio.conf
 
 pacman -Rdds --noconfirm nouveau-dri xf86-video-nouveau mesa-libgl #lib32-nouveau-dri lib32-mesa-libgl
 pacman -S --noconfirm nvidia #lib32-nvidia-libgl
 
 mkinitcpio -p linux

 #!/bin/bash
 # nvidia -> nouveau
 
 set -e
 
 # check if root
 if [[ $EUID -ne 0 ]]; then
    echo "You must be root to run this script. Aborting...";
    exit 1;
 fi
 
 sed -i 's/#*MODULES="nouveau"/MODULES="nouveau"/' /etc/mkinitcpio.conf
 
 pacman -Rdds --noconfirm nvidia #lib32-nvidia-libgl
 pacman -S --noconfirm nouveau-dri xf86-video-nouveau #lib32-nouveau-dri
 
 mkinitcpio -p linux


A reboot is needed to complete the switch.

Adjust the scripts accordingly, if using other NVIDIA drivers (e.g. nvidia-173xx).

Uncomment the lib32 packages if you run a 64-bit system and require the 32-bit libraries (e.g. 32-bit games/Steam).

Troubleshooting

Bad performance, e.g. slow repaints when switching tabs in Chrome

Note: this section may be out of date.

On some machines, recent NVIDIA drivers introduce a bug(?) that causes X11 to redraw pixmaps very slowly. Switching tabs in Chrome/Chromium (with more than 2 tabs open) takes 1-2 seconds, instead of a few milliseconds.

It seems that setting the variable InitialPixmapPlacement to 0 solves that problem, although (as described some paragraphs above) InitialPixmapPlacement=2 should actually be the faster method.

The variable can be (temporarily) set with the command:

$ nvidia-settings -a InitialPixmapPlacement=0

To make this permanent, this call can be placed in a startup script.

Gaming using Twinview

In case you want to play fullscreen games when using TwinView, you will notice that games recognize the two screens as being one big screen. While this is technically correct (the virtual X screen really is the size of your screens combined), you probably do not want to play on both screens at the same time.

To correct this behavior for SDL, try:

export SDL_VIDEO_FULLSCREEN_HEAD=1

For OpenGL, add the appropriate Metamodes to the Screen section of your xorg.conf and restart X:

Option "Metamodes" "1680x1050,1680x1050; 1280x1024,1280x1024; 1680x1050,NULL; 1280x1024,NULL;"

Another method that may either work alone or in conjunction with those mentioned above is starting games in a separate X server.

Vertical sync using TwinView

If you are using TwinView and vertical sync (the "Sync to VBlank" option in nvidia-settings), you will notice that only one screen is being properly synced, unless you have two identical monitors. Although nvidia-settings does offer an option to change which screen is synced (the "Sync to this display device" option), this does not always work. A solution is to add the following environment variables at startup, for example by appending them to a startup file such as /etc/profile:

export __GL_SYNC_TO_VBLANK=1
export __GL_SYNC_DISPLAY_DEVICE=DFP-0
export __VDPAU_NVIDIA_SYNC_DISPLAY_DEVICE=DFP-0

You can change DFP-0 to your preferred screen (DFP-0 is the DVI port and CRT-0 is the VGA port). You can find the identifier for your display from nvidia-settings in the "X Server XVideoSettings" section.

Old Xorg settings

If upgrading from an old installation, please remove old /usr/X11R6/ paths as they can cause trouble during installation.

Corrupted screen: "Six screens" Problem

For some users with Geforce GT 100M cards, the screen comes up corrupted after X starts: divided into 6 sections with a resolution limited to 640x480. The same problem has recently been reported with Quadro 2000 and high-resolution displays.

To solve this problem, enable the Validation Mode NoTotalSizeCheck in the Device section:

Section "Device"
 ...
 Option "ModeValidation" "NoTotalSizeCheck"
 ...
EndSection

'/dev/nvidia0' input/output error

(The accuracy of this section is disputed.) This error can occur for several different reasons, and the most common solution given for it is to check group/file permissions, which in almost every case is not the problem. The NVIDIA documentation does not go into detail on what you should do to correct this problem, but there are a few things that have worked for some people. The problem can be an IRQ conflict with another device or bad routing by either the kernel or your BIOS.

The first thing to try is to remove other video devices such as video capture cards and see if the problem goes away. If there are too many video processors on the same system, the kernel may be unable to start them because of memory allocation problems with the video controller. In particular, on systems with low video memory this can occur even if there is only one video processor. In that case you should find out the amount of your system's video memory (e.g. with lspci -v) and pass allocation parameters to the kernel, e.g.:

vmalloc=64M
or
vmalloc=256M

If running a 64-bit kernel, a driver defect can cause the NVIDIA module to fail to initialize when IOMMU is on. Turning it off in the BIOS has been confirmed to work for some users. [2]

Another thing to try is to change your BIOS IRQ routing from operating system controlled to BIOS controlled, or the other way around. BIOS-controlled IRQ routing can be requested with the kernel parameter:

pci=biosirq

The acpi=off kernel parameter has also been suggested as a solution, but since it disables ACPI completely it should be used with caution. Some hardware is easily damaged by overheating.


'/dev/nvidiactl' errors

Trying to start an OpenGL application might result in errors such as:

Error: Could not open /dev/nvidiactl because the permissions are too
restrictive. Please see the FREQUENTLY ASKED QUESTIONS
section of /usr/share/doc/NVIDIA_GLX-1.0/README.txt
for steps to correct.

Solve this by adding the appropriate user to the video group and logging in again:

# gpasswd -a username video

32-bit applications do not start

Under 64-bit systems, installing the lib32 NVIDIA libraries (e.g. lib32-nvidia-utils) corresponding to the same version installed for the 64-bit driver fixes the problem.

Errors after updating the kernel

If a custom build of NVIDIA's module is used instead of the package from [extra], a recompile is required every time the kernel is updated. Rebooting is generally recommended after updating the kernel and graphics drivers.

Crashing in general

  • Try disabling RenderAccel in xorg.conf.
  • If Xorg outputs an error about "conflicting memory type" or "failed to allocate primary buffer: out of memory", add nopat at the end of the kernel line in /boot/grub/menu.lst.
  • If the NVIDIA compiler complains about different versions of GCC between the current one and the one used for compiling the kernel, add in /etc/profile:
export IGNORE_CC_MISMATCH=1
  • If Xorg is crashing with a "Signal 11" while using nvidia-96xx drivers, try disabling PAT by passing the argument nopat to the kernel parameters.

More information about troubleshooting the driver can be found in the NVIDIA forums.

Bad performance after installing a new driver version

If FPS have dropped in comparison with older drivers, first check if direct rendering is turned on (glxinfo is included in mesa-demos):

$ glxinfo | grep direct

If the command prints:

direct rendering: No

then that could explain the sudden FPS drop.

A possible solution is to revert to the previously installed driver version and reboot afterwards.

CPU spikes with 400 series cards

If you are experiencing intermittent CPU spikes with a 400 series card, it may be caused by PowerMizer constantly changing the GPU's clock frequency. To switch PowerMizer's setting from Adaptive to Performance, add the following to the Device section of your Xorg configuration:

 Option "RegistryDwords" "PowerMizerEnable=0x1; PerfLevelSrc=0x3322; PowerMizerDefaultAC=0x1"

Laptops: X hangs on login/out, worked around with Ctrl+Alt+Backspace

If, while using the legacy NVIDIA drivers, Xorg hangs on login and logout (particularly with an odd screen split into two black and white/gray pieces), but logging in is still possible via Ctrl+Alt+Backspace (or whatever the new "kill X" keybind is), try adding this in a modprobe configuration file such as /etc/modprobe.d/nvidia.conf:

options nvidia NVreg_Mobile=1

One user had luck with this instead, but it makes performance drop significantly for others:

options nvidia NVreg_DeviceFileUID=0 NVreg_DeviceFileGID=33 NVreg_DeviceFileMode=0660 NVreg_SoftEDIDs=0 NVreg_Mobile=1

Note that NVreg_Mobile needs to be changed according to the laptop:

  • 1 for Dell laptops.
  • 2 for non-Compal Toshiba laptops.
  • 3 for other laptops.
  • 4 for Compal Toshiba laptops.
  • 5 for Gateway laptops.

See NVIDIA Driver's Readme:Appendix K for more information.

Refresh rate not detected properly by XRandR dependent utilities

The XRandR X extension is not presently aware of multiple display devices on a single X screen; it only sees the MetaMode bounding box, which may contain one or more actual modes. This means that if multiple MetaModes have the same bounding box, XRandR will not be able to distinguish between them.

In order to support DynamicTwinView, the NVIDIA driver must make each MetaMode appear to be unique to XRandR. Presently, the NVIDIA driver accomplishes this by using the refresh rate as a unique identifier.

Use nvidia-settings to query the actual refresh rate on each display device.

The XRandR extension is currently being redesigned by the X.Org community, so the refresh rate workaround may be removed at some point in the future.

This workaround can also be disabled by setting the DynamicTwinView X configuration option to false, which will disable NV-CONTROL support for manipulating MetaModes, but will cause the XRandR and XF86VidMode visible refresh rate to be accurate.

No screens found on a laptop/NVIDIA Optimus

On a laptop, if the NVIDIA driver cannot find any screens, you may have an NVIDIA Optimus setup: an Intel chipset connected to the screen and the video outputs, and an NVIDIA card that does all the hard work and writes to the chipset's video memory.

Check if lspci outputs something similar to:

00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 02)
01:00.0 VGA compatible controller: nVidia Corporation Device 0df4 (rev a1)

NVIDIA drivers have offered Optimus support since 319.12 Beta [3], with kernels 3.9 and above.

Another solution is to install the Intel driver to handle the screens; then, if you want to run 3D software, launch it through Bumblebee to make it use the NVIDIA card.

Possible Workaround

Enter the BIOS and change the default graphics setting from 'Optimus' to 'Discrete', after which the installed NVIDIA drivers (295.20-1 at the time of writing) recognized the screens.

Steps:

  1. Enter BIOS.
  2. Find Graphics Settings (should be in tab Config > Display).
  3. Change 'Graphics Device' to 'Discrete Graphics' (Disables Intel integrated graphics).
  4. Change OS Detection for Nvidia Optimus to "Disabled".
  5. Save and exit.

Tested on a Lenovo W520 with a Quadro 1000M and NVIDIA Optimus.

Screen(s) found, but none have a usable configuration

On a laptop, sometimes the NVIDIA driver cannot find the active screen. This may happen because you own a graphics card with VGA/TV outputs. Examine Xorg.0.log to see what is wrong.

Another thing to try is adding an invalid ConnectedMonitor value to the Device section to force Xorg to throw an error that shows you how to correct it. See the driver README for more about the ConnectedMonitor setting.

After rerunning X, check Xorg.0.log to get valid CRT-x, DFP-x, TV-x values.

Running nvidia-xconfig --query-gpu-info could be helpful.

No brightness control on laptops

Try adding the following line to 20-nvidia.conf:

Option "RegistryDwords" "EnableBrightnessControl=1"

If it still does not work, you can try installing nvidia-bl or nvidiabl.

Black Bars while watching full screen flash videos with TwinView

Follow the instructions presented here: link.

Backlight is not turning off on some occasions

By default, DPMS should turn off the backlight with the timeouts set or by running xset. However, probably due to a bug in the proprietary NVIDIA drivers, the result is a blank screen with no power saving whatsoever. To work around it until the bug has been fixed, you can use vbetool as root.

Install the vbetool package.

Turn off your screen on demand; pressing a random key then turns the backlight on again:

vbetool dpms off && read -n1; vbetool dpms on

Alternatively, xrandr is able to disable and re-enable monitor outputs without requiring root.

xrandr --output DP-1 --off; read -n1; xrandr --output DP-1 --auto

Blue tint on videos with Flash

A problem with flashplugin versions 11.2.202.228-1 and 11.2.202.233-1 causes it to send the U/V planes in the incorrect order, resulting in a blue tint on certain videos. There are a few potential fixes for this bug:

  1. Install the latest flashplugin.
  2. Patch libvdpau with this makepkg.
  3. Right-click on a video, select "Settings..." and uncheck "Enable hardware acceleration". Reload the page for it to take effect. Note that this disables GPU acceleration.
  4. Downgrade the flashplugin package to version 11.1.102.63-1 at most.
  5. Use Chromium with the new Pepper API Flash plugin (both available in the AUR).
  6. Try one of the few Flash alternatives.

The merits of each are discussed in this thread.

Bleeding overlay with Flash

This bug is due to the incorrect colour key being used by flashplugin version 11.2.202.228-1, and causes the Flash content to "leak" into other pages or solid black backgrounds. To avoid this problem, simply install the latest flashplugin, or export VDPAU_NVIDIA_NO_OVERLAY=1 within either your shell profile (e.g. ~/.bash_profile or ~/.zprofile) or ~/.xinitrc.

Full system freeze using Flash

If you experience occasional full system freezes (only the mouse is moving) when using flashplugin, a possible workaround is to switch off hardware acceleration in Flash: right-click on a video, select "Settings..." and uncheck "Enable hardware acceleration".

Or, if you want to keep hardware acceleration enabled, you may try the following:

export VDPAU_NVIDIA_NO_OVERLAY=1

...before starting the browser. Note that this may introduce tearing.

Xorg fails to load or Red Screen of Death

If you get a red screen and use GRUB, disable the GRUB framebuffer by editing /etc/default/grub and uncommenting GRUB_TERMINAL_OUTPUT. For more information see GRUB.

Black screen on systems with Intel integrated GPU

If you have an Intel CPU with an integrated GPU (e.g. Intel HD 4000) and get a black screen on boot after installing the nvidia package, this may be caused by a conflict between the graphics modules. This is solved by blacklisting the Intel GPU modules. Create the file /etc/modprobe.d/blacklist.conf and prevent the i915 and intel_agp modules from loading on boot:

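A sketch of the file contents; install lines prevent the modules from being loaded even on request (plain blacklist lines only stop automatic loading):

install i915 /usr/bin/false
install intel_agp /usr/bin/false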

Black screen on systems with VIA integrated GPU

As above, blacklisting the viafb module may resolve conflicts with the NVIDIA drivers:

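For example, in /etc/modprobe.d/blacklist.conf:

install viafb /usr/bin/false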

X fails with "no screens found" with Intel iGPU

As above, if you have an Intel CPU with an integrated GPU and X fails to start with

[ 76.633] (EE) No devices detected.
[ 76.633] Fatal server error:
[ 76.633] no screens found

then you need to add your discrete card's BusID to your X configuration. Find it:

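Hypothetical example output (here the NVIDIA card sits on bus 1):

$ lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09)
01:00.0 VGA compatible controller: NVIDIA Corporation GK107 [GeForce GTX 650] (rev a1)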

then you fix it by adding the BusID to the card's Device section in your X configuration, as sketched below.

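A sketch matching the example above:

Section "Device"
    Identifier "Device0"
    Driver     "nvidia"
    BusID      "PCI:1:0:0"
EndSection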

Note how the lspci bus ID 01:00.0 is written as PCI:1:0:0 in the configuration.

Xorg fails during boot, but otherwise starts fine

On very fast booting systems, systemd may attempt to start the display manager before the NVIDIA driver has fully initialized; you will then see a message like "no screens found" in your log, but only when Xorg runs during boot. In this case you need to establish an ordering dependency from the display manager to the DRI device. First create device units for DRI devices with a new udev rules file, then create a dependency from the display manager to the device(s), as sketched below. If you have additional cards needed for the desktop, list them in Wants and After separated by spaces.
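
A sketch of both pieces (file names and the card0 device are illustrative; adjust the unit names to your display manager and cards). The udev rule tags DRI devices so that systemd creates device units for them:

/etc/udev/rules.d/99-systemd-dri-devices.rules:
ACTION=="add", KERNEL=="card*", SUBSYSTEM=="drm", TAG+="systemd"

A drop-in for the display manager then waits for the device:

/etc/systemd/system/display-manager.service.d/10-wait-for-dri-devices.conf:
[Unit]
Wants=dev-dri-card0.device
After=dev-dri-card0.device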

See also